Kandria
I'm tired
Sorry if this entry is a bit of a mess, I'm tired from studying all day. Got the last exam for the semester coming up tomorrow, and I don't feel well prepared for it. I wish I could just focus on Kandria instead!
Like the good student I am, I did not spend the entire week studying, though. I've also spent a significant chunk of time working on the audio backend for Kandria, and I thought I could talk about all of that stuff a little bit today. Audio is an interesting and challenging field!
It's interesting since there's a ton of different ways to manipulate and tweak audio to achieve interesting effects. The number of knobs, dials, and switches present on any synthesizer alone is crazy. Add entire pipelines of effects and modulators to that and the amount of stuff you have to master quickly explodes.
It's also very challenging, since most audio tasks are concerned with real-time processing. If you can't produce the audio signal fast enough, you'll either create stutters and noise in the output, or you have to process samples in large batches, which increases the delay between an event happening and the output actually becoming audible. Obviously both of these things are pretty bad news.
I'm by no means an audio engineer, so all of these things are doubly scary to me: with no prior experience or education to fall back on, I don't have a good way of telling whether I'm doing the right thing. Given all that you might wonder why I'm even working with this stuff to begin with. As with so many things, it's a combination of yak shaving, and being stubborn.
For my games I've long needed an audio system. Unfortunately all of the sound systems I'm aware of out there are either: 1) intended for audio workstations and not games, thus targeting different constraints and setups, 2) not available under a permissive or free license, 3) not capable of doing enough, or 4) tied to huge C/C++ systems that are a tremendous pain to control. And so I set out to build my own.
Being concerned with the hard real-time constraints, I decided to write the bulk of the library in C instead of my home language of Lisp. I typically try to avoid C as much as possible as it's harder to interact with, harder to debug, and harder to deploy reliably. However, these problems can be mitigated to a large extent if you control the library yourself and design it in a sensible way, which I have attempted to do.
The result is libmixed. It's available as a pretty portable C library with minimal external dependencies, making it much easier to deploy and distribute. I hope that, being written in C, it'll also find some use with other people, in projects and languages outside my own. That would definitely be nice to see, as I think there's some good potential here.
Libmixed primarily concerns itself with three things:
1) Providing an API for managing sound sample buffers, as well as an architecture for audio processing units (called "segments")
2) Code to efficiently pack and unpack audio data between external formats and standardised float buffers, including sample rate conversion
3) Standard implementations of useful sound processing tasks such as pitch shifting, linear mixing, spatial audio, channel conversion, banding, gating, etc.
It specifically does not include code to handle sound output or to decode audio files. Leaving those out makes it much lighter and much more portable, and it allows users to choose the formats and backends they care about.
However, without being able to read from files and being able to actually play back generated audio, it would be pretty useless for Kandria. That's where cl-mixed comes into play, or more specifically its extensions. That's also what I've been working on for the most part the past week.
Now, in the Lisp world we can dynamically bind to the libraries we need, or detect their absence at runtime, without creating hard version dependencies that make deployment difficult. It's also a lot easier to interactively debug these things without having to go through the whole edit/recompile/redeploy/restart cycle. Anyhow, I've now implemented a sizeable number of backends that should work on Windows, Linux, macOS, and even FreeBSD. There's still a few extra ones that don't work as they should, and I've been tearing my hair out trying to figure out why.
Still, with that done, the last piece of the puzzle can be tackled: Harmony. Harmony handles the resource and system management part of the equation, letting you easily queue things for playback while it performs the tedious buffer management and allocation for you.
As you might have noticed, all three of these have existed for a while, and they... mostly worked, too. However, some serious problems arose that prompted a fundamental rewrite of libmixed. After that, I decided to start from scratch on the upper layers too, so Harmony still needs to be redone before things are ready for use in Kandria.
In any case, I feel like I'm seeing the light at the end of the tunnel, so I really hope I'm not jinxing myself when I say: there will be audio in Kandria, soon.
Also: starting next week I'll be at the office working on Kandria three days a week. That should give me some more dedicated time to move the project along. See you next week!