2024-10-20

MOD playback

First a little demo:

It should be self-explanatory by now. Music can be toggled through the menu and the B button.

That said, here's how it's done:

MOD tracker

MOD files emerged in the late 1980s and were widely used in 1990s games and demos. Like MIDI files, they contain instructions to play music, but they also contain the samples to be played. This way, the music files are fairly small and yet can sound very unique. For a collection of MOD files and their derivatives, check out modarchive.org.

The reason why I am interested in MOD files is that they are small and can be played quite efficiently on embedded devices while allowing long and complex music tracks.

I already experimented with this in 2018 on the Tiny Arcade, so I knew this would work.

HxCMOD

HxCMOD is a MOD player by Jean-François DEL NERO that I already used back then.

It is a well-optimized C library intended for embedded devices: it doesn't use dynamic memory allocation and is designed to be fast and efficient - exactly what I need.

General architecture

Raylib comes with various MOD players, but these are not optimized for embedded devices.

Moreover, I need to make sure I can play music and sound in parallel for a few reasons:

So the desired architecture on the RP2350 is as follows:

Before the integration of audio, my game code had 2 entry points: init and update. The init function is called once and the update function is called every frame, receiving a context struct that contains the screen buffer and the input states.

To accommodate the audio process, I decided to add a 3rd entry point: audioUpdate. This function is called by the audio process and receives an audio context struct which contains the following fields:

The audio update is only called right after the interrupt handler has finished reading out the audio buffer. This is very similar to how audio streaming works on desktop, where the audio driver calls a callback function when new audio data should be provided.

Raylib integration

The integration into raylib is quite straightforward: first we create a new audio stream with the desired sample rate and size. Then we set the audio stream callback to our audio system callback function and start playing the audio stream:

AudioStream audioStream = LoadAudioStream(SAMPLE_RATE, SAMPLE_SIZE, 1);
SetAudioStreamCallback(audioStream, AudioSystemCallback);
PlayAudioStream(audioStream);

The AudioSystemCallback function is fairly simple:

void AudioSystemCallback(void *buffer, unsigned int frames)
{
    // reset the buffer content
    for (unsigned int i = 0; i < frames; i++)
    {
        ((short *)buffer)[i] = 0;
    }
    // the audioUpdate function is provided by the game code DLL - it may temporarily not be available
    // during recompiling the game code
    if (audioUpdate != NULL && (!isPaused || stepAudio))
    {
        stepAudio = 0;
        // this is subject to change; for now it works and the idea is, that each channel
        // can receive one instruction per audio frame update to do something. Channel
        // 0 is the MOD player channel.
        RuntimeContext *ctx = RuntimeContext_get();
        for (int i = 0; i < 5; i++)
        {
            audioCtx.inSfxInstructions[i] = ctx->outSfxInstructions[i];
            ctx->outSfxInstructions[i] = (SFXInstruction){0};
        }
        audioCtx.frames = frames;
        audioCtx.sampleRate = SAMPLE_RATE;
        audioCtx.sampleSize = SAMPLE_SIZE;
        // the buffer is provided to the audio update function and needs to be
        // updated with the audio data we want to have (e.g. music)
        audioCtx.outBuffer = buffer;

        // call the audio update function provided by the game code
        audioUpdate(&audioCtx);

        // after the audio update, we copy the channel status back to the runtime context
        // so the main loop can react to it
        for (int i = 0; i < 5; i++)
        {
            ctx->sfxChannelStatus[i] = audioCtx.outSfxChannelStatus[i];
        }
    }
}

The audioUpdate function is provided by the game code DLL and is platform independent. It is a bit too long to show here, but it needs to read out the instructions, fill the audio buffer and provide channel status updates. Everything meaningful is done in this part of the application - there's no platform specific code in there.

Buffer sizes

A little background: choosing the right buffer size is important. Small buffers allow for low latency but can more easily run out of data; large buffers are more stable but introduce latency.

My first implementation on the RP2350 didn't use the 2nd core, and whenever the game loaded a new scene, the buffer ran empty: the main loop was blocked by the scene loading and couldn't call the audio update function. When I increased the buffer size, the latency felt quite high (~200ms), and it would still run dry when loading a new scene.

Using the 2nd core

First I have to admit that I don't know much about multicore programming and I am not sure if I am doing it right, but this seems to work. The data exchange uses a shared memory area and there are currently no locks or semaphores in place - because these are not available in the RP2350 SDK.

This is one of the reasons why I want to change the way I exchange instructions between the main loop and the audio process. It still worked, however, and wasn't difficult at all:

// during init, we launch the audio core
multicore_launch_core1(Audio_core);

// ...

void Audio_core()
{
    // initialize the Audio relevant pins and interrupts
    Audio_init();

    // obtain the shared memory area: the RuntimeContext struct
    RuntimeContext *ctx = RuntimeContext_get();
    while (1)
    {
        // the audio buffer is only filled when it is empty
        if (!soundBuffer.bufferReady)
        {
            AudioContext *audioContext = AudioContext_get();

            // there are two buffers used in alternation, depending on which bank
            // the interrupt handler is currently reading out
            uint16_t *buffer = soundBuffer.currentAudioBank ? soundBuffer.samplesA : soundBuffer.samplesB;
            audioContext->frames = ENGINE_AUDIO_BUFFER_SIZE;
            audioContext->outBuffer = (char*) buffer;
            audioContext->sampleRate = 22050;
            audioContext->sampleSize = 16;
                
            // sync the instructions & call the audio update - just like with the raylib audio stream
            for (int i=0;i<5;i++)
            {
                audioContext->inSfxInstructions[i] = ctx->outSfxInstructions[i];
                ctx->outSfxInstructions[i] = (SFXInstruction){0};
            }
            memset(buffer, 0, ENGINE_AUDIO_BUFFER_SIZE * 2);
            audioUpdate(audioContext);
            for (int i=0;i<5;i++)
            {
                ctx->sfxChannelStatus[i] = audioContext->outSfxChannelStatus[i];
                audioContext->inSfxInstructions[i] = (SFXInstruction){0};
            }
            // flag the buffer as ready
            soundBuffer.bufferReady = 1;
        }

        // sleep a bit to not hog the CPU (could be improved, I guess)
        sleep_ms(1);
    }
}

It doesn't look too different from how it's done in raylib (and other similar frameworks). What may be more interesting is the interrupt setup and handler:


// I honestly have no idea what in detail is going on here; it is verbatim from TinyCircuits'
// Tiny Game Engine, which can be found here:
// https://github.com/TinyCircuits/TinyCircuits-Tiny-Game-Engine/blob/main/src/audio/engine_audio_module.c
// With hardware instructions, it's important to do everything in the right order with the right
// values; otherwise, in my experience, usually nothing works at all. I therefore won't touch it
// unless I have to.
void Audio_init()
{
    //generate the interrupt at the audio sample rate to set the PWM duty cycle
    audio_callback_pwm_pin_slice = pwm_gpio_to_slice_num(AUDIO_CALLBACK_PWM_PIN);
    pwm_clear_irq(audio_callback_pwm_pin_slice);
    pwm_set_irq_enabled(audio_callback_pwm_pin_slice, true);
    irq_set_exclusive_handler(PWM_IRQ_WRAP_0, repeating_audio_callback);
    irq_set_priority(PWM_IRQ_WRAP_0, 1);
    irq_set_enabled(PWM_IRQ_WRAP_0, true);
    audio_callback_pwm_pin_config = pwm_get_default_config();
    pwm_config_set_clkdiv_int(&audio_callback_pwm_pin_config, 1);
    engine_audio_adjust_playback_with_freq(150 * 1000 * 1000);
    pwm_init(audio_callback_pwm_pin_slice, &audio_callback_pwm_pin_config, true);

    engine_audio_setup_playback();
}

// same here: Sourced from TinyCircuit's Tiny Game Engine
void engine_audio_setup_playback(){
    // Setup amplifier but make sure it is disabled while PWM is being setup
    gpio_init(AUDIO_ENABLE_PIN);
    gpio_set_dir(AUDIO_ENABLE_PIN, GPIO_OUT);
    gpio_put(AUDIO_ENABLE_PIN, 0);

    // Setup PWM audio pin, bit-depth, and frequency. The duty cycle is the only
    // PWM parameter adjusted as samples are retrieved from channel sources
    uint audio_pwm_pin_slice = pwm_gpio_to_slice_num(AUDIO_PWM_PIN);
    gpio_set_function(AUDIO_PWM_PIN, GPIO_FUNC_PWM);
    pwm_config audio_pwm_pin_config = pwm_get_default_config();
    pwm_config_set_clkdiv_int(&audio_pwm_pin_config, 1);
    pwm_config_set_wrap(&audio_pwm_pin_config, 512);   // 150MHz / 512 ≈ 293kHz
    pwm_init(audio_pwm_pin_slice, &audio_pwm_pin_config, true);

    // Now allow sound to play by enabling the amplifier
    gpio_put(AUDIO_ENABLE_PIN, 1);
}

// the interrupt handler itself is pretty simple though:
void repeating_audio_callback(void){
    // using a simple counter to keep track of the current audio sample position
    // depending on the math, we select the current audio bank
    uint16_t currentAudioSamplePosition = audioWaveUpdateCounter % ENGINE_AUDIO_BUFFER_SIZE;
    uint8_t currentAudioBank = audioWaveUpdateCounter / ENGINE_AUDIO_BUFFER_SIZE % 2;

    // if the audio bank has changed, we check if the buffer was flagged as ready, and I
    // use the LED to signal the status for debugging purposes - it was red a few times
    // when I used a single core, but the 2nd core keeps up steadily without missing a beat
    if (soundBuffer.currentAudioBank != currentAudioBank) {
        if (!soundBuffer.bufferReady) {
            setRGB(1, 0, 0);
        }
        else
        {
            setRGB(0, 1, 0);
        }
        // signal that the buffer has swapped and we need the loop to fill the buffer again
        soundBuffer.bufferReady = 0;
        soundBuffer.currentAudioBank = currentAudioBank;
    }
    
    // select the current audio bank and the current audio sample position
    // this is where I wasted a huge amount of time: the samples are 16 bit SIGNED integers.
    // Initially I thought they were 16 bit UNSIGNED, which caused the sound to be faint
    // and crackle a lot. I only figured that out after visualizing the signal on the
    // screen, like with an oscilloscope.
    int16_t *bufferBank = currentAudioBank == 0 ? soundBuffer.samplesA : soundBuffer.samplesB;
    // the samples are 16 bit signed integers, so we need to adjust the range to 0-511,
    // the operating range of the PWM. Currently, I am just dividing by 32; when dividing
    // by 128, like the math would suggest (converting 16 bit to 9 bit), the sound was
    // extremely faint. I cut off values below 0 and above 511 to keep it in range.
    int16_t sample = bufferBank[currentAudioSamplePosition] / 32 + 256;
    if (sample < 0)
    {
        sample = 0;
    }
    else
    if (sample > 511)
    {
        sample = 511;
    }
    
    // set the PWM level
    pwm_set_gpio_level(AUDIO_PWM_PIN, (uint16_t) sample);
    
    audioWaveUpdateCounter++;
    pwm_clear_irq(audio_callback_pwm_pin_slice);
}

The implementation works quite well and the sound is stable, even when the main loop is blocked for a longer period of time. The latency is low and the sound quality is good.

Here's what the oscilloscope view looks like on device: Oscilloscope view of the sound signal

Web version

In theory, the web version works much the same: the audio update function is called by the audio stream callback, which fills the audio buffer with the audio data.

The devil is, however, in the details: the audio worklet runs in an isolated context and can't access the main thread; it can only communicate with it via messages. The problem is that I am not aware of a way to access the main thread's WASM instance from the audio worklet.

My initial attempt was to use a shared memory array and fill it with the audio data. While this worked, it suffered from the same problem as on the RP2350: when the main loop was blocking, the audio buffer ran empty unless the buffer was far too large to be suitable for gameplay. I was hoping this simple solution would be good enough, but it wasn't.

Not being well versed in web development, I decided to use an old-school trick: just run two instances of the same program and let them communicate via messages. To do this, the worklet creates its own WASM instance next to the main loop's WASM instance. There's much to improve, but the principle works: the main thread fetches the WASM file (which means downloading it twice 🙁) and pushes it to the audio worklet (which can't fetch the bytes itself). Creating the WASM instance is not a smooth experience, since this is usually done through the JS file that's compiled together with the WASM file. I managed to get it working eventually, but it feels quite hacky. The resulting audio quality, however, is pretty good.

Some notes:

Conclusion

I now have an audio system in place for 3 platforms: desktop, RP2350 and web. Since the audio generation is platform independent, I should be able to add more features without having to worry about platform specifics.

I haven't measured the performance on the RP2350, but since it kept working even during rendering and loading, I am considering increasing the specs. Originally I planned to support 1 music channel and 4 sound effect channels, but I am now thinking of 2 music channels and 8 sound effect channels instead. This would allow music transitions and more complex sound effects.

The way I send instructions to the audio channels is, I believe, not optimal. I am thinking of using a ring buffer with incrementing IDs instead; I believe this could be more stable in a threaded shared-memory environment. That said, I have to admit that the current way of instructing the audio channels to play something is dead simple:

// 8 bytes for instructions; channel 0 = music channel
ctx->outSfxInstructions[0] = (SFXInstruction)
{
    .type = SFXINSTRUCTION_TYPE_PLAY,
    .id = musicId,
    .updateMask = SFXINSTRUCTION_UPDATE_MASK_VOLUME,
    .volume = 150
};

But next I want to add WAV playback and audio generation support. I want to allow procedural sound effects to create noise or electronic sounds (beeps + boops).

When this is in, I will return to making the game work again.

The reason I did sound now is that there's an upcoming game jam I want to try participating in, and without sound it's just not that great.

🍪