Direct Audio Engine Access

Warning

This is for advanced users only.

Info

This demo only runs well in Firefox. Chrome seems to have trouble with AudioBufferSourceNodes.

It is recommended to use a simple playback Audio Worklet such as this one.
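
The linked worklet is not reproduced here, but the core of such a playback worklet is just a queue of rendered chunks that gets drained into each 128-sample render quantum. Below is a minimal, illustrative sketch of that queue logic; the `PlaybackQueue` name and its methods are assumptions for illustration, not the linked implementation:

```javascript
// Hypothetical queue used by a playback worklet: rendered stereo chunks are
// pushed in (for example via the worklet's MessagePort) and drained into the
// 128-sample output of each process() call.
class PlaybackQueue {
    constructor() {
        // Each entry is a [left, right] pair of Float32Arrays (128 samples each)
        this.chunks = [];
    }

    // Queue one rendered stereo chunk for playback
    push(left, right) {
        this.chunks.push([left, right]);
    }

    // Copy the oldest chunk into the output buffers.
    // Returns false on underrun, leaving the output silent.
    drainInto(outL, outR) {
        const next = this.chunks.shift();
        if (!next) {
            return false;
        }
        outL.set(next[0]);
        outR.set(next[1]);
        return true;
    }
}
```

Inside an `AudioWorkletProcessor`, `process(inputs, outputs)` would then simply call `queue.drainInto(outputs[0][0], outputs[0][1])` and return `true`.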

Sometimes a script needs direct access to the synthesizer's audio engine, for example to inspect or manipulate its internal state. While one can use spessasynth_core directly, this requires implementing the audio effects manually.

This page shows how to use spessasynth_core together with spessasynth_lib to keep the full feature set of the Synthesizer class while rendering in the main thread with full access to the audio engine.

General Approach

spessasynth_lib exposes both audio processors, allowing us to connect them to the synthesizer directly.

A simple audio loop that achieves this is as follows:

  1. Create the Float32Array buffers for the dry, chorus, and reverb outputs.
  2. Perform any custom tasks needed, then render the audio into those buffers.
  3. Send the processed audio to playback nodes, such as a custom audio worklet or AudioBufferSourceNodes.
  4. The playback nodes play out to the target node (for example, the AudioContext destination).
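
The rendering part of the steps above can be sketched as a small helper. The `renderBuffer` name and the buffer sizes here are illustrative; `synth` and `seq` stand in for the SpessaSynthProcessor and SpessaSynthSequencer used in the full demo further down:

```javascript
// Illustrative sketch: fill a stereo buffer in 128-sample quanta.
// `synth` is expected to expose process(left, right, offset, quantum)
// and `seq` to expose processTick(), as in the demo below.
const QUANTUM = 128;
const BUFFER_SIZE = 2048;

function renderBuffer(synth, seq) {
    const left = new Float32Array(BUFFER_SIZE);
    const right = new Float32Array(BUFFER_SIZE);
    let rendered = 0;
    while (rendered < BUFFER_SIZE) {
        // Advance MIDI playback, then render the next quantum
        seq.processTick();
        synth.process(left, right, rendered, QUANTUM);
        rendered += QUANTUM;
    }
    return [left, right];
}
```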

Example: Showing Channel Status

See this demo live

Below is an example that shows the current channel status, something that cannot be achieved with the WorkletSynthesizer class alone.

main_thread_rendering.html
<label for="sound_bank_input">Upload the sound bank.</label>
<input accept=".sf2, .sf3, .dls" id="sound_bank_input" type="file" />
<label for="midi_input">Select the MIDI file</label>
<input accept=".midi, .mid, .rmi, .smf" id="midi_input" type="file" />
<h2>Voice list</h2>
<div
    id="voice_list"
    style="display: flex; width: 100%; justify-content: space-evenly"
></div>
<!-- note the type="module" -->
<script src="main_thread_rendering.js" type="module"></script>

Nothing special here.

main_thread_rendering.js
import {
    BasicMIDI,
    SoundBankLoader,
    SpessaSynthProcessor,
    SpessaSynthSequencer
} from "spessasynth_core";

// This demo shows how to render in the main thread in real time.
// Use Firefox for this; Chromium handles audio buffers used this way poorly.
// For Chromium, consider a simple playback worklet processor instead.

// Create a new audio context
const context = new AudioContext({
    sampleRate: 44_100
});

// Wait for the user to upload the sound bank
document
    .querySelector("#sound_bank_input")
    .addEventListener("change", async (event) => {
        /**
         * If no file is selected, exit early
         * @type {FileList}
         */
        const files = event.target?.files;
        if (!files[0]) {
            return;
        }

        // Resume the audio context so audio processing can begin
        await context.resume();

        // Read the uploaded file into an ArrayBuffer
        const fontBuffer = await files[0].arrayBuffer();

        // Create an instance of the synthesizer and load it with the sound bank
        const synth = new SpessaSynthProcessor(44_100);
        synth.soundBankManager.addSoundBank(
            SoundBankLoader.fromArrayBuffer(fontBuffer),
            "main"
        );

        // Initialize the sequencer for MIDI playback
        const seq = new SpessaSynthSequencer(synth);

        // THE MAIN AUDIO RENDERING LOOP IS HERE
        setInterval(() => {
            // Get the synthesizer’s internal current time
            const synTime = synth.currentSynthTime;

            // If the synth time is significantly ahead of the context time, skip rendering
            // (wait for the context to catch up)
            if (synTime > context.currentTime + 0.1) {
                return;
            }

            // Render in 128-sample quanta to fill a 2048-sample buffer
            const QUANTUM = 128;
            const BUFFER_SIZE = 2048;

            // Create empty stereo buffers for the dry output
            const outputL = new Float32Array(BUFFER_SIZE);
            const outputR = new Float32Array(BUFFER_SIZE);

            let rendered = 0;
            while (rendered < BUFFER_SIZE) {
                // Play back the MIDI file
                seq.processTick();

                // Render the next chunk of audio into the provided buffers
                synth.process(outputL, outputR, rendered, QUANTUM);
                rendered += QUANTUM;
            }

            // Create an AudioBuffer to hold the sample data
            const outBuffer = new AudioBuffer({
                numberOfChannels: 2,
                length: BUFFER_SIZE,
                sampleRate: 44_100
            });

            // Copy the left and right channel data into the audio buffer
            outBuffer.copyToChannel(outputL, 0);
            outBuffer.copyToChannel(outputR, 1);

            // Create a source node from the buffer and connect it to the desired output
            const source = new AudioBufferSourceNode(context, {
                buffer: outBuffer
            });
            source.connect(context.destination);

            // Schedule the buffer to play at the synth’s current time
            source.start(synTime);
        });

        // List all the voices currently playing
        const list = document.querySelector("#voice_list");
        /**
         * @type {HTMLPreElement[]}
         * create and store a <pre> element for each of the 16 MIDI channels
         * each one will be used to display information about active voices on a given channel
         */
        const voiceListElements = [];
        for (let index = 0; index < 16; index++) {
            const element = document.createElement("pre");
            voiceListElements.push(element);
            list.append(element);
        }
        // Set up an interval to regularly update the voice display for each channel
        setInterval(() => {
            // Note: this code is working directly with the synth engine.
            // Advanced users only.
            const core = synth.midiChannels[0].synthCore;

            // Start one display string per channel, headed by the channel number
            const textData = voiceListElements.map(
                (_, chanNumber) => `Channel ${chanNumber + 1}:\n`
            );
            for (const voice of core.voices) {
                if (!voice.isActive) continue;

                // Append a line for each currently active voice with its MIDI note
                textData[voice.channel] += `note: ${voice.midiNote}\n`;
            }

            for (const [
                index,
                voiceListElement
            ] of voiceListElements.entries()) {
                voiceListElement.textContent = textData[index];
            }
        }, 100);

        // Set up the MIDI player
        document
            .querySelector("#midi_input")
            .addEventListener("change", async (event) => {
                // Exit early if no file is selected
                if (!event.target?.files[0]) {
                    return;
                }
                // Parse and play the file
                const file = event.target.files[0];
                const midi = BasicMIDI.fromArrayBuffer(
                    await file.arrayBuffer()
                );
                seq.loadNewSongList([midi]);
                seq.play();
            });
    });

The audio loop in this script follows the same pattern shown above:

  1. Make sure that the synthesizer is not too far ahead.
  2. Create the buffers.
  3. Process the MIDI playback and render the audio.
  4. Create buffer sources and play the rendered chunks back through them.

A second loop, independent of the audio loop, updates the voice display.