
Audio Processing

Dyte's SDK supports middlewares, which are pluggable functions that can be applied to both the audio and video streams in a meeting. This lets you tap into the media to either observe or modify it. This guide covers the following:

  1. Create an audio middleware
  2. Add or remove audio middlewares

Create an audio middleware

Audio can be altered using either ScriptProcessorNode or AudioWorkletProcessor; we support both ways of performing audio alterations. The middleware function has access to the audioContext, which the developer can use to perform operations on the audio track. An audio middleware function can be synchronous or asynchronous and is expected to return an AudioWorkletNode or a ScriptProcessorNode. Here is a sample audio middleware function for your reference:

// Somewhere in your codebase
const meeting = await DyteClient.init(...);

async function addWhiteNoise(audioContext) {
  const moduleScript = `
    class WhiteNoiseProcessor extends AudioWorkletProcessor {
      process(inputs, outputs, parameters) {
        const output = outputs[0];
        output.forEach((channel) => {
          for (let i = 0; i < channel.length; i++) {
            channel[i] = Math.random() * 1.0 - 0.5;
          }
        });
        return true;
      }
    }

    registerProcessor('white-noise-processor', WhiteNoiseProcessor);
  `;

  const scriptUrl = URL.createObjectURL(new Blob([moduleScript], { type: 'text/javascript' }));
  await audioContext.audioWorklet.addModule(scriptUrl);

  const whiteNoise = new AudioWorkletNode(audioContext, 'white-noise-processor');
  return whiteNoise;
}

// Add the audio middleware
meeting.self.addAudioMiddleware(addWhiteNoise);
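
The same alteration can also be expressed with the ScriptProcessorNode API, which is deprecated in the Web Audio spec but still widely supported. As a minimal sketch (the function name and buffer size below are illustrative choices, not part of Dyte's API), a middleware using it could look like this:

function addWhiteNoiseScriptProcessor(audioContext) {
  // 4096-sample buffer, mono input, mono output
  const processor = audioContext.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    const output = event.outputBuffer.getChannelData(0);
    for (let i = 0; i < output.length; i++) {
      output[i] = Math.random() * 1.0 - 0.5;
    }
  };
  return processor;
}

meeting.self.addAudioMiddleware(addWhiteNoiseScriptProcessor);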

Add or remove audio middlewares

Middlewares are added to a meeting, which is why they are associated with the meeting object. To add an audio middleware, call the addAudioMiddleware method, passing the audio middleware function (creating this function was covered in the previous section) as a parameter. To remove an audio middleware, call the removeAudioMiddleware method with the same function as a parameter.

meeting.self.addAudioMiddleware(YourMiddlewareFunction);

// Once done, remove the middleware at a later point in your code
meeting.self.removeAudioMiddleware(YourMiddlewareFunction);
Note

In case you are building an audio transcription middleware, or any sort of middleware that doesn't alter the original audio stream, remember to feed the output channels with whatever you get from the input channels so that the audio is passed on to the next middleware. Otherwise, a blank audio buffer will be sent to the next middleware.
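
As a minimal sketch (the processor class and registered name below are illustrative, not part of Dyte's API), such a pass-through middleware could copy each input channel to the corresponding output channel:

async function transcriptionTapMiddleware(audioContext) {
  const moduleScript = `
    class PassThroughProcessor extends AudioWorkletProcessor {
      process(inputs, outputs, parameters) {
        const input = inputs[0];
        const output = outputs[0];
        for (let channel = 0; channel < input.length; channel++) {
          // Observe input[channel] here (e.g. buffer it for transcription),
          // then copy it through unchanged so the next middleware gets audio
          output[channel].set(input[channel]);
        }
        return true;
      }
    }

    registerProcessor('pass-through-processor', PassThroughProcessor);
  `;

  const scriptUrl = URL.createObjectURL(new Blob([moduleScript], { type: 'text/javascript' }));
  await audioContext.audioWorklet.addModule(scriptUrl);

  return new AudioWorkletNode(audioContext, 'pass-through-processor');
}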

If you'd instead like to perform operations on the audio stream through your own implementation, you can also programmatically access the meeting's audio stream. This is accessed from the meeting object.

// Somewhere in your codebase
const meeting = await DyteClient.init(...);

// Get the final track that everybody will hear when your microphone is
// turned on, after all the middlewares have been applied
meeting.self.audioTrack();

// Get the raw microphone track, before any middlewares are applied to it
meeting.self.rawAudioTrack();
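
For example, assuming rawAudioTrack() returns a MediaStreamTrack (as the snippet above suggests), you could run your own analysis on the raw microphone input using the standard Web Audio API:

const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(
  new MediaStream([meeting.self.rawAudioTrack()])
);

// Measure the input level without affecting what other participants hear
const analyser = audioContext.createAnalyser();
source.connect(analyser);

const levels = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(levels); // sample the frequency data on demand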