Writing Retrowave in Angular

The Web Audio API has been around for a while now, and there are lots of great articles about it, so I will not go into details about the API itself. What I will tell you is that Web Audio can become Angular's best friend if you introduce them properly. So let's do this.

In the Web Audio API you create a graph of audio nodes that process the sound passing through them. They can change volume, introduce delay, or distort the signal. Browsers have special `AudioNode` classes with various parameters to handle this. Initially, you would create them with factory functions of `AudioContext`, but they have since become proper constructors, which means you can extend them. This allows us to use the Web Audio API in Angular elegantly and declaratively: Angular directives are classes, and classes can extend existing native ones. The first sketch below shows both creation styles.

A typical feedback loop that creates an echo effect with Web Audio looks like the second sketch below. The vanilla code is purely imperative: we create objects, set parameters, and manually assemble the graph with the `connect` method. The example uses an HTML `audio` tag, so when the user presses play, they hear an echo on their audio file.

We will replicate this case using directives. The `AudioContext` will be delivered through Dependency Injection. Both `GainNode` and `DelayNode` have only one parameter each: gain and delay time. That is not just a number, it is an `AudioParam`; we will see what that means a bit later.

To link our nodes into a graph declaratively, we will add an `AUDIO_NODE` token that all our directives provide. Each directive takes the closest node from DI and connects to it. We also add `exportAs`, which lets us grab a node with template reference variables, so we can build the graph right in a template. To end a branch and direct the sound to the speakers, we use `waAudioDestinationNode`.

Dependency Injection alone is not enough to create loops like the one in the echo example, so we will make a special directive that accepts a node as an input and connects to it. Both of these directives extend `GainNode`, which adds an extra node to the graph but lets us disconnect it easily in `ngOnDestroy`: we do not need to remember everything that is connected to our directive, we can just disconnect `this` from everything at once.

The last directive we need to complete the example is a bit different. It is a source node, so it always sits at the top of the graph. We will put it on an `audio` tag, and it will turn the tag into a `MediaElementAudioSourceNode` for us. With all of that in place, we can recreate the echo example with our directives in a template.

There are lots of different nodes in the Web Audio API, but all of them can be implemented with a similar approach. Two other important source nodes are `OscillatorNode` and `AudioBufferSourceNode`. Often we do not want to add anything to the DOM, and there is no need to give the user controls over the audio file. In that case `AudioBufferSourceNode` is a better option than the `audio` tag. The only inconvenience is that it works with an `AudioBuffer`, unlike the `audio` tag, which takes a link to an audio asset. We can create a service to mitigate that, and then a directive that accepts both an `AudioBuffer` and an audio asset URL. Sketches of all these pieces follow.
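Here is roughly how the two creation styles compare; the factory call is the old way, the constructor is what makes extending possible:

```typescript
const context = new AudioContext();

// The old way: factory methods on AudioContext
const gain = context.createGain();
gain.gain.value = 0.5;

// The modern way: a proper constructor, so the class can be extended
const sameGain = new GainNode(context, {gain: 0.5});
```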
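And a minimal sketch of the imperative echo loop described above; the selector, file, and numbers are placeholders:

```typescript
const audio = document.querySelector('audio')!;
const context = new AudioContext();
const source = context.createMediaElementSource(audio);
const delay = new DelayNode(context, {delayTime: 0.3});
const feedback = new GainNode(context, {gain: 0.5});

// Dry signal goes straight to the speakers
source.connect(context.destination);

// Wet signal: the delay feeds a gain that feeds the delay again,
// so each repetition comes back quieter until it dies out
source.connect(delay);
delay.connect(feedback);
feedback.connect(delay);
delay.connect(context.destination);
```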
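A sketch of what the `GainNode` directive could look like, assuming an `AudioContext` instance is provided in the root injector. The token, selector, and class names are my own; the real library's internals differ in detail:

```typescript
import {
  Directive,
  forwardRef,
  Inject,
  InjectionToken,
  Input,
  OnDestroy,
  Optional,
  SkipSelf,
} from '@angular/core';

export const AUDIO_NODE = new InjectionToken<AudioNode>('AUDIO_NODE');

@Directive({
  selector: '[waGainNode]',
  exportAs: 'AudioNode',
  providers: [
    {provide: AUDIO_NODE, useExisting: forwardRef(() => WebAudioGain)},
  ],
})
export class WebAudioGain extends GainNode implements OnDestroy {
  // `gain` is an AudioParam, hence the setter; more on that later
  @Input('waGainNode')
  set gainParam(value: number) {
    this.gain.value = value;
  }

  constructor(
    @Inject(AudioContext) context: AudioContext,
    @Optional() @SkipSelf() @Inject(AUDIO_NODE) node: AudioNode | null,
  ) {
    super(context);

    // Take the closest node from DI and join the graph
    if (node) {
      node.connect(this);
    }
  }

  ngOnDestroy() {
    this.disconnect();
  }
}
```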
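The destination directive and the loop-making `waOutput` directive could be sketched like this. Both extend `GainNode`, as described above, so cleanup is a single `disconnect` call (again, hypothetical code in the spirit of the article):

```typescript
import {Directive, Inject, Input, OnDestroy, Optional, SkipSelf} from '@angular/core';

@Directive({selector: '[waAudioDestinationNode]'})
export class WebAudioDestination extends GainNode implements OnDestroy {
  constructor(
    @Inject(AudioContext) context: AudioContext,
    @Optional() @SkipSelf() @Inject(AUDIO_NODE) node: AudioNode | null,
  ) {
    super(context);

    node?.connect(this);
    // End the branch in the speakers
    this.connect(context.destination);
  }

  ngOnDestroy() {
    this.disconnect();
  }
}

@Directive({selector: '[waOutput]'})
export class WebAudioOutput extends GainNode implements OnDestroy {
  // Accepts an arbitrary node (or even an AudioParam) to connect to,
  // which is what makes feedback loops possible
  @Input()
  set waOutput(destination: AudioNode | AudioParam | undefined) {
    this.disconnect();

    if (destination instanceof AudioParam) {
      this.connect(destination);
    } else if (destination) {
      this.connect(destination);
    }
  }

  constructor(
    @Inject(AudioContext) context: AudioContext,
    @Optional() @SkipSelf() @Inject(AUDIO_NODE) node: AudioNode | null,
  ) {
    super(context);
    node?.connect(this);
  }

  ngOnDestroy() {
    // Extending GainNode pays off here: one call detaches everything
    this.disconnect();
  }
}
```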
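The source directive turns its host `audio` tag into a `MediaElementAudioSourceNode`; a sketch under the same assumptions:

```typescript
import {Directive, ElementRef, forwardRef, Inject, OnDestroy} from '@angular/core';

@Directive({
  selector: 'audio[waMediaElementAudioSourceNode]',
  exportAs: 'AudioNode',
  providers: [
    {provide: AUDIO_NODE, useExisting: forwardRef(() => WebAudioMediaSource)},
  ],
})
export class WebAudioMediaSource extends MediaElementAudioSourceNode implements OnDestroy {
  constructor(
    @Inject(AudioContext) context: AudioContext,
    @Inject(ElementRef) {nativeElement}: ElementRef<HTMLMediaElement>,
  ) {
    // The host <audio> element becomes the top of the graph
    super(context, {mediaElement: nativeElement});
  }

  ngOnDestroy() {
    this.disconnect();
  }
}
```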
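With these pieces, the echo example could look like this in a template. A `waDelayNode` directive built exactly like the gain one above is assumed:

```html
<audio src="assets/sample.mp3" controls waMediaElementAudioSourceNode>
  <!-- Dry signal goes straight to the speakers -->
  <ng-container waAudioDestinationNode></ng-container>
  <!-- Echo: the delayed signal feeds a gain that feeds the delay again -->
  <ng-container [waDelayNode]="0.3" #delay="AudioNode">
    <ng-container [waGainNode]="0.5">
      <ng-container [waOutput]="delay"></ng-container>
    </ng-container>
    <ng-container waAudioDestinationNode></ng-container>
  </ng-container>
</audio>
```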
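A sketch of the buffer-loading service: it fetches the asset once, decodes it with the `AudioContext` from DI, and caches the resulting `AudioBuffer` (names are mine):

```typescript
import {Inject, Injectable} from '@angular/core';

@Injectable({providedIn: 'root'})
export class AudioBufferService {
  private readonly cache = new Map<string, Promise<AudioBuffer>>();

  constructor(@Inject(AudioContext) private readonly context: AudioContext) {}

  fetch(url: string): Promise<AudioBuffer> {
    if (!this.cache.has(url)) {
      this.cache.set(
        url,
        fetch(url)
          .then(response => response.arrayBuffer())
          .then(raw => this.context.decodeAudioData(raw)),
      );
    }

    return this.cache.get(url)!;
  }
}
```

The buffer-source directive can then accept either an `AudioBuffer` or a URL: when its input is a string, it asks this service for the decoded buffer first.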
Audio nodes have a special kind of property: `AudioParam`, such as `gain` in `GainNode`. That is why we used a setter for it in the sketch above. Such a property's value can be automated: you can make it change linearly, exponentially, or even over an array of values in a given time. We need some sort of handler that takes care of this for all such inputs of our directives, and a decorator is a good option for this case.

The decorator passes the processing on to a dedicated function, and strong types will not let us accidentally use it for a non-existent parameter. So what would the `AudioParamInput` type look like? Besides `number`, it includes an automation object. The `processAudioParam` function translates those objects into native API commands. It is pretty boring, so I will just describe the principle: if the current value is 0 and we want it to change linearly to 1 over a second, we pass `{value: 1, duration: 1, mode: 'linear'}`. For complex automation we also support an array of such objects.

We would typically pass an automation object with a short duration instead of a plain `number`, because it prevents the audible clicking artifacts that appear when a parameter changes abruptly. Doing that manually all the time is not convenient, so let's create a pipe that takes the target value, the duration, and an optional mode as arguments.

Besides that, an `AudioParam` can be automated by connecting an oscillator to it. Usually a frequency below 1 Hz is used, and such an oscillator is called an LFO: a Low Frequency Oscillator. It can create movement in sound. In the LFO sketch below, it adds texture to otherwise static chords by modulating the frequency of a filter they pass through. To connect an oscillator to a parameter, we can use our `waOutput` directive, accessing the node thanks to `exportAs`.

The Web Audio API can be used for very different things, from real-time processing of a voice for a podcast to math computations, Fourier transforms, and more. Let's compose a short music piece using our directives.

We will start with a simple task: a straight drum beat. To count beats, we will create a stream and add it to DI. We have 4 beats per measure, so we map the stream to get `true` at the beginning and `false` in the middle of each bar, and use that to trigger audio samples.

Now let's add a melody. We will use numbers to indicate notes, where 69 means middle A. The function that translates such a number into a frequency can easily be found on Wikipedia. Our component will play the right frequency for each note on each beat, and inside its template we will have a real synthesizer!

But first we need another pipe. It automates volume with an ADSR envelope, which stands for "Attack, Decay, Sustain, Release". In our case we just need the sound to start quickly and then fade away, so the pipe is rather simple, and we will use it for our synth tune.

Let's figure out what goes on in that synth. We have two oscillators. The first one is just a sine wave passed through the ADSR pipe. The second one goes through the same echo loop we have already seen, except this time it also passes through a `ConvolverNode`, which creates room acoustics using an impulse response. That is a big and interesting subject of its own, but it is outside this article's scope. All other tracks in our song are made similarly: nodes are connected to each other, and parameters are automated with LFOs or changed smoothly via our pipes. The remaining sketches below show these building blocks, with the synth reduced to its first voice.
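Here is one possible shape of the whole chain: the input type, the decorator, and the processing function. This is my reconstruction of the idea, not the library's actual internals; the real implementation also types the parameter name against the node's keys, which the sketch keeps as a plain string for brevity:

```typescript
export interface AudioParamAutomation {
  value: number;
  duration: number;
  mode: 'instant' | 'linear' | 'exponential';
}

export type AudioParamInput = number | AudioParamAutomation | AudioParamAutomation[];

// Property decorator: assignments to the decorated input are
// rerouted to the AudioParam of the given name on the node itself
export function audioParam(param: string): PropertyDecorator {
  return (target, propertyKey) => {
    Object.defineProperty(target, propertyKey, {
      set(this: AudioNode, value: AudioParamInput) {
        processAudioParam(
          (this as unknown as Record<string, AudioParam>)[param],
          value,
          this.context.currentTime,
        );
      },
    });
  };
}

export function processAudioParam(
  param: AudioParam,
  value: AudioParamInput,
  currentTime: number,
): void {
  param.cancelScheduledValues(currentTime);

  if (typeof value === 'number') {
    param.setValueAtTime(value, currentTime);
    return;
  }

  let time = currentTime;

  for (const {value: target, duration, mode} of Array.isArray(value) ? value : [value]) {
    time += duration;

    switch (mode) {
      case 'linear':
        param.linearRampToValueAtTime(target, time);
        break;
      case 'exponential':
        // Note: exponential ramps cannot reach exactly 0
        param.exponentialRampToValueAtTime(target, time);
        break;
      default:
        param.setValueAtTime(target, time);
    }
  }
}

// Usage inside a directive extending OscillatorNode; the property name
// must not shadow the native `frequency` AudioParam:
//
//   @Input('frequency')
//   @audioParam('frequency')
//   frequencyParam?: AudioParamInput;
```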
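A sketch of that pipe, producing the automation object from the previous sketch:

```typescript
import {Pipe, PipeTransform} from '@angular/core';

@Pipe({name: 'waAudioParam'})
export class WebAudioParamPipe implements PipeTransform {
  transform(
    value: number,
    duration: number,
    mode: AudioParamAutomation['mode'] = 'linear',
  ): AudioParamAutomation {
    return {value, duration, mode};
  }
}
```

Now something like `[waGainNode]="volume | waAudioParam : 0.1"` glides to the new volume over a tenth of a second instead of clicking.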
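Connecting an LFO in a template could look like this. Here `waOscillatorNode`, `waBiquadFilterNode`, and `waAudioBufferSourceNode` are assumed to be built just like the gain directive sketched earlier, and `waOutput` accepts an `AudioParam` as well as a node:

```html
<!-- Otherwise static chords passed through a lowpass filter -->
<ng-container waAudioBufferSourceNode buffer="assets/chords.mp3" autoplay loop>
  <ng-container waBiquadFilterNode type="lowpass" [frequency]="400" #filter="AudioNode">
    <ng-container waAudioDestinationNode></ng-container>
  </ng-container>
</ng-container>

<!-- The LFO: a slow oscillator adding movement to the filter frequency -->
<ng-container waOscillatorNode autoplay [frequency]="0.4">
  <ng-container [waOutput]="filter.frequency"></ng-container>
</ng-container>
```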
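The beat counter could be an RxJS stream behind an `InjectionToken`, plus the bar mapping described above (the BPM value and names are placeholders):

```typescript
import {InjectionToken} from '@angular/core';
import {Observable, timer} from 'rxjs';
import {filter, map, share} from 'rxjs/operators';

const BPM = 125;

// Emits the beat index within the bar: 0, 1, 2, 3, 0, 1, ...
export const BEAT = new InjectionToken<Observable<number>>('BEAT', {
  factory: () =>
    timer(0, 60_000 / BPM).pipe(
      map(tick => tick % 4),
      share(),
    ),
});

// true at the start of the bar, false in the middle: a straight beat
export function toHalfBar(beat$: Observable<number>): Observable<boolean> {
  return beat$.pipe(
    filter(beat => beat % 2 === 0),
    map(beat => beat === 0),
  );
}
```

In a template, that boolean can then pick which sample to retrigger: a kick at the start of the bar and a snare in the middle, say.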
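A sketch of the melody plumbing: the standard pitch formula, a tune as an array of note numbers, and a component that emits a fresh note event on every beat. The names are mine, and the array is a placeholder rather than the article's actual melody:

```typescript
import {Component, Inject} from '@angular/core';
import {Observable} from 'rxjs';
import {map, scan} from 'rxjs/operators';

export interface NoteEvent {
  frequency: number; // Hz
  velocity: number; // peak gain for the envelope
}

// Note 69 is A at 440 Hz; each semitone is a factor of 2^(1/12)
export function toFrequency(note: number): number {
  return 440 * 2 ** ((note - 69) / 12);
}

const TUNE = [69, 71, 74, 76, 74, 71, 69, 64];

@Component({
  selector: 'my-song',
  templateUrl: './song.template.html',
})
export class SongComponent {
  readonly note$: Observable<NoteEvent>;

  constructor(@Inject(BEAT) beat$: Observable<number>) {
    this.note$ = beat$.pipe(
      scan(index => index + 1, -1),
      // A new object on every beat, so pure pipes downstream retrigger
      map(index => ({
        frequency: toFrequency(TUNE[index % TUNE.length]),
        velocity: 0.2,
      })),
    );
  }
}
```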
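And a hedged sketch of the ADSR pipe, simplified to the quick-attack, slow-fade shape we need, reusing the `AudioParamAutomation` and `NoteEvent` types from the sketches above:

```typescript
import {Pipe, PipeTransform} from '@angular/core';

@Pipe({name: 'adsr'})
export class AdsrPipe implements PipeTransform {
  // Ramp up to the note's velocity in `attack` seconds,
  // then fade back to silence over `release` seconds
  transform({velocity}: NoteEvent, attack: number, release: number): AudioParamAutomation[] {
    return [
      {value: velocity, duration: attack, mode: 'linear'},
      {value: 0, duration: release, mode: 'linear'},
    ];
  }
}
```

A single-oscillator version of the synth template, assuming the gain input accepts `AudioParamInput` via the decorator. Because `note$` emits a fresh object on every beat, the pure pipe recomputes and the envelope retriggers with each note:

```html
<ng-container *ngIf="note$ | async as note">
  <!-- A plain sine wave with a fast attack and a slow fade -->
  <ng-container waOscillatorNode autoplay [frequency]="note.frequency">
    <ng-container [waGainNode]="note | adsr : 0.01 : 0.4">
      <ng-container waAudioDestinationNode></ng-container>
    </ng-container>
  </ng-container>
</ng-container>
```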

I only went over a small portion of this subject, simplifying corner cases. We've made a complete conversion of the Web Audio API into a declarative, open-source Angular library: @ng-web-apis/audio. It covers all the nodes and features. This library is part of a bigger project called Web APIs for Angular, an initiative with the goal of creating lightweight, high-quality wrappers of native APIs for idiomatic use with Angular. So if you want to try, say, the Payment Request API, or play with your MIDI keyboard in the browser, you are very welcome to browse all our releases so far.