Tutorials on Music

Learn about Music from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Jam on your MIDI keyboard in Angular

The Web MIDI API is an interesting tool. Even though it has been around for a while now, it is still only supported by Chrome. But that's not going to stop us from creating a playable synthesizer in Angular. It's time to take the Web Audio API to the next level!

Previously, we spoke about declarative use of the Web Audio API in Angular. Programming music is fun and all, but how about actually playing it? There is a standard called MIDI: a messaging protocol for data exchange between electronic instruments, developed back in the '80s. And Chrome supports it natively. This means that if you have a synthesizer or a MIDI keyboard, you can hook it up to a computer and read what's played right in the browser. You can even control other hardware from your machine. Let's learn how to do it the Angular way.

There's not a lot of documentation about it other than the specs. You request access to MIDI devices from navigator and you receive all MIDI inputs and outputs in a Promise. Those inputs and outputs (also called ports) act like event targets. Communication is performed through MIDIMessageEvents, which carry Uint8Array data. Each message is 3 bytes at most. The first one is called a status byte, and each value has a particular role, such as a key press or a pitch bend. The second and third integers are called data bytes. In the case of a key press, the second byte tells us which key was pressed and the third one is the velocity (how loudly the note is played). The full spec is available on the official MIDI website.

In Angular we handle events with Observables, so the first step in adopting the Web MIDI API is to convert it to RxJS. To subscribe to events, we first need to get the MIDIAccess object to reach all the inputs. As mentioned before, we request it from navigator and get a Promise in response. Thankfully, RxJS works with Promises too. We can create an injection token using NAVIGATOR from the @ng-web-apis/common package, so that we are not using global objects directly. Now that we have it, we can subscribe to all MIDI events. There are two ways we could create the Observable, and since not much setup is required in this case, a token will suffice, with a bit of extra code to handle Promise rejection. We can also extract a particular MIDI port from MIDIAccess in case we want to, say, send an outgoing message; another token and a prepared provider let us do this with ease.

To work with our stream we need to add some custom operators. After all, we shouldn't have to analyze raw event data every time to understand what we're dealing with. Operators can be roughly broken into two categories: monotype and mapping. With the first group, we can filter the stream down to events of interest, for example only listening to played notes or volume sliders. The second group alters elements for us, like dropping the rest of the event and delivering only the data array. The status byte is organized in groups of 16: 128–143 are noteOff messages for each of the 16 channels, 144–159 are noteOn, and so on. So if we divide the status byte by 16 and take the remainder, we end up with that message's channel. That lets us listen to messages only from a given channel (out of 16); and if we only care about played notes, we can write a similar operator and chain them to get the stream we need.

Time to put all this to work! With a little help from our Web Audio API library discussed in my previous article, we can create a nice-sounding synth with just a few directives. Then we need to feed it the played notes from the stream we've assembled.
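Here is a condensed sketch of what this setup might look like. To be clear about assumptions: the token names (MIDI_ACCESS, MIDI_MESSAGES) and operator names are illustrative rather than what @ng-web-apis/midi actually exports, and the MIDI DOM types (MIDIAccess, MIDIMessageEvent) may require the @types/webmidi package on older TypeScript versions.

```typescript
import {inject, InjectionToken} from '@angular/core';
import {NAVIGATOR} from '@ng-web-apis/common';
import {from, fromEvent, merge, Observable} from 'rxjs';
import {filter, map, switchMap} from 'rxjs/operators';

// Request MIDI access through the injected Navigator
// instead of touching the global object directly
export const MIDI_ACCESS = new InjectionToken<Promise<MIDIAccess>>(
  'Promise of MIDIAccess',
  {factory: () => inject(NAVIGATOR).requestMIDIAccess()},
);

// A single stream of every message coming from every MIDI input
export const MIDI_MESSAGES = new InjectionToken<Observable<MIDIMessageEvent>>(
  'All incoming MIDI messages',
  {
    factory: () =>
      from(inject(MIDI_ACCESS)).pipe(
        switchMap(access =>
          merge(
            ...Array.from(access.inputs.values()).map(
              input =>
                fromEvent(input, 'midimessage') as Observable<MIDIMessageEvent>,
            ),
          ),
        ),
      ),
  },
);

// Monotype operator: status bytes repeat in blocks of 16,
// so `status % 16` is the message's channel
export function filterByChannel(channel: number) {
  return filter<MIDIMessageEvent>(({data}) => data![0] % 16 === channel);
}

// Mapping operator: keep only noteOn/noteOff messages (status 128-159)
// and unwrap the event down to its [status, note, velocity] bytes
export function notes() {
  return (source: Observable<MIDIMessageEvent>): Observable<Uint8Array> =>
    source.pipe(
      filter(({data}) => data![0] >= 128 && data![0] < 160),
      map(({data}) => data!),
    );
}
```

A consumer can then inject MIDI_MESSAGES and chain the operators, for example `messages$.pipe(filterByChannel(0), notes())`, to get a stream of played notes on channel 0.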
We will use the last code example as a starting point. To make the synthesizer polyphonic, we need to keep track of all played notes, so we will add scan to our chain (a condensed sketch of this step appears at the end of the article). To control the volume of held keys, and to avoid cutting the sound off abruptly when a key is released, we will create a proper ADSR pipe (the previous article had a simplified version). With this pipe we can put a fine synth together in the template.

We iterate over the accumulated notes with the built-in keyvalue pipe, tracking items by the played key. Then we have two oscillators playing those frequencies and, at the end, a reverberation effect with ConvolverNode. It's a pretty basic setup without a lot of code, but it gives us a playable instrument with rich sound. You can go ahead and give it a try in our interactive demo below.

In Angular, we are used to working with events through RxJS, and the Web MIDI API is not much different from regular events. With a few architectural decisions and tokens, we managed to use MIDI in an Angular app. The solution we created is available as the @ng-web-apis/midi open-source package. It focuses mostly on receiving events, so if you see something missing, like a helper function or another operator, feel free to open an issue.

This library is part of a bigger project called Web APIs for Angular, an initiative to create lightweight, high-quality wrappers of native APIs for idiomatic use with Angular. So if you want to try the Payment Request API or need a declarative Intersection Observer, you are very welcome to browse all our releases so far.
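As promised above, here is a minimal sketch of the note-accumulation step, building on the tokens and operators sketched earlier. The names and the './midi' import path are illustrative; the article's actual code and @ng-web-apis/midi may differ.

```typescript
import {Component, Inject} from '@angular/core';
import {Observable} from 'rxjs';
import {scan} from 'rxjs/operators';
// Tokens and operators from the earlier sketch (hypothetical path)
import {MIDI_MESSAGES, filterByChannel, notes} from './midi';

// Standard equal-temperament conversion: MIDI note 69 is A4 = 440 Hz
function toFrequency(note: number): number {
  return 440 * Math.pow(2, (note - 69) / 12);
}

@Component({
  selector: 'my-synth',
  templateUrl: './synth.template.html',
})
export class SynthComponent {
  // Currently held notes: MIDI note number -> frequency in hertz.
  // The template iterates over it with the built-in keyvalue pipe.
  readonly notes$: Observable<Map<number, number>>;

  constructor(@Inject(MIDI_MESSAGES) messages$: Observable<MIDIMessageEvent>) {
    this.notes$ = messages$.pipe(
      filterByChannel(0),
      notes(),
      scan((held, [status, note, velocity]) => {
        // noteOn (144-159) with non-zero velocity presses a key;
        // noteOff (128-143) or a zero-velocity noteOn releases it
        if (status >= 144 && velocity > 0) {
          held.set(note, toFrequency(note));
        } else {
          held.delete(note);
        }

        // Return a fresh reference so the keyvalue pipe re-evaluates
        return new Map(held);
      }, new Map<number, number>()),
    );
  }
}
```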


Writing Retrowave in Angular

The Web Audio API has been around for a while now and there are lots of great articles about it, so I will not go into details regarding the API itself. What I will tell you is that Web Audio can be Angular's best friend if you introduce it well. So let's do this.

In the Web Audio API you create a graph of audio nodes that process the sound passing through them. They can change volume, introduce delay or distort the signal. Browsers have special AudioNodes with various parameters to handle this. Initially, one would create them with factory functions of AudioContext, but since then they have become proper constructors, which means you can extend them. This allows us to use the Web Audio API in Angular elegantly and declaratively: Angular directives are classes, and they can extend existing native classes.

Consider the typical feedback loop used to create an echo effect with Web Audio. The vanilla code is purely imperative: we create objects, set parameters, and manually assemble the graph using the connect method, with an HTML audio tag as the source. When the user presses play, they hear an echo on their audio file. We will replicate this case using directives.

The AudioContext will be delivered through Dependency Injection. Both GainNode and DelayNode have only one parameter each: gain and delay time. That is not just a number, it is an AudioParam; we will see what that means a bit later. To link our nodes into a graph declaratively, we will add an AUDIO_NODE token. All our directives will provide it, take the closest node from DI and connect to it. We've also added exportAs, which allows us to grab a node with template reference variables. Now we can build the graph in a template, ending a branch and directing sound to the speakers with waAudioDestinationNode.

To create loops like in the echo example above, Dependency Injection is not enough. We will make a special directive that allows us to pass a node as an input to connect to. Both of those directives extend GainNode, which creates an extra node in the graph but lets us disconnect easily in ngOnDestroy: we do not need to remember everything that is connected to our directive, we can just disconnect this from everything at once.

The last directive we need to complete our example is a bit different. It's a source node, so it's always at the top of our graph. We will put a directive on the audio tag and it will turn the element into a MediaElementAudioSourceNode for us. With that, we can recreate the echo example entirely with directives.

There are lots of different nodes in the Web Audio API, but all of them can be implemented using a similar approach. Two other important source nodes are OscillatorNode and AudioBufferSourceNode. Often we do not want to add anything to the DOM, and there is no need to provide audio file controls to the user. In that case AudioBufferSourceNode is a better option than the audio tag. The only inconvenience is that it works with an AudioBuffer, unlike the audio tag, which takes a link to an audio asset. We can create a service to mitigate that, and then a directive that works both with an AudioBuffer and an audio asset URL.

Audio nodes have a special kind of property: AudioParam, for example gain in GainNode. That's why we used a setter for it. Such a property's value can be automated: you can set it to change linearly, exponentially, or even over an array of values in a given time. We need some sort of handler that would take care of this for all such inputs of our directives.
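To ground this, here is a heavily simplified sketch of such a node directive, with its single AudioParam input handled by hand. Everything here is an assumption for illustration (the selector, the gainParam input name, the connection direction); the actual @ng-web-apis/audio implementation differs.

```typescript
import {
  Directive,
  forwardRef,
  Inject,
  InjectionToken,
  Input,
  OnDestroy,
  Optional,
  SkipSelf,
} from '@angular/core';

export const AUDIO_CONTEXT = new InjectionToken<AudioContext>('AudioContext', {
  factory: () => new AudioContext(),
});

// Every node directive provides this token so that nested
// directives can find the closest node and connect to it
export const AUDIO_NODE = new InjectionToken<AudioNode>('Closest AudioNode');

@Directive({
  selector: '[waGainNode]',
  exportAs: 'AudioNode',
  providers: [{provide: AUDIO_NODE, useExisting: forwardRef(() => WaGainNode)}],
})
export class WaGainNode extends GainNode implements OnDestroy {
  @Input()
  set gainParam(value: number) {
    // A short ramp instead of a hard assignment
    // prevents audible clicks on abrupt changes
    this.gain.setValueAtTime(this.gain.value, this.context.currentTime);
    this.gain.linearRampToValueAtTime(value, this.context.currentTime + 0.02);
  }

  constructor(
    @Inject(AUDIO_CONTEXT) context: AudioContext,
    @Optional() @SkipSelf() @Inject(AUDIO_NODE) node: AudioNode | null,
  ) {
    super(context);

    // Attach to the closest node above us in the DI tree, if any
    if (node) {
      this.connect(node);
    }
  }

  ngOnDestroy() {
    // The directive IS the node, so one call detaches everything
    this.disconnect();
  }
}
```

The setter shows the problem we are about to solve: every AudioParam input needs the same ramping boilerplate, which is exactly what gets factored out next.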
A decorator is a good option for this case: it passes processing on to a dedicated function, and strong types will not let us accidentally use it on a non-existent parameter. So what would the AudioParamInput type look like? Besides number, it includes an automation object. The processAudioParam function translates those objects into native API commands. It's pretty boring, so I will just describe the principle (a sketch appears at the end of this article): if the current value is 0 and we want it to change linearly to 1 over a second, we pass {value: 1, duration: 1, mode: 'linear'}. For complex automation we also need to support an array of such objects. We would typically pass an automation object with a short duration instead of a plain number, because it prevents audible clicking artifacts when a parameter changes abruptly. It's not convenient to do that manually all the time, so let's create a pipe that takes the target value, duration and an optional mode as arguments.

Besides that, an AudioParam can be automated by connecting an oscillator to it. Usually a frequency lower than 1 Hz is used, and it is called an LFO: a Low Frequency Oscillator. It can create movement in sound. In the example below it adds texture to otherwise static chords by modulating the frequency of a filter they pass through. To connect an oscillator to a parameter we can use our waOutput directive, accessing the node thanks to exportAs.

The Web Audio API can be used for many different things, from real-time processing of a voice for a podcast to math computations, Fourier transforms and more. Let's compose a short music piece using our directives. We will start with a simple task: a straight drum beat. To count beats we will create a stream and add it to DI. We have 4 beats per measure, so after mapping our stream it gives us true at the beginning and false in the middle of each bar, which we use to play audio samples.

Now let's add a melody. We will use numbers to indicate notes, where 69 means middle A. The function that translates such a number to a frequency can easily be found on Wikipedia. Our component plays the right frequency for each note on each beat, and inside its template we have a real synthesizer! But first we need another pipe, one that automates volume with an ADSR envelope. That stands for "Attack, Decay, Sustain, Release". In our case we need the sound to start quickly and then fade away, so the pipe is rather simple, and we can use it for our synth tune.

Let's figure out what's going on here. We have two oscillators. The first one is just a sine wave passed through the ADSR pipe. The second one goes through the same echo loop we've seen, except this time it also passes through a ConvolverNode, which creates room acoustics using an impulse response. That is a big and interesting subject of its own, but it is outside this article's scope. All the other tracks in our song are made similarly: nodes are connected to each other, and parameters are automated with LFOs or changed smoothly via our pipes.

I only went over a small portion of this subject, simplifying corner cases. We've made a complete conversion of the Web Audio API into a declarative Angular open-source library, @ng-web-apis/audio. It covers all the nodes and features. This library is part of a bigger project called Web APIs for Angular, an initiative with the goal of creating lightweight, high-quality wrappers of native APIs for idiomatic use with Angular. So if you want to try, say, the Payment Request API, or play with your MIDI keyboard in the browser, you are very welcome to browse all our releases so far.
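For reference, here is a sketch of the automation objects and the processAudioParam translation described above. The type shapes and option names are assumptions for illustration, not the actual @ng-web-apis/audio API; the AudioParam methods themselves are standard Web Audio calls.

```typescript
// An automation step: ramp to `value` over `duration` seconds
interface AudioParamAutomation {
  value: number;
  duration: number;
  mode: 'linear' | 'exponential' | 'instant';
}

// What a decorated directive input accepts besides a plain number
type AudioParamInput = number | AudioParamAutomation | AudioParamAutomation[];

function processAudioParam(
  param: AudioParam,
  value: AudioParamInput,
  currentTime: number,
): void {
  // A plain number becomes an instant change
  if (typeof value === 'number') {
    param.setValueAtTime(value, currentTime);
    return;
  }

  const steps = Array.isArray(value) ? value : [value];
  let time = currentTime;

  // Anchor the ramps at the current value, then schedule each step
  param.setValueAtTime(param.value, time);

  for (const {value: target, duration, mode} of steps) {
    time += duration;

    switch (mode) {
      case 'linear':
        param.linearRampToValueAtTime(target, time);
        break;
      case 'exponential':
        // Web Audio forbids exponential ramps to exactly 0
        param.exponentialRampToValueAtTime(target || 0.0001, time);
        break;
      default:
        param.setValueAtTime(target, time);
    }
  }
}
```

With this in place, {value: 1, duration: 1, mode: 'linear'} schedules a one-second linear ramp to 1, and an ADSR envelope is simply an array of such steps.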

