Back in my teenage years I used to make lighting controllers that were triggered by music for local discos, so this project was driven by curiosity to see if I could use a big screen TV instead of spotlights. Obviously a screen will never be a spotlight, so don’t take that statement too literally!
It’s fairly complicated, so the best way to understand it is to start with the examples given for the API and change values to see what happens.
An understanding of audio would be helpful to make the most of it.
The docs don’t appear to state the minimum frequency it resolves, but the upper limit is half the sample rate of the audio context; the analyserNode.fftSize setting determines how many frequency bins that range is divided into.
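As a rough sketch of the numbers involved (assuming a typical 44100 Hz sample rate, which is the common default rather than anything the post specifies):

```javascript
// Work out the frequency range an AnalyserNode can resolve.
// The upper limit is the Nyquist frequency (half the sample rate);
// fftSize controls how finely that range is sliced into bins.
function analyserFrequencyRange(sampleRate, fftSize) {
  return {
    binCount: fftSize / 2,            // matches analyser.frequencyBinCount
    binWidth: sampleRate / fftSize,   // Hz covered by each bin
    maxFrequency: sampleRate / 2,     // Nyquist limit
  };
}

// Example: a 44100 Hz context with fftSize of 2048 gives
// 1024 bins, each about 21.5 Hz wide, up to 22050 Hz.
const range = analyserFrequencyRange(44100, 2048);
```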
I wanted to replicate a disco lighting sequencer and sound to light unit
Light sequencers are used all over the place. Basically you have a string of lights that are all independently controlled so you can make effects like a light chaser where each light is switched on one after the other.
A light chaser is more interesting if it can be triggered to the beat of music, which is where you can again use the audio API.
A sound to light unit on the other hand usually has three channels for driving lights where each channel is controlled by low, mid and high frequencies.
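The three-channel idea can be sketched with the API’s frequency data. This is only an illustration of the approach, not the app’s actual code: the band boundaries here (equal thirds of the bins) are an arbitrary choice, and real units usually weight the bands towards the bass.

```javascript
// Split frequency data (as returned by analyser.getByteFrequencyData)
// into three averaged levels, one per light channel.
function threeBandLevels(data) {
  const third = Math.floor(data.length / 3);
  const avg = (from, to) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += data[i];
    return sum / (to - from); // average byte level, 0..255
  };
  return {
    low: avg(0, third),             // drives the bass channel
    mid: avg(third, 2 * third),     // drives the mid channel
    high: avg(2 * third, data.length), // drives the treble channel
  };
}

// In the browser this would run on each animation frame:
// const data = new Uint8Array(analyser.frequencyBinCount);
// analyser.getByteFrequencyData(data);
// const { low, mid, high } = threeBandLevels(data);
```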
I’ve uploaded a video of how the experimental app I built works to YouTube. At the end of this post there is also a link to the app so you can play around with it.
Two types of audio compression
Modern club / pop music usually has a lot of compression added to it in the recording studio so it tends to be all at the same level. There isn’t much difference between the loud and quiet sections. It’s often described as having a lack of dynamic range.
Audio files might also have a lot of compression added to reduce the file size. This type of compression removes the higher frequencies from the audio.
The first kind makes it harder to pull out the beat of the music, as there may be no obvious change in level to latch onto, while the second affects visualisations based on the higher frequencies.
Audio trigger point
To trigger the sequencer or sound to light effects you need to process the audio to get something from it that can be used as a trigger.
Despite what was said above about compression, audio files can still be quiet, loud or a mix of both.
Using the API to create triggers
First off, you need to define a reference or baseline level that tracks the audio level, so the triggers work with both loud and quiet music. Triggers are then fired by any sounds above this baseline. This ‘should’ allow it to pick out something like a drum.
The application does this by sampling the audio level at 500Hz (500 times a second) and keeping a running average of it.
Anything that rises above this average can then be treated as a trigger.
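The idea can be sketched as a running average with a threshold. The window size and the 1.3 threshold factor below are illustrative guesses, not values taken from the app; tuning them is exactly the trial-and-error process described next.

```javascript
// Keep a running average of recent levels and flag any sample
// that exceeds the average by a margin as a trigger (a "beat").
function createTrigger(windowSize = 250, factor = 1.3) {
  const history = [];
  let sum = 0;
  return function update(level) {
    history.push(level);
    sum += level;
    if (history.length > windowSize) sum -= history.shift();
    const average = sum / history.length;
    return level > average * factor; // true = treat this sample as a beat
  };
}

// Called roughly 500 times a second with the current audio level:
// const isBeat = createTrigger();
// if (isBeat(currentLevel)) advanceChaser();
```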
As this is a trial and error process I created a control panel with drop down selectors for changing variables. This allows you to try different things out without having to keep restarting the software.
I have uploaded the application if you want to have a look at it. Your screen resolution will need to be greater than 1240 pixels wide to keep all the circles from wrapping to the next row. It’s fairly large as I wanted to test how it performed on a large screen TV.
Click the Browse button to select an audio file from your hard drive. Nothing will be uploaded to the server.
The software picks up the volume set by the onscreen player, not the main volume control of your PC or laptop, so if nothing seems to happen, turn the volume up in the browser control.
Don’t expect anything too fantastic, it’s just an experimental application.