This is my initial stab at a wearable music visualizer for DS 501 in Spring ‘16. I’ve had concepts and versions of such a device kicking around my head for quite some time, to the point that I’d already acquired the majority of the electronic components before planning began. The idea is simple - take in an audio input, break it down into the appropriate channels, and convey the intensity of these channels to everyone in the immediate vicinity.
It’s a hat. Well, a hat covered in LEDs and wires. We take an audio signal through a 3.5mm jack (which unfortunately runs right to the top of our head), break out the left and right channels, perform some simple spectrum analysis, and adjust the intensity of the LEDs according to the sound profile. The analysis is dynamic, so any audio signal will work.
The intent is to get something of a light show synced to the music, with the goal of visually distinguishing the left and right sound channels and augmenting the perception of the sound stage.
The final product is not easily wearable, but it is an entertaining experience. The LEDs are bright enough that in a dark room you get a very distinct visual representation of the sound. For a video of the device in action, scroll to the bottom of this page.
Let’s take a look at the bill of materials:
Of particular interest are the MSGEQ7 graphic equalizer chips. They’re remarkably easy to connect and provide an analog measure of the “intensity” of frequencies in seven bands from 63Hz to 16kHz. The quotes are necessary - the relation between the input level in dB and the output value is not straightforward.
These chips are really nothing more than some bandpass filters feeding into accumulators that are indexed by a multiplexer, but they allow us to get a good enough approximation of the Fourier transform of our input waveform. Normally, we’d get this by using the Discrete Fourier Transform (DFT), but the Arduino Pro Mini hooking everything up is only clocked at 8MHz. In my previous experiments, that’s not quite enough clock speed to both analyze the input signal and drive the LEDs without introducing some noticeable flicker.
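Once each band has been sampled, the raw ADC reading still needs rescaling before it can drive an LED - the MSGEQ7 never outputs quite zero, so subtracting a noise floor keeps the LEDs dark during silence. A minimal sketch of that mapping, assuming the Pro Mini’s 10-bit ADC and a noise floor of about 80 counts (that constant is a guess to tune per board):

```cpp
#include <cstdint>
#include <algorithm>

// Assumed noise floor of the MSGEQ7 output, in ADC counts; tune per board.
const int kNoiseFloor = 80;

// Map one band reading (10-bit ADC, 0-1023) to an 8-bit PWM duty.
uint8_t bandToBrightness(int raw) {
    // Clip everything below the noise floor to zero so silence stays dark.
    int v = std::max(0, raw - kNoiseFloor);
    // Rescale the remaining range onto 0-255.
    return static_cast<uint8_t>((static_cast<long>(v) * 255) / (1023 - kNoiseFloor));
}
```

A perceptually better mapping would be logarithmic rather than linear, but a linear rescale with a noise cutoff already looks reasonable in practice.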
We also use a PWM LED driver with 12 channels to take some of the workload off the Arduino. The Arduino communicates with the driver over SPI, so we get to control a ton of LEDs with only a few pins. This also lets us pump a little more power into the LEDs by avoiding the Pro Mini’s built-in 3.3v regulator, for a noticeably more vivid presentation.
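The exact wire format depends on the driver part, but the idea is the same across them: pack the per-channel duty values into a contiguous byte stream and shift it out over SPI in one burst. A hedged sketch of the packing step, assuming a hypothetical driver that expects twelve 12-bit grayscale values, MSB first:

```cpp
#include <cstdint>
#include <vector>

// Pack twelve 12-bit channel values into the 18-byte stream an SPI PWM
// driver might expect (12 channels x 12 bits = 144 bits). The MSB-first
// layout is an assumption; check the datasheet of the actual part.
std::vector<uint8_t> packFrame(const uint16_t (&ch)[12]) {
    std::vector<uint8_t> out;
    out.reserve(18);
    uint32_t acc = 0;  // bit accumulator
    int bits = 0;      // number of valid bits currently in acc
    for (int i = 0; i < 12; ++i) {
        acc = (acc << 12) | (ch[i] & 0x0FFF);  // append 12 bits per channel
        bits += 12;
        while (bits >= 8) {                    // emit whole bytes, MSB first
            bits -= 8;
            out.push_back((acc >> bits) & 0xFF);
        }
    }
    return out;
}
```

On the Arduino side, the resulting bytes would go out with a plain `SPI.transfer` loop; building the whole frame first keeps the latch timing simple.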
This PCB represents the operational brains of the project. On the left, you can see the audio input cable and the output port. The two square chips in the middle are the MSGEQ7s. On the right, we have the Arduino Pro Mini that ties all the other components together. You can see the JST cord for the LiPo battery pack. We’re driving the LED driver board with the full 3.7v - the Arduino has an onboard regulator stepping that down to the required 3.3v logic level.
Most of the work for this build went into the electronics. The LEDs are arranged around the brim of the hat, with the LED driver in the very center. The other electronics, including the battery and the graphic equalizer chips, are mounted on the flattest part of the forehead. Sadly, these components are bulky and cumbersome, and far too large to be mounted more subtly. Future iterations definitely have room for improvement.
There are a few issues with the current design. The most glaring is the necessity of wires from the forehead to the audio source. This is especially an issue given the length of the current 3.5mm cable - it’s only around 18 inches. It’d be great if we could make the audio signal wireless, perhaps via Bluetooth or an offboard controller.
The wires are also incredibly conspicuous. The thing looks less like a hat and more like a science experiment. Which it kind of is. We’ve also got large flat objects (the battery and the PCB) trying to conform to my distinctly curved head. For future iterations I’d like to spend considerable time embedding these objects into the hat. That might mean moving to SMD ICs, shrinking the battery, going to flexible PCBs, or a host of other design choices.
Finally, we’re performing a very simple signal analysis here. I’d imagine we have plenty of clock cycles left over on the Arduino (especially considering the inclusion of the LED driver), so there’s no excuse for not performing a little post-processing before setting the LED intensity. In particular, beat detection is pretty weak. I realize a beat is more a semantic property of an audio signal, but there are several approaches that might lead to more powerful visualizations.
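One cheap approach is energy-based beat detection: keep a running average of the bass-band level and flag a beat whenever the instantaneous level jumps well above it. A sketch, assuming we feed it the MSGEQ7’s lowest band each frame; the smoothing factor and threshold are illustrative, not tuned:

```cpp
// Simple energy-based beat detector: an exponential moving average tracks
// the typical bass-band level, and a sample well above that average is
// flagged as a beat. Constants here are illustrative starting points.
class BeatDetector {
public:
    BeatDetector(float alpha = 0.125f, float threshold = 1.5f)
        : alpha_(alpha), threshold_(threshold), average_(-1.0f) {}

    // Feed one bass-band intensity reading; returns true on a detected beat.
    bool update(float level) {
        if (average_ < 0.0f) {  // seed the average with the first sample
            average_ = level;
            return false;
        }
        bool beat = level > threshold_ * average_;
        average_ += alpha_ * (level - average_);  // exponential moving average
        return beat;
    }

private:
    float alpha_;      // smoothing factor for the moving average
    float threshold_;  // how far above average counts as a beat
    float average_;    // running estimate of the typical level
};
```

This fits comfortably in the Arduino’s spare cycles, and the detected beats could drive a flash or a color change on top of the per-band intensities.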
(Note: the second iteration attempted to address each of these problems, and succeeded only on a better visualization and a nicer hat. It still sits, un-worn and un-used, in my desk drawer.)