I have a love-hate relationship with RGB-LED strips. Before starting my Daft Punk Helmet project last year, I was getting used to using individual 4-pin RGB-LEDs for my colored-lighting needs. Once I realized I would need 96 of these LEDs for the gold helmet (384 leads, 288 channels to multiplex), I broke down and ordered a strip of WS2812s. These are surface-mount RGB-LEDs with built-in memory and driver circuitry. With three wires at one end of the strip to provide power and data, you can easily control the color of each individual LED on the strip with an Arduino or similar controller. I was in love.
But then I started to see the dark side of easy-to-use LED strips like this. Like so many other LED strip projects floating around the internet, my ideas started to lack originality. Instead of exploring the visual capabilities of over 16 million unique colors possible per pixel, I was blinking back and forth between primary colors and letting rainbow gradients run wild. Maybe I was overwhelmed by the possibilities; maybe the low difficulty made me lazy. Either way, the quality of my projects was suffering, and I hated it.
Zero-effort xmas lights
To start this off, let's differentiate the goal of this project from the projection mapping commonly used in art and advertising these days. In standard projection mapping, one or more regular projectors are used to display a 2D image on an arbitrary 3D surface. Specialized software determines how to warp the projected images so that they appear normal on the projection surface. For my project, the pixels themselves are placed arbitrarily on an arbitrary 3D surface. This added complication is like placing a glass cup in front of the projector in standard projection mapping and expecting the warped image to still yield a comprehensible picture. In a sense, two mappings have to be done: first, find the mapping of pixels to real space; second, find the mapping of real space to source image space.
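In code terms, the whole pipeline is just the composition of those two mappings applied to every LED. The function names in this sketch are placeholders for the pieces developed later in this post:

```python
# Sketch of the two-stage mapping; led_positions() and project_to_image()
# are placeholder names fleshed out further down.
positions = led_positions()                         # mapping 1: LED index -> real space (x, y, z)
coords = project_to_image(positions, source.shape)  # mapping 2: real space -> source image (u, v)
colors = source[coords[:, 1], coords[:, 0]]         # each LED takes a color from the source image
```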
While I'm using my laptop to do the heavy lifting of computing the appropriate pixel mapping, I need a microcontroller to interface with the LED strip. I'm using my prototyping Arduino Mega to interpret commands sent by the laptop over USB and push data out to the LED strip. For my first arbitrary-geometry LED display, I wrapped about 4 meters of LEDs (243 total) around a foam roller.
Nothing says professional like duct tape and styrofoam.
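Something along these lines handles the laptop side of that link. The port name, baud rate, and framing here are simplified stand-ins for what my Arduino sketch actually expects:

```python
import numpy as np
import serial  # pyserial

NUM_LEDS = 243
# Placeholder port and baud rate; they have to match the Arduino sketch.
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=1)

def send_frame(colors):
    """Send one frame of RGB values, shape (NUM_LEDS, 3), to the Mega.

    0xFF is reserved as a frame-sync byte, so color bytes are clamped to 254.
    """
    data = np.clip(colors, 0, 254).astype(np.uint8)
    ser.write(b'\xff' + data.tobytes())
```

A matching Arduino sketch would just wait for the sync byte and shift the next 3 × 243 bytes straight out to the strip.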
The first step in the process is to find the physical coordinates of each LED in real space. Luckily, I chose a configuration that can be described mathematically. The strip wraps around a cylinder in a helix with some spacing between each successive loop. In my projection code, I slowly march up and around an imaginary cylinder, placing an LED at each multiple of the strip's LED spacing.
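In sketch form, that march looks like the following; the radius, pitch, and spacing values are stand-ins for the real roller and strip dimensions:

```python
import numpy as np

NUM_LEDS = 243
LED_SPACING = 4.0 / NUM_LEDS  # meters of strip per LED (roughly 4 m total)
RADIUS = 0.075                # cylinder radius in meters (placeholder)
PITCH = 0.03                  # vertical rise per full wrap in meters (placeholder)

def led_positions(n=NUM_LEDS, spacing=LED_SPACING, r=RADIUS, pitch=PITCH):
    """March along an ideal helix, dropping an LED every `spacing` meters of strip."""
    s = np.arange(n) * spacing                 # arc length travelled along the strip
    turn_len = np.hypot(2 * np.pi * r, pitch)  # strip length consumed per full wrap
    theta = 2 * np.pi * s / turn_len           # angle swept around the cylinder axis
    z = pitch * s / turn_len                   # height climbed so far
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

led_xyz = led_positions()  # shape (243, 3): one (x, y, z) per LED
```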
We can plot the locations of every LED to confirm this works:
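The plot itself is just a 3D scatter of the led_xyz array from the sketch above, something along these lines:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection (needed on older matplotlib)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(led_xyz[:, 0], led_xyz[:, 1], led_xyz[:, 2], s=5)
ax.set_xlabel('x (m)')
ax.set_ylabel('y (m)')
ax.set_zlabel('z (m)')
plt.show()
```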
Next, we have to project the real-space location of each LED back onto the source image we want to display. The real-space locations are three-dimensional while the source image is two-dimensional, so we need a way of 'flattening' the LEDs down. How we choose to do this depends on both the display geometry and how we want to view the display. For our cylinder example, we can start by unwrapping the LEDs:
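A sketch of that unwrap, assuming the image's horizontal axis wraps exactly once around the cylinder and its vertical axis spans the height of the helix:

```python
def unwrap_to_image(xyz, img_w, img_h):
    """Cylindrical unwrap: angle around the axis -> image column, height -> image row."""
    theta = np.arctan2(xyz[:, 1], xyz[:, 0]) % (2 * np.pi)
    z = xyz[:, 2]
    u = theta / (2 * np.pi) * (img_w - 1)
    v = (1.0 - (z - z.min()) / (z.max() - z.min())) * (img_h - 1)  # image rows count downward
    return np.column_stack([u, v]).round().astype(int)
```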
Plotting the location of each LED on the source image gives:
Left to right: source image, projected image coordinates, and the result on the display.
Here, I've chosen the LED colors based on the nearest color found in the SMPTE color bars. Displayed on the LED strip, this kind of mapping shows the source image stretched around the cylinder.
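The color lookup itself is roughly a nearest-pixel read: each LED takes the color of the source pixel closest to its projected coordinate. A minimal sketch, where bars_image stands in for the loaded test image:

```python
def sample_image(img_rgb, coords):
    """Nearest-pixel lookup: each LED gets the color of the closest source pixel."""
    u = np.clip(coords[:, 0], 0, img_rgb.shape[1] - 1)
    v = np.clip(coords[:, 1], 0, img_rgb.shape[0] - 1)
    return img_rgb[v, u]

# e.g. colors = sample_image(bars_image,
#                            unwrap_to_image(led_xyz, bars_image.shape[1], bars_image.shape[0]))
```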
From any one direction, the source image is difficult to see. Another method of projecting the LED positions onto the source image is to flatten them along a single direction perpendicular to the cylinder axis.
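In sketch form, flattening along the y axis (so a viewer looking down that axis sees the image head-on) looks like this, with RADIUS reused from the helix sketch above:

```python
def flatten_to_image(xyz, img_w, img_h, r=RADIUS):
    """Orthographic flatten along y: x maps to image columns, height maps to rows."""
    z = xyz[:, 2]
    u = (xyz[:, 0] + r) / (2 * r) * (img_w - 1)
    v = (1.0 - (z - z.min()) / (z.max() - z.min())) * (img_h - 1)
    return np.column_stack([u, v]).round().astype(int)
```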
Left to right: source image, projected image coordinates, and the result on the display.
This allows the viewer to see the entire image from one direction.
Of course, the image is also projected onto the other side of the cylinder, but it appears backwards there. With this more useful projection, we can try to watch a short video:
In this example, I'm using OpenCV to grab frames from a source video, compute the projection onto display pixels, and send the pixel values out to the Arduino Mega. Without spending too much time on optimization, I've been able to get this method to run at around 15 frames per second, so the recording above has been sped up a bit to match the source video.
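Stripped of pacing and error handling, the frame loop ties the earlier sketches together roughly like this (the video path is a placeholder):

```python
import cv2

cap = cv2.VideoCapture('source_video.mp4')  # placeholder path
ok, frame = cap.read()
coords = flatten_to_image(led_xyz, frame.shape[1], frame.shape[0])

while ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV hands back BGR frames
    send_frame(sample_image(rgb, coords))         # sample each LED's pixel and ship it to the Mega
    ok, frame = cap.read()
```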
An important thing to note in that recording is the low amount of image distortion. Even though the LEDs are wrapped around a cylinder and directed normal to its surface, the projection keeps the source image from appearing distorted when viewed from the correct angle. However, if you look at the various color bar tests I've done, you can see that vertical lines in the source image do not always appear completely vertical when projected. This is not due to any inherent flaw in the projection math, but is instead caused by improper assumptions about the LED locations. I've assumed that I wrapped the LED strip perfectly around the cylinder, so that my mathematical model of a helix matches the locations exactly. In reality, the spacing and angle of each loop around the cylinder were slightly off, introducing small discrepancies between the real positions and the programmed positions that got worse towards the bottom. To increase the accuracy of this projection exercise, these variations need to be dealt with.
I see three ways of correcting for imperfect LED placement:
- Place them perfectly. Use a caliper and a lot of patience to get sub-millimeter accuracy on the placement of each LED.
- Place them arbitrarily, but measure the positions perfectly. Use accurate measurements of each LED position instead of a mathematical model.
- Place them arbitrarily, and let the computer figure out where they are.
At this point, I've already made a decent amount of progress on letting the projection code figure out the location of each LED. I'm using a few stripped-down webcams as a stereoscopic vision system to watch for blinking LEDs and determine their positions in 3D space. I'll save the details for later, but here's a quick shot of one of the cameras mounted in a custom-printed frame:
The code for this project (including the stereoscopic vision part) can be found on my GitHub.