Spatially-Mapped Christmas Lights | ch00ftech Industries


https://www.youtube.com/watch?v=O6vv8hobSkM

Remember me?

When I was a kid, every Christmas would involve a trip to Coleman's Nursery to see all of the creepy animatronic Christmas decorations and drink hot cocoa.

That's me on the left.

Unfortunately, Coleman's closed up in 2003, but my memories of the place are still vivid.

Out back, they had a pretty terrific nativity scene surrounded by some lovely bushes shrouded in net lights.  Net lights were a fairly new thing back in ...1998?, and many people liked the simplicity of spreading a blanket of evenly spaced lights over their plants rather than stringing them by hand.

The Coleman's Nursery net lights were special though because they were animated! Thinking back on it, they must have been wired up similar to typical "theater chase" Christmas lights, with a few phases played in series so they appeared to move, but in a grid instead of a line. It was a really impressive display for a ten-year-old's eyes.

While the animated patterns were certainly engaging, there was really only one possible pattern that was hard-wired into the lights.  As a kid, I always thought it would be neat if they could display an arbitrary pattern.

This was around 20 years ago, before LED lights were a thing, and certainly before individually addressable LED lights were a thing.  I thought it'd be fun to revisit the idea and see what could be done with modern technology.

Speaking from a high level, the goal of any kind of display is to trick your brain into thinking you're viewing a real object or scene.  Over the years, display technology has developed different ways to take advantage of your optical system to present a realistic picture.

Rather than displaying a full gamut of color, modern displays just use red, green, and blue (matching up with the wavelengths your eyes detect).  Rather than showing a moving object, a display will show a series of static images, relying on your brain to piece them into smooth motion.

Taking this one step further, the human brain is good at identifying patterns.  For example, look at this shape:

You probably see a white triangle.  In fact, it's just a few Pacman shapes and some V shapes.  It turns out that you only need to hint at a shape for your brain to piece it into the full form.

The screen you're looking at right now is incapable of displaying round objects like the letter O.  It can only illuminate pixels arranged in a square grid, so any round shape is an approximation based on some clever math.  If we can represent arbitrary shapes on a square grid, what about a non-square grid? Or even a non-uniform grid?

When I left school, I started applying for jobs and eventually got a phone interview with Newark, the electronic component distributor.  It was only the night before my interview that I realized I had in fact not applied to the electronics distributor Newark, but rather the DJ equipment company Numark.  I went through with the interview anyway, and since it was the only offer I got, I took it. I've since called it my mulligan job.

Numark turned out to be the main brand under which several other brands operated.  One of them was Akai Professional, makers of the beautiful MPC Renaissance.

On the left there, you'll see 16 knobs that can be assigned to various audio processing functions.  Above each knob are 15 LEDs which are software controlled and can display information regarding the setting of each knob (gain, L/R fade, etc.).

After powering up the MPC Renaissance, it will enter "Vegas Mode" before it connects over USB.  Vegas Mode is meant to flash a bunch of lights and make the unit look sexy in the storefront of your local Guitar Center or whatever (for real, I don't know anything about music production, I just worked there).  Pay close attention to the knob LEDs in the video below:

https://www.youtube.com/watch?v=DxKqrj2fG8E

If you look carefully, you'll see that they spell out "A K A I."  To this day, this is my only real original contribution to a consumer electronic product. I was bored at work one day and figured out how to do it.  They liked it so much that they shipped it like that.  The animation alone takes up about half the firmware space.

On the off chance that nobody would believe me, I did this on my last day of work:

In software, each of the 240 LEDs is mapped to its associated knob and ordered by its location among the 15 LEDs around that knob.  That's not to say they can't be controlled arbitrarily, however, and with a little Python, they can be driven as shown above to display arbitrary images or animations.

At the time, I think I wrote a pretty clumsy program to help me manually map each of the 240 LEDs to its physical location on the device (I remember clicking a lot).  Looking back on it now, I think the process can be streamlined and even made consumer-ready for arbitrary LED arrangements.

LIKE CHRISTMAS TREES!

Animated LED lights certainly aren't new, but they usually animate relative to their order on the strand.  Your run-of-the-mill addressable LED strand will come with a controller box that has dozens of animations such as theater chases or rainbow fades, but because the controller doesn't know how the LEDs are oriented, the overall appearance will depend on how the LEDs are strung up. It ultimately only works when they're in a straight line.

The only exception to this are systems that constrain the location of every LED light, such as these new gross "Tree Dazzler" things I've seen this year:

Yay! Plastic!

The goal for this project was to make a system by which a non-technical user can randomly string LEDs through a tree or bush in the usual fashion, point a camera at them, and then see them animate in interesting ways that aren't possible with typical LED strand lights.

So the hardware for this project is pretty light. As you may have noticed, I've been a little busy lately and haven't given my blog the attention it deserves.  Still though, I think this post highlights a proof-of-concept that could be streamlined into a really slick hardware device, as I'll outline in the conclusion section.

LEDs

I've actually been sitting on this project idea for a few years now.  In fact, the LED strips for my party lights were originally ordered for this project, but I ended up making party lights because A) those flat ribbons don't string well in Christmas trees, and B) those strands illuminate 3 LEDs per segment.

I can't remember if proper twinkle-light style individually addressable RGB NeoPixel LED strands were available at the time, but I ended up ordering some two Halloweens ago to make the most played-out Halloween decoration since the smoke machine:

Yep, it said weird things.  Mostly snarky political weird things.

They're not terribly sleek, but they work. As far as I can tell, they're just tiny circuit boards soldered together and stuck in a vaguely twinkle-light shaped mold:

Anyway, NeoPixel (or WS2811) LEDs are super easy to control. Not because the LEDs themselves are great, but because there's a huge amount of support available for them from maker-type communities.  This particular strand accepts 5V, GND, and a data line and, with the help of some Arduino libraries, is able to illuminate each LED with 24 bits of RGB color.

These LEDs are meant to be daisy-chained together: they share power and ground rails and have a single-wire data line that goes from the output of one LED to the input of the next.

Unfortunately, there's a fairly considerable amount of impedance in these power busses, and once you connect two or three 50-bulb strands together, you can expect to see some voltage drop.

What's fun about this voltage drop is that it actually shows up visually in the LEDs themselves.  To produce white, you need red, green, and blue LEDs.  Green and blue LEDs typically need about 3.3V while red only needs 1.9V, so as the supply sags toward the far end of the strand, the green and blue channels drop out first and the "white" fades toward red.  When you try to display white on all 250 LEDs, you get this:

Luckily, the LED makers anticipated this problem, so each end of the 50-bulb strand has some loose 5V and GND wires you can solder to a beefier power connection.  I used some thick speaker wire to add power taps to the end of the 250-bulb strand and somewhere in the middle.

Though I still had to cap my brightness at 50% for full-white because my power supply only provides 5A.

This is where the project gets mildly complicated.  The software portion of this project can be split into three functions: LED Control, Mapping, and Display.

LED Control

The Arduino is really dumb.  It pretty much just sits as a bridge between my Python script running on a host PC and the LEDs themselves.

Here, I'm accepting 750 bytes over the serial bus (representing red, green, and blue values for 250 LEDs) and pumping them out to the NeoPixel LEDs. The NeoPixel library accepts a single 24-bit number for each LED:

void loop() {
  // Pull in three bytes per LED over serial and push the full frame to the strand.
  // "pixels" is the NeoPixel strip object declared during setup (not shown here).
  for(uint16_t i = 0; i < 250; i++) {
    while(Serial.available() < 3) {}   // wait for this LED's color bytes
    uint8_t r = Serial.read();
    uint8_t g = Serial.read();
    uint8_t b = Serial.read();
    pixels.setPixelColor(i, pixels.Color(r, g, b));
  }
  pixels.show();
}

The only other interesting thing about the Arduino code is that it had to run at 460,800 baud in order to update the LEDs at a reasonable frame rate.
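
On the host side, the later Python snippets call a printcolors() helper that isn't shown in the post.  A minimal sketch of what it could look like using pyserial (the port name and exact channel order here are assumptions) would be something like:

import serial

ser = serial.Serial("/dev/ttyUSB0", 460800)   # port name is an assumption

def printcolors(colors):
  # "colors" is a list of 250 three-value lists in whatever channel order the
  # Arduino/strand expects; pack them into 750 raw bytes and ship one frame
  frame = bytearray()
  for c in colors:
    frame += bytes(int(v) & 0xFF for v in c)
  ser.write(frame)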

Mapping

In order to make fun animations on the LEDs, we need to know the exact location of each LED.  With the MPC Renaissance, I started with a picture of the device and wrote a script that would record where I clicked on that picture.  By clicking on the LEDs in the order they were addressed in software, I essentially mapped the LED software address to their physical locations.
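
For what it's worth, a click-to-map tool like that is pretty easy to throw together with OpenCV's mouse callbacks.  This isn't the original script, just a rough sketch (the window and file names are made up):

import cv2

clicks = []

def on_click(event, x, y, flags, param):
  # record the coordinates of each left click, in the order they happen
  if event == cv2.EVENT_LBUTTONDOWN:
    clicks.append((x, y))
    print(len(clicks) - 1, (x, y))

img = cv2.imread("device_photo.png")     # made-up filename for the device photo
cv2.namedWindow("map")
cv2.setMouseCallback("map", on_click)
while True:
  cv2.imshow("map", img)
  if cv2.waitKey(20) & 0xFF == 27:       # press Esc once every LED has been clicked
    break
print(clicks)                            # clicks[n] is the location of LED address n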

We're in 2017 now though and everything is supposed to be solved with computer vision (or neural nets).

There's a great open source project called OpenCV (Open Computer Vision) which has a bunch of awesome tools for giving robots eyeballs and letting them do the boring work for you like read license plates.

As someone who is terrible at software and can only really write in C and Python, this was surprisingly not scary to set up.  Once you get all the necessary libraries installed, you can hook up a webcam and start working with images.

This little routine captures an image, converts it to greyscale, locates the brightest spot on that image, records the spot, draws a locating dot on the original image, and saves it on the hard drive:

camera_capture = get_image()                               # grab a frame from the webcam
gray = cv2.cvtColor(camera_capture, cv2.COLOR_BGR2GRAY)    # convert to greyscale
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)     # find the brightest pixel
cv2.circle(camera_capture, maxLoc, 10, (0,255,0), -1)      # mark it on the original image
file = "images/image"+str(i)+".png"
cv2.imwrite(file, camera_capture)                          # save the annotated frame

In order to map every LED to a physical location, all I needed to do is light up each LED in turn and run this routine.

Ideally, this would look like this:

But because the cv2.minMaxLoc() function grabs the absolute brightest single pixel in the image, it is extremely susceptible to noise.  I often ended up with this kind of result:

Where the LED on my power supply overpowered the target LED.

In order to improve the results, I applied a Gaussian blur with:

gray = cv2.GaussianBlur(gray, (19,19),0)

A Gaussian blur effectively averages each pixel's value with the values of the pixels around it.  Consequently, a single super bright pixel will be mellowed out while a large grouping of bright pixels will average together to produce the new brightest pixel.  Using this method, I had few errors in pixel mapping.  The X and Y pixel coordinates of each LED were stored in an array for later use.

colormap = [(26, 212), (309, 470), (304, 462),....
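
Putting those pieces together, the whole mapping pass looks roughly like the sketch below.  This isn't the post's exact script: light_single_led() is an assumed helper that sends a frame with only LED i lit, and get_image() is the webcam capture helper used above.

import cv2

colormap = []
for i in range(250):
  light_single_led(i)                    # assumed helper: only LED i on, full white
  camera_capture = get_image()           # webcam capture helper from above
  gray = cv2.cvtColor(camera_capture, cv2.COLOR_BGR2GRAY)
  gray = cv2.GaussianBlur(gray, (19,19), 0)
  (minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
  colormap.append(maxLoc)                # (x, y) of this LED's brightest blob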

What's fun about the pixel mapping is that it doesn't necessarily have to map to the location of the physical LED. It only needs to map to the brightest spot produced by the LED. I found in a lot of situations that the LEDs tucked farther into the tree had no line-of-sight to the camera, so the software grabbed a portion of the tree illuminated by the LED instead.  Because our animations will be playing back in exactly the manner they were recorded, this is fine.

Display

Once the LEDs were mapped, I was left with an array of their locations in the image frame.  Graphically, this would look something like this:

That little guy? I wouldn't worry about that little guy...

With this map, all the software needed to do is lay the map over the image:

And then sample the image's color in each location. The end result is here:

Or in Python:

file = "giftbox.png"
giftimage = cv2.imread(file, cv2.IMREAD_COLOR)     # OpenCV loads images as BGR
for i in range(len(colormap)):
  tmp = giftimage[colormap[i][1],colormap[i][0]]   # sample the pixel at (y, x)
  colors[i] = [tmp[1],tmp[2],tmp[0]]               # reorder channels for the strand
printcolors(colors)

Ta da!  As you can see, it works best with simpler images.

Animation

One thing I noticed early on is that animations work best when they're anti-aliased. Aliasing is most familiar in the context of trying to represent non-square objects on a screen with square pixels.  In the image below, the top line has been anti-aliased and looks smooth while the bottom line sticks rigidly to the pixel grid:

What I found was that when I was displaying images that did not adhere to a Christmas-tree-shaped pixel array (which is to say, anything), it was difficult to make out shapes.

This was most readily apparent when doing the scrolling text effect.  With no anti-aliasing, the LEDs went from off to full bright as the text went by. This jagged animation was disorienting and made it difficult to make out the text.

By first blurring the image or "anti-aliasing" it, I was able to make the motion more gradual, and I found that it made it a lot easier to recognize the letter shapes and "connect the dots" so to speak for the dark portions of the tree.
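
In practice that just meant blurring the source frame before sampling it, along these lines (the kernel size here is a guess, not the value from the post):

import cv2

frame = cv2.imread("scrolling_text.png")        # made-up filename for the text banner
frame = cv2.GaussianBlur(frame, (31,31), 0)     # soften edges so LEDs ramp up gradually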

Image Animations

I wrote a few scripts like one that would scroll the image from left to right:

while(1):
  # slide the sampling window 4 pixels to the right each frame, wrapping
  # around once it has scrolled across the wide source image
  if j<4490:
    j+=4
  else:
    j=0
  for i in range(len(colormap)):
    tmp = myimage[colormap[i][1],colormap[i][0]+j]
    colors[i] = [tmp[1],tmp[2],tmp[0]]
  printcolors(colors)
  time.sleep(.009)

Or one that would move the colormap around the image in a circle:

while(1):
  if j<1000:
    j+=1
  else:
    j=0
  for i in range(len(colormap)):
    # sweep every sample point along an ellipse (320 px wide, 240 px tall) so the
    # whole colormap circles around the source image once every 1000 frames
    dy = 240+int(240*math.sin(j*2*math.pi/1000))
    dx = 320+int(320*math.cos(j*2*math.pi/1000))
    tmp = myimage[colormap[i][1]+dy,colormap[i][0]+dx]
    colors[i] = [tmp[1]/2,tmp[2]/2,tmp[0]/2]
  printcolors(colors)
  time.sleep(.015)

I used this to do the color stripes animation with this image:

The time.sleep() command you see is to allow time for the previous frame to make it to the LEDs before sending the next. Poor-man's flow control.

I was even able to use the webcam itself as a source:

while(1):
  camera_capture = get_image()
  for i in range(len(colormap)):
    tmp=camera_capture[colormap[i][1],colormap[i][0]]
    colors[i] = [tmp[1],tmp[2],tmp[0]]
  printcolors(colors)

The effect wasn't that amazing though since the video source was so analog and offered little contrast. I could see major shapes on the tree if I waved my hand in front of the camera, but not much else.  It also didn't help that the camera's auto gain settings were constantly adjusting the brightness of the tree.

Doom

There's this whole thing online about getting Doom to run on things like an iPod Nano or a graphing calculator, so I thought it'd be fun to try to get Doom to run on a Christmas tree!

Obviously, the tree itself won't be running Doom, but Doom's colorful graphics, low resolution, and name recognition made it a great target for my tree.

For the video, I used Freedoom.

In order to get Doom to show up on the tree, I wrote a script that would take a 640x480 pixel portion of my computer's display (starting 500 pixels from the top and left of the screen) and use that as a video source for my tree.

while(1):
  img = ImageGrab.grab(bbox=(500,500,1140,980))    # 640x480 region of the screen
  img_np = np.array(img)
  frame = cv2.cvtColor(img_np, cv2.COLOR_BGR2RGB)
  frame = cv2.GaussianBlur(frame, (5,5),0)         # blur for the anti-aliasing effect
  for i in range(len(colormap)):
    tmp=frame[colormap[i][1],colormap[i][0]]
    colors[i] = [tmp[1]/2,tmp[2]/2,tmp[0]/2]       # half brightness
  printcolors(colors)                              # send one frame per screen grab

If you think taking screen grabs for an input video source is inefficient, you'd be right.  This script only ran at about 2fps on my top of the line 2012 iMac.  I ended up having to run it on my VR gaming PC to get a respectable frame rate.  This is probably the first time outside of gaming that that Mac has felt slow 🙁

Anyway, if anyone knows a real way to do this that doesn't involve screen grabs, let me know.

So yes, this is really just a proof of concept, but I think it has legs as a really great consumer device.

The way I see it working is with some sort of bluetooth box on the LED strand and a smartphone app.  Pair the box to the phone, string the lights, aim the phone's camera at the box, and stand back.

Then of course you could download a huge library of animations, draw on the tree with a paint application, play Tetris, whatever you want.

Didn't have time to do all that this time around though.  Maybe next year!



