We were contacted about two weeks ago to build an “Elevator-controlled video switcher”. Here’s the initial inquiry: “we have an elevator at an event… we want to play one file when it’s going up and then a second when it’s going down…”
Of course, the first question is: Can someone push a button? No. Can we hook into the elevator controls? No. Not even a little? No. They had three ceiling-mounted video screens, and it was up to us to find a way to sense when the elevator was moving, and in which direction, and then trigger the appropriate video.
The end result looked like this:
We built a standalone video player that hooked into custom-built position sensors to tell us where the elevator was. When the elevator went up, it played one video file; when it went down, it played another; and it showed splash screens at each floor while idle. Our logic allowed us to tweak the configuration on-site, and we even added functionality on the fly.
The final effect was pretty magical: as the elevator started to move up, the first video played. Five seconds after it arrived at the second floor, the video crossfaded to a splash screen. As soon as the elevator started its trip down, the second video played, and again five seconds after arrival, the video crossfaded to the splash screen. Nobody had to remember to do anything: the elevator’s own movement was triggering the video.
More (much more) after the break.
The event had folks sitting in one session, then being moved to a second session that was one floor up. This elevator was the way they got up and then back down for the main keynote in another room on the first floor. When you got in the elevator, there was a logo on the screens. As the elevator started to rise, the logo crossfaded to a video about “Up into the wild blue yonder.” A few seconds after arriving at the second floor, the screens switched back to the logo, flipped upside-down: the elevator was a “feed through” type, so people getting on at the second floor were coming from the opposite direction. While the screens were flat, we wanted the audience to have a similar experience in both directions. The ride down had a video of a ceiling fan (which worked nicely with the natural airflow through the elevator). Overall, it was a nice effect, and it turned an otherwise boring 25-second elevator ride into a far more themed environment.
Making it Work
We walked into the event with two different plans fully implemented, as well as a few backup plans if parts of either of those failed.
The first plan (Plan A) was to use an accelerometer to measure the movement of the elevator directly, and then plot out our position as best we could. We used a first-order Kalman filter to smooth noise out of the accelerometer readings, and then we integrated them. Here’s the theory: when we integrate acceleration with respect to time, we get a velocity (a speed and a direction). When we integrate the velocity with respect to time, we get a position. We didn’t care much about position; we just needed to know whether we were going up or down, so we only integrated the acceleration once to get a velocity:
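The filter-then-integrate step can be sketched like this. This is a minimal sketch, not our production code: a one-pole low-pass filter stands in for the first-order Kalman filter, and the smoothing factor and sample period are illustrative values, not the ones we actually used.

```cpp
#include <cassert>

// Simple one-pole low-pass filter standing in for the first-order
// Kalman filter: smooths raw accelerometer readings.
struct LowPass {
    double alpha;        // smoothing factor, 0..1 (smaller = smoother)
    double state = 0.0;
    double step(double raw) {
        state += alpha * (raw - state);
        return state;
    }
};

// Integrate filtered acceleration over time to get a signed speed.
// Positive = moving up, negative = moving down.
struct VelocityEstimator {
    LowPass filter;
    double dt;           // sample period, in seconds
    double speed = 0.0;
    double update(double rawAccel) {
        speed += filter.step(rawAccel) * dt;   // v += a * dt
        return speed;
    }
};
```

Called once per accelerometer sample, `update()` keeps a running velocity estimate whose sign tells you the direction of travel.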
The vertical scale is speed, in not-real-units. The horizontal scale is time, in milliseconds. Those big spikes around 60,000 ms are road cases full of feeder cable being loaded onto the elevator. The big depression just after the 90-second mark is the elevator going up, and the big mountain just after 100 seconds is the elevator slowing to a stop. While the elevator’s movement signal was very easy to distinguish from noise, on small timescales (less than 1/4 second) big disruptions were hard to filter, and we needed to trigger the video within 1/4 second of the elevator moving. Here’s a zoomed-in section of our big noise spike:
Luckily, most noise spikes were not this large, so we thought it might work.
We ended up building a state machine that tracked the position of the elevator. It had a total of 8 states:
- At 1 (first floor)
- Trigger Up
- Between 1 and 2
- Slowing for 2
- At 2
- Trigger Down
- Between 2 and 1
- Slowing for 1
Noise spikes only happened in the At 1 and At 2 states, so the few false triggers that did occur were recoverable: the controller reset itself when it realized it had misfired.
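The eight states above can be sketched as a small transition function driven by the estimated velocity. This is only a sketch: the threshold values are illustrative, and the real controller also had the false-trigger reset logic described above.

```cpp
#include <cassert>
#include <cmath>

enum class State {
    At1, TriggerUp, Between1And2, SlowingFor2,
    At2, TriggerDown, Between2And1, SlowingFor1
};

// One tick of the tracker. `speed` is the integrated velocity
// (positive = up). kMove and kStop are illustrative thresholds.
State step(State s, double speed) {
    const double kMove = 0.2;    // magnitude that counts as "moving"
    const double kStop = 0.05;   // magnitude that counts as "stopped"
    switch (s) {
        case State::At1:          return speed >  kMove ? State::TriggerUp   : s;
        case State::TriggerUp:    return State::Between1And2;  // up video fired
        case State::Between1And2: return speed <  kMove ? State::SlowingFor2 : s;
        case State::SlowingFor2:  return std::fabs(speed) < kStop ? State::At2 : s;
        case State::At2:          return speed < -kMove ? State::TriggerDown : s;
        case State::TriggerDown:  return State::Between2And1;  // down video fired
        case State::Between2And1: return speed > -kMove ? State::SlowingFor1 : s;
        case State::SlowingFor1:  return std::fabs(speed) < kStop ? State::At1 : s;
    }
    return s;
}
```

Feeding in a ride's worth of velocity samples walks the machine from At 1, through the trigger and in-between states, to At 2.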
This worked! After several hours of testing and tweaking, it was doing pretty well. And then it missed. For no reason at all, it didn’t see us stop at the second floor, decided it was a false trigger, and got lost until we went back to the first floor. End of the world? No. But we needed this to be 100% reliable, or as close to that as we could get in the real world.
So we went to Plan B. Plan B’s first incarnation used line-follower IR reflectance sensors to see targets attached to the wall. The problem is that IR reflectance sensors have an effective range of about 1/8″. There’s no way we were going to hold a 1/8″ tolerance in a freight elevator chase with 25 people on the elevator, so we made long-range reflectance sensors that looked like this:
These are two very bright amber LEDs with a photocell in the center. We used 0603 surface mount resistors for the photocell voltage divider and the current limiters on the LEDs. On site, we wrapped the photocell in a little tube of blackwrap to keep it from seeing the LEDs directly. We then found a convenient spot on the elevator that we could mount these and point them at the wall.
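For reference, the photocell reads out as one leg of a voltage divider: the brighter the light hitting it, the lower its resistance, and the voltage at the ADC pin shifts accordingly. A quick sketch of the math (the 5 V supply and 10 kΩ fixed resistor here are illustrative assumptions, not our actual part values):

```cpp
#include <cassert>

// Voltage at the ADC tap for a photocell in the top leg of a divider:
//   Vcc --- photocell --- [ADC pin] --- Rfixed --- GND
// Bright target -> low photocell resistance -> voltage near Vcc.
double dividerVolts(double vcc, double rPhoto, double rFixed) {
    return vcc * rFixed / (rPhoto + rFixed);
}
```

With a 10 kΩ fixed resistor, a photocell swinging from roughly 1 kΩ (bright) to 100 kΩ (dark) moves the tap across most of the supply range, which is plenty of signal for a white-vs-black target.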
On the first floor, we placed targets on the wall where one sensor saw white and the other saw black. On the second floor we did the same, but we flipped the targets so the sensors would see the opposite colors. The data were clear:
When red was greater than blue by a certain amount and for a certain amount of time, we knew we were at the first floor. When blue was greater than red by a lot (that sensor worked better; photocells are not a reliable bunch), we knew we were at the second floor. Unclear data? We were somewhere in between.
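The "greater by a certain amount, for a certain amount of time" rule can be sketched as a small detector. This is a minimal sketch under assumed names; the margin and hold count are illustrative, and in practice the two sensors would want different margins (as noted, one photocell worked better than the other).

```cpp
#include <cassert>

enum class Floor { Unknown, First, Second };

// Classify floor from the two sensor traces ("red" and "blue").
// The winning trace must lead by `margin` for `holdSamples`
// consecutive readings before we commit, so brief glitches
// don't produce a false position.
struct FloorDetector {
    double margin;       // how far apart the traces must be
    int holdSamples;     // how long the condition must persist
    int count = 0;
    Floor candidate = Floor::Unknown;

    Floor update(double red, double blue) {
        Floor now = Floor::Unknown;
        if (red - blue > margin)       now = Floor::First;
        else if (blue - red > margin)  now = Floor::Second;

        if (now == candidate && now != Floor::Unknown) {
            if (++count >= holdSamples) return now;   // confirmed
        } else {
            candidate = now;    // condition changed: start over
            count = 1;
        }
        return Floor::Unknown;  // not confirmed yet = "in between"
    }
};
```

Anything that doesn't clear both the margin and the dwell time reads as "somewhere in between," exactly as described above.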
We were back to a state machine, with two known states and a few “past” states:
- At first floor (directly sensed)
- Just left first floor, need to trigger up video
- Have triggered up video.
- At second floor (directly sensed)
- Just left second floor, need to trigger down video
- Have triggered down video
We ended up adding two more states to handle the splash screens when idle at the first or second floor, but those came later. They functioned essentially the same as the “need to trigger”/“have triggered” states, but had to resist re-triggering themselves: we needed a “just arrived” state.
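The six core states can be sketched as a machine driven directly by which floor target is in view. This is a sketch under assumed names (the splash/"just arrived" states are omitted for brevity); the one-shot trigger behavior is the point: each video fires exactly once per trip.

```cpp
#include <cassert>

enum class PlayState {
    AtFirst, NeedUpTrigger, UpTriggered,
    AtSecond, NeedDownTrigger, DownTriggered
};

// One update of the playback machine. `sensedFloor` is 1 or 2 when
// a floor target is directly visible, 0 otherwise. Returns the video
// to start ('U' = up video, 'D' = down video, 0 = no change).
char advance(PlayState& s, int sensedFloor) {
    switch (s) {
        case PlayState::AtFirst:
            if (sensedFloor == 0) s = PlayState::NeedUpTrigger;
            return 0;
        case PlayState::NeedUpTrigger:
            s = PlayState::UpTriggered;
            return 'U';                     // fire the up video once
        case PlayState::UpTriggered:
            if (sensedFloor == 2) s = PlayState::AtSecond;
            return 0;
        case PlayState::AtSecond:
            if (sensedFloor == 0) s = PlayState::NeedDownTrigger;
            return 0;
        case PlayState::NeedDownTrigger:
            s = PlayState::DownTriggered;
            return 'D';                     // fire the down video once
        case PlayState::DownTriggered:
            if (sensedFloor == 1) s = PlayState::AtFirst;
            return 0;
    }
    return 0;
}
```

Because the trigger states transition away unconditionally on the next update, losing sight of a target for many samples can never fire the same video twice.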
Quite happily, this worked. The only failure we saw was when the elevator operator stopped the elevator after it started moving and then restarted it; the elevator moved much more slowly, and that caused a false trigger. That wouldn’t happen during the event, so we felt we were safe.
To actually play back the video, we used a program called Resolume. Resolume is video jockey (VJ) software, and we chose it because it could handle crossfading between arbitrary streams, and it could operate blind: full screen on the primary display, with keyboard input switching the video.
We set up Resolume to accept keys 1 through 6 to switch between streams 1 through 6 (its default mapping), and used a Pololu Wixel in HID keyboard mode. We could have used the ATmega16u2 on the Arduino for this, but we needed to be able to load code through the 16u2, and we already had the libraries to make the Wixel an HID device. The Wixel accepted digital high/low signals on six of its pins to trigger keys 1 through 6: when pin 0 went high, the Wixel pressed 1 on the keyboard; releasing the pin or driving it low caused the Wixel to release the key. We also used an onboard LED to indicate when it was pressing keys, to aid in debugging.
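The pin-to-key mapping is simple enough to sketch. This is an illustrative version, not the Wixel firmware itself: the real code emits USB HID keycodes rather than ASCII, and the "lowest pin wins" priority here is our assumption for the sketch.

```cpp
#include <cassert>

// Map six trigger inputs to the characters '1'..'6' that Resolume
// listens for. Returns 0 when no pin is high (release all keys).
// Lowest-numbered high pin wins (illustrative priority rule).
char keyForPins(const bool pins[6]) {
    for (int i = 0; i < 6; ++i)
        if (pins[i]) return static_cast<char>('1' + i);
    return 0;    // nothing asserted: release the key
}
```

The firmware's main loop just calls this on every pass and presses or releases the corresponding key whenever the result changes.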
The video output was fed into a DVI Distribution Amp to split to the three screens.
Here’s the source code, which has been cleaned up a little and commented a bit more than what we used in production. We’ve also posted the acceleration-based code for reference, but without cleaning it up. It was not used in production.
(Also: any trademarks are the property of their respective holders)