Public Lab Research note


Late-night IRCAM hack

by donblair | March 05, 2013 19:09 | #6211

Background

Craig Versek rolled into Amherst last week bearing whiskey; clearly, it was time to break out some webcams, a Raspberry Pi, and some shot glasses, and work on near-infrared image capture.

What we wanted to do

Ultimately, we'd like to get a (Linux-running) Raspberry Pi working as a 'near-infrared point-and-shoot' -- capturing both visible (VIS) and near-infrared (NIR) images and combining them into one nice composite image.

Right now, we're basically just doing our development on a laptop running Ubuntu with two webcams connected, and trying to make sure that all of the software libraries we're using are easily ported to the Raspberry Pi.

What we (sort of) did

The first item of business was to transform one of the webcams into an NIR cam. So that's what we (sort of) did. We fetched some of the webcams used in the Public Lab DIY Spectrometer -- boxy little Syba cameras -- and followed Public Lab's instructions for disassembling them.

First off: it was really quite satisfying / anxiety-producing to take a brand-new, cute little webcam:

snap!

And -- snap it in half! You might find yourself paused in the following stance for several seconds:

snap!

But after a sip or three of whiskey, you'll likely find that any squeamishness is eventually overcome. This was my experience:

snap!

Above, the half of the webcam that actually contains the circuitry + camera is on the left. So we discarded the other half (i.e., we piled it on my desk for possible later re-use, next to the aluminum foil and sodium hydroxide), and split open the half that had the goodies inside:

snap!

Continuing to follow the Public Lab instructions, we removed the lens, revealing the IR filter (which popped off nicely after being poked at with a knife for a bit):

snap!

In the above photo, you're seeing the lens on the lower left and the IR filter on the upper left. (The dangling bit coming off the circuit board is a -- what is it? A microphone? A light sensor? We didn't know.) Now that we'd unscrewed the lens, we needed to readjust the focus, which is done by screwing the lens back in just far enough to render the webcam image as in-focus as possible. This required poking the mystery dangling bit back into its original home in the case while leaving the case semi-open; not too difficult, even after 5 sips of whiskey. And this might've been a pointless exercise in any case: on reflection (and after hearing from Jeff), it seems increasingly likely that it was simply a microphone, which we could have snipped off.

Eventually, we found the best focus we could (which didn't seem to be as good as the original focus, pre-hack ...?). We didn't have any developed film on hand to block visible light, so we just snapped the webcam back together and rubber-banded it to an unmodified version of the same webcam. We then started taking snapshots with both of them simultaneously. You can see the webcams poking above my laptop screen, and a photo we took of Craig lighting a candle:

snap!

At this point the whiskey was mostly done; I went to sleep. Craig fetched some beer, and started in on coding.

The results to date of our coding efforts are here (Craig's repository) and here (my repository), based on a nice outline and overview of useful features here (Jeff Warren's repository -- see the 'Issues' tab). Jeff suggested early on that we look into using a nice Linux program called 'fswebcam' to capture images, and that's what ended up working very nicely for us. I'd stumbled upon SimpleCV as a nice set of libraries for doing image manipulation (which also seems possible to install on the Raspberry Pi). But Craig surged ahead that night, finding a way of capturing images from two webcams nearly simultaneously (non-trivial) and saving the images from each in time-stamped, appropriately labeled files.
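To give a flavor of the dual-capture approach, here's a minimal sketch (not Craig's actual code) that launches two fswebcam processes at once, so neither blocks the other, and labels each shot with a timestamp; the device paths, resolution, and filenames are assumptions:

```python
# Minimal sketch (not Craig's actual code): grab one frame from each
# webcam nearly simultaneously by launching two fswebcam processes in
# parallel, saving each shot to a time-stamped, camera-labeled file.
# Device paths, resolution, and filenames are assumptions.
import subprocess
import time

def capture_pair(resolution="640x480"):
    stamp = time.strftime("%Y%m%d-%H%M%S")
    procs = []
    for device, label in [("/dev/video0", "vis"), ("/dev/video1", "nir")]:
        # fswebcam grabs a single frame from a V4L2 device and writes a JPEG
        procs.append(subprocess.Popen(
            ["fswebcam", "-d", device, "-r", resolution,
             "--no-banner", "%s_%s.jpg" % (stamp, label)]))
    for p in procs:
        p.wait()  # block until both captures have finished

capture_pair()
```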

The basic idea here is to be able to capture pictures from both webcams simultaneously of the same thing, and then have the computer overlay the NIR image on the VIS image in a pleasing fashion. Within a few days, Craig had also figured out how to do this -- which required tweaking the SimpleCV library quite a bit. Craig's code identifies common features in the two images, and uses these features to calculate an "affine transform" (think: warping) of image A such that the features in image A match positions with the same features in image B. And: it's working really well!
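For the curious, here's a rough sketch of that register-and-overlay pipeline written against OpenCV directly, using current OpenCV names (Craig's actual code works through SimpleCV, and the filenames here are placeholders):

```python
# Sketch of feature matching + warp + overlay (not Craig's actual code).
import cv2
import numpy as np

vis = cv2.imread("vis.jpg")   # unmodified webcam
nir = cv2.imread("nir.jpg")   # IR-filter-removed webcam

# Find matching features in the two frames
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(nir, None)
kp2, des2 = orb.detectAndCompute(vis, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]  # keep best matches

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the warp that maps NIR features onto their VIS positions
# (RANSAC discards mismatched feature pairs)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(nir, H, (vis.shape[1], vis.shape[0]))

# Overlay the aligned NIR image on the VIS image
composite = cv2.addWeighted(vis, 0.5, warped, 0.5, 0)
cv2.imwrite("composite.jpg", composite)
```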

(Aside: thanks to whomever it was who fixed parts of the SimpleCV libraries, recently!)

I pulled his updated code into my repository so that I could play with it. I plugged in the webcams, and took a shot of some plants (one dead, one living) on my girlfriend's dresser. Here's the image from the unmodified, 'visible' webcam:

snap!

And here's the image from the webcam that has had its IR filter removed:

snap!

Running Craig's code generates this composite image:

snap!

Whee! You can see that the affine transform had to warp the original image by skewing it to the right a bit in order to get the identified features to match up nicely.

What's next

I guess the next to-do is going to be to find some developed film in order to block visible light to the IR webcam. Then it might get a bit trickier to identify common features in both images ...

And: we need to see how well this works on the RPi!


17 Comments

Wow, amazing! I can also send you some developed film, i have a whole roll.

Question! This sentence caught my eye!

The dangling bit coming off the circuit board is a light sensor that helps the webcam find the proper exposure levels.

WHAT!? really? I assumed it was a microphone! I've been clipping it out of all the spectrometers! If that's true, we have another way to control exposure on the spectrometers... would be very exciting. Can you confirm this?

Thanks, great writeup, don, and AMAZING to see it starting to work!!



HAH! We can absolutely not confirm the statement in that sentence. That assessment (guess) was made late at night, after much whiskey. If you've been clipping it off and things have been working fine, then our assessment is likely wrong :) In fact, I shall revise the post to include a more tentative statement about the nature of that dangling bit.

Thanks so much re: the developed film! I'll rummage about here to see if I can find any (was also thinking I might ask CVS in case they discard it regularly and wouldn't mind giving some to me) -- but if I somehow can't find any, and it's easy to mail it along, then that'd be cool!



OK, still interested if that's possible.

Re: film - you can prob. get a cheap roll of color film, pull it entirely out of its roll, roll it up, and hand it to the 1 hour photo place. That's good for some shocked looks too :-P

I was able to get film for $3.50 and get it made into negatives for $3.



Oh, cool! Yeah, let me just do that! I will make sure to mutter "citizen science -- no time to explain" as I unroll the film in-store.



Hi Guys,

I am one of the SimpleCV devs and I noticed your post. I think we can work together to fix your issues and create a better library all around. Can you either toss a post up on our help forum (http://help.simplecv.org) or file an issue on github, and we'll take care of it just as soon as we can.



Don and Craig, This is incredible progress. It's really reassuring that Craig is already doing affine registration. It boggles the mind that the necessary libraries might fit on an SD card and run on a Pi.
I assume you have each camera plugged into its own USB port? What is the trick to synchronizing the shutters? Can you override or pregame the camera's autoexposure routine so it does not contribute to unequal shutter lag? Would it be possible to connect the two cameras with a Y cable so one signal from a USB port went to both cameras?

Before your next hack session maybe you should tidy up that dresser a bit.



@warren on March 5, 2013 - 14:53.

Wow, amazing! I can also send you some developed film, i have a whole roll.

Hey, if you're giving out free film, maybe you can send me some too? You can put it in with my mobile spectrometer kit shipment -- if you have any control over that!



@cfastie on March 6, 2013 - 23:46.

It boggles the mind that the necessary libraries might fit on an SD card and run on a Pi

Heck, I got 6 gigs left on my 8 GB SD after installing Debian 6 on my RPi. Software - at least in the Linux world - is small these days compared to storage media and data files.

What is the trick to synchronizing the shutters?

Having a stationary target ;) In all honesty, we haven't thought too much about this, although we did run two fswebcam processes asynchronously so they wouldn't block each other, and that made up for most of the lag.

Can you override or pregame the camera's autoexposure routine so it does not contribute to unequal shutter lag?

Possibly; we are looking at some Python libraries (SimpleCV/OpenCV) that will give us more control than fswebcam. Unfortunately, I have been struggling with some issues in the underlying Linux drivers (v4l2) that prevent me from getting two webcams running simultaneously at high res -- the nasty "Cannot allocate memory" bug that plagues many users. But there is hope... Don's computer seems to be immune to this issue, and he is running a newer version of the software... so I guess it's about time for me to do a distribution upgrade. I'll report back on whether or not this issue still exists in Ubuntu 12.10 with v4l2 v0.8.8, since I heard Jeff was having a similar problem.

Would it be possible to connect the two cameras with a Y cable so one signal from a USB port went to both cameras?

I don't think so, because these UVC devices rely on a two-way link, unlike -- perhaps -- CHDK cameras.



Oh that's right, the photos have to come back from the cameras into the USB ports so it would be hard to share one port.

Shutter sync won't be that important for ground based plant cam uses, but aerial work usually requires pretty good sync. This note has a link to a nice timer with a big 1/100 second display which is great for documenting the sync you are getting. Unfortunately the timer is Windows only.

That's nuts that the Pi operating system inhabits only 2 GB.



Sadly, the v4l2 upgrade did not fix the USB bandwidth problem for dual webcams; this issue probably isn't a bug but really a limitation of the cheap webcam hardware combined with the drivers' expected usage case (full video streaming). The webcams have no on-board MJPEG encoder, which is one method of lowering the bandwidth requirements. The reason Don's laptop works must be that his USB ports are on separate controllers, so they are not fighting for bandwidth -- I don't think this is the case with the RPi's two USB ports. Short of finding (or writing!) a new driver that allows low-FPS, high-res capture, we are left with a semi-satisfactory work-around: force the stream data structure to unload after taking the capture, but before loading the other camera's stream; unfortunately, this introduces a ~0.5 second lag on my fast computer, which may be worse on the RPi. (Since this is one program, the timing can be done in situ by requesting timestamps from the OS [easier than it sounds].) The lag would only be about 0.005 seconds or less if we didn't have to reload the stream every time. I am going to muck around a bit more with the low-level stuff and see if we can't do better.
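In OpenCV terms, the work-around looks roughly like this (a sketch under assumed device indices, not the actual code):

```python
# Sketch of the work-around: open one camera, grab a frame, release it
# (unloading the stream) before opening the other, and timestamp each
# capture from the OS clock to measure the lag. Device indices assumed.
import time
import cv2

def capture_sequential(devices=(0, 1)):
    frames, stamps = [], []
    for dev in devices:
        cap = cv2.VideoCapture(dev)
        ok, frame = cap.read()
        stamps.append(time.time())   # OS timestamp right after capture
        cap.release()                # free USB bandwidth for the next camera
        frames.append(frame if ok else None)
    print("inter-capture lag: %.3f s" % (stamps[1] - stamps[0]))
    return frames

vis_frame, nir_frame = capture_sequential()
```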



For those of you who really want to know the details of the USB bandwidth problem, the UVC driver site has this FAQ item.



Wow, it sounds like you are operating right on the edge of the system's capabilities. That problem is related to streaming video, and for the IRcam you need only transfer a single frame from each camera. But I guess the problem is that with little or no RAM for buffering, the cameras must transfer the photo as soon as it is captured. And for synchronous photos, that means two full frames trying to get through one USB controller at the same time and maxing out the limit. Is one option an additional USB controller, or would that be more trouble and expense than just using two Pis? An upgrade to the cameras might help if the new ones had more RAM or MJPEG compression, but that would probably cost more than a second Pi.

Maybe the land-based Plant Cam can have asynchronous capture and a more expensive option is needed for aerial work.



Chris, your comments are right on the "money" (pun intended). It looks like the best we can do with these budget cams, short of writing a new driver (if that's even possible given the hardware constraints), is to have two modes: high res (1280x1024) / high lag (~0.5 sec), and low res (320x240) / low lag (~0.005 sec).

Is one option an additional USB controller, or would that be more trouble and expense than just using two Pis?

I'm not sure if that is possible and cheaper, but maybe we could do one USB camera and one ethernet camera, since on the Model B those are separate hardware controllers. Two RPis are also a realistic option, though synchronization would be tricky, and the money might be better spent on fancier cameras.

Maybe the land-based Plant Cam can have asychronous capture and a more expensive option is needed for aerial work.

I'm mostly in agreement, but I'm going to toss out a hypothetical to the experts: For the aerial mapping application, maybe the lag isn't as bad as it seems -- remember that the scene isn't moving, only the perspective is. If we take enough shots to get coverage, then maybe the images can be affine-aligned, then stitched into two sheets, IR and VIS, then overlaid. Another problem might be motion blur, but as you know, this could be algorithmically corrected too. We could call it an "IR/VIS point and shoot (and stitch) camera".



You're right that for aerial work a 0.5 second lag is not terminal. The whole rig will move some, but the photos from the two cameras will still overlap a lot. They can be registered for making NRG or NDVI, but the overlap will sometimes be only 60-80%, so the cropped composite will generally be much smaller than a full frame. So more photo pairs will be needed to get complete coverage of the scene, more photo pairs will have to be registered, normalized, and combined into multispectral composites, and there will be more composite images to stitch together into multispectral maps, which will have more stitching seams and artifacts. But it will still work.

In fact, it provides some interesting information. If the lag is constant, photo pairs that do not overlap a lot will have been taken while the rig was moving a lot. Those photos will have more motion blur and probably be less vertical than photos from pairs that overlap a lot. So a solution is to take a lot of pairs and throw out the pairs that overlap less than x. The result will be better quality than your average pair of perfectly synchronous kite photos.

Stitching all the VIS into one map and all the NIR into another and then rectifying the maps is a non-starter. The rectification will be lousy. And the stitching process is manual, so you only want to do it once. Or three times if you want VIS, NRG, and NDVI. Or once if somebody figures out how to use the information in the first manually stitched map to create the other two. You would have to record all of the manual placements and affine transforms for each image. No problemo, Baby.



Reposting a comment that somehow slipped through the cracks:

Comment by kscottz: Hi Guys, I am one of the SimpleCV devs and I noticed your post. I think we can work together to fix your issues and create a better library all around. Can you either toss a post up on our help forum (http://help.simplecv.org) or file an issue on github, and we'll take care of it just as soon as we can.

Hi kscottz, we are trying to do image registration (affine alignment) with shots taken from slightly different viewpoints. I believe the issue was already raised a while ago in this thread on your help forum. In the reply by "Rishi Mukherjee", he gives a link to a patch he wrote that changes the homography matrix of the method ImageClass.Image.findKeypointMatch in this github pull request. I cannot vouch for the sanity of this patch, but I can confirm that the homography matrix it exports is compatible with the function cv2.warpPerspective, whereas the method in the latest pip-installed SimpleCV (v1.3) exports a homography matrix that gives an incorrect/incompatible transformation that is rotated and somewhat distorted. I don't know that much about image manipulation, but maybe the problem has something to do with the different conventions used?



Follow the technical Python code discussion here: https://github.com/ingenuitas/SimpleCV/issues/320



In pursuing the image registration problem, some more bugs were found in SimpleCV that cause some colorspace image formats to get accidentally inverted during geometrical transformations. (Really the fault occurs because of row-major vs. column-major array formatting conventions -- a tired old example of computational conventions that don't "really matter" but that no one can agree on!)
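As a tiny illustration of the kind of clash involved (a generic example, not the actual SimpleCV bug):

```python
# Generic illustration (not the actual SimpleCV bug): NumPy indexes image
# arrays as [row, col], i.e. (y, x), while OpenCV's geometry functions take
# sizes as (width, height) and points as (x, y). Swapping the conventions
# silently transposes an image's dimensions.
import numpy as np
import cv2

img = np.zeros((480, 640, 3), dtype=np.uint8)  # 480 rows (height), 640 cols (width)
h, w = img.shape[:2]                           # shape is (height, width, channels)

ok    = cv2.resize(img, (w, h))  # OpenCV wants (width, height): dims unchanged
mixed = cv2.resize(img, (h, w))  # swapped: output has 640 rows, 480 cols
print(ok.shape, mixed.shape)     # (480, 640, 3) vs (640, 480, 3)
```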

Don't get me wrong, I am glad that the SimpleCV project exists, and I will continue using it as long as the bug fixes keep coming. It's because of these silly format incompatibilities and the awful legacy of C/C++ syntax that the Python OpenCV library is not already simple!

For technical code discussion these issues can be tracked at: https://github.com/ingenuitas/SimpleCV/issues/339


