Public Lab Research note


Video tutorial: Creating false-color NDVI with aerial wetlands imagery

by warren | October 27, 2011 18:52 | #523

Learn to use the open-source GIMP application to create a Normalized Difference Vegetation Index image from infrared and visible-light aerial photographs. Also explore false-color techniques for presenting the data.

Photoshop PSD (170 MB)

Files Size Uploaded
barataria-ndvi.jpg 1.74 MB 2011-11-04 14:35:13 +0000
barataria-ndvi-falsecolor.jpg 2.58 MB 2011-11-04 14:35:26 +0000





18 Comments

For Warren: Where can I get two starter images like the ones you used here, so that I can learn to do the NDVI transformation in GIMP? http://publiclab.org/notes/warren/10-27-2011/video-tutorial-creating-false-color-ndvi-aerial-wetlands-imagery

Marlin



Hi, you can do this online, actually, at http://infragram.org, if you use a single image. But for some samples, go back in the history for research notes tagged "NDVI" and you should find some examples: http://publiclab.org/tag/ndvi especially early on!



@warren, I was wondering if it would be possible to use a variation of this technique to combine a series of images taken over the course of a growing season, in order to highlight differences in the rate of photosynthesis in an agricultural field. If so, do you think that would be complicated, or would it just involve adding more images and tweaking what you show in the video in some way? Thanks!



Hmm, you should check out this video as an example -- is this close to the technique you're looking for, albeit of a different subject? https://publiclab.org/notes/cfastie/06-01-2013/bee-movie



@warren, Thanks for the response. That's interesting, but not exactly what I am looking for. I am actually interested in trying to identify subsurface archaeological features that become visible in crop growth (particularly during droughts) due to past human activity that is no longer visible on the surface (such as the construction of trenches or mounds that have since been leveled out). My thought is that if these features are visible when the crops are stressed (during drought), it might also be possible to detect them during non-drought seasons by highlighting differences in the rate of photosynthesis (or vigor of plant growth) over the course of a season. So, I think it would be a matter of taking a series of identical images (weekly, perhaps) and then overlaying/processing them in some way that would highlight the differences in rate of photosynthesis. For example, crops might grow more vigorously in one area because it contains a decayed post hole with organic matter that retains water, while another area might grow less vigorously because the soil was compressed by the construction of a mound ... even if, perhaps, all the growth evens out by the end of the season. Do you think this type of analysis seems possible, or is it not really viable?



Hi Steven,

This type of analysis should be viable. I have not seen examples of pure NIR photos providing more information than good normal photos of crop marks. But an index like NDVI should be able to discern more subtle differences in plant stress. You might have to exploit the red edge to see subtle changes in the reflectance of red and NIR that happen as plants become stressed.
reflectance2.jpg
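For reference, the index itself is computed per pixel as NDVI = (NIR - Red) / (NIR + Red), giving values between -1 and 1. A minimal sketch in Python with NumPy (the sample pixel values below are illustrative, not from real imagery):

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    Healthy vegetation reflects strongly in NIR, so it scores high."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Only divide where the denominator is nonzero (dark pixels stay 0)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Made-up 2x2 intensity arrays standing in for the NIR and red bands
nir_band = np.array([[200.0, 180.0], [50.0, 0.0]])
red_band = np.array([[40.0, 60.0], [50.0, 0.0]])
print(ndvi(nir_band, red_band))
```

A tool like the Fiji plugin mentioned below in the thread applies this same arithmetic to every pixel pair, then maps the result to a color ramp.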

So you might need two cameras each with rather narrow bands: one red and one NIR. I took photos with such cameras last week and will try to post a research note this week. There were crops (but no crop marks) in the photos.

Chris



Thanks @cfastie ... So, are you saying this would be a matter of using one camera with the blue filter and one camera with the Rosco #19 "Fire" filter ... and then overlaying and processing those images together, similar to what is described in this video? Or do you mean trying the blue and trying the red and comparing the results to see which works best?

One particular aspect of this that I am interested in is whether it might be possible to think of this in a different way ... not just trying to get the best individual image at one point in time, but rather taking a series of images over time and processing them in a way that might bring smaller changes to the forefront by building upon subtle changes in the images from one picture to the next in a series. I'm not sure if that is possible, but would be interested in playing with the idea if it might be viable.



There are a few options for getting both a visible light image and an NIR image of the same scene. The DIY approaches vary in their ability to distinguish healthy from stressed plants. To maximize your chances of distinguishing healthy from slightly stressed plants, capturing a pure NIR image and a pure red image might be required. The most straightforward way to do that is to use two cameras, each with a narrow band filter (e.g., one for red between 600 and 700 nm, and one for NIR between 740 and 900 nm). Or one camera can be used and the filters switched between photos. These filters can be expensive and might require long exposures. These limitations drive most users to sacrifice analytical power, especially when trying to produce aerial NDVI images.

DIY single camera systems can simulate NDVI, but it is not possible to simultaneously capture both red and NIR images that are not cross contaminated (e.g., more here). The ability to detect small differences in plant stress might be compromised by this shortcoming. So if you don't need to fly the camera, a two camera system is preferable.

Producing a time series of images of your site could enhance your ability to detect patterns. One reason is that the patterns you are looking for may be invisible to the eye, so you only know what can be revealed after you have taken the photos. If you knew when the NDVI pattern was most obvious, just one pair of photos would suffice. I am not sure about other benefits of a time lapse series of images, but it would certainly be cool to watch a video and see the patterns appear. This would require a fixed photo point and dedicated attention to the project.



Yes, a long exposure wouldn't be practical because of the instability of the platform the camera would be on (cost is also a limiting factor). My thought was to mount the camera(s) on a quadcopter with a GPS navigation unit so I can return to (close to) the same position each time to photograph. My hope was to use a Raspberry Pi with a NoIR camera module to capture the pictures. This setup would be light enough that I could fly two cameras on the quadcopter if needed. From what you are saying, though, it sounds like such a setup may not be of sufficient quality to be helpful. As far as taking a series of photos over the growing season, I suppose, if nothing else, doing so might help identify when the NDVI pattern was most obvious. My plan was to start with a site that has known subsurface features, some of which have already been mapped, so I can compare that with the NDVI results. I agree that a time-lapse video would be a cool outcome, though it might not be possible with the setup I envision.



Can you operate two Pi cameras from one Pi? If so, you should be able to trigger the shutters simultaneously, which is important. The Pi NoIR could have an inexpensive filter like a Wratten 87, which passes only NIR. That would capture a pure NIR image, albeit broad band (roughly 750-950 nm). A normal Pi cam could have a red filter like a Wratten 25A, which passes all red and also NIR. But the camera's IR block filter would block most NIR, so only red would be captured. This might produce a fairly pure red image, albeit broad band (600-700 nm). You could also just use the red channel of the unmodified camera, but that will be contaminated with green and blue to some unknown degree. This approach would be inexpensive (the filters are cheap) and could produce good results.

To process the photo pairs, Ned Horning's plugin for Fiji is all you need. It will take two directories full of VIS and NIR photo pairs and make NDVI images from them. It can also automate calibration of the NDVI images if you place targets of known VIS and NIR reflectance in each photo. That will be required if you want to make quantitative comparisons among NDVI images made at different times, or make a time lapse video.
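The batch step that such a plugin automates can be sketched as a simple pairing loop. This is not the plugin's actual code, just an illustration of matching VIS/NIR photo pairs by filename across two hypothetical directories (`vis/` and `nir/`):

```python
from pathlib import Path

def pair_images(vis_dir, nir_dir):
    """Match VIS/NIR photo pairs by filename across two directories.
    Returns a sorted list of (vis_path, nir_path) tuples; files present
    in only one directory are skipped."""
    vis = {p.name: p for p in Path(vis_dir).iterdir() if p.is_file()}
    nir = {p.name: p for p in Path(nir_dir).iterdir() if p.is_file()}
    common = sorted(vis.keys() & nir.keys())
    return [(vis[name], nir[name]) for name in common]
```

Each returned pair could then be opened and fed to a per-pixel NDVI computation; calibration against reflectance targets would be a separate step layered on top.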



Thanks, that's helpful. Yes, it is possible to operate two Pi cameras from one Pi (though I'm still sorting through the debate/speculation about the best way to do that), so simultaneous triggers should be possible. The NoIR camera comes with a Roscolux #2007 Storaro Blue gel; I'm not sure how that compares to the Wratten 87, but any gel can be installed. So, if I understand correctly, your suggestion is to use a standard Pi cam but add a Wratten 25A filter (while leaving the pre-installed IR filter in place) and also use a NoIR camera with a Wratten 87 ... then take a series of simultaneous pictures and process them using Ned Horning's plugin. Once I have the calibrated NDVI images, I can then make a time lapse or process them in some way to make a quantitative comparison (perhaps playing around with the techniques described in the video to see what might work ... or maybe overlaying the images in GIMP with partial opacity?). I'm not sure if there would be a systematic approach to amplifying the small differences in color that (hopefully) would accumulate in each image in the series, or if it would just be a matter of trial and error in GIMP. Do you know if there is a reference somewhere where I can learn more about calibrating using targets of known VIS and NIR reflectance? Thanks again.



If you use the Fiji plugin to make NDVI, you will not need the techniques in the video above. The plugin does the comparison of NIR and VIS for each pixel and makes a new image based on NDVI (or other index) values. That is the step which can reveal patterns of plant stress. A few parameters can be varied to fine tune the computation of index values.

The NDVI image produced by the plugin will not be calibrated unless you follow a separate workflow which starts with including targets in each photo (or some photos). Ned is still working on this workflow. At this point there are not standard targets or procedures.

Comparing images made at different times could reveal temporal patterns of plant stress, but the individual NDVI images will have the most power to reveal spatial patterns caused by substrate differences. The trick is to get good NDVI images, and that might require carefully selected filters and proper exposure of each photo.

The #2007 filter is intended for producing NDVI images with a single Pi NoIR camera. The red channel will capture mostly NIR (with some visible) and the blue channel will capture mostly blue (with some NIR and other VIS). Using those two channels to compute NDVI sort of works, but I have yet to see Mobius NDVI images that have more information about plant health than a normal color photo. That will require more care.
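To make the cross-contamination point concrete: the single-camera #2007 approach computes NDVI from two channels of the same photo, using the red channel as a stand-in for NIR and the blue channel as a stand-in for visible light. A hedged sketch, with made-up pixel values:

```python
import numpy as np

def single_camera_ndvi(rgb):
    """NDVI facsimile from one blue-filtered Pi NoIR image.
    The red channel is mostly NIR and the blue channel is mostly
    visible light, so NDVI ~ (R - B) / (R + B). Cross-contamination
    between channels limits how quantitative this can be."""
    nir = rgb[..., 0].astype(float)   # red channel: mostly NIR
    vis = rgb[..., 2].astype(float)   # blue channel: mostly blue/VIS
    denom = nir + vis
    out = np.zeros_like(denom)
    np.divide(nir - vis, denom, out=out, where=denom != 0)
    return out

# Illustrative 1x2 RGB array: a "vegetation" pixel and a "bare" pixel
img = np.array([[[180, 90, 60], [80, 70, 120]]])
print(single_camera_ndvi(img))
```

Because each channel also picks up some of the other band, the contrast between healthy and stressed plants is weaker than with the two-camera, narrow-band setup described above.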



Ok, so if I understand you correctly: using a standard Pi cam with its IR block filter left in place and a Wratten 25A filter added, while taking simultaneous pictures with a NoIR camera carrying a Wratten 87 filter (and then processing the pairs using the Fiji plugin), would probably be worth a try?



Correct. If the Wratten 25A camera requires exposures that are too long, you can just shoot normal photos and use the red channel for VIS (so don't install the Wratten 25A, just put it in front of the lens). Flying during the brightest part of the brightest days will help a lot.

You might also want to shoot with the Wratten 87 in front of the lens of the Pi NoIR, so you keep your options open. For example, if you put the Wratten 25A in front of the Pi NoIR, the red channel will be mostly red light and the blue channel will be mostly NIR. So that one camera can provide a facsimile of NDVI.

If you fly both cameras (Pi Cam with no added filter and Pi NoIR with Wratten 87) you get four channels (R,G, B, and NIR). That makes many indices and false color IR images (NRG) possible. The VIS bands are not very pure colors, but it doesn't matter so much for false color IR.
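The NRG false-color composite mentioned here is just a channel remapping: NIR is displayed as red, red as green, and green as blue, which makes healthy vegetation glow red. A sketch assuming the four channels have already been extracted as co-registered single-channel arrays:

```python
import numpy as np

def false_color_nrg(nir, red, green):
    """Assemble a false-color infrared (NRG) composite from co-registered
    single-channel arrays: NIR -> red, red -> green, green -> blue."""
    return np.stack([nir, red, green], axis=-1)

# Illustrative 2x2 channels from the hypothetical two-camera rig
nir = np.full((2, 2), 220)
red = np.full((2, 2), 60)
green = np.full((2, 2), 90)
composite = false_color_nrg(nir, red, green)  # shape (2, 2, 3)
```

In practice the two photos would need to be aligned first, since even simultaneously triggered cameras see the scene from slightly different positions.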



Is it possible to operate/connect two Pi cameras simultaneously with one Pi?



You may want to connect with @John_Wells and the West Lothian Archaeological Society about filters for identifying underground features. They've been using low-cost IR imaging for that purpose for a number of years. http://www.armadale.org.uk/phototech02.htm http://www.armadale.org.uk/phototech05.htm



Hi Warren, I followed your tutorial. Here are the VIS and NIR images that I captured:

VIS Image

rgb.JPG

NIR Image

nir.JPG

This is the output generated with GIMP:

output.jpg

This is my first time making an NDVI image, and I am not sure if the image I made is any good. Could you offer me guidance or advice? Thank you for the amazing tutorial.


Hi @Rick88, your image looks OK - but your infrared image may benefit from being white balanced. There are some good resources here at #white-balance.

That said, the easiest way to know if your basic technique is working well is to look at an area of very consistent vegetation - grass, for example - and see if you can see variation in the image then. The leaves you're looking at are in various lighting situations and angles, so it's a little harder to pull out useful information. Hope that helps!



