This fall, the crisis mapping team extended their work on flood mapping to measure the water level of Lake Tahoe over time from satellite imagery. The resulting tool will allow the Forest Service to monitor the habitat of the endangered Tahoe Yellow Cress, plan for dock locations with water access, and better control dams in small nearby lakes, along with gaining a better understanding of the Lake Tahoe ecosystem. We plan to extend this work to measure lakes worldwide, allowing scientists to better measure and understand drought and glacial melt.
The work was done by three interns from the DEVELOP program: Nolan Cate, Anton Surunis, and Chelsea Ackroyd, in collaboration with the US Forest Service at Lake Tahoe. Please enjoy the video they created explaining the project:
The Crisis Mapping team recently wrapped up our research on flood mapping.
Previously, we evaluated existing MODIS flood mapping algorithms across a diverse range of flood conditions. From that study, we learned that every existing algorithm fails on some floods, but rarely do all of the algorithms fail on the same flood. Hence, we investigated how to combine these approaches into a new algorithm that is more robust and reliable.
To do so, we turned to Adaboost, a common technique for combining multiple classifiers. Our algorithm takes in multiple “weak classifiers” generated from combinations of satellite bands and combines them with a weighting scheme. The weights for the weak classifiers are learned from non-flood data and a permanent water mask. Information from a Digital Elevation Model (DEM) is then applied in a post-processing step to further improve the generated flood map.
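As a rough illustration of the combination step, here is a minimal AdaBoost-style sketch. This is not IRG's actual implementation; the function names and the ±1 vote convention are our own. It learns one weight per weak classifier from labeled pixels, then thresholds the weighted sum of votes:

```python
import numpy as np

def adaboost_weights(weak_outputs, labels, rounds=None):
    """Learn one weight per weak classifier with a standard AdaBoost loop.

    weak_outputs: (n_classifiers, n_pixels) array of +/-1 votes
                  ("flood" = +1, "not flood" = -1).
    labels:       (n_pixels,) array of +/-1 training labels, e.g. derived
                  from a permanent water mask.
    """
    n_clf, n_pix = weak_outputs.shape
    w = np.full(n_pix, 1.0 / n_pix)          # per-pixel sample weights
    alphas = np.zeros(n_clf)
    for _ in range(rounds or n_clf):
        # pick the weak classifier with the lowest weighted error
        errs = np.array([np.sum(w[h != labels]) for h in weak_outputs])
        k = np.argmin(errs)
        err = min(max(errs[k], 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        alphas[k] += alpha
        # re-weight samples: boost the misclassified pixels
        w *= np.exp(-alpha * labels * weak_outputs[k])
        w /= w.sum()
    return alphas

def classify(weak_outputs, alphas):
    """Threshold the weighted sum of weak-classifier votes."""
    return np.sign(alphas @ weak_outputs)
```

In practice the weighted sum would be computed per pixel across whole images, and the threshold itself can be tuned rather than fixed at zero.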
In the above image, the top left segment shows three channels of the raw MODIS satellite data as an RGB image. The top center and top right segments show two different previously developed algorithms, which serve as weak classifiers for Adaboost. The bottom left shows the sum of all Adaboost’s weak classifiers with trained weights. Finally, the bottom center shows the thresholded output flood map from Adaboost, and the bottom right the results after post-processing with the DEM.
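To give a sense of what DEM-based post-processing can look like, one simple approach is to reject flood detections that sit implausibly far above the inferred water surface. The sketch below is purely illustrative and is not IRG's exact method: it assumes a median-height water-level estimate and a hypothetical `margin` parameter.

```python
import numpy as np

def dem_cleanup(flood_mask, dem, margin=2.0):
    """Drop flood detections implausibly far above the inferred water level.

    Estimates the water surface as the median DEM height of detected flood
    pixels, then rejects detections more than `margin` meters above it.
    (Illustrative post-processing step, not IRG's exact method.)
    """
    if not flood_mask.any():
        return flood_mask
    water_level = np.median(dem[flood_mask])
    return flood_mask & (dem <= water_level + margin)
```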
We’ve recently begun a new collaboration, the Crisis Mapping project, with the Google Crisis Response team. Our goal is to develop tools to aid victims in a crisis through the use of map imagery processed by Google Earth Engine.
Google Earth Engine is a new tool for parallel computation on satellite imagery, enabling data processing at enormous scales using Google’s cloud infrastructure. Earth Engine takes care of tiling, georeferencing, and varying image resolutions automatically, letting researchers focus on the core aspects of their algorithms. Many popular georeferenced datasets, such as 40 years of Landsat data, are already loaded into Earth Engine, making historical data easily accessible. Earth Engine also provides an interactive sandbox for quickly prototyping image processing algorithms, lazily computing results only for the parts of the image that are visible.
As a first step in the Crisis Mapping project, we decided to investigate flood mapping from satellite imagery. With flood maps, responders can know the extent of a flood and plan a more effective response.
We used data from MODIS (upper image, bands 1, 2, and 6 as RGB), which has the advantage of capturing an image of nearly every point on the Earth’s surface daily, making it suitable for fast responses to flooding. However, MODIS imagery is blocked by clouds, which are often present over flooded areas; in the future we plan to investigate radar-based approaches to address this shortcoming. The MODIS data is coupled with images from Landsat (lower image, same region as the MODIS image), from which we derive ground truth both for training and for evaluating the results of the algorithms on MODIS data. Landsat revisits a given location only infrequently and is thus less suitable for rapid flood mapping.
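For a concrete sense of what a simple MODIS water classifier can look like, one common family of algorithms thresholds the ratio of band 2 (near-infrared) to band 1 (red) reflectance, since water absorbs strongly in the NIR. The threshold and function name below are illustrative only; real algorithms tune such thresholds per scene.

```python
import numpy as np

# Illustrative threshold only -- real algorithms tune this per scene.
RATIO_THRESHOLD = 0.7

def ratio_water_mask(b1, b2):
    """Flag pixels as water where the NIR/red reflectance ratio is low.

    Water absorbs strongly in MODIS band 2 (NIR) but less in band 1 (red),
    so a low b2/b1 ratio is a simple indicator of standing water.
    """
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    # guard against division by zero on bad/masked pixels
    ratio = np.divide(b2, b1, out=np.full_like(b2, np.inf), where=b1 > 0)
    return ratio < RATIO_THRESHOLD
```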
We compared a number of existing flood mapping approaches in a trade study.
Our free, open source 3D modeling software for satellite imagery just had its 2.1 release. This includes a batch of bug fixes plus a few new features. Most importantly, we’ve added support for a generic satellite camera model called the RPC (Rational Polynomial Coefficient) model. RPCs are simply ratios of large polynomials that map geodetic coordinates to image coordinates, and just about every commercial satellite company ships an RPC with its imagery. This allows Ames Stereo Pipeline to process imagery from sources we haven’t previously been able to work with, such as GeoEye.
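To give a flavor of the model: an RPC maps a (latitude, longitude, height) point to an image coordinate as a ratio of two polynomials evaluated over normalized coordinates. The sketch below keeps that structure but truncates the real 20-term cubic polynomials to a short term list; the function name and coefficients are our own toy example, not ASP's implementation.

```python
import numpy as np

def rpc_project(lat, lon, h, num, den, offsets, scales):
    """Map a geodetic point to a normalized image coordinate via an RPC.

    A real RPC uses four 20-term cubic polynomials (a numerator/denominator
    pair per image axis) plus offset/scale normalization of all five
    coordinates; this sketch keeps the ratio-of-polynomials structure with
    a truncated term list.
    """
    # normalize the ground coordinates
    P = (lat - offsets[0]) / scales[0]
    L = (lon - offsets[1]) / scales[1]
    H = (h - offsets[2]) / scales[2]
    # truncated polynomial basis (real RPCs continue up to cubic terms)
    terms = np.array([1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H])
    return (num @ terms) / (den @ terms)
```

A full implementation would evaluate two such ratios (one for the image line, one for the sample) and then denormalize with the image-space offsets and scales shipped alongside the coefficients.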
The picture above is an example shaded, colorized elevation model of the city of Hobart, Australia. It was created from example stereo imagery provided on GeoEye’s website and represents a difficult stereo pair for us to process. In the northeast corner of the image is a patch of salt-and-pepper noise, which corresponds to the water of the bay that we couldn’t correlate into 3D. In the southwest are mountains covered in dense forest whose texture changes rapidly with viewing angle. Despite these problems, you can see that our software was able to extract roads and buildings to some degree. This is interesting primarily because we originally wrote the software for the bare surfaces of the Moon and Mars. We are slowly improving it to support all kinds of terrain; for now, we recommend applying ASP to imagery of bare rock, grasslands, snow, and ice for best results.
Our free, open source 3D modeling software for satellites just had its 2.0 release. Many improvements were made, including the previously mentioned Digital Globe support. Users will find the code faster and more memory-efficient than our prior release.
Binaries and source code for Ames Stereo Pipeline are available from its project page:
Our developers will be at the Planetary Data Workshop in Flagstaff, Arizona next week. We’ll be giving an overview of ASP v2 and running tutorials on how to process HiRISE and LRO-NAC imagery with the new software. For those who can’t make it, the presentations will be made available online after the conference.
For those not familiar, Ames Stereo Pipeline (ASP) is an open source toolset provided for free by IRG. We built it on top of another fantastic collection of tools, ISIS, provided by the USGS. This has allowed our group and the public to produce 3D models of planets and moons across our solar system. ASP played an essential role in the Lunar Mapping and Modeling Project, where we produced high-resolution elevation models using images from the 1970s Apollo missions.
Until recently, ASP had never been used with satellite images of the Earth. That changes in our next binary release, which adds support for stereo images captured by Digital Globe. Our source code on GitHub recently gained the ability to read camera information from Digital Globe’s XML format, which allowed us to produce our first colorized height model of the Rockies, shown at right.
NASA’s Earth Sciences directorate has graciously funded our recent developments in ASP. Our immediate hope is that these tools will allow researchers to model ice-sheet flows in both Antarctica and Greenland. Because ASP runs autonomously, scientists could get weekly ice flow updates where manual processing costs previously made that impossible, potentially giving researchers new insights into our changing climate.
Not everyone at IRG plays with robots. Some play with Eclipse and will never be understood. Others, like the Mapmakers, play with satellite imagery. IRG is happy to announce that those oddballs just completed a 3-year project to produce the “Apollo Zone” Digital Image Mosaic (DIM) and Digital Elevation Model (DEM). These maps cover approximately 18% of the Lunar surface at a resolution of 1024 pixels per degree (or about 40 m/pixel).
To preview the “Apollo Zone” maps, download the following KML file for viewing in Google Earth.
Once you open the file in Google Earth, you will have radio-button options to view both maps overlaid on Google Earth’s Moon mode. These maps have also been uploaded to the Lunar Mapping and Modeling Project (LMMP) portal and will soon be available for visualization and download via that site.
These maps are created from images taken by the Apollo Metric (Mapping) Camera, which flew aboard Apollo 15, 16, and 17 in the 1970s. Much later, ASU’s Apollo Image Archive scanned the film into high-resolution digital form, making it much easier for scientists and the public alike to explore the dataset. From there, IRG aligned 4000 of the images and then processed them for 3D data on the Pleiades supercomputer. IRG’s open source software, Vision Workbench and Ames Stereo Pipeline, performed all the work; the same tools can be used to process other datasets, such as imagery from the Lunar and Mars Reconnaissance Orbiters.
This work was funded by the Lunar Mapping and Modeling Project (LMMP). We gratefully acknowledge the support of our collaborators at NASA MSFC, NASA GSFC, JPL and USGS. Our special thanks go to Ray French and Mark Nall for their support and leadership of LMMP.