This fall, the crisis mapping team extended its work on flood mapping to measuring the water level of Lake Tahoe over time from satellite imagery. The resulting tool will allow the Forest Service to monitor the habitat of the endangered Tahoe Yellow Cress, plan dock locations with water access, better control dams in small nearby lakes, and gain a better understanding of the Lake Tahoe ecosystem. We plan to extend this work to measure lake levels worldwide, allowing scientists to better measure and understand drought and glacial melt.
The work was done by three interns from the DEVELOP program: Nolan Cate, Anton Surunis, and Chelsea Ackroyd, in collaboration with the US Forest Service at Lake Tahoe. Please enjoy the video they created explaining the project:
The Crisis Mapping team recently wrapped up our research on flood mapping.
Previously, we evaluated existing MODIS flood mapping algorithms across a diverse range of flood conditions. From this study, we realized that each of the existing algorithms fails for some floods, but rarely do all of the algorithms fail for the same flood. Hence, we investigated how to combine these approaches to create a new algorithm that is more robust and reliable.
To do so, we turned to Adaboost, a common technique for combining multiple classifiers. Our algorithm takes in multiple “weak classifiers” generated from combinations of satellite bands and combines them with a weighting scheme. The weights for the weak classifiers are learned from non-flood data and a permanent water mask. Information from a Digital Elevation Model (DEM) is then applied in a post-processing step to further improve the generated flood map.
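Our actual implementation runs on Earth Engine imagery, but the core weighting scheme can be sketched in plain Python. In this illustrative sketch (the function names and toy classifiers are ours, not the production code), each weak classifier votes +1 (flood) or −1 (not flood), and AdaBoost learns a weight for each classifier from labeled training samples:

```python
import math

def adaboost_train(weak_classifiers, samples, labels, rounds):
    """Learn weights (alphas) for weak classifiers, AdaBoost-style.

    weak_classifiers: functions mapping a sample to +1 (flood) or -1.
    samples: training inputs; labels: +1/-1 ground truth.
    Illustrative sketch only, not our Earth Engine implementation.
    """
    n = len(samples)
    w = [1.0 / n] * n  # start with uniform sample weights
    alphas = []
    for _ in range(rounds):
        # Pick the weak classifier with the lowest weighted error.
        best_err, best_h = None, None
        for h in weak_classifiers:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if best_err is None or err < best_err:
                best_err, best_h = err, h
        err = max(min(best_err, 1 - 1e-10), 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # classifier weight
        alphas.append((alpha, best_h))
        # Re-weight samples: boost the ones this classifier got wrong.
        w = [wi * math.exp(-alpha * y * best_h(x))
             for wi, x, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return alphas

def adaboost_classify(alphas, x):
    """Threshold the weighted sum of weak-classifier votes."""
    score = sum(a * h(x) for a, h in alphas)
    return 1 if score >= 0 else -1
```

In our setting, the weak classifiers would be thresholded band combinations and the thresholded weighted sum is what produces the output flood map.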
In the above image, the top left segment shows three channels of the raw MODIS satellite data as an RGB image. The top center and top right segments show two different previously developed algorithms, which serve as weak classifiers for Adaboost. The bottom left shows the sum of all Adaboost’s weak classifiers with trained weights. Finally, the bottom center shows the thresholded output flood map from Adaboost, and the bottom right shows the results after post-processing with the DEM.
Short story: today went very well. We got in, set up, and should be on track for rover checkout and connectivity testing back to Ames tomorrow.
We plan to use the morning for rover/systems checkout and driving around the area a bit without instruments installed (to protect instruments and reduce operational concerns to focus on driving tests), then install instruments and possibly the source after lunch. We can test comms between the field and Ames in parallel with the rover checkout.
The team got into the staging area without too much trouble. We met at the Zzyzx Road turnoff, unhitched the trailer, and most of the team drove down the access road to survey the route and set up some infrastructure to cross the gullies, leaving the volunteer (yours truly) to watch the trailer and box truck. When they returned, we re-hitched and headed in with the trailer and box truck. The boards they used to fill in the worst ditches worked out very well.
Setup proceeded well. By the end of the day, we think we have the generators, Tropos, GPS, and MROC ops trailer set up to a workable configuration, with tables, chairs, most consoles, and power and data wired throughout. We managed to do a quick connectivity check between the field trailer and the robot (KRex), as well as a connectivity check between one voice terminal and the server at Ames. There was no voice check; we only confirmed that the client connected to the server. We shut down and packed up on time to depart the field site right at 6:00 pm. The sun was going down but the light was sufficient for driving out, so sticking with that end-of-day time should work out.
We will also need to rework the Tropos antenna and GPS base station setup tomorrow. We didn’t get them to a good enough state for overnight winds (which we were told can reach 50 mph / 80 km/h, though it didn’t seem that bad last night).
We’ve recently begun a new collaboration, the Crisis Mapping project, with the Google Crisis Response team. Our goal is to develop tools to aid victims in a crisis through the use of map imagery processed by Google Earth Engine.
Google Earth Engine is a new tool for parallel computation on satellite imagery, enabling data processing on enormous scales using Google’s cloud infrastructure. Earth Engine takes care of tiling, georeferencing, and varying image resolutions automatically, letting researchers focus on the core aspects of their algorithms. Many popular georeferenced datasets are already loaded into Earth Engine for quick and easy use, such as 40 years of Landsat data, making historical data easily accessible. Earth Engine also provides an interactive sandbox for quickly prototyping image processing algorithms, and lazily computes results only for the parts of the image that are visible.
As a first step in the Crisis Mapping project, we decided to investigate flood mapping from satellite imagery. With flood maps, responders can know the extent of a flood and plan a more effective response.
We used data from MODIS (upper image, bands 1, 2 and 6 as RGB), which has the advantage of capturing an image of nearly every point on the Earth’s surface daily, making it suitable for fast responses to flooding. However, MODIS imagery is blocked by clouds, which are often present in flooded areas. In the future we plan to investigate radar-based approaches to address this shortcoming. The MODIS data is coupled with images from Landsat (lower image, same region as the MODIS image), from which we derive ground truth data both for training and to evaluate the results of the algorithms on MODIS data. Landsat data is available infrequently and is thus less suitable for flood mapping.
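As a concrete (and deliberately simplified) example of how a MODIS band combination can indicate water: water strongly absorbs near-infrared light, so a low band 2 (NIR) to band 1 (red) ratio suggests a water pixel. The function and the 0.7 threshold below are illustrative placeholders of ours, not the thresholds of any particular published algorithm:

```python
def ratio_water_classifier(b1_red, b2_nir, threshold=0.7):
    """Vote +1 (water) when the NIR/red ratio falls below a threshold.

    Water absorbs NIR strongly, so reflectance in band 2 drops over
    water relative to band 1. The 0.7 threshold is a made-up
    placeholder for illustration.
    """
    return 1 if (b2_nir / b1_red) < threshold else -1
```

Classifiers of this form, built from different band combinations, are the kind of “weak classifier” that an ensemble method can then combine.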
We compared a number of existing flood mapping approaches in a trade study.
If you’ve been following this blog, you know that a big project for IRG this past year has been the Human Exploration Technology Surface Telerobotics Project. The project culminated this summer in three on-orbit test sessions, when we got to see our work used in space. The first session took place on June 17 with Chris Cassidy, the second session on July 26 with Luca Parmitano, and the third session on August 20 with Karen Nyberg. The three crew members successfully scouted for, deployed, and inspected a simulated radio telescope by remotely commanding our K10 rover at the IRG Roverscape. Here is some of the media coverage of our tests:
Last week IRG’s Surface Telerobotics Project conducted its first Operational Readiness Test (ORT). In the Surface Telerobotics Project, an astronaut on the International Space Station will control the K10 rover on Earth to simulate deploying a radio telescope. This experiment will be the first time that an astronaut in space controls a full-scale rover on the ground in real time. The experiment is scheduled for July and August 2013, and before then we have three ORTs scheduled as practice runs to make sure everything goes smoothly.
Here’s a video from the first ORT. We are operating on IRG’s new Roverscape, a 2-acre area at Ames that is specially designed for our rovers, with hills, craters, and a flat area. Most of our team is working inside the Roverscape building, which is so new that it was still being constructed as we were testing.
K10 Red was the rover of the day. For the real experiment, K10 Black (Red’s twin) will be ready to swap in if Red breaks. Near the end of the video, you’ll see K10 Red deploy a roll of Kapton film that simulates an arm of a radio antenna. On the moon, the film would stay in place, but the wind on Earth requires that we put weights on the film to keep it from blowing away.
Last week Adrian and I (Vytas SunSpiral) presented our work on “Super Ball Bot,” a tensegrity robot for planetary landing and exploration, at the NIAC (NASA Innovative Advanced Concepts) Program’s Spring Symposium. It was really fun to share all the progress we have made in the mission concept development and engineering analysis. The best aspect of this is that our work is supporting our initial intuition that this concept is workable and not as crazy as it initially sounded. Luckily for us, the NIAC program is designed to try out these high-risk, high-payoff concepts for new space exploration technologies. Thus, when the BBC interviewed us, we took it as a good sign that they called us “NASA’s crazy robot lab.” Balancing that view, Tech Buzzer called us “Not actually crazy. But certainly innovative and ambitious.” And while the Tech Buzzer article has many factual errors, they are right about the innovation and ambition — we are developing an idea that has never been tried before, and if it works (which we think it will — with a lot more hard work), then it could change the future of robotics and space exploration.
Watch the video below to find out more, and see my earlier post where I first described the project when we started (much has evolved since then!).
Our free, open source 3D modeling software for satellite imagery just had its 2.1 release. This release includes a number of bug fixes plus a few new features. Most importantly, we’ve added support for a generic satellite camera model called the RPC model. RPCs are just big polynomials that map geodetic coordinates to image coordinates, and nearly every commercial satellite company ships an RPC with its imagery. This allows Ames Stereo Pipeline to process imagery from sources we haven’t previously been able to work with, such as GeoEye.
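To give a feel for what an RPC model is, the sketch below evaluates one image coordinate as a ratio of two cubic polynomials in (latitude, longitude, height). This is a simplification of ours for illustration: a real RPC file (e.g. the RPC00B format) fixes a specific ordering for its 20 polynomial terms and supplies offset/scale values to normalize the inputs and outputs, which this sketch assumes have already been applied.

```python
from itertools import product

def cubic_terms(lat, lon, h):
    """All monomials lat^i * lon^j * h^k with i + j + k <= 3.

    There are exactly 20 such terms, matching the 20 coefficients of a
    standard RPC polynomial, though real RPC files fix a specific term
    ordering that this generic enumeration does not reproduce.
    """
    return [lat**i * lon**j * h**k
            for i, j, k in product(range(4), repeat=3) if i + j + k <= 3]

def rpc_project(num_coeffs, den_coeffs, lat, lon, h):
    """Evaluate one image coordinate as a ratio of two cubic polynomials.

    Inputs are assumed already normalized by the RPC's offset/scale
    values, which a real implementation must apply first.
    """
    terms = cubic_terms(lat, lon, h)
    num = sum(c * t for c, t in zip(num_coeffs, terms))
    den = sum(c * t for c, t in zip(den_coeffs, terms))
    return num / den
```

A full implementation evaluates two such ratios per ground point, one for the image line and one for the sample, plus the normalization steps omitted here.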
The above picture is an example shaded, colorized elevation model of the city of Hobart, Australia. It was created from example stereo imagery provided on GeoEye’s website and represents a difficult stereo pair for us to process. In the northeast corner of the image is salt-and-pepper noise, which corresponds to the water of the bay that we couldn’t correlate into 3D. In the southwest are mountains covered in dense forest, with a texture that changes rapidly with viewing angle. Despite these problems, you can see that our software was able to extract roads and buildings to some degree. This is interesting primarily because we wrote the software to work on the bare surfaces found on the Moon or Mars. Slowly we are improving so that we can support all kinds of terrain. For now, we recommend that our users apply ASP to imagery of bare rock, grasslands, snow, and ice for best results.