Short story: today went very well. We got in, set up, and should be on track for rover checkout and connectivity testing back to Ames tomorrow.
We plan to use the morning for rover/systems checkout and for driving around the area a bit without instruments installed (to protect the instruments and let us focus on the driving tests), then install the instruments, and possibly the source, after lunch. We can test comms between the field and Ames in parallel with the rover checkout.
The team got into the staging area without too much trouble. We met at the Zzyzx Road turnoff, unhitched the trailer, and most of the team drove down the access road to survey the route and set up some infrastructure to cross the gullies, leaving the volunteer (yours truly) to watch the trailer and box truck. When they returned, we re-hitched and headed in with the trailer and box truck. The boards they used to fill in the worst ditches worked out very well.
Setup proceeded well. By the end of the day, we think we have the generators, Tropos, GPS, and MROC ops trailer set up in a workable configuration, with tables, chairs, most consoles, and power and data wired throughout. We managed a quick connectivity check between the field trailer and the robot (KRex), as well as a connectivity check between one voice terminal and the server at Ames. We did not do a voice check; we only confirmed that the client connected to the server. We shut down and packed up on time to depart the field site right at 6:00 pm. The sun was going down but the light was sufficient for driving out, so sticking with that end-of-day time should work out.
We will also need to rework the Tropos antenna and GPS base station setup tomorrow. We didn’t get them to a good enough state for overnight winds, which we were told can reach 50 mph (80 km/h), though it didn’t seem that bad last night.
We’ve recently begun a new collaboration, the Crisis Mapping project, with the Google Crisis Response team. Our goal is to develop tools to aid victims in a crisis through the use of map imagery processed by Google Earth Engine.
Google Earth Engine is a new tool for parallel computation on satellite imagery, enabling data processing on enormous scales using Google’s cloud infrastructure. Earth Engine takes care of tiling, georeferencing, and varying image resolutions automatically, letting researchers focus on the core aspects of their algorithms. Many popular georeferenced datasets are already loaded into Earth Engine for quick and easy use, such as 40 years of Landsat data, making historical data easily accessible. Earth Engine also provides an interactive sandbox for quickly prototyping image processing algorithms, and lazily computes results only for the parts of the image which are visible.
As a first step in the Crisis Mapping project, we decided to investigate flood mapping from satellite imagery. With flood maps, responders can know the extent of a flood and plan a more effective response.
We used data from MODIS (upper image, bands 1, 2, and 6 as RGB), which has the advantage of capturing an image of nearly every point on the Earth’s surface daily, making it suitable for fast responses to flooding. However, MODIS imagery is blocked by clouds, which are often present in flooded areas; in the future we plan to investigate radar-based approaches to address this shortcoming. The MODIS data is coupled with images from Landsat (lower image, same region as the MODIS image), from which we derive ground truth both for training and for evaluating the results of the algorithms on MODIS data. Landsat data is available much less frequently and is thus less suitable for rapid flood mapping.
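To give a flavor of the kind of per-pixel rule the simplest flood mapping approaches use, here is a minimal sketch of threshold-based water classification on MODIS red and near-infrared reflectance. The thresholds and function name are illustrative placeholders, not values or code from our trade study.

```python
def classify_water(red, nir, nir_thresh=0.15, ratio_thresh=0.7):
    """Flag pixels as water with a simple threshold rule.

    red, nir : lists of surface reflectance for MODIS band 1 (red)
    and band 2 (near infrared). Water absorbs strongly in the NIR,
    so low NIR reflectance and a low NIR/red ratio suggest water.
    Returns a boolean mask, one entry per pixel.
    """
    return [(n < nir_thresh) and (n / max(r, 1e-6) < ratio_thresh)
            for r, n in zip(red, nir)]

# Toy example: one water-like pixel, one vegetated pixel.
mask = classify_water(red=[0.08, 0.05], nir=[0.04, 0.30])
print(mask)  # [True, False]
```

Real classifiers in the trade study are more sophisticated, but most reduce to some combination of per-band tests like this.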
We compared a number of existing flood mapping approaches in a trade study.
If you’ve been following this blog, you know that a big project for IRG this past year has been the Human Exploration Technology Surface Telerobotics Project. The project culminated this summer in three on-orbit test sessions, when we got to see our work used in space. The first session took place on June 17 with Chris Cassidy, the second session on July 26 with Luca Parmitano, and the third session on August 20 with Karen Nyberg. The three crew members successfully scouted for, deployed, and inspected a simulated radio telescope by remotely commanding our K10 rover at the IRG Roverscape. Here is some of the media coverage of our tests:
Last week IRG’s Surface Telerobotics Project conducted its first Operational Readiness Test (ORT). In the Surface Telerobotics Project, an astronaut on the International Space Station will control the K10 rover on Earth to simulate deploying a radio telescope. This experiment will be the first time that an astronaut in space controls a full-scale rover on the ground in real time. The experiment is scheduled for July and August 2013, and before then we have three ORTs scheduled as practice runs to make sure everything goes smoothly.
Here’s a video from the first ORT. We are operating on IRG’s new Roverscape, a two-acre area at Ames that is specially designed for our rovers, with hills, craters, and a flat area. Most of our team is working inside the Roverscape building, which is so new that it was still being constructed as we were testing.
K10 Red was the rover of the day. For the real experiment, K10 Black (Red’s twin) will be ready to swap in if Red breaks. Near the end of the video, you’ll see K10 Red deploy a roll of Kapton film that simulates an arm of a radio antenna. On the moon, the film would stay in place, but the wind on Earth requires that we put weights on the film to keep it from blowing away.
Last week Adrian and I (Vytas SunSpiral) presented our work on “Super Ball Bot,” a tensegrity robot for planetary landing and exploration, at the NIAC (NASA Innovative Advanced Concepts) Program’s Spring Symposium. It was really fun to share all the progress we have made in the mission concept development and engineering analysis. The best part is that our work is supporting our initial intuition that this concept is workable and not as crazy as it initially sounded. Luckily for us, the NIAC program is designed to try out exactly these high-risk, high-payoff concepts for new space exploration technologies. Thus, when the BBC interviewed us, we took it as a good sign that they called us “NASA’s crazy robot lab.” Balancing that view, Tech Buzzer called us “Not actually crazy. But certainly innovative and ambitious.” And while the Tech Buzzer article has many factual errors, they are right about the innovation and ambition — we are developing an idea that has never been tried before, and if it works (which we think it will — with a lot more hard work), then it could change the future of robotics and space exploration.
Watch the video below to find out more, and see my earlier post where I first described the project when we started (much has evolved since then!).
Our open source, free 3D modeling software for satellites just had its 2.1 release. This includes a bunch of bug fixes plus a few new features. Most importantly, we’ve added support for a generic satellite camera model called the RPC (Rational Polynomial Coefficient) model. RPCs are just big ratios of polynomials that map geodetic coordinates to image coordinates, and crucially, just about every commercial satellite company ships an RPC with its imagery. This allows Ames Stereo Pipeline to process imagery from sources we haven’t previously been able to work with, like GeoEye.
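To give a flavor of what evaluating an RPC looks like, here is a minimal sketch in Python. A real RPC uses all 20 cubic monomials and separate numerator/denominator coefficient sets for the image sample and line coordinates; the shortened monomial list and the coefficients in the example are placeholders, not anything shipped by a vendor.

```python
def rpc_project(lon, lat, height, num_coeffs, den_coeffs, offsets, scales):
    """Map a geodetic point to a normalized image coordinate via an
    RPC model: the ratio of two polynomials in normalized longitude,
    latitude, and height. Monomial ordering conventions vary between
    vendors; this sketch just uses the (truncated) list below.
    """
    # Normalize inputs using the offsets/scales shipped with the imagery,
    # which keeps the polynomial evaluation numerically well conditioned.
    x = (lon - offsets[0]) / scales[0]
    y = (lat - offsets[1]) / scales[1]
    z = (height - offsets[2]) / scales[2]
    terms = [1.0, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z]
    num = sum(c * t for c, t in zip(num_coeffs, terms))
    den = sum(c * t for c, t in zip(den_coeffs, terms))
    return num / den

# Degenerate placeholder RPC that simply returns the normalized longitude.
num = [0.0] * 10; num[1] = 1.0
den = [0.0] * 10; den[0] = 1.0
print(rpc_project(0.5, 0.2, 0.1, num, den, (0, 0, 0), (1, 1, 1)))  # 0.5
```

Because the model is just coefficient evaluation, it works the same way regardless of which company built the satellite, which is what makes it such a useful generic camera model.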
The above picture is an example shaded, colorized elevation model of the city of Hobart in Australia. That image was created from example stereo imagery provided on GeoEye’s website and represents a difficult stereo pair for us to process. In the northeast corner of the image is a bunch of salt-and-pepper noise, which represents the water of the bay that we couldn’t correlate into 3D. In the southwest are mountains covered by dense forest whose texture changes rapidly with viewing angle. Despite these problems, you can see that our software was able to extract roads and buildings to some degree. This is interesting primarily because we wrote the software to work on the bare surfaces found on the Moon or Mars. Slowly we are improving so that we can support all kinds of terrain. For now, we recommend that our users apply ASP to imagery of bare rock, grasslands, snow, and ice for best results.
Currently, one of our most exciting areas of research is our exploration of the intersection of biology and tensegrity robots. The inspiration for this research comes from the idea of “Biotensegrity” pioneered by Dr. Steven Levin, which holds that tensegrity structures are a good model for how forces move through our bodies. Thus, instead of the common sense “bone-centric” model where force passes compressively from bone to bone, one should take a fascia-centric view that looks at the global fascia network (i.e. continuous chains of muscles and ligaments) as the primary load paths in the body. (For more info on fascia, see my prior posts fascia, bones, and muscles, and Fascia and Motion.)
Tom Flemons’ Tensegrity Model of the Spine
To date, the vast majority of tensegrity research has focused on static tensegrity structures, but it turns out that they have many qualities which make them well suited for motion, especially the type of motion required of a robot (or animal) moving in the real world outside the safety of factories or laboratories. As I discuss in an earlier post, these advantages largely center around how tensegrity structures can distribute forces into the structure, instead of accumulating and magnifying forces through leverage, which is what happens in a normal rigidly connected robot.
Using the Tensegrity Robotics Simulator that we have been developing over the last year, we have been exploring biologically inspired tensegrity robots. Our initial focus is on a “snake” or “spine” like tensegrity robot, which is inspired by the models of a tensegrity spine created by Tom Flemons. For ease of modeling, our “snake” uses tetrahedron-shaped elements, which look different from vertebrae but maintain a similar topology of connectivity. Thus, each “vertebra” of our snake is connected and controlled by cables to the next “vertebra” and has no rigid hinges or joints. Compared to a regular robotic snake, this approach has the advantage that forces are not magnified via leverage through the body. As a result, we are able to explore real distributed control approaches, because local actions stay predominantly local without the unexpected global consequences experienced in a rigid robot.
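As a rough illustration of that connectivity, the sketch below builds a node/edge list for a chain of tetrahedral segments. It is a deliberate simplification: it treats each tetrahedron as a fully rigid unit and runs one cable between each pair of corresponding nodes on adjacent segments, whereas the actual Flemons-inspired models use a richer cable pattern.

```python
def snake_topology(n_segments):
    """Build rod and cable lists for a chain of tetrahedral "vertebrae".

    Each tetrahedron contributes 4 nodes (numbered consecutively) and
    6 rigid edges; adjacent tetrahedra are joined only by cables, so
    the chain has no rigid hinges or joints between segments.
    """
    rods = []    # rigid edges within a single tetrahedron
    cables = []  # actuated strings between adjacent tetrahedra
    for seg in range(n_segments):
        base = 4 * seg
        nodes = [base + i for i in range(4)]
        # the 6 edges of the tetrahedron are rigid struts
        rods += [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
        if seg + 1 < n_segments:
            # one (simplified) cable per corresponding node pair
            cables += [(base + i, base + 4 + i) for i in range(4)]
    return rods, cables

rods, cables = snake_topology(3)
print(len(rods), len(cables))  # 18 6
```

(Three segments give 3 × 6 = 18 rigid edges and 2 × 4 = 8 inter-segment cables.)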
In the following video we show our simulated “tensegrity snake” moving over different terrains while using a distributed and decentralized oscillatory control system. This first experiment uses controls with no world knowledge or motion planning, yet we see that it is capable of traversing a variety of complex terrains. Brian Tietz, a NASA Space Technology Research Fellow from Case Western Reserve University’s BioRobotics lab has been developing the snake tensegrity simulation and controls.
We have focused on distributed force controls because we want to maximize the competence of the structure’s interaction with the environment in order to simplify higher-level goal-oriented task controls. This approach mirrors the division between the mammalian spine, which is decentralized and primarily concerned with forces and rhythm, and the mammalian brain, which is concerned with task based motion planning and interfacing with the highly competent spine/body for execution.
Our work on distributed controls is influenced by theories of neuroscience that focus on networks of Central Pattern Generators (CPGs) for distributed control of complex coordinated behaviors. We implemented a distributed (one controller per string) version of impedance control (which balances the needs of force and length control) on our simulated “tensegrity snake” robot and experimented with a variety of oscillatory controls on string tension and length. The version shown in the video implements a two-level controller for each string, where the higher level produces an open-loop sine wave for tension control, and the lower level performs stabilizing feedback on position and velocity.
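A minimal sketch of such a two-level string controller is below. The gains and waveform parameters are illustrative placeholders, not the values used in our experiments, and the function name is hypothetical.

```python
import math

def string_tension(t, pos, vel, rest_length,
                   amp=10.0, freq=1.0, phase=0.0,
                   stiffness=50.0, damping=5.0):
    """Two-level controller for one tensegrity string.

    High level: an open-loop sine wave sets the desired tension,
    providing the rhythmic drive of a CPG-style oscillator.
    Low level: impedance-style feedback on string stretch
    (pos - rest_length) and on velocity stabilizes the motion.
    The result is clamped at zero because a string can only
    pull, never push.
    """
    desired = amp * (1.0 + math.sin(2.0 * math.pi * freq * t + phase))
    feedback = stiffness * (pos - rest_length) + damping * vel
    return max(0.0, desired + feedback)

# At the sine peak (t = 0.25 s, 1 Hz) with no stretch or velocity,
# the output is just the open-loop term: 2 * amp.
print(string_tension(0.25, pos=1.0, vel=0.0, rest_length=1.0))  # 20.0
```

Distributing one such controller per string, each with its own phase offset, is what lets the oscillations propagate along the body without any central coordinator.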
We found that even with this simple reactive control, our robot could generate a variety of gaits and navigate a wide range of obstacles that would normally require motion planning and structure-specific gaits. We believe that this high level of motion competence at the reactive structural level will lead to impressive capabilities as we continue to explore closed-loop CPG controls. We have initially focused on mobility tasks because recent research shows that neural control of goal-oriented manipulation is based in the same oscillatory controls found in mobility. Thus, as we mature our understanding of this new technology, we will be able to extend it to goal-oriented manipulation tasks as we incorporate task-space sensory information.
A prototype tensegrity “snake” robot which will be used to verify the algorithms developed in simulation
Finally, to see more about our other research into dynamic tensegrity robots, please see my recent post on our SuperBall Bot project, where we are developing a planetary landing and mobility system with a tensegrity robot.
Figure 1. L2-Farside Mission Concept (image from Lockheed Martin)
Surface Telerobotics is a planned 2013 test to examine how astronauts in the International Space Station (ISS) can remotely operate a surface robot across short time delays. This test will be performed during Increment 35/36 to obtain baseline engineering data and will improve our understanding of how to: (1) deploy a crew-controlled telerobotic system for performing surface activities and (2) conduct joint human-robot exploration operations. This test will also help reduce risks for future human missions, identify technical gaps, and refine key system requirements.
The Moon’s farside is a possible early goal for missions beyond Low Earth Orbit (LEO) using the Orion Multi-Purpose Crew Vehicle (MPCV) to explore incrementally more distant destinations. The lunar L2 Lagrange point is a location where the combined gravity of the Earth and Moon allows a spacecraft to be synchronized with the Moon in its orbit around the Earth, so that the spacecraft is relatively stationary over the farside of the Moon. Such a mission would be a proving ground for future exploration missions to deep space while also enabling scientifically important investigations.
Figure 2. Low frequency radio telescope
From the Lagrange point, an astronaut would teleoperate a rover on the lunar farside to deploy a low-frequency radio telescope that would acquire observations of the Universe’s first stars and galaxies. This is a key science objective of the 2010 Astronomy & Astrophysics Decadal Survey. During Surface Telerobotics operations, we will simulate a telescope/antenna deployment.
Figure 3. K10 Rover
The ISS crew will control a NASA K10 planetary rover operating at a NASA outdoor robotics testbed. The rover will carry a variety of sensors and instruments, including a high resolution panoramic imager, a 3D scanning lidar, and an antenna deployment mechanism.
The crew will control the robot in “command sequencing with interactive monitoring” mode. The crew will command the rover to execute pre-generated sequences of tasks including drives and instrument measurements. The robot will execute the sequence autonomously while the crew monitors live telemetry and instrument data. If an issue arises, the crew can interrupt the current sequence and use basic manual tele-op commands to maneuver the robot out of trouble or command additional data acquisitions.
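The control mode described above can be sketched as a simple loop: execute the pre-generated sequence autonomously, check monitoring between tasks, and fall back to manual commanding on an interrupt. The function names here are hypothetical illustrations, not the actual flight software interface.

```python
def run_sequence(tasks, telemetry_ok, on_interrupt):
    """Sketch of "command sequencing with interactive monitoring".

    tasks        : pre-generated callables (e.g. a drive or an
                   instrument measurement), executed autonomously.
    telemetry_ok : monitoring check run before each task; stands in
                   for the crew watching live telemetry.
    on_interrupt : handler that drops control to manual commanding.
    Returns the completed tasks and whether the sequence finished.
    """
    completed = []
    for task in tasks:
        if not telemetry_ok():
            on_interrupt(completed)  # hand control back to the crew
            return completed, False
        task()
        completed.append(task)
    return completed, True

# Toy run: two tasks, telemetry stays healthy throughout.
log = []
tasks = [lambda: log.append("drive"), lambda: log.append("pano")]
completed, ok = run_sequence(tasks, telemetry_ok=lambda: True,
                             on_interrupt=lambda done: log.append("manual"))
print(ok, log)  # True ['drive', 'pano']
```

The key design point is that autonomy handles the routine execution while the crew supplies judgment only at the monitoring and interrupt boundaries, which is what makes the short time delays tolerable.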
As a first step to the 2013 test, on September 22nd, at the 2012 International Observe the Moon Night event at NASA Ames Research Center, we demonstrated deployment of a polyimide film antenna substrate. The video below shows the K10 rover playing out a polyimide (Kapton) film on the Ames Marscape.