Surface Telerobotics Success

September 12 2013

If you’ve been following this blog, you know that a big project for IRG this past year has been the Human Exploration Technology Surface Telerobotics Project. The project culminated this summer in three on-orbit test sessions, when we got to see our work used in space. The first session took place on June 17 with Chris Cassidy, the second session on July 26 with Luca Parmitano, and the third session on August 20 with Karen Nyberg. The three crew members successfully scouted for, deployed, and inspected a simulated radio telescope by remotely commanding our K10 rover at the IRG Roverscape. Here is some of the media coverage of our tests:


(CNET, 2013)


(ISS Update, NASA Television, 2013)


(Space Station Live, NASA Television, 2013)


Surface Telerobotics Intro Video

May 8 2013

We made this video to introduce the Surface Telerobotics Project to the astronaut who will command the K10 rover. Enjoy!


First Operational Readiness Test for the Surface Telerobotics Project

May 7 2013

Last week IRG’s Surface Telerobotics Project conducted its first Operational Readiness Test (ORT). In the Surface Telerobotics Project, an astronaut on the International Space Station will control the K10 rover on Earth to simulate deploying a radio telescope. This experiment will be the first time that an astronaut in space will control a full-scale rover on the ground in real time. The experiment is scheduled for July and August 2013, and before then we have three ORTs scheduled as practice runs to make sure everything goes smoothly.

Here’s a video from the first ORT. We are operating on IRG’s new Roverscape, a 2-acre area at Ames that is specially designed for our rovers, with hills, craters, and a flat area. Most of our team is working inside the Roverscape building, which is so new that it was still being constructed as we were testing.

K10 Red was the rover of the day. For the real experiment, K10 Black (Red’s twin) will be ready to swap in if Red breaks. Near the end of the video, you’ll see K10 Red deploy a roll of Kapton film that simulates an arm of a radio antenna. On the moon, the film would stay in place, but the wind on Earth requires that we put weights on the film to keep it from blowing away.

Next week is ORT 2!


Presentation on Tensegrity Robots for Planetary Exploration

March 21 2013

Last week Adrian and I (Vytas SunSpiral) presented our work on “Super Ball Bot,” a tensegrity robot for planetary landing and exploration, at the NIAC (NASA Innovative Advanced Concepts) Program’s Spring Symposium. It was really fun to share all the progress we have made in the mission concept development and engineering analysis. The best aspect of this is that our work is supporting our initial intuition that this concept is workable and not as crazy as it initially sounded. Luckily for us, the NIAC program is designed to try out these high-risk but high-payoff concepts for new technologies for space exploration. Thus, when the BBC interviewed us, we took it as a good sign that they called us “NASA’s crazy robot lab.” Balancing that view, Tech Buzzer called us “Not actually crazy. But certainly innovative and ambitious.” And while the Tech Buzzer article has many factual errors, they are right about the innovation and ambition — we are developing an idea that has never been tried before, and if it works (which we think it will — with a lot more hard work), then it could change the future of robotics and space exploration.

Watch the video below to find out more, and see my earlier post where I first described the project when we started (much has evolved since then!).

Watch live streaming video from niac2013 at livestream.com

Since I mentioned some of the media attention we have received, we also got covered by Time right after the project was announced.

(This is a repost from Vytas SunSpiral’s personal blog)


Ames Stereo Pipeline 2.1 Released

January 9 2013

Our free, open-source 3D modeling software for satellite imagery just had its 2.1 release. This release includes a bunch of bug fixes plus a few new features. Most importantly, we’ve added support for a generic satellite camera model called the RPC (Rational Polynomial Coefficient) model. RPCs are essentially big ratios of polynomials that map geodetic coordinates (longitude, latitude, height) to image coordinates, and, crucially, just about every commercial satellite company ships an RPC model with its imagery. This allows Ames Stereo Pipeline to process imagery from new sources that we haven’t previously been able to work with, like GeoEye.
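For readers curious what evaluating an RPC model involves, here is a rough Python sketch (not ASP’s actual implementation). The coefficient arrays and normalization offsets/scales are placeholders for the values shipped with the imagery, and the term ordering follows the common RPC00B convention.

```python
import numpy as np

def rpc_terms(L, P, H):
    """The 20 cubic polynomial terms of the standard RPC model
    (ordering follows the common RPC00B convention)."""
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
        P*H*H, L*L*H, P*P*H, H**3,
    ])

def geodetic_to_image(lon, lat, height, coeffs, norm):
    """Map a geodetic coordinate to (sample, line) image coordinates.

    coeffs: dict of 20-element arrays 'line_num', 'line_den',
            'samp_num', 'samp_den' (shipped with the imagery).
    norm:   dict of offset/scale values used to normalize the
            ground coordinates and de-normalize line/sample.
    """
    # Normalize ground coordinates to roughly [-1, 1]
    L = (lon - norm["lon_off"]) / norm["lon_scale"]
    P = (lat - norm["lat_off"]) / norm["lat_scale"]
    H = (height - norm["height_off"]) / norm["height_scale"]

    t = rpc_terms(L, P, H)
    line = np.dot(coeffs["line_num"], t) / np.dot(coeffs["line_den"], t)
    samp = np.dot(coeffs["samp_num"], t) / np.dot(coeffs["samp_den"], t)

    # De-normalize back to pixel units
    return (samp * norm["samp_scale"] + norm["samp_off"],
            line * norm["line_scale"] + norm["line_off"])
```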

 

The picture above is an example shaded, colorized elevation model of the city of Hobart, Australia. The image was created from example stereo imagery provided on GeoEye’s website and represents a difficult stereo pair for us to process. In the northeast corner of the image is a patch of salt-and-pepper noise, which corresponds to the water of the bay that we couldn’t correlate into 3D. In the southwest are mountains covered in dense forest, whose texture changes rapidly with viewing angle. Despite these problems, you can see that our software was able to extract roads and buildings to some degree. This is interesting primarily because we wrote the software to work on the bare surfaces found on the Moon or Mars. Slowly we are improving so that we can support all kinds of terrain. For now, we recommend that our users apply ASP to imagery of bare rock, grasslands, snow, and ice for best results.


Tensegrity Snake Robot

October 25 2012

(This is a repost from Vytas’ personal blog)

Currently, one of our most exciting areas of research is our exploration of the intersection of biology and tensegrity robots. The inspiration for this research comes from the idea of “Biotensegrity” pioneered by Dr. Steven Levin, which holds that tensegrity structures are a good model for how forces move through our bodies. Thus, instead of the common-sense “bone-centric” model, where force passes compressively from bone to bone, one should take a fascia-centric view that looks at the global fascia network (i.e., continuous chains of muscles and ligaments) as the primary load paths in the body. (For more info on fascia, see my prior posts fascia, bones, and muscles and Fascia and Motion.)

Tom Flemons’ Tensegrity Model of the Spine

To date, the vast majority of tensegrity research has focused on static tensegrity structures, but it turns out that they have many qualities which make them well suited for motion, especially the type of motion required of a robot (or animal) moving in the real world outside the safety of factories or laboratories. As I discuss in an earlier post, these advantages largely center around how tensegrity structures can distribute forces into the structure, instead of accumulating and magnifying forces through leverage, which is what happens in a normal rigidly connected robot.

Using the Tensegrity Robotics Simulator that we have been developing over the last year, we have been exploring biologically inspired tensegrity robots. Our initial focus is on a “snake”- or “spine”-like tensegrity robot, which is inspired by the models of a tensegrity spine created by Tom Flemons. For ease of modeling, our “snake” uses tetrahedron-shaped elements, which look different from vertebrae but maintain a similar topology of connectivity. Thus, each “vertebra” of our snake is connected to the next by the cables that control it, and has no rigid hinges or joints. Compared to a regular robotic snake, this approach has the advantage that forces are not magnified via leverage through the body. As a result, we are able to explore real distributed control approaches, because local actions stay predominantly local, without the unexpected global consequences experienced in a rigid robot.

In the following video we show our simulated “tensegrity snake” moving over different terrains while using a distributed and decentralized oscillatory control system. This first experiment uses controls with no world knowledge or motion planning, yet we see that it is capable of traversing a variety of complex terrains. Brian Tietz, a NASA Space Technology Research Fellow from Case Western Reserve University’s BioRobotics lab, has been developing the snake tensegrity simulation and controls.

We have focused on distributed force controls because we want to maximize the competence of the structure’s interaction with the environment in order to simplify higher-level goal-oriented task controls. This approach mirrors the division between the mammalian spine, which is decentralized and primarily concerned with forces and rhythm, and the mammalian brain, which is concerned with task-based motion planning and interfacing with the highly competent spine/body for execution.

Our work on distributed controls is influenced by theories of neuroscience that focus on networks of Central Pattern Generators (CPGs) for distributed control of complex coordinated behaviors. We implemented a distributed (one controller per string) version of impedance control (which balances the needs of force and length control) on our simulated “tensegrity snake” robot and experimented with a variety of oscillatory controls on string tension and length. The version shown in the video implements a two-level controller for each string, where the higher level produces an open-loop sine wave for the tension control and the lower level performs stabilizing feedback on position and velocity.
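To give a concrete sense of how such a two-level per-string controller might be structured, here is a minimal Python sketch. It is not the simulator’s actual code; the class name, gains, amplitudes, and the clamping of tension at zero are illustrative assumptions.

```python
import math

class StringController:
    """Sketch of a two-level controller for one cable (string).
    Gains, amplitude, and rest length are hypothetical placeholders."""

    def __init__(self, rest_length, amplitude, period, phase,
                 stiffness=50.0, damping=5.0):
        self.rest_length = rest_length  # nominal cable length [m]
        self.amplitude = amplitude      # tension oscillation amplitude [N]
        self.period = period            # oscillation period [s]
        self.phase = phase              # phase offset [rad]
        self.stiffness = stiffness      # position feedback gain [N/m]
        self.damping = damping          # velocity feedback gain [N*s/m]

    def desired_tension(self, t):
        """High level: open-loop sine wave on cable tension."""
        return self.amplitude * (1.0 + math.sin(
            2.0 * math.pi * t / self.period + self.phase))

    def command(self, t, length, velocity):
        """Low level: impedance-style feedback that stabilizes the cable
        around its rest length while tracking the feedforward tension.
        Cables can only pull, so the commanded tension is clamped at zero."""
        feedforward = self.desired_tension(t)
        feedback = (self.stiffness * (length - self.rest_length)
                    + self.damping * velocity)
        return max(0.0, feedforward + feedback)
```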

We found that even with this simple reactive control, our robot could generate a variety of gaits and navigate a wide range of obstacles that would normally require motion planning and structure-specific gaits. We believe that this high level of motion competence at the reactive structural level will lead to impressive capabilities as we continue to explore closed-loop CPG controls. We have initially focused on mobility tasks because recent research shows that the neural control of goal-oriented manipulation is based on the same oscillatory controls found in mobility. Thus, as we mature our understanding of this new technology, we will be able to extend it to goal-oriented manipulation tasks as we incorporate task-space sensory information.

In order to validate our progress in simulation, we are hard at work building a physical tensegrity snake robot. The initial prototype was built by a team of students at the University of Idaho as part of their senior capstone engineering team project. We are working on rebuilding the control system in order to accommodate the controls we have developed in simulation.

A prototype tensegrity “snake” robot which will be used to verify the algorithms developed in simulation

Finally, to see more about our other research into dynamic tensegrity robots, please see my recent post on our SuperBall Bot project, where we are developing a planetary landing and mobility system with a tensegrity robot.


Surface Telerobotics from the ISS

September 25 2012

Figure 1. L2-Farside Mission Concept (image from Lockheed Martin)

Surface Telerobotics is a planned 2013 test to examine how astronauts in the International Space Station (ISS) can remotely operate a surface robot across short time delays. This test will be performed during Increment 35/36 to obtain baseline engineering data and will improve our understanding of how to: (1) deploy a crew-controlled telerobotic system for performing surface activities and (2) conduct joint human-robot exploration operations. This test will also help reduce risks for future human missions, identify technical gaps, and refine key system requirements.

The Moon’s farside is a possible early goal for missions beyond Low Earth Orbit (LEO) using the Orion Multi-Purpose Crew Vehicle (MPCV) to explore incrementally more distant destinations. The lunar L2 Lagrange Point is a location where the combined gravity of the Earth and Moon allows a spacecraft to be synchronized with the Moon in its orbit around the Earth, so that the spacecraft is relatively stationary over the farside of the Moon.  Such a mission would be a proving ground for future exploration missions to deep space while also overseeing scientifically important investigations.

Figure 2. Low frequency radio telescope

From the Lagrange Point, an astronaut would teleoperate a rover on the lunar farside that would deploy a low-frequency radio telescope to acquire observations of the Universe’s first stars/galaxies.  This is a key science objective of the 2010 Astronomy & Astrophysics Decadal Survey. During Surface Telerobotics operations, we will simulate a telescope/antenna deployment.

Figure 3. K10 Rover

The ISS crew will control a NASA K10 planetary rover operating at a NASA outdoor robotics testbed.  The rover will carry a variety of sensors and instruments, including a high-resolution panoramic imager, a 3D scanning lidar, and an antenna deployment mechanism.

The crew will control the robot in “command sequencing with interactive monitoring” mode.  The crew will command the rover to execute pre-generated sequences of tasks including drives and instrument measurements.  The robot will execute the sequence autonomously while the crew monitors live telemetry and instrument data.  If an issue arises, the crew can interrupt the current sequence and use basic manual tele-op commands to maneuver the robot out of trouble or command additional data acquisitions.
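As a rough illustration of what “command sequencing with interactive monitoring” could look like in software (not the actual flight or ground system), here is a minimal Python sketch; the task structure, telemetry callback, and interrupt mechanism are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[], None]   # e.g. drive a segment or take a measurement

class SequenceExecutor:
    """Runs a pre-generated task sequence autonomously while allowing the
    crew to interrupt it and fall back to manual tele-op commands."""

    def __init__(self, tasks: List[Task]):
        self.tasks = tasks
        self.interrupted = False

    def interrupt(self):
        """Called when the crew pauses the sequence from the monitoring UI."""
        self.interrupted = True

    def execute(self, publish_telemetry: Callable[[str], None]):
        for task in self.tasks:
            if self.interrupted:
                publish_telemetry(f"Sequence paused before '{task.name}'; "
                                  "awaiting manual tele-op or resume.")
                return
            publish_telemetry(f"Starting task '{task.name}'")
            task.run()                     # rover executes autonomously
            publish_telemetry(f"Finished task '{task.name}'")
        publish_telemetry("Sequence complete")
```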

As a first step to the 2013 test, on September 22nd, at the 2012 International Observe the Moon Night event at NASA Ames Research Center, we demonstrated deployment of a polyimide film antenna substrate.  The video below shows the K10 rover playing out a polyimide (Kapton) film on the Ames Marscape.



Super Ball Bot – Structures for Planetary Landing and Exploration

September 19 2012

Recently we got the great news that we were awarded funding from NASA’s Office of the Chief Technologist for the NASA Innovative Advanced Concepts (NIAC) proposal “Super Ball Bot – Structures for Planetary Landing and Exploration.” The proposed research revolves around a radical departure from traditional rigid robotics to “tensegrity” robots composed entirely of interlocking rods and cables. Out of more than 600 white papers originally submitted, this proposal is one of only 18 that were funded for 2012. Tensegrities, which Buckminster Fuller helped discover, are counter-intuitive tension structures with no rigid connections and are uniquely robust, lightweight, and deployable. Co-led by Vytas SunSpiral (Intelligent Robotics Group) and Adrian Agogino (Robust Software Engineering Group), and in collaboration with David Atkinson of the University of Idaho, the project is developing a mission concept where a “Super Ball Bot” bounces to a landing on a planet, then deforms itself to roll to locations of scientific interest. This combination of functions is possible because of the unique structural qualities of tensegrities, which can be deployed from small volumes, are lightweight, and can absorb significant impact shocks. Thus, they can be used much like an airbag for landing on a planetary surface, and then deformed in a controlled manner to roll the spacecraft around the surface to locations of scientific interest.

A concept drawing of the mission, where many Super Ball Bots could be deployed and bounce to a landing before moving and exploring the surface.

 

These unusual structures are hard to control with traditional methods, so Vytas and Adrian are experimenting with controlling them using machine learning algorithms and neuroscience-inspired oscillatory controls known as Central Pattern Generators (CPGs). Adrian’s work on multiagent systems and learning provides robust solutions to numerous complex design and control problems. These learning systems can be adaptive, and can generate control solutions for complex structures too complicated to be designed by hand. This approach is well suited to tensegrity structures, which are complex, non-linear systems whose control theory is still being developed. Vytas has been researching robotic manipulation and mobility for over a decade and in recent years has focused on the game-changing capabilities of tensegrity robots due to their unique structural properties. His quest to tap their potential has led him to investigate oscillatory control approaches from the field of neuroscience, such as CPGs, which show promise for efficient control of these robots.

A concept drawing of the Super Ball Bot structure

 

While the Super Ball Bot project has just started, we already have some exciting initial results from the machine learning efforts. During the last year, Vytas led the development of a physics-based tensegrity simulator built on top of the open-source Bullet Physics Engine. We have been using that simulator to explore novel tensegrity structures and control approaches, and will write a separate post about the oscillatory control of a snake-like tensegrity robot and its ability to traverse many complex terrains with fully distributed control algorithms. For NIAC we are now using this simulator to test mission-related properties of tensegrities. The following video shows two drop tests where we simulate a tensegrity robot landing. The results confirm what we see in physical models in our lab, which is that these structures do a great job of absorbing impact forces, even as we vary the stiffness of the strings.
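As a toy illustration of how one might quantify peak landing loads as string stiffness varies (this is not our Bullet-based simulator), the following Python sketch drops a point mass onto a spring-damper “suspension” standing in for the compliant strings; all masses, stiffnesses, and drop heights are made-up numbers.

```python
# Toy 1D drop test: a point mass lands on a spring-damper standing in for
# the compliant tensegrity strings. All values are illustrative only.

def peak_landing_force(drop_height, mass=10.0, stiffness=2000.0,
                       damping=50.0, dt=1e-4, g=9.81):
    velocity = -(2.0 * g * drop_height) ** 0.5   # downward speed at touchdown
    compression = 0.0                            # spring compression [m]
    peak = 0.0
    while True:
        force = stiffness * compression + damping * max(-velocity, 0.0)
        peak = max(peak, force)
        accel = force / mass - g                 # upward positive
        velocity += accel * dt
        compression -= velocity * dt             # moving down compresses spring
        if compression <= 0.0:                   # mass rebounds off the "strings"
            return peak

for k in (1000.0, 2000.0, 4000.0):
    print(f"stiffness {k:6.0f} N/m -> peak force "
          f"{peak_landing_force(5.0, stiffness=k):7.1f} N")
```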

 

 

Since the NIAC proposal was awarded, we have focused on evolving the motion controls of a rolling tensegrity robot and have early simulation results which show it safely rolling through a rocky terrain.

 

 

To date, most of the research into control of tensegrity robots has focused on slow motions which do not excite the dynamics of the structure. Wanting to show that tensegrity robots can be fast and dynamic movers, we are exploring what is possible when the structure is driven at the limits of dynamic stability.

To explore the maximum speed achievable by our tensegrity robot, Adrian’s intern, Atil Iscen, has been developing an evolutionary control approach in which a large population of random tensegrity controllers is evaluated based on their ability to move the farthest distance within a fixed amount of time. Then the worst-performing members are eliminated from the population and the best ones are replicated and mutated, allowing the mutated copies of the good controllers to become even better.

Our best solutions so far evolve the parameters of a distributed oscillatory controller in which the lengths of groups of three cables (making up a facet) are controlled by the values of a sine wave. The job of evolution is then to tune the phase offset, period, and amplitude of the sine wave for the strings. The breakthrough of this approach is that it enables fast dynamic motion without requiring the computationally expensive modeling and analysis necessary for a centrally computed controller.
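A minimal sketch of this kind of evolutionary loop is shown below (in Python). The controller parameterization (one sine wave per facet: phase, period, amplitude), the population sizes, and the mutation scheme are illustrative assumptions, and the fitness function is a crude stand-in for running the physics simulator and measuring distance traveled.

```python
import random

NUM_FACETS = 8        # hypothetical number of actuated cable groups (facets)
POPULATION = 40
GENERATIONS = 100
NUM_ELITES = 10       # survivors that get replicated and mutated

def random_controller():
    # [phase (rad), period (s), amplitude (m)] for each facet
    return [[random.uniform(0.0, 6.28), random.uniform(0.5, 3.0),
             random.uniform(0.0, 0.2)] for _ in range(NUM_FACETS)]

def mutate(controller, sigma=0.02):
    return [[gene + random.gauss(0.0, sigma) for gene in facet]
            for facet in controller]

def fitness(controller):
    """Stand-in for simulating the robot and measuring distance traveled.
    Here it just rewards larger oscillation amplitudes so the sketch runs."""
    return sum(facet[2] for facet in controller)

def evolve():
    population = [random_controller() for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elites = population[:NUM_ELITES]
        # Drop the worst performers; refill with mutated copies of the best.
        population = list(elites)
        while len(population) < POPULATION:
            population.append(mutate(random.choice(elites)))
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```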

Our preliminary results show that tensegrity robots are indeed capable of fast dynamic motion, and that the evolutionary approach is successful at finding difficult-to-model dynamic controllers.

In the following video we show:
1) A slowly moving, hand-crafted controller, showing the difficulty of this problem.
2) An evolved controller showing high-speed mobility.
3) An evolved controller showing high speeds while handling rough terrain.

 

 

While it is exciting to see such fast and dynamic motion from a tensegrity robot, rolling at the limits of stability is not the control approach we need for a space mission. When exploring another planet, we need to balance the need to make progress with concerns about energy efficiency and stability. Thus, we evolved a new controller with a tighter cap on the amount of stretch and energy available for each string. With that change, we find results that appear stable and far more appropriate for exploration of a distant planet.

 

 

These results are preliminary, and we expect to continue to improve the stability, energy efficiency, and terrain handling. Still, it is important to explore the upper limits of speed and dynamic performance. Further, we are establishing that evolutionary approaches are capable of tuning parameters and optimizing the performance of distributed control systems for dynamic tensegrity robots. This is important due to the deep challenges in hand-crafting the dynamics of these complex and non-linear systems.

Moving forward, we plan on exploring increasingly complex structures and distributed control architectures within which we will deploy our learning algorithms to tune performance. In other work we have already shown success at deploying distributed impedance control on tensegrity robots, along with compelling results from biologically inspired Central Pattern Generators (CPGs). Both of these approaches require significant amounts of hand-tuning of parameters, which our learning algorithms should be able to improve upon. Beyond the evolutionary approaches used so far, we also expect to explore multiagent control.

 


Destination Innovation features HET Project

July 27 2012

Original link at http://youtu.be/YvgFhNwD2Us.


Smartphone Phones Home

July 26 2012

On July 2, IRG conducted its second on-orbit test with the Nexus S smartphone. This test exercised the communication path from the phone on ISS to Johnson Space Center, in preparation for a future demonstration of remote control of a SPHERES robot on the ISS by an operator at JSC.

For this first communications test, the phone sent live video from the ISS to JSC, and the ground sent and received short ping-like packets. The phone transmitted data via Wi-Fi to an ISS laptop, and the laptop sent the data over the standard Ku-band satellite link to JSC.
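For illustration only, here is a minimal Python sketch of a “ping-like” round-trip check over UDP; the echo server address, packet contents, and timing are hypothetical, and the actual test used the phone-to-laptop-to-Ku-band path described above rather than a script like this.

```python
import socket
import time

ECHO_HOST, ECHO_PORT = "192.0.2.10", 9999   # hypothetical echo server

def ping_once(sock, seq):
    payload = f"ping {seq} {time.time():.6f}".encode()
    start = time.monotonic()
    sock.sendto(payload, (ECHO_HOST, ECHO_PORT))
    try:
        sock.recvfrom(1024)                  # expect the packet echoed back
        return time.monotonic() - start
    except socket.timeout:
        return None                          # e.g. during a loss-of-signal period

def run(count=10):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    for seq in range(count):
        rtt = ping_once(sock, seq)
        print(f"seq {seq}: " + ("timeout" if rtt is None
                                else f"rtt {rtt * 1000:.1f} ms"))
        time.sleep(1.0)

if __name__ == "__main__":
    run()
```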

During the test, we encountered such hiccups as a regularly scheduled LOS (loss of signal, when the ISS is not in communication with the ground) and a router failure on the ISS. We recovered and were able to reach our goal of ten minutes of communication. The lessons we learned in this test will serve us well during our next on-orbit test, which will happen no earlier than the end of October.

Astronaut Joe Acaba performs the HET Comms Test in the US Lab, as seen by the HET Smartphone.
