Tensegrity Snake Robot

October 25 2012

(This is a repost from Vytas’ personal blog)

Currently, one of our most exciting areas of research is our exploration of the intersection of biology and tensegrity robots. The inspiration for this research comes from the idea of “Biotensegrity” pioneered by Dr. Steven Levin, which holds that tensegrity structures are a good model for how forces move through our bodies. Thus, instead of the common-sense “bone-centric” model where force passes compressively from bone to bone, one should take a fascia-centric view that looks at the global fascia network (i.e. continuous chains of muscles and ligaments) as the primary load paths in the body. (For more info on fascia, see my prior posts fascia, bones, and muscles, and Fascia and Motion.)

Tom Flemons’ Tensegrity Model of the Spine

To date, the vast majority of tensegrity research has focused on static tensegrity structures, but it turns out that they have many qualities which make them well suited for motion, especially the type of motion required of a robot (or animal) moving in the real world outside the safety of factories or laboratories. As I discuss in an earlier post, these advantages largely center on how tensegrity structures distribute forces throughout the structure, instead of accumulating and magnifying them through leverage, which is what happens in a normal rigidly connected robot.

Using the Tensegrity Robotics Simulator that we have been developing over the last year, we have been exploring biologically inspired tensegrity robots. Our initial focus is on a “snake” or “spine” like tensegrity robot, inspired by the models of a tensegrity spine created by Tom Flemons. For ease of modeling, our “snake” uses tetrahedron-shaped elements, which look different from vertebrae but maintain a similar topology of connectivity. Thus, each “vertebra” of our snake is connected to and controlled by cables running to the next “vertebra,” and has no rigid hinges or joints. Compared to a regular robotic snake, this approach has the advantage that forces are not magnified via leverage through the body. As a result, we are able to explore truly distributed control approaches, because local actions stay predominantly local, without the unexpected global consequences experienced in a rigid robot.

In the following video we show our simulated “tensegrity snake” moving over different terrains while using a distributed and decentralized oscillatory control system. This first experiment uses controls with no world knowledge or motion planning, yet we see that it is capable of traversing a variety of complex terrains. Brian Tietz, a NASA Space Technology Research Fellow from Case Western Reserve University’s BioRobotics lab, has been developing the snake tensegrity simulation and controls.

We have focused on distributed force controls because we want to maximize the competence of the structure’s interaction with the environment in order to simplify higher-level goal-oriented task controls. This approach mirrors the division between the mammalian spine, which is decentralized and primarily concerned with forces and rhythm, and the mammalian brain, which is concerned with task-based motion planning and interfacing with the highly competent spine/body for execution.

Our work on distributed controls is influenced by theories of neuroscience that focus on networks of Central Pattern Generators (CPGs) for distributed control of complex coordinated behaviors. We implemented a distributed (one controller per string) version of impedance control (which balances the needs of force and length control) on our simulated “tensegrity snake” robot and experimented with a variety of oscillatory controls on string tension and length. The version shown in the video implements a two-level controller for each string, where the higher level produces an open-loop sine wave for the tension control, and the lower level performs stabilizing feedback on position and velocity.
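As a rough illustration of this two-level scheme, a per-string controller might be structured like the sketch below. The class, gains, and signal names are our own illustrative assumptions, not the simulator’s actual code:

```python
import math

class StringController:
    """One controller per cable: a high-level open-loop sine wave sets the
    tension target, and a low-level loop adds stabilizing feedback on the
    cable's measured length and rate (an impedance-style control)."""

    def __init__(self, amplitude, period, phase, offset, kp=0.5, kd=0.1):
        self.amplitude = amplitude   # tension swing (N)
        self.period = period         # seconds per cycle
        self.phase = phase           # phase offset (rad)
        self.offset = offset         # baseline tension (N)
        self.kp = kp                 # length feedback gain (illustrative)
        self.kd = kd                 # velocity feedback gain (illustrative)

    def tension_setpoint(self, t):
        # High level: open-loop sine wave on tension, no world knowledge.
        return self.offset + self.amplitude * math.sin(
            2.0 * math.pi * t / self.period + self.phase)

    def motor_command(self, t, length, velocity, rest_length):
        # Low level: track the tension setpoint while damping deviations
        # of the measured cable length from its rest length.
        feedforward = self.tension_setpoint(t)
        feedback = self.kp * (rest_length - length) - self.kd * velocity
        return feedforward + feedback
```

Note that the high-level sine wave needs no sensing at all; only the low-level loop touches the measured cable state, which is what keeps each controller fully local to its string.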

We found that even with this simple reactive control, our robot could generate a variety of gaits and navigate a wide range of obstacles which would normally require motion planning and structure-specific gaits. We believe that this high level of motion competence at the reactive structural level will lead to impressive capabilities as we continue to explore closed-loop CPG controls. We have initially focused on mobility tasks because recent research shows that neural control of goal-oriented manipulation is rooted in the same oscillatory controls found in mobility. Thus, as we mature our understanding of this new technology, we will be able to extend it to goal-oriented manipulation tasks as we incorporate task-space sensory information.

In order to validate our progress in simulation, we are hard at work building a physical tensegrity snake robot. The initial prototype was built by a team of students at the University of Idaho as their senior capstone engineering project. We are working on rebuilding the control system to accommodate the controls we have developed in simulation.

A prototype tensegrity “snake” robot which will be used to verify the algorithms developed in simulation

Finally, to see more about our other research into dynamic tensegrity robots, please see my recent post on our SuperBall Bot project, where we are developing a planetary landing and mobility system with a tensegrity robot.


Surface Telerobotics from the ISS

September 25 2012

Figure 1. L2-Farside Mission Concept (image from Lockheed Martin)

Surface Telerobotics is a planned 2013 test to examine how astronauts in the International Space Station (ISS) can remotely operate a surface robot across short time delays. This test will be performed during Increment 35/36 to obtain baseline engineering data and will improve our understanding of how to: (1) deploy a crew-controlled telerobotic system for performing surface activities and (2) conduct joint human-robot exploration operations. This test will also help reduce risks for future human missions, identify technical gaps, and refine key system requirements.

The Moon’s farside is a possible early goal for missions beyond Low Earth Orbit (LEO) using the Orion Multi-Purpose Crew Vehicle (MPCV) to explore incrementally more distant destinations. The lunar L2 Lagrange Point is a location where the combined gravity of the Earth and Moon allows a spacecraft to be synchronized with the Moon in its orbit around the Earth, so that the spacecraft is relatively stationary over the farside of the Moon.  Such a mission would be a proving ground for future exploration missions to deep space while also overseeing scientifically important investigations.

Figure 2. Low frequency radio telescope

From the Lagrange Point, an astronaut would teleoperate a rover on the lunar farside that would deploy a low-frequency radio telescope to acquire observations of the Universe’s first stars/galaxies.  This is a key science objective of the 2010 Astronomy & Astrophysics Decadal Survey. During Surface Telerobotics operations, we will simulate a telescope/antenna deployment.

Figure 3. K10 Rover

The ISS crew will control a NASA K10 planetary rover operating at a NASA outdoor robotics testbed.  The rover will carry a variety of sensors and instruments, including a high resolution panoramic imager, a 3D scanning lidar, and an antenna deployment mechanism.

The crew will control the robot in “command sequencing with interactive monitoring” mode.  The crew will command the rover to execute pre-generated sequences of tasks including drives and instrument measurements.  The robot will execute the sequence autonomously while the crew monitors live telemetry and instrument data.  If an issue arises, the crew can interrupt the current sequence and use basic manual tele-op commands to maneuver the robot out of trouble or command additional data acquisitions.
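In rough pseudocode, that interaction pattern looks something like the sketch below; `rover` and `crew` stand in for hypothetical interfaces, not the actual flight software:

```python
from enum import Enum

class Mode(Enum):
    SEQUENCE = "sequence"   # executing a pre-generated task sequence
    TELEOP = "teleop"       # basic manual commands from the crew
    IDLE = "idle"

def run_sequence(rover, sequence, crew):
    """Illustrative 'command sequencing with interactive monitoring' loop:
    the rover executes each task autonomously while streaming telemetry,
    and the crew may interrupt at any point."""
    for task in sequence:                 # drives, instrument measurements
        rover.start(task)
        while not rover.task_done():
            crew.show(rover.telemetry())  # live telemetry, instrument data
            if crew.interrupt_requested():
                rover.abort()
                return Mode.TELEOP        # fall back to manual tele-op
    return Mode.IDLE
```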

As a first step toward the 2013 test, on September 22nd, at the 2012 International Observe the Moon Night event at NASA Ames Research Center, we demonstrated deployment of a polyimide film antenna substrate.  The video below shows the K10 rover playing out a polyimide (Kapton) film on the Ames Marscape.



Super Ball Bot – Structures for Planetary Landing and Exploration

September 19 2012

Recently we got the great news that we were awarded funding from NASA’s Office of the Chief Technologist for the NASA Innovative Advanced Concepts (NIAC) proposal “Super Ball Bot – Structures for Planetary Landing and Exploration.” The proposed research revolves around a radical departure from traditional rigid robotics to “tensegrity” robots composed entirely of interlocking rods and cables. Out of more than 600 white papers originally submitted, this proposal is one of only 18 that were funded for 2012. Tensegrities, which Buckminster Fuller helped discover, are counter-intuitive tension structures with no rigid connections that are uniquely robust, light-weight, and deployable. Co-led by Vytas SunSpiral (Intelligent Robotics Group) and Adrian Agogino (Robust Software Engineering Group), in collaboration with David Atkinson of the University of Idaho, the project is developing a mission concept where a “Super Ball Bot” bounces to a landing on a planet, then deforms itself to roll to locations of scientific interest. This combination of functions is possible because of the unique structural qualities of tensegrities, which can be deployed from small volumes, are lightweight, and can absorb significant impact shocks. Thus, they can be used much like an airbag for landing on a planetary surface, and then deformed in a controlled manner to roll the spacecraft around the surface to locations of scientific interest.

A concept drawing of the mission, where many Super Ball Bots could be deployed and bounce to a landing before moving and exploring the surface.


These unusual structures are hard to control with traditional methods, so Vytas and Adrian are experimenting with machine learning algorithms and neuroscience-inspired oscillatory controls known as Central Pattern Generators (CPGs). Adrian’s work on multiagent systems and learning provides robust solutions to numerous complex design and control problems. These learning systems can be adaptive, and can generate control solutions for structures too complicated to be designed by hand. This approach is well suited to tensegrity structures, which are complex non-linear systems whose control theory is still being developed. Vytas has been researching robotic manipulation and mobility for over a decade and in recent years has focused on the game-changing capabilities of tensegrity robots due to their unique structural properties. His quest to tap their potential has led him to investigate oscillatory control approaches from the field of neuroscience, such as CPGs, which show promise for efficient control of these robots.

A concept drawing of the Super Ball Bot structure


While the Super Ball Bot project has just started, we already have some exciting initial results from the machine learning efforts. During the last year, Vytas led the development of a physics-based tensegrity simulator built on top of the open-source Bullet Physics Engine. We have been using that simulator to explore novel tensegrity structures and control approaches, and will write a separate post about the oscillatory control of a snake-like tensegrity robot and its ability to traverse many complex terrains with fully distributed control algorithms. For NIAC we are now using this simulator to test mission-related properties of tensegrities. The following video shows two drop tests where we simulate a tensegrity robot landing. The results confirm what we see in physical models in our lab: these structures do a great job absorbing impact forces, even as we vary the stiffness of the strings.
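As a back-of-the-envelope illustration of why a tensioned structure soaks up a landing, here is a toy one-dimensional spring-damper model (purely illustrative; it is not our Bullet-based simulator, and all numbers are made up):

```python
def peak_landing_force(stiffness, mass=10.0, impact_speed=5.0,
                       damping=5.0, dt=1e-4, t_end=2.0):
    """Toy 1-D landing: a mass hits a compliant suspension at
    impact_speed; integrate to find the peak suspension force."""
    x, v, peak = 0.0, -impact_speed, 0.0
    for _ in range(int(t_end / dt)):
        force = -stiffness * x - damping * v   # suspension reaction force
        v += (force / mass - 9.81) * dt        # gravity included
        x += v * dt
        peak = max(peak, abs(force))
    return peak

# Compliant "strings" spread the impact over time, lowering the peak load.
for k in (100.0, 500.0, 2000.0):               # stiffness values (N/m)
    print(k, round(peak_landing_force(k), 1))
```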


Since the NIAC proposal was awarded, we have focused on evolving the motion controls of a rolling tensegrity robot and have early simulation results which show it safely rolling through a rocky terrain.


To date, most of the research into control of tensegrity robots has focused on slow motions which do not excite the dynamics of the structure. Wanting to show that tensegrity robots can be fast and dynamic movers, we are exploring what is possible when the structure is driven at the limits of dynamic stability.

To explore the maximum speed achievable by our tensegrity robot, Adrian’s intern, Atil Iscen, has been developing an evolutionary control approach in which a large population of random tensegrity controllers is evaluated on the ability to move the farthest distance within a fixed amount of time. The worst-performing members are then eliminated from the population, and the best ones are replicated and mutated, allowing variants of the good controllers to become even better.

Our best solutions so far evolve parameters for a distributed oscillatory controller in which the lengths of groups of three cables (making a facet) are controlled by the value of a sine wave. The job of evolution is then to tune the phase offset, period, and amplitude of the sine wave for each facet’s strings. The breakthrough of this approach is that it enables fast dynamic motion without requiring the computationally expensive modeling and analysis necessary for a centrally computed controller.
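In outline, the evolutionary loop might look like the following sketch (an assumed structure for illustration, not Atil’s actual code; all names, ranges, and rates are hypothetical):

```python
import random

def evolve(evaluate, n_facets, pop_size=50, generations=100, sigma=0.1):
    # One genome = (phase, period, amplitude) for each facet of three cables.
    def random_genome():
        return [(random.uniform(0.0, 6.28),   # phase offset (rad)
                 random.uniform(0.5, 4.0),    # period (s)
                 random.uniform(0.0, 1.0))    # amplitude
                for _ in range(n_facets)]

    def mutate(genome):
        return [tuple(g + random.gauss(0.0, sigma) for g in facet)
                for facet in genome]

    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # evaluate() runs a fixed-time simulation, returns distance moved.
        ranked = sorted(population, key=evaluate, reverse=True)
        elite = ranked[: pop_size // 2]       # eliminate the worst half
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=evaluate)
```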

Our preliminary results show that tensegrity robots are indeed capable of fast dynamic motion, and that the evolutionary approach is successful at finding difficult-to-model dynamic controllers.

In the following video we show:
1) a slow, hand-crafted controller, illustrating the difficulty of this problem;
2) an evolved controller demonstrating high-speed mobility;
3) an evolved controller maintaining high speed while handling rough terrain.


While it is exciting to see such fast and dynamic motion from a tensegrity robot, rolling at the limits of stability is not the control approach we need for a space mission. When exploring another planet we need to balance the needs of making progress with concerns about energy efficiency and stability. Thus, we evolved a new controller with a tighter cap on the amount of stretch and energy available for each string.  With that change we find results which appear stable and far more appropriate for exploration of a distant planet.


These results are preliminary and we expect to continue to improve the stability, energy efficiency, and terrain handling. Still, it is important to explore the upper limits of speed and dynamic performance. Further, we are establishing that evolutionary approaches are capable of parameter tuning and optimizing the performance of distributed control systems for dynamic tensegrity robots. This is important due to the deep challenges in hand crafting the dynamics of these complex and non-linear systems.

Moving forward we plan on exploring increasingly complex structures and distributed control architectures within which we will deploy our learning algorithms to tune performance. In other work we have already shown success at deploying distributed impedance control on tensegrity robots, along with compelling results from biologically inspired Central Pattern Generators (CPG’s). Both of these approaches require significant amounts of hand tuning of parameters, which our learning algorithms should be able to improve upon. Beyond the evolutionary approaches used so far, we also expect to explore multiagent control.



Destination Innovation features HET Project

July 27 2012

Original link at http://youtu.be/YvgFhNwD2Us.


Smartphone Phones Home

July 26 2012

On July 2, IRG conducted its second on-orbit test with the Nexus S smartphone. This test exercised the communication path from the phone on ISS to Johnson Space Center, in preparation for a future demonstration of remote control of a SPHERES robot on the ISS by an operator at JSC.

For this first communications test, the phone sent live video from the ISS to JSC, and the ground sent and received short ping-like packets. The phone transmitted data via Wi-Fi to an ISS laptop, and the laptop sent the data over the standard Ku-band satellite link to JSC.

During the test, we encountered such hiccups as a regularly-scheduled LOS (loss of signal, when the ISS is not in communication with the ground) and a router failing on the ISS. We recovered and were able to reach our goal of ten minutes of communication. The lessons we learned in this test will serve us well during our next on-orbit test, which will happen no earlier than the end of October.


SmartSPHERES Interview on NASA TV

July 2 2012

Original link at http://www.nasa.gov/multimedia/videogallery/index.html?media_id=147267281.


Ames Stereo Pipeline 2.0 Released

June 22 2012

Our free, open-source 3D modeling software for satellite imagery just had its 2.0 release. Many improvements were made, including the previously mentioned DigitalGlobe support. Users will find the code faster and more memory-efficient than our prior release.

Binaries and source code for Ames Stereo Pipeline are available from its project page:

irg.arc.nasa.gov/ngt/stereo

Our developers will be at the Planetary Data Workshop in Flagstaff, Arizona next week. We’ll be giving an overview of ASP v2 and running tutorials on how to process HiRISE and LRO-NAC imagery with the new software. For those who can’t make it, presentations will be made available online after the conference.


Pictures from K-REX Dev Week

April 30 2012

Last week IRGers held a Dev Week in Marscape, their 40m by 80m outdoor rover test facility. Recently K-REX had been living in a small indoor space while its motor controllers were being worked on, so the Dev Week was a chance to get the robot into a natural environment for some real-world testing.

K-REX and K10

K-REX, IRG's newest robot, in the Marscape with the older and smaller K10.

Because K-REX is a research robot, there are always new pieces to debug. During the week, everybody developing software or hardware for K-REX took turns testing the robot and collecting data.

Inside the trailer that serves as a control center while the robot is on Marscape. Many people in IRG work on K-REX.

Some of the tasks for the week included: testing the emergency stop on steep slopes, updating the stereo module from custom code to the OpenCV implementation, evaluating terrain maps, and improving the display shown to the ground operators.

Display of the map that K-REX produces for navigation.

Along the way, there were also avionics problems, networking issues, and a flat tire.

Overall, it was an incredibly intense, productive, and successful week!

K-REX on Marscape

K-REX outside the Marscape trailer. The 80' by 120' wind tunnel is visible in the background.


Results from HET Smartphone in Space

April 6 2012

For those of you waiting to see the results from our on-orbit checkout of the smartphone, this post summarizes our data.

The Tests

SPHERES operates inside the Japanese Experiment Module (JEM).  As shown in the diagram below, we’ve defined a JEM coordinate system with X forward, Y starboard, Z toward the deck, and the origin in the middle of the module.  For our test, Expedition 29 Commander Mike Fossum velcroed the smartphone to the -X face of the sphere and placed the sphere at the origin of the coordinate system.  From a laptop, he ran a program on the sphere to translate it one meter to +X and back to center, one meter to +Y and back, and one meter to +Z and back.  Then the sphere made a full rotation about each of the X, Y, and Z axes.
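Written out schematically (in a hypothetical encoding of our own; the real SPHERES software uses its own command format), the commanded test was:

```python
# Positions are in meters in the JEM frame; rotations are in degrees
# about a body axis.
checkout_sequence = [
    ("translate", (1, 0, 0)), ("translate", (0, 0, 0)),   # +X and back
    ("translate", (0, 1, 0)), ("translate", (0, 0, 0)),   # +Y and back
    ("translate", (0, 0, 1)), ("translate", (0, 0, 0)),   # +Z and back
    ("rotate", "X", 360), ("rotate", "Y", 360), ("rotate", "Z", 360),
]
```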

ISS and Ground Coordinate Systems

Figure 1 - Coordinate Systems. We define X-forward, Y-starboard, and Z-deck in the Japanese Experiment Module. On the ground Y and Z are parallel to the plane of the table, and X is up. The smartphone uses the coordinate system defined in the Android documentation which has X up and Z pointing out of the screen. The sphere itself has its own coordinate system.

After the on-orbit test, we ran a similar series of tests with a sphere and a smartphone on the ground.  On the ground, the sphere floats in an air carriage on a table and is constrained to three degrees of freedom.  Usually planar degrees of freedom are called X, Y, and rotation about Z, but because of the shape of our ground lab, we had to label these degrees Y, Z, and rotation about X.  Everyone should try thinking sideways now and then.

Figure 2 - SPHERES and HET Smartphone on an air carriage. The black tanks contain carbon dioxide, which is forced out the bottom of the blue puck. When the air carriage is on a flat, level surface, the CO2 overcomes enough friction for the sphere to move itself around with its thrusters. In our lab, the table top is the YZ plane.

The Logger App (available on the Android Market here) ran on the phone during all these tests.  The app recorded data from all available sensors on the phone, though not all the sensor data was usable.  For instance, in space the GPS never got a lock on enough satellites to figure out a position; that’s not surprising since the unit wasn’t designed to be 200 miles above sea level, whizzing around the earth every 90 minutes.  The battery temperature stayed normal, and the proximity sensor was also unenlightening, since the astronaut didn’t put the phone up to his face.  The fun data came from the gyroscope, accelerometer, and magnetic field sensor, which we look at in the next section.

The Data

In this section, we compare data collected in orbit to data collected on the ground.  First, let’s look at the gyroscope data.  Here are plots that show the sphere rotating 360 degrees about its Y axis.

Figure 3 - Smartphone gyroscope data shows that the SPHERES unit moves differently in orbit than on earth.

The X axis shows time in nanoseconds since the phone started up. The Y axis shows angular speed in radians per second.  The top plot shows rotation about Y; the bottom plot shows rotation about negative Y, so the spheres were in fact rotating in opposite directions during these tests.  The graphs each have four humps because of the way the motion was programmed: the sphere rotated 90 degrees and paused, then another 90 degrees and paused, and so on. The sphere came closer to a complete stop between turns in orbit than it did on the ground.

The ISS sphere turned faster (reaching nearly -0.3 radians per second) than the ground sphere (barely 0.2 radians per second), resulting in the test taking less time to run on orbit (~25 s) than it did on the ground (~45 s).  The spheres do not have a “speed” setting; they simply go to their commanded position as quickly as possible.  The sphere in space was faster because it did not have to pull an air carriage around with it, and because it had less friction.

In the orbit plot, there are spikes causing “stair-steps” at regular intervals.  Those spikes are caused by the sphere’s thrusters firing.  The thruster spikes are visible but smaller in the ground plot, once again because of mass and friction differences.  The blue and red X and Z lines are also interesting: in the ground plot, they are perfectly flat.  The sphere was securely in an air carriage and couldn’t tip.  In the orbit plot, the graph shows a bobble in X and Z at the beginning of the test as the sphere tried to align itself with the coordinate axes.  The sphere aligns well by the end of the first quarter rotation, but the X and Z lines show residual oscillation, showing that the sphere is clearly not sitting on a level table.
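If you’d like to make similar plots from your own Logger App recordings, a minimal sketch follows. The column layout here is an assumption of ours; the app’s actual log format may differ:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed columns: timestamp (ns), then angular speed about the phone's
# X, Y, Z axes in rad/s.
data = np.loadtxt("gyroscope.csv", delimiter=",")
t = (data[:, 0] - data[0, 0]) * 1e-9          # seconds since logging began

for i, axis in enumerate("XYZ"):
    plt.plot(t, data[:, i + 1], label=axis)
plt.xlabel("time (s)")
plt.ylabel("angular speed (rad/s)")
plt.legend()
plt.show()
```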

Figure 4 - Magnetometer data does not show as many differences between the ground and on-orbit tests.

Here are plots of magnetometer data gathered during the same tests.  The X axis of the plot is time in nanoseconds and the Y axis is magnetic field strength in microtesla.  We can tell that the phone was rotating about its Y axis, and that the sphere on the ground was more stable than the true free flyer.  The plots suggest that the magnetic field strength in orbit is lower than it is on the ground, but we can’t draw conclusions because we didn’t calibrate the magnetometers on either of the phones before running the tests.  We decided that calibrating the magnetometers was not important enough to warrant the time required, particularly considering the busy schedule of the ISS crew.

Figure 5 - Gravity Sensor. Data from the gravity sensor shows the phone was actually in orbit!

I enjoyed the data from the gravity sensor on the phone.  As you can see, gravity on earth is a healthy 9.8 m/s^2, and the gravity measured in orbit is 0 m/s^2.  (It looks like there’s a very small negative bias in the phone sensors).  If you think we staged the pictures of the floating sphere, here is the hard data to prove we didn’t :-).

Figure 6 - Linear Accelerometer. The sphere did not move fast enough to register on the linear accelerometer.

The sphere’s movement did not register on the linear accelerometer, either on the ground or in orbit.  The sphere has a mass of about four kilograms with only twelve 0.1 N thrusters to move it around, so it moves at a very sedate pace.  The phone is calibrated to measure faster motions.

And … everybody’s favorite sensor, the video camera.

The smartphone recorded this video during the Smartphone-SPHERES thruster checkout test. In the video, Mike Fossum places the sphere (with the smartphone already attached) in position and starts the test. All tests begin with 10 seconds of the sphere drifting while its state estimator converges. The video shows flashes from JAXA astronaut Satoshi Furukawa taking pictures during this test. It appears that the flashes cause the sphere to reset in the middle of the test, and the sphere ends up drifting. One of the sonar beacons that is part of the sphere’s localization system comes into view at 0:51. It is a small dark box with a green light in the upper left-hand corner. It’s visible again at 1:18.

So there you have it, results from the first smartphone to operate in orbit. Because the sphere already has its own suite of sensors and well-tested state estimation code, we do not currently plan to use the sensors on the smartphone for localization. In fact, our next goal will be to connect the sphere and the smartphone with a cable so they can exchange data, including the sphere’s position. The phone will receive commands from a laptop over Wi-Fi, and send position commands to the sphere through the cable. We hope to test the new sphere-phone system later this year.


Meet a Robot: K-REX

March 30 2012

IRG has quite a few robots, and not all of them get their fair share of publicity. Today we’d like to show you one of our upcoming stars, K-REX. This brute is as big as a car and has independent drive and steering for each wheel. He’ll serve as our most versatile test platform, with which we hope to drive faster and over more difficult terrain.

K-REX was designed by ProtoInnovations in Pittsburgh as part of an SBIR awarded by NASA. IRG adopted the prototype robot after the completion of the project. We plan to use this machine, here on Earth only, to test future mission scenarios and guidance software.

This new robot’s most interesting feature is its modularity. Each rocker, the side member to which the wheels are attached, is hot-swappable: in the event of a failure, a lever can be pulled and the whole rocker removed without tools, allowing fast recovery in the field. The central module is essentially a large trunk with plenty of space for payloads such as LIDAR, ground-penetrating radar, a robotic arm, or possibly a drill. Compared to K10, IRG’s older platform, K-REX can carry heavier payloads and drive twice as fast.

Currently IRG’s roboticists are busy improving K-REX after its first big field test last December.  During that event K-REX drove just about everywhere in an unused basalt quarry near the San Luis Reservoir. That test answered questions like “What is the optimal LIDAR placement?”, “How steep a slope can we drive?”, and “What rocks can we drive over?”. Now the team is working on improving motor control among the four steering columns and also addressing battery life. These problems are points of excitement here in IRG because K-REX is a blank slate on which new ideas can be tried. Now is the time for the team to try new battery chemistries and to investigate new forms of motor control.

K-REX will not replace IRG’s older robots, but will instead become an additional platform to help answer questions that the smaller K10s could not. As the summer rolls around, K-REX will get more action in the sun. The next upcoming task is serving as a Centaur-2 stand-in for navigation, followed later by a “lava tube” test, which we’ll be sure to share pictures from.

