If you’ve been following this blog, you know that a big project for IRG this past year has been the Human Exploration Technology Surface Telerobotics Project. The project culminated this summer in three on-orbit test sessions, when we got to see our work used in space. The first session took place on June 17 with Chris Cassidy, the second session on July 26 with Luca Parmitano, and the third session on August 20 with Karen Nyberg. The three crew members successfully scouted for, deployed, and inspected a simulated radio telescope by remotely commanding our K10 rover at the IRG Roverscape. Here is some of the media coverage of our tests:
We made this video to introduce the Surface Telerobotics Project to the astronaut who will command the K10 rover. Enjoy!
Last week IRG’s Surface Telerobotics Project conducted its first Operational Readiness Test (ORT). In the Surface Telerobotics Project, an astronaut on the International Space Station will control the K10 rover on Earth to simulate deploying a radio telescope. This experiment will be the first time that an astronaut in space controls a full-scale rover on the ground in real time. The experiment is scheduled for July and August 2013, and before then we have three ORTs scheduled as practice runs to make sure everything goes smoothly.
Here’s a video from the first ORT. We are operating on IRG’s new Roverscape, a 2-acre area at Ames specially designed for our rovers, with hills, craters, and a flat area. Most of our team is working inside the Roverscape building, which is so new that it was still being constructed as we were testing.
K10 Red was the rover of the day. For the real experiment, K10 Black (Red’s twin) will be ready to swap in if Red breaks. Near the end of the video, you’ll see K10 Red deploy a roll of Kapton film that simulates an arm of a radio antenna. On the moon, the film would stay in place, but the wind on Earth requires that we put weights on the film to keep it from blowing away.
Next week is ORT 2!
Surface Telerobotics is a planned 2013 test to examine how astronauts in the International Space Station (ISS) can remotely operate a surface robot across short time delays. This test will be performed during Increment 35/36 to obtain baseline engineering data and will improve our understanding of how to: (1) deploy a crew-controlled telerobotic system for performing surface activities and (2) conduct joint human-robot exploration operations. This test will also help reduce risks for future human missions, identify technical gaps, and refine key system requirements.
The Moon’s farside is a possible early goal for missions beyond Low Earth Orbit (LEO) using the Orion Multi-Purpose Crew Vehicle (MPCV) to explore incrementally more distant destinations. The lunar L2 Lagrange Point is a location where the combined gravity of the Earth and Moon allows a spacecraft to be synchronized with the Moon in its orbit around the Earth, so that the spacecraft is relatively stationary over the farside of the Moon. Such a mission would be a proving ground for future exploration missions to deep space while also overseeing scientifically important investigations.
From the Lagrange Point, an astronaut would teleoperate a rover on the lunar farside that would deploy a low radio frequency telescope to acquire observations of the Universe’s first stars/galaxies. This is a key science objective of the 2010 Astronomy & Astrophysics Decadal Survey. During Surface Telerobotics operations, we will simulate a telescope/antenna deployment.
The ISS crew will control a NASA K10 planetary rover operating at a NASA outdoor robotics testbed. The rover will carry a variety of sensors and instruments, including a high resolution panoramic imager, a 3D scanning lidar, and an antenna deployment mechanism.
The crew will control the robot in “command sequencing with interactive monitoring” mode. The crew will command the rover to execute pre-generated sequences of tasks including drives and instrument measurements. The robot will execute the sequence autonomously while the crew monitors live telemetry and instrument data. If an issue arises, the crew can interrupt the current sequence and use basic manual tele-op commands to maneuver the robot out of trouble or command additional data acquisitions.
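The flow above can be sketched in a few lines of Python. This is an illustrative sketch of the "command sequencing with interactive monitoring" idea, not the actual K10 ground software; the task names and the `SequenceExecutor` interface are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[], None]

class SequenceExecutor:
    """Runs a pre-generated task sequence; the crew can interrupt at any time."""

    def __init__(self):
        self.interrupted = False

    def interrupt(self):
        """Called by the crew to halt the sequence (e.g. to switch to manual tele-op)."""
        self.interrupted = True

    def execute(self, tasks: List[Task]):
        completed = []
        for task in tasks:
            if self.interrupted:
                break          # hand control back to the crew
            task.run()         # robot executes this step autonomously
            completed.append(task.name)
        return completed

executor = SequenceExecutor()
plan = [Task("drive_to_waypoint_1", lambda: None),
        Task("panoramic_image", lambda: None),
        Task("deploy_antenna_segment", lambda: None)]
print(executor.execute(plan))
# -> ['drive_to_waypoint_1', 'panoramic_image', 'deploy_antenna_segment']
```

If the crew calls `interrupt()` while a sequence is pending, the remaining tasks are skipped, which mirrors the ability to break out of a sequence and issue manual commands instead.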
As a first step toward the 2013 test, on September 22nd, at the 2012 International Observe the Moon Night event at NASA Ames Research Center, we demonstrated deployment of a polyimide film antenna substrate. The video below shows the K10 rover playing out a polyimide (Kapton) film on the Ames Marscape.
On July 2, IRG conducted its second on-orbit test with the Nexus S smartphone. This test exercised the communication path from the phone on ISS to Johnson Space Center, in preparation for a future demonstration of remote control of a SPHERES robot on the ISS by an operator at JSC.
For this first communications test, the phone sent live video from the ISS to JSC, and the ground sent and received short ping-like packets. The phone transmitted data via wifi to an ISS laptop, and the laptop sent the data over the standard Ku band satellite link to JSC.
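A "ping-like" round-trip check of this kind can be sketched with plain UDP. The function below is only a loopback-style illustration; the actual phone/laptop/Ku-band path and its endpoints are not public, so the address passed in is a placeholder.

```python
import socket
import time

def round_trip_ms(sock, addr, payload=b"ping", timeout=2.0):
    """Send a small packet, wait for the echo, return round-trip time in ms."""
    sock.settimeout(timeout)
    start = time.monotonic()
    sock.sendto(payload, addr)
    data, _ = sock.recvfrom(1024)
    assert data == payload        # the echo should match what we sent
    return (time.monotonic() - start) * 1000.0
```

Against a local echo server this reports sub-millisecond times; over a geostationary satellite relay like the Ku-band link, round trips of several hundred milliseconds would be expected.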
During the test, we encountered such hiccups as a regularly-scheduled LOS (loss of signal, when the ISS is not in communication with the ground) and a router failing on the ISS. We recovered and were able to reach our goal of ten minutes of communication. The lessons we learned in this test will serve us well during our next on-orbit test, which will happen no earlier than the end of October.
Last week IRGers held a Dev Week at Marscape, their 40m by 80m outdoor rover test facility. K-REX had recently been living in a small indoor space while its motor controllers were being worked on, so Dev Week was a chance to get the robot into a natural environment for some real-world testing.
Because K-REX is a research robot, there are always new pieces to debug. During the week, everyone developing software or hardware for K-REX took turns testing and taking data.
Some of the tasks for the week included: testing the emergency stop on steep slopes, updating the stereo module from custom code to the OpenCV implementation, evaluating terrain maps, and improving the display shown to the ground operators.
Along the way, there were also avionics problems, networking issues, and a flat tire.
Overall, it was an incredibly intense, productive, and successful week!
For those of you waiting to see the results from our on-orbit checkout of the smartphone, this post summarizes our data.
SPHERES operates inside the Japanese Experiment Module (JEM). As shown in the diagram below, we’ve defined a JEM coordinate system with X forward, Y starboard, Z toward the deck, and the origin in the middle of the module. For our test, Expedition 29 Commander Mike Fossum velcroed the smartphone to the -X face of the sphere and placed the sphere at the origin of the coordinate system. From a laptop, he ran a program on the sphere to translate it one meter to +X and back to center, one meter to +Y and back, and one meter to +Z and back. Then the sphere made a full rotation about each of the X, Y, and Z axes.
After the on-orbit test, we ran a similar series of tests with a sphere and a smartphone on the ground. On the ground, the sphere floats in an air carriage on a table and is constrained to three degrees of freedom. Usually planar degrees of freedom are called X, Y, and rotation about Z, but because of the shape of our ground lab, we had to label these degrees Y, Z, and rotation about X. Everyone should try thinking sideways now and then.
The Logger App (available on the Android Market) ran on the phone during all these tests. The app recorded data from all available sensors on the phone, though not all the sensor data was usable. For instance, in space the GPS never got a lock on enough satellites to figure out a position; that’s not surprising, since the unit wasn’t designed to be 200 miles above sea level, whizzing around the Earth every 90 minutes. The battery temperature stayed normal, and the proximity sensor was also unenlightening, since the astronaut didn’t put the phone up to his face. The fun data came from the gyroscope, accelerometer, and magnetic field sensor, which we look at in the next section.
In this section, we compare data collected in orbit to data collected on the ground. First, let’s look at the gyroscope data. Here are plots that show the sphere rotating 360 degrees about its Y axis.
The X axis shows time in nanoseconds since the phone started up, and the Y axis shows angular speed in radians per second. The top plot shows rotation about Y; the bottom plot shows rotation about negative Y, so the spheres were in fact rotating in opposite directions during these tests. Each graph has four humps because of the way the motion was programmed: the sphere rotated 90 degrees and paused, then another 90 degrees and paused, and so on. The sphere came closer to a complete stop between turns in orbit than it did on the ground.
The ISS sphere turned faster (reaching nearly -0.3 radians per second) than the ground sphere (barely 0.2 radians per second), so the test took less time to run in orbit (~25 s) than it did on the ground (~45 s). The spheres do not have a “speed” setting; they simply go to their commanded position as quickly as possible. The sphere in space was faster because it did not have to pull an air carriage around with it, and because it had less friction.
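A simple sanity check on gyro traces like these is to integrate the angular rate over time, which should recover the commanded rotation: about 2*pi radians for one full turn. The sketch below uses synthetic data in place of the real log (timestamps in nanoseconds, matching the phone's sensor output); it is not the actual analysis code.

```python
import math

def integrate_rotation(t_ns, omega_rad_s):
    """Trapezoidal integration of angular rate -> total angle in radians."""
    total = 0.0
    for i in range(1, len(t_ns)):
        dt = (t_ns[i] - t_ns[i - 1]) * 1e-9   # nanoseconds -> seconds
        total += 0.5 * (omega_rad_s[i] + omega_rad_s[i - 1]) * dt
    return total

# Synthetic log: a steady 0.2 rad/s (roughly the ground sphere's rate)
# held for 31.4 seconds completes one full rotation.
t = [int(i * 0.1e9) for i in range(315)]
w = [0.2] * 315
print(integrate_rotation(t, w))   # ~6.28, i.e. 2*pi
```

The same integration on the real data would also reveal the four pauses as flat segments in the accumulated angle.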
In the orbit plot, there are spikes causing “stair-steps” at regular intervals. Those spikes are caused by the sphere’s thrusters firing. The thruster spikes are visible but smaller in the ground plot, once again because of mass and friction differences. The blue and red X and Z lines are also interesting: in the ground plot, they are perfectly flat. The sphere was securely in an air carriage and couldn’t tip. In the orbit plot, the graph shows a bobble in X and Z at the beginning of the test as the sphere tried to align itself with the coordinate axes. The sphere aligns well by the end of the first quarter rotation, but the X and Z lines show residual oscillation, showing that the sphere is clearly not sitting on a level table.
Here are plots of magnetometer data gathered during the same tests. The X axis of the plot is time in nanoseconds and the Y axis is magnetic field strength in microTesla. We can tell that the phone was rotating about its Y axis, and that the sphere on the ground is more stable than the true free flyer. The plots suggest that the magnetic field strength in orbit is lower than it is on the ground, but we can’t draw conclusions because we didn’t calibrate the magnetometers on either of the phones before running the tests. We decided that calibrating the magnetometers was not important enough to warrant the time required, particularly considering the busy schedule of the ISS crew.
I enjoyed the data from the gravity sensor on the phone. As you can see, gravity on earth is a healthy 9.8 m/s^2, and the gravity measured in orbit is 0 m/s^2. (It looks like there’s a very small negative bias in the phone sensors). If you think we staged the pictures of the floating sphere, here is the hard data to prove we didn’t :-).
The sphere’s movement did not register on the linear accelerometer, either on the ground or in orbit. The sphere has a mass of about four kilograms with only twelve 0.1 N thrusters to move it around, so it moves at a very sedate pace. The phone is calibrated to measure faster motions.
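The back-of-envelope numbers make the point. With F = m*a, even several thrusters firing together on a ~4 kg sphere produce only a few hundredths of a m/s^2 (the thruster counts below are illustrative):

```python
mass_kg = 4.0    # approximate mass of a sphere
thrust_n = 0.1   # force of one thruster

for n_thrusters in (1, 2, 4):
    a = n_thrusters * thrust_n / mass_kg   # F = m * a
    print(f"{n_thrusters} thruster(s): {a:.3f} m/s^2")
# 1 thruster(s): 0.025 m/s^2
# 2 thruster(s): 0.050 m/s^2
# 4 thruster(s): 0.100 m/s^2
```

Compared to the 9.8 m/s^2 a phone accelerometer routinely measures on the ground, these accelerations are easily lost in the sensor noise.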
And … everybody’s favorite sensor, the video camera.
The smartphone recorded this video during the Smartphone-SPHERES thruster checkout test. In the video, Mike Fossum places the sphere (with the smartphone already attached) in position and starts the test. All tests begin with 10 seconds of the sphere drifting while its state estimator converges. The video shows flashes from JAXA astronaut Satoshi Furukawa taking pictures during this test. It appears that the flashes cause the sphere to reset in the middle of the test, and the sphere ends up drifting. One of the sonar beacons that is part of the sphere’s localization system comes into view at 0:51. It is a small dark box with a green light in the upper lefthand corner. It’s visible again at 1:18.
So there you have it, results from the first smartphone to operate in orbit. Because the sphere already has its own suite of sensors and well-tested state estimation code, we do not currently plan to use the sensors on the smartphone for localization. In fact, our next goal will be to connect the sphere and the smartphone with a cable so they can exchange data, including the sphere’s position. The phone will receive commands from a laptop over wi-fi, and send position commands to the sphere through the cable. We hope to test the new sphere-phone system later this year.
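The planned phone-as-bridge role can be sketched as a small relay: commands arrive from the laptop over wi-fi and are forwarded to the sphere over the cable. The queue-based "links" below are stand-ins for the real transports, which this post does not describe.

```python
import queue

class Relay:
    """Forwards commands from the wi-fi side to the cable side."""

    def __init__(self, wifi_rx, cable_tx):
        self.wifi_rx = wifi_rx    # commands arriving from the laptop
        self.cable_tx = cable_tx  # commands going to the sphere

    def pump(self):
        """Forward every pending laptop command to the sphere; return the count."""
        forwarded = 0
        while True:
            try:
                cmd = self.wifi_rx.get_nowait()
            except queue.Empty:
                return forwarded
            self.cable_tx.put(cmd)
            forwarded += 1

wifi, cable = queue.Queue(), queue.Queue()
relay = Relay(wifi, cable)
wifi.put({"type": "move_to", "pos_m": [0.5, 0.0, 0.0]})
print(relay.pump())   # 1
```

In the real system the cable would also carry the sphere's position estimates back to the phone, closing the loop between the laptop operator and the free flyer.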
IRG has quite a few robots and not all of them get their fair share of publicity. Today we’d like to show you one of our upcoming stars, K-REX. This brute is as big as a car and has independent drive and steering for each wheel. He’ll serve as our most versatile test platform, with which we hope to drive faster and over more difficult terrain.
K-REX was designed by ProtoInnovations in Pittsburgh as part of an SBIR awarded by NASA. IRG adopted the prototype robot after the completion of the project. We plan to use this machine in testing future mission scenarios and guidance software here on Earth only.
This new robot’s most interesting feature is its modularity. Each rocker, the side member to which the wheels are attached, is hot-swappable: in the event of a failure, a lever can be pulled and the whole rocker removed without tools, allowing fast recovery in the field. The central module is essentially a large trunk with plenty of space for payloads such as LIDAR, ground-penetrating radar, a robotic arm, or possibly a drill. Compared to K10, IRG’s older platform, K-REX can carry heavier payloads and drive twice as fast.
Currently IRG’s roboticists are busy improving K-REX after its first big field test last December, during which K-REX drove just about everywhere in an unused basalt quarry near the San Luis Reservoir. That test answered questions like “What is the optimal LIDAR placement?”, “How steep a slope can we drive?”, and “What rocks can we drive over?”. Now the team is working on improving motor control across the four steering columns and on extending battery life. These problems are a point of excitement here in IRG because K-REX is a blank slate on which new ideas can be tried. Now is the time for the team to try new battery chemistries and to investigate new forms of motor control.
K-REX will not replace IRG’s older robots but will instead become an additional platform to help answer questions that the smaller K10s could not. As summer rolls around, K-REX will get more action in the sun. The next upcoming task is serving as a Centaur-2 stand-in for navigation, followed by a “lava tube” test, which we’ll be sure to share pictures from.
There’s a short article with a video about our Human Exploration Telerobotics project on the IEEE Spectrum website.
This fall, the crisis mapping team extended their work on flood mapping to measure the water level of Lake Tahoe over time ...
The Crisis Mapping team recently wrapped up our research on flood mapping. Previously, we evaluated existing MODIS flood mapping algorithms across a diverse range ...
We are excited to announce the open source release of the Crisis Mapping Toolkit under the Apache 2.0 license! The ...