
NASA’s “New Apollo Moment”: Naturally Guiding Robotic Avatars In Space Exploration

Feb 25, 2014

Michael Venables in Technology And Society, Medium.com

“No matter how completely technics relies upon the objective procedures of the sciences, it does not form an independent system, like the universe: it exists as an element in human culture and it promises well or ill as the social groups that exploit it promise well or ill.” —Lewis Mumford

I had the opportunity a couple of weeks ago to speak with NASA’s Jeff Norris about the agency’s ongoing development of complex human-robotic interfaces for space exploration. Norris heads up the Planning Software Systems Group (Planning and Execution Systems) at NASA’s Jet Propulsion Laboratory. Underneath this prosaic government designation, his group is doing some of the most advanced (and most interesting) research in the world on integrating video game and consumer technologies into the technical framework of human-guided, robotic space exploration.

I started the interview by asking about the partnership with Sony’s Magic Lab regarding the creation of an interactive space exploration module on the PlayStation 4 platform. Because of NDA restrictions, Norris couldn't comment on the status of the Sony partnership, but he did confirm that it would play a “major part of the strategy that we’re pursuing in this area.” He added that many of the technologies being developed in places like the video game industry are highly applicable to the work being done at NASA. So what kinds of research is Norris’ group doing in their laboratory sanctum at NASA’s Jet Propulsion Laboratory?

I asked Norris to explain what he meant by the phrase “mission-critical agility,” a concept about which he cares deeply, and on which he has given talks outside his official capacity. NASA wants to accomplish as much as possible with the resources that are available. Norris frames the concept in terms of trying to do challenging things while thinking about new ways to do them, all the time. Building a better spacecraft, writing better software: it means finding ways to safely experiment, to go to more places and get more science. He calls “agility” a buzzword, a broad label for doing business in a way that is accelerated and very responsive to the needs of the scientists in the larger user community.

Norris leads a set of projects in a group known as HRS (Human-Robotic Systems), the team designing the future of human-robotic communication. One of its products is RAPID (Robot Application Programming Interface Delegate), a communication protocol NASA has developed that establishes a consistent way for robots to communicate with the systems that control them. He says that the future of human space exploration is best seen as a cooperation between humans and their robotic tools, robots that support us in our exploration.

The expectation, Norris says, is that there will be many kinds of robots specialized for different purposes. To make mission robots easier for astronauts and mission control to operate, RAPID gives them all a consistent way of speaking. The protocol was developed by NASA’s JPL, Ames and Johnson Space Centers and open-sourced, so that anyone with an interest in robotics can adapt it to their own robots. Within the Human-Robotic Systems project, Norris maintains specialized areas of interest. He is involved with the development of interfaces, the visualization technologies that will make human beings more effective when they are controlling robots and when they are interacting with the data those robots return to mission control.
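RAPID itself is open-sourced by NASA, as Norris notes, but its actual definitions are beyond the scope of this article. Purely as an illustration of the underlying idea (a single message shape that any robot can emit and any control station can consume), here is a minimal Python sketch; the class, fields and names are invented for the example, not taken from RAPID:

```python
from dataclasses import dataclass
import time

# Hypothetical message shape, loosely inspired by the idea behind RAPID:
# every robot reports state through the same structure, so one control
# station can drive many different robots. None of these identifiers are
# real RAPID names.

@dataclass
class PoseSample:
    robot_id: str                    # e.g. "athlete-1" or "r2-torso"
    timestamp: float                 # seconds since the epoch
    xyz: tuple[float, float, float]  # position in a site frame, meters
    rpy: tuple[float, float, float]  # roll/pitch/yaw attitude, radians

def publish(sample: PoseSample) -> None:
    # A real system would hand this to messaging middleware; printing
    # stands in for the transport here.
    print(f"[{sample.timestamp:.1f}] {sample.robot_id} at {sample.xyz}")

publish(PoseSample("athlete-1", time.time(), (12.0, -3.5, 0.0), (0.0, 0.0, 1.57)))
```

The payoff of a shared shape like this is that a control station written once can subscribe to an ATHLETE rover today and a different robot tomorrow, without new glue code for each machine.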

I asked Norris which specific video game technologies NASA is integrating into its most advanced research and development efforts. These involve some core video game technologies and some that are, in his words, “on the fringe” of video game tech. The list includes projects with the Oculus Rift head-mounted display and a long-term working relationship with Microsoft; NASA did software development work with the first-generation Kinect sensor before it was released to the general market. That work led to a number of projects, among them the Xbox Live video game Mars Rover Landing, NASA’s first console video game, released in July 2012, just before the Curiosity landing. NASA has also had some high-level discussions with Nintendo and is working with Sony’s Magic Lab, as mentioned previously.

NASA is currently working with Sixense, the company behind the STEM System’s wireless motion-tracking technology. Norris mentioned that NASA had been sharing data with the company, along with some of the applications his group had been working on. Sixense took some of that material and adapted its STEM sensors to let an animated astronaut model walk around a Martian scene with the rover. In the resulting demo, STEM motion tracking and control are layered onto the virtual Martian landscape, showing the level of immersion and interaction the user/operator can experience.

Norris also mentions NASA/JPL’s work with Leap Motion, which is not, strictly speaking, a device restricted to gaming: it is a 3D motion and gesture controller for game applications, instructional music applications, 3D design and 3D learning environments. He emphasizes that NASA/JPL’s real focus is not gaming in particular but consumer technology in general. These are devices that have had so much money invested in making them highly usable that NASA is finding them quite applicable to current projects.
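To make that concrete, here is a small Python sketch of the kind of mapping a gesture controller invites: hand position in, robot command out. It does not use the actual Leap Motion SDK, and the neutral point, gain and dead zone are invented values for the example:

```python
# Toy gesture-to-command mapping (not the Leap Motion SDK): hand
# displacement from an assumed neutral point becomes a scaled velocity
# command for a robot arm's end effector.

NEUTRAL = (0.0, 200.0, 0.0)   # assumed resting palm position, millimeters
GAIN = 0.002                  # mm of hand travel -> m/s of arm motion

def hand_to_velocity(palm_mm: tuple[float, float, float]) -> tuple[float, ...]:
    """Map a palm position to an end-effector velocity, with a dead zone."""
    cmd = []
    for axis, rest in zip(palm_mm, NEUTRAL):
        delta = axis - rest
        cmd.append(0.0 if abs(delta) < 10.0 else delta * GAIN)  # 10 mm dead zone
    return tuple(cmd)

print(hand_to_velocity((55.0, 200.0, -30.0)))  # roughly (0.11, 0.0, -0.06)
```

The dead zone is the kind of small design choice that matters in practice: without it, sensor jitter around a resting hand would leak into the robot as constant drift.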

Another company that NASA/JPL is working with, Norris adds, is ZSpace, which makes 3D holographic imaging displays that allow interaction with simulated objects in virtual environments. Both Leap Motion and ZSpace are being tested as interface technologies for future NASA robots such as the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) rover, pictured below in an advanced research task of “rough and steep terrain lunar surface mobility.”

Norris also mentions that NASA/JPL makes a lot of use of PrimeSense’s 3D depth-sensing (3D-range) technology, which he says is very similar to the sensor inside the first-generation Kinect. NASA/JPL is also a contributing member of the Google Glass development program.

K. Radhakrishnan, the chairman of India’s space agency ISRO, commented in November 2013 (to much brouhaha in the press) on how conducting limited but more comprehensive ground tests helped the agency run the Mangalyaan Mars mission on a lean, cost-effective budget.

I asked Norris how feasible it would be for NASA to use virtual reality technology to conduct simulated ground tests and perhaps save money during the costly functional testing of Mars mission craft. Norris told me he remains passionate about using virtual reality in space exploration, for both testing and mission execution. But he hesitates to say that VR has the potential to replace many of NASA’s spacecraft ground tests. Many of those tests deal with the performance of the physical components of the system and the ways that the software, the avionics and the hardware of the vehicle interact with each other, especially in the extreme environmental conditions encountered in space and in places like Mars.

Virtual reality’s promise is to engage a human operator with a task in a way that is highly natural to them, Norris adds, very similar to the way they engage and interact with the natural world. NASA has used virtual reality in mission rehearsals for crew members at Johnson Space Center for many years. But it is reaching a little too far to expect VR technology to replace, for example, an environmental test of a spacecraft, Norris says. It is difficult to justify or declare how much testing is enough, he adds. Yet, Norris points out, NASA engineers still discover unexpected things in those tests, problems that have threatened and even ended missions. He cautions that the tests, while they may seem expensive and cumbersome, do have a purpose.

I asked Norris how NASA could use modular, multi-use robotic exploration techniques (as in the Modular Common Spacecraft Bus) to reach the quality ideal of “faster, better, cheaper” while minimizing risks to mission astronauts and hardware. He is quick to point out several examples of “creative reuse” of mission components at NASA, such as the Phoenix Mars mission (launched August 2007), which was very similar in many ways to the failed Mars Polar Lander mission (launched January 1999). Some of the flight spare components from the Mars Polar Lander, pieces of hardware that were meant to be discarded, were used again in flight. Norris adds that other things had changed about the spacecraft, including many of the instruments, so the system had to accommodate the reuse of those components. One can find many other examples throughout NASA missions, he points out, where teams had to get creative in this way simply to control costs and get more done.

Norris’s specialty, however, lies in developing “ground software”; he has spent 15 years building the systems that control NASA spacecraft. In that area, his teams have embraced, and directly benefited from, the “modular architecture” I mentioned earlier. For example, Norris points out that a significant portion of the control software for Spirit and Opportunity, Phoenix and the Curiosity Mars Science Laboratory rover is built on top of an architecture called OSDR, involving Eclipse, a component-based framework for the Java programming language. Norris mentions that NASA is now looking toward more web-based architectures, again with an emphasis on component-based design, and his team is evaluating which of them best serves its needs.

Norris reminds me of one of the philosophies floating around NASA: extract what NASA workers call “multiple-mission capabilities,” the parts of the operations systems that apply to multiple missions, into core packages, and then isolate the mission-specific code needed for a particular mission: things that might pertain to a particular instrument, or a particular destination. He adds that they have had a lot of success reducing costs using that strategy.
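In code, that philosophy reduces to a familiar pattern: a mission-agnostic core that exposes extension points, with mission-specific pieces plugged in at the edges. The Python sketch below only illustrates that split; the class and the instrument driver are invented for this article, not NASA code:

```python
# Sketch of "multiple-mission capabilities": shared operations code lives
# in a core class, while mission-specific code (a particular instrument,
# a particular destination) is supplied per mission. Names are illustrative.

class MissionOps:
    """Multi-mission core: knows nothing about any one spacecraft."""
    def __init__(self, instruments: dict):
        self.instruments = instruments  # mission-specific drivers plug in here

    def execute(self, instrument: str, command: str) -> str:
        return self.instruments[instrument](command)

# Mission-specific code, isolated from the core:
def chemcam_driver(command: str) -> str:
    return f"ChemCam executed: {command}"

msl = MissionOps({"chemcam": chemcam_driver})
print(msl.execute("chemcam", "LASER_RASTER 5x5"))
```

The core never changes when a new mission arrives; only the dictionary of mission-specific drivers does, which is where the cost savings Norris describes come from.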

I asked Norris what he would tell the wo(man) on the street about what robots will be able to do for us in space. How does he manage to characterize the promises of robotic space exploration to the public? This is clearly the thing Norris is most passionate about: thinking about how people are going to interact with robots in the future of space exploration.

“We want to go to a lot of different places. Mars is interesting, and we want to go there very much, but there are so many other places in the solar system. The ability to build a robot that is perfectly suited to a potentially very hazardous environment, that’s going to go swimming in the rings of Saturn, or something like that; the ability to build a robot that is optimized for that task, and then to control it in a way that makes you feel like you are there, to me feels like a very powerful competence. Because, here we are, able to use technologies that make us feel present in that environment, but in a way of inhabiting a robotic avatar that is perfectly attuned to that environment. That’s pretty phenomenal”.

Norris is quick to emphasize that, right now, with our existing technologies, it wouldn't be desirable to put a human being, no matter how nice their space suit, in the rings of Saturn. But NASA can put a robot there and control it in a way that makes us feel like we are right there, floating in the midst of the Saturnian rings.

Norris says that robotic avatars are about scale, about including so many more people in the experience, in the journey of exploration. He adds that he would love to see a future, as he puts it,

“…looking back on the 1969 Apollo moment when 600 million people sat in front of a television and watched Neil Armstrong taking his first steps on the moon. I think there was a magic about that — that was wrapped up in the fact that not only were we doing something that had never been done before, but so many people were there with us. And they were there because we found a medium, in television, that engaged them and let them feel a part of it, in a way that they had never experienced before”.

Looking to the future, Norris believes that robots and the kinds of interfaces NASA is working on can deliver us a “new Apollo moment.” He envisions it this way: “We look forward to the day when we put human boots on the soil of Mars. It will be a human accompanied by robots who are supporting them. And I want a billion human beings to be standing right there beside the astronaut, inhabiting those robotic avatars, almost welcoming them to the surface of Mars. That, I think, is the promise of these technologies”.

Norris adds that when we look forward to exploring beyond the solar system, to other places, one of the nice things about using robots is that we can send them in many different directions at once. He explains that even if the robots take many years to reach their destinations, we can basically wait for each of them to arrive, and then just flit, just jump between them and consume the data they are returning for us. Robotic data jumping, as it were.

Norris sees robots as “marvelous tools” for exploration, a great support to and companion for human space exploration, which he also finds very exciting.

I asked Norris about the SuperBall Bot Tensegrity Planetary Lander project, the space exploration robots known as “tensegrity robots.” Norris sees this NASA Ames project as a great example of the diversity and the ingenuity of the people who develop robots at NASA. He muses that he doesn't spend his time thinking about new kinds of robots, but he thinks about how to drive them.

Norris maintains that one of the things that makes his job fun is learning that there is no limit to the ingenuity of the people who think up new kinds of robots for him to learn to drive. The tensegrity robot is at an early stage of development and must still pass through the phases of actually locomoting, being packed and being deployed on a mission. When the team reaches the point of thinking seriously about how to accomplish missions, Norris says, we had better believe he will be very excited to help them control the robot and interact both with it and with the environment it is exploring!

I asked Norris what he would say to those who are skeptical about the great promise of robotic technology. He says that humans are marvelous explorers; we have a great history of exploring, of being drawn to unknown situations. He would call attention to the amazing ability humans have to rapidly understand environments just by being in them. He likens this to our experience of turning on a light in a dark room and orienting quickly to the size and configuration of that physical space.

“When we think about exploring other places, part of the reason for that is because of our natural abilities. The challenge of exploring other environments, distant environments, is that we have to think about environments that are not safe; the radiation in the environment and the distances make them not appropriate places for humans to go right now. When we think about sending a robot there, if we want those humans to be as effective as explorers in that distant environment as they are here on Earth, then we have to find a way to engage all those natural abilities that humans are endowed with, as effectively in this task as if they were exploring a canyon in Arizona. The way we do that, I believe, is building interfaces that connect the features of the robot and the abilities of the robot to a human in a way that is so natural that the human’s natural abilities work to their advantage and not against them”.

Consider an unimaginative human-robotic interface: someone staring at pictures on a screen, using a general-purpose mouse and keyboard to try to control the robot. Norris contends that such interfaces are not designed to engage the natural abilities of a human as effectively as interfaces that use virtual reality, body tracking or other such technologies. What is happening to a person using those more traditional interfaces, Norris concludes, is that they must constantly convert the abstraction they see on the screen, or the abstract input they supply through a keyboard or a mouse, into what is really happening on the other side.
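As a toy contrast, here is a Python sketch invented for this article (it is not drawn from any NASA interface). It only illustrates the conversion step Norris describes: with a keyboard, a keypress stands in for the operator’s intention; with head tracking, the operator’s own motion is the intention:

```python
import math

# Abstract interface: intent is filtered through discrete key steps.
def keyboard_pan(current_deg: float, key: str) -> float:
    step = {"left": -5.0, "right": 5.0}.get(key, 0.0)
    return current_deg + step

# Natural interface: the remote camera's yaw simply follows the head.
def head_tracked_pan(head_yaw_rad: float) -> float:
    return math.degrees(head_yaw_rad)

print(keyboard_pan(0.0, "right"))  # 5.0 degrees, one keypress at a time
print(head_tracked_pan(0.0873))    # ~5.0 degrees, expressed directly
```

The behavior is identical in this toy; the difference is cognitive. In the first function the operator must translate “look a little to the right” into key taps, which is exactly the constant conversion Norris wants to remove.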

“We are trying to remove that abstraction. It’s not that we’re trying to fool them into thinking that they’re there; that’s not the point. We just want to let their natural abilities operate as if they were there. That’s what it’s about. I see this as the way that we make our robotic assets something that can allow us to explore space and the distant environments we’re visiting as naturally as we explore the places we explore on Earth”.

As I think about and reflect on the work of Norris and other scientists at NASA and elsewhere, I see a great movement of researchers all over the world, working together to make real our human dream of operating robots with “telepresence” and to lead the exploration of space into the future. Norris spoke of this great idea, which he calls “telexploration,” in a 2013 von Kármán lecture: making low-cost holodecks for every NASA scientist, so they can see, hear and touch other, distant worlds. And for all of us who want to explore the Great Expanse.

This new wave of space exploration will be led by robotic explorers guided by human operators (teleoperation) immersed in remote environments, controlling the motion and actions of the tele-operated machines with some manner of human-machine interface (HMI) of the kind Norris builds. In this way, human researchers will perceive the colors, the light, the sound and perhaps even the touch (via interactive haptic technology) of planetary environments, acting naturally in them as if they were physically present on that distant landscape.

NASA has already amassed a world-renowned track record of space exploration using cutting-edge technologies; it simply speaks for itself. The agency’s current research projects supporting robotic space exploration include Robonaut 2, a humanoid robot with “dexterous manipulation”: the ability to use its hands to do work with a dexterity superior to a suited astronaut’s. R2 is now learning to practice some telemedicine techniques; an R2 teleoperator guided the robot in performing an ultrasound scan on a mannequin.

Human operators use R2's dexterity to perform tasks efficiently, applying the correct level of force and tracking progress with R2's vision system. R2 has also used a syringe (under teleoperator control), demonstrating the robot’s capabilities for telemedicine. Future missions might require physicians to conduct complex medical procedures on humans in distant locations, in low Earth orbit or even in the expanse of deep space.

A joint NASA and CSA project, the Special Purpose Dexterous Manipulator (SPDM), also known as Dextre, the ISS’s robotic handyperson, operates as a robotic satellite refueler, demonstrating on the exterior of the International Space Station how satellites could be refueled in orbit. Dextre uses four unique Robotic Refueling Mission (RRM) tools for satellite-servicing and refueling tasks, including cutting and manipulating protective blankets and wires, unscrewing caps and accessing valves, transferring fluid, and leaving a new cap in place for future refueling activities.

NASA has also designed a robotic moon miner called RASSOR (Regolith Advanced Surface Systems Operations Robot), built to autonomously drive around the Moon, scooping and hauling up to 40 pounds of lunar regolith. RASSOR would then pour the dirt into a processing plant on a larger lander, which would extract water, hydrogen and oxygen from the regolith.

The Super Ball Bots are deployable exploratory robots, designed to bounce on the planetary surface, deform and roll to any location during surface exploration missions. NASA is experimenting with controlling these robots with machine learning algorithms and oscillatory controls known as “Central Pattern Generators,” inspired by the biological neural networks controlling human locomotion.
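Central Pattern Generators are, at bottom, coupled oscillators: each actuator follows a rhythmic signal, and coupling terms hold the rhythms in a fixed phase relationship, the way a gait does. The short Python sketch below shows that idea in miniature; the parameters and the anti-phase target are illustrative choices for this article, not taken from the NASA Ames controllers:

```python
import math

FREQ = 0.5      # base oscillation frequency, Hz
COUPLING = 2.0  # strength of the pull toward the desired phase offset
DT = 0.01       # integration step, seconds

# Two oscillators that could drive opposing actuators; start them at the
# "wrong" offset and let the coupling pull them into anti-phase.
phase = [0.0, math.pi / 2]

for _ in range(500):  # simulate five seconds
    d0 = 2 * math.pi * FREQ + COUPLING * math.sin(phase[1] - phase[0] - math.pi)
    d1 = 2 * math.pi * FREQ + COUPLING * math.sin(phase[0] - phase[1] + math.pi)
    phase[0] += d0 * DT
    phase[1] += d1 * DT

# After settling, the two commands are equal and opposite (anti-phase).
motor_a, motor_b = math.sin(phase[0]), math.sin(phase[1])
print(f"actuator commands: {motor_a:+.2f} {motor_b:+.2f}")
```

The machine learning paired with these controllers can then be understood as tuning parameters like these (frequencies, couplings, phase offsets) until the structure rolls where intended.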

The NASA Centaur rovers were designed to carry the agency’s Robonaut upper bodies, in addition to other payloads. The Centaur 2 rover is the second-generation model of the series and is integrated with the Robonaut R2A torso. The Centaur 2 hardware provides robotic mobility and carries the world’s most advanced dexterous “mobile manipulation” system, with hybrid rover/arm manipulation. Climbing legs were tested in December for movement around the ISS. Future features include prospecting sensors, deeper excavation implements and devices for converting planetary raw materials into usable products.

The impressive Curiosity rover needs no introduction. This is the car-sized robot that just completed a 329-foot (100.3-meter) backward drive over Martian terrain.

(Animation: a series of nine images taken by the rear Hazard-Avoidance Camera, or rear Hazcam, on NASA’s Curiosity Mars rover as the rover drove over a dune spanning “Dingo Gap” on Mars.)

And, to show the scale of NASA’s Curiosity Mars exploration, a map shows the formidable route driven and the route planned for the rover, from before “Dingo Gap,” in the upper right, to the mission’s next science waypoint, “Kimberley” (formerly referred to as “KMS-9”), in the lower left, with the planned route marked in yellow.

Curiosity has driven an impressive 937 feet (285.5 meters) on the Martian surface since the rover’s Feb. 9 dune-crossing, for a total odometry of 3.24 miles (5.21 kilometers) since its August 2012 landing.

George Bernard Shaw once said, “You see things; and you say, ‘Why?’ But I dream things that never were; and I say, ‘Why not?’”

This seems to be the modus operandi of NASA: a renewed force, bent on using advanced technologies that help humans scale up to the future challenges of deep space exploration; an agency that, by default, researches, creates, tests, deploys and quality-checks awesome technologies for their future potential, seeking to make real for all of us what once seemed a distant figment of our cultural imagination, a fancy from science fiction manuscripts. With the efforts of men like Jeffrey Norris, I am inspired by NASA’s solid commitment to launching technologies that test the confines of that once-impossible star trek of exploring deep space.

I am looking forward to this cooperative, human-robot future. A new Apollo moment to come, of human explorers joining our robotic avatars on the surface of Mars. Where NASA astronauts might brush the rust-colored regolith off their boots. When, in the dim sunlight of the early-morning Martian mist, the astronauts may meet up with their colleagues, the vanguard of NASA’s robotic explorers, standing there on a red sand-covered rocky mound. And the NASA astronauts say to their robotic colleagues waiting for their arrival, “Do you see, my friends — we are still dreaming of tomorrow!”