

SMC-IT 2011

Reliable Software
Autonomy & Automation
CubeSat Software
Space Cybersecurity
Robotics Software
Engineering Design Tools
Fault Management
Real-Time Embedded Systems
Machine Vision
Image Processing
Flight Computing
Novel Applications
Mission Architecture Design
Operations Technologies
Middleware Services
Knowledge Management
Integrated System Health Management
Astronaut Support IT
Science Software Applications
On-board vs Ground Computing
Space Communications
Smart Instruments
Mission Assurance IT
Software Architectures & Tools


OCTOBER 1, 2010
Call for Full Papers and Mini-Workshop Summaries

NOVEMBER 1, 2010
Author Submission Website Open

DECEMBER 31, 2010
Full Papers and Mini-Workshop Summaries Due

MARCH 20, 2011
Author Acceptance Notification

MAY 19, 2011
Early Bird Registration Opens

MAY 19, 2011
Preliminary Program Announced

MAY 20, 2011
Camera-Ready Manuscripts Due (incorporating reviewer comments) for upload to the IEEE CPS website

JULY 1, 2011
Regular Registration Opens

AUGUST 2 - 4, 2011
SMC-IT 2011 Conference

AUGUST 5, 2011
Conference Tours

USGS, Menlo Park Tour
NASA Ames Research Center Tour
Computer History Museum
Intel Museum (on your own)
Hiller Aviation Museum

NOTE: To receive future announcements, please send a blank email to:




Held in conjunction with the Fourth International Conference on Space Mission Challenges for Information Technology (SMC-IT-2011), August 2 - 4, 2011,
in Palo Alto, California, USA


The complexity of future space operations will require innovative approaches for effectively and safely conveying information among crew members, and between crew members, vehicle management software, and ground control. The potential scope of auditory displays and speech interfaces has expanded beyond air-ground speech communication and simple alarms to include speech-based fault management instructions and alerts, heads-up directional guidance for EVA, spoken dialogue systems, auditory-haptic substitution, and the use of auditory cues for situational awareness. This workshop will examine the challenges auditory displays must meet to optimize the human-machine interface for space operations.

Theme and Goals:

The theme of the workshop is “Technologies that will enable auditory displays and spoken interactive assistants to best address the needs of the crew in all phases of flight and extra-vehicular operations.” This can include lessons learned from aeronautic and military communication control systems as well as other complex human-machine interfaces. The workshop combines presentations from subject matter experts, panel discussions, and responses to questions from the audience and the moderator. Questions to be addressed include best methods for auditory display management, criteria for preventing information overload, and considerations for tailoring speech interfaces to the needs of their users. The goal of the workshop is to illuminate the possibilities and complexities of interactive displays as a means of optimizing human-machine interfaces. The workshop will begin with a tutorial presentation on technical and perceptual aspects of auditory displays and spoken interactive information system interfaces (provided by the moderators), followed by shorter presentations from domain experts in human interfaces, and will conclude with an interactive question-and-answer session with the domain experts.

Participants and Abstracts:

Tutorial overview: Auditory Displays, Durand R. Begault, Human Systems Integration Division, NASA Ames Research Center, Advanced Controls and Displays Group.

From the standpoint of communications engineering, an auditory display is an integrated design for the entirety of acoustic information that arrives at the ears of a user, including radio communications, synthetic speech, caution and warning signals, and confirmatory audio feedback. When properly designed, interactive auditory displays can provide situational awareness regarding the location, proximity, and importance of multiple information streams, agents, and collaborators.
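One of the binaural cues such a spatial display manipulates is the interaural time difference (ITD). The following sketch is illustrative only (not from the workshop materials) and uses Woodworth's standard spherical-head approximation with an assumed head radius:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """ITD in seconds for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side), per
    Woodworth's model: ITD = (a / c) * (theta + sin(theta)).
    head_radius and c are assumed textbook values."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))
```

For a source directly to one side (90 degrees) this gives roughly 0.66 ms, the familiar maximum ITD for an average adult head.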

Tutorial overview: Spoken Dialogue Systems for Space and Lunar Exploration. Jim Hieronymus, NASA Ames Research Center.

Building spoken dialogue systems for space applications requires systems that are flexible, portable to new applications, robust to noise, and able to discriminate between speech intended for the system and conversations with other astronauts and systems. Our systems are built to be flexible by using general typed unification grammars as language models, which can be specialized using example data. These are designed so that most sensible ways of expressing a request are correctly recognized semantically. The language models are tuned with extensive user feedback and with data when available. Open-microphone speech recognition is important for hands-free, always-available operation.

Binaural and beam-forming auditory delivery systems for mission critical contexts, Peter Otto, MFA, Director, Sonic Arts Research and Development, Technology Director, UCSD California Institute for Telecommunications and Information Technology (Calit2) potto@ucsd.edu;  (822) 246 0311 (UCSD Office)

Persistent problems with the use of headphones and traditional wide-diffusion speaker systems for auditory display have prompted military, vehicular, control room, emergency, medical, conferencing, and other experts to consider alternative means of delivering auditory information in a variety of communications environments, including high-stress and mission-critical contexts. The expanding use of 3D imagery and massive, networked displays adds complexity and multidimensionality to these issues. Specifically, listener fatigue, intelligibility, multi-source tracking, user interface, and integration with multiple or massive visual displays and VR systems are all areas where traditional audio delivery systems fall short to varying degrees. The availability of small transducers, low-power/high-efficiency amplifiers, inexpensive and powerful DSP chips, and advanced acoustic modeling software enables experimentation with a variety of audio delivery modalities that have resisted cost-effective or portable solutions in the past. Human factors engineering, psychoacoustics, and perception science, combined with new or reconsidered technological solutions, suggest that significant gains may be possible in the near future. The author will demonstrate several experimental auditory delivery modalities using new hardware and software developed at UCSD/Calit2.
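The classic building block behind steered-speaker delivery is delay-and-sum beamforming. The sketch below is a minimal, assumed illustration of the principle for a uniform linear array; the systems described above are far more sophisticated:

```python
import math

def steering_delays(n_mics, spacing_m, angle_deg, c=343.0):
    """Propagation delay (s) at each element for a plane wave arriving
    from angle_deg off broadside (element 0 is the reference)."""
    theta = math.radians(angle_deg)
    return [i * spacing_m * math.sin(theta) / c for i in range(n_mics)]

def delay_and_sum(channels, delays, fs):
    """Advance each channel by its (rounded) sample delay and average,
    reinforcing the steered direction and attenuating others."""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i + round(d * fs)
            acc += ch[j] if 0 <= j < n else 0.0
        out.append(acc / len(channels))
    return out
```

At broadside (0 degrees) all delays are zero and identical channels sum coherently; signals from other directions arrive misaligned and partially cancel.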

Decision Support Technologies. Robert McCann, NASA Ames Research Center, Human-Systems Integration Division

As we approach the end of the Shuttle era, NASA has accumulated over 50 years of experience developing and implementing operations concepts for human missions into space. However, no crewed mission has ever been mounted to a destination beyond the Earth-Moon system. One of the grandest challenges NASA must meet in order to enable journeys to more distant destinations, such as Mars, is how to “re-tool” mission operations for the fact that speed-of-light limitations will force astronauts to deal with mission problems, such as systems malfunctions and equipment failures, without real-time assistance from subject matter experts at mission control. I will discuss the scope of this problem and the crew-machine interfaces and decision support technologies that must be developed, tested, and built to give astronauts the capabilities they will need to manage deep-space mission anomalies.
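The scale of the speed-of-light limitation is easy to work out. The back-of-envelope sketch below uses standard constants (not figures from the talk):

```python
AU_M = 1.495978707e11   # astronomical unit, meters
C_MPS = 299_792_458.0   # speed of light, m/s

def one_way_light_time_min(distance_au):
    """One-way signal travel time in minutes for a distance in AU."""
    return distance_au * AU_M / C_MPS / 60.0
```

With Earth-Mars distance ranging from roughly 0.5 AU near opposition to about 2.5 AU near conjunction, the one-way delay runs from about 4 to about 21 minutes, so a question-and-answer exchange with mission control can take the better part of an hour.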

Supervision and Control of Multiple Semi-Autonomous Agents: Roles for Auditory and Speech Information in User Interfaces. Collin Green, NASA Ames Research Center, Human-Systems Integration Division, Human-Computer Interaction Group

In the future, well-designed user interfaces will be needed for the effective supervision and control of multi-robot groups in survey, exploration, environmental sampling, search and rescue, construction, and maintenance tasks. What roles exist for auditory or spoken information in this kind of human-robot interaction (HRI)? While there is substantial work on auditory and speech-centered interaction with single robots, those findings may not carry over to the supervision and control of many semi-autonomous robots. Rather, the use of auditory and speech information in other domains, such as control systems for multiple unmanned aerial vehicles, real-time strategy video games, systems for tactical command or search and rescue, and air traffic management systems, may provide insight into how visual information can best be complemented with aural information. This talk will explore Real-Time Strategy (RTS) video games as an analog for multiple-robot supervision and control systems. Specifically, some interesting uses of sound in RTS games will be discussed in the context of their potential application to multi-robot supervision and control.

Military Communications. CDR Michael Lowe, United States Navy

The potential of advanced auditory displays will be discussed from the perspective of Naval operations.

Spatial Auditory Displays for Enhancing Situational Awareness During Non-terrestrial and Remote Exploration. Elizabeth M. Wenzel, NASA Ames Research Center, Human-Systems Integration Division, and Martine Godfroy, NASA-Ames Research Center, San Jose State University Research Foundation

During Extra-Vehicular Activities (EVA), the EVA astronaut must maintain situational awareness (SA) of a number of spatially distributed "targets" such as other team members (human and robotic), the rover, the lander/habitat, or other safe havens. These targets are often outside the astronaut's immediate field of view, and visual resources are needed for other task demands. The authors have been developing real-time spatial auditory display systems and investigating their use in applications such as navigation in virtual environments, tele-robotic control, and caution and warning systems. Initial development efforts in ambient auditory displays for SA resulted in a demonstration of an "orientation beacon" display at NASA Ames specifically for EVA applications. This auditory display prototype created an ambient environment with non-intrusive "beacons" that enhance situational awareness without imposing undue distraction or workload. Current work has focused on the development of a software test bed for experimental evaluation of a revised beacon display prototype: an audio-visual simulation of a spatial audio augmented-reality display for tele-robotic planetary exploration on Mars. Recently, a study was completed that compared performance with different types of displays for aiding orientation during exploration: an auditory orientation aid, a 2D visual orientation aid, and a combined auditory-visual orientation aid. Preliminary data have confirmed the hypothesis that the presence of spatial auditory cueing enhances performance compared to a 2D visual aid, particularly in terms of shorter average response times to orient toward and acquire a target outside the field of view. Future work will address the usability and efficiency of 3D audio in the context of multimodal displays for aiding orientation, navigation, and wayfinding during non-terrestrial exploration.


Session Chair(s): Durand R. Begault (POC), NASA ARC, UC Berkeley, USA, Durand.R.Begault@nasa.gov
James L. Hieronymus, NASA ARC, USA