The following is a summary of the research projects I have been involved in since joining the Ph.D. program in Fall 2000.  They are organized by date, with the most recent project listed first.  With the exception of a few early applications, all have involved implementation both in simulation and on real robots.  My thesis work included several of the following applications.

 

Recent Work in Kharkiv

Included in Dissertation Work – “Acoustical Awareness for Intelligent Robotic Action” (thesis) (overview)

Previous Work

 

Mechanical Fault Detection

The goal of this work is to use knowledge about what the auditory scene should sound like to identify acoustic change in older industrial machinery.  Ideally, besides identifying whether equipment is on or off, the robot should be able to recognize different functions of the equipment, as well as identify when something is not operating correctly.
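
As a rough illustration of the baseline-comparison idea (not the detector being developed here), the sketch below scores how far a live audio frame's spectral shape has drifted from a known-good recording of the same machine; the function name and normalization are hypothetical.

```python
import numpy as np

def spectral_change_score(baseline_frame, current_frame):
    """Compare the spectral shape of a live audio frame against a known-good
    baseline frame of the same length; larger scores suggest the machine
    sounds different than expected."""
    base = np.abs(np.fft.rfft(baseline_frame))
    curr = np.abs(np.fft.rfft(current_frame))
    # Normalize out overall gain so the comparison is about spectral shape.
    base /= np.linalg.norm(base) + 1e-12
    curr /= np.linalg.norm(curr) + 1e-12
    return float(np.linalg.norm(curr - base))

# A score above a tuned threshold would flag a possible change in machine
# state: a different function engaged, or a developing fault.
```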

 

This work is being performed at the Kharkiv National University of Radio-Electronics as part of the U.S. Student Fulbright Exchange program.

 

Acoustic Ray-Tracing with Robots

Previous work in identifying change found a relationship between predicted volume and measured classification ratios.  Building on that result, we first construct accurate models of the soundscape from robot-gathered data and then verify the predictions by sampling the environment.
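
The full approach models the sound field in detail; the following is only a minimal 2-D illustration of the ray-tracing idea under strong simplifying assumptions (a single source in an empty rectangular room, specular reflections, uniform wall absorption).  All parameter names and values are illustrative.

```python
import numpy as np

def trace_sound_field(src, room=(8.0, 10.0), n_rays=360, max_dist=40.0,
                      step=0.05, absorption=0.3, grid_res=0.25):
    """Cast rays from a sound source, reflect them specularly off the walls
    of an empty rectangular room, and accumulate energy in receiver cells."""
    w, h = room
    nx, ny = int(w / grid_res), int(h / grid_res)
    energy = np.zeros((nx, ny))
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        x, y = src
        dx, dy = np.cos(theta), np.sin(theta)
        gain, traveled = 1.0 / n_rays, 0.0
        while traveled < max_dist:
            x, y, traveled = x + dx * step, y + dy * step, traveled + step
            if x < 0.0 or x > w:              # bounce off a vertical wall
                dx, x = -dx, min(max(x, 0.0), w)
                gain *= 1.0 - absorption
            if y < 0.0 or y > h:              # bounce off a horizontal wall
                dy, y = -dy, min(max(y, 0.0), h)
                gain *= 1.0 - absorption
            i = min(int(x / grid_res), nx - 1)
            j = min(int(y / grid_res), ny - 1)
            # Spherical spreading: deposited energy falls off with distance.
            energy[i, j] += gain / max(traveled, step) ** 2
    return energy
```

Sampling the real environment at a few cells and comparing against the predicted field is then how the model is verified.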

 

Publications:

Eric Martinson, "Ray-Tracing with Robots," submitted to IEEE Transactions on Robotics, Feb. 2009.

 

Eric Martinson, "Using a Mobile Robot to Detect Changes to the Auditory Scene," Proceedings of the Third International Radio-Electronics Forum, Sudak, Ukraine, Sept 2008.

 

Robotic Discovery of the Auditory Scene

The goal of this work is to build a robot that can autonomously explore the soundscape and discover knowledge that will allow it to enhance other auditory applications.  This work currently consists of three parts: (1) discovering source locations with the auditory evidence grid; (2) building models of source directivity; and (3) estimating the volume of noise these sound sources produce throughout the environment.

 

This work was performed in part at the Navy Center for Applied Research in Artificial Intelligence, in cooperation with Alan Schultz.

 


Auditory Evidence Grids

Microphone arrays mounted on mobile robots typically cannot localize sources in two dimensions; the small spacing between the microphones limits them to angular (bearing) measurements only.

 

By moving the robot and recording its position over time, we can combine multiple angular estimates to triangulate the positions of active sources in the environment.  Based on the evidence grid representation, auditory evidence grids can localize one or more sources using as few as two microphones.
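
As a minimal sketch of the underlying idea (not the published algorithm), bearing-only measurements can be fused into a log-odds grid; the beam width and evidence weights below are illustrative.

```python
import numpy as np

def update_evidence_grid(grid, robot_pose, bearing, grid_res=0.1,
                         beam_width=np.radians(15.0), l_hit=0.4, l_miss=-0.05):
    """Fuse one bearing-only measurement into a log-odds evidence grid.
    Cells consistent with the measured bearing gain evidence; all other
    cells lose a little, so stationary sources emerge as peaks over time."""
    rx, ry, rtheta = robot_pose               # robot x, y, heading (world frame)
    nx, ny = grid.shape
    xs = (np.arange(nx) + 0.5) * grid_res
    ys = (np.arange(ny) + 0.5) * grid_res
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    cell_bearing = np.arctan2(gy - ry, gx - rx)
    world_bearing = rtheta + bearing          # sensor bearing -> world frame
    diff = np.angle(np.exp(1j * (cell_bearing - world_bearing)))  # wrap to [-pi, pi]
    grid += np.where(np.abs(diff) < beam_width, l_hit, l_miss)
    return grid
```

Calling this once per robot pose and detected bearing, then looking for peaks in the accumulated grid, triangulates the active sources.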

 

Estimating Source Directivity

Once a source position has been identified, the robot can sample from a wide range of positions around the source, both to improve its localization of the source and to identify the source's directivity with respect to angle.
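
One simple way to realize this, sketched below under the assumption that sound levels are available as linear intensities (not dB), is to bin distance-corrected volume samples by angle around the localized source; the function and its parameters are hypothetical.

```python
import numpy as np

def directivity_profile(positions, volumes, source, n_bins=36):
    """Bin sound-level samples by angle around a localized source, correcting
    each sample for 1/r^2 spherical spreading so the bins reflect
    directivity rather than distance from the source."""
    positions = np.asarray(positions, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    offsets = positions - np.asarray(source, dtype=float)
    dists = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    scaled = volumes * dists ** 2             # back-project to a 1 m reference
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    profile = np.full(n_bins, np.nan)         # NaN where no samples landed
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            profile[b] = scaled[mask].mean()
    return profile
```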


Noise Contour Maps

The robot is now able to identify three pieces of information about the auditory scene: source location, volume, and directivity.  Using the idea of spherical spreading from each source, we can build noise contour maps predicting how loud different parts of the environment should be to the robot.  These maps can then be used to guide the robot to quieter areas, improving its signal-to-noise ratio.
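
A minimal sketch of the prediction step, assuming point sources with known reference powers and ignoring directivity weighting and reflections for brevity:

```python
import numpy as np

def noise_contour_map(sources, room=(8.0, 10.0), grid_res=0.25):
    """Predict the received level across the room by summing each source's
    reference power scaled by 1/r^2 spherical spreading.  `sources` is a
    list of (x, y, reference_power) tuples."""
    xs = np.arange(0.0, room[0], grid_res) + grid_res / 2
    ys = np.arange(0.0, room[1], grid_res) + grid_res / 2
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    power = np.zeros_like(gx)
    for sx, sy, p in sources:
        r2 = (gx - sx) ** 2 + (gy - sy) ** 2
        power += p / np.maximum(r2, 0.25)     # clamp the near-field blow-up
    return 10.0 * np.log10(power)             # relative dB scale
```

Steering the robot toward low-valued cells of this map is what improves the signal-to-noise ratio.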

 

Publications:

Eric Martinson and Alan Schultz, "Robotic Discovery of the Auditory Scene," Proceedings of the International Conference on Robotics and Automation (ICRA), Rome, Italy, April 10-14, 2007.

 

Eric Martinson and Alan Schultz, "Auditory Evidence Grids," Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Beijing, China, Oct 9-15, 2006.

 

 

Mobile Security Guard

The goal of this work is to survey the auditory scene as part of a security-guard application.  Given a mobile robot that patrols the environment at regular intervals, the robot applies its a priori knowledge of sound flow through the environment (see Robotic Discovery of the Auditory Scene) to identify changed or new sound sources along its patrol route.  By using prior, or self-discovered, knowledge of the environment being patrolled, acoustical awareness allows the robot to predict what should and should not be heard at different locations throughout the environment.

 

 


Environmental Setup

The robot used was a Pioneer 2-DXE equipped with a SICK LMS laser rangefinder and a 4-element microphone array.  The robot followed a circular patrol route through an 8 m x 10 m environment, listening for either new sound sources or changes (such as disabling/enabling, or volume adjustments) to existing sound sources.  The sources tested were designed to mimic an office environment and included the robot itself, as well as a filter, a fountain, and a radio playing music.

 


Estimating Change

New sound sources in the environment were detected using auditory evidence grids (see Robotic Discovery of the Auditory Scene), while changes to existing sound sources were detected by comparing the expected volume of each source in different areas against classification results based on Mel-frequency cepstral coefficients (MFCCs).
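
A rough sketch of the classification side of that comparison, assuming per-source MFCC template centroids have already been trained; librosa's MFCC routine stands in here for whatever front end was actually used.

```python
import numpy as np
import librosa

def classification_ratios(audio, sr, centroids):
    """Classify each MFCC frame against per-source template centroids
    (an array of shape n_sources x 13) and return the fraction of frames
    assigned to each known source."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T  # frames x 13
    dists = np.linalg.norm(mfcc[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    return np.bincount(labels, minlength=len(centroids)) / len(labels)
```

A measured ratio that disagrees strongly with the ratio implied by the predicted volume at the robot's location flags a changed source.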

 

Publications:

Eric Martinson, "Using a Mobile Robot to Detect Changes to the Auditory Scene," Proceedings of the Third International Radio-Electronics Forum, Sudak, Ukraine, Sept 2008.

 

The Stealthy Approach

As an observer, a robot's primary virtues are patience and tolerance.  If tasked with watching for a tiger in the environment, the robot, like a stationary camera, can wait as long as its batteries hold out for the animal to finally cross its path.  It does not get bored, and it does not get uncomfortable with remaining in place for a long time.  Best of all, if the robot, or its human partner, decides that it is located poorly, then it can move to another location.  In the future, these robotic advantages of tolerance, patience, and mobility will serve well for observing not only animals, but also natural events, people, or even locations (e.g., a security guard).

In most current applications, however, the robotic platform being used is not a small, unobtrusive robot.  Military applications, for instance, often use planes to cover as wide a region as possible, accompanied by all the noise of keeping the plane in the air.  Ground robots, whether for military, police (bomb-squad), or building-security applications, have a similar problem in that they have to be fairly large for the sake of robustness.  As such, these robots are noisy due to the extra onboard cooling fans and the motors needed to move heavier equipment.  How can such a noisy robot be used to quietly observe, or approach, a target when the target is a flight risk?

We believe that the solution to this problem lies in making a robot aware of the surrounding auditory scene.  By knowing something about the listener, the environment, the sound sources, and the physical principles that govern how each affects sound flow, a robot can make predictions about how it will be perceived by a listener and adjust its navigational strategies appropriately.


·         Given

An observer with a known location, and a mobile robot.

 

·         Goal

Approach the observer without being aurally detected.

 

·         How

Model the difference in overall and directional volume change at the observer's location due to the robot moving through any given area of the environment.  The result is a map that the robot can use for planning the least intrusive path to the observer (see the sketch after this list).

 

·         Result

Initial results using a heuristic to move a robot in front of a known sound source demonstrate the expected improvement in hiding.  Work to incorporate enhanced auditory scene models as well as robotic movement effects is ongoing.
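
Given such a per-cell intrusiveness map (e.g., the robot's predicted volume increase at the observer from each cell, discounted by the ambient masking there), a standard Dijkstra search yields the least intrusive route.  A minimal sketch, with `cost`, `start`, and `goal` as assumed inputs:

```python
import heapq
import numpy as np

def quiet_path(cost, start, goal):
    """Dijkstra over a grid: cost[i, j] is the acoustic intrusiveness of the
    robot occupying cell (i, j); the returned path minimizes the total
    intrusiveness of reaching the observer."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = 0.0
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            break
        if d > dist[node]:
            continue                          # stale queue entry
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + cost[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    prev[(ni, nj)] = node
                    heapq.heappush(frontier, (nd, (ni, nj)))
    path, node = [goal], goal                 # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```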

 

 

Publications:

Eric Martinson, "Hiding the Acoustic Signature of a Mobile Robot," Proceedings of the International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, Oct 29-Nov 2, 2007.

 

 

Information Kiosk

Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene.  Robot ego-noise, echoes, and human interference are all common sources of decreased intelligibility.  In real-world environments, however, these common problems are supplemented with many different types of background noise sources.  For instance, military scenarios might be punctuated by high decibel plane noise and bursts from weaponry that mask parts of the speech output from the robot.  Even in non-military settings, however, fans, computers, alarms, and transportation noise can cause enough interference that they might render a traditional speech interface unintelligible.  In this work, we seek to overcome these problems by applying robotic advantages of sensing and mobility to a text-to-speech interface.  Using perspective taking skills to predict how the human user is being affected by new sound sources, a robot can adjust its speaking patterns and/or reposition itself within the environment to limit the negative impact on intelligibility, making a speech interface easier to use.

 

This work was tested entirely at the Navy Center for Applied Research in Artificial Intelligence (NRL) in cooperation with Derek Brock.


The B21r (located at the Navy Center for Applied Research in Artificial Intelligence) adapts to the surrounding auditory scene (see the sketch after this list) by:

 

·         Rotating to face a human user

·         Increasing/decreasing volume

·         Pausing during periods of excessive noise

·         Moving to another, less noisy location
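
The selection logic might look something like the toy rules below; the thresholds, units, and action names are purely illustrative, not the implemented system.

```python
def adapt_speech_output(noise_db, noise_duration_s, user_bearing_deg):
    """Toy decision rules for adapting text-to-speech output to the
    auditory scene.  All thresholds are illustrative placeholders."""
    actions = []
    if abs(user_bearing_deg) > 15:
        # Face the user so speech is directed at them.
        actions.append(("rotate", user_bearing_deg))
    if noise_db > 75 and noise_duration_s > 30:
        # Persistent loud noise: relocate using the noise map.
        actions.append(("move", "quieter location from the noise map"))
    elif noise_db > 75:
        # Short burst: pause and resume once it passes.
        actions.append(("pause", "resume after the burst"))
    elif noise_db > 60:
        # Moderate noise: raise output volume to compensate.
        actions.append(("raise_volume", noise_db - 60))
    return actions

print(adapt_speech_output(noise_db=68, noise_duration_s=5, user_bearing_deg=40))
# [('rotate', 40), ('raise_volume', 8)]
```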

 

 

 

 

Publications:

Eric Martinson and Derek Brock, "Improving Human-Robot Interaction through Adaptation to the Auditory Scene", Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Washington, DC, Mar 9-11, 2007

 

Derek Brock and Eric Martinson, "Exploring the Utility of Giving Robots Auditory Perspective-Taking Abilities", Proceedings of the International Conference on Auditory Display, London, UK, June 20-23, 2006

 

Eric Martinson and Derek Brock, "Auditory Perspective Taking," Proceedings of the 1st ACM/IEEE International Conference on Human-Robot Interaction (HRI), Salt Lake City, UT, Mar 2-4, 2006.

 

Disambiguation of Natural Language and Vision Using Acoustics

For any sensing modality, there is always a high degree of ambiguity.  By combining sensing modalities with strengths in different areas, we can remove some of this ambiguity for human-robot interaction applications.


This work was completed at the Navy Center for Applied Research in Artificial Intelligence (NRL).


Acoustic Sensing:

Speech detection and direction.

 

Disambiguation Use:

·         Identifies who is speaking and assists in orienting a camera on the speaker.

·         Assists natural language processing by detecting the onset of a conversation, and recognizing interruptions by other speakers.

 

Publications:

Samuel Blisard, Benjamin Fransen, Matthew Marge, Eric Martinson, Vlad Morariu, Scott Thomas, and Dennis Perzanowski, "Using Vision, Acoustics, and Natural Language for Disambiguation", Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Washington, D.C., Mar 9-11, 2007

 

 

Sampled-Data Noise Maps

If a robot knows its position in the environment and is equipped with a microphone, it can create noise maps of the environment by collecting a large number of samples over a wide area and then interpolating between them.  Such sampled-data noise maps are then used to help the robot avoid excessively loud areas of the environment, improving its overall signal-to-noise ratio.
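
A minimal sketch of the interpolation step, using SciPy's general-purpose griddata in place of whatever interpolator was actually used:

```python
import numpy as np
from scipy.interpolate import griddata

def sampled_noise_map(sample_xy, sample_db, room=(8.0, 10.0), grid_res=0.25):
    """Interpolate sparse (x, y) -> dB samples gathered along the robot's
    path into a dense noise map.  Cells outside the convex hull of the
    samples come back as NaN with linear interpolation."""
    xs = np.arange(0.0, room[0], grid_res)
    ys = np.arange(0.0, room[1], grid_res)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    return griddata(np.asarray(sample_xy), np.asarray(sample_db),
                    (gx, gy), method="linear")
```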

This work was done in the Aware Home Research Institute.


Robot:

The robot used was a Pioneer 2-DXE equipped with a full sonar ring for real-time obstacle avoidance.  A microphone mounted 1 ft above the robot was used to collect audio samples from the environment.

 

Localization System:

The localization system, designed by Zhonghao Yang, is typically used to track people through the home environment.  The same system could also track a robot through three rooms (kitchen, dining room, and living room), providing the robot's position and orientation.


Sampled Data Map:

The resulting sampled-data noise map demonstrates the complexity of the problem.  One of the peaks in the contour map is from a radio on the dining-room table, while the second peak is due to noise the robot itself generates while trying to cross difficult terrain.  Both areas should thus be avoided when trying to improve the signal-to-noise ratio.

 

Publications:

Eric Martinson and Ronald C. Arkin, "Noise Maps for Acoustically Sensitive Navigation," Proceedings of SPIE, vol. 5609, Oct. 2004.

 

Distributed Robotic Arrays

The goal of this work was to form a narrowband sensor array from a group of randomly distributed mobile microphone elements, assuming only local sensing by each element in the array.

This work was done with David Payton at HRL Laboratories.  


The proposed method used distance and angle measurements to nearby robots to locally correct errors in the final array (a sketch of the local-correction idea follows).  As part of this work, a comparative study of behavioral methods was completed, including a comparison to the "Artificial Physics" approach proposed by W. Spears and D. Gordon.
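
A generic sketch of the attract/repel local-correction idea (not the patented method): each element nudges itself toward the desired spacing from its visible neighbors, using only locally sensed offsets.

```python
import numpy as np

def local_correction_step(my_pos, neighbors, spacing=1.0, gain=0.1):
    """One local control step for lattice formation: compare the measured
    distance to each visible neighbor against the desired spacing and move
    to reduce the error (attract when too far apart, repel when too close)."""
    my_pos = np.asarray(my_pos, dtype=float)
    step = np.zeros(2)
    for n in np.asarray(neighbors, dtype=float):
        offset = n - my_pos
        d = np.linalg.norm(offset)
        if d > 1e-9:
            step += gain * (d - spacing) * offset / d
    return my_pos + step
```

Iterating this rule on every element simultaneously drives the group toward a regular lattice without any global coordination.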

 

Patent:

David Payton and Eric Martinson, "Arranging Mobile Sensors into a Predetermined Pattern," U.S. Patent 7,379,840, May 27, 2008.

 

Publications:

Eric Martinson and David Payton, "Lattice Formation in Mobile Autonomous Sensor Arrays," Lecture Notes in Computer Science, vol. 3342, Springer Berlin/Heidelberg, 2005, pp. 98-111.

 

 

Marco-Polo Localization

The goal of this work was to use a team of robots equipped with sound emitters and microphones to solve the range-only SLAM problem.

 

Three Nomad 150 robots moved randomly through the environment, avoiding obstacles and each other.  A fourth robot was also used, but remained stationary.  At regular intervals, all robots stopped and generated audio signals that could be detected by the microphones on every robot.  The difference in time between detection of the audio signal on the generating robot and on a receiving robot, multiplied by the speed of sound, allowed us to estimate the distance between the two robots at each time step.


Using the measured distances between robots over time, we could reconstruct the relative positions of the robots (a minimal reconstruction sketch follows), effectively mapping the reachable areas of the environment.
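
One standard way to turn pairwise ranges into relative positions is classical multidimensional scaling; the sketch below assumes a complete, noise-free distance matrix and is not necessarily the estimator used in the papers.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative positions (up to rotation, reflection, and
    translation) from an n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:dim]    # keep the dim largest components
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```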

 

Publications:

Eric Martinson and Frank Dellaert, "Marco Polo Localization," Proceedings of the International Conference on Robotics and Automation (ICRA), May 2003.

Frank Dellaert, F. Allegre, and Eric Martinson, "Intrinsic Localization and Mapping with 2 Applications: Diffusion Mapping and Marco Polo Localization," Proceedings of the International Conference on Robotics and Automation (ICRA), May 2003.

 

Robot Behavioral Selection Using Q-Learning

This work integrated Q-learning methods for behavioral assemblage selection at a level above gain adjustment.  It was explored as part of the DARPA Mobile Autonomous Robot Software (MARS) project at Georgia Tech.


 

Q-learning is used on individual robots to coordinate a mission-tasked team of robots in a complex scenario.  To reduce the size of the state space, actions are grouped into sets of related behaviors called roles, each represented by a behavioral assemblage (see the sketch below).
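
A minimal sketch of tabular Q-learning over roles; the state encoding, rewards, and parameters here are all illustrative, not the project's implementation.

```python
import numpy as np

class RoleSelector:
    """Tabular Q-learning over behavioral assemblages ('roles') rather than
    low-level actions, which keeps the state-action table small."""

    def __init__(self, n_states, n_roles, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = np.zeros((n_states, n_roles))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng()

    def select(self, state):
        if self.rng.random() < self.epsilon:          # explore
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))          # exploit

    def update(self, state, role, reward, next_state):
        # Standard one-step Q-learning backup toward the bootstrapped target.
        target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, role] += self.alpha * (target - self.q[state, role])
```

Each robot runs its own selector, choosing a role at each decision point and updating its table from its individual reward signal.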

 

Training the robot Q-learners, both in simulation and on real robots, resulted in improved performance over heterogeneous teams despite the lack of a common reward function shared by all learners.


 

Publications:

Eric Martinson and Ronald C. Arkin, "Learning to Role-Switch in Multi-Robot Systems," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Sept. 2003.

 

Ronald C. Arkin, Yoichiro Endo, Brian Lee, Douglas C. MacKenzie, and Eric Martinson, "Multistrategy Learning Methods for Multirobot Systems," NRL Workshop on Multi-Robot Systems, Washington, D.C., 2003.

 

Eric Martinson, Alexander Stoytchev, and Ronald C. Arkin, "Robot Behavioral Selection Using Q-learning," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), EPFL, Switzerland, Sep. 30 - Oct. 4, 2002.