Robot Control with the "Neural Gas" Algorithm

Introduction

Our SoftArm robot and its related systems serve not only as a valuable analogue to biological visuo-motor coordination, but also as a flexible testbed for developing adaptive algorithms applicable in the real world. The robot's hysteretic behavior makes it extremely difficult to control with the accuracy needed for real-world applications. On the other hand, its physical flexibility is highly desirable in many settings, such as human-robot interaction. To overcome the unpredictable aspects of controlling this robot, we use a biologically inspired adaptive algorithm, the "neural gas", to realize accurate control. This work is carried out by Stanislav Berkovich, who rejoined the Theoretical Biophysics Group in November 1993 as a graduate student in Computer Science after spending two years at Sony CRL, Tokyo.

Description

Our effort builds upon the group's prior research into developing and applying self-organizing neural network algorithms to allow the SoftArm robot to accurately position its end effector and grasp objects. This work is a continuation of that of Kakali Sarkar as described in last year's progress report. In short, we are applying the "neural gas" algorithm to map points in the four-dimensional visual input space (two two-dimensional camera images of the workspace) into control signals for our SoftArm robot.
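The core of the "neural gas" algorithm (Martinetz and Schulten) is a rank-based adaptation rule: on each input, every neuron moves toward the input, weighted by its rank in the distance ordering rather than by a fixed lattice neighborhood. The following is a minimal sketch of that update on a toy two-dimensional input space standing in for the four-dimensional visual space; the unit counts, learning-rate schedule, and workspace geometry are illustrative assumptions, not the parameters of the actual system.

```python
import numpy as np

def neural_gas_step(units, x, eps, lam):
    """One adaptation step of the "neural gas" rule: every unit moves
    toward the input x, weighted by exp(-rank / lam), where rank is the
    unit's position in the distance ordering to x (rank 0 = closest)."""
    dists = np.linalg.norm(units - x, axis=1)
    ranks = np.argsort(np.argsort(dists))          # double argsort yields ranks
    units += eps * np.exp(-ranks / lam)[:, None] * (x - units)
    return units

# Toy setup (illustrative, not the real system): 20 units start at random
# positions; inputs are drawn from a unit-square "workspace".
rng = np.random.default_rng(0)
units = rng.uniform(-5, 5, size=(20, 2))
for t in range(2000):
    frac = t / 2000
    eps = 0.5 * (0.01 / 0.5) ** frac      # learning rate decays over time
    lam = 10.0 * (0.1 / 10.0) ** frac     # neighborhood range shrinks over time
    x = rng.uniform(0, 1, size=2)         # sample from the workspace
    neural_gas_step(units, x, eps, lam)
```

After such a run the units, initially scattered at random, settle inside the sampled workspace region, mirroring the "setup phase" described below.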

The project is currently being modified to use a faster and more sophisticated vision system under development by other members of the group. In addition, the system has been optimized to use an order of magnitude fewer neurons while achieving a positioning accuracy limited only by the resolution of the cameras. As before, the robot positions its end effector through successive movements, relying on feedback from the cameras for corrections until the positioning error falls within a desired tolerance, at which point the robot grasps a target at the desired position, as described in the previous literature. Each correction modifies the state of the neurons representing the visuo-motor map so that subsequent positioning cycles reach the desired error with fewer corrective movements. The system thus does not distinguish between a "learning phase" and an "operating phase", since it continually adjusts its state to satisfy the error constraint. In this way it can position the robot accurately even when the physical characteristics of the robot change. However, a "setup phase" of about 200 cycles is required to organize the neurons from their initial random state into a working visuo-motor map.
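The closed-loop positioning scheme can be sketched as follows. This is a simplified stand-in, assuming an idealized robot with an unknown linear response and a crude initial inverse model; the names `inverse_map` and `robot`, the correction gain, and the tolerance are all hypothetical. The real system would also update the visuo-motor map itself on every correction, which is omitted here.

```python
import numpy as np

def move_until_within_tolerance(target, inverse_map, robot, tol=0.01, max_steps=20):
    """Closed-loop positioning sketch: issue a command from the learned
    inverse map, observe the end-effector position via (simulated) camera
    feedback, and apply corrective moves until the error is below tol."""
    cmd = inverse_map(target)
    for step in range(max_steps):
        pos = robot(cmd)                 # camera observation of the effector
        err = target - pos
        if np.linalg.norm(err) < tol:
            return pos, step             # within tolerance: grasp here
        cmd = cmd + 0.5 * err            # corrective move from feedback
    return pos, max_steps

# Hypothetical robot with a gain and offset the inverse model does not know.
robot = lambda cmd: 1.3 * cmd + 0.2
inverse_map = lambda tgt: tgt            # crude initial inverse model
target = np.array([0.7, 0.4])
pos, steps = move_until_within_tolerance(target, inverse_map, robot)
```

Even with a poor initial model, the feedback corrections shrink the error geometrically, so only a handful of corrective movements are needed per cycle.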

The key mechanism of the "neural gas" algorithm is its ability to break the visual input space into smaller regions that can be represented more easily by local maps. In addition, it facilitates a cooperative approach to adapting these local maps: each local map evolves using information from many of its neighboring maps, a trait common to biological systems. The figure shows the initial adaptation of the "neural gas". The "positions" of the neurons, the centers of each neuron's mapping region, are superimposed on an image of the workspace. The robot's end effector and the rubber rat (Basil) are visible in the image; the small black rectangles represent the positions of the neurons projected onto the camera image. Future research will concentrate on employing the algorithm in situations where accuracy is of principal importance, either in the context of movements performed by biological systems or in technical applications.
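The cooperative adaptation of local maps can be illustrated in miniature: each unit owns a local output value, and on every sample all units learn from it, weighted by their rank-order closeness to the input, so neighboring maps shape each other. This is a one-dimensional toy with a hypothetical target mapping `f`; the real system attaches control-signal maps to units in the four-dimensional visual space.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)              # hypothetical target mapping
units = rng.uniform(0, 1, size=(15, 1))  # unit positions in input space
thetas = np.zeros((15, 1))               # each unit's local output value

for t in range(3000):
    x = rng.uniform(0, 1, size=1)
    ranks = np.argsort(np.argsort(np.abs(units - x).ravel()))
    h = np.exp(-ranks / 2.0)[:, None]    # rank-based neighborhood weighting
    # Cooperative update: every local map learns from the sample, weighted
    # by how close its unit is in the rank ordering, not just the winner.
    thetas += 0.1 * h * (f(x) - thetas)
    units += 0.1 * h * (x - units)

# The winner for a query input carries a local estimate of f there.
x = np.array([0.5])
winner = np.argmin(np.abs(units - x))
```

Because neighbors share each sample, a unit's local map is smoothed by its neighborhood, which stabilizes learning in regions a single unit sees only rarely.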

The initial state of the system (left). The state after the setup phase (right): the neurons have "moved" into the area of the workspace.