Biologically Plausible Models of Motor Control

Introduction

To date, models of visuo-motor control in biological systems have, to a large extent, been confined to systems capable of performing simple sensory-to-motor transformations. For example, in employing neural algorithms to control the SoftArm, the research effort of the group was devoted to developing networks capable of learning the transformations between the visual coordinates of the robot's end effector and the motor commands necessary to position the end effector at a given point. In contrast, movement in biological systems is the result of information processing occurring concurrently in a hierarchy of motor centers within the nervous system. Furthermore, while visual information is of great importance to movement, it does not constitute the sole source of input to the nervous system. Proprioceptive input, that is, information derived from sensors that signal the internal states of the limbs themselves, is of the utmost importance for accurate motor control. This fact is reflected in the considerable area of the cerebral cortex devoted to processing information of this type. We want to clarify how the various motor centers within the cerebral cortex contribute to the control of movement, and to elucidate how these centers can jointly program and coordinate movement. This work is undertaken by Ken Wallace, a post-doctoral researcher who joined the group in 1992 after completing graduate research at the University of Oxford.

Description

During visually guided movements, information related to the visual field initiates the process of programming the required movement. However, proprioceptive input is also required to indicate the correct context within which the movement should be performed. This is necessary because limbs have many more degrees of freedom than are strictly required for movement in space, which introduces redundancy into the problem: different muscles can be recruited in different combinations to achieve the same result. Furthermore, the ability of individual muscles to contribute to a movement generally depends upon the starting and end points of that movement. Accurate control of movement therefore requires that the central nervous system take these factors into account when calculating the optimal pattern of muscle recruitment for the desired movement. In other words, a degree of "context sensitivity" must be introduced into the programming of particular movements.
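To make the redundancy concrete, the following minimal sketch (an illustration only, not part of the model described below) considers a hypothetical planar three-link arm: because three joints position a tip in only two dimensions, one degree of freedom is redundant, and a continuum of joint configurations reaches the same end point. The link lengths and target are arbitrary assumptions.

```python
# Minimal sketch (illustration only, not the group's model): a planar
# three-link arm positioning its tip in the plane has one redundant
# degree of freedom, so a continuum of joint configurations reaches
# the same end point.  Link lengths and target are arbitrary assumptions.
import numpy as np

L1, L2, L3 = 0.3, 0.3, 0.2           # hypothetical link lengths (m)
target = np.array([0.5, 0.3])        # desired end-effector position

def forward(theta1, theta2, theta3):
    """Forward kinematics with cumulative joint angles."""
    a1, a2, a3 = theta1, theta1 + theta2, theta1 + theta2 + theta3
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3),
                     L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)])

# Sweep the first joint freely and solve the remaining two-link inverse
# kinematics for each choice: every surviving posture reaches the same target.
solutions = []
for theta1 in np.linspace(-np.pi, np.pi, 721):
    wrist = target - L1 * np.array([np.cos(theta1), np.sin(theta1)])
    r = np.linalg.norm(wrist)
    cos_elbow = (r**2 - L2**2 - L3**2) / (2 * L2 * L3)
    if abs(cos_elbow) > 1:           # wrist target out of reach for this theta1
        continue
    elbow = np.arccos(cos_elbow)     # "elbow-up" branch
    phi2 = np.arctan2(wrist[1], wrist[0]) - np.arctan2(L3 * np.sin(elbow),
                                                       L2 + L3 * np.cos(elbow))
    solutions.append((theta1, phi2 - theta1, elbow))

errors = [np.linalg.norm(forward(*s) - target) for s in solutions]
print(f"{len(solutions)} distinct postures, max end-point error {max(errors):.2e} m")
```

Each posture found this way places the tip at the same point, so any choice among them must be made on other grounds, which is precisely where context enters.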

In extending the techniques and neural architectures developed during the period of funding provided by the Carver Charitable Trust, our attention has now focussed upon models capable of accounting for the processing occurring within several distinct areas of the cerebral cortex. This work draws largely upon the existing body of knowledge regarding how individual areas of the brain respond during movement. The parietal cortex, for example, is known to be intimately involved in the association of exteroceptive input, that is, information regarding the external environment, with proprioceptive sensory input. As such, the parietal cortex is able to associate visual signals regarding target and limb position with afferent input provided by body sensors indicating the current position of the limb. Several structures, including the parietal cortex, sensory cortex and motor cortex, have also been implicated in formulating the correct context of a movement.

We have developed a model which takes account of some of the principal stages responsible for sensory-to-motor transformations within the cerebral cortex. This model explicitly accounts for processing occurring within the visual and parietal cortices. In addition, it includes a lumped model of the motor areas of the cerebral cortex, the mid-brain and the spinal segmental level of motor control, as well as certain types of proprioceptive information regarding the internal state of the limb. The neural networks of this simulation learn through random exploration of the workspace of the SoftArm, in much the same way that a child makes random movements during play. During this process the system learns maps of commands capable of positioning the SoftArm at particular locations within the workspace. The figure illustrates the development of one such motor map, used to control movement of the "wrist" of the SoftArm, at four points during learning. The top left frame illustrates the state of this map prior to any learning; the colored squares represent the states of the individual motor cells which constitute the map. The top right frame illustrates the organization that has evolved in this map after a period of learning corresponding to 1000 time steps. As can be seen, the basic structure of the map has already become evident. The lower left and right frames illustrate the subsequent development of this map after a further 1000 and 2000 time steps, respectively. Only after this learning phase is the system capable of performing coordinated movement.

The development over time of a motor map used to control movement of the "wrist" of the SoftArm. Colors represent the states of the individual motor cells which constitute the map. The state of each cell is initially set randomly (top left), with the final structure of the map (bottom right) evolving during the learning process.
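As a rough illustration of the kind of learning rule involved, the sketch below trains a Kohonen-style motor map by random exploration: a random motor command is issued, the resulting end-effector position is observed, and the map cell whose stored position best matches the observation is updated, together with its neighbors. The toy plant function standing in for the SoftArm, the grid size, and the learning-rate and neighborhood schedules are assumptions chosen for illustration, not the parameters of the actual simulation.

```python
# Hedged sketch of Kohonen-style motor-map learning through random
# exploration; the toy "plant" below stands in for the SoftArm and is
# an assumption, not the robot interface used in the actual simulation.
import numpy as np

rng = np.random.default_rng(1)
GRID = 10                                        # 10 x 10 map of motor cells
positions = rng.uniform(0, 1, (GRID, GRID, 2))   # each cell's workspace-position estimate
commands  = rng.uniform(0, 1, (GRID, GRID, 2))   # each cell's associated motor command

def plant(command):
    """Toy stand-in for the SoftArm: maps a motor command to an end-effector position."""
    return np.clip(command + 0.05 * rng.normal(size=2), 0, 1)

for step in range(4000):                 # random exploration of the workspace
    u = rng.uniform(0, 1, 2)             # random motor command ("motor babbling")
    x = plant(u)                         # observed end-effector position

    # Winner: the cell whose stored position best matches the observation.
    dist = np.linalg.norm(positions - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(dist), dist.shape)

    # Neighborhood width and learning rate shrink over time, as in a Kohonen map.
    sigma = 3.0 * np.exp(-step / 1500)
    eta   = 0.5 * np.exp(-step / 2000)
    gi, gj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    h = np.exp(-((gi - wi) ** 2 + (gj - wj) ** 2) / (2 * sigma ** 2))[..., None]

    positions += eta * h * (x - positions)   # move cells toward the observed position
    commands  += eta * h * (u - commands)    # ...and toward the command that produced it
```

After such a training phase, positioning amounts to looking up the cell whose stored workspace position lies closest to the target and issuing that cell's stored motor command, which is why coordinated movement only becomes possible once the map has organized.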

To date, we have been successful in employing this model to control coarse positioning of the SoftArm: at present, the error in the absolute position attained by a movement is between 3 and 9 cm. The learning observed with this model is, however, very characteristic of the early stages of skilled movement acquisition during the development of motor skills in primates: highly accurate movements become possible only once more approximate positioning skills have been acquired. In addition, it is important to separate the distinct issues of positioning the limb and manipulating the hand. In primates these functions, although related, reflect the action of distinct areas not only of the motor cortex, but also of the basal ganglia and the cerebellum.

To pursue this work we have recently turned our attention to modeling the next stage of the learning process, that is, the acquisition of more skilled movements. In this respect we are currently investigating how context sensitivity may be introduced into the present simulations. One particularly interesting aspect of the work exploits the mechanical characteristics of the SoftArm. The compliance of the arm can be adjusted such that movements which are programmed in a similar manner reach entirely different end points in the workspace, depending upon the compliance values specified for the muscles of the SoftArm. This is very reminiscent of the situation in primates, where it has been postulated that joint stiffness, the reciprocal of joint compliance, is explicitly modulated to achieve the desired end point of a movement.
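The dependence of the end point on stiffness can be illustrated with a minimal single-joint sketch, assuming a spring-like joint that settles where muscle torque balances a constant external load; the stiffness values and load below are hypothetical, and the model is far simpler than the SoftArm's actuators.

```python
# Minimal sketch (illustrative assumption, not the SoftArm controller):
# a spring-like joint settles where muscle torque balances an external
# load, so the same commanded rest angle reaches different equilibrium
# angles depending on joint stiffness (the reciprocal of compliance).
import numpy as np

def equilibrium_angle(rest_angle, stiffness, external_torque):
    """Solve k * (rest_angle - theta) + external_torque = 0 for theta."""
    return rest_angle + external_torque / stiffness

rest_angle = np.deg2rad(45.0)      # the "programmed" movement, identical in every case
load = -2.0                        # constant external torque, e.g. gravity (N*m)

for stiffness in (5.0, 10.0, 40.0):            # N*m per radian
    theta = equilibrium_angle(rest_angle, stiffness, load)
    print(f"stiffness {stiffness:5.1f}  ->  end point {np.degrees(theta):6.1f} deg")
```

With low stiffness (high compliance) the load pulls the joint far from the commanded angle, while with high stiffness the end point approaches the commanded angle, so modulating stiffness alone changes where an identically programmed movement terminates.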