From Neurons to Motions

At level 1, the sensorimotor level, perceptual common-coding principles apply. This means that all behaviour is encoded as position change (more precisely, joint-angle change). Causes (eg forces) are not computed explicitly, because doing so is impossible in all but the most trivial cases: in general, solutions to somatic force systems are massively underdetermined. Therefore only effects (eg positions) are measured and manipulated, with the forces required to achieve the commanded positions generated implicitly by cybernetic (feedback) loops. This governance scheme can be summarised as 'what you want is what you get'. The brain specifies the desired posture (set of limb positions), which is then created. It knows where all its extremities are because this set of locations is, thanks to common coding, exactly the same as its commands. In computer-science jargon, this principle is called Write-Only Memory (WOM) [1]. This method of governing motion solves the conundrum of efference copy [2]. Figure 6.1 below depicts the method of governing the simplest type of level-1 subsystem, exemplified by the elbow joint and its musculature.
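The 'what you want is what you get' scheme can be illustrated with a minimal sketch of such a feedback loop. The function name, the gain value and the one-joint model are all hypothetical illustrations, not anything from the biomechanics literature: the point is only that the commander specifies a position, and the corrective 'force' arises inside the loop without ever being computed or reported explicitly.

```python
def settle_joint(command_deg, actual_deg, gain=0.5, tol=0.01, max_steps=1000):
    """Drive a joint angle toward the commanded angle by pure feedback.

    The commander supplies only the desired position (the setpoint).
    The correction applied each step plays the role of the implicit
    'force'; it is generated inside the loop and never returned.
    """
    for _ in range(max_steps):
        error = command_deg - actual_deg      # what you want minus what you have
        if abs(error) < tol:
            break
        actual_deg += gain * error            # implicit proportional correction
    return actual_deg

# The commanded position is (to within tolerance) the achieved position,
# so the command itself can serve as the brain's record of limb location.
final = settle_joint(command_deg=90.0, actual_deg=10.0)
```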


Figure 6.1



Of primary interest to the sensorimotor brain is this: what part of the changes in the sensory field is due to self-motion? Once the 'domestic' changes due to self-motion have been eliminated (they are uninformative), the sensory changes that remain are those caused by external (third-party) agencies. Only these 'foreign' changes are interesting, since they include sources of value and opportunity (eg prey) as well as potentially dangerous actors (eg predators).
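The domestic/foreign split can be sketched as a simple subtraction: the change predicted from the brain's own motor commands is removed from the observed change, and whatever remains is attributed to external agents. This is a hypothetical toy model (the function name and list-of-changes representation are mine, not the author's), intended only to make the elimination step concrete.

```python
def foreign_changes(observed, predicted_self):
    """Subtract the sensory change predicted from self-motion ('domestic')
    from the observed change; the residue is attributed to third parties."""
    return [obs - pred for obs, pred in zip(observed, predicted_self)]

# Three sensory channels: the first two changes are fully explained by
# self-motion, so only the third channel carries 'foreign' information.
residue = foreign_changes([1.0, 2.0, 3.0], [1.0, 2.0, 0.0])
```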

We now turn our attention to spatiotemporal (level-2) concerns. Even though a portion of conscious awareness (called proprioception) is devoted to level-1 entities, the bulk of what we mean by 'consciousness' refers to level-2 entities, such as the objects and other people with whom we share 3D space, and indeed the 3D 'background' space itself.

Not unexpectedly, level-2 operations are compounded forms of those at level-1 [3]. Since static systems are easier to analyse than dynamic ones, evolution substitutes a static equivalent for its dynamic counterpart wherever it can. At the most fundamental (cybernetic) level, we see this principle in operation whenever instinctive setpoints, those guardians of biostructure, are augmented by learned offsets, turning structure into process. Voluntary movements also work on this principle: they consist of dynamic animations of statically determinate (eg gravitationally stable) postures. Each posture represents a semantic macro-state, consisting of all the joint angles that completely define that posture.
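A posture-as-macro-state can be sketched as a complete assignment of joint angles, with category membership decided by a tolerance band. The joint names, angle values and tolerance here are invented for illustration; only the idea (a macro-state is the full set of joint angles, not any single one) comes from the text.

```python
# A posture macro-state: the complete set of joint angles, in degrees.
STANDING = {"hip": 180.0, "knee": 180.0, "ankle": 90.0}
CROUCHED = {"hip": 90.0, "knee": 60.0, "ankle": 70.0}

def same_macro_state(a, b, tol=5.0):
    """Two configurations belong to the same semantic macro-state when
    every joint angle falls within the category's tolerance band."""
    return a.keys() == b.keys() and all(abs(a[j] - b[j]) <= tol for j in a)
```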

Thus the animation itself forms a synchronous Moore machine. It is a synchronous machine because, during the transitions between 'keyframe' postures, all the joint angles change simultaneously. It is a Moore automaton because each state can transition to the next without any additional inputs. So how is it governed? The intermediate steps form another, complementary type of automaton: the Mealy machine. These automata have transition arcs which carry both input and output symbols (unlike those of the Moore machine, which carry none). Each time a single joint angle changes its microstate (eg moves from the 0-15 degree micro-position sub-category to the adjacent 15-30 degree one), the brain sends an 'increment' input symbol, and then receives an 'it's done' symbol as output in return. Obviously, there are as many Mealy machines as there are joint angles, and the number of transitions within each Mealy machine equals the number of microstates.
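The two-tier automaton can be sketched in code: a Moore-style sequencer steps through keyframe postures without external input, while each joint runs its own Mealy-style handshake through 15-degree microstates. This is a minimal toy, assuming monotonically increasing angles and sequential (rather than truly simultaneous) joint updates; all function names are mine.

```python
BAND = 15  # width of one micro-position sub-category, in degrees

def mealy_step(microstate):
    """One Mealy transition: the 'increment' input symbol advances the
    joint one microstate; the 'done' output symbol is returned."""
    return microstate + 1, "done"

def run_joint(start_deg, target_deg):
    """Drive one joint's Mealy machine up to the target micro-band."""
    micro, target = int(start_deg // BAND), int(target_deg // BAND)
    while micro < target:
        micro, out = mealy_step(micro)   # send 'increment' ...
        assert out == "done"             # ... receive the acknowledgement
    return micro * BAND

def run_animation(keyframes):
    """Moore-style sequencer: each keyframe posture transitions to the
    next with no external input, by running every joint's Mealy machine."""
    current = dict(keyframes[0])
    for nxt in keyframes[1:]:
        for joint, angle in nxt.items():
            current[joint] = run_joint(current[joint], angle)
    return current
```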

At the topmost level, the keyframe postures that form the backbone of the animaton (a blend of 'animation' and 'automaton') must be consciously (more precisely, voluntarily) chosen. This is done by interpolating [4] the spatial trajectory between the current position and the target position (the one corresponding to the top-level goal state) into, say, n segments. This is where the cerebrum and cerebellum act together in perfect partnership (see figure 6.2 below).
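The interpolation step can be sketched as straightforward linear interpolation of every joint angle between the current posture and the goal posture, yielding n+1 keyframes for n segments. The function name and the single-joint example are hypothetical; the n-segment division is from the text.

```python
def interpolate_keyframes(current, target, n):
    """Divide the trajectory from the current posture to the goal posture
    into n segments, returning the n+1 keyframe postures along the way."""
    frames = []
    for k in range(n + 1):
        t = k / n  # fraction of the way from current to target
        frames.append({joint: current[joint] + t * (target[joint] - current[joint])
                       for joint in current})
    return frames

# Three segments from a straight elbow (0 degrees) to a right angle.
frames = interpolate_keyframes({"elbow": 0.0}, {"elbow": 90.0}, 3)
```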


Figure 6.2


The diagram above specifies the neural connections needed for the brain to control the position of its own body parts. It is enough for these circuits to govern just the 'where', since the brain already knows the 'what' of its own body parts. But what about other subjects, other objects, and the properties of 3D space and its background? Here, the 'what' signals are the most important ones. It so happens, however, that our brains use the conscious 'what' signals for their own body parts as well! This explains why we have memories which store only semantic states (percept categories, equivalent to data-structure types). Sometimes this 'kludge' breaks down, and conscious (level-2) information about self-position conflicts with proprioceptive (level-1) inputs. Under the influence of some drugs, hypnosis or certain illnesses, so-called 'out-of-body' [5] experiences can occur. A milder everyday example: when standing still at traffic lights, we perceive a sudden backward self-motion due to the forward creep of adjacent vehicles. Infants must learn the association between conscious and subconscious spatial perception - it is not instinctive.

The brain excels at such clever, economical encoding strategies. Another example: it encodes all complex arrays of objects (including the background scenery in any situation) as (non-self, right-brain) postures. This allows it to track other (non-ballistic) beings and ballistic objects using the same machinery it uses to manage its own behaviours.


1. This governance category was originally intended as a kind of 'in' joke amongst ancient geeks (sic), but it was soon realised that it characterises the I/O of valid subsystems, such as printers and other terminal devices.

2. For a full biomechanical analysis of this issue, see Anatol Feldman.

3. This is yet another example of the universal (or is it ubiquitous?) principle of composition, the same rule which says that the meaning of a composition of parts (eg a sentence) is the composition of the meanings of the individual parts (eg words).

4. If you doubt the brain's ability to perform smooth arithmetic interpolation, stand in front of a mirror and deliberately dart your eyes in a horizontal arc: first hard-right, then hard-left, then back again. You will certainly feel your eyes move, but you will see absolutely ZERO movement. Now that's interpolation! (Interpolating from a start point and back again averages to zero.)

5. The author participated in an Australian Aboriginal 'inma' (singing) in which an altered state of mind was induced by repetitious chanting, causing everyone in the group to experience the conscious perception of floating half a metre or so above the ground! This really happened to me and a couple of 'whitefella' friends at Cleland National Park in South Australia in the 1980s.

GOLEM Conscious Computers
All rights reserved 2020