From Neurons to Notions

In the previous section, the neural framework for the production of movement was outlined. But most of our conscious life is concerned with thoughts and ideas, concepts and contradictions. Can the same machinery that helps me ride my bike and navigate the roads also help me read about cycling in magazines and navigate around cycling websites? Clearly the answer is yes. The supreme elegance involved in this mechanism has led to its well-deserved title of 'the ultimate machine'.

Cortical neurons are recursively connected in a layered structure six neurons deep. The brain's cognition is linguistic in character. This semantic ultramemory is a strictly combinatoric coding device (because of its use of the all-powerful compositionality property) [8]. The only way to utilise sequential coding, such as that needed for syntactic permutations [1], is to employ recursive traversal through the neural layers. The semantic state neurons encode stereotypical situational properties, meaning that each situation has a unique semantic 'spectrum' of characteristically constituent datatypes. The syntactic transition neurons are the agents of change for these semantic states, governed by the goal-seeking behaviour of the spotlight of attention.
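As a purely illustrative aside (not part of the GOLEM design itself), the toy Python sketch below treats a semantic state as its 'spectrum' of constituent features and a syntactic transition as a goal-driven swap from one state to the next; every name in it (SemanticState, 'kitchen', 'write-report' and so on) is invented for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticState:
    # a situation, coded as its characteristic 'spectrum' of constituent datatypes
    name: str
    spectrum: frozenset

def transition(state, goal, rules):
    # a 'syntactic' transition: the spotlight of attention, chasing a goal,
    # carries the current semantic state into the next one
    return rules.get((state.name, goal), state)

kitchen = SemanticState("kitchen", frozenset({"kettle", "mug", "tap"}))
study = SemanticState("study", frozenset({"desk", "screen", "mug"}))

rules = {("kitchen", "write-report"): study}      # goal-seeking behaviour
here = transition(kitchen, "write-report", rules)
print(here.name, sorted(here.spectrum))           # study ['desk', 'mug', 'screen']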

In the figure below, the left-side diagram depicts the canonical neuron used to build this 'ultramemory', while the right-side diagram depicts a section of the memory matrix that contains millions of 'ultraneurons'. So that the cortex fits into the birth-limited [3] volume of the skull, it consists of a deeply folded sheet [2] of neuronal columns.


Figure 7.1: (a) the canonical 'ultraneuron'; (b) a section of the ultramemory matrix


In figure 7.1(a) above, neurons in the ultramemory receive:
(i) direct inputs 'a', 'b', and 'c' from the GOLEM's afferent (sensor-side) channel;
(ii) indirect meta-inputs 'e' and 'g' from the GOLEM's efferent (motor-side) channel.
(iii) Input 'f' is the neuron's idempotent input, used for autolatching.
(iv) Input 'g' is the phasic input driven by the 'spotlight of attention'.
(v) Input 'e' is the tonic input, which implements long-term memory.
Typically, inputs 'g' and 'e' are 'ganged', i.e. they carry the output signal on shared 'party line' axons. A minimal code sketch of this input/output structure follows below.
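The following Python sketch shows one way this input/output structure could be caricatured in code; the weights, thresholds and the UltraNeuron class name are all invented for illustration and are not taken from the GOLEM design.

# a caricature of one 'ultraneuron' from figure 7.1(a); all numbers are invented
FIRING_THRESHOLD = 1.0     # excitation needed for output 'd' to fire
LATCH_THRESHOLD = 1.5      # 'above and beyond normal' excitation triggers autolatching
LATCH_GAIN = 0.6           # the half-excited contribution of autolatch input 'f'

class UltraNeuron:
    def __init__(self):
        self.latched = False                        # the state behind input 'f'

    def step(self, a=0.0, b=0.0, c=0.0, e=0.0, g=0.0):
        # a, b, c: direct afferent inputs (e.g. the percept categories discussed below)
        # e: tonic 'party line' input (long-term memory)
        # g: phasic 'party line' input (the spotlight of attention)
        f = LATCH_GAIN if self.latched else 0.0     # idempotent autolatch input 'f'
        excitation = a + b + c + e + g + f
        if excitation >= LATCH_THRESHOLD:           # over-excitation -> autolatch
            self.latched = True
        return excitation >= FIRING_THRESHOLD       # thresholded output 'd'

n = UltraNeuron()
print(n.step(a=0.4, b=0.4, c=0.4))                  # True: fires on its normal inputs alone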

long term memory

The inputs 'a', 'b', and 'c' are representative of the neuron's 'normal' (sensory and meta-sensory) inputs. These inputs encode long-term memories, letting us recognise 'our favourite things' [4]. At the next layer down, this LTM neuron is linked to its constituent percept categories: in the general case, 'a' might be colour, 'b' might be texture and 'c' might be shape.
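Continuing the UltraNeuron sketch under figure 7.1, the snippet below shows how an LTM neuron for one 'favourite thing' might be fed by colour, texture and shape category scores; the scores themselves are made up.

# reuses the UltraNeuron class from the sketch under figure 7.1
favourite_mug = UltraNeuron()

# scores from the constituent percept-category neurons one layer down (invented values)
colour_match, texture_match, shape_match = 0.4, 0.3, 0.4

print(favourite_mug.step(a=colour_match, b=texture_match, c=shape_match))   # True: recognised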

short term memory

The neuron's thresholded output is depicted as 'd', while 'f' represents the neuron's ability to 'autolatch'. This behaviour occurs when the neuron's level of excitation rises above and beyond 'normal' levels, for example when boosted by the top-down 'attention' signal 'g'. Autolatching via input 'f' is the mode of persistent over-excitation that creates short-term memories [5]. The usefulness of autolatched neurons is apparent when moving around a situation. Even when you turn your back on recently attended items, your experiences are temporarily buffered [6] by short-term memory latching. If you then turn around once more, fewer of your scarce attentional resources are needed to bring latched STM items back into consciousness, because they are already in a half-excited state.
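Again reusing the UltraNeuron sketch from figure 7.1, the sequence below illustrates the autolatching story just told: attention over-excites the neuron, which then sits in a half-excited state and needs only a weaker nudge to return to consciousness. The numbers are illustrative only.

stm_item = UltraNeuron()
print(stm_item.step(a=0.3, b=0.3, c=0.3))          # False: normal inputs alone are sub-threshold
print(stm_item.step(a=0.3, b=0.3, c=0.3, g=0.7))   # True : attention signal 'g' fires it and autolatches it
print(stm_item.step())                             # False: attention has moved on, but input 'f' keeps it half-excited
print(stm_item.step(g=0.5))                        # True : a weaker glance (g=0.5 alone would not fire it) re-awakens it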


Figure 7.2


The figure above depicts a section of neurons somewhere within the 'ultramemory'. Imagine the spotlight of attention has fallen upon the memory row marked 'where', the one with red neurons. The output 'party line' axon of this efferent neuron runs across the memory and forms a ganged input for the blue neurons, the ones that code for a particular kind of object or entity, denoted 'what'. The effect of the horizontal ganged inputs of the red 'where' neurons is felt at ALL of the blue neurons in the same row of memory. HOWEVER, the specific column of blue 'what' neurons forms a pre-excited subset of all the neurons in the blue row. THEREFORE only those neurons at the intersection of the 'what' and 'where' crosshairs become excited enough to autolatch.
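As a rough rendering of the crosshair idea (grid size, gains and thresholds are all invented for the example), the Python sketch below marks out a small patch of memory in which one attended 'where' row broadcasts its ganged signal and one perceived 'what' column is pre-excited; only their intersection reaches the autolatch level.

N_ROWS, N_COLS = 4, 6          # a tiny patch of the ultramemory (invented size)
GANG = 0.6                     # ganged 'party line' signal from the attended 'where' neuron
PRE = 0.6                      # pre-excitation of the currently perceived 'what' column
LATCH_THRESHOLD = 1.0

attended_where_row = 1         # the red 'where' row under the spotlight of attention
perceived_what_col = 2         # the blue 'what' column pre-excited by the current percept

latched = []
for row in range(N_ROWS):
    for col in range(N_COLS):
        excitation = (GANG if row == attended_where_row else 0.0) \
                   + (PRE if col == perceived_what_col else 0.0)
        if excitation >= LATCH_THRESHOLD:
            latched.append((row, col))

print(latched)                 # [(1, 2)]: only the crosshair intersection autolatches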

feature binding

Treisman & Gelade [7] demonstrate how "..focal attention provides the 'glue' which integrates the initially separable features into unitary objects. Once they have been correctly registered, the compound objects continue to be perceived and stored as such". The mechanism they describe is easily explained by the ability of the UM to create STMs. As well as creating new features from pre-existing ones, brains also continually swap pre-existing features in and out of the 'spotlight of attention', via the mechanism of neural autolatching. Pre-existing LTM items are brought into STM (via autolatching) as we enter a given situation. Then, as we leave that situation, we remove them from STM (by means of a mechanism discussed later), returning them to LTM and freeing up our limited conscious resources to manage the uncertainty of the next situation.
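To make the swap concrete, here is a toy Python sketch (situation names and items invented) of the cycle described above: entering a situation latches its relevant LTM items into STM, and leaving it releases them again.

long_term_memory = {
    "kitchen": {"kettle", "favourite mug", "tap"},
    "garage": {"bike", "pump", "helmet"},
}
short_term_memory = set()

def enter(situation):
    # attention autolatches the situation's pre-existing LTM items into STM
    short_term_memory.update(long_term_memory[situation])

def leave(situation):
    # unlatching returns the items to LTM, freeing limited conscious resources
    short_term_memory.difference_update(long_term_memory[situation])

enter("kitchen")
print(sorted(short_term_memory))   # ['favourite mug', 'kettle', 'tap']
leave("kitchen")
enter("garage")
print(sorted(short_term_memory))   # ['bike', 'helmet', 'pump']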

1. Miller's 'magic number' (seven, plus or minus two) probably arises from the average depth of the neuronal layers

2. the word 'cortex' is Latin for 'bark'

3. the implicit assumption being that if the newborn's skull were any larger, an unacceptably greater proportion of mothers would die in childbirth. Due to the evolution of bipedalism, the human female's vagina 'faces' slightly forward of the vertical, compared with those of the great apes, which are all oriented to the rear (dorsally). This means humans can have intercourse facing each other, which (presumably) enhances pair bonding, but it also means the newborn baby (whose head is its widest point, owing to its large brain) must pass through a pelvic gap of limited width. Clearly, a large brain confers considerable fitness in evolutionary terms.

4. Each one of a subject's 'favourite things' is coded by its own neuron (apologies to The Sound of Music)

5. as well as, unfortunately, PTSD flashbacks which are so hard to delete

6. The idea of buffering (also called caching) is at least as old as computer science itself.

7. Treisman, A. & Gelade, G. (1980) A feature-integration theory of attention. Cognitive Psychology 12: 97-136

8. Combinatoric methods simplify many types of analyses. Consider the Chinese Restaurant Process model: at time n, n customers have been partitioned among m_n tables (the blocks of the partition). The results of this process are exchangeable, meaning the order in which the customers sit does not affect the probability of the final partition. Also consider LDA and other machine-learning techniques, which are based on 'bag of words' and 'bag of topics' assumptions. This (combinatoric) property greatly simplifies a number of problems in population genetics, linguistic analysis, and image recognition.
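If it helps, the exchangeability claim can be checked with a few lines of Python; the concentration parameter alpha = 1.0 and the table labels are arbitrary choices for the example.

def crp_sequence_probability(assignments, alpha=1.0):
    # probability of one specific order in which customers choose tables
    p, counts = 1.0, {}
    for n, table in enumerate(assignments):          # the (n+1)-th customer arrives
        if table in counts:
            p *= counts[table] / (n + alpha)         # joins an already occupied table
        else:
            p *= alpha / (n + alpha)                 # opens a new table
        counts[table] = counts.get(table, 0) + 1
    return p

# two arrival orders that end in the same partition (two at table A, one at table B)
print(crp_sequence_probability(["A", "A", "B"]))     # 0.1666...
print(crp_sequence_probability(["A", "B", "A"]))     # 0.1666... (the same: order does not matter)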


GOLEM Conscious Computers
All rights reserved 2020