Programmer’s Addendum

GOLEM/TDE (GT) theory is fully summarised in nine sections, numbered 0-8. The discussion in these pages is deliberately dense and terse. The aim of these sections is to cover all key aspects of human cognition in as compact a manner as possible, as in a 'cook's tour' or 'executive summary'. None of the key discoveries has been omitted. While the design is completely specified there, the numbered sections alone contain insufficient detail to permit the translation of the design into a working machine.

The purpose of the alphabetically labelled sections A-F is to provide the prototypical details that were deemed unsuitable for inclusion in the sections numbered 0-8. Some of the material in these programmer's appendices is deliberately repetitive. It is better to err on the side of pedantry than to fail to explain some small but vital piece of the puzzle.

Before the programming task begins, the programmer should understand at least one thing: GOLEM is an intentionally simple model. Its simplicity is itself both a significant discovery and a statement of philosophical approach. Without such a simple, though completely competent, model, it becomes literally impossible to meaningfully interpret brain scans, such as fMRI BOLD imagery. In other words, one of a simple model's main uses is to increase the signal/noise ratio of any interpretation built upon it. Model 'noise' is quite distinct from data 'noise': it is a top-down, not bottom-up, source of uncertainty in any of the model's expected behaviours.

Implementing GOLEM - constructing a software ‘self’

The fundamental data structure of the TDE/GOLEM is something called a Marr-Chomsky Trierarchy (= three-level hierarchy). For the purposes of constructing an emulation, we only need to understand GOLEM, not the TDE. The TDE is the fractal mapping of the GOLEM functions onto real human neuroarchitecture; in short, the TDE is of interest mainly to neuroscientists. If we intend to build our own synthetic 'cog', there is no reason why we need to slavishly copy the brain's anatomy. As long as we follow GOLEM design principles, the synthetic system will function in an identical manner.

Figure A.1 depicts the Marr-Chomsky Duplex (counterflow) Trierarchy [1]. 

The 'Marr' part is named after the late MIT neuroscientist David Marr, who proposed a rather common-sense idea: that the brain's neuroanatomic 'modules' each have a 'computational' purpose, or teleology [2]. In other words, each part (sub-divided fractally) 'does' something to the data which passes through it. To recent research degree graduates, who often take computer science electives, this step seems only sensible: it implies that brain modules perform functions, just as software programs do.


Figure A.1


6 (= 2 x 3) cortical layers

The 'Chomsky' part is named after the linguist and political dissident Noam Chomsky. Even though his later work (the paradoxically named 'minimalist' program - there are four source documents!) is thought to be inconclusive, his earlier work on formal grammars (the foundation of programming-language grammars) is deemed essential reading. In figure A.1, the three Chomskyan levels are labelled, from the top down: 'semantics', 'syntax' and 'symbols'.

The three Chomskyan levels match their three Marrian equivalents (from the top down): 'computational', 'algorithmic' and 'endemic/native'. It is this one-to-one functional match between the top-down Chomsky levels and the bottom-up Marr levels that gives GOLEM its attributive capability. Performing this match at the topmost level reveals the equivalence between Marrian teleology (goal, or purpose) and Chomskyan i-semantics [6]. This implies that GOLEM's computational goals are i-semantic states, and GOLEM's algorithmic methods are i-semantic state transitions, ie i-syntax.
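The one-to-one level match described above can be sketched as plain data. The level names are taken from the text; representing the match as a simple dictionary is an implementation assumption, not part of the GOLEM specification:

```python
# Sketch of the duplex (counterflow) trierarchy's level matching.
# Chomsky labels are read top-down; Marr labels are read bottom-up,
# but the functional match pairs them level-for-level.

CHOMSKY_LEVELS = ["semantics", "syntax", "symbols"]               # top-down flow
MARR_LEVELS    = ["computational", "algorithmic", "endemic/native"]  # bottom-up flow

# One-to-one functional match between the two trierarchies:
LEVEL_MATCH = dict(zip(CHOMSKY_LEVELS, MARR_LEVELS))

# The topmost pairing expresses the teleology/i-semantics equivalence:
print(LEVEL_MATCH["semantics"])   # -> computational
```

The topmost entry of `LEVEL_MATCH` is the pairing the text singles out: Marrian 'computational' goals are Chomskyan i-semantic states.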

Note that the Marr and Chomsky trierarchies are physically (ie architectonically) overlaid. However, it is important to note that the information flows in the two trierarchies of the duplex run in opposite directions. The inevitable question is: what empirical support is there for this duplex (2 x 3) layer model? By inspection, the cerebral cortex has six layers: 3 afferent (they handle incoming information) and 3 efferent (they handle outgoing information). This situation (which is comparable with that depicted in figure A.1) is depicted in figure A.2 below. Note that the function of a neuron depends on two factors: the location of the soma/dendrites (grey matter) and the span/target of the axon (white matter) [7].


Figure A.2


Hierarchical representation of function

What is a hierarchy? Basically, it is a data structure shaped like an inverted tree. The sort of hierarchy used by the GOLEM (and our brains) is existential, ie one that describes an ontology [4]: a set of all things, or 'what there is'. The conventional way of describing ontologies is to link parent nodes to child nodes by the 'copula', or 'is-a(-type-of)', relationship. In this scheme, parents represent superclasses and children represent subclasses. For example, a parent node might be 'automobiles', with its child nodes being 'passenger vehicles', 'cargo vehicles' and 'race cars'. If we now look at the child node 'passenger vehicles' as a parent node, its children might include the brand names of passenger vehicles, eg 'Holden', 'Ford' and 'Toyota'.
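A minimal sketch of such an 'is-a' ontology, using the vehicle example above (the `Node` class and its methods are illustrative, not taken from any GOLEM specification):

```python
# Minimal 'is-a' ontology: each node stores a link to its parent
# (superclass), forming an inverted tree of subclasses.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # the 'is-a(-type-of)' link
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def is_a(self, other):
        """True if self is a (transitive) subclass of other."""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

automobiles = Node("automobiles")
passenger   = Node("passenger vehicles", automobiles)
cargo       = Node("cargo vehicles", automobiles)
racecars    = Node("race cars", automobiles)
holden      = Node("Holden", passenger)

print(holden.is_a(automobiles))   # True: Holden is-a passenger vehicle is-a automobile
print(holden.is_a(cargo))         # False
```

Note that, as the next paragraph argues, all the classification work here happened outside the structure: the tree merely records decisions made elsewhere.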

The problem with encoding ontologies this way is that the decisions about which 'children' belong to which 'parents' are externally determined. The memory acts only as a storage device, and does little else apart from that function. Consider our vehicle classification tree example. If you can only observe a single vehicle (a 'singleton exemplar', if you will), then the only (internal) way to tell which of the three classes it belongs to is to check whether it possesses the specialised structural features which differentiate each class. You wouldn't check for wheels: not all vehicles have them (eg hovercraft), and having wheels or not gives little clue to a vehicle's 'use class'. Checking the number of doors is a better idea: if the vehicle has four doors, large windows and an in-car entertainment system, it is probably a 'passenger' vehicle and not a cargo or race vehicle.

The main constituents of a system are its primary design [4] parameters, and so they determine its utilitarian (instrumental) limitations [5]. If we arrange matters so that the relationship linking downstream (child) nodes to upstream (parent) nodes is not existence but ownership or association ('has-a'), we get a memory which not only stores similarity classes, but performs useful discriminations by using an easily computed (threshold, or 'setpoint') metric: the presence (or absence) of 'typical' properties. This kind of hierarchy is typically used to express part-whole relationships. The parent node 'table' might then include the primary child nodes 'leg' and 'top', with a further child node representing the pseudoprimary property 'number of legs'. Each of the table's parts is thresholded [8]. If the top is not flat enough, it might be another kind of leg-supported furniture, like a side cabinet or settee. Almost anything will serve as a leg, so the shape threshold that defines what a leg is would be quite a low (permissive) one. Pseudoprimary properties like the number of legs are also thresholded: if the number of legs is less than 3, the so-called table will tip over. (But a monopod, a one-legged tripod, is nonetheless a very useful addition to the photographer's toolkit.)
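The thresholded 'has-a' scheme can be sketched as follows. The property names and threshold values are illustrative choices for the table example above, not values from the GOLEM design:

```python
# 'has-a' hierarchy with thresholded (setpoint) properties: a candidate
# object counts as a 'table' only if each typical part passes its
# threshold. The top's flatness threshold is strict; by contrast the
# text notes that a leg's shape threshold would be permissive, so here
# only the pseudoprimary leg count is checked.

TABLE_SETPOINTS = {
    "top_flatness": 0.8,   # 0..1; a settee top would score low here
    "min_legs": 3,         # fewer than 3 legs and the 'table' tips over
}

def is_table(candidate):
    """candidate: dict of measured properties of one observed object."""
    return (candidate.get("top_flatness", 0.0) >= TABLE_SETPOINTS["top_flatness"]
            and candidate.get("num_legs", 0) >= TABLE_SETPOINTS["min_legs"])

print(is_table({"top_flatness": 0.95, "num_legs": 4}))  # True
print(is_table({"top_flatness": 0.95, "num_legs": 1}))  # False: a 'monopod'
print(is_table({"top_flatness": 0.30, "num_legs": 4}))  # False: top not flat enough
```

The point of the sketch is that classification is now computed internally from the stored setpoints, rather than being decided outside the memory.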

Neocybernetics = 'what' (setpoints) + 'where' (offsets)

It is the neocybernetics of brains that gives them two types of output: 'what' and 'where' streams. In a given situation, for example, the primary component properties of the Marrian 'what' channel might be a='color' and b='shape'. At the next level up, these might combine to form another pseudo-primary property called c='texture'. Varying combinations of color, shape and texture can then be used to characterise different parts of the situation, by forming secondary, tertiary and higher-order 'what' properties. In a similar way, the primary components of the Chomskyan 'where' channel will be spatial, eg 'x', 'y' and 'z', or the more bioplausible version, which uses spherical coordinates: 'yaw/pan', 'pitch/elevation' and 'zoom/range'.
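The two streams can be caricatured in code as follows. The combination rule for the pseudo-primary 'texture' property (a simple average) is a pure placeholder, as is the Cartesian-to-spherical conversion for the 'where' stream:

```python
import math

# Illustrative 'what' stream: two primary properties combine into a
# pseudo-primary one. The averaging rule is a toy stand-in; the text
# does not specify how GOLEM combines properties.
def pseudo_primary(color, shape):
    return (color + shape) / 2.0   # toy 'texture' score

# Illustrative 'where' stream: Cartesian (x, y, z) converted to the
# bioplausible spherical form: yaw/pan, pitch/elevation, zoom/range.
def where_stream(x, y, z):
    rng = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)
    pitch = math.asin(z / rng) if rng else 0.0
    return yaw, pitch, rng

texture = pseudo_primary(color=0.6, shape=0.8)     # -> 0.7
yaw, pitch, rng = where_stream(1.0, 1.0, 0.0)      # yaw = pi/4, pitch = 0
print(texture, yaw, pitch, rng)
```

Higher-order 'what' properties would be built by repeating the combination step level by level, mirroring the hierarchy described earlier.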

This is precisely the kind of hierarchy that is straightforward to implement in neural networks [3].


Figure A.3


There are six layers of cerebral cortex:

  • Molecular (plexiform) layer.
  • External granular layer.
  • External pyramidal layer.
  • Internal granular layer.
  • Internal pyramidal layer.
  • Multiform (fusiform) layer.

Selecting an exemplar of the kind chosen by the Marrian 'what' network is the task of the Chomskyan 'where' hierarchy. It is the intersection between top-down attention (the 'where' stream) and bottom-up awareness (the 'what' stream) which constructs our conscious content.
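One way to caricature this intersection in code (purely illustrative; the candidate names, salience scores and circular attention window are all invented for the sketch, and GOLEM's actual selection mechanism is not specified here):

```python
# Toy selection: the 'what' network proposes candidate exemplars with
# bottom-up salience scores; the 'where' network supplies a top-down
# attention window; the 'conscious content' is the most salient
# candidate falling inside that window.

candidates = {                    # name -> (2-D position, salience)
    "red mug":   ((0.2, 0.1), 0.9),
    "blue book": ((0.8, 0.7), 0.6),
}

def select(candidates, attend_to, radius):
    inside = {name: sal for name, ((x, y), sal) in candidates.items()
              if (x - attend_to[0]) ** 2 + (y - attend_to[1]) ** 2 <= radius ** 2}
    return max(inside, key=inside.get) if inside else None

print(select(candidates, attend_to=(0.25, 0.15), radius=0.2))  # -> red mug
print(select(candidates, attend_to=(0.5, 0.5), radius=0.01))   # -> None
```

Moving the attention window (the 'where' input) changes which 'what' exemplar is selected, which is the intersection the text describes.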

Further examination of the GOLEM will yield additional design details. Figure A.3 depicts the mapping between the neocybernetic ‘what’ and ‘where’ functions.  

1. Use the acronym MCDT if needed; otherwise plain 'GOLEM' is just fine. They are entirely synonymous.

2. For most of the 20th century, however, making this simple, almost 'obvious' conceptual leap by using abductive/retroductive logic was a bridge too far. Since the time of Behaviourism (1920s to 1950s), the psychological sciences learned to fear one word above all else: teleology. If the 'bathwater' is teleology (derived from the classical Greek for 'ultimate purpose' or 'end use'), then the 'baby' [2] is the principle of cause and effect itself. Without cause and effect, without agency, we are all reduced to epiphenomenal zombies. To admit to teleology was tantamount to admitting that nature has a 'grand designer', someone (or something) to which purpose could be ascribed. Under the 'secular religion' (if you will) of Darwinism, this belief is heretical.

3. TGT research supports Chomsky's i-language theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner, a process that reflects an 'internal grammar' mechanism. Recent research in the journal Nature Neuroscience builds on Chomsky's 1957 work, Syntactic Structures, which posited that we can recognize a phrase such as "Colorless green ideas sleep furiously" as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions, even though the statistical relations between the words are non-existent. This runs counter to current mainstream thought, which claims that our brains function in a purely statistical manner.

4. Even in systems that have evolved in response to reproductive-fitness parameters, we can still confidently use the term 'design', because while there is no grand designer (eg a god) in science, there most definitely IS a grand design, namely the DNA code which links all living things hierarchically.

5. 'Teleology is a bitch.' Like the frame problem, this matter was resolved in the last century, but rears its ugly head from time to time. YES, you can use a long-handled mallet as a back-scratcher, or an orchestra conductor's baton as a window-prop, but NO, that is not the purpose intended by the combination of (semantic) functions represented by its constituent parts.

6. Chomsky introduced the term i-language to describe the cognitive (internal) aspect of the more familiar external speech. I have extended his terminology by suggesting that the internal versions of syntax and semantics be called i-syntax and i-semantics.

7. 'White' matter is really a creamy off-white colour, due to the electrically insulating (neuro)glia that wrap myelin around each axon like a roll of paper around a cardboard tube. In the central nervous system these are oligodendrocytes; their peripheral equivalents are the Schwann cells.

8. Thresholds enable sufficient variability that one basic type can represent many different tokens (end cases). This is perhaps an argument for the evolution of offsets as augmented setpoints.


GOLEM Conscious Computers
All rights reserved 2020