Synaptic theories of neural plasticity are wrong

A controversial aspect of GOLEM Theory (GT) is its outright rejection of synaptic mechanisms of neural plasticity. Synaptic theories of neural plasticity (STNP) fail on two main counts:
(i) LATENCY: STNP learning models take at least an order of magnitude more time than is observed in real brains.
(ii) GRANULARITY: 'backprop' and other STNP training methods are inherently and unavoidably global (i.e. all the weights must be changed each time anything, however small, is learned), while real brains use local adaptation mechanisms. The reason is simple: brains are linguistic, so, like languages, they learn one new idea/word at a time [1].
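The granularity contrast can be made concrete with a toy sketch. This is not code from any GT source; the network size, learning rate and activity patterns are invented for illustration. It contrasts a backprop-style gradient step, which moves every weight, with a Hebb-style local step, which only changes synapses between co-active units.

```python
# Toy contrast (illustrative assumptions throughout): global vs local updates.

def global_update(weights, grads, lr=0.1):
    """Gradient step: EVERY weight moves, however small the lesson."""
    return [[w - lr * g for w, g in zip(wr, gr)]
            for wr, gr in zip(weights, grads)]

def local_update(weights, pre, post, lr=0.1):
    """Hebb-style step: only synapses between co-active units change."""
    return [[w + lr * post[i] * pre[j] for j, w in enumerate(wr)]
            for i, wr in enumerate(weights)]

W     = [[0.5] * 4 for _ in range(4)]   # 16 synapses in a tiny one-layer net
grads = [[0.01] * 4 for _ in range(4)]  # a small error signal everywhere
pre   = [1.0, 0.0, 0.0, 0.0]            # one active input unit
post  = [0.0, 1.0, 0.0, 0.0]            # one active output unit

def count_changed(new):
    return sum(n != w for nr, wr in zip(new, W) for n, w in zip(nr, wr))

print(count_changed(global_update(W, grads)))     # 16: all weights touched
print(count_changed(local_update(W, pre, post)))  # 1: a single synapse touched
```

Learning one new association changes one synapse under the local rule, but all sixteen under the gradient rule — the "recompile the whole operating system" problem of footnote [1].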

GT proposes that excited states of recurrent (i.e. looped) neural circuits provide the tonic biases which create all conscious memories, short and long term. In his first (1943) paper on the topic [2], none other than Warren McCulloch himself describes this method of encoding change in neural network designs as the most biologically plausible, i.e. the most likely to be used by real brains: "The nervous system contains many circular paths whose activity so regenerates the excitation of any participant neuron that reference to time past becomes indefinite."
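McCulloch's "circular path" can be sketched as a single McCulloch-Pitts threshold unit whose output feeds back to its own input. The self-weight and threshold below are illustrative assumptions, not values from any GT or McCulloch source; the point is only that a one-tick stimulus regenerates itself indefinitely, i.e. the loop's excited state persists as a tonic bias.

```python
# Minimal reverberating circuit: one threshold unit with a self-loop.
# w_self and threshold are illustrative assumptions.

def step(state, external, w_self=1.0, threshold=0.5):
    """One McCulloch-Pitts update: fire iff total input exceeds threshold."""
    return 1 if (w_self * state + external) > threshold else 0

state = 0
history = []
for t in range(6):
    pulse = 1 if t == 1 else 0   # a single transient input at t = 1
    state = step(state, pulse)
    history.append(state)

print(history)   # [0, 1, 1, 1, 1, 1]: the loop keeps firing after the pulse ends
```

Once excited, the loop stays excited without further input — a memory held in activity rather than in synaptic weights.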

Chomsky remarked [3] that one major problem with words is attaching any reliable semantic relation to them. So did Lewis Carroll, sort of [4]. In many cases, we cannot even predict whether a word is a noun, a verb, or something else. In TDE/GOLEM Theory (TGT), this fact plays a central role in the semantic model. Each word by itself denotes a broad class of related percepts, known as a semantic state; professional linguists use the term 'morphology' for this fact. GT takes this one step further by making the following assertion: words and sentences have the same interrelationship as neurons and their interconnected representations. Both stand for percept classes. When words are used together in a sentence to describe a situation, the semantic 'solution' to the set of constraints represented by the sentence is much more tightly bounded than that represented by the words taken individually. The reason is the same as the reason neurons behave in a similar manner: they stand for features, or percepts, so taken together they stand for percept classes, i.e. a semantic state, typically referring to one of the subject's past experienced situations, current situation or predicted situations (beliefs). The semantic brain is a set-theoretic computation system.
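The set-theoretic reading above can be sketched directly: treat each word's percept class as a set, and a sentence's meaning as the intersection of the sets of the words it uses together. The word and percept inventories below are invented for illustration, not drawn from TGT.

```python
# Hedged sketch: sentence meaning as the intersection of word percept classes.
# The vocabulary and percept labels are invented for this example.

PERCEPTS = {
    "bank":  {"river_edge", "money_office", "plane_turn"},
    "steep": {"river_edge", "hillside", "price"},
    "muddy": {"river_edge", "field"},
}

def sentence_meaning(words):
    """Constrain meaning to percepts consistent with EVERY word used."""
    return set.intersection(*(PERCEPTS[w] for w in words))

print(sentence_meaning(["bank"]))                    # broad: three candidates
print(sentence_meaning(["bank", "steep", "muddy"]))  # {'river_edge'}
```

A word alone leaves its semantic state broad; each additional word in the sentence narrows the solution set, here to a single percept.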

By Chomsky's own metric, no other linguistic theory, past or present, solves this aspect of language. TGT does represent a valid solution because it addresses the issue fundamentally, i.e. at the neuro-representational level.



1. Imagine if you needed to recompile (ie reinstall) your entire operating system each time you added a new file or edited an old one!

2. McCulloch, W. S. & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.

3. If Noam Chomsky once had an opinion on some aspect of language, he is now (reputedly) unlikely to still hold it. 

4. "Must a name mean something?" Alice asks Humpty Dumpty, only to get this answer: "When I use a word... it means just what I choose it to mean - neither more nor less."

GOLEM Conscious Computers
All rights reserved 2020