A Brief History of Computation - incomplete - do not cite

The everyday use of numbers and counting for commerce preceded any formal notion of an abstract number system. The first formal system for modelling reality with numbers was arithmetic; in that sense, arithmetic was the world's first computer program. Arithmetic is a straightforward[1] extension of the linguistic concepts of nouns, adjectives and verbs: nouns stand for things, adjectives stand for how many of them there are (or for another property, like colour), and verbs stand for operators such as adding and subtracting.

Algebra also seems a logical extension of arithmetic, one in which abstract symbols play the key role. The most important benefit of algebra over arithmetic is the ability to change the 'subject' of a formula. This is also something we do in everyday language. Perhaps al-Khwārizmī, the Persian scholar usually credited with inventing algebra, saw the connection too.
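For instance (an illustration of my own, not one from the original text), the same relation between distance, speed and time can be rearranged so that any of the three quantities becomes the subject:

```latex
% Illustrative only: one relation between distance d, speed v and time t,
% written with each of the three quantities as the subject in turn.
\[
  d = v\,t
  \qquad\Longleftrightarrow\qquad
  v = \frac{d}{t}
  \qquad\Longleftrightarrow\qquad
  t = \frac{d}{v}
\]
```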

The ability to change the subject of a formula leads to the concept of a function. The subject is the computational 'output' of the function: it is the thing you don't know and want to find out. Unfortunately, not all formulas can be 'inverted' in this way. This led to the need for repetitive calculation (that is, computation) of intermediate results. These intermediate numbers are the current best estimates of the answer you need, and by their very nature they are not perfectly accurate. Isaac Newton invented an eponymous method for solving such problems, a method still used today. Every answer is the best available result, accompanied by a systematic estimate of its error.
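As a minimal sketch of the idea (the function, starting guess and tolerance below are my own illustrative choices, not anything from the original text), Newton's method repeatedly improves a guess until the correction becomes negligibly small:

```python
# A minimal sketch of Newton's method for solving f(x) = 0.
# The function, derivative, starting guess and tolerance are illustrative choices.

def newtons_method(f, f_prime, x0, tolerance=1e-10, max_iterations=50):
    """Refine an estimate of a root of f, returning the estimate and an error bound."""
    x = x0
    for _ in range(max_iterations):
        step = f(x) / f_prime(x)   # how far the current estimate appears to be off
        x = x - step               # the next, hopefully better, intermediate result
        if abs(step) < tolerance:  # stop once the correction is negligibly small
            break
    return x, abs(step)            # best available answer plus an estimate of its error

# Example: the square root of 2 is the positive root of x**2 - 2 = 0.
root, error = newtons_method(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root, error)                 # about 1.4142135623..., with a tiny error estimate
```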

The first 'computers' were people[2] paid to 'crank the handle' on mechanical or electromechanical calculators: machines with typewriter keyboards for input which printed out their answers. In the 1930s and 1940s several people (Turing, Church, von Neumann and others) began to consider seriously the consequences of replacing the (clever) people who did the computation with (stupid) machines. What if the intermediate computations somehow diverged rather than converged? Then the machine would never halt. But how would the 'programmer' know whether it was stuck in one of these 'infinite loops' or just taking a long time to compute a sufficiently accurate result? This is the so-called 'halting problem'.
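A small illustration (my own example, using the well-known Collatz iteration, not anything from the original text): the loop below has halted for every starting value anyone has tried, yet nobody has proved that it always does, so simply watching it run cannot distinguish 'slow' from 'stuck forever'.

```python
# Illustrative only: the Collatz iteration. It is unknown whether this loop
# terminates for every positive starting value n, so observing a long run
# cannot tell us whether the program is merely slow or will never halt.

def collatz_steps(n):
    steps = 0
    while n != 1:                              # does this ever fail to terminate? Unknown.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))                       # 111 steps, but there is no general guarantee
```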

Alan Turing wondered whether there were some way of deciding beforehand, by examining the form of the problem (its formula, description or specification), whether the computation would converge or diverge. This is why the 'halting problem' is so closely bound up with the famous 'decision problem'[3].

1. I'm sure the person who invented it didn't believe it was so obvious.

2. In books printed before the Second World War, the word 'computer' still meant a person.

3. The Entscheidungsproblem was proposed by the German mathematicians David Hilbert and Wilhelm Ackermann in 1928. Alonzo Church and Alan Turing published independent papers in 1936 proving that a general solution is impossible; the equivalence of their two approaches underlies what is now called the Church-Turing thesis.
