
How is the brain different from a computer?


by Liero Plantir
10/04/2021



This post is based on a section of the book The Singularity Is Near by Ray Kurzweil. I wanted to summarise it in order to understand some of the points better. As you can tell from the title, I’ll be laying out some of the main differences between the brain and a computer. The book was originally published in 2005, so I’ve expanded and updated the facts that are out of date; however, the majority of the points describe fundamental differences, so they still hold true.



1. The brain’s circuits are very slow, but massively parallel

A neuron and its synapses take a relatively long time to reset after the neuron fires, which leaves very few neuron-firing cycles available to make pattern-recognition decisions. fMRI and MEG scans show that judgements that do not require resolving ambiguities appear to be made in a single neuron-firing cycle (less than 20 milliseconds), involving essentially no iterative processes. Recognition of objects occurs in about 150 milliseconds, so even when we ‘think something over’, the number of cycles of operation is measured in hundreds or thousands at most, not billions, as with a typical computer.

However, the brain is massively parallel: it has on the order of one hundred trillion interneuronal connections, each potentially processing information simultaneously. These two factors (slow cycle time and massive parallelism) result in a certain level of computational capacity for the brain. It’s theorised that the brain operates at approximately 1 exaFLOPS, while the leading supercomputers are currently capable of around 200 petaFLOPS, so the brain still operates at the next order of magnitude (a factor of about five). However, it’s not necessary to use the same granularity of parallel processing as the brain itself, so long as we match the overall computational speed and memory capacity needed.
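To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python. The figures (10^14 connections, a couple of hundred firing cycles per second) are the rough estimates quoted above, not measurements, and such estimates vary by orders of magnitude across the literature:

    # Back-of-the-envelope comparison of brain vs supercomputer throughput.
    # All figures are rough estimates from the text, not measurements.
    connections = 1e14            # ~one hundred trillion interneuronal connections
    cycles_per_second = 200       # slow neuron-firing cycle rate
    parallel_ops = connections * cycles_per_second   # ~2e16 operations/second

    brain_flops = 1e18            # ~1 exaFLOPS (theorised capacity)
    supercomputer_flops = 200e15  # ~200 petaFLOPS (leading machines)

    print(f"Parallelism estimate: {parallel_ops:.0e} ops/s")
    print(f"Brain-to-supercomputer ratio: {brain_flops / supercomputer_flops:.0f}x")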


2. The brain combines analog and digital phenomena


The topology of connections in the brain is essentially digital - a connection exists, or it doesn’t. An axon firing is not entirely digital but closely approximates a digital process. Almost every other function in the brain, however, is analog and filled with nonlinearities (sudden shifts in output, rather than levels changing smoothly) that are substantially more complex than the classical model we’ve been using for neurons.

The detailed, nonlinear dynamics of a neuron and all of its constituents (dendrites, spines, channels and axons) can be modelled through the mathematics of nonlinear systems, and these mathematical models can then be simulated on a digital computer to any desired degree of accuracy. Alternatively, if we emulate the neural regions using transistors in their native analog mode rather than through digital computation, this approach could improve capacity by three or four orders of magnitude.
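As a concrete illustration of digitally simulating nonlinear neural dynamics, here is a minimal sketch using the Izhikevich neuron model (a standard published nonlinear spiking model - my choice of example, not one from the book). Note how the smooth analog dynamics produce sudden, almost digital spike events:

    # Euler-method simulation of the Izhikevich spiking-neuron model.
    # Parameters below give a 'regular spiking' cortical neuron.
    v, u = -65.0, -13.0          # membrane potential (mV) and recovery variable
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    dt, I = 0.5, 10.0            # time step (ms) and constant input current

    spike_times = []
    for step in range(2000):     # 2000 steps of 0.5 ms = 1 second
        # Analog, nonlinear dynamics: smooth drift punctuated by sudden shifts.
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # threshold crossed: a digital-like spike event
            spike_times.append(step * dt)
            v, u = c, u + d      # reset after firing

    print(f"{len(spike_times)} spikes in 1 s of simulated time")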


3. The brain rewires itself


Dendrites are continually exploring new spines and synapses. The topology and conductance of dendrites and synapses are also continually adapting. The nervous system is self-organising at all levels of its organisation. While the mathematical techniques used in computerised pattern-recognition systems such as neural nets and Markov models are much simpler than those used in the brain, we do have substantial engineering experience with self-organising models.

Contemporary computers, on the other hand, don’t literally rewire themselves (although emerging ‘self-healing systems’ are starting to), but we can effectively simulate this process in software. In the future we could implement this in hardware as well, although there may be advantages to implementing most self-organisation in software, which provides more flexibility for programmers.
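Here’s a minimal sketch of what ‘rewiring in software’ can look like: a Hebbian-style rule (connections between co-active units strengthen, idle ones decay) followed by pruning of connections that wither away. The update rule, group structure and thresholds are illustrative choices of mine, not a model from the book:

    import random

    # Eight units in two groups of four; units in a group tend to fire together.
    n = 8
    w = [[random.random() * 0.2 for _ in range(n)] for _ in range(n)]

    for _ in range(1000):
        group = random.choice([range(0, 4), range(4, 8)])
        active = [i in group for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if active[i] and active[j]:
                    w[i][j] = min(1.0, w[i][j] + 0.01)    # Hebbian strengthening
                else:
                    w[i][j] = max(0.0, w[i][j] - 0.001)   # slow decay

    # 'Rewiring': connections that decayed below a threshold are pruned away.
    survivors = sum(1 for i in range(n) for j in range(n)
                    if i != j and w[i][j] > 0.05)
    print(f"{survivors} of {n * (n - 1)} connections survive")  # within-group links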


4. Most of the details in the brain are random

While there are a great many stochastic (random within carefully controlled constraints) processes in every aspect of the brain, it’s not necessary to model every ‘dimple’ on the surface of every dendrite, any more than it is necessary to model every tiny variation in the surface of every transistor to understand the principles of operation of a computer. However, certain details are critical in decoding the principles of operation of the brain, which compels us to distinguish between them and the details that amount to stochastic noise or chaos. The chaotic (random and unpredictable) aspects of neural function can be modelled using the mathematical techniques of complexity theory and chaos theory.
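For a feel of what ‘chaotic’ means here, consider the logistic map, a standard toy system from chaos theory (my example, not a brain model). Two trajectories that start almost identically diverge completely, yet the system’s overall behaviour can still be characterised mathematically:

    # The logistic map in its chaotic regime: deterministic but unpredictable.
    r = 3.9                       # growth parameter, chosen in the chaotic regime
    x, y = 0.5, 0.5 + 1e-10       # two nearly identical starting points

    for _ in range(60):
        x = r * x * (1 - x)
        y = r * y * (1 - y)

    # The 1e-10 difference has grown to order 1: long-term prediction is impossible.
    print(f"x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")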


5. The brain uses emergent properties


Intelligent behaviour is an emergent property of the brain’s chaotic and complex activity. A good analogy is the design of termite and ant colonies, with their delicately constructed interconnecting tunnels and ventilation systems. Despite their intricate design, ant and termite hills have no master architects; the architecture emerges from the unpredictable interactions of all the colony members, each following relatively simple rules.
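Conway’s Game of Life is the classic computational demonstration of this (my example, not the book’s): each cell follows a trivially simple local rule, yet a ‘glider’ pattern emerges that travels coherently across the grid - behaviour stated nowhere in the rules themselves:

    from collections import Counter

    # Conway's Game of Life: each cell lives or dies by counting its neighbours.
    live = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}   # the 'glider' pattern

    def step(cells):
        # Count live neighbours of every candidate cell, then apply the rules:
        # a cell is alive next step with 3 neighbours, or 2 if already alive.
        counts = Counter((x + dx, y + dy)
                         for x, y in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    for _ in range(8):            # every 4 steps the glider moves one cell diagonally
        live = step(live)
    print(sorted(live))           # same shape, shifted: emergent movement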


6. The brain is imperfect


The nature of complex adaptive systems means that the emergent intelligence of the brain's decisions is suboptimal (it reflects a lower level of intelligence than would be represented by an optimal arrangement of its elements). It needs only to be good enough, which in the case of our species meant a level of intelligence sufficient to enable us to outwit the competitors in our ecological niche - for example, other primates, which also combine cognitive function with an opposable appendage, but whose brains are not as developed as ours and whose hands do not work as well.


7. We contradict ourselves


A variety of ideas and approaches, including conflicting ones, leads to superior outcomes. Our brains are quite capable of holding contradictory views; in fact, we thrive on this internal diversity. An obvious parallel is human society, especially a democratic one, which has constructive ways of resolving multiple viewpoints.


8. The brain uses evolution and the patterns are important


The basic learning paradigm used by the brain is an evolutionary one: the patterns of connections that are most successful in making sense of the world are the ones that survive. A newborn’s brain contains mostly randomly linked interneuronal connections, and only a portion of those survive in the two-year-old brain.

Certain details of the brain's chaotic self-organising methods, expressed as model constraints (rules defining the initial conditions and the means for self-organisation), are crucial, whereas many details within those constraints are initially set randomly. The system then self-organises and gradually comes to represent the invariant features of the information that has been presented to it. The resulting information is not found in specific nodes or connections but is a distributed pattern.
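A minimal sketch of this evolutionary paradigm (a toy of my own devising, not the brain’s actual mechanism): start with random connection weights, randomly mutate them, and let the variants that make better sense of the ‘world’ - here a trivial logical-OR task - survive:

    import random

    # Toy 'world': learn logical OR with one linear threshold unit.
    patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    def errors(w):
        # Count patterns the unit gets wrong (w[2] is a bias term).
        return sum(1 for (x1, x2), t in patterns
                   if (w[0] * x1 + w[1] * x2 + w[2] > 0) != bool(t))

    # Start from random connections; mutated variants that do better survive.
    best = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(500):
        candidate = [v + random.gauss(0, 0.3) for v in best]
        if errors(candidate) <= errors(best):
            best = candidate

    print(f"surviving weights: {[round(v, 2) for v in best]}, errors: {errors(best)}")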


9. The brain is holographic and deeply connected


There is an analogy between distributed information in a hologram and the method of information representation in brain networks. We also find this in the self-organising methods used in computerised pattern recognition, such as neural nets, Markov models, and genetic algorithms.

In addition, the brain gets its resilience from being a deeply connected network in which information has many ways of navigating from one point to another. The Internet is a good analogy: it has become increasingly stable as the number of its constituent nodes has increased. Entire hubs of the Internet can become inoperative without bringing down the whole network. Similarly, we continually lose neurons without affecting the integrity of the brain as a whole. Another relevant example is Bitcoin, where the strength of the blockchain network is positively correlated with the number of nodes.
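This resilience is easy to demonstrate on a toy network (the parameters below are illustrative): build a densely connected random graph, knock out a quarter of the nodes, and check that the survivors can still reach one another:

    import random
    from collections import deque

    # Random network: 200 nodes, each pair linked with 5% probability.
    n, p = 200, 0.05
    links = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                links[i].add(j)
                links[j].add(i)

    dead = set(random.sample(range(n), n // 4))       # lose a quarter of the nodes
    alive = [i for i in range(n) if i not in dead]

    # Breadth-first search from one survivor, ignoring dead nodes.
    seen, queue = {alive[0]}, deque([alive[0]])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in dead and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

    print(f"{len(seen)} of {len(alive)} surviving nodes still reach each other")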


10. The design of a brain region is simpler than the design of a neuron


Models often get simpler at a higher level, not more complex. Consider a computer: we don’t need to understand the detailed physics of semiconductors to model a transistor, even though the equations underlying a single real transistor are complex. A digital circuit that multiplies two numbers, although involving hundreds of transistors, can be modelled far more simply with only a few formulas. An entire computer with billions of transistors can be modelled through its instruction set and register description, which can be described in a handful of pages of text and mathematical transformations.
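To illustrate the layering (my example, not the book’s): at the gate level, a multiplier is hundreds of transistors; one level up it is a short shift-and-add procedure; at the instruction-set level it collapses to a single operation. The sketch below models the middle layer:

    def multiply_shift_add(a: int, b: int) -> int:
        """Model of a binary multiplier circuit: sum shifted copies of a."""
        result = 0
        while b:
            if b & 1:             # lowest bit of b is set: add the shifted a
                result += a
            a <<= 1               # shift a one bit left (multiply by 2)
            b >>= 1               # move to the next bit of b
        return result

    # At the instruction-set level, all of this is just one operation: a * b.
    assert multiply_shift_add(13, 11) == 13 * 11
    print(multiply_shift_add(13, 11))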