The computer is the ultimate tool: replacing inanimate objects like typewriters, notepads, calculators, photo albums, televisions and books; running complex models of attractors, road networks, the weather, the stock market and the universe; and replicating human bank tellers, porters, telephone operators, pilots, teachers, even doctors and artists. Analogies have been drawn between mind and computer, but the computer is so much more than a metaphor. My contention is that the usage, by philosophers, psychologists and cognitive scientists, of the computer as a mere metaphor for the mind is a misuse.
The mind, nature's most complex survival mechanism, is nothing more or less than a highly sophisticated, finely tuned computer.

Emergence, layers of abstraction, and Turing machines

One of the first computers was born out of Babbage's need to calculate large arithmetic operations, and was based on the observation that "an assemblage of unskilled workers, each knowing very little about the large computation" could be replaced by machines. Thus Babbage noticed that simple rules produce emergent phenomena. Emergence is produced by complex biological systems in which the whole is more than the sum of its parts.[8] Each worker can be thought of as a neuron or an ant: on their own practically useless, but placed in a network, or a colony, highly complex behavior is produced by following simple rules. The simple rules percolate upwards, creating a global effect: answers to complicated arithmetic operations in the case of Babbage, gliders in the case of Conway's Game of Life, sociality in the case of ants, DNA in the case of organisms and minds in the case of neurons. Babbage's realization was the first step in the creation of the computer.
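The glider case can be made concrete with a short sketch (my illustration, not from the original essay): every cell of Conway's Game of Life follows the same two local rules, yet a "glider" emerges that travels diagonally across the grid, a global pattern no individual rule mentions.

```python
from collections import Counter

def step(live):
    """Apply Life's two rules to a set of live (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Rule 1: a live cell with 2 or 3 live neighbours survives.
    # Rule 2: a dead cell with exactly 3 live neighbours is born.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):          # after four steps the glider reappears...
    cells = step(cells)
# ...shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the two rules speaks of "gliders" or of motion; the travelling pattern percolates upwards from them, in the same way the essay describes answers emerging from Babbage's unskilled workers.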
Building on this was von Neumann, who used computers as a representation of the human brain, in something close to a tools-to-theories fashion. He promoted the "binary digit" nature of the neuron and "consider[ed] living organisms as if they were purely digital", although he did question how legitimate it is to transfer our experience with computing machines to natural organisms. In agreement with von Neumann's comments, it is not necessary that the components of computers have a direct correspondence to those in neurological and cognitive theories in order to claim that a mind is a computer.
The stored-instruction computer, the architecture bearing von Neumann's name, clearly demonstrates that hardware, the physical realization, can be abstracted away from software. Abstraction layers[notes 1] are hierarchical: each new layer adds complexity, organization and an emergent phenomenon. Abstraction layers are deployed by Marr, Chomsky, Pylyshyn, Rumelhart and McClelland, Newell and Anderson in their cognitive theories, in computer science and in the natural sciences.[8] For example, a desktop computer can be subdivided into the following simplified layers of abstraction: the electronics, logical gates, adders and (de)multiplexers; the hardware, hard disks, the CPU and RAM; and the software, word processors, web browsers and music players. When discussing web browsers there is no need to care about logical gates or what part of RAM is in use, because it is impossible to reduce higher layers to lower ones, although anything that occurs at one layer reverberates through all others.
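These layers can be miniaturized in a few lines (a toy of mine, not the essay's): gates at the bottom, an adder built only from gates, and integer addition built only from the adder, with each layer usable without knowing how the one below works.

```python
# Layer 1: logic gates (the "electronics").
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

# Layer 2: a full adder (the "hardware"), built only from gates.
def full_adder(a, b, carry):
    s1 = XOR(a, b)
    return XOR(s1, carry), OR(AND(a, b), AND(s1, carry))  # (sum, carry-out)

# Layer 3: integer addition (the "software"), built only from the adder.
def add(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(19, 23))  # 42 -- the caller never touches a gate
```

The `add` function, like a web browser, is written entirely in the vocabulary of the layer beneath it; the gates reverberate through every sum, yet `add` cannot be meaningfully discussed gate by gate.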
The ability to abstract away, ignoring the components of the implementation of lower layers, provided the exchange of information where layers meet conforms to specified standards, is known as the principle of multiple realizability (MR). Computers can be made of vacuum tubes, neurons, DNA or quantum computing chips, and these are proven to be computationally equivalent.[15] Emergence and the principle of MR appear to refute the philosophical theory of reductionism, which claims that wholes are always exactly the sum of their parts.[16] As shown above, wholes are independent of, and more than, the sum of their parts. Turing worked on the high-level problem of whether machines can think, leading to the formulation of the Turing Test. Turing's work, unlike the previously mentioned examples, was inspired by human intelligence; he created the mathematical concept of the Turing machine (TM), on which computers today are based. The TM is composed of a set of internal states and a head that can read and write, 0 or 1, to an infinitely long tape by moving left or right.
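A minimal sketch of such a machine (my construction, assuming a tape whose unwritten cells read as 0) shows all three ingredients, states, a read/write head, and an unbounded tape:

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", steps=1000):
    """rules maps (state, symbol) -> (write, move, next_state); move is -1 or +1."""
    cells = defaultdict(int, enumerate(tape))  # unwritten cells read as 0
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return [cells[i] for i in range(min(cells), max(cells) + 1)]

# A tiny machine: scan right over a block of 1s and append one more 1,
# i.e. the successor function on unary numbers.
successor = {
    ("start", 1): (1, +1, "start"),  # keep moving right over the 1s
    ("start", 0): (1, +1, "halt"),   # first 0: write a 1, then halt
}
print(run_tm(successor, [1, 1, 1]))  # [1, 1, 1, 1]
```

Note that `run_tm` takes the machine's rule table as ordinary data, which already hints at the universal machine discussed below.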
Turing, using logic and mathematics, proved that TMs are extremely computationally powerful. The Church-Turing (C-T) thesis, which states that for any given set of instructions a TM can be built to follow them, has been met with no counter-examples to date. Many variants of TMs exist, amongst them analogue, continuous, non-deterministic, probabilistic, real, decimal and quantum TMs, and all are computationally equivalent to classical TMs; this property is known as Turing equivalence.[18] Therefore non-determinism, seemingly a very natural, random and human property, is in fact expressible in a deterministic way, since it is Turing equivalent. The same can be said for continuity, as expressed by Siegelmann, since it can be "simulated" by a non-continuous TM. A TM, say a calculator, with its input being numbers and operations, is a very rigid framework, since it can only execute specific algorithms. The universal
TM (UTM) is a TM that accepts as input a TM to be emulated along with the data to be manipulated; adapting the previous example, the input would be a calculator algorithm plus numbers and operations on numbers. Any TM can be implemented by a UTM, and any algorithm can be implemented by a TM.

What does it mean to say that the mind is a computer?

The C-T thesis means that computationally, but not necessarily structurally nor biologically, minds and computers are equivalent in power, and so it is physically possible to design a computer that emulates the mind on at least the highest possible abstraction layer.
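The earlier claim that non-determinism adds no computational power can itself be sketched deterministically (my example, using a finite automaton rather than a full TM for brevity): a non-deterministic machine is simulated by tracking, at every step, the set of all states its branches could be in.

```python
# Non-deterministic rules: from a state, on a symbol, several successor
# states are possible. This machine accepts strings ending in "01".
nfa = {
    ("s", "0"): {"s", "a"},   # guess: is this 0 the start of the suffix?
    ("s", "1"): {"s"},
    ("a", "1"): {"accept"},
}

def accepts(string):
    states = {"s"}                       # all branches start at "s"
    for ch in string:
        # One deterministic step: the union of every branch's possibilities.
        states = set().union(*(nfa.get((q, ch), set()) for q in states))
    return "accept" in states

print(accepts("10101"))  # True
print(accepts("0110"))   # False
```

The simulation never "guesses"; it carries all guesses at once, which is exactly why the non-deterministic variant is Turing equivalent to a deterministic one.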
Such a computer would pass the Turing test with flying colours, as emotions and their behavioral expressions would emerge. This all depends on finding a way to input to a TM a mind and all relevant environmental stimuli. This means a physical computer can be built from which a mind emerges, given the right hardware realization for speed; it does not necessarily follow that an inspection of the human brain's internal parts would lead to the discovery of anything about the computer's anatomy, and vice versa.
Saying the mind is a computer means we can analyze it in an analogous way, showing parallels, but not equivalence, between abstraction layers, for example: natural languages with programming languages, mental language with machine language, brain regions with hardware components, neural networks[notes 2] with electronic circuitry, neurons with logical gates and axons with wires. Mind-computer equivalence allows analysis of mental computations, functions, in terms of space and time complexity, but not necessarily in a Fodorian way.
Space complexity is the amount of mental resources devoted to the computation, and time complexity is the amount of time taken to produce output, each given as a function of the input. This functionalist view of mental processing is further supported by the principle of MR: material realization is unimportant, as functions are a layer of abstraction above. The MR principle removes any dualist notions; minds emerge from brains in the way they could from computers, and it is only due to its environment that the brain is physically realized the way it is.
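"Work as a function of the input" can be made concrete with a toy of mine (not the essay's): count the basic steps two search procedures take on inputs of growing size, and the differing growth rates are their time complexities.

```python
def linear_search_steps(items, target):
    steps = 0
    for x in items:
        steps += 1               # one comparison = one step
        if x == target:
            break
    return steps

def binary_search_steps(items, target):
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1               # one probe = one step
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (1_000, 1_000_000):
    data = list(range(n))
    # Worst case: the target is the last element.
    print(n, linear_search_steps(data, n - 1), binary_search_steps(data, n - 1))
```

Linear search grows in step with the input; binary search grows roughly with its logarithm. The same kind of profile could in principle be drawn for a mental computation, regardless of whether it runs on neurons or silicon.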
On the other hand, if two brains could be created with identical physical structure, and they were in exactly the same state at the same time, they would be indistinguishable; the two minds supervening on those brains would be identical, since we know two TMs with the same structure are identical only if they are also in the same state at the same time. The equivalence also explains why access to mental states is not always possible. A high-level computer program does not have access to what specific logical gates are doing: the activation of a logical gate is part of its "subconscious".
Abstraction layers of mind show that problems of mental causality can be solved by explicitly defining what lies on each layer. It cannot be claimed that the activation of a logical gate caused a web browser to open a new window, any more than a neuron firing caused a human to eat. Causality must be bounded within layers of abstraction in order for it to make sense: a logical gate caused the next gate's activation, and, on a higher level, clicking the mouse caused a web browser to open a new window. The following thought experiment is proposed.
A computer is designed with the following parts: core instructions, a library of optional instructions, perception/input mechanisms and reaction/output mechanisms; and the following abilities: interpretation and adaptation of its own instructions, and comparison of optional instructions. The central core of instructions enables it to interpret its own programming code, and thus it possesses the ability to evaluate that code using feedback from its environment and to decide what program in itself to run.
This computer can program itself, by "reading" itself and deciding on modifications to its own high-level algorithms. It thus forms opinions on which rules to follow in which situation and for what sort of result. A similar experiment has been proposed by Sloman. Does this machine not have a theory of mind? Do minds not perform high-level functions by programming themselves in much the same way? Upon being explicitly informed of a behavioral output, say touching one's nose whilst lying, one can easily stop oneself.
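A very loose sketch of the thought experiment (my construction; `TARGET`, `library` and the strategies are invented for illustration): a program with a core loop, a library of optional "instructions", and the ability to evaluate those instructions against environmental feedback and swap which one it runs.

```python
TARGET = 100   # the "environment"; distance to it is the feedback signal

def feedback(guess):
    return abs(TARGET - guess)

# The library of optional instructions the machine may adopt.
library = {
    "step_up":   lambda g: g + 1,
    "step_down": lambda g: g - 1,
    "halve_gap": lambda g: g + (TARGET - g) // 2,
}

def run(guess=0, rounds=20):
    for _ in range(rounds):
        # Self-evaluation: test each optional instruction against the
        # environment and adopt whichever would reduce the error most.
        active = min(library, key=lambda name: feedback(library[name](guess)))
        guess = library[active](guess)
    return guess, active

print(run())
```

Selecting among pre-written strategies is of course a crude stand-in for genuine self-modification, but it shows the core loop: read own options, evaluate against feedback, change which program to run.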
This results in consciously "reprogramming" oneself in order to change the output, the behavior. Dreyfus claimed that "commonsense capacity" and "background knowledge" are needed for a computer to function as a mind, that rules and facts are not enough to represent these concepts, and that therefore a machine cannot contain them. This is flawed reasoning: just because something is implicitly stored does not mean it is not expressible in rules and facts. Comparing implicit with explicit knowledge is not comparing like with like: they are completely different abstraction layers.[13] Searle believes that a computer's possession of a mind is equivalent to it running a simulation of a stomach digesting food; because the simulated stomach is not real, the simulated mind must also not be real. But brains can run "simulations" too, in the imagination or in hallucinations, and in much the same way the brain "simulates" the mind; any other view would be dualist. Consciousness is nothing more than a high-level mechanism evolved to give humans an identity. The notion that "the mind is to the brain as the program is to the hardware"[notes 3] is only true in theory, currently.
The mind and programs reside on the highest possible abstraction layer, but minds emerge, whereas most programs are designed and written. This creates a distinction in practice, but evolutionary algorithmic techniques, which are becoming increasingly popular, produce strong emergence. Searle's Chinese room (CR) thought experiment is supposed to show that computers will never have intentional states or semantic meaning, and that functionalism is wrong.[8] Symbols gain their meaning by percolating upwards through the use of syntax on basic semantic links to the real world; thus there is no need for a homunculus. Neurons in the brain know nothing of language, in the same way the person in the Chinese room does not know Chinese. Symbolic manipulation is but a high-level addition humans perceive; TMs use symbols, but a connectionist approach, using for example a hardware implementation of neural networks, does not.
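A minimal evolutionary sketch (my toy, not a real evolved program) shows a solution emerging rather than being written: a population of bitstrings is mutated and selected by fitness until the goal pattern appears, without anyone coding the answer directly.

```python
import random

random.seed(0)
GENOME, POP = 32, 40

def fitness(g):                       # number of "correct" bits
    return sum(g)

def mutate(g, rate=0.02):
    return [b ^ (random.random() < rate) for b in g]

population = [[random.randint(0, 1) for _ in range(GENOME)]
              for _ in range(POP)]
best = max(population, key=fitness)
for generation in range(200):
    if fitness(best) == GENOME:
        break
    # Selection with elitism: keep the best, breed mutants of the fittest half.
    population.sort(key=fitness, reverse=True)
    population = [best] + [mutate(random.choice(population[:POP // 2]))
                           for _ in range(POP - 1)]
    best = max(population, key=fitness)

print(fitness(best), "/", GENOME)
```

No line of this program states what the final bitstring should look like step by step; the result percolates up from variation and selection, the same pressure the essay appeals to.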
Searle accuses strong AI of dualism, ignoring that TMs prove to us that hardware and software are equivalent,[notes 4] which is much like saying the physical and the mental are the same. He claims hardware is inflexible and that when it underpins a system it affects it to a great extent; this is only true of time complexity. Ants form a collective brain, albeit without the human type of consciousness. In theory there is nothing to stop ants from evolving a collective consciousness within a social colony, given evolutionary pressure to do so.
Searle also attributes the term behaviorism to strong AI because of the Turing test. Currently there is no way to examine intelligence other than behaviorally: providing input and evaluating output. Nothing would differentiate intelligence from a "mock-up", meaning they are one and the same; any other interpretation is dualist.[notes 5] The Churchlands accept the mind-computer hypothesis, but claim that because brains are parallel computers, neurons are analogue, axons are bidirectional and the brain is a dynamical system, minds should not be thought of in terms of classical digital computers.[24] But the C-T thesis and the principle of MR mean that if at each abstraction layer we have equivalent exchange of information, these differences are irrelevant. Lucas argues against mind-computer equivalence using Gödel's first incompleteness theorem, which states that any consistent axiomatic formulation of number theory includes undecidable propositions. Formal systems using number theory[notes 6] cannot contain both a complete and consistent set of axioms, nor can an infinite list of complete and consistent axioms be produced.
So, the argument goes, there exist formulae that a computer cannot prove to be true but that the mind can see, and show, to be true; therefore minds and computers are not equivalent. But Gödel's theorem applies only to formal, consistent, deductive systems. The claim here is that the mind uses abduction and induction along with deduction, and cognitive dissonance and cognitive biases show a lack of consistency that is highly ingrained. There is no reason for the mind to be consistent if survival can be attained without it.
The mind is a quasi-consistent system: not only inconsistent, but inconsistent in its inconsistency, sometimes being consistent and at other times fundamentally not. Computers are also able to apply non-deductive reasoning, change their goals, learn, and behave non-deterministically and probabilistically, and are therefore by definition unaffected by Gödel's theorem.[32] Another point Lucas makes is that computers cannot gauge their own performance without "becoming a different machine", adding to their complexity by augmenting their programming code.
But computers perform recursion seemingly in the same way minds do: "The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions." When one knows fact X, one can recursively know that one knows fact X, and know that one knows that one knows, ad infinitum, fact X. The difficulty in actually contemplating such notions grows with each step.

Conclusion

On the one hand, the brain was built in single evolutionary steps from the bottom up: from a single neuron-like structure, to a collection of neurons, to a proto-brain which slowly incremented in size, computational ability and speed until the mind emerged. On the other hand, computers are engineered from the top down and are thus, from conception, easier to analyze. The function performed by the human brain to produce a mind is unknown and has evolved over time. So a computer endowed with a method of adaptation and an evolutionary pressure to produce a mind will do just that.
In both cases a mind emerges from, and supervenes on, the physical realization beneath. Disagreements about mind-computer equivalence normally boil down to comparisons between dissimilar abstraction layers, misunderstanding the role and importance of TMs, the C-T thesis and MR. Expecting humans in about a century to not only understand but duplicate what evolution has been perfecting for millions of years is unrealistic. If something like a mind cannot yet be run on a TM, it is because humans have yet to discover its algorithm, not because the algorithm does not exist.
Critics of the mind-computer equivalence thesis need to face up to the reality of computers creating art and music, (almost) passing the Turing test, making scientific discoveries, learning, perceiving their environment and processing natural language, all to very impressive degrees. They also need to accept that the mind's remaining abilities were tuned by evolution for survival and, since computers can evolve, will be acquired in some form or another given enough time. Maybe some find it hard to accept computer-mind equivalence because they think it insulting.
It is degrading to them even to postulate that computers can, even in theory, do all that their own minds can. Perhaps they are also intimidated by the notion of functional completeness: that logic, computers and therefore minds can be built using just NAND gates,[notes 7] for example. But the fact that the mind is capable of understanding itself, the most complex machinery in nature, to the point of being able to create a potentially better copy, in terms of memory capacity, arithmetic manipulation, speed of processing and lifespan, is one of the greatest endeavors humanity has undertaken.
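Functional completeness can be demonstrated in miniature (my sketch): the single two-input NAND gate suffices to build every other Boolean operation, and hence, in principle, any circuit.

```python
def NAND(a, b): return 1 - (a & b)

# Every other gate, built only from NAND.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

# Exhaustively check all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))
```

Since adders, multiplexers and memory cells are themselves combinations of these gates, the whole tower of abstraction layers described earlier rests, if one wishes, on this one primitive.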