Is the brain a machine that operates according to an algorithm? I have doubts.
Turing's halting problem and Gödel's incompleteness theorem are strong arguments against this notion.
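Since the halting problem gets invoked here, it may help to see the actual shape of the argument. The following is my own toy Python sketch (not anyone's real proof code) of Turing's diagonalization: assume a hypothetical decider `halts(f, x)` exists, then build a program that contradicts it.

```python
# Sketch of the halting-problem diagonalization. Assume, for contradiction,
# a total function halts(f, x) that decides whether f(x) eventually returns.
# (No such function can exist; that is the point of the construction.)

def make_paradox(halts):
    """Build the self-referential program used in Turing's proof."""
    def paradox(f):
        if halts(f, f):      # if f would halt when run on itself...
            while True:      # ...loop forever
                pass
        return None          # ...otherwise halt immediately
    return paradox

# paradox(paradox) halts exactly when halts(paradox, paradox) says it
# doesn't, so whatever halts() answers about that call is wrong.
```

Feeding in any candidate `halts` produces an input on which it fails, which is why no algorithm can solve the problem in general.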
This title sort of implies that there'd be a thought process to read: "WHY THE BRAIN IS NOT A MACHINE".
This sentence shows the beginning, or the basis, of a thought process: "Turing's halting problem and Gödel's incompleteness theorem are strong arguments against this notion."
But where's the actual thought process?
For example, I can't just say Varcron is scummy because he made x post. I mean, I could, but what good would it do?
lmfao u guys
@Oberon i hope this is the case
I do not know if the mind is a computer or not, but the way it functions is how we are modeling the next generation of computers, and it is (to my limited understanding) the leading design factor in how we run AI through either analog or digital functions.
Maybe this will give you some answers?
I was trying to start a debate about this, but I don't know how to phrase my thoughts. I don't believe the mind is a machine because of the way I think, and the way others think. I often get stumped when logic seems to lead to two mutually contradictory conclusions; something other than logic then leads me either to pick one of them or to formulate a new conclusion, and this something is NOT from a formal system.
You can do something without understanding what it is, and there's a very clear difference between doing with understanding and without, in behaviour. Give someone a list of instructions to follow without telling them what they do or what they're for. They will follow them and seem intelligent if you gave them something "smart" to do, but not have a fucking clue of what they're doing. It's like studying for an exam; there's a huge difference if you do it mechanically, and if you actually understand the material.
This is basically what The Chinese Room Argument states, that mere algorithmic computation can never lead to understanding (and it is in effect devoid of "meaning"). If you were to accept the premise that mere computation is enough for understanding and consciousness, then it must follow that it does not matter how the computation is performed. This way, you can have a piece of paper that is conscious, or a system of water pipes that happens to implement an extremely complicated program. This is clearly absurd.
On the other hand, if you were to accept the premise that the mind is indeed not a machine, then we have to extend the hierarchy of computational machines (and the definition of "computation") to include the mind as well. The hierarchy would now include the human mind as a hyper-Turing machine (I suspect two or three levels up in the hierarchy compared to Turing machines, to leave space for animals and babies). This leads to some interesting conjectures in ethics and philosophy.
Firstly, it implies that there is no single "rule" of morality that applies in all scenarios; not even multiple ones that apply in different contexts, but rather something that is a kind of "rule" that is non-algorithmic in nature. This does not necessarily mean that morality is subjective; it could be entirely objective but uncomputable by algorithmic means (would anyone even be surprised about that?).
Second, it implies that the set of all possible thoughts humans can have is larger than the set of rational/integer/natural numbers, possibly as large as the set of irrationals, because otherwise Turing machines would be capable of generating all possible thoughts, as a kind of "language".
Third, if you were to extend the notion of the halting problem to human beings (it has already been done for hypothetical hyper-Turing machines, such as oracle machines), then it might be the case that in interactions between two or more human beings, the "optimal" solution in all scenarios (or any solution at all) is undecidable. Maybe the game of Mafia is undecidable; has anyone ever thought about that?
What's the meaning of the word "machine" in the context of this topic?
The brain is a complex organ. It somehow allows us to be conscious of our surroundings, but it is a mystery what really contributes to this causality. My favorite description of this is in the book "The Name of the Wind", where one of the characters describes it as such:
"Each of us has two minds: a waking mind and a sleeping mind. Our waking mind is what thinks and talks and reasons. But the sleeping mind is more powerful. It sees deeply to the heart of things. It is the part of us that dreams. It remembers everything. It gives us intuition."
I'd like to propose this thought onto the table: the brain exists in a sort of limbo on the edge of chaos. Without order, there would be no computational ability. Without randomness, there would be neither rational thought nor imaginative ability. Somehow, these two factors by themselves wouldn't be sufficient for sophisticated lifeforms, but when put together they complement each other and bring the perfect balance.
I liked the display, early in the vid, of how complicated the output actions of even a single neuron in the brain are, and we have billions of them.
The AI's ability/method of recognizing things from inputs (vision now; betcha smell, touch, and sound will be next?) seems very human-brain-like. But AFAIK that's where the AI's humanlikeness ends. The way AIs learn games is just trial and error a billion times. Put that AI in the same situation and it will do the same action each time (Leela Chess doesn't count; the slight RNG is artificial, I believe). We learn in more ways than just that, and our actions, down to the tiniest of details, are not reproducible.
Humor me this:
OK, machines would always get the same output from the same inputs, while a person could, and sometimes would, reach different conclusions from the same information depending on the day. Why do you think that is?
What do you think of the idea that it's due to the fact that we have billions and billions of inputs which can never be the same and that's the reason our outputs can be different with the same inputs of information?
In other words, could you agree with the thought that we ARE operating according to an algorithm, but the number of inputs we are operating on is insane and therefore unreproducible? It seems to me that you're taking only information inputs into account and noticed that the output is determined by more than just our "algorithm of logic" (quotation marks because I'm unsure of the term). We operate on more "algorithms" than just our logic. We operate on more inputs than just information.
My view is that both machines and humans operate on the input-algorithm-output basis, we're just more complex.
But like, we certainly aren't "computational in nature", as I understand the phrase, because we operate on more than just information and logic. Maybe operating on information and logic is what it means to be a machine? In that case our brains are certainly not machines. :P
I mean, just think about it. If you are just executing algorithms, what you're basically describing is having a list of instructions for every single possible situation, and that's just not tenable. You'd literally be progressing through life mechanically, as though it were math.
There is a word for such people who live mechanically; they are called idiots. And even idiots are more sophisticated than any computer.
Some parts of your question here merit Newton's flaming laser sword. There is no data I know of in existence to validate any opinion on the subject. The two edges I know of would be between how AI processes are modeled and some of the help I have gotten over the years from the Center of Excellence in Waco. The approaches they have had for the treatment of PTSD and TBI have been very elegant, although I am deeply angry about how the central Texas VA has crippled their program.
Back on topic: deterministic cognition really changed the way I think. When I was exposed to the idea that every thought is some combination of previous experiences and current circumstances, it changed the way I thought. I intentionally began seeking out ways to 'think outside the box' because I thought I recognized what that box was. I am not sure if I was right so many years ago, but I certainly have been able to think differently than most people over the years. I would say that even if our thought processes are somewhat deterministic, we still have free will, given our ability to steer that combination of factors.
The best argument I've heard in favour of my opinion is probably the Chinese room argument. Searle makes some points I don't entirely agree with, though; while he agrees with me that machines cannot have understanding, he makes this (in my opinion) weird distinction between "having understanding" and "acting intelligently", in the sense that machines, although deprived of understanding, can act intelligently, which is a bit of a strange conclusion to derive.
In that sense he appears to be conflating consciousness and understanding.
Penrose has a position very similar to mine, namely he believes that machines can neither have understanding nor act intelligently; therefore you would be able to tell, by means of some kind of test (a Turing test, if you'd like), whether an object has understanding or not (he goes further than this and says that such objects would probably also be conscious, like you and me, and I agree with this; however, this is mostly based on aesthetics, as I don't think there's any hard evidence for or against it).
Other than those two, the only people who've sorta talked about this from the anti-computationalist side are Gödel, Leibniz (he was a dualist) and Putnam. I've not read anything Putnam has said so I'm not familiar with his arguments.
It's interesting to see that this view is not only incredibly fringe, but nobody seems to be trying to argue in favour of the anti-computationalist view; everybody's accepted the computationalist theory, seemingly without even considering whether it is correct or not.
Certainly, my course programme has never had a debate over whether or not artificial intelligence actually is intelligence. Everyone sort of accepts this assumption, but nobody questions it. It's weird.
And yes, I agree that thinking is deterministic. I am aware there is little hard evidence out there, but I am pretty convinced of this because, if thinking weren't deterministic, you'd expect more intelligent people to be more random, which is a bit strange. I mean, how could something that gets more random be better? Shouldn't it get... worse, approaching a 50/50 split?
An algorithm with changing inputs and outputs is still an algorithm, and can therefore be simulated by a static algorithm, because the two classes are equivalent in computational capability. Hence, a non-computational mind cannot be explained along those lines.
There is a countable number of algorithms. You can have a Turing machine that simply generates those algorithms - and the inputs and outputs - of the mind, and runs them entirely deterministically.
The number of thoughts a non-computational mind could generate must therefore be uncountable, because, again, otherwise a Turing machine would be able to generate them. Hence: not an algorithm.
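For what it's worth, the countability claim is easy to make concrete: every algorithm is a finite string over a finite alphabet (its source code), and finite strings can be listed one by one, shortest first. A toy Python sketch of my own, using a stand-in two-symbol alphabet:

```python
from itertools import count, product

# Every algorithm is a finite string over a finite alphabet, so the set of
# all algorithms can be laid out in a single sequence -- which is exactly
# what "countable" means. A two-symbol alphabet stands in for real syntax.

ALPHABET = "01"  # any finite alphabet works; real source code is just bytes

def all_programs():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# The first few entries of the enumeration:
gen = all_programs()
first = [next(gen) for _ in range(6)]
# first == ['0', '1', '00', '01', '10', '11']
```

Any given program appears at some finite position in this list, which is all the countability argument needs.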
I personally find open speculation that cannot hope to solve an issue very interesting, but I also think this question is not one I can take a solid stance on. The question also reminds me of the theory that we do not exist and are all a computer program running a simulation. Are we? I doubt it, but I have no justification to say it's impossible.
I of course do not believe this is the case but it wasn't the main point of my argument. Like you could argue our intelligence is "deterministic" and thus we lack free will but that does not necessarily mean our intelligence is mechanistic.
I question, however, on the philosophical aspect of things, what the difference between a simulation and a universe created by a Supreme Being is. The Supreme Being dreamt it up from nothing. Isn't that a simulation?
If you were to ask me, in the hypothetical hierarchy of computational machines that is extended by hyper-Turing machines, God would be the ultimate hyper-Turing machine, infinitely capable of all possible computations. In light of this, you can view God having created the Universe as a kind of "hyper-computation" and, since simulations must be the result of some computation, the very boring answer is that, yes, we live in a simulation.
Also Ozy I just realised I didn't explain the terms I used properly. An algorithm is a list of instructions, but the problem is that, no matter how much you vary this so-called input/output that humans have, it still is an algorithm, even if it somehow "changes". If the derivative of the algorithm, if you want to call it that (the "rule" according to which the algorithm changes, basically), is still algorithmic and decidable by Turing machines, then in the end what you would have is still an algorithm, and therefore the brain would still be a machine.
The only way in which having a changing algorithm matters is if the rule according to which the algorithm changes is not computable by a Turing machine, because in that case, it really is the "changing algorithm" doing the heavy lifting and turning our Turing machine into a hyper computer.
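Here's a tiny, made-up Python illustration of that point: if the rule by which the algorithm changes is itself computable, the whole "changing algorithm" folds into one ordinary static algorithm. The function names and the specific rule are my own invention, chosen only for clarity.

```python
# Toy illustration that a "changing algorithm" with a computable update
# rule is still one ordinary algorithm: fold the per-use rule and the
# current state into a single static program.

def changing_step(state, x):
    """The 'algorithm of the day': its behaviour depends on state."""
    return x * state

def update_rule(state):
    """Computable rule by which the algorithm 'changes' after each use."""
    return state + 1

def static_algorithm(inputs):
    """One fixed algorithm that simulates the whole changing family."""
    state, outputs = 1, []
    for x in inputs:
        outputs.append(changing_step(state, x))
        state = update_rule(state)  # the "change" is just more computation
    return outputs

# static_algorithm([5, 5, 5]) == [5, 10, 15]: same input, different
# outputs over time, yet the whole thing is a single deterministic program.
```

Only if `update_rule` were itself uncomputable would the construction fail, which is exactly the point made above.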
Some other terms:
A Turing machine. You do not really need to know the formal description; I know it, and trust me, it's not even useful to know the definition, because it is not immediately apparent why Turing machines have the capabilities that they do.
If you still want to know what a Turing machine is: it's a hypothetical computer with a finite tape that can be extended indefinitely, with a tape reader that can move left or right across the tape, one cell at a time, and that can read and write symbols from/to the tape. It also has a (finite) set of possible states that it can be in at any given point.
At each time step, the machine does the following, in this order:
1. Write a new symbol or erase the symbol at the current tape reader's position
2. Move the tape head to the left, right, or stay at current position
3. Change its state to a new one or keep the same state
It has a giant table of transitions for each combination of symbol read and state.
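The description above can be sketched in a few lines of Python. This is my own minimal simulator, not any standard library; the transition table encodes the "giant table" just mentioned, and the example machine simply flips every bit of its input.

```python
# Minimal Turing-machine simulator following the three steps above.
# Transitions map (state, symbol) -> (symbol_to_write, move, next_state).

def run(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = transitions[(state, symbol)]
        tape[pos] = write                       # 1. write a symbol
        pos += {"L": -1, "R": 1, "S": 0}[move]  # 2. move the head
                                                # 3. state already updated
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit, then halt on the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "S", "halt"),
}
# run(flip, "0110") == "1001_"   (the trailing blank cell was visited)
```

Everything interesting about a particular machine lives in its transition table; the simulator itself never changes.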
If you're left feeling confused after having read that, trust me, you're not the only one. The only reason I can argue about this is that I have programmed a decent amount and I know what Turing machines are capable of doing, because they are exactly as powerful as traditional programming languages.
Technically, programming languages aren't quite as powerful, because Turing machines need an infinite, or finite but infinitely expandable, amount of memory, which, obviously, no real computer can have. But that's a very boring requirement that nobody gives a shit about, so...
It is quite plain that the brain is at least as capable as a (finite) Turing machine, because you can simulate a Turing machine by hand. Literally. It's not even hard, just boring and time-consuming to do. Turning oneself into an unbounded Turing-complete machine can be done by merely gaining access to an infinite amount of paper to write stuff on. It's not interesting.
And it is quite obvious the brain is at least equivalent to a finite Turing machine (also known as a linear bounded automaton), because Swiss German, a variety of German spoken in Switzerland, is known to be unparsable by anything less powerful than a linear bounded automaton.
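The formal-language point can be made concrete with the textbook stand-in for cross-serial dependencies: the language a^n b^n c^n, which no context-free grammar (so no pushdown automaton) can recognise, but which needs only memory proportional to the input, i.e. a linear bounded automaton. A toy Python recogniser of my own:

```python
import re

# The language { a^n b^n c^n : n >= 1 } is the classic example that sits
# beyond context-free grammars but well within a linear bounded automaton:
# checking it only needs counters bounded by the input length.

def is_anbncn(s):
    """Accept exactly the strings of the form a^n b^n c^n, n >= 1."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

# is_anbncn("aabbcc") -> True ; is_anbncn("aabbbcc") -> False
```

The regex only checks the a-then-b-then-c shape; the equal-length comparison is the part a pushdown automaton cannot do, since its single stack can match two blocks but not three.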
To be clear, are you discussing whether or not you think the brain is deterministic?
From your posts it sounds like that's not what you're interested in; what is it, exactly and succinctly, that you mean?
If the mind is deterministic, it does not necessarily follow that the answer to "Can the human mind be reduced to a Turing machine?" is yes (e.g. your analogy).
But if the answer to that question is yes, then that implies the mind is deterministic. And since I believe you do not think the mind is deterministic, you must also believe the answer to whether the mind can be reduced to a Turing machine is "no".
I believe the mind is deterministic, though. I have big problems believing intellect and genius are the result of random chance. Why, does that mean Einstein was the most random of us all?
So my position is that the mind is non-computational and deterministic, and I am using the term here strictly with regards to the operation of the mind without considering the whole free will vs determinism she-bang. (I believe we do have free will even if we are deterministic creatures FTR, but let's not get into that).
The half-life of Uranium-235 is about 700,000,000 years,
while the half-life of Uranium-234 is about 245,500 years.
Radioactive decay is non-deterministic, and yet the two isotopes have different rates of decay. Just a trivial example of how non-deterministic entities can have different higher-level properties.
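That point is easy to demonstrate with a quick Monte Carlo sketch (my own toy code, with made-up parameters): each individual atom decays at random, yet the ensemble reliably exhibits its isotope's half-life.

```python
import random

# Monte Carlo sketch: individual decays are random, but different decay
# probabilities still yield sharply different half-lives at the ensemble
# level. The per-step decay chance p relates to half-life T and step size
# dt via p = 1 - 2**(-dt/T).

def surviving_fraction(half_life, duration, dt=1.0, n_atoms=10_000, seed=0):
    rng = random.Random(seed)         # seeded for reproducibility
    p = 1 - 2 ** (-dt / half_life)    # chance one atom decays in one step
    alive, t = n_atoms, 0.0
    while t < duration:
        # each surviving atom independently "rolls the dice" this step
        alive -= sum(1 for _ in range(alive) if rng.random() < p)
        t += dt
    return alive / n_atoms

# After one half-life, roughly half the atoms remain, whatever the isotope:
# surviving_fraction(half_life=50, duration=50) is close to 0.5, while the
# same duration wipes out nearly everything when half_life=5.
```

No single atom's fate is predictable, yet the higher-level property (the decay rate) is stable and measurably different between the two parameter choices.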
Additionally, the idea that the mind is deterministic is largely unfounded in the era of quantum mechanics and chaos theory. The mind is certainly a chaotic system, with chaotic systems all the way down, until you reach the quantum realm, where quantum uncertainty kicks any notion of determinism out.
Whether or not quantum uncertainty mixed with chaos theory is the recipe for free will or not is a more interesting philosophical question, but the mind and the world are most certainly not deterministic, at least not in the standard sense of the word.
With the technology we have now, absolutely not.
'If' we were to assume potential development of new types of computing technology through mechanical, biological, or quantum methods,
'and' we were to assume the mind is deterministic,
'then' yes, it should be possible.
If we assume thought itself is deterministic then many other forms of computing should also be able to replicate the human mind.
I feel like it's a relevant question whether reality itself is deterministic (enter ontological and quantum theories) first, but again, we are just too ignorant and know too little to do more than throw a bunch of 'what ifs' at those.
I agree but for the completely opposite reason. I do not see any way of bridging this gap between our reasons as I've already exhausted my lines of reasoning as to why I don't believe the brain is equivalent to a Turing machine.
I reiterate that the Chinese room argument is likely the strongest argument against artificial intelligence, and consequently also against the idea of mechanical intelligence. The only thing I can do at this point is attempt to explain my argument more thoroughly, because I am not entirely sure it's been understood properly, but I will not do that because it's far too complicated, unless someone specifically asks me to.
As to how a hypothetical hypercomputer would function, that would be anybody's guess. It clearly cannot function along the lines of any physical laws we are aware of now, as they are all computable (barring chaotic conditions, which sometimes introduce undecidability).
Also @Lag I am not sure the idea that the mind is deterministic is entirely unfounded. Without going into the physical aspects of the brain and concentrating strictly on the functionality, it seems hard to believe intelligence could be the result of random chance. Certainly we do not always understand why we act the way we do, but there are many things we do not understand and most are not random lol.
As for artificial intelligence, one issue I see is that AI does not have to function the same as human intelligence in order to equate to intelligence. In animals, many of the same questions have been asked. Are they self-aware? Do they recognize themselves? How do groups of insects function with a level of super-organism intelligence? How does an octopus's cognition work with a distributed structure instead of a centralized mind? Is there intelligence at the cellular level, when we can identify patterns attributed to intelligence in things such as mold?
All of those things play with the idea of what has, and what is, intelligence, but in a nonhuman-centric way. I would say it's an almost certainty that we can create a level of computer intelligence that is like the human mind, and the real question is simply 'to what degree', regardless of whether the mind is deterministic. But if it is deterministic, I feel like, without the constraints of things like processing power and memory, it should be possible to completely copy the mind, with the only difference being the chaos created by quantum-level issues.
Intellectual growth comes from discussions, not arguments. If you are unwilling to change your position and hear the other person's side, you are closed-minded and wasting your time.
I agree. I think the Turing test would be adequate for determining whether an agent is intelligent. The intelligence could be very well alien, but as long as you can recognise there is some kind of understanding behind it - which I think the Turing test would be capable of uncovering - then it does not matter.
I think that something that acts intelligently is intelligent; it's not possible to act intelligently and yet be completely dumb. The imitation argument doesn't work; sooner or later someone will put you in a situation you have not been in before, and your lack of understanding will be revealed. Mechanical learning cannot replace real understanding. Isn't that what we tell students?
You know a question I've asked myself? Children, up until a certain age, are not self-aware. Why not give a Turing test of sorts to children ages 3-10 to see if we can spot a difference between those that are self-aware and those that are not?
We could go further than this and give a kind of Turing test to a single child several times over the course of their life, starting, say, from when they're 1 year old until they reach puberty or so. We could actually investigate some of these assumptions behind intelligence, but nobody does it. I looked it up and there is literally ONE study that attempted this experiment and, unfortunately, the paper is hidden behind a $30 paywall lol, and it has like 10 citations or something similar to that.
FTR I suspect, but I am not certain, that intelligence automatically brings about consciousness; you cannot be intelligent and yet not self-aware.
And thereby we could also investigate whether and which animals are self-aware
Having realised that human babies aren't self-aware, I'm not sure anymore if any animals are self-aware lol.
Magpies and elephants have some rituals centered around death which point towards them being self-aware but other than that, we may be alone on this planet.
And if we're going to take the causality closer to the present and say that the neurons in your head caused your thoughts, well, that's an easier one. There's no difference between the neurons in your head and you. Saying you don't have free will because the neurons in your head fired a certain way and you have no control over it merely proves we do not have infinite free will, only a limited amount, but that's alright.
As an aside, this whole thing of "event A in the very remote past can be used to determine the outcome of every single event since" is kind of "true" but in the context of the mind it does not really apply. Even if it did cause your mind and ultimately determine your entire history, at the end of the day, this has nothing to do with what you are doing in your day to day life.
You aren't looking at a logbook of what happened billions of years ago to figure out your next decision. You are deciding that based on your internal processes. Your internal processes were indeed ultimately shaped by the Big Bang, but within the capability of those processes, now independent from the Big Bang, you have freedom of action.