I was trying to start a debate about this but wasn't sure how to phrase my thoughts. I don't believe the mind is a machine, because of the way I think and the way others think. I often get stumped when logic seems to lead to two mutually contradictory conclusions; something other than logic then leads me to either pick one of them or formulate a new conclusion, and that something is NOT from a formal system.
You can do something without understanding what it is, and there's a very clear behavioural difference between doing with understanding and doing without. Give someone a list of instructions to follow without telling them what the instructions do or what they're for. They will carry them out, and will even seem intelligent if you gave them something "smart" to do, but they won't have a fucking clue what they're actually doing. It's like studying for an exam: there's a huge difference between doing it mechanically and actually understanding the material.
This is basically what Searle's Chinese Room argument states: that mere algorithmic computation can never lead to understanding (and is, in effect, devoid of "meaning"). If you accept the premise that mere computation is enough for understanding and consciousness, then it must follow that it does not matter how, or in what medium, the computation is performed. On that view you can have a piece of paper that is conscious, or a system of water pipes that happens to implement an extremely complicated program. This is clearly absurd.
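To make the "rules without understanding" point concrete, here's a toy sketch in Python. The rule book entries are invented for illustration; a real Chinese Room would need an astronomically larger one:

```python
# Toy "Chinese Room": reply by pure symbol lookup, with no grasp of meaning.
# The rule book below is made up for illustration.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "今天天气好.": "是的, 很晴朗.",  # "Nice weather today." -> "Yes, very sunny."
}

def room(symbols: str) -> str:
    # The operator matches shapes to shapes; semantics never enters the process.
    return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(room("你好吗?"))  # fluent-looking output from a process that understands nothing
```

The output can look perfectly competent while the mechanism is nothing but lookup, which is exactly the intuition the argument trades on.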
On the other hand, if you accept the premise that the mind is indeed not a machine, then we have to extend the hierarchy of computational machines (and the definition of "computation") to include the mind as well. The hierarchy would then include the human mind as a hyper-Turing machine (I suspect two or three levels above Turing machines, to leave room for animals and babies). This leads to some interesting conjectures in ethics and philosophy.
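For a feel of what one rung above a Turing machine looks like structurally, here's a minimal sketch of an oracle machine. The names are mine, and the oracle is stubbed out, since by definition no algorithm can actually implement it:

```python
from typing import Callable

def decide_with_oracle(machine: str, tape: str,
                       halts: Callable[[str, str], bool]) -> str:
    # An ordinary, mechanical computation that is allowed one non-mechanical
    # step: consulting `halts`, a primitive no Turing machine can compute.
    return "halts" if halts(machine, tape) else "runs forever"

# Stand-in oracle for demonstration only; a genuine one cannot exist as code.
fake_oracle: Callable[[str, str], bool] = lambda m, t: True
print(decide_with_oracle("M", "x", fake_oracle))
```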
First, it implies that there is no single "rule" of morality that applies in all scenarios, nor even multiple rules that apply in different contexts, but rather something rule-like that is non-algorithmic in nature. This does not necessarily mean that morality is subjective; it could be entirely objective yet uncomputable by algorithmic means (would anyone even be surprised by that?).
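"Objective but uncomputable" already has a standard precedent in computability theory, the Busy Beaver function:

```latex
% Busy Beaver: every value is a definite natural number, yet the function
% as a whole is provably uncomputable.
\[
  \Sigma(n) = \max\{\, s(M) : M \text{ is a halting $n$-state Turing machine
  started on a blank tape} \,\}
\]
% Here $s(M)$ is the number of 1s left on the tape when $M$ halts.
% $\Sigma$ grows faster than every computable function, so no algorithm
% computes it: "objective" and "computable" really do come apart.
```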
Second, it implies that the set of all possible thoughts humans can have is strictly larger than the set of natural numbers (equivalently, of integers or rationals; these are all countable), possibly as large as the set of irrationals, i.e. the cardinality of the continuum. Otherwise some Turing machine would be capable of generating all possible thoughts as a kind of "language".
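The counting argument behind this is short: there are only countably many Turing machines (each has a finite description), so everything they can ever generate is countable too:

```latex
\[
  |\{\text{Turing machines}\}| = \aleph_0
  \;\Rightarrow\;
  |\{\text{machine-generable thoughts}\}| \le \aleph_0
  \;<\; 2^{\aleph_0} = |\mathbb{R}| .
\]
```

So if human thoughts form a set of continuum size, no machine (nor any countable family of them) can enumerate it. The step that needs defending, of course, is that thoughts really are uncountably many.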
Third, if you extend the notion of the halting problem to human beings (it has already been done for hypothetical hyper-Turing machines, such as oracle machines), then it might be the case that in interactions between two or more human beings, the "optimal" solution in all scenarios (or any solution at all) is undecidable. Maybe the game of Mafia is undecidable; has anyone ever thought about that?
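For reference, the classical argument that a halting decider cannot exist is a short diagonalization; here's a sketch, with the stub standing in for the impossible decider:

```python
def halts(f, x) -> bool:
    """Hypothetical decider: does f(x) terminate? Provably unimplementable."""
    raise NotImplementedError("no algorithm can fill this in")

def paradox(f):
    # If `halts` existed, this function would defeat it:
    if halts(f, f):
        while True:   # decider says "halts" -> loop forever
            pass
    return None       # decider says "loops" -> halt immediately

# paradox(paradox) halts iff it doesn't, so `halts` cannot be written.
```

Oracle machines dodge this only by assuming `halts` as a primitive. Asking whether people can, in general, decide how an interaction between people will turn out is the analogous question, and a game like Mafia, which runs on players modelling each other's modelling, seems like a natural candidate for where it fails.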