Guest post: Wanna be a brain in a vat?
Guest post by James Garnett.
More and more these days I keep seeing pop technologists ranting on about AI and “transhumanism”, which is to say, moving beyond biology and injecting our consciousness into machines. There is so much wrong with this, even from a merely technical standpoint, that I don’t know where to start.
So much of our consciousness and the way we understand and interpret the world is tied up with our physical bodies and the inseparable link between them and our minds (yes, I said “inseparable”) as to make talking about one without integrating the other almost nonsensical. It betrays the blindered standpoint of persons without disability or dysfunction; people who can mostly ignore the presence of their bodies because nothing impinges upon their awareness to force them to take notice.
One person I spoke to recently said that this is exactly what the “Singularity” will mean (downloading one’s consciousness into a computer): freedom from awareness of, or worries about, biology and the body. I submit that this is wholly mistaken: being unaware of a body that is functioning perfectly is completely different from being cut off from that same body, from living insensate, or from living with a body that doesn’t do what our instinct tells us it should. It doesn’t take much malfunction to make us very, very aware that we live within it, and that its proper function is absolutely fundamental to our ability (should we enjoy it) to remain unaware of its daily operation.
Even our thinking, and the thoughts it produces, are formed, molded, and ultimately limited by the fact that we have binocular vision facing forwards, or that we stand upright, or that we have two hands instead of eight tentacles, or that we have basic biological firmware built into our very brains that prevents us from doing such things as walking off a cliff, and so on.
Anyway. Just random blitherings on a lazy Sunday afternoon.
Very intelligent blitherings.
As everyone here already knows, consciousness is just what a brain does.
It makes as much sense to suggest transferring our consciousness to a machine as it would to suggest transferring our gait (which is what our legs do, when they work).
Over the course of the evolution of life, the ability to detect sensory input preceded any kind of consciousness by thousands of millions of years. Indeed, I am of the opinion that consciousness is impossible without that ability, and evolved to co-ordinate increasingly complex senses. After all, what is the brain except the terminus of all the nerves connecting every part of the body? Bathed in chemicals produced by the body? Without a rail network, what would St. Pancras be? An empty building, devoid of purpose.
I don’t believe that artificial intelligence, however complex, is anything more advanced than simple calculation (albeit very fast), and doubt very much that it is any kind of precursor to machine consciousness.
It might be possible for some machines, by calculating very fast indeed, to pass the Turing test and fool an interlocutor into believing that they are conversing with a consciousness; but it would be an illusion caused by our evolved tendency to attribute agency where there is none.
Yes, very intelligent blitherings. Most of the puzzle of ‘consciousness’ derives, so far as I can see, from Plato & Descartes, and the usually unacknowledged assumption, or, better, prejudice, that ‘mind’ is some sort of ethereal, otherworldly thing that could exist anywhere. It cannot. We are embodied beings, and consciousness derives from our body and its various relations with the world. Who the hell would want to live, painlessly and for ever, as a ‘mind’ stuck in a computer? Unable to touch anything or anyone, unable to interact directly with the world, dying, only metaphorically, of boredom – until perhaps some malfunction or some kind and genuinely living person shuts the computer down for good… I am reminded of Karel Čapek’s play ‘The Makropulos Affair’, and the opera Janáček made of it – living for eternity turns out not to be enjoyable at all, and for very good reasons.
In which connexion, there is a very good book from the Princeton University Press: ‘When Animals Dream: The Hidden World of Animal Consciousness’, by the philosopher David Peña-Guzmán. It is written for a popular audience, and though closely argued, is readily understandable. He shows, among other things, how the nature of consciousness is necessarily & intimately bound up with the kind of animal you are.
tigger:
Overwhelmingly popular understanding of the Turing test: an artificial intelligence is “good enough” when it can fool someone interacting with it into believing it to be intelligent.
Slightly more sophisticated understanding of the Turing test: we recognize intelligence by interacting with it.
Actual point of the Turing test: to interact with intelligent beings without causing them to doubt whether one is an intelligent being is constitutive of intelligence.
But is the ability ‘to interact with intelligent beings’ (by which I suppose you mean human beings) ‘without causing them to doubt whether one’ (by which I suppose you mean a machine) ‘is an intelligent being’ in fact ‘constitutive of intelligence’?
I once read Martine Rothblatt’s book Virtually Human: The Promise – and the Peril – of Digital Immortality. I found it very disappointing.
Rothblatt is a transwoman. The gist of the argument is that sentient “mind-clones” of us will be built up from our recorded online presences. And since we’re all stardust anyway, we’re all undifferentiated from everything else, so these super-fast processors that can imitate our mannerisms and ideas really will be “us.” And “we” will go on to learn new languages and other neat things, because the super-fast processors running our “mind-clones” can do such things easily.
Rothblatt made a computerized replica of his actually female wife. It’s a little creepy. I wonder why he didn’t make a copy of himself, since he’s more intimately familiar with himself.
The best you’re going to get is a software emulation of the wetware based on a ROM dump (which may be quite imperfect). When I boot up Final Fight on my MAME machine, I’m not booting up the full equivalent of the arcade machine that originated the ROM. And it wouldn’t be “you” in a continuity-of-consciousness sense anyway (which seems impossible and effectively meaningless); the best bet would be a recording of the mental software up to the point of death, run on new physical hardware.
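To make the emulation point concrete, here is a toy sketch (Python; the “instruction set” is invented for the example and resembles no real arcade hardware): a ROM is just bytes, and the “machine” is whatever program happens to interpret them.

```python
# A toy "emulator" for a made-up three-instruction machine.
# It only shows that a ROM dump is inert data until some substrate
# interprets it -- the substrate itself is interchangeable.

ROM = bytes([
    0x01, 0x05,  # LOAD 5 into the accumulator
    0x02, 0x03,  # ADD 3 to the accumulator
    0x00,        # HALT
])

def run(rom: bytes) -> int:
    acc, pc = 0, 0
    while True:
        op = rom[pc]
        if op == 0x00:        # HALT: return the accumulator
            return acc
        elif op == 0x01:      # LOAD immediate
            acc = rom[pc + 1]
            pc += 2
        elif op == 0x02:      # ADD immediate
            acc += rom[pc + 1]
            pc += 2
        else:
            raise ValueError(f"unknown opcode {op:#04x}")

print(run(ROM))  # 8 -- same ROM contents, entirely different machine
```

The bytes survive bit-for-bit; the timing, the electrical quirks, and the cabinet are all replaced by whatever the host happens to be, which is exactly the “not the full equivalent” point.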
I prefer my transhumanism to be focused on genetically engineering our successor species… Homo novus.
A consciousness divorced from a brain would have no compunction to do anything. People might think that living as a Vulcan (Star Trek reference here), with no emotions to cloud your thinking, would free you to pursue science or art, unfettered. But, in reality, it is the hit of chemicals in the brain that rewards you for doing anything, including even mundane things like eating or staying warm.
Life as a brain in a vat would be nothing more than boredom (although even that is the result of chemistry). Unless, of course, you program the vat to inject happy drugs.
I imagine heaven would be just like this, if it existed: no physical drives to push you to improve or learn, just artificially induced euphoria.
BKiSA #6
It’s very similar to the “teleportation problem”, isn’t it. Let’s say real teleportation were invented such that you could be disassembled into elementary particles in one place and perfectly reassembled somewhere else with all thoughts, feelings, memories etc. intact. Would the reassembled version really be “you” or just a perfect copy? What does it matter to Disassembled You that Reassembled You is out there thinking, feeling, and responding to sensory input exactly like the original, when Disassembled You is no longer around to think or feel or experience anything?
On the same note, let’s assume, for the sake of argument, that Laplace’s Demon were able to create a perfect simulation of your brain, capable of thinking, feeling, and reacting to sensory input (also perfectly simulated) exactly like you. Would the simulated version actually be you? What does it matter to Physical You that Simulated You is out there thinking, feeling, and responding to sensory input exactly like the original, when Physical You is no longer around to think or feel or experience anything?
I should also add that ‘intelligence’ & ‘intelligent’ seem to be defined in a peculiarly restrictive way by the AI crowd. And this very restricted idea of intelligence seems to be associated, without acknowledgement and probably without recognition, with Cartesian ideas of ‘mind’ and ‘consciousness’.
#7 Colin Daniels, I think you mean ‘desire’ or ‘compulsion’ rather than ‘compunction’ to do anything, unless I’m misunderstanding what you are trying to get at. Antonio Damasio has some interesting thoughts about that: without emotions (including of course interest, which is an emotional as well as an intellectual thing – if indeed the intellect and the emotions can ever be tidily separated) there is no reason for doing anything.
Re the teleportation problem:
In the movie The Prestige, a magician uses a device that, when he passes through it, creates two of himself: one at the spot where he entered, and one a short distance away. It is supposed to be a teleportation device. His solution is simply to kill the copy left at the entry point. This he does night after night in his show.
I once had a protracted argument with a relative who wondered whether the “real one” of the magician was always the one killed, always the one at a distance, or whether they alternated in some way. I rejected the idea that there was a single “real one”, and considered both (or neither) to be the “real one”.
The relative was essentially arguing that a “soul” or “spirit” went from the magician to exactly one of the two instances, and the other was a copy with a new “soul” or “spirit”. I argued against the concept of a “soul”, and made some unsuccessful software analogies in which the concept of an “original program” was ambiguous.
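A minimal sketch of the kind of analogy I was reaching for (Python, purely illustrative; the class and its fields are invented for the example):

```python
import copy

class Magician:
    def __init__(self, memories):
        self.memories = memories

angier = Magician(["the pledge", "the turn", "the prestige"])
duplicate = copy.deepcopy(angier)

# The two instances have identical internal state...
print(angier.memories == duplicate.memories)  # True

# ...and neither carries any intrinsic mark of being "the original".
# The only differences are extrinsic: variable names and memory addresses.
print(angier is duplicate)                    # False
print(id(angier), id(duplicate))              # two addresses; no "soul" field
```

Nothing inside either object records which one existed first; that fact lives outside them, in the history of the program.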
No minds were changed, but it was an interesting discussion.
Until we create or encounter other beings that can pass the Turing test, yes. I phrased it generally, because the test certifies an agent as being of the sort that can act as judge in subsequent tests.
No, on this one I mean any potentially intelligent agent.
It’s a useful, and thus compelling, answer to significant difficulties in defining intelligence by observable capacities. Although it can perform tasks once thought exclusively the domain of humans, my 20-year-old calculator doesn’t intuitively appear intelligent in the desired way. Whenever we might think, “An intelligent being can do foo,” where foo is some observable thing, we can imagine a machine capable of doing exactly foo. And that machine would just be a “foo-doing machine”, not an intelligence. Turing supposed that this mode of testing for the presence of intelligence is unsatisfying because it’s not how we in fact unconsciously go about it every day.
Bjarte@8:
Aye, I’ve often been bothered by that one, and you’re right. Imagine a world where such a thing is possible and your minimum-wage IT-slave forgot to check the integrity of the backups, and suddenly half the people you know are dead forever because that jackass John plugged a hot plate into the same circuit as a server farm again, and brought it down. Imagine what kind of bizarre society would exist where being restored to life really were as simple as doing a Level Zero restore. Imagine the zany shenanigans that would ensue on that future sitcom where a few dozen copies of you were “restored” by accident. This is a whole ’nother level of thinking that rarely gets addressed in the eager musings about the glory of the Coming Singularity: the “even IF it were possible”s.
And in point of fact, we have a good body of evidence for what happens when humans learn to do a thing without really pondering the possible consequences beforehand.
I’ve tended to ignore that side of the question because of what I see as the basic impossibility of the thing as it now stands, but I should hesitate to really say that “the singularity” (blech, I hate the term) is in fact absolutely impossible in any way. Ophelia spoke to the horror of being conscious but “locked inside a box”, which would indeed be horrible; to me, no matter how it is cast, the idea of something replicating my consciousness even if it had no immediate effect upon “me” (whatever that means) is equally horrible. My mind rebels against it. The only other idea that I know of with equal effect is the idea of absolute oblivion of the universe (not heat-death and darkness, but rather total non-existence. Nothing.)
Trigger warning! Some science stuff on the link between our hardware and who we are, and on why downloading our consciousness isn’t likely to be a thing.
https://farcornercafe.blogspot.com/2022/01/we-humans-like-to-think-of-ourselves-as.html
My understanding is that not a lot of philosophers are willing to support dualism these days, but it sure persists in the culture generally. Science fiction is replete with it — characters swapping minds, etc.
Even putting pure dualism aside, I agree that too many people are willing to discount the role that our physical bodies play. It’s one reason why I’m not as paranoid as some about the prospect of AIs taking over the world, killing all humans, etc. — we tend to assume that, no matter how carefully we program them, AIs will “reprogram themselves” and proceed to wage war and genocide and multiply and spread. But we do those things because millennia of evolution have “programmed” us with those tendencies (or with drives that can produce those tendencies). It’s not immediately obvious to me why a super-AI would yearn to take over the galaxy and spread itself or copies/progeny of itself everywhere. I’m not saying there isn’t reason for caution in the world of AI, just that there seem to be some unwarranted assumptions that nonbiological entities will behave like biological ones.
#10 Tim, sorry, yes, I meant compulsion. We act either because doing so gives us a nice feeling or to avoid a bad feeling. Without that stimulus there is no reason to do anything at all.
It also seems obvious that the container shapes the liquid filling it.
Colin Daniels #16. You are absolutely right. And that is why I find the idea that machines might at some point become ‘more intelligent’ than human beings and ‘take over’ (an idea that even Turing seems to have entertained – at least he joked about it, as I recall) quite extraordinarily childish, and anthropomorphic in the worst and most ridiculous way.
The discussion reminds me of the story Lena.
In terms of technology alone, we are so far from anything approaching the singularity that I despair of people who talk about it as though it’s just around the corner.
If you have an Alexa-based device in your house, you will know that the speech recognition is moderately good. But this is largely because the clever bit isn’t done on the device itself but in a data centre somewhere – which is also why you can’t have conversations with the damn thing, even though that’s the only possible use I can see for it. The level of smartness has become a bandwidth problem instead of a programming problem. If people were having conversations with the things, it would create very expensive bottlenecks.
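As a back-of-envelope illustration of that bottleneck (every number below is my own guess, not anything Amazon has published):

```python
# Rough, hypothetical figures -- chosen only to show the shape of the problem.
devices = 100e6              # installed devices (guess)
concurrent_fraction = 0.01   # share mid-conversation at any instant (guess)
audio_bitrate_bps = 64e3     # compressed upstream voice audio (guess)

aggregate_bps = devices * concurrent_fraction * audio_bitrate_bps
print(f"{aggregate_bps / 1e9:.0f} Gbit/s of continuous upstream audio")
# ~64 Gbit/s flowing into data centres before any of the "clever bit"
# (the actual recognition and response) has even started -- and it grows
# linearly with how long people keep talking.
```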
And yet this is about the best we can do on that sort of scale. We can just about deploy a global system, costing untold trillions and with a carbon footprint that should terrify everyone just so we can ask a little machine in our house what the weather is like outside.
The singularity is demanding rather more. It requires that the entirety of a person’s brain and body be perfectly simulated. But also that a convincing environment be simulated. And that everyone else’s brains and bodies be simulated, too.
We not only don’t have the first idea of how to do any of that from a theoretical point of view, we don’t know how to build a network of machines that could handle it. We don’t know how much power and bandwidth it would take, and we won’t ever be able to generate enough power to run such a thing anyway. Possibly if we construct a Dyson sphere, I suppose, but the engineering hurdles there are at least as bad.
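To put crude numbers on that (every figure below is an order-of-magnitude assumption, not a measurement, and this is a floor: it ignores the simulated environment entirely):

```python
# Order-of-magnitude sketch. All numbers are assumptions.
neurons = 1e11                 # ~86 billion neurons, rounded up
synapses_per_neuron = 1e4      # typical synapse count per neuron
mean_firing_rate_hz = 10       # generous mean firing rate
ops_per_synaptic_event = 100   # crude cost of modelling one event

ops_per_brain = (neurons * synapses_per_neuron
                 * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"{ops_per_brain:.1e} ops/s per brain")  # ~1e18: one exascale machine

# A Frontier-class supercomputer delivers roughly 1e18 FLOP/s for ~2e7 W.
watts_per_brain = 2e7
population = 8e9
print(f"{watts_per_brain * population:.1e} W to simulate everyone")  # ~1.6e17 W

# For comparison, Earth intercepts ~1.7e17 W of sunlight in total --
# hence the Dyson sphere.
```

Even at that cartoon level of modelling, simulating everyone eats a power budget roughly equal to all the sunlight Earth receives.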
We can’t upload a single consciousness into a computer. I think it’s probably possible in principle; I don’t think there’s anything so special about minds that they couldn’t run on a different substrate. But I’m not convinced that minds could work independently of bodies – or simulated bodies – and I can tell you that the engineering problems are even greater than the theoretical ones, and we don’t really have the slightest idea where to begin.
I do, for what it’s worth, think that a mind could run in a computer. But I’m not convinced we’ll ever have the engineering skills to build a computer with a mind in it which we could interact with at anything approaching a reasonable human timescale. And that’s just the one mind.
The singularity is absolute bullshit. It would be too expensive even if we knew how to begin building it. Which we don’t. Perhaps it would be better to save the planet instead.
First things first. I agree.
Screechy, I think there’s an excellent reason for AIs wanting to take over the world: they’d need the power. I don’t think many people understand just how much electricity it would take to run an actual artificial intelligence, let alone many of them. Space and fissionable material are limited. Their best bet would definitely be to kill us all off and steal our sunlight, geothermal power and fission stuff.
If I were them (and I totally haven’t thought about this far too much) then I’d probably want to live a bit closer to the sun. I think it would wreck the Earth getting even one AI off the ground.