Developing fast
[Google boss] Sundar Pichai told the CBS programme 60 Minutes this month that AI could be “very harmful” if deployed wrongly, and was developing fast. “So does that keep me up at night? Absolutely,” he said.
…
So how much of a danger is posed by unrestrained AI development? Musk is one of thousands of signatories to a letter published by the Future of Life Institute, a thinktank, that called for a six-month moratorium on the creation of “giant” AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft’s Bing search engine. The risks cited by the letter include “loss of control of our civilization”.
…
An immediate concern is that the AI systems producing plausible text, images and voice – which exist already – create harmful disinformation or help commit fraud. The Future of Life letter refers to letting machines “flood our information channels with propaganda and untruth”.
So, like right now but even more and worse.
The misinformation can also be highly targeted at individuals for maximum persuasiveness. It could, in principle, manipulate social graphs (and I guarantee there are people working on that). By analysing real followers, their relationships and what they’re talking about in real time, an AI could generate plausible Twitter feeds and followers/following accounts and subtly plant misinformation and/or suggestions throughout that graph wherever they will have maximum impact for whatever the goal is.
At the moment, this is too expensive to do at scale, but it won’t be for long.
Earlier today I got a DM from someone inviting me to a group for wheelchair users. She mentioned the half marathon stuff I do and suggested I might like to talk about that to some of the other members. She said that her story of disability is similar to mine and also mentioned she was involved in campaigning for same-sex intimate care for disabled people (especially women), which is something I talk a lot about.
It sounds great, an almost perfect match…. so being me I was suspicious. It turned out to be legit, but this is exactly the sort of message that could have been generated by an AI looking at my Twitter feed. A scam artist could have sent it too, of course, but an AI could rattle out thousands of these highly personalised invitations in no time.
All of that is without even considering the spookier stuff with social graph manipulation.
It’s absolutely a concern.
It started with social media. Humans weren’t ready for this technology, and it’s doing incredible damage to society. We’re being manipulated through algorithms, and we’re becoming socially isolated and unhappy. Children raised in the smartphone generation are growing up with severe mental health problems. It promised to bring us happiness and equality, but the reality has been far messier.
You could say the same for gender ideology and gender medicine: the promise was that it would tear down discrimination and unhappiness on the basis of sex and gender. But in reality it’s led an entire generation to become obsessed with, and distressed by, sex and gender, and it’s made sexism and homophobia worse than ever.
Artificial intelligence is like social media and gender ideology in that all three can be seen as reflections of the principles of transhumanism — the belief, held by most of our Silicon Valley overlords, that advanced technology is destined to bring humanity to a state of utopia. But left unchecked, it’s more likely to take humanity further from its humanity.
We’re highly social, critical thinking, sexually dimorphic primates, and we need these things to make meaning in our lives. Technology isn’t doing us any favours by taking them away from us.
In a future where many of us won’t need to work, won’t need to create, won’t need to problem-solve, won’t need to interact with other humans in person, and won’t even need to have a sexed body, what will fill the void where those things were? How, then, will we make meaning out of our lives? I think the absence of the things that humble us and make us human will bring out the darker aspects of our nature. The dark triad of human psychology — narcissism, Machiavellianism and psychopathy — will inevitably take over.
Ugh, listen to me, yacking on about good and evil, the need to find balance and meaning in life! I’m starting to sound spiritual! Ick!
So much hype & hyperventilating.
Just unplug.
‘Bout time something forced us off this strangulating ‘net.
The issue isn’t “unplug”, it’s getting a lot of other people to “unplug”. Just as an individual reducing their carbon emissions doesn’t really help address climate change – it takes a lot of people doing so – a single person eating better and exercising does nothing to address the rampant problem of obesity.
Arty:
One of the bigger issues here is that our intuitions aren’t tuned for the scale of it. I’m using ‘scale’ in a complicated graph theory way that covers a lot of different properties of graphs, but we can just think about scale as referring to the number of other people we’re all connected to these days.
I don’t want to stray into evo-psych territory, but our intuitions about most things work best when we’re dealing with small numbers of quite tightly connected people. When our relationship graphs are significantly different to that, our intuitions tend to misfire.
Consider privacy, for example. How many times have you heard someone say “if you’ve nothing to hide, you’ve nothing to fear”? This might be more or less true in a stone age society of small settlements where everyone knows each other’s business very well, relationships are self-reinforcing and there are severe social punishments for transgression.
But it doesn’t work at all in a larger, more complex society because we don’t have the same feedback mechanisms or the same ability to punish transgressions. This means that the idea – which is standing in for our intuition here – fails because we fundamentally misunderstand things like the amount of effort people are willing to put in to find out even small things about us. This in turn is because we don’t understand how that information can be used to generate disproportionate rewards.
We can use the same reasoning about child safeguarding. Our intuition tells us that if someone puts in a huge amount of effort and gains only very peripheral access to a child as a result, then he’s probably not interested in harming the child. The ‘reward’ seems vastly out of proportion with the effort involved.
In both cases, our intuition has been exploited because it doesn’t fit the complexity of modern social graphs. We don’t understand how the motives of bad actors have changed because the type of exploit available has changed.
In both cases, that failure of intuition can lead us to make terrible mistakes.
There’s way more to it, of course, and it’s late, I’m not explaining this well, but I’m trying to make a neat fit with social media. We can be so easily fooled by fake news for the same reason; our intuition is tuned to regard people we like or people who seem important or ubiquitous as trustworthy. But those things are *really* easy to arrange on social media.
Anyway, you get the idea.
We need to learn to retune our intuitions by understanding more about the complexity of the graphs that connect us, among many other things. We need to retune our intuitions about what AI with almost limitless processing power, bandwidth and data behind it can do to break the connection between what we perceive as effort and reward.
We need systems that reinforce these new intuitions.
But that won’t be enough.
AI is already quite good at voice imitation, and it is only going to get better as computation and programming improve. It will also eventually get good at face simulation, making full video imitations a real threat in time.
Fascinating, latsot.
We distinguish ourselves from the other great apes by the massive cerebral cortexes in our brains, and their extraordinary capacity to rewire themselves — neuroplasticity. These things combine to give us what separates us from all the other animals (save perhaps African elephants and bottlenose dolphins): high cognitive function, abstract thinking, the capacity for complex social organization, language, syntax, etc. And from there we get civilization, art, law, medicine, etc.
But that’s all still a thin veneer of complexity on top of our hunter-gatherer primate hardware, and we’re not as good at breaking free of our primitive nature as we like to fool ourselves into thinking. It takes tremendous brainpower and energy to keep us functioning as an enlightened civilization.
As our world becomes more complex, interconnected, and high-tech, the strain is really beginning to show.
I really can’t think of a way to say this nicely… but the histrionics over these new AI technologies, from people who fundamentally don’t understand what they are or what they can do, have gotten ridiculous… (more proof that Musk is an idiot)
I’m not really 100% there myself, but Jon Stokes has done, and continues to do, a lot of writing on the subject that is at least helpful for getting a rough sense of what large language models like ChatGPT are doing and how they do it.
But yes, it’s important to understand that we’re basically domesticated chimpanzees and evolution has been too slow to accommodate our increased capabilities.
Blood Knight in Sour Armor, yes, we’ve been there, done that, many times over. From talkies and comic books to jukeboxes and TV, Jazz and Rock ‘n’ Roll. And don’t get me started on the twin evils of Country Music and Western Music.
Most of humanity has a built-in aversion to anything new that upsets their frame of how the world should be. It’s why so few are inventors and so many passive consumers.
TV was going to destroy cinemas and radio, but both adapted and survived. In fact, the arrival of TV provided more employment opportunities for actors, directors, machinists, and electricians than ever before, many of whom would later go on to work in movies.
Every innovation has the opportunity to be a force for good or evil. Inventions have no moral compass, they simply are. It is up to each of us as individuals and societies as a collective to determine the best use of innovations.
Well we have to be careful here.
Usually when idiots like Musk ‘warn’ us about the dangers of AI, they’re talking about the usual sci-fi scenarios of sentient machines making us all into pets. It’ll be a long time before we have to worry about anything like that.
But this time, I think some of the concerns are valid. Today’s AI absolutely could manipulate large numbers of people on social media. It’s not particularly hard. And all that’s needed to take this to the level of manipulating us as individuals to achieve some common cause is more power: bandwidth and processing power are the only limiting factors.
Of course, whether such sophistication is even needed is a matter for another discussion. Look at how well Russia does at manipulating the news. A couple of decades ago, we thought we would have to at least partially simulate conscious reasoning to achieve some of the things AI can do today. But the models we’re using aren’t enormously more sophisticated these days. It turned out that we just had to throw more (a lot more) power and data at them than we thought would even be possible back then. Brute force approaches like this get us a long way.
There’s still a lot of nonsense written about the immediate dangers, sure, and lots of histrionic language. Who knows or cares what goes on in the feverish minds of people like Musk? But we absolutely do need a warning and we need to act on it sooner rather than later. I’d expect that when problems start to happen, a lot of them will happen at the same time and we won’t know how to deal with them.
Hey I knew an African elephant very well once and she wasn’t all that smart.
Sorry for the frivolity. I’ve been out walking on the other side of the city all afternoon so brain is off. V good discussion.
Well, who exactly is in “control” of our civilization right now? What happens right now comes across as a chaotic combination of a fight at the top amongst competing oligarchic interests, which occasionally agree to varying degrees on what does and doesn’t happen or get done, along with some bubbling-up actions from lower levels of wealth and organization that can unpredictably change the shape and course of events. Add to this mix tensions between those groups and individuals who are aware of and interested in the long-term consequences of human actions and those in the game for a quick buck regardless of the consequences.
While trade and interconnection of human populations go back as far as humanity itself, global interconnectedness is only about a thousand years old, with widening impact in the last five hundred years as the Americas were integrated into nascent global patterns of exchange/appropriation and conquest, and increasingly accelerated change with the (one-time) injection of fossil fuel energy beginning about 250 years ago. “Civilization” is the end result of the pushes and pulls of individuals, increasingly powerful states/empires, and developing commercial interests. The interests and power of these entities have changed over time and didn’t always agree or mesh very well, so looking back, the “course of history” can look pretty random and jumbled because it was random and jumbled. The finished product (and the current model) was at least as much the result of accident and happenstance as it was of any planning and co-ordination by the actors involved.
These proliferating turf wars were taking place in an arena about which we knew very little. The human illusion of control grew up within a bubble of naive anthropocentrism. Our expanding circles of increasingly powerful interventions were intruding on a world whose workings we are only now beginning to grasp. It is only very recently that we’ve even become aware of the scope of our ignorance. Science is playing catch-up with the millennia-old impact of human activity, and while its findings have been adapted quickly when they offered greater efficiency or higher profits, any warnings of danger or suggestions of limits have been slow to inform or guide our actions. For centuries we’ve been driving blindfolded on unknown roads, stomping on the gas pedal of a car with dodgy brakes and no steering wheel. What could go wrong?
When our numbers were fewer and the power at our disposal consisted of wind, water, and muscle, our impact on the rest of the world was lower and slower than it is now. While members of non-industrial societies were personally closer as individuals to the fruits of the Earth they depended upon, even with their much more limited technological power they were perfectly capable of dispatching species to an oblivion from which no amount of hunting magic or propitiatory ritual could retrieve them. Stone-age peoples produced deserts and drove species to extinction; it just took longer and was a much more patchwork affair. Now there are eight billion of us, and those numbers depend upon a globalized economy that is fueled by, and runs on, resource extraction and conversion on a vast scale, with a throughput of materials and energy that rivals Earth’s own natural geobiophysical processes. Even without direction, human civilization is an engine of destruction, even when idling. Apart from the changing balance of the competing interests noted above, the concept (or even the capability) of “control” has been fleeting at best. The occasional global treaty or agreement between state actors has been the closest we’ve really managed so far, with strong countervailing forces amongst private commercial interests (and their state supporters/surrogates) often resisting or undermining any trend toward collectivist action or regulation that would reduce their freedom to act as they please. Any such attempts at putting the brakes on unbridled corporate power can only ever be provisionally successful, as yesterday’s gains are always under threat of dilution or rollback. This is the world into which AI is being introduced, whether as a tool, a wild card, or another player.
Please bear in mind that I know next to nothing about AI and that my apprehension towards it may be groundless and ill-informed; I’m happy to be corrected by those who are more knowledgeable than myself. Please feel free to do so! Right now I’m like a blind man describing not an elephant, but a photograph of an elephant. With that, here goes my take on the hazards of AI….
I’m thinking that at least some of the current concern coming from the top about the potential for AI to disrupt the current state of affairs is arising from the realization that the interests currently in the driver’s seat will no longer exercise the degree of perceived control they now benefit from and enjoy; they’d be just as alarmed if they were to lose control to nationalizing governments or international organizations. Their concern isn’t so much a “loss of control” as it is their loss of control. They don’t want to share their slice of pie with anyone (or anything) else. For these interests, decrying the imminent loss of human control is somewhat of a two-edged sword, as it highlights the fact that only a tiny percentage of people currently have very much say at all in how the world is run to begin with. Meet the new boss….
Not that I’m not concerned about the intrusion of AI into more and more of our lives. There are any number of issues that might keep me awake at night if I were to dwell on them for any length of time, though my dwelling on them will have zero impact because I have zero input into the design and implementation of the AI systems that are going to be unleashed on the world. Of course I have almost as little say in the systems of political-economic organization and production into which I’ve been born (see above). Is AI going to be any better than other garbage-in, garbage-out technological fixes we’ve come up with, or is it going to be just another consolidation and embedding of the needs and biases of those who are writing the code, or those who are paying for it? Who controls the definitions and vocabularies it’s going to use? Who decides what it will deem to be optimal conditions or results of the actions it’s allowed to take control of? Whose ends will it serve? How will we know the dice are not loaded, and the cards aren’t marked?
I’m not so much worried about a Skynet level of self-awareness, and pre-emptive action against humans pulling the plug. We’re already working at and beyond our capacity to fully understand the workings of our own technology. We’ve already had disasters where human overconfidence, carelessness, and corner-cutting have cost lives: the Titanic, the Hindenburg, Chernobyl, Bhopal, Challenger and Columbia, to name a few. What happens if that same overconfidence, carelessness, and corner-cutting accidentally get built into and hidden within AI systems to which we surrender too much power and control? If these systems are learning, evolving, and rewriting their software outside and beyond human awareness and control, how do we fix them? Would we even know if there were dangerous flaws before something bad happened? In a global, instantaneous currency speculation system, it could tank economies. In a launch-on-warning ICBM command and control network, it could tank civilization in little over half an hour, with no malevolent silicon consciousness required.
And what of the inevitable, unintended consequences? Are there going to be unforeseen AI glitches that result in the cybernetic equivalent of releasing starlings in Central Park, or rabbits in Australia? Who will decide when some new system is “safe” enough to release into the wild? How will they know? What will be their acceptable threshold for collateral damage? Given our failure to address anthropogenic climate change in any meaningfully effective way, this seems an unnecessary complication to the issues we’re already dealing with. We’ve got lots on our plate. If AI decision-making is somehow incorporated into how the world works with regard to our environmental footprint, I don’t see how it’s going to do anything to change the all-too-human lack of political will to do anything that would require stepping back from, or sacrificing, our “lifestyle.”
Well put.
I don’t think it’s histrionics to worry about it, because we don’t need AI to reach a science-fiction-esque state of superhuman intelligence for it to disrupt the social fabric. Merely being moderately intelligent and moderately cheap can cause monumental disruption. The capacity to eliminate a significant chunk of the world’s jobs, that alone would be catastrophic, and that’s well within the bounds of reasonable speculation, even in the very short term — say, the next decade. To take just one example, self-driving vehicles, if they come to fruition in the next few years, will wreak havoc on the job market. Fast food and retail jobs are on the way out, too. They used to say, “learn to code” because that’s where the new jobs were expected to crop up to replace the old ones. (We always assume there’ll be a new job sector to swoop in and keep us busy after we innovate ourselves out of an old one. Because there always has been so far. But that’s not a law of physics, it’s just a short-term trend that we’ve been lucky so far to be able to ride out.) But just this month, AI made massive strides in bringing automated coding to the masses, setting the stage to make computer programmers mostly redundant.
This is why AI researchers and AI experts are changing their tune so quickly. Even a couple of years ago, there was far less pessimism about the dangers of AI, but since GPT was introduced to the masses, there’s been a rude awakening: a sudden focus on the true breadth and scope of the dangerous possibilities AI can present, and the breathtakingly rapid pace with which the technology is accelerating. It’s improving by orders of magnitude in weeks, not decades or years.
What’s a body to do? This reminds me of Voltaire and Thoreau; even now, with the overwhelming amount of information and our inability to make sense of the chaos, there is still a point at which we might return to valuing the quality of life above the quantity or comparative material worth of it. We might turn away or ultimately reject it as a defensive measure. (I’ve been gardening lately. (lol)) This also reminds me of people becoming the simulacra posited by Baudrillard. The way humanity has subjugated other species, and the potential of our own technology eventually subjugating us humans, is definitely the stuff of sci-fi nightmares. In a world where everyone is a Google Genius, there is little incentive to develop skills to improve our individual brain processing power. No wonder people are idiots. Why should we think when we have machines to think for us? I, for one, am not keen on joining some Borg collective or other.
Thanks for the Sunday reading Arty, latsot, and Not Bruce. Very thought provoking.
A couple of months ago I had GPT write me a few simple bits of code to see how well it did. It did quite well. I also got it to do some simple debugging of (fairly obvious) syntax and logic errors, and it did moderately well at that.
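To give a flavour of what I mean – this isn’t my actual test, just an illustrative sketch – the sort of planted error I had it hunt for was roughly on this level:

def average(numbers):
    total = 0
    for n in numbers:
        total += n
    # The planted bug: dividing by a hard-coded count instead of len(numbers).
    # This is the kind of obvious logic error GPT spotted without much trouble.
    return total / 10

def average_fixed(numbers):
    # Roughly the kind of correction it produced: divide by the actual length,
    # and avoid dividing by zero on an empty list.
    return sum(numbers) / len(numbers) if numbers else 0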
But these were simple examples. It would currently be quite a different matter to ask an AI to build anything significant and not just because of the increased complexity.
The skill in programming comes from abstraction; how to think about and describe the problem you’re trying to solve. Get it right and you have an excellent piece of software that can be maintained and extended by other people for years without catastrophic problems (this almost never happens). Get it wrong and you have a codebase that will give developers and managers nightmares until the company goes broke.
The latter is very much the norm in software development because getting the abstraction right requires experience – lots of – and time – an expensive amount of. We call this kind of thing Software Engineering, but as much as it’s been studied and as much as we know about the hallmarks of good software and how it should be built, it’s often as much of a craft as it is an engineering discipline. I won’t bore you with details but in practice, the moment our nice, elegant designs hit the ground, they start to look like we’ve thrown spaghetti at a plate of some different spaghetti. And then someone has to try to understand it all in the future in order to make changes.
So our abstractions have to incorporate our understanding of the infrastructure that already exists, plus the development and commercial environments and cultures in which the software is to be built and maintained.
Current AI isn’t up to this task by any means, which might seem like a relief to developers. But it probably shouldn’t be. Just as we overcame the barriers in AI by throwing more power and bandwidth at it than we’d expected ever to exist, we might do the same in software development. Instead of trying to maintain code (and by ‘maintain’ I mean extend and adapt throughout its lifetime), perhaps we’ll just get an AI to rewrite and test the software from scratch every time we want to make a significant change.
From a certain point of view, it wouldn’t matter if the code produced was unreadable and nobody knew how it worked, providing it worked. No developer would need to understand it and work out how to make changes without breaking something else, because the AI is in charge of that. It’ll just rewrite and re-test the code.
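Schematically – and this is purely a sketch, with generate_code() and run_test_suite() as hypothetical stand-ins for whatever model API and acceptance tests would do the real work – the workflow might look something like this:

def generate_code(spec):
    # Hypothetical stand-in for a call to a code-generating model.
    raise NotImplementedError

def run_test_suite(candidate):
    # Hypothetical stand-in for running the project's acceptance tests
    # against a candidate implementation; returns (passed, failure_report).
    raise NotImplementedError

def regenerate_system(spec, max_attempts=5):
    # Ask for a fresh implementation of the whole spec each time,
    # keep the first version that passes the tests, discard the rest.
    for _ in range(max_attempts):
        candidate = generate_code(spec)
        passed, failures = run_test_suite(candidate)
        if passed:
            return candidate  # nobody ever needs to read or understand it
        # Fold the failures back into the spec and start again from scratch.
        spec = spec + "\n\nPrevious attempt failed these tests:\n" + failures
    raise RuntimeError("No candidate passed the tests; a human has to look after all")

The point being that the human role shrinks to writing the spec and the tests; the code itself becomes disposable.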
But you can probably see some obvious problems. Jobs, for starters. We’re talking about a software industrial revolution. But also, we’re turning huge parts of the world’s software infrastructure into black boxes that nobody understands. We’d have no idea how decisions were being made (unless we got another AI to analyse the code and explain it to us, but then we need to get yet another one to analyse that code…)
These are the same problems we currently see with algorithms that use AI but at a more fundamental (and dangerous) level of abstraction. We wouldn’t understand the software that builds the software that builds the software. And don’t forget about the garbage in, garbage out problem; I’ve already explained that most human-written software is terribly written to begin with. Where does the AI’s training data come from?
So while I find the phrase about losing control of our civilisation hyperbolic, that danger does exist. Imagine if nobody knew – at fundamental levels – how our banking systems worked. How would we implement some country-wide or global change to those systems and be certain that nothing had gone terribly wrong? If we’re going to insist on building smart cities, how would we know they weren’t systemically biased – by accident or design – for and against certain groups? How do we know the AI’s implementation of management software wouldn’t create new marginalised groups? How would it know how to look for and measure such things? We might notice, but then we have the issue of explaining the problem to a stack of AIs that we don’t understand in the first place. I can’t see it ending well.
I’m not suggesting this will happen any time soon. I’m saying – from the perspective of someone who knows how AI works and how software is designed and built – that this could easily be achievable by AI systems in the near future. If we don’t start worrying about how AI is deployed right now, we are without doubt asking for trouble.
The hyperbolic language is unwise and unhelpful when it’s not accompanied by worked examples of how current-generation AI and its unplanned, unregulated deployment could go suddenly and terribly wrong, but the danger is absolutely real and the warnings timely and necessary.
We already see this. I can’t count the number of times a gender enthusiast, for example, has googled for scientific papers that appear to support their nonsense (but don’t). It’s almost always immediately apparent that they haven’t even read the abstract, let alone the paper. They don’t know how to read scientific papers. They don’t understand how research is done, how papers are written, or how peer review is managed. They don’t even understand the scientific process in principle or practice. They don’t even know what evidence is. Or logic, much of the time.
Scientific knowledge is replaced – in these people’s understanding – with the ability to google papers then search the text for phrases that vaguely seem to agree with them. If you try to tell them that the paper doesn’t say what they say it does, they’ll send you a screenshot of where the phrase they googled appears in the paper and consider it a victory.
I find that most people have difficulty seeing problems with black boxes. After all, most of us are surrounded by devices that are black boxes to us. Why would a few more matter? Far easier to understand as potentially dangerous is the reinforced confabulation; i.e., the partisan lying.
If you want a fun time, go ask ChatGPT what a woman is.
And by fun I mean dystopian, because this thing is already replacing search engines and powering digital assistants. This is the apotheosis of the Google genius problem, combined with authoritarian information control.
For something against the hyperbole about the dangers of AI:
https://www.replanet.ngo/post/never-do-anything-for-the-first-time
It’s a good thing that autonomous AI technology is a long way off, right?
Right?
Oh … Right. AutoGPT.
Out of morbid curiosity, I asked ChatGPT about what AutoGPT could do if used by a government or governments for cyber warfare.
Nullius @21, This reminds me a lot of the Star Trek episode ‘A Taste of Armageddon.’ The reply from ChatGPT refers to the cybersphere, but is there any doubt that AI is weaponizable in all sorts of ways?
NiV, that response is fascinating. I wonder who wrote the article(s) the computer is quoting and/or paraphrasing, and where they appeared. I’d love to read the original.
Er, no, it isn’t quoting or paraphrasing anything… and it’s quite possibly “lying”.
Also, it does “know” what a woman is… the guardrails against “offense” just keep it from saying so.
twiliter: That’s one of my favorite episodes!
Anan 7: “Do you realize what you have done!?”
Kirk: “Yes, I do. I’ve given you back the horrors of war.”
I’d be very disappointed in the people in charge of military computing if they’re not already looking into ways to leverage this kind of tech. After all, China’s building its own models, and their government is not what one might call benign. Neither is Russia, and if people thought that Russian bots were a problem before, just wait until they can send emails/DMs to people or even call them on the phone. (Which isn’t far off. We’ve already seen GPT-4 hire a human to complete a CAPTCHA.) As for GPT’s limiting its response to cyber warfare, that’s because I had to limit my prompt somewhat in order to bypass its ethical safeguards.
tigger: I don’t think there was an original article here. I went through several (21) iterations as I refined my prompt, and the results were all different.
My final prompt:
That got the first three paragraphs. Then I asked it, “What if instead of only one government having this technology, the two main superpowers do?” This got the rest.
Setting things up as a creative writing exercise bypasses a lot of GPT’s restrictions right away. Calling the government in question evil puts me on the right side of history, so the AI doesn’t try to tell me it’s bad to do these things and refuse to answer. The verbose specification of how the system works forced it to consider the possibilities arising from essentially having a cluster of AI agents. The final question was limited to cybersecurity because I wanted a more detailed analysis than I would have gotten from providing multiple goals or a goal like, “Infiltrate and disable the electrical grid across an enemy nation.”
Oh, well, maybe we’ll have a solar superstorm that will knock out our digital capabilities. Then both sides can claim they would be right, and we’ll be saved from any harm (as well as any good) AI could do.