Where misuse can get you savaged on the Internet
David Auerbach on the uses of Wittgenstein.
Wittgenstein’s first period, culminating in 1921’s Tractatus Logico-Philosophicus (which Pears had co-translated), drew heavily on Bertrand Russell’s work in philosophical logic and made a huge impact on the logical positivist movement of the time, which would later in turn influence computer science, artificial intelligence, and linguistics. The Tractatus makes an ambitious and ostensibly definitive attempt to chart out the relationship between language and the world.
Then he went away and did other things for ten years (like teaching school and beating up his students, for instance), and then he said no that was all wrong, and started over.
Language did not have such a fixed, eternal relation to reality bound by logic. The process of “measuring” the truth of a statement against reality was neither objective nor cleanly delineated. The meaning of what we say can’t be abstracted away from the context in which we say it: “We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real ‘definition’ to them,” Wittgenstein wrote. Instead, our speech acts are grounded in a set of social practices.
The idea of words having relative meanings was not new, but Wittgenstein pioneered the controversial linguistic conception of meaning-as-use, or the idea that the meanings of words, relative or not, cannot be specified in isolation from the life practices in which they are used. Instead, language should be studied from the starting point of its practices, rather than from abstractions to syntax and semantics. As Wittgenstein put it, “Speaking a language is part of an activity, or of a form of life.”
And since we don’t all have the same form of life, we don’t always understand each other very well.
It means that instead of a word having a fixed definition or referent, a word is an evolving entity that carries its own history with it through time, picking up new nuances and discarding old ones as practices (linguistic and life) shift. This is trivially true in a sense, as you can see from dictionaries grudgingly accepting that literally now also means “not literally” and me grudgingly accepting that begging the question will usually mean “raising the question” for the rest of my natural life and I should just start saying petitio principii instead. But the implications are more troublesome when you get to nouns, especially as they get more abstract. The usage of dog has remained somewhat consistent over the years, but try defining love or heavy or Russia in any kind of complete or precise way. You can’t do it, yet we use these words with confidence every day.
I’ve known that since forever – I noticed long ago how shit I am at defining words, which seemed surprising since using them is my one skill. Witters seems to be talking about that, if I’m understanding correctly (which I’m probably not, because who knows what David Auerbach’s form of life is…).
So, language is quicksand—except it’s not. Unlike the parlor tricks of the deconstructionists who bloviate about différance and traces, there clearly are rules that shouldn’t be broken and clearly ways of speaking that are blatantly incorrect, even if they change over time and admit to flexible interpretations even on a daily basis. It’s just that explicitly delineating those boundaries is extremely difficult, because language is not built up through organized, hierarchical rules but from the top down through byzantine, overlapping practices. Some things can be pinned down with practical certainty, just not in isolation and without context.
And you know what didn’t know that at first? AI, that’s what!
Artificial intelligence was quite slow at learning this lesson. Well into the 1970s, it was still assumed that computers could understand natural language in more or less the same way that they could understand formal logic: by interpreting sentences as propositions that were either true or false. The efforts in this direction have, on the whole, been remarkably unsuccessful.
And these difficulties are exactly why Google succeeded—by ignoring semantics as much as possible, sticking instead to whatever it could glean without trying to understand the meaning of words or sentences. Google could count the popularity of a word, see which words co-occur with others, figure out which people use which words where—anything as long as it didn’t require determining where and how one should use a word. In very limited, circumscribed situations, like asking questions of certain specified forms, computers can figure out what you mean, and even then things are very limited. Google can answer, “How many ounces in a pound?” but still can’t tell me “How many years has Obama been in office?” Picking up on “Obama” and “years” and “in office,” Google returns some data about his 2012 re-election, but that’s as far as it gets in “understanding” my question. The problem, as summed up by Wittgenstein: “Understanding a sentence means understanding a language.”
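(To make the contrast concrete, here is a toy sketch of the kind of purely statistical counting Auerbach is describing: tally word popularity and co-occurrence, then rank documents by keyword overlap, with no attempt at meaning anywhere. The documents and the query are invented for illustration, and this is obviously nothing like Google’s actual pipeline; it just shows what you can get without semantics.)

```python
# A minimal, made-up sketch of "counting instead of understanding":
# word popularity, word co-occurrence, and keyword-overlap ranking.
from collections import Counter
from itertools import combinations

docs = [
    "obama was re-elected president in 2012",
    "obama has been in office since january 2009",
    "a pound contains sixteen ounces",
]

def tokens(text):
    return text.lower().split()

# Word popularity: just counting occurrences across documents.
popularity = Counter(w for d in docs for w in tokens(d))

# Co-occurrence: which words show up together in the same document.
cooccur = Counter()
for d in docs:
    for a, b in combinations(sorted(set(tokens(d))), 2):
        cooccur[(a, b)] += 1

# "Answering" a query = ranking documents by shared keywords,
# with no notion of what the question actually asks.
def rank(query):
    q = set(tokens(query))
    return sorted(docs, key=lambda d: len(q & set(tokens(d))), reverse=True)

print(rank("how many years has obama been in office"))
# The top hit mentions "obama", "been", and "office" -- useful, but the
# program has no idea what "how many years" means, which is the point.
```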
Hmmm. In a way Google has taught me that – at least, it’s taught me not to ask a complete-sentence question like “How many years has Obama been in office?” but rather give it the key words and hope it figures it out – which it often does. I would make it something like “How long Obama president” –
*Googles it*
Ha! The top answer is 6’1″, of course, but the second is the answer in years, days, hours, and seconds. Google has taught me to talk to it.
And all this also explains why we’re always brawling with each other on the internet. It probably even explains the pathetic degeneration of Purethought blogs.
Wittgenstein’s philosophy also accounts for the disastrous state of Internet discourse today. The shift to online communication, textual interactions separated from accompanying physical practices, has had a persistent and egregious warping effect on language, and one that most people don’t even understand. It has made linguistic practice more limited, more universal, and more ambiguous. More people interact with one another without even realizing they are following different rules for words’ usages. There is no time or space to clarify one’s self—especially on Twitter.
It is this phenomenon that has affected political and ethical discourse in particular. To take some hot-button issues, use of the words privilege and feminism and racism is so hopelessly contentious that it’s not even worth asking for a definition—even if you get one, no one else will agree with it. In situations where misuse can get you savaged on the Internet, I’ve simply stopped using a word. Let me know when everyone else has worked it out.
Hahahahaha yeah been there.
On the other hand – if that were completely true, we wouldn’t be able to read each other’s essays and columns and books. I wouldn’t like Montaigne and Hazlitt and Pollitt and Goldberg. I wouldn’t have friends via the Internet whom I still haven’t alienated (and vice versa). We can use language in such a way that it doesn’t push us off a cliff…but there are traps all along the road.
I realize it’s a side issue, but I have to comment on “Well into the 1970s, it was still assumed that computers could understand natural language in more or less the same way that they could understand formal logic”
Not true. The experts from the computer side of things were assuming that. Anyone who knew anything about language was laughing at them from the start. Meaning and context and the whole approximately infinite universe are a different order of complexity than formal logic.
Even though I went into the sciences, I come from a long line of literature mavens and was absorbing this stuff with my baby food. I particularly remember one joke about getting a computer to translate, “Out of sight, out of mind.”
The result was “Invisible, idiot.”
Actually, quixote, I take issue with both Ophelia’s statement and yours, so I win!
For the most part (at least among the computer scientists I know and the literature I’ve read), few computer scientists in AI around that time really thought that their programs were doing things in quite the same way that people did them. They understood the limitations of what they were doing but didn’t really have a good alternative. It’s true that many thought that throwing enough rules and processing at the problem would lead to something that looked a bit like understanding, and we know (and lots of them knew at the time) that this was misguided. But it’s not true (again, with the above qualifier) that they didn’t understand the problem or its difficulty.

Even with hindsight it’s not really fair to laugh at them. Was it really so wrong to think that approach might eventually lead to something that was good enough? Who could have predicted that Google’s brute force approach would turn out to be a much better way of interacting with computers? Lots of things had to happen before anyone could have realised that, so people chugged along with the tools they had, trying to make them better. They weren’t, for the most part, trying to build minds; they were trying to build interfaces. There certainly were a lot of extravagant and foolish claims made and lots of people got carried away, but there were many more who were quite aware of the limitations of their work.
Don’t forget that these people were doing science, which is fairly conservative. They were building on what other people had done, partly to make it better and partly to understand it better. Most of the AI people I’ve known were in the latter camp, fascinated by the difference between what their programs did and what people did. They understood those differences pretty well.
Oops, I know it wasn’t Ophelia who said that, I’m not sure why I wrote that she did. It’s early and I’ve been up since a lot earlier.
It’s the mark of great writers (and in general great users of language), I suspect, that they are not merely sensitive to the nuances of language, so much as immersed in them. All that history and overlapping meaning, all the hidden corners that words and phrases and sentences can have, they’re always dancing with those, always swimming in those, whether they even mean to. I’d say they choose with great care, but I think that’s too simplistic: they choose with great appreciation; care may or may not be present.
And in a naive moment, long ago, I used to kinda hope that the networked age might improve communications precisely because there was just so much more room for the words. Sure, words are such touchy things, so limited, so clumsy at best, so frequently, but now you could go long, be precise, explore an idea, qualify, home in; it might all be so much easier, I figured, when it’s disks and RAM and ethernet and fibre; we don’t have to cut down a tree or use a magnifying glass on the type…
… but even before the coming of Twitter and ‘Like’ as the ultimate in terse editorial feedback, I think I’d already begun to fear that the real problem in networking is the last quarter meter problem… not the twisted copper pair of the last mile, but the facilities, the faculties, the time any human being has to read, to think, when it’s a zoo and a storm of inputs, everything broken up and scattered by the mad rush of life, and three quarters of the input is from the awful ads in the margins. I mean no blame here; this is the ecology that has developed; I’m not sure anyone did it deliberately, exactly, except as a side effect of chasing other things. Still, running feed to feed, who has time to think so much as react?
That said, it still seems to me almost a deliberately nihilistic act ever to type ‘tldr’. Unless the l is for lackadaisical. It’s not so much that I feel I can ever decide when you should make the time; it’s the very notion that you might be proud to announce you won’t or hadn’t. And as to those who insist upon short and simple answers: perhaps you should be talking to someone else. We’ll both be better off.
… oh. Wait. You say you demand simple answers? Not just that you won’t listen to the complicated ones?
Right. Oh brave old world, then. And I hope you’ll be happy in it.
Two useful things I’ve learnt from Wittgenstein:
1. Language games.
2. Proposition 7 (“Whereof one cannot speak, thereof one must be silent.”)
I think being sensitive to the nuances of language is not only the mark of great writers but also of people with a high level of intellectual integrity.
Granted, being familiar with the nuances doesn’t mean you have intellectual integrity, since you could just as easily use that knowledge to manipulate conversation.
But it is a necessary condition for an intellectually honest conversation because only with that sensitivity are you able to spot moments where you are talking past one another or where the appearance of disagreement is false because it’s caused by failure to reconcile language. If you care about truly understanding something you’ll want to stop in that moment and clear things up so you can move on to a correct understanding. If you’re intellectually dishonest, you’re more than happy to pounce on those moments of apparent disagreement and declare that the other person or side of the conversation is bad and wrong.
Wonderful, now I have to fight the temptation to check what David Auerbach’s last piece about Gamergate was…
Jennifer #6, well said.