A much-debated topic in Menlo Park
Vanity Fair has a big long piece about how Facebook attempts to deal with abuse.
To my surprise, the person in charge of it isn’t some socially clueless techy fool; she’s a former prosecutor from the Obama administration.
But when it comes to figuring out how Facebook actually works—how it decides what content is allowed, and what isn’t—the most important person in the company isn’t Mark Zuckerberg. It’s Monika Bickert, a former federal prosecutor and Harvard Law School graduate. At 42, Bickert is currently one of only a handful of people, along with her counterparts at Google, with real power to dictate free-speech norms for the entire world. In Oh, Semantics*, she sits at the head of a long table, joined by several dozen deputies in their 30s and 40s. Among them are engineers, lawyers, and P.R. people. But mostly they are policymakers, the people who write Facebook’s laws. Like Bickert, a number are veterans of the public sector, Obama-administration refugees eager to maintain some semblance of the pragmatism that has lost favor in Washington.
*the name of a meeting room
You’d think they’d be a little more able to figure things out than Zuckerberg types, but…
Facebook has a 40-page rule book listing all the things that are disallowed on the platform. They’re called Community Standards, and they were made public in full for the first time in April 2018. One of them is hate speech, which Facebook defines as an “attack” against a “protected characteristic,” such as gender, sexuality, race, or religion. And one of the most serious ways to attack someone, Facebook has decided, is to compare them to something dehumanizing.
Like: Animals that are culturally perceived as intellectually or physically inferior.
Or: Filth, bacteria, disease and feces.
That means statements like “black people are monkeys” and “Koreans are the scum of the earth” are subject to removal. But then, so is “men are trash.”
See the problem? If you remove dehumanizing attacks against gender, you may block speech designed to draw attention to a social movement like #MeToo. If you allow dehumanizing attacks against gender, well, you’re allowing dehumanizing attacks against gender. And if you do that, how do you defend other “protected” groups from similar attacks?
Erm. They seem to have missed the whole power-imbalance issue completely. It can’t be just “about race” or “about gender” – it has to do with hierarchies as well as categories.
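To see how blunt the category-only approach is, here’s a minimal sketch, entirely my own invention and nothing like Facebook’s actual code, of the rule as written. The group and term lists are illustrative; the point is structural: the rule knows categories, and a hierarchy simply isn’t one of its inputs.

```python
# Hypothetical sketch of a category-only hate-speech rule. Not Facebook's
# code; the lists are illustrative. The rule knows categories, not
# hierarchies, so every <group> + <insult> combination is flagged alike.

PROTECTED_GROUPS = {"women", "men", "black people", "koreans"}
DEHUMANIZING_TERMS = {"monkeys", "scum", "trash", "filth", "bacteria"}

def violates_policy(post: str) -> bool:
    """Flag '<protected group> are <dehumanizing term>' comparisons."""
    words = post.lower().split()
    for group in PROTECTED_GROUPS:
        g = group.split()
        for i in range(len(words) - len(g)):
            if words[i:i + len(g)] == g and words[i + len(g)] == "are":
                rest = words[i + len(g) + 1:]
                if rest and rest[0].strip(".!?") in DEHUMANIZING_TERMS:
                    return True
    return False

# Symmetric by construction: a #MeToo post and misogynist abuse look
# identical, because power imbalance isn't in the rule's vocabulary.
assert violates_policy("Women are scum")   # removed
assert violates_policy("Men are trash")    # removed too
```

A real system is vastly more elaborate, but the blindness is the same: the target’s position in a social hierarchy is simply not an input.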
Another idea is to treat the genders themselves differently. Caragliano cues up a slide deck. On it is a graph showing internal research that Facebook users are more upset by attacks against women than they are by attacks against men. Women would be protected against all hate speech, while men would be protected only against explicit calls for violence. “Women are scum” would be removed. “Men are scum” could stay.
Problem solved? Well … not quite. Bickert foresees another hurdle. “My instinct is not to treat the genders differently,” she tells me. “We live in a world where we now acknowledge there are many genders, not just men and women. I suspect the attacks you see are disproportionately against those genders and women, but not men.” If you create a policy based on that logic, though, “you end up in this space where it’s like, ‘Our hate-speech policy applies to everybody—except for men.’ ” Imagine how that would play.
Oh ffs. It’s hopeless.
In truth, “men are scum” is a well-known and much-debated topic in Menlo Park, with improbably large implications for the governing philosophy of the platform and, thus, the Internet. For philosophical and financial reasons, Facebook was established with one set of universally shared values. And in order to facilitate as much “sharing” as possible, no one group or individual would be treated differently from another. If you couldn’t call women “scum,” then you couldn’t call men “scum,” either.
If you take a step back, it’s kind of an idealistic way to think about the world. It’s also a classically Western, liberal way to think about the world. Give everyone an equal shot at free expression, and democracy and liberty will naturally flourish.
I don’t see it as idealistic so much as uncomprehending, blind, privileged. It’s just dense to think that the rules already and always work exactly the same way for everyone. “Everybody gets to call everybody a cunt; that’s freedom.” Except that it harms women no matter who is the target, but hey, liberty will naturally flourish.
In defense of us socially clueless techy fools, the problems Facebook has with this kind of thing are mostly due to three things:
1. A disconnect between policy makers and technical people.
This can cause any number of problems and I’ve seen them all a hundred times. A classic one (and Facebook does this a lot) is when the policy makers ask the tech people whether they can do X, the tech people say “no, X is either practically or fundamentally impossible”, and the policy people decide the tech people are just being negative or not trying hard enough, or that some technology that eats poorly-defined policies and shits out magic will probably happen at some point in the future and we should plan for that. Provided, of course, that the people who have already told them it’s impossible just try hard enough. What’s wrong with those socially clueless techy fools, anyway? It’s not as though we pay them to think; what else have they got to do but even more impossible things than we’re already demanding?
Another is the aforementioned poorly-defined policies and protocols. When they are put to the software people we say “yeah, but… look at all these holes. Look at the instances when it won’t work. Look at the fact that we can’t measure the things you want us to measure,” and so on. Policies are one thing; protocols and engineering are quite another. I’ve been in dozens and dozens of meetings about this sort of thing. The policy makers think they can just say “make it do this sort of thing” and everything will work out fine. From their point of view, they’re absolutely right, because we techies get the blame when the software behaves in exactly the poorly-specified way that was asked for. But we are really good at turning policies into protocols when there’s mutual respect. There never is.
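To make that gap concrete, here’s a hypothetical sketch, mine and nobody’s real moderation code, of what happens when you transcribe a written policy clause straight into a protocol. Every vague phrase becomes a function nobody can actually write:

```python
# Hypothetical: a policy clause transcribed literally into code. The helper
# names mirror phrases from the written policy; none is implementable as
# specified, which is exactly the hole we keep pointing at in meetings.

def should_remove(post: str) -> bool:
    """Policy: 'remove dehumanizing attacks, unless the post is drawing
    attention to a social movement.'"""
    if is_dehumanizing_attack(post):
        if draws_attention_to_movement(post):
            return False   # e.g. "men are trash" in a #MeToo thread
        return True
    return False

def is_dehumanizing_attack(post: str) -> bool:
    # The policy gives examples ("filth, bacteria, disease and feces"),
    # not a decidable rule. Software needs the rule, not the examples.
    raise NotImplementedError("the policy defines no testable boundary")

def draws_attention_to_movement(post: str) -> bool:
    # A judgement about intent and audience -- not a measurable property
    # of the only thing this function actually receives: the text.
    raise NotImplementedError("intent isn't measurable from a single post")
```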
There are lots and lots of other examples, but the problem isn’t either camp, it’s the failure of the camps to work together. These days this is rarely (in practice) the fault of the people trying to build the architecture and write the software so much as it is the way the organisation is structured. Even today there’s a widespread attitude that the tech groups ought to be subservient to everyone else and should just “GET IT THE FUCK DONE” regardless of reality or budget, which brings me to:
2. Inability to let software and architecture experts do their jobs
Give us socially clueless techy fools a set of requirements and enough time and money and we’ll turn them into something that actually constitutes a set of requirements. Tell us to build a bridge to the moon (and some of what the non-tech divisions of Facebook regularly ask for is genuinely about as ambitious) and we’ll do what we do and eventually say “but what you really want is this,” and often “but we can’t do that, we can do this,” or “we can do that, but it will cost all the money in the world and won’t work anyway”.
Most of the time in big companies some enormous sum of money will be signed off to make that happen (yay!), but within a fortnight large amounts of that budget will be pulled while we’re still expected to deliver the same thing on the same timescale. We say that we can deliver less or deliver later but not both… and hear nothing. We’re left to make that decision ourselves, so that the policy people don’t get the blame when it all goes wrong.
Inevitably (seriously, on every software project anyone has ever done, ever) we’re told that we can take the budget and time we need out of the testing budget. We all know what happens then: the software doesn’t get finished and it doesn’t get tested properly, either. Besides, someone will long ago have seen this big, lovely testing budget sitting around not being used yet and plundered it. Every. Single. Time.
3. They’re lying
What Facebook has become is exactly what it wanted to be. The ‘problems’ are its business plans. The things it’s claiming to fix are what it wanted to happen in the first place, not the accidental consequences of a platform that somehow got away from itself. It was the business types and the policy types who made that happen, not the techy types. I’m damn sure the software people were asked about it, that they spelled out what the consequences would likely be, and that they were ignored. Again and again. I’d stake my life on it.
Oh, and I should throw in a 4: Don’t ever consider that Zuckerberg is anything like a veteran of software and architecture development. He demonstrates his technical idiocy on a daily basis. He couldn’t write “hello world” on his own cock.
Don’t get me wrong. I’ve said several times here that software developers are often idiots about… well, most things, especially women, equality and not being a dick.
But what’s happening with Facebook is deception. It’s a company built on deception by people concerned with money more than technology. And the difficulties it faces as a terrible company – one that treats everyone, including its users, people who are not its users, and entire fucking democracies, as subservient to its requirements – are not due to us socially clueless techies or to the technically clueless policy makers, but to the capitally rapacious business types and their failure to learn how technology, and the businesses that rely on it, scales.
In other words, the main problem is that our governments blithely allowed companies like Facebook to become de facto and then actual monopolies and pretended not to know what would happen.
Correct me if I’m wrong (I am not on Facebook and have no desire to be there), but doesn’t Facebook implement those rules somewhat inconsistently, as well? I mean, I’ve heard of people complaining about misogynistic posts that called women names and even called for rape against women, but not getting any action to have them removed. Or is that only Twitter? I can’t keep these things straight, especially since the entirety of my social online interaction is here at B&W.
Oh, definitely, which is part of the point of my heavy sarcasm.
That sums up so much, not just Facebook, not just tech companies, but even the world in which I work – academia. The search for the almighty dollar, butts in seats, eyeballs on screens, and money passing from those who have some to those who have a lot… that is what is driving almost everything in our society. And those of us in the trenches who know “we can’t do that” are always going to get the blame (like teachers, who are expected to increase graduation rates without decreasing quality and without asking the students to actually, you know, like, spend some of their time studying).
#2 and #3
Very definitely indeed. And every single time these companies appeal to software that can’t ever possibly exist to sort it out. That’s the problem. Filters can’t do what Facebook says they can do. Policies can’t automatically be implemented in software even if they are fairly well-defined. AI isn’t a silver bullet either.
Companies need to scale back their ambitions based on what is actually achievable in software otherwise we’ll be left with… well, look around.
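For anyone who hasn’t built one: here’s a toy filter, assuming nothing about Facebook’s real pipeline (which is certainly fancier), showing the two failure modes that no amount of fanciness eliminates:

```python
# A toy keyword filter. The blocklist is illustrative; both failures below
# are structural, not bugs that a bigger word list would fix.

BLOCKLIST = {"scum", "trash"}

def flags(post: str) -> bool:
    return any(word in BLOCKLIST for word in post.lower().split())

# False positive: quoting abuse in order to condemn it.
assert flags("he called her scum and the report was ignored")

# False negative: trivial obfuscation sails straight through.
assert not flags("women are s.c.u.m")
```

Swapping the word list for embeddings or a classifier changes the error rates, not the shape of the problem: the software is still guessing at intent from surface features.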
Actually, what they’re talking about is possible – but would have to be done by people who can understand the nuances in language and not just look for word combinations or something. Of course, that has its own set of problems, and not just the fact that paying those people would cut into their obscene profits. The people come with their own sets of biases, and their own lack of knowledge about others, so they might not understand why “runs like a girl” or “hey, I bet you like watermelon” might be problematic (or they might actually have those biases, so they think they’re funny, clever things to say).
Correct, and exactly the thing I’m trying to say.
But even if that weren’t the main problem (which it is), it wouldn’t scale anyway. That’s why it’s algorithms for all, even when we don’t know how to write the algorithm, which is why we need to employ AI… which we don’t know how to do or regulate…
Seriously, the internet, I fucking told you so.
Facebook uses people to enforce community standards; meaning, rather than strictly having software apply a set of context-free rigid rules, they have a vast army of underpaid, overworked, psychologically traumatized individuals applying a set of context-free rigid rules.
I don’t think it is possible in most cases to look at single posts and determine that someone is being abusive. Thus I think the huge amount of effort aimed at scrutinizing text is doomed from the start.
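A toy illustration of why, with an invented example:

```python
# Identical bytes, opposite meanings. Any per-post classifier must return
# the same verdict for both, because the difference lives in the thread.

post = "you deserve everything that's coming to you"

thread_a = ["congratulations on the new job!", post]   # supportive
thread_b = ["we know where you live", post]            # a threat
```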
Re the moderator army:
https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona
So they’re not the wizards of Menlo Park? Apologies to Edison.
I have reported threats and hate speech directed at real women many, many times on Facebook, and very, very rarely does even the most vile thing upset their precious “community standards” – but if you get reported for anything BY a trans booster, you will get a ban. So, sadly, I have taken to blocking every trans and trans booster I come across on FB, because Facebook does not run on principles; it has openly decided that trans are The Chosen People and real women are trash.