The information is accurate but not true
Again with the issue of truth versus free speech: Think Progress on Facebook’s breezy indifference to truth:
The latest instance of Facebook doubling down on its failure to avert the spread of misinformation came after an altered video of House Speaker Nancy Pelosi (D-CA) went viral on the social media platform last week. Facebook was widely criticized for refusing to take down the video — even after admitting that it had been doctored to make her look like she was slurring her words or drunk.
What was particularly shocking is that in defending this move, Facebook told the Washington Post, “We don’t have a policy that stipulates that the information you post on Facebook must be true.”
We now live in a world where “information” doesn’t have to be true.
Equally stunning is what Monika Bickert, the company’s head of global policy management, told CNN’s Anderson Cooper on Friday. “We think it’s important for people to make their own informed choice about what to believe,” Bickert said. “Our job is to make sure that we are getting them accurate information. And that’s why we work with over 50 fact-checking organizations around the world.”
But how can a doctored video be considered “accurate information”?
Facebook didn’t say.
Interesting point: federal money has been used to pay for gender-reassignment surgery for service members, but never for elective abortion. So trans people have more rights over their bodies than women do.
I guess Joe and Jane Schmo making their lives look more interesting than they are on Facebook is no different from slandering a public figure in order to poison our political discourse.
Someone went to a lot of trouble to fake that video, slowing it down just enough to keep it believable, then correcting the pitch back to where it had been so that it didn’t give away what had been done. This is a clear intent to deceive. I don’t see why such deception should be allowed on Facebook.
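For what it’s worth, that two-step job (slow the video down, then pitch-correct the audio) can be done in a single pass with a phase-vocoder time stretch, which changes speed without touching pitch. Here’s a rough Python sketch of the technique; the filenames are placeholders, and I’m not claiming this is the tool the faker actually used:

```python
# Rough sketch: slow speech to 75% speed while preserving pitch.
# librosa's time_stretch uses a phase vocoder, so the pitch stays put
# even though the audio gets longer -- exactly the effect described above.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)          # load at native sample rate
slowed = librosa.effects.time_stretch(y, rate=0.75)  # rate < 1 means slower
sf.write("speech_slowed.wav", slowed, sr)            # same pitch, 75% speed
```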
As Bruce says, this isn’t people lying about their boring lives. This is someone trying to shift political moods using clear deception.
There are videos where people are sped up or slowed down, where silly music is added, and where other things are done to make fun of someone. I could see not banning those, as it’s clear what’s being done. But once you’re passing something off as original video when it’s been doctored, you’re in another realm.
How can they not have a policy “that the information you post on Facebook must be true” and simultaneously have a policy “to make sure that [they] are getting [users] accurate information”?
Maybe they need to think a little harder about their policy of allowing uninhibited distribution of false information on their platform. It’s one thing not to expressly prohibit “untrue information”; maybe that’s partly a matter of practicality and enforcement (how do you measure what’s true, how do you police it, and so on). But with just a few seconds’ casual thought I can imagine several measures they could take to curtail its spread: a team of monitors to track and shut down particularly viral posts that are demonstrably false; a system that lets users “flag” content they deem untrue, which then warns future users that the content is suspicious (and not just a weak “additional reporting on this” link); etc., etc. They’re flush with money and brainpower over there; I have no doubt they could come up with better ideas if they actually gave a shit. But they’re libertarian-minded and uninterested in hearing about their responsibility, so I doubt they’ll come around anytime soon.
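Just to make the flag-and-warn idea concrete, here’s a toy sketch. It’s entirely my own illustration of the mechanism, not anything Facebook actually runs:

```python
# Toy flag-and-warn mechanism: once enough distinct users flag a post,
# later viewers see a warning attached to it.
from dataclasses import dataclass, field

FLAG_THRESHOLD = 2  # flags required before a warning is attached

@dataclass
class Post:
    post_id: int
    text: str
    flaggers: set = field(default_factory=set)

    def flag(self, user_id: str) -> None:
        # a set, so the same user can't stack flags
        self.flaggers.add(user_id)

    @property
    def suspicious(self) -> bool:
        return len(self.flaggers) >= FLAG_THRESHOLD

def render(post: Post) -> str:
    # future viewers see the warning once enough independent flags accumulate
    if post.suspicious:
        return "[Readers have flagged this content as possibly false]\n" + post.text
    return post.text

p = Post(1, "Shocking video shows...")
p.flag("alice")
p.flag("bob")
print(render(p))  # now carries the warning
```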
It’s the libertarian-ness that annoys me about this: that they think it’s somehow wrong to make any kind of judgment on the content on their site (to assess its moral value or truth value or any other value in any way) or to do anything meaningful to curtail the stuff that’s demonstrably detrimental to society.
They are uninterested in having users flag content as inaccurate (or perhaps actively interested in not having them do so).
A site I spend a lot of time on, LibraryThing, which is nowhere near as big-time as Facebook (but is pretty big-time, apparently), has this feature. You can flag a reader’s review as not a review or as violating the terms of service. After two flags, they look at it, and I have seen some reviews made inaccessible as a result (though they can still be seen with a little effort, which is good, because some people maliciously flag reviews they don’t like; the ones I have had flagged were always things with a feminist bent). If you go to the trouble of looking at a blocked review, you can suggest to LibraryThing that the flagging was inaccurate, and they will review it again.
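If I had to guess at the logic behind that lifecycle (and this is only a guess, not LibraryThing’s actual code), it would look something like this:

```python
# Guessed lifecycle: two flags trigger staff review; a hidden review stays
# reachable with effort; a reader can contest it and trigger re-review.
from enum import Enum

class Status(Enum):
    VISIBLE = "visible"
    UNDER_REVIEW = "under review"
    HIDDEN = "hidden (still reachable with effort)"

class Review:
    def __init__(self, text: str):
        self.text = text
        self.flags: set = set()
        self.status = Status.VISIBLE

    def flag(self, user: str) -> None:
        self.flags.add(user)
        if len(self.flags) >= 2 and self.status is Status.VISIBLE:
            self.status = Status.UNDER_REVIEW  # staff take a look

    def staff_decision(self, hide: bool) -> None:
        self.status = Status.HIDDEN if hide else Status.VISIBLE

    def contest(self) -> None:
        # a reader who dug up the hidden review says the flags were malicious
        if self.status is Status.HIDDEN:
            self.status = Status.UNDER_REVIEW  # staff review it again
```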
I realize that with Facebook’s sheer number of users, this could be daunting. But LibraryThing has a lot of users too (though, again, not as many as Facebook), and it has far fewer people doing the work.
Facebook just doesn’t want to. They want us to believe they are taking a principled stance, but they do have the time and ability to shut down things they want to shut down.
As I keep saying, the “tech” part of big tech isn’t really the problem; it’s the “big”. While I have no doubt that Facebook is a terrible company without even a veneer of morality, and that everything everyone has said about it here is true, there are some practical and ethical difficulties with taking down the video that come purely from scale.
For example, if they used an algorithm to identify and take down all instances of the video, it would also take down the posts that said “christ, look at what these arseholes have done now”. This kind of filter will always result in false positives.
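To make the false-positive point concrete: a takedown filter keyed on the media fingerprint sees only the clip, not the caption. Here’s a toy sketch (my own illustration; real systems use perceptual hashes rather than Python’s built-in hash):

```python
# Toy illustration: takedown-by-fingerprint matches the media, not the message,
# so a post condemning the fake is removed along with posts endorsing it.

def fingerprint(media: bytes) -> int:
    return hash(media)  # stand-in for a real perceptual/content hash

BANNED = {fingerprint(b"doctored-clip")}

posts = [
    ("She could barely speak. Unfit for office!", b"doctored-clip"),
    ("christ, look at what these arseholes have done now", b"doctored-clip"),
]

for caption, media in posts:
    if fingerprint(media) in BANNED:
        print(f"TAKEN DOWN: {caption!r}")  # the critic's post goes too
```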
I can see why a platform with the best intentions wouldn’t want to do that.
Of course, Facebook only has bad intentions anyway, and the difficulty of scale doesn’t give it the right to throw up its hands and say there’s nothing it can do. There are practical things it could have done to help, and it was obviously not very interested in doing so.
So, latsot, are you saying they’re too big to succeed?
I’m saying that scale brings problems nobody knows how to solve yet or even to properly understand.
And I’m saying that Facebook has no intention of trying.