The toke wakeover
Bari Weiss shared this essay by UCLA Anthropology Professor Joseph Manson as another item on the long list of woke students running amok. I think the story isn’t quite as stark as she and Manson think it is.
I’m a 62-year-old professor—by academic standards, still young. But I am retiring this summer because the woke takeover of higher education has ruined academic life. “Another one?” you ask. “What does this guy have to say that hasn’t already been said by Jordan Peterson, Peter Boghossian, Joshua Katz, or Bo Winegard?”
Well, for one thing, how about items of interest to women? There’s actually quite a lot of conflict among feminist women and trans-obsessed students in academia right now, in case you hadn’t noticed, so I doubt that four men have completely covered it…especially Peterson and Boghossian.
But Manson doesn’t address any specifically feminist issues.
I’ve been a professor in the Anthropology Department at UCLA since 1996; I received tenure in 2000. My research has spanned topics ranging from nonhuman primate behavior to human personality variation. For decades, anthropology has been notorious for conflict between the scientific and political activist factions in the field, leading many departments to split in two. But UCLA’s department remained unusually peaceful, cohesive, and intellectually inclusive until the late 2000s.
Gradually, one hire at a time, practitioners of “critical” (i.e. leftist, postmodernist) anthropology, some of them lying about their beliefs during job interviews, came to comprise the department’s most influential clique. These militant faculty members recruited even more militant graduate students to work with them.
I can’t recount here even a representative sample of this faction’s penchant for mendacity and intimidation, because most of it occurred during confidential discussions, usually about hiring and promotion decisions. But I can describe their public torment and humiliation of one of my colleagues, P. Jeffrey Brantingham.
Jeff had developed simulation models of the geographic and temporal patterning of urban crime, and had created predictive software that he marketed to law enforcement agencies. In Spring 2018, the department’s Anthropology Graduate Students Association passed a resolution accusing Jeff’s research of, among other counter-revolutionary sins, “entrench[ing] and naturaliz[ing] the criminalization of Blackness in the United States” and calling for “referring” his research to UCLA’s Vice Chancellor for Research, presumably for some sort of investigation. This document contained no trace of scholarly argument, but instead resembled a religious proclamation of anathema.
As you won’t be surprised to hear, Jeff is not a racist, but a standard-issue liberal Democrat. The “referral” to the Vice-Chancellor never materialized, but the resolution and its aftermath achieved its real goal, which was to turn Jeff, who had been one of the most selfless citizens of the department, into a pariah.
Ok, but there’s one bit here that stands out, I think. To repeat:
Jeff had developed simulation models of the geographic and temporal patterning of urban crime, and had created predictive software that he marketed to law enforcement agencies.
Does that sound potentially sinister to you? Because it does to me. Manson never mentions that potentially sinister vibe, so I went looking for a little analysis. From 2018:
A pioneer in predictive policing is starting a troubling new project
By Ali Winston and Ingrid Burrington
Jeff Brantingham is as close as it gets to putting a face on the controversial practice of “predictive policing.” Over the past decade, the University of California-Los Angeles anthropology professor adapted his Pentagon-funded research in forecasting battlefield casualties in Iraq to predicting crime for American police departments, patenting his research and founding a for-profit company named PredPol, LLC.
PredPol quickly became one of the market leaders in the nascent field of crime prediction around 2012, but also came under fire from activists and civil libertarians who argued the firm provided a sort of “tech-washing” for racially biased, ineffective policing methods.
Now, Brantingham is using military research funding for another tech and policing collaboration with potentially damaging repercussions: using machine learning, the Los Angeles Police Department’s criminal data, and an outdated gang territory map to automate the classification of “gang-related” crimes.
I don’t know; I’m not particularly well-read in this subject, but the project sounds ripe for abuse, and it’s a for-profit enterprise, not some sort of altruistic Trying to Help. So frankly I’m not at all convinced that this is a case of too-woke students and a bullied academic.
Moral of the story: not all cranky academics fed up with woke students are our friends and allies. Read their stories with a raised eyebrow.
I think the problem he’s talking about is described here:
(Emphasis added.)
It’s one thing to criticize the project, as the Verge article does well. It’s another thing to attack the researcher for Wrongthink.
And what did the Graduate Association accomplish? Did they start a conversation about the problems with designing AI to recognize and predict gang-related crime, or the shortcomings of this particular attempt? No. But they sure showed that guy, the bigot.
I know it is, I read the essay, which is why I looked for more information on Brantingham’s work. Judging by how much Manson left out of his account of that, I don’t think he’s a very reliable source.
I dunno. After all the woke handwringing about AIs, tech, and trees that are all manner of -ist and -phobic, I’m starting to hear “wolf” with every cry.
The thing with so-called AI systems is that they ‘learn’ from an existing data set. If that existing data set is biased, so is the predictive software. For a person targeted, it’s much harder to argue that their civil rights have been violated by software. There’s also a tendency to believe software just because it’s software. I suspect latsot has forgotten more about this than I’ve ever learned.
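To make the point about learning from a biased data set concrete, here is a minimal sketch in Python with made-up numbers (nothing to do with PredPol’s actual model): a “predictor” that simply learns the share of past arrests per area hands the patrol bias straight back as a risk score.

```python
from collections import Counter

# Hypothetical arrest records: area -> number of recorded arrests. Area "A"
# was patrolled twice as heavily, so it has twice the records, even if the
# underlying offence rates are identical.
historical_arrests = Counter({"A": 200, "B": 100})

def predicted_risk(area: str) -> float:
    """'Learn' risk as the share of past recorded arrests in that area."""
    total = sum(historical_arrests.values())
    return historical_arrests[area] / total

for area in ("A", "B"):
    print(area, round(predicted_risk(area), 2))
# Prints A 0.67 and B 0.33 -- the patrol bias comes straight back out as "risk".
```

Nothing about actual offending enters the calculation; the “risk” is just a restatement of where the records came from.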
I’d argue that to be biased is substantively different from being -ist. Even if they do rhyme.
I find this part telling:
…the implication being “if we had known their true beliefs, we wouldn’t have hired them.” You see, excluding qualified academics for ideological reasons is fine when *we* do it, but not when it’s done to *us*!
Not sure if this link will work–
https://drive.google.com/file/d/1Ti-6xq9yJJpG7CalWgkso1l2WbhbFJs5/view?usp=drivesdk
What is it? Why did you post it?
LM, I could open it just fine.
Ophelia, it’s a PDF copy of the resolution of the AGSA against Jeffrey Brantingham. The “resolved” section:
Sorry, I should have expected and fixed line break discrepancies in the copy/paste before posting.
Thanks Sackbut.
What a ridiculously jargony “statement.” They could have said something concrete and worth saying about profiteering or predictive technology or both instead of a bunch of generalized glurge.
For future reference – if a url doesn’t say what it is then tell us what it is rather than just posting it without explanation.
NP about the line breaks, I’ve had a lot of practice fixing them.
In my own field, the insurance industry has been using machine learning and artificial intelligence techniques for pricing. A potential problem with these techniques is that they can end up selecting a characteristic that is correlated with another characteristic, such as race, that is disallowed for use in pricing as discriminatory. So the pricing can end up biased without that having been anyone’s intent. The irony is that, in order to prevent something like this from happening, you may have to collect data on the disallowed characteristic in order to show the correlation with the proxy indicator, data that you would otherwise want to avoid collecting to eliminate the possibility of discrimination.
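A toy illustration of that proxy effect, with invented numbers and a plain least-squares fit rather than any real rating model: the protected characteristic is excluded from the fit, but because an allowed rating factor is correlated with it, the two groups still end up with different average prices, and the gap can only be measured because the protected attribute was kept for the audit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                     # protected characteristic (0/1); never shown to the model
postcode = 0.8 * group + rng.normal(0.0, 0.5, n)  # allowed rating factor, correlated with the protected one
claims = 1.0 + 0.3 * postcode + rng.normal(0.0, 0.5, n)  # claims happen to track the rating factor

# Fit a price on the allowed factor only (ordinary least squares).
X = np.column_stack([np.ones(n), postcode])
beta, *_ = np.linalg.lstsq(X, claims, rcond=None)
price = X @ beta

print("mean price, group 0:", round(float(price[group == 0].mean()), 3))
print("mean price, group 1:", round(float(price[group == 1].mean()), 3))
# The two groups get different average prices even though 'group' was excluded
# from the model -- and the gap is only measurable because 'group' was retained
# for the audit.
```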
There has been a lot of discussion about this issue lately among actuaries in their professional meetings and publications. Perhaps some will find it a little encouraging that at least some parties within a particular industry are openly acknowledging the potential problem and trying to figure out what to do about it without bogging down the discussion with a lot of woke jargon.
Yes, those AI systems for predicting crime are known to be problematic, we’ve talked about them here before. As Rob said, they are known to encapsulate the systemic racism and other biases that exist in police forces for the obvious reason that they are trained on data compiled by those forces. So if police think that an area where the population is predominantly non-white is a crime hotspot because of individual or systemic racism, they’ll patrol that area more, be more likely to arrest people there and so on. The system will then flag that area as a likely site of future crime, perpetuating the problem (and the racism).
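Here is a small simulation of that feedback loop, purely illustrative and not any real department’s system: two areas with identical underlying offence rates, one of which starts slightly over-represented in the records because it was patrolled more heavily.

```python
TRUE_OFFENCE_RATE = 1.0                  # identical in both areas
records = {"A": 110.0, "B": 100.0}       # A starts slightly over-represented

for year in range(10):
    hotspot = max(records, key=records.get)                 # the "predicted" high-crime area
    patrols = {area: (80 if area == hotspot else 20) for area in records}
    for area in records:
        records[area] += patrols[area] * TRUE_OFFENCE_RATE  # arrests scale with patrols, not with offending
    share_a = records["A"] / sum(records.values())
    print(f"year {year}: share of recorded incidents in A = {share_a:.3f}")
# The share of records attributed to A climbs toward 0.8, driven entirely by the
# initial patrol imbalance -- the underlying offence rates never differed.
```

The only thing driving the divergence is the records themselves, which is the “perpetuating the problem” part.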
I haven’t seen a study of this for a while, but ones I looked at a few years ago showed that these systems were practically useless at predicting crime, but very useful indeed at confirming biases and deflecting blame.