Brave new world
And then there’s this whole idea that we can make morality a science by basing it on universal desire for well-being.
One problem with that is that we don’t all have the same view of what constitutes well-being, to say the least. We don’t agree on what constitutes well-being in general and we certainly don’t agree on what constitutes it for self as opposed to other.
And suppose someone did come up with a survey that found – convincingly – that aggregate well-being was higher when women were more or less forced, by the lack of opportunity to do anything else, to be wives and mothers and nothing else, and lower when they had wider opportunities and correspondingly more freedom. Suppose there is such a survey, one that shows aggregate well-being higher and women’s well-being lower. Suppose a world where women are distinctly a minority, as they are in India and China because of selective abortion. Would that outcome – a less happy minority but a happier total – be moral?
No; not in my view at least. But the idea that we can make morality a science by basing it on universal desire for well-being seems to mean that it would be.
I think our conceptions of well-being may not be crisply defined or definable, but I would be extremely surprised if there were no overlapping consensus on their core items. Food, shelter, etc.
The consequentialist asks us to restrict our thought-experiments to the most realistic, the most plausible. So for them, the reply to your minority case is that it asks us to imagine a world that is not ours. Humanity is a race that is occasionally fraternal and often spiteful. For any species with social sympathy and enlightened self-interest, if a population lives in a society with national self-awareness and legal stability, then that population cannot help but fear for the minority. Otherwise you risk the nation’s integrity and, more importantly, threaten civil unrest.
Perhaps I haven’t been paying adequate attention, but it hasn’t been my understanding that aggregate well-being plays into the discussion.
I don’t get that.
I am not worried. Sociological research on human needs and desires will help us move beyond what we might *think* is needed to fashion a better world, to what is actually needed. Moving away from opinions and political ideologies and establishing facts is a good thing.
Ophelia’s hypothetical, shortened: a world where women are a minority, forced by lack of any other opportunity to be wives and mothers and nothing else, with aggregate well-being higher and women’s well-being lower. Let’s call this Scenario B.
1. Is it plausible that denying half of us their freedom could make all of us better off, on average? Why does Scenario B have these results in terms of well-being — what is the mechanism? If it cannot actually happen, it’s not a good example.
2. “Would that outcome be moral” — compared to what? Even supposing it is known that society can be arranged this way — we have only to choose it, or not — that statement makes no sense unless we know what the alternative is. Can you describe a Scenario A, to which B can meaningfully be compared? Can you describe some parameters that can plausibly be tweaked to move us between A and B in design space?
3. Suppose that it is known that society can only be like A or B; that this is the only choice we get. Well, it’s not very innovative, but in this case I propose that we choose the one which results in greater aggregate well-being.
(There may be subtleties that deserve supplementary consideration — I am thinking especially of Rawls’s “difference principle” — but generally speaking, these are covered by the mere consideration of utility, because that’s what utility is.)
4. Finally, if we did choose Scenario B — and if we chose it because although it’s unfair toward women it’s the best we can do — then we know exactly what to do next. Invoking the difference principle, we look for the parameters that we can tweak to make women better off, without ruining it for everyone else.
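To make points 3 and 4 concrete, here is a minimal sketch of the two decision rules being contrasted, with entirely made-up well-being numbers (nothing here is data about any real society):

```python
# A toy comparison of a plain utilitarian rule with Rawls's difference
# principle (maximin). All scores are hypothetical placeholders.

scenario_a = [6, 6, 6, 6]   # wider freedom: everyone moderately well off
scenario_b = [9, 9, 9, 2]   # Scenario B: higher total, one group much worse off

def aggregate(scores):
    """Plain utilitarian criterion: total well-being."""
    return sum(scores)

def worst_off(scores):
    """The difference principle attends to the minimum instead."""
    return min(scores)

for name, s in (("A", scenario_a), ("B", scenario_b)):
    print(f"Scenario {name}: aggregate = {aggregate(s)}, worst-off = {worst_off(s)}")

# The aggregate rule picks B (29 > 24); maximin picks A (6 > 2).
# Point 4 amounts to: having chosen B on aggregate grounds, search the
# nearby design space for a variant that raises the minimum without
# (much) lowering the sum.
```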
Now, in this story “everyone else” is the men — but we could swap the genders throughout and it would make no difference to the argument. The reason to think that it might make a difference is that Scenario B seems (by design) so similar to real situations around us in the world right now. But is it really like what’s going on out there? As in Scenario B, women are restrained, coerced and oppressed — but unlike in that scenario, there is no reason to suppose that, all things considered, this might be a good thing. Oh, there are people who say such things — but they are almost certainly wrong.
This means, if I am not mistaken, that as a counter-example to the kind of utilitarian realism Harris is advocating, Scenario B fails.
I will go further. (I always do.)
When people claim that the oppression of women leads to (in Ophelia’s language) “a less happy minority but a happier total,” they do so not because they take it to be true but because they hope that it will benefit them. It is not really a proposal as to the best way to maximize utility; in other words, it is not really a moral claim. What an oppressor wants is to reap the benefits of his oppression of us. Not because we are all better off that way — duh! — but because he is better off that way! He might say that it’s in everyone’s best interest; or that God wants it that way; or whatever. These are diversionary tactics, not moral theories.
Markup fail — the last three paras were not all supposed to be so intense.
The scenario with the survey seems a bit naive, on several levels. From a pure measurement point of view, questionnaires do not deal all that well with big important non-quantitative questions like, how much suppression of women do you consider essential for the Good Life?
More importantly, this would try to measure average well-being in the population, and to regulate society accordingly. How would that be different from hardcore utilitarianism? Even if you want to base your morality on some aggregate measure of well-being, you’d need to balance it by considering well-being on an individual level, too, just like majority rule needs some counterbalancing individual rights for a chance at democracy. I don’t know whether you can have science-derived morality, but I doubt you should have social-science derived morality, surveys and all.
I think one could definitely aim for science-compatible morality instead, whatever that may mean: is there a way of formalizing moral intuition, even different and conflicting intuitions? Or should we just try not to multiply our moral entities unnecessarily? Or are there solid (hah!) neuro-physiological correlates for well-being? Etc.
I’m worried that Harris is talking about turning science into teleology. Not an anti-teleology like liberalism (in the sense of ‘liberal democracy’), or using science as a method for achieving the goal of human happiness as defined by X, but an actual teleology.
Am I being unfair (and a bit “ooh! scientism!!1”)?
#6 – well I know it’s not a plausible survey – that’s why I said “suppose.” I wasn’t trying to paint reality, I was trying to consider an implication.
Will it. But how will we know what “a better world” is? How will we decide the definition of a better world? How can we even talk about a better world without opinions and political ideologies?
Ideas about what a better world is are full of emotion; they are passional, they are commitments. They are not just facts.
I think Harris’ idea is getting through to many people. His message is somewhat subtle and is not looking for a perfect solution, only a path to a better outcome, incrementally. Thanks Benjamin, Michael, Roy and Alexander for your clarifying comments. Harris’ October book seems so far away.
Hello Ophelia Benson. I think that proposing all these difficult scenarios doesn’t address the claim that a science of morality is possible in principle. For each scenario, I can imagine an alternative where some future, more sophisticated measurement of true well-being gets it right (where, for instance, selfish gratification is determined to “weigh” less than loss of freedom). I concede this gets into science-fiction territory pretty fast. But I think my point stands: your objections point to practical challenges; they are not a refutation of the in-principle possibility of Harris’ proposal.
Best regards – Steve Esser
This seems to be the common response to what Harris is proposing. Yet everyone who says it assumes that it’s just obviously true. I’m not convinced that it’s quite so obvious. I think a lot of people are confusing taste with morality.
I haven’t solved all the world’s problems in my head just yet, and will want to read Harris’ book on the subject before judging whether or not he’s made any progress on that front, but it strikes me as intuitively wrong to think that morality is outside the realm of science. Morality is a concept that does not exist outside of brains, and brains are physical objects, open to objective study. There are already studies showing that certain moral questions are answered the same across cultures (the famous questions concerning trolley switches, track workers, extremely large people nearby, etc.). It seems to me (i.e. it’s my opinion) that the claim that different humans have significantly different concepts of well-being is the extraordinary one, in need of supporting evidence.
I don’t assume that it’s obviously true, but I do think that this particular site (for one) has collected a lot of evidence that it’s true over the past several years. I’m leaning on that as background knowledge, and assuming that many readers are aware of it.
It’s certainly possible to study morality as an empirical subject, but that’s quite different from making moral choices and commitments. Anthropologists gather empirical evidence of customs and values, but that’s not the same thing as sharing those customs and values.
I’ve been mistrustful of Harris’s project from the start, not merely because he hasn’t even come close to bridging the gap between ‘is’ and ‘ought’ but because it looks suspiciously like he’s attempting to turn science into a substitute for religion in exactly the way theists claim the rest of us atheists are doing.
And the trolley question and the like don’t show any universally accepted answers, merely statistical preferences. Even if 99% of the world’s population answered it one way, and this statistic was consistent across all cultures and at all times, it wouldn’t make the other 1% wrong; nor would discovering that this 1% lacked a particular gland, found in the brains of others, which produced both a biologically determined decision and a sense of euphoria on making that decision.
That would still be describing an ‘is’, not an ‘ought’.
Yes…It’s what a lot of people love to call “scientism”: thinking and claiming that science can answer all questions.
Playing into that stereotype is probably a considerably bigger mistake than merely being an unapologetic atheist, or even than being both a scientist and an unapologetic atheist.
I also have a problem with identifying ‘morality’ with ‘consequentialism’. A gangster who informs on his former colleagues in return for immunity can be thought of as acting in the public interest in the sense that his testimony might benefit society as a whole: I can accept that the prosecutors who offer him this deal may well be acting ‘morally’, but the gangster himself is acting purely out of self-interest.
Or take a fictional example. In the TV series Dexter, the eponymous serial killer has a clearly defined code – the Code of Harry, the police officer who brought him up. Dexter only kills other killers, and as such acts in a similar manner to more traditional vigilantes. Although vigilante violence is more likely to lead to greater violence in the real world, this is not a given, and a plausible – if unlikely – case could be made that vigilantism can be a ‘moral’ act if it leads to positive consequences.
However Dexter isn’t acting out of any desire to save society from other killers or even to punish them on society’s behalf, he’s a sociopath driven to kill for the sake of killing by his ‘dark passenger’. The Code of Harry instructs him to kill only killers because that way he has less chance of being caught. Morality doesn’t enter into it.
Shatterface, you mention Dexter’s Code of Harry as an “example” — what is it an example of?
There is the problem of what metric one uses to measure a ‘better’ society. Average lifespan? Average wealth? Overall lack of violence? All of these are good, but in themselves they DON’T necessarily define a better society.
To take a contrived example, consider a coercive society where diet, activities, lifestyle, etc. are highly constrained to eliminate risk. No one gets a choice. They are forced to go to the doctor and to be treated regardless of their wishes; they are forced to eat a bland but healthy diet, forced to exercise, and prevented from doing anything risky… average lifespan would increase, but it does not sound like a good society.
To a lesser degree, there are problems with any ‘average’ metrics. They say nothing about the individuals involved. Average, statistical values are of use to bureaucrats but do nothing for the individual. This is the danger of the ‘greater good’ mindset.
GDP is a metric that is used as a measure of the well-being of society in some quarters.
I appreciate that I’m committing at least two logical fallacies with that argument, but I don’t think that invalidates it. ;-)
And of course it’s well known (at least to people who pay attention) that GDP and other similar aggregates are a terrible way to measure overall well-being because they can simply conceal extreme poverty behind some comfortable average.
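A tiny illustration of that concealment, with assumed numbers: a single rich outlier can make the mean look comfortable while most people, and especially the worst off, sit far below it.

```python
# Toy numbers only: how one outlier lets an average conceal poverty.
import statistics

incomes = [9_000, 10_000, 11_000, 12_000, 500_000]  # one very rich outlier

print(statistics.mean(incomes))    # 108400 -- the "comfortable average"
print(statistics.median(incomes))  # 11000  -- closer to most people's reality
print(min(incomes))                # 9000   -- the poverty the mean hides
```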
‘Shatterface, you mention Dexter’s Code of Harry as an “example” — what is it an example of?’
I was using it as a formalised code of behaviour which *could* have a beneficial outcome for society in general – similar to some utilitarian arguments for a benign religion, in fact – but which I doubt any of us could consider to be ‘moral’.