The morality of the gaps
Kenan Malik is not bowled over by Sam Harris on morality.
Harris is nothing if not self-confident. There is a voluminous philosophical literature that stretches back almost to the origins of the discipline on the relationship between facts and values. Harris chooses to ignore most of it…It is one thing to want to “start a conversation that a wider audience can engage with and can find helpful”, something that many of us, including many of those boring moral philosophers, seek to do. It is quite another to imagine that you can engage in any kind of conversation, with any kind of audience, by wilfully ignoring the relevant scholarship because it is “boring”.
I share that view. (I agree with Polly-O!) The breeziness of the attempt to settle complicated issues while ignoring the existing scholarship is grating.
“How does Harris establish that values are facts?” He describes an utterly crappy life, and an utterly blissful one. See? Facts.
It is a kind of argument that suggests that Harris might have done well to spend a bit more time immersed in all the boring stuff…The insistence that because it seems obvious that rape and murder are bad, and that wealth and security are good, so there must be objective values, seems about as plausible as the argument that because there are gaps in the fossil record, so God must have created Adam and Eve.
Kenan sums up:
Creating a distinction between facts and values is neither to denigrate science nor to downgrade the importance of empirical evidence. It is, rather, to take both science and evidence seriously. It is precisely out of the facts of the world, and those of human existence, that the distinction between is and ought arises, as does the necessity for humans to take responsibility for moral judgement.
I did a review of the book myself a few months ago.
I’m first!
I think if Harris had overcome his antipathy to philosophy he’d find that he could get what he wants without a theory that falls apart rather quickly. Facts and values have different subjects, what things are (including us-things) and what we want as individuals and society. The latter is a negotiation among the parties, with community standards and individual choices frequently colliding. The best we can do is guide values with factual assumptions. Having done that, though, you still have choices to make individually and collectively.
It’s almost as though Harris had never heard of David Hume.
Should Harris be expected to be any more interested in all these writings than Dawkins is in writings on Christianity?
Is it that critical to answer every moral philosopher ever if you think neuroscience is making their thinking obsolete?
I think the book is a little flabby in the middle. He jumps around and doesn’t seem to be building an argument.
Still odd to jump down his throat when he’s the first person to make this foray using neuroscience.
I’ve read David Hume. I wonder if he’d ever heard of modern neurology and scientific method.
I liked the article.
Geoff, cute. I wonder if Euclid heard of symbolic logic before developing the axiomatic method? Nope? Guess we can ignore that method then.
Did Euclid read Hume?
Having more of the relevant facts available makes for better choices, but the facts can’t be the choices. That’s why the value/fact distinction can never be erased. They fill different niches in the conceptual toolkit, describing different operations. You can’t add more science or more/better facts and then use less values. They don’t substitute for each other. You can most easily see this by understanding that it’s a value to use facts. It can’t be a fact to use facts, can it? Having better nouns, or even perfect nouns, doesn’t mean you don’t need verbs.
Everyone could agree on all the scientific facts and yet not agree about what is right to do. Harris has not explained why he is right and, say, deep ecologists are wrong when they say that maximising ecosystem well-being should be a higher priority than the suffering of sentient beings.
The whole point of the is-ought distinction is that there’s no obvious criterion; neuroscience cannot resolve this by identifying scientific facts about how brains work.
Kenan Malik’s review is a good one, and I am also not impressed by Sam Harris’ attempt to derive an ‘ought’ from an ‘is’ (not that it can’t be done, just that one would have to do better than Harris to succeed).
However, I don’t agree with this line: “It is precisely out of the facts of the world, and those of human existence, that the distinction between is and ought arises, as does the necessity for humans to take responsibility for moral judgement.”
Is it out of the facts of the world that this distinction arises, or out of our metaphysical intuitions? I don’t see how the facts can entail this distinction any more than the facts can entail morality.
Maybe Malik is just saying we need values, and that’s a fact about us. I think that much is true.
Yes, I think that’s what he says here: The significance of the Euthyphro dilemma is that it embodies a deeper claim: that concepts such as goodness, happiness and wellbeing only have meaning in a world in which conscious, rational, moral agents exist that themselves are capable of defining moral right and wrong and acting upon it.
Caryn, I think that’s correct. Choices involve a multiplicity of facts, intuitions, fears and hopes. What tells you how to weigh each element? Wouldn’t that be a value, too? I think there’s plenty of “fuzzy logic” here, where conscious beings with limited information and resources bring all the emotional and rational tools to bear. What we can do is look for places where facts and rationality can improve the choices (though what constitutes improvement will itself be intuitionally derived, at least in part).
Reasoning about facts while having intuitions and emotions is an irreducible complex of ingredients, no substitutions allowed. The values decision can’t be broken down to more primitive parts while still doing what it does. You might say values are emergent properties of conscious minds, what we do with the extra degree of freedom we have. When a property emerges it usually means you don’t know what happens at the lower levels, like a thought which contains no information about all the billions of neurons firing away to produce it.
Sam Harris has been doing a lot for gnuism recently, with his appearance on BBC’s Newsnight on the topic of banning the wearing of tents, and his debate with Robert Winston in the Guardian, all good stuff. He’s attempted to bring up morality for debate and discussion, and that’s fair enough. I think he went wrong by telling us–rather like an authority on the subject–how we ought to do morality. I think it’s a rather large blind spot within an otherwise intelligent and informed mind.
And the problem, as I’ve mentioned several times, is that we haven’t done our project on morality, and instead we’re relying on a less than conscious judgement based on experience or sentiment to ‘judge’ topics which are obviously moral. This means that we’re not reaching consensus on more nuanced discussions (as we’ve no theory of morality). I also think morality has less of a place in such discussions, because to be honest, our freedoms come before moral refinement. Plenty of people spend their lives living a less than perfect moral life, and it’s not really such a problem compared to tyranny, persecution and the other horrors that we’re all very much attempting to rid our world of.
And so I think we’re doomed to complexity whenever we attempt refined moral discussions, until we accept a unified theory of the subject that is clearly based on scientific evidence.
There is a huge philosophical project that’s been in the works for a very long time on how to naturalize ethics. As Ophelia perfectly puts it, The breeziness of the attempt to settle complicated issues while ignoring the existing scholarship is grating.
Sam Harris is not the first person to look at morality through the prism of neuroscience, not even the first to write for the popular audience about it. Putting “morals neuroscience” into Amazon’s search box returns 86 hits, most of which are pop science books.
Also, there is a big difference between Dawkins ignoring a lot of theology and Harris ignoring a lot of philosophy. If one rejects the foundations of theology (as Dawkins et al do) then there is no reason to examine every thinker who based their work on those foundations. I actually went and looked at the theologians that Terry Eagleton complained Dawkins had ignored, and every single one of them had assumed the truth of their own theologies and had written nothing that would challenge the standard atheist arguments. It was, in fact, a Gish Gallop played by Eagleton. It was no different to defending homeopathy on the basis of a beautiful poem using metaphors from homeopathy.
Harris, on the other hand, has ignored a large and important body of work at the very core of the subject he is attempting to popularise. If he is allergic to philosophy, then he should not wade into a subject area that requires philosophising — because neuroscience, for all its amazing recent progress, is nowhere near capable of answering all the questions Harris would like it to answer…and probably never will be.
And he could have written an interesting book on how the latest findings in neuroscience can add to the discussion of morality, if he’d done it in a careful, modest way, instead of just announcing that he’d Fixed the whole thing. The grandiosity of it is really annoying – as if nobody’d ever thought to take well-being into account before.
Like, say, Bentham.
Quite.
I think the situations are different. With theology, do you think theologians would waste so much time with the horrible solutions they come up with if they could come up with even one good answer? That’s why dipping into the subject quickly convinces a sentient being that it’s all like that. Though it’s good to know from people who have explored in depth that it is all just as bad as the early returns indicate, it isn’t necessary to punish yourself with superfluous expertise.
Moral philosophers tackle good questions, and have decided that everyone must decide what the answers are, without an appeal to a moral authority beyond human experience, whatever that’s supposed to be. It’s not a cop out, because the choices are a condition of being human. Caryn mentioned the Euthyphro, still one of the greatest statements about moral choice. I wouldn’t trade that for all the theology ever produced.
Harris hasn’t missed anything. Everyone else seems to have missed the obvious, though.
How can Harris say that there is an objective “good?” The answer is very simple. Part of the definition of a living entity is that it seeks to preserve its status as living. That’s not some fancy philosophical concept that has to be deduced from first principles; it is definitional. It is a categorical statement derived from empirical observation.
Are people living entities? Of course. Hence, objective good.
This is exactly what Harris said with his “health” analogy. We all know what health means, in a general sense, and we all agree that it is good. Anybody who argues that we don’t know that being healthy is better than being unhealthy simply isn’t being serious, even if health is not formally definable.
What I find truly ironic is Ophelia’s injured tone. Harris is ignoring important and serious scholarly traditions! He’s making light of things he knows nothing about, making us all look foolish, and pissing people off. Why, one might even say he’s not helping.
100% of theology is bunk. 90% of moral philosophy is bunk. There’s a lot of bunk in the world, and anybody who stands up says, “forget that old bunk,” is going to get attacked by somebody. I don’t think Harris is overly concerned about it.
Oh, for crying out loud. Harris didn’t say he invented the concept; he simply said, “And now we know where the answer will come from, even if we don’t know what it is.” This is exactly what Crick said about consciousness in the opening paragraph of “The Astonishing Hypothesis.” Crick got a lot of backlash from injured philosophers who were suddenly rendered redundant, too. And I don’t think Crick was terribly concerned about it, either.
What is objective good? That which would produce the highest average standard of living? That which would produce the longest average lifespan? Or the longest peak lifespan? How about happiness? Animals live longer in zoos, but they aren’t necessarily happier there.
There is no single objective good. Is it better to be free with risk, or tightly controlled with safety? Is it better to allow people to seek out maximum quality of education for their children, or should all education be equalized?
There is no single objective ‘good’. There are certainly plenty of ‘bads’ but ‘good’ is and always will be too subjective a term to achieve any universal agreement.
Certainly Dr. Harris is not an originator when it comes to moral theory, but I think he is playing an important role as an advocate. The glory of science is revealing how complex things can result from simple mechanisms. They don’t need to be handed down by even more complex beings who happen to have very specific views on what you should be doing with your genitals. We should consider the possibility that the question of morals may be quite simple in the end. Put another way, I don’t know of any advanced scientific field in which the practitioners still need to know or care about what philosophers had to say about their subject matter. Do we need to study moral philosophy just because we haven’t started getting the real answers yet, or should we just try to start the process?
You’ve got cancer cells which seek to preserve their status lumped in with “objectively good” there.
@Ophelia,
“And he could have written an interesting book on how the latest findings in neuroscience can add to the discussion of morality …”
Patricia Churchland just did that:
http://www.amazon.com/Braintrust-Neuroscience-Tells-about-Morality/dp/069113703X
Harris could have also written a stellar book arguing against moral relativism and for a rational, more/somewhat objective morality. Or he could have hedged his claim and said “How science can help determine human values.” But these books would not have sold as well as one subtitled “How science can determine human values.”
Now, as wrong as I think Harris is about the fact-value distinction, I think much of the debate over his claim that science can determine human values is mistaken. Harris writes in his first endnote, and has said over and over again, that he basically equates the word “science” with “reason.” Read that way, his claim is still controversial, but much less so (which, again, means fewer books sold).
I have to object to some of the tone of this discussion. This rush to condemn Harris is more than a little unjustified.
For example, the oft-quoted “didn’t read moral philosophy”. This is as blatant a bit of quote mining as is imaginable, and more than a little perverse given that I’m only three chapters in and already he’s engaged with Hume, Kant, Rawls, Nozick, William Casebeer, Owen Flanagan, Daniel Dennett, Joshua Greene and Jonathan Haidt. By the second chapter he’s already fully and explicitly committed himself as a Consequentialist and he’s biting all the philosophical bullets he needs to to hold that position. Is it too much to ask that we read that footnote with the minimal amount of charity needed to see that what he’s saying is:
1. He came to his arguments from thinking about the mind, not about morals. This doesn’t mean he didn’t read any moral philosophers, he very obviously has, only that they weren’t where he started.
2. He is not writing a book of academic philosophy.
As for Malik’s review, he falls into the same trap while perpetrating some others. He certainly hasn’t engaged with the core metaphysics, as you can see by his discussion of Harris’ comparison of the two women’s lives. The first step of Harris’ argument is to say that any morally relevant feature of the world has an effect on human consciousness. He gets lambasted for this, but really this is no different to the assumption all of the Is-Ought criticism is grounded on. The point appears to be then that any question of consciousness can be examined through neuroscience, but I’m not sure it succeeds with that. More importantly we need to note, this hasn’t escaped the objection of “how do we tell which state is better”, and that’s precisely the side of Harris’ argument that Malik hasn’t acknowledged. The second step of his argument is a straightforward possible worlds metaphysics: conceive of the most terrible world imaginable, and any act or feature of the world that takes us away from that is moral. Now I’m not in a position to say one way or the other whether or not I am convinced by possible worlds talk, one of my projects over the summer is going to be reading David Lewis on cognitive content to find out, but this isn’t a philosophically naive argument. On the other hand, the constant reaching for Is-Ought as if it’s the end of the discussion certainly is.
Okay, rant over: flame away! :)
Did Euclid read Hume?
Oh yes, it’s well known that the ancient Greeks were well versed in early Modern Philosophy.
The statement “We should consider the possibility that the question of morals may be quite simple in the end” is a perfect example. Perhaps – just perhaps – philosophers have in fact *already done this*.
As for “Put another way, I don’t know of any advanced scientific field in which the practitioners still need to know or care about what philosophers had to say about their subject matter”, might I say that the past tense is sufficient to unmask the problem?
Philosophers invented empiricism and science and, for that matter, computers. Might be worth considering that philosophers are actually still really doing stuff in the real world that is relevant to various disciplines. I believe I mentioned Richard Healey in a recent post on the topic of quantum mechanics and the realism/instrumentalism debate; look up his papers and explain, again, how philosophers are irrelevant to the practice of modern science. Dennett, the Churchlands, Ramachandran, John Pollock and the Oscar project… and I could keep going for hours.
I’m not sure I’d consider Jonathan Haidt a moral philosopher. In fact, I’m almost sure I wouldn’t. Probably the same with Joshua Greene. I’m not lessening the importance of their work, it’s just not philosophy.
The first step of Harris’ argument is to say that any morally relevant feature of the world has an affect on human consciousness… The second step of his argument is a straight forward possible worlds metaphysics: conceive of the most terrible world imaginable and any act or feature of the world that takes us away from that is moral.
Right, and the *problem* is the assumption that things that have what we’d personally call bad effects on human consciousness are necessarily morally wrong. A deep ecologist might be perfectly happy with the extinction of humans, and might in fact advocate that as a *better* outcome than our survival. Why is the deep ecologist wrong about this?
Alright, alright. I’m not trying to say philosophy is worthless. I know I love Dan Dennett. But I don’t think there are many problems that are best solved by reading a bunch of philosophy. I think most here would agree. Science is philosophy plus. Why should we fault Harris for suggesting that we start working on the “plus”?
Well, because we have to get the groundwork right as well. It took us a while to hit on empiricism and there are still significant philosophical issues surrounding it. Try defining the difference between science and pseudoscience – the “demarcation problem” – or take a look at Chisholm’s problem of the criterion.
There’s nothing wrong with taking neuroscience seriously.
It seems like some of the objections to Harris are that optimizing well-being is fine, but who is to say that that is “morality”? OK, let’s call it morality*. Morality* is the field concerned with optimizing the communal well-being of conscious creatures. Now, can we all agree that morality* is a very worthy field of study, and one that is fully accessible to scientific practice? If so, I say to hell with morality. I’ll take morality* any day. And would you want to live with people who don’t see why following the precepts of morality* is important?
I actually think most moral philosophers — and most people — would agree that, broadly speaking, morality is about maximizing well-being (for conscious creatures, societies, and perhaps even Earth). The problem is that there are many different conceptions of what well-being entails, and how to get there.
In a way, I think that everyone can get his or her own definition of well-being, as long as their quest for well-being does not unduly interfere with the same quest in other conscious beings. Intense exercise is a big part of well-being for some, but certainly not for me. But we could still live ethically together, as long as my relaxation doesn’t interfere with their exercise and vice versa. Maybe the real problem isn’t defining well being, but figuring out the appropriate way to resolve conflicts in interest. This is no simple problem in itself, of course. But if we could solve that problem, it seems like we could all be moral even if no one agreed on the meaning of “well being.”
Well, and maximising well-being for any given individual/ society/ planet depends a lot on the fitness landscape and is of a necessity flexible. :)
However, this is precisely the derivation of the conflict with religious forms of morality, as many of them weight well-being somewhat lower than obedience to God. Post-Enlightenment societies usually say that’s fine, as long as you don’t interfere with the decision-making of other people and only sacrifice your *own* well-being in the interests of your *own* obedience to God. That’s why us Gnus are widely castigated for “wanting to take other people’s children away because of their religious faith” when we suggest that maybe withholding vaccinations from one’s children is immoral.
Well, putting a child into the equation changes things entirely. Now it is not just the parent’s own conception of well-being at stake. So you’ve probably hit on one of the most difficult questions to resolve. The child is in no position to pick his or her own definition of well-being, and we would certainly call it wrong for the parent to choose certain definitions and impose them on the child. So maybe we do have to agree on a definition of well-being that everyone can accept. But with a 99% overlap in our genetic code, reaching consensus should be easy!
Ah, but of course they’d say that raising their child to obey their supernatural and un-empirically confirmable God is more important to the well-being of their child than whatever secular modern scientistic society might say about that child’s wellbeing. Which is where the whole normative philosophical tradition of individual rights bumps into this problem. :)
Why is it *better*? Whose well-being are you maximising? In which conditions? Who gets to say what counts as well-being?
People who work on naturalizing ethics are usually up to their eyeballs in philosophy of biology and epistemology. Go figure.
I do think there is a lot to Harris’ point that an inability to answer difficult questions shouldn’t stop us from answering more tractable ones. I’m sure most religious parents expose their child to religion because they think this is the best bet for making a happy and successful child. If they could be convinced that it was actually harmful to their child (no easy task, surely), I’d bet a lot of them would opt out of a religious upbringing. If being religious is not defined as well-being, but simply thought of as a way to achieve well-being, then there is no reason why we couldn’t talk about moral issues on common ground with religious people. And the definition of “well-being” might be one of those things that everybody really knows until a philosopher comes along. No, we don’t have perfect terms and definitions. Let’s start using what we have. Science was never about getting things right, just about getting things better.
Well, yes, like I said — he could have written a killer book focused on critiquing moral relativism and advocating for rational moral standards.
What if it’s more harmful to the parents to vaccinate the child than it is to not vaccinate the child on some measures? What if we could even determine with an fMRI scan that the relevant bits of the brain are more active than whatever threshold we’ve set for “bad” in the parent because of the pseudoscientific beliefs the parent holds about vaccination? Does that mean we don’t vax the kid?
What we have is contradictory or incomplete; that’s the whole problem. Well, and your last sentence is a problem. “Science was never about getting things right, just about getting things better” presupposes a criterion for “better”. Science is about mapping the world; it’s empirical because we check the map against the world and the map gets more and more accurate. There’s a lot of confusion, I think, because we tend to elide between an epistemic and a moral sense of the term “should”. If you want an accurate map, there’s a way you should – epistemic should – go about drawing it, with empirical tests and peer review and repetition. That process will get you the most accurate map of any of the processes we’ve thought of up until now, and if we think of a way to enhance that process, it will be adopted by science. But it doesn’t lead you to moral truths; it leads you to a more accurate map of the world than the one you used to have.
If letting sentient life survive on this planet in the long run requires our absence, why not just go ahead with the deep ecologist views? Most of us probably have an intuition that the deep ecologists are wrong. But *why are they wrong*?
Michael – yes, exactly.
I can no more define what “better” theories are in science than I can define “better” moral actions, in the sense of a valueless definition. You are exactly right about that. We have no letter from the universe saying that we should prefer the explanations that account for phenomena in the simplest terms possible and can be potentially refuted by observations. This does not bother me in the least bit as a scientist (OK, a cognitive psychologist, so kind of a scientist). I guess that is the analogy I was drawing. It doesn’t bother me for morality, either. Science is achieved by a group of people willing to accept the terms without absolute justification, and thank goodness for it. If we had a group of people willing to accept the task of optimizing well-being without absolute justification, maybe we could have morality’s moon landing someday. Thanks for the discussion!
Harris’s book is better than a lot of this criticism indicates, although the discrepancy is largely Harris’s fault.
I think Harris is basically staking out a quite respectable philosophical position—one I mostly agree with—but seriously fumbling the rhetoric in some important ways. It’s very disappointing that Harris is so uncharacteristically bad at explaining himself in TML when he gets pithy—that’s usually what he’s uncommonly good at—but if you ignore a lot of the confusing pithy stuff, and follow his arguments and examples, I think he’s mostly making quite good points. His high-level rhetoric doesn’t really convey the picture he’s painting with his arguments, which is much better.
With respect to the Hume is/ought thing, specifically, what most people are missing—and this is very much Harris’s fault—is that HARRIS AGREES WITH HUME. His high level rhetoric makes it sound like he doesn’t, but his actual arguments and the picture he’s painting makes it quite clear he does.
The kind of morality he’s talking about is “objective” in a descriptive sense—you can identify the natural kind in question—and it’s “prescriptive,” in that it’s about what one ought or ought not do, but it is NOT “objectively prescriptive” in the sense (Mackie’s) that just knowing the facts about such oughts will actually, in itself, make anyone care.
It’s disappointing to me that Harris fumbles the summaries of his own position so badly, but it’s also disappointing to me that so many people can’t see what he’s actually saying, which is really pretty good.
I wish more philosophers would take it as a teachable moment and emphasize the stuff that they think is right about what Harris is saying, which IMO is not only mostly pretty good, but mostly not unusual among professional philosophers.
Philosophers of science can, actually, define what theories are better maps of the world, because they can just go check against the world.
But how are we going to check to see if the wellbeing of sentient creatures is the right measure, and which ones trump the other ones? (This is where Bentham, and utilitarianism, come in. Lots of people have intuitions that it would be wrong to kill an innocent man in order to stop mob violence that would kill 100 people. But it would minimize human suffering, wouldn’t it?)
Well, I couldn’t resist one more comment. Caryn, I agree that the analogy between science and morality does break down. In science, we have very effective ways to determine which explanations meet the agreed-upon standard for a “better” explanation. For morality, we do not have systematic techniques to determine which actions optimize well-being. This may be a difference of degree, however, and not a difference in kind. Our ideas about what makes a better theory in science have developed and are developing, but we have been making breathtaking scientific progress all the while. When theories are expressed as formal models, we still do not have a consensus on “the” way to balance model complexity and fit to data when selecting the best model. Selecting theories is not a simple matter of mapping them onto reality. The literature on model selection techniques is anything but simple! So people have long done and are now doing science without some of the foundations completely worked out. In fact, jumping in and getting started is largely what has helped us better define those foundations. We all have a vague sense of what “well-being” means, and I’m sure that – outside of creative examples – you would find broad agreement on which actions improve or impair well-being. It ain’t perfect, but I’ll take it. Thanks again!
In connexion with killing an innocent someone to save 100 (or more): is that not, in the end, what Christianity is about? And there are of course Sam Harris’s unattractive post-9/11 ideas about torture: perhaps if an innocent gets tortured, that may be dismissed as ‘collateral damage’?
Yahzi, That Guy Montag, etc.
1. We are not *condemning* Harris, we are disagreeing with the way he went about arguing one thread in one of his books. Nobody has accused him of “not helping,” and I certainly didn’t detect any “wounded tone” in Ophelia’s thread. Pretty much everyone here is a friend of Harris’s; it’s just that some of us think he is wrong on this one particular point.
2. I think it behooves some of us to be a little more respectful to philosophy as an intellectual discipline. And I say this as a lecturer who teaches young doctors in big letters that EMPIRICISM BEATS PATHOPHYSIOLOGY (i.e. evidence trumps theory) every time. Ophelia started this blog mainly to oppose the faddishness of structuralism and postmodernism in academia. So we’re hardly fans of philosophers who undermine science. But it seems to me that some people are more or less dismissing philosophy a priori and, to my eye, with prejudice. Crick’s “Astonishing Hypothesis” was certainly not astonishing to me — it pretty much agreed with what I have believed since I was about 12 — and although it received a lot of criticism from anti-materialists, the idea that he should have just dismissed those criticisms because they came from philosophers, rather than on their merits, seems unsupportable.
If we assume the most optimistic outcome of neuroscience’s forays into morality, then it is possible that in the future we will have a very good understanding of the neurological basis of human morality, right down to the neural networking and the transmitters. And this would be an epochal scientific achievement. But it still wouldn’t decide what was the most moral outcome in a given situation. Imagine that, armed with our new-found knowledge of human morality, we develop the capacity to prevent humans from committing murder by raising their moral consciousness either by genetic engineering or pharmacology. In all other respects, these modified/augmented humans are exactly the same. They have the same emotions, the same basic thought processes, they just have a greater recognition of the moral implications of murder. Now, if we believe that “ought comes from is” in the neuroscientific sense, then these new humans would be acting outside the moral code imbued in us by billions of years of evolution and would therefore be relatively immoral *even though they commit fewer murders* than unmodified/unaugmented humans. I wouldn’t be prepared to go there, but that’s where hardcore “ought from is” leads.
The basic problem with morality is that empiricism cannot arrive at moral values, so we are pretty much forced to negotiate our moral values on a philosophical level. Rejecting philosophy outright is like starting to build a house by throwing away your tools.
Well-being can’t possibly be an adequate theory of morality. For example, how is imprisoning anyone for a crime improving their well-being?
@ Chris Lawson,
I think the prejudice works both ways. Analytic philosophers have no problem (often rightly) rejecting continental philosophers, and vice versa. Which particular philosopher or philosophy is the right philosophy? Clearly science is doing philosophy right, and that is why many of us reject ‘bad’ philosophy.
The problem is, I don’t think Sam Harris is doing good science (good philosophy) here; I think he’s doing bad philosophy. I think his moral ideas are influenced by his fascination with eastern religions such as Buddhism. I don’t see him in a lab coat doing experiments and writing papers on scientific morality. It’s bad philosophy.
Who said it was? Societies punish crimes (in whatever way) not to maintain the well-being of the criminal, but to maintain that of everybody else.
That’s not what Harris is arguing at all. The argument “we should do X because our evolved instincts tell us to do X” is the fallacy of equating “is” with “ought”; Harris does not claim that, instead he argues that we can derive at least some “ought” statements by starting with facts. It’s quite possible to argue “our evolved instincts tell us to do X; doing X reduces the well-being of everyone; therefore we should not do X”. Examples of this are not hard to find.
How is the well-being of everybody else, or ‘society’ at all scientific? This is exactly the problem with this error in thinking. You’ve gone from the well-being of particulars to abstract entities like society. This is blatant idealism and not science.
Much to respond to:
Michael:
I’ll object to denying that Jonathan Haidt and Joshua Greene are philosophers. Maybe they are just a psychologist and a neuroscientist, but the fact that Haidt feels his psychology justifies Aristotelianism and that Russell Blackford has described Joshua Greene as doing some of the most important modern work in Error Theory kind of implies that their work is philosophical. Maybe a brief gesture to Elizabeth Anscombe could count as a further hint and certainly let’s not forget the Churchlands either here.
Caryn:
You don’t think it’s a bit unfair that, despite all the complexities in science, you allow science a normative concept (one that aids observation and fits the world) while denying that morality can have the same? I’m not sure whether you’ve heard of Sellars’ Myth of the Given, the denial that there’s any non-conceptual part to observation, but trying to bridge the gap this seems to create between the world and our observations is a very important theme in epistemology. Please note, this is no different from the kinds of problems we face in moral philosophy. In fact, it’s so similar that you have philosophers like Huw Price mustering arguments from ethics in order to provide answers in epistemology. The point is that the overlap between our ethical and our epistemological concerns is so complete that the objection to science and epistemology’s involvement in ethics is either perverse or nakedly dogmatic.
Actually that suggests a challenge to me. Given how important a lot of the debates in ethics are, can you explain to me why we cannot or must not use the human faculty of reason at its absolute best expression as it is in science?
Chris:
As a slightly obsessed philosophy student, one who kind of lives philosophy, I find your comment just a touch odd. It’s even odder because, if you read my comment, I’m actually arguing that Harris takes philosophy very seriously. His entire book is steeped in philosophy at every step of the way. His “denial” of moral philosophy is nothing more complicated than saying “those particular questions aren’t the ones I started with.” Hardly a rejection of the field, and even less of one when you see all the philosophical arguments he engages with and the philosophers he references.
As for your objection I can only say that you clearly haven’t read the book because it is in fact one of the objections he tries to deal with, right down to the thought experiment. I’m not saying whether or not he’s succeeded, I’ve mixed feelings on how effective Harris’ arguments are, but please don’t act as if it’s obvious he hasn’t at least engaged with them.
Here is what happens when the well-being of society (the many) takes priority over the well-being of the few. Here are the results of such idealism:
http://video.google.com/videoplay?docid=-6076323184217355958#
Montag, yes, I am familiar with Sellars, and yes, the questions about how to ground ethics and epistemology have substantial similarities (that’s actually why I noted that people working on naturalizing ethics are usually epistemologists with evolutionary biology backgrounds.) I’d say something along these lines: science gets to be normative because it isn’t all of epistemology. One has to switch to using human reason philosophically while thinking about epistemology *because* science cannot tell us what counts as knowledge. Science is only for mapping the world. Science also cannot tell us what counts as moral.
Who said anything about abstracts? Real, concrete, people set up real, concrete law enforcement, judicial and penal systems, and the people whose well-being increases as a result are equally concrete.
Now, maybe we can’t measure everyone’s well-being and apply a universally-agreed function to produce a single result. But to use a variation of an example that Harris uses, we also can’t measure the ‘health’ of every individual cell in someone’s body and apply a function to produce a single result, and yet we have no hesitation in declaring that some people are healthier than others, and even that in at least some cases some people are objectively healthier. So we can’t abandon or dismiss the possibility of objective definitions of well-being simply because of the lack of a single measurement.
It’s quite possible to argue “our evolved instincts tell us to do X; doing X reduces the well-being of everyone; therefore we should not do X”. Examples of this are not hard to find.
Quite right. The objection is to the assumption that reducing the well-being of everyone is what we’d need to evaluate in order to determine whether or not an action is immoral. It’s also quite easy to produce examples where doing X reduces the well-being of everyone alive now, but improves the well-being of future generations, or improves the compliance to the commands of a group’s accepted God, or improves the health of the ecosystem.
Caryn:
Thanks for the reply. From your comments I suspected you had philosophical training but I wasn’t certain so I didn’t want to make any assumption.
I’m not sure if I buy your reply, however. There’s a significant assumption in the idea that science can just be about mapping the world, one which revolves around problems of content: Sellars is just one example of several; Quine and Davidson, from what I can tell, have slightly different ways of looking at the same problem. In any case, what all of these positions have in common is that you need quite a lot of normative grunt just to get content going, whether in language or mind.
Now I’m coming into this debate as someone who strongly self-identifies with the Skeptic community on the internet more generally. One of the more interesting basic subjects when getting involved in this community is, quite explicitly, norms of reason: conceptual clarity, epistemic humility, respect for evidence and a thorough grounding in the psychology of cognitive bias and fallacies. I have yet to see a discussion in ethics that wouldn’t benefit from applying the same norms. Once you start accepting all of these norms, what’s left to distinguish ethics from science apart from a dogmatic assertion about content that applies just as much to science? Worse yet, as I pointed out above, we have more than a little reason to doubt there is such a sharp line to be drawn between norms and content.
The only point I need to make from there is that none of this implies any kind of pessimism about whether science is possible, though I have had some error theorists tell me that it should. I simply cannot see any reason why we should think that human beings reasoning to their best ability can’t surpass their own cognitive failings in much the same way as they surpass their physical failings. There is no contradiction in saying “I can see where my reasoning went wrong” and yet the only way to sustain that kind of pessimism is by assuming there is.
I don’t think anyone here is arguing for moral relativism, but I do think that some are arguing that the concept of well-being does not truly escape moral relativism (unless I am misunderstanding). So if morals aren’t purely relative, and well-being doesn’t work, what does work? What alternative way forward is being offered?
Jeffy Joe,
I think it might be helpful if we recognised that morality only extends so far in practice, among social equals. It does not operate between unequals, nor with enemies external to society.
Moral people can’t solve immorality, all they can do is teach morality and recognise its limits. One of the most important ways of improving morality within society is to understand the importance of teaching knowledge and reason, to make people wise. It is through ignorance and unreason that greater immoralities prevail. And inequality is death to morality.
Of course, such teachers are so open to corruption and inequalities, that the entire enlightenment process is constantly undermined by the very people who are supposed to be enlightened.
Egbert, I strongly agree with everything in your comment. But is a statement like “inequality is death to morality” any more objectively grounded than a statement like “well-being is the goal of morality”? What do we say to someone who just doesn’t see what is wrong with inequality? Anyway, I think we would both say that we have to work with heuristics and impressions, and that will probably be true for a long time to come. I just think that “does this affect the well-being of a conscious agent?” is a heuristic that takes us a long way in answering moral questions. I don’t really want to defend any claim bolder than that. And the fact that well-being is extremely difficult to define in some cases does not impugn its general usefulness. Equality is difficult to define (should it be equality of opportunity or equality of outcome? should people be prevented from giving their children/relatives special advantages if they have the ability to?) but remains an extremely valuable moral concept.
Jeffy Joe,
I think a lot of people do argue for a sort of relativism, but it’s generally not the stereotypical relativism that results in things like crude cultural relativism.
IMHO, Harris is himself a “relativist” in a weak sense, as am I, and so are most error theorists, as well as many others.
We agree with Hume that you can’t get from a bunch of nonmotivating facts to motivating facts by sheer reason. You have to start somewhere, and for Harris and me and most philosophers who talk about “morality” at all, that has to include at least a minimal benign concern for the welfare of others. No amount of truth and reason is going to keep a sociopath from being a sociopath.
(That’s in contrast to “moral rationalists,” if I understand the term, who want to make being basically moral part of rationality, and claim that sociopaths are irrational. Harris would say that sociopaths are broken moral units—not good examples of the category “moral agent,” which the rest of us care about—but not ipso facto irrational.)
A lot of reviewers seem to miss the fact that the metaethical argument is not mainly about that sort of thing, and is largely about how to correctly talk about the generalization that moral people share a benign concern for others—e.g., is it absolutely necessary, or sufficient, for being a moral agent?
Moral philosophers (except for moral rationalists) generally recognize that normative reasons are something like reasons that an agent would have after fully informed and flawless reflection on the actions under consideration, and that there are big differences between
1. being morally wrong because you don’t understand the facts or don’t reason them through correctly and
2. being morally wrong because you don’t understand a fundamental moral principle, or
3. being morally wrong because you do understand a moral principle but just don’t care (like a sociopath), or don’t care enough (like an overly selfish person).
Harris agrees with the large majority of moral philosophers on these basic distinctions, and a lot of his critics (like Malik) don’t seem to understand when he’s making absolutely normal moral arguments that any moral philosopher should recognize, and most would agree with.
For example, he uses Taliban acid-throwing as an example of the first category—even in Taliban culture, nobody’d think it’s morally right to throw acid in little girls’ faces for no very good reason. Even in very different cultures, rationalizing such acts as “moral” depends on a web of beliefs, many of which are crucially false. (E.g., that there’s a god who made women in such a way as to serve certain roles well and others badly, that uppity girls who dare to go to school like boys are setting a bad example for other girls, that if such things get out of hand they’ll have disastrous consequences for society in this life, and for people’s fates in the afterlife.)
A very interesting fact about ethical philosophers is that even when they disagree on metaethics, they often agree on ethics.
For example, if you ask random moral realists, moral relativists, and error theorists whether Taliban acid throwing still seems wrong on reflection, they’ll generally say yes, and even of course.
(Even error theorists will agree that it’s stupid and harmful and that they too don’t like that sort of thing, even if they wouldn’t call such judgments literally moral judgments, because they’ve abandoned normal first-order moral terminology as involving incoherent presuppositions. E.g., Richard Joyce would say that it’s “wrong” in a “fictive” sense that he nonetheless cares about—he’s an error theorist, not an asshole or a sociopath.)
Likewise, if you ask philosophers of all those metaethical stripes whether gay marriage is wrong, they’ll generally say no. The intuition that there’s something especially wrong with gay love systematically goes away on fully informed, rational reflection. All the arguments against it are ultimately based on falsehoods—e.g., that being gay is a choice, or that there’s good reason to condemn that choice, or that there’s a morally authoritative God who can tell us it’s wrong, or that it’s socially destructive in a serious way. It’s just a load of crap that doesn’t survive informed, rational scrutiny. Ultimately, pretty much everybody who isn’t hung up on religious delusions or profoundly confused about morality comes around to the idea that gay marriage is okay, and that opposing it, in lieu of reasons that actually make sense, is not.
One of Harris’s main and most important points is that there’s a whole lot of convergence in morality, on informed, rational reflection. A lot of bullshit just goes away, and certain basic intuitions remain. Crude cultural relativists are just wrong—the most basic principles of morality are not just something you get from your particular culture, and morality is generally not beyond rational criticism.
That doesn’t necessarily mean that we all converge to a fairly straight utilitarian account like Harris’s own. Whether he’s right or wrong about that claim, or about his metaethical framework, I think he’s pretty clearly right about the claim that there is a very useful degree of convergence in informed and rational moral reasoning.
For his mostly philosophically untutored audience, that’s the most important point in the book, and most philosophically sophisticated reviewers don’t seem to notice that or point it out—that they agree with Harris’s most important point, which most of his intended audience doesn’t understand.
That’s sorta understandable, because Harris has enormous brass gonads to go writing a book about that sort of thing and make himself sound like God’s gift to moral philosophy, when his most important point isn’t novel at all—it’s rather pedestrian in academic moral philosophy.
Still, it’s unfortunate, because a lot of philosophically sophisticated critics are missing a chance to say what’s right and important in TML that most people really do need to understand, and also the further things that are at least sorta right and interesting, which more people do need to think and talk about.
Another example is Harris’s Utilitarianism. You don’t have to agree with Harris on his fairly straight Utilitarianism to agree that there’s something very right about it, at least up to an important point, at some level, which most moral philosophers do agree with, even if they disagree about how such concerns come into play in moral reasoning.
For example, Benthamites and Rule Utilitarians and Kantians would all agree that what makes actions right typically has something to do with whether that sort of action typically helps or harms others, or would if everybody behaved that way, even if they disagree about whether that concern applies to individual actions (to justify individual acts) or classes of actions (to justify rules), and exactly how that justification works.
What Harris doesn’t make clear enough is that even if you don’t buy his apparently fairly straight Utilitarian story, and there’s considerable disagreement at middle levels of moral reasoning, such shared concerns do give us considerable useful moral agreement. So, for example, a Benthamite might think that gay marriage is fine because a particular gay couple wants to get married, it’s likely to make them happier to do so, and nobody’s particularly likely to be hurt by it. A Rule Utilitarian or Kantian might apply the same sort of reasoning at the level of moral rules, rather than individual actions, but all three are likely to agree that gay marriage should be legal.
The Benthamite might see no use for moral rules in between the level of individual, situational judgements and the heuristic level of legal rules, and the Rule Utilitarian and Kantian might disagree on the exact moral rules and their exact justification, but when it comes to agreeing on crude heuristic rules like workable laws, they’re all going to be in favor of gay marriage rights. Of course they are.
You don’t have to agree with Harris on, say, whether torture is ever justified, to agree with the main and most important points in TML.
Maybe Harris is wrong about the extent to which morality converges under informed reflection, but that doesn’t mean he’s just wrong—maybe it converges some, and very usefully, but not to the even more useful extent he thinks.
Likewise, maybe it does converge more than is obvious to most people, or even most philosophers, but not to what Harris currently thinks it does. So for example, maybe it converges as much as Harris thinks, or almost as much, and he’s simply mistaken about whether torture is ever justified.
Harris leaves that option open when he talks about moral mistakes and moral expertise, when talking about rational moral convergence. He just doesn’t make it clear enough that it applies to him too, and that his major points do not depend on whether you agree with his particulars about the magnitude of the dangers of Islam, or about the justifiability of torture.
That’s what bugs me about reviews like Malik’s—they mostly ignore the fact that Harris is making several important basic points in boldly painting a big picture, and that flaws in that picture are not necessarily fatal to it.
Harris’s book is ballsy and flawed—he sets himself up for that treatment by failing to make clear enough how the various parts of his ambitious project do or don’t depend on each other—but it’s still very disappointing to me how many critics are prone to running his different arguments together and dismissing them as just wrong, wrong, wrong.
That Guy Montag,
Sorry, I was lumping a few responses together to save time. My defence of philosophy was directed at Yahzi. I should have taken that time to address individual comments individually. Mea culpa.
You’re quite correct that I haven’t read the book yet, so please feel free to correct me on any particulars, but I do feel reasonably confident in the nature of Harris’s problem given the same flaw has been identified by several reviewers whose track record I find compelling (Ophelia, Russell Blackford, Malik) and from excerpts I’ve read. I don’t want to have to read every single book that gets discussed here before joining in — but I am correspondingly open to being corrected.
Having said that…I still think Harris has failed to address the problem. He only *thinks* he has avoided the problem by asserting that facts on well-being are objective and can therefore form the basis of an objective morality. I didn’t say he hasn’t engaged with the arguments, but that he’s done so inadequately. He’s still going to be unable to define well-being in an objective moral sense, and then he’s going to be unable to decide how to go about making practical policies to increase well-being in an objective moral framework.
Harris’s book is a good deal too “ballsy.” That’s part of what’s wrong with it.
Montag, after reading your post twice to determine that you weren’t claiming to be a Cartesian skeptic :) I’d point out that epistemic norms are things everyone adopts because they already have an interest in getting a set of answers about the world right. If you want to find your car keys, there are better and worse ways of going about it, and the epistemic *should*s derive from a set of norms everyone holds because of their interests – they want to know things about where they left their car keys. But they are not categorical norms.
Unlike epistemology, where we’re all in the same boat (even the religious have to accept the evidence of their senses in order to read their holy books, listen to their preachers, etc.) with morality it simply isn’t obvious that *there is a boat*. Everyone doesn’t hold the same norms. Two thousand year old thought experiments into what the norms should be remain unanswered.
Moral norms, if there are any, would be ones that applied to you regardless of your interests, or, at any rate, so goes the usual narrative about moral realism. So the fact that we can know epistemic norms, which are a different kind of norms, isn’t relevant to whether or not we can know moral norms. The egoist would say things like, “You ought not to murder people if you don’t want to go to prison.” Scientific investigation could tell you whether or not it’s true that, given that you don’t want to go to prison, you ought not to murder people. The *moral* norm “You ought not to murder people” is supposed to apply to *everyone*.
You can’t get out of morality by saying it isn’t relevant to your interests. But as far as I can tell, if you don’t care about believing true things, then you don’t have any epistemic duties. You might actually have a moral duty to care about the truth (that’s Clifford) but that would be another hypothetical. :)
Jeffy Joe, the argument is that it isn’t even obvious that Harris (who is staking out a moral realist stance) has identified the only relevant set of moral facts, or even a set of moral facts. But there are stances that require neither accepting moral realism nor accepting moral relativism, like noncognitivism. That’s one of the terms he didn’t want to use because it would be too boring, IIRC.
Paul W.,
Yours is a fascinating comment, seeing as it draws out numerous flaws in Harris’s arguments while still defending him overall. I’m not sure I could defend him with all that, but it does make for an interesting read.
One thing I would like to argue out, though, is that I don’t think convergence is as good a marker of objective morality as one would like. I wish it was. It would make moral thinking a lot easier and it would imply that social norms do tend to converge on morally beneficent outcomes.
But I’ve just been writing about the history of medical ethics for students. And in the numerous codes of medical ethics throughout history there is convergence on some important and IMO morally justifiable positions: keeping confidentiality, not exploiting the vulnerabilities of one’s patients, treating people of all walks of life equally. Excellent convergence. But they also tend to converge on one very significant bad point: many codes of ethics discouraged medical practitioners from discussing their knowledge with others, especially rival schools of medicine; one even forbade giving public lectures. These codes were written in order to preserve the secret knowledge of the school, not to increase the well-being of the public. And yet they converge.
Unalloyed convergence would also imply that women should be subject to the whims of the males in their lives (historically speaking, equal rights is a massive outlier). And then there’s slavery, homophobia, animal cruelty, etc., etc.
This is not to say that convergence is a useless concept, but that by itself it does not establish objective morality.
I don’t get the motivation to defend Harris’s book despite admitted flaws. I don’t have that. The flaws get on my nerves from the outset, and I think that makes the book not useful – I’m just not motivated to find reasons to say it’s useful anyway. I do think there’s a lot of interesting and useful material in it, but I think the book as a whole just muddies the waters.
I agree. One of the things that I find most frustrating in all this is that Harris goes all ballsy, and predictably gets smacked down hard largely for being too ballsy, without much enlightening consideration of Harris’s several (mostly-separate) important points.
I think there’s a better book in TML trying to get out; there are several babies in there, and not as much bathwater as a lot of people think.
His footnote about metaethics and boredom ensured that most philosophers would not cut him the slack he needs, focus on the babies, and reveal the extent to which they agree with him on the major points that would be most interesting to other people. (And it really doesn’t help that his own high-level rhetoric is partly just wrong, e.g., about the basic is-ought thing, which he’s not denying, just doing an end-run around—a good end run, IMO, but an end run he ought to acknowledge.)
The picture he paints with his major arguments is much better than what he says about it, but it’s not surprising if people notice what’s wrong with the latter and fail to see what’s right with the former. They’ll naturally notice that he’s not delivering what he says he’s delivering, even if what he’s actually delivering is quite good—or even if they notice, they may feel unable to endorse the meat of the book, because of Harris’s false advertising; the discussion often stops right there. Nice job of foot-shooting on Harris’s part.
He asked for it, he got it, and that’s a shame. Drives me nuts.
Yeh. It could have been a good and interesting book if he’d done it right.
Well, I love Stephen Colbert so maybe I’m too desensitized to ballsiness to be bothered by Harris. Paul W. – thanks for the long post, that was a very clear summary of the issues. For morality, it is much too tempting to focus on the disputed details and not the broad areas where all the (reasonable) voices in the debate agree.
Jeffy Joe, careful, or you’ll end up with a tyranny of the majority. All the reasonable voices in the debate agree on accommodationism and the Gnus are being unreasonable. And of course the existence of those broad areas where all reasonable people agree might even be some evidence for the idea that there are actually facts that we’re converging on, but unless you can say *where the convergence is coming from* I don’t think you’ve really got an answer to the question. No one’s denying that you can make progress in applied ethics without completely nailing down questions about ethical theory; even moral skeptics can agree that the systematically false (because they don’t think there are moral facts) moral judgments that they and others are inclined to make have certain commonalities. Even thinking that there’s no such thing as a moral fact doesn’t stop people from doing applied ethics.
Paul W., re: They’ll naturally notice that he’s not delivering what he says he’s delivering, even if what he’s actually delivering is quite good – well, but also, as you note, quite pedestrian. Applied utilitarianism is not exactly novel. The first three minutes of the TED talk were enough for professional philosophers to say, oh, so *if* you endorse this particular metaethical view, *then* what Harris is about to say follows… and notice that the complaints are not about the conclusions he draws, generally. He didn’t say he was just going to be doing applied ethics.
Caryn:
Actually Cartesian Sceptic might be a step up; right now I have to struggle very hard not to sound like Richard Rorty. ;)
My problem is that your point about what a real moral fact needs to look like isn’t going to motivate me, because I don’t think you or I or anyone is in a position to say a priori what a moral fact looks like, in exactly the same way that we don’t get to declare that God exists without first doing the hard graft needed to show that he does. In order to be in a position to say anything one way or another about ontology, we need to do the work to turn our imperfect senses into something that can do the job. In science we do that through theoretical frameworks and instruments; in morals we need to do the same.
Now the kind of objection that should be thrown against this sort of view is the problem of moral disagreement. This at least has the virtue of being about the same subject (the is/ought objection isn’t). The problem is it’s hard to see how disagreement is even surprising. Disagreements about knowledge are about as everyday as it gets.
I just want to quickly mention how my earlier talk of norms and content fits in, because there’s a good chance I wasn’t as clear as I could be about it. Hopefully by now it’s clear that the kind of talk I’m trying to get to grips with is the kind that’s concerned with how we hold beliefs about the world. The only norms that will interest me, therefore, are norms that help us know when beliefs come into contact with the world. The thing is, it’s never been entirely clear to me how our beliefs can have content if they’re not about the world, so it seems to me there is a very blurry line between the methods we use to ensure that we are in a state that responds to the world and whatever it is that constitutes the meaning of the beliefs that we come to hold.
Caryn. You’ve made me into an accommodationist! That has to be the ultimate checkmate in a B&W comment debate. I guess I must sulk away and add PuffHo to my bookmark list. : (
Chris:
First off major kudos on accepting the criticism. I would have responded earlier but I managed to do myself an injury trying to make myself as clear as I could about my interest in moral perception in response to Caryn. I’m pretty sure I failed but damn it I had to try.
As for whether or not you’re committed to actually reading the book, I certainly agree that life’s too short and the list of books that need to get read too long to commit yourself to reading everything that’s being commented on. Thing is I’m a bit torn here. On the one hand I don’t want to say this is a revolutionary book. I’m not sure it is and I’m certainly not sure if its arguments convince me, and I’m broadly on Harris’ side here. On the other hand this particular debate appears to have become very polarised and some of the criticisms fall far short of fair. That inclines me towards saying that it’s probably a good idea to read the book before taking too strong a stand but I can’t say that unreservedly.
Yahzi:
Does Harris actually argue that existence is an objective good? The way you summarized it, abortion and euthanasia would be objectively bad.
windy:
Harris does not say that sort of thing, and goes out of his way to make it clear that evolution’s “goals” are not and can’t be our goals—you can’t read psychological-level goals off of general principles of evolution. He quotes a Pinkerism about how if people were psychologically motivated toward inclusive fitness above all else, men would think that making daily deposits at the sperm bank would be their highest calling.
As I read him, Harris makes it pretty clear that evolution is mainly important to ethics in that it did in fact create the kind of human nature it contingently did—what counts as moral on informed reflection is grounded in human nature, and in particular in which of the things we value depend on factual errors and which don’t.
For example, evolution seems to have programmed us with a large dose of selfishness plus a capacity for benign concern for others. No amount of facts and reasoning can rationally give you those things, or rationally take them away.
Evolution may also have programmed us with some other relatively free-floating evaluative tendencies, e.g., to see obedience or purity as a good thing, without any real idea how they relate to the others.
Harris thinks that in informed reflection, those things turn out not to be fundamental, and either get subsumed as instrumental goals that advance more basic ones (especially benign concern for others), or get rendered impotent, or fade away.
(For example, it’s hard to sustain a valuing of blind obedience to moral authority, if you realize there’s no good authority to be obedient to, and/or that such obedience doesn’t generally work out well. People’s valuing of obedience to God is generally implicitly contingent on God existing and being Good in some noncircular way. Likewise concerns about “purity” and “sacredness” tend to collapse or fade when the underlying metaphysics is undermined.)
That’s a large part of what his discussion of Haidt and Greene is about. Harris doesn’t think that conservative and liberal morality are just different, with conservative morality being irreducibly and stably more weighted toward obedience, purity, and sacredness. He thinks that conservative morality isn’t reflectively stable. (I.e., it can be undermined by facts such as the nonexistence of God, the Euthyphro Dilemma, the disutility of obedience, etc., because it depends on lack of reflection and/or persistent erroneous beliefs about such things.) People may start out with significantly different weightings of relatively free-floating moral principles, but given the right facts and enough serious reflection, the differences mostly tend to go away.
I think that Harris is roughly right about that, and also right that it’s an empirical claim about moral psychology, not an a priori assertion of a purely “philosophical” claim.
He’s saying that it’s a necessary truth of human morality, but an a posteriori one, like the necessary truth that water is H2O—it turns out that’s what water actually is, definitionally—but by discovery, not by a priori definition.
That’s the kind of point that Harris makes that I don’t think is “pedestrian,” even among professional moral philosophers. He’s putting forth a serious and well-motivated view about how morality works, which goes beyond the usual. It may not be totally novel—I think a lot of philosophers think roughly similarly—but it’s an important idea. He’s saying that relativism about liberal vs. conservative moral foundations is false, and that it’s an empirical (“scientific”) question.
That’s the kind of central and important issue that seems to escape critics like Malik, who skip over the central theoretical ideas and make it sound like Harris isn’t saying anything deep or interesting—or isn’t sophisticated enough to know when he’s making an empirical claim that might be falsified vs. simply insisting that morality equals utilitarianism, without an interesting argument.
That’s also the kind of thing that’s generally left out when people complain about his Utilitarianism.
I actually agree that Harris doesn’t justify his strong Utilitarianism well enough. He doesn’t make a good case that people’s concerns about well-being and their concerns for fairness fall out in a similar way—with fairness turning out, on reflection, to be an instrumental means for increasing utility.
I think Harris should be clearer about that, both making a better case—and I think there is one—and saying that even if that issue is never fully resolvable, there’s still a whole lot of moral convergence to be had. The natural kind of “unmistaken” (informed and reflectively stable) morality may always admit of considerable variation on certain axes, but several other axes are pruned out.
I think Harris would agree with me that we’re “lucky” in a funny sense that our psychological goals converge as much as they do in reflective equilibrium. If we’d evolved a little differently, there might not be as much reflective convergence of morality. (E.g., if we were simply instinctively satisfied to absorb our cultures’ goals, like a duck being imprinted on either its mother or Konrad Lorenz.)
Montag, when you say I don’t think you or I or anyone is in a position to say a priori what a moral fact looks like then isn’t this a problem for Harris? Harris, after all, says that we know what moral facts look like. They are facts about the well-being of conscious creatures.
Caryn:
First things first, there’s a lot in Paul’s last comment that I thoroughly support. He’s also making the sort of comments I would want to keep in mind if I reread The Moral Landscape. I certainly think the metaphysical point about morality being necessarily true a posteriori is an important insight, and part of the reason I wish moral philosophers would spend more time paying attention to other branches of philosophy. (Another big one I’d add here: I wish moral philosophers would stop thinking that counterfactual reasoning counts as evidence against moral realism, considering all the work by logicians and metaphysicians showing that our models of truth break down when we try to apply them to counterfactuals.) It should also tip you off that this is part of what motivates me to look at thinkers such as Sellars, Quine and Davidson, all of whom, if I understand them right, make broadly similar claims, though with different emphases.
On your point about Harris, you’re right that it should mean I disagree with him. I think, though, that some of his arguments make more sense if we see him as making a similar point. First, there’s the strong emphasis he places throughout on the distinction between “not answerable in principle” and “not answerable in practice”. The way I read this is as an argument against the very strong ontological commitment that Is Ought demands of us. It’s also telling that so much of what I’ve read after that is very openly epistemological in outlook, though I’ve not got to his later discussions, which might make all of this moot.
The final reason I don’t think I’m that much in disagreement with Harris is the way he sets out his claim about the well-being of conscious creatures. I’ll bite this bullet and say that I’m not sure how to render it 100% consistent with a denial of metaphysics, but I’ll try to sketch out what motivates me here. I guess the place to start is the kind of thinking that most makes me sound like Richard Rorty: my sense that we don’t have access to general principles that will do the job, and that all we can do is appeal to the kind of principles that will do the work we need for the problem at hand. In case this helps, it’s problems with this position that lead me to say the things I do about norms, because I’m not (wholly) a pragmatist and definitely not a relativist. In any case, even with all these caveats and all this contingency flying around, my sense is that at the other end, whatever comes out of this process necessarily has to have some sort of impact on consciousness. It is, I’ll admit, a contradiction. Maybe some day I’ll see my way out of it.
Thanks – I suspected as much, just wanted to check because the argument was attributed to Harris.
This argument is effective in showing that evolution has programmed us with heuristics rather than direct motivations, but I’m not sure it is as effective in discussions of what our moral goals should be – you could turn it around and ask if people were motivated towards the “well-being of conscious creatures” above all else, why aren’t they volunteering daily at the local soup kitchen?
“Well-being” is actually a mishmash of emotions that were and are evolutionary heuristics to keep us doing things that increase our inclusive fitness. I’m not saying you or Sam Harris disagree with this, but given how cobbled-together it is, maybe it deserves more examination before we declare it as the fundamental goal of morality. It may be what we ultimately settle on, but as I understand it, Harris hasn’t really made the case.
Windy:
Presumably then you believe that your own internal perception of red is the end of the discussion about colour? That being motivated to stop at traffic lights is just a heuristic instilled by evolution, and that science has nothing further to say about the conscious whatever that constitutes the “intrinsic redness” of red objects? There’s a reason people get tired of having to deal with is-ought objections, and it’s not necessarily that the objection is right.
Presumably then in your universe “deserves more examination” means “end of the discussion”? Sheesh.
Sorry, a bit of a rush to judgement. Funny thing: I see “should” in an argument and I can’t help but see red. :)
That’s just your internal perception of red! :) No worries, it happens.
Montag, that exchange with Windy pretty much illuminates what I was going to say. :) Harris says a lot of interesting things, but as I read it he did *not* really get into the sorts of things he’d need to get into to do a good job of naturalizing ethics. You’re right that one way to do this is via phil mind, and that as a neuroscientist with a philosophy background that’s the path Harris might be expected to take. But he doesn’t make the case; when he publishes on that sort of philosophy, we’ll all sit up and take notice.
But in the meantime he’s asserting moral facts without naturalizing ethics, and this is a problem.
Also, OB posted this link to Blackburn making the point I’ve been making much more eloquently:
http://www.youtube.com/watch?v=W8vYq6Xm2To
windy:
The basic idea Harris is pushing—and it’s absolutely the norm in philosophical ethics, too—is that by thinking about the world and morality, in light of all the relevant facts, you come to realize what you actually stably care about, when push comes to shove, and what you don’t. You come to a “reflective equilibrium” where no amount of facts and reasoning changes your basic values, e.g., that causing gratuitous harm is generally bad, and that promoting well-being is generally good, and that not much else ultimately matters. (For example, maintaining sexual purity ceases to seem like a particularly worthy goal independent of some way that it helps anyone in either the short term by making them happy, or more importantly, in the long run by making people better able to make themselves and others happy.)
When you come to reflective equilibrium, one thing you typically realize is that you’re not a very nice person, and you don’t value others’ good as much as your own. You have mixed drives, mostly selfish, but with a certain concern for others, and most people are actually like that, too. Once you realize that, you can begin to think about what general kinds of things you want for yourself and for others, and what to do about it, in light of the way the world is, and how you reason that it could be different.
Doing moral philosophy doesn’t turn you into a selfless altruist, because no amount of reasoning is going to change your basic reflectively stable selfishness.
What it can do is reveal that you also want to help others, and that even your admittedly modest degree of altruism can be put to good use, because there are a lot of things that you can do that are of minor cost to you but major benefit to others, even numerous others.
It can also reveal that you’ve been attentionally blind—that your altruism is there, but mostly as a latent capacity because you haven’t noticed things that would “excite your moral passions”—you haven’t noticed the consequences of your actual actions, or the likely consequences of your potential actions, and on reflection, you do care.
That might be enough to get you to go work in that soup kitchen, or it might not, but that’s not the most important function of moral philosophy. It is not mostly good for getting individuals to go act altruistically, as individuals.
It’s much more important politically. When you realize that you’re a fairly selfish person, and that others like you are too, you may not be able to overcome your own selfishness, or even to want to—on reflection you may prefer going to a baseball game to going to work in a soup kitchen, every time. But you don’t like the idea of everybody else who’s as well-off as you doing that sort of thing too, and leaving the less well-off to suffer. You may not want to change yourself, or be able to want that, but you may nonetheless want to change the world, so that most other people treat most other people better.
So, for example, it’s way easier for me to vote for somebody who’ll raise my taxes a certain amount, if I think the money will be well-spent to improve the lives of others, than to voluntarily part with the same amount of money to improve the lives of less well-off people by the amount that my donated money can buy. Raising taxes on me and millions of similarly well-off people, to improve the lives of millions of worse-off people, is a very different proposition. It only costs me a certain amount, but it benefits others tremendously. To oversimplify, I may value my own well-being 100 times as much as random poor people’s, but I don’t value the well-being of other members of my well-off socioeconomic class 100 times as much as members of the poorer class. I’m nowhere close to as classist as I am selfish. I’m happy to take a whole lot of money from my class (including a modest amount from me), and invest it in doing a whole lot more good for a whole lot of people.
I think Harris thinks similarly. He doesn’t explicitly focus on the morality of progressive politics, but that’s largely what he’s laying the groundwork for when he talks about utilitarianism and better ways of organizing societies. He doesn’t think rights are fundamental, and thinks that systems of individual rights must be justified in terms of more basic social goods. I’m with him on that.
Paul, you may be right about what Harris thinks, but it’s not what he wrote. Your version is much better than his! I don’t see why you bother trying to tell us what he would have said if…he had managed it. I don’t really care, and I don’t really care what he actually thinks, either. His book is what it is. That wouldn’t matter much, but a lot of people think it’s both the newest and the last word on the subject, which is tiresome.
Maybe so, but that is not “science determining human values”. In science, you should question even those things that seem more convincing the more you think about them. Morality might not work that way, but Harris claims he is doing something different from the norm.
I’m not sure that such a system is as lacking in the “respect for authority” factor as we’d like to think. (And I’m not convinced that the distinction between “instrumental” and “fundamental” goals resolves this. People who continue to hold sexual purity or “sacredness” important might also explain to themselves that they are only instrumental to other goals).
And if progressive morality is more stable, why does Harris himself espouse some notable ‘conservative’ (at least in the modern American sense) positions that seem to contradict it, like torture and pre-emptive war? Hasn’t he reflected on them enough or is he at a different equilibrium? The author’s inconsistency doesn’t necessarily invalidate a theory, but it makes me wonder if your hunch about what he’s trying to say is correct.
Me:
windy:
In a sense, it is, but the word “determining” is very ambiguous, and it applies in very different ways at different levels. (I do think it is a stupid thing for Harris to say in that pithy way, because of ambiguities in “determining” and in “science,” too—and he’s compounded that with some misstatements about “philosophy” that support the wrong interpretation, i.e., not what he’s actually arguing for.)
Keep in mind that when Harris talks about “science,” he’s talking about rational evidence-based enquiry very generally, and in his view there really is no boundary between moral philosophy and the cognitive science of morality. I think he’s right about that. When moral philosophers use thought experiments and arguments to tease out what are fundamental values vs. instrumental ones, they’re doing very much the same sort of thing that cognitive psychologists do when trying to figure out how people think. They’re doing experiments that reveal unconscious dependencies on beliefs, etc. In his view, and mine, there should be literally no difference between that sort of “moral philosophy” and “moral cognitive psychology,” because they’re studying the same problem and whatever the appropriate methods are, they’re appropriate whether you call yourself a philosopher or call yourself a cognitive scientist.
There are two different applications of different senses of “determining” here:
1. Can science “determine,” i.e., find out, what people do in fact fundamentally value in reflective equilibrium? Harris claims that’s an empirical scientific question, and I agree with that. A lot of people would say no, it’s a philosophical question, not a scientific one, but I’d agree with Harris that the distinction is meaningless. Philosophy done right and science done right are a seamless garment, and in this area especially there’s no good reason to say it’s a philosophical question rather than a scientific one. The usual disciplinary boundaries not only should collapse when it comes to this subject, but they have to, and to a large extent already have.
2. Given what “science” (rightly broadly construed) can “determine” (find out) about fundamental values, can science “determine” (tell you) what is right and wrong? Harris thinks that in principle, the answer is often yes—in reflective equilibrium about basic values, and drawing on all sorts of knowledge about the world, very many answers are determinable in principle (though we might not have all the required knowledge) and many answers are determinable in practice, to a sufficiently high precision and degree of confidence as to be useful. (Especially since we don’t have any plausible sources of better guidance.)
One thing Harris does that is audacious but I think right is to talk of “science” as an extremely inclusive, seamless garment, which includes everything from properly done philosophical thought experiments to historical facts like the JFK assassination. It’s all grist for the scientific worldview, and scientific principles apply throughout.
Unfortunately, he seriously shoots himself in the foot, and goes loose cannon on a lot of people who should be his allies, by dissing philosophy as a discipline. That’s just stupid and wrong and undermines his message. He should acknowledge that what good moral philosophers do, when they do it right, is the kind of thing he’s talking about. They’re doing armchair cognitive science, and that’s an important part of the overall process.
For example, when he criticizes Haidt and Greene about their simplistic analyses of their experimental data, he says the very same things I’d expect a good moral philosopher to say, and for the very same reasons—he’s pointing out that Haidt and Greene’s conceptual and motivational dependency analyses are superficial, and that they stop at what seems like the wrong level. (They assume that the things they’re measuring are basic, when there’s excellent reason to think they’re not reflectively stable.) That’s exactly the kind of thing I’d expect any good expert on the subject to point out, whether they’re “philosophers” or “cognitive scientists,” because there’s just no difference at all between the philosophical and scientific questions. The subject is what it is, and when you realize what kind of subject it is, no disciplinary boundary makes any sense.
windy:
I’m not sure what you mean by these things, or especially the connection between the first sentence and the second one. Could you unpack it a bit?
(I do of course agree with the first one, but in realistic situations of scientific theory confirmation, there’s only so much you can do—think of all the plausible alternative theories you can, do differential diagnoses, etc. I’m not sure how you think Harris is failing to do due diligence, as you seem to. I’m also not sure how that relates to whether he acknowledges the similarities between what he’s promoting and what others actually do.)
windy:
I’m not sure what you mean by “such a system” or lacking “respect for authority” but…
The claim is that people who continue to think of religious obedience, purity and sacredness as very morally important—especially irreducibly important—generally do so by maintaining false beliefs. They’re not in reflective equilibrium, and are making mistakes of fact that prevent them from working things through. (And IMO that’s a major function of religion, as it socially evolves to be part of a stable status quo—it erects barriers that prevent people from working things through, and keeps them making the same mistakes that support the religious and social status quo.)
Harris is saying that Haidt may have in fact identified different interesting clusters of attributes of existing moral systems, but that his relativistic interpretation of them—that people just naturally differ in those ways, and their differences in values are therefore inarguable—is wrong. (And that whether or not Haidt is wrong, that’s an empirical question, not a particularly “philosophical” one, as though there was a difference. It needs to be properly scientifically studied, whether you call that moral cognitive psychology or experimental moral philosophy.)
The sense of liberal and conservative values in question doesn’t map neatly onto political liberalism and conservatism—and if Harris is right that most religious people are nowhere near reflective equilibrium about values, you wouldn’t expect it to. Religions evolve to keep people away from reflective equilibrium and accept a lot of inconsistencies, which are convenient for the particular religion and its survival “strategy” in its particular culture, so you get way more variation in religious morality than you would in properly reflective irreligious morality. Irreligious morality isn’t generally in reflective equilibrium either, partly because religion tends to derail any sensible moral talk in largely religious societies.
Given all that, and the complexities of the general subject of morality, I’d expect that even if Harris is right about all of his major theses, (a) he’d disagree with liberals about some things, and (b) he’d be wrong about some “details” (however important those “details” are in concrete terms) himself, some of which would be “liberal” mistakes, and some of which would be typically non-“liberal” mistakes.
They were actually meant to relate to the sentence before them, which you took out of context to go off on a 700-word tangent. I think that’s rather bad form.
I don’t think I’m the only one who finds it annoying when Harris’s defenders start equivocating on the meaning of ‘determine’. Of course we agree that science and reason can study what values people actually hold, and that is not what is being criticized!
The meaning of my 3 sentences in context was: if you claim that your choice of values is scientific, you have to give us a better method than “I thought about this in the light of the facts and I continue to care about it”. We wouldn’t accept the argument that theism is scientific because many people continue to strongly care about it and because science can study facts about theism.
By the way, I tried searching TML on Amazon and I didn’t find anything about reflectively stable equilibriums. Could you offer some examples? I’d hate to get the book and find that it doesn’t work without the secret Paul W decoder ring.
I also don’t think that the desirability of a “reflective equilibrium” with fewer free-floating moral principles has been adequately supported. Suppose that I value the discovery of truths about the world as a more or less independent moral goal, but I’m not sure how to reconcile this with my desire to promote general wellbeing. Person B has stably decided that they value truth only to the extent that it can be used to further the wellbeing of conscious creatures. Is the second view automatically better?
Caryn:
There’s no need to be so self-deprecating; it’s been very good having you to discuss this with, and if there is a misunderstanding I’m going to be the first to assert that it’s not that you’ve not done a good job of making your case, it’s that I’m not doing a very effective job of showing where my objections are coming from.
Simon Blackburn actually might be a good way, I hope, for me to start making my position clear. The way this goes is that he’s an expressivist, so from the get-go we’re working with different premises. Just so we’re clear on the terminology: he talks about morality as if the direction runs from internal desires, which in turn get impressed onto the world. The assumption is that I, as a realist, have to hold the opposite view—that morality is the world impressing itself on me. This is, as far as I’m concerned, another way of conceiving of Is Ought. The issue gets difficult with Blackburn’s quasi-realism, which as I read it tries to show how you start with this expressivism, and how that in turn explains how we come to make the truth-apt claims that make up our normal moral talk—and that this talk really is truth-apt. Simon Blackburn is essentially an expressivist who doesn’t deny cognitivism. The point is that what makes Blackburn’s moral philosophy distinctive is already a blurring of the lines between the various directions of fit.
Here’s my view:
1) Something like Simon Blackburn’s expressivist story is a plausible story of moral motivation.
2) It’s possible to create a plausible story of moral motivation that passes in the opposite direction so that we start with facts about the world and end with genuinely motivating statements.
3) If we can create perfectly equivalent theories that go in both directions then we have good reason to doubt that there’s a distinction to be drawn in either direction.
4) There is no meaningful (necessary) distinction between fact and value: each implies the other.
2) in particular needs a bit of extra work to become clear. What I need for this argument to really take hold is a theory of perception, and this needs two things.
The first thing we need is a theory of what it is like to perceive. I’m still trying to get clear on what’s needed to argue this, but I think I’m probably not on bad ground thinking that being motivated to act will probably do everything I need here. A rough sketch of my argument would be that motivation forms a part of our everyday experience of reasoning, but that it’s hard to see anything other than the reasons themselves doing the motivating. I can try to expand on that in a separate comment if you’re interested, but there’s not much there yet.
The second thing we need is a way to deal with familiar, old-fashioned skepticism: how do I know I really am perceiving facts about the world? I’m only just starting on reading about perception, at least anywhere past the odd Pinkerism about cheesecake and the familiar scheme rationalism of, say, Locke, but what I’ve read so far suggests that an account of when our perceptions are genuine is necessary. It’s precisely here that I see science fitting in, because as I’ve pointed out previously, the first step in science training is teaching a person to recognise when they are genuinely perceiving. This will involve pretty standard tests: how clear are your concepts, do they change regularly to fit the data, are they suitably clear of bias, etc. I think this is one place where Paul W, for instance, will agree with me, because in many ways this looks just like what moral philosophers get up to anyway.
windy,
I still don’t get what you are saying such that my response was an irrelevant tangent. Your three crucial sentences are no clearer to me now, so let me take the first one first, and the second and third later. In context, I didn’t get quite what you meant, and to the extent I thought I did, I thought what I said was likely relevant.
Here’s all three sentences:
Let me address your first sentence more directly this time. I don’t know if that will help, and I’m still not sure how the other two sentences relate. It is not clear to me that you understand Harris’s project and (if it’s successful) its significance.
I do think that it is, in an interesting and quite important sense, science determining human values. If people are wondering whether to look to the scientific worldview to tell them what’s good, as opposed to, say, religion, the answer is yes.
Even if they’re wondering whether to look to the scientific worldview as opposed to philosophy, the answer is a qualified yes—“Yes, but they’re not distinct, so the “as opposed to” part is a false presupposition.”
(More than that, to the extent that there is a bunch of philosophy that presupposes that philosophy of such things is distinct from science of such things, that philosophy is missing the boat, and is the wrong place to look.)
A lot of the criticism misses the point that what Harris claims to be identifying is not just a consensus that just happens to exist, or to be able to exist, but one which is a “natural kind”—a natural, scientifically explicable phenomenon with a particular structure, and a distinctive internal logic, which has some definite consequences.
He’s saying that morality isn’t just a matter of opinion that science can’t tell you about, as most people think, including most scientists. There are strong constraints on what makes morality morality, and what makes unmistaken morality not mistaken.
(An analogy I used in an earlier thread is money. Whether something counts as money is in a sense “a matter of opinions”—if nobody thinks it’s money, it isn’t. That doesn’t mean that there aren’t definite objective facts about what is or isn’t money, and more interestingly, non-obvious facts about what makes money money, which are independent of any consensus on whether a particular thing is money. Money is a natural kind that was more discovered than invented, which has some very objectively identifiable properties that most people don’t know. If you ask “can science determine moneyness?” the answer is clearly yes in the most important sense.)
Consider the awkwardly-phrased question “Can science work out, in advance, what your values will be if you know all the relevant stuff and work it through rationally?”
In other words, can it determine (in the sense of figure out) what moral values you will come to hold if you work it all out for yourself?
If morality is strongly convergent, as Harris thinks it is, the answer to that question is basically yes, in principle, and often in practice—science (broadly construed) can determine what you will morally value with a fairly high probability and a fair degree of specificity. There are some basic principles of morality wired into human nature, and they have a lot of consequences.
Should you be interested in that—should you take seriously what science implies about human values, and very likely what you personally would also value, if you knew enough and worked it through? Yes, definitely.
The idea is that science does have the ability, at least in principle, and to some useful degree in practice, to be something like an oracle about moral issues, and nothing else does. Not religion, and not philosophy that isn’t part of the seamless garment of rational thought, indistinguishable from other sciences. The idea that science doesn’t have anything to say about morality is wrong—morality is a natural phenomenon entirely within the scope of science—and the idea that there’s anywhere else to look is wrong, too.
That claim might be empirically false—e.g., if it turns out that most people’s moral values do not strongly converge in reflective equilibrium, and there really are reflectively stable liberal morality and conservative morality, and/or other sorts of inarguably different moralities. It might turn out, scientifically, that human morality is more plastic and flexible and culturally programmed than Harris thinks. Cultural relativism could be empirically true, and other relativisms too.
In that case, morality would still be entirely within the scope of science, but science would not be able to determine answers to moral questions—the answers it gave would be more fundamentally conditional, and less “determinate.”
If Harris is right, though, there are determinate answers to many important moral questions, and science can determine what those answers are, usually in principle, and quite usefully often in practice.
(As it figures out the answers to many other kinds of complicated questions—often we don’t have all the facts, or haven’t thought of and worked through all the plausible theories, or just don’t have the sheer computational power to solve intractable problems. None of that means that science is the wrong tool for the job. Generally it’s the only tool for the job—if science can’t figure it out, nothing can—and Harris thinks that’s true of morality as well.)
When you realize that’s what Harris is actually selling, you might think he’s wrong, but it doesn’t seem as much like a bait-and-switch when he says it’s about science “determining human values.” He’s saying that science (broadly construed) can determine answers to questions of value in much the same way that it can determine answers to other questions, and I think he’s basically right. It can’t answer all questions of value, but it can’t answer all other questions, either. Even if it sometimes can’t give an answer, or can sometimes only give an approximate answer, or can only constrain the space of answers, it’s still the only game in town, and gives a whole lot of very useful answers. Nothing else does.
Ophelia:
I think a lot of what I’m saying is fairly explicit in what Harris says, but not brought out enough, and a lot more is implicit. I don’t think I’m making most of it up, or reading him much more charitably than many other people are reading him uncharitably. (Not that he didn’t give people lots of reasons to be uncharitable.)
And if I’m wrong, and reading him too charitably, and my ideas are all that much better than his, then I think they’re worth putting out for their own sake.
I think that there’s more good stuff in there, which isn’t just pedestrian, and is actually quite valuable, than a lot of his critics see, even if it’s not the last word on the subject.
I’d feel less compelled to defend Harris if I didn’t think his best ideas were getting a bad rap along with his fuckups. I largely share those ideas, more or less, so that matters to me, irrespective of the quality of Harris’s presentation, and whether he deserves a good spanking. (Which he does.)
And I’m personally annoyed with Harris for putting me in this position. He really should have done the work to engineer the secret decoder ring, rather than leaving it up to people like me. But he did.
And by the way, if anybody knows of a book that already says all this stuff clearly—and for all I know, there may be one, because I don’t think what I’m saying is that weird—I’d very much like to know about it.
Ah what’s implicit – well yes – there’s a hell of a lot that’s implicit – but that’s the problem!
I really really disagree that he gets a pass because he relies on a mass of unargued assumptions. The unargued part is what makes the book so irritating and unhelpful.
You’re annoyed with him for putting you in this position – well to paraphrase Kingsley Amis’s reaction to Gerard Manley Hopkins’s saying he had the habit of doing something or other in his poetry (“well get out of the habit then!”) – get out of the position then!
Really. The book has to stand or fall on its own, not on your translation of it. And I refuse to think better of Harris on the strength of your supplying the bits that he should have.
I’m not saying I could have done a better job of arguing it than he did – but I am saying I damn well wouldn’t have written a book on the subject if I couldn’t! I would have pared down the subject until I thought I could manage it.
windy:
Sadly, “reflective equilibrium” or “reflective stability” is one of the concepts that Harris doesn’t make explicit, connect to standard philosophical terms, etc. It’s an idea I learned on the street, discussing and arguing with philosopher friends, that’s common in moral philosophy. (E.g., I was recently reading Richard Joyce’s The Myth of Morality, where he talks about “reflectively stable” intuitions and “fully-informed flawless reflection” in those words or something very close.) He does make it pretty clear that’s what he’s talking about when he talks about various thought experiments, Rawls’s “Veil of Ignorance” / “original position” framework, etc.
(It’s not uncommon for ethical philosophers to appeal to the idea of reflective equilibrium without naming it or making it explicit—it’s standard operating procedure. You’re more likely to see it named and discussed clearly in metaethics, where arguments at different levels must be carefully delineated and connected.)
One example of reflective non-equilibrium is the belief that homosexuality is bad because God created penises and vaginas with a particular plan in mind, and homosexual sex is a sinful tendency due to the Dark Side of your Soul, which is perversely disobedient, and giving in makes Baby Jesus cry, or any roughly similar story.
If we assume for this discussion that in fact there’s no such God, and no such Soul, and homosexuality is just a difference that can’t be cashed out in terms remotely like that, then people who believe that homosexuality is wrong for those reasons are not in reflective equilibrium—they maintain a negative valuation of homosexuality that depends on falsehoods and/or ungrounded preferences that would evaporate on sustained close inspection.
Now suppose somebody becomes an atheist and ditches all that stuff, but still thinks homosexuality is bad, because they haven’t realized the extent to which that value depended on those beliefs (or on unreflectively absorbing it from their culture in a free-floating way). They’re generally not in reflective equilibrium, either—factual evidence and thought experiments can change their minds. E.g., if you get them to seriously imagine that they were gay, and couldn’t help wanting gay sex and romantic love, and enjoying it, and not finding it icky as they currently do, and so on, they’ll generally come around to the idea that there’s no good reason for people with that sort of tendency not to do that sort of thing, if it doesn’t hurt somebody in some non-circular sense, especially if it’s a net benefit to them in some non-circular sense.
In general, homophobes are not reflectively stable with respect to homosexuality, for one reason or another—there’s false stuff they think that they rationally shouldn’t, true stuff they should know which they don’t, or stuff they just haven’t worked through. (E.g., putting themselves in the other person’s place, and realizing what they would do, too, in that situation, and why, and why it would be stupid to expect any similarly placed person to do otherwise.)
A reflectively stable moral intuition is one that doesn’t change when you think about evidence and examples.
The assumption that there is such a thing as reflective stability underlies most ethical philosophy, and most metaethics. IMO it’s actually properly an empirical psychological claim, though, not an a priori principle of “philosophy.”
It could in principle be the case that people have no reflectively stable moral intuitions—that there’s no sense of rightness or wrongness that can’t be undermined by leading people through a cleverly designed sequence of thought experiments, pumping up certain intuitions and generating new ones, without ever introducing a false “fact” or hiding a relevant actual fact.
That possibility is almost never discussed, and I agree with most moral philosophers and cognitive scientists that it’s apparently untrue, but it is an empirical assumption I thought I’d make explicit. It’s the basis of some atheists’ fears about relativism—a really radical relativism could in principle be scientifically true, and moral philosophy could reduce to nonrational persuasion—but it does not appear to be scientifically true.
I wouldn’t be surprised if accommodationists tend to think that more than gnus. If you don’t think there’s a fundamental basis in human nature for moral argument, it’s more reasonable to be afraid of anything that rocks the moral boat—better to stick with persuasive rhetoric than to go head-to-head arguing who’s morally right.
I suspect that’s part of Harris’s motivation for writing TML—he thinks too many atheists back down and evade a fight they could win, and assume that the best they can do is frame.
Look, that criticism was meant to apply to what you called “the basic idea Harris is pushing”. I don’t think the process described therein can be called scientific. I think it’s very problematic to go around saying that you can measure the validity of an idea by the degree of unwavering conviction that you have in it. You responded by quoting various ideas that Harris is pushing that allegedly conform to the definition “how science determines moral values”, but those were not exactly the ideas that I was criticizing. Whatever the merits of Harris’s “project” in its entirety, this at least looks like a stumbling block given what we normally tell people about the workings of science.
Normally I like your comments, but I don’t think this Courtier’s reply-type attitude is at all productive unless you can identify some specific misunderstandings, like I’ve tried to do. I’m all for the project of naturalizing ethics but from what I’ve read, Harris hasn’t really done any heavy lifting in that regard.
Yes, I already gathered what you mean by it from previous comments. I didn’t ask you to explain it, I asked for examples from the book where you think Harris is arguing for this. This is pretty thin gruel:
I know about the Veil of Ignorance, and I disagree that it makes it “pretty clear” that Harris is talking about reflective equilibrium within a person, rather than the best ways to organize society. Of course, a consistency of principles is generally recommendable in any kind of thinking, so I have no doubt that Harris supports it wrt morality. But I hesitate to continue this discussion if it’s going to lead to more patronizing lectures in lieu of actual responses.
Suppose that I tried to convince you that although the Origin of Species doesn’t explicitly deal with the subject of its title, it does so implicitly (there’s a case to be made either way). Would you be satisfied if I told you it’s “pretty clear” from “various examples” that it’s in the book and insinuated that “it’s not clear to me that you understand Darwin’s project or its significance” and told you at length about my preferred theories of speciation?
So Windy, what did you make of Harris’s third chapter Belief? If you haven’t read it so far I think it might be useful.
windy:
Ah, OK. Much of chapter 1 uses that approach without explicitly arguing for it. (As moral philosophers typically do.)
Chapter 1 is quick and dirty and polemical, but basically doing what it seems everybody usually does, if they bother—showing that when people make moral claims, a little probing shows that they’re always related to the interests of some conscious creature(s) or other. (E.g., other people who’d get hurt if people behaved a certain way, or offense given to a God whose interests are more important than human interests, or something that somehow matters to someone else.)
Harris certainly doesn’t do a good, careful, clearly structured, or thorough job of that—he’s basically painting a picture and hoping people will think about it and “get it,” and moving on. (esp. pp. 32-38) It almost seems like an argument from obviousness, but it’s not that. There’s more real argument in there than it seems, but the structure isn’t as clear as it should be, partly because he interleaves discussions of different basic issues.
I think this link will work to access that section in preview on Google Books
http://books.google.com/books?id=VttdxFt4kT4C&pg=PA32&dq=%22consciousess+is+the+only+intelligible+domain+of+value%22&hl=en&ei=zjavTdj4O_SO0QGGs4TDCw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CD8Q6AEwAA#v=onepage&q&f=false
In a quick-and-dirty way he’s establishing that when we talk about morality, what it turns out on reflection to mean is in fact caring about the interests of conscious creatures—and if his brisk discussion doesn’t convince you, try it, you’ll see.
That is actually a pretty standard move in the professional literature on ethics. It’s often simply assumed that readers already know that morality is basically about adjudicating conflicts of interest, typically between different persons, but sometimes between persons and non-persons (e.g., animals capable of suffering) or between a person and later states of that person. (To the extent that taking care of your “future selves” is regarded as a moral issue.) And if any argument is given, it’s generally a quick, intuitive one that gets the idea across, and a claim that nobody’s ever made sense of the word “morality” in any other way. On reflection, it turns out to be essential to what the word “morality” means—apparently necessary, and maybe sufficient, on reflection.
Interleaved with that attempt to establish what moral terms presuppose (concern for others’ well-being), Harris is addressing other issues. In particular, p. 32 starts by seeming to address the issue of why we ought to care about the well-being of others, anyhow.
The answer that Harris gives is the right one, but looks suspiciously like a bait-and-switch evasion of the question, in the way he says it and structures his interleaved arguments.
That answer, plainly put, is that Harris can’t give you reasons to have a basic concern for others, if that’s just not the sort of thing you care about on informed reflection. He can only tell you that if you’re a moral person, you do, or will in reflective equilibrium—and tell you the consequences of such a concern, if you have it, which you can recognize in reflective equilibrium.
Just before launching into what he can say, Harris says
That’s a tremendously stupid thing to say at that point, even though I think it’s true in the sense he means it. It makes it sound like he’s disagreeing with philosophers who call themselves “moral skeptics,” but he’s not. What he’s disagreeing with is a certain extreme sort of naive moral relativist, which again is not what “moral relativist” generally means among professional philosophers who call themselves relativists. Serious foot-shooting there.
He also screws up similarly on p. 38, in discussing Hume’s is/ought distinction. Again what’s wrong is not that what he’s saying is wrong, but that he says it so stupidly and confusingly.
He is not disagreeing with Hume about the basic is/ought thing, just arguing (correctly) that it doesn’t necessarily keep us from recognizing what counts as morality in basic terms, such as due concern for others, and what counts as acting morally in actual situations, taking that into account.
Harris is right that people who think the Humean is/ought thing is fatal to “scientifically” figuring out morality are wrong. If it can’t be figured out, that is not why it can’t.
Unfortunately, he bashes “many moral skeptics” who take the is/ought thing way too far, but what he really means is naive relativists and outright Dumptyists who think they can be the masters of the word “moral,” and make it mean whatever they mean it to mean. That’s not what “moral skeptics” and “relativists” in philosophy generally do; it’s the kind of crap you’re way more likely to get from a certain kind of postmodernists in English or anthro departments, or a certain kind of physical scientists who don’t know shit about moral philosophy, or common-or-garden dumbasses on the internet who think philosophers are simply wanking.
It’s simply amazing how much stuff Harris can say that I think is extremely interesting and probably true, while simultaneously coming off as saying naive and obnoxious shit.
Windy,
About my Courtierly style—I didn’t mean it to sound (or be) that way. I honestly haven’t quite understood what you do or don’t quite understand about what Harris is up to, and what exactly you’re asking for. If you haven’t yet read the book, it would be entirely understandable if you don’t know what Harris is really up to. (And given that it’s far from his best-written book, not entirely surprising even if you have. IMO, it does take some decoding to realize what he doesn’t mean, but may seem to mean, if you don’t “get” where he’s coming from.)
If my last note was more longwinded shit that’s not what you’re asking for, too, I am sincerely sorry. It was a sincere try to show how Harris uses the idea of reflective equilibrium, in a way that is “obviously” reasonable when you see it done, without explicitly naming or justifying the idea.
BTW, windy, one of the things that you’ve said that I actually haven’t known what to make of is:
I honestly don’t know what you mean by that, in a way that applies to something specific we’re discussing.
I don’t know if you’re talking about coming to reflective equilibrium about basic values, or clarifying moral concepts, or whether you really mean validity, or actually mean truth. I actually don’t know how to give you a more focused answer than to show the kind of several-page argument Harris is making, and what I think its basic structure is, and let you pick something more specific and concrete to object to.
But I probably should have just said that first, before volunteering the multipage example and gloss, and I’m sorry if that was a fuckup, too.
2) It’s possible to create a plausible story of moral motivation that passes in the opposite direction so that we start with facts about the world and end with genuinely motivating statements…
2 in particular needs a bit of extra work to become clear.
Yes, because that’s the project of *naturalizing ethics*. A lot of people are working on it. :)
You know who’s a utilitarian? Hume. He agrees that it’s good to promote the general welfare, as he puts it. He just denies that facts about the general welfare *logically entail*, on their own, any facts about morality. To be a utilitarian in terms of normative theory is just to say that actions are good if and only if they maximize utility. But claiming that claims about goodness *mean the same thing* as claims about utility goes beyond that. Refusing to carve a healthy patient up for his organs in order to save five other people – is this good? does this maximize utility?
… sorry, in #102 I have blockquote fail. Everything but the last two sentences should be attributed to Montag #92!
Caryn:
Hey, that a lot of people are on the project doesn’t stop me from developing my thinking on it. If anything it just spurs me on more: see, I’m on the right track! Actually, if you could throw some names at me it could help make me feel more comfortable. I feel a bit naked out in the wilderness amongst all these moral dualists.
About the only thing I can add is that I think there are independent psychological reasons to think that a naturalised ethics is the way to go, and this is where I think Harris has the most weight on his side. In particular I think that there are elements of belief and accepting reasons that Is/Ought fails to capture.
Caryn,
In what you quoted from me, I didn’t say anything about Utilitarianism, and in the section of Harris I was discussing, neither did he. (Though clearly that’s the direction he wants to go, and I’m personally pretty sympathetic with going pretty far that way.)
The argument was that morality is about conflicts of interest, and some kind of “due concern” for (self and) others’ interests. It doesn’t establish that the right way to arbitrate conflicts of interest is to maximize total or average utility. You could get pretty Kantian if you want, or Rule Utilitarian, or go with a straight Utilitarianish thing but with a very different weighting function, like Rawlsian maximin.
It’s even consistent with God’s interests being paramount, and humans’ interests being much less important, so that you could agree with Aquinas that blasphemy is a more serious immorality than murder, because “due consideration” of God’s interests is different from “due consideration” of a human’s. (But that evaporates when you realize that no such God is plausible, on reflection in light of facts.)
The argument that far mainly serves to show that the Dumptyists / extreme naive relativists are wrong—morality isn’t just any damn thing you say it is, and does clearly have something to do with advancing well-being or limiting harm or something like that, for somebody.
Paul:
That’s a very good summation of what I’ve always read Harris as saying about this. It’s not that any particular wellbeing or kind of wellbeing counts as morality, rather that if you strip wellbeing out of the discussion it’s not clear you’re talking about anything that could count as morals.
I’ll also point out I’m officially a fan. A significant part of me is hoping that you wouldn’t mind inserting a plug for your own work if it’s available.
Caryn:
I feel compelled to add that naturalising ethics isn’t ultimately my goal here. I’ve come here because I’ve a goal to try to redeem the Logical Positivists for my dissertation and one of the reasons why is that I think there’s something to be said about the intuition that meaning fundamentally is about the world. One of the conclusions is that it’s just strange for some people to say that there could be anything we could meaningfully talk about that didn’t respond to the methods of science and so bang head on I need to engage with the expressivists in order to properly grasp the question: where better than in ethics?
Here’s my question: what do you think Harris said that somehow Hume left out? Hume agrees that it’s good to promote the general welfare. He agrees that science can and does inform our understanding of well-being (though of course Hume didn’t have access to fMRIs.)
Using modern scientific equipment might make what individuals value clearer than, say, asking them what they value. The majority of people might well cohere on similar values. But what people value does not map identically onto what is moral, nor does well-being. This is *precisely Hume’s point*.
Montag, any number of people are also working on salvaging logical positivism. :) Check the most recent papers from the Montreal conference when they come out, and start trawling philpapers. You’ve seen Timmons and Horgan? http://ndpr.nd.edu/review.cfm?id=9143
Oh Lordy me Moore! I’ve been curious about where Intuitionism sits with me. Certainly that Timmons and Horgan seems to have all the usual suspects. In particular I’m eyeing up the Thompson and the Shafer-Landau essays, so I’ll see if I can get a copy out.
As for trawling Philpapers, you know, not once did it dawn on me to put Logical Positivism in there! I’ve actually been desperate to find a decent account of when and why it was supposed to have been put to bed. The best I’ve got so far was from a tutor, and it roughly seemed to go “Quine and his ilk raised some issues with the project and then everyone just quietly went out and did something different”. There certainly wasn’t a great collapse, or even much of an argument against it, despite what Mary Midgley says. Buggery, as if my bloody summer weren’t looking squeezed already. :)
Montag, also this might be useful: http://www.amazon.com/Dear-Carnap-Van-Quine-Carnap-Correspondence/dp/0520068475
It is in fact due back at the university library next time I have to be in. New toys and they won’t even get in the way of revision: oh happy days indeed!
Yay, nerds party!
:- )
OB:
Count yourself lucky you missed the happy dance! :D
I thought that was what you were doing in this thread until now: trying to clarify what Harris had explained so “uncharacteristically badly” in TML. Now you say it can’t really be done? :)
Truth is perhaps a too loaded concept, but Harris says moral ideas can be ‘facts’ or ‘valid’ like other scientific facts, right? I don’t think I was saying anything especially interesting or controversial, just the basic Saganite idea that we should be careful about implying that strong feelings or the ‘obviousness’ of an idea or a popular consensus have something to do with validity. (yes, I know that’s not the entirety of Harris’s argument about morality)
Excuse me, but if I tell you a criticism applies to something specific in this thread (hint: if there’s a blockquote immediately above a paragraph, that’s usually it), stop trying to make it about whether I “understand what Harris is up to”. You seem to be hell-bent on interpreting everything I say as some particular naive criticism of Harris’s entire “project”. (Like in #85, where you quoted my answer to the “Pinkerism”, and the whole thing about the “basic idea”). I think you have now poisoned the well to the extent that it’s useless to continue this discussion.
» Caryn (#84): Also, OB posted this link to Blackburn making the point I’ve been making much more eloquently: http://www.youtube.com/watch?v=W8vYq6Xm2To
Remind me again: what is so eloquent about Blackburn in that clip?
» Ophelia (#86): I don’t really care, and I don’t really care what he actually thinks, either. His book is what it is. That wouldn’t matter much, but a lot of people think it’s both the newest and the last word on the subject, which is tiresome.
You don’t care what he really thinks? *rubs his eyes* I would have expected that kind of statement about somebody you have a certain personal aversion towards. Did I miss a memo?
And about this “lot of people”: who and where are they again exactly? I seem to remember that you said it wasn’t anonymous commenters on blogs, but I’m a bit foggy about names or other particulars. ;> In any case, you seem to be piling on, and so far I haven’t seen a Harris or TML cult following to warrant the continued attention. What I have seen, though, is you saying the book’s ballsiness gets on your nerves. Which, on its own merits, is a fair enough comment. I just don’t see that justifying the continued flogging.
Peter:
I watched it last time Caryn linked to it and I was pretty disappointed.
I was especially disappointed when Blackburn skated over the issue of whether middle-class people spending money on family vacations was immoral when the same money could save several lives in the third world, and went on to say that it’s actually a good thing that people do that.
It sounded a whole lot like an unreflective defense of unreflective self-serving middle class values, entirely missing the point of Singer’s Utilitarian critique and just denying it had any real force or any useful implications.
I don’t think Blackburn actually thinks what it sounded like he was saying; I’d guess that he was so intent on distancing himself from Singer’s more radical conclusions, and on emphasizing how it’s not that simple, that he sounded like something he isn’t, defending something he’s not.
I got the same sense (IIRC) from some of the stuff he said that seemed directed more at Harris.
I sorta expected Blackburn to agree with Harris on basic issues up to an interesting point, and make it reasonably clear where he thinks Harris goes wrong. I was particularly interested to hear where that point of divergence would be.
I don’t think either made the right distinctions clearly—e.g., what does Harris mean by science, and what does Blackburn mean by philosophy that’s distinct from what he means by “science”—and I thought they were mostly talking past each other. I wasn’t surprised that Harris did that, but I was disappointed that Blackburn did, too, with babies and bathwater flying both ways.
I think Harris has largely wrong-footed the science vs. philosophy thing, but I’m disappointed when philosophers don’t straighten it out and make their disagreements with him clearer.
I have a lot of respect for Blackburn, but I haven’t been much impressed with his responses to Harris so far.
Paul, I said this on the Harris and Pigliucci thread: it’s not so much that Blackburn’s talk was disappointing, it’s that it was lazy and mostly glaringly wrong. BTW: Blackburn has a review of TML in which he goes off the rails right at the start, pretty much making as little sense as Pigliucci. He asserts that Harris is “triumphalist”, conveniently failing to mention that Harris repeatedly puts on the dampers by talking about “in principle” and “this is only the beginning”; and the Brave New World jibe that was just as ignorant when Pigliucci made it. Which is when I stopped reading; if he cannot be bothered to get these simple things right, I cannot be bothered to put up with his review.
Peter
Come on. I don’t care what he really thinks as opposed to what he wrote in the book. That was the issue, obviously. That was clear enough in what I said:
Of course I don’t care what he “really” thinks! Why would I? It’s not about him, it’s about the book. If he doesn’t successfully say it, then he doesn’t – what’s really in his head is totally beside the point.
Piling on? That’s a very odd thing to say. What, you think I’m bullying Harris? Seriously?
No, you certainly have not seen me saying the book’s ballsiness gets on my nerves – I don’t equate confidence or courage with testicles, and I’m regularly irritated by people who do. I would never say such a dopy self-hating thing.
What do you mean continued flogging? What do you mean justifies? “Continued flogging” could be an alternate name for this site, and I don’t consider that I need justification, since I don’t force anyone to read it, and no one pays me for the privilege.
Peter, I’d say that Blackburn, very carefully and in clear language, lays out the distinction between facts (which are about the way the world is) and values (which are about the way we think the world ought to be.) Science can’t tell us how we think the world ought to be; it can only tell us whether or not there’s a better approximation to the way we’ve already decided we’d like for it to be.
The well-known critiques of utilitarianism are woven nicely into the talk as well, but in a way that doesn’t require the audience to have done any prior reading about them. Harris could have done this as well, but didn’t.
Paul, since Blackburn isn’t a utilitarian, he’s going to focus on the points where he has a metaethical difference with Harris and Singer. The interesting parts aren’t where everyone reaches consensus, but where we don’t. Disagreements help us locate where the *problems* with getting at the truth actually lie.
I would say this is precisely correct and addresses your complaint about middle-class values: …if Bentham’s hedonist is in one brain state and Aristotle’s active subject is in another, as no doubt they would be, it is a moral, not an empirical, problem to say which is to be preferred. Even if this were solved, how are we to balance my right to pursue my wellbeing against the demand to help maximise that of everyone? Striving to maximise the sum of human wellbeing is making oneself a servant of the world, and it cannot be science that tells me to do that…
Oops: should read Science can’t tell us how we *should* think the world ought to be; it can only tell us whether or not there’s a better approximation to the way we’ve already decided we’d like for it to be.
» Ophelia (#120): Come on. I don’t care what he really thinks as opposed to what he wrote in the book
Yes, and I got that the first time around. My surprise persists, though. I would have thought that the issues themselves are much more important than whether Sam Harris was Right™ in his book. And if one is interested in the issues, then one would be tempted to spend more time talking about the issues than about how misguided the book is. My impression had been that you are more interested in the issues than the book, hence my surprise.
One reason you might think the book deserves this much attention is that there is a kind of cult following that could use some antidote – but then I would again respectfully ask who and where all these people you mentioned actually are.