Is-ought and all that
Anthony Appiah says something in his review of Sam Harris’s The Moral Landscape that I don’t get – it looks wrong to me, but Appiah’s a philosopher and I’m not, so help me out here. Maybe he spoke in haste, or maybe a sub changed his wording, or maybe I’m just wrong.
Harris means to deny a thought often ascribed to David Hume, according to which there is a clear conceptual distinction between facts and values. Facts are susceptible of rational investigation; values, supposedly, not. But according to Harris, values, too, can be uncovered by science…
I thought the point was that facts can’t, as a matter of logic, get you to values. That doesn’t make values not susceptible of rational investigation, surely. Does it? It makes them not straightforwardly susceptible of empirical falsification, perhaps, but there are other ways of rationally investigating things – aren’t there?
Answers on a postcard.
Yeah, it’s too strong.
Perhaps if he’d said something like our ultimate values are not susceptible of rational investigation … I.e., whenever I ask, “Do I really want to have this value as part of my value set?”, I always have to appeal to other values in my value set. E.g., if I value eating chocolate and am considering whether to try to train myself out of having that value, or at least into not acting on it, I’m going to have to appeal to my other values, such as the value of getting a bit fitter, which may require that I lower my calorie intake. But there’s a point at which rational investigation runs out; we never go deeper than a level where we are still talking about values in our value set. We never reduce our investigations to a level of facts where values are not mentioned. I suppose another way of putting it is to say I can never stand outside my entire value set and investigate its rationality all at once; there’s no position from which that can be done.
Presumably he had something like this in mind and used somewhat misleading shorthand.
He later points out Harris isn’t the only one who thinks it (rational investigation of values) can be done:
There seems to be a pattern of misunderstandings and straw men surrounding responses to Sam Harris. I think what Sam means when he talks about science relating to values, morality, etc., is that as we learn more about the world through scientific investigation, we’ll be increasingly well equipped to make moral judgments based on whatever central principles guide us to begin with. Whatever you decide to base your morality on, scientific inquiry would have something to say about how best to pursue it. When did Sam Harris say science objectively proved that well-being was absolutely good?
Kester, if that’s all he means, I have no quarrel with him. Indeed, he and I would be in total agreement. But he seemed to go further in his TED talk. And when he was challenged on it – by Sean Carroll, for example – he didn’t clarify it in the way you just did but became very aggressive with his critics. If he was merely saying what you just said, I don’t think there’d be any conflict at all between his position and Carroll’s position (or mine if it comes to that), but that’s not the line he took.
Appiah was discussing “a thought often ascribed to David Hume”, by which he is evidently referring to the non-cognitivist approach towards moral statements. The non-cognitivists claim that there’s no rational way of resolving moral disagreements. As far as non-cognitivism is concerned, moral statements have no “cognitive content”, meaning truth-conditions. Instead, they’re just declarations like “boo” and “hooray”.
I don’t doubt that non-cognitivism is sometimes ascribed to Hume. It’s a separate question whether or not Hume deserves the ascription.
I look forward to reading Harris’s book. From what I can tell, Harris has been disappointingly aloof from meta-ethics. I hope to be proven wrong.
I’d always taken the “is does not imply ought” to be a critique of Natural Law moral theory. Isn’t that what Hume was criticizing, or am I misremembering? Anyway, that “is does not imply ought” should probably have a “necessarily” in there. And it works in that respect: just because sexual relations are how we reproduce does not mean that we ought not use sexual relations for other reasons.
Of course, I am not a moral philosopher, I’ve just done some studying on my own time, so I might have mangled a few points there.
I don’t actually think Hume was a non-cognitivist, though it’s hard to place his meta-ethical position since most of these positions didn’t exist in his day. He sometimes seems like a simple subjectivist. Other times, he seems like a virtue theorist or even a contractarian. Generally speaking, these are all cognitivist positions, but not objectivist positions. Actually, virtue theorists can also be objectivists, but probably not the kind that Hume was.
Nor do non-cognitivists necessarily think that there’s no way that moral disagreements can have rational resolutions. They entertain the idea that some moral disagreements might be like that, but they tend to think that most moral disagreements are really disagreements about the facts.
One of the things that I’m looking to develop is Hume’s view on moral motivation, which seems to suggest he thinks reasons aren’t in and of themselves motives for action. If his view is as stark as that, I don’t think he can avoid the fact that that’s just not how our best theories of the mind work; to be fair, a lot of modern philosophers have yet to grasp this, and it would be unfair to expect it from Hume.
I also don’t think it works as a normative theory, when a standard accepted route appears to be one of expanding moral spheres: “how would you feel if Jimmy did that to you” kinds of reasons. This kind of argument appears to work only because we’re motivated by the concept of non-contradiction, a logical rule.
It was my understanding that Hume argued we should be sceptical of claims purporting to derive values from facts, that there was some hidden value premise that could be contentious.
Such as, say…

Premise (fact): Policy X will reduce suffering.
Premise (value): Suffering is ethically undesirable.
Conclusion: We ought to adopt policy X.
You have a statement of fact as a premise, but also a value statement, from which the final moral proposition is derived.
If you hide the value premise, naturally it looks like this…

Premise (fact): Policy X will reduce suffering.
Conclusion: We ought to adopt policy X.
You have, apparently, a value being derived from a fact, a state of affairs that I think Hume would have people be sceptical about (i.e. go searching for the hidden value). How many times have the various electorates around the world heard this kind of rhetoric?
Based on what I’ve seen Sam Harris say around the traps in the lead-up to his book, I’m getting the idea that he has (albeit somewhat trivially) committed the same mistake in claiming that science can generate moral propositions. Something along the lines of…

Premise (fact): Neuroscience shows that policy X will reduce suffering.
Conclusion: We ought to adopt policy X.
I think Sam is leaving out the “suffering is ethically undesirable” when touting the ability of science to make moral judgements. “Suffering is ethically undesirable” being a philosophical pronouncement, not a scientific observation.
I don’t however think that it is particularly contentious that material suffering is ethically undesirable (except when you argue with some theologians), and in fact suspect it is universally true that all moral claims are reducible to human wellbeing (as I believe Sam believes himself). I suspect Hume’s fear, in cases like these though, would be that even if something uncontentious got past us, if we get in the habit of not being vigilant about it, other values could get past unchecked – maybe this is part of the worry of some of the people who think the Gnu Atheists are going to start selling social Darwinism at some point.
I for one don’t have a problem with admitting that the ethical criteria used to define what is bad are philosophical, and am quite happy to sing it from the rooftops. Just as I am happy to openly admit that the best way to test for human suffering is quite probably neuroscience – you still have to establish the fact of where, how and how much suffering is going on in order to prescribe a moral course of action, after all.
Anyway. I’ve probably ranted on this too much. You’re better off listening to Russell – he’s the philosopher. :D
The way I’ve understood Sam Harris’ argument on scientific morality, is that he treats the values that people hold as facts about the world as well – and hence they can be studied by science. Why do we hold these values? What brain processes create them?
And, of course, if we treat the values of the human population as a given, we can determine how we can fulfill the values of as many people as possible. This doesn’t even necessarily involve a value judgement on those values themselves. Although it does assume that we have the value that fulfilling people’s values is a good thing. Which leads us to:

“‘Suffering is ethically undesirable’ being a philosophical pronouncement, not a scientific observation.”
Yes, but if you hold the position that suffering is undesirable, then that too is a fact about the world that can be studied. For instance, we may find that this value can be reduced to neurological processes, which in turn have evolutionary explanations.
That this type of reasoning will never lead to “ultimate values” doesn’t bother me much. In fact, if human consciousness is the result of a natural process (as I suspect it must be), you would expect that ultimate values don’t really exist. If we can accept that mathematics may never be complete, or accept the same for science, why should we expect anything different for ethics or morality?
Which makes me wonder: is there a version of Gödel’s incompleteness theorem for ethics or morality? Something along the lines of: for any value system V, there is always a question that can’t be answered in V: should I follow V?
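One toy way to make that conjecture precise (a sketch of my own devising, not an established result; V, O and “adopt” are all placeholders): treat a value system V as a consistent set of normative axioms with an “ought” operator O, and conjecture that the self-endorsement question is undecidable from inside the system:

\[ V \nvdash O(\mathrm{adopt}\ V) \qquad \text{and} \qquad V \nvdash \neg O(\mathrm{adopt}\ V) \]

Whether anything like this could actually be proved for a formalised ethics is, as far as I know, an open question.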
I think Sam Harris might be on a fool’s errand. It might just be possible that the idea that we have ‘values’ is questionable, and that the belief that we have morals is questionable too. Rather than look for why we have morals, we should first determine whether we have morals.
I’m pretty sure my cat is an amoral animal, and yet behaves in a very loving way that even transcends species (humanism loses points here), but of course she has no values or morality. I would also suggest that my cat and I don’t have this mysterious spirit called free will.
The idea that we don’t actually have any morality or values is incredibly freeing, but incredibly frightening for people. There may be very good rational explanations for why human behaviour has been modified in modern society, so as to conform to its rules. There may also be very good rational arguments against injustices, arguments that can be made rationally rather than by appealing to subjective and questionable terms such as morality or values.
Just perhaps science does have something to say about values.
Robert Axelrod in his book on the Prisoner’s Dilemma uses very logical computer programmes to assess the long term success – in terms of survival – of different “moral” strategies such as “tit for tat” or “tit for two tats” or various “sneaky” strategies.
Is there a possibility that the development of values / morals / ethics over the last 500,000 years is based on certain strategies of survival in the long run? Could it be that people like Axelrod are looking at these values in a scientific way and uncovering the hidden reason why human beings have developed these values?
According to Axelrod, being nice but firm (“tit for tat”) is the very best strategy for dealing with the world at large – is it coincidence that this way of dealing is also the basis of many schools of philosophy?
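For anyone curious about the mechanics, here is a minimal toy sketch of the kind of tournament Axelrod ran (my own reconstruction for illustration, not his actual code): an iterated prisoner’s dilemma with the standard Axelrod payoffs, in which “tit for tat” meets an always-defect “sneaky” strategy.

```python
# Toy iterated prisoner's dilemma, illustrating Axelrod-style tournaments.
# Standard payoffs: temptation=5, reward=3, punishment=1, sucker=0.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,   # (my move, their move) -> my score
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then repeat the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """A 'sneaky' strategy: defect no matter what."""
    return 'D'

def play(strat_a, strat_b, rounds=200):
    """Total scores for two strategies over an iterated game."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)  # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): exploited once, then firm
```

Head to head, the defector edges out tit for tat by a single exchange; Axelrod’s point is that across a whole population of strategies, tit for tat’s mutual-cooperation payoffs make it the top overall scorer.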
@jan frank
According to Adam Curtis, in his documentary The Trap, game theory (developed by John Nash, made famous by the movie A Beautiful Mind) was used to predict the behaviour of people, based on the idea that people behave selfishly and rationally. But experiments showed that people did not behave either selfishly or rationally. Unfortunately, what people didn’t know at the time was that John Nash was crazy. He is a paranoid schizophrenic, but this hasn’t stopped John Nash’s theories from being applied within economics and politics on the same assumption that people behave rationally and selfishly (which they don’t).
Deen:
I don’t actually read that as being completely Harris’ argument. There seem to me to be two claims at work, possibly more.
The first is that there are true and false answers to moral questions. This is in fact what Russell means when he refers to cognitivism. There are in fact a wide range of cognitivist approaches, one of which Russell himself adopts: Mackie’s error-theory-based subjectivism, which is certainly Humean from what I gather.
The second claim is that moral questions are amenable to scientific inquiry. Now, you could argue this means he’s arguing that morality ultimately corresponds to physical facts, but I’d reject that interpretation as too narrow in how it defines scientific inquiry. Rather, for this claim I’d argue Sam needs to show that the standard methods of scientific inquiry – appeals to evidence, clarity, generalisability and specificity – lead to results we can recognise as better. There’s some question whether the “better” here is something that relates simply to subjective states or whether there are independent “moral facts”, but this is a separate point to the one he needs to make. I think his suggestion here is that subjective states just are facts about consciousness, but that’s just a guess on my part based on a particularly confusing claim of his.
In terms of a “Gödel’s incompleteness theorem” of ethics, there are two answers I can give here. The first is that the whole is-ought divide is something rather like that. Is-ought is meant to show that motivation is sort of an extra component that reason alone isn’t quite capable of generating, whereas Gödel’s theorem would apply far better to theories of meaning. The second answer is a lot more technical and would be based around Quine’s underdetermination of theory by evidence and, more aptly, his two indeterminacy theses. The link between the two will be that Humean subjectivism bears a lot of similarities to the answers to underdetermination and indeterminacy given by people like Davidson or Dennett, but that’s another very tentative argument based simply around what I’ve been able to pick up around the fringes.
I’m sorry, Egbert, I don’t quite understand how your contribution relates to mine. I agree that game theory as developed by Nash casts a rather harsh light over the human psychic landscape, but from what I understand of the matter, Axelrod – who did his work after Nash – showed how a selfish player of the iterated prisoner’s dilemma loses out to somebody using a “nice” strategy.
In other words, in the veeeeery long run, people with nice tendencies do have an evolutionary advantage – as demonstrated by the different computer strategies tried out in the course of his experiment. We, you and I, are the end product of that long run, which is why we tend to have more respect for nice strategies than nasty ones, even if nasty ones do sometimes work out better in the short run.
That’s why I suspect that scientific experiments like Axelrod’s can help us evolve, or at least analyse, values. Just as I suspect that science can do more to analyse and evaluate values than religion can do to answer scientific questions.
Egbert:
Apparently some research suggests Jeremy Bentham, founder of Utilitarianism, might have been on the autism spectrum. That likely was a huge factor in how impersonal his theory is, yet Utilitarianism is actually a very live option for a moral theory, and in many ways precisely because it’s so impersonal. This is a point made by Joshua Greene in the recent Edge video series The New Science of Morality.
Equally, when it comes to game theory, the fact that its founder was a schizophrenic may have helped him develop a tool that is in fact useful. I’ll agree with you that one of its big failures if treated as a normative theory – equally a failure of a lot of economics, and in particular naive free-market economics – is that it can at times incorrectly assume it’s dealing with perfectly rational agents; that doesn’t stop the tool itself being incredibly useful at unpicking the mechanisms behind the choices people do actually make.
I think it’s almost trivial to use the is-ought problem to “undermine” any moral system. If you say “X aids our survival”, I can say “Why is survival good, and why is it not probable that there might be something better than human survival?” If you say “X aids human pleasure and abates human pain”, I can say “Why are pleasure and pain avoidance morally valuable, and why do you think that this hedonistic calculus is not superseded by some other, more basic value?” If you assert “God demands X”, I can hit that with the Euthyphro dilemma and ask why you think God is so good anyway.
In a way, it’s much like the Agrippan trilemma, or certain applications of the principle of sufficient reason (“Why is there something rather than nothing?”). Whenever you ask for a justification, whether moral, epistemological, or ontological, that justification can be challenged, and so on. To simply give up here is nihilism and/or solipsism. If you want to escape that, you have to either make a bland or self-justifying assertion, allow an infinite regress of justification, allow non-rational justifications to come into play (“It’s so because I want it to be so.”), regard the question itself as meaningless, or just regard the problem as unsolved and move on to a more immediate one.
I think Sam Harris is correct in that science has a clear bearing on ethics, particularly insofar as psychology, cognitive science, and biology tell us about our own needs and desires. However, my preliminary assessment (based on his statements and talks, not his book itself) is that he doesn’t have any answers regarding meta-ethics, nor does he have a particularly clear response to the is-ought problem itself. He’s working with a consequentialist framework; once you assume that morals exist, AND that they are meaningful and not just a common delusion in our species (like religion is), AND assume that morality is based entirely on the results (or reasonably foreseeable results) of actions, AND assume that something like “well-being” is the appropriate standard for morality, then he makes sense. This is not to completely disparage his work, however; few scientists feel the need to argue against solipsism before publishing papers on astronomy.
“it can at times incorrectly assume it’s dealing with perfectly rational agents”
Perhaps this is pedantic, but I think that this is also a problem with how it describes positive, not just normative, economics. I think that the issue is with how widely Nash’s work is applicable, rather than normative vs. positive(/descriptive).
Nah, not pedantic, pretty accurate in fact. The obvious return is that that’s a problem with pretty much all kinds of inquiry; it is both tempting and easy to apply a method or tool to a problem it’s not suited to deal with. A recent example jumps to mind (http://scienceblogs.com/pharyngula/2010/09/hes_baaaack_stuart_pivar_hasnt.php), but I’m sure we can all come up with others.
If the suffering of others provokes suffering in myself because I’m cursed with empathy (yes, it’s a curse at times, because it can drive you to strange places, including being very critical because you can see where people are going with their idiotic behaviors long before they, and others, do)… and we have discovered that empathy is, at least partially, genetic…
…then we should be able to see that there is a scientific basis for bringing in “suffering is ethically undesirable”, because suffering in one provokes suffering in others. And when people start suffering, it becomes contagious, and has the ability to spread, with the potential of afflicting a wider and wider field of individuals. Stress has its by-products, including many stress-related diseases: chronic pain, poor immune function, hypertension, susceptibility to various mental and emotional disorders. Stress also affects crime rates, poverty rates, educational outcomes and other factors in society as a whole.
So, at some point in time, as science learns more about mankind, positions that were “just philosophical” have the potential to become incorporated into science. And I think Harris is right, regardless of the clarity of his arguments, that science is exploring the foundations and origins of morality, and that science does have the ability to incorporate and say something valid and important about morality.
Now, I’m not saying I’ve either read the book or followed Sam Harris very carefully in this area. I think this is a “frontiers” concept in science. So, like any part of science on the frontiers, there are going to be a lot of false starts, mistakes and over-reaching. We can look at psychiatry and psychology as excellent examples. There were some serious mistakes by the early pioneers, and even 40 and 50 years after Freud started pioneering the field, these disciplines were very wrong in some of their beliefs, such as the cause of schizophrenia, which was being blamed on mothers as late as the 1970s.
While I agree that various strategies for determining or analysing human behaviour and building a theoretical structure are all good ideas, the problem – as I see it – is that we all live in a society where we already have a functioning system to provide boundaries to human behaviour, otherwise known as the legal system.
With a legal system, there is no need for morality or values because a legal system provides the necessary external framework to define good and bad human behaviour. But then we have the problem of determining whether the legal system is justified or not. In which case, we really are facing a kind of incompleteness problem.
One very large problem with legal systems is that they are justified by authority, from the top down, whereas natural law theory works in the opposite direction, from the bottom up.
I actually think that science could have a large say in working from the basis of natural law and building a rational system of justice based on it. This would then replace our traditional or custom-based systems, which are clearly irrational but pragmatically necessary.
Natural law, then, would be the best domain for science to explore. This is especially important because, I feel, it is the ideas and implementation of natural rights that are responsible for our progressive and scientific liberal societies.
Philosophically speaking, the main criticism of Harris is:
1. Distinction of fact and values.
2. You cannot get an ought from an is.
These views came more or less from Hume and were given a great push by the logical positivists. Now, the fact/value barrier rests upon the classical empirical assumptions and the distinction between analytic and synthetic propositions.
So, if we reject or undermine classical empiricism and the analytic/synthetic distinction (as Quine did), then we can go on and knock out the barrier between facts and values (as Hilary Putnam did).
As to is and ought, this rests upon classical foundationalist assumptions; again, reject these in favour of coherentist or abductive approaches and you have your ought from is.
Now, I’m a big fan of David Hume and have a good degree of sympathy with the logical positivists, but some of the philosophy that gets used to reject Harris’s arguments outright is simply bad philosophy, and old philosophy too.
I dunno if this is Harris’s position, but I’d like to defend a position that sounds similar.
There are three main points:
1) Most moral reasoning depends on reasoning from more basic moral principles and from nonmoral facts. Moral reasoning can be mistaken, if purported nonmoral facts are in fact false, or if people make errors of reasoning and purported consequences don’t actually follow. So, for example, if you think that embryos have souls that make them human persons, and derive an anti-abortion stance from that, that can be a moral mistake if in fact embryos don’t have souls, or if you make a mistake in reasoning somewhere along the way.
2) Morality is about certain kinds of things and not others; it isn’t just about what you like, but how and why you like it. So, for example, if I really like the color red in the usual way people have color preferences, but to a very extreme degree, I might make it my life’s work to maximize the number of red things in the world, and go around painting things red and encouraging other people to do so. That wouldn’t make my preference for red a moral preference; it might motivate me, and others with a similar sense of aesthetics, but not in the right way. If some person with no moral sense but a strong preference for the color red claimed that redness was their highest moral value, they’d be wrong, and be a broken moral unit. Scientifically, they’d simply be mistaken about the nature of moral good and bad, as well as what particular things count.
3) Morality is largely about human flourishing in some sense; that’s not only what it evolved for—limiting damage due to human selfishness—but largely what it is about at a basic psychological level. That doesn’t have to be a sharp-edged concept for it to be true and useful. Some people might ultimately value straight utility (happiness) above all, others might value proper functioning (e.g., maximizing human capabilities and freedoms) above all, but it still distinguishes a lot of moral things from nonmoral things. Whether we agree exactly on the same ultimate values or not, we can often come to rational agreement on how to make things better. (E.g., you might want to avoid a police state because it reduces people’s self-determination, and I might want to avoid it because that makes people miserable, but we can agree it’s a bad thing, and both are doing so out of concern for human welfare of some sort.)
One difficulty with this story—which makes it interesting but not necessarily wrong—is that it’s not clear that human morality evolved to do what you might assume, or to be the kind of thing it might seem.
We seem to have evolved moral systems that favor in-groups over out-groups, for understandable evolutionary reasons. You just can’t afford to be very altruistic toward everyone, because selfish types will exploit you and win in the long run. In practice, moral systems evolve to limit altruism, and especially the scope of altruism, i.e., whose good or rights we care about.
I think it’s a crucial empirical question how they evolve to do so. Like Peter Singer, I think we’re innately able to feel empathy and a sense of justice about anybody who’s like us, and how we limit that empathy in practice usually depends on social systems that define out-groups as not like us.
That is, the altruistic aspect of morality is more basic, psychologically, than the altruism-limiting features of particular moral systems that coevolve with societies, and societies evolve to limit the scope of altruism by promoting certain kinds of moral mistakes – e.g., promoting myths that Others deserve their inferior social status, are less deserving of moral consideration, etc. (E.g., Karma, or Divine Command Theory plus a Chosen People, or myths about evil things that the Others do.)
Whether that’s true, or how true it is, is an empirical question about the psychological mechanisms we’ve actually evolved.
It could certainly have been otherwise, in principle. We could have basically tribalistic instincts, and innately have no capability for general altruism toward those outside the group. You can imagine a social species for which tribal loyalties were everything, in a hard-wired way, which would not need any moral mistakes to exclude Others from moral consideration. They just wouldn’t care, and that would be that.
It does seem to me largely true that we’ve evolved to make this scope-setting socially programmable, and that the programming does generally hinge on duping people in ways that don’t work if they’re conscious of them. (Or don’t work as well.) That is the main reason why I am optimistic about humanistic morality. The capacity for broad empathy doesn’t go away on rational reflection in light of actual facts, but the excuses for not giving moral consideration to others generally do.
And that’s the major reason I’m a Gnu Atheist. I really think that secular morality is superior to religious morality, because religions coevolve with societies to be part of the morality-limiting system—they muddy the moral waters, erect in-group/out-group distinctions, and cobble up bogus rationalizations for an unjust status quo. Evolutionarily, that’s largely what they’re for, and that’s why they’re so dangerous.
Harris will be on the Daily Show tonight.
Paul W makes some good points.
To philosophically defend Harris’s position you need to do the following.
1. Establish and defend moral realism – what is a moral fact and defend it from scepticism etc.
2. Argue that the purpose of morality is human wellbeing.
3. Argue that science (broadly defined) can answer moral problems as they relate to human wellbeing. And show that they can do so.
4. Refute G. E. Moore – the naturalistic fallacy.
5. Refute Hume, J. L. Mackie etc. on the distinction between facts and values and the issue of ought from is.
6. Answer issues of practical ethics, i.e. who gets what, how much, and how? What happens when wellbeings clash?
Now, as I hinted at in my post above, several of the arguments used against Harris’s position are based on now-rejected philosophy, and also on some assumptions that themselves need to be defended.
I’m not sure how Harris will tackle the philosophy (he is also a philosopher) but his position can be defended philosophically.
A good takeaway:
Thanks, Paul!
Do you really need to read the book to see how supercilious this review is?
http://online.wsj.com/article/SB10001424052748703882404575520062380030080.html
It is a mistake to expect rational behavior.
1) ‘Rationality’: the cerebral cortex is merely a wrapper on our much older primate brain. While the rational shell can modify and affect our behavior, much of our gut feeling and decision making (including our perception of morality) is not accessible to our conscious minds.
2) We should not be surprised that simple psychological game tests looking for rational behavior often don’t find it. There is probably good reason why we all have a degree of ‘non-rationality’ because sometimes the locally ‘rational’ decision may not be the best one in the long run. Having a variety of human responses increases the chance that some will hit the better solution.
3) Rationality in behavior needs to be viewed within the context of the individual. For example:
When researchers discuss ‘rationality’ in people’s car choices, they usually quantify things like price, reliability, resale value, etc. and wonder why everyone is not driving the same half dozen models. It’s lots of fun to ridicule the car ads that feature sexy women as the ultimate example of irrationality.
But I argue that within the context of some buyers it is not irrational. Status is critical for success (including mating success) in many social animals including primates. Now the individual status cues vary from culture to culture and time to time, but they include visible display of wealth, especially wealth that can be squandered on at least some degree of extravagance. From an evolutionary point of view, acquiring status IS a rational behavior.
Morality, too (as touched on by others above), is not an absolute. There is no absolute metric as to the desired outcome. How do you weigh the overall effect on society? By some average over all members? By what effects it will have (assuming you can even quantify them) on different parts of the society? Or on individuals? If some individuals will be negatively affected, but the average will improve, is that OK? How do you compare the value of ‘harmony’ against the value of ‘individuality’, the value of ‘security’ over ‘freedom’? Do you pursue policies that will provide the longest average lifespan, or policies that will enable individuals the most control over their lives?
I don’t want to defend neoclassical economics or the strong versions of rational choice theory, but I do want to point out that blaming them on Nash’s paranoid schizophrenia is an ad hominem argument, reinforces unfair stereotypes of the mentally ill, and is just really tacky.
Nash is a genius who made great contributions to game theory, a very worthy endeavor with important applications (e.g., in evolutionary theory). The fact that he was at times a paranoid kook doesn’t mean that his theorems, etc., weren’t valid, and the fact that a lot of economists like such idealizations a bit too much (or take them the wrong way) isn’t really his fault.
The is-ought distinction is some kind of dogma among people who have only heard about Hume’s argument from other people.
Read Hume for yourself, and tell me whether there is even an argument there (or whether he is simply pointing something out…)
Michael:
I hope you don’t feel targeted here but you made a list and that makes it a nice way to wrap up an argument.
Now, 1 is obviously and quickly false. What Harris is aiming for is a science of morality, so what he needs to establish is that moral questions lend themselves to true or false answers. You can accept this without arguing for moral realism, for example by being a subjectivist, or maybe by taking Mackie’s route, which is to say that moral questions have true or false answers, it’s just that all the answers we can give are false.
For 2 I’d disagree and say that that’s not necessary for Harris’s argument, though it is indeed what he ultimately argues for. A science of morality would be perfectly possible if the only good referred to what was good for plankton or the universe or ping pong balls. I’ve heard it argued before that information is the ultimate moral good and that all moral actions seek to preserve information, and while this may seem perverse to you or me, it’s a perfectly coherent moral code that is perfectly amenable to being a science of morality.
3 I think is a pretty good statement of Harris’ goal and it’s close to what I’d be inclined to argue for. One of the reasons I don’t like Mackie’s argument is it seems to expect me to make commitments about what moral principles look like before I even begin my inquiry and I think that’s a bit cart before the horse. What I want is for us to figure out an epistemology or maybe even better a research program for ethics and only then get into arguing what the answers to our questions are.
4 is frankly a toughie and I’ll not pretend to have an answer. My first stab inclines me to say that it’s generally accepted that there’s quite a lot of ambiguity built into cognitive and linguistic content, but we seem to manage. Not a great first stab, so I’ll give you that one.
5. The question of moral motivation is another matter entirely. I’ll be careful to put Mackie to one side here because I’ve not got a good synthesis for how I’d link his views to Hume’s, which is simply my own personal ignorance talking. For Hume’s argument it’s a little easier, and having recently read Ryle’s The Concept of Mind I’m pretty much with him here. Hume has basically fallen for the dualism that seems to plague how we conceive of the mind, or maybe more specifically how philosophers conceive of the mind. There is no gap between our reasoning and our actions that we need some extra force to bridge, and to think otherwise is to fall into Descartes’ trap. This isn’t to say that action isn’t a complex of conflicting intuitions and urges and fallacies and contingencies, but it is certainly enough to dispel the argument that we need something extra to turn reasons into motives.
To finish I’ll say that 6 doesn’t really strike me as an issue because he’s pretty bluntly pointed out how we would go about answering problems of practical ethics: through the science of ethics.
Hmm… Is it just me or am I starting to look a bit like a cow at the Restaurant at the End of the Universe?
@Gilad, precisely. My understanding is that Hume was basically saying, “In many conversations, I have noticed people go on about what ‘is,’ then, without warning, move to ‘ought,’ without ever building a bridge.” His suggestion is not that it is impossible to go from ‘is’ to ‘ought’ – this is the strawman many people erect – but that there must be some arguments in between to link the two.
Egbert:
In addition to Paul L’s point, I would like to point out that John Nash did not invent game theory; he developed the Nash equilibrium method of solving games. This was a massive innovation, but not the start of game theory. For that, you have to go back to Thomas Schelling and John von Neumann, neither of whom was mentally ill, as far as I am aware.
Von Neumann and Morgenstern are generally credited with the invention of game theory, not John Nash. Tit for tat was Anatol Rapoport’s contribution to a formal competition among strategies for the iterated prisoner’s dilemma: first cooperate, thereafter repeat the opponent’s move. Instances of tit for tat obviously coöperate with each other, which is why their instantiations thrived in the population of the initial trial.
It’s intrinsically difficult to separate wired from cultured behavior in humans, due to our prolonged infancy. By the time our neural wiring is complete, our parents and everyone else have been trying to train us for a year or three. Somehow we manage to learn to walk, talk and feed ourselves and the dog.
Science has some signal contributions to make to morality, but they come from its practice rather than its findings: honesty is required and authority is ignored. It’s democratic in the sense that some of the greatest minds came from the humblest of circumstances, like Mike the bookbinder’s apprentice and Al the patent examiner, and there is no final court; a congregation eventually arrives at a consensus.
Michael:
Hume’s argument is a bit bigger than that. He does, for instance, argue that no amount of reason can result in action, which is a good deal more than just the claim that people illegitimately move from is to ought.
@Montag,
But isn’t that a different argument? That reason is a slave to passions?
@montag
I actually agree a good deal with Harris; I was just pointing out the philosophy that needs to be overcome for his position to be defended. I think it can be defended.
We seem to disagree over the issue of moral realism. I have not read the book, but from the talks and op-eds etc., Harris seems to me to go straight for moral realism per se. But yes, his approach is consistent with a quasi-realist position, I guess.
One thing I will say: if Harris does not deal with the objections I have listed, people will simply go on throwing them up again and again.
A good read BTW is Hilary Putnam’s The Collapse of the Fact/Value Dichotomy.
Michael D:
No wait, I might have left the impression I think you’re wrong in your definition there, which I don’t. At best I might say that I think the two arguments complement each other.
Michael F:
Thanks for the tip; I’ll certainly try and fish that one out. By the quasi-realism comment, can I assume I might benefit too from getting around to a proper look at my copy of Blackburn’s Ruling Passions?
Moral policies are the result of negotiations. Initial positions may be rational or emotional or in most cases a mixture that is a kind of internal negotiation. One can impose a retrospective rationality of the game theory kind and say that’s the rational reason, and that will be satisfactory as an explanation. The actual process won’t be easy to trace. I suppose I come down where Michael De Dora does in #33 (if he agrees with the position):
Moral arguments and conclusions, then, will have the character that negotiations frequently have of not being strictly rational, except in the larger sense that it’s rational to compromise differences where you can. Science can investigate the process, and if that’s what Harris means, I’m with him. That doesn’t collapse the value/fact distinction, though. The treatment differs in some respects. There is no law of contradiction with negotiated decisions concerning morality. You can be a successful hypocrite, for example.
Michael, it seems as though you think the idea of an is-ought gap is so obviously indefensible that it would be a stain on Hume to attribute it to him. Hence you refer to it as a “strawman”. But the is-ought gap has received an enormous amount of attention, much of it positive. So I find your remark quite strange. Could you elaborate?
In Adam Curtis’s documentary, The Trap, John Nash openly admits to being crazy, and so I was not dismissing game theory because he was crazy, but because he assumed that people were rational and selfish, which did not match actual scientific data. I also said that he developed it, not created it. I think it is important that, in order for a theory to be correct, it would need to be based in reality, but game theory almost immediately became practical and influential in policy within RAND and in political and economic theory.
Egbert:
In your comment #14, you said:

“He is a paranoid schizophrenic, but this hasn’t stopped John Nash’s theories from being applied within economics and politics on the same assumption that people behave rationally and selfishly (which they don’t).”
That sure sounded like you thought his being crazy was relevant to whether people should have been impressed with his mathematical/scientific ideas, and the “this hasn’t stopped” part sounded like you thought his craziness should have stopped people from applying his ideas.
That may not be what you meant, but it did come across that way.
Michael at 33 is correct. But Hume seemingly thought that the bridge between “is” and “ought” would always involve something that we’d now call subjective, such as someone’s desires or values or whatever. Something like this has to be fed in to get “ought” out at the end. We’d now say that the bridge from “is” to “ought” always involves affective attitudes. Sooo, you can’t get an “ought” without feeding in affective attitudes at some earlier point – that’s really a better summary of what Hume was probably getting at.
We now know that there are, in fact, ways of getting validly from “is” to “ought” without feeding in affective attitudes, but they tend to involve logical tricks and are not very useful. Here’s just one way:
P1. Michael is intelligent.
P2. Michael is not intelligent.
C. We ought to drop a hydrogen bomb on Paris.
As all logicians know, the above is a perfectly valid argument. I’ll leave it to other readers to discern why this is not a very useful way of deriving an “ought” purely from “is” statements.
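For the logically inclined, the validity of that first argument is just ex falso quodlibet, and it can even be machine-checked. A sketch in Lean (the proposition names are placeholders for Russell’s example):

```lean
-- Ex falso quodlibet: from contradictory premises,
-- any conclusion whatsoever follows validly.
example (MichaelIsIntelligent BombParis : Prop)
    (p1 : MichaelIsIntelligent)     -- P1
    (p2 : ¬MichaelIsIntelligent)    -- P2
    : BombParis :=                  -- C
  absurd p1 p2
```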
Here’s a trickier example:
P1. What Michael says is always true.
P2. Michael says: “We ought to drop a hydrogen bomb on Paris.”
C. We ought to drop a hydrogen bomb on Paris.
Again, this is a valid argument, but hopefully it’s clear why this kind of appeal in the premises to moral authority is not useful.
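This one is machine-checkable too; a Lean sketch along the same lines (again with placeholder names), modelling “Michael says” as a predicate on propositions:

```lean
-- An authority premise smuggles the "ought" in:
-- if whatever Michael says is true, and Michael says we ought to X,
-- then "we ought to X" follows validly.
example (Says : Prop → Prop) (OughtBombParis : Prop)
    (p1 : ∀ p : Prop, Says p → p)   -- P1: what Michael says is always true
    (p2 : Says OughtBombParis)      -- P2: Michael says it
    : OughtBombParis :=             -- C
  p1 OughtBombParis p2
```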
Thanks, Russell. Now I have coffee all over my keyboard.
Why, oh why, do nerdy logic jokes do this to me?
Grunt. I completely misread Michael and retract my question.