Is morality choice or perception?
Anil Gomes, a Fellow and Tutor in Philosophy at Oxford, explains Iris Murdoch’s version of moral philosophy at the TLS:
Her views on moral philosophy are set out in three papers published over this period, none of them in the mainstream philosophy journals where her former colleagues might have come across them, later collected together as The Sovereignty of Good (1970). She presents herself throughout these essays as opposing a certain picture of moral philosophy. It is a picture, Murdoch tells us, that can be found in the work of R. M. Hare, where moral utterances are a kind of prescription, in Sartre’s existentialism, where moral value is created by our undetermined choices, and in the hero of many a contemporary novel. According to this picture, moral judgements are not in the business of describing how things are in the world. They cannot, that is, be true or false. Perhaps they express your emotions, perhaps they prescribe your actions, perhaps they announce your decisions – but whatever it is they do, they don’t tell you how things are in the world. Morality, on this view, isn’t a matter of finding out truths about the world; it is a matter of choosing which values guide your life.
I suppose that’s what I think, pretty much. If you subtract humans then surely it’s true that morality doesn’t describe how things are in the world – or you could broaden it and subtract all sentient beings. But at the same time it depends on what you mean by truths about the world…
Murdoch, Foot, Midgley and Anscombe – that wonderful generation of women philosophers – all rejected this idea of morality. The lessons of the war seemed to be that there is such a thing as getting it right or wrong, and that it mattered that one get it right.
Yes, in human terms, but if you subtract humans – etc.
Murdoch took the rejection much further than any of the others, and in a way which led her closer, in some guises, to Plato, and, in other guises, to a form of mysticism which will be familiar to anyone who has read her novels. The aim of the essays in The Sovereignty of Good is to replace this picture of moral life with an alternative, one that is adequate to our empirical, philosophical and moral existence.
What is this alternative picture? In contrast to her opponents, Murdoch stresses the reality of moral life. To acknowledge the reality of moral life is to recognize that the world contains such things as kindness, as foolishness, as mean-spiritedness. These are genuine features of reality, and someone who comes to know that some course of action would be foolish comes to know something about how things are in the world. This view is sometimes thought to be ruled out by a certain scientistic conception of the natural, one that restricts what exists to the things that feature in our best scientific theories. Such a view is too restricted, Murdoch thinks, to capture the reality of our lives – including our lives as moral agents. Goodness is sovereign, which is to say a real, if transcendent, aspect of the world.
Yes, but human lives. Humans are a contingent fact about the world. Opportunity and Curiosity haven’t reported back any kindness or foolishness on Mars, as far as I know.
Making sense of these ideas requires a metaphysics of morals, one that helps us to make peace with the existence of transcendent goodness. But if morality is to move us, we need not just a metaphysics of morals but also a moral psychology: an account of how we creatures, concrete as we are, are able to know about, and be guided by, the transcendent good. Here Murdoch aims to replace the metaphor of choice which dominated her opponents’ work with the metaphor of vision. We can look carefully, we can attend to people and their situations, and when we do so, we can come to know how things are in the moral realm, to know how people have behaved, and to know what we ought to do.
It’s interesting, I think, but not particularly convincing. I completely agree that “there is such a thing as getting it right or wrong, and that it matter[s] that one get it right,” but not that it’s somehow transcendent. I don’t think I believe in that kind of transcendence. (What kind then? Just the ordinary factual kind where you transcend an insult or an inconvenience.)
Any argument that says there’s no right or wrong would have to justify why that position is right.
“To acknowledge the reality of moral life is to recognize that the world contains such things as kindness, as foolishness, as mean-spiritedness.”
Interestingly, each of those things can be defined without reference to a moral code. Kindness = acting in such a way as to increase the well-being of another person. Foolishness = acting in a way which is not likely to achieve your purposes. Mean-spiritedness = (approximately) the inverse of kindness. Nothing in those definitions implies that you ought or ought not do those things.
Having recently completed a seminar in metaethics, I’ll be insufferably didactic for a minute here. The anti-realist positions Gomes describes are called expressivism and prescriptivism respectively, and they are non-cognitivist positions, i.e. they hold that moral discourse doesn’t even try to make truth-claims. I think this is *sometimes* true, but it’s also the case that often people (particularly conservative religious types) intend their moral claims to be straightforward factual statements. If you accept the latter, but dispute that any such claims can be true, then you’ve adopted the anti-realist position known as Error Theory (due to J. L. Mackie).
The realism he attributes to Murdoch (whom I’ve never read) does sound unconvincing: How does looking at something “with love” allow me to *perceive* its essential moral qualities, as opposed to attributing or even constructing them myself? Also, moral psych is an actual field — it’s what Jonathan Haidt (among others) does. It’s largely about our reactions to morally salient situations — runaway trolleys, consensual sibling incest, moral judgment in the presence or absence of distractions such as bad smells (the ease with which that last can be manipulated weakens claims that we have any reliable perceptions).
Nonetheless there are current, non-religious, philosophers who hold some form of moral realism, the late Derek Parfit being the example who comes to mind (as I had to read several chapters of _On What Matters_. Also, my prof for the course did his PhD under Parfit). His argument for realism is rather technical, and I would have to re-read it a bunch of times to understand it (which is typical for Parfit). But, as Gomes mentions (and Mackie makes a big deal of), I don’t understand where these moral facts would exist. Thus, I remain an anti-realist. I think that, lacking a truly objective reference, the best we can do to ground ethics is to appeal to the most universal human needs and desires we can find, which I think yields a sort of quasi-Hobbesian contractarianism. I understand this is a bit of a minority view (David Gauthier being the only prominent proponent).
“If you subtract humans” is a bizarre sort of condition to bring up at all when discussing morality. Of course moral facts — or however one characterizes the basis for saying that some moral claims can be true or false, more or less well-justified — aren’t going to be found in the examination of the laws of physics or the workings of plate tectonics, any more than geological facts are going to be found in the examination of human lives. Whatever morality is and however one conceives of it, it’s ABOUT something particular — the behavior of certain sorts of beings with certain sorts of characteristics — and not about other things. So? Nothing important follows from that.
To put the point another way: Just because most of the universe is entirely devoid of life doesn’t mean there is no basis for true or false claims about biology. And just because there was a time in the history of the universe when there was no life at all, and therefore no biology, and therefore no biological facts to provide a basis for claims about biology to be true or false (and indeed no one to make such claims), none of that entails that there is no truth or falsity about claims about biology here and now, claims about life on Earth. Similarly, one can grant that moral claims could not conceivably be true or false before there were the specific sorts of living creatures that bear whatever characteristics are necessary for claims about morality to be true or false, but that in no way entails that there is no truth or falsity about moral claims once the right sorts of beings (e.g. humans) exist. I am in no way endorsing Iris Murdoch’s particular account of what makes moral claims true or false. I’m just pointing out that any such account is necessarily going to involve features of human life, and that the fact that there would be no features of human life if there were no humans is trivially true, but completely irrelevant to whether or not moral claims can be true or false.
But even if Iris Murdoch (and every other moral realist in history) is wrong and moral claims are NOT the sort of claims that can be true or false — that is, if justifying moral claims is simply impossible — the reason for that most certainly isn’t because there wouldn’t be morality if there weren’t humans. I’m sure Iris Murdoch (and every moral realist whose moral realism was unrelated to theological claims) would AGREE that if there were no humans (or relevantly similar creatures), there would be no morality — and furthermore that if human beings and human life were very different than they are, then morality itself would be different, i.e. some moral claims which are true for human beings might not be true when applied to very different creatures. Moral realism in no way requires or logically depends on morality being completely independent of contingent facts about life in general and human life in particular. Morality is only required to be as real as the beings whose conduct morality evaluates, not real in some bizarre sense that is wholly independent of the nature and existence of such beings.
And, of course, moral realists can be perfectly content with the existence of *some* moral claims which are not and cannot be evaluated as true or false. For example, as a moral realist, I’m perfectly content to say that the claim “God is good” cannot be evaluated as true or false because the named entity “God” does not exist, and so it cannot have a character or actions subject to moral evaluation. But since humans DO exist, moral claims can be intelligibly made about them, and we can sensibly ask how those moral claims are to be evaluated and justified (or not), and reference to a possible universe without humans simply has no bearing on the issue.
G. Felis, your post put me in mind of one of my favorite books, Terry Pratchett’s Hogfather (Death is in all-caps; the other speaker is his adoptive daughter, Susan):
Freemage, it’s a great exchange in a good book. Susan is actually one of my favourite characters, and not just because she matches both name and personality to my beloved. The latter is perhaps best exemplified by this quote from Susan’s Wikipedia page
Prefix: I am not a student of philosophy in general or ethics in particular. I’ve just argued a lot about them on the Intertubes.
I think “transcendent” is definitely the wrong word to use. While I was reading the OP a half-hour ago, two things dawned on me. One is that morality is an emergent property (not “transcendent”) of matter, no less than consciousness is (because morality appears to be an emergent property of sentience). “Wetness” is another emergent property of matter, and it is quite real and objective. I see no reason why morality should be substantially different.
Two, after many years of arguing otherwise, I now think that there is a “real” morality. Sure, we can always find someone who defines cold-blooded murder as “good,” or altruism as “bad,” but we’ve also always been able to find people who see sandstone erosion pits and conclude they’re dinosaur tracks, or see artifacts of camera optics and conclude there was a moon-landing hoax. We do not define scientific correctness based on crackpots and corner cases, so why should we declare morality to be completely subjective and relative based on the ideas of whack-jobs and assholes?
Just like Newton’s laws will solve the vast majority of “everyday” physics problems to reasonable decimal places, there exists a sort of “classical” morality that will get 99.99% of people through their days just fine without having to delve deeply into weird “relativistic” ethical equations. Einstein showed that Newton’s physics had to be substantially modified under certain conditions, not that Newton’s physics had to be thrown out or completely reversed. Even if a compelling case could be made that saving a million people justifies some horrendous act against one innocent person, that doesn’t mean that the horrendous act is suddenly in some moral “gray area” for people who are just going about their normal days. The discovery of corner cases doesn’t mean that the general rule is suddenly open to debate in science. Why do we pretend it is in ethics?
Yes, I know Einstein’s Relativity applies to everything. I know, for example, that NASCAR cars while racing are on the order of 50 quadrillionths of an inch shorter than the same cars at rest. But that’s my point: that contraction is about 1/80000th of the width of a single Hydrogen atom, and so makes no practical difference to the drivers, pit crews, mechanics, or the outcome of a race. If you’re placing a bet on a NASCAR race due to some relativistic equation you’ve solved, you’ve wasted your time.
Similarly, the odds that an evil supervillain will set off a nuke in Seattle unless I gouge out the eyes of all the puppies at my local animal shelter are extraordinarily small, and so nobody should think that blinding puppies might be a good thing. The “gray area” we’re talking about is so incredibly tiny, it may as well not exist. A person’s everyday moral calculus need not take such contrived and unlikely examples into account when deciding (for example) whether stealing a coworker’s lunch is acceptable workplace behavior.
I’m sure all this has been discussed to death among professional philosophers, but it was a surprise to me, this evening.
Oh, note that while I’ve spent a long time arguing against an objective morality, I’ve thought (contrary to many) that morality is certainly measurable, so long as you define specifically what “good” is. For me, it’s been about what one’s goals are, and whether one’s actions move one closer to or away from those goals. So if your goals include blinding puppies, then threatening to nuke Seattle is “good,” because it will allow you to coerce people into helping you reach your goals.
What’s changed for me tonight is that, rather than agreeing with various people online that nothing can possibly say which goals are right or wrong, it now seems obvious to me that some goals are just flat-out wrong, just like some scientific conclusions are just flat-out wrong. Nobody comes up with their moral goals out of thin air, just like nobody reaches a scientific conclusion without some sort of reasoning. We can examine the steps a person takes to decide that X should be one of their goals (just as we can examine a logical argument for some conclusion) and say, “no, this step in your moral argument is completely fallacious, so you are wrong, period.”
I think I’ll continue being insufferably didactic ;-).
Freemage @5: I think you’re actually contradicting the realism of G Felis @4. The Pratchett passage is giving the position known as fictionalism: These things don’t really exist, but we all agree to pretend they do because we need them to function as the particular kind of social animal that we are.
Dave W @7: I think a realist needs to be able to give an account of what morals consist in, and I read you as saying that they consist in everyday human moral intuitions. Yes, it’s true that that’s adequate to navigate the social world 90% of the time, without needing to be able to solve every contrived permutation of the trolley problem or decide between consequentialism and virtue ethics. But intuitions vary widely among cultures, and even among individuals in the same culture (Haidt’s work shows that self-identified liberals, conservatives, and libertarians hold different intuitions as primary). Which ones are the “real” ones?
Now it’s probably true that, given our psychology, there are only a limited number of ways humans can flourish individually and collectively, and that sets the space of possibilities within which morals can be discovered or constructed. And note that the intuition that we should all flourish, rather than just an elite, requires justification. I mean: You think so, and I think so, but how can we get a member of that elite to agree with us, if they happen to lack that intuition?
Steve Watson: “I think a realist needs to be able to give an account of what morals consist in…”
I certainly agree, and that’s why I explicitly said I’m not endorsing Iris Murdoch’s particular version of moral realism; I don’t know enough about her account to evaluate it thoroughly, but I’m not terribly impressed with what I do know of it. But however one gives an account of what morals consist in, any sensible account* will be grounded in facts about human life, not in physics or geology or even biology — although the latter may certainly be relevant in many respects. That’s why I always found the argument for moral fictionalism offered by Death in Pratchett’s Hogfather to be rather stupid and shallow, however cleverly phrased: Of course you won’t find mercy or justice atoms! But you also won’t find human atoms or beetle atoms or granite atoms; it’s all just atoms. Once you’re down to atoms — that is, down to physics — you’re looking in entirely the wrong place for geology, biology, anthropology, or morality. There are some conceptions of reductionism that don’t stand up to a moment’s reflection, and saying that morality is unreal because it wouldn’t exist without humans is one of them.
*By “any sensible account,” I mean to rule out — first and foremost, though not exclusively — theological accounts for the grounds of morality, which I take to be completely unsupportable at best, and mostly worse than that.
Steve Watson @8:
Actually, no. I’m saying that if morality is a consequence of consciousness, then moral truths ought to be as subject to scientific discovery as any other emergent property of biology. And if that’s true, then we should be ignoring or dismissing asswipes and fringe philosophers when discussing morality with/around the broader public to avoid the impression that there’s some dispute over whether or not horrible act X is, in fact, horrible.
Yes, and those political groups have widely varying opinions on climate change, too, but we have tools to distinguish which opinions are correct.
I’m suddenly wondering whether moral theorists are generally so afraid of having their ideas tested in some sort of rigorous fashion that they’ve declared it’s impossible to do so (by treating the corner cases and contrived permutations as serious challenges to median norms).
How do we get an elite member of the oil industry to agree with the scientific consensus on climate change, if they happen to be at risk of losing millions in personal income by doing so? By adjusting societal norms so that they’ll fail to gain larger benefits than their perceived losses if they persist, and/or so that their political voices become irrelevant until their generation dies out, and/or so that litigation eliminates them from the market. That appears to be the way it’s been done historically, on other matters. Again: I am arguing that we should NOT treat morality as somehow different from scientific questions. You’ve asked a political question about morality, and I’m saying that the lessons learned from political questions about science apply.
(Sorry, haven’t read the comments, just the post so this is only in response to that.)
First: What? Both sides of this sound like barking up the wrong tree (different trees, both wrong).
Morality seems rather obviously to be a set of rules. They’re made by humans, so again duh, they don’t exist independent of humans. They are, however, essential to humans if we’re to live together, and living together is also essential for us.
Depending how well we make those rules, we can live together happily and usefully, or create misery.
To me it also seems obvious that you can judge different systems of morality by how much joy and satisfaction in life all the people living by it have. (Yes, I know, not always easy to judge whether from inside or outside the system.)
Why does it have to be more complicated than that? Asks the person who never studied moral philosophy.
Dave W @10: Whether or not you’re aware of it, you’re suggesting something along the lines of Sam Harris’s proposal in The Moral Landscape. The book seems to have received mixed but, on the whole, negative reactions from the philosophical community (one of my instructors called it “the worst book of moral philosophy”). Having not read it myself, I couldn’t comment, so I’ll just refer you to the Wikipedia article for a précis of the whole matter.
No doubt morality is, in some sense, an emergent property of consciousness (more precisely, I’d say that it emerges from the interactions of conscious entities possessing interests). But that only gets you as far as moral psychology — a descriptive, rather than prescriptive discipline. And the results coming out of that indicate that we have a hodge-podge of moral intuitions that can’t, in any obvious way, be harmonized into a consistent theory. By that I don’t (only) mean among different people or political groups, I mean that each of us individually will respond to different scenarios in ways that show inconsistent patterns of moral judgment.
As it happens, this is exactly what one might expect, if our moral faculty evolved as a mechanism to allow us to be a highly intelligent social species: Like our perception, our cognition, our digestion, our reproduction, it was not designed from the ground up to instantiate some ideal theory of what that function should be, but rather cobbled together, kluged, and modified ad hoc to meet the immediate need, and as long as it worked (in the instrumental sense of promoting survival and reproduction) 90% of the time, that was good enough. Book recommendation: The Evolution of Morality by Richard Joyce (which I have read).
The thing that drove me the most crazy about Harris’s book was that he triumphantly presented utilitarianism as if he’d invented it and then completely forgot to say why anyone should care about the greatest happiness of others.
OB @13: Exactly. I want to be happy (or flourish, or have my desires satisfied, or whatever theory of value you subscribe to, which is a whole can of worms by itself), and you want to be happy, but that doesn’t tell us what to do when our respective pursuits of our own happiness clash. Pre-theoretically, nothing except my evolved sense of sympathy tells me I *ought* to value your happiness, or you mine. Hell, my *wanting* to be happy doesn’t even add up to that *I* ought to be happy, in the sense of something I can demand from the world. I’ll certainly *try* to make myself happy, but (absent further justification) it’s not something I can just claim as a right.
I might be empathetic enough that others’ happiness and misery is reflected in my own state and thus want to see you happy also (which was Hume’s view, and I do love me some Hume). Or I might recognize that as a social species we need each other, thus the best way of securing my own welfare is to work to secure yours as well, and it is better *for me* to have you as a friend rather than an enemy. From there we can (perhaps) make the leap from “want” to “ought”, and from the prudential to the moral. But it’s not as simple and obvious a path as many make it out to be.
Indeed. I think Plato did a nice job of showing how simple it is not in the Gorgias, where Callicles makes a persuasive case for being ruthlessly selfish.
Dave W. @10: I’m with Steve (@12) here. You seem to be completely ignoring (as Sam Harris does) the distinction between descriptive and normative claims. No observations about what IS the case (say, for example, evidence for the evolution of moral “instincts,” moral impulses, or moral behavior) can, without further argument, automatically justify normative claims about what OUGHT to be the case or what individuals OUGHT to do. And I say this as someone who has taught a course on the evolutionary precursors of moral behavior in non-human social animals. This is generally called the Is-Ought Problem or the Fact-Value Gap: I don’t agree with the position that this obstacle is entirely insuperable, but it certainly can’t just be ignored out of existence (which, again, Sam Harris does in his execrable book The Moral Landscape).
Every ethical theory faces this obstacle, and must provide some argument to overcome it. Different ethical theories provide different arguments: Utilitarians of various stripes argue that facts about what all humans do value provide a bridge over the fact-value gap; I’m not certain they’re correct to think so. Kant thought that something about the nature of reason itself provided the bridge, and I’m pretty certain he was wrong about that. And so on.
Steve Watson @14: “Pre-theoretically, nothing except my evolved sense of sympathy tells me I *ought* to value your happiness, or you mine.”
The move from everyone as a matter of fact valuing their own happiness to a general obligation to promote happiness is a product of reason, not a matter of internal motivation, nor of sympathy, according to John Stuart Mill. He does under-explain this point in Utilitarianism, perhaps because it seems obvious to him. Mill takes the value of happiness as objective. That is, when we recognize that happiness is valued by everyone, we are recognizing something generally true, not something subjective like taste. After all, I don’t think of happiness as simply valuable-to-me; I can recognize that when you experience some moment of joy or satisfaction, you’ve achieved something of value just as I would if I experienced a moment of joy or satisfaction (even if I take joy in different things). Nevertheless, Mill would agree with you that what MOTIVATES everyone — what people have the most investment in and commitment to — is *their own* happiness. It is only the dictates of REASON that oblige us to recognize that, since the value of happiness is a matter of objective fact, and I can give no conceivable rational argument for the priority of my happiness over anyone else’s happiness, we have an obligation to promote happiness in general, no matter whose.
You’re certainly right to observe that it isn’t a simple and obvious path from one to the other. Geoffrey Sayre-McCord does a particularly good job of explaining Mill’s reasoning in his paper “Mill’s ‘Proof’ of the Principle of Utility: A More than Half-Hearted Defense” (which I think is pretty accessible to the non-specialist, but I can’t necessarily judge that very well because I *am* a specialist).
(Note: please forgive my anthropomorphizing of science in this thread. It’s a shorthand, and nothing more.)
Steve Watson @12:
Except that I’m not declaring a single over-arching goal for behavior as the correct one. I think potential goals ought to be scientifically discoverable. And science lets us know our limits, too, concluding that some things are simply unobtainable, and I would think it could tell us the same thing about certain moral goals. (For example, historically, one person ruling the entire known world has been impossible.)
Here’s the thing: if you pick some goal, then physics stops being merely descriptive and becomes prescriptive. Want to launch something into orbit? We’ve got loads of physics that tell us what we MUST do to accomplish that goal. Even if we insist on getting to orbit in a suboptimal way. Physics even tells us that using a Star Trek-style transporter to get to orbit is completely impossible with today’s technology, and wildly impractical even if we had all the resources we’d need.
Why is getting into orbit our goal? It might not be. Unlike Harris, I’m not saying it must be one of our goals, just that if it is our goal, then we’ve got boatloads of data and experiment to tell us how to do it, and how well various methods work compared to one another.
And with regard to your #14, the vast majority of the methods we know to get to orbit directly conflict with the goal of reducing the effects of anthropogenic climate change, since each launch dumps gazillions of tons of CO2 into the atmosphere. But we also have the science that tells us how MUCH those two goals conflict, and also offers various ways of limiting that conflict.
Sounds a lot like the human act of doing science, too. Our scientific faculties (for example, intuitive inferences of cause and effect) are clearly the result of a rather haphazard evolution, but we’ve since learned to institutionalize our knowledge, distinguish correct from incorrect, and build upon the results. The fact that our moral faculty is the result of a similar mechanism and history clearly does not rule out the possibility of creating a real moral science. It might be a lot more fuzzy and inexact than physics, but biology is also quite messy while clearly being a scientific endeavor.
G. Felis @16:
And up until a couple of days ago, I would have said almost the exact same thing to myself. See the last two paragraphs of my #7, above.
Look, I’ll admit that my argument is largely one of analogy, but by analogy, only crackpots argue against continuing our scientific exploration of physics by arguing that physics doesn’t tell us what to do with the physics knowledge we learn. We know F=ma, for example, but it doesn’t tell us to do anything in particular with that knowledge, until we decide we need a particular amount of force applied somewhere (for example). I’d be content with a moral science that’s more-or-less identical: in thus-and-such a context, with this-or-that set of goals, SHOULD we push the fat man off the bridge or not? I’m saying that the question CAN be answered scientifically, given enough data.
Science certainly didn’t tell us that we SHOULD send people to the Moon, but once we decided that that was our goal, science told us HOW to do it given the resource, time, and budgetary constraints we faced. And if those constraints couldn’t possibly be overcome, then science would have told us that we SHOULDN’T try to send people to the Moon. A moral science could then have informed us of the best known method for convincing the Administration that wasting resources on a mission doomed from the start would be a Bad Thing.
…
Maybe the problem is the idea that in morality/ethics, there are simply too many potential contexts and goals to possibly catalog them all? I’d have to object that the sum total of knowledge gained through “the scientific method” is hugely vast and varied.
Perhaps another set of examples might help get across what I’m saying.
I want to be able to fly like a bird. Given my current resources (including, but not limited to, my bank balance and my ability to handle a wrench), the physical sciences can tell me whether I should try to build wings, and if yes, how I should go about it.
Compare with…
I am hungry. Given my current resources (including, but not limited to, my bank balance and my ability at sleight-of-hand), a moral science should be able to tell me whether I should try to steal a loaf of bread from a particular grocer, and if yes, whether I should leave an apologetic note.
Dave W @18: I’m sorry, but I’m having trouble making sense of that. You seem (at least in some places) to say that a moral science could help us reach moral goals, but you’re not saying (or saying so only vaguely) what those goals might be. That’s a problem right there: Moral philosophy is largely about deciding which goals are worth pursuing — which includes pointing out the pathological cases that inevitably ensue from choosing any overly simple conception of them. Suppose we decide we should make everyone maximally happy. Well, incorporating a euphoric (which I’ll stipulate has no side effects) in all foodstuffs would accomplish that. Or just run an electrode into the brain’s pleasure centers (see also: Nozick’s Experience Machine). But is that really anyone’s idea of a well-lived life? Surely (our intuitions tell us) *true* happiness includes the element of striving and accomplishment. And on the argument goes….
In science, we can get definite answers because the universe is out there, being itself, and suitably designed experiments can get those answers. In morality, it’s not even clear what a “right” answer would look like (e.g. as in your bread-stealing example).
Anyways, the good news is that, to the extent that what you seem to want can be done, it is being done. Moral psychology is an active research area in which there is an ongoing collaboration between the psych and moral phil fields (another book recommendation: The Moral Psychology Handbook by John Doris et al). The bad news is that it’s a hard problem, maybe the hardest problem there is, since the subject matter is arguably the most complex system we know of: human minds, interacting with one another in the large assemblage we call “society”.
Dave W @18: I missed this bit.
Which seems to contradict this:
With respect, while it’s possible I’m being thick, I don’t think you’re being very clear as to what your proposed moral science is trying to discover. Does it discover worthy moral goals, or only how to accomplish goals chosen for philosophical reasons? You *might* be able to get from description to prescription, but you still need to keep the distinction clear. (But I’ll note that what you’ve said actually sounds very much like the Wikipedia description of Harris’s programme).
And no, I’m not accusing you of specifying a particular behaviour as the correct one, and I don’t think Harris did that either. As I understand it (and some of this is spelled out in the WP article on his book), his goal is maximizing human well-being, taken to be a state of mind, which can be objectively determined by some suitable brain-scanning tech (recall that the book originated as his PhD thesis in neuroscience). How we implement that is where we get into specific behaviours.
Dave W.: Suppose you are correct in claiming that science can identify goals in some fundamental way, and that by this you don’t just mean the social science of asking people what their personal goals are, but something like natural goals, something built into humans in some way. I’ve been down this road, and in fact wrote my dissertation about it. And the road is far curvier and bumpier than you think it is.
The only goal that is “natural” in the way you seem to be looking for is reproduction: The workings of natural selection create beings whose structure (from genes on up) and behavior (instinctive and learned) are directed towards reproduction, simply by virtue of the fact that the more successful an organism is at reproducing, the more the structures and behaviors resulting in its reproductive success are represented in future generations. Because every organism is the result of 3.5 billion years of ruthless selection for reproductive success, everything else you might scientifically identify as a natural goal would seem to be either subordinate to or derivative of the overarching evolved goal of reproduction. (Also, clearly defining what a “goal” is turns out to be tricky. I had to dig all the way down into the roots of Aristotelian teleology and reframe it in terms of modern biology.)
But here’s the thing: Everything we might think of as moral AND immoral behavior is equally the product of selection. Competition and cooperation are equally natural, and both occur in every social species, including our own. The judgment that we OUGHT TO engage in and cultivate the cooperative behaviors (mutual support, empathy, honesty in communication, self-restraint) and OUGHT NOT engage in and cultivate the competitive behaviors (undermining each other, deceiving each other, greed and selfishness of all sorts) requires more than your foundations can give you. That is, altruistic goals and selfish goals — and the behaviors that realize those goals — are equally natural. Your assertion that “some goals are flat-out wrong” cannot be justified simply by identifying goals through scientific investigation; you need some additional reasoning or argument — normative arguments that evaluate, not merely scientific arguments that describe — to differentiate the good goals from the bad. Right now, it looks like you’re relying on intuition, and the history of intuitionism in ethical theory is not a proud one; it’s really just assertion rather than argument, gussied up with a fancy label.
Heh. Assuming “G Felis” is a pseudonym, I think I’ve got your dissertation on my tablet (at least, I’ve got one here that, from a very quick skim, sounds roughly like what you just described). If so, I think we’re working from different definitions of “realism”. And your exam committee has some waaay impressive names on it ;-).
Not a pseudonym, really, just an abbreviation. And the dissertation is easy enough to find: It’s publicly available from the University of Georgia library system.
To be honest, I’m not entirely sure to what extent I really *am* a moral realist. After wrestling with the realism/anti-realism debate for a long time in my dissertation research, I eventually set it aside and didn’t write much about it because it just wasn’t of primary importance to the project I was pursuing, which was an exercise in exploring the foundations of normative ethical theories, not metaethics. My view might be better seen as a variation on Aristotelian constructivism, and constructivism has an ambiguous and much-debated place with respect to the traditional realism/anti-realism divide.
Thank you. The dissertation I was referring to is William Casebeer’s, which also invokes evolution and modernized Aristotle. Yes, I’ve encountered constructivism, am aware of its borderline status w.r.t. the realism/anti-realism divide, and find it a potentially attractive position (I’m probably a constructivist w.r.t. phil of math). For background: I’m a retired engineer, now in the fourth year of a philosophy BA, with a particular interest in the naturalization of ethics and epistemology.
Philosophy is a small world: I met Casebeer at a conference some years ago, when I was still a grad student. I wasn’t even aware he’d done work in my area of special interest. I should read his dissertation and see whether and how his take on the issues parallels or clashes with mine…
Steve Watson @19:
Just like physics doesn’t specify what our goals might be for the knowledge gained while studying physics. We can use physics to fly to the Moon, or we can use physics to excel at beer pong.
And there exist material goals that physics can tell us are impossible or impractical. Science does “feed back” into our goal selection, and tell us which goals are worthless to pursue. Anyone can pick “I want to live on the surface of the Sun” as a goal, but ignoring all the various reasons that various sciences can provide for why such a life would be very, very short is… what? Ascientific? Anti-scientific? (Looking for a term that closely models ‘immoral’, since my entire argument here is an analogy.)
Except the science itself doesn’t specify what we should do with any knowledge so gained. The fact that force equals mass times acceleration doesn’t tell me whether I should build a slingshot, or a robotic toenail clipper, or not build anything at all. (Though it does tell me – see above – that I won’t be able to power a passenger plane using a pair of hamsters and a carrot.)
I’m suggesting that the sciences have a similar problem. The sciences are prescriptively silent on the broad question of what we should do with scientific knowledge. Should we use it to build a death ray or a healing ray? In science (assuming that we have the resources to build either one), there is no “right” answer.
But in the bread-stealing example, I’m trying to suggest that if we have enough data about ourselves and the others we might affect by stealing bread, we should be able to come up with an answer as to whether it’s right or wrong. An answer that would apply to anyone in the same situation. We have a goal (feeding ourselves), knowledge about ourselves (fast hands, zero dollars, a general desire to avoid harming those weaker than ourselves), knowledge about who owns the bread (a faceless multinational conglomerate that would hardly see a dent in their profits if all their bread was stolen), etc. I’m wondering why these things can’t be parameterized, measured, and a result computed. Punch in the numbers about what IS, turn the crank, and out pops an answer about what we OUGHT to do.
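To make the “turn the crank” idea concrete, here is a minimal, purely illustrative sketch of what I mean. Every factor, weight, and number in it is a made-up placeholder; I’m not claiming these are the right parameters, only showing the shape of the computation a moral science would have to fill in with real measurements:

```python
# Toy "moral calculus" for the bread-stealing case. Purely illustrative:
# every factor and weight here is an invented placeholder, not a real measurement.

def should_steal_bread(hunger_severity, harm_to_owner,
                       risk_to_bystanders, alternatives_available):
    """All inputs are on a 0-1 scale. Returns True if this toy model
    says stealing the bread is permissible in this situation."""
    # Benefit: how badly we need the bread, discounted by other options we have.
    benefit = hunger_severity * (1.0 - alternatives_available)
    # Cost: weighted sum of harm to the owner and risk to anyone else affected.
    cost = 0.6 * harm_to_owner + 0.4 * risk_to_bystanders
    return benefit > cost

# Hungry person, faceless conglomerate, no other way to eat tonight:
print(should_steal_bread(hunger_severity=0.9, harm_to_owner=0.05,
                         risk_to_bystanders=0.1, alternatives_available=0.0))
```

Of course, whether those weights and scales could ever be fixed in a non-arbitrary way is exactly the point in dispute; the sketch is only meant to show what “punching in the numbers” might look like.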
Sure, but “hard problem” doesn’t mean “impossible,” which is often what’s meant when people say, “you can’t turn an ‘is’ into an ‘ought’.”
At #20, you wrote:
Science doesn’t discover worthy scientific goals, just how to accomplish goals chosen for non-scientific reasons. (It can also generate knowledge for the sake of knowledge, but that’s also not something that science tells you that you should/must do.) As above, science does have the ability to tell you that certain goals are outright fantasy, and I’d guess that a moral science could also inform goal-choosing to say (for example) “you will never rule the universe with an iron fist, so don’t even try.”
That is precisely the single over-arching goal to which I was referring. Just like regular science doesn’t tell you if you should be a physicist, a biologist, or any other -ist, my idea of a moral science wouldn’t tell you whether you should be a “human well-being maximizer” or have any other particular moral goal.
Well, I guess certain facts coupled with some social science and economics could suggest a policy like “we need to encourage more people to become physicists,” but that’s less like dictating what should be a personal choice and more like trying to tilt the field in a particular direction. And I guess a moral science coupled with social data could suggest that “we need to encourage more people to become pragmatists” might be a good governmental policy. But even if we encourage people to become physicists or pragmatists, we’ll still need biologists and utilitarians.
G Felis @21:
No, I’m specifically not saying that (at least not any more; did I before? Probably. What I meant was that science might suggest new avenues of research and possible applications), other than that science can determine that some goals are unachievable or unrealistic. After all, scientific and moral goals are both essentially random and infinite: people sitting on their couches can just dream up new goals that nobody has ever thought of before. (I think I’ve come up with a couple of novel goals in this very thread.) I think a moral science, like regular science, should be able to distinguish the feasible goals from the infeasible.
Definitely not saying that. Wanting to travel to the Moon is surely not a “natural” goal, yet science showed us how to do it.
I disagree. If someone’s goal is to build a ladder to the Moon that they could climb, then materials science, physics, and human physiology can each very specifically determine and describe why that goal is impossible to reach, for different reasons. In other words, it is scientifically wrong. I think that a moral science could identify that some moral goals are similarly unobtainable. Or perhaps “not even wrong.”
What I’m intuiting is that a moral science analogous to “regular” science exists, and that it would function similarly to “regular” science in terms of what it can and can’t do for us.
I am very much enjoying this back-and-forth we’re having, but I think it’s clear that you and Steve Watson aren’t quite following what I’m talking about (probably my fault; despite how much time I spend editing and re-editing these comments, I imagine I’m coming across as all over the place in this particular thread), so we’re not yet having the discussion I wish we were having.