Counting beliefs
I’ve been thinking of objections to this – to the reply to the reply to the claim that belief in God is no more a “faith position” than empirical science is, because our belief that our senses are a reliable guide to reality cannot be justified. One reply to that is that we all assume our senses are a reliable guide to reality, while belief in God is an extra; the reply to that reply is
if we accept (ii) [there is a God], then (i) [our senses are a reliable guide to reality] is no longer an assumption. We can justify it by appealing to (ii) (in the style of Descartes – a good God would not allow us systematically to be deceived). So, each belief involves an equal amount of “faith”.
Stephen asked for comments. I said
It seems to me we have to accept both (ii) and (iii) for (i) to be no longer an assumption. (ii) was “there is a God”; (iii) is “the God there is is good”. So the amounts of faith aren’t equal; (ii) and (iii) are at least double (i).
I’ve been thinking of (iv), (v), (vi) and so on – actually I thought of a couple of them at the time but wanted to keep it simple, not to say stark. But it takes a lot of steps to get from ‘there is a God’ to ‘a good God would not allow us systematically to be deceived, therefore our senses are a reliable guide to reality,’ doesn’t it? You could just change (ii) to ‘there is a good God’ in an attempt to eliminate (iii), but it would be cheating, since the game is counting items believed in order to compare quantity of faith needed for theism and atheism. So those are two, and then there are more. (iv) the good God there is had and/or has something to do with the way our senses are. (v) the good God there is that had something to do with the way our senses are has no way reliably to inform us about itself, despite having had something to do with the way our senses are. (vi) the good God that did these confusing things is not bothered by the fact that there is a convincing alternative explanation for the fact that our senses are a mostly reliable guide to reality. (vii) the good God that arranged this even more confusing situation is not bothered by the way we carry on.
The items you have to believe seem to keep multiplying, and the more of them there are the less sense they make, yet they can’t be eliminated. (Can they? No doubt I’m missing something, or everything, but I don’t see how they can.)
We have survived as a species — so far. We do not deliberately put ourselves in harm’s way, walk off cliffs, twist the tiger’s tail.
This might be taken as evidence that our senses are reliable guides to reality. They’ve proved themselves to be “good enough” thus far.
This “atheism as a faith position” is ridiculous. It only ever comes from people who fundamentally misunderstand all of reality. These people actually think that it’s possible for minds and ideas to exist without any brains to generate them. They’re basically saying that “god” might have rigged the entire universe to fool all conscious beings into relying on their senses, even though the underlying truth is something impossible to perceive.
Our senses are what they are because they accurately (more or less) tell us what’s out there by sending input to our brains, which then figure out what the meaning of it is. Human beings would never have been able to develop to the point that we have — every day reproducing experiments and processes — if our senses did not reflect reality with fairly good precision (suited to our needs).
Since there is a substantial amount of evidence that our sense perception is extremely successful, it follows that our perception of reality is accurate, and that, by the principle of economy, it is worse than useless to posit supernatural generators of reality, other worlds, or minds devoid of brains.
Human beings have been as successful as they have been because they are mostly practical and in touch with reality. When that bus is barrelling toward you as you’re crossing the street, you get out of the way; you don’t doubt your senses.
OK, so basically we’ve established that God has nothing better to do than (a) ensure that our belief that material objects exist is correct and (b) supervise our sex lives with great precision, so that, for instance, little girls are advised not to wear patent leather shoes that little boys can use as mirrors. And little boys are advised about the wholesome effects of cold showers on their immortal souls.
You might think that the omnipotent, all-wise Creator of the Universe would have somewhat more important matters on his mind. But you would be wrong.
May I suggest that OB (along with whoever made the assertion OB has been thinking of objections to) is simply confused about what it means “to know things”.
There is this “stupid” idea that some people have that we should “justify” why our senses are reliable or why our memory is reliable, because if we didn’t do that, we couldn’t justify any of our knowledge. Even more “stupid” than this idea is another idea that “we all assume everything we know” or “we all assume our senses are reliable”.
Allow me to offer a not-so-stupid answer/solution:
1. Define ‘knowledge’ (most likely, you will define it the same way I do, which is “as a certain kind (you’ll know what kind) of expectancy that something is going to happen”)
2. Since you can now define knowledge, you can define the words ‘assumption’ and ‘faith’ as being not-knowledge.
3. Thus, you can see that you “know” all your “knowledge” (or the “normal” things you thought you knew), and that they are not assumptions or faith, and also aren’t based, in any way, on any assumptions or faith (since that would render the things you know not-knowledge, which isn’t the case, by definition)
4. It seems we use our senses and our memories to gain knowledge, but this is not our starting point, only an “optional” conclusion that itself is knowledge. Due to our definition of knowledge (and us applying it), we KNOW it most likely is true, so our senses and memories ARE reliable (in the sense that we can use them to get knowledge).
Any questions?
I don’t like the (theistic) argument as presented. My senses are broadly reliable – but occasionally not, and when they deceive me, I need reasoned thought to sort reality from illusion. I would think that the existence of (broadly) reliable senses (though maybe not mentality as such) can be explained naturalistically. It does not defeat the skeptical argument, mind you, as any naturalistic or scientific explanation of the reliability of our sensory apparatus would ultimately involve empirical evidence (there’d always be an out for a skeptic bloody-minded enough). But as such, the existence of a reliable sensory apparatus does not require divine intervention.
It’s another story if you move into much more abstract domains (e.g. the concept of truth itself, the validity of logic).
But I’d be against regarding atheism as a “faith” position on that basis. The argument as you put it does establish, I think, that no (sane) person goes without some philosophical assumptions that cannot be justified with empirical science itself. So what it defeats is a particular head-banging kind of positivism which I think few (reflective) atheists hold. But “faith” is something different (it is directed towards a person, not a philosophical assumption).
Addenda:
The reasoning:
1. Our sensa are (broadly) reliable
2. There is no good naturalistic explanation for why 1) should be so.
3. Given a (good) God, presumably intervening to fix the reliability of our senses, 1) would be a matter of course.
does not convince me, though in a way it’s tantalizingly close to the mark. (I’m not accusing Stephen Law of knocking down a strawman here, as the first half of the argument seems to me to precisely correspond to Plantinga’s evolutionary argument against naturalism). The crux of the matter, for me, lies in the “reliability” of the senses. It seems to me that we are dealing with something very concrete, continuous (senses can be more or less reliable; my eyesight is pretty reliable but my hearing is a mess, etc.), and eminently subject to evolutionary development.
However, the notion of “reliability” seems to me to be founded on some kind of notion of “truth”. A yellow-and-black spot in my visual field cannot be true or false. The proposition “there’s a tiger in front of me” can be. Insofar as sense data lead us to entertain true propositions, they are reliable. But the relationship is not one of necessity.
Of course, perception and reasoning are very much intertwined: I may perceive sensa as “tigers”, “trains”, “a dog barking”, etc. which involves a lot of conceptualization, framing, etc.
So, I would suggest that the problem of “the reliability of our senses” (a concrete, gradual issue) be exchanged with something like “the validity of reasoning” or “the truth of propositions”. Because the latter are pretty much normative, abstract, discrete issues which cannot be reduced to the naturalistic realm. (Note: I’m saying “cannot be reduced” which does not mean one cannot reasonably hold metaphysical naturalism. See below).
The upshot of such a move would be that the argument is focused on what seems to me to be the heart of the matter – that of the relationship between matter, mind and abstracta – whereas the previous argument planted its boots halfway in the scientific (as opposed to the philosophical) domain. It is precisely for this reason that the theistic solution of the original argument is less than compelling: because it deduces the existence of God from what God (supposedly) contingently did (fixing our sensory apparatus?) as opposed to what he is. The other argument would deduce the existence of God out of the primacy of mind over matter which sounds a lot more convincing to me. Of course, one can be an idealist and an atheist; but an underlying mental reality seems to me to be more suggestive of theism than the (implicit?) design inference in the first argument.
Do I think that the “Argument from reason” as opposed to the “Argument from a reliable sensory apparatus” is defensible? Yes. Do I think it rationally compels the atheist contemplating it to accept God’s existence? No. It’s perfectly possible for an (atheistic) materialist to grant the irreducibility of reason but nevertheless assert the primacy of matter over mind (as Thomas Nagel, for instance, does).
Ultimately, I think theism is rationally defensible given a metaphysical framework which allows for it. The same goes, however, for atheism. I suppose I end up with something like Kuhn’s idea about incommensurable paradigms; meaning, it’s perfectly possible for me to try and think in the framework of an atheist, but I see no ultimate standard against which the reasonability of any such can be judged. The ones we do have (logic, coherence) are broad enough to let well-argued versions of both through.
“because it deduces the existence of God from what God (supposedly) contingently did (fixing our sensory apparatus?)”
Does it? I thought in Stephen’s version it was the other way around – it deduces the reliability of our senses from the assumption that God exists, thus matching one atheist assumption with one theist assumption (except the theist one is more than one, but it’s supposed to be one each).
Another problem with the reliability thing is, if it’s God wot guaranteed the reliability, why did God not also make the senses exhaustive? If God can be bothered to make them reliable, why not make them complete and exhaustive while God’s at it? (I will never understand why you refer to God as ‘he’ when you’re so careful in other ways.)
“I will never understand why you refer to God as ‘he’ when you’re so careful in other ways.”
Fatigue, in this case.
I inferred the “fixing” from Stephen’s comment: “in the style of Descartes – a good God would not allow us systematically to be deceived”. Otherwise I don’t see how the one should follow from the other (the link is tenuous enough even so).
I see now that the argument Stephen presented is a bit more subtle, i.e. more an argument against naturalism in that naturalism is supposed to take something “on faith” which on theism being true is explainable. Although, I’m not sure whether there is such a big difference, ultimately, between arguments for theism and arguments for theistic explanations – because it is typically on the basis of the latter that the former are built.
A good God would give you enough energy to avoid personal pronouns!
Heh.
Any questions?
Well, (3) doesn’t follow from (1) and (2), does it.
Jeff Ketland,
No, (3) doesn’t follow (well, not all of it anyways). Duh! Of course it doesn’t follow!
Knowledge isn’t something we always deduce logically from something else. I meant to say that when one understands the meaning/definition of knowledge, one can apply it and (most likely, but not necessarily) he/she will understand that our senses are reliable (in the sense outlined above). Here, there’s no logical necessity, only likelihood.
However, my point was that once (and this most likely happens) a person understands what knowledge means and is able to say (and mean) that statement A is knowledge (e.g. has good likelihood), NOW, finally, it logically follows that it cannot be a statement of faith or an assumption, nor based on faith or assumptions in any way, because, by the meaning of ‘based on’ and ‘faith’ and ‘assumption’, that would imply that A isn’t knowledge after all, but we simultaneously realise that A IS knowledge, by definition of ‘knowledge’, so there’s a contradiction: so ‘faith’ and ‘assumptions’ have nothing to do with statement A. This part does follow logically.
Toivo: … he/she will understand that our senses are reliable …
But there’s the rub. We cannot be sure that our senses are reliable. Rather, it’s a reasonable assumption. In principle, we could be Matrix-style brains-in-vats.
Jeff Ketland,
We can’t be sure of anything. Knowledge, almost by definition, is not being sure of anything: it’s all about likelihoods, and we can always conceive of something unexpected happening.
Saying that: “We cannot be sure that our senses are reliable. ” is just as insightful and clever as saying “Aha! You can’t be sure that there aren’t little pink invisible unicorns reading newspapers under your bed” Both sentences are correct, and just as likely, which is not likely at all, and we should be equally concerned about them.
Also, you seem not to understand what knowledge is. To know X, I don’t have to make a “reasonable assumption” and say to myself: X is true, and believe in it “absolutely”. Rather, I say to myself: X has this-or-that likelihood, and adjust my belief according to the exact likelihood (I tend to use the word “know” for quite high-likelihood statements). I neither deny X nor believe X “absolutely”, so I make no assumptions. As I said, as long as my belief reflects the likelihood (the meaning of knowledge), I am just stating the likelihood, and I am not assuming anything at all.
So, it’s not a reasonable assumption that our senses are reliable. It’s a fact (meaning: it has very high likelihood).
Toivo,
You’re using a lot of very idiosyncratic definitions and meanings, and treating them as self-evident and uncontested. It’s a bit odd to do both of those at once!
OB,
Yes, I think you’re right. I’m sorry. I perhaps should have explained myself better. Well, if anyone is confused about my posts, they can understand better what I meant by referring to the definitions below:
X is knowledge/X has high likelihood – there is a certain sense of expectancy about X (that it’s going to happen/going to be true) (that expectancy defines the word ‘knowledge’, at least as I use it, and I think people use this sense of ‘knowledge’ most of the time when dealing with normal, day-to-day, practical things, like typing, changing the TV channel, walking. They would also realise this, if they’re honest and not awfully confused by bad philosophy.)
Likelihood – a measure of how “good” the knowledge is. Has nothing to do with probability, although it uses a “stolen” term from mathematics. Again, I think many people, when they say “likely” or “probably”, mean this, not the mathematical probability.
Assumption – To fool oneself and pretend that X has high likelihood and/or is “absolutely true”, when it doesn’t and/or isn’t. To “just believe” in X, irrespective of X’s likelihood.
Faith – Same as assumption. Often involves badly hoping that X is true: hoping so badly that your imagination is “locked” and suddenly you can’t imagine otherwise: you treat X as knowledge, although it isn’t that.
Fact – A high-likelihood statement. E.g. Earth has an atmosphere, the currency of France is the euro. How “high” the likelihood has to be in order to be a fact is hard to explain in words, but most people understand the meaning of ‘fact’, and use it accordingly.
Toivo,
I think you draw some unwarranted analogies here.
I can be said to “know” there is a tiger in front of me, if I reasonably infer such a proposition from sensory evidence, etc. That’s a statement about the “outer world”. I can also be said to “know” I’m in danger, based on my observations concerning the lack of a barrier between me and the tiger, on my knowledge about the typical behaviour of tigers (hearsay, but presumably reliable hearsay), etc. Similarly, my knowledge that there isn’t an invisible tiger lurking near me right at this moment is based on a reasonable inference from, for example, my conviction about the non-existence of invisible tigers prowling about on the ninth floor of an office building in Scandinavia. But still, it’s a statement about the empirical world.
The statement concerning the reliability of our senses goes into something quite different. We cannot base this “knowledge” upon empirical observations about the world since it is precisely what makes empirical observations about the world possible. The statement about invisible tigers may be unfalsifiable – but it is unfalsifiable in a different manner than the statement “our senses are usually (i.e. in the case of a healthy, sober person) fairly reliable” or “we are not brains in a vat, subject to a surprisingly coherent and impressive illusion of sensory stimuli”.
We reject unobservable tigers precisely because, to the extent that they are both directly and indirectly wholly unobservable and in no sense affect any other existent, they are unnecessary additions to our picture of the empirical world. The statement concerning the reliability of our senses is not so much unnecessary as quite necessary.
Addendum: I’m getting myself in a knot about the relationship between knowledge and truth; as I see that Toivo’s notion of “knowledge” entails the likelihood of the statement concerned, “assumption” the unlikelihood (through a mistake in asserting likelihood).
There’s something about knowledge being “Justified true belief” that sits ill with me. Because I can think of contradictory justified beliefs. Take for example any pair of opposite beliefs concerning philosophy of mind. Take “Dennett believes x” and “Chalmers believes not-x” about any statement x on which the two gentlemen disagree. Let’s assume that either x or not-x is true.
One solution is that we cannot say “Dennett knows x” and “Chalmers knows not-x” because one belief is ultimately unjustified. There is some error somewhere hiding in either of the two resulting in its falsehood. That would posit some extremely high standards of justification, perhaps ultimately converging on the notion that only true beliefs are justified; aside from this, I am tending to believe that metaphysical issues are intractable, to the extent that there is not an intersubjectively convincing solution hiding deeply down there somewhere that everyone would agree on. At the same time, I believe they are yet meaningful.
Another solution would be to draw a distinction between a (highly) justified true belief and a (highly) justified untrue belief. But I don’t see how to go about this without introducing some kind of mysticism. I don’t see how one could specify the difference (mentally or neurologically) between a subject being justifiedly convinced of the truth of a statement and being unjustifiedly convinced of the truth of a statement.
Third solution: acknowledge that by saying “Dennett knows x” or “Chalmers knows not-x” we automatically state our own belief in either x or not-x. “Knowledge” and “knowing” are automatically intersubjective and social in nature. Someone could not be said to privately “know” something without the knowledge item being a public matter shared by a group (but what then becomes of the justification?). Am attracted to this solution, but it seems to me to lead to some kind of social constructivism, and I wish to avoid relativism.
Am not an epistemologist. Help, OB.
“Because I can think of contradictory justified beliefs.”
Sure, but then (at least) one of them isn’t true, so one of them isn’t knowledge; that’s why all three terms are needed.
There are other funny complications about having knowledge but having it by accident, but that’s a different matter.
Toivo,
Saying that: “We cannot be sure that our senses are reliable” is just as insightful and clever as saying “Aha! You can’t be sure that there aren’t little pink invisible unicorns reading newspapers under your bed”
Right. We cannot a priori rule out a Sceptical Scenario. That’s the central problem here, and you can’t eliminate it with a quick argument.
Both sentences are correct, and just as likely, which is not likely at all, and we should be equally concerned about them.
Your notion of “likely” is defined how? In terms of subjective probabilities (or credences)? Or in terms of objective propensities? I suspect that a Sceptical Scenario is unlikely in the first, subjective, sense (it would be entirely irrational for me to believe that I am a brain in a vat): but when we engage in reflective analysis, one still has to assume the reliability of our senses: and that’s the assumption that one can’t get rid of very easily.
[There’s a subtlety here, connected to the debate between internalists and externalists. An externalist would say that the mere fact of reliability is alone sufficient for knowledge; and we needn’t make an explicit assumption in addition.]
X is knowledge/X has high likelihood – there is a certain sense of expectany about X (that it’s going to happen/going to be true)
This is not what ordinary people mean when they use the word “know”. As ordinarily used, “A knows that P” clearly entails P.
For example, the statement “David Cameron knows that he was born in Edinburgh” implies “David Cameron was born in Edinburgh”.
Technically, we say that knowledge is factive.
(that expectancy defines the word ‘knowledge’, at least as I use it, and I think people use this sense of ‘knowledge’ most of the time when dealing with normal, day-to-day, practical things, like typing, changing the TV channel, walking. They would also realise this, if they’re honest and not awfully confused by bad philosophy.)
When people say “I know that I was born in 1964” this has nothing to do with “expectancy”. When Deborah Lipstadt says that “We know that the Holocaust occurred” it has nothing to do with expectancy. These are items of knowledge concerning past facts, not future facts. Similarly, when a physicist says “We know that neutrinos are not massless” this has nothing to do with expectancy.
Knowledge is usually defined as “true justified belief”. High likelihood – whether defined in terms of subjective credences or in terms of objective propensities – is not sufficient for knowledge. If a proposition is highly likely, but false, then it cannot be known. If not-P, then you cannot know that P.
So, it’s not a reasonable assumption that our senses are reliable. It’s a fact (meaning: it has very high likelihood).
A fact has, by definition, to be true. The notion of fact is usually defined as follows: a state of affairs that obtains. It’s an objective notion, and has nothing to do with epistemology.
I’m not saying one needs to be certain for knowledge. But one cannot logically conclude from the deliverances of our senses that we know. Actually, I do think we do know; but we cannot logically infer P from “I am perceiving that P”.
The problem is that perception is fallible.
We can, however, logically infer P from,
(i) I am perceiving that P
(ii) The deliverances of my senses are 100% reliable.
But the inference to P from these premises requires the assumption (ii).
Now if (ii) is weakened somewhat to
(ii)* The deliverances of my senses are normally reliable.
this yields a perfectly reasonable assumption.
But you cannot defeat scepticism with such a quick argument. Knowing the definition of knowledge, plus a particular sensory fact, such as “I am currently perceiving that there is a football in front of me” allows us to deductively infer that there is a football in front of me only modulo the reasonable assumption that the deliverances of my perceptual states are reliable. That assumption is non-eliminable.
Sorry for the long post!
“Sure, but then (at least) one of them isn’t true, so one of them isn’t knowledge; that’s why all three terms are needed.”
Well, yes. But that means there is, ultimately, very little of an objective standard to distinguish knowledge from non-knowledge. Seeing as it is precisely through justification that we establish truth. My problem is that then using the terms “knowledge” and “know” contains a claim of allegiance (this is obvious with the word “know” but, perhaps, not necessarily with “knowledge”).
For instance, what of the status of pre-Einsteinian physics as knowledge? Intuitively, it should count, shouldn’t it? But what, then, is “truth”? I suppose one can argue that as a model of physical reality Newtonian mechanics was “close enough”, though our current understanding corresponds better – so “truth” is an ideal gradually approached. Alternatively, one could state that within a given framework, an individual proposition can be true or not (though that seems to me to lead to some kind of excessive Kuhnian system if truth is wholly denied to frameworks as such). In any event, seen this way, it seems to me “knowledge” vs. “non-knowledge” becomes a very gradual affair – and perhaps some kind of continuity in tradition, framework, etc. is decisive in dubbing something a body of knowledge.
Jeff Ketland,
My notion of “likely” is that specific notion of likely that we all have…when we think that opening the refrigerator door will yield the inside of the refrigerator, when we think that a letter ‘t’ will appear on the screen when we type, when we put food in our mouths and think our hunger is going to be less. We have a certain kind of expectancy (not just ANY kind) that something is really, really going to happen.
You seem to set up a false dichotomy for me: either I have to “believe in” subjective probabilities or objective propensities. Both seem ill-defined philosophical nonsense to me. Perhaps you could define the terms ‘subjective’, ‘objective’, ‘propensity’ and ‘probability’. Mind you, I’m a maths student and have recently suffered a lot before I could come up with a good definition of probability, so this task might be harder for you than you think.
Refer to my post to Merlijn de Smit for an answer to the DELUSION that “one still has to assume the reliability of our senses”.
Now, about your claim that “that is not what ordinary people mean when they use the word ‘know’ ”. Are you being honest now? Really? So people can’t know things without… having “absolute” belief? I don’t care how philosophers bend their minds and pretend to “know” the “assumptions” normal people make and “know” how normal people think. It’s simply wrong, I’m afraid. It’s mistaken and dishonest to think that “A knows that P” entails P. Why do you think that’s how ordinary people use it?
As to your examples:
“For example, the statement “David Cameron knows that he was born in Edinburgh” implies “David Cameron was born in Edinburgh”.”
-No, it doesn’t imply that. It only implies that it’s highly likely that David Cameron was born in Edinburgh.
“When people say “I know that I was born in 1964” this has nothing to do with “expectancy”.”
-Wrong. Here’s where the “expectancy” comes in: I know that I was born in 1964 → it’s highly likely that I was born in 1964 → I expect that “I was born in 1964” is true with high likelihood.
“When Deborah Lipstadt says that “We know that the Holocaust occurred” it has nothing to do with expectancy.”
-Wrong. We know that the Holocaust occurred → it’s highly likely that the Holocaust occurred → we expect that “Holocaust occurred” is true with high likelihood.
Ditto for the physicist example. It doesn’t matter whether the knowledge is about the past or the future. When you have a certain knowledge claim, you imagine that possibility (you imagine being able to see, hear, touch, etc. many things, as if you were there, in that time, future or past, as a “sort-of” 3rd person). The “likelihood” is exactly a measure of how much you expect (in a certain sense) to have a particular collection of perceptions.
Whoever came up with the idea of defining knowledge as “true justified belief” was a complete moron (at least in epistemology), even if he/she was praised as genius in all other fields. Quite simply, knowledge isn’t that. Granted, knowledge is about truth (but knowledge needn’t be true itself, like apples need to be fruits) and knowledge can be classified as beliefs, but “justified”? It’s just silly.
Can a proposition be both highly likely AND false at the same time? Well, we can know X is highly likely and still imagine it’s POSSIBLE X is false, so the answer is “yes”. However, it’s not possible (unless you contradict yourself) to KNOW that X is “highly likely” and “false” at the same time. Either you are making a trivial point (and I agree), or you’re stating something truly outrageous (and I disagree).
As for the definition of ‘fact’: well, I think you are using some philosophical definition, not the one people actually use. Now, in real life, I know what it makes for something to be granted the rank of “fact” and it ISN’T that the something has to be TRUE. It’s only that something has to be highly likely.
What do you mean that “perception is fallible”? Again, if you mean something trivial (like that sometimes, when we see a moving thing in front of us, there really isn’t a moving thing in front of us), I agree. But perception isn’t making any knowledge claims itself, like newspapers or blogs. Perception just is.
You say: “
We can, however, logically infer P from,
(i) I am perceiving that P
(ii) The deliverances of my senses are 100% reliable.
But the inference to P from these premises requires the assumption (ii).
Now if (ii) is weakened somewhat to
(ii)* The deliverances of my senses are normally reliable.
this yields a perfectly reasonable assumption.
“
Yes, we can infer that I am your mother by making some “assumptions”. However, those inferences are as true or likely as the assumptions themselves, i.e. worthless. If we can just assume anything, why bother assuming something about the senses, why not just assume P right away, it’s not like we can avoid assumptions, right? It’s not like we can, you know, actually know P, without making any assumptions?
There are NO reasonable assumptions. If you have some reason to think X, then believe X with that specific likelihood. If you don’t have a good likelihood about X (i.e. you don’t know X), what good does it do to say: Ah! But I can make this reasonable assumption (i.e. reasonable leap of faith) (an oxymoron if there is one) and conclude X. No, you can’t! You don’t know X, and assuming something to get there won’t give you knowledge of X!
Knowledge IS NOT inferred from anything. It’s not inferred from sensory experience, it’s not inferred from senses. It’s not inferred from assumptions. Sometimes people (well-educated adults who should know better) just ASSUME that it is!
Merlijn de Smit,
No offense, but you seem to be VERY confused about “knowing things”. VERY.
Let me try to explain where you go wrong:
You seem to think that “we infer knowledge from sensory evidence”. How come? Did you just “assume” that? How do you know that “we infer knowledge from sensory evidence/observations”?
I think it’s “sort of” an assumption many people make that knowledge MUST come from sensory experience. I call it an assumption, because people rarely try to back it up, instead they sort of just start from there as if it were true.
Well, I think it’s a mistake (“starting from there”, not the proposition about sensory experience itself). It’s very highly likely that sensory experience & mental state etc. AFFECT but not necessarily determine the likelihood. It’s certainly false to think that the likelihood is logically deduced from sensory experience, since sensory experience isn’t the same as knowledge and from sensory experience you can only deduce stuff about sensory experience (you don’t get from sensory experience to knowledge by deduction only). This is a matter of logic.
However, in everyday speech, people do say things like “I know X based on seeing it on the news/hearing about it from a friend etc.” I’ve said and continue to say things like that. But neither I nor the other people “deduce knowledge from sensory experience”. How can that be?
Well, it’s simple really. The answer is that there are things that aren’t in the definition of knowledge themselves, and really have nothing to do with knowledge, intrinsically, but they function as “proxies”: things that likely indicate knowledge (we know with good likelihood that they affect our likelihoods). Examples of proxies: senses, newspapers, blogs, memory, mental states, friends, other people, scientific studies. Some of the proxies don’t “say” anything about knowledge, they just affect it (like the memory and the senses). Others make explicit statements (like newspapers, blogs and other people).
Any of those things COULD, one day, become completely “unreliable” either in the sense of making completely unreliable statements or not affecting our likelihoods at all. E.g. in a Matrix-like world, you could know that you can walk ahead without getting hurt even though you “see” scorpions, spikes and traps ahead (you would know that there really aren’t any scorpions, spikes or traps ahead, that it’s just an “illusion”). Also, it’s possible that in the future, scientific studies become completely unreliable.
The point is that it’s not at all necessary to “trust” any of the proxies. It’s thus completely possible to completely IGNORE all one’s senses, all one’s memories etc. and know, for example, that “one can walk safely ahead” (as in the Matrix-world example). We “trust” our proxies only to the extent that they are “likely”, and abandon that “trust” the moment the likelihood is gone.
Also, another possible “assumption” that other people often make is that other people (from their perspective) MUST be like them: they have minds, they form knowledge, they have sensory experience, they have valid senses. Again, there’s rarely any attempt to back this up, and people usually “start from here”. Again, I’m not saying that this isn’t so, but “this” certainly is a knowledge claim and needs to be evaluated as such.
Well, I think most people have minds and have sensory experiences. That is very likely. Also, it’s highly likely that they do form what I would call knowledge. However, I don’t call their beliefs “knowledge” “just because” or out of “respect”, but ONLY because it’s LIKELY that those beliefs and how they are formed match the idea of knowledge (as I have defined it). It’s highly likely that my, and other people’s, likelihoods are affected by various proxies, and it’s likely that through sharing the input (newspaper articles, scientific evidence, sensory experiences, memories) we achieve if not the same (ONLY due to us not being able to share ALL of the input), then extremely similar likelihoods.
It’s an important point not to “give” other people the ability to form likelihoods just because they look like us, but ONLY to the extent that if one could “plug in” to their minds and “experience their minds” for a moment (but still retain the meaning of “likely” as one defined it), then other people would “experience” THE specific sense of expectancy that defines the meaning of “likely” and would believe things that are “likely” in that way. However, and this is an even more important point, we’ve already established that it’s highly likely there are various proxies that can affect the likelihood, so knowing this, just because 2 people disagree on something, it DOESN’T mean they mean different things by “knowledge” or “likelihood”. Usually, it’s very highly likely that they just haven’t shared the same proxies. For instance, I think extraterrestrial life is extremely unlikely to exist, but I’m willing to listen to an astrobiologist who might have evidence to convince me otherwise. But I’m not going to listen to a new age person who thinks the spirits have told him about ETs, since THAT input is highly unlikely to affect knowledge.
Also, it’s likely that I can affect the likelihoods of other people, e.g. by talking to them. I can even lie to them and make them “know” something which is really false. That’s entirely possible. However, from their perspective, they don’t have the “input”/”evidence” that would affect their likelihoods to understand that I’ve told them a lie. It’s perfectly possible for them to really know something that actually is false and that I know to be a lie that I invented myself. In addition, if one knows that Y is a proxy (affects likelihoods) and that Y likely exists, then one automatically adjusts his/her likelihoods accordingly. E.g. knowing the evidence and knowing OF the evidence has much the same (but not exactly the same) effect.
What’s all that stuff about “justification” and “warrants”? If X is knowledge, it’s knowledge, if it isn’t, it isn’t. We don’t need a “social” permission or warrant or justification to believe in things. It would be extremely dishonest to go about “knowing” stuff that way.
I don’t know what the philosophers who came up with “justified, true belief” definition of knowledge were smoking when they came up with it. It’s a silly, DISHONEST definition (not reflecting how people actually think).
Also, by the way, “X or not-X” is not an assumption, by any stretch of imagination. It’s true whatever X may mean, and it’s a logical consequence from the meanings of “not-X” and “or”.
Toivo,
I cannot deal with all the points you raise. However.
1. The normal use of the word “know” is such that it is factive. If a person A knows that P, then P. It is inconsistent with the meanings of words to say “Cameron knows that he was born in Edinburgh, but he was not born in Edinburgh”.
2. The normal use of the word “fact” is such that it is, not surprisingly, factive. A fact is a state of affairs that obtains. For example, if neutrinos are massless, then it is a fact that neutrinos are massless. (And vice versa.) This has nothing to do with any probabilistic notions such as likelihood.
Neither of these elementary points is particularly “philosophical”. They simply register normative rules governing the correct use of the words “know” and “fact”. Both notions are factive.
The person responsible for giving the definition of knowledge, according to which it is true justified belief, is Plato in the dialogue Theaetetus.
Now, turning to probabilistic notions, a statement can be highly likely (in either the sense of rational credences or the sense of objective propensities) and yet be false.
For example, suppose a machine produces red and black balls at random, with 50% probability of either. Consider the statement:
(A) The next subsequence of 100 balls will contain at least 1 red ball.
This claim is very likely, and yet can be false. Unlikely events occur. In this case, the probability is 1 – (1/2^{100}).
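To spell out the arithmetic behind that figure (just the complement rule applied to what’s already stated): the only way (A) can fail is if all 100 balls come out black, so
$$
P(\text{at least one red}) = 1 - P(\text{all 100 black}) = 1 - \left(\tfrac{1}{2}\right)^{100} \approx 1 - 7.9\times 10^{-31}.
$$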
In my view, the most interesting example of low probability events which actually occur concerns quantum mechanical tunnelling. Such events are classically impossible, because the particle’s kinetic energy is lower than the height of the potential barrier. But if the barrier has finite width, there is a finite probability for a particle to tunnel through the potential barrier.
Jeff Ketland,
one more time, “likelihood” has nothing to do with probability. Thus, you simply CANNOT claim that it’s a probabilistic notion. Maybe you mean that it’s a “gradual” notion: i.e. you can always be more or less certain of some piece of knowledge.
Also, you seemed to misunderstand one part of my reply. I explained why I 100% AGREE with you that it is inconsistent with the meanings of words to say “Cameron knows that he was born in Edinburgh, but he was not born in Edinburgh”. Yes, it’s inconsistent to say that, or think that, but it’s possible that Cameron was not born in Edinburgh, but we still have a high likelihood (i.e. we know) that Cameron was born in Edinburgh. If we knew Cameron wasn’t born there, OF COURSE we wouldn’t have that likelihood, but we don’t know where Cameron was born (the possibilities are endless), and we think, we suspect strongly, we expect, we know (meaning: have high likelihood) that Cameron was born in Edinburgh. But nevertheless, it’s still possible for Cameron to have been born anywhere.
Jeff Ketland,
Addendum to my last post:
Sorry, I didn’t read carefully enough. I thought somebody knew that Cameron was born in Edinburgh and simultaneously thought that he wasn’t really born there. Sorry, my mistake.
However, it is possible for Cameron to know he was born in Edinburgh and for us (separate from Cameron) to say that he wasn’t born there. No, THIS is not inconsistent with the meanings of the words. Let me explain: that Cameron knows X means that if we “went inside” his mind (viewed everything from his perspective), we would see that Cameron’s belief that X has “real” (as we defined it) likelihood. It means nothing more. We, on the other hand, know that Cameron was born in London, not Edinburgh, and we know his parents have been lying to Cameron since he was born. We have a better understanding of reality than Cameron and if we told him what we know, if we shared our evidence and proxies etc., Cameron would most likely agree with us (that he was not born in Edinburgh). So, Cameron was mistaken all this time. But was he deluded? No, since from his perspective (most likely) his beliefs had good likelihood. It’s just that he didn’t have any evidence to suggest otherwise.
Toivo,
… one more time, “likelihood” has nothing to do with probability.
So, what do you mean by “likelihood”?
The obvious way to take this would be some sort of rational credence. And the standard analysis of rational credences is given in work on Bayesianism (e.g., Howson and Urbach). That is, rational credence is a probability function defined over statements and satisfying Kolmogorov’s axioms. Credences are then updated using some sort of conditionalization rule.
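For concreteness, here is a minimal sketch of what that conditionalization rule looks like (the notation is my own gloss, not a quotation from Howson and Urbach): if $P_{\text{old}}$ is your credence function and you learn evidence $E$ with $P_{\text{old}}(E) > 0$, then simple conditionalization says your new credence in any hypothesis $H$ should be
$$
P_{\text{new}}(H) = P_{\text{old}}(H \mid E) = \frac{P_{\text{old}}(E \mid H)\,P_{\text{old}}(H)}{P_{\text{old}}(E)}.
$$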
But in any case, one cannot define knowledge in terms of rational credences, because knowledge is factive and credences aren’t (except, perhaps, God’s).
Toivo,
Just one point (there are many to make):
You seem to think that “we infer knowledge from sensory evidence”. How come? Did you just “assume” that?
I do not necessarily believe that all knowledge is inferred from sensory evidence. But a lot of it is. The relevant example was one of a tiger in front of me. If I see the tiger, and infer the proposition “there is a tiger in front of me”, I deal with an item of knowledge inferred from sensory evidence. In a discussion which started about the reliability of sensory evidence, knowledge based on sensory evidence seems to be a germane issue.
And yes, ultimately the notion that we have sensory experience, and that we develop knowledge on that basis, resides on an assumption. One which is extremely hard not to entertain.
from sensory experience you can only deduce stuff about sensory experience (you don’t get from sensory experience to knowledge by deduction only). This is a matter of logic.
But I didn’t use the term “deduce”. I said, “infer”. From a given sensory experience taken as such to the categorized, framed experience of a tiger standing in front of me to entertaining the proposition “Hey! There’s a tiger in front of me!” (I’m not saying there’s any real temporal succession here) involves more than just deduction, absolutely. Peirce compared the integration of sensory experience to abductive processes of thought, and I’d be inclined to agree.
I am, like Jeff Ketland, a bit uncomfortable with your notion of likelihood. Especially since it seems totally divorced from any specifiable probability.
Toivo,
… it is possible for Cameron to know he was born in Edinburgh and for us (separate from Cameron) to say that he wasn’t born there.
Of course, but that isn’t what I said.
What I said was that if Cameron knows that he was born in Edinburgh, then he was born in Edinburgh.
Knowledge is factive. Schematically,
(F) If A knows that P, then P.
Do you disagree with this basic principle of epistemic logic?
Toivo:
My impression is that you’re not really taking Jeff Ketland’s valuable contributions on board. His writing has been lucid and succinct. You do seem to be on to something, but you aren’t making it very clear. Is English not your primary language? (I know what that’s like.)
I’ve been trying to come up with good definitions of “knowledge” and “truth”. How about these?
Knowledge: An ideal reproduction in sign form of objective properties and connections in the world (natural or human).
Truth: The true, correct reflection of reality in thought, which can only be verified by practice.
Toivo, are you really Zizek?
Plato’s definition of knowledge (true, justified belief) is roughly ok for most ordinary purposes. In strange circumstances (e.g., Gettier situations, involving odd cases of accidental knowledge), there is a huge standard literature. See the textbook:
Pritchard, Duncan. What is this thing called knowledge?
Or see the Stanford Encyclopedia article on the analysis of knowledge.
Truth is a semantic concept, and the most important feature of this concept is its disquotationality. For example,
“Cameron was born in Edinburgh” is true if and only if Cameron was born in Edinburgh.
And so on. Such disquotational T-sentences obviously cannot be turned into an explicit definition, because there are infinitely many sentences to deal with. In certain cases, an inductive definition will be available.
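To give a toy illustration of the kind of inductive definition meant here (my own simplified propositional sketch, glossing over use/mention niceties; Tarski’s full first-order treatment needs satisfaction and variable assignments): fix the truth values of the finitely many atomic sentences directly, then define
$$
\begin{aligned}
\mathrm{True}(\neg A) &\iff \text{not } \mathrm{True}(A),\\
\mathrm{True}(A \wedge B) &\iff \mathrm{True}(A) \text{ and } \mathrm{True}(B),
\end{aligned}
$$
and similarly for the other connectives, so that every sentence receives its truth condition from finitely many clauses rather than from an infinite list of T-sentences.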
The standard work explaining this (and much more) is:
Tarski, Alfred 1944. “The semantic conception of truth and the foundations of semantics”.
This is online here.
No, English is not my first language. Yes, my writing has been confusing. No, I’m not Zizek. Yes, I do disagree with that basic thing from epistemic logic.
Maybe I’m wrong. Maybe I have really confused myself. Or maybe I have yet again failed to communicate my ideas. I don’t know.
If you don’t mind, I’ll vanish at least for some time.
JK:
Thanks for that link to the Stanford article. I read the great majority of it, and have concluded that I’m an externalist K-reliabilist of the Dretske school. Now if you’ll excuse me, I must go degettierize.
Toivo:
Glad to know that Zizek isn’t here muddying the waters. Don’t be gone for too long.
I guess the real issue with what knowledge is, is that everyone agrees that it’s true belief + something else, that something usually taken to be justification, but there’s a lot of contention about what constitutes justification.
“Pritchard, Duncan. What is this thing called knowledge?”
He did a series on knowledge for TPM. A primer; like a mini-textbook. Very useful.
Toivo – do come back soon.
Pyotr – Yes, the consensus is that knowledge = true belief + X, where X is something like justification. For Robert Nozick this X involves counterfactual dependence. For reliabilists like Dretske, Papineau and others, knowledge is “true belief produced by a reliable mechanism”. Etc.
On the other hand, Timothy Williamson, who used to be here at Edinburgh and is now at Oxford, has defended the view that knowledge should be taken as a primitive, and not given an analysis or definition.
My old friend David Miller, a hardline Popperian who rejects the very notion of justification, has an odd view: he defines “knowledge” as “that which is produced by scientific inquiry”: as he jokes, knowledge is “untrue, unjustified, unbelief”. This is, at best, the basis of a decent little joke.
Social constructivists also usually define “scientific knowledge” as “that which is causally produced by scientific activity”. This leads to a radical kind of relativism.
Ophelia – Duncan Pritchard is very good at what he does, and he is just about to join us at Edinburgh as our new professor!
Ok. Thanks for supportive comments. Here are some questions for you (Jeff + others) (if you’re so kind as to answer them):
Can person A KNOW X, if that belief is “based on” assumptions X1, X2 and X3?
If yes, can I KNOW that I have 1 000 000 pounds in my bank account by assuming 1) My name is Toivo 2) I have 10 fingers 3) Anyone whose name is Toivo and who has 10 fingers has 1 000 000 pounds in his bank account.
?
If you answered Yes, Yes, then it seems you can know ANYTHING by just assuming it to be true, and you can assume anything at all (perhaps provided that your assumptions don’t contradict each other). Why call assumptions knowledge? Why not assume that we can fly like Supermen and -women? Why not assume that we have afterlives? Why not assume whatever we like?
If you answered Yes, No, you seem to have logically contradicted yourself. Care to explain how my assumptions about the 10 fingers and 1 000 000 pounds aren’t REALLY the kinds of assumptions YOU AGREED “knowing” to be “based on”.
If you answered No, Yes, again, you seem to have logically contradicted yourself. My own “example” is nothing but the general case filled in with names etc. If you disagree with the general case, how can you agree with the “special case”?
If you answered No, No, we agree. If you thought people who would give other answers don’t exist… well, you know the saying about human stupidity being infinite, so…
And a bonus question for the brave ones:
When you’re using your computer (like RIGHT NOW), and perhaps typing or moving the mouse etc., and anticipating what will happen (e.g. the mouse pointer will move, a certain letter will appear on the screen), do you KNOW that e.g. “a certain letter is going to appear on the screen”?
If Yes, and you answered previous questions No, No, please let me know. At least we agree on something. We seem to agree that for something to be called knowledge, it needn’t be “absolute” or “100% certain” or we needn’t “know” beforehand that it’s true.
If No, then do you think that humans can, at ALL, have knowledge? Are you prepared to say that neither you nor anyone else can know anything at all, ever? Is there any simple way a mere mortal can realistically achieve knowledge?
However, you still seem to be using your computer, moving the mouse, anticipating effects on the screen. Why? Didn’t you just agree that you DON’T KNOW what will happen? Isn’t it then foolish and/or self-refuting to EXPECT some effects, when you said you DON’T KNOW if they will occur? Perhaps you think that even if you DON’T KNOW, you still have some “anticipation”/”expectancy” that something is really going to happen. In the latter case, again, I’m pleased that we agree, but I label this “anticipation”/”expectancy” ‘knowledge’ and you seem to have some other definition for the word ‘knowledge’. So it’s just confusion with words.
But I would really like to know, since you seem to agree (again, in the latter case) that you (and, by extension, everybody else) “anticipate” things and act accordingly, what USE or UTILITY does the word ‘knowledge’ (as YOU define it) have? Since it’s so HARD to have, and perhaps you think NOBODY has it, why bother using it at all? After all, people DO talk about ‘knowledge’ while actually talking about the “anticipation”/”expectancy”, so why not REDEFINE ‘knowledge’ as that?
Toivo,
I don’t properly understand the example you give. You seem to wish to demonstrate that assumptions cannot play a role in the justification of knowledge. I’ll come back to that in a moment.
The usual, and presumably exhaustive, sources of knowledge are:
(i) observation/perception;
(ii) reason/thought.
So, for example, your knowledge that you have 10 fingers is based on observation and the ability to count. It is precisely via observation and reason that this item of knowledge receives its justification.
On the other hand, there doesn’t seem to be any justification for the claim “anyone whose name is Toivo and who has 10 fingers has 1000000 pounds in his bank account”. Indeed, you can probably refute this claim quite easily. You can check that the antecedent is true, and the consequent is false.
You seem to be strongly against the idea that knowledge can be obtained from assumptions (or, as Popper would have put it: conjectures). But consider how knowledge of the future is possible. If you think we have knowledge of future, then does this knowledge not rest on inductive reasoning? (E.g., that laws of nature will continue to operate as before.) And how is this justified?
Toivo,
Also note that when you say:
When you’re using your computer …
do you KNOW that e.g. “a certain letter is going to appear on the screen”?
This is very closely related to Hume’s Problem of Induction. If you know, then, how do you know? By inductive reasoning? But how is inductive reasoning justified?
[By the way, I do think that you do know. But this sort of knowledge must be based on inductive reasoning. The status of induction is, to say the least, very problematic.]
Jeff Ketland,
You say: “but this sort of knowledge MUST be based on inductive reasoning”…
How might you justify THAT claim? Let me guess: too hard not to assume that claim? Come on…
Why do you think that knowledge (or this sort of knowledge) MUST be based on inductive reasoning? Be careful, and try not to use inductive reasoning (which we all know to be invalid, since you can’t infer oranges from apples, as shown by David Hume) while trying to justify why you think that….
I’m willing to bet a large sum of money you can’t do that. You’ll most likely end up with something like: “it’s hard not to assume that”…Well guess what? For me, it may be hard not to assume that I have 1 000 000 pounds in my bank account and for you it’s hard not to assume that knowledge is based on induction. We’re equally rational and justified.
If you’re honest, you WILL conclude that you DON’T know where knowledge comes from. You have no idea. Also, if honest, you will conclude that it has NOTHING to do with “inductive reasoning”, since knowledge is not the same as e.g. sensory experience, and you can infer knowledge from sensory experience just as well as you can infer the irrationality of Pi from my name being Toivo (you can’t… and it’s foolish, not to mention dishonest, to pretend otherwise)
So what do you say, Jeff? Let’s not be bothered by assumptions like “knowledge must be based on inductive reasoning”, eh?
Toivo:
I would say that with regard to future events, we know that there is a certain degree of likelihood that something will happen, based on our and others’ experiences of similar situations. In the real world, we rely on induction all the time, and while we can’t know our conclusions are 100% guaranteed, we can be satisfied that they have a certain degree of likelihood. When I press the “x” key, my degree of certainty of seeing an “x” on the screen is very close to 100%. However, when I turn the key on my old car, my degree of certainty that it will start is, unfortunately, somewhat lower. Still, it’s fair to say that I know that the way to start my car is by turning the ignition key (even though the chance of this operation being successful isn’t 100%).
I think it’s important to make a distinction between relative and absolute truth. I maintain that a set of what we take to be facts can be called knowledge, even though an unknown subset of those facts may later be revealed to be unreliable. The part that will never be shown to be untrue is that part that can be reasonably called absolute truth, even though we can never be 100% certain that any particular “fact” will never be disproven.
Toivo,
You say: “but this sort of knowledge MUST be based on inductive reasoning”… How might you justify THAT claim? Let me guess: too hard not to assume that claim? Come on… I’m willing to bet a large sum of money you can’t do that.
We know because our minds can reason, and using reason we can see that such inferences are ampliative (i.e., not deductively valid). For example, we might imagine a world where the premises are true, and conclusion false. Or, mathematically, we might define a model of the premises where the conclusion is false.
[I assume that you were joking when you said you would “bet a large amount of money”.]
… you infer knowledge from sensory experience just as well as you can infer the irrationality of Pi from my name being Toivo
I see. So, you don’t believe that, for example, in order to know how many fingers you have, you have to look, using your senses.
So how do you believe knowledge to be obtained?
Pyotr,
Brilliant! We seem to agree to a large extent! But not on the induction part. Just about everything else you wrote.
Jeff Ketland,
you did not seem to answer my “challenge”. You didn’t justify the claim that “this sort of knowledge must be based on inductive reasoning”. What I want is an explanation that would go: bla bla bla, so and so and so, therefore “this sort of knowledge must be based on inductive reasoning”. You never provided that explanation. I am still waiting, and if you can’t provide one, even for yourself, WHY do you continue to believe in the claim???
You stated that our minds can reason. Well, why do you believe THAT then? Perhaps (a BIG perhaps!) you could somehow experience momentarily that you can “reason” that if x = y and y = 1 then x = 1, but apart from that, how far can you get…without using induction?
Also, your answer was a bit ambiguous: did you mean to say that using reason we can see that “this sort of knowledge must be based on inductive reasoning”? If so, how exactly? Reason doesn’t allow us to “see” what “hidden” mental inferences have actually been made. It’s always possible that, for example, a huge AI “sends” knowledge to our minds and times it just so that we are tempted to believe that it was “from” our senses. Moreover, sometimes you can “know” something, but have very little idea how your mind “knows” this. You simply DON’T get to look inside your brain or mind like that. But if so, you don’t know “in advance” how our minds work: i.e. it’s not at all necessary that “this sort of knowledge must be based on inductive reasoning”.
So, again, why do you believe this claim?
I agree that it’s deductively invalid to “deduce” knowledge from e.g. sensory experiences: that’s akin to deducing apples from oranges, it’s simply impossible, since knowledge is not defined in terms of sensory experiences, it’s another kind of thing altogether.
Hmm…let’s see… how many times in your life have you either stated or thought or written something to the effect that “this sort of knowledge must be based on inductive reasoning”? But is this claim justified? I still see no argument to back it up.
I don't think the claim is true; I think it's utter nonsense. In fact, I think it's "absolutely" false by the definitions of 'knowledge', 'inductive reasoning' and 'based on' (as I understand those definitions).
Finally, you’re asking me how I believe knowledge to be obtained. Well, I’ll tell you if you first answer the questions and/or arguments above. :-)
Toivo,
It has been explained to you that inductive reasoning is ampliative. It is not deductively valid. Any particular instance of one of the usual forms of inductive reasoning can be demonstrated invalid by a mathematical argument.
For example,
(1) All As that are O are Bs
Hence,
(2) All As are Bs.
One can show that this is invalid by constructing a counter-model. (Intuitively, an A that is not-O might not be a B.) Similarly with other forms of inductive reasoning.
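If it helps, here is that kind of counter-model written out explicitly, with sets and members that are purely illustrative names of my own choosing, so that the failure of the inference can be checked mechanically:

    # A toy counter-model for the inference "All As that are O are Bs, hence all As are Bs".
    # The names are arbitrary; all that matters is the pattern of membership.
    A = {"a", "b"}   # the As (observed and unobserved)
    O = {"a"}        # the observed ones
    B = {"a"}        # the Bs

    premise = all(x in B for x in A & O)   # every A that is O is a B  -> True
    conclusion = all(x in B for x in A)    # every A is a B            -> False ("b" is the counterexample)

    print(premise, conclusion)  # True False

The premise comes out true and the conclusion false, which is all that invalidity requires.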
Since I have now answered your question, indeed twice and in great detail, are you going to explain how you think knowledge is obtained?
Pyotr,
When I press the “x” key, my degree of certainty of seeing an “x” on the screen is very close to 100%.
[This degree of certainty is also called a credence or a degree of belief.]
Suppose that your degree of belief is 0.99. And suppose that the letter “x” does not appear when you press the key. Since “x” did not appear, it follows that you did not know that “x” would appear.
Therefore, knowledge isn't definable in terms of (high) degree of belief, since high degree of belief is not factive. A statement believed to a high degree may well have a high probability of being true, but that is not sufficient for knowledge.
On the other hand, belief (simpliciter) itself is sometimes defined in terms of degree of belief. E.g., one believes P iff one’s degree of belief in P is sufficiently high. Unfortunately, this can lead to paradoxes. Some Bayesians think we should simply drop the notion of belief (simpliciter) and thus, also, the notion of knowledge (simpliciter). One obvious problem with this is that this is hardly consistent with our ordinary colloquial talk about beliefs, and so on.
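The standard illustration of such a paradox is the lottery: with a belief threshold of, say, 0.99 and a fair 1000-ticket lottery, one counts as believing, of each ticket, that it will lose, while also believing that some ticket wins. Those beliefs cannot all be true together. A rough sketch, with the threshold and numbers chosen purely for illustration:

    # Lottery paradox under a threshold account of belief (illustrative numbers only).
    N = 1000                      # tickets in a fair lottery; exactly one wins
    threshold = 0.99              # "believe P" iff credence(P) >= threshold

    cred_each_ticket_loses = (N - 1) / N   # 0.999 for every individual ticket
    cred_some_ticket_wins = 1.0            # stipulated: the lottery has a winner

    print(cred_each_ticket_loses >= threshold)   # True: one "believes" of each ticket that it loses
    print(cred_some_ticket_wins >= threshold)    # True: one also "believes" that some ticket wins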
JK:
No doubt you're correct in strict philosophical terms, but if the probability of the truth of a proposition has to be 1, then it seems we would have a paucity of true propositions. Is it incorrect to say that I know that crows are black, even though there must have been an albino crow at some time or another, or at least a dark brown one? Can't we say truthfully that human beings have gained the knowledge that crows are black? If "know" and "knowledge" have to have such a degree of certainty, aren't we then restricted to the realm of proof, namely mathematics?
Toivo:
I’m glad we found some common ground. Now, please answer JK’s question.
Pyotr,
… if the probability of the truth of a proposition has to be 1, then it seems we would have a paucity of true propositions.
Right, but the probability doesn’t have to be 1. The (epistemic) probability of a truth can, in principle, be zero or very close to zero. (Popper argued that the probability of physical laws is always either zero or very close to zero.)
There’s no straightforward link between:
(i) the probability of a statement,
and
(ii) its truth value.
In particular, knowledge (as we ordinarily use the concept) doesn’t have to be certain. But it does have to be true. That is,
If A knows that P, then P.
For example, I know that I was born in 1964. However, I do not feel 100% certain about this claim, and I am prepared to countenance a variety of doubts. The point is that I believe the claim, it is in fact true, and I have an appropriate justification.
(If this claim later turns out to be false, then I was simply mistaken in claiming to have known it.)
To sum up: truth (a semantic concept) is not related in a simple way to certainty (an epistemic concept). A statement can be assigned a high degree of certainty and yet be false; or it can be true and have very low degree of certainty.
To take a slightly silly example, suppose that, in fact, though no one knows this, David Cameron is a metallic robot built on Alpha Centauri. (We can imagine the robot is specially designed to fool doctors and most detection devices.) Then the statement “David Cameron is a human being” is false, since he is not a human being. But, on the basis of all the available evidence, this statement would be regarded as virtually certain by everyone.
Thanks, JK. That actually does help clarify things for me somewhat, although I wish you had addressed my crow examples.
WRT Popper, wasn’t he rather eccentric in his idea of scientific knowledge? Of course it’s hypothetical and subject to error (what isn’t?), but to assert that scientific laws are highly improbable? I guess he had some kind of conventionalist belief, which strikes me as downright nutty. I think the idea that there exists an objective, law-governed reality is pretty basic and well substantiated.
Pyotr,
Is it incorrect to say that I know that crows are black, even though there must have been an albino crow at some time or another, or at least a dark brown one?
If “Crows are black” means “All crows are black”, then a single albino crow makes the statement untrue.
Can’t we say truthfully that human beings have gained the knowledge that crows are black?
I think we can rightly say that we know that normal crows are black. The situation is, I suppose, similar to saying that normal human beings have two legs, despite the various exceptional cases of, e.g., birth abnormalities and amputations.
There are statements which are strictly speaking false, but still approximately true (in some hard to define sense). E.g., the Earth is a sphere. We can express a truth by saying “the Earth is, to a reasonable approximation, a sphere”.
Thanks again, JK. I guess I've been thinking more along the lines of applying these ideas to everyday life. Clearly, you're concerned with the precise language of logic, and no one can fault you for that. (In fact, your response is much appreciated given the fact that it must be the middle of the night where you are.)
Pyotr, I tend to agree about Popper. Bayesians insist that our best scientific laws have obtained a high degree of epistemic confirmation under evidential testing.
On the other hand, Popper thought of probability in objective terms, rather than epistemic terms (in 1934, he followed von Mises' frequency interpretation and then later developed his propensity view). Objectively speaking, he may be right. On his view, a law is a logical conjunction of N individual statements, each of finite probability, say r. The overall probability can be made to approach zero by taking N sufficiently large. If the N refers to spacetime points, then scientific laws (like Maxwell's equations) come out as having objective probability zero.
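Spelling out the arithmetic, on the simplifying assumption (mine, not necessarily Popper's own formulation) that the N instances are treated as independent and each gets probability r:

    P(S_1 \wedge S_2 \wedge \dots \wedge S_N) \;=\; \prod_{i=1}^{N} P(S_i) \;=\; r^N \;\to\; 0 \quad \text{as } N \to \infty, \text{ for any fixed } 0 \le r < 1.

So for any fixed r strictly below 1, the probability of the conjunction can be driven as close to zero as you like by taking N large enough.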
It can be very frustrating trying to figure out Popper, whose anti-inductivism verges on incoherence; but he’s one 20th century philosopher of science that many scientists actually admire, and rightly. That was an achievement – particularly as a lot of 20th century philosophy of science is utter bullshit (don’t tell anyone I said that …).
"The overall probability can be made to approach zero by taking N sufficiently large. If the N refers to spacetime points, then scientific laws (like Maxwell's equations) come out as having objective probability zero."
The guy was definitely a (twisted?) genius. I just can’t understand trying so hard to invalidate scientific knowledge.
(If you’re in Edinburgh, it must be starting to get light there. Have you been up all night? That’s dedication to something or other.)
Yes, it’s just getting light here in Edinburgh: 4.20am. The pleasures of exam marking!
I admire your dedication to your field. So you’re a professor, then? Adjunct?
Jeff Ketland,
So… let me digest this.
You have explained to me twice that inductive reasoning is ampliative, and that it's deductively not valid, and by inductive reasoning you mean e.g. the example you just gave me. Ok, I fully, 100% agree with that. Well, I already did agree with it not being deductively valid, so why do you repeat that? As for being ampliative, I would've agreed with you YESTERDAY too, but I didn't know what the word 'ampliative' means, so now I went to Webster's online dictionary and looked it up. Of course! Of course inductive reasoning is ampliative! I've known that for ages! And I agree with that 100% too. You're NOT TELLING ME ANYTHING NEW HERE!
Was this what I was asking for? That you tell me that inductive reasoning is ampliative and deductively invalid? Old stuff that I’ve known since I learned what “inductive reasoning” is about 1 year ago.
Do you think you have really answered my question, twice AND IN GREAT DETAIL? Are you sure I was asking you what inductive reasoning is or isn’t? Really?
No, that wasn't what I was asking for, and if you even once more tell me that inductive reasoning is ampliative and deductively invalid (I fully agree with those statements), my head is going to explode. :-)
Let’s return to your post where you made a claim that I want you to back up (a thing you STILL HAVEN’T DONE!):
Jeff Ketland writes:
“Toivo,
Also note that when you say:
When you’re using your computer …
do you KNOW that e.g. “a certain letter is going to appear on the screen”?
This is very closely related to Hume’s Problem of Induction. If you know, then, how do you know? By inductive reasoning? But how is inductive reasoning justified?
[By the way, I do think that you do know. But this sort of knowledge must be based on inductive reasoning. The status of induction is, to say the least, very problematic.]
“
Ok, look, you’re talking about knowing “a certain letter is going to appear on the screen”, you mention Hume, you say “If you know, then, how do you know?”, you answer yourself by claiming that one knows this by inductive reasoning. THIS is the claim that I want you to back up. That it’s specifically inductive reasoning and not e.g. logical deduction, messages from huge AI, precognition, weird quantum phenomena (hard to resist! :-)) that allow one to know.
Then you make the claim, referring to ordinary knowledge of the sort one has when using one's computer: "But this sort of knowledge must be based on inductive reasoning".
How do you know that knowledge is based on inductive reasoning? No, I’m not asking if inductive reasoning is ampliative, or that we might be able to see that using reason. That’s not what I’m asking. WHY do you think it’s INDUCTIVE REASONING that’s responsible for knowledge and not SOMETHING ELSE, ANYTHING ELSE, like the examples I mentioned?
Do you understand what I’m asking?
Jeff Ketland,
Also, about your use of the word “know” and your definition of “knowledge” as justified true belief:
You say you know you were born in 1964, but you're not 100% certain about it. But then you say that the claim is true. But didn't you just say that you're not 100% certain about it? Those things are mutually exclusive. If you say X is true, you can't be at all uncertain about it, and if you're a little uncertain about it, you can't say X is true. You can say that it's very, very likely true, but just to say it's "true"? No. I don't think you can do that.
Really, what does go on inside your mind? You think about the year 1964. The likelihood grows in your mind that you were born that year, and then it's very high. What do you do next? Say that it's suddenly true? How come? Since when did you have "access" to truth in that way? You only HAVE the likelihood, and you can only make statements about that likelihood (of something being true), not the actual, past state of the universe as if you had direct access to it in your mind.
However, I do agree with you on the next paragraph:
“A statement can be assigned a high degree of certainty and yet be false; or it can be true and have very low degree of certainty.”
That’s perfect!
Certainly, it’s a property of “truth” that “P is true” implies that P and vice versa.
But the definition of 'knowledge' as justified true belief is an ill-conceived one, although I fully understand that it's logical for you to talk about knowledge as you have done in previous posts, since you define 'knowledge' like that.
The main reason why that definition is “bad” is the following: we (humans) don’t have access to “truth” directly (e.g. about the past, or what’s inside Europa’s surface), so for us to decide if X is knowledge, we would have to know that X is true, but how on EARTH are we going to find THAT out??? Sure, we have likelihoods and estimates, but those AREN’T the same as truth. There’s no way to have a way to decide if X is knowledge or not, since we cannot decide if X is true or not. We simply don’t have “absolute” knowledge and can’t say whether or not something is true.
What we do have is “lesser” knowledge, i.e. expectancies and likelihoods, and those we can and actually do use. It would make MUCH more sense that in order to label X knowledge we don’t have to have “truth about X”, just a sufficiently high likelihood. That’s it. We have just redefined knowledge as “justified high-likelihood belief”.
But what of "justified"? Why does knowledge have to be justified? If we already have the high likelihood for some belief Y, what EXTRA do we gain by having some justification? Isn't the high likelihood justification ENOUGH by ITSELF?? If not, what is it that this "justification" gives us, exactly? Can we predict the weather without "justification"? Yes, we can. Can we drive cars, repair machines, type without "justification"? Sure we can. We only NEED the high-likelihood belief. So what is this ghost of "justification" that haunts us? Let's axe it, too! There, now we have redefined knowledge as "high-likelihood belief", the "actual" definition I had before I even knew how to speak: e.g. when I was, say, 3, I KNEW that when I walked forward, my feet were going to touch the ground, and the ONLY thing I meant by that was the high likelihood. It wasn't anything else. Anything that had high likelihood automatically qualified as knowledge to me, and I used it in my daily activities. I think it's similar for other people.
Then, of course, philosophers came along, and said: No, no! This is not how we "ought" to go around the world, being so honest with ourselves. Instead, let's invent some arcane nonsense that we call "theory of knowledge" and pretend everybody actually thinks of 'knowledge' this way. Then let's have posts for academics in universities to pontificate about the intricacies of so-called, alleged "inductive reasoning" that supposedly everyone is doing.
:-)
Toivo,
How do you know that knowledge is based on inductive reasoning?
In fact, you claimed to know what would happen if you pressed a certain computer key, presumably on the basis of previous experience. I pointed out that this is an example – in fact, almost a textbook example – of inductive reasoning. This is quite trivial.
You think about the year 1964. The likelihood grows in your mind that you were born that year, and then it’s very high.
What do you mean by “likelihood”? Do these “likelihoods” of yours satisfy Kolmogorov’s axioms? How are they updated? What are their dynamics?
If you say X is true, you can’t be any uncertain about it.
Of course you can.
A statement is true iff things are as the statement says they are.
The degree of belief in a statement is something entirely different.
A statement may be true but uncertain.
We … can’t say whether or not something is true.
Your nihilism is not very compelling.
Of course we can say whether or not something is true. Humans say (sometimes correctly) whether or not things are true every day. It is called talking, reasoning, observing, thinking, etc.
More generally, there is nothing stopping you studying this subject instead of rambling on. Why don’t you purchase some books on some of these topics and study them?
Then let's have posts for academics in universities to pontificate about the intricacies of so-called, alleged "inductive reasoning" that supposedly everyone is doing.
Logicians, computer scientists and mathematicians, for example? Shocking! Evil!!
Pyotr,
So you’re a professor, then? Adjunct?
Yes. I work at Edinburgh University, teaching mathematical logic and philosophy. My position (as of August) is equivalent to what in the US is called “Associate Professor”.
Jeff Ketland,
Please. I beg you. Do not misunderstand me.
You wrote:
“In fact, you claimed to know what would happen if you pressed a certain computer key, presumably on the basis of previous experience. I pointed out that this is an example – in fact, almost a textbook example – of inductive reasoning. This is quite trivial.”
Please note: I did not say this was inductive reasoning. I did NOT say I used previous experience to infer this. I only said I know this.
It is YOU who said that it’s “presumably on the basis of previous experience”. It is YOU who said that it’s inductive reasoning.
It does not follow that if X is knowledge, it is based on inductive reasoning.
Why do you claim it's "on the basis of previous experience" (I said NO SUCH THING)? Why do you claim it's inductive reasoning (again, I said NO SUCH THING)?
Please, please, understand what I’m asking. Please.
Toivo,
Why do you claim it’s inductive reasoning?
You yourself stated, as an example of knowledge: you know that, the next time you press the key “x” on the computer, then an “x” will appear on the screen.
I pointed out that this must be based on inductive reasoning. But you insist that you do not regard this as an example of inductive reasoning.
But it is almost a textbook example of inductive reasoning. Assuming that you have a normal human psychology, it involves ampliative reasoning to future events on the basis of previous experience.
Why do you believe — indeed, claim to know — that when you press “x” on the keyboard, then a letter “x” will appear on the screen? Presumably on the basis of previous experience. If not, then what? Magic? Does God speak to you?
Jeff Ketland,
Yes, I know that when I press key “x” on the computer, then an “x” will appear on the screen. I really do know this.
You claim that I must be using induction, since 1) I must have "normal" human psychology and 2) all people with "normal" human psychology gain knowledge by ampliative reasoning (i.e. inductive reasoning).
You haven't justified 1) or 2). How do you know I have "normal" human psychology? How do you know that all people with "normal" human psychology gain knowledge by ampliative reasoning?
You can't use inductive reasoning to establish 1) or 2), since inductive reasoning is invalid. So, again, how do you know 1) or 2)?
As to why I believe — indeed, claim to know — that when I press “x” on the keyboard, then a letter “x” will appear on the screen:
As I said, that is knowledge (meaning: it has high likelihood). I don’t know where it comes from, or if it indeed HAS ANYTHING to do with my mind. I have it (the likelihood) and I know what it means.
I don't assume, as you seem to be doing, that it MUST have been inferred from previous experience. No, I don't commit the argument-from-ignorance fallacy.
So you admit it? The only reason why you think inductive reasoning exists AT ALL (other than in the sense that people do it) is that you just ASSUME it or have FAITH in it. Just like some people may assume the existence of fairies at the bottom of the garden. Well, then you are as deluded as they are.
P.S. Neither magic nor God exists. What do you take me for? A complete loon?
What do you take me for? A complete loon?
You said it …
Jeff Ketland,
Ok. I’m sorry. Perhaps I got a little carried away. If I’ve insulted you, I am deeply sorry.
But I would like you to address my points. Please.
I'll try to be more civil.
Toivo,
Let’s examine your proposed definition of “knowledge”, which is:
(TDK) A knows that P iff A assigns high likelihood to P.
Now, it has been pointed out several times that this is not factive: statements with a high likelihood can be false.
For any reasonable analysis of knowledge, we must have the basic epistemic principle,
(F) If A knows that P, then P.
Consider Jerry Falwell. We have the empirical fact,
(1) Falwell assigned high likelihood that gays go to hell.
By your definition, we get,
(2) Falwell knew that gays go to hell.
By (F), we get,
(3) Gays go to hell.
This is what you are committed to if you believe (TDK), (F) and (1).
Presumably, unless you really are a loon — which is possible on the evidence I’ve seen — you think that (3) is untrue.
So, which of (TDK), (F) and (1) do you believe is false?
In fact, we can deduce a contradiction from your definition of knowledge using various empirical facts.
We have the empirical fact,
(4) Hitchens assigns high likelihood that gays do not go to hell.
Hence, by your definition,
(5) Hitchens knows that gays do not go to hell.
Hence, by (F),
(6) Gays do not go to hell.
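For anyone who wants to see the clash in one place, here is a brute-force check, with propositional labels of my own choosing, that the two (TDK) instances, the two (F) instances, and the facts (1) and (4) cannot all be true together:

    from itertools import product

    # H  = "gays go to hell"
    # LF = (1) Falwell assigns high likelihood to H;  LH = (4) Hitchens assigns high likelihood to not-H
    # KF = "Falwell knows H";                         KH = "Hitchens knows not-H"
    def implies(p, q):
        return (not p) or q

    consistent = any(
        implies(LF, KF) and implies(LH, KH)        # the two (TDK) instances
        and implies(KF, H) and implies(KH, not H)  # the two (F) instances
        and LF and LH                              # the empirical facts (1) and (4)
        for H, LF, LH, KF, KH in product([True, False], repeat=5)
    )
    print(consistent)  # False: no assignment of truth values satisfies them all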
Jeff Ketland,
By attacking my 'likelihood', aren't you just trying to avoid my questions TO YOU?
All the same, I can answer your questions:
Believing something is not the same as assigning a high likelihood. It is YOUR MISTAKE to think they are the same.
In my earlier definitions I painstakingly tried to explain why it’s a SPECIFIC kind of expectancy that I refer to as ‘likelihood’, that it’s not just anything at all.
I know very intimately what I mean. I also know, thus, what I mean when I say that I know X.
But I don't assume other humans are like me. I don't assume they form likelihoods. I don't assume "their" likelihoods are or should be the same as "my" likelihoods. I assume none of that. Can you also assume none of that?
Now,
It's quite likely that other humans are like me in every respect. But it's not necessary. It's quite likely that if I "plugged my mind" "into their mind" (as a "sort of" 2nd mind of mine, only auxiliary), I would detect or "experience" something like my definition of "likelihood". If I imagined "I went inside their mind" and then "blocked everything else I knew", i.e. I would effectively experience what it's like to be them, I would most likely experience that they, too, form likelihoods. But I suspect that if they e.g. believe in God, then that belief (even though the person might be Alvin Plantinga or some other theist) has very, very low likelihood. The person, however, does act AS IF that belief was "likely", but I (inside his mind) would see that it isn't (it doesn't match the definition of "likely"). Those beliefs could be called pretend-knowledge, but they can be classified as non-knowledge as well.
I had one of these beliefs when I used to believe in God, as a child. I hoped, really badly, and wanted God to exist. It felt comfortable and good. I didn't know that "God exists" the same way I would know that "I have 2 feet". In fact, I didn't "know" that "God exists" at all. All I had was a sensation similar to, but not the same as, "knowledge". Even at the time, I "sensed" it: I didn't have the same "expectancy" about God's actions or existence that I had about normal, day-to-day stuff like eating, walking, playing football.
Now, back to “my own” perspective:
Since it’s quite likely that other humans form likelihoods (that is, they understand the meaning of “likely” and use it to decide their beliefs), it’s likely that, in general (excluding God-belief + some others, but “in general” means really day-to-day, practical things like whether the laundromat is open or not), other people’s beliefs are likely. E.g. if my brother has quite high confidence that the oven is set to 200 degrees Celsius, then it becomes quite likely that the oven IS set to 200 degrees Celsius. But I “trust” people ONLY IF their beliefs are likely to be true. Never by default.
Also, if all people form likelihoods, who’s to say what’s the “true” likelihood? Well, first of all, it’s not a given that all people form likelihoods. It’s likely that they do, so whatever they believe already comes from the “filter” of or is preceded by the word “likely”. So this “upper” likelihood is the “true” likelihood: we don’t have direct access to other alleged likelihoods. Nothing is assumed.
ALSO:
When I think about the likelihoods of other people, they aren’t “real” likelihoods. I mean, I am not “locked” to their perspective (and even if I was, it’s not *necessary* that they indeed have “likelihoods”), so I can assess the minds of several people and myself while sitting on my chair. Then I ask myself: Is this likely? and get the answer (using the meaning of “likely”). But, again, I don’t know HOW I get the answer or how my mind operates. Let’s not assume.
Also, there's a danger in imagining a group of people having different likelihoods about something. Example: a scientific controversy about something. Now, being scientists, the people involved would have different evidence, and most likely different likelihoods, too. What's the "true" likelihood? Say nobody knows what'll happen when a certain experiment is performed: perhaps it will be what some theory predicts, perhaps not. So, what's the "true" likelihood that some event A happens, when all the scientists have different likelihoods about event A occurring?
Easy, just evaluate the likelihood that A occurs. The "I" already exists on the scene: the "I" that has scanned the minds of the scientists to find out that they have different likelihoods, to find out what experiment they were doing, etc. The same "I" can now evaluate the likelihood that A occurs. The "I" is just an "upper perspective" on the scene. It only has to apply the meaning of "likely" and there you go: some likelihood that event A occurs, even if this likelihood differs VASTLY from the likelihoods of the individual scientists.
Well, if there is so much difference, why call the individual scientists' "likelihoods" likelihoods? They are not "really" likelihoods, after all, are they? Well, that most likely depends on "who you are". If you "go inside" the mind of any individual scientist and BLOCK everything else you know, then most likely you would have a different likelihood for each different scientist. So they are "real" likelihoods, IN THAT SENSE. But as soon as "you are" some 3rd perspective or something else, and apply the meaning of "likely" again, you may get a different "likelihood".
Please, do not mistake me for advocating some relativism or postmodernist nonsense. I can assure you: I am even more against them than you or OB are. It's just that I'm afraid that if I haven't explained myself to you clearly enough, you will be confused and misunderstand me, again.
So, my answers:
(1) is false. It's highly unlikely that Jerry Falwell assigned a high likelihood that gays go to hell. It's highly unlikely that anyone can assign a high likelihood to such a belief. It can be done, yes. Just as, in the past, people have assigned high likelihoods to beliefs we today know are most likely false. But usually the likelihoods assigned to beliefs are correlated with the evidence and/or reasons for those beliefs. No, that doesn't automatically mean that people infer one from the other. But it's highly likely that evidence affects likelihoods, and it's highly likely that to be able to have most beliefs (e.g. that "gays go to hell") you would have to have evidence, too. It's not necessary, no. But knowing that Falwell didn't have such evidence, and knowing other things about Falwell and about believers in God, the likelihood that "gays go to hell" drops very fast to near 0. Did I infer this likelihood? No. Logic doesn't have anything to do with it. Even if you took all my knowledge, it would be impossible to deduce that "gays go to hell" gets the likelihood that it has as I now think about it. But it's highly likely that my previous knowledge and, yes, sensory experience affected that likelihood. And *THAT* is the reason why I talk about evidence and reasons why I believe what I believe: they affect likelihoods, both mine and yours… most likely.
Now, about Hitchens. Does he assign a high likelihood that gays do not go to hell? (By the way, that’s the same as assigning a low likelihood (think of set complements) that “gays go to hell”).
I know who Hitchens is. It’s not that. It’s just that I’m a bit uncertain. A bit.
However, after some reflection I understood that it's very likely that Hitchens assigns "high likelihood that gays do not go to hell". Fine. So you could say that I know (4), but not to such a great degree as I know that (1) is false.
So I say: Hitchens most likely knows that gays do not go to hell. That is, I agree with (5).
But, I don't agree with (F). What do I mean by "I don't agree"? Well, you can certainly make that definition, but I would use the word 'truth' for that: if X is true, then X. But this can't be applied to humans and knowledge. There's nothing in humans or knowledge with which you could define something, call it BlackBox, such that if Person A BlackBoxes B, then B. The problem is that humans and knowledge don't have a "direct" "link" to truth.
As to Kolmogorov's axioms (as a 1st year maths student I know/remember what they are), no, my "likelihood" notion DOES NOT satisfy these axioms, SIMPLY BECAUSE it's not a number. That's it. Simple, isn't it? Thus, my "likelihood" cannot, by definition, be called a "probability" or a "probabilistic" notion. For something to satisfy Kolmogorov's axioms, that something needs to be a number, right? And "likelihood" isn't that. It's a certain kind of "sense", but it's soooo hard to put into words.
But what if we create a measure for likelihood? Like, we decide that the real number 0 corresponds to "0-likelihood" and that all likelihoods lie on a real line, so that there would be an injective function from "likelihoods" to the real line from 0 to +infinity. But is this function surjective? :-) (Sorry, can't resist. I've just had an exam about this.) Hmmm… I don't know. Maybe it's possible to devise this kind of model (and it would be ONLY A MODEL) for "likelihoods", and make it satisfy Kolmogorov's axioms and thus call it a probability. Still, only the model for "likelihoods" would be a probability, not the "likelihoods" themselves.
Toivo,
I can’t deal with most of what you’ve written (it seems to ramble on).
I am asking about your use of the term “likelihood” because I really don’t know what you mean. Normally this term would be understood to denote a rational credence function (a degree of belief). That is, an epistemic probability function over statements satisfying Kolmogorov’s axioms. (If a credence function does not satisfy these axioms, then the subject is, in a sense, irrational in terms of their betting dispositions: this is the Dutch Book Argument.)
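To make the Dutch Book point concrete, with numbers invented purely for illustration: suppose your credence in P is 0.6 and your credence in not-P is also 0.6, which violates additivity. A bookie who sells you a £1 bet on each, priced at your own credences, is guaranteed a profit:

    # Toy Dutch Book against credences that violate the probability axioms (illustrative numbers).
    cred_P = 0.6
    cred_not_P = 0.6          # additivity would require cred_P + cred_not_P == 1

    stake = 1.0               # each bet pays `stake` if it wins, 0 otherwise
    price_P = cred_P * stake          # the price you regard as fair for a bet on P
    price_not_P = cred_not_P * stake  # likewise for a bet on not-P

    total_paid = price_P + price_not_P   # 1.20 paid for the two bets
    total_won = stake                    # exactly one of P, not-P is true, so exactly one bet pays out

    print(round(total_paid - total_won, 2))  # 0.2: a guaranteed loss, whatever the truth about P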
If you don’t think that Jerry Falwell assigned a high likelihood that gays go to hell, then I suggest that you read some of his well-known comments. You appear not to grasp that knowledge is, by definition, factive.
I am glad to hear that you know what a surjective function f : X -> Y is.
Probability functions are functions into the interval [0, 1], satisfying Kolmogorov's axioms. Their domain can be either sets of events or sets of sentences. (There is a correspondence between a Boolean algebra of sets and the propositional Boolean operations.)
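For reference, the axioms in question, in their finite form, are:

    P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \ \text{whenever } A \cap B = \emptyset.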
If you’d like to study the Theory of Measurement (which contains a discussion of epistemic probability functions), then I suggest that you look at the following classic text, now out in Dover:
Krantz, D. H., Luce, R. D., Suppes, P. and Tversky, A. 1971. Foundations of Measurement. Volume 1.
And instead of continuing this pointless discussion, it might be better now if you went and read about:
(i) The analysis of knowledge: start with the Stanford Encyclopedia article that I linked to above, and this will guide you to further relevant literature.
(ii) The (standard) semantic theory of truth, as set out by Alfred Tarski in his 1933 article Der Wahrheitsbegriff and his shorter 1944 article “The Semantic Conception of Truth”, that I linked to above; and also to Wilfrid Hodges’ article on truth definitions on the online Stanford Encyclopedia of Philosophy.
(iii) The Bayesian theory of epistemic probabilities. A standard textbook is Howson & Urbach: Scientific Reasoning. Also, the Wikipedia article on Bayesian probability seems quite good.
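As a quick taste of (iii), here is a toy Bayesian update, with a prior and likelihoods that I have simply made up for illustration:

    # Toy Bayesian conditionalization: credence in a hypothesis H after learning evidence E.
    prior_H = 0.2              # P(H) before the evidence
    p_E_given_H = 0.9          # P(E | H)
    p_E_given_not_H = 0.3      # P(E | not-H)

    p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)   # total probability of E
    posterior_H = p_E_given_H * prior_H / p_E                        # Bayes' theorem

    print(round(posterior_H, 3))  # 0.429: the credence in H rises, but is nowhere near certainty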
Jeff Ketland,
I don’t think this discussion is pointless. I would like to continue this discussion, if you will. But if you think I’m a loon, not worthy of your time or just damn boring, feel free to point that out so I won’t pester you anymore.
Yes, you’re right. I do ramble on. I have thought about these things a lot, but putting it all together for an easy-to-read easy-to-understand and short explanation is hard for me. But I try.
About “likelihood”:
As I have tried to explain in my previous posts, "likelihood" is a certain sense of expecting something to be true or expecting something to happen. E.g. I walk on the street and I'm about to put my foot on the ground. At that moment, I have a sense or a thought that "my foot is really going to hit the ground", although I'm never 100% certain that it will. When people normally say stuff like "it'll probably rain tomorrow", they mean it's "likely" that it'll rain tomorrow. Sometimes they even use the word "likely". I can't do better than that. If you still don't understand, perhaps you aren't even trying to understand or have decided not to. As I've said, "likelihood" is a hard thing to put into words, but nevertheless there already are people who understand the "likelihood" idea (a friend at Uni and my little brother). Intuitively, everybody (even you) understands it and uses it all the time, but only some people are able to talk about it honestly. But you can do this experiment: hold a pen above the floor and ask yourself what will happen when you let go. Now, don't assume you're doing or have to do inductive reasoning. Forget all about inductive reasoning. What will happen? You will most likely experience a sensation that you now expect the pen to drop and hit the floor. That sense of expectation, the sense that something is "really, really going to happen", is what I call "likelihood".
I hope you understand it better now. But I can’t really do better than this, other than give more examples.
As to the Dutch Book Argument:
You mean stuff like: the "likelihood" that any one of the possible outcomes of some experiment occurs should be "maximum likelihood" (in probability: 1), or that the "likelihood" that none of the possible outcomes occurs should be "minimum likelihood" (in probability: 0)? And that…
I know Jerry Falwell very well. Not personally, but I've read a lot of stuff about him, especially lately. And I know what kind of "good Christian" he was. ;-) So? It's still extremely unlikely that he assigned a high likelihood that "gays go to hell". He certainly believed that, no question about that. But believing something doesn't automatically mean that you have that certain "expectation" about what you believe in. As I've said, there are things that are non-knowledge for person A, but that person A still believes.
You say that I don't grasp that "knowledge" is factive. Ummm… but wouldn't you agree that 'knowledge' as I define it is NOT factive? If you agree, great. That's what I think. And I agree that 'knowledge' as you define it IS factive (by its own definition). What seems to be the problem here? We're both right, but talking about different definitions of 'knowledge'?
About mathematics applied to epistemology:
I have mixed feelings. I study mathematics myself.
But most of the stuff about "subjective probabilities" and epistemic probability functions (and I have read a little… but only the "subjective probabilities" part) seems dishonest. The authors make no attempt to say that this is ONLY a model for "knowledge", and that it cannot "override" what we know by the definition of "likelihood". In contrast, it seems some people actually believe (but I'm not so sure about their "likelihoods") that people compute probabilities in their minds, or that knowledge itself is ACTUALLY just probabilities.
However, I might look into it, since it’s maths after all, and at least it might be helpful in designing artificial intelligence, which will need a MODEL of “knowledge”, since, at the moment, we can’t “give it” the ability to form “real” knowledge.
I did go to the Stanford Encyclopedia. But then I came back, because the article you linked to is not helpful. It's not honest. It starts from some assumptions about how knowledge is formed or what it is. Moreover, I know what knowledge is and how to use it. No, this is not arrogant, this is honest. I've known this since I was a child. Most of epistemology is bunk, I think. Please, don't misunderstand me as anti-intellectual. On the contrary, I love thinking, I've done the IB (containing a course in theory of knowledge), and spent almost a year wasting almost all of my free time thinking about "knowledge", "truth" etc. I'm not as "clever" as you (as I found out when I googled your name), but on this specific issue, I don't think there is anything I might learn from books on philosophy. Well, perhaps only a little, but I already understand the big picture, and I know too many examples of well-educated adults spewing nonsense book after book and calling it philosophy or even science.
I also visited the Tarski link, above. I looked at some of the pages and read a paragraph here and there. It’s not convincing. I already know what “truth” means. I’ve known this since I was a child. I don’t need a philosopher to explain to me what it is. I know what it is and how it “works”.
Please don’t refer me to books or articles. I live on university campus, and if I want to read something, I’m sure I can find it, get it and start reading within 10 minutes, even if it’s not available online. If I was interested in books on epistemology, I would be reading them. Instead I’m interested in what you, Jeff Ketland, think.
Now, you STILL haven’t answered my questions, that I referred to at the beginning of the previous long post. Are you deliberately trying to “escape” that?
Toivo,
You need to go to a library, and get some of the books that I recommended, and read them.
Best of luck.
Fine.