ABSTRACT

At the level of abstract theory, a collective decision is simply a coordination of intentions. No normative evaluation of the outcome can be deduced logically from the fact that it is chosen. But even if a logical deduction cannot be made, there is a strong tendency to expect chosen outcomes to be good. Discussions of options tend to be discussions not of “What shall we do?” but rather of “What should we do?” To ask “What should we do?” suggests that the question has a definite answer, and that it ought to be possible to know it. But that may not be possible, and the failure to deal with this can lead to serious misunderstandings. For this reason it is important to explore in some detail how knowing occurs. This chapter begins with a general discussion of what knowing is and how it is accomplished, as a prelude to a discussion of how knowing applies to what a collectivity ought to do, and why the issue is important for collective decisions. This is followed by a discussion of the way that knowing what a collectivity ought to do might be applied to each of the modes of collective decision-making. The subsequent section discusses criteria for good collective decision rules in general. A final section summarizes the chapter.

What is Knowing, and How is it Accomplished?

A conception of knowing that is found in the philosophical literature is that person P knows thing T if:

1. T is true.
2. P believes that T is true.
3. P has a good reason for believing that T is true.1

It appears that no one doubts the sufficiency of the three stated conditions. They are regarded as all necessary because: (1) a person, it would appear, cannot know something if it is not true; (2) even if something is true, a person clearly does not know it if he does not believe that it is true; and (3) even if something is true and a person believes that it is true, we would not say that he knows it if he does not have a good reason for his belief.

For example, if the New York Yankees won the 1956 World Series, and if I believe that the New York Yankees won the 1956 World Series, most people would not agree that I know that the New York Yankees won the 1956 World Series if the reason I give for my belief is that New York and Yankees both have seven letters, and 56 is divisible by seven. To qualify as knowing, a person who believes that something is true must have an acceptable reason for his belief.

This conception of knowing makes it necessary to determine whether a thing is true before it can be determined whether anyone knows it, and this is a significant limitation. It may seem strange to call this a limitation, for how could it possibly be asserted that a person knew a thing if it were not known whether the thing was true? The answer is roundabout. First it must be noted that changes in general opinion concerning what things are unquestionably true occur in the widest variety of places:

1. In the 1960s at least one young geologist was denied tenure for supporting the then-heretical view that the earth’s crust was composed of plates that moved about and slid under each other. That view is now orthodox.
2. It is not clear that anyone bothered to ask the question, but if asked whether the mass of an object was independent of its velocity, it is hard to imagine any nineteenth-century physicist pausing more than a second before responding, “Of course.” Modern theories of relativity say that it is not.
3. For centuries mathematicians understood that negative numbers did not have square roots. Then “imaginary” numbers came along. The question of whether negative numbers have square roots is now understood to be entirely a question of the framework in which the question is asked.
4. For centuries the dominant view was that it was possible for one human being to own another. Now almost no one makes such an assertion.
5. In the seventeenth century the Church asserted that the earth was the center of the universe and felt so strongly about this that Galileo was threatened with torture for arguing otherwise. It has been a very long time since any theologian has argued that an earth-centered universe was essential.

While I have refrained in these examples from saying that the individuals involved “knew” the things that later came to be seen as not true, it is not clear that the individuals themselves would have refrained. In each case they might well have asserted that they were as sure of the thing in question as they were of anything. And yet it came to be seen to be otherwise.

One possible response to this state of affairs is to say that there is an unfortunate tendency of people to think they know things when they don’t, and an appreciation of Bayes’ Theorem would help them realize this.2 This application of conditional probability can be stated as follows:

If there are two possible states of the world, A and B, and if, from some unspecified source, the probabilities of the two states, prior to the incorporation of the implications of data D, are $P_A$ and $P_B$ respectively, and if the probabilities of observing data D given the two states are $P_{D \mid A}$ and $P_{D \mid B}$ respectively, then the implication of observing data D is that the probabilities of the two states of the world become

$$
P_{A \mid D} = \frac{P_{D \mid A}\, P_A}{P_{D \mid A}\, P_A + P_{D \mid B}\, P_B}
$$

and

$$
P_{B \mid D} = \frac{P_{D \mid B}\, P_B}{P_{D \mid A}\, P_A + P_{D \mid B}\, P_B}
$$

respectively.

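To make the update concrete, here is a small illustrative sketch of the computation; the prior and likelihood values below are hypothetical numbers chosen only for the example, not figures from the text.

```python
# Hypothetical illustration of the Bayesian update stated above.

# Prior probabilities of the two states of the world, A and B.
p_a = 0.5
p_b = 0.5

# Assumed probabilities of observing the data D under each state.
p_d_given_a = 0.9
p_d_given_b = 0.2

# Total probability of observing D (the shared denominator).
p_d = p_d_given_a * p_a + p_d_given_b * p_b

# Posterior probabilities of the two states after observing D.
p_a_given_d = p_d_given_a * p_a / p_d   # 0.45 / 0.55, about 0.818
p_b_given_d = p_d_given_b * p_b / p_d   # 0.10 / 0.55, about 0.182

print(p_a_given_d, p_b_given_d)
```

With these hypothetical numbers the data favor state A, but so long as neither prior is zero, no finite body of data drives either posterior exactly to zero.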
One important idea that is highlighted by Bayes’ Theorem is that mere improbability can never yield the implication that a theory is false. Whenever there are two (or more) explanations that are not logically ruled out by the available evidence, the fact that one theory accounts for the evidence much more parsimoniously than any other can never be sufficient to establish that the theory in question is right and all others are wrong. From a Bayesian perspective, anyone who says that he knows anything is deluding himself. Evidence is inevitably finite, while the number of possible explanations is infinite. There can never be sufficient evidence to justify an assertion that all alternative hypotheses have been refuted. From this perspective, the only interpretation that can be placed on the phrase “I know” is that it is a sloppy approximation for “I believe that it is highly likely that.”

The difficulty with this Bayesian interpretation of “I know” is that it does not fit the way people use language. Consider another example. Almost everybody knows that George Washington was the first President of the United States. Nobody says, “I believe that it is highly likely that George Washington was the first President of the United States.” They say, “I know that George Washington was the first President of the United States.” Wouldn’t you? Actually, under the Articles of Confederation, several individuals held the title of President of the United States.3 The first of these, John Hanson of Maryland, not George Washington, was therefore the first person to hold the title of President of the United States. The purpose of the example is to illustrate that the phrase “I know” can be used in a perfectly understandable way about something that is not true. If the meaning of “I know” is to be found in an inquiry into how it is used, then it can be seen that people do not in general operate in terms of the intermediate probabilities required by Bayes’ Theorem, and it is not necessary for the predicate of “I know” to be true for the phrase to be used in a perfectly understandable way.

This illustrates that “know” has two distinct meanings, depending on whether or not one is caught up in the illusion of certainty. From the perspective of a person who says “I know,” or of someone who shares the same framework, something that is “known” is true. However, from the perspective of someone who is engaged in an examination of language and beliefs, that which is “known” need not be true. From the latter perspective,

To know is to make a practice of not doubting, and have a reason for that practice that is accepted by the persons with whom one communicates.

In other words, from this perspective the second and third of the requirements for a thing to be known must be satisfied, but the first need not be satisfied.

Knowing in this sense is a cognitive strategy. It works because human minds and experiences are sufficiently similar. Furthermore, people are sociable and to a substantial degree are capable of being persuaded to adopt the assumptions employed by others. This permits the development of communicating groups that make a practice of not doubting the same things. We cannot communicate with one another without making assumptions: We assume, without any direct evidence, that others are conscious beings. (A society of solipsists is a contradiction in terms.) We assume that others will understand at least some of the words or signs that we plan to use in the same way that we understand them. Thus no discussion can begin at the beginning. Efforts to communicate involve beginning in the middle and risking incomprehension, which can sometimes be detected.

Consider then the idea that knowing is accomplished by joining others in what they assume or by persuading others to join you in what you assume. This idea feels uncomfortable. We want what is “known” to be “objectively” true, true whether anyone knows it or not. This drive for objective truth in what is “known” may have an evolutionary rationale. It is hard to bring all of one’s efforts to bear on a task when one has doubts. Doubts are sometimes productive, but evolution would select individuals that are capable of setting aside their doubts, after entertaining them, and acting with conviction. A person does very little harm to himself by indulging in the delusion that the mass of an object is independent of its velocity, or that George Washington was the first President of the United States, whereas his probability of surviving could be seriously jeopardized by a failure to make logically invalid inferences based on observed regularities. People might starve if they insisted on not eating until it could be incontrovertibly proved to them that their eating did not violate the rights of any other plant or animal to live. (A “proof” establishes a correspondence between a set of assumptions and a conclusion, and a basis can always be found, if one wishes, for not accepting an assumption.)

We strongly desire to have an irrefutable basis for asserting that what we believe is really true. But the practical significance of whether we are able to achieve this is slight, and more political than philosophical. There is no general algorithm for determining the truth status of propositions. Things are agreed to be true by a variety of different consensus-creating procedures. Sometimes a proposition is encountered that nearly everyone agrees is true. (The earth is round.) If we encounter someone who says that such a thing is not true, we decide that he must be joking, or else he is crazy. We dismiss such a person, deciding that there is no need to pay attention to what he says on this issue, and quite possibly on other issues as well.

The importance of knowing what is definitely true is connected to concerns about the appropriateness of dismissing people who say things that seem nonsensical. If we can be absolutely sure that they are wrong, then we can know that we are right to dismiss them. However, if the processes we use to determine what is true are not infallible, then those whom we would like to dismiss may have reasonable unsatisfied claims on us. The question of what we know and how we know it is thus a question of what obligations we have to those who disagree with us. To persist in asserting that we have absolute knowledge is to erect a barrier against those who see things differently than we do. To acknowledge that our ways of knowing are fallible is to become more susceptible to the claims of those who challenge our ideas.

We fight a deeply ingrained instinct when we try to maintain a skeptical open-mindedness about all ideas. In fact, while it is possible to be so hesitant that we are unable to act, we cannot doubt all ideas simultaneously, because an idea must be formulated in words before it can be doubted, and this involves us unintentionally in the assumption that we know what the words describing the idea mean. We lack a valid basis for asserting that what we do not doubt is incontrovertibly true, but we are unable to doubt all that is dubious. All we can do is to nibble at the edges of our convictions, here and there replacing an identifiably dubious assumption with an assumption whose limitations are not yet visible. But we get by nevertheless, leading lives whose value many of us are able to refrain from doubting.

Ethical Knowledge

Ethical knowledge has the same structure as factual knowledge. Like-minded groups join together in not doubting factual postulates and in not doubting ethical postulates. Freudians do not doubt that a person has an id, an ego and a super-ego. Chemists do not doubt that reactivity is governed by the numbers of protons in nuclei. Vegetarians do not doubt that people shouldn’t eat animals. Antiabortionists do not doubt that the joining of a human sperm and a human egg creates a person with the same rights as any other person. In any such group, communication about issues of mutual concern proceeds on the basis of acceptance of certain basic postulates. Inquiry and discussion lead to new insights: factual insights in the case of groups sharing factual premises, ethical insights in the case of groups sharing ethical premises.

Ethical knowledge is also related to our emotional responses to events. Things that happen that are “not good” tend to disturb us. We condemn those who do these things and/or look for ways of setting things straight. Things that happen that are “good” tend to leave us undisturbed or glowing warmly. (The correspondence between our intellectual understandings and our emotional responses is, of course, neither precise nor perfectly controllable.) If we come to an understanding that something we thought was not good actually is, then there will be a tendency for us to be less disturbed by that event. Thus the search for ethical knowledge is in part an effort to regulate our emotional responses to events.