The propensity theory of probability is one interpretation of the concept of probability. Theorists who adopt this interpretation think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long run relative frequency of such an outcome.[1]
Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. Hence, these single-case probabilities are known as propensities or chances.
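For illustration, the weak law of large numbers for repeated coin tosses can be written as follows (a standard textbook formulation, with notation introduced here rather than taken from the article): let X_i = 1 if the i-th toss lands heads and X_i = 0 otherwise, with the tosses independent and each having the same single-case probability p of heads. Then for every ε > 0,

    \lim_{n\to\infty} \Pr\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - p \right| > \varepsilon \right) = 0,

that is, the relative frequency of heads converges in probability to the single-toss probability p.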
In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time.
The main challenge facing propensity theories is to say exactly what propensity means, and then to show that propensity so defined has the required properties. At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.
A propensity theory of probability was given by Charles Sanders Peirce.[2][3][4][5]
A later propensity theory was proposed by philosopher Karl Popper, who, however, had only slight acquaintance with the writings of Charles S. Peirce.[2][3] Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. On Popper's account, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since the generating conditions would produce the same outcome on every trial. In other words, non-trivial propensities (those that differ from 0 and 1) exist only for genuinely indeterministic experiments.
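In symbols, Popper's long-run reading can be sketched as follows (the notation is introduced here for illustration): writing E_i for the occurrence of outcome E on the i-th repetition of the generating conditions, the claim that the conditions have propensity p for E amounts to

    p \;=\; \lim_{n\to\infty} \frac{\#\{\, i \le n : E_i \text{ occurs} \,\}}{n},

provided this limit exists for the (hypothetical) infinite sequence of repetitions.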
Popper's propensities, while they are not relative frequencies, are yet defined in terms of relative frequency. As a result, they face many of the serious problems that plague frequency theories. First, propensities cannot be empirically ascertained, on this account, since the limit of a sequence is a tail event, and is thus independent of its finite initial segments. Seeing a coin land heads every time for the first million tosses, for example, tells one nothing about the limiting proportion of heads on Popper's view. Moreover, the use of relative frequency to define propensity assumes the existence of stable relative frequencies, so one cannot then use propensity to explain the existence of stable relative frequencies, via the Law of large numbers.
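The first point can be made explicit (a brief sketch, not part of the original argument): let f_n be the relative frequency of heads after n tosses, and let f'_n be the corresponding frequency for a sequence that differs from the first only in its initial m tosses. Then

    \left| f_n - f'_n \right| \;\le\; \frac{m}{n} \;\longrightarrow\; 0 \quad (n \to \infty),

so the two sequences share the same limiting frequency, if one exists at all; no finite initial segment, however long, constrains the limit.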
A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's, in that propensities are defined in terms of either long-run or infinitely long-run relative frequencies.
Other propensity theorists (e.g. Ronald Giere[6]) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge also cannot be explicitly defined in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.
Other propensity theories have been offered by D. H. Mellor[7] and Ian Hacking.[8]
What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle,[9] a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents. It is argued that propensity theories fail to satisfy the Principal Principle.
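The 32-cent figure is simply the expected payoff of the gamble computed with the known chance (an elementary calculation, spelled out here for clarity):

    0.32 \times \$1 \;+\; (1 - 0.32) \times \$0 \;=\; \$0.32,

so an agent who, in line with the Principal Principle, sets their degree of belief in heads equal to the known propensity of 0.32 regards any lower price as favourable and any higher price as unfavourable.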