Counting is the action of finding the number of elements of a finite set of objects. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term enumeration refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element.
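A minimal sketch of this traditional procedure in Python (the collection, the "marks", and the counter below are illustrative only):

```python
def count(items):
    """Count the elements of a finite collection by visiting each one once.

    Mirrors the traditional procedure: start a counter at zero, visit the
    elements in some order, mark each visited element so it is not counted
    again, and increase the counter by one for every newly visited element.
    """
    counter = 0
    visited = set()                 # the "marks"
    for element in items:           # visit the elements in some order
        if element not in visited:  # skip anything already marked
            visited.add(element)
            counter += 1
    return counter

print(count(["apple", "pear", "plum"]))  # 3
```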
Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...).
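For illustration, counting by a fixed step is simply an arithmetic progression; the steps below are just the examples from the text:

```python
# Counting by twos and by fives, as in the examples above.
by_twos = list(range(2, 13, 2))    # [2, 4, 6, 8, 10, 12]
by_fives = list(range(5, 26, 5))   # [5, 10, 15, 20, 25]
print(by_twos, by_fives)
```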
There is archeological evidence suggesting that humans have been counting for at least 50,000 years.[1] Counting was primarily used by ancient cultures to keep track of social and economic data such as number of group members, prey animals, property, or debts (i.e., accountancy). The development of counting led to the development of mathematical notation, numeral systems, and writing.
Counting can occur in a variety of forms.
Counting can be verbal; that is, speaking every number out loud (or mentally) to keep track of progress. This is often used to count objects that are already present, rather than to count events that occur over time.
Counting can also be in the form of tally marks, making a mark for each number and then counting all of the marks when done tallying. This is useful when counting objects over time, such as the number of times something occurs during the course of a day. Tallying is counting in base 1 (unary); ordinary counting is done in base 10 (decimal), and computers count in base 2 (binary, using 0s and 1s).
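A small sketch contrasting the three bases mentioned above (the function name is illustrative):

```python
def tally(n):
    """Base-1 (unary) representation: one mark per unit."""
    return "|" * n

n = 13
print(tally(n))   # |||||||||||||   (base 1, tally marks)
print(str(n))     # "13"            (base 10, ordinary counting)
print(bin(n))     # "0b1101"        (base 2, as used by computers)
```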
Counting can also be in the form of finger counting, especially when counting small numbers. This is often used by children to facilitate counting and simple mathematical operations. Finger counting uses unary notation (one finger = one unit), and is thus limited to counting to 10 (unless the toes are used as well). Other hand-gesture systems are also in use, for example the Chinese system, by which one can count to 10 using only the gestures of one hand. By using finger binary (base-2 counting), it is possible to keep a finger count up to 1023 = 2¹⁰ − 1.
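As a sketch, treating each of ten fingers as one binary digit gives counts from 0 to 1023; the finger ordering below is an assumption chosen for illustration:

```python
def finger_binary(fingers):
    """Interpret a list of 10 raised (1) or lowered (0) fingers as a base-2 number."""
    value = 0
    for digit in fingers:          # most significant finger first
        value = value * 2 + digit
    return value

print(finger_binary([1] * 10))                         # 1023 = 2**10 - 1, all fingers raised
print(finger_binary([0, 0, 0, 0, 0, 0, 0, 1, 0, 1]))   # 5
```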
Various devices can also be used to facilitate counting, such as hand tally counters and abacuses.
Inclusive counting is usually encountered when counting days in a calendar. Normally, when counting eight days from Sunday, Monday will be day 1, Tuesday day 2, and the following Monday will be the eighth day. When counting inclusively, the Sunday (the start day) will be day 1, and therefore the following Sunday will be the eighth day. For example, the French phrase for "fortnight" is en quinze ("in 15 [days]"), and similar words are present in Greek (δεκαπενθήμερο, dekapenthímero), Spanish (quincena) and Portuguese (quinzena), whereas "a fortnight" derives from "a fourteen-night", as the archaic "a sennight" does from "a seven-night". This practice appears in other calendars as well: in the Roman calendar the nones (meaning "nine") fall 8 days before the ides, and in the Christian calendar Quinquagesima (meaning 50) is 49 days before Easter Sunday.
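A minimal sketch of the two conventions, using the days of the week as in the example above:

```python
days = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

start = days.index("Sunday")

# Ordinary (exclusive) counting: Monday is day 1, so day 8 is the following Monday.
print(days[(start + 8) % 7])       # Monday

# Inclusive counting: Sunday itself is day 1, so day 8 is the following Sunday.
print(days[(start + 8 - 1) % 7])   # Sunday
```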
The Jewish people also counted[when?] days inclusively. For instance, Jesus, it is said, announced that he would die and be resurrected "on the third day", i.e. two days later. Scholars[who?] most commonly place his crucifixion on a Friday afternoon and his resurrection on Sunday before sunrise, spanning three different days but a period of only around 36–40 hours.[citation needed]
Musical terminology also uses inclusive counting of intervals between notes of the standard scale: going up one note is a second interval, going up two notes is a third interval, etc., and going up seven notes is an octave.
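With inclusive counting the interval name is therefore one more than the number of scale steps; a small illustrative sketch:

```python
# Interval names use inclusive counting: both the starting and the ending
# note are counted, so the ordinal is the number of scale steps plus one.
names = {2: "second", 3: "third", 4: "fourth", 5: "fifth",
         6: "sixth", 7: "seventh", 8: "octave"}

def interval(steps_up):
    return names[steps_up + 1]

print(interval(1))   # "second"
print(interval(2))   # "third"
print(interval(7))   # "octave"
```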
Learning to count is an important educational/developmental milestone in most cultures of the world. Learning to count is a child's very first step into mathematics, and constitutes the most fundamental idea of that discipline. However, some cultures in Amazonia and the Australian Outback do not count,[2][3] and their languages do not have number words.
Many children at just 2 years of age have some skill in reciting the count list (i.e., saying "one, two, three, ..."). They can also answer questions of ordinality for small numbers, e.g., "What comes after three?". They can even be skilled at pointing to each object in a set and reciting the words one after another. This leads many parents and educators to the conclusion that the child knows how to use counting to determine the size of a set.[4] Research suggests that it takes about a year after learning these skills for a child to understand what they mean and why the procedures are performed.[5][6] In the meantime, children learn how to name cardinalities that they can subitize.
Children with Williams syndrome often display serious delays in learning to count.[citation needed]
In mathematics, the essence of counting a set and finding a result n is that it establishes a one-to-one correspondence (or bijection) between the set and the set of numbers {1, 2, ..., n}. A fundamental fact, which can be proved by mathematical induction, is that no bijection can exist between {1, 2, ..., n} and {1, 2, ..., m} unless n = m; this fact (together with the fact that two bijections can be composed to give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose: however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite) combinatorics; hence (finite) combinatorics is sometimes referred to as "the mathematics of counting".
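The key fact can be stated formally; the following is a standard formulation and proof sketch, not taken verbatim from any particular source:

```latex
% If two initial segments of the natural numbers are in bijection,
% they have the same length.
\[
  \text{If } f\colon \{1,\dots,n\} \to \{1,\dots,m\} \text{ is a bijection, then } n = m .
\]
% Proof sketch: induction on n. Removing one element from each side
% (and repairing the map if necessary) reduces the statement for (n, m)
% to the statement for (n-1, m-1).
```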
Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ..., n} for any natural number n; these are called infinite sets, while those sets for which such a bijection does exist (for some n) are called finite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets.
The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded. For instance, the set of all integers (including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set of real numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable." Sets for which there exists a bijection between them are said to have the same cardinality, and in the most general sense counting a set can be taken to mean determining its cardinality. Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outside set theory that explicitly studies possible cardinalities).
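One standard bijection between the natural numbers and the integers can be written down explicitly; a minimal sketch:

```python
def nat_to_int(n):
    """Map 0, 1, 2, 3, 4, ... to 0, -1, 1, -2, 2, ... (a bijection from N to Z)."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([nat_to_int(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```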
Counting, mostly of finite sets, has various applications in mathematics. One important principle is that if two sets X and Y have the same finite number of elements, and a function f: X → Y is known to be injective, then it is also surjective, and vice versa. A related fact is known as the pigeonhole principle, which states that if two sets X and Y have finite numbers of elements n and m with n > m, then any map f: X → Y is not injective (so there exist two distinct elements of X that f sends to the same element of Y); this follows from the former principle, since if f were injective, then so would be its restriction to a strict subset S of X with m elements, and this restriction would then be surjective, contradicting the fact that for x in X outside S, f(x) cannot be in the image of the restriction. Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example; for instance, there must exist real numbers that are not computable numbers, because the set of computable numbers is only countably infinite, but by definition a non-computable number cannot be precisely specified.
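A small illustration of the pigeonhole principle: any map from a larger finite set into a smaller one sends two distinct elements to the same value (the particular map below is arbitrary):

```python
from itertools import combinations

X = range(5)          # n = 5 elements

def f(x):
    return x % 3      # an arbitrary map into a set of m = 3 values

# Since n > m, some two distinct elements of X must collide under f.
collisions = [(a, b) for a, b in combinations(X, 2) if f(a) == f(b)]
print(collisions)     # [(0, 3), (1, 4)]
```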
The domain of enumerative combinatorics deals with computing the number of elements of finite sets without actually counting them; the latter is usually impossible because infinitely many finite sets are considered at once, such as the sets of permutations of {1, 2, ..., n} for every natural number n.
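For instance, the number of permutations of {1, 2, ..., n} is n!, which can be computed without listing them; a small check for one value of n (illustrative only):

```python
from itertools import permutations
from math import factorial

n = 6
# Enumerative combinatorics gives the answer by formula: n! permutations.
print(factorial(n))                                    # 720
# Counting directly, by generating every permutation, agrees but scales badly.
print(sum(1 for _ in permutations(range(1, n + 1))))   # 720
```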