In mathematics, the greatest common divisor (gcd) of two or more integers, when at least one of them is not zero, is the largest positive integer that divides the numbers without a remainder. For example, the GCD of 8 and 12 is 4.[1][2]
The GCD is also known as the greatest common factor (gcf),[3] highest common factor (hcf),[4] greatest common measure (gcm),[5] or highest common divisor.[6]
This notion can be extended to polynomials (see Polynomial greatest common divisor) and other commutative rings (see below).
In this article we will denote the greatest common divisor of two integers a and b as gcd(a,b).
Some textbooks use (a,b).[1][2][6][7]
The J programming language denotes it a +. b.
The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6.
Thus the divisors of 54 are: 1, 2, 3, 6, 9, 18, 27, 54.
Similarly the divisors of 24 are: 1, 2, 3, 4, 6, 8, 12, 24.
The numbers that these two lists share in common are the common divisors of 54 and 24: 1, 2, 3, 6.
The greatest of these is 6. That is the greatest common divisor of 54 and 24. One writes: gcd(54, 24) = 6.
The greatest common divisor is useful for reducing fractions to lowest terms. For example, gcd(42, 56) = 14, therefore 42/56 = (3 · 14)/(4 · 14) = 3/4.
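As a small illustration in code, here is a sketch using Python's standard-library math.gcd (the helper name reduce_fraction is ours, not part of any library):

from math import gcd

def reduce_fraction(numerator, denominator):
    # Divide numerator and denominator by their gcd to reach lowest terms.
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(reduce_fraction(42, 56))  # (3, 4), since gcd(42, 56) = 14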
Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1. For example, 9 and 28 are relatively prime.
For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
Greatest common divisors can in principle be computed by determining the prime factorizations of the two numbers and comparing factors, as in the following example: to compute gcd(18, 84), we find the prime factorizations 18 = 2 · 3² and 84 = 2² · 3 · 7 and notice that the "overlap" of the two expressions is 2 · 3; so gcd(18, 84) = 6. In practice, this method is only feasible for small numbers; computing prime factorizations in general takes far too long.
Here is another concrete example, illustrated by a Venn diagram. Suppose it is desired to find the greatest common divisor of 48 and 180. First, find the prime factorizations of the two numbers: 48 = 2 × 2 × 2 × 2 × 3 and 180 = 2 × 2 × 3 × 3 × 5.
What they share in common is two "2"s and a "3": gcd(48, 180) = 2 × 2 × 3 = 12.
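The factor-overlap idea can be sketched in Python as follows (prime_factors and gcd_by_factoring are illustrative names; as noted above, this approach is only practical for small numbers):

from collections import Counter

def prime_factors(n):
    # Trial division: return the prime factorization of n as a multiset (prime -> exponent).
    factors = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factoring(a, b):
    # The gcd is the product of the "overlapping" prime factors.
    shared = prime_factors(a) & prime_factors(b)  # multiset intersection (minimum exponents)
    result = 1
    for prime, exponent in shared.items():
        result *= prime ** exponent
    return result

print(gcd_by_factoring(48, 180))  # 12
print(gcd_by_factoring(18, 84))   # 6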
A much more efficient method is the Euclidean algorithm, which uses a division algorithm such as long division in combination with the observation that the gcd of two numbers also divides their difference. To compute gcd(48, 18), divide 48 by 18 to get a quotient of 2 and a remainder of 12. Then divide 18 by 12 to get a quotient of 1 and a remainder of 6. Then divide 12 by 6 to get a remainder of 0, which means that 6 is the gcd. Note that we ignored the quotient in each step except to notice when the remainder reached 0, signalling that we had arrived at the answer. Formally the algorithm can be described as: gcd(a, 0) = a and gcd(a, b) = gcd(b, a mod b),
where a mod b denotes the remainder left after dividing a by b.
If the arguments are both greater than zero then the algorithm can be written in more elementary terms as follows: gcd(a, a) = a; gcd(a, b) = gcd(a − b, b) if a > b; and gcd(a, b) = gcd(a, b − a) if b > a.
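A minimal Python sketch of the remainder-based version described above (the function name euclid_gcd is ours):

def euclid_gcd(a, b):
    # Repeatedly replace (a, b) by (b, a mod b); when the remainder reaches 0, a is the gcd.
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(48, 18))  # 6: 48 = 2*18 + 12, 18 = 1*12 + 6, 12 = 2*6 + 0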
The existence of the Euclidean algorithm places (the decision problem version of) the greatest common divisor problem in P, the class of problems solvable in polynomial time. The GCD problem is not known to be in NC, and so there is no known way to parallelize its computation across many processors; nor is it known to be P-complete, which would imply that it is unlikely to be possible to parallelize GCD computation. In this sense the GCD problem is analogous to e.g. the integer factorization problem, which has no known polynomial-time algorithm, but is not known to be NP-complete. Shallcross et al. showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well.[8] Since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.
Although the problem is not known to be in NC, parallel algorithms asymptotically faster than the Euclidean algorithm exist; the best known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in O(n/log n) time with n^(1+ε) processors.[9] Randomized algorithms can solve the problem in O((log n)²) time, but only on a superpolynomial number of processors.[10]
An alternative method of computing the gcd is the binary gcd method, which uses only subtraction and division by 2. In outline the method is as follows: let a and b be the two nonnegative integers, and set the integer d to 0. There are five possibilities:
If a = b: as gcd(a, a) = a, the desired gcd is a × 2^d (as a and b are changed in the other cases, and d records the number of times that a and b have both been divided by 2 in the next step, the gcd of the initial pair is the product of a and 2^d).
If a and b are both even: 2 is a common divisor. Divide both a and b by 2, increment d by 1 to record the number of times 2 is a common divisor, and continue.
If a is even and b is odd: 2 is not a common divisor. Divide a by 2 and continue.
If a is odd and b is even: as in the previous case, 2 is not a common divisor. Divide b by 2 and continue.
If a and b are both odd: as gcd(a, b) = gcd(b, a) and we have already considered the case a = b, we may assume that a > b. The number c = a − b is smaller than a yet still positive. Any number that divides a and b must also divide c, so every common divisor of a and b is also a common divisor of b and c. Similarly, a = b + c and every common divisor of b and c is also a common divisor of a and b. So the two pairs (a, b) and (b, c) have the same common divisors, and thus gcd(a, b) = gcd(b, c). Moreover, as a and b are both odd, c is even, and one may replace c by c/2 without changing the gcd. Thus the process can be continued with the pair (a, b) replaced by the smaller numbers (c/2, b).
Each of the above steps reduces at least one of a and b towards 0 and so can only be repeated a finite number of times. Thus one must eventually reach the case a = b, which is the only stopping case. Then, as noted above, the gcd is a × 2^d.
This algorithm may easily be programmed as follows:
Input: a, b positive integers
Output: g and d such that g is odd and gcd(a, b) = g × 2^d
  d := 0
  while a and b are both even do
    a := a/2
    b := b/2
    d := d + 1
  while a ≠ b do
    if a is even then a := a/2
    else if b is even then b := b/2
    else if a > b then a := (a − b)/2
    else b := (b − a)/2
  g := a
  output g, d
Example: (a, b, d) = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 6, 1) → (3, 3, 1); the original gcd is thus 2^d = 2^1 times a = b = 3, that is 6.
The Binary GCD algorithm is particularly easy to implement on binary computers. The test for whether a number is divisible by two can be performed by testing the lowest bit in the number. Division by two can be achieved by shifting the number one bit to the right. Each step of the algorithm makes at least one such shift. Subtracting two numbers smaller than a and b costs O(log a + log b) bit operations. Each step makes at most one such subtraction. The total number of steps is at most the sum of the numbers of bits of a and b, hence the computational complexity is O((log a + log b)²).
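The following Python sketch implements the binary method with exactly these bit operations (parity tests via the lowest bit, halving via right shifts); binary_gcd is an illustrative name, not a library function:

def binary_gcd(a, b):
    # Binary GCD for nonnegative integers using only parity tests, shifts and subtraction.
    if a == 0:
        return b
    if b == 0:
        return a
    d = 0
    while a & 1 == 0 and b & 1 == 0:  # both even: 2 is a common divisor
        a >>= 1
        b >>= 1
        d += 1
    while a != b:
        if a & 1 == 0:                # a even, b odd
            a >>= 1
        elif b & 1 == 0:              # a odd, b even
            b >>= 1
        elif a > b:                   # both odd: replace the larger by half the difference
            a = (a - b) >> 1
        else:
            b = (b - a) >> 1
    return a << d                     # gcd = a × 2^d

print(binary_gcd(48, 18))  # 6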
For further details see Binary GCD algorithm.
If a and b are both nonzero, the greatest common divisor of a and b can be computed using the least common multiple (lcm) of a and b: gcd(a, b) = a · b / lcm(a, b),
but more commonly the lcm is computed from the gcd.
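For example, a short Python sketch of this relationship (using the standard-library math.gcd):

from math import gcd

def lcm(a, b):
    # lcm computed from the gcd via gcd(a, b) * lcm(a, b) = a * b.
    return a * b // gcd(a, b)

print(lcm(48, 180))              # 720
print(48 * 180 // lcm(48, 180))  # 12, recovering gcd(48, 180) from the lcm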
Using Thomae's function f, gcd(a, b) = a f(b/a),
which generalizes to a and b rational numbers or commensurable real numbers.
Keith Slavin has shown that for odd a ≥ 1: gcd(a, b) = log₂ ∏_{k=0}^{a−1} (1 + e^(2πikb/a)),
which is a function that can be evaluated for complex b.[11] Wolfgang Schramm has shown that
is an entire function in the variable b for all positive integers a, where c_d(k) is Ramanujan's sum.[12] Donald Knuth proved the following reduction: gcd(2^a − 1, 2^b − 1) = 2^(gcd(a, b)) − 1
for non-negative integers a and b, where a and b are not both zero.[13] More generally
gcd(n^a − 1, n^b − 1) = n^(gcd(a, b)) − 1, which can be proven by considering the Euclidean algorithm in base n. Another useful identity involves Euler's totient function: gcd(a, b) = Σ_{k | a and k | b} φ(k).
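As a quick numerical sanity check of the last two identities (our own sketch, with a naive totient function):

from math import gcd

def phi(n):
    # Euler's totient function by direct counting (adequate for small n).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

a, b, n = 12, 18, 3
assert gcd(n**a - 1, n**b - 1) == n**gcd(a, b) - 1
assert gcd(a, b) == sum(phi(k) for k in range(1, min(a, b) + 1) if a % k == 0 and b % k == 0)
print("both identities hold for a = 12, b = 18, n = 3")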
In 1972, James E. Nymann showed that k integers, chosen independently and uniformly from {1,...,n}, are coprime with probability 1/ζ(k) as n goes to infinity, where ζ refers to the Riemann zeta function.[15] (See coprime for a derivation.) This result was extended in 1987 to show that the probability that k random integers have greatest common divisor d is d^(−k)/ζ(k).[16]
Using this information, the expected value of the greatest common divisor function can be seen (informally) to not exist when k = 2. In this case the probability that the gcd equals d is d^(−2)/ζ(2), and since ζ(2) = π²/6 we have E(2) = Σ_{d=1}^∞ d · 6/(π² d²) = (6/π²) Σ_{d=1}^∞ 1/d.
This last summation is the harmonic series, which diverges. However, when k ≥ 3, the expected value is well-defined, and by the above argument, it is E(k) = Σ_{d=1}^∞ d^(1−k)/ζ(k) = ζ(k − 1)/ζ(k).
For k = 3, this is approximately equal to 1.3684. For k = 4, it is approximately 1.1106.
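These values can be checked numerically; a rough Python sketch using partial sums of the zeta series:

def zeta(s, terms=200000):
    # Crude partial sum of the Riemann zeta series; adequate here for s >= 2.
    return sum(n ** -s for n in range(1, terms + 1))

print(zeta(2) / zeta(3))  # ~1.3684, the expected gcd of three random integers
print(zeta(3) / zeta(4))  # ~1.1106, the expected gcd of four random integers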
The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring, although in general there need not exist one for every pair of elements.
If R is a commutative ring, and a and b are in R, then an element d of R is called a common divisor of a and b if it divides both a and b (that is, if there are elements x and y in R such that d·x = a and d·y = b). If d is a common divisor of a and b, and every common divisor of a and b divides d, then d is called a greatest common divisor of a and b.
Note that with this definition, two elements a and b may very well have several greatest common divisors, or none at all. If R is an integral domain then any two gcd's of a and b must be associate elements, since by definition either one must divide the other; indeed if a gcd exists, any one of its associates is a gcd as well. Existence of a gcd is not assured in arbitrary integral domains. However if R is a unique factorization domain, then any two elements have a gcd, and more generally this is true in gcd domains. If R is a Euclidean domain in which euclidean division is given algorithmically (as is the case for instance when R = F[X] where F is a field, or when R is the ring of Gaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure.
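For instance, the Euclidean algorithm in F[X] can be sketched in Python with rational coefficients, representing a polynomial as its list of coefficients in increasing degree (poly_rem and poly_gcd are illustrative names; nonzero inputs are assumed):

from fractions import Fraction

def poly_rem(num, den):
    # Remainder of Euclidean division in Q[X]; coefficient lists, constant term first.
    rem = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(rem) >= len(den):
        factor = rem[-1] / den[-1]        # cancel the leading term of rem
        shift = len(rem) - len(den)
        for i, c in enumerate(den):
            rem[shift + i] -= factor * c
        while rem and rem[-1] == 0:       # drop leading zeros created by the cancellation
            rem.pop()
        if not rem:
            break
    return rem

def poly_gcd(a, b):
    # Euclidean algorithm in Q[X]; the result is normalized to be monic.
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b:
        a, b = b, poly_rem(a, b)
    return [c / a[-1] for c in a]

# (x - 1)(x + 2) = x^2 + x - 2 and (x - 1)(x - 3) = x^2 - 4x + 3 share the factor x - 1.
print(poly_gcd([-2, 1, 1], [3, -4, 1]))  # [Fraction(-1, 1), Fraction(1, 1)], i.e. x - 1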
The following is an example of an integral domain with two elements that do not have a gcd: R = Z[√(−3)], a = 4 = 2 · 2 = (1 + √(−3))(1 − √(−3)), b = (1 + √(−3)) · 2.
The elements 2 and 1 + √(−3) are two "maximal common divisors" (i.e. any common divisor which is a multiple of 2 is associated to 2, and the same holds for 1 + √(−3)), but they are not associated, so there is no greatest common divisor of a and b.
Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the form pa + qb, where p and q range over the ring. This is the ideal generated by a and b, and is denoted simply (a, b). In a ring all of whose ideals are principal (a principal ideal domain or PID), this ideal will be identical with the set of multiples of some ring element d; then this d is a greatest common divisor of a and b. But the ideal (a, b) can be useful even when there is no greatest common divisor of a and b. (Indeed, Ernst Kummer used this ideal as a replacement for a gcd in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or ideal, ring element d, whence the ring-theoretic term.)
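In the ring of integers, for instance, a representation gcd(a, b) = pa + qb can be found with the extended Euclidean algorithm; a minimal Python sketch (extended_gcd is an illustrative name):

def extended_gcd(a, b):
    # Return (g, p, q) such that g = gcd(a, b) = p*a + q*b.
    old_r, r = a, b
    old_p, p = 1, 0
    old_q, q = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_p, p = p, old_p - quotient * p
        old_q, q = q, old_q - quotient * q
    return old_r, old_p, old_q

g, p, q = extended_gcd(48, 18)
print(g, p, q)          # 6 -1 3
print(p * 48 + q * 18)  # 6, so gcd(48, 18) = (-1)*48 + 3*18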