In mathematics, extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced, so as to arrive at (usually conjectural) knowledge of the unknown[1] (e.g. a driver extrapolates road conditions beyond his sight while driving). Extrapolation methods can also be applied in the interior reconstruction problem.
A sound choice of which extrapolation method to apply relies on prior knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods.[2] Crucial questions are, for example, whether the data can be assumed to be continuous, smooth, possibly periodic, and so on.
Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data.
If the two data points nearest the point $x_*$ to be extrapolated are $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$, linear extrapolation gives the function:

$$y(x_*) = y_{k-1} + \frac{x_* - x_{k-1}}{x_k - x_{k-1}}\,(y_k - y_{k-1})$$

(which is identical to linear interpolation if $x_{k-1} < x_* < x_k$). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques over the data points chosen to be included. This is similar to linear prediction.
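A minimal sketch of this two-point formula (the function and data values below are purely illustrative):

```python
def linear_extrapolate(x_prev, y_prev, x_k, y_k, x_star):
    """Estimate y at x_star from the two nearest known points
    (x_prev, y_prev) and (x_k, y_k) using the formula above."""
    slope = (y_k - y_prev) / (x_k - x_prev)
    return y_prev + (x_star - x_prev) * slope

# Example: the last two observations are (2, 4) and (3, 9);
# extrapolate one step beyond the known range.
print(linear_extrapolate(2, 4, 3, 9, 4))  # -> 14.0
```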
A polynomial curve can be created through the entire known data or just near the end. The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data.
High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon.
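As an illustration of this degree sensitivity, the sketch below fits polynomials of several degrees to a small made-up data set using NumPy's least-squares polynomial fit (standing in for the Lagrange or Newton construction mentioned above) and evaluates them beyond the observed range:

```python
import numpy as np

# Hypothetical sample data, roughly following y = x**2 with noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.1, 3.9, 9.2, 15.8, 25.3])

x_star = 7.0  # point beyond the observed range

for degree in (1, 2, 5):
    # Fit a polynomial of the given degree to all known points and
    # evaluate it outside the data range.
    coeffs = np.polyfit(x, y, degree)
    print(degree, np.polyval(coeffs, x_star))
```

Low degrees give broadly similar estimates here, while the highest-degree fit is dominated by the noise and drifts away from the trend as x_star moves further from the data.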
A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, it will loop back and rejoin itself. A parabolic or hyperbolic curve will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer.
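The algebraic core of the computer approach can be sketched as follows: a general conic a·x² + b·xy + c·y² + d·x + e·y + f = 0 is determined up to scale by five points, so its coefficients can be read off from the null space of a 5-by-6 design matrix. This is a minimal illustration, not a full conic-extrapolation routine:

```python
import numpy as np

def conic_through_points(pts):
    """Coefficients (a, b, c, d, e, f) of the conic
    a*x**2 + b*x*y + c*y**2 + d*x + e*y + f = 0
    passing through five given (x, y) points."""
    rows = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in pts])
    # The coefficient vector spans the null space of this 5x6 matrix;
    # take the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(rows)
    return vt[-1]

# Five points on the unit circle x**2 + y**2 = 1 (illustrative data).
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0),
       (np.sqrt(0.5), np.sqrt(0.5))]
print(conic_through_points(pts))  # proportional to (1, 0, 1, 0, 0, -1)
```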
French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors.[3] This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and variant CJD in the UK for a number of years. Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies.[4]
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated.
In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces.[5]
Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily away from the x-axis while sin(x) remains in the interval [−1,1]; that is, the error increases without bound.
Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation.
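A short numerical illustration of this divergence, comparing truncated Taylor polynomials of sin(x) about x = 0 with the true function:

```python
import math

def sin_taylor(x, n_terms):
    """Truncated Taylor series of sin(x) about x = 0 with n_terms terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

for x in (0.5, 3.0, 10.0):
    # Compare the 1-term (sin x ~ x) and 4-term truncations with sin(x).
    print(x, sin_taylor(x, 1), sin_taylor(x, 4), math.sin(x))
```

Near x = 0.5 both truncations are close to sin(x); at x = 10 the 1-term estimate is 10 and the 4-term estimate is roughly −1307, while sin(10) ≈ −0.54.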
This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally, due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors.
In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable $\hat{z} = 1/z$. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform, however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data.
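A small sketch of this idea, under illustrative assumptions: the example function f below is sampled on a circle of radius 5, the samples are mapped inside the unit circle by w = 1/z, and the value of g(w) = f(1/w) at the origin (that is, the behaviour of f at infinity) is estimated from the surrounding samples via the mean value property of analytic functions:

```python
import numpy as np

# Illustrative function; its limit as |z| -> infinity is 2.
def f(z):
    return (2 * z + 1) / (z + 3)

n = 8
z_samples = 5.0 * np.exp(2j * np.pi * np.arange(n) / n)  # circle |z| = 5
w_samples = 1.0 / z_samples        # mapped to the circle |w| = 0.2
g_samples = f(1.0 / w_samples)     # samples of g(w) = f(1/w)

# For an analytic g, the average of equally spaced samples on a circle
# centred at the origin approximates g(0); here that recovers the
# value of f at infinity from data surrounding the mapped target point.
print(np.mean(g_samples).real)     # close to 2
```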
Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region.
Again, analytic continuation can be thwarted by function features that were not evident from the initial data.
Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants.
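For instance, SciPy provides a Padé construction from Taylor coefficients. The sketch below uses the series of log(1 + x), whose radius of convergence is 1, and shows the [2/2] rational approximant giving a sensible value well outside that radius, where the truncated series itself is useless:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of log(1 + x) about 0: x - x^2/2 + x^3/3 - x^4/4.
taylor = [0.0, 1.0, -1.0 / 2, 1.0 / 3, -1.0 / 4]

p, q = pade(taylor, 2)               # [2/2] Pade approximant p(x)/q(x)

x = 3.0                              # well outside the radius of convergence
print(p(x) / q(x))                   # ~1.364
print(np.log(1 + x))                 # ~1.386
print(np.polyval(taylor[::-1], x))   # truncated series itself: -12.75
```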
Extrapolated data are often convolved with a kernel function. After extrapolation, the size of the data is increased N times, where N is typically 2 to 3. If these data need to be convolved with a known kernel function, the numerical calculation grows roughly N·log(N) times, even with the fast Fourier transform (FFT). There exists an algorithm that analytically calculates the contribution from the extrapolated part of the data; its computation time can be neglected compared with the original convolution calculation. Hence, with this algorithm, the cost of a convolution using the extrapolated data is hardly increased. This is referred to as fast extrapolation. Fast extrapolation has been applied to CT image reconstruction.[6]
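The cost claim above can be made concrete with a plain FFT convolution of extrapolated data. The sketch below shows only this baseline (with a made-up kernel and naive constant-value extrapolation), not the analytic fast-extrapolation algorithm of [6]:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
measured = rng.standard_normal(1024)          # original data
kernel = np.exp(-np.linspace(0, 5, 64))       # some known kernel (illustrative)

# Baseline approach: extrapolate the data (here simply by constant padding,
# purely for illustration) to 2x its length, then convolve the whole thing.
extrapolated = np.concatenate([measured, np.full(1024, measured[-1])])
result = fftconvolve(extrapolated, kernel)

# The FFT convolution now runs over 2N samples instead of N, so its cost
# grows roughly like 2N*log(2N) versus N*log(N).
print(len(measured), len(extrapolated), len(result))
```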