In calculus, the second derivative, or the second-order derivative, of a function f is the derivative of the derivative of f. Roughly speaking, the second derivative measures how the rate of change of a quantity is itself changing; for example, the second derivative of the position of a vehicle with respect to time is the instantaneous acceleration of the vehicle, that is, the rate at which the velocity of the vehicle is changing with respect to time. In Leibniz notation:

    a = \frac{dv}{dt} = \frac{d^2x}{dt^2},

where a is acceleration, v is velocity, t is time, x is position, and the last term is the second derivative expression.
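As a short worked illustration (our example, not the article's): a vehicle whose position grows quadratically in time has constant acceleration:

    x(t) = 5t^2, \qquad v(t) = \frac{dx}{dt} = 10t, \qquad a(t) = \frac{d^2x}{dt^2} = 10.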
On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way.
The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows:

    \frac{d^2}{dx^2}\left[x^n\right] = \frac{d}{dx}\left[n x^{n-1}\right] = n(n-1)x^{n-2}.
The second derivative of a function f is usually denoted f″. That is:

    f'' = (f')'.

When using Leibniz's notation for derivatives, the second derivative of a dependent variable y with respect to an independent variable x is written

    \frac{d^2y}{dx^2}.

This notation is derived from the following formula:

    \frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right).
Given the function

    f(x) = x^3,

the derivative of f is the function

    f'(x) = 3x^2.

The second derivative of f is the derivative of f′, namely

    f''(x) = 6x.
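The same computation can be checked mechanically with a computer algebra system. A minimal sketch using the sympy library (the choice of sympy and the variable names are ours, not the article's):

    import sympy as sp

    x = sp.symbols('x')
    f = x**3

    f1 = sp.diff(f, x)      # first derivative
    f2 = sp.diff(f, x, 2)   # second derivative

    print(f1)  # 3*x**2
    print(f2)  # 6*x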
The second derivative of a function f measures the concavity of the graph of f. A function whose second derivative is positive will be concave up (sometimes referred to as convex), meaning that the tangent line will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes called simply “concave”), and its tangent lines will lie above the graph of the function.
If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an inflection point. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e., a point where f′(x) = 0) is a local maximum or a local minimum. Specifically,

- If f″(x) < 0, then f has a local maximum at x.
- If f″(x) > 0, then f has a local minimum at x.
- If f″(x) = 0, the second derivative test says nothing about the point x; it may be a local maximum, a local minimum, or an inflection point.
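A short numerical sketch of this test, assuming nothing beyond the Python standard library (the function names, the step size h, and the tolerance are illustrative choices, not part of the article):

    def second_derivative(f, x, h=1e-5):
        """Approximate f''(x) with the central second difference."""
        return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

    def classify_stationary_point(f, x, tol=1e-6):
        """Apply the second derivative test at a stationary point x."""
        f2 = second_derivative(f, x)
        if f2 < -tol:
            return "local maximum"
        if f2 > tol:
            return "local minimum"
        return "test inconclusive"

    # x = 0 is a stationary point of x**2 (a minimum) and of -x**2 (a maximum).
    print(classify_stationary_point(lambda x: x**2, 0.0))   # local minimum
    print(classify_stationary_point(lambda x: -x**2, 0.0))  # local maximum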
The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.
It is possible to write a single limit for the second derivative:

    f''(x) = \lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.
The limit is called the second symmetric derivative.[1][2] Note that the second symmetric derivative may exist even when the (usual) second derivative does not.
The expression on the right can be written as a difference quotient of difference quotients:

    \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = \frac{\dfrac{f(x+h) - f(x)}{h} - \dfrac{f(x) - f(x-h)}{h}}{h}.
This limit can be viewed as a continuous version of the second difference for sequences.
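A minimal numeric check of this limit (our own sketch; the test function f(x) = sin x and the step sizes are arbitrary choices):

    import math

    def second_difference(f, x, h):
        """The symmetric second difference quotient (f(x+h) - 2 f(x) + f(x-h)) / h**2."""
        return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

    # For f(x) = sin(x), f''(x) = -sin(x); at x = 1 the exact value is -sin(1).
    x = 1.0
    exact = -math.sin(1.0)
    for h in (0.1, 0.01, 0.001):
        approx = second_difference(math.sin, x, h)
        print(h, approx, abs(approx - exact))  # the error shrinks roughly like h**2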
Please note that the existence of the above limit does not mean that the function has a second derivative. The limit above just gives a possibility for calculating the second derivative but does not provide a definition. As a counterexample, consider the sign function, which is defined by

    \operatorname{sgn}(x) = \begin{cases} -1 & \text{if } x < 0, \\ 0 & \text{if } x = 0, \\ 1 & \text{if } x > 0. \end{cases}

The sign function is not continuous at zero, and therefore the second derivative for x = 0 does not exist. But the above limit exists for x = 0:

    \lim_{h \to 0} \frac{\operatorname{sgn}(0+h) - 2\operatorname{sgn}(0) + \operatorname{sgn}(0-h)}{h^2} = \lim_{h \to 0} \frac{1 - 2 \cdot 0 + (-1)}{h^2} = \lim_{h \to 0} \frac{0}{h^2} = 0.
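A quick check of this counterexample in Python (a sketch; the sgn helper is our own):

    def sgn(x):
        """The sign function: -1, 0, or 1."""
        return (x > 0) - (x < 0)

    # The symmetric second difference at x = 0 is identically zero for every h,
    # even though sgn has no derivative (let alone a second one) at 0.
    for h in (0.5, 0.1, 0.001):
        print((sgn(h) - 2 * sgn(0) + sgn(-h)) / h**2)  # always 0.0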
Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function f. This is the quadratic function whose first and second derivatives are the same as those of f at a given point. The formula for the best quadratic approximation to a function f around the point x = a is

    f(x) \approx f(a) + f'(a)(x - a) + \tfrac{1}{2} f''(a)(x - a)^2.
This quadratic approximation is the second-order Taylor polynomial for the function centered at x = a.
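A sketch of this approximation for a concrete function (f(x) = eˣ around a = 0 is our choice of example; there f(a) = f′(a) = f″(a) = 1):

    import math

    def q(x):
        """Best quadratic approximation to exp(x) around a = 0: 1 + x + x**2/2."""
        return 1 + x + x**2 / 2

    for x in (0.1, 0.5, 1.0):
        print(x, math.exp(x), q(x))  # the approximation degrades as x moves away from a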
For many combinations of boundary conditions, explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, assuming x \in [0, L] and homogeneous Dirichlet boundary conditions, i.e., v(0) = v(L) = 0, the eigenvalues are

    \lambda_j = -\frac{j^2 \pi^2}{L^2}

and the corresponding eigenvectors (also called eigenfunctions) are

    v_j(x) = \sqrt{\frac{2}{L}} \sin\!\left(\frac{j \pi x}{L}\right).

Here, v_j''(x) = \lambda_j v_j(x), \; j = 1, \ldots, \infty.
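The discrete analogue can be checked numerically: the standard three-point finite-difference matrix for the second derivative has eigenvalues approaching -j²π²/L². A minimal numpy sketch (the grid size n and L = 1 are our choices):

    import numpy as np

    L, n = 1.0, 200                 # interval length and number of interior points
    h = L / (n + 1)

    # Tridiagonal finite-difference approximation of d^2/dx^2 with
    # homogeneous Dirichlet boundary conditions v(0) = v(L) = 0.
    A = (np.diag(-2 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2

    eigs = np.sort(np.linalg.eigvalsh(A))[::-1]   # least negative first
    for j in (1, 2, 3):
        exact = -(j * np.pi / L)**2
        print(j, eigs[j - 1], exact)              # close agreement for small j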
For other well-known cases, see the main article eigenvalues and eigenvectors of the second derivative.
The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function f: R³ → R, these include the three second-order partials

    \frac{\partial^2 f}{\partial x^2}, \quad \frac{\partial^2 f}{\partial y^2}, \quad \frac{\partial^2 f}{\partial z^2}

and the mixed partials

    \frac{\partial^2 f}{\partial x \, \partial y}, \quad \frac{\partial^2 f}{\partial x \, \partial z}, \quad \frac{\partial^2 f}{\partial y \, \partial z}.
Provided the second partial derivatives of f are continuous, the mixed partials are equal (by the symmetry of second derivatives), and these partials fit together into a symmetric matrix known as the Hessian. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test.)
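A sketch of the multivariable test for a specific function (f(x, y) = x² + 3y², with its critical point at the origin, is our example):

    import numpy as np

    # f(x, y) = x**2 + 3*y**2 has a critical point at the origin.
    # Its Hessian is constant:
    H = np.array([[2.0, 0.0],
                  [0.0, 6.0]])

    eigs = np.linalg.eigvalsh(H)
    if np.all(eigs > 0):
        print("local minimum")      # this branch fires: eigenvalues 2 and 6
    elif np.all(eigs < 0):
        print("local maximum")
    elif np.any(eigs > 0) and np.any(eigs < 0):
        print("saddle point")
    else:
        print("test inconclusive")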
Another common generalization of the second derivative is the Laplacian. This is the differential operator ∇² defined by

    \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.
The Laplacian of a function is equal to the divergence of the gradient, and to the trace of the Hessian matrix.
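Both identities are easy to verify symbolically; a minimal sympy sketch (the test function is an arbitrary choice of ours):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2 * y + sp.sin(z)                      # arbitrary smooth test function

    laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

    grad = [sp.diff(f, v) for v in (x, y, z)]
    div_grad = sum(sp.diff(g, v) for g, v in zip(grad, (x, y, z)))

    trace_hessian = sp.hessian(f, (x, y, z)).trace()

    print(sp.simplify(laplacian - div_grad))      # 0
    print(sp.simplify(laplacian - trace_hessian)) # 0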