scipy.optimize.newton

scipy.optimize.newton(func, x0, fprime=None, args=(), tol=1.48e-08, maxiter=50, fprime2=None)

Find a zero using the Newton-Raphson or secant method.

Find a zero of the function func given a nearby starting point x0. The Newton-Raphson method is used if the derivative fprime of func is provided, otherwise the secant method is used. If the second order derivative fprime2 of func is provided, then parabolic Halley’s method is used.

Parameters:

func : function

The function whose zero is wanted. It must be a function of a single variable of the form f(x,a,b,c…), where a,b,c… are extra arguments that can be passed in the args parameter.

x0 : float

An initial estimate of the zero that should be somewhere near the actual zero.

fprime : function, optional

The derivative of the function when available and convenient. If it is None (default), then the secant method is used.

args : tuple, optional

Extra arguments to be used in the function call.

tol : float, optional

The allowable error of the zero value.

maxiter : int, optional

Maximum number of iterations.

fprime2 : function, optional

The second order derivative of the function when available and convenient. If it is None (default), then the normal Newton-Raphson or the secant method is used. If it is given, parabolic Halley’s method is used.

Returns:

zero : float

Estimated location where the function is zero.

See also

brentq, brenth, ridder, bisect

fsolve
find zeroes in n dimensions.

Notes

The convergence rate of the Newton-Raphson method is quadratic, the Halley method is cubic, and the secant method is sub-quadratic. This means that if the function is well-behaved, the actual error in the estimated zero is approximately the square (cube for Halley) of the requested tolerance up to roundoff error. However, the stopping criterion used here is the step size, and there is no guarantee that a zero has been found. Consequently, the result should be verified. Safer algorithms are brentq, brenth, ridder, and bisect, but they all require that the root first be bracketed in an interval where the function changes sign. The brentq algorithm is recommended for general use in one-dimensional problems when such an interval has been found.
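
As an illustrative sketch of this verification advice (reusing the simple cubic from the Examples section; the tolerances below are arbitrary), the residual can be checked directly and the result compared against the bracketing brentq solver:

>>> from scipy import optimize
>>> def f(x):
...     return x**3 - 1  # only one real root, at x = 1
>>> root = optimize.newton(f, 1.5)
>>> abs(f(root)) < 1e-10  # residual check: f(root) should be near zero
True
>>> abs(root - optimize.brentq(f, 0.5, 2.0)) < 1e-8  # agrees with a bracketing solver
True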

Examples

>>> def f(x):
...     return (x**3 - 1)  # only one real root at x = 1
>>> from scipy import optimize

fprime and fprime2 not provided, use the secant method

>>> root = optimize.newton(f, 1.5)
>>> root
1.0000000000000016

Only fprime provided, use the Newton-Raphson method

>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2)
>>> root
1.0

fprime2 provided, with fprime provided or not, use parabolic Halley’s method

>>> root = optimize.newton(f, 1.5, fprime2=lambda x: 6 * x)
>>> root
1.0000000000000016
>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2,
...                        fprime2=lambda x: 6 * x)
>>> root
1.0
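
args provided, extra arguments are forwarded to func (and to fprime when given); a minimal sketch, where the parametrized cubic g and its offset a are illustrative names only

>>> def g(x, a):
...     return x**3 - a  # real root at the cube root of a
>>> root = optimize.newton(g, 1.5, fprime=lambda x, a: 3 * x**2, args=(8,))
>>> abs(root - 2.0) < 1e-8  # cube root of 8 is 2
True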