Suppose we want to evaluate a function $f(x)$, and that we have available:

- $f^{-1}(y)$, the function’s inverse
- $\left[f^{-1}\right]'(y)$, the derivative of the function’s inverse
- $f^\ast(x)$, an approximant to $f(x)$

Then we can use the Newton–Raphson method to iteratively compute more accurate approximations.
Derivation
The Newton–Raphson method finds successively better approximations to a real function’s roots.
By constructing a new function whose root is the value we are after,

$$g(y) = f^{-1}(y) - x,$$

and applying the Newton–Raphson update $y_{n+1} = y_n - g(y_n)/g'(y_n)$, we can produce successive approximations to $y = f(x)$:

$$y_{n+1} = y_n - \frac{f^{-1}(y_n) - x}{\left[f^{-1}\right]'(y_n)}, \qquad y_0 = f^\ast(x).$$

Note that $f$ itself is never evaluated: each iteration uses only the inverse and its derivative, with the approximant $f^\ast(x)$ serving as the starting point.
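For illustration, here is a minimal generic sketch of this refinement loop (the names nrRefine, fInv, and fInvPrime, as well as the fixed default iteration count, are illustrative choices rather than any established API):

// Refines an initial approximant y0 ≈ f(x) by Newton-Raphson on
// g(y) = fInv(y) - x, where fInv is f's inverse and fInvPrime its derivative.
template <typename FInv, typename FInvPrime>
float nrRefine(float x, float y0, FInv fInv, FInvPrime fInvPrime,
               int numIterations = 2) {
    float y = y0;
    for (int i = 0; i < numIterations; ++i)
        y -= (fInv(y) - x) / fInvPrime(y);
    return y;
}

For the square-root example below, this amounts to calling nrRefine(x, y0, [](float y) { return y * y; }, [](float y) { return 2.0f * y; }).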
Example: Square Root
#include <bit>      // std::bit_cast
#include <cmath>    // NAN
#include <cstdint>  // uint32_t

// Evaluates a Newton-Raphson approximation to f(x) = sqrt(x)
float sqrtApproxNR(float x) {
if (x < 0.0f) return NAN;
// Avoid a division-by-zero later on
if (x == 0.0f) return 0.0f;
// Initial rough approximation to sqrt(x) using bit-level manipulation
float y = std::bit_cast<float>((std::bit_cast<uint32_t>(x) + 0x3F800000) >> 1);
// Compute NR step
// f^-1(y) = y^2; [f^-1]'(y) = 2y
// y = y - (y * y - x)/(2.0f * y);
// After simplifying,
y = 0.5f * (y + x/y);
// Repeat as many times as desired; each iteration increases accuracy at
// the expense of execution time.
y = 0.5f * (y + x/y);
return y;
}
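A small driver along these lines (assuming the sqrtApproxNR listing above is in the same translation unit, and using the inputs tabulated below) can be used to reproduce the two-iteration and std::sqrt rows of the following table:

#include <cmath>
#include <cstdio>

int main() {
    // Inputs matching the columns of the table below.
    const float inputs[] = {0.1591549f, 2.0f, 224.0f};
    for (float x : inputs)
        std::printf("x = %.7f  approx = %.7f  std::sqrt = %.7f\n",
                    x, sqrtApproxNR(x), std::sqrt(x));
    return 0;
}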
|                          | x = 0.1591549 | x = 2      | x = 224    |
|--------------------------|---------------|------------|------------|
| Initial approximation    | 0.4091549     | 1.500000   | 15.00000   |
| After one NR iteration   | 0.3990697     | 1.416666   | 14.96666   |
| After two NR iterations  | 0.3989422     | 1.414215   | 14.96662   |
| std::sqrt(x)             | 0.3989422     | 1.414213   | 14.96662   |
| Exact value              | 0.3989422…    | 1.414213…  | 14.96662…  |
Function Transforms
Sometimes, the performance or stability can be improved by applying an additional transform $h$ to both sides of the root-finding problem, i.e. solving

$$g(y) = h\!\left(f^{-1}(y)\right) - h(x) = 0$$

instead of $f^{-1}(y) - x = 0$. The choice of $h$ depends on the function being approximated: a good transform can remove singularities from the iteration or replace expensive operations with cheaper ones.
Example: Wright Omega
Seeking to approximate the Wright omega function $\omega(x)$, defined for real $x$ as the solution $y$ of $y + \ln y = x$, we have $f^{-1}(y) = y + \ln y$ and $\left[f^{-1}\right]'(y) = 1 + 1/y$, giving the iteration

$$y_{n+1} = y_n - \frac{y_n + \ln y_n - x}{1 + 1/y_n},$$

which can be problematic as $y_n$ approaches zero (that is, for large negative $x$), where $\ln y_n$ and $1/y_n$ both diverge. Exponentiating both sides of $y + \ln y = x$ gives the equivalent equation $y\,e^{y} = e^{x}$, so we can instead iterate

$$y_{n+1} = y_n - \frac{y_n e^{y_n} - e^{x}}{\left(1 + y_n\right) e^{y_n}},$$

which both removes the singularity at $y = 0$ and eliminates the logarithm from the iteration.
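As a sketch of how the transformed iteration might be coded (the initial guess below is a crude illustrative choice, not the approximant from the cited paper), consider:

#include <cmath>

// Approximates the Wright omega function w(x), the solution y of y + ln(y) = x,
// by Newton-Raphson on g(y) = y*e^y - e^x. Dividing the Newton step through by
// e^y gives the equivalent update y -= (y - e^(x - y)) / (1 + y), which avoids
// forming e^x directly.
float wrightOmegaApproxNR(float x) {
    // Crude initial guess (illustrative only): omega(x) ~ e^x for very
    // negative x, and omega(x) ~ x - ln(x) for large x.
    float y = (x < 1.0f) ? std::exp(x) : x - std::log(x);
    // Repeat as many times as desired; each iteration improves accuracy at
    // the expense of execution time.
    for (int i = 0; i < 4; ++i)
        y -= (y - std::exp(x - y)) / (1.0f + y);
    return y;
}

In a real-time context the std::exp and std::log calls would themselves typically be replaced by cheap approximations.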
Sources
- “Newton’s method,” Wikipedia, October 2024.
- “Wright omega function,” Wikipedia, October 2024.
- S. D’Angelo, L. Gabrielli, and L. Turchet, “Fast Approximation of the Lambert W Function for Virtual Analog Modelling,” Proceedings of the 22nd International Conference on Digital Audio Effects, Birmingham, UK, September 2019.