It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. Donald Knuth has written that "Most of the square root operations in computer programs could probably be avoided if [Pythagorean addition] were more widely available, because people seem to want square roots primarily when they are computing distances."[2]
The operation is associative[6][7] and commutative.[6][8] Therefore, if three or more numbers are to be combined with this operation, the order of combination makes no difference to the result: $a \oplus b \oplus c = \sqrt{a^2 + b^2 + c^2}$.
Additionally, on the non-negative real numbers, zero is an identity element for Pythagorean addition. On numbers that can be negative, the Pythagorean sum with zero gives the absolute value: $x \oplus 0 = \sqrt{x^2} = |x|$.[3] The three properties of associativity, commutativity, and having an identity element (on the non-negative numbers) are the defining properties of a commutative monoid.[9][10]
Repeated Pythagorean addition can also find the diagonal length of a rectangle and the diameter of a rectangular cuboid. For a rectangle with sides $a$ and $b$, the diagonal length is $a \oplus b = \sqrt{a^2 + b^2}$.[12][13] For a cuboid, the diameter is the longest distance between any two of its points: the length of the body diagonal of the cuboid. For a cuboid with side lengths $a$, $b$, and $c$, this length is $a \oplus b \oplus c = \sqrt{a^2 + b^2 + c^2}$.[13]
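For instance, both formulas can be evaluated with nested calls to the C standard library's hypot function (a minimal sketch; the side lengths are arbitrary example values):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 3.0, b = 4.0, c = 12.0;  /* example side lengths */
    double rect_diagonal = hypot(a, b);            /* sqrt(9 + 16) = 5 */
    double body_diagonal = hypot(hypot(a, b), c);  /* sqrt(25 + 144) = 13 */
    printf("%g %g\n", rect_diagonal, body_diagonal);
    return 0;
}
```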
The root mean square or quadratic mean of a finite set of $n$ numbers is $1/\sqrt{n}$ times their Pythagorean sum. This is a generalized mean of the numbers.[16]
The standard deviation of a collection of observations is the quadratic mean of their individual deviations from the mean. When two or more independent random variables are added, the standard deviation of their sum is the Pythagorean sum of their standard deviations.[16] Thus, the Pythagorean sum itself can be interpreted as giving the amount of overall noise when combining independent sources of noise.[17]
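In symbols: for independent random variables $X$ and $Y$ with standard deviations $\sigma_X$ and $\sigma_Y$, the standard deviation of their sum is
$$\sigma_{X+Y} = \sigma_X \oplus \sigma_Y = \sqrt{\sigma_X^2 + \sigma_Y^2}.$$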
When combining signals, it can be a useful design technique to arrange for the combined signals to be orthogonal in polarization or phase, so that they add in quadrature.[23][24] In early radio engineering, this idea was used to design directional antennas, allowing signals to be received while nullifying the interference from signals coming from other directions.[23] When the same technique is applied in software to obtain a directional signal from a radio or ultrasound phased array, Pythagorean addition may be used to combine the signals.[25] Other recent applications of this idea include improved efficiency in the frequency conversion of lasers.[24]
In the psychophysics of haptic perception, Pythagorean addition has been proposed as a model for the perceived intensity of vibration when two kinds of vibration are combined.[26]
In a 1983 paper, Cleve Moler and Donald Morrison described an iterative method for computing Pythagorean sums, without taking square roots.[3] This was soon recognized to be an instance of Halley's method,[8] and extended to analogous operations on matrices.[7]
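The following C sketch illustrates this style of square-root-free iteration (the fixed pass count and the helper name pythag are illustrative assumptions, not details from the paper):

```c
#include <math.h>

/* Square-root-free Pythagorean sum in the style of the
 * Moler–Morrison iteration. Each update preserves the invariant
 * that the Pythagorean sum of p and q equals that of x and y,
 * while p grows toward the answer and q shrinks toward zero;
 * convergence is cubic, so a few passes suffice for doubles. */
double pythag(double x, double y) {
    double p = fmax(fabs(x), fabs(y));
    double q = fmin(fabs(x), fabs(y));
    if (p == 0.0) return 0.0;
    for (int i = 0; i < 4; i++) {  /* 4 passes: illustrative choice */
        double r = (q / p) * (q / p);
        double s = r / (4.0 + r);
        p += 2.0 * s * p;
        q *= s;
    }
    return p;  /* pythag(3, 4) converges to 5 */
}
```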
Although many modern implementations of this operation instead compute Pythagorean sums by reducing the problem to the square root function, they do so in a way that has been designed to avoid errors arising from the limited-precision calculations performed on computers. If calculated using the natural formula
$$\sqrt{x^2 + y^2},$$
the squares of very large or small values of $x$ and $y$ may exceed the range of machine precision when calculated on a computer. This may lead to an inaccurate result caused by arithmetic underflow and overflow, although when overflow and underflow do not occur the output is within two ulps of the exact result.[28][29][30] Common implementations of the hypot function rearrange this calculation in a way that avoids the problem of overflow and underflow and are even more precise.[31]
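A short demonstration of the failure mode in C (the inputs are chosen only to force overflow in IEEE double precision):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1e200, y = 1e200;
    /* x*x overflows to infinity, although the true Pythagorean
     * sum, about 1.414e200, is well within double range. */
    double naive = sqrt(x * x + y * y);
    double library = hypot(x, y);
    printf("naive = %g, hypot = %g\n", naive, library);
    /* prints: naive = inf, hypot = 1.41421e+200 */
    return 0;
}
```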
If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).[32]
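In C, for instance, this behavior can be checked directly (C's Annex F semantics mirror the IEEE 754 requirement):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* An infinite input forces an infinite result,
     * even when the other input is NaN. */
    printf("%g\n", hypot(INFINITY, NAN));  /* prints inf */
    printf("%g\n", hypot(NAN, INFINITY));  /* prints inf */
    return 0;
}
```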
Calculation order
The difficulty with the naive implementation is that $x^2 + y^2$ may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that $|x| \ge |y|$, and then to use the equivalent form
$$\sqrt{x^2 + y^2} = |x| \sqrt{1 + (y/x)^2}.$$
The computation of $y/x$ cannot overflow unless both $x$ and $y$ are zero. If $y/x$ underflows, the final result is equal to $|x|$, which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by $|x|$ cannot underflow, and overflows only when the result is too large to represent.[31]
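A sketch of this rearrangement in C (not the exact code of any particular library):

```c
#include <math.h>

/* Pythagorean sum via |x| * sqrt(1 + (y/x)^2), with the
 * arguments ordered so that |x| >= |y|. */
double hypot_rearranged(double x, double y) {
    double ax = fabs(x), ay = fabs(y);
    if (ax < ay) { double t = ax; ax = ay; ay = t; }  /* ensure ax >= ay */
    if (ax == 0.0) return 0.0;      /* both inputs are zero */
    double r = ay / ax;             /* lies in [0, 1]; cannot overflow */
    return ax * sqrt(1.0 + r * r);  /* sqrt argument lies in [1, 2] */
}
```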
One drawback of this rearrangement is the additional division by $x$, which increases both the time and the inaccuracy of the computation.
More complex implementations avoid these costs by dividing the inputs into more cases, as in the sketch following this list:
When $x^2 + y^2$ overflows, multiply both $x$ and $y$ by a small scaling factor (e.g. $2^{-64}$ for IEEE single precision), use the naive algorithm, which will now not overflow, and multiply the result by the (large) inverse (e.g. $2^{64}$).
When $x^2 + y^2$ underflows, scale as above, but reverse the scaling factors to scale up the intermediate values.
Otherwise, the naive algorithm is safe to use.
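A sketch of this case analysis in C; the thresholds and the $2^{\pm 600}$ scale factors are illustrative choices for IEEE double precision, not values mandated by any particular library:

```c
#include <math.h>

/* Case-split Pythagorean sum for IEEE double precision. */
double hypot_cases(double x, double y) {
    double ax = fabs(x), ay = fabs(y);
    if (ax < ay) { double t = ax; ax = ay; ay = t; }  /* ax >= ay */
    if (ax > 0x1p+500) {
        /* Squares could overflow: scale down, compute, scale back up. */
        ax *= 0x1p-600;
        ay *= 0x1p-600;
        return sqrt(ax * ax + ay * ay) * 0x1p+600;
    }
    if (ax < 0x1p-500 && ax > 0.0) {
        /* Squares could underflow: scale up, compute, scale back down. */
        ax *= 0x1p+600;
        ay *= 0x1p+600;
        return sqrt(ax * ax + ay * ay) * 0x1p-600;
    }
    return sqrt(ax * ax + ay * ay);  /* safe range: naive formula */
}
```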
Additional techniques allow the result to be computed more accurately than the naive algorithm, e.g. with error less than one ulp.[31] Researchers have also developed analogous algorithms for computing Pythagorean sums of more than two values.[33]
Fast approximation
The alpha max plus beta min algorithm is a high-speed approximation of Pythagorean addition using only comparison, multiplication, and addition, producing a value whose error is less than 4% of the correct result. It is computed as
$$x \oplus y \approx \alpha \max(|x|, |y|) + \beta \min(|x|, |y|)$$
for a careful choice of parameters $\alpha$ and $\beta$.[34]
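A sketch in C, using one standard parameter pair for which the largest error is just under 4%:

```c
#include <math.h>

/* Alpha max plus beta min: approximate Pythagorean sum using
 * only comparison, multiplication, and addition. */
double hypot_approx(double x, double y) {
    const double alpha = 0.96043387;  /* 2*cos(pi/8) / (1 + cos(pi/8)) */
    const double beta  = 0.39782473;  /* 2*sin(pi/8) / (1 + cos(pi/8)) */
    double ax = fabs(x), ay = fabs(y);
    double mx = ax > ay ? ax : ay;
    double mn = ax > ay ? ay : ax;
    return alpha * mx + beta * mn;  /* hypot_approx(3, 4) = 5.035 */
}
```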
The terms "Pythagorean addition" and "Pythagorean sum" for this operation have been used at least since the 1950s,[18][50] and its use in signal processing as "addition in quadrature" goes back at least to 1919.[23]
From the 1920s to the 1940s, before the widespread use of computers, multiple designers of slide rules included square-root scales in their devices, allowing Pythagorean sums to be calculated mechanically.[51][52][53] Researchers have also investigated analog circuits for approximating the value of Pythagorean sums.[54]
References
Johnson, David L. (2017). "12.2.3 Addition in Quadrature". Statistical Tools for the Comprehensive Practice of Industrial Hygiene and Environmental Health Sciences. John Wiley & Sons. p. 289. ISBN 9781119143017.
Ellis, Mark W.; Pagni, David (May 2008). "Exploring segment lengths on the Geoboard". Mathematics Teaching in the Middle School. 13 (9). National Council of Teachers of Mathematics: 520–525. doi:10.5951/mtms.13.9.0520. JSTOR 41182606.
"SIN (3M): Trigonometric functions and their inverses". Unix Programmer's Manual: Reference Guide (4.3 Berkeley Software Distribution Virtual VAX-11 Version ed.). Department of Electrical Engineering and Computer Science, University of California, Berkeley. April 1986.
Weisberg, Herbert F. (1992). Central Tendency and Variability. Quantitative Applications in the Social Sciences. Vol. 83. Sage. pp. 45, 52–53. ISBN 9780803940079.
Barlow, Roger (March 22, 2002). "Systematic errors: facts and fictions". Conference on Advanced Statistical Techniques in Particle Physics. Durham, UK. pp. 134–144. arXiv:hep-ex/0207026.
Eimerl, D. (August 1987). "Quadrature frequency conversion". IEEE Journal of Quantum Electronics. 23 (8): 1361–1371. doi:10.1109/jqe.1987.1073521.
Powers, J. E.; Phillips, D. J.; Brandestini, M.; Ferraro, R.; Baker, D. W. (1980). "Quadrature sampling for phased array application". In Wang, Keith Y. (ed.). Acoustical Imaging: Visualization and Characterization. Vol. 9. Springer. pp. 263–273. doi:10.1007/978-1-4684-3755-3_18. ISBN 9781468437553.
Yoo, Yongjae; Hwang, Inwook; Choi, Seungmoon (April 2022). "Perceived intensity model of dual-frequency superimposed vibration: Pythagorean sum". IEEE Transactions on Haptics. 15 (2): 405–415. doi:10.1109/toh.2022.3144290.
Kanopoulos, N.; Vasanthavada, N.; Baker, R. L. (April 1988). "Design of an image edge detection filter using the Sobel operator". IEEE Journal of Solid-State Circuits. 23 (2): 358–367. doi:10.1109/4.996.
Jeannerod, Claude-Pierre; Muller, Jean-Michel; Plet, Antoine (2017). "The classical relative error bounds for computing $\sqrt{a^2+b^2}$ and $c/\sqrt{a^2+b^2}$ in binary floating-point arithmetic are asymptotically optimal". In Burgess, Neil; Bruguera, Javier D.; de Dinechin, Florent (eds.). 24th IEEE Symposium on Computer Arithmetic, ARITH 2017, London, United Kingdom, July 24–26, 2017. IEEE Computer Society. pp. 66–73. doi:10.1109/ARITH.2017.40.
Muller, Jean-Michel; Salvy, Bruno (2024). "Effective quadratic error bounds for floating-point algorithms computing the hypotenuse function". arXiv:2405.03588 [math.NA].
Borges, Carlos F. (2021). "Algorithm 1014: An improved algorithm for hypot(x, y)". ACM Transactions on Mathematical Software. 47 (1): 9:1–9:12. arXiv:1904.09481. doi:10.1145/3428446. S2CID 230588285.
van der Leun, Vincent (2017). "Java Class Library". Introduction to JVM Languages: Java, Scala, Clojure, Kotlin, and Groovy. Packt Publishing Ltd. pp. 10–11. ISBN 9781787126589.
Taylor, Mischa; Vargo, Seth (2014). "Mathematical operations". Learning Chef: A Guide to Configuration Management and Automation. O'Reilly Media. p. 40. ISBN 9781491945117.
"Primitive Type f64". The Rust Standard Library. February 17, 2025. Retrieved 2025-02-22.
Stern, T. E.; Lerner, R. M. (April 1963). "A circuit for the square root of the sum of the squares". Proceedings of the IEEE. 51 (4): 593–596. doi:10.1109/proc.1963.2206.