2n vs. n vs. ?
Sometimes I worry about the most inane things...
Consider this common problem: Find the value of f(b) if f(x) = a_{0} + a_{1}x + a_{2}x^{2} +...+ a_{n}x^{n}.
Now, one approach is to sequentially multiply b by itself, pausing to multiply by the appropriate parameter a_{i} and update the sum.
In pseudocode form:
sum ← a_{0};
y ← 1;
for i = 1 to n (step 1) do
    y ← y*b;
    sum ← sum + a_{i}*y
endfor
To find f(b), this approach involves n additions and 2n multiplications.
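As a quick sketch of this first approach in Python (the list `coeffs` holding a_{0} through a_{n} is my assumed representation, not anything from the original):

```python
def poly_eval_powers(coeffs, b):
    """Evaluate f(b) = a_0 + a_1*b + ... + a_n*b^n by carrying
    the running power of b along: n additions, 2n multiplications."""
    total = coeffs[0]          # a_0
    y = 1
    for a in coeffs[1:]:       # a_1, a_2, ..., a_n
        y = y * b              # one multiplication updates b^i
        total = total + a * y  # one multiplication, one addition
    return total

# f(x) = 1 + 2x + 3x^2 at b = 2: 1 + 4 + 12 = 17
print(poly_eval_powers([1, 2, 3], 2))  # -> 17
```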
But, suppose we rewrote the polynomial as f(x) = a_{0} + x(a_{1} + x(a_{2} +...+ x(a_{n})...)). Do you see the reason for this rewrite?
Now, in pseudocode form, we work from the highest coefficient down:
sum ← a_{n};
for i = 1 to n (step 1) do
    sum ← sum*b + a_{n-i}
endfor
If I did this correctly, the value of f(b) is now obtained via n additions and n multiplications. A huge gain for high-degree polynomials.
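The nested rewrite is Horner's rule, and it is just as short in Python (same assumed `coeffs` list, a_{0} first):

```python
def poly_eval_horner(coeffs, b):
    """Horner's rule: f(b) = a_0 + b(a_1 + b(a_2 + ... + b*a_n)).
    Exactly n multiplications and n additions."""
    total = coeffs[-1]                   # start from a_n
    for a in reversed(coeffs[:-1]):      # a_{n-1}, ..., a_0
        total = total * b + a            # one multiply, one add per step
    return total

# Same polynomial as before: f(x) = 1 + 2x + 3x^2 at b = 2
print(poly_eval_horner([1, 2, 3], 2))  # -> 17
```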
But, alas... I expect students use neither. Their algorithm, computing each power b^{i} from scratch, probably involves n additions and (n^{2}+3n+2)/2 multiplications, which is not efficient at all.
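For contrast, here is my guess at that from-scratch approach in Python. Rebuilding each power costs roughly n^{2}/2 multiplications in the loop below (the exact count in the text also charges a multiplication for the constant term):

```python
def poly_eval_naive(coeffs, b):
    """Recompute b**i from scratch for every term: quadratically
    many multiplications, versus n for Horner's rule."""
    total = coeffs[0]                       # a_0
    for i, a in enumerate(coeffs[1:], start=1):
        power = 1
        for _ in range(i):                  # i multiplications just for b^i
            power *= b
        total += a * power                  # one more multiplication
    return total

print(poly_eval_naive([1, 2, 3], 2))  # -> 17, same value, far more work
```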
But then, with ready access to modern computing technologies, perhaps no one is concerned about which algorithm we teach... except for myself... and the British mathematician William George Horner (1819), although the method was known before him to the Italian mathematician Paolo Ruffini (1809), Isaac Newton (1669), the Chinese mathematician Qin Jiushao (1247), and the Chinese mathematician Liu Hui (3rd century)!
