Using Taylor Series to Avoid Loss of Precision

Posted by Zachary on Stack Overflow, 2009-02-12.

I'm trying to use Taylor series to develop a numerically sound algorithm for evaluating a function near zero. I've been at it for quite a while, but haven't had any luck yet. I'm not sure what I'm doing wrong.

The function is

f(x) = 1 + x - sin(x)/ln(1+x),    x ≈ 0

Also: why does loss of precision even occur in this function? When x is close to zero, sin(x)/ln(1+x) isn't even close to being the same number as x, so I don't see where significance is being lost.
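
For reference, here is a small Python sketch of the direct double-precision evaluation and the two quantities it ends up subtracting. This isn't part of my solution attempt, and the test value x = 1e-8 is an arbitrary choice; math.log1p is used so that ln(1+x) itself is computed accurately for tiny x.

    import math

    def f_naive(x):
        # Direct evaluation of f(x) = 1 + x - sin(x)/ln(1+x).
        # math.log1p(x) computes ln(1+x) accurately for small x, so any
        # remaining damage comes from the final subtraction.
        return 1.0 + x - math.sin(x) / math.log1p(x)

    x = 1e-8  # arbitrary small test value
    a = 1.0 + x
    b = math.sin(x) / math.log1p(x)
    print(a, b, a - b)
    print(f_naive(x))

For x that small, both printed intermediate values come out within about 1e-8 of 1.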

In order to solve this, I believe that I will need to use the Taylor expansions for sin(x) and ln(1+x), which are

x - x^3/3! + x^5/5! - x^7/7! + ...

and

x - x^2/2 + x^3/3 - x^4/4 + ...

respectively. I have attempted to use like denominators to combine the x and sin(x)/ln(1+x) terms, and even to combine all three, but nothing seems to work out correctly in the end. Any help is appreciated.
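
To make the combination concrete, here is one possible way to divide the two truncated series, written as a rough Python sketch rather than a verified solution: represent each expansion by exact rational coefficients, do series long division to get the coefficients of sin(x)/ln(1+x), and subtract that quotient from 1 + x before evaluating with Horner's rule. The truncation order N = 8 and the helper names are arbitrary choices of mine.

    from fractions import Fraction
    from math import factorial

    N = 8  # truncation order for the series (arbitrary choice)

    # Coefficients c[k] of x^k in sin(x) = x - x^3/3! + x^5/5! - ...
    sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 1 else Fraction(0)
             for k in range(N + 2)]

    # Coefficients of x^k in ln(1+x) = x - x^2/2 + x^3/3 - ...
    log_c = [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, N + 2)]

    # Both series start with x, so divide one x out of each:
    # sin(x)/ln(1+x) = (sin(x)/x) / (ln(1+x)/x), and both reduced series start at 1.
    num = sin_c[1:]
    den = log_c[1:]

    # Series long division: find q with q * den = num, one coefficient at a time.
    q = []
    for k in range(N + 1):
        coef = num[k] - sum(q[j] * den[k - j] for j in range(k))
        q.append(coef / den[0])

    # f(x) = 1 + x - sin(x)/ln(1+x): subtract the quotient series from 1 + x.
    f_c = [Fraction(1) - q[0], Fraction(1) - q[1]] + [-q[k] for k in range(2, N + 1)]

    def f_series(x):
        # Evaluate the truncated series with Horner's rule in double precision.
        total = 0.0
        for c in reversed(f_c):
            total = total * x + float(c)
        return total

    print([str(c) for c in f_c])   # leading coefficients of the series for f
    print(f_series(1e-8))          # arbitrary small test value

Keeping the coefficients as exact Fractions until the final Horner evaluation is only meant to avoid introducing extra rounding while building the series.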
