Numerical Differentiation (MACM 316)
October 30, 2008 -- (c) Steven Rauch and John Stockie

Suppose we have a list of points x_0, x_1, ..., x_n and corresponding function values f(x_0), f(x_1), ..., f(x_n). A natural question is whether we can use these data to approximate f'(x) at some point x in [x_0, x_n]. The answer is straightforward provided the points x_i are equally spaced, so that x_i - x_{i-1} = h (constant).

The easiest way to motivate derivative formulas is to apply the definition of the derivative,

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},

which suggests several possible difference formulas:

    f'(x) \approx \frac{f(x+h) - f(x)}{h}            (forward difference)
    f'(x) \approx \frac{f(x) - f(x-h)}{h}            (backward difference)
    f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}         (centered difference)
    f'(x) \approx \frac{f(x+2h) - f(x-2h)}{4h}       (wide centered difference)

These formulas are accurate only if h is small enough.

Example

Suppose we are approximating the derivative of f(x) = 2 sin(3x) using the equally spaced data

    x:    0.3000  0.3250  0.3500  0.3750  0.4000  0.4250  0.4500  0.4750  0.5000
    f(x): 1.5667  1.6554  1.7348  1.8045  1.8641  1.9131  1.9514  1.9788  1.9950

The approximations of f'(0.4) with h = 0.1 are

    1. f'(x) \approx \frac{f(x+h) - f(x)}{h}:      f'(0.4) \approx \frac{1.9950 - 1.8641}{0.1} = 1.3090   (40%)
    2. f'(x) \approx \frac{f(x) - f(x-h)}{h}:      f'(0.4) \approx \frac{1.8641 - 1.5667}{0.1} = 2.9740   (37%)
    3. f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}:   f'(0.4) \approx \frac{1.9950 - 1.5667}{0.2} = 2.1415   (1.5%)

where the relative errors are computed using the exact value f'(0.4) = 6 cos(3 * 0.4) = 2.17414652686004.

[Figure 1: f(x) = 2 sin(3x) plotted for 0.25 <= x <= 0.6.]

Example (cont'd)

Investigate what happens when h is decreased to 0.05, using the same data. The approximations of f'(0.4) with h = 0.05 are

    1. f'(x) \approx \frac{f(x+h) - f(x)}{h}:        f'(0.4) \approx \frac{1.9514 - 1.8641}{0.05} = 1.7460   (20%)
    2. f'(x) \approx \frac{f(x) - f(x-h)}{h}:        f'(0.4) \approx \frac{1.8641 - 1.7348}{0.05} = 2.5860   (19%)
    3. f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}:     f'(0.4) \approx \frac{1.9514 - 1.7348}{0.1} = 2.1660   (0.4%)
    4. f'(x) \approx \frac{f(x+2h) - f(x-2h)}{4h}:   f'(0.4) \approx \frac{1.9950 - 1.5667}{0.2} = 2.1415   (1.5%)

Notice that:
- The forward and backward difference formulas (1 and 2) have similar accuracy.
- The centered difference (3) is much more accurate than the one-sided differences.
- Decreasing h increases the accuracy of the approximation.
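These observations are easy to reproduce. Below is a minimal Python sketch (my addition, not part of the original notes) that evaluates the four difference formulas for f(x) = 2 sin(3x) at x = 0.4. It calls the exact function rather than the rounded four-decimal table values, so the printed numbers agree with the table above only to roughly table precision; the helper name report is an arbitrary choice.

import math

def f(x):
    return 2.0 * math.sin(3.0 * x)

x = 0.4
exact = 6.0 * math.cos(3.0 * x)   # exact derivative f'(x) = 6 cos(3x)

def report(label, approx):
    rel = abs(approx - exact) / abs(exact)
    print(f"  {label:14s} {approx:8.4f}   relative error {100 * rel:5.1f}%")

for h in (0.1, 0.05):
    print(f"h = {h}")
    report("forward", (f(x + h) - f(x)) / h)
    report("backward", (f(x) - f(x - h)) / h)
    report("centered", (f(x + h) - f(x - h)) / (2 * h))
    report("wide centered", (f(x + 2 * h) - f(x - 2 * h)) / (4 * h))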
Question: Can this be explained?

Error Analysis

To analyse the error in finite difference formulas, use Taylor series approximations.

Example 1: forward difference formula. Write the Taylor polynomial of degree n = 1, with error term:

    f(x+h) = f(x) + f'(x) h + \frac{f''(c)}{2} h^2.

Then

    \frac{f(x+h) - f(x)}{h} = f'(x) + \frac{f''(c)}{2} h = f'(x) + O(h).

Decreasing h clearly reduces the error.

Example 2: centered difference formula. Write the Taylor polynomials for f(x+h) and f(x-h):

    f(x+h) = f(x) + f'(x) h + \frac{f''(x)}{2} h^2 + \frac{f'''(x)}{6} h^3 + \frac{f^{(4)}(x)}{24} h^4 + \frac{f^{(5)}(c_1)}{120} h^5
    f(x-h) = f(x) - f'(x) h + \frac{f''(x)}{2} h^2 - \frac{f'''(x)}{6} h^3 + \frac{f^{(4)}(x)}{24} h^4 - \frac{f^{(5)}(c_2)}{120} h^5

Subtract the second equation from the first and divide by 2h:

    \frac{f(x+h) - f(x-h)}{2h} = f'(x) + \frac{f'''(x)}{6} h^2 + O(h^4).

The error in the centered formula is smaller, as expected.

Error Analysis (cont'd)

Taylor series can also be used to derive new formulas.

Example 3: a second-order one-sided formula. Write the Taylor polynomials for f(x+h) and f(x+2h):

    f(x+h)  = f(x) + f'(x) h + \frac{f''(x)}{2} h^2 + \frac{f'''(x)}{6} h^3 + O(h^4)
    f(x+2h) = f(x) + 2 f'(x) h + 2 f''(x) h^2 + \frac{4 f'''(x)}{3} h^3 + O(h^4)

Form the following linear combination:

    \frac{4 f(x+h) - f(x+2h) - 3 f(x)}{2h} = f'(x) - \frac{f'''(x)}{3} h^2 + O(h^3).

We expect this formula to be more accurate than the forward/backward differences, and comparable to the centered formula.

Richardson Extrapolation

In addition to creating new formulas or reducing h, there is a trick for increasing accuracy. The centered difference formula is missing the odd-order terms:

    f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2) + O(h^4) + O(h^6) + \cdots
          = g_0(h) + a h^2 + O(h^4) + O(h^6) + \cdots                        (1)

Then write the same difference formula using h/2:

    f'(x) = g_0(h/2) + a (h/2)^2 + O(h^4) + O(h^6) + \cdots                  (2)

Cancel the O(h^2) term by taking 4 * (2) - (1):

    4 f'(x) - f'(x) = 4 g_0(h/2) - g_0(h) + 4 a (h/2)^2 - a h^2 + O(h^4) + O(h^6) + \cdots

Simplify to obtain a formula of higher accuracy:

    f'(x) = \frac{4 g_0(h/2) - g_0(h)}{3} + O(h^4) + O(h^6) + \cdots

Continue this idea (recursively) to higher orders:

    f'(x) = \frac{4 g_0(h/2) - g_0(h)}{3} + b h^4 + O(h^6) + \cdots
          = g_1(h) + b h^4 + O(h^6) + \cdots
          = g_1(h/2) + \frac{b}{16} h^4 + O(h^6) + \cdots
          = \frac{16 g_1(h/2) - g_1(h)}{15} + O(h^6) + \cdots
          = g_2(h) + O(h^6) + \cdots

In general,

    g_n(h) = \frac{4^n g_{n-1}(h/2) - g_{n-1}(h)}{4^n - 1}.

Richardson Extrapolation: Example

Consider the data from the previous example:

    x:    0.300   0.325   0.350   0.375   0.400   0.425   0.450   0.475   0.500
    f(x): 1.5667  1.6554  1.7348  1.8045  1.8641  1.9131  1.9514  1.9788  1.9950

    g_0(h) = \frac{f(x+h) - f(x-h)}{2h}
    g_1(h) = \frac{4 g_0(h/2) - g_0(h)}{3}
    g_2(h) = \frac{16 g_1(h/2) - g_1(h)}{15}

The steps in Richardson extrapolation (x = 0.4 and h = 0.1) are easy to organize in tabular form:

    step   g_0 (O(h^2))     g_1 (O(h^4))     g_2 (O(h^6))
    h      2.1416807698     2.1741099363     2.1741465220
    h/2    2.1660026447     2.1741442353
    h/4    2.1721088377

Note: don't confuse this with Newton divided differences.

Relative errors (using f'(0.4) = 2.17414652686004):

    step   g_0              g_1              g_2
    h      1.49 x 10^-2     1.68 x 10^-5     2.26 x 10^-9
    h/2    3.74 x 10^-3     1.05 x 10^-6
    h/4    9.25 x 10^-4
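The extrapolation table above can be generated mechanically. The sketch below (my addition, not from the notes) implements the recursion g_n(h) = (4^n g_{n-1}(h/2) - g_{n-1}(h)) / (4^n - 1) starting from the centered difference; the names g0 and richardson are illustrative choices. Run with x = 0.4 and h = 0.1, it should reproduce the triangular table above to the printed precision.

import math

def f(x):
    return 2.0 * math.sin(3.0 * x)

def g0(x, h):
    # Centered difference: the O(h^2) starting approximation.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(x, h, levels=3):
    # T[i][n] holds g_n evaluated with step h / 2**i:
    #   T[i][0] = g0(x, h / 2**i)
    #   T[i][n] = (4**n * T[i+1][n-1] - T[i][n-1]) / (4**n - 1)
    T = [[g0(x, h / 2**i)] for i in range(levels)]
    for n in range(1, levels):
        for i in range(levels - n):
            T[i].append((4**n * T[i + 1][n - 1] - T[i][n - 1]) / (4**n - 1))
    return T

for i, row in enumerate(richardson(0.4, 0.1)):
    print(f"h/2^{i}: " + "  ".join(f"{v:.10f}" for v in row))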
Optimal h

When applying any finite difference formula, we want h as small as possible so that the truncation error (the Taylor polynomial error term) is small. However, we cannot take h too small, otherwise round-off error dominates (subtractive cancellation). There should be an optimal h where truncation error and round-off error balance out.

Example: forward difference approximation. Evaluate the difference using floating point arithmetic:

    fl\!\left( \frac{f(x+h) - f(x)}{h} \right) = \frac{f(x+h)(1+\delta_1) - f(x)(1+\delta_2)}{h},   where |\delta_i| \le u (unit roundoff)
                                               = \frac{f(x+h) - f(x)}{h} + \frac{a u}{h},           where a is some constant
                                               = f'(x) + b h + \frac{a u}{h},

where b h is the truncation error and a u / h is the round-off error. The optimum h occurs roughly when the two balance, b h \approx a u / h, which gives

    h^* \approx \sqrt{\frac{a u}{b}}.

Example. Take f(x) = 2 sin(3x). Then a \approx 4 and b = |f''(x)|/2 = 9 |sin(3x)| \approx 9. Assuming single precision arithmetic, u = 10^{-6}:

    h^* \approx \sqrt{\frac{4 \times 10^{-6}}{9}} \approx 0.00066667,
    f'(0.4) \approx \frac{f(0.400667) - f(0.4)}{0.00066667} \approx 2.16   (0.65% relative error).

Below is a representative plot of the estimates for the truncation error b h, the round-off error a u / h, and the total error b h + a u / h.

[Figure: truncation error, round-off error, and total error (in units of 10^-3) versus h (in units of 10^-3).]

Note: the total error has a local minimum near h = 0.667 x 10^-3.
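To see the truncation/round-off trade-off directly, the sketch below (my addition, not from the notes) scans h for the forward difference in double precision. With unit roundoff u of roughly 2.2 x 10^-16 instead of the single precision u = 10^-6 assumed above, the predicted optimum h^* = sqrt(a u / b) moves to roughly 10^-8; the printed errors should shrink like h down to about that point and then grow again as cancellation takes over.

import math

def f(x):
    return 2.0 * math.sin(3.0 * x)

x = 0.4
exact = 6.0 * math.cos(3.0 * x)

# Predicted optimum h* = sqrt(a*u/b), with a ~ 4 and b ~ 9 as estimated above,
# but using the double precision unit roundoff.
u = 2.2e-16
print(f"predicted optimal h ~ {math.sqrt(4.0 * u / 9.0):.1e}")

for k in range(1, 13):
    h = 10.0 ** (-k)
    approx = (f(x + h) - f(x)) / h          # forward difference
    err = abs(approx - exact)
    print(f"h = 1e-{k:02d}   absolute error = {err:.2e}")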