Computerised Mathematical Methods in Engineering (HG3MCE)

Numerical Differentiation
Forward difference approximation: f′(x) ≈ [f(x + Δx) − f(x)] / Δx, as Δx → 0
Centred difference approximation: f′(x) ≈ [f(x + h) − f(x − h)] / (2h)
Second derivative: f″(x) ≈ [f(x + h) − 2f(x) + f(x − h)] / h²

Extrapolation Techniques, O(h²)
Find a better estimate by using 2 approximations and cancelling the errors.
Richardson's extrapolation:
N_j(h) = N_{j−1}(h/2) + [N_{j−1}(h/2) − N_{j−1}(h)] / (4^(j−1) − 1)
where N₁(h) = f′(x) using a difference approximation. Tabulate rows n = 0, 1, 2, … for step sizes h, h/2, h/4, … against columns N₁(h), N₂(h), …; each new column cancels the next error term.

Nonlinear Functions
Bisection method: halve the interval between 2 points a and b that bracket the root. c = (a + b)/2; replace the endpoint at which f has the same sign as f(c) (in the notes' orientation: if f(c) > 0, a = c; if f(c) < 0, b = c); iterate.
Recasting problems as root finding: x = √a ⇒ f(x) = x² − a; x = 1/a ⇒ f(x) = a − 1/x
Secant method: uses straight lines dictated by 2 points on the graph that close in on the root:
x_n = x_{n−1} − f(x_{n−1}) · (x_{n−1} − x_{n−2}) / [f(x_{n−1}) − f(x_{n−2})]
Muller's method: uses parabolas (quadratic functions) through 3 points, taking the parabola's x-axis intercept as the next approximation, with the roots given by the quadratic formula x = [−b ± √(b² − 4ac)] / (2a).
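The bisection and Newton–Raphson updates can be sketched as follows; the test function f(x) = x² − a (root √a) follows the recast in the notes, while the function names and tolerances are illustrative choices, not from the notes.

```python
# Sketch of two root-finding methods from the notes, applied to
# f(x) = x^2 - 2, whose positive root is sqrt(2).

def bisection(f, a, b, tol=1e-10):
    """Halve the bracket [a, b] until it is narrower than tol."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2.0
        if fa * f(c) <= 0:   # sign change in [a, c]: root lies there
            b = c
        else:                # otherwise the root lies in [c, b]
            a, fa = c, f(c)
    return (a + b) / 2.0

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**2 - 2.0   # recast of x = sqrt(2) as a root problem
df = lambda x: 2.0 * x

print(bisection(f, 1.0, 2.0))      # converges to about 1.4142135
print(newton_raphson(f, df, 1.5))  # converges to about 1.4142135
```

Note how Newton–Raphson needs the derivative while bisection only needs a sign change; the secant method sits in between, replacing f′ with a finite difference of the last two iterates.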
Newton–Raphson method
Differentiate the function, then iterate:
x_{n+1} = x_n − f(x_n) / f′(x_n)

Fixed Point Theorem
Recast the function into fixed-point form g(x): f(x) = 0 ⇒ x = … = g(x). If |g′(x)|_max < 1 there is only one root, and the iteration x_{n+1} = g(x_n) converges if |g′(root)| < 1.

Numerical Integration (Quadrature)
I = ∫ from a to b of f(x) dx
Trapezoidal rule (1 subinterval): I ≈ ∫ P₁(x) dx = h · [f(a) + f(b)] / 2, where h = b − a
Composite trapezoid rule: divide b − a into N smaller subintervals of width h = (b − a)/N:
I ≈ (h/2) · [f(a) + 2f(x₁) + 2f(x₂) + … + 2f(x_{N−1}) + f(b)]
Simpson's rule (2 subintervals): interpolate with a quadratic polynomial:
I ≈ ∫ P₂(x) dx = (h/3) · [f(a) + 4f(x₁) + f(b)]
Composite Simpson's rule: divide b − a into 2n subintervals:
I ≈ (h/3) · [f(a) + 4f(x₁) + 2f(x₂) + 4f(x₃) + 2f(x₄) + … + 4f(x_{2n−1}) + f(b)] = (h/3) · [(c₀ + c_{2n}) + 4·(sum over odd points) + 2·(sum over even points)]
3/8 Simpson's rule, from the Newton–Cotes formula for N = 3 subintervals:
I ≈ (3h/8) · [f(a) + 3f(x₁) + 3f(x₂) + f(b)]

Interpolation
Straight lines: f(x) = mx + d, with m = [f(x₁) − f(x₀)] / (x₁ − x₀)
Polynomials: e.g. cubic f(x_i) = a·x_i³ + b·x_i² + c·x_i + d
Lagrange polynomials: P(x) = L₀(x) + L₁(x) + … + L_n(x), with P_n(x_i) = f(x_i)
2nd order: L₀(x) = [(x − x₁)(x − x₂)] / [(x₀ − x₁)(x₀ − x₂)] · f(x₀)
nth order: L_n(x) = [(x − x₀)(x − x₁)…(x − x_{n−1})] / [(x_n − x₀)(x_n − x₁)…(x_n − x_{n−1})] · f(x_n)

Divided Difference method
Builds the nth order polynomial from (n + 1) points, with P_n(x_i) = f(x_i):
a₀ = f[x₀] = f(x₀)
a₁ = f[x₀, x₁] = [f(x₁) − f(x₀)] / (x₁ − x₀)
a₂ = f[x₀, x₁, x₂] = (f[x₁, x₂] − a₁) / (x₂ − x₀)
a_n = f[x_i, …, x_n] = (f[x_{i+1}, …, x_n] − f[x_i, …, x_{n−1}]) / (x_n − x_i)
P_n(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + … + a_n(x − x₀)…(x − x_{n−1})
The coefficients a₀, a₁, a₂, … are read off a table with columns i, x_i, f[x_i], f[x_i, x_{i+1}], f[x_i, x_{i+1}, x_{i+2}], …
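The composite trapezoid and composite Simpson's rules above can be sketched as below; the test integral ∫₀^π sin x dx = 2 and the function names are illustrative choices, not from the notes.

```python
# Sketch of the composite quadrature rules from the notes.
import math

def composite_trapezoid(f, a, b, n):
    """(h/2) * [f(a) + 2f(x1) + ... + 2f(x_{n-1}) + f(b)] with h = (b-a)/n."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2.0 * f(a + i * h)
    return total * h / 2.0

def composite_simpson(f, a, b, n):
    """(h/3) * [ends + 4*(odd points) + 2*(even points)]; n must be even."""
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0

# Exact value of the test integral is 2; Simpson's rule (error O(h^4))
# lands much closer than the trapezoid rule (error O(h^2)).
print(composite_trapezoid(math.sin, 0.0, math.pi, 100))
print(composite_simpson(math.sin, 0.0, math.pi, 100))
```

Comparing the two results for the same n illustrates why Richardson's extrapolation pairs them: the trapezoid error shrinks by 4 and the Simpson error by 16 when h is halved.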
Extrapolation for O(h⁴)
I* = I_{2n} + E = I_{2n} + (I_{2n} − I_n) / 15
Richardson's extrapolation:
N_j(h) = N_{j−1}(h/2) + [N_{j−1}(h/2) − N_{j−1}(h)] / (4^(j−1) − 1)
N₁(h) corresponds to the composite trapezoid rule, N₂(h) to Simpson's rule, and N₂(h/2) to the composite Simpson's rule. Tabulate columns N₁(h), N₂(h), … against N = 2, 4, 8, … subintervals.

Linear Algebra
Gaussian elimination solves a linear system of equations Ax = b, where A is the n×n matrix of left-hand-side coefficients, b is the vector of right-hand sides, and x = (x, y, z, …) is the vector of unknowns. Place A and b alongside each other in an augmented table and manipulate each row until A becomes an upper triangular matrix. This method won't work well when the |pivot element| << |any other element of that row|.
With scaled partial pivoting, for A = a[row number],[column number] = a_{i,j}:
- Index vector, k = (1, 2, 3, 4)
- Largest element of each row (scale vector), s = (−, −, −, −)
- Index for iterations, m = 1 (for the first step)
- Pivot row: p = max of |a_{k_i, m}| / s_{k_i} over i = 1, 2, 3, 4
Pick k_j = the first index j with the largest ratio, then swap this with k_m in the index vector. The row indexed by k_m is the pivoting row (all manipulations in this turn are based upon it).

Numerical Methods for ODEs
An nth order ODE can be written as n first-order ODEs.
Forward Euler method: with initial condition y(x₀) = y₀,
y_{n+1} = y_n + h·f(x_n, y_n), where x_n = x₀ + nh
Tabulate rows n = 0, 1, 2, … with columns x_n, y_n (and k₁, k₂ for the methods below).
Modified Euler method:
y_{n+1} = y_n + (k₁ + k₂)/2
k₁ = h·f(x_n, y_n)
k₂ = h·f(x_{n+1}, y_n + k₁)
2nd order Runge–Kutta methods:
y_{n+1} = y_n + a·k₁ + b·k₂
k₁ = h·f(x_n, y_n)
k₂ = h·f(x_n + αh, y_n + βk₁)
Modified Euler when a = b = 0.5, α = β = 1; midpoint method when a = 0, b = 1, α = β = 0.5. The combination of k₁ and k₂ is a weighted sum with weight 1 (a + b = 1).
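The forward and modified Euler steps can be sketched as follows; the test problem y′ = y, y(0) = 1 (exact solution eˣ) and the function names are illustrative, not from the notes.

```python
# Sketch of the two Euler schemes from the notes for y' = f(x, y), y(x0) = y0.

def forward_euler(f, x0, y0, h, n_steps):
    """First-order scheme: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

def modified_euler(f, x0, y0, h, n_steps):
    """Second-order scheme: y_{n+1} = y_n + (k1 + k2)/2,
    with k1 = h*f(x_n, y_n) and k2 = h*f(x_n + h, y_n + k1)."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)
        y += (k1 + k2) / 2.0
        x += h
    return y

# y' = y, y(0) = 1  =>  exact solution y(1) = e ~= 2.71828
f = lambda x, y: y
print(forward_euler(f, 0.0, 1.0, 0.01, 100))
print(modified_euler(f, 0.0, 1.0, 0.01, 100))
```

With the same step size, the modified Euler result is markedly closer to e, reflecting its O(h²) global error versus O(h) for forward Euler.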
4th order Runge–Kutta:
y_{n+1} = y_n + (k₁ + 2k₂ + 2k₃ + k₄)/6
k₁ = h·f(x_n, y_n)
k₂ = h·f(x_n + h/2, y_n + k₁/2)
k₃ = h·f(x_n + h/2, y_n + k₂/2)
k₄ = h·f(x_n + h, y_n + k₃)
The weights inside the brackets are a weighted sum of 6.

Matrix Inversion
The inverted matrix A⁻¹ satisfies A·A⁻¹ = I, the identity matrix (ones on the diagonal, zeros elsewhere).
Can only use: new row = old row + multiple of another row.

Determinants
The determinant of an upper triangular matrix is simply the product of the diagonal elements.

Iterative Methods
The matrix needs to be diagonally dominant for these methods to converge, i.e. |a_{i,i}| > Σ over j ≠ i of |a_{i,j}|.
Gauss–Jacobi method: Ax = b ⇒ (D + R)x = b ⇒ Dx = b − Rx, where D is the diagonal of A and R is the remainder. Iterate
x^(n+1) = D⁻¹(b − R·x^(n))
from a chosen start value x^(0), tabulating x₁^(n+1), x₂^(n+1), x₃^(n+1), … per iteration n.
Gauss–Seidel method: improves on Gauss–Jacobi by using the already computed components of x^(n+1) within the (n+1) stage.

Ill-conditioned Matrices
- When |largest element in A| × |largest element in A⁻¹| >> size of matrix, n.
- Eigenvalues differ by 2 orders of magnitude or more.

Multistep Methods
Use (k + 1) past values of y to interpolate a polynomial, and then evaluate y_{n+1} from integration.
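The Gauss–Seidel iteration can be sketched as below; the 3×3 diagonally dominant system, its solution (2, 1, 3), and the function name are illustrative choices, not from the notes.

```python
# Sketch of Gauss-Seidel iteration from the notes: each x_i is updated
# in place, so later rows immediately use the newest values.

def gauss_seidel(A, b, x0, n_iter=50):
    n = len(b)
    x = list(x0)
    for _ in range(n_iter):
        for i in range(n):
            # sum of off-diagonal terms using the most recent x values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant: |a_ii| > sum of |a_ij| (j != i) on every row,
# so the iteration converges. Exact solution is x = [2, 1, 3].
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [9.0, 13.0, 20.0]

x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
print(x)
```

Replacing `x[j]` with a copy of the previous iterate inside the loop would turn this into the Gauss–Jacobi method, which converges more slowly for the same matrix.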
y′(x) = f(x, y) ⇒ y_{n+1} − y_n = ∫ from x_n to x_{n+1} of f(x, y) dx
Adams–Bashforth (predictor): fit a 2nd order polynomial through the 3 points (−2h, f_{n−2}), (−h, f_{n−1}), (0, f_n):
ȳ_{n+1} = y_n + (h/12)·(23f_n − 16f_{n−1} + 5f_{n−2})
f̄_{n+1} = f(x_{n+1}, ȳ_{n+1})
Adams–Moulton (corrector):
y_{n+1} = y_n + (h/24)·(9f̄_{n+1} + 19f_n − 5f_{n−1} + f_{n−2})

PDEs
For an unknown function c(x, y), the general second-order PDE is
A·∂²c/∂x² + B·∂²c/∂x∂y + C·∂²c/∂y² + F(∂c/∂x, ∂c/∂y, x, y, c) = 0
B² − 4AC < 0 : elliptic (ellipse); = 0 : parabolic (parabola); > 0 : hyperbolic (hyperbola)
Data on the boundary S = boundary condition:
- Dirichlet boundary condition: φ(x, y) given on S
- Neumann boundary condition: ∂φ/∂n given on S
- Mixed boundary condition: φ + ∂φ/∂n = 0 on S

Elliptic PDEs
Describe steady-state behaviour, e.g. the 2D Laplace equation:
∇²φ = ∂²φ/∂x² + ∂²φ/∂y² = 0
Compare with the general formula: A = 1, B = 0, C = 1, so it satisfies B² − 4AC < 0.
Let φ_{i,j} = φ(x_i, y_j) where x_i = hi and y_j = kj (spacings of a 2D grid), then use the centred difference approximation to get:
φ_{i,j} = k²/(2h² + 2k²) · (φ_{i−1,j} + φ_{i+1,j}) + h²/(2h² + 2k²) · (φ_{i,j−1} + φ_{i,j+1})
- Boundary conditions give the values of φ(x, 0), φ(0, y), φ(x, M), φ(N, y), so only internal points need to be solved.
- If the problem is symmetrical around y = x, then φ_{i,j} = φ_{j,i} (i.e. only one half of the grid needs to be solved).
- The linear system of equations for the unknown points can be solved using Gaussian elimination.

Parabolic PDEs
Heat equation in 1 dimension:
∂c/∂t = κ·∂²c/∂x² ⇒ A = 1, B = C = 0
Use a forward difference approximation on the time derivative and a centred difference on the spatial derivative:
(c_{i,j+1} − c_{i,j})/k = κ·(c_{i+1,j} − 2c_{i,j} + c_{i−1,j})/h²
⇒ c_{i,j+1} = r·c_{i+1,j} + (1 − 2r)·c_{i,j} + r·c_{i−1,j}
where c_{i,j} = c(x, t) at spatial position x_i and time t_j (= jk), and r = κk/h².
This suggests the solution at the next time point (j + 1) can be found once all values of i for time point j are known.
No-flux boundary condition: ∂c/∂x = 0 at x = hN:
∂c/∂x ≈ (c_{N+1,j} − c_{N−1,j})/(2h) = 0 ⇒ c_{N+1,j} = c_{N−1,j}
This means that all c_{i,j} are determined by values inside the boundaries.
The initial condition [e.g. c(x, 0) = sin(πx)] must satisfy the boundary conditions [e.g. c(0, t) = 0].
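The explicit heat-equation update c_{i,j+1} = r·c_{i+1,j} + (1 − 2r)·c_{i,j} + r·c_{i−1,j} can be sketched as follows; the grid sizes, the choice r = 0.4, and the initial condition c(x, 0) = sin(πx) with c(0, t) = c(1, t) = 0 are illustrative assumptions (the initial condition matches the example in the notes).

```python
# Sketch of the explicit (forward-time, centred-space) scheme for the
# 1D heat equation dc/dt = kappa * d2c/dx2 on 0 <= x <= 1.
import math

kappa = 1.0
n = 20                     # interior spatial points, so h = 1/(n+1)
h = 1.0 / (n + 1)
r = 0.4                    # must satisfy r <= 0.5 for stability
k = r * h * h / kappa      # time step implied by the chosen r

# initial condition on the full grid, including the two boundary points
c = [math.sin(math.pi * i * h) for i in range(n + 2)]

for _ in range(200):       # march 200 time steps
    new = c[:]
    for i in range(1, n + 1):
        new[i] = r * c[i + 1] + (1 - 2 * r) * c[i] + r * c[i - 1]
    c = new                # boundary values c[0] = c[-1] stay fixed at ~0

# exact solution is exp(-pi^2 * kappa * t) * sin(pi * x), so the peak
# amplitude should have decayed to roughly exp(-pi^2 * t_final)
t_final = 200 * k
print(max(c), t_final)
```

Rerunning with r > 0.5 (a larger time step) makes the computed values oscillate and blow up, which is exactly the stability limit quoted for this scheme.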
The scheme can be written as c_j = A·c_{j−1}, where the state vector at time jk is c_j = [c_{1,j}, c_{2,j}, …, c_{n,j}]ᵗ and A is an n×n matrix with (1 − 2r) on the diagonal and r on the off-diagonals; this is only valid for r ≤ 0.5.
Crank–Nicolson method: A·c_{j+1} = B·c_j + b_j, where b_j is the boundary data vector, A has (2 + 2r) on the diagonal and −r on the off-diagonals, and B has (2 − 2r) on the diagonal and r on the off-diagonals.

Hyperbolic PDEs
Wave equation:
∂²φ/∂t² − c²·∂²φ/∂x² = 0 ⇒ A = 1, B = 0, C < 0
Characteristics: φ = φ(u, v) where u = x + ct and v = x − ct.
Approximate φ_{i,j} using the centred difference approximation at time jk and position ih:
(φ_{i,j+1} − 2φ_{i,j} + φ_{i,j−1})/k² = c²·(φ_{i+1,j} − 2φ_{i,j} + φ_{i−1,j})/h²
⇒ φ_{i,j+1} = 2φ_{i,j} − φ_{i,j−1} + p²·(φ_{i+1,j} − 2φ_{i,j} + φ_{i−1,j}), where p = ck/h
For the first step t = k (j = 1), use a Taylor expansion:
φ_{i,1} ≈ φ(ih, k) = φ(ih, 0) + k·∂φ(ih, 0)/∂t + (k²/2)·∂²φ(ih, 0)/∂t²
where the first 2 terms are found from the initial conditions and the 3rd term from the wave equation. Valid for kc ≤ h.
[Figure: 2D finite-difference grid with boundary values φ(x, 0), φ(0, y), φ(x, M), φ(N, y) on the edges and the five-point stencil φ_{i−1,j}, φ_{i+1,j}, φ_{i,j−1}, φ_{i,j+1} around φ_{i,j}, with spacings h and k.]
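The explicit wave-equation scheme, including the Taylor-expansion first step, can be sketched as below; the grid, the Courant number p = 0.5, and the standing-wave initial condition φ(x, 0) = sin(πx), ∂φ/∂t(x, 0) = 0 with fixed ends are illustrative assumptions, not from the notes.

```python
# Sketch of the explicit centred-difference scheme for the 1D wave
# equation phi_tt = c^2 * phi_xx on 0 <= x <= 1 with fixed ends.
import math

c_speed = 1.0
n = 50                      # interior points, so h = 1/(n+1) = 1/51
h = 1.0 / (n + 1)
p = 0.5                     # Courant number p = c*k/h, must be <= 1
k = p * h / c_speed         # k = 1/102

x = [i * h for i in range(n + 2)]
phi_old = [math.sin(math.pi * xi) for xi in x]   # phi at t = 0

# First step (j = 1) from the Taylor expansion: phi_t(x, 0) = 0 removes
# the k term, and the wave equation supplies the k^2/2 term as a
# spatial centred difference.
phi = phi_old[:]
for i in range(1, n + 1):
    phi[i] = phi_old[i] + 0.5 * p**2 * (
        phi_old[i + 1] - 2 * phi_old[i] + phi_old[i - 1])

for _ in range(203):        # 204 steps in total => t = 204*k = 2.0
    phi_new = phi[:]
    for i in range(1, n + 1):
        phi_new[i] = (2 * phi[i] - phi_old[i]
                      + p**2 * (phi[i + 1] - 2 * phi[i] + phi[i - 1]))
    phi_old, phi = phi, phi_new

# exact solution sin(pi*x)*cos(pi*c*t) has period 2, so at t = 2 the
# wave should have returned to its initial shape sin(pi*x)
err = max(abs(phi[i] - math.sin(math.pi * x[i])) for i in range(n + 2))
print(err)
```

Because p ≤ 1 satisfies the kc ≤ h condition, the amplitude stays bounded and the only error after a full period is a small phase lag from numerical dispersion.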