Chapter 10

Stability of Runge-Kutta Methods

Main concepts: stability of equilibrium points, stability of maps, Runge-Kutta stability function, stability domain.

In the previous chapter we studied equilibrium points and their discrete counterpart, fixed points. A lot can be said about the qualitative behavior of dynamical systems by looking at the local solution behavior in the neighborhood of equilibrium points. In this chapter we study stability of these points and the related stability of fixed points of Runge-Kutta methods.

10.1 Stability of Equilibrium Points

We now define the stability of an equilibrium point. In general, stability concerns the behavior of solutions near an equilibrium point in the long term. Given an autonomous system of differential equations (1.13), denote the solution of the system with the initial condition $y(0) = y_0$ by $y(t; y_0)$.

(1) The equilibrium point $y^*$ is stable in the sense of Lyapunov (or just stable) for the given differential equation if $\forall \epsilon > 0$, $\exists \delta > 0$, $T > 0$ such that, if $\|y_0 - y^*\| < \delta$, then $\|y(t; y_0) - y^*\| < \epsilon$ for $t > T$.

(2) The equilibrium point $y^*$ is asymptotically stable if there exists $\gamma > 0$ such that, for any initial condition $y_0$ with $\|y_0 - y^*\| < \gamma$,
\[
\lim_{t \to \infty} \|y(t; y_0) - y^*\| = 0.
\]

(3) The equilibrium point $y^*$ is unstable if it is not stable (in the sense of Lyapunov).

Note: any asymptotically stable equilibrium point is also stable.

The simplest way to illustrate the types of situations that can arise is to consider the linear systems
\[
\frac{dy}{dt} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} y, \qquad
\frac{dy}{dt} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} y, \qquad
\frac{dy}{dt} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} y.
\]
The origin is the only equilibrium point in each case. For the first system it is asymptotically stable (and also stable), for the second it is unstable, and for the third it is stable but not asymptotically stable. See Figure 10.1.

Figure 10.1: Some phase diagrams for linear systems in $\mathbb{R}^2$.

One sometimes hears it said in engineering or science that a certain system is stable. This use of the term may be the same as that used above, with the equilibrium point of interest left understood, or it may be a different usage of the term entirely. (Some caution is urged.)

There are a number of methods for analyzing stability in general. One of these is the Poincaré-Lyapunov Theorem, which is based on looking at the eigenvalues of the linear part of the problem. We state a simplified version here, without proof; it is also sometimes called the Linearization Theorem.

Theorem 10.1.1 (Linearization Theorem) Consider the equation in $\mathbb{R}^d$:
\[
\frac{dy}{dt} = Ay + F(y) \tag{10.1}
\]
subject to the initial condition $y(0) = y_0$. Assume $A$ is a constant $d \times d$ matrix whose eigenvalues all have negative real parts, and the function $F$ is $C^1$ in a neighborhood of $y = 0$ with $F(0) = F'(0) = 0$, where $F'(y)$ is the Jacobian matrix of $F$. Then $y = 0$ is an asymptotically stable equilibrium point. If $A$ has any eigenvalues with positive real part, then $y = 0$ is an unstable equilibrium point.
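The eigenvalue criterion of Theorem 10.1.1 can be checked numerically for the three linear examples above. The following Python sketch (ours, not part of the chapter; the helper name classify_origin is hypothetical, and NumPy is assumed) classifies the origin of $dy/dt = Ay$ from the real parts of the eigenvalues of $A$:

```python
# Minimal sketch of the eigenvalue test behind Theorem 10.1.1 for dy/dt = A y.
import numpy as np

def classify_origin(A, tol=1e-12):
    """Classify y = 0 for dy/dt = A y from the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    # Eigenvalues on the imaginary axis: Theorem 10.1.1 gives no verdict in general.
    # For the third example below (a centre) the origin is stable but not
    # asymptotically stable.
    return "marginal: eigenvalue test inconclusive"

examples = [
    np.array([[-1.0, 0.0], [0.0, -1.0]]),   # asymptotically stable
    np.array([[-1.0, 0.0], [0.0,  1.0]]),   # unstable (one positive eigenvalue)
    np.array([[ 0.0, 1.0], [-1.0, 0.0]]),   # eigenvalues +/- i
]

for A in examples:
    print(A.tolist(), "->", classify_origin(A))
```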
Although the conditions on $A$ and $F$ may at first appear restrictive, Theorem 10.1.1 is actually quite powerful. First, if we have an equilibrium point $y^*$ of a differential equation (1.13) which is not at the origin, we can always shift it to the origin by introducing a translation. Define $\tilde{y} = y - y^*$; then we have
\[
\frac{d\tilde{y}}{dt} = \frac{dy}{dt} = f(\tilde{y} + y^*) \equiv \tilde{f}(\tilde{y}),
\]
which is in the form (1.13) but has an equilibrium point at $\tilde{y} = 0$.

Second, consider an arbitrary differential equation (1.13) with a $C^2$ vector field and an equilibrium point at the origin. We may proceed as follows to use Theorem 10.1.1. First expand $f$ in a Taylor series about 0:
\[
f(y) = f(0) + f'(0)\,y + R(y).
\]
Because 0 is an equilibrium point, we have $f(0) = 0$, and we set $A = f'(0)$. Define $F(y) = R(y)$. We see that we can write our equation in the form
\[
\frac{dy}{dt} = Ay + F(y),
\]
and it is easy to verify that $F(0) = F'(0) = 0$. If the eigenvalues of $A$ have negative real parts, then we can conclude that the origin is asymptotically stable. Let us state an alternative form of the Linearization Theorem based on this:

Theorem 10.1.2 (Linearization Theorem II) Suppose that $f$ in (1.13) is $C^2$ and has an equilibrium point $y^*$. If the eigenvalues of $J = f'(y^*)$ all lie strictly in the left complex half-plane, then the equilibrium point $y^*$ is asymptotically stable. If $J$ has any eigenvalue in the right complex half-plane, then $y^*$ is an unstable point.
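To make Theorem 10.1.2 concrete, here is a minimal sketch (ours, not from the text) that approximates $J = f'(y^*)$ by finite differences and inspects the real parts of its eigenvalues. The names numerical_jacobian and classify_equilibrium, and the test vector field, are hypothetical illustrations; NumPy is assumed.

```python
# Sketch of the procedure behind Theorem 10.1.2: linearize f at an equilibrium
# y* and classify it from the eigenvalues of the (numerically estimated) Jacobian.
import numpy as np

def numerical_jacobian(f, y_star, eps=1e-6):
    """Forward-difference approximation of the Jacobian of f at y_star."""
    y_star = np.asarray(y_star, dtype=float)
    d = y_star.size
    J = np.zeros((d, d))
    f0 = np.asarray(f(y_star), dtype=float)
    for j in range(d):
        y = y_star.copy()
        y[j] += eps
        J[:, j] = (np.asarray(f(y), dtype=float) - f0) / eps
    return J

def classify_equilibrium(f, y_star, tol=1e-8):
    re = np.linalg.eigvals(numerical_jacobian(f, y_star)).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "inconclusive (eigenvalue near the imaginary axis)"

# Hypothetical test problem with an equilibrium at the origin:
# the Jacobian there is diag(-1, -2), so the origin is asymptotically stable.
f = lambda y: np.array([-y[0] + y[1]**2, -2.0 * y[1] + y[0] * y[1]])
print(classify_equilibrium(f, [0.0, 0.0]))
```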
10.2 Stability of Fixed Points of Maps

We have seen in the introduction two versions of stability for equilibrium points of dynamical systems. These concepts have natural analogues for fixed points of maps.

Definition 10.2.1 Consider a map $\Psi$ on $\mathbb{R}^d$ and a fixed point $y^*$: $y^* = \Psi(y^*)$. Define $y_n(y_0)$ as the iteration of the map $n$ times applied to $y_0$, so $y_1(y_0) = \Psi(y_0)$, $y_2(y_0) = \Psi(\Psi(y_0))$, etc. We say that

(1) $y^*$ is stable in the sense of Lyapunov (or just stable) if $\forall \epsilon > 0$, $\exists \delta > 0$, $N > 0$ such that, if $\|y_0 - y^*\| < \delta$, then $\|y_n(y_0) - y^*\| < \epsilon$ for $n > N$;

(2) $y^*$ is asymptotically stable if there exists $\gamma > 0$ such that, for any initial condition $y_0$ with $\|y_0 - y^*\| < \gamma$,
\[
\lim_{n \to \infty} \|y_n(y_0) - y^*\| = 0;
\]

(3) $y^*$ is unstable if it is not stable (in the sense of Lyapunov).

It is easy to see that these definitions agree with the previous definitions for continuous dynamics. That is, let us take $\Psi \equiv \Phi_t$, the time-$t$ exact flow map of the differential equation. Then, if an equilibrium point of the continuous dynamics is stable, it is also a stable fixed point for $\Psi$.

Next consider a more general map $\Psi$, not necessarily the flow map. What are the conditions for stability of a fixed point $y^*$? In other words: when does the sequence of iterates obtained by successively applying $\Psi$, starting from a point near $y^*$, eventually converge to $y^*$?

First consider a scalar linear iteration, $\Psi(y) = ay$, $a \in \mathbb{R}$, started from a point $y_0 \neq 0$. The iteration of $\Psi$ yields the sequence $y_n = a^n y_0$. Then we have (i) $|y_n| \to 0$ if $|a| < 1$, (ii) $|y_n| \equiv |y_0|$ if $|a| = 1$, and (iii) $|y_n| \to \infty$ if $|a| > 1$. The fixed point is at the origin, so we see that we have (i) asymptotic stability, (ii) stability, and (iii) instability.

Now consider a linear iteration in $\mathbb{R}^d$. Then we have $\Psi(y) = Ky$, $K \in \mathbb{R}^{d \times d}$. We consider the convergence to the fixed point at the origin of the sequence of points $y_0, y_1, \ldots$ generated by $y_n = K y_{n-1}$. We have $y_1 = K y_0$, $y_2 = K y_1 = K^2 y_0$, etc. The question concerns the norms of powers of $K$: $\|K^n y_0\|$. Let $\rho(K)$ denote the spectral radius of the matrix $K$, i.e. the radius of the smallest circle centered at the origin and enclosing all eigenvalues of $K$. We have a standard theorem on linear iterations to apply to this case:

Theorem 10.2.1 Let $z_n = \|K^n y_0\|$. Then (i) $z_n \to 0$ as $n \to \infty$ for every $y_0$ if and only if $\rho(K) < 1$. Moreover, (ii) $z_n \to \infty$ for some $y_0$ if $\rho(K) > 1$. Finally, (iii) if $\rho(K) = 1$ and the eigenvalues on the unit circle are semisimple, then $\{z_n\}$ remains bounded as $n \to \infty$.

When the eigenvectors of $K$ form a basis of $\mathbb{R}^d$, all parts are easy to prove by diagonalizing the matrix. When $K$ does not have a basis of eigenvectors, we use the Jordan canonical form for this purpose. Recall that an eigenvalue is semisimple if the dimension of its largest Jordan block is 1.

Finally we turn our attention to a more general iteration of the form
\[
y_n = \Psi(y_{n-1}), \tag{10.2}
\]
where $\Psi$ is a map on $\mathbb{R}^d$. Note that
\[
y^* = \Psi(y^*). \tag{10.3}
\]
Take the difference of (10.2) and (10.3) to obtain
\[
y_n - y^* = \Psi(y_{n-1}) - \Psi(y^*).
\]
Next, we use Taylor's Theorem for functions from $\mathbb{R}^d$ to $\mathbb{R}^d$ to obtain
\[
y_n - y^* = \Psi'(y^*)(y_{n-1} - y^*) + R(y_{n-1} - y^*).
\]
Here $\Psi'(y^*)$ refers to the Jacobian matrix of $\Psi$, and $R(\Delta)$ is a remainder term which goes to zero quadratically:
\[
R(\Delta) = O(\|\Delta\|^2).
\]
When $\Delta = y_{n-1} - y^*$ is small in norm, its squared norm is very small. It is natural to think of simply neglecting the remainder term $R(y_{n-1} - y^*)$, i.e.,
\[
y_n - y^* \approx \Psi'(y^*)(y_{n-1} - y^*),
\]
in which case the convergence would appear to be related to the convergence of a linear iteration. Whether we can neglect the remainder depends on the eigenvalues of $\Psi'$. It is possible to show the following theorem:

Theorem 10.2.2 Given a smooth ($C^2$) map $\Psi$, (i) the fixed point $y^*$ is asymptotically stable for the iteration $y_{n+1} = \Psi(y_n)$ if $\rho(\Psi'(y^*)) < 1$; (ii) the fixed point $y^*$ is unstable if $\rho(\Psi'(y^*)) > 1$.

The marginal case $\rho(\Psi') = 1$ is delicate and must be considered on a case-by-case basis.

10.3 Stability of Numerical Methods: Linear Case

For a linear system of ODEs $dy/dt = Ay$, where $A$ is a $d \times d$ matrix with a basis of eigenvectors, it can easily be shown that the general solution can be written in the compact form
\[
y(t) = \sum_{i=1}^{d} C_i e^{\lambda_i t} u_i,
\]
where $\lambda_1, \lambda_2, \ldots, \lambda_d$ are the eigenvalues, $u_1, u_2, \ldots, u_d$ are the corresponding eigenvectors, and $C_1, C_2, \ldots, C_d$ are coefficients. It is easy to see that this means that stability is determined by the eigenvalues. For example, if all the eigenvalues lie in the closed left half-plane, then the origin is stable in the sense of Lyapunov. Also, if all the eigenvalues have negative real part, then the origin is asymptotically stable.

A related statement can be shown to hold for many of the numerical methods in common use. For example, consider Euler's method applied to the linear problem $dy/dt = Ay$:
\[
y_{n+1} = y_n + hAy_n = (I + hA)\,y_n.
\]
If we let $y_n$ be expanded in the eigenbasis (say $u_1, u_2, \ldots, u_d$), we may write
\[
y_n = \alpha^{(n)}_1 u_1 + \alpha^{(n)}_2 u_2 + \cdots + \alpha^{(n)}_d u_d.
\]
If we now apply Euler's method, we find
\[
y_{n+1} = (I + hA)(\alpha^{(n)}_1 u_1 + \alpha^{(n)}_2 u_2 + \cdots + \alpha^{(n)}_d u_d)
        = \alpha^{(n)}_1 (I + hA) u_1 + \alpha^{(n)}_2 (I + hA) u_2 + \cdots + \alpha^{(n)}_d (I + hA) u_d.
\]
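Since each $u_i$ is an eigenvector of $A$, the expansion above shows that Euler's method multiplies each eigen-direction by $(I + hA)u_i = (1 + h\lambda_i)u_i$; by Theorem 10.2.1 the iterates therefore contract to the origin for every starting point exactly when $\rho(I + hA) < 1$. The following Python sketch (ours; the test matrix and step sizes are hypothetical, NumPy assumed) shows how the choice of $h$ decides this, even though the continuous problem is asymptotically stable in both runs:

```python
# Sketch: the Euler iteration y_{n+1} = (I + hA) y_n for dy/dt = A y
# contracts when rho(I + hA) < 1 and blows up when rho(I + hA) > 1.
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -20.0]])   # hypothetical test matrix, eigenvalues -1, -20
y0 = np.array([1.0, 1.0])

for h in (0.05, 0.15):                       # the second step size gives rho(I+hA) = 2 > 1
    M = np.eye(2) + h * A
    rho = max(abs(np.linalg.eigvals(M)))
    y = y0.copy()
    for _ in range(200):
        y = M @ y                            # one Euler step for the linear problem
    print(f"h = {h:.2f}  rho(I+hA) = {rho:.2f}  ||y_200|| = {np.linalg.norm(y):.3e}")
```

The quantity $1 + h\lambda$ appearing here is the Euler case of the Runge-Kutta stability function named in the chapter's main concepts.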