group structure from the corresponding Lie algebra. Thus, for a first order ODE system like (1), it is natural to attempt to construct the solution as a matrix exponential whose exponent is a matrix belonging to the associated Lie algebra. This peculiarity of the exponential mapping probably inspired the idea developed by Magnus [4], who proposed to express the exponent as an infinite matrix series, each term of which belongs to the Lie algebra corresponding to the Lie group of the unknown matricant. In detail, with reference to the differential problem defined in (1), the Magnus method assumes the matricant to be of exponential form, with the exponent given by the following series, called the Magnus expansion:

$$\mathbf{Y}(R)=\exp\big(\boldsymbol{\Omega}(R)\big)\,\mathbf{Y}(0),\qquad \boldsymbol{\Omega}(0)=\mathbf{0},\qquad \boldsymbol{\Omega}(R)=\sum_{k=1}^{\infty}\boldsymbol{\Omega}_{k}(R)\in\mathbb{R}^{6\times 6} \tag{3}$$

Here we do not report the analytical procedures for determining the terms of the series expansion in (3). We refer the reader to the seminal paper [5] where, in view of (1) and (3), and according to the notions of the first derivative of a matrix exponential map and of its inverse (see formulae (33) and (38) in [5]), it is clearly shown how all the terms of the Magnus series can be constructed. For our purposes, we report here only the first two terms of the series:

$$\boldsymbol{\Omega}_{1}(R)=\int_{0}^{R}\mathbf{A}(\xi_{1})\,d\xi_{1},\qquad \boldsymbol{\Omega}_{2}(R)=\frac{1}{2}\int_{0}^{R}d\xi_{1}\int_{0}^{\xi_{1}}\big[\mathbf{A}(\xi_{1}),\mathbf{A}(\xi_{2})\big]\,d\xi_{2} \tag{4}$$

where $[\,\cdot\,,\,\cdot\,]$ is the matrix commutator (2) and $\mathbf{A}$ is the governing matrix of (1). Notice that each of the remaining terms (not reported here) contains nested matrix commutators of the form (2) involving the matrix operator $\mathbf{A}(R)$.

It is worth noting that only if the commutativity condition

$$\mathbf{A}(R_{1})\,\mathbf{A}(R_{2})=\mathbf{A}(R_{2})\,\mathbf{A}(R_{1})\qquad \text{for each } R_{1},\,R_{2} \tag{5}$$

holds does the matricant of (1), in view of (3), (1) and (4), take the exponential form

$$\mathbf{Y}(R)=\exp\left(\int_{0}^{R}\mathbf{A}(\xi)\,d\xi\right) \tag{6}$$

which is analogous to the classical exponential solution of a scalar linear ODE. Clearly, (6) yields the solution of (1) if the entries of the matrix $\mathbf{A}$ governing (1) are independent of R. Obviously, (5) holds only in special cases, because matrices in general do not commute (this is closely related to the structure of the commutation law (2) which characterizes matrix Lie algebras).

A crucial aspect of the Magnus method, related to its suitability for seeking approximate solutions of differential problems like (1), emerges from the analysis of (3) and (4). By the above discussion, the matrix $\mathbf{A}(R)$ in (1) belongs to a matrix Lie algebra for all R; thus, in view of (4), every term of the Magnus expansion belongs to the same Lie algebra. This implies that any truncation of the Magnus series also belongs to that Lie algebra, and therefore the exponential map of any truncation necessarily stays in the corresponding Lie group. This is basically the main reason why an approximate solution obtained by truncating the Magnus expansion at any order preserves the qualitative features of the exact solution.

Another major issue concerns the conditions on the matrix $\mathbf{A}(R)$ which guarantee the convergence of the Magnus series. This problem has been studied in [12], where it is shown that a sufficient condition for the local convergence of the Magnus series in a certain interval $[0, R_{2})$ is given by

$$\int_{0}^{R_{2}}\big\|\mathbf{A}(R)\big\|_{2}\,dR<\pi \tag{7}$$

where the integrand in (7) is the spectral norm of $\mathbf{A}(R)$, i.e., the square root of the maximum eigenvalue of the product $\mathbf{A}^{T}(R)\,\mathbf{A}(R)$.
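To make the preceding formulas concrete, the following minimal Python sketch (not part of the original paper) approximates the matricant of a small illustrative system by truncating the Magnus series after the two terms in (4), checks the sufficient convergence condition (7), and compares the result with a standard Runge–Kutta integration of the matrix ODE. The matrix $\mathbf{A}(R)$ used here is an arbitrary non-commuting 2×2 example, not the 6×6 governing matrix of (1).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative, non-commuting governing matrix A(R); NOT the 6x6 matrix of Eq. (1).
def A(R):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.5 * R), 0.0]])

def commutator(X, Y):
    # Matrix commutator [X, Y] = XY - YX, cf. Eq. (2)
    return X @ Y - Y @ X

def magnus_two_term_exponent(A, R, n=400):
    """Trapezoidal-rule approximation of Omega_1(R) + Omega_2(R) in Eq. (4)."""
    xs = np.linspace(0.0, R, n + 1)
    h = xs[1] - xs[0]
    As = [A(x) for x in xs]
    dim = As[0].shape[0]
    O1 = np.zeros((dim, dim))   # Omega_1 = int_0^R A(xi1) dxi1
    O2 = np.zeros((dim, dim))   # Omega_2 = 1/2 int_0^R [A(xi1), int_0^xi1 A(xi2) dxi2] dxi1
    B = np.zeros((dim, dim))    # running inner integral int_0^xi1 A(xi2) dxi2
    for k in range(n):
        B_next = B + 0.5 * h * (As[k] + As[k + 1])
        O1 += 0.5 * h * (As[k] + As[k + 1])
        O2 += 0.25 * h * (commutator(As[k], B) + commutator(As[k + 1], B_next))
        B = B_next
    return O1 + O2

def rk4_matricant(A, R, n=4000):
    """Reference matricant of Y'(R) = A(R) Y(R), Y(0) = I, by classical RK4."""
    h = R / n
    Y = np.eye(A(0.0).shape[0])
    for k in range(n):
        x = k * h
        k1 = A(x) @ Y
        k2 = A(x + 0.5 * h) @ (Y + 0.5 * h * k1)
        k3 = A(x + 0.5 * h) @ (Y + 0.5 * h * k2)
        k4 = A(x + h) @ (Y + h * k3)
        Y = Y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return Y

if __name__ == "__main__":
    R_end = 1.0

    # Sufficient convergence check of Eq. (7): integral of the spectral norm of A over [0, R_end]
    xs = np.linspace(0.0, R_end, 401)
    norms = np.array([np.linalg.norm(A(x), 2) for x in xs])
    integral = float(np.sum(0.5 * (norms[1:] + norms[:-1]) * np.diff(xs)))
    print(f"int ||A||_2 dR = {integral:.4f}  (sufficient bound pi = {np.pi:.4f})")

    # Truncated Magnus matricant, Eqs. (3)-(4), vs. the RK4 reference
    Y_magnus = expm(magnus_two_term_exponent(A, R_end))
    Y_ref = rk4_matricant(A, R_end)
    print("two-term Magnus vs RK4, max abs difference:", np.max(np.abs(Y_magnus - Y_ref)))
```

For the illustrative data above, the integral in (7) stays below π, so the two-term truncation is expected to track the reference solution closely while, unlike the RK4 result, remaining by construction in the Lie group associated with $\mathbf{A}(R)$.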
Moreover, the fulfilment of inequality (7) determines the range of the independent variable R for which it is possible to write the solution in the form (3). However, in many applications the convergence condition (7) does not hold over the whole integration interval $[0, R_{2})$. In these cases, as usual for numerical