
N = 256, L = 5

λ     D = 2     D = 4     D = 6     D = 8
0      2560      2560      2560      2560
1      4096      6144      8192     10240
2      5120      9216     13312     17408
3      6272     12288     18560     24576
4      7360     15360     23680     31808
5      8416     18432     28736     38336

Table 8.4: The number of floating point operations $F_{\mathrm{CIRMUL}}$ shown for different values of $\lambda$ and $D$ ($N = 256$, $L = 5$).

Table 8.2 shows that $F_{\mathrm{CIRMUL}}$ depends linearly on $N$. Moreover, Tables 8.3 and 8.4 show that the computational work grows with the bandwidth $L$, the wavelet genus $D$, and the transform depth $\lambda$. Suppose that $L$ is given. Then we see that the matrix-vector multiplication is most efficient if we take no steps of the wavelet transform ($\lambda = 0$). Consequently, any work reduction must be sought in truncation of the vector $x$. This can be justified because $x$ will often be a 1D wavelet transform of the same depth ($\lambda$) as the matrix $H$. Therefore, depending on $\lambda$ and the underlying application, we expect many elements in $x$ to be close to zero, so we may be able to discard them in order to reduce the computational work. Consider Table 8.3. If we take $N = 256$, $L = 3$, $\lambda = 4$ as an example, the question boils down to whether such a truncation of $x$ can reduce the 14592 operations to fewer than the 1536 operations required for $\lambda = 0$ (no transform). Assuming that the work depends linearly on the number of non-zero elements in $x$, this means that $x$ must be reduced by a factor of 10 (at least) before any work reduction is obtained.
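To make the break-even arithmetic explicit, here is a minimal Python sketch; the operation counts are those quoted above from Table 8.3, and the linear-work assumption is the one just stated.

```python
# Break-even point for truncating x (N = 256, L = 3), assuming the work
# scales linearly with the number of non-zero elements in x.
ops_lambda4 = 14592   # lambda = 4 (Table 8.3)
ops_lambda0 = 1536    # lambda = 0, i.e. no transform (Table 8.3)

fraction = ops_lambda0 / ops_lambda4   # largest useful fraction of non-zeros
print(f"x must keep at most {fraction:.3f} of its elements,")
print(f"i.e. be reduced by a factor of at least {ops_lambda4 / ops_lambda0:.1f}")
```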

Summary

The CIRMUL algorithm was analyzed. It was found that the work needed to compute the matrix-vector multiplication is linear in $N$ and grows with the bandwidth ($L$), the wavelet genus ($D$), and the depth of the wavelet transform ($\lambda$). On the other hand, the work is reduced when small elements in $x$ are discarded.

Chapter 9

Examples of wavelet-based PDE solvers

We consider here the task of solving partial differential equations in one space dimension using wavelets. We assume that the boundary conditions are periodic and make use of the periodized wavelets described in Section 2.3 and the material developed in Chapters 7 and 8. Sections 9.1 and 9.2 treat linear problems and serve to set the stage for Sections 9.3 and 9.4, which deal with nonlinear problems.

We begin by considering a periodic boundary value problem because it is a simple and illustrative example.

9.1 A periodic boundary value problem

Consider the 1D Helmholtz equation

$$ -u''(x) + \varepsilon u(x) = f(x), \qquad u(x) = u(x+1), \qquad x \in \mathbb{R} \tag{9.1} $$

where $\varepsilon \in \mathbb{R}$ and $f(x) = f(x+1)$. We look for $1$-periodic solutions, so it suffices to consider $u(x)$ on the interval $0 \le x < 1$.

9.1.1 Representation with respect to scaling functions

We begin the discretization of (9.1) by replacing $u(x)$ with the approximation

$$ u_J(x) = \sum_{k=0}^{2^J-1} (c_u)_{J,k}\, \tilde{\phi}_{J,k}(x), \qquad J \in \mathbb{N}_0 \tag{9.2} $$

Following the approach that led to (7.9) we find that

$$ u_J''(x) = \sum_{k=0}^{2^J-1} (c_u^{(2)})_{J,k}\, \tilde{\phi}_{J,k}(x) \tag{9.3} $$

where $(c_u^{(2)})_{J,k}$ is given as in (7.12), i.e.

$$ (c_u^{(2)})_{J,k} = \left[ D^{(2)} c_u \right]_k = \sum_{n=2-D}^{D-2} (c_u)_{J,\langle n+k \rangle_{2^J}}\, 2^{Jd}\, \Gamma_n^d, \qquad k = 0,1,\ldots,2^J-1 $$

with $d = 2$ and $\Gamma_n^d$ defined in (7.1).
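As an illustration, the band sum above amounts to a periodic convolution and can be applied in a few lines. The sketch below is ours: the dictionary `gamma`, mapping offsets $n$ to the connection coefficients $\Gamma_n^2$ of (7.1), is assumed to be supplied, since their numerical values are not reproduced here.

```python
import numpy as np

def apply_d2(c_u, gamma, J):
    """Apply D^(2) to scaling coefficients (a sketch of (7.12) with d = 2):

        (c_u^(2))_{J,k} = 2^(2J) * sum_n gamma[n] * (c_u)_{J, <n+k> mod 2^J}

    gamma[n] stands for the connection coefficient Gamma_n^2, n = 2-D..D-2.
    """
    N = 1 << J
    out = np.zeros(N)
    for n, g in gamma.items():
        out += g * np.roll(c_u, -n)   # np.roll(c, -n)[k] == c[(k + n) % N]
    return 4.0**J * out               # the factor 2^(2J)
```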

We can use the Galerkin method to determine the coefficients $(c_u)_{J,k}$. Multiplying (9.1) by $\tilde{\phi}_{J,l}(x)$ and integrating over the unit interval yields the relation

$$ -\int_0^1 u_J''(x)\, \tilde{\phi}_{J,l}(x)\, dx + \varepsilon \int_0^1 u_J(x)\, \tilde{\phi}_{J,l}(x)\, dx = \int_0^1 f(x)\, \tilde{\phi}_{J,l}(x)\, dx $$

Using (9.2), (9.3), and the orthonormality of the periodized scaling functions (2.48) we get

$$ -(c_u^{(2)})_{J,l} + \varepsilon\, (c_u)_{J,l} = (c_f)_{J,l}, \qquad l = 0,1,\ldots,2^J-1 $$

where

$$ (c_f)_{J,l} = \int_0^1 f(x)\, \tilde{\phi}_{J,l}(x)\, dx \tag{9.4} $$

In vector notation this becomes

$$ -c_u^{(2)} + \varepsilon\, c_u = c_f \tag{9.5} $$

and using (7.13) we arrive at the linear system of equations

$$ A c_u = c_f \tag{9.6} $$

where

$$ A = -D^{(2)} + \varepsilon I \tag{9.7} $$

Alternatively, we can replace $D^{(2)}$ by $D^2$, where $D$ is given in (7.14), and obtain

$$ A c_u = c_f \tag{9.8} $$

where

$$ A = -D^2 + \varepsilon I \tag{9.9} $$

Equations (9.6) and (9.8) represent the scaling function discretizations of (9.1). Hence this method belongs to Class 1 described in Section 7.1. We observe that (9.6) has a unique solution if $\varepsilon$ does not belong to the set of eigenvalues of $D^{(2)}$. Similarly, (9.8) has a unique solution if $\varepsilon$ does not belong to the set of eigenvalues of $D^2$.
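A direct (dense) realization of (9.6)–(9.7) might look as follows. This is a sketch reusing `apply_d2` from the listing above, and it simply assumes that $\varepsilon$ is not an eigenvalue of $D^{(2)}$, so that the system is uniquely solvable.

```python
import numpy as np

def solve_helmholtz_scaling(c_f, gamma, J, eps):
    """Solve (-D^(2) + eps*I) c_u = c_f, cf. (9.6)-(9.7) (dense sketch)."""
    N = 1 << J
    D2 = np.zeros((N, N))
    for k in range(N):
        e = np.zeros(N)
        e[k] = 1.0
        D2[:, k] = apply_d2(e, gamma, J)   # k-th column of D^(2)
    A = -D2 + eps * np.eye(N)
    return np.linalg.solve(A, c_f)
```

Since $D^{(2)}$ has the periodic wrap-around structure, $A$ is circulant, so in practice the system could also be solved by FFT diagonalization; the dense solve above is only the most transparent statement of (9.6).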

                A = -D^(2) + εI              A = -D^2 + εI
J         D = 6      D = 8       D = 4      D = 6      D = 8
2          1.99       3.40        1.80       3.05       4.18
3          4.81       8.15        5.71       8.60      11.42
4          8.57      13.87        9.65      14.46      19.21
5         12.51      19.81       13.63      20.43      27.16
6         16.50      25.79       17.63      26.42      35.15
7         20.50      31.79       21.63      32.42      42.48
8         24.50      40.03       25.63      38.61      42.01
9         28.50      35.48       29.63      39.27      40.85

Table 9.1: The error $-\log_2(\|u - u_J\|_{J,\infty})$ shown for different values of $J$, $D$, and the choice of differentiation matrix. Convergence rates are seen from the differences between successive measurements. The value $D = 4$ is omitted in the case of (9.7) because the combination $D = 4$, $d = 2$ is invalid, as described in Remark 7.1.

Accuracy

Let $f(x) = (4\pi^2 + \varepsilon)\sin(2\pi x)$. The solution of (9.1) is then $u(x) = \sin(2\pi x)$.

Define

$$ \|u\|_\infty = \max_{0 \le x < 1} |u(x)| \tag{9.10} $$

$$ \|u\|_{J,\infty} = \max_{k=0,1,\ldots,2^J-1} \left| u(k/2^J) \right| \tag{9.11} $$
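A direct transcription of the discrete norm (9.11) as a small helper (the function name is ours):

```python
import numpy as np

def norm_J_inf(u, J):
    """The discrete max-norm (9.11): max of |u(k/2^J)| over k = 0..2^J-1.
    Here u is a callable; the error ||u - u_J||_{J,inf} is obtained by
    passing the difference of the two functions."""
    grid = np.arange(1 << J) / (1 << J)
    return np.max(np.abs(u(grid)))

# Example: norm_J_inf(lambda x: np.sin(2 * np.pi * x), J=3) -> 1.0
```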

Table 9.1 shows how the error $\|u - u_J\|_{J,\infty}$ depends on $J$, $D$, and the choice of $D^{(2)}$ or $D^2$. Until the onset of rounding errors, the convergence rates obtained in Chapter 7 are recovered. More precisely,

$$ \|u - u_J\|_{J,\infty} = O\left(2^{-J(D-2)}\right) \quad \text{in the case of (9.7)} $$

$$ \|u - u_J\|_{J,\infty} = O\left(2^{-JD}\right) \quad \text{in the case of (9.9)} $$
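These rates can be read off Table 9.1 directly: successive differences of the $-\log_2$ errors estimate the exponent. A small sketch using the $D = 6$ column for $A = -D^{(2)} + \varepsilon I$ (values copied from the table):

```python
import numpy as np

# -log2(||u - u_J||_{J,inf}) for A = -D^(2) + eps*I, D = 6 (Table 9.1)
neg_log2_err = np.array([1.99, 4.81, 8.57, 12.51, 16.50, 20.50, 24.50, 28.50])

# Differences between successive rows; they settle at D - 2 = 4,
# consistent with the O(2^{-J(D-2)}) rate stated above.
print(np.diff(neg_log2_err))   # -> [2.82 3.76 3.94 3.99 4.   4.   4.  ]
```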

Table 9.2 shows how the error $\|u - u_J\|_\infty$ depends on $J$, $D$, and the choice of $D^{(2)}$ or $D^2$. In this case, we find

$$ \|u - u_J\|_\infty = O\left(2^{-JD/2}\right) \quad \text{in the case of (9.7)} $$

$$ \|u - u_J\|_\infty = O\left(2^{-JD/2}\right) \quad \text{in the case of (9.9)} $$

which agrees with the approximation properties described in (4.5).

                A = -D^(2) + εI              A = -D^2 + εI
J         D = 6      D = 8       D = 4      D = 6      D = 8
2          1.15       2.18        0.99       1.68       2.10
3          3.36       5.84        2.41       3.78       5.69
4          6.45       9.61        4.16       6.61       9.54
5          9.53      13.51        6.08       9.58      13.49
6         12.56      17.49        8.06      12.57      17.48
7         15.56      21.48       10.06      15.57      21.48
8         18.57      25.48       12.06      18.57      25.48
9         21.57      29.45       14.06      21.57      29.48

Table 9.2: The error $-\log_2(\|u - u_J\|_\infty)$ shown for different values of $J$, $D$, and the choice of differentiation matrix. Convergence rates are seen from the differences between successive measurements. The value $D = 4$ is omitted in the case of (9.7) because the combination $D = 4$, $d = 2$ is invalid, as described in Remark 7.1.

The high convergence rate observed in Table 9.1 means that the solution $u(x)$ is approximated to $D$th order in the norm $\|\cdot\|_{J,\infty}$ even though the subspace $\tilde{V}_J$ can only represent exactly polynomials up to degree $D/2 - 1$, as explained in Section 4.1. This phenomenon, known as superconvergence, is also encountered in the finite element method [WM85, p. 106], [AB84, p. 231].

9.1.2 Representation with respect to wavelets

Taking (9.6) as the point of departure and using the relations $d_u = W c_u$ and $d_f = W c_f$ yields

$$ A W^T d_u = W^T d_f $$

Let

$$ \bar{A} = W A W^T = W \left( -D^{(2)} + \varepsilon I \right) W^T = -\bar{D}^{(2)} + \varepsilon I $$

where $\bar{D}^{(2)}$ is defined according to (7.22). Then

$$ \bar{A}\, d_u = d_f \tag{9.12} $$

which is the wavelet discretization of (9.1). This method belongs to Class 2 described in Section 7.1. Hence, there is a potential for using wavelet compression to reduce the computational complexity of solving (9.12).
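For concreteness, the conjugation $\bar{A} = W A W^T$ can be carried out with any orthonormal FWT. The sketch below builds $W$ by brute force with PyWavelets; the choice of `db3` (genus $D = 6$) and the `periodization` mode (matching the periodized wavelets of Section 2.3) are our assumptions, not prescriptions from the text.

```python
import numpy as np
import pywt

def fwt_matrix(J, wavelet="db3"):
    """Assemble the 2^J x 2^J orthonormal FWT matrix W by transforming
    unit vectors (brute force; fine for illustration, not efficiency)."""
    N = 1 << J
    W = np.zeros((N, N))
    for k in range(N):
        e = np.zeros(N)
        e[k] = 1.0
        W[:, k] = np.concatenate(pywt.wavedec(e, wavelet, mode="periodization"))
    return W

# Wavelet discretization (9.12), given A and c_f from Section 9.1.1:
# W = fwt_matrix(J)
# d_u = np.linalg.solve(W @ A @ W.T, W @ c_f)   # A_bar d_u = d_f
# c_u = W.T @ d_u                               # back to scaling coefficients
```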

9.1.3 Representation with respect to physical space

Multiplying (9.5) by $T$ and using (3.17) yields

$$ -u^{(2)} + \varepsilon\, u = f $$

where

$$ f = T c_f = \left[ \left( P_{\tilde{V}_J} f \right)(x_l) \right]_{l=0}^{2^J-1} \tag{9.13} $$

From the relation $u^{(2)} = D^{(2)} u$ (7.15) we obtain the equation

$$ A u = f \tag{9.14} $$

where $A$ is given by (9.7).

An important variant of (9.14) is obtained by redefining $f$ as the vector

$$ f = \left\{ f(x_l) \right\}_{l=0}^{2^J-1} $$

This produces a collocation scheme for the solution of (9.1). Because this method is essentially based on scaling functions and does not use wavelet compression, it belongs to Class 1 described in Section 7.1.
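The collocation variant only changes the right-hand side: the vector $f$ holds point values on the dyadic grid. A minimal sketch, with the $f$ from the accuracy test in Section 9.1.1 standing in as an example:

```python
import numpy as np

# Collocation right-hand side for (9.14): point values of f at the
# grid points x_l = l / 2^J.
J, eps = 6, 1.0
x = np.arange(1 << J) / (1 << J)
f_vec = (4 * np.pi**2 + eps) * np.sin(2 * np.pi * x)
# u = np.linalg.solve(A, f_vec)   # with A = -D^(2) + eps*I as in (9.7)
```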

9.1.4 Hybrid representation

We mention a second possibility for discretizing (9.1) using wavelets. It is simple, and several authors follow this approach, e.g. [EOZ94, CP96, PW96]. It is essentially a combination of the approaches that led to (9.14) and (9.12), and we proceed as follows: multiplying (9.14) from the left by $W$ and using the identity $W^T W = I$ one obtains

$$ W A W^T W u = W f $$

Defining the wavelet transformed vectors as $\bar{f} = W f$ and $\bar{u} = W u$ then yields

$$ \bar{A}\, \bar{u} = \bar{f} \tag{9.15} $$

where $\bar{A}$ is the same as in (9.12).

This approach bypasses the scaling coefficient representation and relies on the fact that the FWT can be applied directly to function values of $f$. Indeed, using this approach, the differentiation matrix $A$ might as well be derived from finite differences, finite elements, or spectral methods.

From Section 4.3 we know that the elements in $\bar{u}$ will behave similarly to the true wavelet coefficients $d_u$. Therefore, wavelet compression is as likely in this case as with the pure wavelet representation (9.12). Hence this method belongs to Class 2 described in Section 7.1.
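An end-to-end sketch of the hybrid scheme (9.15) follows. As noted above, $A$ need not come from wavelets at all, so here a standard second-order periodic finite-difference matrix stands in for $-D^{(2)} + \varepsilon I$; that substitution, the grid size, and the reuse of `fwt_matrix` from the listing in Section 9.1.2 are our choices for illustration.

```python
import numpy as np

J, eps = 8, 1.0
N = 1 << J
x = np.arange(N) / N
f = (4 * np.pi**2 + eps) * np.sin(2 * np.pi * x)   # right-hand side of (9.1)

# -u'' + eps*u by central differences with periodic wrap-around.
h = 1.0 / N
i = np.arange(N)
A = np.zeros((N, N))
A[i, i] = 2.0 / h**2 + eps
A[i, (i + 1) % N] = -1.0 / h**2
A[i, (i - 1) % N] = -1.0 / h**2

W = fwt_matrix(J)                             # from the listing in 9.1.2
u_bar = np.linalg.solve(W @ A @ W.T, W @ f)   # A_bar u_bar = f_bar  (9.15)
u = W.T @ u_bar                               # back to physical space
print(np.max(np.abs(u - np.sin(2 * np.pi * x))))   # small O(h^2) error
```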
