Total Variation in Image Analysis (The Homo Erectus Stage?)

François Lauze

Department of Computer Science, University of Copenhagen

Hólar Summer School on Sparse Coding, August 2010

Outline

1 Motivation
  Origin and uses of Total Variation
  Denoising
  Tikhonov regularization
  1-D computation on step edges
2 Total Variation I
  First definition
  Rudin-Osher-Fatemi
  Inpainting/Denoising
3 Total Variation II
  Relaxing the derivative constraints
  Definition in action
  Using the new definition in denoising: Chambolle algorithm
  Image Simplification
4 Bibliography
5 The End

Origin and uses of Total Variation

In mathematics: the Plateau problem of minimal surfaces, i.e. surfaces of minimal area with a given boundary.

In image analysis: denoising, image reconstruction, segmentation...

A ubiquitous prior for many image processing tasks.

Denoising

Determine an unknown image from a noisy observation.


Methods

All methods are based on some statistical inference:

Fourier/Wavelets
Markov Random Fields
Variational and Partial Differential Equation methods
...

We focus on variational and PDE methods.

A simple corruption model

A digital image u of size N×M pixels is corrupted by Gaussian white noise of variance σ². Write the observed image as u0 = u + η, with

‖u − u0‖² = Σ_ij (u_ij − u0_ij)² = NMσ²   (noise variance = σ²),

Σ_ij u_ij = Σ_ij u0_ij   (zero-mean noise).

One could add a blur degradation, u0 = Ku + η, for instance, so as to have ‖Ku − u0‖² = NMσ².


Recovery

The problem: find u such that

‖u − u0‖² = NMσ²,   Σ_ij u_ij = Σ_ij u0_ij   (1)

is not well-posed: many solutions are possible.

In order to recover u, extra information is needed, e.g. in the form of a prior on u.

For images, smoothness priors are often used.

Let Ru be a digital gradient of u. Then find the smoothest u that satisfies constraints (1), "smoothest" meaning with the smallest

T(u) = ‖Ru‖ = √( Σ_ij |Ru|_ij² ).


Tikhonov regularization

It can be shown that this is equivalent to minimizing

E(u) = ‖Ku − u0‖² + λ‖Ru‖²

for a λ = λ(σ) (Wahba?).

The minimization of E(u) can be derived from a Maximum a Posteriori formulation:

argmax_u p(u|u0),   p(u|u0) = p(u0|u) p(u) / p(u0)

Rewriting in a continuous setting:

E(u) = ∫ (Ku − u0)² dx + λ ∫ |∇u|² dx
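As a quick sketch of that MAP step (my assumptions, not spelled out on the slides: i.i.d. Gaussian noise of variance σ² and a Gaussian smoothness prior p(u) ∝ exp(−‖Ru‖²/(2σ_p²)), with σ_p a hypothetical prior scale):

−log p(u|u0) = ‖Ku − u0‖²/(2σ²) + ‖Ru‖²/(2σ_p²) + const,

so maximizing the posterior amounts to minimizing ‖Ku − u0‖² + λ‖Ru‖² with λ = σ²/σ_p².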


How to solve?

The solution satisfies the Euler-Lagrange equation for E:

K*(Ku − u0) − λΔu = 0

(K* is the adjoint of K.)

A linear equation, easy to implement, and many fast solvers exist, but...
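To make "easy to implement" concrete: for pure denoising (K = Id) with periodic boundary conditions, the equation u − λΔu = u0 decouples in the Fourier domain and is solved in one pass. A minimal sketch, assuming a real-valued numpy image u0; the function name tikhonov_denoise and the periodic-boundary choice are mine, not from the slides:

    import numpy as np

    def tikhonov_denoise(u0, lam):
        # Solve u - lam * Laplacian(u) = u0 (K = Id) via FFT,
        # assuming periodic boundary conditions.
        N, M = u0.shape
        k = np.fft.fftfreq(N)[:, None]   # vertical frequencies
        l = np.fft.fftfreq(M)[None, :]   # horizontal frequencies
        # Fourier symbol of the 5-point discrete Laplacian (always <= 0)
        L = 2.0 * (np.cos(2 * np.pi * k) - 1.0) + 2.0 * (np.cos(2 * np.pi * l) - 1.0)
        return np.real(np.fft.ifft2(np.fft.fft2(u0) / (1.0 - lam * L)))

Larger λ gives stronger smoothing, which is exactly the blur visible in the example that follows.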


Tikhonov example

Denoising example, K = Id.

[Figure: original image and Tikhonov-denoised results for λ = 50 and λ = 500.]

Not good: images contain edges, but Tikhonov blurs them. Why?

The term ∫ (u − u0)² dx: not guilty!

Then it must be ∫ |∇u|² dx. Derivatives and step edges do not go too well together?


1-D computation on step edges


Set Ω = [−1, 1], a a real number and u the step-edge function

u(x) = 0 if x ≤ 0,   u(x) = a if x > 0.

Not differentiable at 0, but forget about it and try to compute

∫_{−1}^{1} |u′(x)|² dx.

Around 0, "approximate" u′(x) by (u(h) − u(−h)) / (2h),   h > 0 small.


With this finite difference approximation,

u′(x) ≈ a / (2h),   x ∈ [−h, h],

then

∫_{−1}^{1} |u′(x)|² dx = ∫_{−1}^{−h} |u′(x)|² dx + ∫_{−h}^{h} |u′(x)|² dx + ∫_{h}^{1} |u′(x)|² dx
                       = 0 + 2h × (a / (2h))² + 0
                       = a² / (2h) → ∞ as h → 0.

So a step-edge has "infinite energy". It cannot minimize Tikhonov.

What went "wrong": the square.


Replace the square in the previous computation by p > 0 and redo:

∫_{−1}^{1} |u′(x)|^p dx = ∫_{−1}^{−h} |u′(x)|^p dx + ∫_{−h}^{h} |u′(x)|^p dx + ∫_{h}^{1} |u′(x)|^p dx
                        = 0 + 2h × (a / (2h))^p + 0
                        = |a|^p (2h)^{1−p} < ∞ when p ≤ 1.

When p ≤ 1 this is finite! Edges can survive here!

Quite ugly when p < 1 (but not uninteresting). When p = 1, this is the Total Variation of u.
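A quick numerical check of the two computations above (a hypothetical snippet with a = 2 and shrinking h):

    a = 2.0
    for h in [0.1, 0.01, 0.001]:
        e2 = 2 * h * (a / (2 * h)) ** 2   # p = 2: equals a^2 / (2h), blows up
        e1 = 2 * h * (a / (2 * h)) ** 1   # p = 1: equals |a|, independent of h
        print(f"h={h:g}  p=2: {e2:g}  p=1: {e1:g}")

The p = 2 energy grows like a²/(2h) while the p = 1 energy stays at |a| = 2, matching the formulas on the slide.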

Total Variation I

First definition

Let u: Ω ⊂ Rⁿ → R. Define the total variation as

J(u) = ∫ |∇u| dx,   |∇u| = √( Σ_{i=1}^{n} u_{x_i}² ).

When J(u) is finite, one says that u has bounded variation, and the space of functions of bounded variation on Ω is denoted BV(Ω).


Expected: when minimizing J(u) with other constraints, edges are less penalized than with Tikhonov.

Indeed, edges are "naturally present" in bounded variation functions. In fact, functions of bounded variation can be decomposed into

1 smooth parts, ∇u well defined,

2 jump discontinuities (our edges),

3 something else (Cantor part) which can be nasty...

The functions that do not possess this nasty part form a subspace of BV(Ω) called SBV(Ω), the Special functions of Bounded Variation (used for instance when studying the Mumford-Shah functional).


Rudin-Osher-Fatemi


ROF Denoising

State the denoising problem as minimizing J(u) under the constraints

∫ u dx = ∫ u0 dx,   ∫ (u − u0)² dx = |Ω| σ²   (|Ω| = area/volume of Ω)

Solve via Lagrange multipliers.


TV-denoising

Chambolle-Lions: there exists λ such that the solution minimizes

E_TV(u) = (1/2) ∫ (Ku − u0)² dx + λ ∫ |∇u| dx

Euler-Lagrange equation:

K*(Ku − u0) − λ div( ∇u / |∇u| ) = 0.

The term div(∇u/|∇u|) is highly nonlinear. Problems arise especially when |∇u| = 0.

In fact (∇u/|∇u|)(x) is the unit normal of the level line of u at x, and div(∇u/|∇u|) is the (mean) curvature of the level line: not defined when the level line is singular or does not exist!


Acar-Vogel

Replace it by the regularized version

|∇u|_β = √( |∇u|² + β ),   β > 0.

Acar-Vogel show that, with J_β(u) = ∫ |∇u|_β dx,

lim_{β→0} J_β(u) = J(u).

Replace the energy by

E_0(u) = ∫ (Ku − u0)² dx + λ J_β(u)

Euler-Lagrange equation:

K*(Ku − u0) − λ div( ∇u / |∇u|_β ) = 0

The null-denominator problem disappears.


Example

Implementation by finite differences, fixed-point strategy, linearization.

[Figure: original noisy image and TV_β-denoised result, λ = 1.5, β = 10⁻⁴.]
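The slides use a fixed-point (lagged-diffusivity) linearization; as a simpler, hedged illustration of the same Euler-Lagrange equation with K = Id, here is a plain gradient-descent sketch, assuming u0 is a real-valued numpy image. The function names, the periodic finite differences and the conservative step size are my choices rather than the lecture's scheme, and for a β as small as above it needs many iterations, which is precisely why the linearized fixed-point strategy is preferred in practice:

    import numpy as np

    def grad(u):
        # forward differences, periodic boundaries
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        return ux, uy

    def div(px, py):
        # minus the adjoint of grad: backward differences, periodic boundaries
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def tv_beta_denoise(u0, lam=1.5, beta=1e-4, n_iter=500):
        # gradient descent on 0.5 * ||u - u0||^2 + lam * sum |grad u|_beta
        u = u0.astype(float)
        # crude step-size bound that keeps the explicit scheme stable
        tau = 1.0 / (1.0 + 8.0 * lam / np.sqrt(beta))
        for _ in range(n_iter):
            ux, uy = grad(u)
            norm = np.sqrt(ux ** 2 + uy ** 2 + beta)
            u = u - tau * ((u - u0) - lam * div(ux / norm, uy / norm))
        return u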


Inpainting/Denoising

Fill in u in the subset H ⊂ Ω where data is missing, and denoise the known data. Inpainting energy (Chan & Shen):

E_ITV(u) = (1/2) ∫_{Ω\H} (u − u0)² dx + λ ∫ |∇u| dx

Euler-Lagrange equation:

(u − u0)χ − λ div( ∇u / |∇u| ) = 0.

(χ(x) = 1 if x ∉ H, 0 otherwise.)

Very similar to denoising. Can use the same approximation/implementation.
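Concretely, only the data term changes: it is switched off on the hole H. A minimal variant of the denoising sketch above (the boolean mask known, marking pixels outside H, and all parameter values are illustrative assumptions):

    import numpy as np

    def tv_inpaint(u0, known, lam=0.1, beta=1e-4, n_iter=2000):
        # known: boolean array, True where u0 is observed (x not in H)
        u = u0.astype(float)
        chi = known.astype(float)
        tau = 1.0 / (1.0 + 8.0 * lam / np.sqrt(beta))
        for _ in range(n_iter):
            ux = np.roll(u, -1, axis=1) - u              # forward differences
            uy = np.roll(u, -1, axis=0) - u
            norm = np.sqrt(ux ** 2 + uy ** 2 + beta)
            px, py = ux / norm, uy / norm
            divp = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u = u - tau * (chi * (u - u0) - lam * divp)  # data term only where known
        return u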


[Figure: degraded input image and TV-inpainted result.]

Segmentation

Inpainting-driven segmentation (Lauze, Nielsen 2008, IJCV)

[Figure: aortic calcification: detection and segmentation.]

Total Variation II

Relaxing the derivative constraints

With the definition of the total variation as

J(u) = ∫ |∇u| dx,

u must have (weak) derivatives.

But we just saw that the computation is possible for a step-edge u(x) = 0, x < 0, u(x) = a, x > 0:

∫_{−1}^{1} |u′(x)| dx = |a|

Can we avoid the use of derivatives of u?


Assume first that ∇u exists. Then

|∇u| = ∇u · ( ∇u / |∇u| )

(except when ∇u = 0), and ∇u/|∇u| is the normal to the level lines of u; it has norm 1 everywhere.

Let V be the set of vector fields v(x) on Ω with |v(x)| ≤ 1. I claim that

J(u) = sup_{v∈V} ∫ ∇u(x)·v(x) dx

(a consequence of the Cauchy-Schwarz inequality).


Restrict to the set W of such v's that are differentiable and vanish at ∂Ω, the boundary of Ω. Then

J(u) = sup_{v∈W} ∫ ∇u(x)·v(x) dx

But then I can use the divergence theorem: for H ⊂ D ⊂ Rⁿ, f: D → R a differentiable function, g = (g₁, …, gₙ): D → Rⁿ a differentiable vector field, and div g = Σ_{i=1}^{n} ∂g_i/∂x_i,

∫_H ∇f·g dx = −∫_H f div g dx + ∫_{∂H} f g·n(s) ds

with n(s) the exterior normal field to ∂H.

Apply it to J(u) above:

J(u) = sup_{v∈W} ( −∫ u(x) div v(x) dx )

The gradient has disappeared from u! This is the classical definition of the total variation.

Note that when ∇u(x) ≠ 0, the optimal v(x) = (∇u/|∇u|)(x) and div v(x) is the mean curvature of the level set of u at x. Geometry is there!


Definition in action

Step-edge

u the step-edge function defined in the previous slides. We compute J(u) with the new definition.

Here W = { φ: [−1, 1] → R differentiable, φ(−1) = φ(1) = 0, |φ(x)| ≤ 1 },

J(u) = sup_{φ∈W} ∫_{−1}^{1} u(x) φ′(x) dx

We compute

∫_{−1}^{1} u(x) φ′(x) dx = a ∫_{0}^{1} φ′(x) dx = a(φ(1) − φ(0)) = −a φ(0)

As −1 ≤ φ(0) ≤ 1, the maximum is |a|.
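A small numerical sanity check of this dual computation (hypothetical snippet; the test function φ(x) = |x| − 1 is just one admissible choice with φ(0) = −1):

    import numpy as np

    a = 2.0
    x = np.linspace(-1.0, 1.0, 2001)
    u = np.where(x > 0, a, 0.0)              # the step edge

    tv = np.abs(np.diff(u)).sum()            # discrete total variation: |a|

    phi = np.abs(x) - 1.0                    # phi(-1) = phi(1) = 0, |phi| <= 1
    dual = np.sum(u * np.gradient(phi, x)) * (x[1] - x[0])   # about -a * phi(0) = |a|

    print(tv, dual)                          # both close to 2.0 = |a|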


2D example

B an open set with regular boundary curve ∂B, Ω large enough to contain B, and χ_B the characteristic function of B:

χ_B(x) = 1 if x ∈ B,   0 if x ∉ B

For v ∈ W, by the divergence theorem on B and its boundary ∂B,

−∫ χ_B(x) div v(x) dx = −∫_B div v(x) dx = −∫_{∂B} v(s)·n(s) ds

(n(s) is the exterior normal to ∂B).

This integral is maximized when v = −n on ∂B: we get the length of ∂B, i.e. the perimeter of B.
