Figure 2.2: Underlying Markov chain of Z.

2.2 Continuous phase-type distributions

In analogy with the discrete case, continuous phase-type distributions (or simply phase-type distributions) are defined as the time until absorption in a continuous-time Markov chain.

Let {X_t}_{t≥0} be a continuous-time Markov chain on the state space E = {1, 2, . . . , m, m+1}, where we again take state m+1 as the absorbing state. The intensity matrix of this process is given by

\[
\Lambda = \begin{pmatrix} T & t \\ \mathbf{0} & 0 \end{pmatrix}.
\]

Here T is an m × m sub-intensity matrix, and since each row of an intensity matrix must sum to zero we have t = −Te. The initial distribution of this process will again be denoted by (α, α_{m+1}) with αe + α_{m+1} = 1.
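The block structure of Λ is easy to check numerically. Below is a minimal sketch in NumPy with a hypothetical 3-state sub-intensity matrix (the rates are illustrative, not from the text): the exit vector t = −Te is computed and the full intensity matrix is assembled, after which every row sums to zero.

```python
import numpy as np

# Hypothetical 3-state sub-intensity matrix T (rates chosen purely for illustration).
T = np.array([[-3.0,  2.0,  0.0],
              [ 0.0, -4.0,  3.0],
              [ 1.0,  0.0, -2.0]])

# Exit rate vector t = -T e, so every row of the full intensity matrix sums to zero.
e = np.ones(3)
t = -T @ e

# Assemble Lambda = [[T, t], [0, 0]]; the last row belongs to the absorbing state m+1.
Lam = np.block([[T, t.reshape(-1, 1)],
                [np.zeros((1, 4))]])

print(Lam.sum(axis=1))  # all rows sum to zero
```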

Definition 2.8 (Phase-type distribution) A random variable τ has a phase-type distribution if τ is the time until absorption in a continuous-time Markov chain,

\[
\tau := \inf\{t > 0 : X_t = m+1\}.
\]

We write τ ∼ PH(α, T) where α is the initial probability vector and T the sub-intensity matrix of the transient states of the Markov chain. The transition probability matrix for the Markov chain at time t is given by P_t = e^{Λt}. For the transition probabilities in the transient states of the Markov chain we have

\[
P(X_t = j,\, t \le \tau \mid X_0 = i) = \left(e^{Tt}\right)_{ij}.
\]

Lemma 2.9 The density of a phase-type random variable τ is given by

\[
f(t) = \alpha e^{Tt} t, \qquad t > 0, \tag{2.7}
\]

and f(0) = α_{m+1}.

Proof. We have f(t)dt = P(τ ∈ (t, t+dt]). By conditioning on the initial state of the Markov chain, i, and on the state at time t, j, we get

\begin{align*}
f(t)\,dt &= \sum_{i,j=1}^{m} P(\tau \in (t, t+dt] \mid X_t = j, X_0 = i)\, P(X_t = j \mid X_0 = i)\, P(X_0 = i) \\
         &= \sum_{i,j=1}^{m} P(\tau \in (t, t+dt] \mid X_t = j)\, (P_t)_{ij}\, \alpha_i.
\end{align*}

The probability of absorption in the time interval (t, t+dt] when X_t = j is given by t_j dt, where t_j is the j-th element of the exit vector t. Also, for all states i, j ∈ {1, . . . , m} we have (P_t)_{ij} = (e^{Tt})_{ij}, which leads to

\[
f(t)\,dt = \sum_{i,j=1}^{m} \alpha_i \left(e^{Tt}\right)_{ij} t_j\, dt = \alpha e^{Tt} t\, dt.
\]

As in the discrete case we have f(0) = α_{m+1}, which is the probability of starting in the absorbing state.
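The density formula (2.7) can be sanity-checked numerically: for a representation with αe = 1 (so there is no atom at zero), f should integrate to 1 over (0, ∞). The sketch below uses an illustrative representation and a hand-rolled scaling-and-squaring matrix exponential (in practice one would use scipy.linalg.expm).

```python
import numpy as np

def expm(A, taylor_terms=20):
    # Small scaling-and-squaring matrix exponential (illustration only;
    # in practice use scipy.linalg.expm).
    k = max(0, int(np.ceil(np.log2(max(1.0, np.abs(A).sum(axis=1).max())))) + 1)
    B = A / 2.0**k
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, taylor_terms):
        term = term @ B / n
        out = out + term
    for _ in range(k):
        out = out @ out
    return out

# Hypothetical representation with alpha e = 1, so f has no atom at zero.
alpha = np.array([0.5, 0.5, 0.0])
T = np.array([[-2.0,  1.0,  0.0],
              [ 0.0, -3.0,  2.0],
              [ 0.0,  0.0, -1.0]])
t = -T @ np.ones(3)

def f(x):
    return float(alpha @ expm(T * x) @ t)   # f(x) = alpha e^{Tx} t

# Trapezoid-rule check: the density should integrate to about 1.
xs = np.linspace(0.0, 40.0, 4001)
vals = np.array([f(x) for x in xs])
mass = float(np.sum((vals[:-1] + vals[1:]) / 2) * (xs[1] - xs[0]))
print(abs(mass - 1.0) < 1e-3)
```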

The continuous analogue of the probability generating function is the Laplace transform. For a non-negative random variable X with density f(x) the Laplace transform is defined as

\[
L(s) = E\left[e^{-sX}\right] = \int_0^{\infty} e^{-sx} f(x)\, dx.
\]

Without proof we state the following:

Corollary 2.10 The distribution function of a phase-type random variable is given by

\[
F(t) = 1 - \alpha e^{Tt} e. \tag{2.8}
\]

The Laplace transform of a phase-type random variable X is given by

\[
L(s) = \alpha_{m+1} + \alpha (sI - T)^{-1} t. \tag{2.9}
\]

Note that, analogously to the discrete case, the Laplace transform of a phase-type distribution is a rational function in s.
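Formula (2.9) can be illustrated numerically. In particular, since (−T)^{-1}t = (−T)^{-1}(−Te) = e, we get L(0) = α_{m+1} + αe = 1, as it must be for any distribution. A minimal sketch with an illustrative two-state representation (α_{m+1} = 0 here):

```python
import numpy as np

# Hypothetical PH(alpha, T) representation with no atom at zero.
alpha = np.array([1.0, 0.0])
T = np.array([[-2.0,  1.0],
              [ 0.5, -1.5]])
t = -T @ np.ones(2)

def laplace(s):
    # L(s) = alpha_{m+1} + alpha (sI - T)^{-1} t, here with alpha_{m+1} = 0.
    return float(alpha @ np.linalg.solve(s * np.eye(2) - T, t))

print(laplace(0.0))            # ≈ 1.0, since (-T)^{-1} t = e
print(np.linalg.solve(-T, t))  # ≈ [1, 1]
```

As expected, L(s) is decreasing in s, since L(s) = E[e^{-sτ}] for a non-negative τ.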

An important property of continuous phase-type distributions that does not hold in the discrete case is that a continuous phase-type density is strictly positive on (0, ∞). This can be seen by rewriting the matrix e^{Tt} using uniformization. If we take a phase-type density f(t) = α e^{Tt} t with irreducible representation (α, T), we can write λ = −min_i(T_{ii}) and

\[
e^{Tt} = e^{-\lambda t} \sum_{n=0}^{\infty} \frac{(\lambda t)^n}{n!} K^n, \qquad K = I + \frac{1}{\lambda} T,
\]

where K is a non-negative matrix. If the matrix K^i is strictly positive from some i on (which is the case when the matrix T is irreducible), the matrix e^{Tt} will be strictly positive. If this is not the case, there will be states in the underlying Markov chain that cannot be reached from certain other states. However, the choice of α will make sure that all states can be reached with positive probability, as we have chosen the representation (α, T) to be irreducible (meaning we left out all the states that cannot be reached). Hence α e^{Tt} is a strictly positive row vector, and multiplying this vector with the non-negative, non-zero column vector t ensures that α e^{Tt} t > 0 for all t > 0.
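The uniformization identity can be made concrete: the truncated series e^{-λt} Σ_n ((λt)^n/n!) K^n should agree with a directly computed matrix exponential, and for an irreducible T all entries of e^{Tt} come out strictly positive. A sketch with an illustrative irreducible sub-intensity matrix (rates are hypothetical):

```python
import numpy as np

def expm(A, taylor_terms=20):
    # Reference matrix exponential via scaling and squaring (illustration only).
    k = max(0, int(np.ceil(np.log2(max(1.0, np.abs(A).sum(axis=1).max())))) + 1)
    B = A / 2.0**k
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, taylor_terms):
        term = term @ B / n
        out = out + term
    for _ in range(k):
        out = out @ out
    return out

# Hypothetical irreducible sub-intensity matrix.
T = np.array([[-3.0,  1.0,  2.0],
              [ 1.0, -2.0,  0.0],
              [ 0.0,  1.0, -1.0]])

lam = -T.diagonal().min()    # lambda = -min_i T_ii
K = np.eye(3) + T / lam      # non-negative by construction

# Uniformization: e^{Tt} = e^{-lam t} * sum_n ((lam t)^n / n!) K^n
x = 1.5
acc, term = np.eye(3), np.eye(3)
for n in range(1, 80):
    term = term @ K * (lam * x) / n
    acc = acc + term
unif = np.exp(-lam * x) * acc

print(np.allclose(unif, expm(T * x)))  # the two computations agree
print((unif > 0).all())                # e^{Tt} is strictly positive here
```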

Example 2.3 If we take a phase-type density with representation

\[
\alpha = (1, 0, 0), \qquad
T = \begin{pmatrix} -\lambda & \lambda & 0 \\ 0 & -\lambda & \lambda \\ 0 & 0 & -\lambda \end{pmatrix},
\]

then we have a continuous phase-type distribution where each phase is visited an exp(λ) distributed time before the process enters the next phase. Since there are three transient states this is the sum of three exponential distributions and hence an Erlang-3 distribution. The density of this distribution is given by

\[
f(t) = \alpha e^{Tt} t = \frac{\lambda^3 t^2}{2} e^{-\lambda t},
\]

which we recognize as the Erlang-3 density.
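The Erlang-3 example can be checked numerically by comparing α e^{Tt} t with the closed-form Erlang-3 density λ³t²e^{−λt}/2. The sketch below uses the standard Erlang-3 representation described in the example, with an arbitrary illustrative rate λ = 2:

```python
import numpy as np

def expm(A, taylor_terms=25):
    # Scaling-and-squaring matrix exponential (illustration only).
    k = max(0, int(np.ceil(np.log2(max(1.0, np.abs(A).sum(axis=1).max())))) + 1)
    B = A / 2.0**k
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, taylor_terms):
        term = term @ B / n
        out = out + term
    for _ in range(k):
        out = out @ out
    return out

lam = 2.0                              # illustrative rate
alpha = np.array([1.0, 0.0, 0.0])
T = np.array([[-lam,  lam,  0.0],
              [ 0.0, -lam,  lam],
              [ 0.0,  0.0, -lam]])
t = -T @ np.ones(3)                    # exit vector (0, 0, lam): absorption only from phase 3

x = 1.3
ph_density = float(alpha @ expm(T * x) @ t)
erlang3 = lam**3 * x**2 * np.exp(-lam * x) / 2   # closed-form Erlang-3 density

print(abs(ph_density - erlang3) < 1e-10)  # the two agree
```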

Chapter 3

Matrix-exponential distributions

This chapter is about matrix-exponential distributions. They are distributions with a density of the same form as continuous phase-type distributions, f(t) = α e^{Tt} t, but they do not necessarily possess the probabilistic interpretation as the time until absorption in a continuous-time finite-state Markov chain.

In this chapter we explore some identities and give some examples of matrix-exponential distributions. It serves as a preparation for the next chapter, in which we will study the discrete analogue of matrix-exponential distributions.

A thorough introduction to matrix-exponential distributions is given by Asmussen and O'Cinneide in [4].

In Section 3.1 the definition of a matrix-exponential distribution is given, and the equivalent definition as distributions with a rational Laplace transform is addressed. In Section 3.2 we give three examples of matrix-exponential distributions and explain their relationship to phase-type distributions.

3.1 Definition

Definition 3.1 (Matrix-exponential distribution) A random variable X has a matrix-exponential distribution if the density of X is of the form

\[
f(t) = \alpha e^{Tt} t, \qquad t \ge 0.
\]

Here α and t are a row and column vector of length m and T is an m × m matrix, all with entries in ℂ.

If we denote the class of matrix-exponential distributions by ME and the class of phase-type distributions by PH, we immediately have the relation PH ⊆ ME.

In the next section we will show that in fact PH ⊊ ME. The matrix-exponential distributions generalize phase-type distributions in the sense that they do not need to possess the probabilistic interpretation as the distribution of the time until absorption in a continuous-time Markov chain. This means that the vector α and matrix T are not necessarily an initial probability vector and sub-intensity matrix, which allows them to have entries in ℂ. As T is no longer a sub-intensity matrix, the relation t = −Te does not necessarily hold. Hence the vector t becomes a parameter, and we write X ∼ ME(α, T, t) for a matrix-exponential random variable X.
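To see that an ME triple really is less constrained than a PH representation, the following sketch evaluates a classical matrix-exponential density of the form f(x) = c e^{−x}(1 − cos 2πx). Its zeros at x = 1, 2, . . . rule out any phase-type representation, since a phase-type density is strictly positive on (0, ∞). The particular triple (α, T, t) below, with t ≠ −Te and a negative entry in α, is one illustrative representation built from the rotation structure of the cosine term.

```python
import numpy as np

w = 2 * np.pi
c = (1 + w**2) / w**2             # normalizing constant so f integrates to 1
alpha = np.array([c, -c, 0.0])    # not a probability vector
T = np.array([[-1.0,  0.0,  0.0],
              [ 0.0, -1.0,    w],
              [ 0.0,   -w, -1.0]])
t = np.array([1.0, 1.0, 0.0])     # note: t != -T e here

def expm(A, taylor_terms=30):
    # Scaling-and-squaring matrix exponential (illustration only).
    k = max(0, int(np.ceil(np.log2(max(1.0, np.abs(A).sum(axis=1).max())))) + 1)
    B = A / 2.0**k
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, taylor_terms):
        term = term @ B / n
        out = out + term
    for _ in range(k):
        out = out @ out
    return out

def f(x):
    return float(alpha @ expm(T * x) @ t)   # equals c e^{-x} (1 - cos(2 pi x))

print(abs(f(1.0)) < 1e-9)   # the density vanishes at x = 1, so it is not phase-type
print(f(0.5) > 0)
```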

There is an equivalent definition of the class of matrix-exponential distributions, which says that a random variable X has a matrix-exponential distribution if the Laplace transform L(s) of X is a rational function in s. The connection between the rational Laplace transform and the density of a matrix-exponential distribution is made explicit in the following proposition, which is taken from Asmussen & Bladt [3].

Proposition 3.2 The Laplace transform of a matrix-exponential distribution can be written as

\[
L(s) = \frac{b_1 + b_2 s + b_3 s^2 + \dots + b_n s^{n-1}}{s^n + a_1 s^{n-1} + \dots + a_{n-1} s + a_n}, \tag{3.1}
\]

for some n ≥ 1 and some constants a_1, . . . , a_n, b_1, . . . , b_n. From L(0) = 1 it follows that we have a_n = b_1. The distribution has the following representation

f(t) = α e^{Tt} t