
On the relation between matrix-geometric and discrete phase-type distributions

Sietske Greeuw

Kongens Lyngby / Amsterdam, 2009
Master thesis in Mathematics

University of Amsterdam

Technical University of Denmark
Informatics and Mathematical Modelling
Building 321, DK-2800 Kongens Lyngby, Denmark
Phone +45 45253351, Fax +45 45882673
reception@imm.dtu.dk, www.imm.dtu.dk

University of Amsterdam
Faculty of Science
Science Park 404, 1098 XH Amsterdam, The Netherlands
Phone +31 20 5257678, Fax +31 20 5257675
info-science@uva.nl, www.science.uva.nl


Summary

A discrete phase-type distribution describes the time until absorption in a discrete-time Markov chain with a finite number of transient states and one absorbing state. The density f(n) of a discrete phase-type distribution can be expressed in terms of the initial probability vector α, the transition probability matrix T of the transient states of the Markov chain, and the vector t containing the probabilities of entering the absorbing state from the transient states:

$$f(n) = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N}.$$

If we take a probability density of the same form, but do not necessarily require α, T and t to have the probabilistic Markov-chain interpretation, we obtain the density of a matrix-geometric distribution. Matrix-geometric distributions can equivalently be defined as distributions on the non-negative integers that have a rational probability generating function.

In this thesis it is shown that the class of matrix-geometric distributions is strictly larger than the class of discrete phase-type distributions. We give an example of a set of matrix-geometric distributions that are not of discrete phase-type. We also show that there is a possible order reduction when representing a discrete phase-type distribution as a matrix-geometric distribution.

The results parallel the continuous case, where the class of matrix-exponential distributions is strictly larger than the class of continuous phase-type distributions, and where there is also a possible order reduction.

Keywords: discrete phase-type distributions, phase-type distributions, matrix-exponential distributions, matrix-geometric distributions.


Resumé

A discrete phase-type distribution is the distribution of the time until absorption in a discrete-time Markov chain with a finite number of transient states and one absorbing state. The density f(n) of a discrete phase-type distribution can be expressed in terms of the vector α containing the initial distribution, the matrix T describing the possible transitions between the transient states of the Markov chain, and the vector t containing the probabilities of jumping to the absorbing state from the transient states:

$$f(n) = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N}.$$

If we take a density of the same form, but do not necessarily require that α, T and t have the probabilistic Markov-chain interpretation, we obtain the density of a matrix-geometric distribution. Matrix-geometric distributions can equivalently be defined as distributions on the non-negative integers that have a rational probability generating function.

In this thesis it is shown that the class of matrix-geometric distributions is strictly larger than the class of discrete phase-type distributions. An example is given of a set of matrix-geometric distributions that are not of discrete phase-type. We also show that a reduction of the size of the representation is possible when a discrete phase-type distribution is represented as a matrix-geometric distribution.

The results correspond to the continuous case, where the class of matrix-exponential distributions is strictly larger than the class of continuous phase-type distributions, and where a reduction of the size of the representation is also possible.


Preface

This thesis was prepared in partial fulfillment of the requirements for acquiring the Master of Science degree in Mathematics. It was prepared during a five-month Erasmus stay at the Technical University of Denmark, at the department of Informatics and Mathematical Modelling.

I would like to thank my two supervisors, Bo Friis Nielsen and Michel Mandjes, for being very supportive of my decision to do my master project abroad.

Furthermore I want to thank Bo for his friendly and informative guidance, and Michel for his help with the final parts of my thesis.

Finally I want to thank my family and my friends and all other people who have supported me during this process.

Amsterdam, March 2009
Sietske Greeuw


Contents

Summary
Resumé
Preface
1 Introduction
2 Phase-type distributions
2.1 Discrete phase-type distributions
2.2 Continuous phase-type distributions
3 Matrix-exponential distributions
3.1 Definition
3.2 Examples of matrix-exponential distributions
4 Matrix-geometric distributions
4.1 Definition
4.2 Existence of genuine matrix-geometric distributions
4.3 Order reduction for matrix-geometric distributions
4.4 Properties of matrix-geometric distributions


Chapter 1

Introduction

In this thesis the relation between matrix-geometric and discrete phase-type distributions will be explored. Matrix-geometric distributions (MG) are distributions on the non-negative integers that possess a density of the form

$$f(n) = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N}, \qquad (1.1)$$

together with f(0) = ∆, the point mass at zero. The parameters (α, T, t) are a row vector, a matrix and a column vector respectively, that satisfy the necessary conditions for f(n) to be a density. Hence αT^{n−1}t ≥ 0 for all n ∈ ℕ and

$$\sum_{n=0}^{\infty} f(n) = \Delta + \alpha(I-T)^{-1}\mathbf{t} = 1.$$

The class MG is a generalization of the class of discrete phase-type distributions (DPH). A discrete phase-type distribution describes the time until absorption in a finite-state discrete-time Markov chain. A discrete phase-type density has the same form as (1.1) but requires the parameters to have the interpretation as initial probability vector (α), sub-transition probability matrix (T) and exit probability vector (t) of the underlying Markov chain.

The analogous setup in continuous time is given by matrix-exponential distributions (ME), which are a generalization of continuous phase-type distributions (PH). The latter describe the time until absorption in a continuous-time finite-state Markov chain.

The name phase-type refers to the states of the Markov chain. Each visit to a state can be seen as a phase with a geometric (in continuous time, exponential) distribution, and the resulting Markovian structure gives rise to a discrete (continuous) phase-type distribution. The method of phases was introduced by A.K. Erlang in the beginning of the 20th century and was later generalized by M.F. Neuts. A general probability distribution can be approximated arbitrarily closely by a phase-type distribution. This makes phase-type distributions a powerful tool in the modelling of e.g. queueing systems. Another nice feature of phase-type distributions is that they often give rise to closed-form solutions.

The main reference on discrete phase-type distributions is 'Probability distributions of phase type' by M.F. Neuts, published in 1975 [12]. This article gives a thorough introduction to discrete phase-type distributions, their main properties and their use in the theory of queues. In his 1981 book 'Matrix-geometric Solutions in Stochastic Models' [13], M.F. Neuts again gives an introduction to phase-type distributions, this time focussing on the continuous case. Another clear introduction to continuous phase-type distributions can be found in the book by G. Latouche and V. Ramaswami [10]. A good survey of the class ME is given in the entry on matrix-exponential distributions in the Encyclopedia of Statistical Sciences by Asmussen and O'Cinneide [4]. In this article the relation of the class ME to the class PH is discussed, some properties of ME are given and the use of these distributions in applied probability is explained. The authors mention the class of matrix-geometric distributions as the analogous discrete counterpart of ME. However, no explicit examples are given.

The main question addressed in this thesis is what the relation is between the classes DPH and MG. From their definition it is clear that

DPH ⊆ MG.

It is, however, not immediately clear whether there exist distributions that are in MG but do not have a discrete phase-type representation. The second question is on the order of the distribution. This order is defined as the minimal dimension of the matrix T in (1.1) needed to represent the distribution. It is of interest to know whether there exist discrete phase-type distributions that have a lower-order representation when they are represented as a matrix-geometric distribution. Such a reduced-order representation can be much more convenient to work with, for example in solving high-dimensional queueing systems.

In the continuous case it is known that PH ⊊ ME, that is, PH is a strict subset of ME [3, 4]. A standard example of a distribution that is in ME but not in PH is a distribution represented by a density which has a zero on (0,∞). Such a distribution cannot be in PH, as a phase-type density is strictly positive on (0,∞) [13]. There is no equivalent result in discrete time, as discrete phase-type densities can be zero with a certain periodicity. A second way of showing that ME is larger than PH is by using the characterization of phase-type distributions stated by O'Cinneide in [14], which is based on the absolute value of the poles of the Laplace transform.

In [15] O'Cinneide gives a lower bound on the order of a phase-type representation. He shows that there is possible order reduction when using a matrix-exponential representation rather than a phase-type representation.

We will use the characterization of discrete phase-type distributions by O'Cinneide [14] to give a set of distributions that is in MG but not in DPH.

We will also show with an example that order reduction by going from a DPH to an MG representation is possible. This result is based on discrete phase-type densities which have a certain periodicity.

The rest of the thesis is organized as follows: Section 2.1 gives an introduction to and some important properties of discrete phase-type distributions. In Section 2.2 the class of continuous phase-type distributions PH is shortly addressed. In Section 3.1 the class ME is introduced and in Section 3.2 some examples of matrix-exponential distributions are given, where the focus lies on their relation to the class PH. In Chapter 4 the answers to the questions stated in this introduction are given. In Section 4.1 we introduce matrix-geometric distributions and give an equivalent definition of these distributions based on the rationality of their probability generating function. In Section 4.2 we give an example that illustrates that MG is strictly larger than DPH, and in Section 4.3 we give an example of order reduction between these classes. For the sake of completeness, Section 4.4 is devoted to some properties of matrix-geometric distributions. They equal the properties of discrete phase-type distributions that are stated in Chapter 2, but the proofs now need to be given analytically, as the probabilistic Markov-chain interpretation is no longer available.


Chapter 2

Phase-type distributions

This chapter is about phase-type distributions. In Section 2.1 discrete phase-type distributions are introduced. They are the distribution of the time until absorption in a discrete-time finite-state Markov chain with one absorbing state.

Some properties of these distributions will be studied, the connection with the geometric distribution will be explained, and some closure properties will be examined.

In Section 2.2 we have a look at continuous phase-type distributions. As their development is analogous to the discrete case, we will not explore them as extensively; we will only state the most relevant results.

The presentation of phase-type distributions in this chapter is based on [12], [6] and [13]. In the following, I and e will denote the identity matrix and a vector of ones respectively, both of appropriate size. Row vectors will be denoted by boldface Greek letters, whereas column vectors will be denoted by boldface Latin letters. We define ℕ to be the set of positive integers, i.e. ℕ = {1, 2, 3, . . .}.

2.1 Discrete phase-type distributions

Let {X_n}_{n∈ℕ₀} be a discrete-time Markov chain on the state space E = {1, 2, . . . , m, m+1}. We let {1, 2, . . . , m} be the transient states of the Markov chain and m+1 be the absorbing state. The transition probability matrix of this Markov chain is given by

$$P = \begin{pmatrix} T & \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix}.$$

Here T is the m×m sub-transition probability matrix for the transient states, and t is the exit vector which gives the probability of absorption into state m+1 from any transient state. Since P is a transition probability matrix, each row of P must sum to 1, hence

$$T\mathbf{e} + \mathbf{t} = \mathbf{e}.$$

The probability of initiating the Markov chain in state i is denoted by α_i = P(X_0 = i). The initial probability vector of the Markov chain is then given by (α, α_{m+1}) = (α_1, α_2, . . . , α_m, α_{m+1}) and we have

$$\sum_{i=1}^{m+1} \alpha_i = 1.$$

Definition 2.1 (Discrete phase-type distribution) A random variable τ has a discrete phase-type distribution if τ is the time until absorption in a discrete-time Markov chain,

$$\tau := \min\{n \in \mathbb{N}_0 : X_n = m+1\}.$$

The name phase-type distribution refers to the states (phases) of the underlying Markov chain. If T is an m×m matrix we say that the density is of order m. The order of the discrete phase-type distribution is the minimal size of the matrix T needed to represent the density.

2.1.1 Density and distribution function

In order to find the density of τ we look at the probability that the Markov chain is in one of the transient states i ∈ {1, 2, . . . , m} after n steps,

$$p_i^{(n)} = P(X_n = i) = \sum_{k=1}^{m} \alpha_k (T^n)_{ki}.$$

We can collect these probabilities in a vector and get ρ^{(n)} = (p_1^{(n)}, p_2^{(n)}, . . . , p_m^{(n)}). Note that ρ^{(0)} = α.

Lemma 2.2 The density of a discrete phase-type random variable τ is given by

$$f_\tau(n) = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N}, \qquad (2.1)$$

and f_τ(0) = α_{m+1}.


Proof. The probability of absorption of the Markov chain at time n is given by the sum over the probabilities of the Markov chain being in one of the states {1, 2, . . . , m} at time n−1, multiplied by the probability that absorption takes place from that state. The state of the Markov chain at time n−1 depends on the initial state of the Markov chain and the (n−1)-step transition probability matrix T^{n−1}. Hence we get

$$f_\tau(n) = P(\tau = n) = \sum_{i=1}^{m} p_i^{(n-1)} t_i = \rho^{(n-1)}\mathbf{t} = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N}.$$

Note that f_τ(0) is the probability of absorption of the Markov chain in zero steps, which is given by α_{m+1}, the probability of initiating in the absorbing state.

The density of τ is completely defined by the initial probability vector α and the sub-transition probability matrix T, since t = (I−T)e. We write

$$\tau \sim \mathrm{DPH}(\alpha, T)$$

to denote that τ is of discrete phase type with parameters α and T. The density of a discrete phase-type distribution is said to have an atom at zero of size α_{m+1} if f_τ(0) = α_{m+1}.

A representation (α, T) for a phase-type distribution is called irreducible if every state of the Markov chain can be reached with positive probability when the initial distribution is given by α. We can always find such a representation by simply leaving out the states that cannot be reached.

From now on we will omit the subscript τ and simply write f(n) for the density of a discrete phase-type random variable. We have to check that f(n) = αT^{n−1}t is a well-defined density on the non-negative integers. Since α, T and t only have non-negative entries (as their entries are probabilities) we know that f(n) is non-negative for all n. The infinite sum of f(n) is given by:

$$\begin{aligned}
\sum_{n=0}^{\infty} f(n) &= f(0) + \sum_{n=1}^{\infty} \alpha T^{n-1}\mathbf{t} = \alpha_{m+1} + \alpha\left(\sum_{n=0}^{\infty} T^n\right)\mathbf{t} \\
&= \alpha_{m+1} + \alpha(I-T)^{-1}\mathbf{t} = \alpha_{m+1} + \alpha(I-T)^{-1}(I-T)\mathbf{e} \\
&= \alpha_{m+1} + \alpha\mathbf{e} = 1.
\end{aligned}$$


As T is a sub-stochastic matrix, all its eigenvalues are less than one in absolute value (see e.g. [2], Proposition I.6.3). Therefore the series $\sum_{n=0}^{\infty} T^n$ above converges to the matrix (I−T)^{-1} (see e.g. [16], Lemma B.1).

We denote by F(n) = P(τ ≤ n) the distribution function of τ. The distribution function can be deduced by the following probabilistic argument.

Lemma 2.3 The distribution function of a discrete phase-type variable is given by

$$F(n) = 1 - \alpha T^n \mathbf{e}. \qquad (2.2)$$

Proof. We look at the probability that absorption has not yet taken place and hence the Markov chain is in one of the transient states. We get

$$1 - F(n) = P(\tau > n) = \sum_{i=1}^{m} p_i^{(n)} = \rho^{(n)}\mathbf{e} = \alpha T^n \mathbf{e}.$$
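Both formulas are easy to evaluate numerically. The following is a minimal sketch, assuming an arbitrary two-phase representation chosen purely for illustration (it is not an example from the text); it checks that the partial sums of the density of Lemma 2.2 reproduce the distribution function of Lemma 2.3.

```python
import numpy as np

alpha = np.array([0.6, 0.4])            # initial probability vector (no atom at zero)
T = np.array([[0.3, 0.2],
              [0.1, 0.5]])              # sub-transition probability matrix
e = np.ones(2)
t = e - T @ e                           # exit vector, t = (I - T)e

def dph_pmf(n):
    """f(n) = alpha T^(n-1) t, n >= 1 (Lemma 2.2)."""
    return alpha @ np.linalg.matrix_power(T, n - 1) @ t

def dph_cdf(n):
    """F(n) = 1 - alpha T^n e (Lemma 2.3)."""
    return 1.0 - alpha @ np.linalg.matrix_power(T, n) @ e

# The cumulative sums of the density must equal the distribution function.
for n in range(1, 8):
    assert np.isclose(sum(dph_pmf(k) for k in range(1, n + 1)), dph_cdf(n))
print(dph_pmf(3), dph_cdf(3))
```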

2.1.2 Probability generating function and moments

For a discrete random variable X defined on the non-negative integers with density p_n = P(X = n), the probability generating function (pgf) is given by

$$H(z) = p_0 + zp_1 + z^2 p_2 + \cdots = \sum_{k=0}^{\infty} z^k p_k, \quad |z| \le 1.$$

Hence H(z) = E(z^X). Denote by H^{(i)}(z) the i-th derivative of H(z). We have

$$H(1) = \sum_{k=0}^{\infty} p_k = 1, \qquad H^{(1)}(1) = \sum_{k=0}^{\infty} k p_k = E(X),$$

and in general we get the k-th factorial moment, if it exists, by

$$H^{(k)}(1) = E(X(X-1)\cdots(X-(k-1))).$$

For a probability generating function, the following properties hold:


1. The probability generating function uniquely determines the probability density of X, since

$$H^{(k)}(0)/k! = p_k \quad \text{for } k = 0, 1, 2, \ldots.$$

It follows that if H_X(z) = H_Y(z) for all z with |z| ≤ 1, then f_X(n) = f_Y(n) for all n ∈ ℕ.

2. If X, Y are two independent random variables, then the pgf of the random variable Z = X + Y is given by

$$H_Z(z) = E\left(z^{X+Y}\right) = E\left(z^X z^Y\right) = E\left(z^X\right)E\left(z^Y\right) = H_X(z)H_Y(z).$$

3. If Z is with probability p equal to X and with probability (1−p) equal to Y, then

$$H_Z(z) = pH_X(z) + (1-p)H_Y(z).$$

Lemma 2.4 The probability generating function of a discrete phase-type random variable is given by

$$H(z) = \alpha_{m+1} + \alpha z(I - zT)^{-1}\mathbf{t}. \qquad (2.3)$$

Proof.

$$\begin{aligned}
H(z) = E\left(z^{\tau}\right) &= \sum_{n=0}^{\infty} z^n f(n) = f(0) + \sum_{n=1}^{\infty} z^n \alpha T^{n-1}\mathbf{t} \\
&= \alpha_{m+1} + \alpha z \sum_{n=1}^{\infty} (zT)^{n-1}\mathbf{t} = \alpha_{m+1} + \alpha z \sum_{n=0}^{\infty} (zT)^n \mathbf{t} \\
&= \alpha_{m+1} + \alpha z (I - zT)^{-1}\mathbf{t}.
\end{aligned}$$

Note that H(z) is a rational function (a ratio of two polynomials) in z. This is because all elements of (I − zT) are polynomials in z, and taking the inverse leads to ratios of those polynomials. The poles of a rational function are given by the roots of the denominator. Hence the poles of H(z) are the solutions to the equation det(I − zT) = 0. As the eigenvalues of the matrix T are found by solving det(T − zI) = 0, we see that the poles of the probability generating function are reciprocals of eigenvalues of T (there may be fewer poles than eigenvalues due to cancellation).

From the probability generating function we can obtain the expectation of a discrete phase-type random variable by differentiating H(z) and taking z = 1. We first need the following lemma:

Lemma 2.5 For |z| ≤ 1 and I and T the identity and sub-transition probability matrix, it holds that

$$\sum_{k=1}^{\infty} k(zT)^k = (I - zT)^{-2} - (I - zT)^{-1}. \qquad (2.4)$$

Proof. We have

$$\begin{aligned}
(I - zT)\sum_{k=1}^{\infty} k(zT)^k &= zT - (zT)^2 + 2(zT)^2 - 2(zT)^3 + 3(zT)^3 - \cdots \\
&= zT + (zT)^2 + (zT)^3 + \cdots = \sum_{k=0}^{\infty} (zT)^k - I \\
&= (I - zT)^{-1} - I,
\end{aligned}$$

from which the desired result follows.

Using Lemma 2.5 we can now calculate the first derivative of H(z) and find the expectation by substituting z = 1:

$$\begin{aligned}
H^{(1)}(z) &= \frac{d}{dz}\left(\alpha_{m+1} + \alpha z(I - zT)^{-1}\mathbf{t}\right) \\
&= \alpha(I - zT)^{-1}\mathbf{t} + \alpha z\,\frac{d}{dz}\left(\sum_{k=0}^{\infty} (zT)^k\right)\mathbf{t} \\
&= \alpha(I - zT)^{-1}\mathbf{t} + \alpha z \sum_{k=1}^{\infty} k z^{k-1} T^k\,\mathbf{t} \\
&= \alpha(I - zT)^{-1}\mathbf{t} + \alpha \sum_{k=1}^{\infty} k (zT)^k\,\mathbf{t} \\
&= \alpha(I - zT)^{-1}\mathbf{t} + \alpha\left((I - zT)^{-2} - (I - zT)^{-1}\right)\mathbf{t}.
\end{aligned}$$

Hence H^{(1)}(1) = αe + α(I−T)^{-1}e − αe = α(I−T)^{-1}e, where we used for the last equality again that t = (I−T)e.


Corollary 2.6 The expectation of a discrete phase-type random variable is given by

$$E(X) = \alpha(I - T)^{-1}\mathbf{e}. \qquad (2.5)$$

The factorial moments of a discrete phase-type random variable can be obtained by successive differentiation of the probability generating function and are given by:

$$E(X(X-1)\cdots(X-(k-1))) = H^{(k)}(1) = k!\,\alpha T^{k-1}(I-T)^{-k}\mathbf{e}.$$

The formulation in (2.5) of the mean of a discrete phase-type distribution suggests that the matrix (I−T)^{-1} contains the mean number of visits to each state of the Markov chain prior to absorption.

Lemma 2.7 Let U = (I−T)^{-1}. Then u_{ij} gives the mean number of visits to state j prior to absorption, given that the Markov chain initiated in state i.

Proof. Let B_j denote the number of visits to state j prior to absorption. Further, let E_i and P_i denote the expectation and the probability conditioned on the event X_0 = i, respectively. Then

$$E_i(B_j) = E_i\left(\sum_{n=0}^{\tau-1} 1\{X_n = j\}\right) = \sum_{n=0}^{\infty} P_i(X_n = j,\, \tau - 1 \ge n) = \sum_{n=0}^{\infty} P_i(X_n = j,\, \tau > n) = \sum_{n=0}^{\infty} (T^n)_{ij}.$$

Now we can write U = (E_i(B_j))_{ij} to get

$$U = \sum_{n=0}^{\infty} T^n = (I - T)^{-1}.$$
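As a numerical illustration of Corollary 2.6 and Lemma 2.7 (again with an arbitrary illustrative representation, not one used elsewhere in the text), the mean computed as α(I−T)^{-1}e can be compared with the mean computed directly from the density:

```python
import numpy as np

alpha = np.array([1.0, 0.0])
T = np.array([[0.4, 0.3],
              [0.2, 0.3]])
e = np.ones(2)
t = e - T @ e

U = np.linalg.inv(np.eye(2) - T)        # U = (I - T)^(-1): mean visits per state
mean = alpha @ U @ e                    # E(X) = alpha (I - T)^(-1) e

# Truncated direct computation of E(X) = sum_n n f(n); the tail is negligible.
direct = sum(n * (alpha @ np.linalg.matrix_power(T, n - 1) @ t)
             for n in range(1, 500))
assert np.isclose(mean, direct)
print(U)
print(mean)
```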

2.1.3 Geometric and negative binomial distribution

As a first example of a discrete phase-type distribution we have a look at the phase-type representation of the geometric distribution. Let the random variable X be of discrete phase-type with parameters α = 1 and T = (1−p) for 0 < p < 1. Then it is readily seen that the exit vector is given by t = p, and we can compute the density of X by

$$f(n) = \alpha T^{n-1}\mathbf{t} = (1-p)^{n-1}p.$$

Hence X is geometrically distributed with success probability p. The probability generating function is given by

$$H(z) = \alpha_{m+1} + \alpha z(I - zT)^{-1}\mathbf{t} = 1 \cdot z(1 - z(1-p))^{-1}p = \frac{zp}{1 - (1-p)z}.$$

We can generalize the previous example by considering the (m+1)×(m+1) transition probability matrix

$$P = \begin{pmatrix}
1-p & p & 0 & \cdots & 0 & 0 & 0 \\
0 & 1-p & p & \cdots & 0 & 0 & 0 \\
\vdots & & & \ddots & & & \vdots \\
0 & 0 & 0 & \cdots & 1-p & p & 0 \\
0 & 0 & 0 & \cdots & 0 & 1-p & p \\
0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}. \qquad (2.6)$$

If we take the initial probability vector (1, 0, . . . , 0) we have a Markov chain that runs through all the states and makes a transition to the following state according to a geom(p) distribution. The discrete phase-type distribution represented by the first m rows and columns of P is the sum of m geometric distributions, which is a negative binomial distribution. The density g(n) of a negative binomial distribution with success probability p and m successes is given by

$$g(n) = \binom{n+m-1}{m-1}(1-p)^n p^m$$

and has support on the non-negative integers. This density counts the number of failures n until m successes have occurred. The phase-type expression in (2.6) however counts all trials. Hence the density corresponding to this representation is given by:

$$f(n) = \binom{n-1}{m-1}(1-p)^{n-m}p^m,$$

which is zero on the first m−1 integers, as $\binom{n-1}{m-1}$ is zero for n < m. A further generalization is possible by taking the sum of m geometric distributions that each have a different success probability p_k for 1 ≤ k ≤ m. The resulting distribution is called the generalized negative binomial distribution. The corresponding transition probability matrix is given by

$$P = \begin{pmatrix}
1-p_1 & p_1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1-p_2 & p_2 & \cdots & 0 & 0 & 0 \\
\vdots & & & \ddots & & & \vdots \\
0 & 0 & 0 & \cdots & 1-p_{m-1} & p_{m-1} & 0 \\
0 & 0 & 0 & \cdots & 0 & 1-p_m & p_m \\
0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix},$$

and the initial probability vector is again given by (1, 0, . . . , 0).
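This representation is straightforward to check numerically. The sketch below takes equal success probabilities and compares the matrix form of f(n) with scipy's negative binomial density; scipy's nbinom counts failures rather than trials, so its argument is shifted by m.

```python
import numpy as np
from scipy.stats import nbinom

m, p = 3, 0.4
T = np.diag([1 - p] * m) + np.diag([p] * (m - 1), k=1)  # first m rows/columns of P
alpha = np.zeros(m)
alpha[0] = 1.0
t = np.ones(m) - T @ np.ones(m)          # exit vector: absorption only from phase m

for n in range(m, 15):
    f = alpha @ np.linalg.matrix_power(T, n - 1) @ t     # C(n-1, m-1)(1-p)^(n-m) p^m
    assert np.isclose(f, nbinom.pmf(n - m, m, p))
print("negative binomial representation verified")
```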

2.1.4 Non-uniqueness of representation

The representation of a discrete phase-type distribution is not unique. Consider the phase-type density f(n) with parameters α and T, where we take T to be irreducible. Since T is a sub-stochastic matrix and the matrix (I−T) is invertible, we know that T has at least one eigenvalue λ with 0 < λ < 1 and corresponding left eigenvector ν. We can normalize ν such that νe = 1. If we now take α = ν we get

$$f(n) = \nu T^{n-1}\mathbf{t} = \lambda^{n-1}\nu\mathbf{t} = \lambda^{n-1}\nu(I-T)\mathbf{e} = \lambda^{n-1}(\nu I\mathbf{e} - \nu T\mathbf{e}) = \lambda^{n-1}(1 - \lambda).$$

It follows that the geometric density with success probability (1−λ) can be represented as a discrete phase-type density both with parameters (ν, T) of dimension m, say, and with parameters (1, λ) of dimension 1.
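This collapse can be observed numerically; a minimal sketch, assuming an arbitrary irreducible sub-stochastic T: starting the chain in the normalized left Perron eigenvector ν turns the m-state density into a geometric one.

```python
import numpy as np

T = np.array([[0.5, 0.3],
              [0.2, 0.6]])
eigvals, vecs = np.linalg.eig(T.T)       # eigenvectors of T' = left eigenvectors of T
i = np.argmax(eigvals.real)              # Perron eigenvalue lam, 0 < lam < 1
lam = eigvals[i].real
nu = np.abs(vecs[:, i].real)
nu /= nu.sum()                           # normalize so that nu e = 1
t = np.ones(2) - T @ np.ones(2)

for n in range(1, 10):
    f = nu @ np.linalg.matrix_power(T, n - 1) @ t
    assert np.isclose(f, lam ** (n - 1) * (1 - lam))     # geometric with parameter 1 - lam
print("DPH(nu, T) coincides with geom(1 - lam), lam =", lam)
```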

2.1.5 Properties of discrete phase-type distributions

In this section we will look at some properties of the set of discrete phase-type distributions. We will give only probabilistic proofs of these properties. The analytic proofs can be found in Section 4.4 on properties of matrix-geometric distributions.

Property 2.1 Any probability density on a finite number of positive integers is of discrete phase-type.

Proof. Let {g(i)}_{i∈I} be a probability density on the positive integers with finite support, hence I ⊊ ℕ. Then we can make a Markov chain {X_n}_{n∈ℕ₀} on a state space of size |I| + 1 such that the probability of absorption after i steps equals exactly g(i). The discrete phase-type distribution with this underlying Markov chain, initiating in phase 1 with probability 1, will now give the desired density.

Example 2.1 As an example of this property, we have a look at the density g(n) with support on the set {1, 2, 3, 4} given by

$$g(1) = \frac{1}{3}, \quad g(2) = \frac{1}{6}, \quad g(3) = \frac{1}{12}, \quad g(4) = \frac{5}{12}.$$

We want to find a discrete phase-type representation (β, S) for this density. The underlying Markov chain {X_n}_{n∈ℕ₀} will consist of five states (labelled 1, 2, 3, 4, 5 and visited in that order), where the fifth state is the absorbing state, and hence S will be of order four. We take β = (1, 0, 0, 0) and denote by s the exit vector of the phase-type representation, which is given by (I−S)e.

The Markov chain initiates in state one with probability 1, P(X_0 = 1) = 1, and since the probability of absorption in one step should equal g(1) we get P(X_1 = 5) = g(1) = 1/3, and hence P(X_1 = 2) = 2/3. This gives s_1 = 1/3 and S_{1,2} = 2/3. From state 2 the probability of absorption must be g(2) = 1/6. We find P(X_1 = 2)P(X_2 = 5 | X_1 = 2) = (2/3)s_2 = 1/6, which leads to s_2 = 1/4 and hence S_{2,3} = 3/4. Continuing this way, we find that the transition probability matrix

$$P = \begin{pmatrix}
0 & 2/3 & 0 & 0 & 1/3 \\
0 & 0 & 3/4 & 0 & 1/4 \\
0 & 0 & 0 & 5/6 & 1/6 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

describes the underlying Markov process of the discrete phase-type density g(n), and leads to the representation

$$\beta = (1, 0, 0, 0) \quad \text{and} \quad S = \begin{pmatrix}
0 & 2/3 & 0 & 0 \\
0 & 0 & 3/4 & 0 \\
0 & 0 & 0 & 5/6 \\
0 & 0 & 0 & 0
\end{pmatrix}.$$

Property 2.1 is of limited use in practice, unless the set I is small. As a consequence of this first property, the discrete phase-type distributions are dense in the set of distributions on the non-negative integers. This means that for any probability distribution F on ℕ₀ there is a sequence F_n of discrete phase-type distributions that converges in distribution to F (this convergence is called weak convergence).

Property 2.2 The convolution of a finite number of discrete phase-type densities is itself of phase type.


Proof. We will prove this property for two phase-type distributions. Let X and Y be two independent phase-type random variables with distributions represented by (α, T) and (β, S) respectively. We let T be of order m and S of order k, and denote the exit vectors of the two densities by t and s respectively. Consider the transition probability matrix

$$P = \begin{pmatrix} T & \mathbf{t}\beta & \beta_{k+1}\mathbf{t} \\ 0 & S & \mathbf{s} \\ 0 & 0 & 1 \end{pmatrix}$$

and initial probability vector γ = (α, α_{m+1}β, α_{m+1}β_{k+1}). The matrix P describes a discrete-time Markov chain with m+k+1 phases, where the first m transient states are visited according to initial probability vector α and transition probability matrix T, and they are left with the probabilities given in the exit vector t. The next k states are entered with the initial probabilities given in β and visited according to the matrix S. Absorption from those states into state m+k+1 takes place with the exit probabilities given in s.

With probability α_{m+1} the chain is entered in state m+1, and with probability α_{m+1}β_{k+1} the chain is entered immediately in the absorbing state m+k+1. If we denote the upper left part of P by V,

$$V = \begin{pmatrix} T & \mathbf{t}\beta \\ 0 & S \end{pmatrix},$$

we see that (γ, V) is a representation for the discrete phase-type random variable Z = X + Y.

Property 2.3 Any finite mixture of probability densities of phase type is itself of phase type.

Proof. Let the vector (p_1, p_2, . . . , p_k) denote the mixing density. Let the densities f_i(n) be represented by the matrices T_i and initial probability vectors α_i. Then the mixture

$$f(n) = \sum_{i=1}^{k} p_i f_i(n)$$

can be represented by the transition probability matrix

$$P = \begin{pmatrix}
T_1 & 0 & 0 & \cdots & 0 & \mathbf{t}_1 \\
0 & T_2 & 0 & \cdots & 0 & \mathbf{t}_2 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & T_k & \mathbf{t}_k \\
0 & 0 & 0 & \cdots & 0 & 1
\end{pmatrix}$$

and initial probability vector β = (p_1α_1, . . . , p_kα_k, α_{m+1}).


For two matrices A and B of size k×m and p×q respectively, the Kronecker product A⊗B is given by

$$A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{k1}B & \cdots & a_{km}B \end{pmatrix}.$$

Hence A⊗B has size kp×mq. We can use the Kronecker product to express the maximum and minimum of two discrete phase-type variables.

Property 2.4 Let X and Y be discrete phase-type distributed with parameters (α, T) and (β, S) respectively. Then U = min(X, Y) is discrete phase-type distributed with parameters (α⊗β, T⊗S), and V = max(X, Y) is discrete phase-type distributed with parameters

$$\gamma = (\alpha \otimes \beta,\; \beta_{k+1}\alpha,\; \alpha_{m+1}\beta), \qquad L = \begin{pmatrix} T \otimes S & I \otimes \mathbf{s} & \mathbf{t} \otimes I \\ 0 & T & 0 \\ 0 & 0 & S \end{pmatrix}.$$

Proof. In order to prove that the minimum U is of discrete phase-type we have to find a Markov chain where the time until absorption has the same distribution as the minimum of X and Y. This minimum is the time until the first of the two underlying Markov chains of X and Y gets absorbed. We make a new Markov chain that keeps track of the state of both underlying Markov chains of X and Y. The state space of this new Markov chain consists of all combinations of states of X and Y. Hence, if X has order m and Y has order k, we have the new state space {(1,1), (1,2), . . . , (1,k), (2,1), (2,2), . . . , (2,k), . . . , (m,1), (m,2), . . . , (m,k)}.

The mk×mk transition probability matrix corresponding to this new Markov chain is given by T⊗S, since each entry T_{ij}S gives, for a transition in the Markov chain corresponding to (α, T), all possible transitions in the Markov chain corresponding to (β, S). The corresponding initial probability vector is given by α⊗β.

The maximum V is the time until both underlying Markov chains of X and Y get absorbed. This means that after one of the Markov chains gets absorbed, we need to keep track of the second Markov chain until it too gets absorbed. Hence we need a new Markov chain with mk+m+k states, where the first mk states are the same as in the case of the minimum above. With the probabilities in the exit vector s the Markov chain of Y gets absorbed, and our new Markov chain continues in the states of X. And with the probabilities in the exit vector t the Markov chain of X gets absorbed, and our new Markov chain continues in the states of Y. Hence the matrix L describes the transition probabilities in the new Markov chain. If one of the Markov chains initiates in the absorbing state, we only have to keep track of the remaining Markov chain. Hence the initial probability vector corresponding to the matrix L is given by γ.
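For the minimum, the Kronecker construction is easy to verify through survival functions, since P(min(X, Y) > n) = P(X > n)P(Y > n); a minimal sketch with arbitrary illustrative parameters:

```python
import numpy as np

alpha, T = np.array([1.0, 0.0]), np.array([[0.3, 0.4], [0.2, 0.3]])
beta,  S = np.array([0.5, 0.5]), np.array([[0.6, 0.1], [0.2, 0.4]])

gamma, L = np.kron(alpha, beta), np.kron(T, S)   # representation of min(X, Y)

def surv(a, M, n):
    """P(tau > n) = a M^n e, cf. Lemma 2.3."""
    return a @ np.linalg.matrix_power(M, n) @ np.ones(len(a))

for n in range(0, 10):
    assert np.isclose(surv(gamma, L, n), surv(alpha, T, n) * surv(beta, S, n))
print("minimum representation verified")
```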

The last property is about the N-fold convolution of a discrete phase-type distribution, where N is also discrete phase-type distributed. For ease of computation we will assume in the following that the probability of initiating in the absorbing state is zero.

Property 2.5 Let X_i ∼ DPH(α, T) i.i.d. and N ∼ DPH(β, S). Then the N-fold convolution of the X_i,

$$Z = \sum_{i=1}^{N} X_i,$$

is again discrete phase-type distributed, with representation (γ, V), where γ = (α⊗β) and V is the upper left part of the transition probability matrix

$$P = \begin{pmatrix} T \otimes I + (\mathbf{t}\alpha) \otimes S & \mathbf{t} \otimes \mathbf{s} \\ 0 & 1 \end{pmatrix},$$

where the identity matrix has the same dimension as the matrix S.

Proof. The interpretation of the matrix V = T⊗I + (tα)⊗S is that first the transient states of the Markov chain corresponding to the X_i are visited according to the transition probabilities in T. When this Markov chain gets absorbed with the probabilities given in t, the Markov chain underlying N is visited according to S. If absorption in this second Markov chain does not take place, the first Markov chain is revisited with the initial probabilities given in α. The new system corresponding to V is absorbed when both the first and the second Markov chain get absorbed.

We will make this probabilistic proof clearer with the following example.

Example 2.2 Let X_i ∼ DPH(α, T) be represented by the following vector and matrix:

$$\alpha = (1, 0), \qquad T = \begin{pmatrix} 1/2 & 1/3 \\ 1/3 & 1/3 \end{pmatrix},$$

and let N ∼ DPH(β, S) be geometrically distributed with success probability p ∈ (0,1), hence

$$\beta = 1, \qquad S = (1-p).$$

This leads to the exit vectors

$$\mathbf{t} = \begin{pmatrix} 1/6 \\ 1/3 \end{pmatrix} \qquad \text{and} \qquad \mathbf{s} = p.$$

The underlying Markov chains for the X_i and for N are shown in Figure 2.1.

Figure 2.1: The Markov chains corresponding to X_i and N. Panel (I): underlying Markov chain of the X_i; panel (II): underlying Markov chain of N.

Using γ = (α⊗β) and V = T⊗I + (tα)⊗S we get for the representation of Z:

$$\gamma = (1, 0), \qquad V = \begin{pmatrix} \frac12 + \frac16(1-p) & \frac13 + 0\cdot(1-p) \\ \frac13 + \frac13(1-p) & \frac13 + 0\cdot(1-p) \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} p/6 \\ p/3 \end{pmatrix}.$$

The interpretation of this N-fold convolution of the X_i is that first states 1 and 2 of (I) are visited, and when the absorbing state 3 is reached, the Markov chain given in (II) is started. With probability (1−p) this Markov chain chooses to go for another round in (I), and with probability p it gets absorbed in state b, and then the total system is absorbed. We can think of the Markov chain underlying Z as a combination of (I) and (II), where the states 3 and a are left out, as the probabilities of going from state 1 or 2 through state 3 and state a back to state 1 are equal to the new probabilities given in the matrix V of going directly from state 1 or 2 to state 1. Figure 2.2 depicts this Markov chain underlying the random variable Z.


Figure 2.2: Underlying Markov chain of Z.
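The representation (γ, V) of this example can be checked numerically against the mixture of k-fold convolutions of the density of the X_i with geometric weights (1−p)^{k−1}p; the sketch below fixes p = 0.3 as an arbitrary illustration.

```python
import numpy as np

p = 0.3
alpha, T = np.array([1.0, 0.0]), np.array([[1/2, 1/3], [1/3, 1/3]])
t = np.ones(2) - T @ np.ones(2)          # exit vector (1/6, 1/3)
beta, S = np.array([1.0]), np.array([[1 - p]])

gamma = np.kron(alpha, beta)
V = np.kron(T, np.eye(1)) + np.kron(np.outer(t, alpha), S)
v = np.ones(2) - V @ np.ones(2)          # equals (p/6, p/3)

def pmf(a, M, ex, n):
    return a @ np.linalg.matrix_power(M, n - 1) @ ex

nmax = 25
f = np.array([0.0] + [pmf(alpha, T, t, n) for n in range(1, nmax + 1)])
z = np.zeros(nmax + 1)
g = f.copy()                             # density of X_1 + ... + X_k
for k in range(1, nmax + 1):             # terms with k > nmax contribute 0 below nmax
    z += (1 - p) ** (k - 1) * p * g
    g = np.convolve(g, f)[: nmax + 1]

for n in range(1, nmax + 1):
    assert np.isclose(pmf(gamma, V, v, n), z[n])
print("N-fold convolution representation verified")
```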

2.2 Continuous phase-type distributions

In analogy with the discrete case, continuous phase-type distributions (or simply phase-type distributions) are defined as the time until absorption in a continuous-time Markov chain.

Let {X_t}_{t≥0} be a continuous-time Markov chain on the state space E = {1, 2, . . . , m, m+1}, where we again take state m+1 as the absorbing state. The intensity matrix of this process is given by

$$\Lambda = \begin{pmatrix} T & \mathbf{t} \\ \mathbf{0} & 0 \end{pmatrix}.$$

Here T is an m×m sub-intensity matrix, and since each row of an intensity matrix must sum to zero we have t = −Te. The initial distribution of this process will again be denoted by (α, α_{m+1}) with αe + α_{m+1} = 1.

Definition 2.8 (Phase-type distribution) A random variable τ has a phase-type distribution if τ is the time until absorption in a continuous-time Markov chain,

$$\tau := \inf\{t > 0 : X_t = m+1\}.$$

We write τ ∼ PH(α, T), where α is the initial probability vector and T the sub-intensity matrix of the transient states of the Markov chain. The transition probability matrix for the Markov chain at time t is given by P_t = e^{Λt}. For the transition probabilities in the transient states of the Markov chain we have

$$P(X_t = j,\, t \le \tau \mid X_0 = i) = \left(e^{Tt}\right)_{ij}.$$

Lemma 2.9 The density of a phase-type random variable τ is given by

$$f(t) = \alpha e^{Tt}\mathbf{t}, \quad t > 0, \qquad (2.7)$$

and f(0) = α_{m+1}.


Proof. We have f(t)dt = P(τ ∈ (t, t+dt]). By conditioning on the initial state of the Markov chain, i, and on the state at time t, j, we get

$$f(t)\,dt = \sum_{i,j=1}^{m} P(\tau \in (t, t+dt] \mid X_t = j, X_0 = i)\,P(X_t = j \mid X_0 = i)\,P(X_0 = i) = \sum_{i,j=1}^{m} P(\tau \in (t, t+dt] \mid X_t = j)\,(P_t)_{ij}\,\alpha_i.$$

The probability of absorption in the time interval (t, t+dt] when X_t = j is given by t_j dt, where t_j is the j-th element of the exit vector t. Also, for all states i, j ∈ {1, . . . , m} we have (P_t)_{ij} = (e^{Tt})_{ij}, which leads to

$$f(t)\,dt = \sum_{i,j=1}^{m} \alpha_i \left(e^{Tt}\right)_{ij} t_j\,dt = \alpha e^{Tt}\mathbf{t}\,dt.$$

As in the discrete case we have f(0) = α_{m+1}, which is the probability of starting in the absorbing state.

The continuous analogue of the probability generating function is the Laplace transform. For a non-negative random variable X with density f(x) the Laplace transform is defined as

$$L(s) = E\left(e^{-sX}\right) = \int_0^{\infty} e^{-sx} f(x)\,dx.$$

Without proof we state the following:

Corollary 2.10 The distribution function of a phase-type random variable is given by

$$F(t) = 1 - \alpha e^{Tt}\mathbf{e}. \qquad (2.8)$$

The Laplace transform of a phase-type random variable X is given by

$$L(s) = \alpha_{m+1} + \alpha(sI - T)^{-1}\mathbf{t}. \qquad (2.9)$$

Note that, analogously to the discrete case, the Laplace transform of a phase-type distribution is a rational function in s.

An important property of continuous phase-type distributions that does not hold in the discrete case is that a continuous phase-type density is strictly positive on (0,∞). This can be seen by rewriting the matrix e^{Tt} using uniformization. If we take a phase-type density f(t) = αe^{Tt}t with irreducible representation (α, T), we can write

$$\lambda = -\min_i(T_{ii})$$


and T = λ(K − I), where K = I + λ^{-1}T. The matrix K is sub-stochastic, since

$$\sum_j K_{ij} = \sum_j \frac{T_{ij}}{\lambda} + 1 \le 1,$$

and its entries lie between 0 and 1:

$$K_{ij} = \frac{T_{ij}}{\lambda} = -\frac{T_{ij}}{\min_i T_{ii}} \le 1 \ (i \ne j), \qquad K_{ii} = \frac{T_{ii}}{\lambda} + 1 = -\frac{T_{ii}}{\min_i T_{ii}} + 1 \ge 0.$$

We find for e^{Tt}:

$$e^{Tt} = e^{\lambda(K-I)t} = e^{-\lambda t}\sum_{i=0}^{\infty} \frac{(\lambda t)^i}{i!} K^i,$$

which is a non-negative matrix. If the matrix K^i is strictly positive from some i on (which holds when the matrix T is irreducible), this matrix will be strictly positive. If this is not the case, there will be states in the underlying Markov chain that cannot be reached from certain other states. However, the choice of α will make sure that all states can be reached with positive probability, as we have chosen the representation (α, T) to be irreducible (meaning we left out all the states that cannot be reached). Hence αe^{Tt} is a strictly positive row vector, and multiplying this vector with the non-negative non-empty column vector t ensures that αe^{Tt}t > 0 for all t > 0.

Example 2.3 If we take a phase-type density with representation

$$\alpha = (1, 0, 0), \qquad T = \begin{pmatrix} -\lambda & \lambda & 0 \\ 0 & -\lambda & \lambda \\ 0 & 0 & -\lambda \end{pmatrix} \qquad \text{and} \qquad \mathbf{t} = \begin{pmatrix} 0 \\ 0 \\ \lambda \end{pmatrix},$$

then we have a continuous phase-type distribution where each phase is visited for an exp(λ) distributed time before the process enters the next phase. Since there are three transient states, this is the sum of three exponential distributions and hence an Erlang-3 distribution. The density of this distribution is given by

$$f(t) = \alpha e^{Tt}\mathbf{t} = (1, 0, 0)\begin{pmatrix} e^{-\lambda t} & e^{-\lambda t}\lambda t & \frac12 e^{-\lambda t}\lambda^2 t^2 \\ 0 & e^{-\lambda t} & e^{-\lambda t}\lambda t \\ 0 & 0 & e^{-\lambda t} \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ \lambda \end{pmatrix} = \frac{\lambda e^{-\lambda t}(\lambda t)^2}{2},$$

which we recognize as the Erlang-3 density.
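The same computation can be reproduced numerically with a matrix-exponential routine; a minimal sketch comparing αe^{Tt}t, evaluated with scipy, against the closed-form Erlang-3 density:

```python
import numpy as np
from scipy.linalg import expm

lam = 2.0
alpha = np.array([1.0, 0.0, 0.0])
T = np.array([[-lam,  lam,  0.0],
              [ 0.0, -lam,  lam],
              [ 0.0,  0.0, -lam]])
t_exit = np.array([0.0, 0.0, lam])

for x in [0.1, 0.5, 1.0, 2.5]:
    f = alpha @ expm(T * x) @ t_exit                 # alpha e^{Tx} t
    erlang3 = lam * np.exp(-lam * x) * (lam * x) ** 2 / 2
    assert np.isclose(f, erlang3)
print("Erlang-3 density verified")
```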


Chapter 3

Matrix-exponential distributions

This chapter is about matrix-exponential distributions. They are distributions with a density of the same form as continuous phase-type distributions, f(t) = αe^{Tt}t, but they do not necessarily possess the probabilistic interpretation as the time until absorption in a continuous-time finite-state Markov chain.

In this chapter we explore some identities and give some examples of matrix-exponential distributions. It serves as a preparation for the next chapter, in which we will study the discrete analogue of matrix-exponential distributions.

A thorough introduction to matrix-exponential distributions is given by Asmussen and O'Cinneide in [4].

In Section 3.1 the definition of a matrix-exponential distribution is given, and the equivalent definition as distributions with a rational Laplace transform is addressed. In Section 3.2 we give three examples of matrix-exponential distributions and explain their relationship to phase-type distributions.


3.1 Definition

Definition 3.1 (Matrix-exponential distribution) A random variable X has a matrix-exponential distribution if the density of X is of the form

$$f(t) = \alpha e^{Tt}\mathbf{t}, \quad t \ge 0.$$

Here α and t are a row and column vector of length m, and T is an m×m matrix, all with entries in ℂ.

If we denote the class of matrix-exponential distributions by ME and the class of phase-type distributions by PH, we immediately have the relation PH ⊆ ME. In the next section we will show that in fact PH ⊊ ME. The matrix-exponential distributions generalize phase-type distributions in the sense that they do not need to possess the probabilistic interpretation as the distribution of the time until absorption in a continuous-time Markov chain. This means that the vector α and matrix T are not necessarily an initial probability vector and sub-intensity matrix, which allows them to have entries in ℂ. As T is no longer a sub-intensity matrix, the relation t = −Te does not necessarily hold. Hence the vector t becomes a parameter, and we write

$$X \sim \mathrm{ME}(\alpha, T, \mathbf{t})$$

for a matrix-exponential random variable X.

There is an equivalent definition of the class of matrix-exponential distributions, which says that a random variable X has a matrix-exponential distribution if the Laplace transform L(s) of X is a rational function in s. The connection between the rational Laplace transform and the density of a matrix-exponential distribution is made explicit in the following proposition, which is taken from Asmussen & Bladt [3].

Proposition 3.2 The Laplace transform of a matrix-exponential distribution can be written as

$$L(s) = \frac{b_1 + b_2 s + b_3 s^2 + \cdots + b_n s^{n-1}}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}, \qquad (3.1)$$

for some n ≥ 1 and some constants a_1, . . . , a_n, b_1, . . . , b_n. From L(0) = 1 it follows that a_n = b_1. The distribution has the representation f(t) = αe^{Tt}t, where

$$\alpha = (b_1, b_2, \ldots, b_n), \qquad T = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1
\end{pmatrix}, \qquad \mathbf{t} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}. \qquad (3.2)$$

Proof. The proof of this proposition can be found in [3], pages 306–310.

If T is an m×m matrix, the corresponding matrix-exponential distribution is said to be of order m. If we write L(s) = p(s)/q(s), then d = deg(q) is the degree of the matrix-exponential distribution.

Note that Proposition 3.2 is of great importance, as it gives a method to find an order-d representation of a matrix-exponential density that has a Laplace transform of degree d.
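A sketch of this method: the helper below builds the companion-form representation (3.2) from the coefficients a_1, . . . , a_n and b_1, . . . , b_n of (3.1). As a sanity check we use the Erlang-3 transform L(s) = λ³/(s+λ)³, whose coefficients are a = (3λ, 3λ², λ³) and b = (λ³, 0, 0).

```python
import numpy as np
from scipy.linalg import expm

def me_representation(a, b):
    """(alpha, T, t) of Formula (3.2), with a = (a_1,...,a_n), b = (b_1,...,b_n)."""
    n = len(a)
    alpha = np.array(b, dtype=float)
    T = np.diag(np.ones(n - 1), k=1)            # superdiagonal of ones
    T[-1, :] = -np.array(a[::-1], dtype=float)  # last row: -a_n, ..., -a_1
    t = np.zeros(n)
    t[-1] = 1.0
    return alpha, T, t

lam = 1.5
alpha, T, t = me_representation([3 * lam, 3 * lam ** 2, lam ** 3],
                                [lam ** 3, 0.0, 0.0])
for x in [0.2, 1.0, 3.0]:
    assert np.isclose(alpha @ expm(T * x) @ t,
                      lam * np.exp(-lam * x) * (lam * x) ** 2 / 2)
print("companion-form representation verified")
```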

3.2 Examples of matrix-exponential distributions

In this section we will give three examples of matrix-exponential distributions. Our main aim with these examples is to illustrate that the class of matrix-exponential distributions is strictly larger than the class of phase-type distributions, and that it is sometimes more convenient to use a matrix-exponential representation of a distribution because it can be represented with a lower-order representation (i.e. the matrix T has a lower dimension).

To be able to say whether or not a distribution is of phase type, we use the following characterization theorem for phase-type distributions by C.A. O'Cinneide [14].


Theorem 3.3 (Characterization of phase-type distributions) A distribution on [0,∞) with rational Laplace-Stieltjes transform is of phase type if and only if it is either the point mass at zero, or (a) it has a continuous positive density on the positive reals, and (b) its Laplace-Stieltjes transform has a unique pole of maximal real part.

The proof of this theorem is far from simple and can be found in [14]. Condition (a) of the characterization, stating that a phase-type density is always positive on (0,∞), has already been addressed in Chapter 2. Condition (b) implies that a distribution can only be of phase type if the sinusoidal effects in the tail behavior of its density (which cause two conjugate complex poles in the Laplace-Stieltjes transform) decay at a faster exponential rate than the dominant effects.

The first example is of a matrix-exponential distribution that satisfies neither Condition (a) nor Condition (b) of Theorem 3.3, and hence is not of phase type.

Example 3.1 A distribution that is genuinely matrix-exponential (hence not of phase-type) is represented by the following density:

$$f(t) = Ke^{-\lambda t}(1 - \cos(\omega\pi t)), \quad t \ge 0,$$

for λ and ω real coefficients and K a scaling coefficient given by

$$K = \frac{\lambda(\lambda^2 + \pi^2\omega^2)}{\pi^2\omega^2}.$$

Figure 3.1 shows this density for λ = 1 and ω = 2, which implies K = 1 + 1/(4π²). This density is zero whenever t equals 2n/ω for n ∈ ℕ and therefore violates Condition (a) of Theorem 3.3. By rewriting this density using the identity cos x = (e^{ix} + e^{-ix})/2 we find a matrix-exponential representation for this density:

$$\begin{aligned}
f(t) &= Ke^{-\lambda t}(1 - \cos(\omega\pi t)) = Ke^{-\lambda t}\left(1 - \frac{e^{i\omega\pi t} + e^{-i\omega\pi t}}{2}\right) \\
&= Ke^{-\lambda t} - \frac{K}{2}e^{-t(\lambda - i\omega\pi)} - \frac{K}{2}e^{-t(\lambda + i\omega\pi)} \\
&= (1, -1, -1)\begin{pmatrix} e^{-\lambda t} & 0 & 0 \\ 0 & e^{-t(\lambda - i\omega\pi)} & 0 \\ 0 & 0 & e^{-t(\lambda + i\omega\pi)} \end{pmatrix}\begin{pmatrix} K \\ K/2 \\ K/2 \end{pmatrix}.
\end{aligned}$$


Figure 3.1: Density plot of f(t) = (1 + 1/(4π²)) e^{−t}(1 − cos(2πt)).

The Laplace transform of f(t) is given by

$$L(s) = K\int_0^{\infty} e^{-st}e^{-\lambda t}(1 - \cos(\omega\pi t))\,dt = \frac{K\pi^2\omega^2}{(s+\lambda)(s+\lambda - i\pi\omega)(s+\lambda + i\pi\omega)} = \frac{K\pi^2\omega^2}{s^3 + 3\lambda s^2 + (\pi^2\omega^2 + 3\lambda^2)s + \pi^2\omega^2\lambda + \lambda^3}. \qquad (3.3)$$

This is a rational expression in s, which again shows that f(t) is a matrix-exponential density. The poles of the Laplace transform are −λ, −λ + iπω and −λ − iπω, which all have −λ as real part. Hence this distribution also violates Condition (b) of the characterization theorem for phase-type distributions.

Now by using Proposition 3.2 and Formula (3.3) we find that another matrix-exponential representation f(t) = αe^{Tt}t is given by

$$\alpha = (K\pi^2\omega^2, 0, 0), \qquad T = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\pi^2\omega^2\lambda - \lambda^3 & -\pi^2\omega^2 - 3\lambda^2 & -3\lambda \end{pmatrix} \qquad \text{and} \qquad \mathbf{t} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$


Figure 3.2: Density plot of g(t) = (1 + 1/(2 + 8π²))^{−1} e^{−t}(1 + ½ cos(2πt)).

A slight change in this first example leads to a density that violates Condition (b) but not Condition (a) of Theorem 3.3 and hence is again matrix-exponential and not of phase-type.

Example 3.2 The distribution represented by the density

$$g(t) = \frac{1}{1 + \frac{1}{2 + 8\pi^2}}\, e^{-t}\left(1 + \frac12\cos(2\pi t)\right), \quad t \ge 0,$$

is genuinely matrix-exponential. Its Laplace transform is given by

$$L(s) = \frac{K}{s+1} + \frac{K}{4(s+1-2\pi i)} + \frac{K}{4(s+1+2\pi i)},$$

where K = (1 + 1/(2 + 8π²))^{−1}.

The poles of the Laplace transform are −1, −1 + 2πi and −1 − 2πi, hence there is no unique pole of maximal real part, and Condition (b) of Theorem 3.3 is not satisfied. We do have g(t) > 0 for all t ≥ 0. Figure 3.2 shows a plot of this density.

In 'Phase-Type Distributions and Invariant Polytopes' [15], C.A. O'Cinneide states the following theorem on the minimal order of a phase-type distribution.

Theorem 3.4 Let µ be a phase-type distribution with −λ_1 the pole of its transform of maximal real part, and with poles at −λ_2 ± iθ, where θ > 0. Then the order n of µ satisfies

$$\frac{\theta}{\lambda_2 - \lambda_1} \le \cot\frac{\pi}{n}.$$


Using the fact that tan x ≥ x for 0 ≤ x ≤ π/2, this implies that

$$n \ge \frac{\pi\theta}{\lambda_2 - \lambda_1}. \qquad (3.4)$$

Using this theorem we can give an example of a distribution that has a lower order when represented as a matrix-exponential density than when represented as a phase-type density.

Example 3.3 This is an elaboration of an example in [15], page 524. The density

$$h(t) = \frac{1}{\frac12 + \frac{1}{1-\varepsilon}}\left(e^{-(1-\varepsilon)t} + e^{-t}\cos t\right), \quad t \ge 0,$$

for 0 < ε < 1, has Laplace transform

$$L(s) = \frac{2(-1+\varepsilon)(-3+\varepsilon-4s+s\varepsilon-2s^2)}{(-3+\varepsilon)(-1+\varepsilon-s)(2+2s+s^2)}.$$

The denominator of L(s) is of degree three, and hence h(t) has a matrix-exponential representation of order three, which can again be found by using Proposition 3.2. The poles of the Laplace transform are −1−i, −1+i and −1+ε. Thus by Theorem 3.4 and Inequality (3.4) we have for the order m of the phase-type representation of h(t) that

$$m \ge \frac{\pi}{\varepsilon}.$$

Hence we have a phase-type density of degree three but with arbitrarily large order. Note that for ε = 0 this density is not of phase-type, since it then doesn't have a unique pole of maximal real part. Figure 3.3 shows the density plot of h(t) for ε = 0, 7/9 and 1/2.

In this chapter we have shown that the class of matrix-exponential distributions is strictly larger than the class of phase-type distributions. We have seen that there are phase-type distributions that have a larger order than their degree. In the next chapter we will explore the relation between matrix-geometric distributions, which are the discrete analogue of matrix-exponential distributions, and discrete phase-type distributions.


Figure 3.3: Density plot of h(t) for ε = 0, 7/9 and 1/2.


Chapter 4

Matrix-geometric distributions

This chapter is about the class of matrix-geometric distributions (MG). They are distributions on the non-negative integers with a density of the same form as discrete phase-type distributions,

$$f(n) = \alpha T^{n-1}\mathbf{t}.$$

However, matrix-geometric distributions do not necessarily possess the probabilistic interpretation as the time until absorption in a discrete-time Markov chain. They can equivalently be defined as distributions with a rational probability generating function.

The main question of this thesis, to be answered in this chapter, is how matrix-geometric distributions relate to discrete phase-type distributions. We will show that

$$\mathrm{DPH} \subsetneq \mathrm{MG}$$

and that there is possible order reduction by taking the matrix-geometric representation of a discrete phase-type distribution.

Matrix-geometric distributions are mentioned by Asmussen and O'Cinneide in [4] as the discrete equivalent of matrix-exponential distributions, but no explicit example of such a distribution is given. They are used by Akar in [1] to study the queue lengths and waiting times of a discrete-time queueing system, but no explicit examples are given in this article either. Sengupta [17] shows a class of distributions that are obviously in MG, but by the use of a transformation he shows that they are actually in DPH.

In Section 4.1 matrix-geometric distributions are defined. In Section 4.2 we show that there exist distributions that are genuinely matrix-geometric, hence not of discrete phase-type, by the use of the characterization theorem of discrete phase-type distributions stated by O'Cinneide in [14]. In Section 4.3 we will show that there exist phase-type distributions that have a lower degree, and hence a lower-order matrix-geometric representation, than the order of their discrete phase-type representation. Finally, in Section 4.4 we will revisit some properties of discrete phase-type distributions and show that they also hold for the class MG, by proving them analytically.

4.1 Definition

Definition 4.1 (Matrix-geometric distribution) A random variable X has a matrix-geometric distribution if the density of X is of the form

$$f(n) = \alpha T^{n-1}\mathbf{t}, \quad n \in \mathbb{N},$$

where α and t are a row and column vector of length m and T is an m×m matrix with entries in ℂ.

We have to make a note about the support of these distributions. Matrix-geometric distributions have support on the non-negative integers, and the matrix-geometric density is actually a mixture of the above density defined on n ∈ ℕ and the point mass at zero. For convenience we will assume in this chapter that f(0) = 0.

The following proposition shows that we can equivalently define matrix-geometric distributions as distributions that have a rational probability generating function. It is the discrete equivalent of Proposition 3.2.

Proposition 4.2 The probability generating function of a matrix-geometric distribution can be written as

$$H(z) = \frac{b_n z + b_{n-1} z^2 + \cdots + b_2 z^{n-1} + b_1 z^n}{1 + a_1 z + a_2 z^2 + \cdots + a_{n-1} z^{n-1} + a_n z^n}, \qquad (4.1)$$

for some n ≥ 1 and constants a_1, . . . , a_n, b_1, . . . , b_n in ℝ. The corresponding density is given by f(n) = αT^{n−1}t, where

$$\alpha = (1, 0, \ldots, 0), \qquad T = \begin{pmatrix}
-a_1 & 0 & 0 & \cdots & 0 & 1 \\
-a_n & 0 & 0 & \cdots & 0 & 0 \\
-a_{n-1} & 1 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
-a_2 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix} \qquad \text{and} \qquad \mathbf{t} = \begin{pmatrix} b_n \\ b_1 \\ b_2 \\ \vdots \\ b_{n-1} \end{pmatrix}. \qquad (4.2)$$

We call this the canonical form of the matrix-geometric density.

Proof. From Equation (2.9) and Proposition 3.2 we know that we have the following formula for the Laplace transform of a matrix-exponential distribution:

$$L(s) = \alpha(sI - T)^{-1}\mathbf{t} = \frac{b_1 + b_2 s + b_3 s^2 + \cdots + b_n s^{n-1}}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}.$$

In Proposition 3.2 the corresponding vectors α and t and matrix T are given, using the coefficients of the Laplace transform.

The probability generating function of a matrix-geometric distribution is given by

$$H(z) = \alpha z(I - zT)^{-1}\mathbf{t}.$$

This formula is equal to the matrix formula of the Laplace transform if s is replaced by 1/z, which leads to the following fractional form of the probability generating function:

$$H(z) = L\!\left(\frac{1}{z}\right) = \frac{b_1 + \frac{b_2}{z} + \frac{b_3}{z^2} + \cdots + \frac{b_n}{z^{n-1}}}{\frac{1}{z^n} + \frac{a_1}{z^{n-1}} + \cdots + \frac{a_{n-1}}{z} + a_n} = \frac{b_n z + b_{n-1}z^2 + \cdots + b_2 z^{n-1} + b_1 z^n}{1 + a_1 z + a_2 z^2 + \cdots + a_{n-1}z^{n-1} + a_n z^n}.$$

The corresponding vectors α and t and matrix T are hence the same as in Proposition 3.2, Formula (3.2). To obtain the canonical form for a matrix-geometric density given in Formula (4.2), we take the transpose of α, t and T and rename the states to obtain α = (1, 0, . . . , 0).
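A sketch of the canonical form as a construction: the helper below builds (α, T, t) of Formula (4.2) from the pgf coefficients. As a check (for n = 2) we use the sum of two independent geometric variables with parameters p_1 and p_2, whose pgf is p_1p_2z²/(1 − (q_1+q_2)z + q_1q_2z²) with q_i = 1 − p_i, i.e. a = (−(q_1+q_2), q_1q_2) and b = (p_1p_2, 0).

```python
import numpy as np

def mg_canonical(a, b):
    """(alpha, T, t) of Formula (4.2), a = (a_1,...,a_n), b = (b_1,...,b_n), n >= 2."""
    n = len(a)
    alpha = np.zeros(n)
    alpha[0] = 1.0
    T = np.zeros((n, n))
    T[0, 0], T[0, -1] = -a[0], 1.0            # first row: (-a_1, 0, ..., 0, 1)
    for i in range(1, n):                     # row i+1 starts with -a_{n-i+1}
        T[i, 0] = -a[n - i]
        if i >= 2:
            T[i, i - 1] = 1.0                 # shifted subdiagonal of ones
    t = np.concatenate([[b[-1]], b[:-1]])     # (b_n, b_1, ..., b_{n-1})
    return alpha, T, t

p1, p2 = 0.5, 0.7
q1, q2 = 1 - p1, 1 - p2
alpha, T, t = mg_canonical([-(q1 + q2), q1 * q2], [p1 * p2, 0.0])

for n in range(1, 12):
    f = alpha @ np.linalg.matrix_power(T, n - 1) @ t
    direct = sum(q1 ** (i - 1) * p1 * q2 ** (n - i - 1) * p2 for i in range(1, n))
    assert np.isclose(f, direct)
print("canonical MG representation verified")
```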
