
(1)

Generative Models for Complex Network Structure

Aaron Clauset

@aaronclauset

Computer Science Dept. & BioFrontiers Institute, University of Colorado, Boulder

External Faculty, Santa Fe Institute

4 June 2013, NetSci 2013: Complex Networks meets Machine Learning


(2)

• what is structure?
• generative models for complex networks
  general form
  types of models
  opportunities and challenges
• weighted stochastic block models
  a parable about thresholding
  checking our models
  learning from data (approximately)

(3)

what is structure?

• makes data different from noise
  makes a network different from a random graph

(4)

what is structure?

• makes data different from noise
  makes a network different from a random graph
• helps us compress the data
  describe the network succinctly
  capture most relevant patterns

(5)

what is structure?

• makes data different from noise
  makes a network different from a random graph
• helps us compress the data
  describe the network succinctly
  capture most relevant patterns
• helps us generalize, from data we've seen to data we haven't seen:
  i. from one part of the network to another
  ii. from one network to others of the same type
  iii. from small scale to large scale (coarse-grained structure)
  iv. from past to future (dynamics)

(6)

statistical inference

• imagine the graph G is drawn from an ensemble or generative model: a probability distribution P(G | θ) with parameters θ
• θ can be continuous or discrete; θ represents the structure of the graph

(7)

statistical inference

• imagine the graph G is drawn from an ensemble or generative model: a probability distribution P(G | θ) with parameters θ
• θ can be continuous or discrete; θ represents the structure of the graph
• inference (MLE): given G, find the θ that maximizes P(G | θ)
• inference (Bayes): compute or sample from the posterior distribution P(θ | G)

(8)

statistical inference

• imagine the graph G is drawn from an ensemble or generative model: a probability distribution P(G | θ) with parameters θ
• θ can be continuous or discrete; θ represents the structure of the graph
• inference (MLE): given G, find the θ that maximizes P(G | θ)
• inference (Bayes): compute or sample from the posterior distribution P(θ | G)
• if θ is partly known, constrain the inference and determine the rest
• if G is partly known, infer θ and use P(G | θ) to generate the rest
• if the model is a good fit (application dependent), we can generate synthetic graphs structurally similar to G
• if part of G has low probability under the model, flag it as a possible anomaly
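To make the two inference modes concrete, here is a minimal sketch (my own illustration, not from the talk) using the simplest generative model, the Erdős–Rényi graph G(n, p), where the parameter θ is just the edge probability p; the variable names and the Beta(1, 1) prior are my choices.

```python
# MLE and Bayesian inference for G(n, p): theta = p, the edge probability.
import numpy as np

rng = np.random.default_rng(0)

n, p_true = 50, 0.2
A = np.triu(rng.random((n, n)) < p_true, k=1)   # sample upper triangle, i < j

m = A.sum()                      # observed number of edges
pairs = n * (n - 1) // 2         # number of possible edges

# MLE: argmax_p P(G | p) = p^m (1 - p)^(pairs - m)  =>  p_hat = m / pairs
p_mle = m / pairs

# Bayes: a Beta(1, 1) prior gives the posterior Beta(1 + m, 1 + pairs - m)
post = rng.beta(1 + m, 1 + pairs - m, size=10_000)

print(f"MLE: {p_mle:.3f}")
print(f"posterior mean: {post.mean():.3f}, "
      f"95% interval {np.percentile(post, [2.5, 97.5]).round(3)}")
```

The MLE is just the observed edge density; the Bayesian route returns a full posterior over p, which is what the model-checking ideas later in the talk exploit.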

(9)

• what is structure?
• generative models for complex networks
  general form
  types of models
  opportunities and challenges
• weighted stochastic block models
  a parable about thresholding
  checking our models
  learning from data (approximately)

(10)

generative models for complex networks

general form:

P(G | θ) = ∏_{i<j} P(A_ij | θ)

• assumptions about "structure" go into P(A_ij | θ)
• consistency, lim_{n→∞} Pr(θ̂ ≠ θ) = 0, requires that edges be conditionally independent [Shalizi, Rinaldo 2011]

(11)

assortative modules

[figure: dendrogram D with probabilities {p_r}, one probability p_r per internal node r]

(12)

hierarchical random graph

[figure: model (dendrogram) and a sampled instance]

Pr(i, j connected) = p_r = p(lowest common ancestor of i, j)

(13)

L(D, {p_r}) = ∏_r p_r^{E_r} (1 − p_r)^{L_r R_r − E_r}

where, for each internal node r of the dendrogram D:
  L_r = number of nodes in the left subtree of r
  R_r = number of nodes in the right subtree of r
  E_r = number of edges with r as lowest common ancestor
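A hedged sketch of evaluating this likelihood on a toy dendrogram; the nested-tuple encoding, helper names, and example edge list are all mine. Each p_r is set to its maximum-likelihood value E_r / (L_r R_r), under which terms with p_r ∈ {0, 1} contribute zero log-likelihood.

```python
# HRG log-likelihood with each p_r at its MLE. An edge's lowest common
# ancestor is the unique internal node whose split puts its endpoints
# on opposite sides.
import math

D = (((0, 1), 2), ((3, 4), 5))   # toy dendrogram over leaves {0..5}

def leaves(t):
    return {t} if isinstance(t, int) else leaves(t[0]) | leaves(t[1])

def internal_nodes(t):
    """Yield (left_leafset, right_leafset) for every internal node of t."""
    if isinstance(t, int):
        return
    yield leaves(t[0]), leaves(t[1])
    yield from internal_nodes(t[0])
    yield from internal_nodes(t[1])

def log_likelihood(D, edges):
    logL = 0.0
    for left, right in internal_nodes(D):
        Lr, Rr = len(left), len(right)
        # E_r: edges whose endpoints fall on opposite sides of node r
        Er = sum((i in left and j in right) or (i in right and j in left)
                 for i, j in edges)
        pr = Er / (Lr * Rr)          # MLE for p_r
        if 0 < pr < 1:               # p_r in {0, 1} contributes 0
            logL += Er * math.log(pr) + (Lr * Rr - Er) * math.log(1 - pr)
    return logL

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(log_likelihood(D, edges))
```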

(14)

classes of generative models

• stochastic block models

k types of vertices; P(A_ij | z_i, z_j) depends only on the types of i, j
originally invented by sociologists [Holland, Laskey, Leinhardt 1983]
many, many flavors, including
  mixed-membership SBM [Airoldi, Blei, Fienberg, Xing 2008]
  hierarchical SBM [Clauset, Moore, Newman 2006, 2008]
  restricted hierarchical SBM [Leskovec, Chakrabarti, Kleinberg, Faloutsos 2005]
  infinite relational model [Kemp, Tenenbaum, Griffiths, Yamada, Ueda 2006]
  restricted SBM [Hofman, Wiggins 2008]
  degree-corrected SBM [Karrer, Newman 2011]
  SBM + topic models [Ball, Karrer, Newman 2011]
  SBM + vertex covariates [Mariadassou, Robin, Vacher 2010]
  SBM + edge weights [Aicher, Jacobs, Clauset 2013]
  + many others

(15)

classes of generative models

• latent space models

nodes live in a latent space; P(A_ij | f(x_i, x_j)) depends only on vertex-vertex proximity
many, many flavors, including
  logistic function on vertex features [Hoff, Raftery, Handcock 2002]
  social status / ranking [Ball, Newman 2013]
  nonparametric metadata relations [Kim, Hughes, Sudderth 2012]
  multiple attribute graphs [Kim, Leskovec 2010]
  nonparametric latent feature model [Miller, Griffiths, Jordan 2009]
  infinite multiple memberships [Morup, Schmidt, Hansen 2011]
  ecological niche model [Williams, Anandanadesan, Purves 2010]
  hyperbolic latent spaces [Boguna, Papadopoulos, Krioukov 2010]

[embedded page from Hoff, Raftery & Handcock, Journal of the American Statistical Association, December 2002 — "Figure 1. Maximum Likelihood Estimates (a) and Bayesian Marginal Posterior Distributions (b) for Monk Positions."]

(16)

opportunities and challenges

• richly annotated data
  edge weights, node attributes, time, etc.
  = new classes of generative models
• generalize from n = 1 to ensemble
  useful for model checking, simulating other processes, etc.
• many familiar techniques
  frequentist and Bayesian frameworks
  makes probabilistic statements about observations, models
  predicting missing links
  leave-k-out cross validation
  approximate inference techniques (EM, VB, BP, etc.)
  sampling techniques (MCMC, Gibbs, etc.)
• learn from partial or noisy data
  extrapolation, interpolation, hidden data, missing data

(17)

opportunities and challenges

• only two classes of models
  stochastic block models
  latent space models
• bootstrap / resampling for network data
  critical missing piece
  depends on what is independent in the data
• model comparison
  naive AIC, BIC, marginalization, LRT can be wrong for networks
  what is the goal of modeling: realistic representation or accurate prediction?
• model assessment / checking?
  how do we know a model has done well? what do we check?
• what is v-fold cross-validation for networks?
  omit n²/v edges? omit n/v nodes?

(18)

• what is structure?
• generative models for complex networks
  general form
  types of models
  opportunities and challenges
• weighted stochastic block models
  a parable about thresholding
  checking our models
  learning from data (approximately)

(19)

stochastic block models

functional groups, not just clumps:

• social "communities" (large, small, dense or empty)
• social: leaders and followers
• word adjacencies: adjectives and nouns
• economics: suppliers and customers

(20)

classic stochastic block model

• nodes have discrete attributes: each vertex i has a type t_i ∈ {1, ..., k}
• k × k matrix p of connection probabilities
• if t_i = r and t_j = s, edge (i → j) exists with probability p_rs
• p is not necessarily symmetric, and we do not assume p_rr > p_rs
• given some G, we want to simultaneously
  label nodes (infer the type assignment t : V → {1, ..., k})
  learn the latent matrix p

(21)

classic stochastic block model

[figure: model (k × k block matrix with assortative modules) and a sampled instance]

likelihood:

P(G | t, p) = ∏_{(i,j) ∈ E} p_{t_i, t_j} ∏_{(i,j) ∉ E} (1 − p_{t_i, t_j})
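A minimal sketch of sampling an instance and evaluating this likelihood in log space for an undirected graph; the parameter values and variable names are illustrative, not from the talk.

```python
# Classic SBM: sample A_ij ~ Bernoulli(p[t_i, t_j]) for i < j, then
# evaluate log P(G | t, p) from the slide's product formula.
import numpy as np

rng = np.random.default_rng(1)

n, k = 60, 2
t = rng.integers(k, size=n)                 # type assignment t: V -> {0, ..., k-1}
p = np.array([[0.30, 0.05],
              [0.05, 0.25]])                # k x k connection probabilities

P = p[t[:, None], t[None, :]]               # per-pair edge probabilities
A = np.triu(rng.random((n, n)) < P, k=1)    # one Bernoulli draw per pair i < j

def sbm_log_likelihood(A, t, p):
    P = p[t[:, None], t[None, :]]
    iu = np.triu_indices(len(t), k=1)       # each pair i < j exactly once
    edge = A[iu]
    return np.log(P[iu][edge]).sum() + np.log(1 - P[iu][~edge]).sum()

print(sbm_log_likelihood(A, t, p))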

(22)

thresholding edge weights

• 4 groups, with µ_1 < µ_2 < µ_3 < µ_4
• edge weights ~ N(µ_i, σ²)
• what threshold t should we choose? t = 1, 2, 3, 4

(23)

• 4 groups, with µ_1 < µ_2 < µ_3 < µ_4
• edge weights ~ N(µ_i, σ²)
• set threshold t = 1, fit SBM

(24)

• 4 groups, with µ_1 < µ_2 < µ_3 < µ_4
• edge weights ~ N(µ_i, σ²)
• set threshold t = 2, fit SBM

(25)

• 4 groups, with µ_1 < µ_2 < µ_3 < µ_4
• edge weights ~ N(µ_i, σ²)
• set threshold t = 3, fit SBM

(26)

• 4 groups, with µ_1 < µ_2 < µ_3 < µ_4
• edge weights ~ N(µ_i, σ²)
• set threshold t = 4, fit SBM
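The parable is easy to reproduce numerically. A sketch, with all parameter values my own choices: groups well above the threshold become indistinguishable complete blocks, and groups well below it vanish, so no single t recovers all four groups.

```python
# Four planted groups with Normal weights and ordered means; thresholding
# at any t discards the weight information that separates the groups.
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([1.0, 2.0, 3.0, 4.0])        # mu_1 < mu_2 < mu_3 < mu_4
sigma, n_per = 0.3, 25
group = np.repeat(np.arange(4), n_per)

# within-group weights ~ N(mu_g, sigma^2); a low between-group baseline
# of 0.5 makes each group a weighted module
M = np.where(group[:, None] == group[None, :], mu[group][:, None], 0.5)
W = rng.normal(M, sigma)
np.fill_diagonal(W, 0.0)

for t in [1, 2, 3, 4]:
    A = W > t                               # binarize at threshold t
    dens = [A[np.ix_(group == g, group == g)].mean() for g in range(4)]
    print(f"t = {t}: within-group densities {np.round(dens, 2)}")
```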

(27)

weighted stochastic block model

adding auxiliary information:
• each edge has a weight w(i, j); let w(i, j) ~ f(x | θ) = h(x) exp( T(x) · η(θ) )
• covers all exponential-family type distributions:
  Bernoulli, binomial (classic SBM), multinomial
  Poisson, beta
  exponential, power law, gamma
  normal, log-normal, multivariate normal
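As a quick check of this form for the Normal case: folding the log-normalizer into a constant sufficient statistic gives T(x) = (x, x², 1) and η(µ, σ²) = (µ/σ², −1/(2σ²), −µ²/(2σ²) − ½ ln 2πσ²), with h(x) = 1. A sketch verifying this numerically (my own illustration):

```python
# Verify f(x | theta) = h(x) exp( T(x) . eta(theta) ) for Normal weights.
import numpy as np
from scipy.stats import norm

mu, s2 = 1.5, 0.4
x = np.linspace(-2.0, 5.0, 7)

T = np.stack([x, x**2, np.ones_like(x)])            # sufficient statistics
eta = np.array([mu / s2,
                -1 / (2 * s2),
                -mu**2 / (2 * s2) - 0.5 * np.log(2 * np.pi * s2)])

print(np.allclose(norm.logpdf(x, loc=mu, scale=np.sqrt(s2)), eta @ T))  # True
```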

(28)

weighted stochastic block model

adding auxiliary information:
• each edge has a weight w(i, j); let w(i, j) ~ f(x | θ) = h(x) exp( T(x) · η(θ) )
• examples of weighted graphs:
  frequency of social interactions (calls, txts, proximity, etc.)
  cell-tower traffic volume
  other similarity measures
  time-varying attributes
  missing edges, active learning, etc.

(29)

weighted stochastic block model

• given G and a choice of f (and a map R : k × k → {1, ..., R} grouping block pairs), learn θ and the block structure z
  (f : weight distribution; z : block assignment; G : weighted graph)
• likelihood function:

  P(G | z, θ, f) = ∏_{i<j} f( G_ij | θ_{R(z_i, z_j)} )

• technical difficulties:
  degeneracies in the likelihood function
  (the variance can go to zero. oops)
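A minimal sketch of this likelihood for Normal f, using the identity bundling R(r, s) = (r, s), i.e. one (µ, σ²) per block pair; the function and variable names are mine. It also makes the degeneracy visible: if all weights in one block pair coincide, the MLE drives that σ² toward zero and the log-likelihood diverges.

```python
# WSBM log-likelihood: log P(G | z, theta, f) = sum_{i<j} log f(W_ij | theta_{z_i, z_j})
import numpy as np
from scipy.stats import norm

def wsbm_log_likelihood(W, z, mu, s2):
    """W: (n, n) weight matrix; z: block labels; mu, s2: (k, k) parameters."""
    iu = np.triu_indices(len(z), k=1)        # each pair i < j exactly once
    m = mu[z[iu[0]], z[iu[1]]]               # block-pair parameters per edge
    v = s2[z[iu[0]], z[iu[1]]]
    return norm.logpdf(W[iu], loc=m, scale=np.sqrt(v)).sum()

rng = np.random.default_rng(3)
z = np.repeat([0, 1], 5)
mu = np.array([[2.0, 0.5], [0.5, 1.0]])
s2 = np.full((2, 2), 0.25)
W = rng.normal(mu[z[:, None], z[None, :]], 0.5)
print(wsbm_log_likelihood(W, z, mu, s2))
```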

(30)

approximate learning

• edge generative model P(G | z, θ, f)
• estimate the model via variational Bayes
  conjugate priors solve the degeneracy problem
  algorithms for dense and sparse graphs

(31)

dense weighted SBM

• approximate the posterior distribution:

  π(z, θ | G) ≈ q(z, θ) = ∏_i q_i(z_i) ∏_r q(θ_r)

• estimate by minimizing

  D_KL(q || π) = ln P(G | f) − 𝒢(q), where 𝒢(q) = E_q(ℒ) + E_q[ log( π(z, θ) / q(z, θ) ) ] and ℒ = ln P(G | z, θ, f)

• for a (conjugate) prior π and an exponential-family distribution f, taking derivatives yields update equations for q
• iterating the equations yields local optima of z, θ
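For intuition only, here is a heavily simplified version of the iteration. This is my own toy variant, not the algorithm from the paper: it keeps point estimates of θ instead of the full conjugate q(θ_r) on the slide, and a small additive smoothing of the variance (the 0.1 and 1.0 constants are arbitrary) stands in for conjugacy's fix to the σ → 0 degeneracy.

```python
# Toy mean-field loop for a Normal-weight WSBM on a dense weighted graph:
# alternate between refitting block-pair parameters under the soft
# assignments q and updating q from the expected log-densities.
import numpy as np
from scipy.stats import norm

def fit_wsbm_normal(W, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    q = rng.dirichlet(np.ones(k), size=n)            # q_i(z_i), shape (n, k)
    off = ~np.eye(n, dtype=bool)                     # ignore self-pairs
    for _ in range(iters):
        # weighted mean / variance of the weights between each block pair
        w = np.einsum('ir,js->rsij', q, q)[:, :, off]
        x = W[off]
        cnt = w.sum(-1) + 1e-9
        mu = (w * x).sum(-1) / cnt
        s2 = ((w * (x - mu[..., None]) ** 2).sum(-1) + 0.1) / (cnt + 1.0)
        # q_i(z_i = r)  ∝  exp( sum_{j≠i} sum_s q_j(s) log f(W_ij | theta_rs) )
        logf = norm.logpdf(W[:, :, None, None], mu, np.sqrt(s2))
        logf = np.where(off[:, :, None, None], logf, 0.0)
        score = np.einsum('ijrs,js->ir', logf, q)
        q = np.exp(score - score.max(1, keepdims=True))
        q /= q.sum(1, keepdims=True)
    return q.argmax(1), mu, s2
```

Run on the weighted matrix W from the thresholding sketch above (e.g. `fit_wsbm_normal(W, k=4)`), it recovers the four planted groups that no single threshold could.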

(32)

checking the model

synthetic network with known structure

• given a synthetic graph with known structure
• run the VB algorithm to convergence
• compare against threshold + SBM (and others)
• compute the variation of information (a partition distance):
  VI(P_1, P_2) ∈ [0, ln N]; in this case, VI(P_1, P_2) ∈ [0, ln k + 1.5] = [0, 3.1]
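Variation of information is a standard partition distance, VI(P_1, P_2) = H(P_1) + H(P_2) − 2 I(P_1, P_2), computed from the contingency table of the two labelings. A compact sketch (names mine):

```python
# Variation of information between two partitions given as label vectors.
import numpy as np

def variation_of_information(a, b):
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    C = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(C, (a, b), 1)                  # contingency table of labels
    pxy = C / n
    px, py = pxy.sum(1), pxy.sum(0)
    H = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()
    nz = pxy > 0
    I = (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()
    return H(px) + H(py) - 2 * I             # 0 iff the partitions coincide

print(variation_of_information([0, 0, 1, 1], [0, 1, 1, 1]))
```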

(33)

checking the model

synthetic network with known structure

• a variation of Newman's four-groups test
• k = 5 latent groups, with sizes n_r = [48, 16, 32, 48, 16]
• normal edge weights: f = N(µ_r, σ_r²)
• VI(P_1, P_2) ∈ [0, ln k + 1.5] = [0, 3.1] in this case

(34)

learn better with more data

increase the network size N

• fix k = k*, f = N(µ_r, σ_r²)
• bigger network, more data
• we keep the proportions n_r / N constant

(35)

learn better with more data: increase the network size N

[plot: variation of information vs. number of nodes, for WBM, SBM+Thresh, Kmeans, Cluster(Max), Cluster(Avg)]

• fix k = k*, f = N(µ_r, σ_r²)
• bigger network, more data
• WSBM converges on the correct solution more quickly
• thresholding + SBM is particularly bad

(36)

learning the number of groups

vary the number of groups found, k

• fix f = N(µ_r, σ_r²)
• too few / too many blocks?

(37)

learning the number of groups: vary the number of groups found, k

[plot: variation of information vs. number of groups K, for WBM, SBM+Thresh, Kmeans, Cluster(Max), Cluster(Avg)]

• fix f = N(µ_r, σ_r²)
• too few / too many blocks?
• WSBM converges on the correct solution
• WSBM fails gracefully when k > k*
• others do poorly
• in fact, Bayesian marginalization will correctly choose k = k* in this case

(38)

learning despite noise

increase the variance σ_r² in edge weights

• fix k = k*, f = N(µ_r, σ_r²)
• bigger variance, less signal

(39)

learning despite noise: increase the variance σ_r² in edge weights

[plot: variation of information vs. weight variance, for WBM, SBM+Thresh, Kmeans, Cluster(Max), Cluster(Avg)]

• fix k = k*, f = N(µ_r, σ_r²)
• bigger variance, less signal
• WSBM fails more gracefully than the alternatives, even for very high variance
• thresholding + SBM is particularly bad

(40)

comments

• single-scale structural inference
  mixtures of assortative, disassortative groups
• inference is cheap (VB)
  approximate inference works well
• thresholding edge weights is bad, bad, bad
  one threshold (SBM) vs. many (WSBM)
• generalizations also for sparse graphs, degree-corrections, etc.

(41)

generative models

• auxiliary information
  node & edge attributes, temporal dynamics (beyond static binary graphs)
• scalability
  fast algorithms for fitting models to big data (methods from physics, machine learning)
• model selection
  which model is better? is this model bad? how many communities?
• model checking
  have we learned correctly? check via generating synthetic networks
• partial or noisy data
  extrapolation, interpolation, hidden data, missing data
• anomaly detection
  low-probability events under the generative model


(43)

acknowledgments

thanks to:
Christopher Aicher (Colorado)
Abigail Jacobs (Colorado)
Cris Moore (Santa Fe)
Mark Newman (Michigan)
Cosma Shalizi (Carnegie Mellon)

funding from

some references

• Aicher, Jacobs & Clauset, "Adapting the Stochastic Block Model to Edge-Weighted Networks." ICML (2013)
• Moore, Yan, Zhu, Rouquier & Lane, "Active learning for node classification in assortative and disassortative networks." KDD (2011)
• Park, Moore & Bader, "Dynamic Networks from Hierarchical Bayesian Graph Clustering." PLoS ONE 5(1): e8118 (2010)
• Clauset, Newman & Moore, "Hierarchical structure and the prediction of missing links in networks." Nature 453, 98-101 (2008)
• Clauset, Moore & Newman, "Structural Inference of Hierarchies in Networks." ICML (2006)

(44)

fin
