x_i > 0,  i = 1, …, N
As it turns out, that last condition is equivalent to choosing the largest eigenvalue λ. So for an actual algorithm, find the roots of the equation det(A - tI) = 0 and order them by size, grabbing the biggest one and calling it λ. Then solve for x by solving the system of equations:
(A - λI)x = 0
Now we have x, the vector of eigenvector centrality scores.
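None of this code is in the original text, but the computation just described can be sketched in a few lines of NumPy, assuming a small symmetric adjacency matrix A. Rather than finding the roots of det(A - tI) = 0 by hand, the sketch asks the eigensolver for the eigenvector belonging to the largest eigenvalue; the function name and example matrix are illustrative only.
```python
import numpy as np

def eigenvector_centrality(A):
    # eigh is for symmetric matrices and returns eigenvalues in ascending order
    eigenvalues, eigenvectors = np.linalg.eigh(A)
    x = eigenvectors[:, -1]      # eigenvector for the largest eigenvalue
    x = np.abs(x)                # entries can be taken positive (Perron-Frobenius)
    return x / x.sum()           # scale so the scores sum to 1

# A small symmetric adjacency matrix for a 4-node example graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(eigenvector_centrality(A))
```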
Note this doesn't give us much of a feel for eigenvector centrality, even if it gives us a way to compute it. You can get that feel by thinking about it as the limit of a simple iterative scheme (although that claim requires proof, which you can find, for example, here).
Namely, start with a vector whose entries are just the degrees of the nodes, perhaps scaled so that the sum of the entries is 1. The degrees themselves aren't giving us a real understanding of how interconnected a given node is, though, so in the next iteration, add the degrees of all the neighbors of a given node, again scaled. Keep iterating on this, adding degrees of neighbors one step further out each time. In the limit as this iterative process goes on forever, we'll get the eigenvector centrality vector.
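As an illustrative sketch (not from the text), this iterative scheme is a power iteration and can be written in a few lines, reusing the adjacency matrix A from the earlier example; the function name and iteration count are assumptions.
```python
import numpy as np

def iterative_centrality(A, iterations=100):
    x = A.sum(axis=1)        # start from the degree of each node
    x = x / x.sum()          # scale so the entries sum to 1
    for _ in range(iterations):
        x = A @ x            # each node picks up its neighbors' current scores
        x = x / x.sum()      # rescale after every step
    return x
```
Under the usual conditions (a connected, non-bipartite graph), this should converge, up to scaling, to the same vector as the direct eigendecomposition above, which is the limit the text refers to.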
A First Example of Random Graphs: The Erdos-Renyi Model
Let's work out a simple example where a network can be viewed as a single realization of an underlying stochastic process. Namely, the existence of a given edge follows a probability distribution, and all the edges are considered independently.
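As a hedged sketch of what "a single realization" means in code (the function and parameter names are invented for illustration): each of the N(N-1)/2 possible edges is included independently with the same probability p.
```python
import random

def erdos_renyi(N, p, seed=None):
    rng = random.Random(seed)
    edges = []
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:    # each possible edge is an independent coin flip
                edges.append((i, j))
    return edges

# One realization of the underlying stochastic process
print(erdos_renyi(N=10, p=0.3, seed=42))
```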