Latest revision as of 22:18, 31 March 2011

This is one of the papers discussed and summarized in the course Social Media Analysis 10-802 in Spring 2011.

Citation

Lars Backstrom & Jure Leskovec, "Supervised Random Walks: Predicting and Recommending Links in Social Networks", WSDM 2011.

Online version

Link to paper

Summary

This paper addresses the problem of link prediction using the method of random walk with restarts. Supervised Random Walk is interesting because it ranks nodes based on the network structure as well as the rich node and edge attributes that exist in the dataset. The method is a supervised learning task where the goal is to learn the parameters of a function that assigns the strength of each edge (the probability of taking that edge) such that a random walker is more likely to reach nodes to which new links will be created in the future.

This method is used to recommend friends on the Facebook dataset and to predict links in the collaboration network of the arXiv database.

Method

Typically, random walk with restarts involves assigning a probability to every edge, indicating the probability that a random walker takes that edge given that it is at one of the nodes on either side of the edge. These probabilities decide which nodes are closer to the node from which we restart our random walks. One simple method is to assign every edge out of a given node equal probability. The supervised random walk presented in this paper instead provides a method to learn these probabilities so that a random walker restarting from node <math>s</math> is more likely to reach the "positive" nodes than the "negative" nodes. Positive nodes are those to which links were formed in the training dataset, and negative nodes are all remaining nodes not connected to <math>s</math>. The method is formulated as an optimization problem for which an efficient estimation procedure is derived.
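As a point of reference, the un-supervised baseline with equal edge probabilities can be sketched as follows. This is a toy sketch, not the paper's implementation; the restart value and iteration count are illustrative assumptions.

```python
import numpy as np

def rwr_uniform(adj, s, alpha=0.15, iters=200):
    """Random walk with restarts where every out-edge is equally likely.

    adj   : (n, n) adjacency matrix (assumes every node has an out-edge)
    s     : index of the restart node
    alpha : probability of jumping back to s at each step
    """
    deg = adj.sum(axis=1, keepdims=True)
    Q = (1 - alpha) * (adj / deg)   # uniform transition probabilities
    Q[:, s] += alpha                # restart mass goes back to s
    p = np.full(adj.shape[0], 1.0 / adj.shape[0])
    for _ in range(iters):          # power iteration: p = p Q
        p = p @ Q
    return p
```

The resulting vector ranks all nodes by their proximity to <math>s</math>; supervised random walks replace the uniform step with learned edge strengths.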

Problem Formulation

Given a graph and training data consisting of the set of positive nodes <math>D</math>, to which links were formed by <math>s</math>, and the set of all other nodes in the graph to which links were not formed, <math>L</math>. Assume we are learning the probability assignments for edges with respect to a single source node <math>s</math>. Each edge <math>(u,v)</math> has a vector of features <math>\psi_{uv}</math>, which could be features of the interaction between the nodes or features of the individual nodes. There is a parameterized function <math>f_w(\psi_{uv})</math> that assigns edge strengths (probabilities) given the feature vector. The problem is to learn the parameters <math>w</math> such that, if we do random walks with restarts from <math>s</math> on the graph with edge strengths assigned by <math>f_w</math>, the random walk stationary distribution <math>p</math> has the property that <math>p_d > p_l</math> for each <math>d \in D</math> and <math>l \in L</math>. This means that the walker is more likely to reach the positive nodes than the nodes in the negative set <math>L</math>. Hence, the optimization problem is to learn a regularized parameter vector satisfying this condition, which is given below.

:<math>\min_w F(w) = \|w\|^2</math>

such that

:<math>p_l < p_d \quad \text{for all } d \in D,\ l \in L</math>

The above formulation imposes the "hard" constraint that the stationary probability of every negative node must be less than that of every positive node. The constraint is relaxed by introducing a loss function <math>h</math> such that <math>h(p_l - p_d) = 0</math> if <math>p_l < p_d</math>, but <math>h(p_l - p_d) > 0</math> in the case where the constraint is violated. With these "soft" constraints, the optimization problem becomes

:<math>\min_w F(w) = \|w\|^2 + \lambda \sum_{d \in D,\, l \in L} h(p_l - p_d)</math>

In the above formulation <math>\lambda</math> is a regularization parameter that trades off the complexity of the model (measured by the norm of <math>w</math>) against its fit (how many constraints are violated).
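Under the squared-margin choice of loss (one of the options discussed in the experimental setup below), the soft objective can be sketched as follows; the margin and <math>\lambda</math> defaults are illustrative assumptions, not values from the paper.

```python
import numpy as np

def h(x, b=0.05):
    """Squared loss with margin b: zero whenever p_l + b <= p_d."""
    return np.maximum(x + b, 0.0) ** 2

def objective(w, p, pos, neg, lam=1.0):
    """F(w) = ||w||^2 + lambda * sum over (d, l) pairs of h(p_l - p_d)."""
    penalty = sum(h(p[l] - p[d]) for d in pos for l in neg)
    return float(w @ w + lam * penalty)
```

When every positive node outscores every negative node by the margin, only the regularization term <math>\|w\|^2</math> remains.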

Solving Optimization Problem

In order to solve the formulated optimization problem, the objective is differentiated. Firstly, the loss function <math>h</math> has to be differentiable, and secondly, we need the relationship between the parameters <math>w</math> and the random walk scores <math>p</math>. With edge strengths <math>a_{uv} = f_w(\psi_{uv})</math>, the stochastic transition matrix <math>Q'</math> is given as follows:

:<math>Q'_{uv} = \begin{cases} \dfrac{a_{uv}}{\sum_{k} a_{uk}} & \text{if } (u,v) \in E, \\ 0 & \text{otherwise.} \end{cases}</math>

From this stochastic transition matrix we obtain the transition probability matrix of the random walk with restarts by mixing in the restart probability <math>\alpha</math>:

:<math>Q_{uv} = (1 - \alpha)\, Q'_{uv} + \alpha\, \mathbf{1}(v = s)</math>
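The two steps above can be written as a short sketch, assuming every node has at least one outgoing edge so that the rows of the strength matrix can be normalised.

```python
import numpy as np

def transition_with_restart(A, s, alpha):
    """Build Q from raw edge strengths A (entry a_uv, 0 where no edge).

    Rows of A are normalised to the stochastic matrix Q', then the
    restart probability alpha is mixed in so the walker jumps back to s.
    """
    Qp = A / A.sum(axis=1, keepdims=True)   # assumes every row has an edge
    Q = (1 - alpha) * Qp
    Q[:, s] += alpha
    return Q
```

Each row of the result sums to 1, so it is a valid transition matrix.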

Given this transition probability matrix, the stationary distribution of the random walk with restarts is given by the eigenvector equation

:<math>p^T = p^T Q</math>
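A minimal sketch of the power iteration for this eigenvector equation, assuming a row-stochastic <math>Q</math>; the tolerance and iteration cap are illustrative assumptions.

```python
import numpy as np

def stationary(Q, tol=1e-12, max_iter=2000):
    """Power iteration for the stationary distribution p^T = p^T Q."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(max_iter):
        p_next = p @ Q
        if np.abs(p_next - p).sum() < tol:   # stop once p stops changing
            return p_next
        p = p_next
    return p
```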

With this we can find the relationship between the parameters <math>w</math> and the stationary distribution <math>p</math>. Hence, if we differentiate the optimization problem, writing <math>h'</math> for the derivative of the loss function, we get

:<math>\frac{\partial F(w)}{\partial w} = 2w + \lambda \sum_{d \in D,\, l \in L} h'(p_l - p_d) \left( \frac{\partial p_l}{\partial w} - \frac{\partial p_d}{\partial w} \right)</math>

Now, since we know that <math>p</math> is the principal eigenvector of the transition probability matrix, we have <math>p_u = \sum_j p_j Q_{ju}</math>, and hence

:<math>\frac{\partial p_u}{\partial w} = \sum_j \left( Q_{ju} \frac{\partial p_j}{\partial w} + p_j \frac{\partial Q_{ju}}{\partial w} \right)</math>

In the above equation, <math>\frac{\partial p_u}{\partial w}</math> can be computed by a power-iteration-style method, and <math>\frac{\partial Q_{ju}}{\partial w}</math> can be computed directly from the definition of the transition probability matrix.
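The recursion above can be sketched as a fixed-point iteration for the derivative of <math>p</math> with respect to a single parameter <math>w_k</math>; the matrices and iteration count in the example are illustrative assumptions, not from the paper.

```python
import numpy as np

def dp_dw(Q, dQ, p, iters=500):
    """Fixed-point iteration for the derivative of the stationary
    distribution with respect to one parameter w_k:

        dp_u = sum_j ( Q_ju * dp_j + p_j * dQ_ju )

    Q  : transition matrix
    dQ : element-wise derivative dQ_ju/dw_k (its rows sum to zero)
    p  : stationary distribution of Q
    """
    dp = np.zeros_like(p)
    for _ in range(iters):
        dp = dp @ Q + p @ dQ
    return dp
```

Because the rows of <math>Q</math> sum to 1, the rows of its derivative sum to 0, so the iterate stays in the sum-zero subspace and the iteration converges like a power method.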

The optimization problem is then solved using the gradient descent method, and <math>F(w)</math> is minimized.
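The outer loop can be sketched as plain gradient descent; the learning rate and the toy objective used in the example are assumptions for illustration only.

```python
import numpy as np

def gradient_descent(grad_F, w0, lr=0.1, iters=200):
    """Plain gradient descent: repeatedly step against the gradient."""
    w = np.array(w0, dtype=float)
    for _ in range(iters):
        w = w - lr * grad_F(w)
    return w
```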

Datasets Used

This paper uses the following two datasets for link prediction and link recommendation:

  • Facebook dataset: a dataset of Facebook users in Iceland. Iceland was chosen because Facebook penetration there was the highest, and hence new links were formed more rapidly than in other countries.
  • Collaboration network in the arXiv database.

Experiments

The paper presents experiments on ranking nodes using supervised random walks, both on a synthetic dataset and on real-world datasets.

Experimental Setup

The following are some general considerations regarding the loss function, the edge strength function, the choice of <math>\alpha</math>, and the regularization parameter <math>\lambda</math>.

Edge Strength Function

The edge strength function <math>f_w</math>, with parameter vector <math>w</math>, takes as input the feature vector <math>\psi_{uv}</math> and outputs the edge strength <math>a_{uv}</math>. The following are some possible choices of the edge strength:

  • Exponential edge strength: <math>a_{uv} = \exp(\psi_{uv} \cdot w)</math>
  • Logistic edge strength: <math>a_{uv} = (1 + \exp(-\psi_{uv} \cdot w))^{-1}</math>
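Both choices can be sketched directly as one-liners over the feature vector; the function names here are illustrative, not from the paper.

```python
import numpy as np

def edge_strength_exp(psi, w):
    """Exponential edge strength: a_uv = exp(psi_uv . w)."""
    return np.exp(psi @ w)

def edge_strength_logistic(psi, w):
    """Logistic edge strength: a_uv = 1 / (1 + exp(-psi_uv . w))."""
    return 1.0 / (1.0 + np.exp(-(psi @ w)))
```

The exponential form can produce arbitrarily large strengths, while the logistic form keeps every strength in (0, 1).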

Loss Function

As said earlier, the loss function has to be differentiable. The following is one possible loss function considered:

  • Squared loss with margin <math>b</math>: <math>h(x) = \max(x + b, 0)^2</math>
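A sketch of this loss together with the derivative needed for the gradient of <math>F(w)</math>; the default margin value is an illustrative assumption.

```python
def h_squared_margin(x, b=0.05):
    """Squared loss with margin: h(x) = max(x + b, 0)^2."""
    return max(x + b, 0.0) ** 2

def h_squared_margin_deriv(x, b=0.05):
    """Derivative of the squared loss with margin: h'(x) = 2 max(x + b, 0)."""
    return 2.0 * max(x + b, 0.0)
```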

Choice of <math>\alpha</math>

The choice of <math>\alpha</math> decides how far the random walk goes before restarting from <math>s</math>. As <math>\alpha</math> approaches 1, random walks of more than 2 hops become increasingly unlikely.

Regularization parameter <math>\lambda</math>

This parameter controls the over-fitting of the model; in this paper they set <math>\lambda = 1</math>, which gave the best performance.

Experimental Results

Synthetic dataset

They generate a scale-free graph with 10,000 nodes. For each edge <math>(u,v)</math> they create two independent Gaussian features with mean 0 and variance 1. The strength of each edge is given by <math>a_{uv} = \exp(\psi_{uv1} - \psi_{uv2})</math>, and hence <math>w^* = [1, -1]</math>. On this graph they run the experiment with <math>\alpha = 0.2</math> and obtain the PageRank scores <math>p^*</math>. Using these scores they pick the destination (positive) nodes in two ways: first, by picking the top ''K'' nodes, and second, by picking ''K'' nodes with probability equal to their scores. They also add Gaussian noise to the features, varying the noise level to verify the performance.
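The feature-generation step can be sketched as follows (the scale-free graph construction is omitted; the edge list, random seed, and function name are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_edge_data(edges):
    """For each edge, draw two independent N(0, 1) features and compute
    the true strength a_uv = exp(psi_1 - psi_2), i.e. w* = [1, -1]."""
    psi = rng.standard_normal((len(edges), 2))
    w_star = np.array([1.0, -1.0])
    return psi, np.exp(psi @ w_star)
```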

They generate 100 graphs in the way explained above, using 50 graphs for learning the parameters and 50 graphs for testing the performance. The measure of performance is AUC (area under the curve), which measures the accuracy of the prediction; AUC = 1 means the predictions were perfect. The figures below show the performance measures both for the deterministic top-''K'' nodes and the probabilistic top-''K'' nodes. The un-weighted version is the case where we assign equal weight to all edges. We can see that the performance of the learned parameters (blue) is better than the un-weighted version (red), and most importantly, as the noise increases the learned parameters perform better than the true parameters (green), in both the deterministic and probabilistic settings.
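For ranked node scores, AUC can be computed as the probability that a randomly chosen positive node outscores a randomly chosen negative node; a minimal sketch (the function name and the tie-handling convention are assumptions):

```python
def auc(scores, pos, neg):
    """AUC as the probability that a random positive node outscores a
    random negative node (ties count as half)."""
    wins = 0.0
    for d in pos:
        for l in neg:
            if scores[d] > scores[l]:
                wins += 1.0
            elif scores[d] == scores[l]:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```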

[[File:SyntheticPerformanceSRW.png]]

Real world dataset

They run the method on both the Facebook dataset and the arXiv collaboration dataset and show that the performance of the supervised random walk is better than a typical random walk with restarts, and better than a logistic regression on extracted network features, at predicting the nodes to which a given <math>s</math> gets linked. As the number of nodes in the graph gets very large, they consider only nodes within a small number of hops as potential candidates, so that the method does not blow up the size of the transition matrix.

Conclusion

The supervised random walk algorithm takes into consideration both the network structure and the features of the nodes/edges, and therefore performs better than a typical random walk with restarts, which assigns equal weight to all edges. The problem is formulated as an optimization problem and is solved using gradient descent. Experiments on a synthetic dataset and on real-world Facebook and co-authorship datasets show that it performs better. The method can also be applied to various other tasks, such as link recommendation, ranking nodes in a graph, and missing-link analysis.