Supervised Random Walk

This is one of the papers discussed and written up in the course Social Media Analysis 10-802 in Spring 2011.

== Citation ==
Lars Backstrom & Jure Leskovec, "Supervised Random Walks: Predicting and Recommending Links in Social Networks".

== Online version ==

== Summary ==
This paper addresses the problem of Link Prediction using the method of Random walk with restart. Supervised Random Walk is interesting because it ranks nodes using both the network structure and the rich node and edge attributes that exist in the dataset. The method is a supervised learning task where the goal is to learn the parameters of the function that assigns the strength of each edge (the probability of taking that edge) such that a random walker is more likely to reach nodes to which new links will be created in the future.

This method is used to recommend friends on a Facebook dataset and to predict links in the collaboration network of the arXive database.

== Method ==
Typically, [[UsesMethod:: Random walk with restart]] involves assigning a probability to every edge, which indicates the probability that a random walker at either endpoint of the edge will traverse it. These probabilities determine which nodes are closer to the node from which we restart our random walks. One simple method is to assign every edge out of a given node equal probability. However, the ''supervised random walk'' presented in this paper provides a method to learn these probabilities so that a random walker restarting from node ''s'' is more likely to reach "positive" nodes than "negative" nodes. Positive nodes are the ones to which links were formed in the training dataset, and negative nodes are all remaining nodes that are not connected to node ''s''. The method is formulated as an optimization problem for which an efficient estimation procedure is derived.
=== Problem Formulation ===
We are given a graph <math> G = (V,E) </math> and training data consisting of the set <math>D</math> of positive nodes, to which links were formed by ''s'', and the set <math>L</math> of all other nodes in the graph to which links were not formed. Assume that we are learning the probability assignments for edges with respect to a single source node ''s''. Each edge <math>(u,v)</math> has a vector of features <math> \psi_{uv}</math>, which can describe the interaction between nodes <math> u, v</math> or the individual nodes <math>u,v</math> themselves. A parameterized function <math> f_w (\psi_{uv}) = a_{uv}</math> assigns an edge strength or probability <math>a_{uv}</math> given the feature vector <math>\psi_{uv}</math>. The goal is to learn the parameter vector <math>w</math> such that, if we do random walks with restarts from ''s'' on the graph whose edge strengths are assigned by <math>f_w</math>, the random walk stationary distribution <math>p</math> satisfies <math>p_l < p_d</math> for each <math>d \in D, l \in L</math>. This means the walker is more likely to reach the positive nodes <math>D</math> than the nodes in the negative set <math>L</math>. Hence, the optimization problem is to find the parameter vector of minimal norm satisfying this condition, as given below.
:<math>
\min_w F(w) = ||w||^2
</math>
such that
:<math>
\forall d \in D, l \in L: p_l < p_d
</math>
The above formulation imposes the "hard" constraint that the stationary probability of every negative node must be less than that of every positive node. The constraint is relaxed by introducing a loss function <math> h </math> such that <math> h(p_l - p_d) = 0 </math> if <math> p_l - p_d < 0 </math>, but <math> h(p_l - p_d) > 0 </math> if <math> p_l - p_d > 0 </math>, i.e. when the constraint is violated. With these "soft" constraints, the optimization problem becomes
:<math>
\min_w F(w) = ||w||^2 + \lambda \sum_{d\in D, l\in L} h(p_l - p_d)
</math>
In the above formulation, <math> \lambda </math> is a regularization parameter that trades off model complexity (measured by the norm of <math> w </math>) against the fit of the model (how many constraints are violated).
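To make the objective concrete, here is a minimal Python sketch (not from the paper) of evaluating the soft objective for a given stationary distribution; the names <code>p</code>, <code>D</code>, <code>L</code>, and the loss <code>h</code> are assumed to be supplied by the caller.
<pre>
import numpy as np

def soft_objective(w, p, D, L, h, lam=1.0):
    """F(w) = ||w||^2 + lambda * sum over d in D, l in L of h(p_l - p_d)."""
    penalty = sum(h(p[l] - p[d]) for d in D for l in L)
    return np.dot(w, w) + lam * penalty
</pre>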
=== Solving the Optimization Problem ===
To solve the optimization problem, the objective is differentiated with respect to <math> w </math>. This requires, first, that the loss function <math> h </math> be differentiable and, second, a relationship between the parameters <math> w </math> and the random walk scores <math> p </math>. Given the edge strengths <math> a_{uv} = f_w(\psi_{uv})</math>, the stochastic transition matrix <math> Q' </math> is defined as
:<math>\displaystyle
\begin{array}{lcll}
Q'_{uv} & = & \frac{a_{uv}}{\sum_w a_{uw}}\ & if\ (u,v)\ \in E\\
& = & 0 \ & otherwise\\
\end{array}
</math>
From this stochastic transition matrix <math> Q' </math> we obtain the transition probability matrix <math> Q </math> used by the [[UsesMethod::Random walk with restart]], where <math> \alpha </math> is the restart probability:
:<math>
Q_{uv} = (1-\alpha) Q'_{uv} + \alpha \mathbf{1}(v = s)
</math>
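As an illustration (not the paper's code), here is a minimal sketch of assembling <math> Q' </math> and <math> Q </math> from precomputed edge strengths; the dense-matrix representation and the function name are illustrative assumptions.
<pre>
import numpy as np

def transition_matrices(n, strengths, s, alpha):
    """Build the row-normalized matrix Q' and the restart matrix Q.

    strengths: dict mapping an edge (u, v) to its strength a_uv,
    s: index of the source node, alpha: restart probability.
    """
    A = np.zeros((n, n))
    for (u, v), a_uv in strengths.items():
        A[u, v] = a_uv
    row_sums = A.sum(axis=1, keepdims=True)
    Q_prime = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    Q = (1 - alpha) * Q_prime
    Q[:, s] += alpha  # with probability alpha the walker jumps back to s
    return Q_prime, Q
</pre>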
Given this transition probability matrix, the stationary distribution <math> p </math> of the random walk with restart is given by the eigenvector equation
:<math>
p^T = p^T Q
</math>
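The stationary distribution can be computed by power iteration; a small sketch, assuming <code>Q</code> is the dense matrix built above.
<pre>
import numpy as np

def stationary_distribution(Q, n_iters=100, tol=1e-12):
    """Iterate p <- p Q until convergence, so that p^T = p^T Q."""
    n = Q.shape[0]
    p = np.full(n, 1.0 / n)
    for _ in range(n_iters):
        p_next = p @ Q
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
</pre>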
This gives the relationship between the parameters <math> w </math> and the stationary distribution <math> p </math>. Differentiating the objective, and writing <math> \delta_{ld} = p_l - p_d </math>, we get
:<math> \frac{\partial F(w)}{\partial w} = 2w + \lambda \sum_{l,d} \frac{\partial h(p_l - p_d)}{\partial w}</math>
:<math> \frac{\partial F(w)}{\partial w} = 2w + \lambda \sum_{l,d} \frac{\partial h(\delta_{ld})}{\partial \delta_{ld} } \left( \frac{\partial p_l}{\partial w} - \frac{\partial p_d}{\partial w}\right)</math>
Now, since <math> p </math> is the principal eigenvector of the transition probability matrix, we have <math> p_u = \sum_j p_j Q_{ju} </math> and hence
:<math> \frac{\partial p_u}{\partial w} = \sum_j \left( Q_{ju} \frac{\partial p_j}{\partial w} + p_j \frac{\partial Q_{ju}}{\partial w} \right) </math>
In the above equation, <math> \frac{\partial p_u}{\partial w} </math> can be computed recursively by a [http://en.wikipedia.org/wiki/Power_iteration power iteration]-like procedure, and <math> \frac{\partial Q_{ju}}{\partial w} </math> can be computed directly from the definition of the transition probability matrix.
The optimization problem is then solved using the [[UsesMethod::Gradient Descent]] method, which minimizes <math> F(w) </math>.
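A minimal sketch of how the recursion for <math> \frac{\partial p_u}{\partial w} </math> could be iterated jointly with the power iteration; the tensor <code>dQ</code> of derivatives <math> \frac{\partial Q_{ju}}{\partial w} </math> is assumed to be computed elsewhere from the definition of <math> Q </math>, and all names are illustrative.
<pre>
import numpy as np

def pagerank_with_gradient(Q, dQ, n_iters=100):
    """Jointly iterate p_u = sum_j p_j Q_ju and its derivatives w.r.t. w.

    Q:  (n, n) transition matrix.
    dQ: (k, n, n) array with dQ[m, j, u] = d Q_ju / d w_m.
    Returns p of shape (n,) and dp of shape (k, n).
    """
    n, k = Q.shape[0], dQ.shape[0]
    p = np.full(n, 1.0 / n)
    dp = np.zeros((k, n))
    for _ in range(n_iters):
        # dp_u/dw_m = sum_j ( Q_ju * dp_j/dw_m + p_j * dQ_ju/dw_m ), using the current p
        dp = dp @ Q + np.einsum('j,mju->mu', p, dQ)
        p = p @ Q
    return p, dp
</pre>
A full training loop would plug <code>dp</code> into the expression for <math> \frac{\partial F(w)}{\partial w} </math> above and repeatedly update <math> w \leftarrow w - \eta \frac{\partial F(w)}{\partial w} </math> for some step size <math> \eta </math>.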
== Datasets Used ==
For link prediction and link recommendation, the paper uses the following two datasets:
* [[UsesDataset::Facebook dataset]]: a dataset of Facebook users in Iceland. Iceland was chosen because Facebook penetration there was highest, so new links were formed more rapidly than in other countries.
* Collaboration network in the [[UsesDataset::arXive]] database.
== Experiments ==
The paper evaluates node ranking with supervised random walks on both a synthetic dataset and real-world datasets.
=== Experimental Setup ===
The following are some general considerations regarding the loss function, the edge strength function, the choice of <math> \alpha </math>, and the regularization parameter <math> \lambda </math>.
==== Edge Strength Function ====
The edge strength function, parameterized by <math> w </math>, takes the feature vector <math> \psi_{uv} </math> as input and outputs the edge strength. Some possible choices are listed below (a small code sketch follows the list):
* Exponential edge strength: <math> a_{uv} = \exp(\psi_{uv}\cdot w) </math>
* Logistic edge strength: <math> a_{uv} = \frac{1}{1 + \exp(-\psi_{uv}\cdot w)} </math>
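A direct transcription of these two choices into Python (a sketch; <code>psi_uv</code> and <code>w</code> are assumed to be NumPy vectors):
<pre>
import numpy as np

def exponential_strength(psi_uv, w):
    """a_uv = exp(psi_uv . w)"""
    return np.exp(np.dot(psi_uv, w))

def logistic_strength(psi_uv, w):
    """a_uv = 1 / (1 + exp(-psi_uv . w))"""
    return 1.0 / (1.0 + np.exp(-np.dot(psi_uv, w)))
</pre>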
==== Loss Function ====
As noted earlier, the loss function <math> h </math> must be differentiable. Some of the possible loss functions considered are listed below (a code sketch follows):
* Squared loss with margin ''b'':
:<math>
h(x) = \max\{x+b,0\}^2
</math>
* [http://en.wikipedia.org/wiki/Huber_loss_function Huber loss] with margin ''b'' and window ''z'' > ''b'':
:<math>
\begin{array}{lcll}
h(x) & = & 0 &if\ \ x \le -b\\
& = & (x+b)^2/(2z) & if\ \ -b < x \le z-b\\
& = & (x+b)-z/2 & if\ \ x> z-b\\
\end{array}
</math>
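The same two loss functions in code (a sketch; the default margin and window values are illustrative, not taken from the paper):
<pre>
def squared_hinge_loss(x, b=0.1):
    """h(x) = max(x + b, 0)^2 with margin b."""
    return max(x + b, 0.0) ** 2

def huber_loss(x, b=0.1, z=1.0):
    """Huber-style loss with margin b and window z > b."""
    if x <= -b:
        return 0.0
    if x <= z - b:
        return (x + b) ** 2 / (2.0 * z)
    return (x + b) - z / 2.0
</pre>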
==== Choice of <math> \alpha </math> ====
The choice of <math> \alpha</math> determines how far the random walk goes before restarting from ''s''. As <math> \alpha </math> approaches 1, walks of more than two hops become increasingly unlikely.
==== Regularization parameter <math> \lambda </math> ====
This parameter controls over-fitting of the model; in this paper <math> \lambda = 1 </math> is used, which gave the best performance.
=== Experimental Results ===
==== Synthetic dataset ====
They generate a scale-free graph with 10,000 nodes. For each edge <math> (u,v) </math> they create two independent Gaussian features with mean 0 and variance 1. The strength of each edge is given by <math> a_{uv} = \exp(\psi_{uv1} - \psi_{uv2})</math>, and hence <math> w^* = [1, -1]</math>. On this graph they run a random walk with restart with <math> \alpha = 0.2 </math> and obtain the PageRank scores <math> p^* </math>. Using these scores they pick the destination (positive) nodes in two ways: first, by taking the top ''K'' nodes, and second, by picking ''K'' nodes with probability proportional to their scores. They also add Gaussian noise of varying magnitude to the features and measure how performance changes. A sketch of this data-generation setup is given below.
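A sketch of the synthetic data generation (the Barabási–Albert generator and its parameters are an illustrative choice; the paper only states that the graph is scale-free):
<pre>
import numpy as np
import networkx as nx

def synthetic_graph(n=10000, m=2, seed=0):
    """Scale-free graph with two N(0, 1) features per edge; true parameters w* = [1, -1]."""
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    features, strengths = {}, {}
    for (u, v) in G.edges():
        psi = rng.normal(0.0, 1.0, size=2)           # two independent Gaussian features
        features[(u, v)] = psi
        strengths[(u, v)] = np.exp(psi[0] - psi[1])  # a_uv = exp(psi_uv1 - psi_uv2)
    return G, features, strengths
</pre>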
They generate 100 graphs in this way, using 50 graphs for learning the parameters and 50 graphs for testing. The performance measure is AUC (area under the curve), which measures prediction accuracy; AUC = 1 means the predictions were perfect. The figure below shows the performance both for the deterministic top-''K'' nodes and for the probabilistically chosen ''K'' nodes. The unweighted version is the case where we assign equal weight to all edges. The performance of the learned parameters (blue) is better than the unweighted version (red), and, most importantly, as the noise increases the learned parameters perform better than the true parameters (green) in both the deterministic and the probabilistic settings.
+ | |||
+ | [[File:SyntheticPerformanceSRW.png]] | ||
==== Real-world datasets ====
They run the method on both the [[UsesDataset::Facebook dataset]] and the [[UsesDataset::arXive]] collaboration dataset and show that supervised random walks outperform both a typical random walk with restart and a logistic regression on extracted network features for predicting the nodes to which a given ''s'' will link. Since the graphs are very large, only nodes within a small number of hops of ''s'' are considered as candidates, so that the size of the transition matrix does not blow up; a sketch of such a candidate restriction is given below.
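A sketch of this kind of candidate restriction, assuming an undirected <code>networkx</code> graph; the two-hop cutoff is illustrative.
<pre>
import networkx as nx

def candidate_nodes(G, s, max_hops=2):
    """Nodes within max_hops of s, excluding s itself and its current neighbors."""
    dist = nx.single_source_shortest_path_length(G, s, cutoff=max_hops)
    return [v for v, d in dist.items() if d > 1]
</pre>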
== Conclusion ==
The supervised random walk algorithm takes into consideration both the network structure and the features of the nodes and edges, and therefore performs better than a typical random walk with restarts that assigns all edges equal weight. The problem is formulated as an optimization problem and solved using gradient descent. Experiments on a synthetic dataset and on real-world Facebook and co-authorship datasets show that it outperforms the baselines. The method can also be applied to other tasks such as link recommendation, ranking nodes in a graph, and missing link analysis.