Supervised Random Walk

This is one of the papers discussed in the course Social Media Analysis 10-802 in Spring 2011.

Citation

Lars Backstrom and Jure Leskovec. "Supervised Random Walks: Predicting and Recommending Links in Social Networks." In Proceedings of WSDM 2011.

Online version

Link to paper

Summary

This paper addresses the problem of Link Prediction using the method of random walk with restart. Supervised Random Walk is interesting because it ranks nodes using both the network structure and the rich node and edge attributes that exist in the dataset. The method is a supervised learning task where the goal is to learn the parameters of a function that assigns each edge a strength (the probability of taking that edge) such that a random walker is more likely to reach nodes to which new links will be created in the future.

The method is used to recommend friends on a Facebook dataset and to predict links in a collaboration network from the arXiv database.

Method

Typically, a random walk with restart assigns a probability to every edge, indicating how likely a random walker positioned at either endpoint of the edge is to traverse it. These probabilities determine which nodes are close to the node <math>s</math> from which the random walks restart. One simple method is to give every edge out of a given node equal probability. The supervised random walk presented in this paper instead provides a method to learn these probabilities, so that a random walker restarting from node <math>s</math> is more likely to reach the "positive" nodes than the "negative" nodes. Positive nodes are those to which <math>s</math> formed links in the training data, and negative nodes are all remaining nodes not connected to <math>s</math>. The method is formulated as an optimization problem for which an efficient estimation procedure is derived.
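To make this concrete, here is a minimal sketch, in Python, of a random walk with restart given a matrix of edge strengths. This is not the authors' code; the restart probability alpha and the power-iteration tolerances are illustrative choices.

<pre>
import numpy as np

def stationary_distribution(strengths, s, alpha=0.15, tol=1e-10, max_iter=1000):
    """Stationary distribution of a walk that restarts at node s
    with probability alpha at every step.

    strengths: (n, n) array; strengths[u, v] > 0 iff there is an edge u -> v.
    """
    A = np.asarray(strengths, dtype=float)
    n = A.shape[0]
    row_sums = A.sum(axis=1, keepdims=True)
    # Row-normalize edge strengths into transition probabilities.
    Q = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    restart = np.zeros(n)
    restart[s] = 1.0
    # Nodes with no out-edges jump straight back to s.
    Q[row_sums.ravel() == 0] = restart
    # With probability alpha the walker teleports back to the start node s.
    Q = (1 - alpha) * Q + alpha * restart
    # Power iteration: repeat p <- p Q until convergence.
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_next = p @ Q
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
</pre>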

Problem Formulation

We are given a graph and training data consisting of the set <math>D</math> of positive nodes, to which <math>s</math> formed links, and the set <math>L</math> of all other nodes in the graph, to which links were not formed. Assume that we are learning the probability assignments for edges with respect to a single source node <math>s</math>. Each edge <math>(u, v)</math> has a vector of features <math>\psi_{uv}</math>, which could describe the interaction between nodes <math>u</math> and <math>v</math> or properties of the individual nodes. There is a parameterized function <math>f_w(\psi_{uv})</math> that assigns an edge strength (or probability) <math>a_{uv}</math> given the feature vector <math>\psi_{uv}</math>. The goal is to learn the parameters <math>w</math> such that, if we run random walks with restarts from <math>s</math> on the graph whose edge strengths are assigned by <math>f_w</math>, the stationary distribution <math>p</math> of the random walk satisfies <math>p_d > p_l</math> for each <math>d \in D</math> and <math>l \in L</math>. This means the walker is more likely to reach the positive nodes than the nodes in the negative set. Hence, the optimization problem is to find a regularized parameter vector <math>w</math> satisfying this condition; a sketch of one possible <math>f_w</math> follows, and the optimization problem itself is given after it.
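As a sketch of how <math>f_w</math> might look, the example below uses a logistic edge strength function, one natural choice; the helper strength_matrix and its dictionary-of-features input are assumptions made for illustration, not notation from the paper.

<pre>
import numpy as np

def edge_strength(psi, w):
    """Logistic edge strength a_uv = f_w(psi_uv) = 1 / (1 + exp(-psi . w))."""
    return 1.0 / (1.0 + np.exp(-np.dot(psi, w)))

def strength_matrix(features, w, n):
    """Build the (n, n) edge strength matrix from per-edge features.

    features: dict mapping an edge (u, v) to its feature vector psi_uv.
    """
    A = np.zeros((n, n))
    for (u, v), psi in features.items():
        A[u, v] = edge_strength(psi, w)
    return A
</pre>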


<math>
\min_w F(w) = ||w||^2
</math>

such that

<math>
\forall d \in D, l \in L: p_l < p_d
</math>

The above formulation imposes the "hard" constraint that the stationary probability of every negative node should be less than that of every positive node. The constraint is relaxed by introducing an error function <math>h</math> such that <math>h(p_l - p_d) = 0</math> if <math>p_l - p_d < 0</math>, but <math>h(p_l - p_d) > 0</math> when <math>p_l - p_d > 0</math>, i.e., when the constraint is violated. With these "soft" constraints, the optimization problem becomes:

<math>
\min_w F(w) = ||w||^2 + \lambda \sum_{d \in D, l \in L} h(p_l - p_d)
</math>

In the above formulation, <math>\lambda</math> is a regularization parameter that trades off the complexity of the model (measured by the norm of <math>w</math>) against its fit (how many constraints are violated).
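A minimal sketch of evaluating this objective, assuming a logistic form for <math>h</math> (with an illustrative smoothing width b) and reusing the stationary_distribution and strength_matrix sketches from above:

<pre>
import numpy as np

def h(x, b=0.01):
    """Smooth surrogate penalty: near 0 when x < 0 (constraint satisfied),
    near 1 when x > 0 (p_l exceeds p_d, i.e. constraint violated)."""
    return 1.0 / (1.0 + np.exp(-x / b))

def objective(w, features, n, s, D, L, lam=1.0, alpha=0.15):
    """F(w) = ||w||^2 + lambda * sum over d in D, l in L of h(p_l - p_d)."""
    A = strength_matrix(features, w, n)        # edge strengths under w
    p = stationary_distribution(A, s, alpha)   # walk scores for all nodes
    violations = sum(h(p[l] - p[d]) for d in D for l in L)
    return float(np.dot(w, w)) + lam * violations
</pre>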

Solving Optimization Problem

Datasets Used

Experiments

Experimental Setup

Experimental Results

Conclusion

If the number of nodes is very large, we need to restrict the set of potential candidates to nodes that are only a few hops away from <math>s</math>, so that the method does not blow up the size of the transition matrix. A sketch of this pruning idea follows below.
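A hedged sketch of this pruning step, using breadth-first search to keep only nodes within <math>k</math> hops of <math>s</math> as candidates; the adjacency representation and function name are illustrative, not from the paper.

<pre>
from collections import deque

def candidates_within_k_hops(adj, s, k):
    """Return all nodes reachable from s in at most k hops (excluding s).

    adj: dict mapping a node to an iterable of its neighbors.
    """
    seen = {s}
    frontier = deque([(s, 0)])
    while frontier:
        u, depth = frontier.popleft()
        if depth == k:
            continue
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, depth + 1))
    seen.discard(s)
    return seen
</pre>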