Jurgens and Lu ICWSM 2012
Revision as of 22:31, 1 October 2012

Citation

@inproceedings{DBLP:conf/icwsm/JurgensL12,
  author    = {David Jurgens and Tsai-Ching Lu},
  title     = {Temporal Motifs Reveal the Dynamics of Editor Interactions in Wikipedia},
  booktitle = {ICWSM},
  year      = {2012}
}

Online version

Temporal Motifs Reveal the Dynamics of Editor Interactions in Wikipedia


Summary

Underlying the growth of Wikipedia are the cooperative (and sometimes combative) interactions between editors working on the same content. However, most research on Wikipedia editor interactions focuses on cooperative behaviors, which calls for a full analysis of all types of editing behavior, both cooperative and combative. To investigate editor interactions in this context, the paper proposes to represent Wikipedia's revision history as a temporal, bipartite network with multiple node and edge types for users and revisions. From this representation, the authors identify editor interactions as network motifs and show how the motif types capture editing behaviors. They demonstrate the usefulness of motifs in two tasks: (1) classifying pages as combative or cooperative, and (2) analyzing the dynamics of editor behavior to explain Wikipedia's content growth.

Proposed network representation

Definition

They view editor interactions in Wikipedia as a bipartite graph between authors and pages. They extend this representation to encode three additional features: (1) the type of author who made the change, (2) the time at which the change was made, and (3) the magnitude and effect of the change on the page. To do so, they define the bipartite graph of Wikipedia revisions as follows.
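As a concrete illustration, this representation can be sketched as a list of typed, timestamped author-to-page edges from which a page's revision sequence is read off. This is a minimal sketch: the specific author and edit categories below are illustrative placeholders, not the paper's exact taxonomy.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Revision:
    """One edge of the bipartite graph: an author revising a page."""
    author: str        # author node
    author_type: str   # type of author (placeholder categories)
    page: str          # page node
    timestamp: float   # time at which the change was made
    edit_type: str     # classified effect of the change (placeholder labels)
    magnitude: int     # size of the change

@dataclass
class RevisionGraph:
    """Temporal bipartite graph of authors and pages."""
    edges: list = field(default_factory=list)

    def add_revision(self, rev):
        self.edges.append(rev)

    def page_history(self, page):
        """Revisions of one page in temporal order -- the sequence
        that interaction motifs would be extracted from."""
        return sorted((e for e in self.edges if e.page == page),
                      key=lambda e: e.timestamp)

g = RevisionGraph()
g.add_revision(Revision("alice", "registered", "Pittsburgh", 2.0, "add", 120))
g.add_revision(Revision("bob", "anonymous", "Pittsburgh", 1.0, "delete", -40))
history = g.page_history("Pittsburgh")  # bob's edit precedes alice's
```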

Jurgens 2.png


The figure below illustrates a subset of a page's history as a sequence of classified revisions.

Jurgens 1.png


Network derivation from Wikipedia dataset

The Wikipedia revision dataset is derived from the complete revision history of Wikipedia, ending on April 5, 2011. After extracting the article pages that have at least 10 revisions in their history, the resulting dataset contains 2,715,123 articles and 227,034,806 revisions.
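The filtering step above can be sketched as a simple threshold over a mapping from article titles to their revisions; `revisions_by_article` below is a hypothetical toy input, not the actual dump format.

```python
# Keep only articles with at least 10 revisions, as in the paper.
MIN_REVISIONS = 10

# Hypothetical toy data: article title -> list of revision timestamps.
revisions_by_article = {
    "Pittsburgh": list(range(25)),   # 25 revisions -> kept
    "Stub article": [1, 2, 3],       # 3 revisions  -> dropped
}

kept = {title: revs for title, revs in revisions_by_article.items()
        if len(revs) >= MIN_REVISIONS}
```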

Method

Dividing networks into two communities

The author rewrites equation [1] as

Q = \frac{1}{4m} s^T B s,

where s is the column vector whose elements are the s_i, and B is the so-called modularity matrix, with elements B_{ij} = A_{ij} - \frac{k_i k_j}{2m}.

By writing s as a linear combination of the normalized eigenvectors u_i of B, s = \sum_i (u_i^T s) u_i, it is shown that we can express Q as follows:

Q = \frac{1}{4m} \sum_i (u_i^T s)^2 \beta_i,

where \beta_i is the eigenvalue of B corresponding to eigenvector u_i.

The author shows that the maximum of Q is achieved by setting s_i = +1 if the corresponding element of the leading eigenvector (the one whose eigenvalue is largest) is positive, and s_i = -1 otherwise. Thus, the algorithm is as follows: we compute the leading eigenvector of the modularity matrix and divide the vertices into two groups according to the signs of the elements in this vector.
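The two-way spectral split can be sketched in a few lines of NumPy. This is a minimal illustration of the method described above, not the paper's implementation; the example graph (two triangles joined by a bridge edge) is ours.

```python
import numpy as np

def modularity_split(A):
    """Split a network into two communities using the leading
    eigenvector of the modularity matrix.

    A: symmetric adjacency matrix (NumPy array).
    Returns s, a vector of +1/-1 community labels.
    """
    k = A.sum(axis=1)                     # vertex degrees k_i
    m = k.sum() / 2.0                     # number of edges
    B = A - np.outer(k, k) / (2.0 * m)    # B_ij = A_ij - k_i k_j / 2m
    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    leading = eigvecs[:, np.argmax(eigvals)]
    return np.where(leading >= 0, 1, -1)  # split by sign of leading eigenvector

# Two triangles (nodes 0-2 and 3-5) joined by the bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
s = modularity_split(A)  # separates the two triangles
```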

Dividing networks into more than two communities

The author divides networks into multiple communities by repeating the previous method recursively. That is, he uses the algorithm described above first to divide the network into two parts, then divides those parts, and so on.

More specifically, he considers how much the modularity increases when a group is divided into two parts. He shows that this additional contribution to the modularity, \Delta Q, can be expressed in a form similar to that of the previous section, with the modularity matrix replaced by a generalized modularity matrix B^{(g)} whose elements, for vertices i, j in group g, are B^{(g)}_{ij} = B_{ij} - \delta_{ij} \sum_{k \in g} B_{ik}. He then shows that we can apply the same spectral algorithm to maximize \Delta Q.

This algorithm also tells us clearly at what point we need to halt the subdivision process: if there is no division of a subgraph that increases the modularity of the network, or equivalently none that gives a positive value of \Delta Q, we stop.
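The recursive procedure with the \Delta Q stopping rule can be sketched as follows. This is again an illustration under our own naming, not the paper's code: each subgroup is split with the leading eigenvector of its generalized modularity matrix, and the recursion halts when the best split no longer yields positive \Delta Q.

```python
import numpy as np

def recursive_communities(A):
    """Recursively bisect a network into communities, stopping when
    no further split increases the modularity (a sketch of the
    spectral method; names are ours)."""
    k = A.sum(axis=1)
    m = k.sum() / 2.0
    B = A - np.outer(k, k) / (2.0 * m)    # modularity matrix
    communities = []

    def divide(nodes):
        # Generalized modularity matrix restricted to this group:
        # B^(g)_ij = B_ij - delta_ij * sum_k B_ik  (sum over the group).
        Bg = B[np.ix_(nodes, nodes)].copy()
        Bg -= np.diag(Bg.sum(axis=1))
        eigvals, eigvecs = np.linalg.eigh(Bg)
        leading = eigvecs[:, np.argmax(eigvals)]
        s = np.where(leading >= 0, 1.0, -1.0)
        dQ = s @ Bg @ s / (4.0 * m)       # modularity gain of this split
        group1 = [n for n, si in zip(nodes, s) if si > 0]
        group2 = [n for n, si in zip(nodes, s) if si < 0]
        # Halt when the split is degenerate or does not increase modularity.
        if dQ <= 1e-12 or not group1 or not group2:
            communities.append(sorted(nodes))
            return
        divide(group1)
        divide(group2)

    divide(list(range(A.shape[0])))
    return communities

# Two triangles joined by one edge: the method recovers the triangles
# and then refuses to subdivide them further.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
communities = recursive_communities(A)
```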

Nice features of this method

  • We do not need to specify the size of the communities in advance.
  • It has the ability to refuse to divide the network when no good division exists.
    • If the generalized modularity matrix has no positive eigenvalues, there is no division of the network that results in positive modularity, as we can see from equation [2].

Dataset

Experiment

Review

Recommendation for whether or not to assign the paper as required/optional reading in later classes.

Yes.

  • Modularity-based methods are common in community detection tasks. This paper might be a good introduction to the concept of modularity.
  • This paper also illustrates how the optimization problem can be rewritten in terms of the eigenvalues and eigenvectors of a matrix called the modularity matrix, which turns it into an eigenvalue problem. This derivation shows that we can sometimes solve a problem by viewing it from a different angle, which might be a good lesson for when we face hard problems.

Related Papers

Study Plan