Jurgens and Lu ICWSM 2012

From Cohen Courses

Revision as of 22:56, 1 October 2012

Citation

@inproceedings{DBLP:conf/icwsm/JurgensL12,
  author    = {David Jurgens and Tsai-Ching Lu},
  title     = {Temporal Motifs Reveal the Dynamics of Editor Interactions in Wikipedia},
  booktitle = {ICWSM},
  year      = {2012}
}

Online version

Temporal Motifs Reveal the Dynamics of Editor Interactions in Wikipedia


Summary

Underlying the growth of Wikipedia are the cooperative (and sometimes combative) interactions between editors working on the same content. However, most research on Wikipedia editor interactions focuses on cooperative behaviors, which calls for a fuller analysis covering all types of editing behavior, both cooperative and combative. To investigate editor interactions in this context, the paper proposes representing Wikipedia's revision history as a temporal, bipartite network with multiple node and edge types for users and revisions. From this representation, the authors identify author interactions as network motifs and show how the motif types capture editing behaviors. They demonstrate the usefulness of motifs on two tasks: (1) classifying pages as combative or cooperative, and (2) analyzing the dynamics of editor behavior to explain Wikipedia's content growth.

Proposed network representation

Definition

They view editor interactions in Wikipedia as a bipartite graph from authors to pages, and extend this representation to encode three additional features: (1) the type of author who made the change, (2) the time at which the change was made, and (3) the magnitude and effect of the change to the page. The bipartite graph of Wikipedia revisions is then defined as follows.

Jurgens 2.png
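
The representation above can be made concrete with a small sketch. This is a hypothetical encoding (the class and field names are my own, not the authors'): each revision is a typed, timestamped edge from an author node to a page node, carrying the three extra features.

```python
from dataclasses import dataclass

# Hypothetical sketch of the temporal bipartite revision network:
# each edge links an author node to a page node and carries
# (1) the author type, (2) the revision time, and (3) the revision's
# classified effect, matching the three features described above.
@dataclass(frozen=True)
class RevisionEdge:
    author: str       # author node, e.g. a username or IP address
    page: str         # page node, e.g. an article title
    author_type: str  # feature (1), e.g. "registered" or "anonymous"
    timestamp: float  # feature (2), time of the revision
    rev_class: str    # feature (3), e.g. "major-add", "minor-edit", "revert"

# The network is then a time-ordered list of such edges.
history = [
    RevisionEdge("Alice", "Graph theory", "registered", 1301000000.0, "major-add"),
    RevisionEdge("Bob", "Graph theory", "anonymous", 1301000500.0, "revert"),
]
history.sort(key=lambda e: e.timestamp)
```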


The figure below illustrates a subset of a page’s history as a sequence of classified revisions.

Jurgens 1.png


Network derivation from Wikipedia dataset

Data:

  • The Wikipedia revision dataset is derived from a complete revision history of Wikipedia, ending on April 5, 2011.
  • After extracting article pages that have at least 10 revisions, the resulting dataset contains 2,715,123 articles and 227,034,806 revisions.

Revision classes:

  • They selected four high-level categories for revisions: adding, deleting, editing, and reverting.
  • Using (1) the revising author’s comment and (2) the MD5 hash of the article text, a revision can be identified as a revert or not.
  • To classify a revision into one of the other three classes, they use two parameters: (1) the number of whitespace-delimited tokens added to or removed from the page, i.e., its change in size, and (2) the number of tokens whose content was changed.
  • The classification rule is as follows.

Jurgens 3.png
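
The revert check and the three-way classification rule can be sketched in code. The exact decision boundaries come from the figure above, so the branch conditions below are a simplified guess, not the paper's precise rule:

```python
import hashlib

def is_revert(new_text, earlier_hashes):
    # A revision restoring an earlier version verbatim has the same
    # MD5 hash as that version, so a hash lookup detects reverts.
    return hashlib.md5(new_text.encode("utf-8")).hexdigest() in earlier_hashes

def classify(size_change, tokens_changed):
    # size_change: change in whitespace-delimited token count (parameter 1)
    # tokens_changed: number of tokens whose content changed (parameter 2)
    if size_change > 0:
        return "Add"      # the page grew: tokens were added
    if size_change < 0:
        return "Delete"   # the page shrank: tokens were removed
    return "Edit"         # same size, but content changed in place

# Hashes of earlier versions of a page, for the revert check.
earlier = {hashlib.md5("first version".encode("utf-8")).hexdigest()}
```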

  • To further distinguish edits based on the magnitude of their effect in addition to the type, they partition each class into major and minor subcategories, with the exception of Revert.
  • Based on the shape of the effect distributions, the difference between major and minor was selected using the Pareto principle, or “80/20 rule” (Newman, M. 2005. Power laws, pareto distributions and zipf’s law. Contemporary physics 46(5):323–351.).
  • The intuition here is that revisions with small effects account for the majority of the cumulative effect on the content.
  • The figure below shows the distributions of effects for the Add, Delete, and Edit types. Vertical lines indicate the division between major and minor revisions based on the 80/20 rule: 80% of a type’s cumulative effects are due to the revisions to the left of the line.

Jurgens 4.png
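
The 80/20 cutoff itself is straightforward to compute from a sample of effect magnitudes: walk the sorted effects from smallest to largest and return the value at which 80% of the total cumulative effect is reached. A minimal sketch (my own illustration, not code from the paper):

```python
def pareto_cutoff(effects, mass=0.80):
    # Return the effect value at which `mass` (80% by default) of the
    # total cumulative effect is reached, walking from the smallest
    # effect upward; revisions at or below this value are "minor".
    xs = sorted(effects)
    total = sum(xs)
    running = 0.0
    for x in xs:
        running += x
        if running >= mass * total:
            return x
    return xs[-1]

# Many small revisions plus one large one: the small ones alone
# account for over 80% of the cumulative effect, so the cutoff is small.
cutoff = pareto_cutoff([1] * 90 + [10])
```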

Method

Dividing networks into two communities

The author rewrites equation [1] as <math>Q = \frac{1}{4m} \mathbf{s}^T \mathbf{B} \mathbf{s}</math>, where <math>\mathbf{s}</math> is the column vector whose elements are the <math>s_i</math>, and <math>\mathbf{B}</math> is the matrix with elements <math>B_{ij} = A_{ij} - \frac{k_i k_j}{2m}</math>, which is called the modularity matrix.

By writing <math>\mathbf{s}</math> as a linear combination of the normalized eigenvectors <math>\mathbf{u}_i</math> of <math>\mathbf{B}</math>, <math>\mathbf{s} = \sum_i a_i \mathbf{u}_i</math> with <math>a_i = \mathbf{u}_i^T \mathbf{s}</math>, it is shown that we can express <math>Q</math> as follows:

<math>Q = \frac{1}{4m} \sum_i (\mathbf{u}_i^T \mathbf{s})^2 \beta_i,</math>

where <math>\beta_i</math> is the eigenvalue of <math>\mathbf{B}</math> corresponding to eigenvector <math>\mathbf{u}_i</math>.

The author shows that the maximum of <math>Q</math> is achieved by setting <math>s_i = +1</math> if the corresponding element of the leading eigenvector (the one whose eigenvalue is largest) is positive, and <math>s_i = -1</math> otherwise. Thus the algorithm is as follows: compute the leading eigenvector of the modularity matrix and divide the vertices into two groups according to the signs of its elements.
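
This bisection step is a few lines of linear algebra. A sketch of the procedure using NumPy (my own illustration of the standard spectral method, not code from the paper):

```python
import numpy as np

def spectral_bisection(A):
    # Split the network by the signs of the leading eigenvector of the
    # modularity matrix B = A - k k^T / 2m.
    k = A.sum(axis=1)                      # vertex degrees
    m = k.sum() / 2.0                      # number of edges
    B = A - np.outer(k, k) / (2.0 * m)     # modularity matrix
    vals, vecs = np.linalg.eigh(B)         # B is symmetric; vals ascending
    leading = vecs[:, -1]                  # eigenvector of largest eigenvalue
    return np.where(leading > 0, 1, -1)    # group labels from element signs

# Two triangles joined by a single edge: vertices 0-2 and 3-5 should
# land in opposite groups.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
s = spectral_bisection(A)
```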

Dividing networks into more than two communities

The author divides networks into multiple communities by repeating the previous method recursively. That is, he uses the algorithm described above first to divide the network into two parts, then divides those parts, and so on.

More specifically, he considers how much the modularity increases when a group is divided into two parts. He shows that this additional contribution <math>\Delta Q</math> can be expressed in a form similar to the previous section, with the modularity matrix replaced by a generalized modularity matrix. The same spectral algorithm can then be applied to maximize <math>\Delta Q</math>.

This algorithm also tells us clearly at what point to halt the subdivision process: if no division of a subgraph increases the modularity of the network, or equivalently none gives a positive value of <math>\Delta Q</math>, we stop.
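
The recursion with this halting rule can be sketched on top of the bisection step. The generalized modularity matrix for a subgroup subtracts the row sums of the B-submatrix from its diagonal. A hypothetical sketch (my own, not the author's code):

```python
import numpy as np

def communities(A):
    # Recursive spectral community detection: bisect via the leading
    # eigenvector of the generalized modularity matrix, and stop when
    # no division gives a positive modularity gain (delta Q).
    k = A.sum(axis=1)
    m = k.sum() / 2.0
    B = A - np.outer(k, k) / (2.0 * m)
    result = []

    def split(group):
        # Generalized modularity matrix for this subgroup: subtract the
        # row sums of the B-submatrix from its diagonal.
        Bg = B[np.ix_(group, group)].copy()
        np.fill_diagonal(Bg, Bg.diagonal() - Bg.sum(axis=1))
        vals, vecs = np.linalg.eigh(Bg)
        if vals[-1] <= 1e-8:                 # no positive eigenvalue: indivisible
            result.append(group)
            return
        s = np.where(vecs[:, -1] > 0, 1.0, -1.0)
        dQ = s @ Bg @ s / (4.0 * m)          # this division's contribution to Q
        left = [v for v, sv in zip(group, s) if sv > 0]
        right = [v for v, sv in zip(group, s) if sv < 0]
        if dQ <= 1e-8 or not left or not right:
            result.append(group)             # halting condition: dQ not positive
            return
        split(left)
        split(right)

    split(list(range(A.shape[0])))
    return result

# Same two-triangle example: the recursion splits the graph once and
# then declares each triangle indivisible.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
parts = communities(A)
```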

Nice features of this method

  • We do not need to specify the size of communities.
  • It has the ability to refuse to divide the network when no good division exists.
    • If the generalized modularity matrix has no positive eigenvalues, then no division of the network yields positive modularity, which we can see from equation [2].

Dataset

Experiment

Review

Recommendation for whether or not to assign the paper as required/optional reading in later classes.

Yes.

  • Modularity-based methods are common in the community detection task. This paper might be a good introduction to the concept of modularity.
  • This paper also illustrates how the optimization problem can be rewritten in terms of the eigenvalues and eigenvectors of a matrix called the modularity matrix, turning it into an eigenvalue problem. This derivation shows that a problem can sometimes be solved by viewing it from a different angle, which is a good lesson for when we face hard problems.

Related Papers

Study Plan