Tom Broxton et al., Catching a viral video, J Intell Inf Syst 2011

From Cohen Courses
== Summary ==

This is a [[Category::paper]] from Google Research presenting a preliminary analysis of viral videos [http://en.wikipedia.org/wiki/Viral_video] ([[AddressesProblem::Viral Video Analysis]]). Since the data set used in the study is large-scale, confidential, and unavailable to other researchers, the reported conclusions are valuable.

Specifically, different studies reach the same conclusion that the most distinguishing characteristic of a viral video is its lifespan: compared with "popular videos", which are also capable of attracting large numbers of views, viral videos gain traction in social media quickly and fade quickly as well.
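This lifespan contrast can be made concrete with a toy measure: the share of a video's views that arrive near its peak day. The daily view counts below are invented for illustration, not taken from the paper.

```python
# Sketch: quantifying the "short lifespan" finding, assuming daily view counts
# per video are available. A sharply-peaked (viral) series concentrates its
# views near the peak day; a steadily-popular one spreads them out.

def peakedness(daily_views, window=3):
    """Fraction of all views that fall within +/- `window` days of the peak day."""
    total = sum(daily_views)
    if total == 0:
        return 0.0
    peak = max(range(len(daily_views)), key=lambda i: daily_views[i])
    lo, hi = max(0, peak - window), min(len(daily_views), peak + window + 1)
    return sum(daily_views[lo:hi]) / total

# Hypothetical daily view counts for two videos.
viral   = [0, 2, 10, 500, 300, 40, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0]
popular = [50, 55, 60, 58, 62, 57, 61, 59, 60, 58, 57, 62, 60, 59, 61, 58]

print(peakedness(viral))    # close to 1.0: views concentrated around the peak
print(peakedness(popular))  # noticeably lower: views spread over time
```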
  
== Data set ==

1.5 million videos randomly selected from the set of videos uploaded to YouTube between April 2009 and March 2010. Each video
== Conclusions ==

First of all, the authors categorize the videos into 10 groups according to their level of "socialness".
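A sketch of such a segmentation, under the assumption (not defined in this summary) that "socialness" is each video's fraction of views arriving via social referrers, split into deciles:

```python
# Sketch: grouping videos into 10 "socialness" buckets. The metric used here
# (fraction of views from social referrers, bucketed by decile) is an assumed
# stand-in for the paper's exact definition.

def socialness_groups(social_fraction_by_video, n_groups=10):
    """Map group index (0 = least social) to the list of video ids in that group."""
    ranked = sorted(social_fraction_by_video, key=social_fraction_by_video.get)
    groups = {g: [] for g in range(n_groups)}
    for rank, vid in enumerate(ranked):
        groups[rank * n_groups // len(ranked)].append(vid)
    return groups

fractions = {f"v{i}": i / 20 for i in range(20)}  # toy data: 20 videos
groups = socialness_groups(fractions)
print(groups[0])  # least social videos
print(groups[9])  # most social videos
```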
  
 
=== Social segmentation and video growth ===
 
 
The authors fail to elaborate how the weight <math>w_{pq}</math> on each edge is calculated. They only state that the weight increases with the directed edit distance and with the frequency of q.
  
=== Clustering ===
 
The goal of [[UsesMethod::Clustering]] is to retrieve all ''single-rooted'' components, so that all phrases in a component are closely related, by deleting a set of edges of minimum total weight. A sub-graph of the DAG is single-rooted if it contains exactly one root node (out-degree = 0). Like many other clustering problems, this one proves to be NP-hard, so the authors propose three heuristics toward a feasible clustering solution. Using these heuristics (although, in my opinion, their individual contribution is obscure), the authors found that keeping an edge to the shortest phrase yields a 9% improvement over the baseline, keeping an edge to the most frequent phrase yields 12%, and greedily assigning each node to the cluster with the most edges ([[UsesMethod::Hill Climbing]]) yields 13%. The experimental result ([[UsesMethod::Network Structure Analysis]]) demonstrates that the volume distribution for both phrases (solid blue) and phrase clusters (dashed green) generated by their clustering method follows a power-law distribution.
 
[[File:Meme-tracking_and_the_Dynamics_of_the_News_Cycle_3.png‎]]
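The single-rooted decomposition can be sketched as follows. The heuristic shown (keep each node's heaviest outgoing edge, drop the rest) is an illustrative stand-in for the paper's three heuristics, and the toy phrase DAG is invented for the example.

```python
# Sketch: single-rooted decomposition of a phrase DAG given as
# {phrase: [(target_phrase, weight), ...]}, edges pointing toward longer
# phrases. Keeping exactly one outgoing edge per node guarantees every
# resulting component has exactly one root (out-degree 0).

def single_rooted_clusters(dag):
    parent = {}                      # node -> the one target we keep an edge to
    for node, out_edges in dag.items():
        if out_edges:
            parent[node] = max(out_edges, key=lambda e: e[1])[0]

    def root(node):                  # follow kept edges to an out-degree-0 node
        while node in parent:
            node = parent[node]
        return node

    nodes = set(dag) | {t for edges in dag.values() for t, _ in edges}
    clusters = {}
    for node in nodes:
        clusters.setdefault(root(node), set()).add(node)
    return clusters

dag = {
    "lipstick on a pig": [("you can put lipstick on a pig", 3.0),
                          ("putting lipstick on a pig", 1.0)],
    "putting lipstick on a pig": [("you can put lipstick on a pig", 2.0)],
}
print(single_rooted_clusters(dag))  # one cluster rooted at the longest phrase
```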
 
 
== Data set ==
 
90 million news articles and blog posts (390 GB) collected over the final three months of the 2008 U.S. Presidential Election (from August 1 to October 31, 2008).
 
 
== Experimental Result ==
 
 
Based on the 35,800 non-trivial clusters (those containing at least two phrases), the authors extracted the 50 largest threads, which can be regarded as clusters of clusters, depicted in the following famous figure.
 
 
[[File:Meme-tracking and the Dynamics of the News Cycle 2.png]]
 
 
From the above figure we can not only get a sense of the news cycle but also see which news items were popular in each period. In addition, the authors summarize their findings through global analysis and local analysis.
 
 
=== Global Analysis ===
 
The authors compare the dynamics of news threads to those of an ecosystem and claim two ingredients affect the dynamics: '''imitation''' (different sources imitate one another) and '''recency''' (up-to-date news is always favored over old news). Based on this idea, the authors propose a generative model based on the famous preferential attachment model ([[UsesMethod::BA model]]). At each discrete time step <math>t = 1,2,3,...</math>, a source chooses thread j with probability proportional to <math>f(n_j)\delta(t-t_j)</math>, where <math>f(\cdot)</math> is a monotonically increasing function and <math>n_j</math> denotes the number of stories about thread j; <math>\delta(\cdot)</math> is a monotonically decreasing function and <math>t_j</math> is the time thread j first appeared. Intuitively, the attachment is governed by these two factors and is preferential both to "richer" threads ('''imitation''') and to novel threads ('''recency''').
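A minimal simulation of this generative model, with assumed concrete choices for <math>f</math> (linear) and <math>\delta</math> (power-law decay) and staggered thread start times, none of which are specified in the summary:

```python
# Sketch: at each step a source picks thread j with probability proportional
# to f(n_j) * delta(t - t_j). The specific f, delta, and thread start times
# below are illustrative choices, not the paper's.

import random

def simulate(n_threads=5, steps=2000, seed=0):
    rng = random.Random(seed)
    f = lambda n: n + 1                    # monotonically increasing (imitation)
    delta = lambda age: (age + 1) ** -1.5  # monotonically decreasing (recency)
    birth = {j: j * steps // n_threads for j in range(n_threads)}
    count = {j: 0 for j in range(n_threads)}
    for t in range(steps):
        alive = [j for j in birth if birth[j] <= t]
        weights = [f(count[j]) * delta(t - birth[j]) for j in alive]
        count[rng.choices(alive, weights=weights)[0]] += 1
    return count

print(simulate())  # stories per thread; newer threads steal attention from older ones
```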
 
 
An interesting theoretical property of the dynamics is that:
 
 
Suppose we focus on thread j and let <math>X(t)</math> be the thread's volume at time t and <math>t_j = 0</math> then we have:
 
 
<math>
X(t+1) = c f(X(t)) \delta(t)
</math>
 
 
Subtracting <math>X(t)</math> and dividing by <math>\delta(t)</math> on both sides, we have:
 
 
<math>
 
\frac{X(t+1)-X(t)}{\delta(t)} = \frac{dX}{d\delta} = cf(X(t))-\frac{X(t)}{\delta(t)}
 
</math>
 
 
which is essentially a differential equation. For particular choices of <math>f(\cdot)</math> and <math>\delta(\cdot)</math> we can obtain a closed form for X.
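As a worked example (with assumed concrete choices, not ones given in the paper), take <math>f(x) = x</math> and <math>\delta(t) = e^{-\lambda t}</math>. The recurrence <math>X(t+1) = cX(t)e^{-\lambda t}</math> then telescopes:

<math>
X(t) = X(0) \prod_{i=0}^{t-1} c\, e^{-\lambda i} = X(0)\, c^{t}\, e^{-\lambda t(t-1)/2}
</math>

so <math>\log X(t)</math> is a downward parabola in t: the volume first grows (imitation dominates) and then decays (recency dominates), peaking around <math>t \approx \tfrac{1}{2} + \tfrac{\log c}{\lambda}</math>, which matches the characteristic rise and fall of a news thread.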
 
 
=== Local Analysis ===
 
The authors find some interesting observations through local analysis:
 
 
1. News stories gradually diffuse to blogs after their news-media cycle.


2. Quotes can migrate from blogs to news media.


3. Different sites have different response times for a phrase.
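Observation 3 can be made concrete by comparing when each site's mention volume for a phrase peaks; the hourly counts below are hypothetical.

```python
# Sketch: the response-time lag of each site is the offset between its peak
# mention hour and the earliest peak across sites. Toy data only.

def peak_hour(counts):
    return max(range(len(counts)), key=lambda h: counts[h])

mentions = {                       # hypothetical hourly mention counts for one phrase
    "news_wire":  [0, 1, 9, 30, 12, 4, 2, 1, 0, 0],
    "major_blog": [0, 0, 1, 4, 11, 25, 10, 3, 1, 0],
    "small_blog": [0, 0, 0, 1, 2, 5, 9, 18, 7, 2],
}
first = min(peak_hour(c) for c in mentions.values())
for site, counts in mentions.items():
    print(site, "lags by", peak_hour(counts) - first, "hours")
```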
 
  
 

Revision as of 22:00, 30 September 2012

== Citation ==

Tom Broxton, Yannet Interian, Jon Vaver, Mirjam Wattenhofer: Catching a viral video. Journal of Intelligent Information Systems 2011: 1-19.

== Online version ==

[1]


First of all, pre-processing is conducted to eliminate noisy phrases within the data set:

1. Remove phrases whose word length is less than 4.

2. Remove phrases whose term frequency is less than 10.

3. Eliminate phrases whose domain frequency is at least 25% (to avoid spammers).
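The three filters can be sketched as follows; interpreting "domain frequency" as the share of a phrase's occurrences coming from its most common domain is an assumption of this sketch.

```python
# Sketch of the three pre-processing filters above. `phrases` maps each phrase
# to (term_frequency, domain_fraction); thresholds are the ones listed.

def filter_phrases(phrases, min_words=4, min_tf=10, max_domain_frac=0.25):
    kept = {}
    for phrase, (tf, domain_frac) in phrases.items():
        if len(phrase.split()) < min_words:
            continue                       # rule 1: fewer than 4 words
        if tf < min_tf:
            continue                       # rule 2: term frequency below 10
        if domain_frac >= max_domain_frac:
            continue                       # rule 3: likely spam
        kept[phrase] = (tf, domain_frac)
    return kept

phrases = {                                # toy data, invented for illustration
    "lipstick on a pig": (120, 0.02),
    "on a pig": (300, 0.01),                       # dropped: too short
    "our wonderful new product page": (40, 0.60),  # dropped: spammy domain share
    "yes we can believe in change": (5, 0.01),     # dropped: too rare
}
print(filter_phrases(phrases))
```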

=== Graph construction ===

Each node in the phrase graph represents a phrase extracted from the corpus. An edge is included for a pair of phrases p and q, and always points from the shorter phrase to the longer one. Two phrases are connected if either the edit distance (treating a word as a token) is smaller than 1 or there is at least a 10-word consecutive overlap between them. In other words, an edge implies an inclusion relation between the phrases, and since edges strictly point to longer phrases the graph is a directed acyclic graph (DAG).
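A sketch of this edge test, treating words as tokens. Reading the "directed edit distance" as a word-level edit distance in which insertions from the longer phrase are free is an assumption of this sketch (so distance 0 means the shorter phrase occurs in the longer one as a subsequence); the 10-word overlap threshold is from the text.

```python
# Sketch of the phrase-graph edge criterion. Words are tokens; the directed
# edit distance here charges 1 for deleting or substituting a word of p but
# nothing for inserting words of q (an assumed reading, not the paper's
# exact definition).

def directed_edit_distance(p, q):
    a, b = p.split(), q.split()
    prev = [0] * (len(b) + 1)          # insertions from q are free
    for i, wa in enumerate(a, 1):
        cur = [i]                       # deleting all of p's first i words
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete wa
                           cur[j - 1],                  # insert wb (free)
                           prev[j - 1] + (wa != wb)))   # match / substitute
        prev = cur
    return prev[-1]

def has_k_word_overlap(a, b, k=10):
    wa, wb = a.split(), b.split()
    grams = {tuple(wa[i:i + k]) for i in range(len(wa) - k + 1)}
    return any(tuple(wb[i:i + k]) in grams for i in range(len(wb) - k + 1))

def edge(p, q, k=10):
    """Return (shorter, longer) if the two phrases should be connected, else None."""
    p, q = sorted([p, q], key=lambda s: len(s.split()))
    if directed_edit_distance(p, q) < 1 or has_k_word_overlap(p, q, k):
        return (p, q)
    return None

print(edge("lipstick on a pig", "you can put lipstick on a pig"))
```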



== Notes ==

[3] Support website

[4] J. Leskovec, M. McGlohon, C. Faloutsos, N. Glance, M. Hurst. Cascading behavior in large blog graphs. SDM 2007.

[5] X. Wang and A. McCallum. Topics over time: a non-Markov continuous-time model of topical trends. KDD, 2006.

[6] X. Wang, C. Zhai, X. Hu, R. Sproat. Mining correlated bursty topic patterns from coordinated text streams. KDD, 2007.