This is a [[Category::Paper]] [http://malt.ml.cmu.edu/mw/index.php/User:Mings I] read in 10802 social media analysis.

== Citation ==

[http://www.cs.uiuc.edu/~hanj/pdf/kdd10_xlin.pdf PET: A Statistical Model for Popular Events Tracking in Social Communities]. Cindy Xide Lin, Bo Zhao, [http://www-personal.umich.edu/~qmei/ Qiaozhu Mei], [http://www.cs.uiuc.edu/~hanj/ Jiawei Han]. KDD, 2010.
== Problem ==

In this paper, the authors try to [[AddressesProblem::observe and track the popular events or topics]] that evolve over time in online communities.
== Summary ==

Existing methods treat topics and network structure separately; this paper combines textual topics with the network structure, which makes more sense for tracking events in social communities. The authors address event tracking with a model called [[UsesMethod::Popular Event Tracking]] (PET) for online communities, which jointly captures the popularity of events over time, the burstiness of user interest, information diffusion through the network structure, and the evolution of topics.

PET leverages a Gibbs Random Field to model the interest of users, which depends on their historical interest status as well as the influence from their social connections. The intuition is that a user's current interest is strongly related to their previous interest, and is also influenced by their friends, i.e., their connections in the social network.
=== Definitions ===

# ''Network stream:'' <math>G=\{G_1,G_2,...,G_T\}</math> is a stream of network structures, where each element <math>G_k=\{V_k,E_k\}</math> is a snapshot of the network at time <math>t_k</math>.
# ''Document stream:'' <math>D=\{D_1,D_2,...,D_T\}</math> is a stream of document collections, where <math>D_k=\{d_{k,1},d_{k,2},...,d_{k,N}\}</math> is the set of documents published between time <math>t_{k-1}</math> and <math>t_k</math>, and <math>d_{k,i}</math> is the text document associated with user <math>i</math> at time <math>t_k</math>.
# ''Topic:'' a semantically coherent topic <math>\theta</math> is a multinomial distribution over words <math>\{p(w|\theta)\}</math>.
# ''Event:'' <math>\theta^E=\{\theta_0^E,\theta_1^E,...,\theta_T^E\}</math> is a stream of topics, where <math>\theta_0^E</math> is either specified by the user or discovered by an event detection algorithm.
# ''Interest:'' for each event and each time point, each user <math>i</math> has a certain level of interest in the event, denoted <math>h_k(i)</math>.
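To make the notation concrete, here is a minimal sketch of how these streams could be laid out in code. The type and variable names are illustrative choices, not from the paper.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Dict, List, Tuple

# One snapshot G_k = {V_k, E_k}: users and weighted connections (tie strength).
@dataclass
class Snapshot:
    nodes: List[int]                        # V_k: user ids
    edges: Dict[Tuple[int, int], float]     # E_k: (i, j) -> connection weight

# Network stream G = {G_1, ..., G_T}: one snapshot per time point t_k.
network_stream: List[Snapshot] = []

# Document stream D = {D_1, ..., D_T}: D_k maps user i to the text d_{k,i}
# that user i published between t_{k-1} and t_k.
document_stream: List[Dict[int, str]] = []

# A topic theta is a multinomial over words: word -> p(w | theta).
Topic = Dict[str, float]

# An event theta^E = {theta^E_0, ..., theta^E_T} is a stream of topics;
# theta^E_0 is specified by the user or found by an event detection algorithm.
event: List[Topic] = []

# Interest H_k: user i -> h_k(i), user i's interest level at time t_k.
interest_stream: List[Dict[int, float]] = []
</syntaxhighlight>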
=== Event Tracking Model ===

The model in this paper relies on the following three important observations:

# ''Interest & Connection:'' user <math>i</math>'s current interest is influenced by <math>i</math>'s connections, and a stronger tie brings a larger impact.
# ''Interest & History:'' interest values are generally consistent over time.
# ''Content & Interest:'' if user <math>i</math> has a higher level of interest in an event, the content he generates should be more likely to be related to the event.
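As a concrete illustration of how observations 1 and 2 might combine, the toy update below mixes a user's own previous interest with the previous interest of their connections, weighted by tie strength. This is only a simplified sketch, not the Gibbs Random Field formulation the paper actually uses; all names are made up for illustration.

<syntaxhighlight lang="python">
from typing import Dict, Tuple

def update_interest(prev_interest: Dict[int, float],
                    edges: Dict[Tuple[int, int], float],
                    self_weight: float = 0.5) -> Dict[int, float]:
    """Toy interest update: keep part of each user's own previous interest
    (observation 2) and mix in the tie-strength-weighted average of their
    connections' previous interest (observation 1)."""
    new_interest = {}
    for i, h_prev in prev_interest.items():
        neighbors = {j: w for (a, j), w in edges.items() if a == i}
        total_w = sum(neighbors.values())
        if total_w > 0:
            neighbor_avg = sum(w * prev_interest.get(j, 0.0)
                               for j, w in neighbors.items()) / total_w
        else:
            neighbor_avg = h_prev   # isolated user: rely on history alone
        new_interest[i] = self_weight * h_prev + (1 - self_weight) * neighbor_avg
    return new_interest
</syntaxhighlight>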
=== General Model ===

The authors first introduce two reasonable independence assumptions:

# Given the current network structure and the previous interest status, the current interest status is independent of the document collection. This is a cause-effect assumption: people generate documents at this moment as a result of their current interest, rather than as a cause of it.
# Given the current interest status and the documents, the current topic model is independent of the network structure and the previous interest status. The reason is that once a user has a certain level of interest toward some event, the content he produces depends only on the event and that interest level.

By the chain rule, <math>P(H_k,\theta_k|G_k,D_k,H_{k-1}) = P(H_k|G_k,D_k,H_{k-1})P(\theta_k|H_k,G_k,D_k,H_{k-1})</math>; applying the two assumptions, the inference target becomes

<math>P(H_k,\theta_k|G_k,D_k,H_{k-1}) = P(H_k|G_k,H_{k-1})P(\theta_k|H_k,D_k)</math>

The first part, <math>P(H_k|G_k,H_{k-1})</math>, corresponds to the first assumption and is called the interest model, because it measures the distribution of interest. The second part, <math>P(\theta_k|H_k,D_k)</math>, corresponds to the second assumption and is called the topic model, because it deals with the topic distribution.
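Schematically, this factorization suggests a two-stage procedure at every time point: first update the interest status from the network and the previous interest (the interest model), then estimate the event topic from the documents weighted by interest (the topic model). The sketch below follows that structure, reusing <code>update_interest</code> from the sketch above; the interest-weighted word distribution used for the topic step is a simplifying assumption for illustration, not the paper's actual estimator.

<syntaxhighlight lang="python">
from collections import Counter
from typing import Dict, List, Tuple

def estimate_topic(docs: Dict[int, str],
                   interest: Dict[int, float]) -> Dict[str, float]:
    """Toy topic step for P(theta_k | H_k, D_k): weight each user's word
    counts by that user's interest level, then normalize into a word
    distribution (observation 3 in a very crude form)."""
    weighted = Counter()
    for user, text in docs.items():
        for word in text.lower().split():
            weighted[word] += interest.get(user, 0.0)
    total = sum(weighted.values())
    return {w: c / total for w, c in weighted.items()} if total > 0 else {}

def track_event(edge_streams: List[Dict[Tuple[int, int], float]],
                document_stream: List[Dict[int, str]],
                initial_interest: Dict[int, float]) -> List[Dict[str, float]]:
    """Per-time-step loop mirroring the factorization: the interest model
    P(H_k | G_k, H_{k-1}) first, then the topic model P(theta_k | H_k, D_k).
    Assumes update_interest from the earlier sketch is in scope."""
    interest = initial_interest
    topics = []
    for edges, docs in zip(edge_streams, document_stream):
        interest = update_interest(interest, edges)    # interest model step
        topics.append(estimate_topic(docs, interest))  # topic model step
    return topics
</syntaxhighlight>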
| |
− | | |
== Dataset ==

The authors select [[UsesDataset::twitter]] as their source of data. They choose 5,000 users with follower-followee relationships and crawl 1,438,826 tweets displayed by these users during the period from Oct. 2009 to early Jan. 2010. Each day is regarded as a time point, and a user's document for a time point is obtained by simply concatenating all tweets displayed by that user in that day. The connection between two users is defined by the number of tweets one user displays by following the other user during a period of 30 days.
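A rough sketch of this preprocessing step is shown below. It assumes each tweet record carries its author, its day, its text, and (when the tweet was displayed by following another user, e.g. a retweet) that followee's id; the record format and function names are invented here for illustration and are not from the paper.

<syntaxhighlight lang="python">
from collections import defaultdict
from datetime import date
from typing import Dict, List, Optional, Tuple

# Assumed minimal tweet record: (user_id, day, text, followee_id or None).
Tweet = Tuple[int, date, str, Optional[int]]

def build_streams(tweets: List[Tweet], days: List[date], window: int = 30):
    """For each day t_k, concatenate a user's tweets into the document d_{k,i}
    and count how many tweets each user displayed by following another user
    within the last `window` days to get connection weights."""
    docs_by_day: List[Dict[int, str]] = []
    edges_by_day: List[Dict[Tuple[int, int], float]] = []
    for day in days:
        docs = defaultdict(list)
        edges = defaultdict(float)
        for user, d, text, followee in tweets:
            if d == day:
                docs[user].append(text)            # goes into d_{k,i}
            if followee is not None and 0 <= (day - d).days < window:
                edges[(user, followee)] += 1.0     # connection strength
        docs_by_day.append({u: " ".join(ts) for u, ts in docs.items()})
        edges_by_day.append(dict(edges))
    return docs_by_day, edges_by_day
</syntaxhighlight>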
| |
− | | |
== Evaluation ==

The authors compare PET with other baseline models such as JonK, Cont, BOM and GInt, on both the popularity trend and the network diffusion of events. PET generates the trends most consistent with the gold standard and the smoothest diffusion, because it estimates popularity by considering historical, textual and structural information together in a unified way.

The popularity trend is shown in Fig. 1, and network diffusion is shown in Fig. 2.

[[File:Result.jpg]]
[[File:Result2.jpg]]
== Related Papers ==

[1] L. A. Adamic and E. Adar. [[RelatedPaper::Friends and neighbors on the web]].

[2] L. Araujo and J. A. Cuesta. [[RelatedPaper::Genetic algorithm for burst detection and activity tracking in event streams]].