# Controversial events detection


## Comments

This is a neat idea. The main difficulty I see here is formalizing the task precisely. What does it mean for an event to be controversial, exactly? Part of the problem is that it's not perfectly clear what an "event" is.

One suggestion would be to look at a topic-modeling approach, e.g. Topics over Time, to find topics with a short temporal span in social-media data. You might be able to combine this with sentiment around those topics in two different communities - e.g. using something like my MCR-LDA model. So one way to flesh out this idea would be to start with two topic models:

• MCR-LDA, to measure 'controversy' - you might be able to get predictions from Ramnath on his blog data if the code's not ready to distribute yet. I would not completely commit to using Twitter data exclusively, btw.
• TOT, to detect short-lived 'events' vs. long-term topics.

Then write some inference code to combine the predictions and pick out "controversial events". The next stage would be working out a joint model (which you might not choose to do for the project). It's not obvious how you'd evaluate all this, however... maybe do some user labeling of final predictions like "this topic corresponds to a controversial event."

These are just ideas - you might try to flesh out some other concrete idea instead. Good luck! --Wcohen 14:33, 10 October 2012 (UTC)

PS. There is also a one-person team working on a similar topic; you all should talk - it's User:Yuchen Tian --Wcohen 18:40, 10 October 2012 (UTC)

## Project idea

In our project, we propose to jointly detect events and the controversy surrounding them in the context of social media. For example, Christmas Day is an event that receives the most attention around December 25th, while the Presidential debates occur once every four years. Controversy-wise, Christmas Day is relatively one-sided, with most of the text mentioning it being relatively homogeneous. In contrast, a Presidential debate will have obvious sides (supporting the different candidates).

Our goal is not only to detect controversial events, but also to discover what the different sides are - both grouping the individuals associated with each faction and describing how each faction talks about the event differently.

We propose to use a probabilistic graphical model to achieve our goals of learning these latent structures from the data without labeled training data.

## Formalizing the task

Event - In the context of social media, an event is a period of time during which there is a "surge" in the amount of interest (e.g. blog posts, tweets, comments, etc.) surrounding an occurrence.

We call an event controversial if, given the text surrounding it, the nature of the discussions is highly non-homogeneous (i.e. exhibits high entropy), and each side of the event can be grouped into one of a small number of distinct factions.

Thus, in our task, given a collection of social media documents over time, we seek to jointly infer the events that have occurred, as well as the controversy associated with each of them.
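As a toy illustration of the entropy criterion above, the sketch below (function and variable names are our own placeholders) computes the Shannon entropy of an event's empirical faction distribution: a one-sided event scores near zero, while a contested event with evenly split factions scores high.

```python
import math

def faction_entropy(faction_counts):
    """Shannon entropy (in bits) of the empirical faction distribution
    over the words discussing an event."""
    total = sum(faction_counts)
    probs = [c / total for c in faction_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# A one-sided event (e.g. Christmas): almost all words from one faction.
low = faction_entropy([980, 10, 10])    # close to 0
# A controversial event (e.g. a debate): words split between two factions.
high = faction_entropy([500, 500])      # exactly 1.0 bit
```

A threshold on this quantity (or a model-based analogue) could then separate "controversial" from "one-sided" events.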

## A probabilistic model

Here's a sketch of a topic model that we are considering for our task. It is a variant of a topic model in which each word is assumed to be jointly generated by an event and a faction. It is also similar to the Topics over Time model, in that we generate a timestamp for each token.

A graphical plate diagram of our model will be up soon.

### Notation

$E$ - fixed number of events

$\theta_d$ - multinomial distribution over events specific to document $d$

$\phi_{e_{di}}$ - multinomial distribution over factions specific to event $e_{di}$

$\psi_{e_{di}}$ - the Beta distribution over time specific to event $e_{di}$

$w_{di}$ - the $i$th token in document $d$

$t_{di}$ - timestamp associated with the $i$th token in document $d$

$\eta^{e}, \eta^{e,f}, \eta^{m}$ - SAGE vectors, which are log-additive weights for each word in the vocabulary. We have one for each event, one for each combination of event and faction, and a background word distribution.

### Generative story

1. Draw $E$ multinomials $\phi_e$ from a Dirichlet prior, one for each event $e$. This is the distribution over factions for each event.
2. For each document $d$, draw a multinomial $\theta_d$ from a prior $\alpha$ (this prior could be Dirichlet or logistic normal); then for each word $w_{di}$ in document $d$:
   1. Draw an event $e_{di}$ from the multinomial $\theta_d$;
   2. Draw a faction $f_{di}$ from the multinomial $\phi_{e_{di}}$;
   3. Draw a word $w_{di}$ from the SAGE language model $p(w_{di} \mid e_{di}, f_{di}, \eta) \propto \exp(\eta_{w}^{e_{di}} + \eta_{w}^{e_{di},f_{di}} + \eta_{w}^{m})$;
   4. Draw a timestamp $t_{di}$ from $\mathrm{Beta}(\psi_{e_{di}})$.
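The generative story above can be sketched as forward-sampling code. This is only an illustration: the sizes, hyperparameter values, and variable names below are placeholders we made up for the sketch, not fixed model choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and hyperparameters (placeholders for illustration).
E, F, V, D, doc_len = 5, 3, 1000, 20, 50
alpha, gamma = 0.1, 0.5

phi = rng.dirichlet([gamma] * F, size=E)       # step 1: faction dist. per event
psi = rng.uniform(1.0, 5.0, size=(E, 2))       # Beta(a, b) parameters per event
eta_e = rng.normal(0.0, 0.1, size=(E, V))      # SAGE event deviations
eta_ef = rng.normal(0.0, 0.1, size=(E, F, V))  # SAGE event-faction deviations
eta_m = rng.normal(0.0, 1.0, size=V)           # SAGE background log-weights

def word_dist(e, f):
    """SAGE language model: softmax over additive log-weights."""
    logits = eta_m + eta_e[e] + eta_ef[e, f]
    p = np.exp(logits - logits.max())          # subtract max for stability
    return p / p.sum()

docs = []
for d in range(D):
    theta = rng.dirichlet([alpha] * E)         # step 2: event dist. for doc d
    words, times = [], []
    for i in range(doc_len):
        e = rng.choice(E, p=theta)             # 2.1: draw event
        f = rng.choice(F, p=phi[e])            # 2.2: draw faction given event
        w = rng.choice(V, p=word_dist(e, f))   # 2.3: draw word from SAGE model
        t = rng.beta(*psi[e])                  # 2.4: draw timestamp in [0, 1]
        words.append(w)
        times.append(t)
    docs.append((words, times))
```

Inference would of course run in the opposite direction, recovering the latent events and factions from observed words and timestamps.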

### SAGE language model

To model the different effects of events and factions, we use a sparse additive generative (SAGE) model. In contrast to the popular Dirichlet-multinomial parameterization for topic modeling, which directly models the lexical probabilities associated with each (latent) topic, SAGE models the deviation in log frequencies from a background lexical distribution. Applying a sparsity-inducing prior on the topic term vectors limits the number of terms whose frequencies diverge from the background lexical frequencies, thereby increasing robustness to limited training data. In the case of our model, it also eliminates the need for a switching variable to choose between event words and faction words.
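A minimal numeric sketch of the SAGE parameterization, with all values made up for illustration: word probabilities come from exponentiating and normalizing the sum of background, event, and event-faction log-weights, and sparse deviation vectors leave most words at their background frequencies.

```python
import numpy as np

V = 8                                  # toy vocabulary size
eta_m = np.log(np.full(V, 1.0 / V))    # uniform background log-frequencies

eta_e = np.zeros(V)                    # sparse event deviation vector
eta_e[2] = 2.0                         # only word 2 deviates for this event

eta_ef = np.zeros(V)                   # sparse event-faction deviation vector
eta_ef[5] = 1.5                        # only word 5 deviates for this faction

# p(w | e, f) is proportional to exp of the summed log-weights; words with
# zero deviation keep their relative background frequencies, and no switching
# variable is needed because the effects simply add in log space.
logits = eta_m + eta_e + eta_ef
p = np.exp(logits) / np.exp(logits).sum()
```

Here word 2 ends up most probable (boosted by the event effect), word 5 next (boosted by the faction effect), and all other words remain at equal relative probability.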

### Logistic normal prior for events

Using a logistic normal prior for events will allow us to incorporate features (such as Twitter hashtags, blog post titles, comment counts, etc.) in a principled manner. Logistic normal priors have been used in prior work on topic models.
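A rough sketch of how document features could shape the event distribution under a logistic normal prior: a Gaussian whose mean depends on document features is pushed through a softmax to yield event proportions. The weight matrix, feature vector, and covariance below are hypothetical placeholders, not learned values.

```python
import numpy as np

rng = np.random.default_rng(1)
E, n_feats = 5, 3                     # number of events; document features

# Hypothetical feature-to-event weight matrix (e.g. weights on hashtag
# indicators, post-title cues, comment counts).
Lambda = rng.normal(0.0, 1.0, size=(n_feats, E))
x_d = np.array([1.0, 0.0, 1.0])       # binary feature vector for one document

mu = x_d @ Lambda                                 # feature-dependent mean
z = rng.multivariate_normal(mu, 0.5 * np.eye(E))  # Gaussian draw
theta_d = np.exp(z) / np.exp(z).sum()             # softmax onto the simplex
```

Documents with features indicative of a particular event would then get event distributions shifted toward it, which a plain Dirichlet prior cannot express.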

## Data and evaluation

We intend to experiment with two different sets of data:

1. Set of tweets collected over 12 weekends (Sep-Dec 2011)
2. Posts and comments from political blogs (relating to the presidential elections) in the year 2012

Over the 12 weekends from September to December, football games are played every Sunday evening. Football games present an obvious way for us to evaluate the performance of our model: each game qualifies as an event with a known time of occurrence. Additionally, we know that there are at least two factions associated with each game (one set of fans for each team). One way of identifying factions would be to manually inspect the word vectors associated with the factions, identifying the teams they support. Another option is to leverage the location metadata associated with each tweet. To match factions with fan bases, we will compute the mean location (expressed as latitude and longitude) for each faction as the weighted average of the locations of words that draw from that faction, and then associate it with the geographically closest NFL market (in terms of great-circle distance).
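The great-circle matching step could be sketched as follows; the market list and coordinates here are illustrative, not the full set of NFL markets.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres via the haversine formula."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# A few illustrative NFL markets (coordinates approximate).
markets = {
    "Green Bay": (44.51, -88.02),
    "Dallas": (32.78, -96.80),
    "New England": (42.09, -71.26),
}

def nearest_market(lat, lon):
    """Assign a faction's mean location to the closest NFL market."""
    return min(markets, key=lambda m: haversine_km(lat, lon, *markets[m]))

# A faction whose mean location lands near Madison, WI maps to "Green Bay".
print(nearest_market(43.0, -89.4))
```

The same routine would apply once the per-faction mean latitude/longitude has been computed from tweet metadata.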

Also, significant events that occurred during this period are the 9/11 anniversary, Halloween, Thanksgiving, and Christmas. These events should have low entropy in the faction distribution of words within a document, which will serve as a reference for evaluating our model's ability to identify factions.

Blog posts provide substantially more content per document. Since this is an election year, we hope to use data scraped from political blogs to qualitatively evaluate our model's ability to pick up key election-year events (like debates, primaries, conventions, and controversial remarks such as Todd Akin's). Also, politics is one of the most contentious subjects, with much discussion and debate, from which we hope our model will be able to learn the factions.