Huang 2010 Conversational Tagging in Twitter

Citation

Jeff Huang, Katherine M. Thornton, and Efthimis N. Efthimiadis. 2010. Conversational Tagging in Twitter. In Proceedings of the ACM Conference on Hypertext and Hypermedia (HT 2010).

Online version

An online version of this paper is available at [1].

Summary

This paper presents a study of Twitter tags (hashtags) in comparison with tags in other Web 2.0 systems. The authors report several findings on their differences and similarities, and argue that Twitter tags are used mainly to filter and direct content so that it appears in particular streams.

Key Contributions

The paper's key contribution is its set of findings on the differences between Twitter tags and tags in earlier systems. It characterizes old-style tags as a posteriori, added after the fact to organize existing content, and Twitter-style tags as a priori, chosen as part of the message itself. The authors claim this is the first large-scale study of Twitter tags.

Dataset

The authors created their own dataset from two sources: Twitter and Delicious. They collected a sample of 42 million hashtags inserted by users into messages posted on the microblogging site Twitter, and a sample of 378 million tags created by users of the online bookmarking service Delicious to organize their bookmarks. Both datasets pair each tag with the timestamp at which it was attached, enabling temporal analysis.
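The tag-plus-timestamp structure is all that the later temporal comparisons need. As a minimal sketch (not the authors' collection code), hashtag/timestamp records of this form could be pulled from raw messages roughly as follows; the tweets list of (text, timestamp) pairs is a hypothetical stand-in for the actual data source.

 import re
 from datetime import datetime
 
 HASHTAG_RE = re.compile(r"#(\w+)")
 
 # Hypothetical (text, timestamp) records standing in for the collected messages.
 tweets = [
     ("Stuck in traffic again #fail", datetime(2009, 11, 3, 8, 15)),
     ("Listening to new tracks #musicmonday", datetime(2009, 11, 9, 19, 2)),
 ]
 
 # Each record pairs a lowercased tag with the time it was used, which is the
 # unit of analysis for the temporal (trending) comparisons.
 tag_records = [
     (tag.lower(), timestamp)
     for text, timestamp in tweets
     for tag in HASHTAG_RE.findall(text)
 ]
 
 print(tag_records)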

Qualitative Analysis

The authors first present a qualitative analysis of the tags used in Twitter and Delicious, manually examining the 224 most common tags in the Twitter dataset and the 304 most common tags in the Delicious dataset. From this inspection they draw three key insights:

  • Trending Effect: Twitter tags are far more ephemeral than Delicious tags; most rise quickly around a current topic or event and fade soon afterwards, whereas Delicious tags stay in steady use.

  • Conversational vs. Organizational: Delicious tags are organizational, applied a posteriori to categorize existing content, while Twitter tags are conversational, chosen a priori to direct a message into a shared stream and join an ongoing exchange.

  • Micro-memes: Many Twitter tags mark "micro-memes", short-lived emergent topics (often playful prompts) that invite users to contribute their own messages under the same tag.

Quantitative (Statistical) Analysis

The authors present details of their sentiment classification experiments, including feature extraction, classifier building, and experimental results.

For feature extraction, the authors describe a four-step approach: (1) filtering out non-informative tokens such as URL links and Twitter user names; (2) tokenizing the text on punctuation marks and spaces; (3) removing stopwords (articles); and (4) constructing n-grams. A rough sketch of this pipeline is shown below.
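The following is a minimal sketch of those four steps, assuming plain tweet strings as input; the regular expressions, stopword list, and function name are illustrative choices, not taken from the paper.

 import re
 
 STOPWORDS = {"a", "an", "the"}  # step (3) removes articles
 
 def extract_ngram_features(text, n=2):
     # (1) filter out non-informative tokens: URLs and @usernames
     text = re.sub(r"https?://\S+|@\w+", " ", text)
     # (2) tokenize on punctuation marks and whitespace
     tokens = [t.lower() for t in re.split(r"[\s\W]+", text) if t]
     # (3) remove stopwords (articles)
     tokens = [t for t in tokens if t not in STOPWORDS]
     # (4) construct n-grams (bigrams by default)
     return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
 
 print(extract_ngram_features("Loving the new album by @artist http://example.com"))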

For classifier building, the authors report trying Naive Bayes, SVM, and CRF classifiers; the Naive Bayes classifier performed best and was therefore chosen.

In the final results, the authors compare systems under different settings and conclude that the Naive Bayes classifier with bigram features works best, striking a good balance between coverage and the ability to capture sentiment patterns.
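As a hedged illustration of that best-performing configuration (not the authors' code), a Naive Bayes classifier over bigram features can be assembled with scikit-learn as follows; the tiny training set is invented purely to show the shape of the pipeline.

 from sklearn.feature_extraction.text import CountVectorizer
 from sklearn.naive_bayes import MultinomialNB
 from sklearn.pipeline import make_pipeline
 
 # Invented toy examples; real experiments would use labeled tweets.
 train_texts = [
     "love this phone its screen is great",
     "what a wonderful day feeling happy",
     "worst service ever totally disappointed",
     "this update is terrible and slow",
 ]
 train_labels = ["positive", "positive", "negative", "negative"]
 
 # ngram_range=(2, 2) yields bigram count features, matching the reported
 # best-performing setting; MultinomialNB is the Naive Bayes classifier.
 model = make_pipeline(
     CountVectorizer(ngram_range=(2, 2)),
     MultinomialNB(),
 )
 model.fit(train_texts, train_labels)
 
 print(model.predict(["the screen is great and i love it"]))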

Discussion

This paper gives a broad overview of Twitter hashtags, in particular from the user's perspective. It is thus highly relevant to our proposed course project on automatically clustering Twitter messages based on hashtags.