Topic Model Approach to Authority Identification
This is a paper reviewed for Social Media Analysis 10-802 in Fall 2012.
Citation
author = {Alexandre Passos and Jacques Wainer and Aria Haghighi}, title = {What do you know? A topic-model approach to authority identification}, journal = {NIPS 2010 Workshop on Computational Social Science and the Wisdom of the Crowds}, year = {2010}
Online version
What do you know? A topic-model approach to authority identification
Summary
In this paper the authors present a preliminary study of basic approaches to identifying authoritative documents in a given domain using textual content alone, and report that their best-performing approach is based on Hierarchical Topic Models [Blei et al., 2004]. Authoritative documents are ones which exhibit novel and relevant information relative to a document collection while demonstrating domain knowledge. The authors frame authority identification as a ranking problem and focus on product reviews (book reviews from GoodReads and restaurant reviews from Yelp), using "helpful" user votes as a proxy for helpfulness and authoritativeness.
Dataset Description
The authors report results on two datasets.
- Book Reviews GoodReads dataset
* First 326 books in the "Best Books Ever" category
* First 60-odd reviews for each book
- Restaurant Reviews Yelp dataset
* 283 most-reviewed restaurants in the Boston/Cambridge area
The number of "helpful" user votes received by each review was used as a proxy for ranking reviews by authoritativeness.
Task Description and Evaluation
Models:
- Heuristic Approaches:
* random : Sort reviews randomly
* nwords : Sort reviews by number of words [more words, more authoritative]
* unique : For each word w, let g_w be its count across all documents for all products, and let p_w be its count among the documents of a given product. Rank a review d of this product by the words it contains that are unique within the product's document collection, weighted by their global frequency. Specifically, the score associated with a document is <math>\sum_{w \in d \text{ s.t. } p_w = 1} \log(g_w + 1)</math> (a code sketch follows this list).
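A minimal sketch of the unique heuristic, assuming reviews are already tokenized into lists of words (the function and argument names here are illustrative, not from the paper):

<pre>
from collections import Counter
import math

def unique_scores(product_reviews, all_reviews):
    # product_reviews / all_reviews: lists of tokenized reviews,
    # each review itself a list of words.
    g = Counter(w for doc in all_reviews for w in doc)      # g_w: counts over all products
    p = Counter(w for doc in product_reviews for w in doc)  # p_w: counts within this product
    # A word contributes log(g_w + 1) only if it occurs exactly once
    # in the product's review collection (p_w = 1).
    return [sum(math.log(g[w] + 1) for w in set(doc) if p[w] == 1)
            for doc in product_reviews]
</pre>

Reviews of a product would then be ranked by sorting on these scores in decreasing order.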
- Summarization-Based Approaches:
* sumbasic : Rank documents by the sum-basic criterion [Nenkova and Vanderwende, 2005], ordering reviews of the same product by how many high-frequency words they contain relative to the product's document collection. The score of a document d is <math>\sum_{w \in d} P(w)</math>, where P(w) is the empirical unigram probability of w in the product's document collection (a code sketch follows).
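A corresponding sketch of this criterion under the same assumptions. Note that the full SumBasic algorithm of Nenkova and Vanderwende also length-normalizes scores and iteratively down-weights already-covered words; this sketch implements only the summed-probability criterion stated above:

<pre>
from collections import Counter

def sumbasic_scores(product_reviews):
    # product_reviews: tokenized reviews of a single product.
    counts = Counter(w for doc in product_reviews for w in doc)
    total = sum(counts.values())
    # counts[w] / total is P(w), the empirical unigram probability
    # of w within the product's review collection.
    return [sum(counts[w] / total for w in doc) for doc in product_reviews]
</pre>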