Zhang et al., WWW 2007


Citation

Online version

ICWSM09

Summary

The aim of this paper is to identify users with high expertise within online expertise-sharing communities. This expertise finding system uses graph-based algorithms on social networks within the community.

They treat expertise as a relative concept and rank users with network-based algorithms such as PageRank and HITS.

They created a post-reply network in which each user is represented as a node and a directed edge is drawn from the user who started a post to each user who replied to it. Because of the way the network is constructed, a user's prestige in this network is highly correlated with their expertise, so it is called the community expertise network (CEN).
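A minimal sketch of how such a post-reply network could be built and ranked, assuming thread data given as (asker, repliers) pairs; the user names are hypothetical and networkx is used only for illustration:

```python
import networkx as nx

# Hypothetical thread data: (user who asked, [users who replied]).
threads = [
    ("alice", ["bob", "carol"]),
    ("dave", ["bob"]),
    ("erin", ["carol", "bob"]),
]

# Build the community expertise network (CEN): a directed edge goes from
# the asker to each user who replied to the thread.
cen = nx.DiGraph()
for asker, repliers in threads:
    for replier in repliers:
        if cen.has_edge(asker, replier):
            cen[asker][replier]["weight"] += 1
        else:
            cen.add_edge(asker, replier, weight=1)

# Rank users with a prestige measure such as PageRank; users who answer
# many questions (especially from other good answerers) score higher.
scores = nx.pagerank(cen, weight="weight")
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```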

Network Characteristics

The authors experimented on the Java Forum, a large online help-seeking community. Before testing the algorithms they performed several analyses to characterize the network. The analyses and their results are listed below:

  • Bow tie structure analysis: More than half of the users only ask questions, 13% only answer, and 12% both ask and answer.
  • Degree distribution analysis: The majority of users answer only a few questions, while a few active users answer a large number of them (see the sketch after this list).
  • Degree correlation analysis: Top repliers answer questions from everyone, but less expert users do not reply to highly expert users.
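As a rough illustration of the degree distribution analysis, one could count how many threads each user replied to in the CEN sketched above (the `cen` graph is an assumption carried over from the earlier sketch):

```python
from collections import Counter

# In the CEN, edges point from askers to repliers, so a user's weighted
# in-degree is the number of threads they replied to.
answers_per_user = dict(cen.in_degree(weight="weight"))

# Degree distribution: how many users answered k questions, for each k.
degree_distribution = Counter(answers_per_user.values())
for k in sorted(degree_distribution):
    print(f"{degree_distribution[k]} users answered {k} questions")
```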



The aim of this paper is to identify the important and influential blogs with a recurring interest in a specific topic. Given a set of blogs related to a particular topic, the authors try to find a subset of blogs that represents the larger set by using a stochastic graph-based method.

The authors approached this blog retrieval problem with the assumption that important and representative blogs tend to be lexically similar to other important and representative blogs. Therefore they used textual similarity between posts as a way to determine which blogs influence the others, and thus to identify the authorities.

The authors used a PageRank-like algorithm, called BlogRank, to rank the blogs by their popularity. In this algorithm each blog is represented by a node, and an edge is placed between two nodes if the blogs are lexically similar. Iterating over this graph computes the importance score of a blog from the scores of its neighbors.

(Figure: BlogRank.jpg, illustrating the BlogRank algorithm)
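A minimal sketch of such a PageRank-style iteration over a lexical similarity graph; the similarity matrix, damping factor, and uniform prior here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def blog_rank(similarity, prior=None, damping=0.85, iters=50):
    """PageRank-style scores over a blog similarity graph.

    similarity: (n, n) matrix of pairwise lexical similarities (zero diagonal).
    prior: optional per-blog prior (e.g. based on number or length of posts).
    """
    n = similarity.shape[0]
    if prior is None:
        prior = np.ones(n)
    prior = prior / prior.sum()

    # Column-normalize so each blog distributes its score to similar blogs.
    col_sums = similarity.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    transition = similarity / col_sums

    scores = np.ones(n) / n
    for _ in range(iters):
        scores = (1 - damping) * prior + damping * transition @ scores
    return scores

# Toy example with three blogs (similarity values are made up).
sim = np.array([[0.0, 0.8, 0.1],
                [0.8, 0.0, 0.3],
                [0.1, 0.3, 0.0]])
print(blog_rank(sim))
```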

Cosine similarity between tf-idf vector representations of posts is used to calculate the textual similarity between posts. The authors also used blog-level attributes, such as the number of posts and the average post length, as priors. The BlogRank algorithm takes diversity into account and penalizes blogs that are very similar to already selected blogs.
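A sketch of the text-similarity step, assuming scikit-learn's TfidfVectorizer; the example posts and the 0.2 edge threshold are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Tips for tuning a PageRank damping factor",
    "How we tuned PageRank for our link graph",
    "A recipe for sourdough bread",
]

# tf-idf vectors for each post, then pairwise cosine similarities.
tfidf = TfidfVectorizer().fit_transform(posts)
sim = cosine_similarity(tfidf)

# Keep an edge between two posts only if they are similar enough.
threshold = 0.2
edges = [(i, j, sim[i, j])
         for i in range(len(posts)) for j in range(i + 1, len(posts))
         if sim[i, j] >= threshold]
print(edges)
```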

The TREC BLOG06 and UCLA Blogocenter datasets were used in the experiments. The authors used diffusion models to measure the performance of their algorithm: the selected nodes are initially marked as active, the diffusion model is applied, and the number of activated nodes at the end is counted.
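A rough sketch of this kind of coverage evaluation, using an independent cascade model as the diffusion process; the paper's exact diffusion model, the toy graph, and the activation probability are assumptions here:

```python
import random

def independent_cascade(graph, seeds, p=0.1, seed=0):
    """Count nodes activated when `seeds` are initially active.

    graph: dict mapping node -> list of neighbor nodes.
    seeds: initially selected (active) nodes, e.g. the top-ranked blogs.
    p: probability that an active node activates each inactive neighbor.
    """
    rng = random.Random(seed)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor in graph.get(node, []):
            if neighbor not in active and rng.random() < p:
                active.add(neighbor)
                frontier.append(neighbor)
    return len(active)

# Toy influence graph: blog -> blogs it influences (made up).
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
print(independent_cascade(graph, seeds=["a"], p=0.5))
```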

The authors compared their ranking algorithm against several other algorithms. The experiments showed that BlogRank outperforms the other methods both in coverage and in running time. They also performed experiments to see whether the BlogRank algorithm can be used for prediction; the results indicated that the method generalizes well to future data.

This work is similar to the Blog Distillation task in the TREC Blog Track. However, in the blog distillation task the aim is to return all blogs relevant to a given query, whereas in this paper the aim is to select a smaller representative set from a given set of blogs related to a topic. Related works include Arguello et al., ICWSM 2008 and Elsas et al., TREC 2007.