Compare Rodriguez Barabasi
Latest revision as of 10:03, 6 November 2012
The papers
Problem
These two papers both try to describe human or social behavior by modelling, that is, by proposing a model for the underlying structure, though their approaches to modelling are different.
Big idea
The first paper tries to infer an unseen network from a visible result, such as virus propagation or the spread of news. The model maximizes the likelihood of the observed set of cascades over all possible graphs with at most k edges; the maximizing graph is taken as the maximum-likelihood underlying network.
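A minimal sketch of the idea in Python. This is an assumed simplification, not the paper's actual algorithm: each candidate edge is scored independently under an exponential transmission model, and the k best-scoring edges are kept, whereas the real method greedily maximizes the marginal likelihood gain over propagation trees. The `alpha` transmission rate and the cascade format `(node, infection_time)` are illustrative choices.

```python
import itertools
import math
from collections import defaultdict

def greedy_network_inference(cascades, k, alpha=1.0):
    """Toy sketch: pick the k edges that best explain the cascades.

    cascades: list of cascades, each a time-ordered list of
              (node, infection_time) pairs.
    Scores each directed edge (u, v) by the log-likelihood that u
    infected v under an exponential delay model with rate alpha,
    summed over all cascades where u precedes v.
    """
    score = defaultdict(float)
    for cascade in cascades:
        for (u, tu), (v, tv) in itertools.combinations(cascade, 2):
            if tu < tv:
                # exponential transmission: shorter delays are more likely
                score[(u, v)] += math.log(alpha) - alpha * (tv - tu)
    # keep only the k highest-scoring edges (graph of at most k edges)
    return sorted(score, key=score.get, reverse=True)[:k]
```

For example, if node a is always infected shortly before b but long before c, the edge (a, b) outscores (a, c) and is selected first.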
The second paper uses mathematics, computer simulation and real data to show that, under the assumption that people execute their highest-priority tasks first, most tasks are executed rapidly while a few wait for very long times, producing a heavy-tailed distribution of waiting times.
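The priority-first mechanism can be sketched with a short simulation. This is a simplified rendering of the model, with assumed parameters: a fixed-size task list, uniformly random priorities for new tasks, and a probability `p` close to 1 of executing the highest-priority task (otherwise a random one).

```python
import random

def simulate_priority_queue(steps=100000, list_size=2, p=0.9999, seed=0):
    """Simulate a priority-list model of task execution.

    At each step, with probability p the highest-priority task is
    executed; otherwise a uniformly random one is. The executed task
    is replaced by a new task with a fresh random priority. Returns
    the waiting time (steps between arrival and execution) of every
    executed task.
    """
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_size)]  # (priority, arrival_time)
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            idx = max(range(list_size), key=lambda i: tasks[i][0])
        else:
            idx = rng.randrange(list_size)
        waits.append(t - tasks[idx][1])      # record this task's waiting time
        tasks[idx] = (rng.random(), t)       # replace with a new random task
    return waits
```

Running this shows exactly the behavior the paper describes: the vast majority of tasks are executed one step after arriving, while a few low-priority tasks linger for thousands of steps.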
Dataset
The first paper uses more than 172 million news articles and blog posts from 1 million online sources; the data span one year.
The second paper is based on several thousand emails, each recorded with its sender, receiver, time and size; the data span several months.
Other
The first paper comes up with an algorithm for finding the maximum-likelihood graph that explains the observed result.
The second paper proves its idea mathematically.
Both papers, especially the first, rely heavily on mathematics to reach their results.
Questions
1. How much time did you spend reading the (new, non-wikified) paper you summarized? 2.5 hours
2. How much time did you spend reading the old wikified paper? 1.5 hours
3. How much time did you spend reading the summary of the old paper? 0.25 hours
4. How much time did you spend reading background material? 0.25 hours
5. Was there a study plan for the old paper? if so, did you read any of the items suggested by the study plan? and how much time did you spend with reading them? Yes, there was a study plan; I spent 0.5 hours reading the suggested items.