Naive Bayes classifier learning

This is a [[category::method]] discussed in [[Social Media Analysis 10-802 in Spring 2010]] and [[Social Media Analysis 10-802 in Spring 2011]].

== Background ==

A Naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem (from Bayesian statistics) with strong (naive) independence assumptions.

[[File:f1.PNG]]
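
The equation image is not reproduced here; f1.PNG presumably shows Bayes' theorem, written in the notation introduced below (sentiment s, message M):

<math>P(s \mid M) = \frac{P(M \mid s)\,P(s)}{P(M)}</math>
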
Here we use an example from sentiment analysis of Twitter messages: let s be the sentiment label and M the Twitter message. If we assume equal numbers of positive, negative, and neutral messages, the prior P(s) is uniform and the equation simplifies to:
[[File:f2.PNG]]
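
With a uniform prior P(s), and P(M) constant across sentiments, f2.PNG presumably reduces Bayes' rule to:

<math>P(s \mid M) \propto P(M \mid s)</math>
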
If we rewrite M as a set of features G and assume those features are conditionally independent given the sentiment, we have:
[[File:f3.PNG]]
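
Under the conditional-independence assumption, f3.PNG is presumably the product of per-feature likelihoods:

<math>P(s \mid M) \propto \prod_{g \in G} P(g \mid s)</math>
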
Finally, taking logarithms (which avoids numerical underflow from multiplying many small probabilities), we obtain the log-likelihood of each sentiment:
[[File:f4.PNG]]
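
Taking the logarithm of the product above, f4.PNG is presumably:

<math>L(s) = \sum_{g \in G} \log P(g \mid s)</math>

and the classifier returns the sentiment with the highest L(s).
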
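To make the decision rule concrete, here is a minimal Python sketch of the classifier described above. It is an illustration under stated assumptions: the toy messages, the token-set features, and the add-one (Laplace) smoothing are ours, not part of the original page.

<pre>
# Minimal sketch of the Naive Bayes decision rule above, on toy
# Twitter-style sentiment data. The training messages, the token-set
# features, and the add-one smoothing are illustrative assumptions.
from collections import Counter, defaultdict
from math import log

# Equal numbers of positive, negative, and neutral messages,
# matching the article's uniform-prior assumption.
TRAIN = [
    ("i love this phone", "positive"),
    ("great game today", "positive"),
    ("i hate mondays", "negative"),
    ("this movie is terrible", "negative"),
    ("the meeting is at noon", "neutral"),
    ("the bus arrives at five", "neutral"),
]

def features(message):
    """Rewrite a message M into its feature set G (here: unique tokens)."""
    return set(message.lower().split())

counts = defaultdict(Counter)  # counts[s][g]: messages of class s containing g
class_sizes = Counter()        # class_sizes[s]: number of messages of class s
vocab = set()
for message, s in TRAIN:
    class_sizes[s] += 1
    for g in features(message):
        counts[s][g] += 1
        vocab.add(g)

def log_likelihood(message, s):
    """L(s) = sum over g in G of log P(g | s), with add-one smoothing."""
    return sum(
        log((counts[s][g] + 1) / (class_sizes[s] + len(vocab)))
        for g in features(message)
    )

def classify(message):
    """Return the sentiment with the highest log-likelihood (uniform prior)."""
    return max(class_sizes, key=lambda s: log_likelihood(message, s))

print(classify("i love this game"))   # -> positive
print(classify("i hate this movie"))  # -> negative
</pre>

The smoothing is needed because an unseen feature would otherwise make P(g | s) zero and the log-likelihood undefined.
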
== Applications ==

Naive Bayes classifiers are widely used in information retrieval and information extraction, for example in document categorization, text classification, and many related problems.

== Relevant Papers ==

{| class="wikitable"
! Paper !! AddressesProblem !! UsesDataset
|-
| Pang et al EMNLP 2002 || Review classification || Pang Movie Reviews
|}