Naive Bayes classifier learning

From Cohen Courses

This is a method discussed in Social Media Analysis 10-802 in Spring 2010 and Social Media Analysis 10-802 in Spring 2011.
 
== Background ==
 
A Naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem (from Bayesian statistics) with strong (naive) independence assumptions.  
 
[[File:f1.PNG]]
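
In symbols, Bayes' theorem for a class <math>c</math> given observed data <math>d</math> is:

<math>P(c \mid d) = \frac{P(d \mid c)\,P(c)}{P(d)}</math>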
 
 
Here, we use an example from sentiment analysis on Twitter messages. Let s be the sentiment label and M the Twitter message. If we assume there are equal numbers of positive, negative, and neutral messages, the prior is uniform and the equation simplifies to:

[[File:f2.PNG]]
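
Since a uniform prior makes <math>P(s)</math> identical for every sentiment, and <math>P(M)</math> does not depend on <math>s</math>, the posterior is proportional to the likelihood:

<math>P(s \mid M) \propto P(M \mid s)</math>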
  
 
If we rewrite M as a set of features G and assume the features are conditionally independent given the sentiment, we have:

[[File:f3.PNG]]
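
That is, with features <math>g \in G</math> assumed conditionally independent given the sentiment, the likelihood factorizes into per-feature terms:

<math>P(s \mid M) \propto \prod_{g \in G} P(g \mid s)</math>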
 
 
  
 
Finally, taking the logarithm (which preserves the ranking of sentiments and avoids numerical underflow), we have the log-likelihood of each sentiment:

[[File:f4.PNG]]
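
Equivalently, the logarithm turns the product into a sum:

<math>L(s) = \sum_{g \in G} \log P(g \mid s)</math>

The whole procedure fits in a few lines of code. The following Python is a minimal sketch of the derivation above, assuming unigram (bag-of-words) features, add-one smoothing, and a uniform prior; the function names and toy data are illustrative, not taken from the course materials.

<pre>
import math
from collections import Counter, defaultdict

def train(labeled_messages):
    """labeled_messages: iterable of (message_text, sentiment_label) pairs."""
    word_counts = defaultdict(Counter)  # per-sentiment word frequencies
    vocab = set()
    for text, label in labeled_messages:
        tokens = text.lower().split()
        word_counts[label].update(tokens)  # counts used to estimate P(g|s)
        vocab.update(tokens)
    return word_counts, vocab

def log_likelihood(tokens, label, word_counts, vocab):
    """L(s): sum of log P(g|s) over the features g, with add-one smoothing."""
    total = sum(word_counts[label].values())
    return sum(
        math.log((word_counts[label][t] + 1) / (total + len(vocab)))
        for t in tokens
    )

def classify(text, word_counts, vocab):
    """Pick the sentiment with the highest log-likelihood (uniform prior)."""
    tokens = text.lower().split()
    return max(word_counts,
               key=lambda s: log_likelihood(tokens, s, word_counts, vocab))

data = [("i love this movie", "positive"),
        ("this is terrible", "negative"),
        ("it is a movie", "neutral")]
counts, vocab = train(data)
print(classify("i love it", counts, vocab))  # prints: positive
</pre>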
  
 
== Applications ==
 
It is widely used in information retrieval and information extraction, for example in Document Categorization, Text Classification, and many related problems.
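
As a minimal sketch of how such an application looks in practice, the snippet below uses scikit-learn's CountVectorizer and MultinomialNB; the documents and labels are toy examples, not drawn from the datasets cited below.

<pre>
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["great movie, loved it",
        "awful film, waste of time",
        "it is a film"]
labels = ["positive", "negative", "neutral"]

# CountVectorizer builds bag-of-words features; MultinomialNB estimates
# smoothed per-class feature probabilities.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["what a great film"]))  # likely: ['positive']
</pre>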

== Relevant Papers ==

{| class="wikitable"
! Paper !! AddressesProblem !! UsesDataset
|-
| Pang et al EMNLP 2002 || Review classification || Pang Movie Reviews
|}