Florencia Reali and Thomas L. Griffiths, Words as alleles: connecting language evolution with Bayesian learners to models of genetic drift, Proceedings of the Royal Society B: Biological Sciences, 2010
Revision as of 21:03, 29 March 2011
Citation
Florencia Reali and Thomas L. Griffiths, Words as alleles: connecting language evolution with Bayesian learners to models of genetic drift, Proceedings of the Royal Society B: Biological Sciences, 2010
Online version
Summary
This paper addresses the problem of language evolution by relating it to models of genetic drift in biological evolution. Although the mechanisms of biological and language evolution are very different (biological traits are transmitted via genes, while language is transmitted via learning), the paper shows that these different mechanisms can produce the same dynamics. Specifically, it demonstrates that iterated transmission of frequency distributions over linguistic variants by Bayesian learners yields the same dynamics as the Wright-Fisher model of genetic drift.
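As a rough illustration of the Wright-Fisher side of this correspondence, the sketch below simulates neutral drift by resampling a finite population of word variants each generation. The variant names and population size are invented toy values; this is a sketch of the classical Wright-Fisher model, not the paper's Bayesian-learner formulation.

```python
import random

def wright_fisher_step(freqs, population_size):
    """One generation of neutral drift: resample the whole population
    from the current variant frequencies (multinomial sampling)."""
    variants = list(freqs)
    weights = [freqs[v] for v in variants]
    draws = random.choices(variants, weights=weights, k=population_size)
    return {v: draws.count(v) / population_size for v in variants}

# Two competing word variants, initially equally frequent.
random.seed(1)
freqs = {"variant_a": 0.5, "variant_b": 0.5}
for _ in range(100):
    freqs = wright_fisher_step(freqs, population_size=50)
# With no selection, frequencies wander randomly from generation to
# generation, and in a finite population one variant eventually
# drifts to fixation (frequency 1.0).
```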
Description of the method
In the HMM model for segmenting a document, the document is treated as a sequence of mutually independent sets of words. Each set is probabilistically generated by a hidden topic variable in a time series, and transition probabilities determine the value of the next hidden topic variable in the series.
The generative process is as follows: choose a topic z, then generate a set of L independent words w from the distribution over words associated with that topic, then choose the next topic from the distribution of allowed transitions between topics. Given an unsegmented document, the most likely sequence of topics generating the observed L-word sets is computed using the Viterbi algorithm. Topic breaks occur at points where two consecutive topics differ.
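The generative process above can be sketched as follows. The topic names, word distributions, and transition probabilities are invented toy values, not taken from the paper:

```python
import random

random.seed(0)

# Hypothetical parameters: two topics, per-topic word distributions,
# and topic-to-topic transition probabilities.
word_dists = {
    "sports": {"game": 0.5, "team": 0.3, "score": 0.2},
    "finance": {"stock": 0.5, "market": 0.3, "trade": 0.2},
}
transitions = {
    "sports": {"sports": 0.8, "finance": 0.2},
    "finance": {"finance": 0.8, "sports": 0.2},
}

def generate(num_windows, window_size=4, start="sports"):
    """Generate (topic, L-word set) pairs from the topic HMM:
    emit L words from the current topic, then transition."""
    topic, output = start, []
    for _ in range(num_windows):
        dist = word_dists[topic]
        words = random.choices(list(dist), weights=list(dist.values()),
                               k=window_size)
        output.append((topic, words))
        nxt = transitions[topic]
        topic = random.choices(list(nxt), weights=list(nxt.values()), k=1)[0]
    return output

doc = generate(5)
# Topic breaks fall wherever consecutive windows carry different topics.
breaks = [i for i in range(1, len(doc)) if doc[i][0] != doc[i - 1][0]]
```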
The drawback of the HMM method is the Naive Bayes assumption that words within each L-word set are conditionally independent given a topic:

P(w_1, ..., w_L | z) = P(w_1 | z) P(w_2 | z) ... P(w_L | z)
This assumption works better as L becomes large. However, the larger L becomes, the coarser (less precise) the segmentation.
The aspect HMM segmentation model does away with this Naive Bayes assumption of conditional independence of words by adding a probability distribution (an aspect model) over pairs of discrete random variables: in this case, the pair consists of the L-word window of observation and a word. The L-word window of observation is not represented as the set of its words but simply as a label that identifies it; it is associated with its corresponding set of words through the window-word pairs. With this aspect model, the occurrence of an observation window o and a word w are independent of each other given a hidden topic variable z:

P(o, w) = sum_z P(z) P(o | z) P(w | z)
The paper uses Expectation Maximization and Bayes' law to estimate the parameters P(z), P(o | z) and P(w | z).
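A minimal sketch of EM for such an aspect model, assuming the standard PLSA-style E and M steps rather than the paper's exact estimation procedure; the function name and data layout are invented for illustration:

```python
import random
from collections import defaultdict

def plsa_em(counts, num_topics, iterations=30):
    """EM for the aspect model. `counts` maps (window, word) -> frequency.
    Returns P(z), P(o|z), P(w|z) as a list and two lists of dicts."""
    windows = sorted({o for o, _ in counts})
    words = sorted({w for _, w in counts})
    zs = range(num_topics)
    # Random positive initialization of the three parameter families.
    p_z = [1.0 / num_topics] * num_topics
    p_o_z = [{o: random.random() + 0.1 for o in windows} for _ in zs]
    p_w_z = [{w: random.random() + 0.1 for w in words} for _ in zs]
    for table in p_o_z + p_w_z:
        total = sum(table.values())
        for key in table:
            table[key] /= total
    for _ in range(iterations):
        # E-step: posterior P(z | o, w) for every observed pair.
        post = {}
        for o, w in counts:
            joint = [p_z[z] * p_o_z[z][o] * p_w_z[z][w] for z in zs]
            norm = sum(joint)
            post[o, w] = [j / norm for j in joint]
        # M-step: re-estimate P(z), P(o|z), P(w|z) from expected counts.
        new_z = [0.0] * num_topics
        new_o = [defaultdict(float) for _ in zs]
        new_w = [defaultdict(float) for _ in zs]
        for (o, w), n in counts.items():
            for z in zs:
                c = n * post[o, w][z]
                new_z[z] += c
                new_o[z][o] += c
                new_w[z][w] += c
        total = sum(new_z)
        p_z = [c / total for c in new_z]
        p_o_z = [{o: new_o[z][o] / new_z[z] for o in windows} for z in zs]
        p_w_z = [{w: new_w[z][w] / new_z[z] for w in words} for z in zs]
    return p_z, p_o_z, p_w_z

# Toy co-occurrence counts over two windows and four word types.
random.seed(0)
counts = {("o1", "stock"): 3, ("o1", "market"): 2,
          ("o2", "game"): 3, ("o2", "team"): 2}
p_z, p_o_z, p_w_z = plsa_em(counts, num_topics=2)
```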
Given an unsegmented document, the aspect HMM divides its words into observation windows of size L and runs the Viterbi algorithm to find the most likely sequence of hidden topics generating the document. Segmentation breaks occur where the topic of one window differs from that of the next.
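The Viterbi step above can be sketched as follows. The emission function is left abstract (the caller supplies log P(window | topic)), and the toy example at the bottom uses invented probabilities:

```python
import math

def viterbi_segment(windows, topics, log_emit, log_trans, log_init):
    """Viterbi over topic sequences; a segmentation break is placed
    wherever the best path switches topics between adjacent windows.
    log_emit(topic, window) -> log P(window | topic)."""
    # delta[t][z]: best log-prob of any topic path ending in z at window t.
    delta = [{z: log_init[z] + log_emit(z, windows[0]) for z in topics}]
    back = []
    for t in range(1, len(windows)):
        row, ptr = {}, {}
        for z in topics:
            prev = max(topics, key=lambda q: delta[-1][q] + log_trans[q][z])
            row[z] = (delta[-1][prev] + log_trans[prev][z]
                      + log_emit(z, windows[t]))
            ptr[z] = prev
        delta.append(row)
        back.append(ptr)
    # Backtrace the single best topic path.
    path = [max(topics, key=lambda z: delta[-1][z])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    breaks = [t for t in range(1, len(path)) if path[t] != path[t - 1]]
    return path, breaks

# Toy example: windows are labels that match their true topic.
topics = ["A", "B"]
log_init = {"A": math.log(0.5), "B": math.log(0.5)}
log_trans = {"A": {"A": math.log(0.9), "B": math.log(0.1)},
             "B": {"A": math.log(0.1), "B": math.log(0.9)}}

def log_emit(z, window):
    return math.log(0.9) if window == z else math.log(0.1)

path, breaks = viterbi_segment(["A", "A", "B", "B"], topics,
                               log_emit, log_trans, log_init)
# path recovers ["A", "A", "B", "B"]; the single break falls at index 2.
```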
Datasets used
The aspect HMM segmentation model is applied to two corpora:
- A corpus of SpeechBot transcripts from All Things Considered (ATC), a daily news program on National Public Radio. This dataset consists of 4,917 segments with 35,777 word types and about 4 million word tokens. The word error rate in this corpus is estimated to be in the 20% to 50% range.
- A corpus of 3,830 articles from the New York Times (NYT) consisting of about 4 million word tokens and 70,792 word types.
The aspect HMM is trained with 20 hidden topics in the experiments.
Experimental Results
Three variants of the two corpora are used in the experiments:
- random sequences of segments from the ATC corpus
- random sequences of segments from the NYT corpus
- actual aired sequences of ATC segments (in these audio transcripts, clear demarcations of segmentation breaks are not explicitly given; this is the primary problem the paper is trying to tackle)
The paper uses co-occurrence agreement probability (CoAP) to quantitatively evaluate the segmentations produced by the model. In short, CoAP measures how often the hypothesized segmentation agrees with the reference about whether two words that are k words apart belong to the same segment.
A useful interpretation of the CoAP is through its complement:

1 - CoAP = p(miss) p(seg) + p(false alarm) (1 - p(seg))

where p(seg) is the a priori probability of a segmentation break, p(miss) is the probability of missing a break, and p(false alarm) is the probability of hypothesizing a break where there is none.
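A minimal sketch of the CoAP computation, assuming segmentations are represented by assigning a segment id to every word position (a representational choice not specified in the text):

```python
def coap(reference, hypothesis, k):
    """Co-occurrence agreement probability at distance k.
    `reference` and `hypothesis` assign a segment id to each word
    position; the two segmentations agree on a pair (i, i+k) when
    both say "same segment" or both say "different segments"."""
    n = len(reference)
    agree = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        agree += same_ref == same_hyp
    return agree / (n - k)

# A perfect hypothesis scores 1.0; 1 - CoAP is the error that the
# decomposition above splits into misses and false alarms.
ref = [0, 0, 0, 1, 1, 1]
hyp = [0, 0, 1, 1, 1, 1]  # break hypothesized one position too early
score = coap(ref, hyp, k=2)
# Of the four pairs at distance 2, the segmentations agree on two,
# so score = 0.5.
```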
On the random sequences of segments, the model performs better (i.e. produces better segmentations) on the NYT randomized segments than on the ATC randomized segments, probably because NYT is a cleaner, more error-free corpus than ATC. The result on the actual aired ATC sequence is worse than on either randomized test set.
When comparing the performance of the aspect HMM (AHMM) to the HMM on segmenting the NYT corpus, the AHMM outperforms HMM segmentation for small window widths. As the window size increases, the HMM does increasingly well, since counting all words equally in larger windows better satisfies the HMM's Naive Bayes assumption of mutual independence between words. However, as window size increases, the precision of the segmenter also decreases due to coarser segmentation.
Discussion
The novelty of the paper lies in its addition of an aspect model to the HMM for segmenting documents. This removes the HMM's naive assumption that words are generated independently given the hidden topic variable: words are instead generated from the selected hidden topic via the aspect model.
However, one possible drawback of the paper is its very coarse approximation to the probability distribution over observation windows o. The Viterbi algorithm requires an observation probability at each time step. While the HMM uses its Naive Bayes assumption to compute this distribution, the AHMM can only compute conditional probabilities for observation windows it was exposed to during training; at test time, an observation window may never have been seen before. The paper therefore uses an online approximation to EM that refines the window's probability estimate recursively as more of its words are seen. Words at the beginning of the window are weighted more heavily than words towards the end, so as window size increases, additional words have less impact on the observation distribution and the segmenter does not perform as well.
Another possible drawback is that the AHMM does not model topic breaks explicitly; breaks are implicitly assumed wherever two adjacent windows have different topic variables. This lack of explicit modeling of topic breaks is possibly what causes the model's tendency to undersegment, as indicated by the high probabilities of missed breaks in the experiments. In the future, direct modeling of topic breaks could be explored. Overlapping windows might also improve the precision of the segmentation, and automatically assigning labels to each segment (i.e. topic labeling) is another interesting future direction.
Related Papers
Unlike this paper, which defines a 'neutral' model of how languages evolve in the absence of selection at the level of linguistic variants (i.e. only by virtue of being transmitted from one learner to another), other recent computational work has focused on the role of selective forces or directed mutation at the level of linguistic variants:

- Komarova, N. L. & Nowak, M. A. 2001 Natural selection of the critical period for language acquisition. Proc. R. Soc. Lond. B 268, 1189-1196
- Christiansen, M. H. & Chater, N. 2008 Language as shaped by the brain. Behav. Brain Sci. 31, 489-558