William Yang Wang
Who am I and why am I here?
William is currently a first-year LTI PhD student.
He has broad interests in Computational Linguistics, Spoken Language Processing, Affective Computing, Spoken Dialog Systems, Question Answering, Multimodal Interaction, Machine Learning and Virtual Humans. He received his M.S. in Computer Science from Columbia University, focusing on Speech and Language Processing. Machine Learning/NLP is not always about simple binary classification or regression, and the SPLODD class will show us what lies beyond that.
- William has reviewed for Spoken Language Processing venues for several years, and he knows that speech is not just noisy text.
- William has published more than 10 papers in Text Mining, Information Retrieval, Emotional Speech, Natural Language Understanding, Computer-assisted Language Learning and Speech Synthesis. He knows that many people are doing similar things in their own fields, but describing them with different terminologies.
- Smith and Eisner 2008: Dependency parsing by belief propagation (Sept. writeup)
- Smith and Eisner 2005: Contrastive Estimation: Training Log-Linear Models on Unlabeled Data (Sept. writeup)
- Stoyanov et al. 2011: Empirical Risk Minimization of Graphical Model Parameters Given Approximate Inference, Decoding, and Model Structure (Oct. writeup)
- Reisinger et al. 2010: Spherical Topic Models (new! Nov. writeup)
- Eisenstein et al. 2011: Sparse Additive Generative Models of Text (new! Nov. writeup)