
A hybrid language model based on a combination of N-grams and stochastic context-free grammars

Publisher
Association for Computing Machinery
Copyright
Copyright © 2004 by ACM Inc.
ISSN
1530-0226
DOI
10.1145/1034780.1034783

Abstract

In this paper, a hybrid language model is defined as the combination of a word-based n-gram, which is used to capture the local relations between words, and a category-based stochastic context-free grammar (SCFG) with a distribution of words into categories, which is defined to represent the long-term relations between these categories. The problem of unsupervised learning of an SCFG, both in General Format and in Chomsky Normal Form, by means of estimation algorithms is studied. Moreover, a bracketed version of the classical estimation algorithm based on the Earley algorithm is proposed. This paper also explores the use of SCFGs obtained from a treebank corpus as initial models for the estimation algorithms. Experiments on the UPenn Treebank corpus are reported. These experiments have been carried out in terms of the test set perplexity and the word error rate in a speech recognition experiment.
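The central idea described in the abstract, interpolating a word-based n-gram with a category-based model in which an SCFG scores category sequences and a lexical distribution maps categories back to words, can be sketched as follows. This is a minimal illustration under assumed names and values (lam, bigram, cat_trans, word_given_cat are all hypothetical); in the paper the category term comes from SCFG prefix probabilities, whereas here a simple category-transition table stands in for it.

```python
# Illustrative sketch of the hybrid combination idea (an assumption, not the
# paper's exact formulation): the probability of the next word is a linear
# interpolation of a word-based n-gram term and a category-based term.

from collections import defaultdict


class HybridLM:
    def __init__(self, lam=0.6):
        self.lam = lam                            # interpolation weight (assumed value)
        self.bigram = defaultdict(dict)           # bigram[prev_word][word]  -> P(word | prev_word)
        self.cat_trans = defaultdict(dict)        # cat_trans[prev_cat][cat] -> P(cat | prev_cat),
                                                  # stand-in for the SCFG's category probabilities
        self.word_given_cat = defaultdict(dict)   # word_given_cat[cat][word] -> P(word | cat)
        self.word_cat = {}                        # most likely category of each word

    def prob(self, word, prev_word, floor=1e-6):
        # Word-based n-gram term: local relations between words.
        p_ngram = self.bigram[prev_word].get(word, floor)
        # Category-based term: relations between categories, with words
        # generated from their categories (the role the SCFG plays in the paper).
        prev_cat = self.word_cat.get(prev_word, "UNK")
        p_cat = sum(self.cat_trans[prev_cat].get(c, floor) * pw.get(word, floor)
                    for c, pw in self.word_given_cat.items())
        return self.lam * p_ngram + (1.0 - self.lam) * p_cat


if __name__ == "__main__":
    lm = HybridLM(lam=0.6)
    lm.bigram["the"]["cat"] = 0.2
    lm.cat_trans["DET"]["NOUN"] = 0.7
    lm.word_given_cat["NOUN"]["cat"] = 0.1
    lm.word_cat["the"] = "DET"
    print(lm.prob("cat", "the"))   # interpolated probability of "cat" after "the"
```

In the paper the interpolation weight and the model parameters are estimated from data; the fixed toy values above are only meant to show how the two terms are combined.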

Journal

ACM Transactions on Asian Language Information Processing (TALIP), Association for Computing Machinery

Published: Jun 1, 2004
