Large‐Scale Modeling of Wordform Learning and Representation

Publisher: Wiley
Copyright: 2008 Cognitive Science Society, Inc.
ISSN: 0364-0213
eISSN: 1551-6709
DOI: 10.1080/03640210802066964

Abstract

The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based codes. Then, two large‐scale simulations are reported that learned to represent lexicons of either phonological or orthographic wordforms. In doing so, the models learned the statistics of their lexicons as shown by better processing of well‐formed pseudowords as opposed to ill‐formed (scrambled) pseudowords, and by accounting for variance in well‐formedness ratings. It is discussed how the sequence encoder may be integrated into broader models of lexical processing.
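The abstract describes the sequence encoder as a connectionist architecture that learns fixed wordform representations from strings of stress-marked phonemes or letters. As a rough illustration only (not the authors' implementation), the sketch below assumes a simple recurrent-network autoencoder: an encoder RNN compresses a symbol string into a fixed-width code, and a decoder RNN seeded by that code is trained to reproduce the string. All names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    """Sketch: encode a symbol string into a fixed-width vector and decode it back."""

    def __init__(self, n_symbols: int, emb: int = 32, hidden: int = 200, code: int = 100):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, emb)
        self.encoder = nn.RNN(emb, hidden, batch_first=True)  # simple (Elman-style) RNN
        self.to_code = nn.Linear(hidden, code)                 # fixed-width wordform code
        self.from_code = nn.Linear(code, hidden)
        self.decoder = nn.RNN(emb, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_symbols)

    def forward(self, symbols: torch.Tensor):
        # symbols: (batch, seq_len) integer ids for stress-marked phonemes or letters
        x = self.embed(symbols)
        _, h_enc = self.encoder(x)                             # h_enc: (1, batch, hidden)
        code = torch.tanh(self.to_code(h_enc[-1]))             # the learned representation
        # Decoder re-reads the input symbols (teacher forcing) from a state seeded by the code
        h0 = torch.tanh(self.from_code(code)).unsqueeze(0)
        out, _ = self.decoder(x, h0)
        logits = self.readout(out)                             # (batch, seq_len, n_symbols)
        return logits, code


# Toy usage: reconstruct random symbol strings (50-symbol inventory assumed)
model = SequenceAutoencoder(n_symbols=50)
batch = torch.randint(1, 50, (8, 7))                           # 8 words, 7 symbols each
logits, codes = model(batch)
loss = nn.functional.cross_entropy(logits.reshape(-1, 50), batch.reshape(-1))
loss.backward()
```

In a setup like this, the `code` vector would serve as the wordform representation, and reconstruction accuracy on well-formed versus scrambled pseudowords would index how well the network has absorbed the statistics of its training lexicon, in the spirit of the evaluations the abstract reports.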

Journal

Cognitive Science - A Multidisciplinary Journal, Wiley

Published: Jun 1, 2008
