EDITORIAL Integrating Neural and Symbolic Processes

Connection Science, Vol. 5, Nos. 3 & 4, 1993

The explosion of research in neural network models and its seepage into many real-world applications has led to the development of integrated architectures that combine neural and symbolic processes. These architectures have sometimes outperformed more conventional symbol-processing architectures. For example, one experimental system models international bond markets in order to allocate assets between bonds and cash. The system uses one neural network for each country, with each network trained to model the bond market for that country. In this financial forecasting operation, each neural network predicts bond returns one month ahead for the market it represents, passing the predictions to a software-based portfolio management system, i.e. a symbol-processing system. The integrated system beat the conventional one by 3.6 to 1 (Hammerstrom, 1993). What is interesting here is that the neural networks are modules within a larger system architecture.

The advantages of neural networks include adaptability (they can extract complex relationships from data), generalizability (they can correctly process data that is broadly similar to the data they were originally trained on), massive parallelism (they contain identical independent operations), noise tolerance (they can provide sensible answers in noisy environments) and graceful degradation (they degrade gradually in proportion to the amount of system damage); these make them very attractive for applications that need to cope with noise, imprecision and complexity. The advantages of symbol-processing systems include intelligibility, easy manipulation of abstract knowledge and a variety of methods for controlling information flow; these make them attractive for applications that require control, explanation and understanding.
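The bond-allocation example above can be sketched as a minimal hybrid pipeline: one small neural predictor per country feeds a rule-based (symbolic) allocator. Everything below — the country list, network sizes, input features and the allocation rule — is an illustrative assumption, not the actual system described by Hammerstrom (1993):

```python
import math
import random

class TinyNet:
    """A minimal one-hidden-layer network standing in for one country's
    bond-return predictor (untrained; weights are random for illustration)."""
    def __init__(self, n_inputs, n_hidden=4, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]

    def predict(self, x):
        # One-month-ahead return estimate from the country's feature vector.
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        return sum(w * h for w, h in zip(self.w2, hidden))

def allocate(predictions, threshold=0.0):
    """Symbolic stage: an explicit, inspectable rule -- hold bonds in markets
    whose predicted return beats the threshold, otherwise hold cash."""
    return {country: ("bonds" if r > threshold else "cash")
            for country, r in predictions.items()}

# Neural stage: one network per country, each fed that country's features.
countries = ["US", "UK", "JP"]
nets = {c: TinyNet(n_inputs=3, seed=i) for i, c in enumerate(countries)}
features = {"US": [0.01, 0.02, -0.005],
            "UK": [-0.02, 0.0, 0.01],
            "JP": [0.005, -0.01, 0.0]}
predictions = {c: nets[c].predict(features[c]) for c in countries}
portfolio = allocate(predictions)
```

The point of the sketch is the division of labour: the networks handle the noisy numerical mapping, while the allocation rule remains a symbolic component that can be read, explained and modified independently of the predictors.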
On the other hand, neural and symbol-processing systems have their disadvantages or weaknesses. Neural networks generate results that are sometimes difficult to explain or understand. The training of these networks is often a very time-consuming and difficult task, and is only now beginning to be understood. Likewise, symbol-processing systems generally perform poorly in the presence of noisy data and very poorly in the presence of novel data.

An architecture that can integrate both neural and symbolic processes can potentially provide greater explanatory power than either process considered in isolation. The knowledge that is incorporated into an artificial intelligence (AI) system has to be interpretable if it is to be used for a conceptual-level explanation or understanding, but not necessarily interpretable if used for other purposes. Thus, for some applications it is sufficient for this knowledge to be in some non-symbolic form, whereas in others the knowledge may have to be in symbolic form. An architecture that can shift between these two modes can potentially achieve greater explanatory power in that it handles both types of representation (and the processing thereof), and can adapt to various situations (e.g. if the data is incomplete or imperfect). Combining symbolic and neural models is potentially a promising way to deal with this issue.

Another important issue in AI is the problem of scalability. For an architecture to scale, a tremendous amount of knowledge of various sorts must somehow be placed into it. Combining neural and symbolic models may facilitate the incorporation of this knowledge and thus help to deal with the scalability issue.
Hendler (1989), in an earlier special issue of Connection Science, argues that hybrid architectures may be crucial to our gaining an understanding of the parameters and functions of biologically plausible cognitive models; the focus here, by contrast, is on the exploration of specific architectures and on the interaction between neural and symbolic processes embedded and/or integrated within these architectures. The papers in this issue are focused on the themes of representation and learning, and information extraction and prediction: architectural scalability (papers by Bookman; Mani & Shastri), tractable reasoning (Mani & Shastri), an exploration of the interactions between symbolic and neural representations (Giles & Omlin; Ueberla & Jagota), frameworks for exploring the interaction between symbolic and neural processing (Wilson & Hendler), approximate reasoning (Kasabov & Shishkov), knowledge refinement (Giles & Omlin; Mahoney & Mooney; Fletcher & Obradović), and knowledge acquisition (Bookman; Fletcher & Obradović; Giles & Omlin; Mahoney & Mooney). Applications such as text understanding, process control, knowledge-base revision and optical character recognition are presented.

Questions that the special issue addresses include: (1) What benefits do we achieve by integrating neural and symbolic architectures, and how do these compare with conventional architectures? (2) How do symbolic and neural learning schemes interact in integrated systems? (3) What technical problems do we need to solve, and what tools do we need to create, so that we can begin to explore, in a more principled way, the interaction between neural and symbolic processes? (4) What do these architectures tell us about cognition? To varying degrees, the papers in this issue present new ways of looking at the interaction between neural and symbolic processes that can potentially have a tremendous impact on real-world applications.
Integration permits more accurate classification and prediction, reduces training time, enables information checking, provides greater explanatory power, and provides guidance to efficiently extend and refine a knowledge base.

LAWRENCE A. BOOKMAN
Sun Microsystems, MA, USA

RON SUN
University of Alabama, AL, USA

References

Hammerstrom, D. (1993) Neural networks at work. IEEE Spectrum, 30, 26-32.
Hendler, J. (1989) On the need for hybrid systems. Connection Science, 1, 00-00.

Connection Science, Volume 5 (3-4), Jan 1, 1993
Publisher: Taylor & Francis
ISSN: 0954-0091 (print); 1360-0494 (online)
DOI: 10.1080/09540099308915699