Introduction—discrete event simulation


Communications of the ACM, Volume 33 (10) – Oct 1, 1990



Publisher
Association for Computing Machinery
Copyright
Copyright © 1990 by ACM Inc.
ISSN
0001-0782
DOI
10.1145/84537.214948

Abstract

This special section deals with the simulation of complex systems, such as computer, communications, and manufacturing systems, focusing specifically on stochastic discrete event simulation. Such systems are typically (but not always) modeled by a network of queues, in which jobs compete for the system's resources. For example, in an on-line computer database system, the jobs would represent transactions, and the system's resources would include processors, disks, main memory, data locks, etc. Since job arrival patterns and resource requirements are unpredictable, such systems are inherently stochastic (random). As these systems increase in complexity, it becomes increasingly difficult to build analytically tractable performance models; thus simulation, because of its versatility, often becomes the only viable analysis technique.

This issue contains articles that span a broad range of current topics in simulation, including efficient execution of simulation models on parallel processors; integration of real-time systems, artificial intelligence, and simulation for automated factory control; rapid prototyping and simulation of distributed systems; sensitivity analysis of simulation output; random number generation; and effective use of simulation in analyzing and improving the performance of actual systems.

Richard Fujimoto's article is a state-of-the-art survey of the execution of simulation models on parallel processors. Fujimoto describes why discrete event simulations have proven to be a difficult class of applications to parallelize. He then describes two basic parallelization approaches (conservative and optimistic) and recent experience with them, including his own results, in which significant speedups have been obtained on nontrivial problems using the optimistic “Time Warp” approach. He also provides a critique and assessment of the basic approaches to parallel simulation.

The article by Sanjay Jain, Karon Barber, and David Osterfeld describes a system for factory scheduling. The scheduling system obtains the status of the plant floor from the factory's real-time monitoring and control computer system. This information is fed into the scheduler, which generates a new schedule and sends it back to the factory control computer. The scheduler integrates a simulation model (which simulates backwards in time) with an expert system whose rules encode scheduling heuristics. The system is currently in use at a highly automated General Motors component production facility.

Alexander Dupuy, Jed Schwartz, Yechiam Yemini, and David Bacon's article describes NEST, a UNIX-based environment for prototyping, modeling, and simulating distributed systems. NEST has a graphical interface for building simulation models. In addition, users can program (or prototype) “node functions” that implement the system's control algorithms (e.g., routing protocols in a communications network). The node functions are linked into the simulation, allowing simulation (and debugging) of a system using its actual control logic. With only minor modification, the node functions can be decoupled from the simulation and used to build the actual system. The authors illustrate this combined use of prototyping and simulation with RIP, a simple routing protocol for IP (Internet Protocol) networks.
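To make the node-function idea concrete, the sketch below shows, in Python, how routing logic written against an injected transport could run unchanged under a simulator or over a real network. The interface here (`Node`, `on_message`, `transport.broadcast`) is invented purely for illustration; it is not NEST's actual interface.

```python
# A hypothetical node-function interface: the same control logic runs
# whether the transport underneath it is simulated or real.

class Node:
    """One network node running a RIP-style distance-vector protocol.
    The transport is injected, so this logic never needs to know
    whether it is executing inside a simulation."""

    def __init__(self, node_id, transport):
        self.node_id = node_id
        self.transport = transport              # simulated or real
        self.routes = {node_id: (node_id, 0)}   # dest -> (next hop, cost)

    def on_message(self, sender, route_vector):
        # Distance-vector update: route through a neighbor whenever
        # doing so makes a destination cheaper to reach.
        changed = False
        for dest, cost in route_vector.items():
            new_cost = cost + 1
            if dest not in self.routes or new_cost < self.routes[dest][1]:
                self.routes[dest] = (sender, new_cost)
                changed = True
        if changed:
            self.advertise()

    def advertise(self):
        # Send our current distance vector to all neighbors.
        vector = {dest: cost for dest, (_, cost) in self.routes.items()}
        self.transport.broadcast(self.node_id, vector)
```

In a simulation run, `broadcast` would schedule message-delivery events on a simulated clock; in deployment, the same call would write to sockets. Decoupling the control logic from the transport in this way is what allows a prototyped node function to be reused, essentially unchanged, in the real system.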
The authors also describe other features of NEST, including its ability to dynamically reconfigure the simulation model during execution, which permits the study of how the system reacts to changing conditions, such as link failures.

The article by Peter Glynn is concerned with sensitivity analysis of simulation output. He discusses a general-purpose method for using simulation to estimate the derivative of a performance measure, and provides explicit formulae for estimating derivatives for several broad classes of stochastic processes that typically arise in discrete event simulations.

Pierre L'Ecuyer's article is a timely survey of pseudorandom number generation. As L'Ecuyer explains, this topic has recently received renewed attention for a variety of reasons, including the proliferation of generators on microcomputers, the need for portable generators, the requirement for generators with very long periods (as machines get faster), and the application of pseudorandom number generators to cryptology. L'Ecuyer examines several classes of generators, outlines their theoretical and empirical properties, and discusses implementation issues as well.

The article by David Miller presents a case study illustrating the use of simulation modeling to analyze the performance of an IBM semiconductor manufacturing facility in Essex Junction, Vermont. Miller describes the relevant background and goals of the modeling study, the modeling approach, and model validation. He then describes the results of the simulation experiments, which included investigating a variety of line-loading and scheduling policies. Miller found that a lot-release policy that keeps a fixed amount of work-in-progress in the line could significantly reduce the lot turnaround time without reducing throughput, when compared to the policy that was in use in the facility. Miller then describes the specific changes, suggested by the modeling study, that were implemented in Essex Junction, and the corresponding improvement in the facility's efficiency.
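Miller's fixed-WIP finding is easy to illustrate with a toy model. The sketch below is a deliberately tiny single-machine caricature with exponential service times (nothing like the detail of Miller's model): it compares a push release policy running at 95% utilization against a policy that releases a new lot only when one completes, holding work-in-progress constant.

```python
import random

def simulate_push(n_lots, arrival_rate, mean_service, seed=1):
    """Push release: lots enter in a Poisson stream regardless of congestion."""
    rng = random.Random(seed)
    arrive = server_free = total_turnaround = done = 0.0
    for _ in range(n_lots):
        arrive += rng.expovariate(arrival_rate)
        start = max(arrive, server_free)           # wait for the machine
        done = start + rng.expovariate(1.0 / mean_service)
        server_free = done
        total_turnaround += done - arrive
    return total_turnaround / n_lots, n_lots / done

def simulate_fixed_wip(n_lots, wip_cap, mean_service, seed=2):
    """Fixed-WIP release: a new lot enters only when one leaves, so
    exactly wip_cap lots are in the line at all times."""
    rng = random.Random(seed)
    finish, t = [], 0.0
    for _ in range(n_lots):                        # machine is never starved
        t += rng.expovariate(1.0 / mean_service)
        finish.append(t)
    total_turnaround = 0.0
    for i in range(n_lots):
        # Lot i is released the moment lot i - wip_cap completes.
        release = 0.0 if i < wip_cap else finish[i - wip_cap]
        total_turnaround += finish[i] - release
    return total_turnaround / n_lots, n_lots / finish[-1]

if __name__ == "__main__":
    for name, (tt, thr) in [
        ("push", simulate_push(200_000, arrival_rate=0.95, mean_service=1.0)),
        ("fixed WIP", simulate_fixed_wip(200_000, wip_cap=5, mean_service=1.0)),
    ]:
        print(f"{name:>9}: turnaround {tt:7.2f}   throughput {thr:.3f}")
```

With these parameters the push line averages roughly 20 time units of turnaround at about 0.95 lots per unit time, while the fixed-WIP line averages about 5 with throughput near 1.0: shorter turnaround at no cost in throughput, which is the qualitative effect Miller observed.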

Journal

Communications of the ACM, Association for Computing Machinery

Published: Oct 1, 1990
