
Automatic feedback-directed object fusing




Publisher
Association for Computing Machinery
Copyright
The ACM Portal is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.
Subject
Memory management (garbage collection)
ISSN
1544-3566
DOI
10.1145/1839667.1839669

Abstract

Object fusing is an optimization that embeds certain referenced objects into their referencing object. The order of objects on the heap is changed in such a way that objects that are accessed together are placed next to each other in memory. Their offset is then fixed, that is, the objects are colocated, allowing field loads to be replaced by address arithmetic. Array fusing specifically optimizes arrays, which are frequently used to implement dynamic data structures. Because the length of such arrays often varies, the fields referencing them have to be updated; an efficient code pattern detects these changes and still allows optimized access to such fields. We integrated these optimizations into Sun Microsystems' Java HotSpot™ VM. The analysis is performed automatically at runtime, requires no action by the programmer, and supports dynamic class loading. To safely eliminate a field load, the colocation of the object that holds the field and the object that is referenced by the field must be guaranteed. Two preconditions must be satisfied: the objects must be allocated at the same time, and the field must not be overwritten later. These preconditions are checked by the just-in-time compiler to avoid an interprocedural data flow analysis. The garbage collector ensures that groups of colocated objects are not split by copying groups as a whole. The evaluation shows that the dynamic approach successfully identifies and optimizes frequently accessed fields for several benchmarks with low compilation and analysis overhead. It leads to a speedup of up to 76% for simple benchmarks and up to 6% for complex workloads.
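
To make the targeted access pattern concrete, here is a minimal Java sketch. It is illustrative only; the class and method names are not taken from the paper. It shows a container whose backing array is allocated together with its holder, satisfying the allocation precondition for colocation, and later replaced when it grows, which is the kind of field change the paper's code pattern is designed to detect.

// Hypothetical example (not from the paper): the kind of class that
// object/array fusing targets. The backing array is allocated together
// with its holder, so the VM can colocate the two objects and replace
// loads of "elements" with address arithmetic; grow() later overwrites
// the field, which is the change the code pattern detects.
public final class IntBuffer {
    private int[] elements;   // frequently loaded field, a fusing candidate
    private int size;

    public IntBuffer() {
        // Holder and array are allocated at the same time, satisfying
        // the first colocation precondition at this allocation site.
        this.elements = new int[4];
    }

    public void add(int value) {
        if (size == elements.length) {
            grow();                   // replaces the referenced array
        }
        elements[size++] = value;     // hot field load the optimization targets
    }

    public int get(int index) {
        return elements[index];       // load of "elements" could be folded away
    }

    private void grow() {
        int[] larger = new int[elements.length * 2];
        System.arraycopy(elements, 0, larger, 0, elements.length);
        elements = larger;            // field overwritten; detected at access sites
    }
}

Under the scheme described in the abstract, loads of the elements field in add() and get() could be replaced by address arithmetic as long as the originally colocated array is still installed; once grow() replaces it, the change-detecting code pattern falls back to an ordinary field load.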

Journal

ACM Transactions on Architecture and Code Optimization (TACO), Association for Computing Machinery

Published: Sep 1, 2010
