TY - JOUR
AU - Liu, Xin
AB - In this report, we propose Triton-distributed, an extension of the existing Triton compiler, to overcome the programming challenges in distributed AI systems. Triton-distributed is the first compiler that supports native overlapping optimizations for distributed AI workloads, providing good coverage of existing optimizations from different frameworks. First, we integrate communication primitives compliant with the OpenSHMEM standard into the compiler, enabling programmers to use these primitives through a higher-level Python programming model. Second, we illustrate how to achieve complex joint optimization of computation, memory access, and communication with the assistance of the compiler. In particular, we show how to use overlapping techniques to hide latency and present our compiler-based programming methods in both single-node and multi-node scenarios. Finally, we showcase the performance of the code generated by our compiler. In a test environment with up to 64 devices, our compiler can fully utilize heterogeneous communication and computation resources to provide effective overlapping and high performance. In many cases, the generated code can even outperform hand-optimized code. Moreover, the development difficulty and time cost of using our compiler are far lower than those of low-level programming such as CUDA/C++, which clearly demonstrates significant productivity advantages.
TI - Triton-distributed: Programming Overlapping Kernels on Distributed AI Systems with the Triton Compiler
JF - Computing Research Repository
DO - 10.48550/arxiv.2504.19442
DA - 2025-06-05
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/triton-distributed-programming-overlapping-kernels-on-distributed-ai-mlG6khM1C4
VL - 2025
IS - 2504
DP - DeepDyve
ER -