Parallel Programming

Parallel programming improves computational efficiency by distributing work across multiple processors. Current research focuses heavily on automating parallelization: using large language models (LLMs) to generate parallel code and unit tests, and benchmarking the results against traditional optimizing compilers. This work addresses the cost and difficulty of manual parallelization, particularly in high-performance computing and deep learning, with the goal of faster, more scalable software. Research also explores optimized communication strategies for distributed systems, as well as parallel architectures adapted to diverse workloads such as neural radiance field rendering.
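As a concrete illustration of the core idea of distributing work across processors, here is a minimal sketch using Python's standard-library `concurrent.futures` module. The function names and the sum-of-squares workload are illustrative, not drawn from any of the surveyed papers.

```python
# Minimal task-parallelism sketch: split independent work across
# worker processes, then combine the partial results.
# All names here are illustrative examples, not from the source text.
from concurrent.futures import ProcessPoolExecutor


def partial_sum(chunk):
    # Each worker computes the sum of squares for its own chunk.
    return sum(x * x for x in chunk)


def parallel_sum_of_squares(values, workers=4):
    # Strided split into one chunk per worker; order of summation
    # does not matter because the partial sums are independent.
    chunks = [values[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    data = list(range(1_000))
    print(parallel_sum_of_squares(data))
```

Automated parallelization research targets exactly this kind of transformation: recognizing that the iterations are independent and rewriting a sequential loop into the distributed form above.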

Papers