Hardware Design Optimization

Hardware design optimization seeks architectures that minimize latency, power consumption, and chip area while maximizing throughput. Current research emphasizes automated design space exploration driven by model-based methods, including machine learning techniques such as transformers and large language models, to accelerate the design process and tailor designs to specific workloads such as deep learning inference and graph neural networks. These advances improve the efficiency and scalability of systems ranging from embedded devices to high-performance computing, and drive progress in AI acceleration and energy-efficient computing.
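
To make the idea of model-based design space exploration concrete, the sketch below shows a minimal surrogate-guided search over a hypothetical accelerator configuration space. The parameter names (pe_count, buffer_kb, bus_width), the analytical cost model, and the nearest-neighbor surrogate are illustrative assumptions, not taken from any specific paper or tool; in practice the stand-in simulator would be a real RTL or cycle-accurate model and the surrogate a learned predictor.

```python
# Minimal sketch of model-based design space exploration (DSE) for a
# hypothetical accelerator. All parameter names and cost formulas are
# illustrative assumptions, not drawn from a real design flow.
import itertools
import random

# Hypothetical design space: processing elements, on-chip buffer, bus width.
DESIGN_SPACE = {
    "pe_count": [16, 32, 64, 128, 256],
    "buffer_kb": [64, 128, 256, 512],
    "bus_width": [64, 128, 256],
}

def simulate(cfg):
    """Stand-in for an expensive simulator: returns (latency, area) estimates."""
    # Illustrative analytical model: more PEs cut latency but grow area.
    latency = 1e6 / (cfg["pe_count"] * cfg["bus_width"] ** 0.5)
    area = cfg["pe_count"] * 0.1 + cfg["buffer_kb"] * 0.02 + cfg["bus_width"] * 0.05
    return latency, area

def cost(cfg, latency_weight=1.0, area_weight=1.0):
    """Scalarize the latency/area trade-off into a single objective."""
    latency, area = simulate(cfg)
    return latency_weight * latency + area_weight * area

def surrogate_score(cfg, observed):
    """Cheap surrogate: average cost of the most similar configs seen so far."""
    def distance(a, b):
        return sum(abs(a[k] - b[k]) / max(DESIGN_SPACE[k]) for k in a)
    nearest = sorted(observed, key=lambda item: distance(cfg, item[0]))[:3]
    return sum(c for _, c in nearest) / len(nearest)

def explore(budget=20, seed=0):
    """Surrogate-guided search: spend the simulation budget on promising configs."""
    rng = random.Random(seed)
    all_cfgs = [dict(zip(DESIGN_SPACE, values))
                for values in itertools.product(*DESIGN_SPACE.values())]
    # Seed the surrogate with a few random evaluations of the "real" simulator.
    observed = [(cfg, cost(cfg)) for cfg in rng.sample(all_cfgs, 5)]
    for _ in range(budget - 5):
        explored = [cfg for cfg, _ in observed]
        candidates = [c for c in all_cfgs if c not in explored]
        # Rank unexplored configs with the surrogate, then evaluate the best one.
        best_candidate = min(candidates, key=lambda c: surrogate_score(c, observed))
        observed.append((best_candidate, cost(best_candidate)))
    return min(observed, key=lambda item: item[1])

if __name__ == "__main__":
    best_cfg, best_cost = explore()
    print("best config:", best_cfg, "cost:", round(best_cost, 2))
```

The key design choice this sketch illustrates is spending a limited simulation budget where a cheap model predicts the best designs, rather than sweeping the space exhaustively; learned models (including the transformer- and LLM-based predictors mentioned above) play the surrogate role in current research.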

Papers