Hardware Design
Hardware design research currently focuses on improving efficiency and automation across domains ranging from AI accelerators to general-purpose processors. Key directions include novel architectures for neuro-symbolic AI, machine learning (particularly graph neural networks and large language models) for automated design and optimization, and efficient approximate computing techniques. These advances target speed, power consumption, and design complexity, with the goal of faster, more energy-efficient, and more reliable hardware for diverse applications.
Papers
QuArch: A Question-Answering Dataset for AI Agents in Computer Architecture
Shvetank Prakash, Andrew Cheng, Jason Yik, Arya Tschand, Radhika Ghosal, Ikechukwu Uchendu, Jessica Quaye, Jeffrey Ma, Shreyas Grampurohit, Sofia Giannuzzi, Arnav Balyan, Fin Amin, Aadya Pipersenia, Yash Choudhary, Ankita Nayak, Amir Yazdanbakhsh, Vijay Janapa Reddi
Dedicated Inference Engine and Binary-Weight Neural Networks for Lightweight Instance Segmentation
Tse-Wei Chen, Wei Tao, Dongyue Zhao, Kazuhiro Mima, Tadayuki Ito, Kinya Osa, Masami Kato
Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture
Zishen Wan, Che-Kai Liu, Hanchen Yang, Ritik Raj, Chaojian Li, Haoran You, Yonggan Fu, Cheng Wan, Sixu Li, Youbin Kim, Ananda Samajdar, Yingyan (Celine) Lin, Mohamed Ibrahim, Jan M. Rabaey, Tushar Krishna, Arijit Raychowdhury
Learning to Compare Hardware Designs for High-Level Synthesis
Yunsheng Bai, Atefeh Sohrabizadeh, Zijian Ding, Rongjian Liang, Weikai Li, Ding Wang, Haoxing Ren, Yizhou Sun, Jason Cong