AI Ethics
AI ethics focuses on the responsible development and deployment of artificial intelligence systems, aiming to mitigate potential harms and promote beneficial societal impact. Current research emphasizes measuring and reducing bias in models such as large language models (LLMs) and text-to-image generators, building frameworks for fairness, transparency, and accountability, and aligning AI systems with human values through techniques such as reinforcement learning from human feedback (RLHF) and social choice theory. This rapidly evolving field is central to building trustworthy AI, shaping both scientific methodology and the practical application of AI across sectors.
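To make the fairness framing above concrete, the sketch below shows one common way such concerns are quantified in practice: comparing a classifier's positive-prediction rate and true-positive rate across demographic groups (demographic parity and equal opportunity gaps). This is a minimal illustration assuming a binary classifier and a binary protected attribute; the data, group labels, and function names are synthetic and hypothetical, not drawn from any specific work referenced on this page.

```python
# Minimal sketch of a group-fairness audit for a binary classifier.
# All data below is synthetic; the metric definitions are standard,
# but the setup is illustrative only.

import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def equal_opportunity_difference(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in true-positive rates (recall) between two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)   # hypothetical protected attribute
    y_true = rng.integers(0, 2, size=1000)  # hypothetical ground-truth labels
    y_pred = rng.integers(0, 2, size=1000)  # hypothetical model decisions

    print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity gap: ", equal_opportunity_difference(y_true, y_pred, group))
```

In practice, audits of this kind are run over many data slices and several metrics, and a nonzero gap is treated as a signal for further investigation rather than a definitive verdict on the system.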