AI Ethics
AI ethics focuses on ensuring the responsible development and deployment of artificial intelligence systems, aiming to mitigate potential harms and promote beneficial societal impact. Current research emphasizes addressing biases in models like large language models (LLMs) and text-to-image generators, developing frameworks for fairness, transparency, and accountability, and exploring methods for aligning AI systems with human values through techniques such as reinforcement learning from human feedback and social choice theory. This rapidly evolving field is crucial for guiding the development of trustworthy AI, influencing both scientific methodology and the practical application of AI across various sectors.
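One way to make a fairness framework concrete is to measure demographic parity: whether a model's positive-prediction rate is similar across groups. The sketch below computes the demographic parity difference for two groups; the function name and the toy predictions are illustrative, not drawn from any particular system or dataset.

```python
# Minimal sketch of one common fairness metric: the demographic parity
# difference, i.e. the gap in positive-prediction rates between groups.
# All names and data here are illustrative.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates across two groups.

    predictions: list of 0/1 model outputs.
    groups: list of group labels, one per prediction.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Example: group "a" receives a positive outcome 2/3 of the time,
# group "b" only 1/3 of the time -- a parity gap of about 0.33.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))
```

A gap of zero would mean both groups receive positive outcomes at the same rate; auditing tools typically report this alongside other metrics (equalized odds, calibration), since no single number captures fairness on its own.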