AI Ethics

AI ethics focuses on the responsible development and deployment of artificial intelligence systems, aiming to mitigate potential harms and promote beneficial societal impact. Current research addresses biases in models such as large language models (LLMs) and text-to-image generators; develops frameworks for fairness, transparency, and accountability; and explores methods for aligning AI systems with human values, including reinforcement learning from human feedback (RLHF) and social choice theory. This rapidly evolving field guides the development of trustworthy AI, shaping both scientific methodology and the practical application of AI across sectors.
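
As a concrete illustration of one fairness notion from this literature, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group labels are hypothetical, and this is a minimal sketch of one common metric, not a prescribed evaluation procedure.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from some classifier.
    group:  binary group membership (0/1) per example.
    A value near 0 suggests the classifier's positive predictions
    are distributed evenly across groups; larger values mean one
    group receives positive predictions more often.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions and group labels, for illustration only.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Demographic parity is only one of several competing fairness criteria (others include equalized odds and calibration), and they generally cannot all be satisfied at once, which is part of why fairness frameworks remain an active research area.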

Papers