Math Task
Research on mathematical task performance by large language models (LLMs) focuses on improving their accuracy and reasoning abilities, particularly on complex or ambiguous problems. Current efforts enhance prompting techniques, such as problem elaboration and red teaming, to improve context understanding and expose model weaknesses. These investigations apply various LLMs and reinforcement learning algorithms to optimize both model performance and the adaptive delivery of educational support. The ultimate goal is to develop more robust and reliable LLMs for mathematical reasoning, with implications for both automated problem-solving and personalized education.
Papers
June 10, 2024
March 13, 2024
February 24, 2024
December 30, 2023
November 13, 2023
May 20, 2023
April 11, 2023