Paper ID: 2403.15397
Regulating Large Language Models: A Roundtable Report
Gabriel Nicholas, Paul Friedl
On July 20, 2023, a group of 27 scholars and digital rights advocates with expertise in law, computer science, political science, and other disciplines gathered for the Large Language Models, Law and Policy Roundtable, co-hosted by the NYU School of Law's Information Law Institute and the Center for Democracy & Technology. The roundtable convened to discuss how law and policy can help address some of the larger societal problems posed by large language models (LLMs). The discussion focused on three policy topic areas in particular:

1. Truthfulness: What risks do LLMs pose in terms of generating mis- and disinformation? How can these risks be mitigated from a technical and/or regulatory perspective?

2. Privacy: What are the biggest privacy risks involved in the creation, deployment, and use of LLMs? How can these risks be mitigated from a technical and/or regulatory perspective?

3. Market concentration: What threats do LLMs pose concerning market/power concentration? How can these risks be mitigated from a technical and/or regulatory perspective?

In this paper, we provide a detailed summary of the day's proceedings. We first recap what we deem to be the most important contributions made during the issue framing discussions. We then provide a list of potential legal and regulatory interventions generated during the brainstorming discussions.
Submitted: Feb 16, 2024