
Generative AI + Law (GenLaw) ’23

We are very excited to announce the inaugural Workshop on Generative AI and Law (GenLaw ’23)! Please join us in Honolulu, Hawai’i at ICML ’23, where we’ll be bringing together experts in privacy, ML, policy, and law to discuss the intellectual property (IP) and privacy challenges that generative AI raises.

Robots reading on the beach, thanks to DALL-E

Rolling abstract submission deadline: 29 May 2023, AoE

Rolling decisions, with final decisions by: 19 June 2023

Workshop date: 29 July 2023

About GenLaw

Progress in generative AI depends not only on better model architectures, but on terabytes of scraped Flickr images, Wikipedia pages, Stack Overflow answers, and websites. In the process, generative models ingest vast quantities of intellectual property (IP), which they can memorize and regurgitate verbatim. Several recently filed lawsuits relate such memorization to copyright infringement. These lawsuits will lead to policies and legal rulings that define our ability, as ML researchers and practitioners, to acquire training data, and our responsibilities towards data owners and curators.

AI researchers will increasingly operate in a legal environment that is keenly interested in their work — an environment that may require future research into model architectures that conform to legal requirements. Understanding the law and contributing to its development will enable us to create safer, better, and practically useful models.

Our Workshop

We’re excited to share a series of tutorials from renowned experts in both ML and law, as well as panel discussions where researchers in both disciplines can engage in semi-moderated conversation.

Our workshop will begin to build a comprehensive and precise synthesis of the legal issues at play. Beyond IP, the workshop will also address privacy and liability for dangerous, discriminatory, or misleading and manipulative outputs. It will take place on 29 July 2023, in Ballroom B.

Call for Papers

→ Submit to CMT

The 1st Workshop on Generative AI and Law (GenLaw) is soliciting 1-2 page extended abstracts on recent developments in generative AI/ML and their legal implications, with a particular focus on implications for intellectual property (IP) and privacy law. Submissions should employ methods from AI/ML, law, or both.

Possible extended abstract formats include, but are not limited to, preliminary technical results, early-stage law review submissions, and position papers, which should provide novel perspectives and findings at the intersection of generative AI and law. Potential topics include:

Authors of accepted papers will present posters in person or on Zoom. We will also accept other presentation formats, since scholarship from some disciplines may not be well-suited to posters; for alternative options, we will provide sample templates. Additionally, some submissions will be accepted for a 3-minute spotlight. This workshop is non-archival to allow for future submission to other venues (archival or non-archival workshops, journals, conferences, etc.). We will host all accepted papers on the website, unless the authors request otherwise.

Please submit to CMT. We will review submissions on a rolling basis, in order to leave as much time as possible for the visa application process for authors who would need a visa to attend GenLaw ’23 in person. The submission window opens 4 May 2023, AoE and closes 29 May 2023, AoE. The submission form will contain a checkbox to indicate whether at least one author plans to attend GenLaw ’23 and would need a visa to do so. If this applies to your submission, please submit as early as possible to facilitate speedy review; we will prioritize reviewing submissions in this category and will provide rolling acceptance/rejection decisions (up until 19 June 2023).

Please anonymize your submission and respect a 2-page maximum using the ICML Template. We allow up to 2 additional pages for references. We will use a double-blind review process.

Please see our reviewer guidelines for more information.

Speakers, Panelists & Moderators

Pam Samuelson

Distinguished Professor of Law and Information, University of California, Berkeley

Website

Mark Lemley

Professor of Law, Stanford Law School

Website

Nicholas Carlini

Research Scientist, Google Brain

Website

Gautam Kamath

Assistant Professor, University of Waterloo

Website

Kristen Vaccaro

Assistant Professor, University of California, San Diego

Website

Luis Villa

Co-founder and General Counsel, Tidelift

Website

Miles Brundage

Head of Policy Research, OpenAI

Website

Jack M. Balkin

Professor, Yale Law School

Website

Organizer Information

Katherine Lee

Ph.D. Candidate, Cornell University Department of Computer Science

Website Google Scholar

Katherine’s work has provided essential empirical evidence and measurement for grounding discussions around concerns that language models, like Copilot, are infringing copyright, and about how language models can respect an individual’s right to privacy and control of their data. Additionally, she has proposed methods of reducing memorization. Her work has received recognition at ACL and USENIX.

A. Feder Cooper

Ph.D. Candidate, Cornell University Department of Computer Science

Website Google Scholar

Cooper studies how to make more reliable conclusions when using ML methods in practice. This work has thus far focused on empirically motivated, theoretically grounded problems in Bayesian inference, model selection, and deep learning. Cooper has published numerous papers at top ML conferences, interdisciplinary computing venues, and tech law journals. Much of this work has been recognized with spotlight and contributed talk awards. Cooper has also been recognized as a Rising Star in EECS (MIT, 2021).

Fatemehsadat Mireshghallah

Ph.D. Candidate, UC San Diego Computer Science and Engineering Department

Website Google Scholar

Fatemeh’s research aims at understanding learning and memorization patterns in large language models, probing these models for safety issues (such as bias), and providing tools to limit their leakage of private information. She is a recipient of the National Center for Women & IT (NCWIT) Collegiate Award in 2020 for her work on privacy-preserving inference, a finalist for the Qualcomm Innovation Fellowship in 2021, and a recipient of the 2022 Rising Star in Adversarial ML award. She was a co-chair of the NAACL 2022 conference and has been a co-organizer for numerous successful workshops, including Distributed and Private ML (DPML) at ICLR 2021, Federated Learning for NLP (FL4NLP) at ACL 2022, Private NLP at NAACL 2022, and Widening NLP at EMNLP 2021 and 2022.

Madiha Z. Choksi

Ph.D. Student, Cornell University Department of Information Science

Website

James Grimmelmann

Professor of Digital and Information Law, Cornell Law School and Cornell Tech

Website Google Scholar

James Grimmelmann is the Tessler Family Professor of Digital and Information Law at Cornell Tech and Cornell Law School. He studies how laws regulating software affect freedom, wealth, and power. He helps lawyers and technologists understand each other, applying ideas from computer science to problems in law and vice versa. He is the author of the casebook Internet Law: Cases and Problems and of over fifty scholarly articles and essays on digital copyright, content moderation, search engine regulation, online governance, privacy on social networks, and other topics in computer and Internet law. He organized the D is for Digitize conference in 2009 on the copyright litigation over the Google Book Search project, the In re Books conference in 2012 on the legal and cultural future of books in the digital age, and the Speed conference in 2018 on the implications of radical technology-induced acceleration for law, society, and policy.

David Mimno

Associate Professor, Cornell University Department of Information Science

Website Google Scholar

David Mimno builds models and methodologies that empower researchers outside NLP to use language technology. He was general chair of the 2022 Text As Data conference at Cornell Tech and organized a workshop on topic models at NeurIPS. His work spans education and the development of new language technology driven by the needs of non-expert users. He is chief developer of the popular Mallet toolkit and is currently co-PI on the NEH-sponsored BERT for Humanists project. His work has been supported by the Sloan Foundation and the NSF.

Deep Ganguli

Research Scientist, Anthropic

Website Google Scholar

Deep Ganguli leads the Societal Impacts team at Anthropic, which designs experiments to measure both the capabilities and harms of large language models. He is on the program committee at FAccT ’23, and was formerly the Research Director at the Stanford Institute for Human-Centered AI, where he designed several successful and well-attended multidisciplinary workshops aimed at bridging the gap between technologists and humanists. Prior to this he was a Science Program Officer at the Chan Zuckerberg Initiative, where he designed numerous workshops and conferences aimed at bringing together software engineers and neuroscientists to address pressing questions about neurodegenerative diseases.

Ludwig Schubert

Website Google Scholar

Contact us

Reach the organizers at:

Or, join our mailing list at: genlaw@groups.google.com