About

Private ML @ ICLR 2024

Recent advances in artificial intelligence greatly benefit from data-driven machine learning methods that train deep neural networks on large-scale data. The use of such data should be responsible, transparent, and compliant with privacy regulations. This workshop aims to bring together industry and academic researchers, privacy regulators, and legal and policy experts for a conversation on privacy research. We hope to (re)visit major privacy considerations from both technical and non-technical perspectives through interdisciplinary discussions. Topics of interest include, but are not limited to, the following:


  • Relationship of privacy regulation (such as GDPR, DMA) to machine learning
  • Interpretation and explanation of data privacy
  • Efficient methods for privacy preserving machine learning
  • Federated learning for data minimization
  • Differential privacy theory and practice
  • Threat model and privacy attacks
  • Encryption methods for machine learning
  • Privacy in machine learning systems
  • Privacy for large language models
  • Relationship between privacy, transparency, auditability, and verifiability
  • Relationship between privacy, robustness, fairness, etc.

Calls

Call for Papers

The organizers and ICLR workshop chairs have worked together to find a room for an in-person workshop. A huge thanks to Andrea Brown for supporting us. We encourage in-person participation, and we will try our best to provide accessibility for virtual attendees. Note that this workshop was accepted to ICLR as a virtual workshop. We will update this page with more information later; please make your travel plans accordingly.

Important Dates
  • Submission Due Date: February 9th, 2024, AoE (extended from February 4th)
  • Notification of Acceptance: March 3rd, 2024, AoE
  • Camera-ready Papers Due: May 3rd, 2024, AoE
  • Workshop Date: Saturday, May 11th, 2024, Vienna, Austria

Submission Instructions

Submissions should be double-blind, no more than 6 pages long (excluding references), and should follow the ICLR 2024 template. An optional appendix of any length may be included at the end of the draft (after the references).

Submissions are handled through OpenReview: https://openreview.net/group?id=ICLR.cc/2024/Workshop/PML.

Our workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers and their reviews will be made public on OpenReview (after the review process ends), while rejected and withdrawn papers and their reviews will remain private.

We welcome submissions of novel research, ongoing (incomplete) projects, drafts currently under review at other venues, and recently published results. However, we request significant updates if the work has been presented at a major machine learning conference or workshop before May 1st, 2024.

Camera Ready Instructions

Please keep using the ICLR template for the camera-ready version, and feel free to update the footnote/header in the template from the ICLR main conference to the workshop.

We allow an extra page (7 pages for the main content) to incorporate reviewer feedback and to add information such as author information, limitations, ethics statements, and acknowledgements.

The accepted papers will become public on OpenReview after the camera-ready deadline. The reviews will **not** be made public.

Presentation Instructions

We encourage in-person presentations, while also aiming to provide accessibility for virtual attendees.

In-person posters should be in portrait orientation and at most 24" wide x 36" high (60.96 cm x 91.44 cm).

For every accepted paper, authors can optionally submit a virtual poster and a 5-minute recorded video (links are preferred, e.g., Google Drive, YouTube, etc.) to be highlighted on our webpage. Please email privacy-workshop-iclr24@googlegroups.com with "[PML-ICLR24 Virtual Presentation] + Paper Title" as the email title.

Awards

Best Paper Awards

Best Paper Award

  • FairProof: Confidential and Certifiable Fairness for Neural Networks. Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri.

Best Paper Honorable Mention

  • PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar.

Early Career Free Registration

The workshop can provide a limited number of free (full ICLR'24 conference) registrations to our attendees, prioritizing early-career students and promoting diversity, equity, and inclusion (DEI). If you are interested, please email us at privacy-workshop-iclr24@googlegroups.com and follow these instructions:

  • The email has to be sent before April 20th to be considered.
  • The email title should start with [PML-ICLR24 free registration].
  • Include link(s) to your accepted or submitted paper(s) at our workshop.
  • Indicate whether you will attend the workshop in person.
  • Include a short paragraph describing why the registration is important for your research and career.
  • (Optional) Include link(s) to your webpage and resume.
  • Awardees will be announced in late April. The registration fee can be refunded (handled by ICLR) if you have already registered.

Awardees: Rob Romijnders, Charlie Hou, Jiaqi Shao, Sreetama Sarkar, Basileal Y. Imana, Rakshit Naidu Nemakallu, Chhavi Yadav

Program

Workshop Program

The following program is in local time.


08:25AM - 08:30AM: Opening Remarks

08:30AM - 09:00AM: Invited Talk: Nicolas Berkouk
  • Technical and scientific stakes of ML regulation: a Data Protection Authority perspective

09:00AM - 09:30AM: Invited Talk (Virtual): Janel Thamkul
  • Navigating the Intersection of AI Privacy Regulation and Technical Innovation

09:30AM - 10:00AM: Break (9:30AM Conference Break)

10:00AM - 10:30AM: Invited Talk: Will Bullock
  • Scaling Privacy-Preserving ML for Industry

10:30AM - 11:00AM: Invited Talk: Daniel Ramage
  • Provably private learning on federated data

11:00AM - 11:30AM: Three Spotlight Talks
  • Efficient Language Model Architectures for Differentially Private Federated Learning. Presenter: Srinadh Bhojanapalli.
  • DNA: Differential privacy Neural Augmentation for contact tracing. Presenter: Rob Romijnders.
  • PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. Presenter: Charlie Hou.

11:30AM - 12:30PM: Poster Session

12:30PM - 1:30PM: Lunch Break

1:30PM - 2:00PM: Invited Talk: Rachel Cummings
  • Centering Policy and Practice: Research Gaps around Usable Differential Privacy

2:00PM - 3:00PM: Panel Discussions
  • 2:00PM - 2:10PM Herbie Bradley: Introducing UK AI Safety Institute
  • Privacy Regulation and Protection: Past, Present, and Future

3:00PM - 3:30PM: Break (3:15PM Conference Break)

3:30PM - 4:00PM: Invited Talk (Virtual): Kobbi Nissim
  • A methodology for reconciling computer science and legal approaches to privacy

4:00PM - 4:30PM: Invited Talk (Virtual): Dan Kifer
  • Evaluating the confidentiality of the 2010 Census: Why it is important and the road ahead

4:30PM - 5:00PM: Three Spotlight Talks (Virtual)
  • Langevin Unlearning. Presenter: Eli Chien.
  • FairProof: Confidential and Certifiable Fairness for Neural Networks. Presenter: Chhavi Yadav.
  • Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest. Presenter: Basileal Y. Imana.

4:50PM - 5:00PM: Concluding Remarks

Talks

Invited Speakers

Rachel Cummings

Columbia University

Dan Kifer

Penn State University

Kobbi Nissim

Georgetown University

Daniel Ramage

Google

Janel Thamkul

Anthropic

Panel Discussion

Panelists

Rachel Cummings

Columbia University

Daniel Ramage

Google

Accepted Papers

Accepted Papers on OpenReview

Spotlight Presentations

Morning Session

  • Efficient Language Model Architectures for Differentially Private Federated Learning. Jae Hun Ro, Srinadh Bhojanapalli, Zheng Xu, Yanxiang Zhang, Ananda Theertha Suresh.
  • DNA: Differential privacy Neural Augmentation for contact tracing. Rob Romijnders, Christos Louizos, Yuki M Asano, Max Welling.
  • PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar.

Afternoon Session

  • Langevin Unlearning. Eli Chien, Haoyu Peter Wang, Ziang Chen, Pan Li.
  • FairProof: Confidential and Certifiable Fairness for Neural Networks. Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri.
  • Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest. Basileal Yoseph Imana, Aleksandra Korolova, John Heidemann.

Poster Presentations
  • Balancing Privacy and Performance for Private Federated Learning Algorithms. Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horváth.
  • Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest. Basileal Yoseph Imana, Aleksandra Korolova, John Heidemann.
  • FairProof: Confidential and Certifiable Fairness for Neural Networks. Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri.
  • Subsampling is not Magic: Why Large Batch Sizes Work for Differentially Private Stochastic Optimisation. Ossi Räisä, Joonas Jälkö, Antti Honkela.
  • Personalized Differential Privacy for Ridge Regression. Krishna Acharya, Franziska Boenisch, Rakshit Naidu, Juba Ziani.
  • Efficient Private Federated Non-Convex Optimization With Shuffled Model. Lingxiao Wang, Xingyu Zhou, Kumar Kshitij Patel, Lawrence Tang, Aadirupa Saha.
  • Communication Efficient Differentially Private Federated Learning Using Second-Order Information. Mounssif Krouka, Antti Koskela, Tejas Kulkarni.
  • Data Forging Is Harder Than You Think. Mohamed Suliman, Swanand Kadhe, Anisa Halimi, Douglas Leith, Nathalie Baracaldo, Ambrish Rawat.
  • Confidential-DPproof: Confidential Proof of Differentially Private Training. Ali Shahin Shamsabadi, Gefei Tan, Tudor Ioan Cebere, Aurélien Bellet, Hamed Haddadi, Nicolas Papernot, Xiao Wang, Adrian Weller.
  • WAVES: Benchmarking the Robustness of Image Watermarks. Tahseen Rabbani, Bang An, Mucong Ding, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang.
  • Posterior Probability-based Label Recovery Attack in Federated Learning. Rui Zhang, Song Guo, Ping Li.
  • Differentially Private Latent Diffusion Models. Saiyue Lyu, Michael F Liu, Margarita Vinaroz, Mijung Park.
  • Gradient-Congruity Guided Federated Sparse Training. Chris XING TIAN, Yibing Liu, Haoliang Li, Ray C.C. Cheung, Shiqi Wang.
  • Privacy-Preserving Data Release Leveraging Optimal Transport and Particle Gradient Descent. Konstantin Donhauser, Javier Abad, Neha Hulkund, Fanny Yang.
  • Understanding Practical Membership Privacy of Deep Learning. Marlon Tobaben, Gauri Pradhan, Yuan He, Joonas Jälkö, Antti Honkela.
  • Langevin Unlearning. Eli Chien, Haoyu Peter Wang, Ziang Chen, Pan Li.
  • Linearizing Models for Efficient yet Robust Private Inference. Sreetama Sarkar, Souvik Kundu, Peter Anthony Beerel.
  • Online Experimentation under Privacy Induced Identity Fragmentation. Shiv Shankar, Ritwik Sinha, Madalina Fiterau.
  • PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar.
  • Differentially Private Best Subset Selection Via Integer Programming. Kayhan Behdin, Peter Prastakos, Rahul Mazumder. [Poster] [Slides] [Talk]
  • Fed Up with Complexity: Simplifying Many-Task Federated Learning with NTKFedAvg. Aashiq Muhamed, Meher Mankikar, Virginia Smith. [Poster]
  • Cache Me If You Can: The Case For Retrieval Augmentation in Federated Learning. Aashiq Muhamed, Pratiksha Thaker, Mona T. Diab, Virginia Smith. [Poster]
  • Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences. Grigory Malinovsky, Eduard Gorbunov, Samuel Horváth, Peter Richtárik.
  • DNA: Differential privacy Neural Augmentation for contact tracing. Rob Romijnders, Christos Louizos, Yuki M Asano, Max Welling. [Poster] [Slides]
  • Federated Unlearning: a Perspective of Stability and Fairness. Jiaqi Shao, Tao Lin, Xuanyu Cao, Bing Luo.
  • Guarding Multiple Secrets: Enhanced Summary Statistic Privacy for Data Sharing. Shuaiqi Wang, Rongzhe Wei, Mohsen Ghassemi, Eleonora Kreacic, Vamsi K. Potluru. [Poster]
  • Efficient Language Model Architectures for Differentially Private Federated Learning. Jae Hun Ro, Srinadh Bhojanapalli, Zheng Xu, Yanxiang Zhang, Ananda Theertha Suresh.
  • The Privacy Power of Correlated Noise in Decentralized Learning. Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui.

Organization

Workshop Organizers

Salman Avestimehr

University of Southern California / FedML

Tian Li

University of Chicago / Meta

Niloofar (Fatemeh) Mireshghallah

University of Washington

Sewoong Oh

University of Washington / Google

Florian Tramer

ETH Zurich

Zheng Xu

Google

Committee

Program Committee

  • Ahmed M. Abdelmoniem (Queen Mary University of London)
  • Alexandra Wood (Berkman Klein Center at Harvard)
  • Ali Shahin Shamsabadi (Brave Software)
  • Ambrish Rawat (IBM)
  • Amr Abourayya (Universität Duisburg-Essen)
  • Andrew Hard (Google)
  • Ang Li (University of Maryland College Park)
  • Anran Li (Nanyang Technological University)
  • Anshuman Suri (University of Virginia)
  • Aritra Mitra (North Carolina State University)
  • Arun Ganesh (Google)
  • Ashwinee Panda (Google)
  • Aurélien Bellet (INRIA)
  • Berivan Isik (Google)
  • Bing Luo (Duke Kunshan University)
  • Carlee Joe-Wong (Carnegie Mellon University)
  • Chuan Guo (Facebook AI Research)
  • Chuan Xu (INRIA)
  • Chulhee Yun (KAIST)
  • Chulin Xie (University of Illinois Urbana-Champaign)
  • Dan Alistarh (Institute of Science and Technology Austria)
  • Deepesh Data (University of California Los Angeles)
  • Dimitrios Dimitriadis (Amazon)
  • Divyansh Jhunjhunwala (Carnegie Mellon University)
  • Edwige Cyffers (INRIA)
  • Egor Shulgin (KAUST)
  • Emiliano De Cristofaro (University of California Riverside)
  • Evita Bakopoulou (University of California Irvine)
  • Fan Mo (Huawei Technologies Ltd.)
  • Galen Andrew (Google)
  • Gauri Joshi (Carnegie Mellon University)
  • Giulia Fanti (Carnegie Mellon University)
  • Giulio Zizzo (IBM)
  • Graham Cormode (Facebook)
  • Haibo Yang (Rochester Institute of Technology)
  • Hamed Haddadi (Imperial College London)
  • Jalaj Upadhyay (Rutgers University)
  • James Henry Bell (Google)
  • Jayanth Reddy Regatti (Ohio State University)
  • Jiachen T. Wang (Princeton University)
  • Jiankai Sun (ByteDance Inc.)
  • Jianyu Wang (Apple)
  • Jiayi Wang (University of Utah)
  • Jiayu Zhou (Michigan State University)
  • Jiayuan Ye (National University of Singapore)
  • Jinghui Chen (Pennsylvania State University)
  • Jinhyun So (Samsung)
  • John Nguyen (Facebook)
  • Kai Yi (KAUST)
  • Kaiyuan Zhang (Purdue University)
  • Kallista Bonawitz (Google)
  • Karthik Prasad (Facebook AI)
  • Ken Liu (Stanford University)
  • Kevin Hsieh (Microsoft)
  • Krishna Kanth Nakka (Huawei Technologies Ltd.)
  • Kumar Kshitij Patel (TTIC)
  • Lie He (EPFL)
  • Lun Wang (Google)
  • Lydia Zakynthinou (Northeastern University)
  • Mahdi Chehimi (Virginia Tech)
  • Martin Jaggi (EPFL)
  • Matthias Reisser (Qualcomm)
  • Mi Zhang (Ohio State University)
  • Michal Yemini (Bar-Ilan University)
  • Mikko A. Heikkilä (Telefonica Research)
  • Milad Nasr (Google)
  • Mingrui Liu (George Mason University)
  • Mónica Ribero (Google)
  • Nasimeh Heydaribeni (University of California San Diego)
  • Niloofar Mireshghallah (University of Washington)
  • Paulo Abelha Ferreira (Dell Technologies)
  • Peter Kairouz (Google)
  • Peter Richtárik (KAUST)
  • Radu Marculescu (University of Texas Austin)
  • Salim El Rouayheb (Rutgers University)
  • Se-Young Yun (KAIST)
  • Sebastian U Stich (CISPA)
  • Shanshan Wu (Google)
  • Shiqiang Wang (IBM)
  • Songze Li (Southeast University)
  • Stefanos Laskaridis (Brave Software)
  • Swanand Kadhe (IBM)
  • Tahseen Rabbani (University of Maryland College Park)
  • Wei-Ning Chen (Stanford University)
  • Xuechen Li (Stanford University)
  • Yae Jee Cho (Carnegie Mellon University)
  • Yang Liu (Tsinghua University)
  • Yangsibo Huang (Princeton University)
  • Yanning Shen (University of California Irvine)
  • Yi Zhou (IBM)
  • Yibo Jacky Zhang (Stanford University)
  • Zhaozhuo Xu (Rice University)
  • Zheng Xu (Google)

Sponsors

Google          Meta          FedML