P13N: Personalization in Generative AI Workshop

ICCV 2025

Hawaii, Oct 19/Oct 20 (TBD)




Workshop Overview


Welcome to the Personalization in Generative AI Workshop (P13N). This full-day workshop brings together leading researchers and industry experts to explore cutting-edge personalization techniques in generative AI.

The event will feature paper presentations, panel discussions, and a competition focusing on personalized generative models across images, video, 3D, and AR/VR. Topics include advanced optimization methods for personalizing diffusion models (e.g., DreamBooth, textual inversion), multi-subject composition, cross-modal personalization, AR/VR personalization, dataset curation and benchmarking, as well as ethical and privacy considerations.

Call for Papers is Open!

We invite submissions on all aspects of personalization in generative AI. Both short papers (4 pages, not included in the proceedings) and long papers (8 pages, included in the proceedings) are welcome.

Topics:

  • Advanced optimization methods for personalizing diffusion models (e.g., DreamBooth, textual inversion)
  • Multi-subject composition: handling multiple entities in a single scene
  • Cross-modal personalization: bridging text, images, video, and 3D
  • AR/VR personalization: adapting user-specific styles or preferences in real time within immersive augmented or virtual environments
  • Dataset curation for benchmarking personalized generative models
  • Benchmarks and evaluation metrics: developing standardized protocols to measure personalization quality, style consistency, and identity preservation
  • New methods for personalized video generation
  • Ethical and privacy considerations in personalization: addressing user consent, data ownership, and transparency in personalized content generation
  • Personalized storytelling and narrative visualization
  • Style adaptation for digital art and illustration
  • Emerging applications in gaming, e-commerce, and digital marketing
  • Adapting LLM-based personalization approaches to vision tasks
  • Personalization on edge devices: deploying personalized models on resource-constrained hardware


Submit Your Paper


Important Dates


Description                                    Date (AoE)
Long-paper deadline (in proceedings)           June 27, 2025, 23:59 AoE
Long-paper notification                        July 10, 2025, 23:59 AoE
Long-paper camera-ready                        August 13, 2025, 23:59 AoE
Short-paper deadline (out of proceedings)      August 18, 2025, 23:59 AoE
Short-paper notification                       August 25, 2025, 23:59 AoE
Workshop date                                  Full-day, Oct 19/Oct 20 (TBD)


Personalization Challenge

We will host two key challenges that address critical problems in the personalization of generative models: multi-concept image personalization and video personalization.

Multi-concept personalization challenge: Current approaches often struggle with concept interference, identity loss, or compositional failures when combining multiple concepts. This challenge will focus on methods that accurately integrate multiple personalized concepts (e.g., a custom character, specific objects, or artistic styles) while maintaining semantic consistency.

Video personalization: This challenge will focus on achieving temporally coherent personalization while maintaining subject fidelity across generated frames. This includes addressing challenges such as identity drift, motion distortion, and loss of subject details over time.

The challenge datasets and details will be announced on June 25, 2025. With sponsorship from Fal.ai, we will distribute cash prizes of up to $5K. Stay tuned!


Invited Speakers


Nataniel Ruiz

Nataniel Ruiz is a Research Scientist at Google and the lead author of DreamBooth, which was selected for a Best Paper Award at CVPR 2023. His main research interests revolve around generative models, and he has authored other works in the areas of controllability and personalization of diffusion models, including StyleDrop, ZipLoRA, and HyperDreamBooth. He obtained his PhD from Boston University, his Master’s from Georgia Tech, and his Bachelor’s from École Polytechnique in Paris. Prior to joining Google, he also interned at Apple, Amazon, and NEC Labs.

Kfir Aberman

Kfir Aberman is a Research Scientist and team lead in the Creative Vision group at Snap Research. His primary research focus lies in generative AI for visual content, with a core objective of synthesizing novel visual effects that are impactful and meaningful to people's day-to-day lives. Before joining Snap, Kfir was a Research Scientist at Google. His research has led to publications in top conferences and journals, including noteworthy contributions to generative AI such as Prompt-to-Prompt and DreamBooth.

Yael Vinker

Yael Vinker is a postdoctoral associate at MIT CSAIL, working under the guidance of Professor Antonio Torralba. She completed her PhD at Tel Aviv University, where she was advised by Daniel Cohen-Or and Ariel Shamir. Her research explores the intersection of computer graphics, cognitive science, and art. Her work has been recognized with two Best Paper awards and one Honorable Mention at SIGGRAPH and SIGGRAPH Asia.

Jun-Yan Zhu

Jun-Yan Zhu is an Assistant Professor at CMU's School of Computer Science. Previously, he was a Research Scientist at Adobe Research and a postdoc at MIT CSAIL. He earned his Ph.D. from UC Berkeley and B.E. from Tsinghua University. His research spans computer vision, graphics, and computational photography. He has received the Samsung AI Researcher of the Year Award (2024), the Packard Fellowship (2023), and the NSF CAREER Award (2023).

Ishan Misra

Ishan Misra is a Research Scientist in the GenAI group at Meta, where he leads research efforts on video generation models. He was the tech lead for Meta's Movie Gen project, which developed foundation models for video generation, video editing, video personalization, and audio generation. He received his PhD from the Robotics Institute at Carnegie Mellon University in 2018, after which he joined FAIR.

Fatih Porikli

Fatih Porikli is an IEEE Fellow and the Global Lead of AI Systems at Qualcomm AI Research. Previously, he was a tenured full Professor in the Research School of Engineering at the Australian National University (ANU), Canberra. His recent work on personalization includes FouRA, LoRA-X, and HyperNet Fields.

Hila Chefer

Hila Chefer is a PhD Candidate at Tel Aviv University and a research intern at Meta AI. Her groundbreaking work in video personalization and generation includes VideoJAM, Still-Moving, and Lumiere.


Invited Panelists


David Bau
Northeastern Khoury College
Varun Jampani
Stability AI
Gorkem Yurtseven
CTO, Fal.ai


Organizers


Pinar Yanardag
Virginia Tech
Rinon Gal
NVIDIA
Daniel Cohen-Or
Tel Aviv University
Tuna Han Salih Meral
Virginia Tech
Enis Simsar
ETH Zurich
Nupur Kumari
Carnegie Mellon University
Ayşegül Dündar
Bilkent University
Federico Tombari
Google, TUM


Challenge Organizers


Yusuf Dalva
Virginia Tech
Tahira Kazimi
Virginia Tech
Hidir Yesiltepe
Virginia Tech


Contact

To contact the organizers, please email generative.p13n.workshop@gmail.com.



Acknowledgments

Thanks to languagefor3dscenes for the webpage format.