P13N: Personalization in Generative AI Workshop

ICCV 2025

October 20, 2025 (full-day), Hawai'i Convention Center, Room 317A




Workshop Overview


Welcome to the Personalization in Generative AI Workshop (P13N). This full-day workshop brings together leading researchers and industry experts to explore cutting-edge personalization techniques in generative AI.

The event will feature paper presentations, panel discussions, and a competition focusing on personalized generative models across images, video, 3D, and AR/VR. Topics include advanced optimization methods for personalizing diffusion models (e.g., DreamBooth, textual inversion), multi-subject composition, cross-modal personalization, AR/VR personalization, dataset curation and benchmarking, as well as ethical and privacy considerations.

Call for Papers is Open!

We invite submissions on all aspects of personalization in generative AI. We welcome both short papers (4 pages, not included in the proceedings) and long papers (8 pages, included in the proceedings). All submissions must be anonymized for double-blind review. References do not count toward the page limits, and supplementary material may be included after the references.

Topics:

  • Advanced optimization methods for personalizing diffusion models (e.g., DreamBooth, textual inversion)
  • Multi-subject composition: handling multiple entities in a single scene
  • Cross-modal personalization: bridging text, images, video, and 3D
  • AR/VR personalization for handling immersive experiences where user-specific styles or preferences adapt in real time within augmented or virtual environments
  • Dataset curation for benchmarking personalized generative models
  • Benchmark and evaluation metrics, developing standardized protocols to measure personalization quality, style consistency, and identity preservation
  • New methods for personalized video generation
  • Ethical and privacy considerations in personalization (addressing user consent, data ownership, and transparency in personalized content generation)
  • Personalized storytelling and narrative visualization
  • Style adaptation for digital art and illustration
  • Emerging applications in gaming, e-commerce, and digital marketing
  • Adapting LLM-based personalization approaches to vision tasks
  • Personalization on edge devices for deploying personalized models on resource-constrained hardware


Submit Your Paper


Important Dates


Description Date (AoE)
Long-paper Deadline (in proceedings) July 07, 2025, 23:59 AoE
Long-paper Notification July 11, 2025, 23:59 AoE
Long-paper Camera Ready August 13, 2025, 23:59 AoE
Short-paper Deadline (out of proceedings) August 18, 2025, 23:59 AoE
Short-paper Notification August 29, 2025, 23:59 AoE
Workshop Date Full-day, October 20, 2025


Personalization Challenge

We will host a challenge that addresses a critical problem in the personalization of generative models: multi-concept image personalization.

Multi-concept personalization challenge: Current approaches often struggle with concept interference, identity loss, or compositional failures when combining multiple concepts. This challenge will focus on methods that accurately integrate multiple personalized concepts (e.g., a custom character, specific objects, or artistic styles) while maintaining semantic consistency.


Join our challenge!


Invited Speakers


Nataniel Ruiz

Nataniel Ruiz is a Research Scientist at Google DeepMind and the lead author of DreamBooth, which was selected for a Best Paper Award at CVPR 2023. His main research interests revolve around generative models, and he has authored other works in the areas of controllability and personalization of diffusion models, including StyleDrop, ZipLoRA, and HyperDreamBooth. He obtained his PhD from Boston University, his Master's from Georgia Tech, and his Bachelor's from École Polytechnique in Paris. Prior to joining Google, he also interned at Apple, Amazon, and NEC Labs.

Kfir Aberman

Kfir Aberman is a founding member of Decart AI, leading the innovation in real-time, interactive generative video models. Previously, as Principal Research Scientist at Snap Research, he led the company’s Personalized Generative AI effort, building systems that learn who you are and create visuals that reflect your identity, relationships, and style. Earlier in his career, he was a Research Scientist at Google Research. His research, including breakthroughs like DreamBooth and Prompt-to-Prompt, has become foundational to how people and creators today interact with generative AI.

Yael Vinker

Yael Vinker is a postdoctoral associate at MIT CSAIL, working under the guidance of Professor Antonio Torralba. She completed her PhD at Tel Aviv University, where she was advised by Daniel Cohen-Or and Ariel Shamir. Her research explores the intersection of computer graphics, cognitive science, and art. Her work has been recognized with two Best Paper awards and one Honorable Mention at SIGGRAPH and SIGGRAPH Asia.

Fatih Porikli

Fatih Porikli is an IEEE Fellow and the Global Lead of AI Systems at Qualcomm AI Research. He was previously a tenured full Professor in the Research School of Engineering at the Australian National University (ANU), Canberra. His recent work on personalization includes FouRA, LoRA-X, and HyperNet Fields.

Hila Chefer

Hila Chefer is a researcher in the Fundamental Research team at Black Forest Labs and an incoming Assistant Professor at Tel Aviv University (starting October 2026). Her groundbreaking work in video personalization and generation includes VideoJAM, Still-Moving, and Lumiere.


Schedule


Time Event
9:30-9:45 AM Organizers: Opening Remarks
9:45-10:15 AM Invited Talk: "Reorienting the Machine: How AI Learned to Generate You" - Kfir Aberman (Decart AI)
10:15-10:45 AM Coffee Break
10:45-11:15 AM Invited Talk: "From Still to Moving: Temporal Priors as Creative Tools for Personalization" - Hila Chefer (Black Forest Labs and Tel Aviv University)
11:15-11:45 AM Invited Talk: "Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models" - Nataniel Ruiz (Google DeepMind)
11:45 AM-12:00 PM Oral Presentation 1: "LoRA.rar: Learning to Merge LoRAs via Hypernetworks for Subject-Style Conditioned Image Generation" - Donald Shenaj
12:00-1:30 PM Lunch
1:30-2:00 PM Invited Talk: "Personalization Methods for Design and Artistic Creation" - Yael Vinker (MIT)
2:00-2:15 PM Oral Presentation 2: "Only-Style: Stylistic Consistency in Image Generation without Content Leakage" - Tilemachos Aravanis (National Technical University of Athens)
2:15-2:30 PM Oral Presentation 3: "Locally Controlled Face Aging with Latent Diffusion Models" - Sileye Ba (L'Oréal)
2:30-3:00 PM Invited Talk: "Efficient Personalization of VLM/LLM Models" - Fatih Porikli (Qualcomm)
3:00-4:00 PM Coffee Break and Poster Session (Exhibition Hall #227-241)
4:00-5:00 PM Panel - Moderator: David Bau
5:00-5:30 PM Best Paper Awards, Challenge Winners and Awards


Invited Panelists


David Bau
Northeastern Khoury College
Varun Jampani
Arcade AI
Rana Hanocka
University of Chicago
Nataniel Ruiz
Google DeepMind
Yijun Li
Adobe Research
Sergey Tulyakov
Snap Research
Yael Vinker
MIT CSAIL


Organizers


Pinar Yanardag
Virginia Tech
Rinon Gal
Black Forest Labs
Daniel Cohen-Or
Tel Aviv University
Tuna Han Salih Meral
Virginia Tech
Enis Simsar
ETH Zurich
Nupur Kumari
Carnegie Mellon University
Ayşegül Dündar
Bilkent University
Federico Tombari
Google, TUM


Challenge Organizers


Yusuf Dalva
Virginia Tech
Tahira Kazimi
Virginia Tech
Hidir Yesiltepe
Virginia Tech


Contact

To contact the organizers, please email generative.p13n.workshop@gmail.com.


Sponsors



Acknowledgments

Thanks to languagefor3dscenes for the webpage template.