October 20, 2025, Room 311, Honolulu, Hawaii, US

AIM 2025

Advances in Image Manipulation workshop

in conjunction with ICCV 2025

Sponsors (TBU)






Call for papers

Image manipulation is a key computer vision task, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important frontend. Not surprisingly, then, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in these areas. Moreover, it offers an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the Advances in Image Manipulation (AIM) workshops at ECCV 2024, ECCV 2022, ICCV 2021, ECCV 2020, and ICCV 2019; the Mobile AI (MAI) workshops at CVPR 2021-2025; the Vision, Graphics and AI for Streaming (AIS) workshop at CVPR 2024; the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018; the Challenge on Learned Image Compression (CLIC) editions at DCC 2024 and CVPR 2018-2022; and the New Trends in Image Restoration and Enhancement (NTIRE) editions at CVPR 2017-2025 and at ACCV 2016. Moreover, it relies on the people associated with the PIRM, CLIC, MAI, AIM, AIS and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:

  • Image-to-image translation
  • Video-to-video translation
  • Image/video manipulation
  • Perceptual manipulation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Perceptual enhancement
  • Multimodal translation
  • Depth estimation
  • Saliency and gaze estimation
  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video manipulation on mobile devices
  • Image/video restoration and enhancement on mobile devices
  • Studies and applications of the above.

AIM 2025 challenges

To learn more about the challenge(s) of interest, register, access the data, and participate, please check the corresponding CodaLab competition(s).

Important dates



Challenges (all deadlines at 23:59 CET)

  • Site online: May 1, 2025
  • Release of train and validation data: May 30, 2025
  • Validation server online: May 30, 2025
  • Final test data release, validation server closed: July 1, 2025
  • Test phase submission deadline: July 5, 2025
  • Fact sheets, code/executable submission deadline: July 6, 2025
  • Preliminary test results released to the participants: July 7, 2025
  • Paper submission deadline for entries from the challenges: July 9, 2025

Workshop (all deadlines at 23:59 CET)

  • Paper submission deadline: June 30, 2025
  • Paper submission deadline (only for methods from AIM 2025 challenges and papers reviewed elsewhere!): July 9, 2025
  • Paper decision notification: July 11, 2025
  • Camera ready deadline: August 11, 2025, 23:59 PDT
  • Workshop day: October 20, 2025

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column ICCV style. The paper format must follow the same guidelines as all ICCV 2025 submissions.
AIM 2025 and ICCV 2025 author guidelines

Double-blind review policy

The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is not allowed. If a paper is also submitted to ICCV and accepted, it cannot be published at both ICCV and the workshop.

Submission site

https://cmt3.research.microsoft.com/AIMWC2025/

*The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Proceedings

Accepted and presented papers will be published after the conference in ICCV Workshops proceedings by IEEE and CVF together with the ICCV 2025 main conference papers.

Author Kit

The author kit provides a LaTeX2e template for paper submissions.
Please refer to https://iccv.thecvf.com/Conferences/2025/AuthorGuidelines for detailed formatting instructions.

People



Organizers (TBU)

  • Radu Timofte, University of Wurzburg
  • Andrey Ignatov, AI Benchmark and ETH Zurich
  • Marcos V. Conde, University of Wurzburg and Sony
  • Zongwei Wu, University of Wurzburg
  • Dmitriy Vatolin, Lomonosov Moscow State University
  • George Ciubotariu, University of Wurzburg
  • Georgii Perevozchikov, University of Wurzburg
  • Andrei Dumitriu, University of Wurzburg
  • Florin Vasluianu, University of Wurzburg
  • Chao Wang, Max Planck Institute for Informatics
  • Nikolai Karetin, Lomonosov Moscow State University
  • Nikolay Safonov, Lomonosov Moscow State University
  • Alexander Yakovenko, Lomonosov Moscow State University


PC Members (TBU)

  • Mahmoud Afifi, Google
  • Cosmin Ancuti, UPT
  • Michael S. Brown, York University
  • Jiezhang Cao, ETH Zurich
  • Sunghyun Cho, Samsung
  • Marcos V. Conde, University of Wurzburg
  • Touradj Ebrahimi, EPFL
  • Corneliu Florea, University Politehnica of Bucharest
  • Peter Gehler, Zalando / Amazon
  • Jinjin Gu, INSAIT
  • Shuhang Gu, University of Electronic Science and Technology of China
  • Chiu Man Ho, OPPO
  • Hiroto Honda, GO Inc.
  • Andrey Ignatov, ETH Zurich
  • Eddy Ilg, University of Technology Nuremberg
  • Jan Kautz, NVIDIA
  • Christian Ledig, University of Bamberg
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Kai-Kuang Ma, Nanyang Technological University, Singapore
  • Vasile Manta, Technical University of Iasi
  • Rafal Mantiuk, University of Cambridge
  • Zibo Meng, OPPO
  • Yusuke Monno, Tokyo Institute of Technology
  • Karol Myszkowski, MPI-Informatik
  • Vinay P. Namboodiri, University of Bath
  • Michael Niemeyer, Google
  • Sylvain Paris, Adobe Research
  • Aline Roumy, INRIA
  • Nicu Sebe, University of Trento
  • Gregory Slabaugh, Queen Mary University of London
  • Hugues Talbot, Centralesupelec
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, Peking University
  • Jean-Philippe Tarel, G. Eiffel University
  • Qi Tian, Huawei Cloud & AI
  • Radu Timofte, University of Wurzburg
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Yingqian Wang, National University of Defense Technology
  • Zhou Wang, University of Waterloo
  • Gordon Wetzstein, Stanford University
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, Microsoft
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, Nanjing University
  • Yulun Zhang, Shanghai Jiao Tong University
  • Jun-Yan Zhu, Carnegie Mellon University
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks (TBU)



Vishal M. Patel

Johns Hopkins University

Title: Two Perspectives on Image Manipulation: Restoration and Implicit Representations

Abstract: This talk presents two complementary perspectives on image manipulation: image restoration and implicit neural representations. I will first discuss our recent advances in image restoration, spanning GAN-based approaches for de-raining, de-hazing, and all-weather removal. I will highlight key challenges in this domain, particularly issues related to identity preservation, sample replication, and bias observed in both diffusion and generative models. In the second part, I will introduce our ongoing work on implicit neural representations (INRs) and explore how these continuous models can provide new insights for image quality assessment and representation learning. Together, these directions aim to push the boundaries of faithful and interpretable image manipulation.

Bio: Vishal M. Patel is an Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. His research focuses on computer vision, machine learning, image processing, medical image analysis, and biometrics. He has received a number of awards including the 2021 IEEE Signal Processing Society (SPS) Pierre-Simon Laplace Early Career Technical Achievement Award, the 2021 NSF CAREER Award, the 2021 IAPR Young Biometrics Investigator Award (YBIA), the 2016 ONR Young Investigator Award, and the 2016 Jimmy Lin Award for Invention. Patel serves as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence journal and IEEE Transactions on Biometrics, Behavior, and Identity Science. He also chairs the conference subcommittee of IAPR Technical Committee on Biometrics (TC4). He is a fellow of the IAPR.

Nupur Kumari

Carnegie Mellon University

Title: Customizing Text-to-Image Diffusion Models

Abstract: With the rapid advancement of generative models, their potential to transform creative content creation is increasingly evident. However, most large-scale generative models are primarily text-conditioned, given the availability of large-scale paired text–image datasets. In contrast, for most practical applications, creators often begin from an existing asset and wish to generate variations or modify it in specific ways. For images, this may involve placing an object in a new context, adjusting local attributes, or altering visual style. To tackle the lack of paired data, in this talk I will first present our work on constructing a synthetic paired dataset for the personalization task using the capabilities of pre-trained generative models themselves. Finally, I will share some of our recent work on training without paired supervision: we instead leverage vision–language models to evaluate task success and provide direct gradient-based feedback to the generative model. This holds the promise of being a scalable and robust framework for customizing generative models on downstream tasks where there is limited or no supervised paired data.

Bio: Nupur is a PhD student at the Robotics Institute, Carnegie Mellon University, advised by Prof. Jun-Yan Zhu. Her research focuses on generative models, with an emphasis on fine-tuning techniques for customization, controllability, and responsible content generation. She holds a bachelor’s degree in Mathematics and Computing from the Indian Institute of Technology Delhi. Prior to starting her PhD, she worked as a Machine Learning Engineer at Adobe India. She has interned at Meta and Adobe Research and is a 2025 WiGRAPH Rising Star in Computer Graphics.

Sergey Tulyakov

Snap Inc.

Title: Three and a Half Generations of Video Generation Models

Abstract: Come to the talk and you will know it!

Bio: Sergey Tulyakov is the Director of Research at Snap Inc., where he leads the Creative Vision team. Sergey’s work focuses on building technology to enhance creators’ skills using computer vision, machine learning, and generative AI. His work involves 2D, 3D, video generation, editing, and personalization. To scale generative experiences to hundreds of millions of users, Sergey’s team builds the world’s most efficient mobile foundational models, which enhance multiple products at Snap Inc. Sergey pioneered the video generation and unsupervised animation domains with MoCoGAN, MonkeyNet, and the First Order Motion Model, sparking several startups in the field. His work on Interactive Video Stylization received the Best in Show Award at SIGGRAPH Real-Time Live! 2020. He has authored over 60 top-conference papers, journal articles, and patents, resulting in multiple innovative products, including Real-time Neural Lenses, real-time try-on, Snap AI Video, Imagine Together, the world’s fastest foundational image-to-image model, and many more. Before joining Snap Inc., Sergey was with Carnegie Mellon University, Microsoft, and NVIDIA. He holds a PhD from the University of Trento, Italy.

Varun Jampani

Arcade.AI

Title: Crafting Video Diffusion: Precise Controls and Rich Outputs

Abstract: In this talk, I will give an overview of our recent works on video generative models that enable precise input controls and rich outputs, resulting in a range of 2D, 3D and 4D applications. Specifically, I will talk about the following works with precise input controls: structured taxonomy controls (Stable Cinemetrics, NeurIPS’25) for professional movie making; precise 3D camera control (Stable Virtual Camera, ICCV’25); expression control (Stable Video-driven Portraits). In the later part of the talk, I will give an overview of the works with rich outputs enabling 3D and 4D generations: spatio-temporal generation (Stable Video 4D 2.0, ICCV’25); multi-view kinematic part generation (Stable Part Diffusion 4D, NeurIPS’25); multi-view PBR material generation (Stable Video Materials 3D, ICCV’25).

Bio: Varun Jampani is Chief AI Officer at Arcade.AI. Previously, he was VP of Research at Stability AI and also held researcher positions at Google and NVIDIA. He works in the areas of machine learning and computer vision, and his main research interests include image, video and 3D generation. He obtained his PhD with highest honors at the Max Planck Institute for Intelligent Systems (MPI) and the University of Tübingen in Germany. He obtained his BTech and MS from the International Institute of Information Technology, Hyderabad (IIIT-H), India, where he was a gold medalist. He actively contributes to the research community and regularly serves as area chair and reviewer for major computer vision and machine learning conferences. His works have received the ‘Best Paper Honorable Mention’ award at CVPR’18 and the ‘Best Student Paper Honorable Mention’ award at CVPR’23.

Boxin Shi

Peking University

Title: Video Generation with Audio Synchronization and Panorama Representation

Abstract: Generating videos with synchronized audio and an expanded field-of-view overcomes the limitations of traditional media with planar scene representation, thereby enabling a more immersive dynamic visual experience. This talk will introduce two advancements in video generation: audio synchronization and panorama representation. For audio-synchronized video generation, we design a multi-stream temporal ControlNet and a multi-stage training strategy to achieve precise audio-visual synchronization. For panoramic video generation, we propose a latitude-aware sampling, a rotated semantic denoising, and a padded pixel-wise decoding strategy to effectively extend pre-trained text-to-video models to the panoramic domain. Our results demonstrate significant enhancement to the expressive power of existing video generation models, offering new perspectives for automated content creation and the development of dynamic world models.

Bio: Boxin Shi is currently a Boya Young Fellow Associate Professor (with tenure) and Research Professor at Peking University, where he leads the Camera Intelligence Lab. He received his PhD degree from the University of Tokyo and did postdoctoral research at the MIT Media Lab. His research interests are computational photography and computer vision. He has published more than 200 papers, including 32 papers in TPAMI and 100+ papers in CVPR/ICCV/ECCV. His papers were awarded Best Paper Runner-Up at CVPR 2024/ICCP 2015, and selected as a Best Paper candidate at ICCV 2015. He received the Okawa Foundation Research Grant in 2021. He has served as an associate editor of TPAMI/IJCV and an area chair of CVPR/ICCV/ECCV. He is a Distinguished Lecturer of APSIPA, a Distinguished Member of CCF, and a Senior Member of IEEE/CSIG. Please access his lab website for more information: https://camera.pku.edu.cn/

Schedule (TBU)


All accepted AIM workshop papers have a poster presentation; some, in addition, have an oral presentation.
All accepted AIM workshop papers are published under the book title "International Conference on Computer Vision Workshops (ICCVW)" by IEEE and CVF.







All AIM 2025 papers have a poster presentation in Exhibition Hall II.
Each paper has an assigned poster panel # between 55 and 73, as listed below.
The posters can be placed on the panels starting at 15:00 and need to be removed by 18:00.



Paper ID AIM 7 -- 04:45PM -- Poster ID 55 -- DMS: Diffusion-Based Multi-Baseline Stereo Generation for Improving Self-Supervised Depth Estimation
Zihua Liu (Institute of Science Tokyo)*; Yizhou Li (Sony Semiconductor Solutions Group); Songyan Zhang (Nanyang Technological University); Masatoshi Okutomi (Institute of Science Tokyo)
Paper ID AIM 9 -- 04:45PM -- Poster ID 56 -- ReBaIR: Reference-Based Image Restoration
Michael Bernasconi (ETH Zürich / Disney Research Zürich)*; Abdelaziz Djelouah (Disney Research); Yang Zhang (Disney Research); Markus Gross (ETH Zürich); Christopher Schroers (Disney Research)
Paper ID AIM 10 -- 04:45PM -- Poster ID 57 -- MedShift: Implicit Conditional Transport for X-Ray Domain Adaptation
Francisco Caetano (Eindhoven University of Technology)*; Christiaan Viviers (Eindhoven University of Technology); Peter de With (Eindhoven University of Technology); Fons van der Sommen (Eindhoven University of Technology)
Paper ID AIM 14 -- 04:45PM -- Poster ID 58 -- LMLT: Low-to-high Multi-Level Vision Transformer for Lightweight Image Super-Resolution
Jeongsoo Kim (Sogang University)*; Jongho Nang (Sogang University); Junsuk Choe (Sogang University)
Paper ID AIM 15 -- 04:45PM -- Poster ID 59 -- Self-Controlled Diffusion for Denoising in Scientific Imaging
Nikolay Falaleev (Fanis Technologies)*; Nikolai Orlov (AMOLF)
Paper ID AIM 17 -- 04:45PM -- Poster ID 60 -- JFFRA: Joint Flow And Feature Refinement Using Attention For Video Restoration
Ranjith Merugu (Samsung R&D Institute India-Bangalore)*; Akshay P Sarashetti (Samsung R&D Institute India-Bangalore); Mohammed Sameer Suhail (Samsung R&D Institute India-Bangalore); Venkata Bharath Reddy Reddem (Samsung R&D Institute India-Bangalore); Pankaj Kumar Bajpai (Samsung R&D Institute India-Bangalore); Amit Satish Unde (Samsung R&D Institute India-Bangalore)
Paper ID AIM 27 -- 04:45PM -- Poster ID 61 -- Diffusion-based Compression Quality Tradeoffs without Retraining
Jonas Brenig (Universität Würzburg)*; Radu Timofte (Universität Würzburg)
Paper ID AIM 29 -- 04:45PM -- Poster ID 62 -- Efficient High FPS Non-Uniform Motion Deblurring via Progressive Learning
Xin Lu (University of Science and Technology of China)*; Zhijing Sun (University of Science and Technology of China); Chengjie Ge (University of Science and Technology of China); Yufeng Peng (University of Science and Technology of China); Ziang Zhou (University of Science and Technology of China); Zihao Li (University of Science and Technology of China); Zishun Liao (University of Science and Technology of China); Dong Li (University of Science and Technology of China); Qiyu Kang (University of Science and Technology of China); Xueyang Fu (University of Science and Technology of China); Zheng-Jun Zha (University of Science and Technology of China)
Paper ID AIM 30 -- 04:45PM -- Poster ID 63 -- Boosting Inverse Tone Mapping via Diffusion Regularization
Xin Lu (University of Science and Technology of China)*; Yufeng Peng (University of Science and Technology of China); Chengjie Ge (University of Science and Technology of China); Zhijing Sun (University of Science and Technology of China); Ziang Zhou (University of Science and Technology of China); Zishun Liao (University of Science and Technology of China); Zihao Li (University of Science and Technology of China); Dong Li (University of Science and Technology of China); Qiyu Kang (University of Science and Technology of China); Xueyang Fu (University of Science and Technology of China); Zheng-Jun Zha (University of Science and Technology of China)
Paper ID AIM 33 -- 04:45PM -- Poster ID 64 -- RCENet: Recursive Concatenation and Enhancement Network for Real-Time Super-Resolution
Kihwan Yoon (BLUEDOT)*; Ganzorig Gankhuyag (Korea Electronics Technology Institute); Jinwoo Jeong (Korea Electronics Technology Institute)
Paper ID AIM 35 -- 04:45PM -- Poster ID 65 -- LiteRT-Optimized INT8 LLM for Raspberry Pi4 Deployment
Kihwan Yoon (BLUEDOT)*; Hyeon-Cheol Moon (Korea Electronics Technology Institute); Ganzorig Gankhuyag (Korea Electronics Technology Institute); Sungjei Kim (Korea University of Technology and Education); Aeri Kim (Korea Electronics Technology Institute); Jinwoo Jeong (Korea Electronics Technology Institute); Sang-Seol Lee (Korea Electronics Technology Institute); Sung-Joon Jang (Korea Electronics Technology Institute)
Paper ID AIM 37 -- 04:45PM -- Poster ID 66 -- Multi-Scale Tensorial Summation and Dimensional Reduction Guided Neural Network for Edge Detection
Lei Xu (Tampere University)*; Mehmet Yamac (Tampere University); Mete Ahishali (Tampere University); Moncef Gabbouj (Tampere University)
Paper ID AIM 41 -- 04:45PM -- Poster ID 67 -- Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning?
Roman Kochnev (University of Wurzburg)*; Arash Goodarzi (University of Wurzburg); Zofia Bentyn (University of Wurzburg); Dmitry Ignatov (University of Wurzburg); Radu Timofte (University of Wurzburg)
Paper ID AIM 42 -- 04:45PM -- Poster ID 68 -- Practical Manipulation Model for Robust Deepfake Detection
Benedikt Hopf (University of Würzburg)*; Radu Timofte (University of Würzburg)
Paper ID AIM 22 -- 04:45PM -- Poster ID 69 -- AIM 2025 High FPS Non-Uniform Motion Deblurring Challenge Report
George Ciubotariu (University of Wurzburg)*; Florin-Alexandru Vasluianu (University of Wurzburg); Zhuyun Zhou (University of Wurzburg); Nancy Mehta (University of Wurzburg); Radu Timofte (University of Wurzburg)
Paper ID AIM 23 -- 04:45PM -- Poster ID 70 -- AIM 2025 Challenge on Rip Current Segmentation (RipSeg)
Andrei Dumitriu (University of Wurzburg, University of Bucharest)*; Florin Miron (University of Bucharest); Florin Tatui (University of Bucharest); Radu Ionescu (University of Bucharest); Radu Timofte (University of Wurzburg)
Paper ID AIM 28 -- 04:45PM -- Poster ID 71 -- AIM 2025 1st Inverse Tone Mapping Challenge Report
Chao Wang (MPI); Francesco Banterle (ISTI-CNR); Bin Ren (University of Trento)*; Radu Timofte (University of Würzburg)
Paper ID AIM 34 -- 04:45PM -- Poster ID 72 -- AIM 2025 Challenge on Robust Offline Video Super-Resolution: Dataset, Methods and Results
Nikolai Karetin (Lomonosov Moscow State University)*; Ivan Molodetskikh (Lomonosov Moscow State University); Dmitry Vatolin (Lomonosov Moscow State University); Radu Timofte (University of Würzburg)
Paper ID AIM 38 -- 04:45PM -- Poster ID 73 -- AIM 2025 Low-light RAW Video Denoising Challenge: Dataset, Methods and Results
Alexander Yakovenko (Lomonosov Moscow State University)*; George Chakvetadze (Lomonosov Moscow State University); Ilya Khrapov (Lomonosov Moscow State University); Maksim Zhelezov (Lomonosov Moscow State University); Dmitry Vatolin (Lomonosov Moscow State University); Radu Timofte (University of Würzburg)
Paper ID AIM 39 -- 04:45PM -- Poster ID X -- AIM 2025 Challenge on Screen-Content Video Quality Assessment: Methods and Results
Nickolay Safonov (MSU)*; Mikhail Rakhmanov (Lomonosov Moscow State University); Dmitriy Vatolin (Lomonosov Moscow State University); Radu Timofte (University of Würzburg)
Paper ID AIM 43 -- 04:45PM -- Poster ID X -- Real-World RAW Denoising using Diverse Cameras: AIM 2025 Challenge Report
Marcos Conde (University of Würzburg & Sony PlayStation)*; Feiran Li (Sony AI); Jiacheng Li (Sony AI); Beril Besbinar (Sony AI); Vlad Hosu (Sony AI); Daisuke Iso (Sony AI); Radu Timofte (University of Würzburg)
Paper ID AIM 45 -- 04:45PM -- Poster ID ? -- AIM 2025 Perceptual Image Super-Resolution Challenge
Marcos Conde (University of Würzburg & Sony PlayStation)*; Bruno Longarela (Cidaut AI); Álvaro García (Cidaut AI); Radu Timofte (University of Würzburg)
Paper ID AIM 49 -- 04:45PM -- Poster ID X -- Efficient Real-World Deblurring using Single Images: AIM 2025 Challenge Report
Daniel Feijoo (Cidaut AI); Paula Garrido (Cidaut AI); Marcos Conde (University of Würzburg & Sony PlayStation)*; Jaesung Rim (POSTECH); Alvaro Garcia (Cidaut AI); Sunghyun Cho (POSTECH); Radu Timofte (University of Würzburg)
Paper ID AIM 50 -- 04:45PM -- Poster ID X -- 4K Image Super-Resolution on Mobile NPUs: Mobile AI & AIM 2025 Challenge Report
Andrey Ignatov (ETH Zurich)*; Georgy Perevozchikov (University of Wurzburg); Radu Timofte (University of Wurzburg)
Paper ID AIM 51 -- 04:45PM -- Poster ID X -- Efficient Image Denoising on Smartphone GPUs: Mobile AI & AIM 2025 Challenge Report
Andrey Ignatov (ETH Zurich)*; Georgy Perevozchikov (University of Wurzburg); Radu Timofte (University of Wurzburg)
Paper ID AIM 54 -- 04:45PM -- Poster ID X -- Efficient Learned Smartphone ISP on Mobile GPUs: Mobile AI & AIM 2025 Challenge Report
Andrey Ignatov (ETH Zurich)*; Georgy Perevozchikov (University of Wurzburg); Radu Timofte (University of Wurzburg)
Paper ID AIM 56 -- 04:45PM -- Poster ID X -- Adapting Stable Diffusion for On-Device Inference: Mobile AI & AIM 2025 Challenge Report
Andrey Ignatov (ETH Zurich)*; Georgy Perevozchikov (University of Wurzburg); Radu Timofte (University of Wurzburg)