June 18 (tentative), Seattle, US (in-person)

NTIRE 2024

New Trends in Image Restoration and Enhancement workshop

and associated challenges

in conjunction with CVPR 2024

Sponsors (TBU)




Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for downstream tasks, as image manipulation serves as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous New Trends in Image Restoration and Enhancement (NTIRE) editions at CVPR 2023, 2022, 2021, 2020, 2019, 2018 and 2017 and at ACCV 2016, the Advances in Image Manipulation (AIM) workshops at ECCV 2022, ICCV 2021, ECCV 2020 and ICCV 2019, the Mobile AI (MAI) workshops at CVPR 2023, 2022 and 2021, the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018, and the Workshop and Challenge on Learned Image Compression (CLIC) editions at CVPR 2018, 2019, 2020, 2021 and 2022 and at DCC 2024. Moreover, it relies on the people associated with the PIRM, CLIC, MAI, AIM and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Hyperspectral image restoration, enhancement, manipulation
  • Underwater image restoration, enhancement, manipulation
  • Light field image restoration, enhancement, manipulation
  • Aerial and satellite image restoration, enhancement, manipulation
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video manipulation on constrained settings/mobile devices
  • Image/video restoration and enhancement on constrained settings/mobile devices
  • Image-to-image translation
  • Video-to-video translation
  • Image/video manipulation
  • Perceptual manipulation
  • Style transfer
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Perceptual enhancement
  • Multimodal translation
  • Depth estimation
  • Saliency and gaze estimation
  • Studies and applications of the above.

Important dates (TBU)



Challenges (all deadlines at 23:59 Pacific Time)

  • Site online: January 17, 2024
  • Release of train data and validation data: January 30, 2024
  • Validation server online: February 5, 2024
  • Final test data release, validation server closed: March 15, 2024 (EXTENDED)
  • Test phase submission deadline: March 21, 2024 (EXTENDED)
  • Fact sheets, code/executable submission deadline: March 22, 2024
  • Preliminary test results release to the participants: March 24, 2024
  • Paper submission deadline for entries from the challenges: April 5, 2024 (EXTENDED)

Workshop (all deadlines at 23:59 Pacific Time)

  • Paper submission deadline: March 17, 2024 (EXTENDED)
  • Paper submission deadline (only for methods from the NTIRE 2024 challenges and papers reviewed elsewhere!): April 5, 2024 (EXTENDED)
  • Paper decision notification: April 8, 2024
  • Camera-ready deadline: April 13, 2024
  • Workshop day: June 18, 2024 (tentative)

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as for all CVPR 2024 submissions:
https://cvpr.thecvf.com/Conferences/2024/AuthorGuidelines

Double-blind review policy

The review process is double-blind: authors do not know the names of the chair or reviewers handling their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is not allowed. If a paper is also submitted to CVPR and accepted, it cannot be published at both the CVPR main conference and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2024

Proceedings

Accepted and presented papers will be published after the conference in the CVPR 2024 Workshops proceedings, together with the CVPR 2024 main conference papers.

Author Kit

https://github.com/cvpr-org/author-kit/archive/refs/tags/CVPR2024-v2.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the kit for detailed formatting instructions.
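
For orientation, a minimal submission skeleton might look like the sketch below. This is only an illustrative outline: the cvpr style file and its [review]/[final] options are assumptions based on recent CVPR author kits, so the kit linked above remains the authoritative reference for class options, bibliography style, and anonymization.

    % Minimal submission skeleton (illustrative only); the cvpr style file and
    % its [review]/[final] options are assumed to match recent CVPR author kits.
    \documentclass[10pt,twocolumn,letterpaper]{article}
    \usepackage[review]{cvpr}   % switch to [final] for the camera-ready version
    \usepackage{graphicx}

    \begin{document}

    \title{Example NTIRE 2024 Workshop Paper}
    \author{Anonymous NTIRE 2024 submission}
    \maketitle

    \begin{abstract}
       Abstract text. The paper body may not exceed 8 pages, excluding references.
    \end{abstract}

    \section{Introduction}
    Double-column body text following the CVPR formatting guidelines.

    \end{document}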

People



Organizers (TBU)

  • Radu Timofte, University of Wurzburg
  • Zongwei Wu, University of Wurzburg
  • Marcos V. Conde, University of Wurzburg
  • Florin Vasluianu, University of Wurzburg
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Kyoung Mu Lee, Seoul National University
  • Codruta Ancuti, University Politehnica Timisoara
  • Cosmin Ancuti, UCL
  • Ren Yang, Microsoft Research
  • Yawei Li, ETH Zurich
  • Bin Ren, University of Trento
  • Nancy Mehta, University of Wurzburg
  • Zheng Chen, Shanghai Jiao Tong University
  • Yulun Zhang, ETH Zurich, Shanghai Jiao Tong University
  • Kai Zhang, ETH Zurich
  • Longguang Wang, National University of Defense Technology
  • Yingqian Wang, National University of Defense Technology
  • Yulan Guo, National University of Defense Technology
  • Zhi Jin, Sun Yat-Sen University
  • Shuhang Gu, UESTC
  • Zhilu Zhang, Harbin Institute of Technology
  • Wangmeng Zuo, Harbin Institute of Technology
  • Nicolas Chahine, DXOMARK
  • Sira Ferradans, DXOMARK
  • Xiaohong Liu, Shanghai Jiao Tong University
  • Xin Li, University of Science and Technology of China
  • Kun Yuan, Kuaishou Technology
  • Zhibo Chen, University of Science and Technology of China
  • Jie Liang, The OPPO Research Institute
  • Lei Zhang, The Hong Kong Polytechnic University, The OPPO Research Institute
  • Xiaoning Liu, UESTC
  • Tom Bishop, GLASS Imaging
  • Fabio Tosi, University of Bologna
  • Pierluigi Zama Ramirez, University of Bologna
  • Luigi Di Stefano, University of Bologna
  • Egor Ershov, IITP RAS
  • Luc Van Gool, KU Leuven & ETH Zurich




PC Members (TBU)

  • Mahmoud Afifi, Google
  • Timo Aila, NVIDIA
  • Codruta Ancuti, UPT
  • Boaz Arad, Ben-Gurion University of the Negev
  • Siavash Arjomand Bigdeli, DTU
  • Michael S. Brown, York University / Samsung AI Center
  • Jianrui Cai, The Hong Kong Polytechnic University
  • Chia-Ming Cheng, MediaTek
  • Cheng-Ming Chiang, MediaTek
  • Sunghyun Cho, Samsung
  • Marcos V. Conde, University of Wurzburg
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Michael Elad, Technion
  • Paolo Favaro, University of Bern
  • Graham Finlayson, University of East Anglia
  • Corneliu Florea, University Politehnica of Bucharest
  • Alessandro Foi, Tampere University of Technology
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, OPPO & University of Sydney
  • Yulan Guo, NUDT
  • Christine Guillemot, INRIA
  • Felix Heide, Princeton University & Algolux
  • Chiu Man Ho, OPPO
  • Hiroto Honda, Mobility Technologies Co Ltd.
  • Andrey Ignatov, ETH Zurich
  • Eddy Ilg, Saarland University
  • Michal Irani, Weizmann Institute, Israel
  • Aggelos Katsaggelos, Northwestern University
  • Jean-Francois Lalonde, Université Laval
  • Christian Ledig, University of Bamberg
  • Seungyong Lee, POSTECH
  • Kyoung Mu Lee, Seoul National University
  • Juncheng Li, The Chinese University of Hong Kong
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Guo Lu, Shanghai Jiao Tong University
  • Vladimir Lukin, National Aerospace University, Ukraine
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi
  • Rafal Mantiuk, University of Cambridge
  • Zibo Meng, OPPO
  • Yusuke Monno, Tokyo Institute of Technology
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, University of Bath
  • Federico Perazzi, Bending Spoons
  • Fatih Porikli, Qualcomm CR&D
  • Rakesh Ranjan, Meta Reality Labs
  • Antonio Robles-Kelly, Deakin University
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Christopher Schroers, DisneyResearch|Studios
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Queen Mary University of London
  • Sabine Süsstrunk, EPFL
  • Robby T. Tan, Yale-NUS College
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, ETH Zurich
  • Jean-Philippe Tarel, G. Eiffel University
  • Christian Theobalt, MPI Informatik
  • Qi Tian, Huawei Cloud & AI
  • Radu Timofte, University of Wurzburg & ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Javier Vazquez-Coral, Autonomous University of Barcelona
  • Longguang Wang, NUDT
  • Xintao Wang, Tencent
  • Yingqian Wang, NUDT
  • Gordon Wetzstein, Stanford University
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Yulun Zhang, ETH Zurich
  • Jun-Yan Zhu, Carnegie Mellon University
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks (TBA)



Schedule (TBA)


All accepted NTIRE workshop papers are presented as posters; a subset of them are additionally selected for oral presentation.
All accepted NTIRE workshop papers are published under the book title "2024 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.