June 18, Vancouver, Canada (Hybrid)

NTIRE 2023

New Trends in Image Restoration and Enhancement workshop

and associated challenges

in conjunction with CVPR 2023

Join the NTIRE 2023 workshop online on Zoom for live talks, Q&A, and interaction.

The event starts on 18.06.2023 at 8:00 PT / 15:00 UTC / 23:00 China time.
Check the NTIRE 2023 schedule.
The recording of the NTIRE 2023 event:

Sponsors (TBU)




Call for papers

Image restoration, enhancement and manipulation are key computer vision tasks, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the previous New Trends in Image Restoration and Enhancement (NTIRE) editions at CVPR 2022, 2021, 2020, 2019, 2018 and 2017 and at ACCV 2016, the Advances in Image Manipulation (AIM) workshops at ECCV 2022, ICCV 2021, ECCV 2020 and ICCV 2019, the Mobile AI (MAI) workshops at CVPR 2022 and CVPR 2021, the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018, and the Workshop and Challenge on Learned Image Compression (CLIC) editions at CVPR 2018, CVPR 2019, CVPR 2020, CVPR 2021 and CVPR 2022. Moreover, it relies on the people associated with the PIRM, CLIC, MAI, AIM, and NTIRE events such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Hyperspectral image restoration, enhancement, manipulation
  • Underwater image restoration, enhancement, manipulation
  • Light field image restoration, enhancement, manipulation
  • Aerial and satellite image restoration, enhancement, manipulation
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video manipulation on constrained settings/mobile devices
  • Image/video restoration and enhancement on constrained settings/mobile devices
  • Image-to-image translation
  • Video-to-video translation
  • Image/video manipulation
  • Perceptual manipulation
  • Style transfer
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Perceptual enhancement
  • Multimodal translation
  • Depth estimation
  • Saliency and gaze estimation
  • Studies and applications of the above.

Important dates



Challenges (all deadlines at 23:59 PT)
  • Site online: January 10, 2023
  • Release of train data and validation data: January 30, 2023
  • Validation server online: January 30, 2023
  • Final test data release, validation server closed: March 14, 2023
  • Test phase submission deadline: March 20, 2023
  • Fact sheets, code/executable submission deadline: March 20, 2023
  • Preliminary test results release to the participants: March 22, 2023
  • Paper submission deadline for entries from the challenges: April 5, 2023

Workshop (all deadlines at 23:59 PT)
  • Paper submission deadline: March 10, 2023
  • Paper submission deadline (only for methods from NTIRE 2023 challenges and papers reviewed elsewhere!): April 5, 2023
  • Camera ready deadline: April 13, 2023
  • Paper decision notification: April 8, 2023
  • Workshop day: June 18, 2023

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper must follow the same formatting guidelines as all CVPR 2023 submissions:
https://cvpr2023.thecvf.com/Conferences/2023/AuthorGuidelines

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is not allowed. If a paper is also submitted to CVPR and accepted, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2023

Proceedings

Accepted and presented papers will be published after the conference in the CVPR 2023 Workshops proceedings, together with the CVPR 2023 main conference papers.

Author Kit

https://media.icml.cc/Conferences/CVPR2023/cvpr2023-author_kit-v1_1-1.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the kit for detailed formatting instructions.
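For orientation, the skeleton below shows roughly how a submission built from the author kit starts. It is a minimal sketch, assuming the cvpr.sty style file, the review/final options, and the ieee_fullname bibliography style shipped with the kit; the kit's own example file remains the authoritative reference.

% Minimal sketch of an NTIRE / CVPR 2023 workshop submission (assumes the
% cvpr.sty and ieee_fullname.bst files from the author kit are present).
\documentclass[10pt,twocolumn,letterpaper]{article}

% "review" produces the anonymized, line-numbered layout for double-blind
% review; switch to "final" for the camera-ready version.
\usepackage[review]{cvpr}
\usepackage{graphicx}
\usepackage{amsmath}

% Identifiers used by the style file for the running header (as in the kit).
\def\cvprPaperID{*****}
\def\confName{CVPR}
\def\confYear{2023}

\begin{document}

\title{An Example NTIRE 2023 Submission}
\author{Anonymous NTIRE 2023 submission\\Paper ID \cvprPaperID}
\maketitle

\begin{abstract}
   Abstract text goes here.
\end{abstract}

\section{Introduction}
Body text; at most 8 pages excluding references.

{\small
\bibliographystyle{ieee_fullname}
\bibliography{egbib}
}

\end{document}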

People



Organizers

  • Radu Timofte, University of Wurzburg
  • Marcos V. Conde, University of Wurzburg
  • Ren Yang, ETH Zurich
  • Florin-Alexandru Vasluianu, University of Wurzburg
  • Yawei Li, ETH Zurich
  • Kai Zhang, ETH Zurich
  • Yulun Zhang, ETH Zurich
  • Shuhang Gu, UESTC
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Lei Zhang, Alibaba / Hong Kong Polytechnic University
  • Kyoung Mu Lee, Seoul National University
  • Eli Shechtman, Adobe Research
  • Yulan Guo, National University of Defense Technology
  • Codruta Ancuti, University Politehnica Timisoara
  • Cosmin Ancuti, UCL
  • Chao Dong, SIAT
  • Xintao Wang, Tencent
  • Sira Ferradans, DXOMARK
  • Tom Bishop, GLASS Imaging
  • Longguang Wang, National University of Defense Technology
  • Yingqian Wang, National University of Defense Technology
  • Fabio Tosi, University of Bologna
  • Pierluigi Zama Ramirez, University of Bologna
  • Luigi Di Stefano, University of Bologna
  • Egor Ershov, IITP RAS
  • Luc Van Gool, KU Leuven & ETH Zurich




PC Members (TBU)

  • Mahmoud Afifi, Apple
  • Timo Aila, NVIDIA
  • Codruta Ancuti, UPT
  • Boaz Arad, Ben-Gurion University of the Negev
  • Siavash Arjomand Bigdeli, DTU
  • Michael S. Brown, York University
  • Jianrui Cai, The Hong Kong Polytechnic University
  • Chia-Ming Cheng, MediaTek
  • Cheng-Ming Chiang, MediaTek
  • Sunghyun Cho, Samsung
  • Marcos V. Conde, University of Wurzburg
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Michael Elad, Technion
  • Paolo Favaro, University of Bern
  • Graham Finlayson, University of East Anglia
  • Corneliu Florea, University Politehnica of Bucharest
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, OPPO & University of Sydney
  • Yulan Guo, NUDT
  • Christine Guillemot, INRIA
  • Felix Heide, Princeton University & Algolux
  • Chiu Man Ho, OPPO
  • Hiroto Honda, Mobility Technologies Co Ltd.
  • Andrey Ignatov, ETH Zurich
  • Eddy Ilg, Saarland University
  • Aggelos Katsaggelos, Northwestern University
  • Aditya Prakash Kulkarni, Apple & MST
  • Jean-Francois Lalonde, Université Laval
  • Christian Ledig, University of Bamberg
  • Seungyong Lee, POSTECH
  • Kyoung Mu Lee, Seoul National University
  • Juncheng Li, The Chinese University of Hong Kong
  • Yawei Li, ETH Zurich
  • Stephen Lin, Microsoft Research
  • Chen Change Loy, Chinese University of Hong Kong
  • Guo Lu, Beijing Institute of Technology
  • Vladimir Lukin, National Aerospace University, Ukraine
  • Kede Ma, City University of Hong Kong
  • Vasile Manta, Technical University of Iasi
  • Rafal Mantiuk, University of Cambridge
  • Zibo Meng, OPPO
  • Yusuke Monno, Tokyo Institute of Technology
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, University of Bath
  • Federico Perazzi, Bending Spoons
  • Fatih Porikli, Qualcomm CR&D
  • Rakesh Ranjan, Meta Reality Labs
  • Antonio Robles-Kelly, Deakin University
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Christopher Schroers, DisneyResearch|Studios
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Queen Mary University of London
  • Sabine Süsstrunk, EPFL
  • Yu-Wing Tai, Kuaishou Technology & HKUST
  • Robby T. Tan, Yale-NUS College
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Hao Tang, ETH Zurich
  • Jean-Philippe Tarel, G. Eiffel University
  • Christian Theobalt, MPI Informatik
  • Qi Tian, Huawei Cloud & AI
  • Radu Timofte, University of Wurzburg & ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Javier Vazquez-Coral, Autonomous University of Barcelona
  • Longguang Wang, NUDT
  • Xintao Wang, Tencent
  • Yingqian Wang, NUDT
  • Gordon Wetzstein, Stanford University
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Yulun Zhang, ETH Zurich
  • Jun-Yan Zhu, Carnegie Mellon University
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks



Ming-Hsuan Yang

University of California, Merced & Google

Title: Learning to Synthesize Image and Video Contents

Abstract: In this talk, I will first review our work on learning to synthesize image and video content from image data. The underlying theme is to exploit different priors to synthesize diverse content with robust formulations. One related issue is learning effective models from limited data, and I will present several methods to advance the state of the art. I will then present our image and video synthesis work using vision and language models. When time allows, I will also discuss recent findings for other vision tasks.

Bio: Ming-Hsuan Yang is a professor in Electrical Engineering and Computer Science at the University of California, Merced, and a research scientist at Google. He received a Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign in 2000. He served as a program co-chair for the IEEE International Conference on Computer Vision (ICCV) in 2019 as well as the Asian Conference on Computer Vision (ACCV) in 2014, and as a general co-chair for the Asian Conference on Computer Vision in 2016. He served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) from 2007 to 2011 and is an associate editor of the International Journal of Computer Vision (IJCV). He is co-editor-in-chief of Computer Vision and Image Understanding (CVIU). He has received paper awards from CVPR, ACCV, and UIST. Yang received the Google Faculty Award in 2009, the Distinguished Early Career Research Award from the UC Merced Senate in 2011, the CAREER Award from the National Science Foundation in 2012, and the Distinguished Research Award from the UC Merced Senate in 2015. He is a Fellow of the IEEE and the ACM.

Jon Barron

Google Research

Title: Anti-Aliasing Neural Radiance Fields

Abstract: Neural Radiance Fields (NeRFs) are very good at view synthesis, but have critical issues concerning aliasing that result in reduced image quality and an inability to handle variation in scale. I'll be discussing a line of work I've been developing on how to anti-alias NeRFs (specifically, mip-NeRF and Zip-NeRF). The resulting system is able to produce 3D reconstructions that significantly outperform the prior state-of-the-art in terms of image quality.

Bio: Jon Barron is a senior staff research scientist at Google Research in San Francisco, where he works on computer vision and machine learning. He received a PhD in Computer Science from the University of California, Berkeley in 2013, where he was advised by Jitendra Malik, and he received an Honours BSc in Computer Science from the University of Toronto in 2007. He received a National Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy Distinguished Research Award in 2013, and the PAMI Young Researcher Award in 2020. His work has received awards at ECCV 2016, TPAMI 2016, ECCV 2020, ICCV 2021, CVPR 2022, the 2022 Communications of the ACM, and ICLR 2023.

Rakesh Ranjan

Meta Reality Labs

Title: Computational Mixed Reality

Abstract: Mixed Reality offers immersive experiences by combining the real environment of users with digital augmentations. Creating magical MR experiences, however, poses unique challenges in image processing. In this talk, I will introduce some of those challenges and possible directions to address them.

Bio: Rakesh Ranjan is a Senior Research Scientist Manager at Reality Labs, Meta. Rakesh and his team pursue research in the areas of Computational Photography, 3D Computer Vision and Generative AI for AR/VR. Prior to Meta, Rakesh was a Research Scientist at Nvidia, where he worked on AI for Real-Time Graphics (DLSS) and AI for Cloud Gaming (GeForce Now). Rakesh also spent 5 years at Intel Research as a PhD and full-time researcher.

Fahad Khan

MBZUAI, UAE & Linköping University, Sweden

Title: Burst Image Restoration and Enhancement

Abstract: Modern handheld devices can acquire burst image sequences in quick succession. However, the individual acquired frames suffer from multiple degradations and are misaligned due to camera shake and object motion. The goal of burst image restoration is to effectively combine complementary cues across multiple burst frames to generate high-quality outputs. Further, how to upscale the burst features effectively is a challenge. In this talk, I will present our recent results on burst image restoration and enhancement. First, I will present our approach that focuses on effective information exchange between burst frames, such that the degradations get filtered out while the actual scene details are preserved and enhanced. The proposed approach aligns the burst features with respect to the base frame. By creating pseudo-burst features that combine complementary information from all aligned frames, our method enables seamless information exchange. Additionally, we introduce an adaptive and progressive up-sampling strategy to enhance the quality of the resulting image. Next, I will discuss how to extend the proposed feature alignment to multiple scales along with an ensemble-based up-sampler for feature up-sampling. This leads to further improvement in the quality of the final image output. Lastly, I will talk about collectively addressing higher accuracy and lower inference time for burst image processing with the help of a refined multi-scale feature alignment method and a computationally lighter burst fusion mechanism.

Bio: Fahad Khan is currently a Professor and Deputy Department Chair of Computer Vision at MBZUAI, UAE. He also holds a faculty position at the Computer Vision Laboratory, Linköping University, Sweden. He received the M.Sc. degree in Intelligent Systems Design from Chalmers University of Technology, Sweden, and a Ph.D. degree in Computer Vision from the Computer Vision Center, Barcelona, Spain. He has achieved top ranks in various international challenges (Visual Object Tracking VOT: 1st 2014 and 2018, 2nd 2015, 1st 2016; VOT-TIR: 1st 2015 and 2016; OpenCV Tracking: 1st 2015; 1st in the PASCAL VOC Segmentation and Action Recognition tasks 2010). He received the best paper award in the computer vision track at IEEE ICPR 2016. He has published over 100 peer-reviewed conference papers, journal articles, and book contributions, with over 30,000 citations according to Google Scholar. His research interests include a wide range of topics within computer vision and machine learning. He serves as a regular senior program committee member for leading conferences such as CVPR, ICCV, and ECCV.

Schedule


All accepted NTIRE workshop papers have a poster presentation; a subset of them also have an oral presentation.
All accepted NTIRE workshop papers are published under the book title "2023 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.



[15:10 UTC] 08:10 Lens-to-Lens Bokeh Effect Transformation. NTIRE 2023 Challenge Report
Marcos V Conde (University of Würzburg)*; Manuel Kolmet (Technical University of Munich); Tim Seizinger (University of Würzburg); Tom E Bishop (Glass Imaging Inc.); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[15:20 UTC] 08:20 Efficient Multi-Lens Bokeh Effect Rendering and Transformation
Tim Seizinger (University of Würzburg); Marcos V Conde (University of Würzburg)*; Manuel Kolmet (Technical University of Munich); Tom E Bishop (Glass Imaging Inc.); Radu Timofte (University of Wurzburg & ETH Zurich)

[15:25 UTC] 08:25 NAFBET: Bokeh Effect Transformation with Parameter Analysis Block based on NAFNet
xiangyu kong (Samsung Research China – Beijing (SRCB))*; Fan Wang (Samsung Research China - Beijing (SRC-B)); dafeng zhang (Samsung Research China – Beijing (SRCB)); jinlong wu (Samsung Research China – Beijing (SRCB)); Zikun Liu (Samsung Research China – Beijing (SRC-B))

[15:30 UTC] 08:30 Selective Bokeh Effect Transformation
Juewen Peng (Huazhong University of Science and Technology); Zhiyu Pan (Huazhong Univ. of Sci.&Tech.); Chengxin Liu (Huazhong University of Science and Technology); Xianrui Luo (Huazhong University of Science and Technology); Huiqiang Sun (Huazhong University of Science and Technology); Liao Shen (Huazhong University of Science and Technology); Ke Xian (Nanyang Technological University); Zhiguo Cao (Huazhong Univ. of Sci.&Tech.)*

[15:35 UTC] 08:35 High-Perceptual Quality JPEG Decoding via Posterior Sampling
Sean Man (Technion)*; Guy Ohayon (Technion); Theo J Adrai (Technion); Michael Elad (Technion)


[15:40 UTC] 08:40 NTIRE 2023 Challenge on Light Field Image Super-Resolution: Dataset, Methods and Results
Yingqian Wang (National University of Defense Technology)*; Longguang Wang (National University of Defense Technology); Zhengyu Liang (National University of Defense Technology); Jungang Yang (National University of Defense Technology); Radu Timofte (University of Wurzburg & ETH Zurich); Yulan Guo (Sun Yat-sen University) et al.

[15:50 UTC] 08:50 DistgEPIT: Enhanced Disparity Learning for Light Field Image Super-Resolution
Kai Jin (Bigo Technology Pte. Ltd.); Angulia Yang (Bigo Technology Pte. Ltd.); Zeqiang Wei (Beijing University of Posts and Telecommunications); Sha Guo (Peking University); Mingzhi Gao (Bigo Technology Pte. Ltd.); Xiuzhuang Zhou (Beijing University of Posts and Telecommunications)*

[15:55 UTC] 08:55 Spatial-Angular Multi-Scale Mechanism for Light Field Spatial Super-Resolution
Chen Gao (Beijing Jiaotong University )*; Youfang Lin (Beijing Jiaotong University); Song Chang (Beijing Jiaotong University); Shuo Zhang (Beijing Jiaotong University)


[16:00 UTC] 09:00 NTIRE 2023 HR NonHomogeneous Dehazing Challenge Report
Codruta O Ancuti (University Politehnica Timisoara)*; Cosmin Ancuti (UCL); Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[16:10 UTC] 09:10 Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method based on Fast Fourier Convolution and ConvNeXt
Han Zhou (McMaster University)*; Wei Dong (University of Alberta); Yangyi Liu (McMaster University); Jun Chen (McMaster University)

[16:15 UTC] 09:15 A Data-Centric Solution to NonHomogeneous Dehazing via Vision Transformer
Yangyi Liu (McMaster University)*; Huan Liu (Huawei Technologies); Liangyan Li (McMaster university); Zijun Wu (China Telecom); Jun Chen (McMaster University)

[16:20 UTC] 09:20 Streamlined Global and Local Features Combinator (SGLC) for High Resolution Image Dehazing
Bilel Benjdira (Prince Sultan University)*; Anas M. Ali (Prince Sultan University); Anis Koubaa (Prince Sultan University)

[16:25 UTC] 09:25 High-Resolution Synthetic RGB-D Datasets for Monocular Depth Estimation
Aakash Rajpal (K|Lens GmbH); Noshaba Cheema (Max-Planck Institute for Informatics & DFKI); Klaus Illgner (K|Lens GmbH); Philipp Slusallek (German Research Center for Artificial Intelligence (DFKI) & Saarland University); Sunil Jaiswal (K|Lens GmbH)*

[16:30 UTC] 09:30 Expanding Synthetic Real-World Degradations for Blind Video Super Resolution
Mehran Jeelani (Saarland University); Sadbhawna . (IIT Jammu); Noshaba Cheema (Max-Planck Institute for Informatics & DFKI); Klaus Illgner (K|Lens GmbH); Philipp Slusallek (German Research Center for Artificial Intelligence (DFKI) & Saarland University); Sunil Jaiswal (K|Lens GmbH)*

[16:35 UTC] 09:35 Denoising Diffusion Models for Plug-and-Play Image Restoration
Yuanzhi Zhu (ETH Zurich); Kai Zhang (ETH Zurich)*; Jingyun Liang (ETH Zurich); Jiezhang Cao (ETH Zürich); Bihan Wen (Nanyang Technological University); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)


[16:40 UTC] 09:40 NTIRE 2023 Challenge on Image Super-Resolution (x4): Methods and Results
Yulun Zhang (ETH Zurich)*; Kai Zhang (ETH, Zurich); Zheng Chen (Shanghai Jiao Tong University); Yawei Li (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[16:50 UTC] 09:50 LSDIR: A Large Scale Dataset for Image Restoration
Yawei Li (ETH Zurich)*; Kai Zhang (ETH, Zurich); Jingyun Liang (ETH Zurich); Jiezhang Cao (ETH Zürich); Ce Liu (ETH Zurich); RUI GONG (ETH Zurich); Yulun Zhang (ETH Zurich); Hao Tang (ETH Zurich); Yun Liu (A*STAR); Denis Demandolx (Meta); Rakesh Ranjan (Meta); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)

[16:55 UTC] 09:55 Attention Retractable Frequency Fusion Transformer for Image Super Resolution
Qiang Zhu (UESTC)*; Li Peng Fei (UESTC); Qianhui Li (UESTC)


[17:30 UTC] 10:30 NTIRE 2023 Quality Assessment of Video Enhancement Challenge
Xiaohong Liu (Shanghai Jiao Tong University)*; Xiongkuo Min (Shanghai Jiao Tong University); Wei Sun (Shanghai Jiao Tong University); Yulun Zhang (ETH Zurich); Kai Zhang (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich); Guangtao Zhai (Shanghai Jiao Tong University); Yixuan Gao (Shanghai Jiao Tong University); Yuqin Cao (Shanghai Jiao Tong University); Tengchuan Kou (Shanghai Jiao Tong University); Yunlong Dong (JHC); Ziheng Jia (SJTU GVSP) et al.

[17:40 UTC] 10:40 VDPVE: VQA Dataset for Perceptual Video Enhancement
Yixuan Gao (Shanghai Jiao Tong University)*; Yuqin Cao (Shanghai Jiao Tong University); Tengchuan Kou (Shanghai Jiao Tong University); Wei Sun (Shanghai Jiao Tong University); Yunlong Dong (JHC); Xiaohong Liu (Shanghai Jiao Tong University); Xiongkuo Min (Shanghai Jiao Tong University); Guangtao Zhai (Shanghai Jiao Tong University)

[17:45 UTC] 10:45 Video Quality Assessment Based on Swin Transformer with Spatio-Temporal Feature Fusion and Data Augmentation
Wei Wu (Alibaba Group)*; Shuming Hu (Alibaba Group); Pengxiang Xiao (Alibaba Group); Sibin Deng (Alibaba Group); Yilin Li (Alibaba Group); Ying Chen (Alibaba Group); Kai Li (Alibaba Group)

[17:50 UTC] 10:50 Zoom-VQA: Patches, Frames and Clips Integration for Video Quality Assessment
Kai Zhao (Kuaishou Technology)*; Kun Yuan (Kuaishou Technology); Ming Sun (Kuaishou Technology); Xing Wen (Kuaishou)

[17:55 UTC] 10:55 Quality assessment of enhanced videos guided by aesthetics and technical quality attributes
Mirko Agarla (University of Milano - Bicocca)*; Luigi Celona (University of Milano - Bicocca); Claudio Rota (University of Milano - Bicocca); Raimondo Schettini (University of Milano - Bicocca)


[18:00 UTC] 11:00 NTIRE 2023 Video Colorization Challenge
Xiaoyang Kang (Alibaba)*; Xianhui Lin (Alibaba Group); Kai Zhang (ETH, Zurich); Zheng Hui (Alibaba DAMO Academy); Wangmeng Xiang (DAMO Academy, Alibaba Group); Jun-Yan He (DAMO Academy, Alibaba Group); Xiaoming Li (Nanyang Technological University); PEIRAN REN (Alibaba); Xuansong Xie (Alibaba); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[18:10 UTC] 11:10 RTTLC: Video Colorization with Restored Transformer and Test-time Local Converter
Jinjing Li (Communication University of China); Qirong Liang (Communication University of China)*; Qipei Li (Communication University of China); Ruipeng Gang (Academy of Broadcasting Science, NRTA); Ji Fang (Academy of Broadcasting Science, NRTA); Chichen Lin (Communication University of China); Shuang Feng (Communication University of China); Xiaofeng Liu (Communication University of China)

[18:15 UTC] 11:15 Temporal Consistent Automatic Video Colorization via Semantic Correspondence
Yu Zhang (Beijing University of Posts and Telecommunications); Siqi Chen (Beijing University of Posts and Telecommunications)*; Mingdao Wang (Beijing University of Posts and Telecommunications); Xianlin Zhang (Beijing University of Posts and Telecommunications); Chuang Zhu (Beijing University of Posts and Telecommunications); Yue Zhang (Beijing University of Posts and Telecommunications); Xueming Li (Beijing University of Posts and Telecommunications)

[18:20 UTC] 11:20 Unlimited-Size Diffusion Restoration
Yinhuai Wang (Peking University Shenzhen Graduate School)*; Jiwen Yu (Peking University); Runyi Yu (Peking University); Jian Zhang (Peking University Shenzhen Graduate School)

[18:25 UTC] 11:25 Benchmark Dataset and Effective Inter-Frame Alignment for Real-World Video Super-Resolution
Ruohao Wang (Harbin Institute of Technology); Xiaohui Liu (Harbin Institute of Technology); Zhilu Zhang (Harbin Institute of Technology); Xiaohe Wu (Harbin Institute of technology); Chun-Mei Feng (Institute of High Performance Computing, A*STAR); Lei Zhang ("Hong Kong Polytechnic University, Hong Kong, China"); Wangmeng Zuo (Harbin Institute of Technology, China)*


[20:30 UTC] 13:30 NTIRE 2023 Challenge on Stereo Image Super-Resolution: Methods and Results
Longguang Wang (National University of Defense Technology); Yulan Guo (Sun Yat-sen University)*; Yingqian Wang (National University of Defense Technology); Juncheng Li (The Chinese University of Hong Kong); Shuhang Gu (ETH Zurich, Switzerland); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[20:40 UTC] 13:40 Hybrid Transformer and CNN Attention Network for Stereo Image Super-resolution
Ming Cheng (ByteDance)*; Haoyu Ma (the University of Hong Kong); qiufang ma (ByteDance Inc.); Xiaopeng Sun (ByteDance Inc.); Weiqi Li (Peking University Shenzhen Graduate School); Zhenyu Zhang (PKU); Xuhan Sheng (Peking University Shenzhen Graduate School); Shijie Zhao (Bytedance Inc.); Junlin Li (ByteDance Inc.); Li Zhang (Bytedance Inc.)

[20:45 UTC] 13:45 SC-NAFSSR: Perceptual-Oriented Stereo Image Super-Resolution Using Stereo Consistency Guided NAFSSR
Zidian Qiu (SYSU); Zongyao He (Sun Yat-sen University); Zhihao Zhan (SUN YAT-SEN UNIVERSITY); Zilin Pan (Sun Yat-sen University); Xingyuan Xian (Sun Yat-sen University); Zhi Jin (Sun Yat-sen University)*

[20:50 UTC] 13:50 Stereo Cross Global Learnable Attention Module for Stereo Image Super-Resolution
Yuanbo Zhou (Fuzhou University); Yuyang Xue (University of Edinburgh); Wei Deng (Fuzhou University); Ruofeng Nie ( Imperial Vision Technology); Jiajun Zhang (Fuzhou University); Jiaqi Pu (Imperial Vision Technology); Qinquan Gao (Fuzhou University); Junlin Lan (Fuzhou University); Tong Tong (College of Physics and Information Engineering, Fuzhou University, Fuzhou, China)*


[20:55 UTC] 13:55 NTIRE 2023 Challenge on HR Depth from Images of Specular and Transparent Surfaces
Pierluigi Zama Ramirez (University of Bologna)*; Fabio Tosi (University of Bologna); Luigi Di Stefano (University of Bologna); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[21:05 UTC] 14:05 FlexiCurve: Flexible Piecewise Curves Estimation for Photo Retouching
Chongyi Li ( Nanyang Technological University)*; Chun-Le Guo (Nankai University); Shangchen Zhou (Nanyang Technological University); Qiming Ai (Nanyang Technological University); Ruicheng Feng (Nanyang Technological University); Chen Change Loy (Nanyang Technological University)

[21:10 UTC] 14:10 SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation
Iman Abbasnejad (QUT)*; Fabio Zambetta (RMIT University); Flora D. Salim (University of New South Wales); Timothy Wiley (RMIT University); Jeffrey Chan (RMIT University); Russell Gallagher (rheinmetall); Ehsan M Abbasnejad (The University of Adelaide)


[21:45 UTC] 14:45 NTIRE 2023 Challenge on 360° Omnidirectional Image and Video Super-Resolution: Datasets, Methods and Results
Mingdeng Cao (The University of Tokyo); Chong Mou (Peking University Shenzhen Graduate School); Fanghua Yu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Xintao Wang (Tencent)*; Yinqiang Zheng (The University of Tokyo); Jian Zhang (Peking University Shenzhen Graduate School); Chao Dong (SIAT); Ying Shan (Tencent); Gen Li (Tencent); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[21:55 UTC] 14:55 OPDN: Omnidirectional Position-aware Deformable Network for Omnidirectional Image Super-Resolution
Xiaopeng Sun (ByteDance Inc.)*; Weiqi Li (Peking University Shenzhen Graduate School); Zhenyu Zhang (PKU); qiufang ma (ByteDance Inc.); Xuhan Sheng (Peking University Shenzhen Graduate School); Ming Cheng (ByteDance); Haoyu Ma (the University of Hong Kong); Shijie Zhao (Bytedance Inc.); Jian Zhang (Peking University Shenzhen Graduate School); Junlin Li (ByteDance Inc.); Li Zhang (Bytedance Inc.)


[22:00 UTC] 15:00 Efficient Deep Models for Real-Time 4K Image Super-Resolution. NTIRE 2023 Benchmark and Report
Marcos V Conde (University of Würzburg)*; Eduard Zamfir (University of Wurzburg); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[22:10 UTC] 15:10 Towards Real-Time 4K Image Super-Resolution
Eduard Zamfir (University of Wurzburg); Marcos V Conde (University of Würzburg)*; Radu Timofte (University of Wurzburg & ETH Zurich)

[22:15 UTC] 15:15 AsConvSR: Fast and Lightweight Super-Resolution Network with Assembled Convolutions
Jiaming Guo (Huawei Noah's Ark Lab)*; Xueyi Zou (Huawei Noah's Ark Lab); Yuyi Chen (Huawei Noah's Ark Lab); Yi Liu (Huawei Noah's Ark Lab); Jia Hao (HiSilicon (Shanghai) Technologies Co., Ltd); Jianzhuang Liu (Huawei Noah's Ark Lab); Youliang Yan (Huawei Noah's Ark Lab)

[22:20 UTC] 15:20 Bicubic++: Slim, Slimmer, Slimmest - Designing an Industry-Grade Super-Resolution Network
Bahri Batuhan Bilecen (Aselsan Research); Mustafa Ayazoglu (Aselsan Research)*

[22:25 UTC] 15:25 BeautyREC: Robust, Efficient, and Component-Specific Makeup Transfer
Qixin Yan (Tencent); Chun-Le Guo (Nankai University); Jixin Zhao (Nanyang Technological University); YUEKUN DAI (Nanyang Technological University); Chen Change Loy (Nanyang Technological University); Chongyi Li ( Nanyang Technological University)*


[23:00 UTC] 16:00 NTIRE 2023 Image Shadow Removal Challenge Report
Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg)*; Tim Seizinger (University of Würzburg); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[23:10 UTC] 16:10 WSRD: A Novel Benchmark for High Resolution Image Shadow Removal
Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg)*; Tim Seizinger (University of Würzburg); Radu Timofte (University of Wurzburg & ETH Zurich)

[23:15 UTC] 16:15 Pyramid Ensemble Structure for High Resolution Image Shadow Removal
Shuhao Cui (Meituan)*; junshi huang (Meituan); Shuman Tian (Meituan); Mingyuan Fan (Meituan); jiaqi zhang (meituan); Li Zhu (Meituan); Xiaoming Wei (Meituan); Xiaolin Wei (Meituan)

[23:20 UTC] 16:20 Refusion: Enabling Large-Size Realistic Image Restoration with Latent-Space Diffusion Models
Ziwei Luo (Uppsala Universitet)*; Fredrik K Gustafsson (Uppsala University); Zheng Zhao (Uppsala University); Jens Sjölund (Uppsala University); Thomas B. Schön (Uppsala University)

[23:25 UTC] 16:25 Quantum Annealing for Single Image Super-Resolution
Han Yao Choong (ETH Zurich); Suryansh Kumar (ETH Zurich)*; Luc Van Gool (ETH Zurich)


[23:30 UTC] 16:30 NTIRE 2023 Challenge on Image Denoising: Methods and Results
Yawei Li (ETH Zurich)*; Yulun Zhang (ETH Zurich); Luc Van Gool (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich) et al.


[23:40 UTC] 16:40 NTIRE 2023 Challenge on Efficient Super-Resolution: Methods and Results
Yawei Li (ETH Zurich)*; Yulun Zhang (ETH Zurich); Luc Van Gool (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich) et al.

[23:50 UTC] 16:50 DIPNet: Efficiency Distillation and Iterative Pruning for Image Super-Resolution
Lei Yu (Megvii); Xinpeng Li (Megvii); Youwei Li (Microbt); Ting Jiang (MEGVII); Qi Wu (Megvii); Haoqiang Fan (Megvii Inc(face++)); Shuaicheng Liu (UESTC; Megvii)*

[23:55 UTC] 16:55 A Single Residual Network with ESA Modules and Distillation
Yucong Wang (Hunan University)*; Minjie Cai (Hunan University)

[00:00+1 UTC] 17:00 Large Kernel Distillation Network for Efficient Single Image Super-Resolution
Chengxing Xie (Southwest Jiaotong University); Xiaoming Zhang (School of Computing and Artificial Intelligence, Southwest Jiaotong University)*; Linze Li (Southwest Jiaotong University); Haiteng Meng (Southwest Jiaotong University); Tianlin Zhang (National Space Science Center ); Tianrui Li (School of Computing and Artificial Intelligence, Southwest Jiaotong University); Xiaole Zhao (School of Computing and Artificial Intelligence, Southwest Jiaotong University)

[00:05+1 UTC] 17:05 Multi-level Dispersion Residual Network for Efficient Image Super-Resolution
Yanyu Mao (Xi’an University of Posts & Telecommunications)*; Nihao Zhang (Xi’an University of Posts & Telecommunications); Qian Wang (Xi’an university of posts and telecommunications ); Bendu Bai (Xi’an University of Posts and Telecommunications); Wanying Bai (Xi'an University of Posts & Telecommunications); Haonan Fang (Xi’an University of Posts & Telecommunications); Peng Liu (Xi'an University of Posts & Telecommunications); Ming Yue Li (Xi'an University of Posts & Telecommunications); Shengbo Yan (Xi'an University of Posts and Telecommunications)


[00:10+1 UTC] 17:10 NTIRE 2023 Challenge on Night Photography Rendering
Alina Shutova (IITP RAS); Egor Ershov (IITP RAS)*; Georgy Perevozchikov (IITP RAS); Ivan A Ermakov (IITP RAS); Nikola Banic (Gideon Brothers); Radu Timofte (University of Wurzburg & ETH Zurich); Richard Collins (Practical Photography); Maria Efimova (IITP RAS); Arseniy Terekhin (IITP RAS) et al.

[00:20+1 UTC] 17:20 Back to the future: a night photography rendering ISP without deep learning
Simone Zini (University of Milano - Bicocca)*; Claudio Rota (University of Milano - Bicocca); Marco Buzzelli (University of Milano - Bicocca); Simone Bianco (University of Milano - Bicocca); Raimondo Schettini (University of Milano - Bicocca)



List of all NTIRE 2023 papers (& poster panel allocation)

Papers (PDF, supplementary material) are available at https://openaccess.thecvf.com/CVPR2023_workshops/NTIRE
Each paper has a poster panel # allocated at noon (11:30-13:30) and in the evening (17:30-19:00).
Posters should be put up at the beginning of each time slot and removed at the end of it.


(Poster #178 noon / #143 evening) FlexiCurve: Flexible Piecewise Curves Estimation for Photo Retouching
Chongyi Li ( Nanyang Technological University)*; Chun-Le Guo (Nankai University); Shangchen Zhou (Nanyang Technological University); Qiming Ai (Nanyang Technological University); Ruicheng Feng (Nanyang Technological University); Chen Change Loy (Nanyang Technological University)
[poster][video][project]
(Poster #179 noon / #144 evening) BeautyREC: Robust, Efficient, and Component-Specific Makeup Transfer
Qixin Yan (Tencent); Chun-Le Guo (Nankai University); Jixin Zhao (Nanyang Technological University); YUEKUN DAI (Nanyang Technological University); Chen Change Loy (Nanyang Technological University); Chongyi Li ( Nanyang Technological University)*
[poster][video][project]
(Poster #180 noon / #145 evening) SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation
Iman Abbasnejad (QUT)*; Fabio Zambetta (RMIT University); Flora D. Salim (University of New South Wales); Timothy Wiley (RMIT University); Jeffrey Chan (RMIT University); Russell Gallagher (rheinmetall); Ehsan M Abbasnejad (The University of Adelaide)

(Poster #181 noon / #146 evening) Adaptive Human-Centric Video Compression for Humans and Machines
Wei Jiang (InterDigital)*; Hyomin Choi (INTERDIGITAL COMMUNICATIONS INC); Fabien Racape (Interdigital)
[poster][video]
(Poster #182 noon / #147 evening) ProgDTD: Progressive Learned Image Compression with Double-Tail-Drop Training
Ali Hojjat (Kiel University)*; Janek Haberer (Kiel University); Olaf Landsiedel (Kiel University)
[poster][project]
(Poster #183 noon / #148 evening) RB-Dust – A Reference-based Dataset for Vision-based Dust Removal
Peter Buckel (DHBW Ravensburg)*; Timo Oksanen (Technical University of Munich); Thomas Dietmüller (DHBW Ravensburg)
[poster][slides][project]
(Poster #184 noon / #149 evening) Quantum Annealing for Single Image Super-Resolution
Han Yao Choong (ETH Zurich); Suryansh Kumar (ETH Zurich)*; Luc Van Gool (ETH Zurich)
[poster][video][project]
(Poster #185 noon / #150 evening) Unlimited-Size Diffusion Restoration
Yinhuai Wang (Peking University Shenzhen Graduate School)*; Jiwen Yu (Peking University); Runyi Yu (Peking University); Jian Zhang (Peking University Shenzhen Graduate School)

(Poster #186 noon / #151 evening) Benchmark Dataset and Effective Inter-Frame Alignment for Real-World Video Super-Resolution
Ruohao Wang (Harbin Institute of Technology); Xiaohui Liu (Harbin Institute of Technology); Zhilu Zhang (Harbin Institute of Technology); Xiaohe Wu (Harbin Institute of technology); Chun-Mei Feng (Institute of High Performance Computing, A*STAR); Lei Zhang ("Hong Kong Polytechnic University, Hong Kong, China"); Wangmeng Zuo (Harbin Institute of Technology, China)*
[poster][video][slides][project]
(Poster #187 noon / #152 evening) SS-TTA: Test-Time Adaption for Self-Supervised Denoising Methods
Masud ANI Fahim (University of Vaasa)*; Jani Boutellier (University of Vaasa)
[poster][video][slides][project]
(Poster #188 noon / #153 evening) High-Resolution Synthetic RGB-D Datasets for Monocular Depth Estimation
Aakash Rajpal (K|Lens GmbH); Noshaba Cheema (Max-Planck Institute for Informatics & DFKI); Klaus Illgner (K|Lens GmbH); Philipp Slusallek (German Research Center for Artificial Intelligence (DFKI) & Saarland University); Sunil Jaiswal (K|Lens GmbH)*
[poster]
(Poster #189 noon / #154 evening) Expanding Synthetic Real-World Degradations for Blind Video Super Resolution
Mehran Jeelani (Saarland University); Sadbhawna . (IIT Jammu); Noshaba Cheema (Max-Planck Institute for Informatics & DFKI); Klaus Illgner (K|Lens GmbH); Philipp Slusallek (German Research Center for Artificial Intelligence (DFKI) & Saarland University); Sunil Jaiswal (K|Lens GmbH)*
[poster][video][slides][project]
(Poster #190 noon / #155 evening) Deep Dehazing Powered by Image Processing Network
Guisik Kim (Korea Electronics Technology Institute, Korea); Jinhee Park (Chung-Ang Univ., Korea); Junseok Kwon (Chung-Ang Univ., Korea)*
[poster][video][slides][project]
(Poster #191 noon / #156 evening) Rip Current Segmentation: A Novel Benchmark and YOLOv8 Baseline Results
Andrei Dumitriu (University of Wuerzburg, University of Bucharest)*; Florin Tatui (University of Bucharest); Florin Miron (University of Bucharest); Radu Tudor Ionescu (University of Bucharest); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][video][slides][project]
(Poster #192 noon / #157 evening) Denoising Diffusion Models for Plug-and-Play Image Restoration
Yuanzhi Zhu (ETH Zurich); Kai Zhang (ETH Zurich)*; Jingyun Liang (ETH Zurich); Jiezhang Cao (ETH Zürich); Bihan Wen (Nanyang Technological University); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)
[poster][video][slides][project]
(Poster #193 noon / #158 evening) Saliency-aware Stereoscopic Video Retargeting
Hassan Imani (Bahcesehir University)*; MD BAHARUL ISLAM (Bahcesehir University); Lai-Kuan Wong (Multimedia University)
[poster][video][slides][project]
(Poster #194 noon / #159 evening) FRR-Net: A Real-Time Blind Face Restoration and Relighting Network
Samira Pouyanfar (Microsoft)*; Sunando Sengupta (Microsoft); Mahmoud Mohammadi (Microsoft); Ebey Abraham (Microsoft); Brett Bloomquist (Microsoft); Lukas Dauterman (Microsoft); Anjali Parikh (Microsoft); Steve Lim (Microsoft ); Eric Sommerlade (Microsoft)
[poster][video][slides][project]
(Poster #195 noon / #160 evening) Blind Image Inpainting via Omni-dimensional Gated Attention and Wavelet Queries
Shruti S Phutke (Indian Institute of Technology Ropar)*; Ashutosh C Kulkarni (Indian Institute of Technology, Ropar); Santosh Kumar Vipparthi (Indian Institute of Technology Ropar); Subrahmanyam Murala (IIT Ropar)
[poster][video][slides][project]
(Poster #196 noon / #161 evening) High-Perceptual Quality JPEG Decoding via Posterior Sampling
Sean Man (Technion)*; Guy Ohayon (Technion); Theo J Adrai (Technion); Michael Elad (Technion)
[poster][video][slides][project]
(Poster #197 noon / #162 evening) Large Kernel Distillation Network for Efficient Single Image Super-Resolution
Chengxing Xie (Southwest Jiaotong University); Xiaoming Zhang (School of Computing and Artificial Intelligence, Southwest Jiaotong University)*; Linze Li (Southwest Jiaotong University); Haiteng Meng (Southwest Jiaotong University); Tianlin Zhang (National Space Science Center ); Tianrui Li (School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China); Xiaole Zhao (School of Computing and Artificial Intelligence, Southwest Jiaotong University)
[poster][video][slides][project]
(Poster #198 noon / #163 evening) OPDN: Omnidirectional Position-aware Deformable Network for Omnidirectional Image Super-Resolution
Xiaopeng Sun (ByteDance Inc.)*; Weiqi Li (Peking University Shenzhen Graduate School); Zhenyu Zhang (PKU); qiufang ma (ByteDance Inc.); Xuhan Sheng (Peking University Shenzhen Graduate School); Ming Cheng (ByteDance); Haoyu Ma (the University of Hong Kong); Shijie Zhao (Bytedance Inc.); Jian Zhang (Peking University Shenzhen Graduate School); Junlin Li (ByteDance Inc.); Li Zhang (Bytedance Inc.)
[poster][video][slides]
(Poster #199 noon / #164 evening) Zoom-VQA: Patches, Frames and Clips Integration for Video Quality Assessment
Kai Zhao (Kuaishou Technology)*; Kun Yuan (Kuaishou Technology); Ming Sun (Kuaishou Technology); Xing Wen (Kuaishou)

(Poster #200 noon / #165 evening) Pyramid Ensemble Structure for High Resolution Image Shadow Removal
Shuhao Cui (Meituan)*; junshi huang (Meituan); Shuman Tian (Meituan); Mingyuan Fan (Meituan); jiaqi zhang (meituan); Li Zhu (Meituan); Xiaoming Wei (Meituan); Xiaolin Wei (Meituan)

(Poster #201 noon / #166 evening) NTIRE 2023 Challenge on Light Field Image Super-Resolution: Dataset, Methods and Results
Yingqian Wang (National University of Defense Technology)*; Longguang Wang (National University of Defense Technology); Zhengyu Liang (National University of Defense Technology); Jungang Yang (National University of Defense Technology); Radu Timofte (University of Wurzburg & ETH Zurich); Yulan Guo (Sun Yat-sen University)
[poster][video][slides][project]
(Poster #202 noon / #167 evening) Learning Epipolar-Spatial Relationship for Light Field Image Super-Resolution
Ahmed Salem (Chungbuk National University)*; Hatem Hossam Ibrahem (Chungbuk National University); Hyun Soo Kang (Chungbuk National University)

(Poster #203 noon / #168 evening) NTIRE 2023 Challenge on Stereo Image Super-Resolution: Methods and Results
Longguang Wang (National University of Defense Technology); Yulan Guo (Sun Yat-sen University)*; Yingqian Wang (National University of Defense Technology); Juncheng Li (The Chinese University of Hong Kong); Shuhang Gu (ETH Zurich, Switzerland); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][video]
(Poster #204 noon / #169 evening) DistgEPIT: Enhanced Disparity Learning for Light Field Image Super-Resolution
Kai Jin (Bigo Technology Pte. Ltd.); Angulia Yang (Bigo Technology Pte. Ltd.); Zeqiang Wei (Beijing University of Posts and Telecommunications); Sha Guo (Peking University); Mingzhi Gao (Bigo Technology Pte. Ltd.); Xiuzhuang Zhou (Beijing University of Posts and Telecommunications)*
[slides][project]
(Poster #205 noon / #170 evening) NTIRE 2023 Challenge on HR Depth from Images of Specular and Transparent Surfaces
Pierluigi Zama Ramirez (University of Bologna)*; Fabio Tosi (University of Bologna); Luigi Di Stefano (University of Bologna); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][video][slides][project]
(Poster #206 noon / #171 evening) Cross-View Hierarchy Network for Stereo Image Super-Resolution
Wenbin Zou (South China University of Technology)*; Hongxia Gao (South China University of Technology (SCUT)); Liang Chen (Fujian Normal University); Yunchen Zhang (China Design Group Ltd.Co); Mingchao Jiang ( GAC R&D Center); Zhongxin Yu (Fujian Normal University); Ming Tan (Fujian Normal University)
[poster][video][slides]
(Poster #207 noon / #172 evening) A Data-Centric Solution to NonHomogeneous Dehazing via Vision Transformer
Yangyi Liu (McMaster University)*; Huan Liu (Huawei Technologies); Liangyan Li (McMaster university); Zijun Wu (China Telecom); Jun Chen (McMaster University)
[poster][video][slides][project]
(Poster #208 noon / #173 evening) Stereo Cross Global Learnable Attention Module for Stereo Image Super-Resolution
Yuanbo Zhou (Fuzhou University); Yuyang Xue (University of Edinburgh); Wei Deng (Fuzhou University); Ruofeng Nie ( Imperial Vision Technology); Jiajun Zhang (Fuzhou University); Jiaqi Pu (Imperial Vision Technology); Qinquan Gao (Fuzhou University); Junlin Lan (Fuzhou University); Tong Tong (College of Physics and Information Engineering, Fuzhou University, Fuzhou, China)*

(Poster #209 noon / #174 evening) SC-NAFSSR: Perceptual-Oriented Stereo Image Super-Resolution Using Stereo Consistency Guided NAFSSR
Zidian Qiu (SYSU); Zongyao He (Sun Yat-sen University); Zhihao Zhan (SUN YAT-SEN UNIVERSITY); Zilin Pan (Sun Yat-sen University); Xingyuan Xian (Sun Yat-sen University); Zhi Jin (Sun Yat-sen University)*
[poster][video][slides][project]
(Poster #210 noon / #175 evening) TSRFormer: Transformer Based Two-stage Refinement for Single Image Shadow Removal
Hua-En Chang (National Taiwan University); Chia-Hsuan Hsieh (University of Pittsburgh ); Hao-Hsiang Yang (National Taiwan University)*; I-HSIANG CHEN (National Taiwan University); Yuan-Chun Chiang (National Taiwan University); Yi-Chung Chen (National Taiwan University); Zhi-Kai Huang (National Taiwan University); Wei-Ting Chen (National Taiwan University); Sy-Yen Kuo (National Taiwan University)
[poster][video][slides][project]
(Poster #211 noon / #176 evening) Semantic Guidance Learning for High-Resolution Non-homogeneous Dehazing
Hao-Hsiang Yang (National Taiwan University); I-HSIANG CHEN (National Taiwan University); Chia-Hsuan Hsieh (University of Pittsburgh ); Hua-En Chang (National Taiwan University); Yuan-Chun Chiang (National Taiwan University); Yi-Chung Chen (National Taiwan University); Zhi-Kai Huang (National Taiwan University); Wei-Ting Chen (National Taiwan University)*; Sy-Yen Kuo (National Taiwan University)
[poster][video][slides][project]
(Poster #212 noon / #177 evening) Selective Bokeh Effect Transformation
Juewen Peng (Huazhong University of Science and Technology); Zhiyu Pan (Huazhong Univ. of Sci.&Tech.); Chengxin Liu (Huazhong University of Science and Technology); Xianrui Luo (Huazhong University of Science and Technology); Huiqiang Sun (Huazhong University of Science and Technology); Liao Shen (Huazhong University of Science and Technology); Ke Xian (Nanyang Technological University); Zhiguo Cao (Huazhong Univ. of Sci.&Tech.)*
[poster][video]
(Poster #213 noon / #178 evening) Back to the future: a night photography rendering ISP without deep learning
Simone Zini (University of Milano - Bicocca)*; Claudio Rota (University of Milano - Bicocca); Marco Buzzelli (University of Milano - Bicocca); Simone Bianco (University of Milano - Bicocca); Raimondo Schettini (University of Milano - Bicocca)
[poster][video][slides][project]
(Poster #214 noon / #179 evening) VDPVE: VQA Dataset for Perceptual Video Enhancement
Yixuan Gao (Shanghai Jiao Tong University)*; Yuqin Cao (Shanghai Jiao Tong University); Tengchuan Kou (Shanghai Jiao Tong University); Wei Sun (Shanghai Jiao Tong University); Yunlong Dong (JHC); Xiaohong Liu (Shanghai Jiao Tong University); Xiongkuo Min (Shanghai Jiao Tong University); Guangtao Zhai (Shanghai Jiao Tong University)
[poster][video][slides][project]
(Poster #215 noon / #180 evening) A Simple Transformer-style Network for Lightweight Image Super-resolution
Garas Gendy (Shanghai Jiao Tong University); Nabil Sabor (Assiut University); Jingchao Hou (Shanghai Jiao Tong University); Guanghui He (Shanghai Jiao Tong University)*
[poster][video][slides]
(Poster #216 noon / #181 evening) Efficient Deep Models for Real-Time 4K Image Super-Resolution. NTIRE 2023 Benchmark and Report
Marcos V Conde (University of Würzburg)*; Eduard Zamfir (University of Wurzburg); Radu Timofte (University of Wurzburg & ETH Zurich)

(Poster #217 noon / #182 evening) Towards Real-Time 4K Image Super-Resolution
Eduard Zamfir (University of Wurzburg); Marcos V Conde (University of Würzburg)*; Radu Timofte (University of Wurzburg & ETH Zurich)

(Poster #218 noon / #183 evening) Quality assessment of enhanced videos guided by aesthetics and technical quality attributes
Mirko Agarla (University of Milano - Bicocca)*; Luigi Celona (University of Milano - Bicocca); Claudio Rota (University of Milano - Bicocca); Raimondo Schettini (University of Milano - Bicocca)
[poster][video][slides][project]
(Poster #219 noon / #184 evening) BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens Metadata Embedding
Zhihao Yang (Uppsala University)*; Wenyi Lian (Uppsala University); Siyuan Lai (Uppsala University)
[poster][video][slides]
(Poster #220 noon / #185 evening) NTIRE 2023 Quality Assessment of Video Enhancement Challenge
Xiaohong Liu (Shanghai Jiao Tong University)*; Xiongkuo Min (Shanghai Jiao Tong University); Wei Sun (Shanghai Jiao Tong University); Yulun Zhang (ETH Zurich); Kai Zhang (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich); Guangtao Zhai (Shanghai Jiao Tong University); Yixuan Gao (Shanghai Jiao Tong University); Yuqin Cao (Shanghai Jiao Tong University); Tengchuan Kou (Shanghai Jiao Tong University); Yunlong Dong (JHC); Ziheng Jia (SJTU GVSP)
[poster][video][slides]
(Poster #221 noon / #186 evening) NTIRE 2023 Video Colorization Challenge
Xiaoyang Kang (Alibaba)*; Xianhui Lin (Alibaba Group); Kai Zhang (ETH, Zurich); Zheng Hui (Alibaba DAMO Academy); Wangmeng Xiang (DAMO Academy, Alibaba Group); Jun-Yan He (DAMO Academy, Alibaba Group); Xiaoming Li (Nanyang Technological University); PEIRAN REN (Alibaba); Xuansong Xie (Alibaba); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][video][slides][project]
(Poster #222 noon / #187 evening) AsConvSR: Fast and Lightweight Super-Resolution Network with Assembled Convolutions
Jiaming Guo (Huawei Noah's Ark Lab)*; Xueyi Zou (Huawei Noah's Ark Lab); Yuyi Chen (Huawei Noah's Ark Lab); Yi Liu (Huawei Noah's Ark Lab); Jia Hao (HiSilicon (Shanghai) Technologies Co., Ltd); Jianzhuang Liu (Huawei Noah's Ark Lab); Youliang Yan (Huawei Noah's Ark Lab)
[poster][video][slides][project]
(Poster #223 noon / #188 evening) Mixer-based Local Residual Network for Lightweight Image Super-resolution
Garas Gendy (Shanghai Jiao Tong University); Nabil Sabor (Assiut University); Jingchao Hou (Shanghai Jiao Tong University); Guanghui He (Shanghai Jiao Tong University)*
[poster][video][slides]
(Poster #224 noon / #189 evening) NAFBET: Bokeh Effect Transformation with Parameter Analysis Block based on NAFNet
xiangyu kong (Samsung Research China – Beijing (SRCB))*; Fan Wang (Samsung Research China - Beijing (SRC-B)); dafeng zhang (Samsung Research China – Beijing (SRCB)); jinlong wu (Samsung Research China – Beijing (SRCB)); Zikun Liu (Samsung Research China – Beijing (SRC-B))

(Poster #225 noon / #190 evening) SB-VQA: A Stack-Based Video Quality Assessment Framework for Video Enhancement
Ding-Jiun Huang (KKCompany)*; Yu-Ting Kao (KKCompany); Tieh-Hung Chuang (KKCompany); Ya-Chun Tsai (KKCompany); Jing-Kai Lou (KKCompany); Shuen-Huei Guan (KKCompany)
[poster][video][slides][project]
(Poster #226 noon / #191 evening) Bicubic++: Slim, Slimmer, Slimmest - Designing an Industry-Grade Super-Resolution Network
Bahri Batuhan Bilecen (Aselsan Research); Mustafa Ayazoglu (Aselsan Research)*
[poster][video][slides][project]
(Poster #227 noon / #192 evening) Efficient Multi-Lens Bokeh Effect Rendering and Transformation
Tim Seizinger (University of Würzburg); Marcos V Conde (University of Würzburg)*; Manuel Kolmet (Technical University of Munich); Tom E Bishop (Glass Imaging Inc.); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides][project]
(Poster #228 noon / #193 evening) Lens-to-Lens Bokeh Effect Transformation. NTIRE 2023 Challenge Report
Marcos V Conde (University of Würzburg)*; Manuel Kolmet (Technical University of Munich); Tim Seizinger (University of Würzburg); Tom E Bishop (Glass Imaging Inc.); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides][project]
(Poster #229 noon / #194 evening) Multi-level Dispersion Residual Network for Efficient Image Super-Resolution
Yanyu Mao (Xi’an University of Posts & Telecommunications)*; Nihao Zhang (Xi’an University of Posts & Telecommunications); Qian Wang (Xi’an university of posts and telecommunications ); Bendu Bai (Xi’an University of Posts and Telecommunications); Wanying Bai (Xi'an University of Posts & Telecommunications); Haonan Fang (Xi’an University of Posts & Telecommunications); Peng Liu (Xi'an University of Posts & Telecommunications); Ming Yue Li (Xi'an University of Posts & Telecommunications); Shengbo Yan (Xi'an University of Posts and Telecommunications)
[poster][video][slides]
(Poster #230 noon / #195 evening) TransER: Hybrid Model and Ensemble-based Sequential Learning for Non-homogenous Dehazing
Trung Hoang (Pennsylvania State University)*; Haichuan Zhang (The Pennsylvania State University); Amirsaeed Yazdani (Pennsylvania State University); Vishal Monga (The Pennsylvania State University)
[poster][video][slides][project]
(Poster #231 noon / #196 evening) Refusion: Enabling Large-Size Realistic Image Restoration with Latent-Space Diffusion Models
Ziwei Luo (Uppsala Universitet)*; Fredrik K Gustafsson (Uppsala University); Zheng Zhao (Uppsala University); Jens Sjölund (Uppsala University); Thomas B. Schön (Uppsala University)
[poster][video][slides][project]
(Poster #232 noon / #197 evening) DIPNet: Efficiency Distillation and Iterative Pruning for Image Super-Resolution
Lei Yu (Megvii); Xinpeng Li (Megvii); Youwei Li (Microbt); Ting Jiang (MEGVII); Qi Wu (Megvii); Haoqiang Fan (Megvii Inc(face++)); Shuaicheng Liu (UESTC; Megvii)*
[poster][video]
(Poster #233 noon / #198 evening) Hybrid Transformer and CNN Attention Network for Stereo Image Super-resolution
Ming Cheng (ByteDance)*; Haoyu Ma (the University of Hong Kong); qiufang ma (ByteDance Inc.); Xiaopeng Sun (ByteDance Inc.); Weiqi Li (Peking University Shenzhen Graduate School); Zhenyu Zhang (PKU); Xuhan Sheng (Peking University Shenzhen Graduate School); Shijie Zhao (Bytedance Inc.); Junlin Li (ByteDance Inc.); Li Zhang (Bytedance Inc.)
[poster][video][slides][project]
(Poster #234 noon / #199 evening) Reparameterized Residual Feature Network For Lightweight Image Super-Resolution
Weijian Deng (Communication University of China)*; Hongjie Yuan (Communication University of China); Zengtong Lu (Ruijie Networks Co., Ltd.)
[poster][video][slides][project]
(Poster #235 noon / #200 evening) RTTLC: Video Colorization with Restored Transformer and Test-time Local Converter
Jinjing Li (Communication University of China); Qirong Liang (Communication University of China)*; Qipei Li (Communication University of China); Ruipeng Gang (Academy of Broadcasting Science, NRTA); Ji Fang (Academy of Broadcasting Science, NRTA); Chichen Lin (Communication University of China); Shuang Feng (Communication University of China); Xiaofeng Liu (Communication University of China)
[poster][video][slides][project]
(Poster #236 noon / #201 evening) NTIRE 2023 Challenge on 360° Omnidirectional Image and Video Super-Resolution: Datasets, Methods and Results
Mingdeng Cao (The University of Tokyo); Chong Mou (Peking University Shenzhen Graduate School); Fanghua Yu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Xintao Wang (Tencent)*; Yinqiang Zheng (The University of Tokyo); Jian Zhang (Peking University Shenzhen Graduate School); Chao Dong (SIAT); Ying Shan (Tencent); Gen LI (Tencent); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][video][slides][project]
(Poster #237 noon / #202 evening) Lightweight Real-Time Image Super-Resolution Network for 4K Images
Ganzorig Gankhuyag (KETI)*; Kihwan Yoon (KETI); Jinman Park (KETI); Haengseon Son (Korea Electronics Technology institute); Kyoungwon Min (Korea Electronics Technology Institute)
[poster][slides][project]
(Poster #238 noon / #203 evening) Attention Retractable Frequency Fusion Transformer for Image Super Resolution
Qiang Zhu (UESTC)*; Li Peng Fei (UESTC); Qianhui Li (UESTC)
[poster][video][slides]
(Poster #239 noon / #204 evening) SwinFSR: Stereo Image Super-Resolution using SwinIR and Frequency Domain Knowledge
KE CHEN (McMaster University)*; Liangyan Li (McMaster university); Huan Liu (Huawei Technologies); Yunzhe Li (McMaster University); Congling Tang (McMaster University); Jun Chen (McMaster University)
[poster][video]
(Poster #240 noon / #205 evening) LSDIR: A Large Scale Dataset for Image Restoration
Yawei Li (ETH Zurich)*; Kai Zhang (ETH, Zurich); Jingyun Liang (ETH Zurich); Jiezhang Cao (ETH Zürich); Ce Liu (ETH Zurich); RUI GONG (ETH Zurich); Yulun Zhang (ETH Zurich); Hao Tang (ETH Zurich); Yun Liu (A*STAR); Denis Demandolx (Meta); Rakesh Ranjan (Meta); Radu Timofte (University of Wurzburg & ETH Zurich); Luc Van Gool (ETH Zurich)

(Poster #241 noon / #206 evening) NTIRE 2023 Image Shadow Removal Challenge Report
Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg)*; Tim Seizinger (University of Würzburg); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides][project]
(Poster #242 noon / #207 evening) NTIRE 2023 HR NonHomogeneous Dehazing Challenge Report
Codruta O Ancuti (University Politehnica Timisoara)*; Cosmin Ancuti (UCL); Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides]
(Poster #243 noon / #208 evening) WSRD: A Novel Benchmark for High Resolution Image Shadow Removal
Florin-Alexandru Vasluianu (Computer Vision Lab, University of Wurzburg)*; Tim Seizinger (University of Würzburg); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides][project]
(Poster #244 noon / #209 evening) Temporal Consistent Automatic Video Colorization via Semantic Correspondence
Yu Zhang (Beijing University of Posts and Telecommunications); Siqi Chen (Beijing University of Posts and Telecommunications)*; Mingdao Wang (Beijing University of Posts and Telecommunications); Xianlin Zhang (Beijing University of Posts and Telecommunications); Chuang Zhu (Beijing University of Posts and Telecommunications); Yue Zhang (Beijing University of Posts and Telecommunications); Xueming Li (Beijing University of Posts and Telecommunications)
[poster][video]
(Poster #245 noon / #210 evening) Video Quality Assessment Based on Swin Transformer with Spatio-Temporal Feature Fusion and Data Augmentation
Wei Wu (Alibaba Group)*; Shuming Hu (Alibaba Group); Pengxiang Xiao (Alibaba Group); Sibin Deng (Alibaba Group); Yilin Li (Alibaba Group); Ying Chen (Alibaba Group); Kai Li (Alibaba Group)
[poster][video][slides]
(Poster #246 noon / #211 evening) Streamlined Global and Local Features Combinator (SGLC) for High Resolution Image Dehazing
Bilel Benjdira (Prince Sultan University)*; Anas M. Ali (Prince Sultan University); Anis Koubaa (Prince Sultan University)
[poster][video][slides]
(Poster #247 noon / #212 evening) NTIRE 2023 Challenge on Image Super-Resolution (x4): Methods and Results
Yulun Zhang (ETH Zurich)*; Kai Zhang (ETH, Zurich); Zheng Chen (Shanghai Jiao Tong University); Yawei Li (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)
[poster][slides][video][project]
(Poster #248 noon / #213 evening) SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous Image Dehazing
Yu Guo (Wuhan University of Technology); Yuan Gao (Wuhan University of Technology); Wen Liu (Wuhan University of Technology)*; Yuxu Lu (Wuhan University of Technology); Jingxiang Qu (Wuhan University of Technology); Shengfeng He (Singapore Management University); Wenqi Ren (Sun Yat-Sen University)
[poster][video][project]
(Poster #249 noon / #214 evening) Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method based on Fast Fourier Convolution and ConvNeXt
Han Zhou (McMaster University)*; Wei Dong (University of Alberta); Yangyi Liu (McMaster University); Jun Chen (McMaster University)
[poster][video][slides][project]
(Poster #250 noon / #215 evening) NTIRE 2023 Challenge on Image Denoising: Methods and Results
Yawei Li (ETH Zurich)*; Yulun Zhang (ETH Zurich); Luc Van Gool (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)

(Poster #251 noon / #216 evening) NTIRE 2023 Challenge on Efficient Super-Resolution: Methods and Results
Yawei Li (ETH Zurich)*; Yulun Zhang (ETH Zurich); Luc Van Gool (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)

(Poster #252 noon / #217 evening) Spatial-Angular Multi-Scale Mechanism for Light Field Spatial Super-Resolution
Chen Gao (Beijing Jiaotong University )*; Youfang Lin (Beijing Jiaotong University); Song Chang (Beijing Jiaotong University); Shuo Zhang (Beijing Jiaotong University)
[poster][video][slides]
(Poster #253 noon / #218 evening) A Single Residual Network with ESA Modules and Distillation
Yucong Wang (Hunan University)*; Minjie Cai (Hunan University)
[poster]
(Poster #254 noon / #219 evening) NTIRE 2023 Challenge on Night Photography Rendering
Alina Shutova (IITP RAS); Egor Ershov (IITP RAS)*; Georgy Perevozchikov (IITP RAS); Ivan A Ermakov (IITP RAS); Nikola Banic (Gideon Brothers); Radu Timofte (University of Wurzburg & ETH Zurich); Richard Collins (Practical Photography); Maria Efimova (IITP RAS); Arseniy Terekhin (IITP RAS)

