The Visual Computing Laboratory (VCL) paper reading group is a forum for reading, presenting, and discussing papers related to computer graphics, image processing, machine learning, computer vision, and visualization. We meet on a biweekly basis to discuss a selected paper within these areas. The reading group is organized in hybrid mode; you can find us in:
Participants are welcome to join based on time and interest. However, there is also the option to participate in the reading group as part of a PhD course and collect course credits. Information on the examination is detailed below.
The aim of the reading group is to both learn about published research and to practice reading and discussing research. More specifically, the aim is to:
For each paper reading seminar, one person, the presenter, selects a paper in advance, which all participants read before the seminar. The presenter for an upcoming seminar is either selected when we meet or volunteers by notifying Yifan through email. During a reading seminar, the presenter gives a short (10-15 min) conference-style presentation of the paper. The presentation is followed by a short (5-10 min) review, which should comment on:
Finally, the presenter opens up a discussion around the paper by providing a few example discussion points. During the discussion (20-30 min), all participants are expected to take an active part, which relies on everyone having read the paper before the meeting.
Participation in the reading group can be based on time and interest, e.g. only taking part in the seminars deemed most relevant to your research. There is no need to register for this, but in order to be included on the email list, please contact Yifan at yifan.ding@liu.se. However, in order to collect course credits, you are required to present and actively participate in discussions. For each course credit (1hp), you are required to
For example, by giving 2 presentations and participating in 6 seminars (including the ones where you present), you will be awarded 2hp.
In order to sign up for the paper reading PhD course, please let me know in advance by email: gabriel.eilertsen@liu.se.
Dates are tentative and subject to change, but we aim for Tuesdays at 10.15 every other week. Depending on the number of participants, there is also the option to cover two papers in a longer seminar session.
Date | Presenter | Paper | Keywords |
---|---|---|---|
Sep. 10, 10.00-11.00 | Nithesh | Li et al. 2023, Return of Unconditional Generation: A Self-supervised Representation Generation Method | Diffusion model, representation learning, machine learning, deep learning, generative modelling |
Sep. 24, 10.00-11.00 | Shreyas | Qu et al. 2024, NeRF-NQA: No-Reference Quality Assessment for Scenes Generated by NeRF and Neural View Synthesis Methods | NeRF, quality assessment, feature extraction, image quality |
Oct. 08, 10.00-11.00 | Yifan | Hertz et al. 2024, Style Aligned Image Generation via Shared Attention | Diffusion model, neural style imaging, machine learning, deep learning, generative modelling |
Oct. 22, 10.00-11.00 | TBA | | |
Nov. 05, 10.00-11.00 | TBA | | |
Nov. 19, 10.00-11.00 | TBA | | |
Dec. 03, 10.00-11.00 | TBA | | |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Aug. 29, 10.15-11.00 | Behnaz | Zhou et al. 2023, PhotoMat: A Material Generator Learned from Single Flash Photos | BRDF, material measurement, machine learning, deep learning, generative modelling |
Oct. 10, 10.15-11.00 | Gabriel B. | Oord et al. 2017, Neural Discrete Representation Learning; Esser et al. 2021, Taming Transformers for High-Resolution Image Synthesis | vector quantization, machine learning, deep learning, generative modelling, VAEs, GANs |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Jan. 31, 10.15-11.00 | | Watching and discussing NeurIPS 2022 pre-recorded presentations. | |
Feb. 15, 10.15-11.00 | Arty | Rombach et al. 2022, High-Resolution Image Synthesis With Latent Diffusion Models | machine learning, deep learning, generative modelling, diffusion models |
Apr. 11, 10.15-11.00 | Wen | Guo et al. 2020, MaterialGAN: reflectance capture using a generative SVBRDF model | BRDF, material measurement, machine learning, deep learning, generative modelling |
May 24, 13.15-14.00 | Saghi | Presentation and discussion of interesting papers from EG2023 | |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Sep. 06, 10.15-11.00 | Saghi | Müller et al. 2022, Instant Neural Graphics Primitives with a Multiresolution Hash Encoding | machine learning, deep learning, scene representation, view synthesis, neural representation |
Sep. 19, 10.00-11.00 | | Watching and discussing SIGGRAPH 2022 pre-recorded presentations. | |
Oct. 04, 10.15-11.00 | Gabriel B. | Kellnhofer et al. 2021, Neural Lumigraph Rendering | machine learning, deep learning, scene representation, view synthesis, neural representation |
Oct. 18, 10.15-11.00 | Behnaz | Lagunas et al. 2019, A similarity measure for material appearance | material appearance, machine learning, deep learning, perception |
Nov. 15, 10.15-11.00 | Wen | Zhang et al. 2020, Optimization-Inspired Compact Deep Compressive Sensing | machine learning, deep learning, compressed sensing, image reconstruction |
Nov. 29, 10.15-11.00 | Yifan | Peebles et al. 2022, GAN-Supervised Dense Visual Alignment | machine learning, deep learning, generative modelling, visual alignment |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Jan. 25, 10.15-11.00 | Behnaz | Rainer et al. 2019, Neural BTF Compression and Interpolation | image-based rendering, machine learning, deep learning, image compression |
Feb. 8, 10.15-11.00 | Gabriel B. | Zhang et al. 2018, The Unreasonable Effectiveness of Deep Features as a Perceptual Metric | image quality assessment, IQA, image comparison, image metric, machine learning, deep learning |
Mar. 8, 10.15-11.00 | Ehsan | Wu et al. 2019, Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling | machine learning, deep learning, compressed sensing |
Mar. 22, 10.15-11.00 | Wen | Matusik et al. 2003, A data-driven reflectance model | light reflection models, photometric measurements, reflectance, BRDF, image-based modeling |
Apr. 5, 10.15-11.00 | Tanaboon | Sztrajman et al. 2021, Neural BRDF Representation and Importance Sampling | light reflection models, photometric measurements, BRDF, machine learning, deep learning |
May. 3, 10.15-11.00 | Behnaz | Wang et al. 2019, HyperReconNet: Joint Coded Aperture Optimization and Image Reconstruction for Compressive Hyperspectral Imaging | hyperspectral imaging, image reconstruction, image coding, compressive imaging, deep learning |
May. 17, 10.15-11.00 | Milda | Wang et al. 2019, Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks | uncertainty estimation, deep learning, medical imaging, test-time augmentations, image segmentation |
May. 31, 10.15-11.00 | Saghi | Mizuno et al. 2022, Acquiring a Dynamic Light Field through a Single-Shot Coded Image | light fields, computational photography, machine learning, deep learning, image coding |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Aug. 31, 10.15-11.00 | Saghi | Scetbon et al. 2021, Deep K-SVD Denoising | noise reduction, machine learning, image processing |
Sep. 15, 10.15-11.00 | Milda | Kolesnikov et al. 2020, Big Transfer (BiT): General Visual Representation Learning | deep learning, computer vision, image recognition |
Oct. 12, 10.15-11.00 | Rym | Yan et al. 2020, On Robustness of Neural Ordinary Differential Equations | neural ODE, deep learning, robustness |
Oct. 26, 10.15-11.00 | Karin | Birhane and Prabhu 2020, Large image datasets: A pyrrhic win for computer vision? | deep learning, fairness, bias |
Nov. 09, 10.15-11.00 | Wen | Nielsen et al. 2015, On Optimal, Minimal BRDF Sampling for Reflectance Acquisition | reflectance, BRDF, MERL, reconstruction |
Dec. 7, 10.15-11.00 | Rym | Novak et al. 2018, Sensitivity and Generalization in Neural Networks: an Empirical Study | deep learning, sensitivity analysis, generalization |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Jan. 26, 10.15-11.00 | Gabriel B. | Sun et al. 2019, Single Image Portrait Relighting | computer graphics, deep learning, image-based rendering, computational photography |
Feb. 9, 10.15-11.00 | Apostolia | Hou et al. 2019, Robust Histopathology Image Analysis: to Label or to Synthesize? (builds on the earlier arXiv version: Unsupervised Histopathology Image Synthesis) | deep learning, digital pathology, image synthesis, generative learning, GANs |
Feb. 23, 10.15-11.00 | Saghi | Santos et al. 2020, Single Image HDR Reconstruction Using a CNN with Masked Features and Perceptual Loss | deep learning, image processing, high dynamic range imaging |
Mar. 9, 10.15-11.00 | Tanaboon | Tongbuasirilai et al. 2021, A Non-parametric Sparse BRDF Model | computer graphics, reflectance modeling, BRDF modeling, sparse representation, dictionary learning |
Mar. 23, 10.15-11.00 | Karin | Bender et al. 2021, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? | deep learning, natural language processing, ethical AI |
Apr. 6, 10.15-11.00 | Gabriel B. | Nabati et al. 2018, Fast and Accurate Reconstruction of Compressed Color Light Field | computational photography, light fields, machine learning, deep learning |
Apr. 20, 10.15-11.00 | Milda | Dosovitskiy et al. 2021, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | deep learning, computer vision, image recognition, self-attention, transformer |
May 4, 10.15-11.00 | Wen | Iordache et al. 2011, Sparse Unmixing of Hyperspectral Data | abundance estimation, convex optimization, hyperspectral imaging, sparse regression, spectral unmixing |
May 18, 10.15-11.00 | Gabriel E. | Tolstikhin et al. 2021, MLP-Mixer: An all-MLP Architecture for Vision | deep learning, computer vision, image recognition, MLP |
Jun. 1, 10.15-11.00 | Behnaz | Shi et al. 2019, Image Compressed Sensing Using Convolutional Neural Network | compressed sensing, deep learning, convolutional neural network, sampling matrix, image reconstruction |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Sep. 8, 10.15-11.00 | Milda | Karras et al. 2020, Analyzing and Improving the Image Quality of StyleGAN (first StyleGAN can be found in: A Style-Based Generator Architecture for Generative Adversarial Networks) | deep learning, unsupervised learning, generative learning, GANs |
Sep. 23, 10.15-11.00 | Kristofer | Martin-Brualla et al. 2020, NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections (builds on: Representing Scenes as Neural Radiance Fields for View Synthesis) | scene representation, view synthesis, image-based rendering, volume rendering, 3D deep learning |
Oct. 6, 10.15-11.00 | Fereshteh | Mehta and Egiazarian 2016, Texture Classification Using Dense Micro-Block Difference | texture classification, descriptors, compressive sensing, LBP, SVM, Scale Invariant Feature Transform |
Oct. 21, 10.15-11.00 | Apostolia | Karras et al. 2020, Training Generative Adversarial Networks with Limited Data | deep learning, unsupervised learning, generative learning, GANs |
Nov. 3, 10.15-11.00 | Milda | Schirrmeister et al. 2020, Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features (uses flow based generative models: Glow: Generative flow with invertible 1x1 convolutions) | deep learning, anomaly detection, unsupervised learning, generative learning, GLOW |
Nov. 17, 10.15-11.00 | Gabriel E. | Liu et al. 2020, Diverse Image Generation via Self-Conditioned GANs | deep learning, unsupervised learning, generative learning, GANs |
Dec. 1, 10.15-11.00 | Kristofer | Anonymous, LambdaNetworks: Modeling long-range Interactions without Attention (ICLR 2021 submission) | deep learning, neural networks, attention, transformer, vision, image classification |
Dec. 15, 10.15-11.00 | Karin | Taori et al. 2020, Measuring Robustness to Natural Distribution Shifts in Image Classification | deep learning, machine learning, domain shift, generalization |
Date | Presenter | Paper | Keywords |
---|---|---|---|
Feb. 4, 10.15-11.00 | Tanaboon | Deschaintre et al. 2019, Flexible SVBRDF Capture with a Multi-Image Deep Network | computer graphics, deep learning, material capturing, rendering |
Feb. 18, 10.15-11.00 | Kristofer | Ulyanov et al. 2018, Deep Image Prior (extended in "Double-DIP": Unsupervised Image Decomposition via Coupled Deep-Image-Priors) | deep learning, computer vision, deep convolutional networks |
Mar. 3, 10.15-11.00 | Karin | Oord et al. 2018, Representation Learning with Contrastive Predictive Coding | deep learning, representation learning, unsupervised learning |
Mar. 17, 10.15-11.00 | Wito | Günther et al. 2014, Opacity Optimization for Surfaces | visualization, integral surfaces, flow visualization, computer graphics |
Mar. 31, 10.15-11.00 | Apostolia | Xiangli et al. 2020, Real or Not Real, that is the Question | deep learning, image synthesis, generative learning, GANs |
Apr. 14, 10.15-12.00 | Milda | Schlegl et al. 2017, Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery | deep learning, unsupervised learning, generative learning, GANs, medical imaging |
Apr. 14, 10.15-12.00 | Elmira | Albo et al. 2016, Off the Radar: Comparative Evaluation of Radial Visualization Solutions for Composite Indicators | visualization, multi-dimensional visualization, visualization evaluation, radial layout design |
Apr. 28, 10.15-11.00 | Jens | Wilson et al. 2018, Evolving simple programs for playing Atari games | computer vision, genetic programming, artificial intelligence, reinforcement learning |
May 12, 10.15-11.00 | Gabriel E. | Frankle & Carbin 2019, The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks [related papers] | deep learning, neural networks, sparsity, pruning, compression |
May 26, 10.15-11.00 | Gabriel B. | Inagaki et al. 2018, Learning to Capture Light Fields through a Coded Aperture Camera | computational photography, light fields, machine learning, deep learning |
Jun. 9, 10.15-12.00 | Fereshteh | Gao et al. 2019, Deep Restoration of Vintage Photographs From Scanned Halftone Prints | machine learning, deep learning, image processing, deep convolutional networks |
Jun. 9, 10.15-12.00 | Kristofer | Schmidhuber 2019, Reinforcement Learning Upside Down: Don't Predict Rewards – Just Map Them to Actions | machine learning, deep learning, reinforcement learning |
Papers can be chosen because they are closely related to your own research, i.e. papers you would read anyway. Relevant papers can also be found by browsing recent conferences such as SIGGRAPH, SIGGRAPH Asia, Eurographics, Pacific Graphics, CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, etc., or journals such as IEEE TIP, IEEE TPAMI, IEEE TVCG, ACM ToG, CGF, etc. Recent papers can also be found on open archives such as arXiv.
Papers should be related to computer graphics, image processing, machine learning, computer vision and/or visualization. Papers that present work at the intersection of some of these areas are perfect for the reading group. Papers can also present more fundamental techniques and ideas that are applicable across these areas.
The following list will be continuously updated with paper suggestions. It will contain both papers that I find interesting to read and suggestions from other reading group participants. Please feel free to send me any suggestions you find interesting.
Authors | Title | Resource |
---|---|---|
Fan et al. | Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling | [CVPR 2024] |
Li et al. | Generative Image Dynamics | [CVPR 2024] |
Charatan et al. | pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction | [CVPR 2024] |
Shwartz-Ziv & Lecun | To Compress or Not to Compress - Self-Supervised Learning and Information Theory: A Review | [arXiv 2023] |
Haetinger et al. | Controllable Neural Style Transfer for Dynamic Meshes | [SIGGRAPH 2024] |
Hatamizadeh et al. | DiffiT: Diffusion Vision Transformers for Image Generation | [ECCV 2024] |
Yu et al. | Mip-Splatting: Alias-free 3D Gaussian Splatting | [CVPR 2024] |
Zhao et al. | Position: Measure Dataset Diversity, Don't Just Claim It | [ICML 2024] |
Chen et al. | Deconstructing Denoising Diffusion Models for Self-Supervised Learning | [arXiv 2024] |
Kerbl et al. | 3D Gaussian Splatting for Real-Time Radiance Field Rendering | [SIGGRAPH 2023] |
Last updated: 2024-09-24, Yifan Ding