Synthetic and real data set for compressive HDR light field imaging. The real part contains 85 light fields captured with a Lytro camera at 5 exposures each. The LDR light fields are decoded with [1] and merged view-per-view into HDR light fields using [2].
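    As an illustration (not the exact pipeline used to produce the dataset), one view of a real light field could be merged from its 5 exposures with the Debevec-Malik method [2] via OpenCV. File names and exposure times in this sketch are hypothetical placeholders.

        # Minimal sketch: merge 5 LDR exposures of a single view into an HDR radiance map.
        # Paths and exposure times are example values, not the dataset's actual metadata.
        import cv2
        import numpy as np

        paths = ["view_00_00_exp%d.png" % i for i in range(5)]        # hypothetical file names
        times = np.array([1/400, 1/100, 1/25, 1/6, 0.6], np.float32)  # example exposure times (s)
        images = [cv2.imread(p) for p in paths]

        # Recover the camera response curve, then merge into a radiance map [2].
        response = cv2.createCalibrateDebevec().process(images, times)
        hdr = cv2.createMergeDebevec().process(images, times, response)  # float32 radiance map

        cv2.imwrite("view_00_00.hdr", hdr)  # save as Radiance HDR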
    The synthetic data set is produced with Blender and contains 5 different scenes. Each scene is rendered with a 7x7 grid of cameras, and each scene is additionally rendered at several baselines, i.e. distances between adjacent cameras in the grid.
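    To make the layout concrete, the sketch below stacks the 49 views of one synthetic scene into a single 4D light field array; the file naming pattern is an assumption, not the dataset's actual layout.

        # Minimal sketch: load a 7x7 grid of rendered views into one array.
        # The pattern view_rr_cc.hdr is hypothetical.
        import cv2
        import numpy as np

        rows = []
        for r in range(7):
            row = [cv2.imread("view_%02d_%02d.hdr" % (r, c), cv2.IMREAD_UNCHANGED)
                   for c in range(7)]
            rows.append(row)

        light_field = np.array(rows)   # shape: (7, 7, height, width, 3), float32
        print(light_field.shape)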
    For more information and questions, please contact Saghi Hajisharif (saghi.hajisharif at liu.se) or Ehsan Miandji (ehsan.miandji at liu.se).
    
    [1] D. G. Dansereau, O. Pizarro, and S. B. Williams. "Decoding, calibration and rectification for lenselet-based plenoptic cameras." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
    [2] P. E. Debevec and J. Malik. "Recovering high dynamic range radiance maps from photographs." In ACM SIGGRAPH 2008 Classes, 2008, pp. 1-10.
    
If you find the dataset or our published paper useful, please consider citing:
 