
Noname manuscript No.

(will be inserted by the editor)

Hybrid Light Field Imaging for Improved Spatial Resolution and Depth Range

M. Zeshan Alam · Bahadir K. Gunturk
arXiv:1611.05008v2 [cs.CV] 31 Jul 2017

Received: date / Accepted: date

Abstract Light field imaging involves capturing both the angular and spatial distribution of light; it enables new capabilities, such as post-capture digital refocusing, camera aperture adjustment, perspective shift, and depth estimation. Micro-lens array (MLA) based light field cameras provide a cost-effective approach to light field imaging. There are two main limitations of MLA-based light field cameras: low spatial resolution and narrow baseline. While low spatial resolution limits the general-purpose use and applicability of light field cameras, narrow baseline limits the depth estimation range and accuracy. In this paper, we present a hybrid stereo imaging system that includes a light field camera and a regular camera. The hybrid system addresses both the spatial resolution and narrow baseline issues of MLA-based light field cameras while preserving light field imaging capabilities.

Keywords Light field imaging · hybrid stereo imaging

This work is supported by TUBITAK Grant 114E095. The authors are with Istanbul Medipol University, Istanbul, Turkey. E-mail: mzalam@st.medipol.edu.tr, bkgunturk@medipol.edu.tr

1 Introduction

A light field can be defined as the collection of all light rays in 3D space [14, 20]. One of the earliest implementations of a light field camera was presented in [21], where a micro-lens array is placed in front of a film to capture the incident light amount from different directions. While a light field, in general, can be parameterized in terms of 3D coordinates of ray positions, 2D ray directions, and physical properties of light (such as wavelength and polarization), the independent parameters can be reduced to a four-dimensional space assuming there is no energy loss during light propagation and when only the intensity of light is considered; such a four-dimensional representation of the light field is used in many practical applications [3, 20, 15]. Unlike regular cameras, light field cameras capture directional light information, which enables new capabilities, including post-capture adjustment of camera parameters (such as focal length and aperture size), post-capture change of camera viewpoint, and depth estimation. As a result, light field imaging is increasingly used in a variety of application areas, including digital photography, microscopy, robotics, and machine vision.

Light field imaging systems can be implemented in various ways, including camera arrays [37, 20, 39], micro-lens arrays [26, 23], coded masks [33], objective lens arrays [13], and gantry-based camera systems [32]. Among these different implementations, micro-lens array (MLA) based light field cameras offer a cost-effective approach and are widely adopted in academic research as well as in commercial light field cameras [1, 2].

MLA-based light field cameras have two limiting issues. The first one is low spatial resolution. Because the image sensor is shared to capture both spatial and angular information, MLA-based light field cameras suffer from a fundamental trade-off between spatial and angular resolution. For example, the first-generation Lytro camera has a sensor of around 11 megapixels, producing 11x11 angular resolution and less than 0.15 megapixel spatial resolution [10]. The second-generation Lytro camera has a sensor of 40 megapixels; however, this large resolution capacity translates to only four megapixel spatial resolution (with the manufacturer's decoding software) due to the angular-spatial resolution trade-off.
The second issue associated with MLA-based light field cameras is narrow baseline. The distance between sub-aperture images decoded from a light field capture is very small, significantly limiting the depth estimation range and accuracy. For instance, the maximum baseline (between the leftmost and rightmost sub-aperture images) of a first-generation Lytro camera is less than a centimeter, which typically results in sub-pixel feature disparities. There are methods in the literature specifically designed to estimate disparities and depth maps for MLA-based light field cameras [41, 17, 30].

To address both the resolution and baseline issues, we propose a hybrid stereo imaging system that consists of a light field camera and a regular camera.¹ The proposed imaging system is shown in Figure 1; it has two main advantages over a single light field camera: First, the high spatial resolution image captured by the regular camera is fused with the low spatial resolution sub-aperture images of the light field camera to enhance the spatial resolution of each sub-aperture image; that is, a high spatial resolution light field is obtained while preserving the angular resolution. Second, the distance between the light field camera and the regular camera produces a larger baseline compared to the maximum baseline of the light field camera; as a result, the hybrid system has better depth estimation range and accuracy.

¹ A preliminary version of this work was presented as a conference paper [4]. In this paper, we provide additional experiments, detailed algorithm explanation and analysis.

Hybrid light field imaging systems have been presented previously [6, 38, 34]. Unlike these previous works, where spatial resolution enhancement is the only goal, we achieve a wider baseline through using a calibrated system. With the wide baseline, both the range and accuracy of depth estimation are improved in addition to spatial resolution enhancement. Fixing the cameras as a stereo system also enables offline calibration and rectification, reducing the computational cost.

In Section 2, related work addressing the spatial resolution and narrow baseline issues of MLA-based light field cameras is provided. The proposed hybrid imaging system with the resolution enhancement algorithm is explained in Section 3. Experimental results on resolution enhancement and depth estimation are presented in Section 4. Concluding remarks are given in Section 5.

2 Related Work

On low spatial resolution: There are various methods proposed to address the low spatial resolution issue in MLA-based light field cameras. One main approach is to apply super-resolution image restoration to light field sub-aperture images. Super-resolution in a Bayesian framework is commonly used, for example, in [5] with Lambertian and textural priors, in [25] with a Gaussian mixture model, and in [36] with a variational formulation. Learning-based methods are adopted as well, including dictionary-based learning [9] and deep convolutional neural networks [40, 19]. In addition to spatial domain super-resolution restoration, Fourier-domain techniques [27, 28] and wave optics based 3D deconvolution methods [29, 7, 31, 18] have also been utilized.

As an alternative to the standard MLA-based light field camera design [26], where the MLA is placed at the image plane of the main lens and the sensor is placed at the focal length of the lenslets, there is another design approach where the MLA is placed to relay the image from the intermediate image plane of the main lens to the sensor [23]. This design is known as the "focused plenoptic camera." As in the case of the standard light field camera approach, super-resolution restoration for focused plenoptic cameras is also possible [12].

All single-sensor light field imaging systems are fundamentally limited by the spatial-angular resolution trade-off, and the above-mentioned restoration methods have performance limitations in addition to their computational costs. Another approach for improving spatial resolution is to use a hybrid two-camera system, including a light field camera and a high-resolution camera, and merge the images to improve spatial resolution [6, 38, 34]. Dictionary-learning based techniques are adopted [6, 38] in this problem as well: High-resolution image patches from the regular camera are extracted and stored as a high-resolution patch dictionary. These high-resolution patches are downsampled, and from the downsampled patches, low-resolution features are extracted to form a low-resolution patch dictionary. During super-resolution reconstruction, a low-resolution image patch is enhanced through determining (based on feature matching) and using the corresponding high-resolution patches in the dictionary. In [34], the high-resolution image is decomposed with complex steerable pyramid filters; the depth map from the light field is upsampled using joint bilateral upsampling; perspective shift amounts are estimated from the upsampled depth map, and these shift amounts are used to modify the phase of the decomposed high-resolution image; with the modified phases, pyramid reconstruction is applied to obtain a high-resolution light field.

On narrow baseline: One of the most important features of light field cameras is the ability to estimate depth. However, it is known that depth accuracy and range are limited in MLA-based light field cameras due to the narrow baseline. The relation between the baseline and
Fig. 1 Hybrid imaging system including a regular and a light field camera. The maximum baseline of the light field camera is limited by the camera main lens aperture, and is about an order of magnitude smaller than the baseline (about 4cm) between the light field camera and the regular camera.
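The effect of this baseline gap on depth accuracy can be illustrated numerically. The sketch below uses the stereo triangulation relation z = fb/d and its error approximation ε_z ≈ ε_d z²/(fb) discussed in Section 2; the focal length value is an assumption chosen only for illustration:

```python
def depth_error(z, f_px, b, eps_d=1.0):
    """Approximate depth estimation error eps_z ~ eps_d * z^2 / (f * b),
    with the focal length f_px in pixels, baseline b and depth z in
    meters, and disparity error eps_d in pixels."""
    return eps_d * z * z / (f_px * b)

f_px = 2000.0  # assumed focal length in pixels (illustrative)
z = 1.0        # point at 1 m depth
err_hybrid = depth_error(z, f_px, b=0.04)   # ~4 cm hybrid baseline
err_lytro = depth_error(z, f_px, b=0.004)   # ~4 mm in-camera baseline
print(err_lytro / err_hybrid)               # narrow baseline: ~10x larger error
```

Under this approximation, a ten times larger baseline reduces the depth estimation error by the same factor at any fixed depth.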

depth estimation accuracy in a stereo system has been studied in [11]. In a stereo system with focal length f and baseline b, the depth z of a point with disparity d is obtained through triangulation as z = fb/d. With a disparity estimation error of ε_d, the depth estimation error ε_z becomes [11]:

ε_z = fb/d − fb/(d + ε_d) = ε_d z² / (fb + z ε_d) ≈ ε_d z² / (fb),   (1)

which indicates that the depth estimation error is inversely proportional to the baseline and increases quadratically with depth. The disparity error ε_d is typically set to 1, and the depth estimation error ε_z as a function of depth can be calculated. It is also possible to set an error bound on ε_z and derive the maximum depth range from the above equation.

For an MLA-based light field camera, the maximum baseline is less than the size of the main lens aperture, making depth estimation very challenging. There are methods specifically proposed for depth estimation in MLA-based light field cameras. For example, in [35], the problem is formulated as a constrained labeling problem on epipolar plane images in a variational framework. In [41], the ray geometry of 3D line segments is imposed as constraints on light field triangulation and stereo matching. In [30], defocus and shading cues are used to improve the disparity estimation accuracy.

3 Hybrid Stereo Imaging

The hybrid stereo imaging system consists of a regular camera and a light field camera as shown in Figure 1. The system has two advantages over a single light field camera: (i) The high-resolution image produced by the regular camera is used to improve the spatial resolution of each sub-aperture image extracted from the light field camera. That is, we obtain a light field with enhanced spatial resolution. (ii) The large baseline between the regular camera and the light field camera results in a wider range and more accurate depth estimation capability, compared to a single light field camera.

Fig. 2 (a) Raw light field. (b) Decoded sub-aperture images.

Fig. 3 (a) Regular camera image. (b) Regular camera image after photometric registration. (c) One of the bicubically resized Lytro sub-aperture images.

3.1 Prototype System and Initial Light Field Data Processing

The prototype system includes a first-generation Lytro camera and a regular camera (AVT Mako G095C). The
light field is decoded using [10] to obtain 11x11 sub-aperture images, each with size 380x380. The regular camera has a spatial resolution of 1200x780 pixels. The imaging system is first calibrated: The regular camera image and the light field middle sub-aperture image are calibrated (utilizing the Matlab Stereo Calibration Toolbox) to determine the overlapping regions between the images and rectify the regular camera image. The regular camera image is then photometrically mapped to the color space of the light field sub-aperture images using the histogram-based intensity matching function technique [16].

Raw light field data and the extracted sub-aperture images are shown in Figure 2. In Figure 3, the rectified regular camera image is shown along with a light field sub-aperture image.

3.2 Improving Spatial Resolution

An illustration of the resolution enhancement process is given in Figure 4. Each low-resolution (LR) light field sub-aperture image is bicubically interpolated to match the size of the high-resolution (HR) regular camera image. The optical flow between the HR image and the light field middle sub-aperture image, and the optical flow between the light field middle sub-aperture image and every other sub-aperture image, are estimated. (We use the optical flow estimation algorithm presented in [22] in our experiments.) Combining these optical flow estimates, motion vectors between the HR image and each of the light field sub-aperture images are obtained. The HR image is warped onto each light field sub-aperture image and fused to produce a high-resolution version of each sub-aperture image. As a result, a high-resolution light field is obtained.

Fig. 4 Illustration of the resolution enhancement process.

Image fusion for resolution enhancement is a well-studied topic; the application areas include satellite imaging for pan-sharpening, digital camera pipelines for demosaicking, and computational photography for focus stacking [24]. In our experiments, we tested two basic methods for image fusion: (i) a wavelet-based approach [42], available in Matlab as the function wfuseimg, which essentially replaces the detail subbands of the low-resolution image with the detail subbands of the high-resolution image, and (ii) alpha blending, also available in Matlab as the function imfuse, which simply takes the weighted average of the input images.

We further increase the speed of the registration process by using the fact that light field sub-aperture images are captured on a regular grid. Instead of estimating the optical flow between the middle sub-aperture image and each of the remaining sub-aperture images, we estimate the optical flow between the middle and the leftmost, rightmost, topmost, and bottommost sub-aperture images, as shown in Figure 5. The estimated motion vectors are interpolated for the rest of the sub-aperture images according to their relative positions within the light field. As a result, we conduct four within-light-field-camera optical flow estimations (instead of 120) and one between-cameras optical flow estimation.
In Figure 6, we show the difference between the regular camera image and the light field sub-aperture images before and after registration. The optical flow within the light field is estimated as described above. The after-registration result shows that the registration process works well. (Note that the residuals for the sub-aperture images in the aperture corners are large because the original sub-aperture images in the corners are too dark due to vignetting.)

Fig. 5 Speeding up the optical flow estimation process.

Fig. 6 (Top) Residual between the regular camera image and light field sub-aperture images before warping. Two sub-aperture images are highlighted. (Bottom) Residual between the regular camera image and light field sub-aperture images after warping.

4 Experimental Results

In this section, we present our experimental results on resolution enhancement and depth estimation. All implementations are done in Matlab, running on an Intel i5 PC with 12GB RAM. For the resolution enhancement process, given in Figure 4, the processing time for an entire Lytro light field is about 70 seconds, of which the optical flow estimation per image pair is about 11 seconds. In Figure 7, we compare a light field sub-aperture image with its resolution-enhanced version. Both the alpha blending and wavelet-based image fusion processes produce good results in terms of resolution enhancement. Alpha blending suppresses the low-spatial-frequency color noise better than the wavelet-based approach; this is expected because the wavelet-based approach preserves the low-frequency content of the light field images, which have more noise compared to the image obtained from the regular camera. Alpha blending, on the other hand, simply averages the two images, reducing the overall noise in all parts of the final image. (In all our experiments, the weights of the HR image and the light field sub-aperture images are 0.55 and 0.45, respectively, giving slightly more weight to the HR image in alpha blending.)

In Figure 8, we compare the proposed method with the hybrid imaging method of Boominathan et al. [6], the learning-based method of Cho et al. [9], and the baseline decoder of Dansereau et al. [10]. Zoomed-in regions from these results are given in Figure 9. Comparing these results, we see that single-sensor methods cannot perform as well as hybrid methods. Among the hybrid methods, the proposed method produces sharper images than the method given in [6]. In addition to producing sharper images, the proposed method has much less computational complexity than the one in [6], which takes about an hour on a similar PC configuration. Finally, we provide an epipolar-plane image (EPI) comparison in Figure 10. Again, the proposed method seems to be the best in preserving fine features.

Refocusing: One of the key features of light field imaging is post-capture digital refocusing through a simple shift-and-sum procedure [20]. In Figure 11, we show refocusing at different distances with the Lytro light field images and the resolution-enhanced light field sub-aperture images. It can clearly be seen that we obtain sharper refocusing compared to the original Lytro images. In Figure 12, we provide refocusing examples from another data set captured by our imaging system. Again, the resolution-enhanced light field results in higher resolution refocused images compared to the Lytro light field.
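The shift-and-sum refocusing used in these experiments can be sketched in a few lines. The version below is a simplification that assumes integer pixel shifts (a practical implementation interpolates for sub-pixel shifts):

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-sum refocusing: shift each sub-aperture image in
    proportion to its angular offset from the central view, then average.
    light_field: (U, V, H, W) array; slope: pixels of shift per unit of
    angular offset, which selects the in-focus depth plane."""
    U, V, H, W = light_field.shape
    uc, vc = U // 2, V // 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - uc)))
            dv = int(round(slope * (v - vc)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Sanity check: a spatially constant light field refocuses to the same
# constant at any slope.
lf = np.ones((3, 3, 8, 8))
assert np.allclose(refocus(lf, slope=1.0), 1.0)
```

Sweeping the slope parameter moves the focal plane from near to far, corresponding to the close, middle, and far focus settings shown in the figures.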
Fig. 7 Resolution enhancement of light field sub-aperture images. (a) One of the bicubically resized Lytro sub-aperture images. (b) Resolution-enhanced sub-aperture image using alpha blending. (c) Resolution-enhanced sub-aperture image using wavelet-based fusion.

Fig. 8 Comparison of various light field decoding and resolution enhancement methods (from left to right: Dansereau et al. [10], Cho et al. [9], Boominathan et al. [6], and the proposed method).
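The alpha-blending fusion compared above reduces to a per-pixel weighted average. A minimal sketch with the 0.55/0.45 weights reported in the experiments (the function name is illustrative; the paper uses Matlab's imfuse):

```python
import numpy as np

def alpha_blend(hr_image, lf_view, w_hr=0.55, w_lf=0.45):
    """Weighted average of the warped high-resolution image and the
    (bicubically upsampled) light field sub-aperture view. Both inputs
    are float arrays of the same shape with values in [0, 1]."""
    return w_hr * hr_image + w_lf * lf_view

hr = np.full((4, 4), 1.0)    # toy high-resolution image
lf = np.full((4, 4), 0.0)    # toy sub-aperture view
fused = alpha_blend(hr, lf)  # every pixel becomes 0.55
```

The wavelet-based alternative instead keeps the low-frequency subbands of the light field view and swaps in the detail subbands of the high-resolution image, which is why it preserves more of the light field's low-frequency noise.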

Improving depth range and accuracy: To demonstrate the increased depth range and improved depth estimation accuracy of our hybrid imaging system, we devised an experimental setup, where target objects (i.e., "Lego blocks") are placed in the scene starting from 40cm away from the imaging system. In Figure 13, we show the leftmost and rightmost light field sub-aperture images as well as the regular camera image, in addition to the disparity maps estimated in different ways. Figure 13(g) is the disparity map of the proposed hybrid system, computed between the regular camera image and the middle sub-aperture image using [22]. Figure 13(f) is the disparity map of the light field camera, computed between the leftmost and rightmost sub-aperture images using [22]. Figure 13(e) is the disparity map estimated by [8], which uses all the sub-aperture images to estimate the disparity map and is specifically designed for micro-lens array based light field cameras. And finally, Figure 13(d) is the disparity map produced by the Lytro manufacturer's proprietary software. Among these different approaches, we see that the proposed system produces the best disparity maps in distinguishing objects from different depths. In Figure 13(h), the disparities of the target object positions are plotted. For the light field camera, the disparity difference from one depth to another becomes too small beyond 100cm, making it difficult to distinguish between different depths, and the disparities eventually become sub-pixel beyond 200cm. On the other hand, for the hybrid system, the disparities are large and distinguishable in the same range.

Addressing occlusion: The proposed method, which registers the high-resolution image and the light field sub-aperture images using optical flow estimation, can handle occluded regions to a large extent. This is demonstrated
Fig. 9 Zoomed-in regions from Figure 8 (from left to right: Dansereau et al. [10], Cho et al. [9], Boominathan et al. [6], and the proposed method).

Fig. 10 Comparison of EPI images (Dansereau et al. [10], Boominathan et al. [6], and the proposed method). The corresponding EPI lines are marked in Figure 8.

in Figure 14. In Figure 14(a), the high-resolution regular camera image is shown. A low-resolution Lytro sub-aperture image is given in Figure 14(b). In Figure 14(c), the resolution-enhanced version of the sub-aperture image is provided. As seen in the zoomed-in regions, the proposed method can handle the occlusion well in most parts, as the optical flow based registration aims to minimize the brightness difference. However, there are still some regions where the difference between the resolution-enhanced and low-resolution input is large. One method to detect these occluded regions is to compare the absolute difference against a threshold; when the difference in any color channel is larger than a pre-determined threshold, we can mark the corresponding pixel as occluded. Figure 14(d) shows the occlusion mask when the threshold is set to 0.175 after some trial and error, with the pixel values in the range 0 to 1. The occluded regions can then be filled with the pixel values from the original light field image, as shown in Figure 14(e). In this approach, the value of the threshold is critical. Choosing the threshold too small may cause missing the occlusion regions; on the other
hand, a large threshold value may lead to transferring noisy and low-resolution data into the final image.

Fig. 11 Post-capture digital refocusing to close, middle and far depth using the shift-and-sum technique. (a) Single-sensor (Lytro) light field refocusing. (b) Resolution-enhanced (using alpha blending) light field refocusing. (c) Resolution-enhanced (using wavelet-based fusion) light field refocusing. The bottom row shows zoomed-in regions from middle-depth focusing.

Fig. 12 Post-capture digital refocusing of single-sensor (Lytro) light field data and resolution-enhanced (using alpha blending) light field data using the shift-and-sum technique. (a) Close-depth focus. (b) Middle-depth focus. (c) Far-depth focus. The bottom row shows zoomed-in regions.

Fig. 13 Disparity map comparison. (a) Leftmost Lytro sub-aperture image. (b) Rightmost Lytro sub-aperture image. (c) Regular camera image (before photometric registration). (d) Disparity map of the Lytro software. (e) Disparity map of [8]. (f) Disparity map between the leftmost and rightmost Lytro sub-aperture images. (g) Disparity map between the middle Lytro sub-aperture image and the regular camera image. (h) Disparities of the target object centers.

Fig. 14 Occlusion handling. (a) High-resolution regular camera image. (b) Lytro light field sub-aperture image. (c) Resolution-enhanced light field sub-aperture image. (d) Occlusion mask. (e) Occluded regions filled from the Lytro sub-aperture image.

5 Conclusions

In this paper, we presented a hybrid imaging system that includes a light field camera and a regular camera. The system, while keeping the capabilities of light field imaging, improves both spatial resolution and depth estimation range/accuracy due to the increased baseline. Because the fixed stereo system allows pre-calibration, and by utilizing the fact that light field sub-aperture images are captured on a regular grid, the registration of low-resolution light field sub-aperture images and high-resolution regular camera images is simplified. With proper image registration, even a simple image fusion, such as alpha blending, produces good results. The method compares favorably against several methods in the literature, including two single-image light field decoders and a hybrid-camera light field resolution enhancer.

References

1. Lytro, Inc. https://www.lytro.com/
2. Raytrix, GmbH. https://www.raytrix.de/
3. Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision. In: Computational Models of Visual Processing, pp. 3–20. MIT Press (1991)
4. Alam, M.Z., Gunturk, B.K.: Hybrid stereo imaging including a light field and a regular camera. In: IEEE Signal Processing and Communication Applications Conf., pp. 1293–1296 (2016)
5. Bishop, T.E., Zanetti, S., Favaro, P.: Light field superresolution. In: IEEE Int. Conf. on Computational Photography, pp. 1–9 (2009)
6. Boominathan, V., Mitra, K., Veeraraghavan, A.: Improving resolution and depth-of-field of light field cameras using a hybrid imaging system. In: IEEE Int. Conf. on Computational Photography, pp. 1–10 (2014)
7. Broxton, M., Grosenick, L., Yang, S., Cohen, N., Andalman, A., Deisseroth, K., Levoy, M.: Wave optics theory and 3D deconvolution for the light field microscope. Optics Express 21, 25418–25439 (2013)
8. Calderon, F.C., Parra, C.A., Niño, C.L.: Depth map estimation in light fields using an stereo-like taxonomy. In: IEEE Symp. on Image, Signal Processing and Artificial Vision, pp. 1–5 (2014)
9. Cho, D., Lee, M., Kim, S., Tai, Y.W.: Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In: IEEE Int. Conf. on Computer Vision, pp. 3280–3287 (2013)
10. Dansereau, D.G., Pizarro, O., Williams, S.B.: Decoding, calibration and rectification for lenselet-based plenoptic cameras. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1027–1034 (2013)
11. Gallup, D., Frahm, J.M., Mordohai, P., Pollefeys, M.: Variable baseline/resolution stereo. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
12. Georgiev, T.: New results on the plenoptic 2.0 camera. In: Asilomar Conf. on Signals, Systems and Computers, pp. 1243–1247 (2009)
13. Georgiev, T., Zheng, K.C., Curless, B., Salesin, D., Nayar, S., Intwala, C.: Spatio-angular resolution tradeoffs in integral photography. In: Eurographics Conf. on Rendering Techniques, pp. 263–272 (2006)
14. Gershun, A.: The light field. Journal of Mathematics and Physics 18(1), 51–151 (1939)
15. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. In: ACM Conf. on Computer Graphics and Interactive Techniques, pp. 43–54 (1996)
16. Grossberg, M.D., Nayar, S.K.: Determining the camera response from images: what is knowable? IEEE Trans. on Pattern Analysis and Machine Intelligence 25(11), 1455–1467 (2003)
17. Jeon, H.G., Park, J., Choe, G., Park, J.: Accurate depth map estimation from a lenslet light field camera. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1547–1555 (2015)
18. Junker, A., Stenau, T., Brenner, K.H.: Scalar wave-optical reconstruction of plenoptic camera images. Applied Optics 53(25), 5784–5790 (2014)
19. Kalantari, N.K., Wang, T.C., Ramamoorthi, R.: Learning-based view synthesis for light field cameras. ACM Trans. on Graphics 35(6), 193:1–10 (2016)
20. Levoy, M., Hanrahan, P.: Light field rendering. In: ACM Int. Conf. on Computer Graphics and Interactive Techniques, pp. 31–42 (1996)
21. Lippmann, G.: Epreuves reversibles donnant la sensation du relief. J. Phys. Theor. Appl. 7(1), 821–825 (1908)
22. Liu, C.: Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. thesis, MIT (2009)
23. Lumsdaine, A., Georgiev, T.: The focused plenoptic camera. In: IEEE Int. Conf. on Computational Photography, pp. 1–8 (2009)
24. Mitchell, H.B.: Image fusion: theories, techniques and applications. Springer (2010)
25. Mitra, K., Veeraraghavan, A.: Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior. In: IEEE Conf. on Computer Vision and Pattern Recognition Workshops, pp. 22–28 (2012)
26. Ng, R.: Fourier slice photography. ACM Trans. on Graphics 24(3), 735–744 (2005)
27. Perez, F., Perez, A., Rodriguez, M., Magdaleno, E.: Fourier slice super-resolution in plenoptic cameras. In: IEEE Int. Conf. on Computational Photography, pp. 1–11 (2012)
28. Shi, L., Hassanieh, H., Davis, A., Katabi, D., Durand, F.: Light field reconstruction using sparsity in the continuous Fourier domain. ACM Trans. on Graphics 34(1), 12:1–13 (2014)
29. Shroff, S.A., Berkner, K.: Image formation analysis and high resolution image reconstruction for plenoptic imaging systems. Applied Optics 52(10), 22–31 (2013)
30. Tao, M., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: IEEE Int. Conf. on Computer Vision, pp. 673–680 (2013)
31. Trujillo-Sevilla, J.M., Rodriguez-Ramos, L.F., Montilla, I., Rodriguez-Ramos, J.M.: High resolution imaging and wavefront aberration correction in plenoptic systems. Optics Letters 39(17), 5030–5033 (2014)
32. Unger, J., Wenger, A., Hawkins, T., Gardner, A., Debevec, P.: Capturing and rendering with incident light fields. In: Eurographics Workshop on Rendering, pp. 141–149 (2003)
33. Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J.: Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Trans. on Graphics 26(3), 69:1–12 (2007)
34. Wang, X., Li, L., Houi, G.: High-resolution light field reconstruction using a hybrid imaging system. Applied Optics 55(10), 2580–2593 (2016)
35. Wanner, S., Goldluecke, B.: Globally consistent depth labeling of 4D light fields. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 41–48 (2012)
36. Wanner, S., Goldluecke, B.: Spatial and angular variational super-resolution of 4D light fields. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 901–908 (2012)
37. Wilburn, B., Joshi, N., Vaish, V., Talvala, E.V., Antunez, E., Barth, A., Adams, A., Horowitz, M., Levoy, M.: High performance imaging using large camera arrays. ACM Trans. on Graphics 24(3), 765–776 (2005)
38. Wu, J., Wang, H., Wang, X., Zhang, Y.: A novel light field super-resolution framework based on hybrid imaging system. In: Visual Communications and Image Processing, pp. 1–4 (2015)
39. Yang, J.C., Everett, M., Buehler, C., McMillan, L.: A real-time distributed light field camera. In: Eurographics Workshop on Rendering, pp. 77–86 (2002)
40. Yoon, Y., Jeon, H.G., Yoo, D., Lee, J.Y., Kweon, I.S.: Learning a deep convolutional network for light-field image super-resolution. In: IEEE Int. Conf. on Computer Vision Workshops, pp. 24–32 (2015)
41. Yu, Z., Guo, X., Ling, H., Lumsdaine, A., Yu, J.: Line assisted light field triangulation and stereo matching. In: IEEE Int. Conf. on Computer Vision, pp. 2792–2799 (2013)
42. Zeeuw, P.M.: Wavelet and image fusion. CWI (1998)