
Depth Hypotheses Fusion through 3D Weighted Least Squares in Shape from Focus

Published online by Cambridge University Press: 01 March 2021

Usman Ali
Affiliation:
Computer Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, 1600, Chungjeol-ro, Byeongcheon-myeon, 31253 Cheonan, South Korea
Muhammad Tariq Mahmood*
Affiliation:
Computer Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, 1600, Chungjeol-ro, Byeongcheon-myeon, 31253 Cheonan, South Korea
Future Convergence Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, 1600, Chungjeol-ro, Byeongcheon-myeon, 31253 Cheonan, South Korea
*Author for correspondence: Muhammad Tariq Mahmood, E-mail: tariq@koreatech.ac.kr

Abstract

In shape-from-focus (SFF) methods, a single focus measure is used to compute the focus volume. However, a single focus measure operator is often unable to compute accurate focus values for images of objects with diverse shapes. Furthermore, most SFF methods try to improve the depth map without considering any additional structural or prior information; consequently, the extracted shape of the object may lack important details. In this work, we address these problems and propose a method in which depth hypotheses are combined through 3D weighted least squares to obtain a more accurate 3D shape. First, depth hypotheses are obtained by applying a number of focus operators. Then, a structural prior, or guidance volume, is extracted from the focus measure volumes. Finally, a 3D weighted least squares optimization is applied to the depth hypothesis volume, with weights computed from the guidance volume. By inducing the structural prior in this way, an improved depth map is obtained. The proposed method was tested on various image sequences of synthetic and real microscopic objects. Experimental results and comparative analysis demonstrate the effectiveness of the proposed method.
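
To make the pipeline outlined above concrete, the following is a minimal sketch (not the authors' implementation) of the three steps: depth hypotheses from two standard focus operators, a guidance map derived from the focus volumes, and a weighted least squares (WLS) regularization whose weights come from that guidance. For brevity the sketch solves a 2D WLS system on a fused depth map rather than the paper's full 3D formulation over the hypothesis volume; the choice of operators, the parameter values (lam, eps), and the synthetic 30-frame stack are illustrative assumptions.

# Minimal sketch (not the authors' code): fuse depth hypotheses from two
# focus operators and regularize the fused map with a guidance-weighted
# least squares (WLS) smoother. The paper applies the WLS in 3D over the
# depth hypothesis volume; this 2D version only illustrates the idea.
import numpy as np
from scipy.ndimage import laplace, sobel
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def focus_volumes(stack):
    # Two per-frame focus measures for an image stack of shape (N, H, W).
    lap = np.stack([np.abs(laplace(f, mode="nearest")) for f in stack])              # Laplacian energy
    ten = np.stack([sobel(f, axis=0) ** 2 + sobel(f, axis=1) ** 2 for f in stack])   # Tenengrad
    return lap, ten

def wls_smooth(depth, guide, lam=5.0, eps=1e-4):
    # Solve (I + lam * L_w) u = depth, where L_w is a graph Laplacian whose
    # link weights are small across strong edges of the guidance map.
    h, w = depth.shape
    n = h * w
    gx = np.abs(np.diff(guide, axis=1, append=guide[:, -1:]))
    gy = np.abs(np.diff(guide, axis=0, append=guide[-1:, :]))
    wx = 1.0 / (gx + eps)
    wy = 1.0 / (gy + eps)
    wx[:, -1] = 0.0          # no horizontal link wrapping across row ends
    wy[-1, :] = 0.0          # no vertical link past the last row
    wx, wy = wx.ravel(), wy.ravel()
    deg = wx + wy + np.roll(wx, 1) + np.roll(wy, w)              # diagonal (degree) terms
    L = diags(deg) - diags(wx[:-1], 1) - diags(wx[:-1], -1) \
                   - diags(wy[:-w], w) - diags(wy[:-w], -w)
    A = identity(n) + lam * L
    return spsolve(A.tocsr(), depth.ravel()).reshape(h, w)

# Toy usage with a synthetic 30-frame stack (sizes and values are arbitrary).
stack = np.random.rand(30, 64, 64)
fv1, fv2 = focus_volumes(stack)
d1, d2 = fv1.argmax(axis=0), fv2.argmax(axis=0)          # depth hypotheses (frame indices)
c1, c2 = fv1.max(axis=0), fv2.max(axis=0)                # per-pixel confidences
fused = (c1 * d1 + c2 * d2) / (c1 + c2 + 1e-8)           # confidence-weighted fusion
guide = np.maximum(c1, c2)                               # guidance derived from focus volumes
depth = wls_smooth(fused, guide, lam=5.0)

The guidance-dependent weights are what carry the structural prior: where the guidance map varies strongly, smoothing is suppressed and depth discontinuities are preserved; elsewhere the quadratic penalty flattens noise in the fused hypotheses.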

Type
Software and Instrumentation
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the Microscopy Society of America
