Introduction
Bio-image informatics is becoming an increasingly important component of various biological studies (de Chaumont et al., Reference de Chaumont, Coura, Serreau, Cressant, Chabout, Granon and Olivo-Marin2012; Myers, Reference Myers2012; Xian et al., Reference Xian, Shen, Chen, Sun, Qiao, Jiang, Yu, Men, Han, Pang, Kaeberlein, Huang and Han2013; Chen et al., Reference Chen, Qian, Wu, Xian, Chen, Cao, Green, Zhao, Tang and Han2015; Weissleder & Nahrendorf, Reference Weissleder and Nahrendorf2015). Bio-image processing techniques are widely used to automatically detect and quantify biological phenotypes (Shamir et al., Reference Shamir, Wolkow and Goldberg2009; Neumann et al., Reference Neumann, Walter, Heriche, Bulkescher, Erfle, Conrad, Rogers, Poser, Held, Liebel, Cetin, Sieckmann, Pau, Kabbe, Wunsche, Satagopam, Schmitz, Chapuis, Gerlich, Schneider, Eils, Huber, Peters, Hyman, Durbin, Pepperkok and Ellenberg2010; Rihel et al., Reference Rihel, Prober, Arvanites, Lam, Zimmerman, Jang, Haggarty, Kokel, Rubin, Peterson and Schier2010; Swierczek et al., Reference Swierczek, Giles, Rankin and Kerr2011; Wang et al., Reference Wang, Gui, Liu, Zhang and Mosig2013; Yemini et al., Reference Yemini, Jucikas, Grundy, Brown and Schafer2013; Zhou et al., Reference Zhou, Cattley, Cario, Bai and Burton2014; Chen & Han, Reference Chen and Han2015; Kirsanova et al., Reference Kirsanova, Brazma, Rustici and Sarkans2015; Chen et al., Reference Chen, Xia, Huang, Chen and Han2016). A wide variety of processing techniques are now available to researchers in order to achieve these desired results. Among these options, image segmentation is a vital processing technique well suited for biological image analysis. Image segmentation is the prerequisite for phenotype quantification and is central to almost all applications related to bio-image informatics (Peng, Reference Peng2008). 
For evenly illuminated images, Otsu’s (Reference Otsu1979) method is commonly used to determine a gray intensity threshold and subsequently segment the image. Held et al. (Reference Held, Palmisano, Haberle, Hensel and Wittenberg2011) provided a parameter optimization method to improve image segmentation performance. The component tree method was a later development that could be applied to segment time-lapse microscopy images and track moving cells (Xiao et al., Reference Xiao, Li, Du and Mosig2011).
Multi-well plates are commonly used to perform high-throughput screening utilizing organisms such as Caenorhabditis elegans (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012; O’Reilly et al., Reference O’Reilly, Luke, Perlmutter, Silverman and Pak2014), larval zebrafish (Rihel et al., Reference Rihel, Prober, Arvanites, Lam, Zimmerman, Jang, Haggarty, Kokel, Rubin, Peterson and Schier2010), or cell culture (Balcarcel & Clark, Reference Balcarcel and Clark2003). Image capturing and processing are required for these experiments to collect meaningful data at the speed and accuracy needed for high-throughput screening. One major limitation of previous multi-well plate experiments has been the inherent variation of illumination observed by bright-field microscopy (Fig. 1). The variation is introduced by the surface of the individual wells and their relation to the microscope light source. This uneven illumination makes image segmentation considerably more difficult and presents a major obstacle to the use of multi-well plates in high-throughput experiments, which rely on the cultured model organisms or cells being automatically distinguished from the background in experimental images. For example, Figure 1 shows a typical bright-field microscopy image of a multi-well plate. The regions outside of the well have a lower gray intensity, and the gray intensity increases from the well edge toward the middle of the well. Because of this variation, it is difficult to determine a gray intensity threshold for this image and distinguish worms from the background automatically.
More recently, Wahlby et al. introduced the WormToolbox for the image analysis software CellProfiler. This toolbox revolutionized image analysis in C. elegans in many ways and included a capability for illumination correction to be carried out before the segmentation of microscopy images captured from 384-well plates (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012). Their method involves both well region detection and illumination correction. However, in our experiments we encountered problems that the WormToolbox could not overcome, including incomplete well edges and light interference from outside the well. We therefore developed a new image segmentation method to solve the uneven illumination problems faced by previous methodology.
Our method uses image contrast information to reduce the influence of uneven illumination, so it does not need to rely on illumination correction before initiating image segmentation. Results demonstrated improved performance when applying our method to unevenly illuminated microscopy images derived from both the public data set and our own experiments. To our knowledge, the proposed method is the only segmentation method capable of segmenting multi-well plate microscopy images with various levels of uneven illumination simultaneously and with this degree of efficiency.
Materials and Methods
A general overview of our method for segmenting multi-well plate microscopy images with nonhomogeneous illumination problems is presented in the “Overview” section. The sliding window and contrast computation methods are described in the “Contrast Computation” section. The “Comparing to Other Available Methods” section describes how other available methods handle uneven illumination.
Overview
The image contrast is a measure of local variations of pixels’ gray intensities (Haralick et al., Reference Haralick, Shanmugam and Dinstein1973). The detailed pipeline of our segmentation method is as follows: first, we used a sliding window to scan the original image and computed the contrast value in each window; second, we created a contrast image from the contrast values computed in the first step; and third, we segmented the contrast image based on the differences in contrast values to determine whether each pixel belongs to the foreground or the background.
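The three steps above can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the 5×5 window follows the paper, but the number of quantization levels and the threshold are toy values chosen for the synthetic image below.

```python
import numpy as np

def contrast_image(img, win=5, levels=8):
    """Slide a win x win window over img and store each window's GLCM
    contrast at the window's center pixel. Over horizontal neighbor
    pairs, sum_{i,j} (i - j)^2 p(i, j) equals the mean squared
    gray-level difference of those pairs, which is what we compute."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantize gray levels
    h, w = q.shape
    half = win // 2
    out = np.zeros((h, w))  # border pixels are left at 0 for simplicity
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = q[r - half:r + half + 1, c - half:c + half + 1]
            diffs = (patch[:, 1:] - patch[:, :-1]).astype(float)
            out[r, c] = np.mean(diffs ** 2)
    return out

def segment(img, threshold):
    """Label as foreground every pixel whose local contrast exceeds
    the threshold; everything else is background."""
    return contrast_image(img) > threshold

# Synthetic unevenly illuminated image: a left-to-right intensity
# gradient (background) with one dark blob standing in for a worm.
img = np.tile(np.linspace(80, 180, 64), (64, 1)).astype(np.uint8)
img[30:34, 20:40] = 10
mask = segment(img, threshold=1.0)
```

On this toy image the smooth gradient yields near-zero local contrast throughout the background, while the blob’s edges exceed the threshold, so the segmentation is unaffected by the illumination gradient.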
Contrast Computation
We used a small sliding window (for the results presented in this paper, we have defined the size of the sliding window as 5×5 pixels) to scan the microscopy image. The Gray Level Co-Occurrence Matrix (GLCM) (Haralick et al., Reference Haralick, Shanmugam and Dinstein1973) was computed for each window. Then the local contrast of each window was computed based on the GLCM.
Contrast = Σᵢ Σⱼ (i − j)² p(i, j), with i, j = 0, …, L − 1,

where L is the number of gray levels in the image and p(i, j) is the joint probability that a pixel with value i is adjacent to a pixel with value j. Image contrast refers to the gray intensity difference among neighboring pixels. The sliding window and local contrast allow us to highlight the difference between foreground and background. Results showed that background regions are more homogeneous and exhibit smaller contrast values than foreground regions, so there are large differences in contrast values between the two.
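As a concrete check of the formula, the co-occurrence matrix for a single window can be built explicitly. This is an illustrative sketch only; the gray-level count and the adjacency offset used here are assumptions, not parameters reported in the paper.

```python
import numpy as np

def glcm_contrast(window, levels=8, offset=(0, 1)):
    """Contrast of one window from its gray level co-occurrence matrix:
    sum over i, j of (i - j)^2 * p(i, j), where p(i, j) is the joint
    probability that a pixel with level i has a neighbor (at `offset`)
    with level j."""
    q = (window.astype(float) / 256 * levels).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()               # normalize counts to probabilities
    i, j = np.indices((levels, levels))
    return float(np.sum((i - j) ** 2 * p))

# A flat, background-like window has zero contrast;
# a window containing an intensity edge does not.
flat = np.full((5, 5), 120, dtype=np.uint8)
edge = np.zeros((5, 5), dtype=np.uint8)
edge[:, 2:] = 255
print(glcm_contrast(flat))   # 0.0
print(glcm_contrast(edge))   # 12.25
```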
Figure 2 shows the differences in contrast values between background regions and foreground regions. Compared with the lower contrast values (<5 in these labeled pixels) of background regions, the foreground regions have higher contrast values (>1,000 in these labeled pixels). We then used the rule that foreground pixels have higher contrast values (larger than the defined threshold of 100) and background pixels have lower contrast values (≤100) to segment foreground from background (segmented image shown in Fig. 3a). We also computed the average contrast values for background pixels and foreground pixels (Fig. 4). The average contrast values of background and foreground are 5.71 and 653.52, respectively. The large differences in contrast values make it simple to distinguish foreground from background.
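The class separation described above can be illustrated on a small synthetic contrast image. The array values below are invented for illustration; only the threshold of 100 comes from the text.

```python
import numpy as np

# Synthetic contrast image: low-contrast background plus one
# high-contrast foreground stripe (values are illustrative only).
contrast = np.random.default_rng(0).uniform(0, 5, size=(64, 64))
contrast[30:34, 10:50] = 1200.0

THRESHOLD = 100.0                 # fixed contrast threshold from the text
foreground = contrast > THRESHOLD
background = ~foreground

print(foreground.sum())                 # 160 stripe pixels
print(contrast[foreground].mean())      # 1200.0
print(contrast[background].mean() < 5)  # True
```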
Comparing to Other Available Methods
To compare the performance of our method in handling uneven illumination with that of a previously published method, we also processed the 384-well plate images using the established WormToolbox for CellProfiler (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012).
Most of the tested problems related to uneven illumination and noise, such as a bright region on the right of the image (Supplementary Fig. 1), an image that does not cover the complete well edges (Supplementary Fig. 1), an image with a dark ripple segment, and an image comprising multiple wells, could not be solved by the WormToolbox of CellProfiler (Supplementary Fig. 1). This is primarily due to its inability to handle noise outside the well region or an incomplete well edge. From the intermediate results (Supplementary Fig. 1), we can see that the WormToolbox of CellProfiler needs to first find the well region and then correct the illumination. However, most uneven illumination problems and noise can prevent it from finding the well region correctly. Therefore, the WormToolbox of CellProfiler cannot correct the uneven illumination in these situations.
Supplementary Figure 1
Supplementary Figure 1 can be found online. Please visit journals.cambridge.org/jid_MAM.
Supplementary Figure 1 shows the intermediate results and illumination-corrected results produced by the WormToolbox for CellProfiler (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012). The software and its illumination correction pipeline were downloaded from the CellProfiler website (http://cellprofiler.org/).
Availability of Data and Materials
The results supporting the conclusions of this article are included in the Supplementary Material. Public multi-well plate images were downloaded from the website of CellProfiler (http://cellprofiler.org/).
Results
We tested commonly encountered uneven illumination issues in multi-well plate bright-field microscopy images from both the public data set and our own experiments. Public multi-well plate images were downloaded from the CellProfiler website (http://cellprofiler.org/); the other multi-well plate images analyzed were generated by our own experiments. These microscopy images were used to test the segmentation performance of our novel method. We also compared our method with the performance of the WormToolbox available in CellProfiler (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012) on the same images. The WormToolbox is an open-source platform that provides a host of C. elegans image analysis functionalities, including methodology that attempts to overcome uneven illumination within multi-well plate images. The results showed that our new method overcame all the uneven illumination problems that could not be addressed by the WormToolbox, and it demonstrated reliable segmentation performance on all tested images.
Figure 3 shows the contrast values computed from multi-well plate images with different uneven illumination problems. Other tested uneven illumination images and results are listed in Supplementary Figure 2. The original unevenly illuminated images are listed in the first column of each panel in Figure 3 and Supplementary Figure 2, and the contrast value computed for each pixel is displayed in the second column. From the contrast values, we can see that only worms and well outlines produce higher values. The background regions, regardless of whether they are brighter or darker in the original images, all have lower contrast values, so we can easily segment the images based on these differences in contrast values. The simple rule is that foreground pixels have higher contrast values (larger than the defined threshold of 100), and background pixels have lower contrast values. The segmentation results showed that our method can handle various uneven illumination scenarios, such as dark regions outside of the well with a bright well center (Fig. 3a), a bright noise region on the right of the image (Fig. 3b shows an image that captures a whole well plus one bright region of a neighboring well), and an image that does not cover the complete well edges (Fig. 3c shows the image of Fig. 3a with one edge cut out). Moreover, our method can also process images exhibiting a dark ripple in one section of the well (Supplementary Fig. 2b) or an image with multiple wells (Supplementary Fig. 2d shows a simulated image composed of four wells spliced together).
Supplementary Figure 2
Supplementary Figure 2 can be found online. Please visit journals.cambridge.org/jid_MAM.
We also tested the performance of the WormToolbox of CellProfiler using these same images. Results showed that it was only capable of solving the problems associated with dark regions outside of the well and bright well centers (Supplementary Fig. 1a). Other uneven illumination problems, such as a bright noise region on the right of the image (Supplementary Fig. 1b), an image which does not cover the complete well edges (Supplementary Fig. 1c), an image with a dark ripple, and an image composed of multiple wells, could not be solved by the WormToolbox for CellProfiler (Supplementary Fig. 1). The WormToolbox of CellProfiler needs to first find the well region and then correct the illumination. However, most uneven illumination problems and noise can prevent it from finding the well region correctly. Furthermore, its illumination correction method, which is based on a convex hull, cannot solve the problem of a bright noise region outside of the well. Therefore, the WormToolbox of CellProfiler cannot correct the uneven illumination in these situations. The WormToolbox and its illumination correction pipeline were downloaded from the CellProfiler website (http://cellprofiler.org/).
To our knowledge, the proposed method is therefore the only segmentation method capable of segmenting multi-well plate microscopy images exhibiting all of these uneven illumination problems.
Discussion
The unambiguous segmentation of multi-well plate microscopy images with various uneven illuminations is a challenging problem. Given the widespread use of multi-well plates in a variety of biological experiments and the image analysis issues associated with illumination variation, it is a problem that must be addressed.
Here, we have developed a method for bright-field microscopy image segmentation based on image contrast information. This is a new approach that allows for a simple solution to various uneven illumination problems and results in efficient and accurate segmentation of microscopy images. We have tested our methodology on both our own experimental images and a publicly available database containing uneven illumination problems in a bright-field, multi-well plate context. The images analyzed in this study included features with the potential to confound high-throughput image analysis, including regions of bright noise, incomplete images of entire wells, and dark ripples. Our method overcomes these challenges in a novel way that rivals the performance of other methods, such as those available in the WormToolbox (Wahlby et al., Reference Wahlby, Kamentsky, Liu, Riklin-Raviv, Conery, O’Rourke, Sokolnicki, Visvikis, Ljosa, Irazoqui, Golland, Ruvkun, Ausubel and Carpenter2012).
Despite the improvement to the reliability of image analysis, our methodology still detects artifacts rather than real data in some isolated instances. This demonstrates that human intervention is still required in high-throughput image analysis. However, due to the robust nature of our methodology this human intervention serves more of a quality control function, as the segmentation protocol performs the vast majority of detection and analysis.
It would be interesting to apply our methodology to addressing problems such as the quantification of C. elegans developmental stages. Past attempts to achieve high-throughput determination of developmental stages such as the DevStaR machine learning system have cited issues dealing with the variation of intensity in multi-well plates (White et al., Reference White, Lees, Kao, Cipriani, Munarriz, Paaby, Erickson, Guzman, Rattanakorn, Sontag and Geiger2013). It would be beneficial to compare the performance of our contrast-based methodology with the background removal approach developed by White et al.
High-throughput image analysis is an integral technology for biomedical research. Although we have used C. elegans to demonstrate the power of our methodology, uneven illumination is a widespread issue. This problem has been encountered in a diverse range of disciplines, including cancer research (Malm et al., Reference Malm, Brun and Bengtsson2015), biomarker discovery (Ivanov & Grabowska, Reference Ivanov and Grabowska2017), and toxicology (Hsu et al., Reference Hsu, Huang, Attene-Ramos, Austin, Simeonov and Xia2017), among many others. It would be interesting to assess the efficiency of our contrast value-based methodology in applications such as these, rather than limiting its use to multi-well, bright-field images of C. elegans. Although unknown and exciting applications are possible in the future, we primarily hope that the presented image segmentation method will allow other researchers to segment and process their own biological images more easily than ever before.
Conclusions
We propose a new method based primarily on image contrast data to solve various issues associated with uneven illumination. This method does not require illumination correction, and our results demonstrated that greater precision can ultimately be achieved by utilizing this approach. We applied this method to various unevenly illuminated multi-well plate microscopy images, and it produced unparalleled segmentation performance. It can be used to process multi-well plate microscopy images and segment experimental subjects, in this case C. elegans, cultured on multi-well plates from an unevenly illuminated background. Currently, this is the only method for segmenting multi-well plate microscopy images that is able to collectively overcome all of the uneven illumination problems addressed in this article.
Acknowledgments
The authors acknowledge the CellProfiler project team for releasing the WormToolbox and test images, which are useful for testing our method. Their public test images allowed more robust and independent testing of our method, beyond just the experimental images generated in our lab. This work was supported by National Natural Science Foundation of China (31401025, 81273108, and 71271125).