TY - JOUR
AU - Melvin, Adam T.
AB - Introduction
The development of fluorescence- and image-based single cell technologies has enabled systematic investigation of cellular heterogeneity in a wide range of diseased tissues and cellular populations [1, 2]. While conventional single cell analytical tools such as flow cytometry (including fluorescence-activated cell sorting and imaging flow cytometry) can detect, sort, and collect cells with desired properties, these techniques do not permit dynamic monitoring of cell responses because data are collected at a single time point [3]. To address these limitations, microscale technologies such as droplet microfluidic devices and microfluidic cell trap arrays allow facile collection and segregation of single cells for real-time investigation of cellular processes [4, 5]. Droplet microfluidic devices, in particular, have the advantage of working with picoliter to nanoliter volumes of solution, which increases the sensitivity, specificity, and precision of real-time quantification of intra- and extracellular processes [3]. The recent development of a wide variety of sophisticated fluorescent cellular probes has made it straightforward to track and detect cellular activities by pairing static microdroplet trapping arrays with fluorescence microscopy platforms, eliminating the need for the high-speed cameras and expensive fiber optics used in large-scale cytometric tools [6, 7]. This technology has found a diverse set of applications in disease detection and diagnostics, ranging from single cell analyses to droplet-based quantitative PCR and electrokinetic assays [8–11]. One example in cellomics is the use of fluorescent stains and organic dyes in droplet microfluidic devices to sort cells based on their dynamic fluorescent responses to external stimuli [12, 13]. Similarly, fluorescent proteins, quantum dots, and luminescent nanoparticles have been used to track protein-protein interactions and intracellular enzyme activities and to identify biomolecules or biomarkers within single cells encapsulated in droplets [14–17]. In addition to cellomics, massively parallelized high-throughput droplet generators are used in combination with fluorescent barcodes to perform single cell DNA and RNA sequencing [18, 19]. Digital droplet microfluidics is also used extensively in quantitative immunoassays and biosensor development [20]. Beyond disease detection and diagnostics, fluorescence-based droplet microfluidics finds applications in renewable energy, the pharmaceutical industry, and environmental management [21–24]. The growing adoption of these single-cell analytical devices across fields has created a need for computational tools capable of processing and quantifying the large, intricate datasets collected from these screening systems. Automated quantification is also needed because manual cell counting and analysis is laborious, inconsistent, and prone to subjective results that vary from person to person. While many commercial end-point single cell analysis platforms are equipped with expensive, complementary hardware and software for simultaneous imaging and analysis, stand-alone software tools for analyzing high-throughput microarray or microfluidic data remain limited.
Existing open-source software such as ImageJ (NIH) and CellProfiler (Broad Institute) can count and characterize the total number of cells in a defined region; however, these tools are not suited to high-throughput intracellular quantification of large numbers of cells, such as the datasets generated with a droplet microfluidic trapping array [25, 26]. ImageJ offers macros for grid analysis similar to a hemocytometer but is limited to grid annotation or the detection of cell colonies [27]. Commercial platforms accompanied by software tools, such as the WOLF Cell Sorter (NanoCellect Biomedical, Inc., San Diego, CA, USA), Scepter 2.0 Cell Counter (Millipore Sigma, Burlington, MA, USA), and Cellomics (ThermoScientific, Pittsburgh, PA, USA), can analyze high-throughput microfluidic data but are limited in their ability to automatically detect microarrays and quantify intracellular properties [28]. Proprietary microarray software such as Quantarray (Packard BioChip Technologies, Billerica, MA, USA) and GenePix (Molecular Devices, Sunnyvale, CA, USA) allows users to define arrays individually in one image but cannot automatically recognize the bounds of the array [29, 30]. Recently, several algorithms have been developed to analyze individual aspects of droplet microfluidic data, including screening target cells within droplets, tracking droplet morphology and velocity, and tracking single-cell movement trajectories within droplets [31–33]. While these algorithms can be integrated into experimental platforms to extract superficial droplet or cellular information, they are often restricted to a single analysis metric and are limited in their ability to quantify cell encapsulation information and intracellular properties. The goal of this work was to develop an algorithm, FluoroCellTrack, to rapidly analyze a wide range of fluorescent microscopy images from different droplet microfluidic applications. The algorithm was developed in Python 3.6.0 (Python Software Foundation), given its advantages over other programming languages such as MATLAB, C++, and Java [34]. It was designed to handle images obtained by fluorescence microscopy consisting of overlaid brightfield and fluorescence channels. To demonstrate its wide-ranging utility, FluoroCellTrack was applied to three different problems: (1) characterizing and quantifying a heterogeneous population of single OPM2 cancer cells based on their dynamic responses to different concentrations of a drug; (2) distinguishing cells from clusters of spectrally independent luminescent nanoparticles (NPs) co-encapsulated in droplets, which were successfully used as droplet trackers in prior work by Vaithiyanathan et al. [35]; and (3) quantifying heterogeneous uptake of fluorescent cell penetrating peptides (CPPs) in single cancer cells through intracellular fluorescence, as described in previous work by Safa et al. [36]. The algorithm automatically (i) distinguished droplets from cells, (ii) distinguished and counted live, dead, and overlapping cells in each droplet, (iii) retrieved cell and NP co-encapsulation information to track single cells in each droplet, and (iv) quantified the intracellular fluorescence of intact cells.
A significant limitation of single cell analysis is the time-consuming nature of manual quantification; FluoroCellTrack takes <1 min to count cells and <30 min to quantify intracellular fluorescence, compared with ~60 min and ~20 h, respectively, for manual analysis. Compared against manual cell counting, FluoroCellTrack showed an average similarity of ~92–99% for total populations averaging 320 cells per experiment, coupled with a 19-fold increase in analysis speed. While FluoroCellTrack was tested here on low-throughput data for ease of comparison with manual analysis, it can quantify fluorescence-based, high-throughput, and multiplexed datasets obtained from droplet microfluidic platforms as well as microfluidic cell trap arrays and microdroplet arrays [37, 38]. Beyond microfluidics, the algorithm can also be readily employed to quantify fluorescent cells in platforms such as microarrays, microchannels, 3D microenvironments, microtiter plates, and petri dishes [39, 40].
Materials and methods
Cell culture and reagents
Details of the cell culture, reagents, and equipment used in all three applications [35, 36, 41] are provided in S1 Method.
Design and fabrication of the microfluidic device
The microfluidic device used here is a two-layered droplet generator designed and fabricated as described by Vaithiyanathan et al. [35] and Safa et al. [36]. A brief description is presented in S2 Method.
Fluorescent microscopy: Camera and filter sets
For experimentation, the microfluidic device was mounted on the stage of a DMi8 inverted fluorescence microscope (Leica Microsystems, Wetzlar, Germany) to visualize droplet generation and trapping and to acquire microscopy data. The microscope, equipped with a range of magnifications (5x to 20x), was connected to an ORCA-Flash 4.0 V2 Digital CMOS Camera C11440-22CU (Hamamatsu Photonics K.K., Shizuoka, Japan) through a USB 3.0 interface for image collection. The CMOS camera had a 4.0-megapixel resolution (2048 pixels x 2048 pixels) with a high-speed readout (100 frames/s) and a 1.0 electron (median) readout noise. The camera provided very high sensitivity through its microlens design (37,000:1 dynamic range), with the CMOS image sensor providing 6.5 μm x 6.5 μm pixels, making it well suited to dynamic brightfield and fluorescent imaging across a wide range of spectra. The microscope was equipped with external trigger and time output functions to control timing during imaging. The following excitation/emission filters (Chroma Tech. Corp., Bellows Falls, VT, USA) were used: fluorescein isothiocyanate (FITC) (λex: 440–520 nm and λem: 497–557 nm); rhodamine (λex: 536–556 nm and λem: 545–625 nm); filter set 1 (λex: 370–420 nm and λem: 605–645 nm); and filter set 2 (λex: 325–355 nm and λem: 505–565 nm).
Experimental set up
The experimental steps describing sample preparation and execution of the three test systems (dose response analysis of OPM2 cells to Bortezomib using a droplet trapping array, tracking single cells using luminescent nanoparticles in a droplet trapping array [35], and understanding CPP uptake in intact cancer cells by measuring intracellular fluorescence [36]) are explained in S3 Method.
Quantification of data: Automated image analysis and manual analysis
The FluoroCellTrack algorithm was developed in Python 3.6.0 (Python Software Foundation, Wilmington, DE, USA) using OpenCV (Open Source Computer Vision Library, Intel Corp.). Despite being slower than C/C++, Python was preferred in this work for its ease and simplicity: Python syntax involves fewer steps than C++ or Java. Another advantage of Python exploited here was its use alongside OpenCV, an open-source computer vision and machine learning library offering a wide range of programming functions for real-time computer vision applications in pattern recognition, event detection, and artificial intelligence. OpenCV provides application programming interfaces (APIs) for several programming languages; its Python interface, OpenCV-Python, combines the best features of the OpenCV C++ API and Python. Computationally intensive routines are written in C/C++ and exposed as Python modules through wrappers, which gives a two-in-one advantage: the code runs nearly as fast as a native C/C++ script while retaining the programming ease of Python. OpenCV was used instead of the more conventional Pillow (Python Imaging Library) because of its stronger computer vision focus and better computational efficiency across a wide range of applications. In FluoroCellTrack, highly optimized scientific Python libraries (e.g., NumPy, SciPy, scikit-image, and Matplotlib) were used alongside OpenCV-Python to execute the numerical operations, making it an appropriate tool for fast prototyping of computer vision problems. FluoroCellTrack was tested on various experimental test systems using a 64-bit operating system with a processor running at base and turbo speeds of 2.50 GHz and 2.70 GHz. All collected microscopy data consisted of images with overlaid brightfield and fluorescent channels, which were processed and analyzed to extract the required information using a series of 16-bit processing and operational steps. The algorithm was designed to handle folders of microscopy images in common formats (e.g., .tiff, .png, .gif, .jpeg, and .bmp), and the results were exported in suitable formats (.csv or .txt) for further data analysis. The efficiency of the algorithm was calculated by comparing the results of automated quantification with manual quantification; the theory and validation of the automated quantification are described in the following sections. Manual analysis of the first two test systems consisted of counting live and dead single/multiple cells and luminescent NPs within droplets to record encapsulation, cell viability, and droplet tracking data [35]. For the third test system, the mean intracellular fluorescence intensity of each cell was manually quantified from gray values by drawing a polygonal region of interest (ROI) across the cell in LAS X software 3.3.0. The fluorescence intensity signal was normalized against the background noise of each experiment, giving normalized intensity values that were used for statistical and data analyses as described by Safa et al. [36].
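As an illustration of this batch input/output design, the following minimal Python sketch reads a folder of overlay images and exports per-image results to a .csv file. It is not the released FluoroCellTrack code: the analyze_image() helper is a hypothetical stand-in for the detection steps described in the Theory section, and the file names are assumptions.

import csv
import pathlib
import cv2

# Input formats named in the text; note that cv2.imread does not decode
# .gif, so GIF input would need an extra reader (e.g., Pillow).
EXTENSIONS = {".tif", ".tiff", ".png", ".jpeg", ".jpg", ".bmp"}

def process_folder(folder, out_csv="results.csv"):
    rows = []
    for path in sorted(pathlib.Path(folder).iterdir()):
        if path.suffix.lower() not in EXTENSIONS:
            continue
        # IMREAD_UNCHANGED preserves the 16-bit depth of the overlays.
        img = cv2.imread(str(path), cv2.IMREAD_UNCHANGED)
        if img is None:
            continue
        n_droplets, n_cells = analyze_image(img)  # hypothetical detection step
        rows.append((path.name, n_droplets, n_cells))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "n_droplets", "n_cells"])
        writer.writerows(rows)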
Theory
The FluoroCellTrack algorithm consists of several steps, as depicted in Fig 1. This schematic outlines the pipeline used to quantify the wide range of data generated with the microfluidic droplet trapping array. The algorithm begins by reading fluorescent microscopy images collected and stored in folders; approximately 30–40 unprocessed, overlaid brightfield and fluorescent images were batch-fed into the algorithm. The major steps of the algorithm are: (1) pre-processing to remove noise from the images; (2) feature detection, including droplet and contour detection; (3) post-processing to extract the essential information from the images; and (4) exporting the results for additional analyses. All processing steps involved 16-bit processing techniques. A skeleton of this pipeline is sketched after Fig 1.
Fig 1. Outline of the FluoroCellTrack algorithm. The starting and ending points are depicted by ovals, the input and output commands by parallelograms, the processing steps by rectangles, and the decision steps by diamonds. The green arrows denote the steps in droplet detection, the blue arrows show contour detection, and the purple arrows represent information extracted by combining the droplet and contour outputs. The dotted lines represent extended applications of the algorithm.
https://doi.org/10.1371/journal.pone.0215337.g001
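The four stages can be summarized in a minimal Python skeleton, given here only to orient the reader; the function names are assumptions, and each helper is sketched in the corresponding section below.

def fluorocelltrack(img):
    """Minimal sketch of the four-stage pipeline in Fig 1 (names assumed)."""
    # (1)-(2) pre-processing and feature detection: droplets via the
    # Circular Hough Transform, fluorescent cells/NPs via color masking
    d_centers = detect_droplets(img)    # sketched under Droplet detection
    contours = detect_contours(img)     # sketched under Contour detection
    # (3) post-processing: overlay the two center maps to recover
    # encapsulation information (empty / single / multiple cells)
    empty, single, multiple = match_contours_to_droplets(d_centers, contours)
    # (4) export for downstream population analysis
    return {"droplets": len(d_centers), "empty": empty,
            "single": single, "multiple": multiple}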
Droplet detection
The droplets trapped in the microfluidic device were identified using the Hough Transform, a basic digital image processing technique for detecting shapes [42]. Here, a Circular Hough Transform (CHT) was used to extract circles from the input images [43, 44]. Prior to detecting the circles (droplets), the input microscopy overlay image (img) was smoothed to remove noise, as shown in Eq (1), where S is the smoothing function and the extent of smoothing is set by a median filter M:

img(D,gray) = S(img, M)    (1)

The smoothed grayscale image img(D,gray) was then used as the input to the CHT function to detect circles. A circle is parameterized by three features: the center (a, b) and the radius R. The equations of a circle in xy space are written as:

x = a + R cos θ    (2)
y = b + R sin θ    (3)

Using CHT, triplets (x, y, R) were identified that were highly likely to be the parameters of a circle. R was then treated as a known quantity and (a, b) as unknowns, since droplets of known radii must be detected from the input image. Eqs (2) and (3) were rearranged into ab space as follows:

a = x − R cos θ    (4)
b = y − R sin θ    (5)

As θ sweeps from 0° to 360°, every point (xn, yn) in xy space is equivalent to a circle in ab space. The steps used to detect circles with this parameterization are as follows:

1. Load the smoothed image. The smoothed grayscale image img(D,gray) was taken as the input array. The known droplet radii R were bounded between a minimum of 25 and a maximum of 50 microns (occasional experimental inconsistency generates droplets of varied sizes).
2. For these known R values, detect edges in xy space and create a binary image. This edge detection step involves efficient thresholding.
3. For every detected 'edge pixel' in xy space, draw the equivalent circle in ab space.
4. Cast 'votes' in the accumulator for every point in ab space. The accumulator is a 2D array holding the values of the two parameters a and b.
5. Points with the highest votes in ab space were detected as droplet centers and stored in the output array Dcenters.

These droplet detection steps are performed mathematically as shown in Eq (6), where CHT is the function applied to the smoothed input image img(D,gray) with parameterization param to yield the output array Dcenters:

Dcenters = CHT(img(D,gray), param)    (6)

The CHT function detects droplet centers [amax, bmax] by casting votes in the accumulator A as the Hough gradient ∇g scans across all pixels (xi, yi) to (xn, yn) of img(D,gray). Once the CHT has been applied to the smoothed input image, the algorithm checks the detected circles against the parameters described above; circles that satisfy them have their boundaries and centers traced and printed. The number of droplets Ndroplets is then the length of the output array Dcenters, as shown in Eq (7):

Ndroplets = len(Dcenters)    (7)
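In OpenCV-Python, this detection step maps onto cv2.medianBlur and cv2.HoughCircles. The following is a minimal sketch under stated assumptions: the radius bounds are given in pixels (the 360 μm trap spacing corresponding to 350–370 pixels implies roughly 1 μm per pixel), and the accumulator and Canny thresholds (param1, param2) are illustrative values, not the ones used in FluoroCellTrack.

import cv2
import numpy as np

def detect_droplets(img, r_min=25, r_max=50):
    """Smooth the overlay (Eq 1), then detect droplet centers with the
    Circular Hough Transform (Eqs 6-7)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if gray.dtype != np.uint8:  # HoughCircles requires an 8-bit image
        gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    smoothed = cv2.medianBlur(gray, 5)        # S(img, M) of Eq (1)
    circles = cv2.HoughCircles(
        smoothed, cv2.HOUGH_GRADIENT,
        dp=1,                 # accumulator at full image resolution
        minDist=2 * r_min,    # trapped droplets cannot overlap
        param1=100,           # upper Canny threshold for the internal edge map
        param2=30,            # accumulator vote threshold for a center
        minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return np.empty((0, 3))
    return circles[0]         # Dcenters: rows of (a, b, R); Ndroplets = len(...)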
Contour detection: Fluorescent regions of interest
The fluorescent cells (NPs or any fluorescent regions) encapsulated within droplets in the microfluidic device were treated as contours. This detection involved the following defining and processing steps (a code sketch follows this subsection):

1. Define arrays of color boundaries. Specific color boundaries for cells and luminescent NPs were assigned in this step.
2. Create a mask by detecting colors within the specified boundaries.
3. Load the input image, apply the mask, convert it to grayscale, and smooth it.
4. Perform edge detection, dilation, and erosion, followed by segmentation to separate attached contours.
5. Find contours.

Arrays of specific color boundaries were created to detect the contours of interest. For cells, the contours of interest were green, red, and yellow; yellow denoted the overlap between live (green) and dead (red) signals in the small population of cells that showed both. The fine elements of image processing and the breadth of the color palette were exploited here: magenta and blue were used for europium (Eu3+)-doped and terbium (Tb3+)-doped NPs, respectively, to avoid color overlap with live (also GFP) and dead (also RFP) cells. The array for each color had lower and upper boundary limits expressed as blue, green, and red (BGR) vector values. Once the lower and upper boundaries were defined, colors within the specified boundaries were detected from the input image (img) using the respective BGR [upper, lower] limits. A mask, Cmask, was then created by scanning the color detection function fCLR over each pixel (xi, yi) of the input image, as shown in Eq (8):

Cmask = fCLR(img, BGR[upper, lower])    (8)

This mask was overlaid on the input image, which was then converted to grayscale and smoothed (S) to separate signal from noise, yielding Cgray as shown in Eq (9) (where ∘ denotes pixel-wise application of the mask):

Cgray = S(Cmask ∘ img, M)    (9)

This masking step ensured exact registration of the fluorescent layer with the brightfield layer, so that even the weakest contour signal was detected. The masked image Cgray was subjected to several processing steps before contour detection. The first was edge detection with the Canny edge detector [45]. As described in Eq (10), the Canny edge detector smooths Cgray with the median filter M, computes the gradient ∇f of the smoothed image, and thresholds it through non-maxima suppression (MT); edges falling between the minimum and maximum threshold values (T) are kept in the resultant edge map Cedge:

Cedge = MT(∇f(S(Cgray, M)), T)    (10)

This binary edge map was then subjected to the morphological operations dilation (Eq 11) and erosion (Eq 12), which process images based on shape by convolving structural kernels (D, E) with the edge map (⊕ and ⊖ denote morphological dilation and erosion):

Cdilate = Cedge ⊕ D    (11)
Cerode = Cdilate ⊖ E    (12)

The dilation step joined broken object edges and accentuated features; the subsequent erosion step removed white noise and detached touching objects. The processed image Cerode was then passed through the random walker segmentation algorithm [46] to distinguish attached contours. As denoted in Eq (13), this segmentation compares each pixel of Cerode to a predefined label I and uses the resulting probability to enhance the local maxima of each attached individual contour (fRW denotes the random walker function):

Cseg = fRW(Cerode, I), where I is the predefined label    (13)

The final step highlighted contours through a contour retrieval and approximation method. Each detected, segmented contour was stored as a vector of points in the output array Cout.
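A minimal OpenCV-Python sketch of this contour pipeline is given below, assuming an 8-bit BGR overlay. The BGR boundaries, Canny thresholds, kernel size, and area cutoff are illustrative assumptions, and the random walker segmentation step (available, e.g., as skimage.segmentation.random_walker in scikit-image) is omitted for brevity.

import cv2
import numpy as np

# Illustrative BGR boundaries for "live" (green) contours; the limits
# actually used for red/yellow/magenta/blue are not given in the text.
GREEN_LOWER = np.array([0, 100, 0], dtype=np.uint8)
GREEN_UPPER = np.array([80, 255, 80], dtype=np.uint8)

def detect_contours(img, min_area=20.0):
    """Color mask -> grayscale smoothing -> Canny -> dilate/erode ->
    contour retrieval with convex hull and area checks (Eqs 8-12)."""
    mask = cv2.inRange(img, GREEN_LOWER, GREEN_UPPER)      # Cmask, Eq (8)
    masked = cv2.bitwise_and(img, img, mask=mask)
    gray = cv2.medianBlur(cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY), 5)  # Cgray, Eq (9)
    edges = cv2.Canny(gray, 50, 150)                       # Cedge, Eq (10)
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(edges, kernel, iterations=1)      # join broken edges, Eq (11)
    eroded = cv2.erode(dilated, kernel, iterations=1)      # remove noise, Eq (12)
    # OpenCV >= 4 return signature (contours, hierarchy)
    contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        hull = cv2.convexHull(c)               # smooth convexity defects
        if cv2.contourArea(hull) >= min_area:  # area check excludes debris
            kept.append(hull)
    return kept                                # Cout: vectors of contour points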
The detected contour values in the output array Cout were checked for convexity defects using a convex hull adjustment function, which is similar to the contour approximation method but checks for bulged-out or flat curves within a contour and approximates them by drawing a defined boundary. To effectively detect live/dead/overlapping (dying) cells and NPs, the outlined cell contours were then checked for their area; this step excluded cellular or experimental debris from quantification. When measuring intracellular fluorescence corresponding to CPP uptake in intact cells, the mean and variance of the pixel values within each positively detected cell contour boundary were calculated. Next, information about single versus multiple cell encapsulation efficiencies and droplet tracking was extracted by convolving the detected droplet center map (Dcenters) with the detected contour center map (Ccenters); a sketch of this overlap step follows below. Each droplet center was used as an axis center, and the minimum and maximum R values of 25 and 50 microns were used to scan for overlapping contour positions. Cell contours that positively overlapped with droplets gave information on single versus multiple cell encapsulation, and the analysis further provided insight into the spatial locations of single cells and NPs co-encapsulated in a single droplet, providing cellular tracking information within droplets. All output data were exported in suitable formats (.csv or .txt) for further population analysis. Additionally, the trap centers in the droplet microfluidic device follow a hexagonal pattern, with an exact trap radius of 35 μm and equidistant spacing of 360 μm. This was exploited to eliminate multiple droplets within a trap during the image pixel scan: droplet centers that overlapped with contour centers were tested for equidistance (350–370 pixel lengths), aiding the elimination of multiple droplets.
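The overlap of the droplet and contour center maps can be expressed as the following sketch, which counts the contours falling within each detected droplet; the radius bound is again in pixels, and the trap equidistance test is noted but not implemented here.

import cv2
import numpy as np

def match_contours_to_droplets(d_centers, contours, r_max=50):
    """Count contours whose centers fall inside each droplet to classify
    droplets as empty, single-cell, or multiple-cell."""
    counts = np.zeros(len(d_centers), dtype=int)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # contour center
        for i, (a, b, r) in enumerate(d_centers):
            if (cx - a) ** 2 + (cy - b) ** 2 <= r_max ** 2:
                counts[i] += 1   # contour lies within this droplet
                break
        # (The 350-370 pixel trap-spacing test described above would be
        # applied to d_centers to reject duplicate droplets in one trap.)
    empty = int((counts == 0).sum())
    single = int((counts == 1).sum())
    multiple = int((counts > 1).sum())
    return empty, single, multiple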
Results and discussion
Droplet and contour detection: Validation of theory
The droplet and contour detection steps of FluoroCellTrack resulted in successful feature detection, quantification of single versus multiple cells encapsulated in the aqueous droplets trapped in the array, and identification of individual fluorescent cellular responses to different input conditions. FluoroCellTrack was able to read the input folders, each consisting of ~30–40 microscopy images (.tiff format was used here). The CHT mathematical function enabled facile and efficient detection of circular features in the images. Droplets with R values ranging from 25 to 50 microns were detected, demonstrating that results affected by experimental inconsistency could also be included in quantification. The CHT step proved to be a more powerful feature detection tool than the conventionally used edge detection approach [47, 48]. To compare CHT with conventional thresholding, the two algorithms were run side by side (Fig 2). Manual thresholding with the edge detector function lacked the sensitivity and specificity needed for accurate detection: lower threshold values retained noise, producing false positives, while higher values removed real data (Fig 2, dashed white boxes). Data smoothing with a median kernel followed by CHT led to precise detection of circular features with their centers and boundaries.
Fig 8. Quantification of intracellular fluorescence using FluoroCellTrack for the uptake of 50 μM ARG. The input image (A) of single HeLa cells containing the ARG peptide was processed to detect droplets (B). (C) Cell detection required area thresholding to filter out debris (dashed white box) from cells (white box) trapped in droplets. (D) Overlapping of the droplet center and contour maps produced output data consisting of encapsulation and intracellular fluorescence information.
https://doi.org/10.1371/journal.pone.0215337.g008
In addition to quantifying mean fluorescence values, FluoroCellTrack recorded the minimum, maximum, and variance of the pixel values within each cell to further characterize intracellular peptide distribution. Fig 9 contains example values for a single cell incubated with 50 μM OWRWR, for which automated quantification gave maximum, minimum, and variance values of grayscale intensity of 2546, 742, and 24056. These were similar in trend to the manually quantified values (2387, 801, and 24013), highlighting the utility of this algorithm for broadly characterizing population heterogeneity. Beyond characterizing population heterogeneity through intracellular fluorescence, the algorithm can be combined with Spatial Intensity Distribution Analysis (SpIDA) to probe the cellular distribution of fluorescent stains or markers [68, 69].
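The per-cell intensity statistics reported here can be computed by rasterizing each detected contour into a filled mask over the 16-bit grayscale image, as in the following sketch (a plausible reading of the approach, not the released code):

import cv2
import numpy as np

def intensity_stats(gray16, contour):
    """Mean, min, max, and variance of the 16-bit pixel intensities
    inside one detected cell contour (cf. Fig 9)."""
    mask = np.zeros(gray16.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=-1)  # filled ROI
    pixels = gray16[mask == 255]
    if pixels.size == 0:
        return 0.0, 0.0, 0.0, 0.0
    return pixels.mean(), pixels.min(), pixels.max(), pixels.var()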
Fig 9. Quantification of intracellular parameters to understand heterogeneous peptide uptake. (A) The outlined fluorescent cell from the fluorescent channel of the input image was traced for its center map, and the detected center is shown in red in (B). An 8-bit histogram of grayscale intensity frequencies (C) yielded the maximum and minimum peak intensity values along with the variance in intensity values, which were finally mapped onto a 16-bit scale in the output data (D). Scale bar represents 10 μm.
https://doi.org/10.1371/journal.pone.0215337.g009
Conclusions
The multistep Python algorithm FluoroCellTrack was developed to process and analyze folders of microscopy images obtained from droplet microfluidic experiments. The algorithm has several distinct features, including automatic detection of droplet subpopulations (e.g., empty droplets, droplets with single cells, droplets with multiple cells), quantification of single cell responses to drugs, and quantification of intracellular fluorescence in intact cells. FluoroCellTrack was successfully implemented in three different systems: (i) live/dead subpopulation studies to understand cellular responses to different drug doses, (ii) quantification of cell and nanoparticle co-encapsulation for droplet tracking, and (iii) quantification of CPP uptake in single intact cells based on fluorescence intensity. The algorithm proved superior to commonly used feature detection techniques such as edge detector thresholding and template matching, with well-defined, precise steps to eliminate false positives such as debris (e.g., cell or peptide) across the trapping array. Manual control analyses agreed with the Python algorithm, with an average similarity of ~92–99% for a mean population of 320 cells. Moreover, automated image analysis took <1 min to count all the cells trapped in the device and <30 min to quantify the fluorescence intensity across the entire population of cells, proving it to be a powerful tool for microscopy data analysis. This was far superior to the ~60 min required for manual cell counting and the ~20 h needed for manual analysis of intracellular fluorescence. While the work here demonstrated the utility of FluoroCellTrack on low-throughput droplet data (~787 droplets), the algorithm has the potential to quantify high-throughput droplet data (~10,000 droplets) in a couple of hours, compared with days of manual analysis. Finally, FluoroCellTrack overcomes the limitations of existing non-droplet microfluidic algorithms and has the potential to be integrated with several different types of microfluidic devices, trapping arrays, and non-microfluidic platforms with simple user-defined modifications. Beyond tracking and quantifying intracellular fluorescence, this algorithm has applications in tracking cellular movement through time-lapse imaging and position detection, and could be extended to probe subcellular molecular processes by analyzing the intracellular localization of biochemical stains through pixel quantification.
Supporting information
S1 Method. Cell culture and reagents. https://doi.org/10.1371/journal.pone.0215337.s001 (DOCX)
S2 Method. Design and fabrication of the microfluidic device. https://doi.org/10.1371/journal.pone.0215337.s002 (DOCX)
S3 Method. Experimental set up. https://doi.org/10.1371/journal.pone.0215337.s003 (DOCX)
S1 Table. Automated quantification of single OPM-2 cell response to different doses of Bortezomib. A minimum of 275 cells and n = 787 droplets were analyzed for each case. https://doi.org/10.1371/journal.pone.0215337.s004 (DOCX)
S2 Table. Quantification of droplet tracking data by FluoroCellTrack. Information on droplet subpopulations and cell encapsulation obtained from two individual experiments in which RFP-expressing MDA-MB-231 cells were co-encapsulated with Eu3+-doped NPs and Tb3+-doped NPs. n = 787 droplets were analyzed in each case. https://doi.org/10.1371/journal.pone.0215337.s005 (DOCX)
S3 Table. Comparison of mean intracellular fluorescence representative of CPP uptake in HeLa cells using FluoroCellTrack. A minimum of 142 cells (maximum 367 cells) were analyzed in these experiments. https://doi.org/10.1371/journal.pone.0215337.s006 (DOCX)
Acknowledgments
The authors would like to thank Dr. Nancy Allbritton (UNC) for supplying the GFP-expressing HeLa cells and OPM2 cells and Dr. Elizabeth Martin (LSU) for supplying the RFP-expressing MDA-MB-231 cells. The authors would also like to thank Dr. Michael Benton (Chemical Engineering, LSU) for insightful discussions during the development of the manuscript.
TI - FluoroCellTrack: An algorithm for automated analysis of high-throughput droplet microfluidic data
JF - PLoS ONE
DO - 10.1371/journal.pone.0215337
DA - 2019-05-01
UR - https://www.deepdyve.com/lp/public-library-of-science-plos-journal/fluorocelltrack-an-algorithm-for-automated-analysis-of-high-throughput-gvstLi0pnp
SP - e0215337
VL - 14
IS - 5
DP - DeepDyve
ER -