BIOIMAGING 2021 Abstracts


Full Papers
Paper Nr: 2
Title:

PRAQA: Protein Relative Abundance Quantification Algorithm for 3D Fluorescent Images

Authors:

Corrado Ameli, Sonja Fixemer, David S. Bouvier and Alexander Skupin

Abstract: In confocal fluorescent microscopy, the quality of the acquisition strongly depends on diverse factors including the microscope parameterization, the light exposure time, the type and concentration of the antibodies used, the thickness of the sample and the degradation of the biological tissue itself. All these factors critically influence the final result and render tissue protein quantification challenging due to intra- and inter-sample variability. Therefore, image processing techniques need to address this acquisition variability to minimize the risk of bias coming from changes in signal intensity, noise and parameterization. Here, we introduce the Protein Relative Abundance Quantification Algorithm (PRAQA), a one-parameter, fast and adaptive approach for quantifying protein abundance in 3D fluorescent-immunohistochemistry stained tissues that requires no image preprocessing. Our method assesses the dispersion of pixel intensities within local neighborhoods, which allows us to statistically infer whether each small region of an image should be considered positive signal or background noise. We benchmark our method against alternative approaches from the literature and validate its applicability and efficiency based on synthetic scenarios and a real-world application to post-mortem human brain samples of Alzheimer’s Disease and Lewy Body Dementia patients. PRAQA is implemented in Matlab and freely available at https://doi.org/10.17881/j20h-pa27.
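The neighborhood-dispersion idea can be illustrated with a short sketch. The function name, block size and the way global statistics are pooled are our assumptions for illustration; the actual PRAQA implementation is the Matlab code at the DOI above. Each small region is scored by its local intensity dispersion and flagged as positive signal when it exceeds the global dispersion statistics by more than z standard deviations, z playing the role of the single tunable parameter:

```python
import numpy as np

def classify_blocks(img, block=8, z=2.0):
    """Flag each block x block region as signal (True) or background (False)
    by comparing its local intensity dispersion to the global distribution."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    local_sd = blocks.std(axis=(1, 3))        # dispersion of each small region
    mu, sd = local_sd.mean(), local_sd.std()  # global dispersion statistics
    return local_sd > mu + z * sd             # z is the single tunable parameter
```

Regions whose dispersion is indistinguishable from the global background level are discarded; only high-dispersion regions are kept as candidate signal.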

Paper Nr: 7
Title:

Deep-Learning-based Segmentation of Organs-at-Risk in the Head for MR-assisted Radiation Therapy Planning

Authors:

László Ruskó, Marta E. Capala, Vanda Czipczer, Bernadett Kolozsvári, Borbála Deák-Karancsi, Renáta Czabány, Bence Gyalai, Tao Tan, Zoltán Végváry, Emőke Borzasi, Zsófia Együd, Renáta Kószó, Viktor Paczona, Emese Fodor, Chad Bobb, Cristina Cozzini, Sandeep Kaushik, Barbara Darázs, Gerda M. Verduijn, Rachel Pearson, Ross Maxwell, Hazel Mccallum, Juan A. Hernandez Tamames, Katalin Hideghéty, Steven F. Petit and Florian Wiesinger

Abstract: Segmentation of organs-at-risk (OAR) in MR images has several clinical applications, including radiation therapy (RT) planning. This paper presents a deep-learning-based method to segment 15 structures in the head region. The proposed method first applies 2D U-Net models to each of the three planes (axial, coronal, sagittal) to roughly segment the structure. Then, the results of the 2D models are combined into a fused prediction to localize the 3D bounding box of the structure. Finally, a 3D U-Net is applied to the volume of the bounding box to determine the precise contour of the structure. The model was trained on a public dataset and evaluated on both public and private datasets that contain T2-weighted MR scans of the head-and-neck region. For all cases the contour of each structure was defined by operators trained by expert clinical delineators. The evaluation demonstrated that various structures can be accurately and efficiently localized and segmented using the presented framework. The contours generated by the proposed method were also qualitatively evaluated. The majority (92%) of the segmented OARs were rated as clinically useful for radiation therapy.
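The fusion step, combining per-plane 2D predictions to localize a 3D bounding box, can be sketched as follows. The function name, averaging rule, threshold and margin are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np

def fused_bounding_box(p_ax, p_cor, p_sag, thr=0.5, margin=2):
    """Average the three per-plane probability volumes and return the
    3D bounding box (as slices) of the thresholded fused prediction."""
    fused = (p_ax + p_cor + p_sag) / 3.0
    mask = fused > thr
    if not mask.any():
        return None                       # structure not found
    bbox = []
    for axis in range(3):
        # project the mask onto this axis and find the occupied extent
        proj = mask.any(axis=tuple(i for i in range(3) if i != axis))
        idx = np.flatnonzero(proj)
        lo = max(idx[0] - margin, 0)
        hi = min(idx[-1] + margin + 1, mask.shape[axis])
        bbox.append(slice(lo, hi))
    return tuple(bbox)
```

The returned slices would then crop the volume fed to the 3D U-Net for precise contouring.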

Paper Nr: 9
Title:

Advancing Eosinophilic Esophagitis Diagnosis and Phenotype Assessment with Deep Learning Computer Vision

Authors:

William A. III, Alexis Catalano, Lubaina Ehsan, Hans V. von Eckstaedt, Barrett Barnes, Emily McGowan, Sana Syed and Donald E. Brown

Abstract: Eosinophilic Esophagitis (EoE) is an inflammatory esophageal disease which is increasing in prevalence. The diagnostic gold standard involves manual review of a patient’s biopsy tissue sample by a clinical pathologist for the presence of 15 or more eosinophils within a single high-power field (400x magnification). Diagnosing EoE can be a cumbersome process, with added difficulty in assessing the severity and progression of the disease. We propose an automated approach for quantifying eosinophils using deep image segmentation. A U-Net model and post-processing system are applied to generate eosinophil-based statistics that can diagnose EoE as well as describe disease severity and progression. These statistics are captured in biopsies at the initial EoE diagnosis and are then compared with patient metadata: clinical and treatment phenotypes. The goal is to find linkages that could potentially guide treatment plans for new patients at their initial disease diagnosis. A deep image classification model is further applied to discover features other than eosinophils that can be used to diagnose EoE. This is the first study to utilize a deep learning computer vision approach for EoE diagnosis and to provide an automated process for tracking disease severity and progression.
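The diagnostic rule stated above (15 or more eosinophils in a single high-power field) can be applied to a binary segmentation mask by counting connected components. This is only an illustrative sketch, not the authors' post-processing system, and the function names are ours:

```python
import numpy as np

def count_components(mask):
    """Count 4-connected components in a binary mask (e.g., predicted
    eosinophil pixels) via iterative flood fill."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:                  # erase the whole component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

def eoe_positive(mask, threshold=15):
    """Gold-standard rule: 15 or more eosinophils in one high-power field."""
    return count_components(mask) >= threshold
```

In practice each detected component would also be filtered by size and shape before counting.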

Paper Nr: 12
Title:

CoMixMatch: Semi-supervised Detection of Pancreatic Cancer on Noisy, Gigapixel Histology Images

Authors:

J. V. Pulido, Sana Syed and Donald E. Brown

Abstract: One of the greatest obstacles in the adoption of deep neural networks for new medical applications is that training these models requires a large number of manually labeled training samples. In order to circumvent the laborious annotation process, some researchers have turned to semi-supervised learning techniques, where models learn from a large body of unlabeled data along with a smaller set of labeled data. However, these techniques have not been fully examined in the histology setting, where there is a high degree of noise. This body of work investigates an extension of the semi-supervised method MixMatch, which we call CoMixMatch, that applies semi-supervised co-teaching and a contrastive unlabeled loss. More specifically, we study these models’ impact in a highly noisy, open-set histology setting. The findings here motivate the development of semi-supervised methods to ameliorate annotation costs commonly encountered in medical data applications.

Paper Nr: 18
Title:

Computer-aided Abnormality Detection in Chest Radiographs in a Clinical Setting via Domain-adaptation

Authors:

Abhishek K. Dubey, Michael T. Young, Christopher Stanley, Dalton Lunga and Jacob Hinkle

Abstract: Deep learning (DL) models are being deployed at medical centers to aid radiologists in diagnosing lung conditions from chest radiographs. Such models are often trained on a large volume of publicly available labeled radiographs. These pre-trained DL models’ ability to generalize in clinical settings is poor because of the changes in data distributions between publicly available and privately held radiographs. In chest radiographs, the heterogeneity in distributions arises from the diverse conditions of the X-ray equipment and the configurations used for generating the images. In the machine learning community, the challenges posed by heterogeneity in the data generation source are known as domain shift, which is a mode shift in the generative model. In this work, we introduce a domain-shift detection and removal method to overcome this problem. Our experimental results show the proposed method’s effectiveness in deploying a pre-trained DL model for abnormality detection in chest radiographs in a clinical setting.
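As a toy illustration of detecting such a shift (deliberately not the method proposed in the paper), one can compare per-feature statistics between a source batch and a clinical target batch; the function name and any thresholds are assumptions:

```python
import numpy as np

def domain_shift_score(feats_src, feats_tgt):
    """Crude per-feature domain-shift score: distance between the feature
    means of source and target batches, normalized by the pooled spread.
    Large values suggest the target domain differs from the training domain."""
    mu_s, mu_t = feats_src.mean(axis=0), feats_tgt.mean(axis=0)
    pooled_sd = np.sqrt((feats_src.var(axis=0) + feats_tgt.var(axis=0)) / 2.0) + 1e-8
    return np.abs(mu_s - mu_t) / pooled_sd
```

A deployment pipeline could monitor such scores on incoming clinical images and trigger domain adaptation when they grow large.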

Paper Nr: 20
Title:

Smartphone-based Approach for Automatic Focus Assessment in NIR Fundus Images Targeted at Handheld Devices

Authors:

Tudor-Ionut Nedelcu, Francisco Veiga, Miguel Santos, Marcos Liberal and Filipe Soares

Abstract: Mobile fundus imaging devices can play an important role in the decentralization of eye disease screening, increasing the accessibility of telemedicine solutions in this area. Since image focusing is crucial to obtain an optimal retinal image, this work presents a smartphone-based approach for automatic focus assessment of NIR retinal images, acquired by a prototype of a handheld fundus camera device called EyeFundusScope (EFS) A009. A DCT-based focus metric is proposed and compared against a group of gradient-based, statistics-based, and Laplacian-based functions in the same experimental setup. The paper also presents the EFS image acquisition logic and the protocol for creating the necessary NIR dataset with the optic disc region around the centre of the image. The results were obtained from 853 images acquired from 8 volunteers. The developed method, combined with other features and an SVM classifier in a machine learning approach, attained an AUC of 0.80 and has shown itself to be a viable solution for integration into the EFS mobile application.
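A DCT-based focus metric of the kind described can be sketched as the fraction of spectral energy outside the low-frequency corner of the DCT; the exact metric and cutoff used in the paper may differ, so treat the function name and cutoff here as assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dct_focus(img, cutoff=0.25):
    """Focus score in [0, 1]: sharper images concentrate more DCT energy
    at high spatial frequencies, so blur lowers the score."""
    img = img.astype(float)
    dh, dw = dct_matrix(img.shape[0]), dct_matrix(img.shape[1])
    c = dh @ img @ dw.T                        # 2D DCT-II coefficients
    e = c ** 2
    h, w = int(img.shape[0] * cutoff), int(img.shape[1] * cutoff)
    return 1.0 - e[:h, :w].sum() / e.sum()     # high-frequency energy fraction
```

On a device, such a score would be computed per frame so the capture loop keeps only frames above a focus threshold.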

Paper Nr: 21
Title:

Roughness Index and Roughness Distance for Benchmarking Medical Segmentation

Authors:

Vidhiwar S. Rathour, Kashu Yamakazi and T. N. Le

Abstract: Medical image segmentation is one of the most challenging tasks in medical image analysis and has been widely developed for many clinical applications. Most of the existing metrics were first designed for natural images and then extended to medical images. While the object surface plays an important role in medical segmentation and quantitative analysis, e.g., analyzing a brain tumor surface or measuring gray matter volume, most of the existing metrics are limited when it comes to analyzing the object surface, especially in characterizing the smoothness or roughness of a given volumetric object or in analyzing topological errors. In this paper, we first analyze the pros and cons of all existing medical image segmentation metrics, especially on volumetric data. We then propose an appropriate roughness index and roughness distance for medical image segmentation analysis and evaluation. Our proposed method addresses two kinds of segmentation errors, i.e., (i) topological errors on the boundary/surface and (ii) irregularities on the boundary/surface. The contribution of this work is four-fold: (i) detect irregular spikes/holes on a surface, (ii) propose a roughness index to measure the surface roughness of a given object, (iii) propose a roughness distance to measure the distance between two boundaries/surfaces by utilizing the proposed roughness index and (iv) suggest an algorithm that removes the irregular spikes/holes to smooth the surface. Our proposed roughness index and roughness distance are built upon the solid-surface roughness parameter that has been successfully developed in civil engineering.

Paper Nr: 26
Title:

Contrast Ratio during Visualization of Subsurface Optical Inhomogeneities in Turbid Tissues: Perturbation Analysis

Authors:

Gennadi Saiko and Alexandre Douplik

Abstract: Visualization and monitoring of the capillary loops and microvasculature patterns in dermis and mucosa are of interest for various clinical applications, including early cancer and shock detection. We developed an approach for the assessment of the contrast ratio during the visualization of subsurface optical heterogeneities. Using the diffuse approximation and perturbation analysis, we considered light absorption heterogeneities as negative light sources. We estimated the contrast ratio as a function of the surface layer's optical properties for diffuse and collimated wide-beam illumination. Based on these findings, we formulated several practical suggestions: a) proper selection of the camera (with maximum dynamic range) is of paramount importance, b) narrow-band illumination is more efficient than white light illumination, and c) use of collimated light provides up to 60% improvement in contrast vs. diffuse illumination. The obtained results can be used for the optimization of imaging techniques.

Paper Nr: 31
Title:

High-resolution Controllable Prostatic Histology Synthesis using StyleGAN

Authors:

Gagandeep B. Daroach, Josiah A. Yoder, Kenneth A. Iczkowski and Peter S. LaViolette

Abstract: For deep learning algorithms to be used in clinical practice, detailed justification for diagnosis is necessary. Convolutional Neural Networks (CNNs) have been demonstrated to classify prostatic histology using the same diagnostic signals as pathologists. Using the StyleGAN series of networks, we demonstrate that recent advances in high-resolution image synthesis with Generative Adversarial Networks (GANs) can be applied to prostatic histology. The trained network can produce novel histology samples indistinguishable from real histology at 1024x1024 resolution and can learn disentangled representations of histologic semantics that separate at a variety of scales. Through blending of the latent representations, users can control the projection of histologic semantics onto a reconstructed image. When applied to the medical domain without modification, StyleGAN2 is able to achieve a Fréchet Inception Distance (FID) of 3.69 and a perceptual path length (PPL) of 33.25.

Short Papers
Paper Nr: 5
Title:

Genetic Algorithm based L4 Identification and Psoas Segmentation

Authors:

Namitha V. Benjamin, Robert D. Boutin, Abhijit J. Chaudhari and Kwan-Liu Ma

Abstract: Segmentation of the psoas muscle is an important first step in identifying sarcopenia. Physicians use computed tomography (CT) images to track changes in muscle mass, which, in turn, act as indicators of how well a patient is responding to treatment. To measure the muscle, a radiologist segments a CT manually. This is an often time-consuming task and can be prone to error. In this paper we propose a novel method to segment psoas muscles from abdominal CT images. The approach uses imaging techniques augmented with medical anatomic knowledge. The outcome of the algorithm is twofold: first, the 4th lumbar vertebra (L4) is identified from a series of CT images; second, the psoas muscle in the identified slice is segmented using a genetic-algorithm-based edge-linking method. The algorithm was applied to a series of datasets of 61 patients over the age of 65 with hip fractures, and we obtained an average match (true positive percentage) of 91%.

Paper Nr: 6
Title:

Using Segmentation Networks on Diabetic Retinopathy Lesions: Metrics, Results and Challenges

Authors:

Pedro Furtado

Abstract: Deep segmentation networks are increasingly used in medical imaging, including for the detection of Diabetic Retinopathy lesions from eye fundus images (EFI). In spite of very high scores in most EFI analysis tasks, segmentation, measured as precise delineation of lesion instances, still involves some challenges and deserves an analysis of metrics and a comparison with prior deep learning approaches. We build state-of-the-art deep learning segmentation networks and confront them with prior results, showing up to 15 percentage points of improvement in sensitivity, depending on the lesion. But we also show the importance of metrics and that many frequently used metrics can be deceiving in this context. We use visual and numeric evidence to show why there is still ample space for further improvements of semantic segmentation quality in the context of EFI lesions.

Paper Nr: 10
Title:

Efficient Image Registration with Subpixel Accuracy using a Hybrid Fourier-based Approach

Authors:

Jelina Unger and Klaus Brinker

Abstract: In many fields like medical imaging and remote sensing, it is necessary to register images with subpixel accuracy. A general problem is the tradeoff between accuracy and efficiency. This paper presents a highly accurate and efficient algorithm for subpixel image registration using Fourier-based cross correlation to determine the translation between two images. To this end, a coarse-to-fine strategy is used. It combines a fast method using image projections with an accurate approach using matrix multiplication for refined computation. The results show that the new approach has almost the same level of accuracy as the accurate method, but with reduced computational complexity. Compared to the fast method, the computational complexity of the new approach is slightly higher, but it achieves a higher level of accuracy. Overall, the hybrid approach achieves an efficient registration with a relatively short runtime.
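The Fourier-based cross correlation at the core of such methods is standard phase correlation. A minimal integer-pixel sketch is shown below; the paper's projection-based coarse step and matrix-multiplication subpixel refinement are not reproduced here:

```python
import numpy as np

def phase_correlation(a, b):
    """Integer-pixel translation of b relative to a via phase correlation:
    peak of the inverse FFT of the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # displacements beyond half the image wrap around to negative shifts
    size = np.array(a.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak
```

Subpixel accuracy is then typically obtained by refining the correlation surface around this integer peak, e.g., by locally upsampling it, which is where the accuracy/efficiency tradeoff the abstract discusses arises.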

Paper Nr: 13
Title:

Coupled Active Contours for Clue Cell Segmentation from Fluorescence Microscopy Images

Authors:

Yongjian Yu and Jue Wang

Abstract: Bacterial vaginosis (BV) increases the risk for preterm birth. Immunofluorescent assay provides accurate counting of the clue cells. However, the massive amount of data makes manual interpretation challenging. Towards automatic BV diagnostics, we present a coupled active contour method for segmenting the clue cells using dual-resolution, dual-channel microscopy. The method is formulated in the level set and parametric frameworks. A fast search locates potential clue cells in the low-resolution scan. Each cell is then imaged at high resolution. The clue cells are segmented and quantified. The efficacy of the method is demonstrated using clinical data. Our method effectively delineates the boundaries of the cell and its nucleus simultaneously. It is efficient and practical. The clue cell detection results indicate a high accuracy for BV diagnosis.

Paper Nr: 16
Title:

EEG Classification for Visual Brain Decoding via Metric Learning

Authors:

Rahul Mishra and Arnav Bhavsar

Abstract: In this work, we propose CNN based approaches for classifying EEG signals acquired during a visual perception task involving different classes of images. Our approaches involve deep learning architectures using a 1D CNN (on the time axis) followed by a 1D CNN (on the channel axis), and a Siamese network (for metric learning), which are novel in this domain. The proposed approaches outperform the state-of-the-art methods on the same dataset. Finally, we also suggest a method to select a smaller number of EEG channels.

Paper Nr: 17
Title:

Statistical Inference of the Inter-sample Dice Distribution for Discriminative CNN Brain Lesion Segmentation Models

Authors:

Kevin Raina

Abstract: Discriminative convolutional neural networks (CNNs), for which a voxel-wise conditional Multinoulli distribution is assumed, have performed well in many brain lesion segmentation tasks. For a trained discriminative CNN to be used in clinical practice, the patient’s radiological features are inputted into the model, in which case a conditional distribution of segmentations is produced. Capturing the uncertainty of the predictions can be useful in deciding whether to abandon a model, or choose amongst competing models. In practice, however, we never know the ground truth segmentation, and therefore can never know the true model variance. In this work, segmentation sampling on discriminative CNNs is used to assess a trained model’s robustness by analyzing the inter-sample Dice distribution on a new patient solely based on their magnetic resonance (MR) images. Furthermore, by demonstrating that the inter-sample Dice observations are independent and identically distributed with a finite mean and variance under certain conditions, a rigorous confidence-based decision rule is proposed to decide whether to reject or accept a CNN model for a particular patient. Applied to the ISLES 2015 (SISS) dataset, the model identified 7 predictions as non-robust, and the average Dice coefficient calculated on the remaining brains improved by 12 percent.
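Segmentation sampling from a voxel-wise model (binary Bernoulli here, a special case of the Multinoulli assumption) and the resulting inter-sample Dice observations can be sketched as follows; sample counts and function names are illustrative assumptions:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentations."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def inter_sample_dice(probs, n_samples=20, seed=0):
    """Draw segmentations from the voxel-wise Bernoulli model and return
    all pairwise Dice coefficients between the samples."""
    rng = np.random.default_rng(seed)
    samples = [rng.random(probs.shape) < probs for _ in range(n_samples)]
    return [dice(samples[i], samples[j])
            for i in range(n_samples) for j in range(i + 1, n_samples)]
```

A confident model (probabilities near 0 or 1) yields tightly clustered, high inter-sample Dice values, while a diffuse model yields a lower, wider distribution, which is what the proposed decision rule tests.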

Paper Nr: 19
Title:

Using Anatomical Priors for Deep 3D One-shot Segmentation

Authors:

Duc D. Pham, Gurbandurdy Dovletov and Josef Pauli

Abstract: With the success of deep convolutional neural networks for semantic segmentation in the medical imaging domain, there is a high demand for labeled training data, which is often unavailable or expensive to acquire. Training with little data usually leads to overfitting, which prevents the model from generalizing to unseen problems. However, in the medical imaging setting, image perspectives and anatomical topology do not vary as much as in natural images, as the patient is often instructed to hold a specific posture to follow a standardized protocol. In this work we therefore investigate the one-shot segmentation capabilities of a standard 3D U-Net architecture in such a setting and propose incorporating anatomical priors to increase the segmentation performance. We evaluate our proposed method on the example of liver segmentation in abdominal CT volumes.

Paper Nr: 22
Title:

The Impact of the Wound Shape on Wound Healing Dynamics: Is it Time to Revisit Wound Healing Measures?

Authors:

Gennadi Saiko

Abstract: Introduction: Wound healing is a multifaceted process, which can be impacted by many endogenous (e.g., compromised immune system, limited blood supply) or exogenous (e.g., dressing, presence of infection) factors. An essential step in wound management is to track wound healing progress. It typically includes tracking the wound size (length, width, and area). The wound area is the most often used indicator in wound management. In particular, wound closure is the single parameter used by the FDA to measure wound therapeutics' efficiency. Here, we present some arguments on why the wound area alone is insufficient to predict wound healing progress. Methods: We have developed an analytical approach to characterize an epithelization process based on the wound's area and perimeter. Results: We have obtained explicit results for the wound healing trajectory for several wound shapes: round (2D), elongated cut (1D), and rectangular. The results can be extended to complex shapes. Conclusions: From geometrical considerations, the wound healing time is determined by the shortest dimension (the width) of the wound. However, within that time, the wound healing trajectory can be different. Our calculations show that the shape of the wound may have significant implications for the wound healing trajectory. For example, in the middle of the wound healing process (t/T=0.5), the 1D wound model predicts 50% closure, while the 2D model predicts 75% closure (25% remaining). These considerations can be helpful when analyzing clinical data or designing clinical or pre-clinical experiments.
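The geometric argument can be made concrete: if the epithelial edge advances at a constant rate, the remaining area of a long 1D cut falls linearly with t/T, while for a round 2D wound the radius falls linearly and the area therefore falls as the square. A minimal sketch (the function name is ours) reproduces the 50% vs. 25% figures quoted above:

```python
def remaining_area(t_frac, shape='round'):
    """Fraction of the initial wound area remaining at normalized time t/T,
    assuming the epithelial edge advances at a constant rate."""
    if shape == 'cut':        # elongated 1D cut: width shrinks linearly
        return 1.0 - t_frac
    if shape == 'round':      # round 2D wound: radius shrinks linearly,
        return (1.0 - t_frac) ** 2  # so area shrinks quadratically
    raise ValueError(shape)
```

At t/T = 0.5 the cut model leaves 50% of the area while the round model leaves 25%, which is why area alone, without shape information, is an ambiguous progress indicator.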

Paper Nr: 23
Title:

Bacterial Growth and Siderophore Production in Bacteria: An Analytical Model

Authors:

Gennadi Saiko

Abstract: We have analyzed the impact of quorum sensing and resource dependency on the production of compounds that are critically important for bacterial fitness (siderophores). We have built two siderophore production models (quorum sensing and resource dependency) and linked them with Monod’s growth model. As a result, siderophore accumulation is explicitly expressed through the bacterial concentration N, which allows direct experimental verification. The nutrient-dependent model predicts three siderophore accumulation phases, which accompany bacterial growth: slow accumulation for [N0, Nth], fast accumulation for [Nth, K/2], and slow or no accumulation for [K/2, K). Here N0 is the initial bacterial concentration and K is the carrying capacity. The quorum-sensing model predicts two regimes of siderophore accumulation: relatively slow accumulation for [N0, Ncr] and much faster non-linear accumulation for [Ncr, K). Ncr and Nth are model parameters. Ncr has an “absolute” value: it depends on the bacterial strain only. Nth has a “relative” value: in addition to the bacterial strain, it also depends on the inoculum concentration and the initial nutrient concentration. Since the models predict entirely different behavior, experimental data may help differentiate between these mechanisms.
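As an illustration of how the two mechanisms diverge, a generic logistic-growth stand-in (not the authors' exact Monod-coupled equations; names and rate constants are assumptions) shows the quorum-sensing model's switch in per-capita production at the critical density Ncr:

```python
import numpy as np

def logistic_growth(n0, k, r, t):
    """Logistic approximation of bacterial growth from N0 toward the
    carrying capacity K at rate r."""
    return k / (1.0 + (k / n0 - 1.0) * np.exp(-r * t))

def qs_siderophore_rate(n, n_cr, slow=1.0, fast=10.0):
    """Quorum-sensing model: per-capita siderophore production switches
    from a slow to a fast regime once the population crosses Ncr."""
    return np.where(n < n_cr, slow * n, fast * n)
```

Integrating such a rate along the growth curve yields the two accumulation regimes described in the abstract; the nutrient-dependent model would instead tie the rate to the remaining substrate.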

Paper Nr: 24
Title:

On Feasibility of Fluorescence-based Bacteria Presence Quantification: P. aeruginosa

Authors:

Alexander Caschera and Gennadi Saiko

Abstract: Introduction: Wound healing typically occurs in the presence of bacteria at levels ranging from contamination to colonization to infection. The role of bacteria in wound healing depends on multiple factors, including bacterial concentration, species present, and host response. Thus, the determination of bacterial load is of great importance. However, existing clinical bacterial load assessment methods (biopsy or swabbing combined with culture methods) are slow, labor- and time-consuming. Pseudomonas aeruginosa is a known pathogen implicated in numerous healthcare-associated infections and can express fluorescent metabolites during proliferation. In particular, the siderophore pyoverdine produces a fluorescent emission between 450-520 nm when excited at 400 nm and can be measured quantitatively via fluorescence spectroscopy. The current project aims to investigate the possibility of quantifying bacterial presence using fluorescence measurements. Methods: Cultures of P. aeruginosa (PA01) were grown at various temperatures (ambient temperature, 30, and 37–43 °C), inoculum starting conditions (5×10⁷–5×10⁸ CFU mL⁻¹), and initial nutrient concentrations (0.6, 1.5, 3.0 g L⁻¹) in Tryptic Soy Broth media. Media optical density (OD, as a proxy of bacterial concentration) and fluorescence (ex. 400 nm, em. 420-520 nm) were measured hourly for 10 hours. Results: Cultures remained metabolically active over the whole temperature range, producing pyoverdine fluorescence (emission maximum at 455 nm). We have correlated optical density with the fluorescent signal to establish a dependence between fluorescence and growth stage. Noticeable pyoverdine accumulation started approximately 3 hours after the beginning of the log growth phase and saturated at the beginning of the stationary phase. Three distinct regimes (a sigmoid curve) were observed: linear dependence of fluorescence on OD for low concentrations, more rapid nonlinear dependence, and saturation when approaching the stationary phase. Conclusions: The sigmoid dependence of bacterial fluorescence on their concentration persisted through variations in temperature and inoculum starting condition; thus, it may have the potential for determining culture growth phase progression. These results, combined with classical knowledge on disease progression, could also lead to an advanced infection diagnosis earlier than current pathogenesis observation techniques allow.

Paper Nr: 27
Title:

Deep Learning Type Convolution Neural Network Architecture for Multiclass Classification of Alzheimer’s Disease

Authors:

Gopi Battineni, Nalini Chintalapudi, Francesco Amenta and Enea Traini

Abstract: Alzheimer’s disease (AD) is one of the common medical issues that the world is facing today. This disease has a high prevalence of memory loss and cognitive decline, primarily in the elderly. At present, there is no specific treatment for this disease, but it is thought that identifying it at an early stage can help to manage it in a better way. Several studies have used machine learning (ML) approaches for AD diagnosis and classification. In this study, we considered the Open Access Series of Imaging Studies-3 (OASIS-3) dataset with 2,168 Magnetic Resonance Imaging (MRI) images of patients ranging from very mild to more advanced stages of cognitive decline. We applied deep learning-based convolutional neural networks (CNN), which are well-known approaches for diagnosis-based studies. The model was trained on 70% of the images, and 10-fold cross-validation was applied to validate the model. The developed architecture successfully classified the different stages of dementia images and achieved 83.3% accuracy, which is higher than that of other traditional classification techniques like support vector machines and logistic regression.

Paper Nr: 29
Title:

Holographic Interferometry Real Time Imaging of Refraction Index 2D Distribution and Surface Deformations in Biomedicine

Authors:

N. A. Davidenko, X. Zheng, I. I. Davidenko, V. A. Pavlov, N. G. Chuprina, N. Kuranda, S. L. Studzinsky, A. Pandya, H. Mahdi, A. Ladak, C. Gergely, F. Cuisinier and A. Douplik

Abstract: Holographic interferometry for 2D imaging of refraction index distributions and surface deformations was recorded as real-time video: (1) monitoring of local refraction index perturbations, with an accuracy of 10⁻⁴, in transmission mode during heat and photochemical reactions with human hemoglobin, using methylene blue, protoporphyrin IX and rhodamine as the photosensitizers, and (2) monitoring, in reflectance mode, of local mechanical pressure on a human tooth with an accuracy of 10⁻⁷ m.

Paper Nr: 4
Title:

Developing a Robust Estimator for Remote Optical Erythema Detection

Authors:

Maksym Ptakh and Gennadi Saiko

Abstract: Introduction: Erythema is redness of the skin or mucous membranes, which is a symptom of skin injury, infection, or inflammation. In some cases, it can be indicative of certain medical conditions (e.g., nonblanchable erythema in Stage I pressure injuries), and its detection can facilitate intervention at an earlier timepoint. The most common and effective means of erythema detection is a visual inspection of the skin. However, in many cases (especially for people with darkly pigmented skin), erythema can be masked by melanin. Moreover, it would be useful to have an automated delineation and measurement of erythema using consumer-grade devices, e.g., smartphones. It would facilitate automated symptom detection and measuring healing progress in various settings, including the patient's home. Aims: This study aims to evaluate and compare several algorithms that can be used for automated erythema detection using a smartphone's camera in clinical settings. Methods: We have compared three potential estimators, which can be derived from an RGB image: a) log(R/G), b) R-G, and c) the a* channel in CIELAB color space. Here, R and G are the red and green channels of an RGB image, respectively. Imaged skin was divided into two classes: erythema and non-erythema. The "erythema" class was seeded with pixels with E > mean(E) + z*std(E), where E is the value of the estimator for a particular pixel and z is a model parameter (z-score). The erythema cluster was then grown by gradually adding nearby regions with an estimator E closer to the mean estimator value of the erythema cluster than to that of the normal skin area (K-means, K=2). The segmentation algorithm was tested on a subset of labeled images from the Swift Medical proprietary wound imaging database. To evaluate algorithm performance, the results of segmentation were compared with ground truth, manually labeled images. To quantify the results, sensitivity, specificity, and ROC curves were used.
Results: We found that all investigated estimators could provide reasonable sensitivity (>0.8) and specificity (>0.78). However, the a*-based estimator offers slightly better performance (0.86/0.84). Discussion: The preliminary data show that smartphone cameras can delineate erythema with reasonable sensitivity and specificity. Further studies are required to correlate the accuracy with the skin type (melanin concentration in the skin).
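Two of the three estimators and the z-score seeding step can be sketched directly; the a* estimator additionally requires an RGB-to-CIELAB conversion, which is omitted here, and the +1 offset and function name are our assumptions:

```python
import numpy as np

def erythema_seed(rgb, estimator='log_rg', z=2.0):
    """Compute a redness estimator E per pixel and seed the erythema class
    with pixels where E > mean(E) + z * std(E)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    if estimator == 'log_rg':
        e = np.log((r + 1.0) / (g + 1.0))   # +1 avoids division by zero
    elif estimator == 'r_minus_g':
        e = r - g
    else:
        raise ValueError(estimator)
    return e > e.mean() + z * e.std()
```

The seed mask would then be grown by the K-means-style region-growing step described above before comparison against the manually labeled ground truth.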

Paper Nr: 14
Title:

Virtual Screening of Pharmaceutical Compounds with hERG Inhibitory Activity (Cardiotoxicity) using Ensemble Learning

Authors:

Aditya Sarkar and Arnav Bhavsar

Abstract: In silico prediction of cardiotoxicity with high sensitivity and specificity for potential drug molecules can be of immense value. Hence, building machine learning classification models, based on features extracted from the molecular structure of drugs, that are capable of efficiently predicting cardiotoxicity is critical. In this paper, we consider the application of various machine learning approaches, and then propose an ensemble classifier for the prediction of molecular activity on a Drug Discovery Hackathon (DDH) (1st reference) dataset. We have used only 2D descriptors of SMILES notations for our prediction. Our ensemble classifier combines 5 classifiers (2 Random Forest classifiers, 2 Support Vector Machines and a Dense Neural Network) and uses max-voting and weighted-average techniques for the final decision.
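The two decision rules named above can be sketched generically for binary labels; the actual hackathon pipeline, classifier set and weights are not reproduced, so the function names and thresholds are assumptions:

```python
import numpy as np

def max_vote(predictions):
    """Majority vote over binary class predictions from several classifiers.
    `predictions` has shape (n_classifiers, n_samples)."""
    votes = np.asarray(predictions)
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)

def weighted_average(probas, weights):
    """Weighted average of predicted probabilities, thresholded at 0.5.
    `probas` has shape (n_classifiers, n_samples)."""
    p = np.average(np.asarray(probas), axis=0, weights=weights)
    return (p >= 0.5).astype(int)
```

Max voting only needs hard labels from each base classifier, while the weighted average exploits calibrated probabilities and per-classifier reliability.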

Paper Nr: 25
Title:

A Linear, Pixel-specific Color Normalization Algorithm for Hematology Imaging

Authors:

Rachel Lou and Thanh Le

Abstract: Automated cell recognition in hematology microscope images provides crucial information for the qualitative description of cell morphology and other quantitative applications in analyzing blood pathology. Computer-aided diagnostics and cell segmentation are invaluable tools that help reduce the cost of human labor and time. However, discrepancies in staining protocols and imaging hardware pose challenges to automated cell recognition; noise, blur, lighting contrast, and irregular coloration confound cell differentiation. In this study, we describe a linear pre-processing algorithm that addresses the color variation in hematology images. We qualitatively examine the image outputs and quantitatively assess the efficacy of the proposed algorithm by studying the performance of a cell detection model.
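A generic channel-wise linear normalization, matching each channel's mean and standard deviation to a reference image, illustrates the idea of a linear color pre-processing step. The paper's pixel-specific algorithm differs, so this Reinhard-style sketch and its function name are assumptions:

```python
import numpy as np

def linear_color_normalize(img, ref):
    """Map each channel of img linearly so that its mean and standard
    deviation match those of a reference image. A generic stand-in for
    the paper's pixel-specific normalization."""
    img = img.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        mu_i, sd_i = img[..., c].mean(), img[..., c].std() + 1e-8
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (img[..., c] - mu_i) * (sd_r / sd_i) + mu_r
    return np.clip(out, 0, 255)
```

After such a step, images from different stain batches and scanners share the same global color statistics, which tends to stabilize a downstream cell detection model.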