2019 AIChE Annual Meeting
(370a) Computer Vision Improvised Cell Detection for Lab-on-a-Chip Diagnostics Point-of-Care Solutions
Classifying objects in an image with Convolutional Neural Networks (CNNs) is computationally expensive: the objects of interest may appear at different spatial locations and with different aspect ratios, so an exhaustive approach must evaluate a huge number of regions of interest. The Region-based Convolutional Network (R-CNN) and the Fast Region-based Convolutional Network (Fast R-CNN) were published by Ross Girshick in 2014 and 2015, respectively. R-CNN uses the selective search algorithm to extract region proposals and then applies a Support Vector Machine (SVM) classifier to decide whether an object is present in each candidate region. Fast R-CNN was proposed to reduce the time spent running a separate model on every proposal: the CNN feature extractor runs once per image, and the resulting convolutional feature map is shared by all proposal regions.

Shaoqing Ren (Microsoft Research) published an extension of Fast R-CNN in 2015, the Faster Region-based Convolutional Network (Faster R-CNN). Faster R-CNN introduces the Region Proposal Network (RPN), which generates region proposals and predicts object boundaries, and is composed of the RPN followed by the Fast R-CNN detector. Thus both Fast and Faster R-CNN first propose regions and then detect an object in each region. The Region-based Fully Convolutional Network (R-FCN), published by Jifeng Dai (Microsoft Research) in 2016, consists of convolutional layers that allow end-to-end back-propagation for weight updates during training. It simultaneously accounts for object category (location-invariant) and object position (location-variant), merging the two basic steps into a single model.

The algorithms above use regions to locate objects in the image. You Only Look Once (YOLO), published in 2016 by Santosh Divvala and Ross Girshick (Allen Institute for AI and Facebook AI Research, respectively), instead processes the entire image once, forecasting bounding boxes and class probabilities with a single network in a single evaluation. The simplicity of the YOLO architecture permits real-time prediction. Minimal sketches of the region-proposal stage, a reference Faster R-CNN, and YOLO-style decoding follow.
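As an illustration of the proposal stage that R-CNN relies on, here is a minimal sketch of selective search using the opencv-contrib implementation (package opencv-contrib-python); the file name cells.png is a hypothetical micrograph, not part of the original work.

```python
import cv2

# Selective search as shipped in opencv-contrib; R-CNN uses
# proposals like these as its candidate regions of interest.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
image = cv2.imread("cells.png")          # hypothetical input micrograph
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()         # fast mode trades recall for speed
rects = ss.process()                     # proposals as (x, y, w, h)
print(f"{len(rects)} region proposals")  # typically thousands per image
```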
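A two-stage detector such as Faster R-CNN need not be built from scratch. The following is a minimal inference sketch using torchvision's reference implementation (assuming a torchvision release that includes the detection models); the random tensor stands in for a real image.

```python
import torch
import torchvision

# Reference Faster R-CNN: ResNet-50 FPN backbone, the RPN for region
# proposals, and the Fast R-CNN head for classification/regression.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for an RGB image in [0, 1]
with torch.no_grad():
    out = model([image])[0]      # dict with boxes, labels, scores

print(out["boxes"].shape, out["labels"].shape, out["scores"].shape)
```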
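To make the single-pass idea concrete, here is a sketch of how a YOLO-style S x S grid output could be decoded into boxes. The shapes and scoring follow the original YOLO convention, but the function itself is illustrative, not the authors' code.

```python
import numpy as np

def decode_yolo_grid(pred, S=7, B=2, C=20, conf_thresh=0.25):
    """Decode a YOLO-style S x S x (B*5 + C) grid into boxes.

    Per cell: B boxes (x, y, w, h, confidence), then C class
    probabilities. (x, y) are offsets within the cell; (w, h)
    are fractions of the whole image.
    """
    boxes = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                score = conf * class_probs.max()  # class-specific confidence
                if score < conf_thresh:
                    continue
                # Convert cell-relative centre to image-relative coords.
                cx, cy = (col + x) / S, (row + y) / S
                boxes.append((cx, cy, w, h, score, int(class_probs.argmax())))
    return boxes

# Example on a random "network output" of the right shape.
print(len(decode_yolo_grid(np.random.rand(7, 7, 30))))
```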
Dumitru Erhan (Google Brain) published the Single-Shot Detector (SSD) in 2016. SSD forecasts the bounding boxes and class probabilities simultaneously, with a single end-to-end CNN architecture, eliminating the need for a region proposal network. A few modifications, including multi-scale feature maps and default boxes, are applied to recover the resulting drop in accuracy (accuracy being measured as mean average precision, mAP). These improvements allow SSD to match the accuracy of Faster R-CNN while using lower-resolution images, which further increases its speed.

The Mask Regional Convolutional Network (Mask R-CNN), published by Kaiming He (Facebook AI Research) in 2018, extends Faster R-CNN with a branch, parallel to the bounding-box detector, that predicts the mask of the object. An object's mask is its pixel-level segmentation in the image; the segmentation thereby groups the pixels belonging to the same object.

RetinaNet (focal loss for dense object detection), published by Tsung-Yi Lin (Facebook Research) in 2017, is a one-stage object detector (like SSD and YOLO) with the accuracy of a two-stage detector (like Faster R-CNN). The authors proposed a new classification loss, the focal loss, which increased accuracy significantly; RetinaNet is essentially a feature pyramid network with the cross-entropy loss replaced by the focal loss.

The results obtained by implementing the state-of-the-art object detection algorithms described above are compared with the traditional baseline of cell-counting technologies. Deeper insights into the detection architectures and the results obtained will be discussed. Sketches of SSD default boxes, Mask R-CNN inference, and the focal loss follow.
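The multi-scale default boxes mentioned above can be generated in a few lines. This is a simplified sketch: the feature-map sizes and scales are SSD300-like values chosen for illustration, and the extra per-scale box of the original paper is omitted.

```python
import itertools
import math

def ssd_default_boxes(fmap_sizes, scales, aspect_ratios):
    """Generate SSD-style default boxes as (cx, cy, w, h) in [0, 1]."""
    boxes = []
    for k, fsize in enumerate(fmap_sizes):
        for i, j in itertools.product(range(fsize), repeat=2):
            # Box centre sits in the middle of feature-map cell (i, j).
            cx, cy = (j + 0.5) / fsize, (i + 0.5) / fsize
            for ar in aspect_ratios[k]:
                boxes.append((cx, cy,
                              scales[k] * math.sqrt(ar),   # width
                              scales[k] / math.sqrt(ar)))  # height
    return boxes

# Larger feature maps carry smaller boxes, giving multi-scale coverage.
boxes = ssd_default_boxes(
    fmap_sizes=[38, 19, 10, 5, 3, 1],
    scales=[0.1, 0.2, 0.375, 0.55, 0.725, 0.9],
    aspect_ratios=[[1, 2, 0.5]] * 6,
)
print(len(boxes))  # several thousand default boxes
```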
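The parallel mask branch is available in torchvision's Mask R-CNN reference model. A minimal inference sketch (again with a random stand-in image) shows the extra per-instance masks output alongside the boxes and scores.

```python
import torch
import torchvision

# Mask R-CNN = Faster R-CNN + a parallel branch predicting a
# per-instance mask for each detected box.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a real image
with torch.no_grad():
    out = model([image])[0]

# "masks" has shape (N, 1, H, W): one soft mask per detection,
# thresholded (e.g. at 0.5) to obtain the pixel segmentation.
print(out["boxes"].shape, out["masks"].shape)
```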
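Finally, the focal loss itself is compact enough to state directly. This PyTorch sketch follows the binary formulation of Lin et al. with the usual defaults alpha = 0.25 and gamma = 2; the example tensors at the end are illustrative.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: cross-entropy down-weighted on easy examples."""
    # Per-anchor binary cross-entropy on the raw logits.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t is the model's probability for the true class of each anchor.
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma -> 0 for well-classified anchors, so the many
    # easy negatives in dense detection no longer dominate the loss.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example: 8 anchors, 2 positives.
logits = torch.randn(8)
targets = torch.tensor([1., 0., 0., 1., 0., 0., 0., 0.])
print(focal_loss(logits, targets))
```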