Browsing by Author "Sivaramakrishnan, Upasana"
Now showing 1 - 2 of 2
- Machine Learning Analysis of Hyperspectral Images of Damaged Wheat Kernels
  Dhakal, Kshitiz; Sivaramakrishnan, Upasana; Zhang, Xuemei; Belay, Kassaye; Oakes, Joseph; Wei, Xing; Li, Song (MDPI, 2023-03-28)
  Fusarium head blight (FHB) is a disease of small grains caused by the fungus Fusarium graminearum. In this study, we explored the use of hyperspectral imaging (HSI) to evaluate the damage caused by FHB in wheat kernels, using HSI both to classify disease severity and to correlate the damage with the content of the mycotoxin deoxynivalenol (DON). Computational analyses were carried out to determine which machine learning methods achieved the best accuracy in classifying different levels of damage in wheat kernel samples; the sample classes were based on DON content obtained from Gas Chromatography–Mass Spectrometry (GC-MS). We found that G-Boost, an ensemble method, showed the best performance, with 97% accuracy in classifying wheat kernels into severity levels. Mask R-CNN, an instance segmentation method, was used to segment the wheat kernels from the HSI data, and the regions of interest (ROIs) it produced achieved a high mAP of 0.97. Combining the Mask R-CNN results with the classification method correlated HSI data with DON concentration in small grains with an R² of 0.75. Our results show the potential of HSI to quantify DON in wheat kernels in commercial settings such as elevators or mills.
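  To make the classification step concrete, here is a minimal sketch (not the authors' pipeline) of training a gradient boosting classifier on per-kernel spectral features with scikit-learn. The synthetic feature matrix, band count, and three-class severity labels are placeholder assumptions standing in for the DON-based classes described in the abstract.

  ```python
  # Sketch of the classification step described above: per-kernel spectra are
  # treated as feature vectors and a gradient boosting ensemble assigns each
  # kernel to a DON-severity class. All data here is synthetic; in the study,
  # features would come from Mask R-CNN ROIs over the hyperspectral cube.
  import numpy as np
  from sklearn.ensemble import GradientBoostingClassifier
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Stand-in data: 300 kernels x 200 spectral bands, 3 severity classes
  # (0 = low, 1 = medium, 2 = high DON). Band count is an assumption.
  n_kernels, n_bands = 300, 200
  X = rng.normal(size=(n_kernels, n_bands))
  y = rng.integers(0, 3, size=n_kernels)

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.25, random_state=0, stratify=y
  )

  clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)
  clf.fit(X_train, y_train)
  print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
  ```

  On real data, the reported 97% accuracy would depend on the spectral preprocessing and class definitions; the hyperparameters above are illustrative defaults, not the authors' settings.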
- SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research
  Sivaramakrishnan, Upasana (Virginia Tech, 2024-05-30)
  Comparative anatomical studies of diverse plant species are vital for understanding changes in gene function, such as those involved in solute transport and hormone signaling in plant roots. PlantSeg, the state-of-the-art method for confocal image analysis, uses U-Net for cell wall segmentation; U-Net is a neural network model that requires training on a large number of manually labeled confocal images and lacks generalizability. In this research, we test a foundation model, the Segment Anything Model (SAM), to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time spent on dataset annotation, facilitating a semi-automated training process. Our proposed method improved the cell detection rate and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate for data of varying quality. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging.
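  As a rough illustration of the point-prompted SAM workflow the abstract describes, below is a minimal sketch using Meta's segment-anything package, together with the IoU measure used in the PlantSeg comparison. This is not the thesis code: the checkpoint filename, image array, and prompt coordinates are placeholder assumptions, and a real confocal slice would first need conversion to an 8-bit RGB array.

  ```python
  # Zero-shot SAM segmentation with a single foreground point prompt.
  # Assumes the segment-anything package and a downloaded SAM checkpoint.
  import numpy as np
  from segment_anything import SamPredictor, sam_model_registry

  # Load a pretrained SAM backbone; no fine-tuning on confocal data is done.
  # The checkpoint path is a placeholder for a locally downloaded weight file.
  sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
  predictor = SamPredictor(sam)

  # Stand-in for a confocal slice converted to 8-bit RGB (H x W x 3).
  image = np.zeros((512, 512, 3), dtype=np.uint8)
  predictor.set_image(image)

  # One foreground point prompt per cell of interest (label 1 = foreground).
  masks, scores, _ = predictor.predict(
      point_coords=np.array([[256, 256]]),
      point_labels=np.array([1]),
      multimask_output=True,
  )
  best_mask = masks[np.argmax(scores)]  # highest-scoring candidate mask

  def iou(a: np.ndarray, b: np.ndarray) -> float:
      """Intersection-over-union between two boolean masks, as in the
      comparison against PlantSeg mentioned in the abstract."""
      inter = np.logical_and(a, b).sum()
      union = np.logical_or(a, b).sum()
      return inter / union if union else 0.0
  ```

  In a semi-automated annotation loop, prompts like the point above could come from a few clicks per image rather than full manual labeling, which is the effort reduction the abstract is evaluating.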