SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research

dc.contributor.author: Sivaramakrishnan, Upasana
dc.contributor.committeechair: Li, Song
dc.contributor.committeemember: Ha, Dong S.
dc.contributor.committeemember: Ha, Sook Shin
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2024-05-31T08:00:52Z
dc.date.available: 2024-05-31T08:00:52Z
dc.date.issued: 2024-05-30
dc.description.abstract: Comparative anatomical studies of diverse plant species are vital for understanding changes in gene functions, such as those involved in solute transport and hormone signaling in plant roots. The state-of-the-art method for confocal image analysis, PlantSeg, uses U-Net for cell wall segmentation. U-Net is a neural network model that requires training on a large number of manually labeled confocal images and generalizes poorly. In this research, we test a foundation model, the Segment Anything Model (SAM), to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time consumed in dataset annotation, facilitating a semi-automated training process. Our proposed method improved the cell detection rate and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate across data of different quality. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging.
dc.description.abstractgeneral: Studying the anatomy of different plant species is crucial for understanding how genes work, especially those related to moving substances and signaling in plant roots. Scientists often use advanced techniques like confocal microscopy to examine plant tissues in detail. Traditional techniques for automatically segmenting plant cells, such as PlantSeg, require substantial computational resources and manual effort to prepare the dataset and train the model. In this study, we develop a novel technique using the Segment-Anything-Model (SAM) that can learn to identify cells without needing as much training data. We found that SAM performed better than other methods, detecting cells more accurately and making fewer mistakes. By comparing SAM with PlantSeg, we could see how well each worked on different types of images. Our results show that SAM is a reliable option for studying plant structures using confocal imaging. This research highlights the importance of using tailored approaches like SAM to get accurate results from complex images, offering a promising solution for plant scientists.
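The abstract evaluates segmentation quality via IoU scores between predicted and reference masks. As a minimal illustration only (not the thesis's actual evaluation code), IoU for a pair of binary masks is the ratio of overlapping foreground pixels to the total foreground covered by either mask:

```python
def mask_iou(mask_a, mask_b):
    """Intersection-over-Union between two same-shape binary masks,
    given as 2D lists of 0/1 values. Returns 0.0 if both masks are empty."""
    inter = 0  # pixels foreground in both masks
    union = 0  # pixels foreground in at least one mask
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            if a and b:
                inter += 1
            if a or b:
                union += 1
    return inter / union if union else 0.0

# Two toy 4x4 "cell" masks: 4 and 3 foreground pixels, 2 shared
a = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
b = [[0, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(mask_iou(a, b))  # 2 / (4 + 3 - 2) = 0.4
```

A higher IoU means closer agreement between a method's predicted cell boundaries and the ground-truth annotation, which is how the trade-off between accuracy and detection rate discussed above is quantified.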
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:40907
dc.identifier.uri: https://hdl.handle.net/10919/119191
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Segment-Anything-Model
dc.subject: Large-Vision-Models
dc.subject: Vision Transformers
dc.subject: Semantic segmentation
dc.subject: Prompt Segmentation
dc.subject: Interactive machine learning
dc.title: SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle (1 file)
Name: Sivaramakrishnan_U_T_2024.pdf
Size: 2.03 MB
Format: Adobe Portable Document Format
