College of Engineering (COE)
Note: The Department of Biological Systems Engineering is listed within the College of Agriculture and Life Sciences (CALS).
Browsing College of Engineering (COE) by Author "Abbott, A. Lynn"
Now showing 1 - 6 of 6
- Aerial high-throughput phenotyping of peanut leaf area index and lateral growth
  Sarkar, Sayantan; Cazenave, Alexandre-Brice; Oakes, Joseph C.; McCall, David S.; Thomason, Wade E.; Abbott, A. Lynn; Balota, Maria (Springer Nature, 2021-11-04)
  Leaf area index (LAI) is the ratio of total one-sided leaf area to ground area, whereas lateral growth (LG) is a measure of canopy expansion. Both are indicators of light capture, plant growth, and yield. Although LAI and LG can be measured directly, doing so is time-consuming. Healthy leaves absorb in the blue and red regions of the electromagnetic spectrum and reflect in the green. Aerial high-throughput phenotyping (HTP) may enable rapid acquisition of LAI and LG from leaf reflectance in these regions. In this paper, we report novel models that estimate peanut (Arachis hypogaea L.) LAI and LG from vegetation indices (VIs) derived relatively quickly and inexpensively from red, green, and blue (RGB) leaf reflectance collected with an unmanned aerial vehicle (UAV). In addition, we evaluate the models' suitability for identifying phenotypic variation in LAI and LG and for predicting pod yield from early-season LAI and LG estimates. The study included 18 peanut genotypes for model training in 2017 and 8 genotypes for model validation in 2019. The VIs included the blue green index (BGI), red-green ratio (RGR), normalized plant pigment ratio (NPPR), normalized green-red difference index (NGRDI), normalized chlorophyll pigment index (NCPI), and plant pigment ratio (PPR). The models used multiple linear and artificial neural network (ANN) regression, and their predictive accuracy ranged from 84% to 97%, depending on the VI combinations used. The results show that the new models are time- and cost-effective for estimating LAI and LG, and accessible for phenotypic selection of peanuts with desirable LAI, LG, and pod yield.
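  The RGB vegetation indices listed in this abstract are simple per-pixel band ratios. As a minimal sketch, the well-known NGRDI definition, (G − R) / (G + R), can be computed as below; the function name and sample reflectance values are illustrative, not taken from the paper.

  ```python
  def ngrdi(green: float, red: float) -> float:
      """Normalized green-red difference index: (G - R) / (G + R).

      Values near +1 indicate green-dominated (healthy canopy) pixels;
      negative values indicate red-dominated pixels such as bare soil.
      """
      denom = green + red
      if denom == 0:
          return 0.0  # undefined for a black pixel; return a neutral value
      return (green - red) / denom

  # Hypothetical reflectance samples; raw digital numbers work equally
  # well, since the index is a ratio.
  print(ngrdi(0.42, 0.18))  # canopy-like pixel: positive
  print(ngrdi(0.20, 0.35))  # soil-like pixel: negative
  ```

  In practice an index like this would be evaluated over every pixel of the UAV orthomosaic and then aggregated per plot before regression against measured LAI or LG.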
- Automated Mapping of Typical Cropland Strips in the North China Plain Using Small Unmanned Aircraft Systems (sUAS) Photogrammetry
  Zhang, Jianyong; Zhao, Yanling; Abbott, A. Lynn; Wynne, Randolph H.; Hu, Zhenqi; Zou, Yuzhu; Tian, Shuaishuai (MDPI, 2019-10-10)
  Accurate mapping of agricultural fields is needed for many purposes, including irrigation decisions and cadastral management. This paper addresses the automated mapping of cropland strips that are common in the North China Plain. These strips are typically 3–8 m wide and 50–300 m long, separated by small ridges that assist with irrigation. Conventional surveying methods are labor-intensive and time-consuming for this application, and only limited performance is possible with very-high-resolution satellite images. Small Unmanned Aircraft System (sUAS) images offer an alternative approach to ridge detection and strip mapping. This paper presents a novel method for detecting cropland strips using centimeter-resolution imagery captured by an sUAS flying at low altitude (60 m). Using digital surface models (DSMs) and ortho-rectified imagery derived from the sUAS data, the method extracts candidate ridge locations by surface-roughness segmentation combined with geometric constraints. It then applies vegetation removal and morphological operations to refine the candidate ridge elements, leading to polyline-based representations of cropland strip boundaries. The procedure was tested on sUAS data from four typical cropland plots, planted with early winter wheat, located approximately 60 km west of Jinan, China. The results indicate an ability to detect ridges with comparatively high recall and precision (96.8% and 95.4%, respectively). Cropland strips were extracted with over 98.9% agreement relative to ground truth, with kappa coefficients over 97.4%. To our knowledge, this method is the first to attempt cropland strip mapping using centimeter-resolution sUAS images. These results demonstrate that sUAS mapping is a viable approach for data collection to assist agricultural land management in the North China Plain.
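  The abstract mentions refining candidate ridge elements with morphological operations. As a hedged, one-dimensional sketch (not the paper's implementation), a binary opening — erosion followed by dilation — removes candidate runs shorter than the structuring element, which is the standard way isolated false detections get suppressed:

  ```python
  def erode(mask, k=3):
      """Binary erosion with a length-k flat structuring element (1D sketch):
      a position survives only if its whole k-neighborhood is set."""
      r, n = k // 2, len(mask)
      return [all(mask[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

  def dilate(mask, k=3):
      """Binary dilation: a position is set if any neighbor within r is set."""
      r, n = k // 2, len(mask)
      return [any(mask[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

  def opening(mask, k=3):
      """Erosion then dilation: removes runs shorter than k, keeps longer ones."""
      return dilate(erode(mask, k), k)

  # Hypothetical candidate mask: an isolated speckle and a real ridge run.
  candidates = [0, 1, 0, 0, 1, 1, 1, 0, 0]
  print([int(v) for v in opening(candidates, 3)])  # speckle removed, run kept
  ```

  The real pipeline operates on 2D rasters (where a library routine such as a 2D binary opening would be used), but the filtering principle is the same.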
- Beyond Finding Change: multitemporal Landsat for forest monitoring and management
  Wynne, Randolph H.; Thomas, Valerie A.; Brooks, Evan B.; Coulston, J. O.; Derwin, Jill M.; Liknes, Greg C.; Yang, Z.; Fox, Thomas R.; Ghannam, S.; Abbott, A. Lynn; House, M. N.; Saxena, R.; Watson, Layne T.; Gopalakrishnan, Ranjith (2017-07)
  Take-homes:
  - Tobler's Law is still in effect with time series: spatial autocorrelation in temporal coherence can help in both preprocessing and estimation.
  - Continual process improvement in extant algorithms is needed.
  - Additional means are needed to address variation within (parameterization) and across algorithms (the Reverend…).
  - Time series are improving higher-order products (example: NLCD TCC), enabling near-continuous monitoring.
- Color Invariant Skin Segmentation
  Xu, Han; Sarkar, Abhijit; Abbott, A. Lynn (IEEE, 2022-06)
  This paper addresses the problem of automatically detecting human skin in images without reliance on color information. A primary motivation of the work has been to achieve results that are consistent across the full range of skin tones, even while using a training dataset that is significantly biased toward lighter skin tones. Previous skin-detection methods have used color cues almost exclusively, and we present a new approach that performs well in the absence of such information. A key aspect of the work is dataset repair through augmentation that is applied strategically during training, with the goal of color-invariant feature learning to enhance generalization. We have demonstrated the concept using two architectures, and experimental results show improvements in both precision and recall for most Fitzpatrick skin tones in the benchmark ECU dataset. We further tested the system with the RFW dataset to show that the proposed method performs much more consistently across different ethnicities, thereby reducing the chance of bias based on skin color. To demonstrate the effectiveness of our work, extensive experiments were performed on grayscale images as well as images obtained under unconstrained illumination and with artificial filters. Source code: https://github.com/HanXuMartin/Color-Invariant-Skin-Segmentation
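  One common way to encourage color-invariant feature learning of the kind this abstract describes is to randomly strip color from training images, so the network cannot rely on hue. The sketch below is an illustrative assumption about such an augmentation, not the paper's actual pipeline; the function names are hypothetical, and the grayscale weights are the standard ITU-R BT.601 luma coefficients.

  ```python
  import random

  def to_grayscale(pixel):
      """Replace an (R, G, B) pixel with its luma replicated across all three
      channels, so the image keeps its shape but carries no color information
      (ITU-R BT.601 weights)."""
      r, g, b = pixel
      y = 0.299 * r + 0.587 * g + 0.114 * b
      return (y, y, y)

  def augment(image, p=0.5, rng=random):
      """With probability p, drop all color from a training image.

      `image` is a nested list of (R, G, B) tuples; a real pipeline would
      apply the same idea to tensors inside a data loader.
      """
      if rng.random() < p:
          return [[to_grayscale(px) for px in row] for row in image]
      return image
  ```

  Training on a mix of color and decolorized copies pushes the learned features toward shape and texture cues, which is one plausible route to the cross-skin-tone consistency the paper reports.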
- Optimization of Color Conversion for Face Recognition
  Jones, Creed F. III; Abbott, A. Lynn (2004-04-21)
  This paper concerns the conversion of color images to monochromatic form for the purpose of human face recognition. Many face recognition systems operate using monochromatic information alone, even when color images are available. In such cases, simple color transformations are commonly used that are not optimal for the face recognition task. We present a framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm. Experimental results are presented for both the well-known eigenface method and for extraction of Gabor-based face features to demonstrate the potential for improved overall system performance. Using a database of 280 images, our experiments using these methods resulted in performance improvements of approximately 4% to 14%.
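  The core idea here is that the usual fixed RGB-to-mono weights need not be optimal, and better weights can be fitted from data. As a hedged sketch of the "linear regression" flavor only (the paper's exact formulation and the helper name below are not from the source), one can fit per-channel weights minimizing squared error against target mono values by plain gradient descent:

  ```python
  def fit_weights(pixels, targets, lr=0.1, steps=2000):
      """Fit w = (wR, wG, wB) minimizing sum((w . p - t)^2) over training
      pairs by batch gradient descent. `pixels` are (R, G, B) tuples in
      [0, 1]; `targets` are the desired monochrome values."""
      w = [1 / 3, 1 / 3, 1 / 3]  # start from a naive average
      n = len(pixels)
      for _ in range(steps):
          grad = [0.0, 0.0, 0.0]
          for p, t in zip(pixels, targets):
              err = sum(wi * pi for wi, pi in zip(w, p)) - t
              for j in range(3):
                  grad[j] += 2 * err * p[j] / n
          w = [wi - lr * gi for wi, gi in zip(w, grad)]
      return w

  # Synthetic example: targets generated by hidden weights (0.2, 0.5, 0.3);
  # the fit should recover them.
  pixels = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1), (1, 0, 1)]
  targets = [0.2, 0.5, 0.3, 0.7, 0.8, 0.5]
  print(fit_weights(pixels, targets))
  ```

  In the paper, the regression targets would be chosen to improve recognition rather than to match a fixed luminance, which is what makes the learned conversion task-specific.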
- PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences
  Wang, Jun; Li, Xiaolong; Sullivan, Alan; Abbott, A. Lynn; Chen, Siheng (IEEE, 2022-06)
  We propose a point-based spatiotemporal pyramid architecture, called PointMotionNet, to learn motion information from a sequence of large-scale 3D LiDAR point clouds. A core component of PointMotionNet is a novel technique for point-based spatiotemporal convolution, which finds point correspondences across time by leveraging a time-invariant spatial neighboring space, and extracts spatiotemporal features. To validate PointMotionNet, we consider two motion-related tasks: point-based motion prediction and multisweep semantic segmentation. For each task, we design an end-to-end system in which PointMotionNet is the core module that learns motion information. We conduct extensive experiments and show that (i) for point-based motion prediction, PointMotionNet achieves less than 0.5 m mean squared error on the Argoverse dataset, a significant improvement over existing methods; and (ii) for multisweep semantic segmentation, PointMotionNet with a pretrained segmentation backbone outperforms the previous SOTA by over 3.3% mIoU on the SemanticKITTI dataset with 25 classes, including 6 moving objects.
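  The "time-invariant spatial neighboring space" in this abstract amounts to matching each point against points from an earlier sweep within a fixed spatial radius. The brute-force sketch below illustrates only that correspondence idea, not PointMotionNet's actual convolution; the function name and radius are assumptions, and a real system would use a spatial index rather than a linear scan over millions of LiDAR points.

  ```python
  def nearest_in_ball(p, prev_points, radius):
      """Return the nearest point from the previous sweep lying within a
      fixed spatial radius of p, or None if the ball is empty."""
      best, best_d2 = None, radius * radius
      for q in prev_points:
          d2 = sum((a - b) ** 2 for a, b in zip(p, q))
          if d2 <= best_d2:
              best, best_d2 = q, d2
      return best

  # Hypothetical two-sweep example: the first query point matches a nearby
  # point from the previous sweep; the second has no neighbor in its ball.
  prev = [(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
  print(nearest_in_ball((0.4, 0.0, 0.0), prev, radius=1.0))
  print(nearest_in_ball((3.0, 3.0, 3.0), prev, radius=1.0))
  ```

  Features gathered from such cross-time neighborhoods are what the spatiotemporal convolution aggregates to produce per-point motion estimates.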