Let's Agree to Disagree: Comparing Auto-Acoustic Identification Programs for Northeastern Bats
Abstract
With declines in abundance and shifting distributions of bat species affected by white-nose syndrome, acoustic monitoring has become the new "normal," and the ability to accurately identify individual bat species with acoustic identification programs is increasingly important. We assessed rates of disagreement between the three U.S. Fish and Wildlife Service-approved acoustic identification software programs (Kaleidoscope Pro 4.2.0, Echoclass 3.1, and Bat Call Identification 2.7d) and manual visual identification using acoustic data collected during summers from 2003 to 2017 at Fort Drum, New York. We assessed the percentage of agreement between programs through pairwise comparisons at the total nightly count level, the individual file level (i.e., individual echolocation pass call files), and the grouped maximum likelihood estimate level (i.e., probability values that a species is misclassified as present when in fact it is absent), using preplanned contrasts, Akaike Information Criterion, and annual confusion matrices. Interprogram agreement at the individual file level was low, as measured by Cohen's Kappa (0.2-0.6). However, pairwise comparisons at the site-night level indicated higher program agreement (40-90%) using single-season occupancy metrics. In comparing analytical outcomes across our different datasets (i.e., how comparable the programs and visual identification are regarding the relationship between environmental conditions and bat activity), we found high congruency in both the relative rankings of the models and the relative level of support for each individual model. This indicates that, beyond the file-by-file level, the individual software packages support consistent ecological inference at the scales used by managers. Depending on objectives, our results can help users choose automated software and maximum likelihood estimate thresholds appropriate for their needs, and they allow better cross-comparison of studies that use different automated acoustic software.
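For readers unfamiliar with the file-level agreement metrics named above, the following is a minimal Python sketch, not the authors' code, of percent agreement and Cohen's Kappa computed from two programs' species labels for the same set of call files. The species codes and label sequences are hypothetical, included only to make the example runnable.

from collections import Counter

def percent_agreement(labels_a, labels_b):
    # Fraction of call files assigned the same species by both programs.
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    # where p_o is observed agreement and p_e is the agreement expected
    # by chance from each program's marginal label frequencies.
    n = len(labels_a)
    p_o = percent_agreement(labels_a, labels_b)
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[s] * freq_b.get(s, 0) for s in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical species assignments for five call files from two programs.
program_1 = ["MYLU", "EPFU", "MYLU", "LABO", "MYSE"]
program_2 = ["MYLU", "EPFU", "MYSE", "LABO", "MYSE"]
print(percent_agreement(program_1, program_2))  # 0.8
print(cohens_kappa(program_1, program_2))       # ~0.74

Kappa values of 0.2-0.6, as reported here, indicate only fair-to-moderate agreement once chance matches are discounted, even when raw percent agreement looks higher.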
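The congruency finding rests on comparing how the same candidate models rank, and how strongly each is supported, across the datasets each program produced. As an illustrative sketch under assumed inputs (the AIC scores below are hypothetical, not from the study), Akaike weights express the relative level of support for each model in a candidate set:

import math

def akaike_weights(aic_values):
    # Relative support: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    # where delta_i is each model's AIC minus the minimum AIC in the set.
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for the same three candidate models fit to
# one program's dataset; similar rankings and weights across programs
# would indicate congruent ecological inference.
print(akaike_weights([210.4, 212.1, 218.9]))

If the rank order and approximate weights agree across the datasets from different programs, the same ecological conclusions follow regardless of which software produced the identifications.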