Browsing by Author "Carlyn, David"
Now showing 1 - 2 of 2
- Discovering Novel Biological Traits From Images Using Phylogeny-Guided Neural Networks
  Elhamod, Mohannad; Khurana, Mridul; Manogaran, Harish Babu; Uyeda, Josef C.; Balk, Meghan A.; Dahdul, Wasila; Bakış, Yasin; Bart, Henry L. Jr.; Mabee, Paula M.; Lapp, Hilmar; Balhoff, James P.; Charpentier, Caleb; Carlyn, David; Chao, Wei-Lun; Stewart, Charles V.; Rubenstein, Daniel I.; Berger-Wolf, Tanya; Karpatne, Anuj (ACM, 2023-08-06)
  Discovering evolutionary traits that are heritable across species on the tree of life (also referred to as a phylogenetic tree) is of great interest to biologists to understand how organisms diversify and evolve. However, the measurement of traits is often a subjective and labor-intensive process, making trait discovery a highly label-scarce problem. We present a novel approach for discovering evolutionary traits directly from images without relying on trait labels. Our proposed approach, Phylo-NN, encodes the image of an organism into a sequence of quantized feature vectors, or codes, where different segments of the sequence capture evolutionary signals at varying ancestry levels in the phylogeny. We demonstrate the effectiveness of our approach in producing biologically meaningful results in a number of downstream tasks, including species image generation and species-to-species image translation, using fish species as a target example.
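The abstract above describes encoding an organism image into a sequence of quantized feature codes whose contiguous segments carry evolutionary signal at different ancestry levels of the phylogeny. The following minimal PyTorch sketch illustrates only that general pattern; it is not the authors' Phylo-NN implementation, and the class name, backbone, dimensions, codebook size, and segment layout are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class QuantizedAncestryEncoder(nn.Module):
    """Illustrative sketch (not the authors' Phylo-NN code): encode an image into a
    sequence of discrete codes, with contiguous segments intended to carry signal
    at different ancestry levels of a phylogeny."""

    def __init__(self, embed_dim=64, codebook_size=512, codes_per_level=4, num_levels=4):
        super().__init__()
        self.num_codes = codes_per_level * num_levels
        # Tiny CNN backbone (assumption; any image encoder could stand in here).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, self.num_codes * embed_dim),
        )
        # Shared codebook used to quantize each continuous feature vector.
        self.codebook = nn.Embedding(codebook_size, embed_dim)

    def forward(self, images):
        b = images.shape[0]
        # Continuous features: one embed_dim vector per code slot.
        z = self.backbone(images).view(b, self.num_codes, -1)
        # Squared L2 distance from each continuous vector to every codebook entry.
        cb = self.codebook.weight
        dists = (z.unsqueeze(2) - cb.view(1, 1, *cb.shape)).pow(2).sum(-1)
        codes = dists.argmin(dim=-1)       # (b, num_codes) discrete code sequence
        quantized = self.codebook(codes)   # (b, num_codes, embed_dim)
        return codes, quantized


# Usage: under this assumed layout, codes[:, :4] would be read as the segment for the
# coarsest ancestry level, codes[:, 4:8] the next level, and so on.
encoder = QuantizedAncestryEncoder()
codes, quantized = encoder(torch.randn(2, 3, 128, 128))
print(codes.shape, quantized.shape)
```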
- A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis
  Paul, Dipanjyoti; Chowdhury, Arpita; Xiong, Xinqi; Chang, Feng-Ju; Carlyn, David; Stevens, Samuel; Provost, Kaiya; Karpatne, Anuj; Carstens, Bryan; Rubenstein, Daniel I.; Stewart, Charles V.; Berger-Wolf, Tanya Y.; Su, Yu; Chao, Wei-Lun (2023)
  We present a novel usage of Transformers to make image classification interpretable. Unlike mainstream classifiers that wait until the last fully-connected layer to incorporate class information to make predictions, we investigate a proactive approach, asking each class to search for itself in an image. We realize this idea via a Transformer encoder-decoder inspired by DEtection TRansformer (DETR). We learn “class-specific” queries (one for each class) as input to the decoder, enabling each class to localize its patterns in an image via cross-attention. We name our approach INterpretable TRansformer (INTR), which is fairly easy to implement and exhibits several compelling properties. We show that INTR intrinsically encourages each class to attend distinctively; the cross-attention weights thus provide a faithful interpretation of the prediction. Interestingly, via “multi-head” cross-attention, INTR could identify different “attributes” of a class, making it particularly suitable for fine-grained classification and analysis, which we demonstrate on eight datasets. Our code and pre-trained model are publicly accessible at https://github.com/Imageomics/INTR.
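The abstract above describes learning one query per class and letting each query cross-attend to image features, with the cross-attention weights serving as the interpretation of the prediction. The minimal PyTorch sketch below illustrates that class-query cross-attention pattern under assumed names and shapes; it is not the released INTR code (the linked repository holds the actual implementation).

```python
import torch
import torch.nn as nn

class ClassQueryClassifier(nn.Module):
    """Illustrative sketch (not the released INTR code): learnable per-class queries
    cross-attend to image patch features; each class's attention map shows where that
    class 'searches for itself', and the attended output is scored for that class."""

    def __init__(self, num_classes=8, embed_dim=64, num_heads=4, patch_dim=3 * 16 * 16):
        super().__init__()
        # Toy patch embedding (assumption; a CNN/ViT backbone would normally supply features).
        self.patch_embed = nn.Linear(patch_dim, embed_dim)
        self.class_queries = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)  # one logit per class query

    def forward(self, patches):
        # patches: (b, num_patches, patch_dim) flattened image patches
        b = patches.shape[0]
        feats = self.patch_embed(patches)                            # (b, P, D)
        queries = self.class_queries.unsqueeze(0).expand(b, -1, -1)  # (b, C, D)
        attended, attn_weights = self.cross_attn(queries, feats, feats)
        logits = self.score(attended).squeeze(-1)                    # (b, C)
        # attn_weights: (b, C, P) -- each class's attention over patches,
        # usable as an interpretation of where that class looked.
        return logits, attn_weights


model = ClassQueryClassifier()
logits, attn = model(torch.randn(2, 49, 3 * 16 * 16))
print(logits.shape, attn.shape)  # torch.Size([2, 8]) torch.Size([2, 8, 49])
```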