International Conference on Learning Representations (ICLR)
Learned Visual Features to Textual Explanations
Saeid Asgari Taghanaki, Aliasghar Khani, Amir Khasahmadi, Aditya Sanghi, Karl D.D. Willis, Ali Mahdavi-Amiri
Abstract
Interpreting the learned features of vision models has posed a longstanding challenge in the field of machine learning. To address this issue, we propose a novel method that leverages the capabilities of large language models (LLMs) to interpret the learned features of pre-trained image classifiers. Our method, called TExplain, trains a neural network to establish a connection between the feature space of an image classifier and that of an LLM. During inference, our approach generates a large number of sentences that explain the features the classifier has learned for a given image. From these sentences we extract the most frequent words, providing a comprehensive summary of the features and patterns learned by the classifier. Our method is, to the best of our knowledge, the first to use the frequent words corresponding to a visual representation to provide insight into the decision-making process of an independently trained classifier, enabling the detection of spurious correlations and biases and a deeper understanding of its behavior. To validate the effectiveness of our approach, we conduct experiments on diverse datasets, including ImageNet-9L and Waterbirds. The results demonstrate the potential of our method to enhance the interpretability and robustness of image classifiers.
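To make the pipeline in the abstract concrete, the sketch below wires a frozen classifier to an LLM through a small translator network, samples many sentences, and tallies the most frequent words. The abstract does not specify the backbone, the LLM, the translator architecture, or the sampling setup; ResNet-50, GPT-2, the MLP translator, PREFIX_LEN, and the word-cleaning step are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from collections import Counter
from torchvision.models import resnet50, ResNet50_Weights
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Frozen, independently trained image classifier (assumed: ResNet-50).
weights = ResNet50_Weights.IMAGENET1K_V2
classifier = resnet50(weights=weights).eval()
backbone = nn.Sequential(*list(classifier.children())[:-1])  # drop the fc head

# Assumed LLM: GPT-2 (the paper's choice of LLM is not given in the abstract).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
llm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

PREFIX_LEN = 8  # hypothetical: number of soft tokens fed to the LLM

class FeatureTranslator(nn.Module):
    """Maps a 2048-d classifier feature to PREFIX_LEN soft embeddings in the
    LLM's input space. A plain MLP is a placeholder for the trained network
    that connects the two feature spaces."""
    def __init__(self, feat_dim=2048, llm_dim=768, prefix_len=PREFIX_LEN):
        super().__init__()
        self.prefix_len, self.llm_dim = prefix_len, llm_dim
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, llm_dim * prefix_len), nn.Tanh(),
            nn.Linear(llm_dim * prefix_len, llm_dim * prefix_len))
    def forward(self, feats):                        # feats: (B, feat_dim)
        return self.mlp(feats).view(-1, self.prefix_len, self.llm_dim)

translator = FeatureTranslator()  # in practice: trained before inference

@torch.no_grad()
def explain(image_tensor, n_samples=100, max_new_tokens=20, temperature=1.0):
    """Sample many sentences conditioned on the classifier's features for one
    (already preprocessed) image, then return the most frequent words."""
    feats = backbone(image_tensor.unsqueeze(0)).flatten(1)    # (1, 2048)
    prefix = translator(feats)                                # (1, P, 768)
    words = Counter()
    for _ in range(n_samples):
        embeds, tokens = prefix.clone(), []
        for _ in range(max_new_tokens):
            logits = llm(inputs_embeds=embeds).logits[:, -1, :]
            probs = torch.softmax(logits / temperature, dim=-1)
            next_id = torch.multinomial(probs, 1)             # (1, 1)
            tokens.append(next_id.item())
            # Embed the sampled token and append it to the running prefix.
            embeds = torch.cat([embeds, llm.transformer.wte(next_id)], dim=1)
        sentence = tokenizer.decode(tokens)
        words.update(w.strip(".,!?").lower() for w in sentence.split()
                     if w.strip(".,!?"))
    return words.most_common(10)
```

The manual token-sampling loop keeps the sketch self-contained; the translator must be trained (for example, on paired image-text data) before the frequent-word statistics say anything meaningful about the classifier.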