Publication | AAAI Conference on Artificial Intelligence 2021
Cross-Domain Few-Shot Graph Classification
Few-shot learning is a setting in which a model learns to adapt to novel categories from a few labeled samples. Inspired by human learning, meta-learning addresses few-shot learning by leveraging a distribution of similar tasks to accumulate transferable knowledge from prior experience, which can then serve as a strong inductive bias for fast adaptation to downstream tasks. A fundamental assumption in meta-learning is that tasks in the meta-training and meta-testing phases are sampled from the same distribution, i.e., that tasks are i.i.d. In many real-world applications, however, collecting tasks from the same distribution is infeasible; instead, datasets are available from the same modality but different domains. In this research, we address this by introducing an attention-based graph encoder that can accumulate knowledge from tasks that are not similar.
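For concreteness, the episodic few-shot protocol described above can be sketched as follows. This is a generic N-way, K-shot task sampler; the function name, default parameters, and the (graph, label) dataset layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the standard episodic (N-way, K-shot) sampling protocol.
# Dataset layout and names are assumptions for illustration only.
import random
from collections import defaultdict

def sample_episode(dataset, n_way=3, k_shot=5, q_queries=10):
    """Sample one few-shot task (episode) from a labeled graph dataset.

    `dataset` is assumed to be a list of (graph, label) pairs; an episode
    consists of a small labeled support set for adaptation and a query set
    for evaluating the adapted model.
    """
    by_label = defaultdict(list)
    for graph, label in dataset:
        by_label[label].append(graph)

    # Pick N classes to play the role of novel categories in this episode.
    classes = random.sample(list(by_label), n_way)

    support, query = [], []
    for episode_label, cls in enumerate(classes):
        graphs = random.sample(by_label[cls], k_shot + q_queries)
        support += [(g, episode_label) for g in graphs[:k_shot]]
        query += [(g, episode_label) for g in graphs[k_shot:]]
    return support, query
```

In the cross-domain setting studied here, the episodes seen at meta-training time would come from different source datasets than the episodes used at meta-testing time, which breaks the i.i.d. assumption discussed above.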
Abstract
Cross-Domain Few-Shot Graph Classification
Kaveh Hassani
AAAI Conference on Artificial Intelligence 2021
We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological views, to learn representations of task-specific information for fast adaptation, and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy across all benchmarks.
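To make the idea of attention over congruent graph views concrete, below is a minimal PyTorch sketch of a multi-view graph encoder that encodes each view separately and fuses the per-view graph embeddings with a learned attention layer. The dense-adjacency GCN layer, module names, and dimensions are illustrative assumptions and do not reproduce the authors' architecture.

```python
# Illustrative sketch only: attention-based fusion of several graph views.
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One graph-convolution step on a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # Add self-loops, average neighbor features, then transform.
        adj = adj + torch.eye(adj.size(0))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj @ x) / deg))

class MultiViewAttentionEncoder(nn.Module):
    """Encode several views of a graph and fuse them with attention."""
    def __init__(self, in_dim, hidden_dim, num_views=3):
        super().__init__()
        self.view_encoders = nn.ModuleList(
            [DenseGCNLayer(in_dim, hidden_dim) for _ in range(num_views)]
        )
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, views, x):
        # views: list of dense adjacency matrices, one per view of the graph.
        # x: shared node-feature matrix of shape (num_nodes, in_dim).
        embeddings = torch.stack(
            [enc(adj, x).mean(dim=0) for enc, adj in zip(self.view_encoders, views)]
        )  # (num_views, hidden_dim), mean-pooled over nodes per view
        weights = torch.softmax(self.attn(embeddings), dim=0)  # (num_views, 1)
        return (weights * embeddings).sum(dim=0)  # fused graph embedding
```

In a metric-based meta-learning framework such as Prototypical Networks, the fused embedding of each query graph would then be compared against class prototypes computed from the support set of an episode.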
Related Resources
Amortizing Pragmatic Program Synthesis with Rankings (2023)
A novel method of amortizing the RSA algorithm by leveraging a global…

What’s In A Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files (2023)
The natural language names designers use in CAD software are a…

Inferring CAD Modeling Sequences using Zone Graphs (2021)
In computer-aided design (CAD), the ability to “reverse engineer” the…

Evolving Through the Looking Glass: Learning Improved Search Spaces with Variational Autoencoders (2022)
Nature has spent billions of years perfecting our genetic…