Simulation, Optimization, and Systems
Physics-Informed AI
Webinar Series
In the videos below, experts from academia and industry explore the intersection of physics and AI, and how this emerging field is revolutionizing simulations, design processes, and problem solving across industries.
This project is presented by the Simulation, Optimization, and Systems (SOS) team at Autodesk Research.
LaSDI: Latent Space Dynamics Identification
Dr. Youngsoo Choi, Lawrence Livermore National Laboratory
Sponsored by Hesam Salehipour, Principal Computational Physics Research Scientist, Autodesk Research
SEPT 17, 2024 | Many computational models for physical systems have been developed to expedite scientific discovery, which would otherwise be impeded by the lengthy nature of traditional, non-computational experimentation (e.g., observation, problem identification, hypothesis formulation, experimentation, data collection, analysis, conclusion, and theory development). However, as these physical systems grow more complex, the computational models themselves can become prohibitively time-consuming. To address this challenge, we introduce a framework called Latent Space Dynamics Identification (LaSDI), which transforms complex, high-dimensional computational domains into reduced, succinct coordinate systems—a sophisticated change of variables that preserves essential dynamics.
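For readers unfamiliar with the approach, the sketch below illustrates the general reduced-order-modeling pattern behind LaSDI: compress high-dimensional snapshots into a low-dimensional latent space, identify a simple dynamical model for the latent trajectories, and decode predictions back to the full space. The random snapshot data, the PCA-style linear compression, and the linear latent ODE are illustrative assumptions of this sketch; the talk covers richer, autoencoder-based variants.

```python
# Minimal sketch of the latent-space dynamics identification pattern
# (not the LaSDI implementation): compress -> identify -> predict -> decode.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: n_steps time samples of an n_dof-dimensional state
# (placeholder for the output of a full-order solver).
n_steps, n_dof, dt = 200, 1000, 1e-2
snapshots = rng.standard_normal((n_steps, n_dof))

# 1) Compress: project snapshots onto the k leading POD/PCA modes.
k = 5
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:k]                                # (k, n_dof) spatial modes
Z = (snapshots - mean) @ modes.T              # (n_steps, k) latent trajectories

# 2) Identify latent dynamics: fit dz/dt ~ z A by least squares
#    (a SINDy-style library of nonlinear terms could replace the linear model).
dZ = np.gradient(Z, dt, axis=0)               # finite-difference time derivatives
A, *_ = np.linalg.lstsq(Z, dZ, rcond=None)    # (k, k) latent dynamics matrix

# 3) Predict: integrate the latent ODE forward, then decode to the full space.
z = Z[0].copy()
for _ in range(n_steps - 1):
    z = z + dt * (z @ A)                      # forward Euler in the latent space
u_pred = mean + z @ modes                     # reconstructed full-order state
print(u_pred.shape)                           # (1000,)
```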
Bridging Machine Learning and Physics: Exploring Data-Driven Surrogates, Physics-Informed ML, and Physics Discovery
Teeratorn Kadeethum, Sandia National Laboratories
Sponsored by Mehran Ebrahimi, Principal Computational Physics Research Scientist, Autodesk Research
OCT 16, 2024 | Data-driven surrogates, physics-informed machine learning, and physics discovery represent three key approaches in the application of machine learning to physical systems. Data-driven surrogates rely heavily on large amounts of data to approximate complex physical processes, offering ease of training and flexibility across applications. However, these models often fail to honor the underlying physics, leading to potential inaccuracies outside of their training domain and dependence on abundant, high-quality data. Physics-informed ML directly integrates governing equations or physical laws into the learning process, allowing the model to generalize better with limited data. This approach enhances model interpretability and reduces reliance on data, but it can suffer from training instability due to challenges like enforcing boundary conditions or balancing loss terms. For instance, incorporating conservation laws in fluid dynamics modeling often leads to slow convergence or difficulty handling stiff systems. Lastly, physics discovery aims to uncover governing equations from data, leveraging explainable AI techniques to identify the underlying physical laws governing a system. This approach ensures alignment with physics and provides interpretable, symbolic representations of the discovered equations, making it particularly useful in fields where the governing equations are not well known or need refinement. This talk will explore these three approaches in detail, examining their underlying mechanisms, trade-offs, and relationships, providing insights into how they complement one another in solving complex physical problems.
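As a concrete illustration of the physics-informed approach discussed above, the sketch below adds a PDE-residual term to a standard data-fitting loss. The 1D heat equation, the network architecture, and the equal loss weighting are illustrative assumptions, not details from the talk; the loss balancing in particular is where the training-instability issues mentioned above tend to appear.

```python
# Hedged sketch (PyTorch) of a physics-informed loss: data-fit term + PDE residual.
import torch

torch.manual_seed(0)
alpha = 0.1  # assumed diffusivity for the illustrative heat equation u_t = alpha * u_xx
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x, t):
    """Residual of u_t - alpha * u_xx at collocation points (x, t)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

# Placeholder supervision: a handful of (x, t, u) observations.
x_data, t_data = torch.rand(16, 1), torch.rand(16, 1)
u_data = torch.sin(torch.pi * x_data) * torch.exp(-t_data)   # stand-in measurements

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    # Data term: fit the sparse observations.
    loss_data = torch.mean((net(torch.cat([x_data, t_data], dim=1)) - u_data) ** 2)
    # Physics term: penalize the PDE residual at random collocation points.
    x_c, t_c = torch.rand(128, 1), torch.rand(128, 1)
    loss_pde = torch.mean(pde_residual(x_c, t_c) ** 2)
    loss = loss_data + loss_pde          # balancing these two terms is the hard part
    loss.backward()
    opt.step()
```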
Scale-consistent Learning with Neural Operators
Zongyi Li, California Institute of Technology
Sponsored by Mehran Ebrahimi, Principal Computational Physics Research Scientist, Autodesk Research
OCT 22, 2024 | Data-driven models have emerged as a promising approach for solving partial differential equations (PDEs) in science and engineering. Previous machine learning (ML) models typically cover only a narrow distribution of PDE problems; for example, a trained ML model for the Navier-Stokes equations usually works only for a fixed Reynolds number and domain size. To overcome these limitations, we propose a data augmentation scheme based on scale-consistency properties of PDEs and design a scale-informed neural operator that can model a wide range of scales. Our formulation (i) leverages the fact that many PDEs possess a scale consistency under rescaling of the spatial domain, and (ii) is based on the discretization-convergent property of neural operators, which allows them to be applied across arbitrary resolutions. Our experiments on the 2D Darcy Flow, Helmholtz equation, and Navier-Stokes equations show that the proposed scale-consistency loss helps the scale-informed neural operator model generalize to Reynolds numbers ranging from 250 to 10000. This approach has the potential to significantly improve the efficiency and generalizability of data-driven PDE solvers in various scientific and engineering applications.
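The sketch below shows one way a scale-consistency penalty can be expressed in code: the operator's prediction on a rescaled input is compared against the rescaled prediction on the original input. The placeholder model and the plain bilinear resampling are assumptions of this sketch; the actual transforms are PDE-specific (for Navier-Stokes, for example, rescaling the domain also changes the effective Reynolds number).

```python
# Hedged sketch of a scale-consistency loss for an operator acting on gridded fields.
import torch
import torch.nn.functional as F

def scale_consistency_loss(model, a, scale=2):
    """a: (batch, channels, H, W) input coefficient/initial-condition fields."""
    # Prediction at the original scale.
    u = model(a)
    # Rescaled input: here simply a bilinear resampling of the spatial domain;
    # a real implementation would also adjust the PDE parameters accordingly.
    a_scaled = F.interpolate(a, scale_factor=scale, mode="bilinear", align_corners=False)
    u_from_scaled = model(a_scaled)
    # Rescale the original prediction to the new grid and compare.
    u_scaled = F.interpolate(u, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.mse_loss(u_from_scaled, u_scaled)

# Usage with any resolution-agnostic operator (a placeholder CNN stands in here):
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
a = torch.rand(4, 1, 32, 32)
loss = scale_consistency_loss(model, a)
loss.backward()
```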
Machine Learning in Large-Scale Engineering Simulations
Wenzhuo Xu, Carnegie Mellon University
Sponsored by Daniele Grandi, Principal Research Scientist, Autodesk Research
NOV 14, 2024 | Applying physics-informed neural networks (PINNs) in engineering scenarios involving millions of elements presents a significant computational challenge, as the complexity and diversity of the physics can exceed the capacities of machine learning (ML) models and available GPU memory. We leverage graph structures and physics spectrum analysis to handle complex, irregular physical boundaries and large-scale simulations of over 1 million elements, achieving high validation accuracy across multiple physics scenarios while remaining fully integrated with numerical solvers.
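The sketch below shows the kind of graph message-passing building block such an approach rests on: mesh nodes carry field features, mesh edges define neighborhoods, and learned messages are aggregated per node. The layer sizes and the random toy mesh are illustrative assumptions, not the speaker's architecture.

```python
# Minimal message-passing layer over a mesh graph, in plain PyTorch.
import torch

class MessagePassingLayer(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.ReLU())
        self.node_mlp = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.ReLU())

    def forward(self, x, edge_index):
        src, dst = edge_index                       # (2, n_edges) sender/receiver node ids
        messages = self.edge_mlp(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, messages)   # sum messages per node
        return self.node_mlp(torch.cat([x, agg], dim=-1))

# Toy mesh: 1,000 nodes with 16-dimensional features and random connectivity.
n_nodes, dim = 1000, 16
x = torch.rand(n_nodes, dim)
edge_index = torch.randint(0, n_nodes, (2, 4000))
layer = MessagePassingLayer(dim)
out = layer(x, edge_index)                          # updated per-node features
print(out.shape)                                    # torch.Size([1000, 16])
```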
Neural PDE: Towards AI-Enhanced Physics Simulation
Peter Yichen Chen, MIT CSAIL
Sponsored by Nigel Morris, Sr. Manager, Principal Research Scientist, Autodesk Research
NOV 21, 2024 | Physics simulation has become the third pillar of science and engineering, alongside theory and experiments. Two distinct simulation paradigms have emerged: the classical laws-of-physics approach, e.g., leveraging partial differential equations (PDEs) derived from first principles, and the data-driven approach, e.g., training neural networks on observations. My research asks: how can we effectively merge these two approaches to amplify their respective strengths? In this talk, I will show that by organically integrating these two approaches, we can create physics simulations that significantly outperform classical physics-only approaches in terms of (1) accuracy, (2) speed, and (3) accessibility. At the same time, our hybrid physics-data simulations generalize exceptionally well because, unlike their purely data-driven counterparts, they carefully incorporate PDEs as an inductive bias.
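One common hybrid pattern consistent with this theme is sketched below: advance the state with a classical PDE step, then apply a learned correction on top of it. The 1D diffusion stencil, the convolutional correction network, and the step sizes are illustrative assumptions rather than the speaker's method.

```python
# Hedged sketch of a hybrid physics-data time stepper: classical step + learned correction.
import torch

n, dt, dx, nu = 128, 1e-3, 1.0 / 128, 0.01
correction = torch.nn.Sequential(          # learns what the coarse solver misses
    torch.nn.Conv1d(1, 16, 3, padding=1), torch.nn.Tanh(),
    torch.nn.Conv1d(16, 1, 3, padding=1),
)

def physics_step(u):
    """Explicit finite-difference step of u_t = nu * u_xx (periodic boundary)."""
    lap = (torch.roll(u, 1, dims=-1) - 2 * u + torch.roll(u, -1, dims=-1)) / dx**2
    return u + dt * nu * lap

def hybrid_step(u):
    u_phys = physics_step(u)
    return u_phys + dt * correction(u_phys.unsqueeze(1)).squeeze(1)

# Roll the hybrid model out from a smooth initial condition.
x = torch.linspace(0, 1, n)
u = torch.sin(2 * torch.pi * x).unsqueeze(0)    # (batch=1, n)
for _ in range(100):
    u = hybrid_step(u)
# In training, `correction` would be fit so that rollouts match reference data.
```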
Related Resources
2023
CAD-LLM: Large Language Model for CAD Generation
This research presents generating Computer Aided Designs (CAD) using…
2024
Elicitron: An LLM Agent-Based Simulation Framework for Design Requirements Elicitation
A novel framework that leverages Large Language Models (LLMs) to…
2024
DesignQA: A Multimodal Benchmark for Evaluating Large Language Models’ Understanding of Engineering Documentation
Novel benchmark aimed at evaluating the proficiency of multimodal…
2024
Reduced-order modeling of unsteady fluid flow using neural network ensembles
A framework to enhance the accuracy of time-series predictions in…
Get in touch
Something pique your interest? Get in touch if you’d like to learn more about Autodesk Research, our projects, people, and potential collaboration opportunities.