Fast and scalable computational methods for learning and optimization under uncertainty.
Published by: Cao Siyuan   Date posted: 2021-12-12

*Time: December 14, 2021, 10:00-11:00



*Host: Prof. Shengfeng Zhu


In this talk, I will present some recent work on fast and scalable computational methods for surrogate modeling, statistical learning, experimental design, and stochastic optimization under uncertainty. Such problems pose tremendous computational challenges when (1) the model (e.g., a large-scale PDE) is expensive to solve, and (2) the design/control/uncertain variables are high-dimensional. We tackle these challenges by exploiting data- and system-informed properties, including smoothness, intrinsic low-dimensionality, and low-rankness. I will briefly present a few methods, including model reduction, randomized tensor decomposition, derivative-informed deep learning, projected variational inference, and functional Taylor approximations. Time permitting, I will also briefly touch on some applications, including groundwater management, directed self-assembly for semiconductor manufacturing, design of acoustic and electromagnetic metamaterials, gravitational wave detection from black hole collisions, turbulent combustion, stellarator design for plasma fusion, and COVID-19.
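To give a flavor of how low-rankness can be exploited computationally, here is a minimal, hedged sketch (not the speaker's code) of a randomized low-rank SVD in the spirit of the randomized decompositions mentioned in the abstract. All names and parameters (`randomized_svd`, `rank`, `n_oversample`) are illustrative assumptions, and the synthetic matrix merely stands in for an expensive model operator.

```python
# Illustrative sketch only: a randomized range finder followed by a small
# dense SVD, one standard ingredient behind randomized matrix/tensor
# decompositions for operators with rapidly decaying spectra.
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    """Approximate rank-`rank` truncated SVD of A via Gaussian sketching."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Probe the column space of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)  # approximate orthonormal basis for range(A)
    # Project A onto the low-dimensional subspace; SVD the small matrix.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

if __name__ == "__main__":
    # Synthetic exactly-rank-20 operator standing in for a PDE-based map.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))
    U, s, Vt = randomized_svd(A, rank=20)
    err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
    print(f"relative error: {err:.2e}")
```

The point of the sketch: the expensive operator is touched only through a handful of matrix-vector products (here `A @ Omega`), which is what makes such methods scalable when each product requires a PDE solve.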


Dr. Chen obtained his Bachelor's degree in Mathematics and Applied Mathematics from Xi'an Jiaotong University, China, in 2009. He then moved to Switzerland and obtained his PhD in Computational Mathematics from EPFL in 2014. After spending a year as a postdoc and lecturer at ETH Zurich, he moved to the United States and joined the Oden Institute at UT Austin as a long-term researcher in 2015. His research interests include inverse problems, stochastic optimization, uncertainty quantification, and machine learning.