Research

Our research brings advanced learning systems to scientific computing. Through AI4Science/Engineering approaches, we develop frameworks that accelerate the exploration of complex physical systems and the optimization of engineering solutions.

Research Interests

The convergence of scientific computing with advanced learning systems is changing how scientific discovery and engineering design are done, and our research sits at this intersection, developing AI-driven frameworks that accelerate the exploration of complex physical systems and the optimization of engineering solutions. At the heart of our work lies a fundamental challenge: modern scientific problems require understanding and control across multiple scales, from quantum phenomena to system-level behaviors, and traditional approaches are increasingly insufficient to handle this complexity. We believe the path forward lies in methodologies such as physics-informed neural networks (PINNs), scientific machine learning (SciML), and multi-fidelity surrogate modeling.

Multi-fidelity Learning for Semiconductor Innovation

A central thread of this work is intelligent surrogate modeling, in which machine learning models stand in for computationally intensive simulations. These frameworks have yielded measurable improvements in semiconductor device performance: our physics-informed machine learning approach reduced quantum tunnel junction specific resistance from 0.14 to 0.02 Ω·cm² through multi-scale optimization, and our multi-physics simulation workflows, which combine polarization calculations, band engineering, and carrier transport modeling, delivered a 5.8% enhancement in 2DEG density.
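The core idea behind these surrogates can be illustrated with a minimal two-fidelity sketch: a cheap low-fidelity model is corrected by a Gaussian-process model of its discrepancy against a handful of expensive high-fidelity evaluations. The toy functions and parameters below are placeholders, not our device simulators.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy stand-ins for simulators of different cost/accuracy (illustrative only).
def low_fidelity(x):             # cheap, biased approximation
    return np.sin(8.0 * x) * x + 0.3

def high_fidelity(x):            # expensive, accurate reference
    return np.sin(8.0 * x) * x + 0.1 * x**2

# Many cheap samples, only a few expensive ones.
x_lo = np.linspace(0.0, 1.0, 50)[:, None]
x_hi = np.linspace(0.0, 1.0, 6)[:, None]

# Model the discrepancy (high - low) at the few high-fidelity points.
delta = (high_fidelity(x_hi) - low_fidelity(x_hi)).ravel()
kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
gp_delta = GaussianProcessRegressor(kernel=kernel, alpha=1e-6).fit(x_hi, delta)

# Multi-fidelity prediction: cheap model + learned correction, with uncertainty.
x_query = np.linspace(0.0, 1.0, 200)[:, None]
correction, sigma = gp_delta.predict(x_query, return_std=True)
y_pred = low_fidelity(x_query).ravel() + correction
```

The same pattern carries over to device optimization: the low-fidelity model can be an analytic or coarse-mesh calculation, and the high-fidelity evaluations come from the full simulation stack.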

Multi-modal Scientific Discovery

We also study how AI systems can comprehend and integrate different types of scientific data. Our LLAVA-DPO framework advances visual-language scientific understanding by addressing a fundamental challenge: preserving a model's existing capabilities while incorporating new modalities. The framework decouples visual instruction tuning from language capabilities, improving benchmark performance while maintaining strength in both domains.
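The preference-optimization component can be illustrated with the standard Direct Preference Optimization (DPO) objective; the sketch below is a generic PyTorch implementation of that loss, not the LLAVA-DPO training code, and all tensor names are placeholders.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Each argument is a tensor of per-example sequence log-probabilities
    log p(response | prompt) under the trainable policy or the frozen
    reference model, for the preferred ("chosen") and dispreferred
    ("rejected") responses.
    """
    # Log-ratio of policy to reference for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Reward the margin between chosen and rejected log-ratios.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```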

Interactive Learning Systems for Scientific Discovery

Integrating human domain expertise with AI systems is especially challenging in scientific fields where tacit knowledge is essential yet difficult to formalize. We are developing interactive frameworks that capture and learn from expert preferences and domain insights, and that expose interpretable decision pathways, supported by uncertainty quantification, so experts can follow the system's reasoning.
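As a minimal illustration of learning from expert judgments (not our deployed system), the sketch below fits a Bradley-Terry style utility model to hypothetical pairwise comparisons between candidate designs, then ranks the candidates by the learned utility.

```python
import torch
import torch.nn.functional as F

# Hypothetical candidate designs described by feature vectors, and expert
# pairwise judgments as (index preferred, index not preferred).
features = torch.randn(20, 5)
pairs = [(3, 7), (3, 12), (7, 12), (1, 15)]
winners = torch.tensor([i for i, _ in pairs])
losers = torch.tensor([j for _, j in pairs])

# Linear utility u(x) = w.x with a Bradley-Terry likelihood on the pairs.
w = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)

for step in range(200):
    opt.zero_grad()
    utilities = features @ w
    # P(winner preferred over loser) = sigmoid(u_winner - u_loser)
    loss = -F.logsigmoid(utilities[winners] - utilities[losers]).mean()
    loss.backward()
    opt.step()

# Candidates ranked by the learned expert-preference model.
ranking = torch.argsort(features @ w, descending=True)
```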

Current Projects

Active Learning for Semiconductor Design

Developing active learning frameworks that couple machine learning surrogates with TCAD simulations for efficient semiconductor device optimization.
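A minimal sketch of the active-learning loop, assuming a hypothetical run_tcad() placeholder in place of a real TCAD call: a Gaussian-process surrogate is refit after each simulation and the next design is chosen where the surrogate is most uncertain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_tcad(params):
    """Placeholder for an expensive TCAD simulation (hypothetical)."""
    return float(np.sin(5.0 * params[0]) + params[1] ** 2)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(500, 2))        # candidate designs
X = candidates[rng.choice(500, size=5, replace=False)]   # initial simulations
y = np.array([run_tcad(x) for x in X])

for _ in range(20):                                       # simulation budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]                   # most uncertain design
    X = np.vstack([X, x_next])
    y = np.append(y, run_tcad(x_next))
```

In practice the pure-uncertainty acquisition shown here would be swapped for an objective-aware criterion such as expected improvement.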

Physics-informed ML for Quantum Devices

Creating physics-informed machine learning approaches for quantum device design with multi-scale optimization capabilities.
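For illustration only (not the project code), the sketch below trains a small network to satisfy a 1D Poisson equation d²φ/dx² = −ρ/ε with zero boundary conditions, using automatic differentiation to penalize the PDE residual; the constants and network size are arbitrary.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
rho_over_eps = 1.0   # illustrative charge density / permittivity ratio

for step in range(2000):
    opt.zero_grad()
    x = torch.rand(128, 1, requires_grad=True)           # collocation points
    phi = net(x)
    dphi = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi.sum(), x, create_graph=True)[0]
    pde_residual = d2phi + rho_over_eps                   # d2phi/dx2 = -rho/eps
    bc = torch.tensor([[0.0], [1.0]])
    bc_loss = (net(bc) ** 2).mean()                       # phi(0) = phi(1) = 0
    loss = (pde_residual ** 2).mean() + bc_loss
    loss.backward()
    opt.step()
```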

LLAVA-DPO Framework

A novel visual-language framework for scientific understanding that maintains strong performance in both visual and language tasks.

ALIGN4DR System

Integration of graph neural networks with language models for enhanced scientific discovery and drug recommendation.
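The fusion idea can be sketched in a few lines: a graph-level embedding from a simple message-passing encoder is concatenated with a text embedding produced by a language model and passed through a scoring head. This is an illustrative toy, not the ALIGN4DR architecture; all tensors and dimensions are placeholders.

```python
import torch

class SimpleGraphEncoder(torch.nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hidden_dim)

    def forward(self, node_feats, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        messages = adj @ node_feats / deg    # mean over neighbors
        h = torch.relu(self.lin(messages))
        return h.mean(dim=0)                 # graph-level embedding

# Hypothetical inputs: a molecular graph and a text embedding of, e.g.,
# a clinical note produced by a language model (placeholder tensors).
node_feats = torch.randn(12, 16)             # 12 atoms, 16 features each
adj = (torch.rand(12, 12) > 0.7).float()
text_emb = torch.randn(384)

graph_emb = SimpleGraphEncoder(16, 64)(node_feats, adj)
fusion = torch.nn.Linear(64 + 384, 1)        # joint recommendation score
score = fusion(torch.cat([graph_emb, text_emb]))
```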

Multi-modal Integration for 2D Materials

Novel approaches for integrating diverse data modalities throughout the 2D material device development pipeline.

Future Directions

  • Multi-agent systems for heterogeneous data integration
  • Interactive knowledge engineering for scientific exploration
  • Uncertainty quantification in AI-suggested designs