Research Areas

Multimodal Foundation Models

We develop versatile foundation models for both unimodal and multimodal data. Our research aims to create adaptable models capable of processing and interpreting information across multiple data types (e.g., text, images), delivering comprehensive, scalable solutions applicable across diverse fields.

  • Vision-Language Models (VLMs)
  • Multimodal Integration
  • Adversarial Robustness
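As a minimal sketch of how vision-language alignment is often formulated, the snippet below computes a CLIP-style temperature-scaled cosine-similarity matrix between image and text embeddings; the embeddings, dimensions, and temperature value are illustrative assumptions, not our actual models.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale each row to unit length so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_logits(image_emb, text_emb, temperature=0.07):
    # Cosine-similarity matrix between every image and every caption,
    # scaled by a temperature as in contrastive vision-language training.
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    return img @ txt.T / temperature

# Toy data: 4 images, 4 near-matching captions in an 8-dim embedding space.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(4, 8))
text_emb = image_emb + 0.01 * rng.normal(size=(4, 8))
logits = clip_style_logits(image_emb, text_emb)
# Each image's highest-scoring caption should be its own (diagonal wins).
assert (logits.argmax(axis=1) == np.arange(4)).all()
```

In contrastive training, the rows of this matrix are treated as classification logits whose correct class is the matching caption.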
Graph Neural Networks (GNNs)

We advance the field of GNNs by developing efficient learning algorithms for both static and dynamic graphs. Our research addresses long-range propagation to capture dependencies across distant nodes, enabling deeper insights into complex graph structures. We focus on heterogeneous graphs that contain diverse types of nodes and edges, enhancing the modeling of multifaceted relationships. Additionally, we explore graph pre-training and foundational graph learning to create versatile GNN models that can be adapted across various tasks and domains.

  • Theoretical Foundations
  • Dynamic Graphs
  • Graph Embeddings
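To make the message-passing idea behind GNNs concrete, here is a minimal sketch of one mean-aggregation layer on a toy graph; the graph, features, and weights are illustrative assumptions, not a description of our methods.

```python
import numpy as np

def gnn_mean_layer(adj, features, weight):
    # One message-passing layer: average each node's neighbours
    # (including itself, via a self-loop), then apply a linear map + ReLU.
    adj_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)    # node degrees
    messages = (adj_hat / deg) @ features       # mean aggregation
    return np.maximum(messages @ weight, 0.0)   # ReLU activation

# Tiny 4-node path graph: 0 - 1 - 2 - 3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.eye(4)       # one-hot node features
weight = np.ones((4, 2))   # toy weight matrix
h = gnn_mean_layer(adj, features, weight)
```

Stacking such layers widens each node's receptive field by one hop per layer, which is exactly why capturing long-range dependencies requires either deep stacks or dedicated long-range propagation schemes.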
Resource-Efficient Learning

We develop techniques to maximize learning outcomes from partial and imperfect data in settings such as semi-supervised, weakly supervised, and noisy-label learning. We also design computationally efficient model architectures, emphasizing techniques like quantization and Neural Architecture Search (NAS) to streamline performance without compromising accuracy.

  • Model Compression
  • Reduced Supervision Methods
  • Practical Applications
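As a small illustration of the model-compression direction, the sketch below applies symmetric per-tensor int8 post-training quantization to a weight matrix and checks the reconstruction error; the tensor shape and scale convention are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric post-training quantization: map floats to int8 with a
    # single per-tensor scale chosen so the largest magnitude maps to 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-8
```

Storing `q` instead of `w` cuts memory by 4x relative to float32, at the cost of this bounded rounding error; finer-grained (per-channel) scales tighten the bound further.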