Resource-efficient learning is a key research area at the INSIGHT Lab, focusing on developing techniques that significantly reduce the computational and memory requirements of deep neural networks.
Our researchers explore model compression methods such as quantization, pruning, knowledge distillation, and low-rank approximations combined with efficient fine-tuning strategies. The aim is to enable the deployment of powerful, lightweight, and energy-efficient AI models suitable for resource-constrained environments.
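As a concrete illustration of two of the compression techniques named above, the following sketch applies magnitude pruning and symmetric int8 quantization to a toy weight matrix. This is a minimal, hypothetical example for exposition only, not the lab's implementation; the sparsity level, quantization scheme, and helper names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)  # toy layer weights

# Magnitude pruning: zero out the smallest-magnitude weights.
def prune(weights, sparsity=0.5):
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Symmetric int8 quantization: map floats onto 8-bit integers
# via a single per-tensor scale factor.
def quantize(weights):
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

W_pruned = prune(W, sparsity=0.5)
q, scale = quantize(W_pruned)
W_restored = q.astype(np.float32) * scale  # dequantize for inference

print("sparsity:", float(np.mean(W_pruned == 0)))
print("max quantization error:", float(np.abs(W_pruned - W_restored).max()))
```

Pruning shrinks the model by storing only nonzero weights, while quantization cuts memory and arithmetic cost by a factor of four relative to float32; in practice the two are often combined with fine-tuning to recover accuracy.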
We emphasize reduced-supervision methods, including weakly supervised, semi-supervised, and self-supervised learning, as well as learning with noisy labels. Our researchers develop models that effectively leverage limited or imperfect supervision for reliable performance across a range of tasks.
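One simple form of semi-supervised learning is pseudo-labeling: fit a model on the small labeled set, then add its confident predictions on unlabeled data as extra training labels. The sketch below is a hedged illustration under assumed toy data; the nearest-centroid classifier and the distance-based confidence threshold are placeholders chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two well-separated 2-D clusters; a few labeled points,
# many unlabeled ones.
X_lab = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(5, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

def centroids(X, y):
    # One centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(C, X):
    # Assign each point to its nearest centroid; return labels and distances.
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

C = centroids(X_lab, y_lab)              # step 1: fit on labeled data
pseudo, dist = predict(C, X_unl)         # step 2: pseudo-label unlabeled data
confident = dist < 2.0                   # step 3: keep confident labels only
X_aug = np.vstack([X_lab, X_unl[confident]])
y_aug = np.concatenate([y_lab, pseudo[confident]])
C = centroids(X_aug, y_aug)              # step 4: refit on the augmented set
```

The confidence filter is the key design choice: accepting every pseudo-label would let early mistakes reinforce themselves, which is also why this family of methods connects naturally to learning with noisy labels.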
By integrating domain-specific insights with advanced learning techniques, we create robust, adaptable, and practical solutions for real-world scenarios. Our resource-efficient models find applications ranging from mobile devices to medical imaging, where lightweight inference improves practicality and scalability.