Lab overview
We study the forms and intelligence of emerging mobile, IoT, and wearable devices. We are interested in the system and algorithmic challenges of building multi-sensory device platforms that learn, infer, and augment human behavior.
In a world beyond screens, batteries, and traditional form factors, we are on a mission to build devices that feel like natural extensions of ourselves, seamlessly attuned to our physical, mental, and cognitive states. Such a profound shift demands reinventing how device software is designed, developed, deployed, and maintained. Our research sits at this intersection, questioning long-held assumptions and exploring uncharted territory across four critical areas in our pursuit of future device technologies.
Our people
We are a tight-knit, multidisciplinary team of dreamers, thinkers, and creators. Together, we tackle the pivotal challenges of tomorrow's device technologies, aiming to reshape how we interact with the world, one innovation at a time.
Publications
- Adiba Orzikulova, Diana A. Vasile, Chi Ian Tang, Fahim Kawsar, Sung-Ju Lee, Chulhong Min. "BioQ: Towards Context-Aware Multi-Device Collaboration with Bio-cues." To appear in the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys), 2025.
- Arvind Pillai, Dimitris Spathis, Fahim Kawsar, Mohammad Malekzadeh. "PaPaGei: Open Foundation Models for Optical Physiological Signals." To appear in the International Conference on Learning Representations (ICLR), 2025.
- Sujin Han, Diana A. Vasile, Fahim Kawsar, Chulhong Min. "SecuWear: Secure Data Sharing Between Wearable Devices." To appear in the Workshop on Security and Privacy in Standardized IoT (SDIoTSec), 2025.
- Ryuhaerang Choi, Soumyajit Chatterjee, Dimitris Spathis, Sung-Ju Lee, Fahim Kawsar, Mohammad Malekzadeh. "SoundCollage: Automated Discovery of New Classes in Audio Datasets." To appear in the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025.
- Arnav M. Das, Chi Ian Tang, Fahim Kawsar, Mohammad Malekzadeh. "PRIMUS: Pretraining IMU Encoders with Multimodal Self-Supervision." To appear in the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025.
- Harshvardhan C. Takawale, Yang Liu, Khaldoon Al-Naimi, Fahim Kawsar, Alessandro Montanari. "Towards Detecting Auditory Attention from in-Ear Muscle Contractions using Commodity Earbuds." To appear in the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025.
- Jiatao Quan, Khaldoon Al-Naimi, Xijia Wei, Yang Liu, Fahim Kawsar, Alessandro Montanari, Ting Dang. "Cognitive Load Monitoring via Earable Acoustic Sensing." To appear in the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025.
- C. Rajashekar Reddy, Vivian Dsouza, Ashok Samraj Thangarajan, Przemysław Pawełczak, Fahim Kawsar, Alessandro Montanari. "BioPulse: Towards Enabling Perpetual Vital Signs Monitoring using a Body Patch." To appear in the International Workshop on Mobile Computing Systems and Applications (HotMobile), 2025.
- Changmin Jeon, Taesik Gong, Juheon Yi, Fahim Kawsar, Chulhong Min. "Boosting Multi-DNN Inference on Tiny AI Accelerators with Weight Memory Virtualization." To appear in the International Workshop on Mobile Computing Systems and Applications (HotMobile), 2025.