My research interests lie at the intersection of pervasive systems and machine learning, with a focus on building pervasive gaze estimation systems and gaze-based human-computer interaction systems.
A resource-efficient gaze estimation framework that significantly reduces reliance on labeled data and improves the computational efficiency of inference and calibration.
The first work to investigate the backdoor vulnerability of gaze estimation models and to propose a framework for identifying backdoored gaze estimation models.
An approach that transforms raw images into obfuscated images that conceal users' private attributes while still supporting effective gaze estimation by black-box models.
An approach that achieves performance comparable to the supervised baseline while enabling up to 6.81x and 1.67x speedups in calibration and inference, respectively.
An eye-tracking dataset collected in VR that combines high-frame-rate periocular videos with high-frequency gaze data to enable accurate, multimodal emotion recognition.