Hemant Surale, PhD

Reimagining Keyboards at TickerKey


Hemant is an HCI+AI researcher who builds and evaluates human-centered systems for mixed reality, wearable sensing, and multimodal input. His work sits at the intersection of machine learning, perceptual modeling, and human factors, usually involving things people wear on their wrists, hands, or face.

He has held senior research roles at Meta, Huawei, Snap, Microsoft Research, Google/North, and NetApp, working on problems like real-time expertise tracking with wearable sensors, gaze-based interaction, haptic feedback for XR productivity, and multi-device fluid interfaces. Earlier in his career, he shipped database features at Oracle and optimized distributed storage systems. Over the years, his work has touched database internals, file systems, hardware prototyping, wearable devices, high-performance systems, and more recently, large language models.

He earned his Ph.D. in Computer Science from the University of Waterloo, where his thesis focused on barehand mode-switching in touch and mid-air interfaces. He was advised by Daniel Vogel, Mark Hancock, and Edward Lank. He received the David R. Cheriton Graduate Scholarship three times, two Best Paper Honorable Mentions at CHI and ISS, a CS Achievement Award, and a Snap Research Fellowship (one of 11 worldwide). He has published extensively at CHI, UIST, and ISMAR, and holds multiple patents on input techniques.

news

Aug 5, 2025 Happy to announce that Ruei-Che’s paper on exploring visual-audio modality transitions for mobile contexts has been accepted at UIST’25!! :tada:
Jan 18, 2025 Happy to announce that Shwetha’s paper on multimodal voice and gesture interaction techniques has been conditionally accepted at CHI’25!! :tada:
May 28, 2024 Two of our intern papers on text entry were conditionally accepted at ISMAR’24. :sparkles:
Jan 26, 2024 Our intern Junxiao (Shawn) is off to start an assistant professor position at the University of Bristol, UK. Congrats!!

selected publications

  1. Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-training
    Junxiao Shen, Khadija Khaldi, Enmin Zhou, and 2 more authors
    In IEEE Transactions on Visualization and Computer Graphics, 2024
  2. STAR: Smartphone-analogous Typing in Augmented Reality
    Taejun Kim, Amy Karlson, Aakar Gupta, and 6 more authors
    In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 2023