I received my M.S. degree in Mechanical Engineering from Stanford University in 2025 and my B.Eng. degree in Robotics Engineering from Beijing University of Chemical Technology (BUCT) in 2023.
I was a research assistant at The Chinese University of Hong Kong (CUHK) from 2022 to 2024, working with
Prof. Jiewen Lai and
Prof. Hongliang Ren.
I care deeply about building a welcoming and inclusive research community. If you'd like to chat about research, career paths, or anything else, feel free to reach out—especially if you're from an underrepresented group in STEM. I'm always happy to connect and help where I can.
[Dec 2025] "Gravity-Aware Proactive Joint-Level Compensation for Portable Soft Slender Robots Using A Single IMU and Real-Time Simulation" has been accepted by IJRR.
I am interested in medical robotics, tactile sensing, soft robotics, and sim-to-real applications. Most of my research focuses on sensing-driven and learning-based approaches for safe interaction and navigation in medical and clinical environments.
TL;DR: Combining vision-based tactile imaging with force–torque sensing enables robots to reliably detect subsurface tendon features during physiotherapy palpation, where force signals alone are often ambiguous, while still maintaining safe and controlled contact.
TL;DR: Transferring a redundant soft robot's vision-based navigation strategy, learned in a SOFA-based simulated world, to the real world.
Twistable Soft Continuum Robots
Jiewen Lai*, Yanjun Liu*, Tian-Ao Ren, Yan Ma, Tao Zhang, Jeremy Teoh, Mark R. Cutkosky, Hongliang Ren
In second-round review at Nature Communications, 2025
TL;DR: Combining vision-based tactile imaging with force–torque sensing enables robots to reliably detect subsurface tendon features during physiotherapy palpation, where force signals alone are often ambiguous, while still maintaining safe and controlled contact.
TL;DR: Gecko-inspired dry adhesives enable gentle robotic grasping in extreme cold; although adhesion fails below ~−60 °C, it can be reliably restored at much lower temperatures using brief local heating and appropriate preload.
TL;DR: A force-informed deep reinforcement learning strategy enables flexible robotic endoscopes to exploit contact with deformable stomach walls for robust, high-precision navigation in dynamic environments, significantly outperforming contact-agnostic policies and generalizing to unseen disturbances.
TL;DR: A dual stereo vision system with geometry-guided point-cloud relocation enables accurate 3D morphological reconstruction of millimeter-scale soft continuum robots, recovering fine notch-level details despite low-resolution depth sensing.
TL;DR: A model-free deep reinforcement learning controller enables tendon-driven flexible endoscopes to autonomously navigate in both free space and contact-rich environments, achieving over 90% success within clinical accuracy by retraining policies learned in free space for contact scenarios.
TL;DR: A domain-adaptive Sim-to-Real framework combining IoU-guided image blending and style transfer enables accurate and stable segmentation of oropharyngeal organs from synthetic data, significantly improving real-world performance for robotic intubation despite limited real images.
TL;DR: Transferring a redundant soft robot's vision-based navigation strategy, learned in a SOFA-based simulated world, to the real world.
Academic Services
Reviewer for IROS 2024; ICRA 2024/2025/2026; IEEE Transactions on Medical Robotics and Bionics (T-MRB); IEEE Transactions on Industrial Informatics (T-II)