Hongjie (Tony) Fang 方泓杰

I am a second-year Ph.D. student @ Computer Science at Shanghai Jiao Tong University (SJTU) & Shanghai Artificial Intelligence Laboratory, advised by Prof. Cewu Lu. Previously, I received my B.Eng. degree @ Computer Science and Engineering and my B.Ec. degree @ Finance from SJTU in 2022.
The pronunciation of my Chinese name is [hɒndʒɪə fɑ:n].

Email  /  GScholar  /  Github  /  Twitter  /  WeChat

profile photo
News

[Jan. 2024] Four papers (AirExo, RH20T, Open X-Embodiment and AnyGrasp) are accepted by ICRA 2024. See you in Japan!
[Nov. 2023] AirExo is accepted by CoRL 2023 @ TGR as an oral presentation. I will give a talk about AirExo at the workshop.
[Oct. 2023] Open X-Embodiment is released! Proud of this wonderful collaboration with so many great robotics researchers!
[Sept. 2023] AirExo is released! Check our website for more details.
[Jun. 2023] One paper is accepted by IROS 2023.
[Jun. 2023] Our paper RH20T is accepted by the RSS 2023 Learning for Task and Motion Planning Workshop.
[Apr. 2023] Our paper AnyGrasp is accepted by T-RO.
[Feb. 2023] Our paper "Target-referenced Reactive Grasping for Dynamic Objects" is accepted by CVPR 2023.
[Feb. 2023] Our paper TransCG is accepted to ICRA 2023 as an RA-L submission. See you in London!
[Dec. 2022] The preprint version of our paper AnyGrasp is released. For more details, see the AnyGrasp official page.
[Jun. 2022] Our paper TransCG is accepted by RA-L. For more details, see the TransCG official page.
[Aug. 2021] Our paper Graspness is accepted by ICCV 2021.

Research

My research interests mainly lie in robotics, specifically robotic manipulation and grasping. I am also interested in computer vision. I am currently a member of the GraspNet project in the SJTU Machine Vision and Intelligence Group (SJTU-MVIG). Representative papers are highlighted. * denotes equal contribution.

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Open X-Embodiment Collaboration, [...], Hongjie Fang, [...] (173 authors)
ICRA, 2024  
paper / project page / dataset / bibtex

Introduce the Open X-Embodiment Dataset, the largest robot learning dataset to date, with 1M+ real robot trajectories spanning 22 robot embodiments. Train large transformer-based policies (RT-1-X, RT-2-X) on the dataset and show that co-training with our diverse data substantially improves performance.

AirExo: Low-Cost Exoskeletons for Learning Whole-Arm Manipulation in the Wild
Hongjie Fang*, Hao-Shu Fang*, Yiming Wang*, Jieji Ren, Jingjing Chen, Ruo Zhang, Weiming Wang, Cewu Lu
ICRA, 2024
paper / project page / bibtex

Develop AirExo, a low-cost, adaptable, and portable dual-arm exoskeleton, for joint-level teleoperation and demonstration collection. Further leverage AirExo for learning with cheap demonstrations in the wild to improve sample efficiency and robustness of the policy.

RH20T: A Robotic Dataset for Learning Diverse Skills in One-Shot
Hao-Shu Fang, Hongjie Fang, Zhenyu Tang, Jirong Liu, Chenxi Wang, Junbo Wang, Haoyi Zhu, Cewu Lu
ICRA, 2024
paper / API / project page / bibtex

Collect a real-world dataset comprising over 110k contact-rich robot manipulation sequences across diverse skills, contexts, robots, and camera viewpoints. Each sequence in the dataset includes visual, force, audio, and action information, along with a corresponding human demonstration video. Put significant effort into calibrating all the sensors to ensure a high-quality dataset.

Flexible Handover with Real-Time Robust Dynamic Grasp Trajectory Generation
Gu Zhang, Hao-Shu Fang, Hongjie Fang, Cewu Lu
IROS, 2023  
paper / bibtex

Propose an approach for effective and robust flexible handover, which enables the robot to grasp moving objects along flexible motion trajectories with a high success rate. The key innovation of our approach is the generation of real-time robust grasp trajectories. Design a future grasp prediction algorithm to enhance the system's adaptability to dynamic handover scenes.

Target-Referenced Reactive Grasping for Dynamic Objects
Jirong Liu, Ruo Zhang, Hao-Shu Fang, Minghao Gou, Hongjie Fang, Chenxi Wang, Sheng Xu, Hengxu Yan, Cewu Lu
CVPR, 2023  
paper / code / project page / bibtex

Focus on semantic consistency instead of temporal smoothness of the predicted grasp poses during reactive grasping. Solve the reactive grasping problem in a target-referenced setting by tracking through generated grasp spaces.

AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains
Hao-Shu Fang, Chenxi Wang, Hongjie Fang, Minghao Gou, Jirong Liu, Hengxu Yan, Wenhai Liu, Yichen Xie, Cewu Lu
T-RO, 2023  
ICRA, 2024  
paper / SDK / project page / bibtex

Propose AnyGrasp, a powerful model for general grasping in both static and dynamic scenes. AnyGrasp efficiently generates accurate, full-DoF, dense, and temporally smooth grasp poses, and works robustly against large depth-sensing noise.

TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline
Hongjie Fang, Hao-Shu Fang, Sheng Xu, Cewu Lu
RA-L, 2022  
ICRA, 2023  
paper / code / project page / bibtex

Propose TransCG, a large-scale real-world dataset for transparent object depth completion, along with a depth completion method DFNet based on the TransCG dataset.

Graspness Discovery in Clutters for Fast and Accurate Grasp Detection
Chenxi Wang, Hao-Shu Fang, Minghao Gou, Hongjie Fang, Jin Gao, Cewu Lu
ICCV, 2021  
paper / bibtex

Propose graspness, a quality based on geometric cues that distinguishes graspable areas in cluttered scenes and can be measured by a look-ahead searching method. Propose a graspness model to approximate the graspness value for quick grasp detection in practice.

Selected Projects
EasyRobot
Hongjie Fang
research project, under active development
code

Provides an easy and unified interface for robots, grippers, sensors and pedals.

Oh-My-Papers
Hongjie Fang, Zhanda Zhu, Haoran Zhao
course project for the SJTU undergraduate course "Mobile Internet"
code / demo / report

Proposes that jargon terms like "ResNet" and "YOLO" can be learned from academic paper citation information, and that such citation information can be regarded as the search results for the corresponding term. For example, when searching for "ResNet", the engine should return "Deep Residual Learning for Image Recognition" rather than papers that merely contain the word "ResNet" in their titles, as current scholarly search engines commonly do.

Services

Reviewer: ICRA 2023/2024, IROS 2023.

More about Me

Some of My Notes ---> Notes


The website is built upon this template.