Click to Grasp: Zero-Shot Precise Manipulation via Visual Diffusion Descriptors

Nikolaos Tsagkas1,2, Jack Rome1, Subramanian Ramamoorthy1,2, Oisin Mac Aodha1, Chris Xiaoxuan Lu3

1University of Edinburgh, 2Edinburgh Centre for Robotics, 3UCL

Website | Paper


The code will be available in late summer 2024.

(Video: click2grasp_method.mp4)

Citation

If you find this repo useful for your research, please consider citing the paper. Also consider citing the following works that made Click to Grasp possible: D3-Fields, F3RM, Tale-of-Two-Features, Diffusion-Hyperfeatures.

@article{tsagkas2024click,
    title={Click to Grasp: Zero-Shot Precise Manipulation via Visual Diffusion Descriptors},
    author={Tsagkas, Nikolaos and Rome, Jack and Ramamoorthy, Subramanian and Mac Aodha, Oisin and Lu, Chris Xiaoxuan},
    journal={arXiv preprint arXiv:2403.14526},
    year={2024}
}

About

Click to Grasp takes as input calibrated RGB-D images of a tabletop scene and user-defined part instances in diverse source images, and produces gripper poses for interaction, effectively disambiguating between visually similar but semantically different concepts (e.g., left vs. right arms).
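
Since the code has not been released yet, the snippet below is only a rough illustrative sketch of the input/output contract described above; the class and function names (RGBDView, PartClick, infer_gripper_pose) are hypothetical and not part of any published API.

# Hypothetical usage sketch -- not the released Click to Grasp API.
# It only mirrors the I/O described above: calibrated RGB-D views of a
# tabletop plus user-selected part instances in source images go in,
# and a gripper pose for interaction comes out.
from dataclasses import dataclass
import numpy as np

@dataclass
class RGBDView:
    rgb: np.ndarray         # (H, W, 3) colour image
    depth: np.ndarray       # (H, W) depth map in metres
    intrinsics: np.ndarray  # (3, 3) camera intrinsics
    extrinsics: np.ndarray  # (4, 4) camera-to-world transform

@dataclass
class PartClick:
    source_image: np.ndarray  # source image containing the part instance
    pixel: tuple              # (u, v) user-selected pixel on that part

def infer_gripper_pose(views, clicks):
    """Hypothetical entry point: given RGBDView objects of the scene and
    PartClick selections in source images, return a (4, 4) gripper pose."""
    raise NotImplementedError("Placeholder until the official code is released.")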
