- Montreal, Canada
- http://krrish94.github.io
- @_krishna_murthy
Stars
An educational, from-scratch, single-file, Python-only pose-graph optimization implementation
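The core idea behind such an educational pose-graph optimizer can be sketched in miniature. The snippet below (entirely illustrative, not code from the repository) solves a 1D pose graph with two odometry constraints and one loop closure by gradient descent on the sum of squared residuals:

```python
def optimize_pose_graph(num_poses, edges, iters=2000, lr=0.1):
    """Minimize sum of (x[j] - x[i] - meas)^2 over 1D poses.

    edges: list of (i, j, meas) relative-motion constraints.
    Pose 0 is anchored at the origin to fix the gauge freedom.
    """
    x = [0.0] * num_poses
    for _ in range(iters):
        grad = [0.0] * num_poses
        for i, j, meas in edges:
            r = x[j] - x[i] - meas  # residual of this constraint
            grad[j] += r
            grad[i] -= r
        for k in range(1, num_poses):  # keep x[0] fixed
            x[k] -= lr * grad[k]
    return x

# Two odometry steps of 1.0 each, plus a loop closure claiming x2 - x0 = 2.2;
# least squares spreads the 0.2 of disagreement across all three constraints.
poses = optimize_pose_graph(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.2)])
# poses ≈ [0.0, 1.0667, 2.1333]
```

Real pose-graph optimizers work on SE(2)/SE(3) poses and use Gauss-Newton or Levenberg-Marquardt rather than plain gradient descent, but the residual-and-minimize structure is the same.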
[CVPR'24 Highlight & Best Demo Award] Gaussian Splatting SLAM
SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM (CVPR 2024)
Talk2BEV: Language-Enhanced Bird's Eye View Maps (Accepted to ICRA'24)
Official code release for ConceptGraphs
Point-cloud-based graph SLAM written in C++ using the Open3D library.
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
Generative Agents: Interactive Simulacra of Human Behavior
AnyLoc: Universal Visual Place Recognition (RA-L 2023)
[NeurIPS 2023] Weakly Supervised 3D Open-vocabulary Segmentation
MultiScan: Scalable RGBD scanning for 3D environments with articulated objects
A python library for handling poses, transforms and frames for robotics applications
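The kind of functionality such a pose/transform library provides can be illustrated with a tiny SE(2) sketch; the function names and representation here are made up for illustration and are not the library's API:

```python
import math

def compose(a, b):
    """Chain pose b onto pose a; each pose is (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + math.cos(ath) * bx - math.sin(ath) * by,
            ay + math.sin(ath) * bx + math.cos(ath) * by,
            ath + bth)

def invert(p):
    """Inverse pose: rotate the negated translation into the local frame."""
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return (-(c * x + s * y), s * x - c * y, -th)

# Turning 90 degrees in place and then driving 1 m forward ends at (0, 1):
fwd = compose((0.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0))
```

Composing a pose with its inverse recovers the identity pose, which is the usual sanity check for frame-handling code.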
H2-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation (2023 RAL Best Paper Award)
A GPU-accelerated TSDF and ESDF library for robots equipped with RGB-D cameras.
TidyBot: Personalized Robot Assistance with Large Language Models
PyTorch code and models for the DINOv2 self-supervised learning method.
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Lightweight vanilla JavaScript library to compare multiple images with sliders; you can also add text and filters to your images.
A PyTorch implementation of the k-means clustering algorithm
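The algorithm itself fits in a few lines. This plain-Python sketch (independent of the PyTorch repo above) alternates the two steps of Lloyd's algorithm, assignment and centroid update:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign each point to its nearest centroid, then recenter."""
    centroids = [points[i] for i in range(k)]  # naive deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its members.
        for i, members in enumerate(clusters):
            if members:  # leave empty clusters where they are
                centroids[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return centroids, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, 2)
# The two blobs separate; centroids ≈ (0.33, 0.33) and (10.33, 10.33).
```

A GPU implementation like the repo's replaces these loops with batched tensor distance computations, but the logic is identical.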
KinectFusion implemented in Python with PyTorch
[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM
Code for our NeurIPS 2022 paper
A collaboration friendly studio for NeRFs
NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).