How to understand _rle2voxel function #101
Hi, I used the code from this source. I guess the implementation of the RLE format is based on the method described in the initial SSC paper.
Now that I have RGB and depth images captured by a depth camera, along with semantic annotations on those images, how should I train your model on them?
To build a 3D scene from your data, start by integrating the depth frames, similar to the approach used in BundleFusion. Then convert the fused scene into a voxelized format, and assign each voxel a semantic label derived from the corresponding annotated images.
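For the labeling step, a common approach is to project each voxel center into an annotated view with the camera intrinsics and pose, then read off the label at that pixel. A minimal sketch of that idea; the function and parameter names here are my own, not from this repository:

```python
import numpy as np

def label_voxels(centers, label_img, K, pose, ignore=0):
    """Assign each voxel center a semantic label by projecting it
    into a per-pixel label image for one view.

    centers   : (N, 3) voxel centers in world coordinates
    label_img : (H, W) integer semantic labels for that view
    K         : 3x3 pinhole intrinsics
    pose      : 4x4 camera-to-world transform
    ignore    : label for voxels that fall outside the view
    """
    world2cam = np.linalg.inv(pose)
    cam = centers @ world2cam[:3, :3].T + world2cam[:3, 3]
    # guard against points at or behind the camera before dividing
    z = np.where(cam[:, 2] > 0, cam[:, 2], np.inf)
    u = np.round(cam[:, 0] / z * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] / z * K[1, 1] + K[1, 2]).astype(int)
    H, W = label_img.shape
    labels = np.full(len(centers), ignore, dtype=label_img.dtype)
    ok = np.isfinite(z) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels[ok] = label_img[v[ok], u[ok]]
    return labels
```

With several views, you can repeat this per frame and resolve conflicts by majority vote over the labels each voxel receives.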
OK, thank you very much for your answer. I will try the method you mentioned.
I learned a lot from your response. By the way, could you give more detail about converting the scene into a voxelized format?
Hi @xyIsHere, I suggest using this repository: https://github.com/andyzeng/tsdf-fusion-python. It converts a set of depth and RGB images into a voxelized TSDF scene. |
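That repository implements volumetric TSDF fusion in the spirit of KinectFusion. To illustrate the core idea rather than that repository's actual API (all names below are my own), here is a minimal sketch of fusing a single depth frame into a truncated signed distance volume:

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, pose, origin, voxel_size, trunc=0.1):
    """Fuse one depth frame into a TSDF volume (illustrative sketch).

    tsdf, weights : (X, Y, Z) contiguous float arrays
                    (initialize tsdf to 1, weights to 0)
    depth         : (H, W) depth image in meters (0 = no measurement)
    K             : 3x3 camera intrinsics
    pose          : 4x4 camera-to-world transform
    origin        : world coordinates of voxel (0, 0, 0)
    """
    X, Y, Z = tsdf.shape
    # world coordinates of every voxel center
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts = np.stack([ii, jj, kk], -1).reshape(-1, 3) * voxel_size + origin
    # transform into the camera frame and project with the pinhole model
    world2cam = np.linalg.inv(pose)
    cam = pts @ world2cam[:3, :3].T + world2cam[:3, 3]
    z = cam[:, 2]
    zs = np.where(z > 0, z, 1.0)  # avoid dividing by zero for z <= 0
    u = np.round(cam[:, 0] / zs * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] / zs * K[1, 1] + K[1, 2]).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # signed distance along the viewing ray, truncated to [-1, 1]
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    upd = valid & (sdf > -1.0)  # skip voxels far behind the surface
    # running weighted average over frames (views into the input arrays)
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    flat_t[upd] = (flat_t[upd] * flat_w[upd] + sdf[upd]) / (flat_w[upd] + 1)
    flat_w[upd] += 1
```

Calling this once per frame with the frame's camera pose accumulates the full scene; the zero-crossing of the resulting TSDF is the reconstructed surface.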
Hello, I have some misunderstandings about the ground-truth processing of the NYU dataset, specifically this _rle2voxel function:
First, why is rle indexed by odd and even numbers? check_val looks like a target label, but how should I understand check_iter? Does it represent a dimension? And how does it correspond to the class labels on the RGB images?
Hope to receive your reply, thank you.
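For reference, the SSC ground truth stores the voxel labels as a flat run-length-encoded array of alternating pairs: even indices hold a label value and odd indices hold a run count, which is presumably why the function indexes rle by parity (check_val being the current run's label and check_iter its length, i.e. how many consecutive voxels to fill, not a dimension). A minimal sketch of such a decoder, with function and argument names of my own rather than the repository's:

```python
import numpy as np

def rle_to_voxels(rle, voxel_shape=(240, 144, 240)):
    """Decode a run-length encoding into a dense voxel label grid.

    The flat RLE array alternates label values and run counts:
    [v0, n0, v1, n1, ...] means the first n0 voxels (in flattened
    order) carry label v0, the next n1 carry label v1, and so on.
    """
    values = rle[0::2]   # even indices: label values
    counts = rle[1::2]   # odd indices: how many voxels carry that label
    flat = np.repeat(values, counts)
    return flat.reshape(voxel_shape)

# toy example: 2 voxels of label 0, then 3 of label 4, then 1 of label 0
vox = rle_to_voxels(np.array([0, 2, 4, 3, 0, 1]), voxel_shape=(6,))
```

The labels themselves come from the same class set as the 2D annotations (mapped down to the dataset's target classes), so a value in the RLE corresponds directly to a class label on the RGB images.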