libfreenect2 calibration very inaccurate #596
How inaccurate is "very inaccurate"? Could you post some images? The built-in calibration is surely less accurate than a hand calibration; whether that is acceptable depends on how inaccurate it actually is.
Insufficient information.
It was enough of a problem that we switched back to the PrimeSense sensors we have. I'll ask if @cpaxton, who took screenshots, can put some of them up.
Here is a video comparison between the Kinect v2 and the Kinect v1: youtube. Here is a video of the depth data we are getting from the Kinect v2: youtube. Note in the first video that large objects like the Bosch cases come out just fine. This is only an issue for us because we are attempting to autonomously manipulate relatively small objects (~2 cm across). I am not entirely convinced this is a calibration issue (the depth-to-color registration looks fine to me), but I am open to ideas.
I can't spot what is wrong in the two videos. What is the object in the second video?
The objects are all straight magnetic linking blocks, as seen here, with meshes available here. The problem is that we are trying to get an accurate pose estimate for relatively small objects, which means that noisy depth data like this is intolerable. You can see the objects are fairly noisy and in some cases badly deformed.
So the problem now is deformed depth from a straight surface? The issue previously reported was a mismatch in the registration between color and depth. I suspect this has to do with surface reflectance or multipath interference (example: #319). Could you wrap the magnetic block with paper and see if that improves it? If not, it may be multipath interference.
Sorry if there was any confusion. This issue was probably misreported because this is the only problem we are currently having with the Kinect2. |
Also note that the posted videos are with the iai calibration. Without it, we get background colors assigned to depth points that belong to the objects.
Yeah, unfortunately those videos are for a separate problem; sorry about the confusion. We don't have any videos of uncalibrated data yet, but we plan to record some when we have time.
Regarding the reliability of depth data: I have created fusion scans of a face using two different Kinect v2s, one where I have modified the optics for near mode (built with `#ifdef NEARMODE`), and one where I have replaced the IR lens with a telephoto lens. After manual camera calibration, the mean Hausdorff distance between nearest-neighbor points of the face fusion scans is < 1 mm. I can't comment on color-to-depth registration, but the z-lookup tables seem to be working well, even with very extreme lenses.
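The error metric mentioned above (mean distance between nearest-neighbor points of two scans) can be sketched as follows. This is a generic illustration, not the commenter's actual pipeline; it assumes two roughly aligned point clouds given as N x 3 NumPy arrays in meters:

```python
import numpy as np

def mean_nn_distance(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest
    neighbor in cloud_b (clouds are N x 3 arrays, in meters)."""
    # Brute-force pairwise distance matrix; fine for small scans.
    # For large clouds, a k-d tree (e.g. scipy.spatial.cKDTree)
    # would be the practical choice.
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def mean_symmetric_distance(cloud_a, cloud_b):
    """Symmetrized variant, closer in spirit to a mean Hausdorff metric."""
    return 0.5 * (mean_nn_distance(cloud_a, cloud_b)
                  + mean_nn_distance(cloud_b, cloud_a))
```

Note that a sub-millimeter mean distance between two independent scans, as reported above, is a strong indication that the depth (z) calibration itself is sound.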
Perhaps #144 is also relevant?
The iai calibration is done separately with a chessboard, correct? Then it will very likely be better than the factory calibration, regardless of which software is used to access the Kinect2. To separate the different issues being discussed here, could you try to take static color-depth registered snapshots of the same scene with:
1. libfreenect2 using the factory calibration,
2. the Microsoft SDK, and
3. libfreenect2 with the iai_kinect2 chessboard calibration?
Then it should be easier to tell which one has the best quality (likely 3.) and whether 1. and 2. are any different.
For the record, I recently came across the RoomAlive Toolkit from Microsoft Research, which is used to calibrate multiple Kinects with respect to each other. It performs its own intrinsic calibration, which may help clarify some of the remaining unclear aspects of the factory calibration. Note that I haven't looked into this in detail, so I'm just posting it here for reference. (/cc @christiankerl @wiedemeyer) https://github.com/Microsoft/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Kinect2Calibration.cs
Overview Description:
I've come to believe that something is likely very inaccurate about the current calibration routines in the libfreenect2 0.1 release. When I look at the data, the color mapping onto the point cloud is way off. When I stand 1.5-2 m from the camera with a wall at ~5 m behind me, Protonect sometimes places "inches" of wall color onto 3D cloud points whose depth corresponds to my arm. This is easier to visualize with the libfreenect2pclgrabber application.
After iai_kinect2 calibration this is essentially eliminated. Furthermore, Microsoft's 3D Builder application does not exhibit the same calibration problems on the same hardware, so I think something may be mistaken in the equations as reverse engineered in the large calibration issues created for this project.
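The symptom described (wall color landing on foreground points near silhouettes) is consistent with an error in the depth-to-color mapping. As a generic illustration (not libfreenect2's actual registration code; all intrinsics and baseline values below are made-up numbers), the sketch projects a foreground point through an idealized pinhole color camera and shows how a small error in the assumed inter-camera baseline shifts the color lookup by several pixels, and more so for near points than for the far wall:

```python
import numpy as np

# Hypothetical pinhole intrinsics for the color camera (made-up values,
# not the Kinect v2 factory parameters).
FX, FY = 1050.0, 1050.0
CX, CY = 960.0, 540.0

def project_to_color(point_xyz, baseline_x):
    """Project a 3D point (depth-camera frame, meters) into color pixel
    coordinates, assuming a pure horizontal translation between cameras."""
    x, y, z = point_xyz
    x_c = x + baseline_x           # shift into the color camera frame
    u = FX * x_c / z + CX
    v = FY * y / z + CY
    return u, v

# A point on an arm at 2 m, and a wall point at 5 m along a nearby ray.
arm = (0.1, 0.0, 2.0)
wall = (0.25, 0.0, 5.0)

# Correct vs. slightly wrong baseline (5 mm error): the color pixel
# used for the near point shifts, and the shift scales with 1/z, so
# foreground geometry is hit harder than the far wall.
u_good, _ = project_to_color(arm, baseline_x=0.052)
u_bad, _ = project_to_color(arm, baseline_x=0.057)
print(u_bad - u_good)  # ~2.6 px shift for the near point
```

Even a shift of a few pixels is enough to pull background texture onto foreground points along object edges, which matches the reported behavior.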
It may be wise to take another look at those algorithms to see if they can be fixed, or to integrate another calibration process directly into libfreenect2.
Version, Platform, and Hardware Bug Found:
0.1 and 0.1.1 releases
OS X 10.11 and Ubuntu 14.04
Steps to Reproduce:
1. Run Protonect (or libfreenect2pclgrabber for easier visualization) with a Kinect v2.
2. Stand 1.5-2 m from the camera with a wall roughly 5 m behind.
3. Inspect the colors assigned to foreground points in the registered point cloud.
Actual Results:
Observe wall color on object point cloud points, object color on wall point cloud points.
Expected Results:
Color applied to appropriate cloud points.
Reproducibility:
100%, with multiple Kinect v2 test devices and operating systems.
Additional Information: