libfreenect2 calibration very inaccurate #596

Open
ahundt opened this issue Feb 28, 2016 · 14 comments

ahundt commented Feb 28, 2016

Overview Description:

I've come to believe that there is likely something quite inaccurate about the current calibration routines in the released libfreenect2 0.1. When I look at the data, the color mapping onto the point cloud is way off. When I stand 1.5-2 m from the camera with a wall at ~5 m behind me and view the output of Protonect, "inches" of wall color are sometimes painted onto 3D points whose depth clearly measures my arm. This is easier to visualize with the libfreenect2pclgrabber application.

After iai_kinect2 calibration this is essentially eliminated. Furthermore, Microsoft's 3D Builder application does not show these calibration problems on the same hardware, so I suspect something may be mistaken in the equations as reverse engineered in some of the large calibration issues filed for this project.

It may be wise to have another look at those algorithms to see if they can be fixed, or to integrate another calibration process directly into libfreenect2.

Version, Platform, and Hardware Bug Found:

0.1 and 0.1.1 releases
OS X 10.11 and Ubuntu 14.04

Steps to Reproduce:

  1. Run Protonect or libfreenect2pclgrabber
  2. Place a large object 1.5-2 m from the Kinect, with a differently colored wall at 3-5 m behind it

Actual Results:

Wall color appears on the object's point cloud points, and object color appears on the wall's point cloud points.

Expected Results:

Color is applied to the appropriate cloud points.

Reproducibility:

100%, with multiple Kinect v2 test devices across both test operating systems.

Additional Information:
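
To make the comparison concrete, the factory intrinsics the device reports can be dumped and compared against the values iai_kinect2 produces. This is a minimal, untested sketch against the standard libfreenect2 device API (openDefaultDevice and the parameter getters):

#include <cstdio>
#include <libfreenect2/libfreenect2.hpp>

int main()
{
  libfreenect2::Freenect2 freenect2;
  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
  if (!dev) return -1;
  dev->start(); // the intrinsics are read from the device during startup

  // Factory depth/IR intrinsics: pinhole model plus radial (k1-k3) and
  // tangential (p1, p2) distortion.
  libfreenect2::Freenect2Device::IrCameraParams ir = dev->getIrCameraParams();
  printf("IR: fx=%f fy=%f cx=%f cy=%f k1=%f k2=%f k3=%f p1=%f p2=%f\n",
         ir.fx, ir.fy, ir.cx, ir.cy, ir.k1, ir.k2, ir.k3, ir.p1, ir.p2);

  // Factory color intrinsics (only the pinhole part shown here).
  libfreenect2::Freenect2Device::ColorCameraParams c = dev->getColorCameraParams();
  printf("Color: fx=%f fy=%f cx=%f cy=%f\n", c.fx, c.fy, c.cx, c.cy);

  dev->stop();
  dev->close();
  return 0;
}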

xlz (Member) commented Feb 28, 2016

How inaccurate is very inaccurate? Post some images?

The built-in calibration is surely not as accurate as a hand calibration. Whether that is acceptable depends on how inaccurate it is.

xlz closed this as completed Mar 23, 2016

xlz (Member) commented Mar 23, 2016

Insufficient information.

ahundt (Author) commented Mar 27, 2016

It was such a problem that we switched back to the PrimeSense sensors we have. I'll ask whether @cpaxton, who took screenshots, can put some of them up.

cpaxton commented Mar 28, 2016

Here is a video comparison between the Kinect v2 and the Kinect v1: youtube

Here is a video of the depth data we are getting from the Kinect v2: youtube

Note that in the first video, large objects like the Bosch cases come out just fine. This is only an issue for us because we are attempting to autonomously manipulate relatively small objects (~2 cm across). I am not entirely convinced this is a calibration issue -- the depth-to-color registration looks fine to me -- but I am open to ideas.

xlz (Member) commented Mar 28, 2016

I can't spot what is wrong in the two videos. What is the object in the second video?

xlz reopened this Mar 28, 2016

cpaxton commented Mar 28, 2016

The objects are all straight magnetic linking blocks, as seen here, with meshes available here.

The problem is that we are trying to get an accurate pose estimate for relatively small objects, which means that noisy depth data like this is intolerable. You can see the objects are fairly noisy and in some cases badly deformed.
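
For what it's worth, here is roughly how the temporal noise could be quantified: a rough, untested sketch against the plain libfreenect2 frame API (rather than our ROS pipeline) that accumulates N depth frames of a static scene and reports the standard deviation at a pixel.

#include <cmath>
#include <cstdio>
#include <vector>
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>

int main()
{
  libfreenect2::Freenect2 freenect2;
  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
  if (!dev) return -1;

  libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Depth);
  dev->setIrAndDepthFrameListener(&listener);
  dev->start();

  const int N = 100;             // frames to accumulate
  const size_t W = 512, H = 424; // Kinect v2 depth resolution
  std::vector<double> sum(W * H, 0.0), sumsq(W * H, 0.0);

  libfreenect2::FrameMap frames;
  for (int i = 0; i < N; ++i)
  {
    listener.waitForNewFrame(frames);
    // Depth frames are 32-bit floats in millimeters; invalid pixels read
    // as 0 and would need masking in real use.
    const float *d = reinterpret_cast<const float *>(
        frames[libfreenect2::Frame::Depth]->data);
    for (size_t p = 0; p < W * H; ++p)
    {
      sum[p] += d[p];
      sumsq[p] += double(d[p]) * d[p];
    }
    listener.release(frames);
  }

  // Report the center pixel as an example.
  const size_t c = (H / 2) * W + W / 2;
  const double mean = sum[c] / N;
  printf("center pixel: mean %.1f mm, stddev %.2f mm\n",
         mean, std::sqrt(sumsq[c] / N - mean * mean));

  dev->stop();
  dev->close();
  return 0;
}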

xlz (Member) commented Mar 28, 2016

So the problem now is deformed depth from a straight surface? The issue previously reported was a mismatch in the color-depth registration.

I suspect this has to do with surface reflectance or multipath interference (example: #319). Wrap the magnetic block in paper and see if that improves things? If not, it may be multipath interference.

cpaxton commented Mar 28, 2016

Sorry if there was any confusion. This issue was probably misreported, as this is the only problem we are currently having with the Kinect v2.

ahundt (Author) commented Mar 28, 2016

Also note that the posted videos use the iai_kinect2 calibration. Without it, we get colors from the background applied to depths tied to objects.

ahundt (Author) commented Mar 28, 2016

Yeah, unfortunately those videos are for a separate problem; sorry about the confusion. We don't have any videos of uncalibrated data yet, but we plan to record some when we have time.

philipNoonan commented

Regarding the reliability of depth data: I have created fusion scans of a face using two different Kinect v2s, one where I have modified the optics for near mode, and one where I have replaced the IR lens with a telephoto lens. After manual camera calibration, I get:

#ifdef NEARMODE
// Near-mode optics: focal length (fx, fy) and principal point (cx, cy)
// in pixels, radial distortion coefficients k1-k3, tangential p1-p2.
ir_camera_params_.fx = 364.7546f;
ir_camera_params_.fy = 365.5064f;
ir_camera_params_.cx = 254.0044f;
ir_camera_params_.cy = 200.9755f;
ir_camera_params_.k1 = 0.0900f;
ir_camera_params_.k2 = -0.2460f;
ir_camera_params_.k3 = 0.0566f;
ir_camera_params_.p1 = 0.0018f;
ir_camera_params_.p2 = 0.0017f;
#else
// Telephoto IR lens: much longer focal length and, as expected for such
// an extreme lens, much larger radial distortion terms.
ir_camera_params_.fx = 1610.9208f;
ir_camera_params_.fy = 1608.9916f;
ir_camera_params_.cx = 214.2099f;
ir_camera_params_.cy = 154.1397f;
ir_camera_params_.k1 = 0.2806f;
ir_camera_params_.k2 = -12.9896f;
ir_camera_params_.k3 = 182.3996f;
ir_camera_params_.p1 = -0.0128f;
ir_camera_params_.p2 = -0.0108f;
#endif

And the mean Hausdorff distance between nearest-neighbour points of the two fusion scans of the face is < 1 mm. I can't comment on colour-to-depth registration, but the z-lookup tables seem to be working well, even with very extreme lenses.
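
In case anyone wants to try values like this without patching the library: libfreenect2 exposes setters for the intrinsics, so a hand calibration can be injected before constructing the Registration object. A minimal, untested sketch (using my near-mode numbers from above, and assuming your libfreenect2 version has the setters):

// Start from the factory values, overwrite with the manual calibration,
// and hand the result back to the device before building Registration.
libfreenect2::Freenect2Device::IrCameraParams ir = dev->getIrCameraParams();
ir.fx = 364.7546f;  ir.fy = 365.5064f;
ir.cx = 254.0044f;  ir.cy = 200.9755f;
ir.k1 = 0.0900f;    ir.k2 = -0.2460f;   ir.k3 = 0.0566f;
ir.p1 = 0.0018f;    ir.p2 = 0.0017f;
dev->setIrCameraParams(ir);

libfreenect2::Registration registration(dev->getIrCameraParams(),
                                        dev->getColorCameraParams());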

ahundt (Author) commented Mar 31, 2016

Perhaps #144 is also relevant?

floe (Contributor) commented Mar 31, 2016

> Also note that the posted videos use the iai_kinect2 calibration. Without it, we get colors from the background applied to depths tied to objects.

The iai calibration is done separately with a chessboard, correct? Then it will very likely be better than the factory calibration, regardless of which software is used to access the Kinect v2. To separate the different issues being discussed here, could you try to take static color-depth registered snapshots of the same scene with the following (see the capture sketch after this list):

  1. the official SDK
  2. the internal libfreenect2 calibration
  3. your iai-kinect calibration

Then it should be easier to tell which one has the best quality (likely 3.) and whether 1. and 2. are any different.
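
For case 2, something along the lines of Protonect should do. A minimal, untested sketch using libfreenect2's Registration class (swap in hand-measured parameters via the set*CameraParams() setters to approximate case 3):

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>

int main()
{
  libfreenect2::Freenect2 freenect2;
  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
  if (!dev) return -1;

  libfreenect2::SyncMultiFrameListener listener(
      libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
  dev->setColorFrameListener(&listener);
  dev->setIrAndDepthFrameListener(&listener);
  dev->start();

  // Built-in calibration (case 2): the factory parameters drive both the
  // depth undistortion and the color-to-depth mapping.
  libfreenect2::Registration registration(dev->getIrCameraParams(),
                                          dev->getColorCameraParams());

  libfreenect2::FrameMap frames;
  listener.waitForNewFrame(frames);
  libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
  libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];

  // 512x424 outputs: undistorted depth, and color resampled onto the
  // depth image for a registered snapshot.
  libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);
  registration.apply(rgb, depth, &undistorted, &registered);
  // ... dump undistorted.data / registered.data to disk for comparison ...

  listener.release(frames);
  dev->stop();
  dev->close();
  return 0;
}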

xlz added the question label Dec 5, 2016
floe (Contributor) commented May 18, 2017

For the record, I recently came across the RoomAlive Toolkit by Microsoft Research, which is used to calibrate multiple Kinects with respect to each other. It performs its own intrinsic calibration, which may help in understanding some of the remaining unclear aspects of the factory calibration. Note that I haven't looked into this in detail, so I'm just posting it here for reference. (/cc @christiankerl @wiedemeyer)

https://github.com/Microsoft/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Kinect2Calibration.cs
https://github.com/Microsoft/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/CameraMath.cs
