Hand-Eye calibration of wrist-mounted RGB-D camera
We mounted an Intel RealSense D435 on our robot.
We have seen RoboDK videos where stereo cameras and laser trackers are used to calibrate the robot.
We found lots of papers related to hand-eye calibration and RGB-D cameras.

However, we are struggling to implement a simple algorithm for hand-eye calibration. We suggest something like:
1) Mark N calibration targets (how many are needed?) within the robot's work area
2) Approach each calibration target with the robot and store it as a target in RoboDK
3) Find a pose that has all calibration targets in the camera's view
4) Receive coordinates (x, y, z) for each calibration target from the camera (in the camera's coordinate frame)
5) [optional?] Repeat steps 3-4 with different poses
6) Calculate the calibration

Alternative for 1 (or 1 & 2?): use a checkerboard pattern

We lack the math skills for step 6. Can someone point us towards a paper or some formulas for calculating the camera TCP?
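(For reference: with the touch-and-observe workflow sketched above, where each target's base-frame position is known from touching it with the robot's TCP and the same targets are then observed from one camera pose, step 6 reduces to a rigid point-set registration, i.e. a least-squares fit of rotation and translation between the camera-frame points and the base-frame points, known as the Kabsch/Umeyama method. Three non-collinear targets are the mathematical minimum; more targets average out measurement noise. A minimal sketch in Python with NumPy follows; all function and variable names are illustrative, not from any RoboDK API.)

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~= P @ R.T + t (Kabsch/Umeyama, no scale).

    P: Nx3 target points measured in the camera frame,
    Q: the same Nx3 points in the robot base frame (from touched targets).
    Needs N >= 3 non-collinear points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)              # centroids
    H = (P - cP).T @ (Q - cQ)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

R and t then give the camera pose in the base frame (T_base_cam). If T_base_flange is the flange pose RoboDK reports at the moment of capture, the camera TCP is T_flange_cam = inv(T_base_flange) @ T_base_cam.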
This link explains hand-eye camera calibration for a robot and links to source code samples and papers:

In general, I would say more targets are better, as you'll reduce the measurement noise. Keep in mind that the Intel 3D camera is not accurate unless you are very close to the target. Using a pattern of objects may also help you improve accuracy.
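For the checkerboard variant, the textbook formulation of step 6 is the AX = XB problem: A_i are relative robot flange motions (computed from the poses RoboDK stores), B_i are the corresponding relative board motions observed by the camera, and X is the unknown flange-to-camera transform. OpenCV ships a ready-made solver (cv2.calibrateHandEye). The sketch below shows a Tsai/Lenz-style two-step solution in plain NumPy, assuming at least two motions with non-parallel rotation axes; all names are ours, not from any particular library.

```python
import numpy as np

def rot_exp(v):
    """Rodrigues formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(v)
    if theta < 1e-10:
        return np.eye(3)
    k = v / theta
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rot_log(R):
    """Inverse of rot_exp: 3x3 rotation matrix -> axis-angle vector."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def solve_ax_xb(As, Bs):
    """Solve A_i @ X = X @ B_i for the 4x4 flange-to-camera transform X.

    As: relative flange motions in the robot base frame,
    Bs: matching relative board motions seen by the camera."""
    # Rotation: the axis-angle vectors satisfy alpha_i = R_X @ beta_i,
    # which is an orthogonal Procrustes problem solved via SVD.
    M = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        M += np.outer(rot_log(A[:3, :3]), rot_log(B[:3, :3]))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])       # guard against reflection
    R_X = U @ D @ Vt
    # Translation: (R_Ai - I) @ t_X = R_X @ t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```

This is also why repeating steps 3-4 with varied orientations matters: pure translations contribute nothing to the rotation estimate, so the motions need rotations about distinct axes to make the system well-conditioned.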
Thanks for the quick reply. We are looking into it.
