
Hand-Eye calibration of wrist-mounted RGB-D camera

#1
We mounted an Intel RealSense D435 on our robot.
We have seen RoboDK videos where stereo cameras and laser trackers are used to calibrate the robot.
We have also found lots of papers related to hand-eye calibration and RGB-D cameras.

However, we are struggling to implement a simple algorithm for hand-eye calibration.
We suggest something like this:
1) Mark N calibration targets within the robot's work area (how many are needed?)
2) Approach the calibration targets with the robot and store them as targets in RoboDK
3) Find a pose that has all calibration targets in the camera view
4) Receive coordinates (x, y, z) for each calibration target from the camera (in the camera frame)
5) [optional?] Repeat 3-4 with different poses
6) Calculate the calibration

Alternative for 1 (or 1 & 2?): use a checkerboard pattern

We lack the math skills for 6). Can someone point us towards a paper or some formulas for how to calculate the camera TCP?
#2
This link explains hand-eye camera calibration for a robot and links to source code samples and papers:
https://robotics.stackexchange.com/quest...es#tab-top

In general, I would say more targets is better, as you'll reduce the measurement noise. Keep in mind that the Intel 3D camera is not accurate unless you are very close to the target. Using a pattern of objects may help you improve accuracy as well.
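
To give you an idea of the math behind your step 6: if each target is touched with a calibrated pointer TCP (so its position in the robot base frame is known) and the camera reports the same targets as (x, y, z) points while the robot holds one known pose, the camera TCP follows from a rigid point-set fit (the Kabsch/SVD solution). The following is only a minimal Python sketch with made-up numbers, not code from the linked thread:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid 4x4 transform T such that dst ~= T @ src (Kabsch / SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T

def camera_tcp(p_base, p_cam, T_base_flange):
    """Camera pose in the flange frame (the 'camera TCP').

    p_base:        N x 3 target positions touched with a pointer TCP (base frame)
    p_cam:         the same N targets as reported by the camera (camera frame)
    T_base_flange: 4 x 4 flange pose of the robot when the image was taken
    """
    p_base_h = np.c_[p_base, np.ones(len(p_base))]
    p_flange = (np.linalg.inv(T_base_flange) @ p_base_h.T).T[:, :3]
    return best_fit_transform(p_cam, p_flange)

# Synthetic self-check with made-up numbers (mm); replace with real measurements.
# At least 3 non-collinear targets are needed for a unique solution.
rng = np.random.default_rng(0)
p_flange_true = rng.uniform(-200, 200, size=(4, 3))   # 4 targets, flange frame
T_true = np.eye(4)
T_true[:3, 3] = [30.0, 0.0, 80.0]                     # pretend camera offset
T_base_flange = np.eye(4)
T_base_flange[:3, 3] = [550.0, 0.0, 600.0]            # pretend robot pose
p_cam = (np.linalg.inv(T_true) @ np.c_[p_flange_true, np.ones(4)].T).T[:, :3]
p_base = (T_base_flange @ np.c_[p_flange_true, np.ones(4)].T).T[:, :3]
print(camera_tcp(p_base, p_cam, T_base_flange))       # recovers ~T_true
```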
#3
Thanks for the quick reply. We are looking into it.
#4
(01-19-2019, 09:37 PM)max_tro Wrote: We mounted an Intel RealSense D435 on our robot. [...] Can someone point us towards a paper or some formulas for how to calculate the camera TCP?
Did you ever get something like this working?
#5
(07-26-2020, 07:21 PM)spacegardener Wrote: Did you ever get something like this working?

Yes. A good starting point is OpenCV's functionality.
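
For what it's worth, recent OpenCV builds ship ready-made solvers for the classic AX = XB hand-eye formulation via cv2.calibrateHandEye. Below is a rough sketch of how it could be wrapped, assuming you collect paired flange poses from the robot controller and calibration-target poses from the camera; the wrapper name and data layout are just illustrative:

```python
import cv2
import numpy as np

def solve_hand_eye(T_base_flange_list, T_cam_target_list):
    """Camera pose in the flange (gripper) frame from paired observations.

    Each argument is a list of 4x4 numpy arrays, one entry per robot pose:
    the flange pose reported by the controller and the calibration-target
    pose estimated from the matching camera image. Use several poses with
    distinct orientations, not just translations.
    """
    R_g2b = [T[:3, :3] for T in T_base_flange_list]
    t_g2b = [T[:3, 3] for T in T_base_flange_list]
    R_t2c = [T[:3, :3] for T in T_cam_target_list]
    t_t2c = [T[:3, 3] for T in T_cam_target_list]
    R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T   # camera pose relative to the flange (the camera TCP)
```

Tsai's method is only one of the solvers OpenCV exposes (Park, Horaud, Andreff and Daniilidis are also available); running the same data set through two of them is a quick sanity check on measurement quality.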
#6
Hi, could I ask you about your calibration result?
I'm also using an Intel RealSense, but my hand-eye calibration gives poor results.
How large are your translation and rotation errors after calibration?
#7
If someone is looking for a way to programmatically determine the camera pose in RoboDK, we added an example of camera pose estimation using OpenCV and a ChArUco board to our Python examples; see https://robodk.com/doc/en/PythonAPI/exam...amera-pose.
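
For anyone who wants a rough idea of what that detection step involves before opening the example, a bare-bones sketch is below. It is not the code from the linked example, it uses the older cv2.aruco contrib API (OpenCV 4.7+ moved to CharucoDetector objects), and the board layout and dictionary are placeholders you would match to your printed board:

```python
import cv2
import numpy as np

# Placeholder board definition: 5 x 7 squares, 40 mm squares, 30 mm markers.
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
BOARD = cv2.aruco.CharucoBoard_create(5, 7, 0.04, 0.03, ARUCO_DICT)

def estimate_board_pose(image, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the ChArUco board in the camera frame, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None or len(ids) == 0:
        return None
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
        corners, ids, gray, BOARD)
    if ch_ids is None or n < 4:      # need enough corners for a stable pose
        return None
    rvec, tvec = np.zeros((3, 1)), np.zeros((3, 1))
    ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
        ch_corners, ch_ids, BOARD, camera_matrix, dist_coeffs, rvec, tvec)
    return (rvec, tvec) if ok else None
```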