
Converting keypoints to world coordinates

#1
Hi,
I'm trying to build a robotic system that:
scans a space for holes
moves to each hole center for image capture

but I'm having trouble converting image coordinates from the camera into world coordinates.

The final program flow would be:

Robot at the home inspection position
The robot calls a scanning function (mounting holes in my code)
The code moves towards each circle
The camera takes a picture
The robot goes back to the home position and moves by a translation distance
I attached the code
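
Roughly, the flow I'm aiming for would look something like the sketch below with the RoboDK Python API (the find_hole_targets helper and the 'Home' target name are placeholders, not the actual contents of Prog13.py):

```python
# Rough outline of the intended flow (placeholder names, not the actual Prog13.py)
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT
from robodk.robomath import transl

RDK = Robolink()
robot = RDK.Item('', ITEM_TYPE_ROBOT)      # first robot found in the station
home = RDK.Item('Home')                    # home inspection target (assumed to exist)

def find_hole_targets():
    """Placeholder for the scanning function: returns one pose per detected hole."""
    return []

robot.MoveJ(home)
for hole_pose in find_hole_targets():
    robot.MoveL(hole_pose)                 # centre the camera over the hole
    # take the snapshot here (simulated or real camera)
    robot.MoveJ(home)                      # return to the home position

# then shift the home pose by a translation step and repeat the scan
robot.MoveL(home.Pose() * transl(100, 0, 0))   # e.g. a 100 mm step along X
```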


Attached Files
.py   Prog13.py (Size: 4.37 KB / Downloads: 301)
#2
Hi Nour,

One of the challenges with vision is to convert accurately from a 2D still image to a 3D space.
For this to be accurate, you need to have a reference distance within the image. Ideally, at the same distance / focus point of your features.

This example takes advantage of the fact that the distance from the camera to the features is consistent, which allows us to find a constant pixel/mm ratio and camera height experimentally.
https://robodk.com/doc/en/PythonAPI/exam...own-object
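
As a rough illustration (not the exact code from that example), assuming the camera axes are aligned with the tool frame, a constant camera height, and a pixel/mm ratio measured beforehand, the conversion from a keypoint in the image to world coordinates could look like this:

```python
# Sketch: convert a keypoint (pixels) into world coordinates, assuming a constant
# camera height and an experimentally measured pixel/mm ratio (placeholder values).
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT
from robodk.robomath import transl

MM_PER_PX = 0.25            # measured experimentally (e.g. with a ruler in the image)
IMG_W, IMG_H = 1280, 720    # resolution of the snapshot

def pixel_to_tool_offset(u, v):
    """Offset (mm) of an image point from the optical axis, in the camera/tool XY plane."""
    return (u - IMG_W / 2) * MM_PER_PX, (v - IMG_H / 2) * MM_PER_PX

RDK = Robolink()
robot = RDK.Item('', ITEM_TYPE_ROBOT)

u, v = 812, 346                             # keypoint returned by the feature detector
dx, dy = pixel_to_tool_offset(u, v)

# Post-multiplying the current TCP pose applies the correction in the tool/camera frame
# (this assumes the camera axes are aligned with the tool axes).
target_pose = robot.Pose() * transl(dx, dy, 0)
world_xyz = target_pose.Pos()               # XYZ in the active reference frame
```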

What exactly are you trying to achieve?
Are you trying to do path correction with a camera to be dead centre with the mounting holes?
How far from the mounting holes is your programmed path?
Are you trying to record the mounting holes positions in 3D space?
How accurate does your application need to be?
#3
Hi Sam,
My objective is to have the camera's optical axis directly above the center of each hole.
The distance is variable: it is whatever distance makes the hole fill the camera image, at which point a snapshot is taken.
It needs to be accurate, as the photo will be used to measure the tilt of the hole to the nearest degree.
I think I'm recording the holes in 2D space. My intention is to have the robot, from a home position, scan a part for holes. The holes are then stored in a matrix, then the robot goes near each circle and centers the optical axis with the hole center so a snapshot can be taken for later post-processing. Is there any way to do that?
I have checked the link and read it. Is there a way to transfer the coordinates from images to world coordinates?
I attached the file below as an example



Attached Files
.rdk   Test.rdk (Size: 1.77 MB / Downloads: 317)
#4
There are a lot of assumptions, and it is quite difficult for me to assess your needs.
Looking at your other posts, it seems that you are scanning holes on a wing and I already provided a partial answer here: 
https://robodk.com/forum/Thread-Camera-r...ossibility

Finding a blob and its relative distance from the camera axis is fairly easy to do in pixels.
However, can you guarantee that the camera is perpendicular to the hole's surface and that the relative distance from the camera to the holes is constant?
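
For the blob part, a minimal OpenCV sketch (the image name and Hough parameters are placeholders to tune) that finds circular holes and reports their pixel offset from the camera axis:

```python
# Sketch: find circular holes and their offset from the image centre, in pixels.
# The Hough parameters are placeholders and need tuning for the actual images.
import cv2
import numpy as np

img = cv2.imread('snapshot.png')                     # snapshot from the inspection pose
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 50,
                           param1=100, param2=40, minRadius=10, maxRadius=200)

h, w = gray.shape
if circles is not None:
    for u, v, r in np.round(circles[0]).astype(int):
        du, dv = u - w // 2, v - h // 2              # pixel offset from the camera axis
        print(f"hole at ({u}, {v}), radius {r} px, offset ({du}, {dv}) px")
```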

If so, you can retrieve the pixel/mm ratio in the XY plane experimentally. For instance, place a ruler over the hole with the camera at your target position, take a snapshot, and divide the ruler length in pixels by the ruler length in mm. That should be good enough to iterate and converge on the hole centre. The examples in our documentation have all the bits and pieces needed to achieve this.
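
A rough sketch of that iterate-and-converge loop, assuming a hypothetical detect_hole_offset_px() helper (to be replaced by your own camera capture and blob detection) and the measured pixel/mm ratio:

```python
# Sketch: iterate until the hole is centred under the camera axis.
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT
from robodk.robomath import transl

MM_PER_PX = 0.25        # experimentally measured ratio (placeholder)
TOL_MM = 0.2            # stop once the remaining error is below this
MAX_ITER = 10

def detect_hole_offset_px():
    """Hypothetical helper: grab a frame and return the (du, dv) pixel offset of the
    nearest hole from the image centre, or None if no hole is visible."""
    return None         # replace with the real camera capture + blob detection

RDK = Robolink()
robot = RDK.Item('', ITEM_TYPE_ROBOT)

for _ in range(MAX_ITER):
    offset = detect_hole_offset_px()
    if offset is None:
        break
    dx, dy = offset[0] * MM_PER_PX, offset[1] * MM_PER_PX   # pixel error -> mm error
    if abs(dx) < TOL_MM and abs(dy) < TOL_MM:
        break                                               # centred: take the final snapshot
    robot.MoveL(robot.Pose() * transl(dx, dy, 0))           # correct in the tool/camera frame
```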

If not, you need to consider a lot of variables.
  • How do you align the Z axis of the camera to be perpendicular to the holes? Consider using the moments of the blobs (see the sketch after this list).
  • How do you dynamically find the distance from the camera to the hole? Consider having a reference measurement/marker.
  • How do you compensate for defects?
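
On the moments point in the first bullet: one option (a sketch only, using a fitted ellipse instead of raw moments, assuming OpenCV 4.x and a binary mask of the hole) is to estimate the apparent tilt from the ellipse axis ratio:

```python
# Sketch: estimate the hole's apparent tilt from its elliptical shape in the image.
# A circle seen at an angle projects to an ellipse, so tilt ~ acos(minor / major).
import math
import cv2

mask = cv2.imread('hole_mask.png', cv2.IMREAD_GRAYSCALE)    # binary mask of the hole
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if len(cnt) < 5:                       # fitEllipse needs at least 5 points
        continue
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(cnt)
    tilt_deg = math.degrees(math.acos(min(d1, d2) / max(d1, d2)))
    print(f"hole centre ({cx:.1f}, {cy:.1f}) px, axis angle {angle:.1f} deg, "
          f"approx tilt {tilt_deg:.1f} deg")
```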
  



