Use 2D Camera & Gripper for pick & place simulation
I would like to use the 2D camera and gripper to do a pick and place operation. I can see a similar simulation on the RoboDK website (and also in the downloaded version), but I couldn't find any documentation or tutorial on how to build it. A tutorial would be really helpful (graphical preferred over programming).

Thank you,

I am also interested in where to find this.
In RoboDK you can simulate 2D cameras and 3D depth cameras using the following option:
  • Connect → Simulate 2D Camera
However, image processing needs to be integrated using third-party products, your own image-processing algorithms or Python itself. As an example, the OpenCV library is a good starting point if you plan to create your own image-processing algorithms. Alternatively, you can find industry-ready cameras from vendors such as Cognex, Keyence, SICK, Ensenso, ...
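To give an idea of what "your own image-processing algorithms" could look like, here is a minimal sketch that estimates a part's X, Y and Theta from a binary image using image moments. This is illustrative only and not part of the RoboDK API; it assumes NumPy is installed, and OpenCV (cv2.moments, cv2.minAreaRect) implements the same idea more robustly:

```python
# Illustrative sketch: estimate X,Y,Theta of a single blob in a binary image
# using image moments (the same idea behind cv2.moments / cv2.minAreaRect).
import numpy as np

def part_xy_theta(mask):
    """Return (x, y, theta_deg) for the single blob in a binary image.

    x, y are pixel coordinates of the centroid; theta is the orientation
    of the blob's principal axis, in degrees."""
    ys, xs = np.nonzero(mask)
    x, y = xs.mean(), ys.mean()
    # Central second moments give the orientation of the principal axis
    mu20 = ((xs - x) ** 2).mean()
    mu02 = ((ys - y) ** 2).mean()
    mu11 = ((xs - x) * (ys - y)).mean()
    theta = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    return x, y, theta

# Example: a 20x60 axis-aligned rectangle centered near (100, 50)
img = np.zeros((100, 200), dtype=np.uint8)
img[40:60, 70:130] = 1
x, y, theta = part_xy_theta(img)
print(round(x, 1), round(y, 1), round(theta, 1))  # 99.5 49.5 0.0
```

Note that this returns pixel coordinates; converting them to millimeters in the robot frame requires a camera calibration, which is a separate step.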

From RoboDK you can save images by right-clicking the simulated camera view and saving the image, which lets you test your image-processing algorithms. Supported formats include PNG, JPEG and TIFF. You can also use the API to open a new simulated camera window and take a snapshot (attached you'll find two macros that do so). The same camera settings you see when right-clicking the camera window are accessible through the API; the comments in the attached macros may help you set things up so the camera opens automatically with your settings.
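As a rough sketch of what those macros do, the API calls involved are Cam2D_Add (which takes a space-separated settings string) and Cam2D_Snapshot. The reference name 'Camera Ref' and the settings values below are assumptions to adapt to your station, and take_snapshot requires RoboDK to be running:

```python
# Sketch of opening a simulated camera and saving a snapshot via the RoboDK API.
# 'Camera Ref' and the settings values are assumptions -- adapt to your station.

def camera_settings(**params):
    """Build the space-separated settings string Cam2D_Add expects,
    e.g. 'FOCAL_LENGTH=6 FOV=32 FAR_LENGTH=1000 SIZE=640x480'."""
    return ' '.join('%s=%s' % (k, v) for k, v in params.items())

def take_snapshot(image_path):
    # Imported here so the helper above works without RoboDK installed
    from robodk.robolink import Robolink  # older installs: from robolink import *
    RDK = Robolink()
    cam_ref = RDK.Item('Camera Ref')  # assumed name of the camera reference frame
    settings = camera_settings(FOCAL_LENGTH=6, FOV=32,
                               FAR_LENGTH=1000, SIZE='640x480')
    cam = RDK.Cam2D_Add(cam_ref, settings)  # opens the simulated camera window
    RDK.Cam2D_Snapshot(image_path, cam)     # saves the current view to disk

s = camera_settings(FOCAL_LENGTH=6, FOV=32, FAR_LENGTH=1000, SIZE='640x480')
print(s)  # FOCAL_LENGTH=6 FOV=32 FAR_LENGTH=1000 SIZE=640x480
```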


For simulation purposes, you can do one of the following:
  1. Extract the ideal X,Y,Theta that the camera should provide you. This would be recommended as a first step to make sure your pick and place application makes sense.
  2. Simulate the X,Y,Theta values provided by a camera by applying your image-recognition algorithms to snapshots of the simulated camera.
Both operations can easily be automated using the RoboDK API. The following code provides an example function that waits for a part to arrive on a conveyor:

# Simulate the behavior of a camera (returns TX, TY and RZ)
def WaitPartCamera():
    if RDK.RunMode() != RUNMODE_SIMULATE:
        # Image processing should happen on the real camera
        return WaitPartRealCamera()

    SIMULATE_CAMERA_PURE = True  # set to False to test your image processing
    if SIMULATE_CAMERA_PURE:
        # Pure simulation option:
        # simulate the camera by waiting for an object to enter the camera zone
        while True:
            # Iterate through all the objects that can be picked:
            for part in check_objects:
                # Calculate the position of the part with respect to the camera reference
                part_pose = invH(camera_pose_abs) * part.PoseAbs()
                tx, ty, tz, rx, ry, rz = pose_2_xyzrpw(part_pose)
                # X,Y,Theta = tx,ty,rz -> tx and ty are in mm, rz is in degrees
                if abs(tx) < 400 and ty < 50 and abs(tz) < 400:
                    print('Part detected: TX,TY,RZ=%.1f,%.1f,%.1f' % (tx, ty, rz))
                    return tx, ty, rz
            # Give the simulation time to advance before checking again
            pause(0.01)
    else:
        # Apply image processing (simulation within RoboDK):
        image_file = RDK.getParam('PATH_OPENSTATION') + "/Image-Camera-Simulation.png"
        print("Saving camera snapshot to file: " + image_file)
        # Implement your image processing here:
        return ImageProcessingAlgorithm(image_file)

    return 0, 0, 0
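If it helps to see the math behind invH(camera_pose_abs) * part.PoseAbs(), here is a standalone NumPy sketch of the same computation: the part's pose expressed in the camera frame, with tx, ty and rz read from the resulting 4x4 matrix. The helper names and the example poses are made up for illustration; in RoboDK you would use invH and pose_2_xyzrpw directly:

```python
# Standalone sketch of what invH(camera_pose_abs) * part.PoseAbs() computes:
# the part pose relative to the camera frame, as a plain 4x4 matrix.
import numpy as np

def rel_tx_ty_rz(camera_pose, part_pose):
    """tx, ty (mm) and rz (deg) of the part relative to the camera frame."""
    rel = np.linalg.inv(camera_pose) @ part_pose
    tx, ty = rel[0, 3], rel[1, 3]
    # For a rotation about Z only, rz = atan2(r10, r00)
    rz = np.degrees(np.arctan2(rel[1, 0], rel[0, 0]))
    return tx, ty, rz

def pose_z(tx, ty, rz_deg):
    """4x4 homogeneous pose: translation (tx, ty, 0), rotation rz about Z."""
    c, s = np.cos(np.radians(rz_deg)), np.sin(np.radians(rz_deg))
    return np.array([[c, -s, 0, tx],
                     [s,  c, 0, ty],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

camera = pose_z(500, 0, 0)   # camera frame 500 mm along X of the station
part = pose_z(650, 30, 45)   # part at (650, 30) rotated 45 degrees
tx, ty, rz = rel_tx_ty_rz(camera, part)
print(round(tx, 1), round(ty, 1), round(rz, 1))  # 150.0 30.0 45.0
```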

You'll find a similar function in the following example project:
Example 02-3 - Pick and place with 2D camera.rdk
(inside the PartsToPallet program)

Attached Files
.py (Size: 1.86 KB / Downloads: 27)
.py (Size: 446 bytes / Downloads: 18)
I appreciate your help. Thank you.
I will try it and get back.

