
# Use 2D Camera & Gripper for pick & place simulation

I would like to use the 2D camera and gripper to do a pick-and-place operation. I was able to check out a similar simulation on the RoboDK website (and also in the downloaded version), but I couldn't locate any documentation or tutorial on how to build it. It would be really helpful to have a tutorial for it (graphical preferred over programming).

Thank you,

Naresh
I am also interested in where to find this.
In RoboDK you can simulate 2D cameras and 3D depth cameras using the following option:
• Connect → Simulate 2D Camera
However, image processing needs to be integrated using 3rd-party products, your own image processing algorithms or Python itself. As an example, the OpenCV library would be a good starting point if you are planning to create your own image processing algorithms. Alternatively, you can find industry-ready cameras such as Cognex, Keyence, Sick, Ensenso, ...

From RoboDK you can save images by right-clicking the simulated camera view and saving the image to test your image processing algorithms. Supported formats include PNG, JPEG and TIFF. You can also use the API to open a new simulated camera window and take a snapshot (attached you'll find 2 macros to do so). The same camera settings you see when you right-click the camera window are accessible through the API. The comments in the attached macros may help you set it up so the camera opens automatically with your settings.
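This snapshot workflow can be driven entirely from Python. Here is a minimal sketch using the `Cam2D_Add` / `Cam2D_Snapshot` calls from the RoboDK Python API; the frame name 'Camera Ref' and the settings string are illustrative, so adapt them to your station (the import path also differs slightly between older and newer API versions):

```python
# Minimal sketch: open a simulated 2D camera and save a snapshot via the API.
# 'Camera Ref' is an illustrative frame name -- adapt it to your station.
def save_camera_snapshot(image_path,
                         cam_settings='FOCAL_LENGTH=6 FOV=32 FAR_LENGTH=1000 SIZE=640x480'):
    # Newer API versions use 'from robodk.robolink import ...';
    # older ones use 'from robolink import ...'
    from robodk.robolink import Robolink, ITEM_TYPE_FRAME
    RDK = Robolink()
    camera_ref = RDK.Item('Camera Ref', ITEM_TYPE_FRAME)
    cam = RDK.Cam2D_Add(camera_ref, cam_settings)  # opens the simulated camera window
    return RDK.Cam2D_Snapshot(image_path, cam)     # save the current view to file
```

The settings string accepts the same parameters you see in the camera window's right-click menu (field of view, focal length, image size, ...).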

For simulation purposes, you can do one of the following:
1. Extract the ideal X,Y,Theta that the camera should provide you. This would be recommended as a first step to make sure your pick and place application makes sense.
2. Simulate the X,Y,Theta values provided by a camera by applying your image recognition algorithms.
Both operations can easily be automated using the RoboDK API. This code provides an example function that waits for a part to come through a conveyor:

Code:
```
# Simulate the behavior of a camera (returns TX, TY and RZ)
def WaitPartCamera():
    if RDK.RunMode() == RUNMODE_SIMULATE:
        if True:
            # Pure simulation option:
            # Simulate the camera by waiting for an object to be within the camera zone
            while True:
                # Iterate through all the objects that can be picked:
                for part in check_objects:
                    # Calculate the position of the part with respect to the camera reference
                    part_pose = invH(camera_pose_abs) * part.PoseAbs()
                    tx, ty, tz, rx, ry, rz = pose_2_xyzrpw(part_pose)
                    # X,Y,Theta = tx,ty,rz -> tx, ty are in mm, rz is in degrees
                    if abs(tx) < 400 and ty < 50 and abs(tz) < 400:
                        print('Part detected: TX,TY,RZ=%.1f,%.1f,%.1f' % (tx, ty, rz))
                        return tx, ty, rz
        else:
            # Apply image processing (simulation within RoboDK):
            image_file = RDK.getParam('PATH_OPENSTATION') + "/Image-Camera-Simulation.png"
            print("Saving camera snapshot to file: " + image_file)
            RDK.Cam2D_Snapshot(image_file)
            # Implement your image processing here:
            return ImageProcessingAlgorithm(image_file)
    else:
        # Image processing should happen on the real camera
        return WaitPartRealCamera()
    return 0, 0, 0
```

You'll find a similar function in the following example project:
Example 02-3 - Pick and place with 2D camera.rdk
(inside the PartsToPallet program)
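As a side note, the geometry inside `WaitPartCamera` does not require RoboDK to understand or test: `invH(camera_pose_abs) * part.PoseAbs()` is just a homogeneous-transform inverse followed by a matrix product. The sketch below reproduces that detection window in plain Python (stdlib only, poses as 4x4 nested lists); the function names mirror the macro but are otherwise illustrative:

```python
def invH(pose):
    """Invert a 4x4 homogeneous transform: inv([R t; 0 1]) = [R' -R'*t; 0 1]."""
    R = [[pose[r][c] for c in range(3)] for r in range(3)]
    t = [pose[r][3] for r in range(3)]
    Rt = [[R[c][r] for c in range(3)] for r in range(3)]  # transpose of the rotation
    t_inv = [-sum(Rt[r][c] * t[c] for c in range(3)) for r in range(3)]
    return [Rt[0] + [t_inv[0]], Rt[1] + [t_inv[1]], Rt[2] + [t_inv[2]], [0, 0, 0, 1]]

def matmul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

def part_in_camera_window(camera_pose_abs, part_pose_abs, x_max=400, y_max=50, z_max=400):
    """Same window check as the macro: |tx| < 400, ty < 50, |tz| < 400 (mm)."""
    rel = matmul(invH(camera_pose_abs), part_pose_abs)
    tx, ty, tz = rel[0][3], rel[1][3], rel[2][3]
    return abs(tx) < x_max and ty < y_max and abs(tz) < z_max

# Example: camera at (1000, 0, 500), part at (1100, 20, 600), both axis-aligned
eye = lambda x, y, z: [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]
print(part_in_camera_window(eye(1000, 0, 500), eye(1100, 20, 600)))  # True
```

This is handy for unit-testing your detection thresholds before wiring them into the station.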

Attached Files
I appreciate your help. Thank you.
I will try it and get back.

Regards
(03-05-2019, 03:53 PM)Albert Wrote: In RoboDK you can simulate 2D cameras and 3D depth cameras using the following option: Connect → Simulate 2D Camera [...]

Hello Albert,
I know how to add a camera and get the image file. Is it possible to get a numpy array from the camera function to retrieve the view? I want to do reinforcement learning for robotic manipulation. Thanks~
(12-15-2020, 06:31 AM)Jeff Wrote: Hello Albert,
I know how to add a camera and get the image file. Is it possible to get a numpy array from the camera function to retrieve the view? I want to do reinforcement learning for robotic manipulation. Thanks~

I would also love to be able to access the camera feed more directly than saving an image. Please add this feature :)
It is not possible to get a video feed of a window directly. Instead, you could use screen-capture software to do so (such as OBS Studio).

With the RoboDK API you can take snapshots of the main window or simulated camera windows. An example to do so is available here:
https://github.com/RoboDK/Plug-In-Interf...napshot.py
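Until a direct feed is exposed, a common workaround is to snapshot to a temporary file and read it back as a numpy array. This is a sketch, not an official API: the function name and file path are illustrative, and it assumes OpenCV (`opencv-python`) and the RoboDK Python API are installed with a simulated camera already open:

```python
# Sketch: grab one simulated-camera frame as a numpy array for RL pipelines.
def camera_frame_as_array(cam_handle=0):
    import os
    import tempfile
    import cv2                           # pip install opencv-python
    from robodk.robolink import Robolink
    RDK = Robolink()
    path = os.path.join(tempfile.gettempdir(), 'robodk_frame.png')
    if RDK.Cam2D_Snapshot(path, cam_handle):
        return cv2.imread(path)          # numpy array, shape (H, W, 3), BGR order
    return None
```

Note the round-trip through disk adds latency, so this is fine for step-based training loops but not for a real-time video stream.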
(02-28-2019, 07:09 PM)jccourtney Wrote: I am also interested in where to find this.

Hello Albert,
I have some enquiries about a 3D camera:
Does RoboDK support a 3D camera like the Intel RealSense? If yes, are there any requirements, and how do I connect it? Is RoboDK able to receive images from the camera in real time (2D/3D)?