09-30-2025, 07:56 AM (This post was last modified: 09-30-2025, 01:14 PM by Albert.)
Hi everyone,
I am currently using a Python program with the RoboDK driver (RUNMODE_RUN_ROBOT) to execute a machining path on the actual robot. However, I have noticed that the robot briefly stops at each waypoint, instead of moving smoothly as it does in the simulation.
I would like to achieve smooth, continuous motion on the real robot—similar to the simulation—without requiring any manual intervention.
If anyone has experience or knows how to configure this properly, I would greatly appreciate your guidance.
The robot I am using is FANUC CRX-10iA.
You should use the rounding instruction (setRounding when using the API) to make the movements smoother. However, you'll get better results by generating program files instead of using the driver. More specifically, Fanuc controllers can only buffer one movement at a time when you use the driver, whereas with a program file (LS or TP file for Fanuc controllers) the blending works better (the CNT flag on Fanuc).
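As a rough illustration, the rounding radius can be set once before streaming the path through the driver. This is only a sketch assuming the RoboDK Python API (robodk.robolink); the helper takes already-resolved API objects (robot item, list of targets), which the caller would obtain from Robolink() with RUNMODE_RUN_ROBOT active:

```python
# Sketch, assuming the RoboDK Python API: set a blend radius (rounding)
# once, then stream linear moves.  The robot object would come from
# Robolink().Item(...) in a station connected through the driver.

def stream_path(robot, targets, radius_mm):
    """Apply a rounding radius, then send each linear move.
    A non-positive radius falls back to -1 (accurate stop in RoboDK)."""
    radius = radius_mm if radius_mm > 0 else -1
    robot.setRounding(radius)  # on Fanuc program generation this becomes a CNT flag
    for t in targets:
        robot.MoveL(t)
    return radius
```

With a real station this would be called as `stream_path(robot, path_targets, 5)` after connecting the driver; the same radius is what produces the CNT termination when you generate an LS/TP file instead.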
10-05-2025, 11:45 PM (This post was last modified: 10-06-2025, 08:33 AM by Albert.)
Dear Albert,
Thank you for your previous response.
As you mentioned, achieving real-time control equivalent to the actual robot seems difficult with Fanuc’s KAREL socket communication due to its sequential execution.
Perhaps this could be resolved in the future if we can utilize Fanuc’s RMI option R912 (Position Data Line Communication, Manual B-84184JA_02).
I would like to confirm the following additional points:
When I double-click a program in the station, the robot executes it. Is there a Python API command that replicates this “double-click” action (i.e., execute on the robot)? My goal is to automate the entire process from program creation to execution on the actual robot.
When transferring a program offline, is it possible to automate the process from transfer to execution on the real robot? Ideally, I would like to accomplish this using the Python API as well.
Thank you for your support, and I look forward to your guidance.
There is a flag called RUNMODE_MAKE_ROBOTPROG_AND_START. However, it is usually supported with collaborative robots but not with Fanuc controllers. You could implement a custom integration: one RoboDK station generates the programs and sends them to the robot, while another station with the driver connected to the robot triggers the program call on the robot.
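A sketch of that two-station idea could look like the following. This is an assumption-laden illustration, not a confirmed recipe: the station and program names are hypothetical, and the numeric fallbacks for the RoboDK constants are only there so the sketch can be dry-run without RoboDK installed.

```python
# Hedged sketch of the two-station integration: one connection generates
# and uploads the vendor program file, a second driver-connected robot
# triggers the program call on the controller by name.
try:
    from robodk.robolink import (RUNMODE_MAKE_ROBOTPROG_AND_UPLOAD,
                                 INSTRUCTION_CALL_PROGRAM)
except ImportError:
    RUNMODE_MAKE_ROBOTPROG_AND_UPLOAD = 4  # assumed fallback value
    INSTRUCTION_CALL_PROGRAM = 0           # assumed fallback value

def generate_and_trigger(rdk_generator, program, robot_driver, prog_name):
    """Generate and upload a program from one station, then ask the
    driver-connected robot in the other station to call it by name."""
    rdk_generator.setRunMode(RUNMODE_MAKE_ROBOTPROG_AND_UPLOAD)
    program.MakeProgram()  # generate the vendor program file (e.g. LS/TP)
    robot_driver.RunInstruction(prog_name, INSTRUCTION_CALL_PROGRAM)
```

How the call reaches the controller (FTP upload, KAREL socket, etc.) depends on the robot setup, so the upload/trigger details would need to be validated on the actual cell.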
You can also trigger a program simulation by calling RunProgram.
For example, if you want to run a program simulation:
Code:
# program.setRunType(PROGRAM_RUN_ON_SIMULATOR) # Programs are already in this state by default
program.RunProgram()
If you want to run it on the real robot using the driver you can do:
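The snippet appears to have been cut off at this point in the post. Based on the simulator snippet above, the run-on-robot variant would presumably look like this (a sketch assuming an active driver connection; the numeric fallback for the constant is an assumption so the sketch can be dry-run without RoboDK installed):

```python
# Counterpart of the simulator snippet: switch the program's run type to
# the connected robot, then launch it.
try:
    from robodk.robolink import PROGRAM_RUN_ON_ROBOT
except ImportError:
    PROGRAM_RUN_ON_ROBOT = 2  # assumed fallback value

def run_on_robot(program):
    """Execute a RoboDK program item on the driver-connected robot."""
    program.setRunType(PROGRAM_RUN_ON_ROBOT)
    return program.RunProgram()
```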
Dear Albert,
Thank you for your response. I tried the suggested approach, but unfortunately, the program still does not run smoothly when executed from RoboDK.
To confirm whether this issue is specific to Fanuc, I also tested with a YASKAWA GP7 robot. However, the result was the same—the motion was not smooth.
Does RoboDK generally have limitations in replicating the same smooth motion as the actual robot when executing programs directly from the PC? If there is a method to run programs from the PC with the same performance as on the real robot, I would greatly appreciate your guidance.
For reference, I have attached the execution file used for YASKAWA.
Thank you for your support, and I look forward to your advice.
Running programs on the robot controller is the most performant option when you look at the rounding effect; on most controllers it outperforms the driver. The main issue is that most robots can't execute a movement command while looking ahead through the program logic and the communication channel.
One exception is ABB which automatically moves the program pointer while the motion command is being executed. Therefore, rounding on an ABB robot works better than other robot controllers.
Thank you very much for sharing the valuable information.
It is encouraging to know that ABB robots may offer the possibility of replicating real-machine behavior more effectively, thanks to their ability to advance the program pointer during motion execution. This insight is highly informative and will be taken into consideration in our future evaluations.
I truly appreciate your support and guidance.