Using the Python API, I am creating a very large number of targets (>10k) from stored data. Example code is below.
During execution, I notice that things start off pretty speedy, but after creating about 2k targets, things slow down dramatically (roughly 10x). I have not done detailed timing profiling, but judging by where execution halts when I pause it with the debugger, it consistently stops on the line `__t.setPose(...)`.
Is there anything I can do to speed up the execution of the setPose method, or am I running into a fundamental limit of Python and I just need to switch to C if I want to speed up further?
Code:

```python
import json

# set target_parent before snippet starts
# __all_dicts has the following structure:
'''
{
    'name_0': {
        'pose':     # a pose
        'metadata': # a dict containing other information to be stored with each target item
    },
    'name_1': {
        'pose':     # a different pose
        'metadata': # another dict with same keys but different values
    },
    ...
}
'''
__i = 0
for __one_name, __one_dict in __all_dicts.items():
    print(str(__i))
    __t = RDK.AddTarget(__one_name, itemparent=target_parent)
    __t.setPose(__one_dict['pose'])
    __t.setParam('metadata', json.dumps(__one_dict['metadata']).encode('utf-8'))
    __i += 1
```
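To confirm where the slowdown actually lives, per-chunk timing would be more reliable than pausing the debugger. Below is a minimal sketch of the approach I have in mind: it times a callable in fixed-size chunks so a gradual degradation shows up as growing chunk averages. The `time_in_chunks` helper and the dummy workload are my own illustration (the real `work` would wrap one `AddTarget`/`setPose` iteration, which needs a running RoboDK instance):

```python
import time

def time_in_chunks(work, n_items, chunk=500):
    """Run work(i) for i in range(n_items) and return the average
    seconds per call for each chunk of `chunk` iterations."""
    averages = []
    start = time.perf_counter()
    for i in range(n_items):
        work(i)
        if (i + 1) % chunk == 0:
            now = time.perf_counter()
            averages.append((now - start) / chunk)
            start = now
    return averages

# Dummy workload standing in for the AddTarget/setPose body;
# with 2000 items and chunks of 500 we get 4 per-chunk averages.
avgs = time_in_chunks(lambda i: sum(range(100)), 2000, chunk=500)
print(len(avgs))  # prints 4
```

If the averages climb steadily across chunks (rather than jumping at one point), that would suggest per-call cost grows with the number of targets in the station rather than a one-off stall.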