In general, the time a single test takes is dominated by one or more of four factors:

1) Acquisition time. How to reduce it depends on your goal:

Acquisition) Reducing the number of samples speeds up acquisition. If the interesting region is not at the start of the trace, use the trigger offset parameter to capture only the relevant region.

Perturbation) If you want or need to measure actual samples, see Acquisition) above. If not, set the Oscilloscope to SineScope and set the number of samples to 1; this almost completely removes the acquisition overhead. A sketch of both approaches follows below.
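As a rough illustration, assuming a generic Python acquisition framework (the `scope` handle and all method names here are hypothetical, adapt them to your tooling):

```python
scope = get_scope()                  # hypothetical: returns a configured scope handle

# Acquisition goal: capture only the region of interest, not the full trace.
scope.set_trigger_offset(100_000)    # skip samples before the interesting region
scope.set_number_of_samples(5_000)   # capture just the relevant window

# Perturbation goal: samples are not needed, so make acquisition trivially cheap.
scope.set_source("SineScope")        # dummy source instead of a real capture
scope.set_number_of_samples(1)       # minimal acquisition overhead
```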


2) Communication time. This depends on how you communicate with the target, but can often be reduced by:
2a) Building one long command and sending it in one go instead of many single commands (see the sketch after this list)
2b) Sending as few commands as possible
2c) Reducing the communication latency. For COM ports (such as FTDI) this can be done in Windows via Device Manager > right-click the COM port > Port Settings > Advanced... > BM Options > Latency Timer > set as low as possible (usually 1 ms)
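A minimal sketch of 2a), assuming a pyserial connection; the port name, command framing, and response length are placeholders for whatever protocol your target speaks:

```python
import serial

RESP_LEN = 16                                   # assumption: fixed-length responses
inputs = [bytes([i]) * 16 for i in range(256)]  # assumption: example test vectors

def build_command(data: bytes) -> bytes:
    """Hypothetical framing: one opcode byte followed by the payload."""
    return b"\x01" + data

ser = serial.Serial("COM3", 115200, timeout=1)

# Slow: one round-trip (and one latency-timer wait) per command.
responses = []
for data in inputs:
    ser.write(build_command(data))
    responses.append(ser.read(RESP_LEN))

# Faster: concatenate all commands into a single write, then read
# every response back in one go.
ser.write(b"".join(build_command(data) for data in inputs))
responses = ser.read(RESP_LEN * len(inputs))
```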
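If you drive the FTDI chip directly instead of through the virtual COM port, the latency timer can also be set programmatically; a sketch using pyftdi (the device URL is an assumption, adjust it to your adapter, and note that pyftdi enforces its own lower bound on the value, so the driver GUI is still the way to reach 1 ms):

```python
from pyftdi.ftdi import Ftdi

ftdi = Ftdi()
ftdi.open_from_url("ftdi://ftdi:2232h/1")    # assumption: adjust to your device
ftdi.set_latency_timer(Ftdi.LATENCY_MIN)     # milliseconds; lowest value pyftdi allows
```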
 
3) Reset time. Often this overhead can be reduced by:
3a) Only resetting if the device crashed or was glitched. Be careful: this may reduce device stability and test reproducibility (see the sketch after this list)
3b) Reducing the boot time of the device. This can often be done by only resetting the relevant power domain
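A sketch of 3a), with a hypothetical `target` object providing glitch, liveness, and reset helpers:

```python
def run_attempt(target, params):
    """Perform one glitch attempt, paying the reset cost only when needed."""
    result = target.glitch(params)       # hypothetical: perform the perturbation

    # Reset only when the device stopped responding or its state was
    # visibly corrupted; otherwise skip the (slow) boot sequence.
    if result.crashed or result.glitched or not target.is_alive():
        target.reset()                   # hypothetical: power-cycle / reboot
    return result
```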
 
4) Script execution time. This should be pretty straightforward: sleeps, long device initialization, and similar work inside your run will slow down the process; a sketch follows below
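For example, hoisting one-time setup out of the loop and replacing fixed sleeps with polling (all `target` helpers and timings here are hypothetical):

```python
import time

target = connect_target()        # hypothetical: open the connection ONCE
target.initialize()              # hypothetical: expensive init, not per iteration

results = []
for params in test_parameters:   # assumption: iterable of test settings
    target.arm(params)
    # Poll for readiness instead of sleeping a fixed worst-case duration.
    deadline = time.monotonic() + 0.5
    while not target.is_ready() and time.monotonic() < deadline:
        pass
    results.append(target.run())
```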