Python Launcher (python.jar)

The Python launcher allows interfacing with Python (.py) scripts.
It has been tested with Python 2.6


The python.xml file is just a template and must NOT be edited. The system uses it to dynamically build the form that the user fills in from the GUI when creating a custom execution configuration.

Parameters:

  • Test root path
    Indicates where all the .py scripts are located.
    This is a root path: each test in XStudio has a canonical path that is appended to it.
    This path MUST NOT include a trailing slash.
    Default value: C:/my_python_scripts

  • Asynchronous timeout (in seconds)
    Indicates the maximum time the system will wait for a test to complete.
    Default value: 600

  • Python install path
    Indicates where Python is installed on the host.
    Default value: C:/Python26

These values can be changed while creating the campaign session from XStudio.

Note about file path parameters:

Any parameter referring to a file or folder path (for instance Test root path) can be provided using either the \ separator (if the tests are going to be executed on a Windows agent) or the / separator (if the tests are going to be executed on a Linux or macOS agent).

On Windows, if you provide a path containing an OS-localizable folder such as C:\Program Files, always prefer the English version (i.e. NOT C:\Programmes if you're using a French-localized Windows) or the corresponding native environment variable (i.e. %PROGRAMFILES%).


1) Each test in XStudio must have its own dedicated .py script. The name of the script MUST match the name of the test.

2) The .py script must be able to parse the testcaseIndex argument passed at invocation. This allows the script to execute different routines depending on the testcase index.
The interpreter is executed by the launcher using this syntax:

<pythonInstallPath>/python.exe <testRootPath>/<testPath>/<testName>.py /debug
/testcaseIndex=<testcaseIndex> /log=<testName>_<testcaseIndex>.txt
<attribName0>=<attribValue0> <attribName1>=<attribValue1> ... <attribNameN>=<attribValueN>
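For illustration, here is a minimal sketch of how a test script might parse the arguments passed by the launcher. The helper function name is invented for the example; the argument shapes (/debug, /testcaseIndex=..., /log=..., plain name=value attribute pairs) follow the syntax above:

```python
import sys

def parse_launcher_args(argv):
    """Parse launcher arguments of the form /name=value (or bare flags
    such as /debug), plus plain name=value attribute pairs."""
    args = {}
    for token in argv:
        key, sep, value = token.lstrip("/").partition("=")
        # Bare flags such as /debug are stored with the value True
        args[key] = value if sep else True
    return args

if __name__ == "__main__":
    args = parse_launcher_args(sys.argv[1:])
    testcase_index = int(args.get("testcaseIndex", 0))
    log_file = args.get("log", "test.txt")
    # ...dispatch to a different routine depending on testcase_index...
```

A script written this way can branch on testcase_index to run one routine per testcase of the same test.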

3) When the .py script has executed all its actions, it MUST create an empty test_completed.txt file. Indeed, the execution of the Python scripts is asynchronous, and this mechanism allows the launcher to know when the test is completed. A default timeout of 600 seconds (10 minutes) is predefined. If the .py script does not create test_completed.txt within that time, the launcher considers the test to have crashed and skips it.

4) The .py script must generate a <testName>_<testcaseIndex>.txt file during its execution. This file MUST describe all the actions performed by the test as well as the result of each action. This file will be parsed by the launcher and all the information will be passed/stored automatically in the XStudio database. The log file MUST respect a specific format: each line MUST include one of the strings "[Success]", "[Failure]" or "[Log]", otherwise the line will not be processed. Based on this information, the testcase will be marked as passed or failed.


WARNING: if you're running your tests on Windows, it may be required to run the tests as administrator.
Having an account with Administrators permissions may not even be enough in some cases (especially if you're using Windows 10) and you may need to completely disable UAC (User Account Control) on your computer.

To do so:
  • Press the Windows + R key combination
  • Type in regedit
  • Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  • In the right-side pane, look for EnableLUA and set its value to 0
  • Close the registry editor
  • Restart your computer


If your tests are not executed correctly or are reporting only failures, this is very likely because your configuration is incorrect or because you used a wrong naming convention for your tests and test cases.

The best way to quickly find out what's wrong is to look at the traces generated by XStudio (or XAgent).
The traces always include a detailed description of what the launcher performs (command-line execution, script execution, API calls etc.) to run a test case. So, if you experience problems, the first thing to do is to activate the traces and look at what happens when you run your tests.

Then, try to execute manually in a cmd box the exact same commands.
This will normally fail the same way.
At this point, you need to figure out what has to be changed in these commands in order to have them run properly.

When you have something working, compare these commands to what's described in the Process chapter above. This will tell you exactly what you need to change.

Most of the time, this is related to:
  • some incorrect values in some parameters of your configuration,
  • the name of your tests,
  • the name of your test cases,
  • the canonical path of your tests.