Executable Launcher (exe.jar)

The Executable launcher allows interfacing with any executable.


Configuration

The exe.xml file is just a template and must NOT be edited. It is used by the system to dynamically build the form that the user fills in from the GUI when creating a custom execution configuration.

Parameter Description

General
  • Test root path: Indicates where all the .exe files are located. This is a root path: each test in XStudio has a canonical path that will be appended to it. This path MUST not include a trailing slash.
    Default value is: C:/my_executables
  • Synchronous executable: Indicates whether the executable is synchronous or not.
    Default value is: true
  • Asynchronous timeout (in seconds): Indicates the maximum time the system will wait for the test to complete.
    Default value is: 600


These values can be changed while creating the campaign session from XStudio.


Note about file path parameters:

Any parameter referring to a file or folder path (for instance Test root path) can be provided using either the \ separator (if the tests are going to be executed on a Windows agent) or the / separator (if the tests are going to be executed on a Linux or macOS agent).

On Windows, if you provide a path containing an OS-localizable folder such as C:\Program Files, always prefer the English version (i.e. NOT C:\Programmes if you're using a French-localized Windows) or the corresponding native environment variable (i.e. %PROGRAMFILES%).




Process

1) Each test in XStudio must have its own dedicated .exe file. The name of the executable MUST be equal to the name of the test.

2) The .exe file must be able to parse the argument testcaseIndex passed during execution. This allows executing different routines depending on the testcase index.

The test is executed by the launcher using this syntax:
<testRootPath>/<testPath>/<testName>.exe /debug /testcaseIndex=<testcaseIndex>
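As an illustrative sketch (in Python, although the executable can be written in any language), this is the kind of argument parsing the test executable needs to do; the function and test values are hypothetical, not part of XStudio:

```python
# Minimal sketch: extract the testcase index from the command line the
# launcher builds, e.g. "login_test.exe /debug /testcaseIndex=3".
def parse_testcase_index(argv):
    """Return the integer value of a /testcaseIndex=<n> argument, or None."""
    for arg in argv:
        if arg.startswith("/testcaseIndex="):
            return int(arg.split("=", 1)[1])
    return None

# In a real executable you would pass sys.argv[1:] here.
index = parse_testcase_index(["/debug", "/testcaseIndex=3"])
print(index)  # 3
```

The executable can then branch on this index to run the routine matching the selected testcase.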

3) In asynchronous mode, when the .exe has executed all its actions, it MUST create an empty test_completed.txt file. This mechanism lets the launcher know when the test is completed. A timeout is predefined for this: if the executable does not create the test_completed.txt file within the timeout value, the launcher considers that the test has crashed and skips it.
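A minimal sketch of this completion signal, in Python for illustration (the target directory is an assumption; only the file name test_completed.txt is mandated by the launcher):

```python
import os

def signal_completion(directory="."):
    """Create the empty test_completed.txt file the launcher waits for."""
    path = os.path.join(directory, "test_completed.txt")
    open(path, "w").close()  # empty file -- its mere presence is what matters
    return path

# Called as the very last step of the test, after all actions are done.
signal_completion()
```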

4) In synchronous mode, the return code is used to determine whether the test passed or failed: a return code equal to 0 is interpreted as a success; any other value is interpreted as a failure.
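A sketch of this contract, written in Python for illustration (a real test would be compiled to an .exe; run_checks is a hypothetical placeholder for the actual test logic):

```python
def run_checks():
    # Hypothetical check -- replace with the real test logic.
    return 2 + 2 == 4

def main():
    # 0 tells the launcher the test passed; any other value means failure.
    return 0 if run_checks() else 1

# In the real executable you would end with: sys.exit(main())
print(main())  # 0
```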

5) In asynchronous mode, the executable must generate a log.txt file during its execution. This file MUST describe all the actions performed by the test as well as the result of each action. It will be parsed by the launcher and all the information will be passed/stored automatically in the XStudio database. The log.txt file MUST respect a specific format: each line MUST include the string "[Success]", "[Failure]" or "[Log]", or the line will not be processed. Based on this information, the testcase will be marked as passed or failed.
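As a sketch, a test could produce a conforming log.txt like this (the line contents are invented examples; only the "[Success]", "[Failure]" and "[Log]" prefixes matter to the launcher):

```python
# Write a log.txt in the format the launcher parses in asynchronous mode:
# every meaningful line carries [Success], [Failure] or [Log].
lines = [
    "[Log] Starting testcase 3",
    "[Success] Connection established",
    "[Failure] Expected status 200, got 500",
]
with open("log.txt", "w") as f:
    for line in lines:
        f.write(line + "\n")
```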



Permissions

WARNING: if you're running your tests on Windows, it may be required to run the tests as administrator.
Having an account with Administrators permissions may not even be enough in some cases (especially if you're using Windows 10) and you may need to completely disable UAC (User Account Control) on your computer.

To do so:
  • Press the Windows + R key combination
  • Type in regedit
  • Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  • In the right-side pane, look for EnableLUA and set its value to 0
  • Close the registry editor
  • Restart your computer
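For convenience, the same change can be made by importing a .reg file with the following content (this sets the exact key and value listed in the steps above; a restart is still required):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLUA"=dword:00000000
```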



Debug

If your tests are not executed correctly or are reporting only failures, this is very likely because your configuration is incorrect or because you used a wrong naming convention for your tests and test cases.


The best way to quickly find out what's wrong is to look at the traces generated by XStudio (or XAgent).
The traces always include a detailed description of what the launcher performs (command-line execution, script execution, API calls, etc.) to run a test case. So, if you experience problems, the first thing to do is to activate the traces and look at what's happening when you run your tests.


Then, try to execute manually in a cmd box the exact same commands.
This will normally fail in the same way.
At this point, you need to figure out what has to be changed in these commands in order to have them run properly.

When you have something working, compare these commands to what's described in the Process chapter above. This will tell you exactly what you need to change.


Most of the time, this is related to:
  • incorrect values in some parameters of your configuration,
  • the name of your tests,
  • the name of your test cases,
  • the canonical path of your tests.