End-to-End Test Framework
pyOpenPASS
This tool acts as a configurable executor for complete sets of configs for the openPASS simulation.
The test framework is located at sim/tests/endToEndTests/pyOpenPASS.
Prerequisites
The test framework is based on Python and some additional Python modules.
Installation of the required modules can be accomplished using pip.
Please refer to the file requirements.txt located in the source code repository at sim/tests/endToEndTests/pyOpenPASS for a list of dependencies.
See Installing openPASS for instructions on repository checkout.
The requirements file can be directly passed to pip for installation:
pip install -r requirements.txt
(executed from sim/tests/endToEndTests/pyOpenPASS)
Warning
pip install will try to fetch precompiled packages by default.
If it is unable to locate a binary package for the current environment, the packages are compiled from source.
This step fails for the numpy package when building from within the MSYS2 environment.
Thus, it is recommended to set up a native Windows Python environment and perform the installation there.
To force the usage of a specific Python environment, the variable Python3_EXECUTABLE can be set to the intended Python interpreter executable during CMake configuration (see Installing openPASS).
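For example, the interpreter could be pinned during the CMake configuration step as follows (the interpreter path is purely illustrative and depends on the local installation):

cmake -DPython3_EXECUTABLE=C:/Python311/python.exe <further options> <path to source>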
Execution
As pyOpenPASS is a pytest plugin (and not yet a standalone plugin), it is executed automatically when pytest finds its entry point conftest.py (a local pytest plugin) next to files named test_*.json.
Test files must therefore be copied into the pyOpenPASS directory before execution.
pytest
--simulation=SIMULATION_EXE # path to simulation executable, e.g. /openPASS/bin/opSimulation
--mutual=MUTUAL_RESOURCES_PATH # path to mutual config files for all runs, e.g. /openPASS/bin/examples/common
--resources=RESOURCES_PATH # path from where configs are retrieved - override common files if necessary
--report-path=REPORT_PATH # path to where the report shall be stored
--allowed-warnings=ALLOWED_WARNINGS_PATH # path to file which contains the list of warnings that can be ignored
TEST_FILE # file under test, named `test_*.json`
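For example, a call could look like this (the paths are illustrative and correspond to the example installation used in the Dev Notes below):

pytest \
    --simulation=/openPASS/bin/core/opSimulation \
    --mutual=/openPASS/bin/core/examples/OSS/Common/ \
    --resources=/openPASS/bin/core/examples/OSS/Configurations/ \
    --report-path=/openPASS/reports \
    --allowed-warnings=/repo/sim/tests/endToEndTests/allowed_end_to_end_warnings.txt \
    test_end_to_end.json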
Note
You can use additional pytest arguments, such as -v for verbose output, --collect-only for listing the available tests, and so on (see https://docs.pytest.org).
In addition, pyOpenPASS supports the following optional arguments:
--configs-path=INPUT # path for providing configs during testing
--results-path=OUTPUT # path for collecting test results during testing
--artifacts-path=ARTIFACTS # path for collecting test artifacts during testing
For each specified test_*.json, a corresponding test_*.html will be generated.
Warning
Depending on the names of the config file sets and test cases configured in the JSON file, the paths of the resulting artifacts might exceed the default Windows path length limit.
This limit can be increased by setting the Windows registry value LongPathsEnabled to 1.
The value can be accessed at Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled.
Please note that the limit cannot be disabled completely.
Resulting error messages are often misleading (e.g. File not found although the file actually exists, or errors raised from shutil.py).
Parallel Execution
If pytest-xdist is installed, pyOpenPASS can be invoked with the additional parameter -n auto (or similar; see https://pypi.org/project/pytest-xdist/).
In this case, pyOpenPASS will execute the given tests on n parallel workers.
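For example, re-using the argument template from above (the placeholders are as before):

pytest -n auto --simulation=SIMULATION_EXE --mutual=MUTUAL_RESOURCES_PATH --resources=RESOURCES_PATH --report-path=REPORT_PATH --allowed-warnings=ALLOWED_WARNINGS_PATH TEST_FILE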
Note
Running tests in parallel will result in the report displaying results in an arbitrary order, with only the executed tests listed (disabled tests will not be shown).
Test Configuration
Test configuration is done in the test JSON file, individually for each test. Depending on the user's choice, one of three different test runners is executed:
Determinism: Check executability of configs and test for determinism (one run with n invocations vs. n runs with one invocation each).
Parameterized: Check executability of configs using different parameters.
Query: Execute config and check for specific results in the output of the simulator, given one or more queries.
In general, the test JSON splits into two sections:
Definition of Configuration Sets
Definition of Tests, using the Configuration Sets or a single Config directly
Note
Whenever possible, pyOpenPASS re-uses the results to speed up result analysis.
{
    "config_sets": {
        "Config_Set_1": [                             // user defined name
            "Config_Folder_1",
            "Config_Folder_2"
        ],
        "Config_Set_2": [
            "Config_Folder_2",
            "Config_Folder_3"
        ],
        "Config_Set_3": [
            "Config_Folder_4"
        ]
    },
    "tests": {
        "Execution and Determinism": {
            "config_sets": [
                "Config_Set_1",
                "Config_Set_2",
                "Config_Set_3"
            ],
            "determinism": true,                      // ACTIVATES DETERMINISM
            "duration": 30,                           // how long shall be simulated
            "invocations": 3                          // compare 1x3 run with 3x1 runs
        },
        "Parameterization": {
            "config_sets": [
                "Config_Set_2"
            ],
            "parameterization": {                     // ACTIVATES PARAMETERIZATION
                "file": "systemConfigFmu.xml",        // name of config, which shall be parameterized
                "xpath": "//value[../id='FmuPath']",  // XPath, where values need to be replaced
                "values": [                           // values, which shall be set
                    "resources/FMU1_StaticFMU.fmu",
                    "resources/FMU2_StaticFMU.fmu"
                ],
                "duration": 10,
                "invocations": 100
            }
        },
        "Querying": {
            "config": "Config_Folder_2",              // single config specification
            "queries": [                              // ACTIVATES QUERYING
                "count(AgentId | AgentId == 0 and Timestep == 10000 and VelocityEgo >= 30) == 1",
                "mean(VelocityEgo | AgentId != 0) > 30"
            ],
            "success_rate": 0.8,                      // 80% of 60 invocations need to pass
            "duration": 10,
            "invocations": 60,
            "ram_limit": 512.0,                       // optional RAM limit in MB, measured for each invocation
            "description": "Optional description"
        }
    }
}
If success_rate is specified, its value must be between 0 and 1.
It is also possible to define a range of success (e.g. for excluding 100%) by using the following syntax:
"success_rate": [0.8, 0.99] // 80% to 99% need to pass
If ram_limit is specified, its value is measured in MB.
Querying Results
Internally, pyOpenPASS uses DataFrames to aggregate data. This data is then accessed using a custom query language described below. Before the query is executed, pyOpenPASS gathers data from the relevant simulation output folder.
Typically, the following files are expected:
simulationOutput.xml: This file is the source for events (see below for more details).
Cyclics_Run<run_id>.csv: Here, <run_id> is a placeholder for the number of the corresponding invocation. This file contains cyclic data, such as x-position, y-position, or velocity.
openPASS also allows for independent output of controllers in subfolders, where these subfolders follow the pattern run<run_id>/entity<entity_id>/<controller>.
If pyOpenPASS discovers such subfolders, it will look recursively for CSV files within them.
For every CSV file, pyOpenPASS checks whether the following conditions are satisfied.
If they are, the file is merged with the corresponding Cyclics_Run<run_id>.csv file.
The file must contain a column named Timestep.
Every other column must start with the corresponding entity id, matching the entity id in the subfolder name, for example 00:DetectedObjects.
Note
If a column name follows the pattern <id>:<Prefix>.<ColumnName>, it will be shortened to <ColumnName>.
Warning
pyOpenPASS does not take care of columns with duplicate names. If such columns are found, duplicate names will be suffixed (see here for details).
When merging succeeds, columns from the additional controllers can be queried like every other column in the queries described below.
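To illustrate the renaming and merging described above, here is a minimal pandas sketch. It is not the actual pyOpenPASS implementation; the file names run0/entity00/SomeController/output.csv and Cyclics_Run0.csv, the entity id 0, and the join on Timestep and AgentId are assumptions for the example, and the cyclics are assumed to be in long format.

import re

import pandas as pd


def shorten(column: str) -> str:
    # '<id>:<Prefix>.<ColumnName>' is shortened to '<ColumnName>'; other names stay untouched
    match = re.fullmatch(r"\d+:[^.]+\.(.+)", column)
    return match.group(1) if match else column


# hypothetical controller output of entity 00
controller = pd.read_csv("run0/entity00/SomeController/output.csv")
controller = controller.rename(columns=shorten).assign(AgentId=0)

# merge with the cyclics of the same run (assumed join on Timestep and AgentId)
cyclics = pd.read_csv("Cyclics_Run0.csv")
merged = cyclics.merge(controller, on=["Timestep", "AgentId"], how="left")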
Basic Syntax
[aggregate]([column] | [filter]) [operator] [value]
Aggregate: Everything pandas supports on dataframes, such as pandas.DataFrame.count, min, max, mean
Column: A column on which the aggregate should operate. Columns are generally given by the simulation output's cyclic columns, such as PositionRoute. In addition, the following columns are available:
AgentId
From the tag Agents (see simulationOutput.xml):
AgentTypeGroupName
AgentTypeName
VehicleModelType
DriverProfileName
AgentType
Everything from the tag RunStatistics (see simulationOutput.xml), which is currently:
RandomSeed
VisibilityDistance
StopReason
StopTime
EgoAccident
TotalDistanceTraveled
EgoDistanceTraveled
Filter: A filter based on pandas.DataFrame.filter syntax using the available columns.
Operator: A comparison operator from the following list: ==, <=, >=, <, >, !=, ~= (approximate). The approximate operator allows 1e-6 * value as the maximum deviation from value.
Value: A number
Note
In rare cases, the filter can be skipped, e.g. when ensuring that no agent has been spawned: count(AgentId) == 0.
Example
count(AgentId | PositionRoute >= 800 and Lane != -3) == 0
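The approximate operator, for instance, could be used like this (the column and the threshold are hypothetical):

mean(VelocityEgo | AgentId == 0) ~= 30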
Using Events in Filter
In order to query for a specific event, use #(EVENT) within the filter syntax.
Example
count(AgentId | PositionRoute >= 800 and #(Collision) == True) == 0
Event Payload
Each event is associated with a set of triggering entity ids, affected entity ids, and arbitrary key/value pairs (please refer to the openPASS documentation for details). This information is transformed into a “per agent” scope.
In the following, the Collision event is taken as an example.
TriggeringEntity
All agents flagged as triggering become IsTriggering.
Query: #(Collision):IsTriggering == True
AffectedEntity
All agents flagged as affected become IsAffected.
Query: #(Collision):IsAffected == True
Key/Value Pairs
If an event publishes additional payload with the key XYZ, it can be queried by #(EVENT):XYZ.
Query: #(Collision):WithAgent
Warning
Keys carrying the event name as prefix, such as CollisionWithAgent in #(Collision):CollisionWithAgent, will be stripped of that prefix, so the key is queried as #(Collision):WithAgent.
Query Example
count(AgentId | AgentId == 0 and #(Collision):WithAgent == 1) == 0
Using OpenSCENARIO Events
OpenSCENARIO events are processed in the same manner as regular events (see above).
This allows querying for occurrences of OpenSCENARIO events whose name is specified at the following XPath:
OpenSCENARIO/Story/Act/Sequence/Maneuver/Event/@name
OpenSCENARIO Event Definition
<Story name="TheStory">
  <Act name="TheAct">
    <Sequence name="TheSequence" numberOfExecutions="1">
      ...
      <Maneuver name="TheManeuver">
        ...
        <!-- example name "ttc_event" -->
        <Event name="ttc_event" priority="overwrite">
          ...
          <StartConditions>
            <ConditionGroup>
              <Condition name="Conditional">
                <ByEntity>
                  ...
                  <EntityCondition>
                    <TimeToCollision>
                      ...
                    </TimeToCollision>
                  </EntityCondition>
                </ByEntity>
              </Condition>
            </ConditionGroup>
          </StartConditions>
        </Event>
        ...
      </Maneuver>
    </Sequence>
  </Act>
</Story>
Example openPASS Output
<Event Time="0" Source="OpenSCENARIO" Name="TheStory/TheAct/TheSequence/TheManeuver/ttc_event">
  <TriggeringEntities/>
  <AffectedEntities>
    <Entity Id="1"/>
  </AffectedEntities>
  <Parameters/>
</Event>
Query
count(AgentId | #(TheStory/TheAct/TheSequence/TheManeuver/ttc_event) == True ) > 0
Querying Transitions
Sometimes it is necessary to check whether a transition happened, such as counting the agents passing a certain position.
This can be achieved by shifting individual columns by N time steps.
Time Shift Syntax
Column-Shift => PositionRoute-1 means PositionRoute at one time step earlier
Example Use Case
Counting agents passing PositionRoute == 350 on LaneId == -1
Query
count(AgentId | LaneId == -1 and PositionRoute-1 < 350 and PositionRoute >= 350 ) > 0
Warning
In rare cases, a result column happens to have a name like Name-N, where N is an integer.
Querying such a column would automatically apply time shifting (the default behavior), leading to a parsing error.
In such cases, escape the column name with single quotes (e.g. 'Name-1').
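Conceptually, the column shift corresponds to a per-agent shift of the cyclic data by one time step. The following pandas sketch merely illustrates that idea on made-up data; it is not the actual pyOpenPASS implementation and assumes long-format cyclics with AgentId, Timestep, LaneId and PositionRoute columns:

import pandas as pd

# made-up cyclic data for two agents (illustration only)
cyclics = pd.DataFrame({
    "AgentId":       [0, 0, 0, 1, 1, 1],
    "Timestep":      [0, 100, 200, 0, 100, 200],
    "LaneId":        [-1, -1, -1, -1, -1, -1],
    "PositionRoute": [340, 348, 352, 330, 338, 346],
})

# 'PositionRoute-1': PositionRoute one time step earlier, evaluated per agent
cyclics["PositionRoute-1"] = cyclics.groupby("AgentId")["PositionRoute"].shift(1)

# agents passing PositionRoute == 350 on LaneId == -1
passed = cyclics.query("LaneId == -1 and `PositionRoute-1` < 350 and PositionRoute >= 350")
print(passed["AgentId"].unique())  # [0]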
Querying Spawning Time
Queries can be restricted to the spawning time:
Query
count(AgentId | Timestep == {first} and Velocity < 30) == 0
Warning
Timestep == {first} must be the first condition in the filter and can only be succeeded by and.
Explicit Datatypes
pyOpenPASS uses Pandas DataFrames internally. Pandas will try to detect the datatype of the individual cyclic columns automatically. This won’t fit the user’s intention in some cases, such as when the column holds a semicolon separated list of integers but every list contains just one element. In such cases it is impossible to distinguish between integers and strings based on the data.
For this reason, datatypes can be specified explicitly along with a query:
"queries": [ ... ],
"datatypes": {
"Sensor0_DetectedAgents": "str" // string with "missing value" support
}
Dev Notes
If you want to execute or debug pyOpenPASS in VS Code, you can add a configuration similar to the one shown below to the launch.json after opening pyOpenPASS as a VS Code project:
"configurations": [
{
"name": "pytest-openpass",
"type": "python",
"module": "pytest",
"args": [
"--simulation=/openPASS/bin/core/opSimulation",
"--mutual=/openPASS/bin/core/examples/OSS/Common/",
"--resources=/openPASS/bin/core/examples/OSS/Configurations/",
"--report-path=/openPASS/reports",
"--allowed-warnings=/repo/sim/tests/endToEndTests/allowed_end_to_end_warnings.txt",
"test_end_to_end.json",
"-v"],
"request": "launch",
"console": "integratedTerminal"
}]