Formula Student Autonomous Systems
The code for the main driverless system
PacSim (Planning and Controls Simulator) is a simulator for Formula Student Driverless competitions originally developed at Elbflorace.
Example of a pipeline running in PacSim with visualizations in Foxglove:

This package is developed and tested on Ubuntu 22.04 with ROS2 Iron.
Install dependencies:
sudo apt install ros-iron-desktop ros-iron-xacro
To use PacSim with your autonomous system, you need to create a message converter node to match your own interfaces with the simulator.
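Such a converter node is usually a thin mapping layer between the simulator's topics and your own message types. As a hedged sketch (the field names and units below are illustrative assumptions, not pacsim's actual message definitions), the core of the conversion might look like:

```python
import math

# Illustrative only: pacsim's actual message fields and units may differ.
def convert_wheel_speeds(sim_msg: dict) -> dict:
    """Map a simulator wheel-speed message (rad/s, hypothetical field
    names) into a hypothetical team-side format (RPM)."""
    to_rpm = 60.0 / (2.0 * math.pi)
    return {wheel: omega * to_rpm for wheel, omega in sim_msg.items()}
```

The converter node itself would subscribe to the simulator topic, apply a mapping like this in the callback, and republish in your own message type.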
We provide an example launch file (example.launch.py) that shows how to start the simulator node and the robot_state_publisher for the 3D visualization of the car.
The sensors and vehicle model are configured using config files; examples are provided in the config folder. Settings such as the discipline and the paths of the track and config files are defined using ROS2 parameters.
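For orientation, a sensor config fragment might look like the following; the actual keys and structure are defined by the files in the config folder, so treat this purely as a hypothetical illustration:

```yaml
# Hypothetical sensor config fragment -- check the config folder
# for the real keys and structure.
perception_sensor:
  rate: 10.0            # Hz
  range: 20.0           # m
  fov: 110.0            # deg
  noise:
    position_stddev: 0.05   # m
```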
The default vehicle model provided is rather simple and just meant to be a starting point. You are encouraged to integrate your own vehicle model by implementing the IVehicleModel class.
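To illustrate the shape of such an integration (the real IVehicleModel is a C++ interface in the simulator; the method names and the Python rendering below are assumptions for the sketch), a minimal kinematic bicycle model behind an interface could look like:

```python
import math
from abc import ABC, abstractmethod

# Hedged sketch: method names are illustrative, not pacsim's actual API.
class VehicleModel(ABC):
    @abstractmethod
    def step(self, steering: float, accel: float, dt: float) -> None: ...

class KinematicBicycle(VehicleModel):
    """Minimal kinematic bicycle model as a starting point."""
    def __init__(self, wheelbase: float = 1.55):
        self.x = self.y = self.yaw = self.v = 0.0
        self.wheelbase = wheelbase

    def step(self, steering: float, accel: float, dt: float) -> None:
        # Integrate pose with the current velocity, then update velocity.
        self.x += self.v * math.cos(self.yaw) * dt
        self.y += self.v * math.sin(self.yaw) * dt
        self.yaw += self.v / self.wheelbase * math.tan(steering) * dt
        self.v += accel * dt
```

A more faithful model (tire forces, load transfer, powertrain) would keep the same interface and only change what happens inside `step`.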
A Dockerfile that already contains all the dependencies is provided. It can be used with a dev container environment or launched independently with the docker-compose file. For more info, check the docs folder.
Contributions in any form (reports, feedback, requests, submissions) are welcome. Preferably create an issue or pull request for that.
The project also has a Discord server.
The initial version was developed at Elbflorace by:
As mentioned above, pacsim generates a final report after every simulation containing metrics that allow us to evaluate performance and validate changes. Currently, the report shows the following:
These metrics can be split into groups to evaluate each subsystem's performance. We can evaluate the following subsystems:
Perception cannot be evaluated directly unless we add more data to the report. Right now the only way to evaluate perception is to find errors in other subsystems that depend on it. Possible data we could add to the report in order to properly evaluate perception:
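Assuming the report were extended with detected and ground-truth cone positions (a suggested addition, not something pacsim currently emits), perception could then be scored with standard detection metrics, for example:

```python
import math

def cone_detection_metrics(detected, ground_truth, match_radius=0.5):
    """Greedy nearest-neighbour matching of detected cones (x, y) to
    ground-truth cones. Returns (true_pos, false_pos, false_neg)."""
    unmatched = list(ground_truth)
    tp = 0
    for d in detected:
        best = None
        for g in unmatched:
            dist = math.hypot(d[0] - g[0], d[1] - g[1])
            if dist <= match_radius and (best is None or dist < best[0]):
                best = (dist, g)
        if best is not None:
            unmatched.remove(best[1])  # each ground-truth cone matches once
            tp += 1
    return tp, len(detected) - tp, len(unmatched)
```

Precision and recall follow directly from these counts, which would make perception separable from the downstream subsystems.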
To validate state estimation, we will use the same approach as for path planning; in this case we can add a few metrics:
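One such metric, assuming the report logged estimated and ground-truth poses at matching timestamps (again an assumed extension), is a simple position RMSE:

```python
import math

def pose_rmse(estimated, ground_truth):
    """RMSE between estimated and ground-truth 2D positions,
    sampled at the same timestamps."""
    assert len(estimated) == len(ground_truth)
    sq_errors = [(e[0] - g[0]) ** 2 + (e[1] - g[1]) ** 2
                 for e, g in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```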
Almost every metric is used to evaluate control: if the car goes off-course, hits cones, incurs penalties, or the run time increases, control could be the issue.
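A control-oriented metric that could be computed offline from logged data (assuming the reference path and driven trajectory are both available as point lists) is the maximum cross-track error:

```python
import math

def max_cross_track_error(path, trajectory):
    """Largest distance from the driven trajectory to the reference path.
    Point-to-point approximation; projecting onto path segments would be
    more precise. Both inputs are lists of (x, y) points."""
    return max(
        min(math.hypot(px - tx, py - ty) for px, py in path)
        for tx, ty in trajectory
    )
```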
Data Infrastructure and Simulation & Validation cannot be evaluated with this simulator.
The current report generated by pacsim contains a lot of interesting information. Realistically, however, it is very hard to separate one subsystem from another purely based on the current data: most subsystems are so interdependent that it is almost impossible to detect whether the error is in one or the other. Adding the suggested metrics could allow separating perception from the rest. Generating automated tests is possible, but analyzing the results will be paramount to ensure we can detect which subsystem is failing.