# How to benchmark Beluga

## Run rosbag benchmark parameterizing the number of particles

This script will run the benchmark with the specified numbers of particles and record:

- The `timem` output: CPU usage, RSS memory, virtual memory.
- A rosbag with the reference and estimated trajectories.
- Perf events (optionally).

To run, use:

```bash
ros2 run beluga_benchmark parameterized_run <N_PARTICLES> [<N_PARTICLES> ...]
```

The results of the different runs will be stored in folders named `benchmark_${N_PARTICLES}_particles_output`, where `N_PARTICLES` are the numbers specified in the above command.

To run the same experiment using another AMCL node, e.g. nav2, use:

```bash
ros2 run beluga_benchmark parameterized_run <N_PARTICLES> [<N_PARTICLES> ...] --package nav2_amcl --executable amcl
```

For other options, e.g. using a different rosbag, see:

```bash
ros2 run beluga_benchmark parameterized_run --help
```

A complete session combining these commands is sketched in the worked example at the end of this page.

## Visualizing timem results of one run

Use the following command:

```bash
ros2 run beluga_benchmark timem_results <DIRECTORY_PATH>
```

where the specified directory must contain a `timem-output.json` file, e.g. an output directory generated by `parameterized_run`.
The script will print CPU usage, peak RSS, and elapsed time.
It will also plot virtual and RSS memory over time.

## Visualizing estimated trajectory APE of one run

Use the following command:

```bash
evo_ape bag2 <ROSBAG_PATH> /odometry/ground_truth /pose -p
```

For `nav2_amcl`, replace `/pose` with `/amcl_pose`.
The bagfiles generated by `parameterized_run` can be found in the generated output directories for each run.

This will print APE metrics (mean, median, max, std, rmse, etc.) and also plot APE over time.

## Comparing parameterized runs

The following command compares the results of different benchmarking runs in a single plot for each metric being measured.
This can be used to compare different `beluga_amcl` and/or `nav2_amcl` runs, or to compare the same node with different base configuration settings.

The command is:

```bash
ros2 run beluga_benchmark compare_results -s <PATH1> -l <LABEL1> -s <PATH2> -l <LABEL2> ...
```

where `PATH1` and `PATH2` are the paths to the output directories of the benchmarking runs to compare, and `LABEL1` and `LABEL2` are the labels to use in the plot for each of them.
Any number of runs can be added to the same plot by providing additional `-s <PATH> -l <LABEL>` pairs.
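
## Worked example

The session below strings the above commands together as a sketch rather than a prescribed workflow: the particle counts and the `beluga_runs`/`nav2_runs` directory names are illustrative, and it is assumed here that the `benchmark_${N_PARTICLES}_particles_output` folders are created in the current working directory, which is why each node gets its own directory.

```bash
# Benchmark beluga_amcl with a few (illustrative) particle counts.
mkdir -p beluga_runs && cd beluga_runs
ros2 run beluga_benchmark parameterized_run 500 1000 2000
cd ..

# Repeat with nav2_amcl, in a separate directory so the output
# folders of the two nodes do not overwrite each other.
mkdir -p nav2_runs && cd nav2_runs
ros2 run beluga_benchmark parameterized_run 500 1000 2000 --package nav2_amcl --executable amcl
cd ..
```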
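
Each invocation leaves one folder per particle count, following the `benchmark_${N_PARTICLES}_particles_output` naming pattern described above. To inspect the resource usage of a single run:

```bash
# CPU usage, peak RSS, and elapsed time of the 1000-particle beluga run,
# plus a plot of virtual and RSS memory over time.
ros2 run beluga_benchmark timem_results beluga_runs/benchmark_1000_particles_output
```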
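
The trajectory accuracy of the same run can be checked with `evo_ape`. The exact name of the bag recorded inside the output folder is not fixed here, so `<ROSBAG_PATH>` below stands for whatever bag directory you find inside the run's output directory:

```bash
# APE of the estimated trajectory against ground truth (beluga_amcl).
evo_ape bag2 beluga_runs/benchmark_1000_particles_output/<ROSBAG_PATH> /odometry/ground_truth /pose -p

# Same check for the nav2_amcl run; note the different pose topic.
evo_ape bag2 nav2_runs/benchmark_1000_particles_output/<ROSBAG_PATH> /odometry/ground_truth /amcl_pose -p
```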
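
Finally, the two series can be overlaid with `compare_results`. Whether `-s` expects the directory holding a whole series of `benchmark_*_particles_output` folders, as assumed below, or an individual run folder is something to double-check against the script's help output:

```bash
# One plot per metric, with both nodes labeled in the legend.
ros2 run beluga_benchmark compare_results \
  -s beluga_runs -l beluga \
  -s nav2_runs -l nav2
```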