Updated: Apr 17
ROS 2 is a heterogeneous, complex system that depends on and consists of several interacting components. Its performance depends heavily on the hardware, the operating system, the underlying DDS implementation, and the application on top.
Recently we have seen multiple attempts at creating performance reports for ROS 2 that were plainly incorrect or largely incomplete (1, 2, 3, 4). Such practice is unfair and misleading, and it can hurt businesses. Other domains have a plethora of publicly available performance evaluation systems, such as PascalVOC and Kitti, which enable a consistent and fair evaluation process; we should follow these established practices.
In the paper, we first explain the terminology relevant to performance testing, with emphasis on the parameters that influence ROS 2 performance the most. We then present a standardized experiment setup and discuss a working implementation of an end-to-end performance testing system that guarantees a standardized, unbiased, and reproducible evaluation.
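To give a flavor of what an end-to-end performance test measures, here is a minimal sketch of message-latency measurement. It is not the system from the paper: plain Python queues and threads stand in for ROS 2 topics and nodes (an assumption made purely for illustration), but the core idea is the same — stamp each message at publish time and compute the end-to-end delay at receipt.

```python
import statistics
import time
from queue import Queue
from threading import Thread


def publisher(q: Queue, n_msgs: int) -> None:
    # Stamp each message at send time, as a real publisher node would.
    for i in range(n_msgs):
        q.put((i, time.perf_counter()))
        time.sleep(0.001)  # crude stand-in for a fixed publish rate
    q.put(None)  # sentinel: end of stream


def subscriber(q: Queue, latencies: list) -> None:
    # Record end-to-end latency at the moment each message is received.
    while (msg := q.get()) is not None:
        _seq, sent = msg
        latencies.append(time.perf_counter() - sent)


q: Queue = Queue()
latencies: list = []
pub = Thread(target=publisher, args=(q, 100))
sub = Thread(target=subscriber, args=(q, latencies))
sub.start()
pub.start()
pub.join()
sub.join()

print(f"messages received: {len(latencies)}")
print(f"median latency: {statistics.median(latencies) * 1e6:.1f} us")
```

A real ROS 2 test would additionally sweep the parameters discussed in the paper (DDS vendor, QoS settings, message size, publish frequency) and report distributions rather than a single number.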
To read the full paper, proceed here.