Over the last decade, research on cyber-physical networks and systems has led to smart systems at different scales and in different environments, from smart homes to smart cities and smart factories. Significant progress has been made through contributions in areas as diverse as embedded and real-time systems, robotics and control, wireless communication and networking, signal processing, and machine learning. Despite these advances, it is difficult to measure and compare the utility of these results due to a lack of standard evaluation criteria and methodologies. This problem concerns the evaluation against the state of the art in an individual area, the comparability of different integrated designs that span multiple areas (e.g., control and networking), and the applicability of tested scenarios to present and future real-world cyber-physical applications and deployments. This state of affairs is alarming, as it may significantly hinder further progress in cyber-physical networks and systems research.
The Workshop on Benchmarking Cyber-Physical Networks and Systems – CPSBench – brings together researchers from the different sub-communities to engage in a lively debate on all facets of rigorously evaluating and comparing cyber-physical networks and systems. CPSBench provides a venue for learning about each other’s challenges and evaluation methodologies and for debating future research agendas to jointly define the performance metrics and benchmarking scenarios that matter from an overall system’s perspective.
We invite researchers and practitioners from academia and industry to submit short position papers. We particularly encourage submissions that focus on one of the following:
- identify fundamental challenges and open questions in rigorous benchmarking and evaluation of cyber-physical networks and systems;
- offer a constructive critique of current practice and the state of experimental evaluation;
- report on success stories or failures in using standard evaluation criteria;
- present example benchmark systems and approaches from any of the relevant communities (embedded systems, networking, control, robotics, machine learning, etc.);
- propose new research directions, methodologies, or tools to increase the reproducibility and comparability of evaluation results.
Well-reasoned arguments or preliminary evaluations are sufficient to support a paper’s claims.
Detailed submission instructions are available here.
Authors of accepted papers are expected to present their work at the workshop.
Accepted papers will be published as part of the CPSWEEK proceedings and will be considered for inclusion in the ACM Digital Library or IEEE Xplore.