Call for Papers
The focus is not so much on computer systems as on the experimental approach to studying them. High-quality papers are solicited on the nine themes identified below. In particular, we encourage papers that advance the methodological aspects of experimental computer science or present new real-world data and observations about computer systems. A major criterion for acceptance will be that the paper contributes to the discourse on the subject; it does not have to be the definitive final word on it.
Note that we intend a rather wide interpretation of the term "systems". This includes not only software-based systems such as operating systems, but also architecture, networking, large-scale applications such as those in AI, and the software engineering methodologies used to build large systems.
In the interest of reproducibility and of advancing the state of the art, it is highly desirable that papers be accompanied by the software and data sets used in the experiments.
Experimental computer systems engineering seeks to understand computer systems by building and measuring artifacts, uncovering behavior that emerges from the inherent complexity of working systems. Papers should report on techniques, insights, and understanding that come from building and using computer systems.
Measurements are used to observe computer systems in action. The question is what to measure, and what metrics will turn raw measurements into useful information. A related question is methodological: how to achieve reliable measurements in the face of noise and errors.
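As a small illustration of this methodological question, consider the following sketch (in Python, with a placeholder workload that stands in for whatever system operation is under study; not tied to any particular measurement tool). It times repeated runs, discards warm-up iterations, and reports a robust summary rather than a single raw number:

    import statistics
    import time

    def work():
        # Placeholder workload standing in for the system operation under study.
        sum(i * i for i in range(100_000))

    def measure(fn, warmup=3, repeats=30):
        # Discard warm-up runs so caches and lazy initialization can settle.
        for _ in range(warmup):
            fn()
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        # Report a robust central value and spread instead of a single number,
        # so that noise and outliers do not dominate the result.
        q1, median, q3 = statistics.quantiles(samples, n=4)
        return {"median_s": median, "iqr_s": q3 - q1, "runs": repeats}

    print(measure(work))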
Performing measurements or simulations correctly and reliably is not an easy task. This theme strives to document best practices that should be followed, and pitfalls that need to be avoided.
Initial experiments are often performed by hand, but we also have the option of automating various processes and collecting extremely rich datasets. Papers may range from descriptions of measurement tools that support sound instrumentation, to tools for automatically flagging anomalous conditions, to tools for the automatic management of experiments, to the construction and use of large-scale experimental infrastructure.
Claims that "approach A is better because of the interaction of foo with bar" may look convincing, but actually proving them requires designing and executing appropriate experiments. This theme emphasizes the study of complex systems, with the goal of achieving a deeper understanding of why they behave the way they do.
Reproducibility is at the heart of the scientific method. However, it is not always easy to achieve, especially in computer systems that may be very sensitive to configuration details. Papers in this theme will report on reproducing previous results, the scope to which those results pertain, the factors that affect them, and experimental methodology that promotes reproducibility.
This theme is designed to encourage collecting data, such as workload data, and making it generally available for research use. In addition, it includes discussion of anonymization (making data available without compromising privacy), data sanitization (e.g. identifying erroneous data and removing unrepresentative outliers), and data manipulation (e.g. how to combine several activity logs to create a realistic "log" with higher load).
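As a toy illustration of the last point (purely a sketch, with made-up record formats rather than any standard workload format), two timestamped activity logs can be merged into a single higher-load log as follows:

    import heapq

    def combine_logs(*logs):
        # Merge several activity logs, each a list of (timestamp, event) tuples
        # sorted by timestamp, into one log carrying the combined load.
        # Assumes all logs share the same time base.
        return list(heapq.merge(*logs, key=lambda record: record[0]))

    # Hypothetical toy logs of (seconds, job id) records.
    log_a = [(0.0, "a1"), (5.0, "a2"), (9.0, "a3")]
    log_b = [(1.0, "b1"), (4.0, "b2")]
    print(combine_logs(log_a, log_b))
    # [(0.0, 'a1'), (1.0, 'b1'), (4.0, 'b2'), (5.0, 'a2'), (9.0, 'a3')]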
It is said that "in theory, there is no difference between theory and practice; in practice, there is". The goal of this theme is to investigate this tension and, in particular, to showcase examples where one approach challenges the other. This includes cases where theory leads to hypotheses that can be checked experimentally, and cases where experimentation questions the assumptions used as a basis for theory.
Last but not least, this theme addresses the need for a cultural change to make the experimental approach more prevalent and more respected. As part of this, it includes the development of courses and curricular materials on experimental computer science.
Papers are generally limited to 12 pages in the usual ACM double-column proceedings format. To reduce the hassle of preparing papers for submission, the submitted version may spill onto a 13th page, with the understanding that the paper will be reformatted to fit in 12 pages upon acceptance.
The above notwithstanding, we will also consider papers of up to 15 pages in the same format. If the reviewers recommend the extra length, the final version of such a paper may run to 14 pages; if the reviewers are not convinced that the extra length is needed, it will have to be shortened to 12 pages.
Note that the 12-page limit and the option for extra pages are meant to accommodate full papers that need the space to describe their experimental details. But full and mature papers, while welcome, are not a requirement; short and concise papers are perfectly acceptable too. If you have something interesting to say about experimental computer science, submit a paper, and don't let page-count issues deter you.