Publication


An empirical investigation on the readability of manual and generated test cases

Type of publication Proceedings (peer-reviewed)
Authors Grano Giovanni, Scalabrino Simone, Gall Harald C., Oliveto Rocco
Project SURF-MobileAppsData

Proceedings (peer-reviewed)

Editors Khomh Foutse; Siegmund Janet; Roy Chanchal K.
Page(s) 348 - 351
Title of proceedings Proceedings of the 26th Conference on Program Comprehension, ICPC 2018
DOI 10.1145/3196321.3196363

Open Access

URL https://www.zora.uzh.ch/id/eprint/150985/
Type of Open Access Repository (Green Open Access)

Abstract

Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, many approaches and tools for automatic test case generation, such as EvoSuite, have been introduced in recent years. Nevertheless, developers have to maintain and evolve tests to keep pace with changes in the source code; therefore, readable test cases make this process easier. However, it is still not clear whether developers make an effort to write readable unit tests. Therefore, in this paper we conduct an exploratory study comparing the readability of manually written test cases with that of the classes they test. Moreover, we extend this analysis by examining the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.
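
For illustration only (this example is not taken from the paper), the readability gap the study investigates can be seen by contrasting a hand-written JUnit test with the style typically produced by a generator such as EvoSuite. The class under test (java.util.Stack) and all test names below are chosen for this sketch.

import static org.junit.Assert.assertEquals;

import java.util.Stack;
import org.junit.Test;

public class StackReadabilityExample {

    // Manually written test: descriptive name, meaningful data, focused assertions.
    @Test
    public void pushThenPopReturnsLastPushedElement() {
        Stack<String> stack = new Stack<>();
        stack.push("first");
        stack.push("second");
        assertEquals("second", stack.pop());
        assertEquals(1, stack.size());
    }

    // Style typical of generated tests: opaque name, arbitrary literals,
    // assertions on whatever state the generator happened to reach.
    @Test
    public void test04() {
        Stack<Integer> stack0 = new Stack<>();
        stack0.push(-1);
        stack0.push(-1);
        Integer integer0 = stack0.pop();
        assertEquals(-1, (int) integer0);
        assertEquals(1, stack0.size());
    }
}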