software testing and analysis; concolic execution; test automation; code coverage; structural testing; data flow testing
Testing is an essential activity in software development, and code coverage is an important technique for assessing the adequacy of test suites and supporting decisions in a quality process. Although most testing managers would likely agree with these considerations, and despite research that has produced many code coverage criteria addressing different testing goals, code coverage has not yet become a consolidated best practice. The main reason for its limited success is the high, often unaffordable, cost of designing test suites with high coverage. Manually inspecting programs to find test cases that exercise non-trivial portions of uncovered code can be extremely expensive. Even worse, a considerable amount of effort may be wasted trying to cover infeasible code elements, which many relevant coverage domains induce in non-negligible numbers. The many techniques that automatically generate test cases to increase code coverage work fairly well for simple metrics, such as statement and branch coverage, but do not cope well with complex metrics, such as data flow coverage criteria.

The DyStaCCo project is grounded in the excellent results of the recent SNF project AVATAR, and aims to progress beyond the current achievements and findings along two major research directions. First, the project will extend the AVATAR results to large programs. This requires extending the approach to cope with inter-procedural program flows, studying alternative dynamic and static analysis techniques to solve performance problems, and defining models and procedures that efficiently integrate the different analysis techniques to overcome the limitations of the current technique. Second, DyStaCCo aims to investigate automatic data flow testing both qualitatively and quantitatively. Data flow testing techniques have not been automated beyond the computation of testing requirements, and have not been studied extensively, because it is difficult to achieve reasonable data flow coverage even for simple programs. Building on the AVATAR experience with automating branch testing, the project aims to automate the generation of test suites that achieve high data flow coverage, thus enabling a quantitative study of these metrics. Extending the use of program analysis to data flow testing will require experimenting with different analysis techniques, and defining new models and techniques that suitably combine them and identify infeasible data flow elements.
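To make the gap between simple and data flow metrics concrete, consider the following minimal sketch (a hypothetical Counter class, not taken from the project), in which definitions and uses of a field occur in different methods:

    class Counter {
        private int value = 0;

        void set(boolean high) {
            if (high) {
                value = 10;   // def d1 of field 'value'
            } else {
                value = -10;  // def d2 of field 'value'
            }
        }

        int get(boolean asIs) {
            if (asIs) {
                return value;   // use u1 of 'value'
            }
            return -value;      // use u2 of 'value'
        }
    }

Two test cases, set(true); get(true) and set(false); get(false), cover every branch of both methods, yet exercise only two of the four def-use pairs of the field value: the pairs (d1, u2) and (d2, u1) remain untested at 100% branch coverage. A data flow adequate suite must pair each definition with each reachable use, and because these pairs span method boundaries, capturing them requires exactly the inter-procedural analysis that the first research direction targets.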
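Infeasible elements, identified above as a major source of wasted effort, arise naturally in data flow domains. In the following sketch (again a hypothetical example), the pair (d1, u1) can never be exercised, because the only path reaching u1 first redefines msg at d2; a tester unaware of this would search for a covering input in vain:

    class Logger {
        static void log(boolean verbose) {
            String msg = "summary";        // def d1
            if (verbose) {
                msg = "detailed report";   // def d2 kills d1 on this path
                System.out.println(msg);   // use u1: only d2 can reach it
            }
            System.out.println(msg);       // use u2: reachable from both d1 and d2
        }
    }

Deciding feasibility of such elements is undecidable in general, which is why the project plans to combine static and dynamic analyses to identify and prune infeasible data flow elements rather than attempt an exact solution.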