Project


ASTERIx: Automatic System TEsting of inteRactive software applIcations

English title ASTERIx: Automatic System TEsting of inteRactive software applIcations
Applicant Pezzè Mauro
Number 178742
Funding scheme Project funding (Div. I-III)
Research institution Facoltà di scienze informatiche Università della Svizzera italiana
University Università della Svizzera italiana - USI
Main discipline Computer Science
Start/End 01.04.2018 - 31.03.2022
Approved amount 1'045'800.00

Keywords (6)

testing in-the-field; testing concurrent software systems; software engineering; GUI testing; test oracles; software testing

Lay Summary (Italian)

Lead
ASTERIx: Automatic System TEsting of inteRactive software applIcations
Lay summary
In brief

Software is the engine of the countless applications we interact with through browsers, mobile devices and network-connected devices. Failures of these applications can disrupt everyday life, from minor incidents, such as the temporary malfunction of a web or mobile application, to serious problems: stranding us at an airport, cutting off our means of payment, blocking access to essential data, causing delays and breakdowns in production lines and transport, and inflicting severe damage on key economic sectors. Testing mobile applications is essential to detect and remove defects during development and to prevent failures in operational environments. The complexity and characteristics of these applications strain current testing technologies and call for new approaches able to verify the new features and properties of applications and devices.

Objectives

The goal of the project is to investigate new techniques for the automatic testing of interactive applications, able to exercise the most relevant aspects of these applications and to reveal defects and problems before they can cause failures and disruptions in everyday life. Fully automating testing, through the automatic exploration of the many interaction modes, the automatic detection of unexpected behaviours and results, and the systematic analysis of parallel interactions with and among interactive applications, will enable complete and coherent testing of these applications at very low cost, in line with the development, evolution and commercial distribution needs of these applications.
Last update: 30.03.2018

Responsible applicant and further applicants

Employees

Project partners

Associated projects

Number Title Start Funding scheme
162409 ASysT: Automatic System Testing 01.10.2015 Project funding (Div. I-III)

Abstract

In this project, we will investigate the problem of testing interactive software applications. We will define and develop a holistic approach to automatically test such applications and to reveal and remove both unavoidable and emerging bugs, reducing the impact of software failures.

Interactive applications are software systems that provide services to people who dialog with the applications through different kinds of interfaces. They are deployed as concurrent desktop applications and as distributed web and mobile applications, which interact with people through graphical user interfaces (GUIs) and wearable devices. They are popular in many domains, including retail, education, management, finance and entertainment. Interactive applications are commonly designed as concurrent systems that combine shared-memory, message-passing and event-driven paradigms. Failures of interactive applications are unavoidable due to the combination of many factors: the complexity of the testing process, the limitations of current testing practices, the presence of concurrency failures that manifest non-deterministically, the heterogeneity of interactions with people and the environment, and the presence of execution conditions that emerge in the field and are impossible to reproduce and test before deployment.

Failures may severely impact the business value of the applications and may lead to considerable economic loss. State-of-the-art testing practices do not address all the challenges of preventing failures of interactive applications: (i) approaches for testing interactive applications sample the execution space mostly by referring to the structure of the interface, largely ignoring the application semantics, and the problem of testing concurrent messages and events exchanged with wearable devices has been only marginally studied so far; (ii) most approaches for testing concurrent systems target shared-memory systems, and only few approaches consider the concurrency issues that derive from the message-passing and event-driven paradigms commonly used in interactive systems; (iii) cost-effective test oracles catch mostly system crashes and regression failures, missing many semantically relevant problems; (iv) testing approaches work before deployment and hardly deal with problems that emerge at runtime.

In this project we will define and develop an effective and coherent approach for testing interactive applications, addressing four main open issues that hinder the automatic testing of such applications:

- System testing: generating system test cases that exercise interactive systems by interacting with the applications through GUIs, mobile and wearable devices. We will consider both semantically related aspects of user interactions and concurrency scenarios that emerge from event-driven interactions. We will investigate dynamic model inference techniques for generating system test cases from implicitly available knowledge, and probabilistic analysis and machine learning techniques to learn from system executions and to test the effects of imprecise and noisy data from wearable devices.
- Concurrency testing: generating test cases and event interleavings for message-passing and event-driven concurrent systems. We will define concurrency testing techniques that address concurrency failures for message-passing and event-driven systems, and that generate test cases and event interleavings to exercise the interplay among different concurrency paradigms.
- Test oracles: generating semantically relevant oracles. We will define techniques to generate test oracles from information provided as natural-language and semi-structured comments and annotations, by exploiting natural language processing, and from knowledge that becomes incrementally available while executing the application, by exploiting dynamic model inference and analysis.
- Field testing: generating system test cases for field testing. We will define approaches that dynamically analyse information from field executions to identify emerging execution conditions and unpredictable environment interactions, and that generate test cases to be executed online to verify the new execution conditions.

We will study and evaluate these approaches in an integrated and coherent framework for the automatic testing of interactive applications.
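To illustrate the flavour of the dynamic model inference mentioned under system testing, the following is a minimal, hypothetical sketch (not the project's actual technique): it infers a finite-state model of a GUI from observed event traces, then generates a test sequence by walking the inferred model. All event and state names are invented for illustration.

```python
import random

def infer_model(traces):
    """Infer a state-transition map (state -> {event: next_state})
    from observed traces of (event, resulting_state) pairs."""
    model = {}
    for trace in traces:
        state = "start"
        for event, next_state in trace:
            model.setdefault(state, {})[event] = next_state
            state = next_state
    return model

def generate_test(model, max_len=5, seed=0):
    """Generate one test as a random walk over the inferred model."""
    rng = random.Random(seed)
    state, test = "start", []
    for _ in range(max_len):
        choices = model.get(state)
        if not choices:  # no known events from this state: stop
            break
        event = rng.choice(sorted(choices))
        test.append(event)
        state = choices[event]
    return test

# Illustrative traces recorded from a hypothetical GUI
traces = [
    [("open_menu", "menu"), ("click_save", "saved")],
    [("open_menu", "menu"), ("click_cancel", "start")],
]
model = infer_model(traces)
print(generate_test(model, max_len=4, seed=1))
```

Real approaches would enrich such models with semantic information, probabilities learned from executions, and concurrency scenarios; this sketch only shows the basic infer-then-generate loop.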