ERASMUS+ Traineeship Offers, Academic Year 2019-2020

Below is a list of project topics for an ERASMUS+ traineeship offered by the software engineering research group in 2019, for students who intend to defend their MSc thesis in June 2020.


Software Competence Center Hagenberg GmbH (SCCH)
in Hagenberg (near Linz), Austria

Send expression of interest to:
Dietmar Pfahl (dietmar [dot] pfahl [at] ut [dot] ee)

The expression of interest should contain the following:

  • Topic of interest to you (see below)
  • A description of your skills and why you are interested in the topic
  • Your degree program / specialization
  • Your grades in the courses of the core module and specialization

Theme: Applying AI Techniques in the Testing of (Embedded) Software Systems

Topic 1:
Mutation testing has gained new interest in industry due to advances in automatic mutant generation [1]. However, growing test suite sizes and large numbers of auto-generated mutants make it impractical to run all tests on all mutants in order to identify the mutants that will not be killed. Experiments at SCCH have shown that running all tests on all mutants takes calendar weeks even on high-performance hardware [2]. Undetected mutants reveal the actual strength of a test suite and indicate which kinds of tests should be added to improve it. Building upon previous research [3], an ML-based baseline approach will be designed to train a model that predicts which mutants will be killed, making exhaustive execution of the test suite on all mutants unnecessary.

  1. Goran Petrovic and Marko Ivankovic (2018) State of Mutation Testing at Google. In ICSE-SEIP ’18: 40th International Conference on Software Engineering: Software Engineering in Practice Track, May 27-June 3, 2018, Gothenburg, Sweden. ACM, New York, NY, USA, 9 pages.
  2. Rudolf Ramler, Thomas Wetzlmaier, and Claus Klammer (2017) An empirical study on the application of mutation testing for a safety-critical industrial software system. In Proceedings of the Symposium on Applied Computing (SAC '17). ACM, New York, NY, USA, 1401-1408.
  3. Jie Zhang, Ziyi Wang, Lingming Zhang, Dan Hao, Lei Zang, Shiyang Cheng, Lu Zhang (2016) Predictive Mutation Testing. In: Proceedings of the 25th International Symposium on Software Testing and Analysis (ISSTA 2016), July 18-20, 2016, Saarbrücken, Germany.
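The core idea of Topic 1 can be illustrated with a minimal sketch, assuming hypothetical feature vectors and labels (the actual features would come from coverage and mutant metadata, as in predictive mutation testing [3]): a simple classifier learns from mutants whose kill status is already known and predicts the status of new mutants without executing the test suite.

```python
# Minimal sketch of predictive mutation testing (hypothetical data and
# features). Each mutant is described by a small feature vector; a
# 1-nearest-neighbour baseline predicts whether it would be killed,
# so the full test suite need not be executed against it.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_killed(train, labels, mutant):
    # Predict the label of the closest known mutant (1-NN baseline)
    nearest = min(range(len(train)), key=lambda i: distance(train[i], mutant))
    return labels[nearest]

# Hypothetical features: (num. covering tests, operator id, nesting depth)
train = [(12, 0, 1), (0, 1, 3), (7, 0, 2), (1, 2, 4)]
labels = [True, False, True, False]   # True = mutant was killed

print(predict_killed(train, labels, (10, 0, 1)))
```

A real baseline would use richer features and a stronger learner, but the workflow — train on executed mutants, predict for the rest — stays the same.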

Topic 2:
A popular approach to increase the effectiveness of test suites is the use of automatically generated test data (e.g., in the context of random testing and combinatorial testing) [1]. To address the problem of the missing test oracle, Machine Learning techniques can be used to support the (semi-) automatic generation of test oracles [2].

  1. J. D. Hagar, T. L. Wissink, D. R. Kuhn, R. N. Kacker (2015) Introducing Combinatorial Testing in a Large Organization. IEEE Computer, (4), 64-72.
  2. Huai Liu, Fei-Ching Kuo, Dave Towey, and Tsong Yueh Chen (2014) How Effectively Does Metamorphic Testing Alleviate the Oracle Problem?. IEEE Trans. Softw. Eng. 40, 1 (January 2014), 4-22.
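As a toy illustration of how metamorphic testing sidesteps the oracle problem [2], consider checking a mathematical function without knowing its exact expected outputs: instead of an oracle value, we verify a relation between outputs for related inputs (the relation chosen here is just an illustrative example).

```python
import math

# Minimal sketch of a metamorphic test: rather than comparing sin(x)
# against a precomputed expected value (the oracle problem), we check
# the metamorphic relation sin(x) == sin(pi - x) within a tolerance.
def check_sine_relation(x, tol=1e-9):
    return abs(math.sin(x) - math.sin(math.pi - x)) <= tol

# The relation can be checked over many automatically generated inputs
print(all(check_sine_relation(x / 10) for x in range(100)))
```

Because the relation holds for arbitrary inputs, it pairs naturally with random or combinatorial test data generation [1].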

Topic 3:
Model-based approaches have the potential to facilitate the automatic generation of test suites [1]. Moreover, the mining of execution logs created during software use in the field and during system testing is making advances [2, 3]. Combining these two approaches makes it possible to analyse the overlap of the corresponding usage profiles. High overlap indicates that the test suite used during development adequately anticipates the actual usage in the field; low overlap indicates potential for improvement. ML approaches will be applied to identify under-specified areas in the models that were used to generate the test suites. This information helps close the gaps in the test suite so that the usage profiles during testing become similar to those observed in the field.

  1. Mark Utting, Alexander Pretschner, and Bruno Legeard (2012) A taxonomy of model-based testing approaches. Softw. Test. Verif. Reliab. 22, 5 (August 2012), 297-312.
  2. Zhen Ming Jiang, Alberto Avritzer, Emad Shihab, Ahmed E. Hassan, and Parminder Flora (2010) An Industrial Case Study on Speeding Up User Acceptance Testing by Mining Execution Logs. In Proceedings of the 2010 Fourth International Conference on Secure Software Integration and Reliability Improvement (SSIRI '10). IEEE Computer Society, Washington, DC, USA, 131-140.
  3. Aichernig B.K., Mostowski W., Mousavi M.R., Tappler M., Taromirad M. (2018) Model Learning and Model-Based Testing. In: Bennaceur A., Hähnle R., Meinke K. (eds) Machine Learning for Dynamic Software Analysis: Potentials and Limits. Lecture Notes in Computer Science, vol 11026. Springer, Cham
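The overlap analysis described in Topic 3 can be sketched as follows, assuming hypothetical event logs: usage profiles are modelled as event frequency distributions mined from test and field logs, and their overlap is measured by histogram intersection (one of several possible similarity measures).

```python
from collections import Counter

# Hypothetical sketch: compare the usage profile exercised by the test
# suite with the profile mined from field execution logs.
def profile(log_events):
    # Normalise event counts into a frequency distribution
    counts = Counter(log_events)
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

def overlap(p, q):
    # Histogram intersection: 1.0 = identical profiles, 0.0 = disjoint
    return sum(min(p.get(e, 0.0), q.get(e, 0.0)) for e in set(p) | set(q))

# Illustrative event logs (real logs would be mined from the system)
test_log  = ["login", "search", "search", "logout"]
field_log = ["login", "search", "export", "logout"]
print(round(overlap(profile(test_log), profile(field_log)), 2))
```

Events present in the field log but absent from the test log (here, "export") point to under-specified areas of the model from which the test suite was generated.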

Requirements (what you bring)

  • Strong background in AI (Machine-Learning)
  • Strong interest in software testing/QA
  • Strong interest in learning how to do proper research
  • Good programming skills
  • Good communication skills
  • Ability to work autonomously
  • Ability to adapt to a new work environment
  • Curiosity and willingness to walk the extra mile

Competences to be acquired (what you get)

  • Industry-relevant knowledge and skills in software testing and QA
  • Hands-on experience of applying AI techniques to software engineering data

Other benefits

  • The work conducted during the traineeship should lead to an MSc thesis topic. The MSc thesis would be finalized during January-April 2020 (thesis submission in May 2020).
  • Excellent students with deep research interests will be given the opportunity to continue their research in the context of a PhD project.


Contacts

  • University of Tartu: Dietmar Pfahl (dietmar [dot] pfahl [at] ut [dot] ee)
  • SCCH: Rudolf Ramler (rudolf [dot] ramler [at] scch [dot] at)