Testing a Scout Application with JUnit and Jubula

Session Type: Standard [35 minutes]

Writing tests is one of the best practices of software development. This talk presents three ways to test a Scout application: with plain JUnit, with JUnit within an OSGi context in combination with service mocks, and with black-box UI testing in Jubula.

JUnit is probably one of the best-known tools in the Java community. For plug-in-based applications, it is good practice to keep the test code in a separate test plug-in or in test fragments (fragments share the host plug-in's class loader, which also gives access to internal packages). This variant is useful for testing small units of code without any other dependencies.
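As a sketch of this first variant, the following shows a small dependency-free helper class, as it might live in a Scout client plug-in, together with a plain JUnit 4 test. The class and method names (PhoneNumberUtility, normalize) are illustrative only, not part of the Scout framework.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PhoneNumberUtilityTest {

  /** Unit under test: a dependency-free helper that normalizes phone numbers. */
  static class PhoneNumberUtility {
    static String normalize(String raw) {
      if (raw == null) {
        return "";
      }
      // keep digits and '+', drop spaces, dashes, parentheses
      return raw.replaceAll("[^0-9+]", "");
    }
  }

  @Test
  public void normalizeStripsFormatting() {
    assertEquals("+41791234567", PhoneNumberUtility.normalize("+41 (79) 123-45-67"));
  }

  @Test
  public void normalizeHandlesNull() {
    assertEquals("", PhoneNumberUtility.normalize(null));
  }
}
```

Because the code under test has no framework dependencies, this test runs as an ordinary JUnit test without any OSGi runtime.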

A framework like Eclipse Scout relies on OSGi for its dependency lookup mechanism. Without an OSGi context, a call to SERVICES.getService(..) in Eclipse Scout code returns null, producing ugly null pointer exceptions. The Eclipse Scout framework therefore provides a small extension to JUnit: a specific runner that ensures the OSGi context is loaded before the tests are executed. This may slow down test execution, but it also opens up new possibilities. For example, it becomes possible to replace a real service (one that calls the server) with a mocked one. The test is then decoupled from the real service implementation, and the code can be tested against different responses from the service.
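The mechanism can be illustrated with a simplified, framework-free sketch. In a real Scout application the lookup would go through SERVICES.getService(..) backed by the OSGi service registry; the hand-rolled registry, the IPersonService interface, and buildWindowTitle below are assumptions made for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of swapping a real service for a mock. In Scout, the
// lookup goes through SERVICES.getService(..) and the OSGi service registry;
// the tiny map below only illustrates the idea.
public class ServiceMockSketch {

  /** Illustrative service interface, as a Scout client might declare one. */
  interface IPersonService {
    String loadDisplayName(long personId);
  }

  /** Stand-in for the OSGi-backed service registry. */
  static final Map<Class<?>, Object> REGISTRY = new HashMap<>();

  @SuppressWarnings("unchecked")
  static <T> T getService(Class<T> type) {
    return (T) REGISTRY.get(type); // null if nothing is registered
  }

  /** Client code under test: depends only on the service interface. */
  static String buildWindowTitle(long personId) {
    IPersonService svc = getService(IPersonService.class);
    return "Person - " + svc.loadDisplayName(personId);
  }

  public static void main(String[] args) {
    // Register a mock instead of the real (server-calling) implementation.
    REGISTRY.put(IPersonService.class, (IPersonService) id -> "Alice Example");
    System.out.println(buildWindowTitle(42L)); // Person - Alice Example

    // A different canned response exercises a different code path.
    REGISTRY.put(IPersonService.class, (IPersonService) id -> "<unknown>");
    System.out.println(buildWindowTitle(42L)); // Person - <unknown>
  }
}
```

The test controls exactly what the service returns, so the same client code can be exercised against many server responses without a running server.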

The third testing variant we want to present is black-box UI testing with Jubula. The aim of tests at this level is to ensure that the workflows and use cases the user requires can be performed correctly. One advantage of using Jubula is the ability to write the automated tests before the application is ready, drawing on prototypes, specifications, mockups, and team discussions for information. As well as clearing up misunderstandings early in the development process, this means the tests can run against the application as soon as a version is available and therefore provide timely feedback.

In the presentation, we show how to test a simple Scout application through examples. A single-form application is introduced at the beginning of the talk. The talk combines theory (an introduction to the testing concepts) with demonstrations based on real code.

Some feedback was requested on this talk submission. The main weakness of the talk was its very narrow scope: it is directed at people building Scout applications who are interested in testing them with Jubula. Only a small part of the audience will already know enough about Scout to be ready for a talk specifically on writing automated tests for it. In addition, Jubula and Scout talks had already been accepted, and Alexandra has an accepted talk that looks at testing in a broader context. Some Eclipse projects had no talks accepted at all, so overall Jubula and Scout did well in getting represented in the program. We encourage you to submit talks to future conferences, but suggest choosing a topic broad enough that more of the EclipseCon audience will get value from it.

Copyright © 2013 The Eclipse Foundation. All Rights Reserved.