Testing

Fuzzy Testing

Session Type: Standard [35 minutes]

Many of the test cases for an application involve data defined by the data model. To achieve good test coverage, it is customary to run the test cases against a variety of input data sets. These can be specified manually, but doing so is complex, and it is easy to miss important input data and thereby miss bugs.
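
To make the idea concrete, here is a minimal sketch (not from the session) of generating input data randomly instead of writing it by hand. The method under test, parsePositiveInt, and the input shape are hypothetical; the seed is printed so any failing input can be reproduced.

    import java.util.Random;

    public class FuzzSketch {

        // Hypothetical method under test.
        static int parsePositiveInt(String s) {
            int value = Integer.parseInt(s.trim());
            if (value < 0) {
                throw new IllegalArgumentException("negative: " + value);
            }
            return value;
        }

        public static void main(String[] args) {
            long seed = System.currentTimeMillis();
            Random random = new Random(seed);
            for (int i = 0; i < 1000; i++) {
                // Generate inputs a human might not think to write down.
                String input = (random.nextBoolean() ? " " : "")
                        + random.nextInt(Integer.MAX_VALUE);
                try {
                    parsePositiveInt(input);
                } catch (RuntimeException e) {
                    throw new AssertionError("input \"" + input
                            + "\" failed (seed " + seed + ")", e);
                }
            }
            System.out.println("1000 generated inputs passed (seed " + seed + ")");
        }
    }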

Schedule info

Time slot: 28 March 13:30 - 14:05
Room: Back Bay
Status: Accepted

Audience

Track: ALM Connect
Experience level: Intermediate

Continuous testing with Jubula – where the rubber meets the road!

Session Type: Standard [35 minutes]

You've got software. You've got a list of features to implement. You've got some automated tests. You've got upcoming releases. The only glue that is going to make this scenario work is Continuous Integration. When you're adding and changing functionality, knowing what your changes are doing to your quality on a daily basis can be the difference between a successful release and a horrifically painful one.
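
As a minimal sketch of that glue (assuming a JUnit-based suite; the class names here are hypothetical placeholders, and a Jubula project would instead have its tests executed by Jubula's command-line client), a CI server can run one aggregating suite on every build:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // The CI job runs this one class; a red build is the daily quality signal.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ NightlyTests.SmokeTest.class })
    public class NightlyTests {

        // Placeholder for a real automated test.
        public static class SmokeTest {
            @Test
            public void applicationStarts() {
                assertTrue(true);
            }
        }
    }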

Schedule info

Time slot: 26 March 11:15 - 11:50
Status: Accepted

Audience

Track: ALM Connect
Experience level: Intermediate

One test to @Rule them all

Session Type: Standard [35 minutes]

One abstraction layer, forty implementations, and one test? Have you ever built a pluggable API that allows others to extend your product? We did this with the Mylyn Tasks framework: while the framework provides a common UI for accessing tasks, anyone can plug in specific connectors to access their change management system of choice. Connector authors are responsible for fulfilling the expectations of the Mylyn API, which isn't always documented in every detail, and for testing their implementations against a number of edge cases. We'll show you how we simplified the testing of connectors and significantly improved test coverage by introducing an integration test infrastructure at the API level that exercises all implementations with an elaborate test suite for common behavior and also covers exceptional circumstances.
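
The pattern described above can be sketched roughly like this (the type and method names, such as TaskConnector and createConnector, are hypothetical illustrations, not the actual Mylyn API): an abstract test class defines the common expectations once, and each connector author only supplies a factory.

    import static org.junit.Assert.assertNotNull;
    import static org.junit.Assert.fail;

    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.Timeout;

    // Hypothetical stand-in for the API that all connectors implement.
    interface TaskConnector {
        String getConnectorKind();
        Object getTask(String taskId);
    }

    public abstract class ConnectorContractTest {

        // Guards against connectors that hang, e.g. on network access.
        @Rule
        public final Timeout timeout = new Timeout(30000); // 30 seconds

        private TaskConnector connector;

        // Each connector implementation supplies its own factory.
        protected abstract TaskConnector createConnector();

        @Before
        public void setUp() {
            connector = createConnector();
        }

        // Common behavior every implementation must fulfill.
        @Test
        public void connectorHasKind() {
            assertNotNull(connector.getConnectorKind());
        }

        // Exceptional circumstances are covered once, for all connectors.
        @Test
        public void unknownTaskIdIsRejected() {
            try {
                connector.getTask("no-such-task");
                fail("expected an exception for an unknown task id");
            } catch (IllegalArgumentException expected) {
                // every connector must signal unknown ids the same way
            }
        }
    }

A connector's own test class then merely extends ConnectorContractTest and implements createConnector(), so all forty implementations run the same suite.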

Schedule info

Status: Declined

Audience

Track: Cool Stuff (Other)
Experience level: Intermediate

UI testing with Jubula - wacky widgets 2.0

Session Type: Standard [35 minutes]

Standard widgets and usage concepts are great. They are known by users, respond in expected ways, and are generally testable out-of-the-box with UI automation tools like Jubula.

Apparently, though, standard widgets are boring, and that table-in-a-combo-box-with-a-tree-in-it is the new black. Joking aside, the temptation (or necessity) to stray from the standard path comes to all of us at one time or another; good examples can be found in the Nebula project, for instance. You may well ask yourself what that means for UI testing ...
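
One common mitigation, sketched below (this is not from the session, and the "test.id" key is a hypothetical convention rather than a Jubula API), is to compose the wacky widget out of standard widgets where possible and give each part a stable identifier that a UI automation tool can look up:

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Table;

    public class WackyWidget extends Composite {

        public WackyWidget(Composite parent) {
            super(parent, SWT.NONE);
            // Stable identifier for the container itself.
            setData("test.id", "wackyWidget");

            // Composing standard widgets keeps out-of-the-box support usable.
            Table table = new Table(this, SWT.BORDER);
            table.setData("test.id", "wackyWidget.table");
        }
    }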

Schedule info

Status: Declined

Audience

Track: Cool Stuff (Other)
Experience level: Beginner

Is the integration testing challenge compromising your definition of "done, done, done"?

Session Type: Standard [35 minutes]

As an agile developer, you've completed everything defined as "done, done, done." You move on to your next sprint, and while you're deep into this new code, you start receiving defect reports about the code you thought was "done". In reality, for complex systems and/or distributed teams, whole-system testing can become a bottleneck. So we end up limiting "done" to what can easily be tested within a sprint and pushing the complex testing down the line. This bottleneck makes it hard to assure the quality necessary for monthly, quarterly, or continuous delivery.

Schedule info

Status: Declined

Audience

Track: ALM Connect
Experience level: Beginner
