For most enterprise projects, testing is not much fun. It is boring, cumbersome, and takes time and effort, especially for distributed applications or when changes to existing functionality force test scenarios to adapt. Still, software tests are crucial; so how can we tackle them in an effective and productive way?
Database communication has a distinctive role in the codebase. We have to deal with libraries dedicated to specific databases, and we often use query languages such as SQL. It is tempting to avoid touching such code, and when it comes to testing, we face many challenges. To mention only a few: Should we mock the database or use an in-memory one? Do we write an integration test or a unit test? And how do we even know which code is responsible for database communication?
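One low-friction answer to the mock-versus-in-memory question is a hand-rolled fake behind a repository interface. The sketch below is purely illustrative (`UserRepository` and `InMemoryUserRepository` are hypothetical names, not from any specific framework): it lets unit tests exercise business logic without a real database or a mocking library.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The production code depends only on this interface, so the real
// JDBC/JPA implementation can be swapped for a fake in unit tests.
interface UserRepository {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// A hand-rolled in-memory fake: no database, no mocking framework,
// just a map that mimics the repository contract.
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> store = new HashMap<>();

    public void save(String id, String name) {
        store.put(id, name);
    }

    public Optional<String> findName(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

Code that only exercises business rules can then be covered by fast unit tests against the fake, while a smaller set of integration tests verifies the real SQL against an actual database.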
As software testers, we accept that each new role will require us to learn new technologies and skills. We also know that we often feel the need (or are told of the need) to provide value to the project quickly. Both of these competing expectations are normal to a certain degree. When I joined a new project to test an API against a European Union standard for payment services, I had to do both to a degree I had never experienced before.
Subtitle: What every developer should know about testing and testers
Testers and developers sometimes seem to be creatures from different planets. Recognize these? Developers can't test. Testers can't code. Testers make the life of a developer miserable. Developers never test their own stuff. And testers can't keep up because testing is too slow.
Java agents are a little-known but extremely powerful part of the Java ecosystem. Agents are able to transform existing classes at runtime, allowing scenarios such as logging and monitoring, hot reload or gathering code coverage. However, their usage presents a number of pitfalls as well.
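As a rough illustration of how an agent hooks into class loading, here is a minimal sketch (`LoggingAgent` is a hypothetical name): a `premain` method registers a `ClassFileTransformer` that sees every class before the JVM defines it. Returning `null` from `transform` keeps the original bytecode unchanged.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal Java agent: logs the name of every class as it is loaded.
public class LoggingAgent {

    // Invoked by the JVM before main() when started with
    // -javaagent:logging-agent.jar
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new LoggingTransformer());
    }

    static class LoggingTransformer implements ClassFileTransformer {
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            System.out.println("Loading: " + className);
            // null tells the JVM to keep the original bytecode;
            // a real agent would return modified class bytes here.
            return null;
        }
    }
}
```

Packaged into a jar whose manifest names `LoggingAgent` as the `Premain-Class`, this would be attached at JVM startup; bytecode-rewriting agents for coverage or monitoring follow the same pattern but return transformed bytes.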
Cognitive testing is a modern testing technique for today's fast-changing and dynamic technology world. The traditional testing arena is shifting towards quality engineering, and testing is intended to become more iterative, progressive, intelligent, and contextual, and to integrate seamlessly with development.
Testing is still a topic that most developers would like to avoid. Even though it is crucial for working software, developing and maintaining tests takes considerable time and effort, especially for distributed applications or when changes to existing functionality force test scenarios to adapt. Omitting software tests can't be the solution; so how can we tackle enterprise tests in an effective and productive way?
When I started at my current employer five years ago, the CI system consisted of an outdated Jenkins installation on a PC located under a developer's desk. Builds were triggered three times a day, so a developer had to wait multiple hours after a commit until feedback arrived. The builds couldn't be reproduced locally, so debugging was at times done via console logging on the CI system.
A well-designed and tested IoT product should help customers save money based on the information the system produces. But what can that look like for a real customer, and how can testing be done in such a project?
In this talk, I will present a case study for a customer who produces metal-pressing machines for the can industry. The customer's aim is to reduce unnecessary support visits to production sites: by measuring various parameters of a machine, they can determine whether a support case may have been caused, for example, by a change in the thickness of the metal.
How can we more easily run performance benchmarks against Java SDKs and analyze and compare results? What information is coming out of some common open-source benchmarks and why might it be interesting? How can you incorporate performance tests into your continuous delivery pipeline? This talk addresses all of these questions and more as it surveys the performance testing story at AdoptOpenJDK and Eclipse OpenJ9.
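Before reaching for a full harness, the basic shape of a micro-benchmark can be sketched in plain Java. `NaiveBenchmark` below is a hypothetical, deliberately simplistic example; in practice, projects such as AdoptOpenJDK rely on harnesses like JMH to handle warmup, JIT compilation, and dead-code elimination correctly.

```java
// Deliberately naive micro-benchmark sketch. It shows the warmup-then-
// measure shape, but a real harness (e.g. JMH) is needed for results
// that survive JIT and dead-code-elimination effects.
public class NaiveBenchmark {

    // The workload under measurement.
    static long sumTo(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warmup iterations give the JIT a chance to compile the method.
        for (int i = 0; i < 5; i++) {
            sumTo(1_000_000);
        }
        long start = System.nanoTime();
        long result = sumTo(1_000_000);
        long elapsedNanos = System.nanoTime() - start;
        // Printing the result prevents the JIT from eliding the loop.
        System.out.println("sum=" + result + " took " + elapsedNanos + " ns");
    }
}
```

Wiring such measurements into a delivery pipeline then becomes a matter of running them on consistent hardware and comparing results across builds, which is exactly the kind of tooling this talk surveys.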