Writing Great Test Cases and Becoming a Great Software Tester

I completely agree with Kyle McMeekin when he states in a blog post titled “5 Manual Test Case Writing Hacks,” from April 11th, 2016, that it should come as no surprise that great software testers should have an eye for detail. What may not be as obvious, however, is that great software testers should also be able to write great and effective test cases. McMeekin goes on to observe that writing effective test cases requires both talent and experience. In an attempt to begin my journey to become a great software tester, I decided that I should pay close attention to the advice offered by experienced testers as they reflect on the skills they have gained from their time in the industry. Hopefully, by following the tips of more experienced testers, I too will someday be able to contribute highly valuable test cases that improve productivity and help to create high-quality software.

The first step to writing great test cases is knowing what components make up the test case. While many of the components were obvious to me, there were others that I had not thought of. The test steps, for example, are important because the person performing the test may not be the same person who wrote the test. Knowing how the test should be performed is important to obtaining a valuable result from the test.
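
To make these components concrete for myself, I sketched out what a single test case record might look like. This is only a minimal sketch; the field names and the example login scenario are my own choices, not a template from McMeekin’s post:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A minimal manual test case record; the fields are my own assumptions."""
    case_id: str          # unique identifier for tracking and organization
    title: str            # short summary of what is being verified
    preconditions: str    # state the system must be in before starting
    steps: list[str] = field(default_factory=list)  # actions for whoever runs the test
    expected_result: str = ""   # what a pass looks like
    actual_result: str = ""     # recorded by the tester performing the test

# Example: the test steps spell out exactly how to perform the test,
# since the person running it may not be the person who wrote it.
login_test = TestCase(
    case_id="TC-001",
    title="Valid login redirects to dashboard",
    preconditions="A registered user account exists",
    steps=[
        "Navigate to the login page",
        "Enter a valid username and password",
        "Click the 'Log in' button",
    ],
    expected_result="User lands on the dashboard page",
)
```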

What I found most valuable about McMeekin’s post, however, were his tips on how to “write better test cases that will lead to better quality software for your company.” His first piece of advice is to keep test cases simple. They should be written in simple language and follow the company’s template. Although not specifically mentioned in this guide, I remember reading that if a test case seems to be getting too complex, you should consider breaking it up into smaller pieces. Second, McMeekin recommends making test cases reusable. Considering whether your test cases could be adapted to other scenarios or reused in another application should help you develop test cases that are reusable. Third, McMeekin suggests placing yourself in the shoes of the tester or developer rather than the test case writer, and being your own critic. Considering which parts of your test case may be ambiguous or frustrating for others using it will often help to create better tests. This goes hand in hand with the fourth recommendation, which is to think about the end user. Understanding the expectations and desires of the end user will certainly help to create test cases that lead to better, more successful software. The last recommendation that McMeekin gives is to stay organized. This suggestion could apply just about anywhere, but with hundreds or possibly even thousands of potential test cases, staying organized is certainly essential to being a great tester.
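
The reusability tip, in particular, clicked for me once I connected it to parameterized tests. As a rough sketch (the validation function here is hypothetical, invented just for illustration), a single parameterized test can be reused across many scenarios simply by adding rows:

```python
import pytest

# Hypothetical function under test; the name and its rules are my own invention.
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20 and name.isalnum()

# One parameterized test replaces many near-identical test cases and is easy
# to reuse: covering a new scenario means adding a row, not writing a new test.
@pytest.mark.parametrize("name, expected", [
    ("bob", True),         # shortest allowed name
    ("ab", False),         # too short
    ("a" * 20, True),      # longest allowed name
    ("a" * 21, False),     # too long
    ("bad name!", False),  # disallowed characters
])
def test_is_valid_username(name, expected):
    assert is_valid_username(name) == expected
```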

Although I am sure there is a great deal more to consider in my quest to one day become a great software tester, I think that keeping these things in mind will certainly improve the quality of the test cases that I write. In the rapidly advancing field of computer science, I don’t feel that I will ever stop learning new and improved ways of doing things or further developing my skills.

How to Effectively Discuss Software Testing

It is not easy to come up with a definition that truly encompasses all that the discipline of software testing entails. From writing test cases to performing automated testing, what a particular software tester does can vary greatly. It is not surprising, then, that there are often misconceptions surrounding discussions of software testing. Reading a post titled “Six Things That Go Wrong With Discussions About Testing” by James Bach on his blog gave me insight into some of the things that I should avoid when talking about testing.

The first point that Bach brings up is that the number of test cases that you have written does not matter. To anyone who has ever written a test case, this is a pretty obvious one. You could write a million test cases that all do the same thing, but if you’re not exercising the code in unique and possibly unexpected ways, then you are wasting your time. Any tester who brags about the number of test cases that he or she has written is clearly missing some critical understanding of the purpose of testing software.

Secondly, Bach states that each test should be thought of as an event rather than as an object. Whether it is done automatically or manually, tests must be performed; they do not simply exist. This is why software testers will never be replaced by computers or algorithms. Each time a tester performs a test, it produces a unique and meaningful result. This ties in with Bach’s fourth point, which is that even in the case of automated testing, humans are responsible for the successes or failures of the tests being run. Computers are unable to think critically about problems the way humans can; they can only go so far as to check simple facts, not effectively test software.

Backtracking to Bach’s third point, testers should always be able to describe their testing strategy, even as it evolves over the course of testing. Understanding how and why a given test is being performed allows testers to think about things such as how to improve the test, what tests need to be developed next, and how to develop better tests more quickly in the future.

The fifth topic that Bach discusses is how people “talk as if there is only one kind of test coverage”. As a student currently learning about the types of testing coverage, this one came as a bit of a surprise to me. I imagine that testers who fall victim to this fallacy of testing never received formal education in software testing or have become so set in a single way of testing that they are failing to see the bigger picture.

Lastly, Bach discusses testing as an exploratory learning task rather than as some scripted, monotonous procedure. Again, I think that my current enrollment in a software testing course leaves me a bit biased on this point. Nearly every test that I write is a learning experience for me, and I certainly do not feel that testing is a static task.

While I can certainly see where Bach is coming from with all six of his “Things That Go Wrong With Discussions About Testing,” I am pleased to know that my understanding of software testing does not, for the most part, fall into any of his categories. I feel that many of these things apply to testers who may have been self-taught or had minimal theoretical training. I think that my education certainly helps me to see the big picture of why software testing is so essential, rather than just going through the motions of writing test cases for the sake of it.

Effective Data Migration Testing

When you consider the term “software quality assurance and testing,” what comes to mind? For me, it is developing test cases that exercise program functionality and aim to expose flaws in the implementation. In my mind, this type of testing comes mainly before a piece of software is released, and often occurs alongside development. After the product is released, the goals and focus of software quality assurance and testing change.

My views were challenged, however, when I recently came across an interesting new take on software testing. The post by Nandini and Gayathri titled “Data Migration Testing Tutorial: A Complete Guide” provides helpful advice and a process to follow when testing the migration of data. These experienced testers draw on their experiences to point out specific places in the migration of software where errors are likely to occur, and effective methods of exposing these flaws before they impact end-users and the reputation of the company.

The main point that Nandini and Gayathri stress is that there are three phases of testing in data migration. The first phase is pre-migration testing, which occurs, as the name suggests, before the migration itself. In this phase, the legacy state of the data is observed and recorded, providing a baseline to which the new system can then be compared. During this phase, differences between the legacy application and the new application are also noted, and methods of dealing with these differences in implementation are developed and implemented to ensure a smooth transfer of data.
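
A baseline like this can be as simple as recording per-table row counts before anything is moved. Here is a minimal sketch of that idea; the database file, table names, and output file are hypothetical stand-ins, not anything from the tutorial:

```python
import json
import sqlite3

# Hypothetical legacy database and tables, used only for illustration.
LEGACY_DB = "legacy_app.db"
TABLES = ["customers", "orders", "invoices"]

def capture_baseline(db_path: str, tables: list[str]) -> dict:
    """Record row counts per table so the migrated system can be compared later."""
    conn = sqlite3.connect(db_path)
    try:
        return {
            table: conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
            for table in tables
        }
    finally:
        conn.close()

if __name__ == "__main__":
    baseline = capture_baseline(LEGACY_DB, TABLES)
    with open("baseline_counts.json", "w") as f:
        json.dump(baseline, f, indent=2)
```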

The second phase is the migration testing phase, where a migration guide is followed to ensure that all of the necessary tasks are performed in order to accurately move the data from the legacy application to the new application. The first step of this phase is to create a backup of the data, which can serve as a rollback point in case of disaster. Also during this phase, metrics including downtime, migration time, time to complete n transfers, and other relevant measurements are recorded so that the success of the migration can be evaluated later.

The final phase of data migration testing occurs post-migration. During this phase, many of the tests can be automated. These tests compare the data from the legacy application to the data in the new application and alert testers to any abnormalities or inconsistencies. The tutorial lists 24 categories of post-migration tests that should be completed satisfactorily in order to say that the migration was successful.
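
Continuing the row-count sketch from above, a post-migration check could automatically compare the new system against the saved baseline. Again, the database and file names are my own hypothetical choices:

```python
import json
import sqlite3

# Hypothetical migrated database; reads the baseline file written pre-migration.
NEW_DB = "new_app.db"

def current_counts(db_path: str, tables) -> dict:
    conn = sqlite3.connect(db_path)
    try:
        return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        conn.close()

def test_no_rows_lost_in_migration():
    with open("baseline_counts.json") as f:
        baseline = json.load(f)
    migrated = current_counts(NEW_DB, baseline.keys())
    # Any table whose count changed between systems is flagged for investigation.
    mismatches = {t: (baseline[t], migrated[t])
                  for t in baseline if baseline[t] != migrated[t]}
    assert not mismatches, f"Row counts differ (legacy, new): {mismatches}"
```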

Reading this tutorial on data migration testing has certainly changed my views on what testing means. The actual definition seems much broader than what I had thought in the past. Seeing testing from the perspective of migrating applications gave me insight into the capabilities of and responsibilities placed on software testers. If something in the migration does not go according to plan, it may be easy to place blame on the testers for not considering that case. I enjoyed reading about software testing from this new perspective and learning some of the most important things to consider when performing data migration testing.

The Dangers of Relying on Automated Testing

After listening to Jean Ann Harrison’s discussion on an episode of Test Talks about how important critical thinking is in the context of software testing and quality assurance, I wrote a post about The Limits of Automated Testing. Although Harrison’s explanation was great, I had a few remaining questions, and this week I chose to look for more information on automated testing. I came across a post by Martin Jansson from March 2017 titled “Implication of emphasis on automation in CI,” and it seemed to provide the more comprehensive view of testing automation that I was looking for.

Jansson starts out on a positive note, stating that he “less frequently see[s] the argumentation that testing is not needed.” To me, it is almost comical to think about someone arguing that testing is unnecessary. While I completely understand that managers and executives are enticed by the possibility of saving time and money by not testing software, this is an extremely risky and careless way of creating a product. I doubt that anyone releasing untested software lasts very long or makes any money in the industry.

So if not testing at all is not an option, what are the options? The bare minimum would be running only automated tests, an approach that Jansson says is actually used. I have to agree with Jansson, however, when he says that this is not testing; rather, it is simply checking. Instead of exploring the parts of the code that are likely to contain bugs, you are merely verifying acceptance criteria. By not exploring the code fully, you are failing to find anything that might lie outside the scope of the specification or the requirements. I feel that the graphic from Jansson’s post, referenced below, provides an excellent representation of how few tests are actually performed when following a testing strategy that relies solely on automation.

[Graphic from Jansson’s post, illustrating how small a slice of potential testing is covered by automation alone. Source: http://thetesteye.com/blog/2017/03/implication-of-emphasis-on-automation-in-ci/]
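
To make the checking-versus-testing distinction concrete for myself, I sketched a tiny example. The function and its one-line spec are hypothetical; the point is that the automated checks confirm the acceptance criteria and nothing more:

```python
# Hypothetical spec: "orders of $100 or more get a 10% discount."
def discounted_total(total: float) -> float:
    return total * 0.9 if total >= 100 else total

# Checking: these assertions verify exactly the stated acceptance criteria.
def test_discount_per_spec():
    assert discounted_total(100) == 90.0
    assert discounted_total(50) == 50.0

# Checking stops here. Testing would also explore questions the spec never
# raises: what should happen for a negative total, a None, or an enormous
# value? None of those paths is exercised by the checks above.
```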

What constitutes the perfect blend of automated and manual testing may be impossible to know. What is certain, however, is that automated testing cannot be relied upon as the sole method of testing. Jansson puts it in layman’s terms when he says that “you rarely automate serendipity.” Just as Jean Ann Harrison points out in the Test Talks podcast mentioned earlier, automation is not and will never be a replacement for thought. It is a bit of a relief to know that software development companies are maturing and beginning to understand the importance of having testers who use a combination of automated and manual testing. As long as there continue to be humans writing code, there will need to be humans who test that code.

The Limits of Automated Testing

Automated testing is great, and it isn’t going anywhere. The ability to find bugs in a program with minimal human intervention saves both time and money, and helps to make the program more reliable. The problem with automated testing is that the automation cannot think like a human. Automated tests simply follow the algorithm they were programmed to follow. This may be great for finding simple bugs or obvious faults, but it may be insufficient for revealing complex or hidden bugs.

On the September 10, 2017 episode of Test Talks, Jean Ann Harrison argues that what we need in order to find these complex, hidden bugs is critical thinking. As an experienced tester who has worked in the industry for nearly 20 years as everything from a mobile tester to a quality assurance auditor, Harrison knows a fair deal about finding bugs in software. As a medical device tester, she says she often considered not what the product was designed to do, but what it was capable of doing. When it is a matter of life or death, there is no room for crippling bugs to make it into the final product.

I think that Harrison carried many of her experiences as a medical device tester into her current position as a quality assurance auditor of airline entertainment software. In addition to the strict FAA requirements that she must adhere to, once again there are lives at risk if bugs go unnoticed and unaddressed. When considering possible scenarios to test the product under, Harrison repeatedly states that she uses critical thinking skills to think outside the box; she is always asking herself “what if…?” She states that asking these sorts of questions, along with imagining the possible scenarios in which the product would be used, will lead to the development of meaningful tests and possibly reveal bugs.

Strictly following the testing methods that I’ve learned as a Software QA & Testing student so far seems to keep me inside a bubble. I am only able to test what the method states should be tested. This is sometimes difficult because my mind has a tendency to think of all of the possibilities, much like what Harrison is advocating that testers do. I want to stray from the strictly defined values that the method demands I input and use my experience as an end user and as a programmer to attempt to break the program. Of course, in the context of testing, breaking a program is a success. It means that you have found a bug and that the finished product will be that much better. I look forward to applying Harrison’s critical thinking strategy to my testing in the future. I am excited to investigate “what if…?” and hopefully make programs better by discovering bugs that would have otherwise gone unchecked.

Making Testing More Effective Through Increased Testability

While there are few people who would argue that testing is easy, it also should not be prohibitively difficult. The difficult part of testing software should be deciding what to test, not how to test it. In a post from late September 2017, Michael Bolton describes how important testability is in the creation of a stable product with less risk of bugs at the time of delivery. If releasing a product untested or with insufficient depth of testing sounds risky to you, you are in good company. Making testing easier, known as increasing testability, allows for more thorough testing and (hopefully) a more polished, bug-free finished product.

Bolton describes testability in terms of visibility and controllability. The examples that he gives for visibility are log files and continuous monitoring. For controllability, Bolton cites application programming interfaces (APIs) as the most common means of easily manipulating the product. An important takeaway from the post is that while it is certainly helpful to the tester if a product has things like log files and an API, this is not all that testability encompasses. Bolton presents the idea of testability as a set of relationships between multiple elements in the design process, including the product, the tester, the development team, and the development environment. The overall testability of a product is a result of the complex interactions between all of these people and things.
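
To see how visibility and controllability show up in code, I put together a small sketch. The checkout function and its names are hypothetical, invented only to illustrate the two ideas:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

# Hypothetical checkout component illustrating both levers:
# visibility (logging internal state) and controllability (a callable seam).
def apply_coupon(subtotal: float, coupon_rate: float) -> float:
    """An API entry point that a test can drive directly with chosen inputs."""
    total = subtotal * (1 - coupon_rate)
    # Visibility: this log line lets a tester observe behavior that would
    # otherwise be hidden behind a user interface.
    log.info("apply_coupon subtotal=%.2f rate=%.2f total=%.2f",
             subtotal, coupon_rate, total)
    return total

# Controllability: the same seam makes an automated check trivial to write,
# with no UI clicking required.
def test_apply_coupon():
    assert apply_coupon(80.0, 0.25) == 60.0
```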

The first category that Bolton mentions is epistemic testability. It is impossible for a tester to know all of the bugs in the code before performing any testing; if that were possible, there would be no need for software testers at all. The act of testing explores what Bolton calls a “risk gap,” or the areas in the project that the tester is uncertain about or unfamiliar with. Next, Bolton considers value-related testability, which refers to knowledge of what the intended user of a program is looking to gain. Understanding what is valuable to others allows a tester to focus his or her efforts where they will have the most significant impact. Intrinsic testability refers to the product’s ability to be easily understood by the tester. If a program’s behavior is easy to follow and its state is transparent to the tester, he or she will have a far easier time properly testing it. Since most projects are assigned to teams of people in many different positions, with different tools and knowledge, access to these people and resources is essential for project-related testability. Finally, subjective testability refers to how well the skills of the tester or testing team match the requirements of the project.

Bolton’s more literary definitions of testing were a welcome change from the testing material that I’ve read online and in textbooks. Bolton seems to focus more on the people and the environment in which the testing is conducted rather than on what specific tests are used. I think that, as a student, many of the points that he makes are important to carry with me into any future professional position. Evaluating the testability of products through Bolton’s methods will allow me to better manage risk and deliver products with fewer bugs.

Which Came First… The Test or the Code?

In episode 31 of his podcast “Test and Code,” Brian Okken has a discussion with guest Paul Merrill about the testing pyramid and why Brian is frustrated with certain test-driven development (TDD) models. The discussion began as a Twitter disagreement between the two test enthusiasts and blossomed into a full-fledged “civil discussion.” While both Brian and Paul agree about the importance of testing and see the value in test-driven development, they disagree about how extensive testing needs to be in order to be effective. The two also disagree about how tests should be written and to what extent code should be based on tests versus tests written based on code. Although they never seem to reach a consensus on the issue, each of them makes some excellent points and gives examples from personal experience to support his opinions.

Brian, for example, is fed up with the sheer number of redundant unit tests that are written to test the same thing in traditional pyramid testing. Brian presents an example of a hypothetical method that has a test written for handling negative numbers, even though the higher-level method that passes values to the first method will never pass a negative value. Brian sees testing the first method’s ability to handle negative numbers as unnecessary and a waste of time. If these same types of tests are written for hundreds of methods, Brian argues, an extraordinary amount of time is wasted on useless testing. As a rebuttal, Paul argues that if changes were ever made to the higher-level method that allowed negatives to be passed down, the tests would already be written. Clearly not convinced, Brian scoffs at this justification for a test he sees no value in.
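
As I understood it, the scenario looks something like this sketch; the function names and behavior are my own stand-ins, not code from the episode:

```python
import pytest

def scale(value: int) -> int:
    """Low-level helper with special handling for negative numbers."""
    if value < 0:
        raise ValueError("negative input not supported")
    return value * 2

def scale_measurement(raw: int) -> int:
    """Higher-level caller: clamps at zero, so scale() never sees a negative."""
    return scale(max(raw, 0))

# Brian's complaint: this unit test exercises a path that the program can
# never reach through scale_measurement(), so the effort may be wasted...
def test_scale_rejects_negatives():
    with pytest.raises(ValueError):
        scale(-1)

# ...while Paul's rebuttal is that if scale_measurement() ever stops clamping,
# the guard test above is already in place to catch the regression.
```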

Paul seems to have some sort of personal experience in all of the obscure areas that Brian dismisses as rare or unusual scenarios. Paul seems to support a bottom-up test-driven development approach, where tests are written that outline every last detail of how a program will perform. Paul argues that this is the best way for tests to effectively drive development. He seems to think that tests cannot aid in the development process if they are written after development has taken place. Brian, on the other hand, sees the issue from a top-down perspective. He argues that higher, user-level tests should be written first. When the high-level tests are insufficient to further drive development or are too ambiguous to write code for, then lower-level unit tests should be written.

It was extremely interesting to listen to two experienced testers discuss such a controversial topic. While I can see where both men are coming from with their opinions, I seem to lean towards Brian’s side in the argument over test-driven development. I think that the points he makes about a pragmatic approach to testing are important. Rather than generating some ridiculous number of unit tests that may have no bearing on the actual functioning of the program, the effort should be put into doing what was originally promised by the program specification. I think that I will follow Brian’s advice and also take the pragmatic approach to writing tests, in the hopes of avoiding rewriting tests and code. In the end, it’s not about how many tests can be written; it’s about testing for the correct things in order to deliver a bug-free program.

On Exploratory Testing and ‘Playing About’

A thought occurred to me recently while following the procedures of the boundary value testing method: what would happen if I tried input values of an incorrect type rather than whatever values the particular testing method called for? While simple test procedures such as normal boundary value testing clearly leave much to be desired, more comprehensive testing methods such as robust equivalence class testing seem to cover most of the potential sources of input errors, right? I think not. It was often difficult to fight my intuition and my desire to choose test values that did not necessarily line up with the boundary value or equivalence class testing procedures. This desire to choose what to test and how to test it based simply on experience and intuition is what’s known in the software quality assurance field as exploratory testing, and it is an extremely powerful method of testing when used properly and appropriately.
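
A quick sketch of what I mean, using a hypothetical range-checking function: the boundary value procedure dictates the values around the edges of the range, but it never tells me to try an input of the wrong type:

```python
import pytest

# Hypothetical function under test: accepts integers in the range [1, 100].
def in_range(x: int) -> bool:
    return 1 <= x <= 100

# Boundary value analysis prescribes values at and around the boundaries.
@pytest.mark.parametrize("x, expected", [
    (0, False), (1, True), (2, True),       # around the lower boundary
    (50, True),                             # nominal value
    (99, True), (100, True), (101, False),  # around the upper boundary
])
def test_boundaries(x, expected):
    assert in_range(x) == expected

# What the procedure never asks for: wrong-type inputs. Calls like
# in_range("fifty") or in_range(None) raise a TypeError, and only an
# exploratory impulse, not the method, would lead me to try them.
```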

In a Let’s Talk About Tests post titled “Players gonna play play play play play,” from September 21, 2017, Gem Hills talks about the benefits of becoming familiar with and gaining a better understanding of a program simply by “playing about” with the program itself. Hills argues that sometimes simply using the program and applying knowledge from previous experience with similar programs or with the development team itself can provide a software tester with potentially revealing test cases that would be missed if the tester relied solely on a testing framework. In her post, Hills references Richard Bradshaw, perhaps better known as the “Friendly Tester.” Specifically, Hills talks about attending one of Bradshaw’s talks at the BBC in August, where he spoke about how “having a play” with a program can become far more impactful simply by recording the results and tests, effectively turning this play into exploratory testing.

One of the reasons that Hills cites for playing with a program with no real agenda is that “[she] is not confident that [she’s] tested everything but [she’s] not sure what [she’s] missing.” I found this point especially telling. This is exactly how I felt when I was writing test cases for the boundary value testing assignment: sure that I was missing something, but unsure as to exactly what it was. It would therefore seem to make sense that I should begin playing around in an attempt to discover what it is that I am missing. Perhaps I am simply lacking the appropriate testing method, and performing some exploratory testing will provide insight into an effective one. It is possible, however, that a bug exists in the program that cannot be found by applying any number of testing procedures. In these cases, exploratory testing, or “playing about,” is the only effective testing tool.