
STAMP: The 5 Elements of Writing Automated UI Tests

What are the common elements for writing and running automated UI-driven functional tests?

I am a software developer, not a QA tester, so my interest is in writing scripts to perform tests through a user interface; I am not interested in tools that record a user's UI interactions.

The question arose recently, while I sat through an introduction to the custom-built testing framework of a project I am working on. (First, a clarification: by "framework" in this article I mean the custom code that a development team needs to create so that the automated testing system can interact in a meaningful way with the production code. I do not mean the testing platform itself. I'll call a tool like JUnit or Selenium a tool or a platform; the set of tests, test doubles, and other artefacts of testing a specific system I will call the framework.)

My guide for the learning session had a tendency to do deep-dives into the low-level details. After several minutes, I pulled us into a higher-level discussion, so that I could map the home-brewed implementation to concepts I had already used on several previous projects.

I would suggest that there are at least 5 common elements that any functional testing framework will require of the devs who write tests and expand the framework. Since my Preaching class (see my bio!) taught me to work in memory hooks, I'll spell them out with the acronym "STAMP":

  1. S - Scripting: some way to define a sequence of actions
  2. T - Test Cases: the ability to define and differentiate any number of distinct test scenarios
  3. A - Asserting: an ability to compare an expected state or condition against the actual state or condition
  4. M - Mapping: an association between the screen elements and the test framework
  5. P - Persistence: a means of defining and injecting data with which the UI will interact, since UI state and functional behaviour will often depend on some notion of persistence



#1 Scripting
To create effective tests, we need some way to tell the computer what steps to take, where to go, which data or other resources to marshal, and which ones to stub or mock.

In past projects, I have used Selenium to define the sequence of steps that the test would execute. In that respect, it was very similar to the coding I was doing for unit tests.
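As a minimal sketch of that style of scripting (the URL and element IDs are invented for illustration), a Selenium WebDriver sequence in Java might look like this:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginScriptSketch {
        public static void main(String[] args) {
            // Placeholder URL and element IDs, not from a real system.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");                    // navigate
                driver.findElement(By.id("username")).sendKeys("testuser"); // enter data
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();           // trigger the action
            } finally {
                driver.quit(); // always release the browser
            }
        }
    }

Each line reads like a step in a script: go here, type this, click that.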

The home-built proprietary system I am learning also needs a way to define the sequence of steps a test must take. It uses CSV files (which we affectionately call the RECIPE CSVs). These are reusable across different tests, and define the sequence of screens that will be stepped through. If a test wants to skip a given screen, it has ways of indicating that.
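I can't reproduce the proprietary format here, but purely as an illustration (every column and value below is hypothetical), a recipe file of this kind might be shaped like:

    step,screen,action
    1,LoginScreen,ENTER
    2,AccountSummary,ENTER
    3,AccountDetail,SKIP
    4,ConfirmationScreen,ENTER

The point is the shape: one row per screen, in order, with a way to mark a screen as skipped.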

#2 Test Cases
The more heavyweight a test script, the harder it is to maintain, and the less focused it is on a single purpose.

The challenge, especially with UI tests, is that they are much slower than code-centred unit tests. To get to screen X, you may need to spin up the whole system (or at least the front end) and step through screens A through N.

With so much effort, it's a shame to test only one thing, so there is sometimes pressure or a desire to test many things along the way.

I recommend keeping each test's purpose very focused, and making other tests for other small subsets of behaviour.

To do that, you need the ability to easily create, distinguish, and run different tests for different test cases.

It seems obvious, but one whole category of file types in the system I was learning made little sense until I realized that they were essentially defining suites of individual tests. It was in these test lists that the tests were given a unique identifier, which would then be used in various other files to distinguish one test's data or configuration variations from another's.

In our case, these are also comma-separated files (LIST files) that the test platform knows how to decipher.

JUnit has its own definition of test case classes and test suites; TestNG has its own testng.xml configuration file.
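For comparison, here is what a JUnit 4 suite looks like (the individual test class names are invented stand-ins):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Groups several focused UI tests into one runnable suite.
    // LoginScreenTest and AccountSummaryTest are invented names standing in
    // for individual, narrowly focused test classes.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        LoginScreenTest.class,
        AccountSummaryTest.class
    })
    public class SmokeTestSuite {
        // No body needed; the annotations define the suite.
    }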

#3 Asserting
In my daughter's education, her teachers are as concerned with the journey as the destination - the process of her thinking is as much a concern in their pedagogy as the "right answer". They don't like grading, and they especially don't like to use the language of failure.

When it comes to automated test suites, however, the "right answer" matters. So we need a way to tell our test scripts what is expected, so that the test can catch and flag deviations, or failures.

We need the ability to assert what is our expected state, or output, or behaviour.

The most popular automated-testing platforms use variations of exactly that word: "assert" shows up in JUnit, TestNG, and Ruby's Minitest. Selenium WebDriver provides locator methods like findElement, whose results can then be checked with assert or verify commands.
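A minimal sketch combining the two (the element ID and expected text are invented):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class WelcomeMessageTest {
        @Test
        public void welcomeBannerShowsUserName() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/home");
                String actual = driver.findElement(By.id("welcomeBanner")).getText();
                // Any deviation from the expected state fails the test.
                assertEquals("Welcome, testuser!", actual);
            } finally {
                driver.quit();
            }
        }
    }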

It turns out that the home-brewed system I was learning was geared specifically toward testing the impacts that the steps of the test have on an underlying data model and view. All that was required was to add the verification keyword to the sequence of steps, along with an expression of the expected data, and the platform would cross-reference its state and data.

As I use it more, I appreciate its power and how targeted it is to the application domain. It is, however, much more complex to understand and use than the various flavours of Asserts.

#4 Mapping
As a developer, I am interested in writing code to define the tests (the Scripting discussed above). To do so effectively, I need some way to hook my test code into the elements, widgets, frames, or whatever else is on my UI.

In past projects, I have used Selenium with a couple of different strategies. One is User Interface Mapping, in which a sub-system's element locators are stored in a common place. The test scripts then do not hard-code the details of how to locate the elements they want; instead, they refer to the common mapping name. Then if/when an element's id or path changes, the mapping is changed once, rather than in dozens of scattered tests. For more information, see here.
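As a rough sketch of that idea (the file name, keys, and locator syntax are all invented for illustration), the locators might live in a shared properties file that a small helper class translates into Selenium locators:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    import org.openqa.selenium.By;

    // Loads element locators from a shared file, e.g. ui-map.properties:
    //   login.username = id=username
    //   login.submit   = css=#loginForm button[type='submit']
    public class UiMap {
        private final Properties locators = new Properties();

        public UiMap(String path) throws IOException {
            try (FileInputStream in = new FileInputStream(path)) {
                locators.load(in);
            }
        }

        // Translates a mapping name like "login.username" into a Selenium By.
        public By locate(String name) {
            String value = locators.getProperty(name);
            if (value.startsWith("id=")) {
                return By.id(value.substring(3));
            } else if (value.startsWith("css=")) {
                return By.cssSelector(value.substring(4));
            }
            throw new IllegalArgumentException("Unknown locator strategy: " + value);
        }
    }

Tests then call driver.findElement(map.locate("login.username")), and a changed element ID means editing one properties file, not dozens of tests.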

I have also used Selenium with the Page Object Model design pattern. It wraps each UI page in an object-oriented Page object, through which test code interacts with that page. Any changes made to the actual page then impact only the Page object, not the dozens of tests. For more reading, see here.
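A bare-bones Page Object might look like this (the page and element names are placeholders):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Encapsulates one UI page; tests call these methods instead of
    // touching locators directly, so locator changes stay in this class.
    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public void loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("loginButton")).click();
            // A fuller version would return the Page Object for the
            // next screen, so tests can chain steps fluently.
        }
    }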

In the proprietary home-grown tool, another kind of CSV file (our 3rd different kind, called DATA files) defines this mapping. Each column identifies a UI element of the page, and maps between the UI element's ID and the underlying data model's representation. It has also been extended so that optional action columns can be associated with each element column, giving individual tests influence over any UI-to-server interactions when they access those elements.
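Again purely as an illustration (the real columns are proprietary; everything below is hypothetical), such a DATA file might be shaped like:

    testId,accountName,accountName_action,accountType,balance
    TC-101,Chequing Main,VALIDATE_ON_BLUR,CHQ,1500.00
    TC-102,Savings Alpha,,SAV,0.00

Each column names a UI element and its data-model counterpart, the optional action column beside it tweaks the UI-to-server behaviour, and each row belongs to one test case.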

I find this approach clever and potentially quite powerful, but at the cost of human readability. With practice, I am getting better at creating and maintaining them, but the need to scroll horizontally is a minor but persistent annoyance.

#5 Persistence
The final element a good code-driven UI testing framework needs is the ability to define data for the test to use.

Sometimes the data needs to be entered as input into the UI; sometimes the data needs to be in place for lookups or for the UI to act upon.

As with everything else in the proprietary system I am learning, this is done through CSV files. One set of CSVs (our 4th different kind, call them LOAD CSVs) is used to populate an empty database schema with some minimum data, such as country codes, account types, process flows, etc. These define the base data available to all tests.

The DATA CSVs are then reused within the framework, to define the values that a specific test is expected to enter on the very screen that the same DATA file maps.

The DATA CSVs therefore serve double duty: mapping the UI and specifying the values used by a specific test. Each new row carries the test case's unique identifier and the values relevant to that test's scenario.

I like how easy this makes it to see what data is being tested on a particular screen. At a glance I can see if a screen is well tested (or at least that it has lots of tests inserting data).

However, I dislike that the framework makes it impossible to reuse data. If three tests all need to enter the same data on screen A and then variations on screen B, the data for screen A must be copied three times in the CSV.

There are other solutions to Persistence as well. I remember one project that used HypersonicDB to create an entirely in-memory database, along with a sophisticated set of classes to populate that database with core data and to let tests easily build up test-specific additional values.
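The gist of that approach, in a few lines (the table and values are invented for this sketch):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InMemoryDbSketch {
        public static void main(String[] args) throws Exception {
            // The "mem:" URL gives a throwaway in-memory database
            // that disappears when the JVM shuts down.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:testdb", "SA", "");
                 Statement stmt = conn.createStatement()) {

                // Core data that every test relies on...
                stmt.execute("CREATE TABLE country (code VARCHAR(2) PRIMARY KEY, name VARCHAR(64))");
                stmt.execute("INSERT INTO country VALUES ('CA', 'Canada')");

                // ...plus whatever extra rows a specific test case needs.
                stmt.execute("INSERT INTO country VALUES ('FR', 'France')");
            }
        }
    }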

On another project, the team's solution had been to define a fairly heavyweight core dataset that lived in a live SQL Server instance. It was maintained through a rather top-down process, and was pushed out regularly to individual workstations and servers for writing and running automated tests. Any change, such as new data, had to be proven on the dev's computer and then go up the ladder, to ensure the appropriate data was added to the next release of the test bed.

Links:
Selenium
Hypersonic
JUnit
TestNG
