Thursday, November 29, 2018

Automation Pain Points II: Resilience Part 2, Flows

In the last piece, we looked at the benefits I've seen at other companies from strict encapsulation at the page level. But in my current gig, we take it a step further. We have another layer between tests and pages called flows. A flow interacts with one or more page objects.

This means that pages are the workers: they know how to do specific pieces of work.

Flows are the managers: they line up which workers to use, and in what order. A flow is also the series of interactions the user takes to perform some action; flows model the end user's interaction with the system.

The tests are the executives that decide which flows to call to test a specific underlying feature or group of features.

In practice, this looks like a lot of unnecessary complexity, especially for test cases whose flows are one line long. But it also further isolates the tests from changes in the UI.
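To make the layering concrete, here's a minimal sketch in Python with Selenium-style calls. The class names, selectors, and credentials are invented for illustration; this is not our actual code, and yes, the flow really is one line long.

```python
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Assumed setup: a local Chrome session; any WebDriver will do.
    d = webdriver.Chrome()
    yield d
    d.quit()


class LoginPage:
    """Worker: knows the selectors and mechanics of one page, nothing else."""
    def __init__(self, driver):
        self.driver = driver

    def submit_credentials(self, data):
        self.driver.find_element("id", "username").send_keys(data["username"])
        self.driver.find_element("id", "password").send_keys(data["password"])
        self.driver.find_element("id", "login").click()


class LoginFlow:
    """Manager: lines up which page objects to use, and in what order."""
    def __init__(self, driver):
        self.login_page = LoginPage(driver)

    def log_in(self, data):
        # A one-line flow looks like overkill, until the page changes.
        self.login_page.submit_credentials(data)


def test_user_can_log_in(driver):
    """Executive: decides which flows to call to exercise the feature."""
    LoginFlow(driver).log_in({"username": "test_user", "password": "secret"})
```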

We've had large-scale redesigns that mostly required only page object updates. Sometimes the flows needed to be updated too, but the underlying functionality hadn't really changed.

This means tests are updated more quickly, and so the automation can do more for less effort.

It also means we can write automated tests and flows before we have page mockups, as long as we have detailed enough specs. We can add the page objects as they become available.
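Here's a hedged sketch of how that can work; the names and the NotImplementedError convention are invented for illustration, not our actual framework:

```python
class PaymentPage:
    """Stub page object: the subflow signature comes straight from the spec."""
    def __init__(self, driver):
        self.driver = driver

    def submit_payment(self, data):
        raise NotImplementedError("Payment page not built yet")


class PaymentFlow:
    """The flow and its tests exist before the UI does; when the real page
    ships, only the stub's body changes and the flow stays as written."""
    def __init__(self, driver):
        self.payment_page = PaymentPage(driver)

    def pay(self, data):
        self.payment_page.submit_payment(data)
```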

Woo hoo!

Saturday, November 3, 2018

Automation Pain Points II: Resilience Part 1, The Page

I am a fundamentally lazy guy. That's why I write test automation, because I don't want to keep doing the same crap over and over.

But the problem with encoding knowledge into source code is that the resulting representation of that knowledge is brittle. Every time the application under test (AUT) changes, the tests have to change.

I first saw this at Symantec in 1991, working on OnTarget, a project management package. There were three of us automators, and we had built up only a rather small set of tests before we became unable to create any more. We were too busy keeping up with AUT changes.

By this time, we were using SilkTest and its implementation of page objects. Those page objects were really just intended to be a better tool for managing selectors than the huge lists of constants QAWorkbench (which became SilkTest) had used in the alpha stage of its development.

But we took it a step further, and built what folks today would recognize as page objects. The point behind the way we use page objects at Home Depot today is to encapsulate all the details of interacting with a page into one place.

When I started there, we had tests directly calling methods in page objects. So when they completely redesigned the app, all the tests had to be thrown out. This is not good resilience.

The problem was that the test code was tightly coupled to the page code. The rule when I arrived was that no test used the selectors on a page; instead, methods were created. I understand why they created that rule, but the problem was that they had routines like click_submit_button(), which meant that how to operate the page was still encoded in the test. When the AUT underwent its complete redesign, all of those tests were trash and had to be rebuilt almost entirely from scratch.
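To make the coupling concrete, here's a hypothetical sketch of that pattern; the page, fields, and ZIP value are invented for illustration:

```python
# The old pattern: the page object only wraps individual controls, so the
# test still has to know which controls exist and the order to drive them in.

class CheckoutPage:
    def __init__(self, driver):
        self.driver = driver

    def enter_zip(self, zip_code):
        self.driver.find_element("id", "zip").send_keys(zip_code)

    def click_submit_button(self):
        self.driver.find_element("id", "submit").click()


def test_checkout(driver):
    page = CheckoutPage(driver)
    page.enter_zip("30339")     # the test knows the page has a ZIP field...
    page.click_submit_button()  # ...and knows the submit step comes last.
    # Redesign the page, and every test written this way has to change.
```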

Today, each page object implements subflows. A subflow takes a dictionary with all the data needed for that page and implements all the steps for a single action on that page. Everything about how the page does its work is encapsulated and isolated from the test. Even the default values we expect controls to have, such as the text of error messages that should appear, are stored in the page object itself.
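Here's a minimal sketch of the same hypothetical checkout page rebuilt around a subflow; again, the names, selectors, and message text are illustrative rather than our actual framework code:

```python
class CheckoutPage:
    # Expected defaults live on the page object too, not in the tests.
    INVALID_ZIP_MESSAGE = "Please enter a valid ZIP code."

    def __init__(self, driver):
        self.driver = driver

    def submit_shipping_info(self, data):
        """Subflow: every step for the single 'submit shipping info' action."""
        self.driver.find_element("id", "zip").send_keys(data["zip"])
        self.driver.find_element("id", "city").send_keys(data["city"])
        self.driver.find_element("id", "submit").click()


def test_shipping(driver):
    # The test (or flow) supplies one dictionary and never touches the controls.
    CheckoutPage(driver).submit_shipping_info({"zip": "30339", "city": "Atlanta"})
```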

Monday, October 29, 2018

Automation Pain Points I: Synchronization

Let me say first that I love what I can do with test automation. It has definitely become an art over the years I've been doing it.

One of the very first pain points I encountered was synchronization. In those days, I had to build my own tools; this was just before SilkTest went Beta 1. I tried sleeps. They worked poorly.

And when I started at Home Depot, I got into quite the argument about using sleep statements. A co-worker, Clay, got bent at me because I had put a sleep of 1/3 second into a routine.

Now, mind you, what this routine did was poll for several different controls, every third of a second, until one of several conditions was met. He insisted, "No sleep statements!" So I took him for a walk inside WebDriver, where it does the exact same thing. It was a good example of somebody following a rule because there's a rule, not because it was warranted in that case.
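For the record, the routine was shaped roughly like this sketch (simplified here, not the actual code): poll a list of conditions, return as soon as any one of them holds, and sleep briefly between passes, which is exactly what Selenium's WebDriverWait does internally with its poll_frequency.

```python
import time


def wait_for_any(conditions, timeout=30, poll_interval=1 / 3):
    """Return the first condition (a zero-argument callable) that comes true.

    Polls every `poll_interval` seconds until `timeout` seconds have elapsed.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        for condition in conditions:
            if condition():
                return condition
        time.sleep(poll_interval)  # the third-of-a-second sleep that started the argument
    raise TimeoutError("None of the expected conditions were met")
```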

Sleep statements are almost never a good solution in test automation. My case was unusual because there was no other way to do all of those checks at the same time.

But when anybody puts a sleep in a test otherwise, I call them on it. I know just how bad careless use of sleep statements can be. I have made tests take longer than necessary by trying to extend a wait to cover all the various response times from the application under test (AUT).

So instead, I ask folks to look at "how do you as a human know the AUT is ready to continue?" Is it because a field has become populated? We have an assert_not_empty() to cover that condition. Is it because a control exists? We have assert_exist() to cover that condition. Are we waiting for it to have a specific value? We have assert_value() to cover that one. All of these validation routines take a timeout as an argument.
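Under the hood, a helper like that is just an explicit wait wrapped around the human-visible readiness cue. Here's an illustrative sketch of what assert_value() might boil down to with Selenium; the real routines are framework-specific, and the locator in the usage note is invented:

```python
from selenium.webdriver.support.ui import WebDriverWait


def assert_value(driver, locator, expected, timeout=30):
    """Fail unless the element at `locator` shows `expected` within `timeout` seconds."""
    WebDriverWait(driver, timeout).until(
        lambda d: d.find_element(*locator).get_attribute("value") == expected,
        message=f"element {locator} never showed value {expected!r}",
    )


# Usage sketch: wait up to 10 seconds for a (hypothetical) total field to populate.
# assert_value(driver, ("id", "order-total"), "$42.00", timeout=10)
```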

I don't believe in rules for their own sake. In fact, I think the fewer rules we have, the faster development goes. Everything about our framework is about velocity: reducing the time to build and maintain tests. But "don't use sleep except in the most unusual of circumstances" is one rule I do keep.

Keeping Your Perspective

For those of us who've been in IT for many years, the number of new and exciting technologies, paradigms, methodologies and philosophies around us right now can seem overwhelming. Ideas that started in small shops, startups and incubators are now reaching into even the most conservative of industries. Even my own industry, dental insurance, is starting to adopt agile practices.

It can be difficult to find your way through this barrage of new ideas: We want continuous integration and continuous delivery and your framework needs to support our particular flavour of agile but it also needs to support our legacy waterfall apps but we're not going to call them waterfall any more and also we need better reporting and traceability and and and ...

At times like this, I find it very helpful to take a step back and re-orient myself around a single, simple tenet:
   My job, as automation engineer, is to support quality.

Let's unpack that a little. Quality means different things to different people and organizations. You might measure quality with defect metrics, you might have some sort of federally mandated guidelines, and you hopefully have a set of functional and non-functional requirements you are gauging against. Regardless of how you measure it, your job, as an automation engineer, is to do everything you can to help ensure quality.

Functionally, test automation is a component of overall QA. If your core QA practices are shaky, the best automation in the world will not save you. Everything you do as an automation engineer needs to ultimately serve to bolster QA. Whether you are part of a small team or have an impact across the enterprise, this holds true.

I use this tenet every day. We are in the midst of developing a new enterprise-wide automation framework. Keeping my perspective on "support quality" helps me filter through the options when choosing technologies and methodologies. It helps me remember who the stakeholders are when I'm designing elements of the framework, such as reporting. It helps me figure out the "how" when tasked with something like adding testing to our CI setup. Hopefully it can help you too.

Sunday, October 21, 2018

The Pain Of Test Automation

"So what do YOU think are the biggest pain points for test automation?" Someone asked me. I've been thinking about that for the last several weeks. This is the list I came up with:

Synchronization - Making sure the test doesn't get ahead of itself, and making sure that it can continue.

Resilience - Recovering quickly after the application under test changes.

Interface Proliferation - When test automation libraries get too big.

Networking Problems - Possibly my number one issue at my current gig.

Looking Stuff Up - How much time is spent looking up how to do things. A lot more than most people think.

Data Management - Getting consistent data into the application under test, mocking external interfaces, and so on.

Artifact Aging - What test artifacts to hold on to and for how long.

Reading and Maintaining Other People's Code - Coding standards, training, and so on.

The next few posts will look at each of those in a little more depth, including how I deal with each of them in my current work.

Stay tuned!