For predominantly historical reasons, my current gig has three different test automation frameworks: one in Python, one in Java, and one in JavaScript.
When I started here, I was given ownership of the team producing the Python test automation. I set one of the team members up as the Framework Owner, responsible for all framework changes and for supporting the rest of the team in its use.
And then my Framework Owner had an emergency and had to take a month's hiatus. I was left to fill his shoes. But I hadn't actually used the framework. I'd built some helpers, and unit tests for them, but I'd never had to debug any of the cases myself, much less rummage through the output to understand what was happening.
I wound up spending my time solving the short-term problem: I added code to output the test status and the debugging info I needed to report results. When he got back, I removed that code.
And then I thought about that experience. And about the fact that, in the places we've been, if we had a Java opening and a surplus Python programmer, we'd probably fire the one and go looking for the other. Thing is, when that happens, we lose a lot of institutional knowledge.
And the language itself is the easy part. I learned Python in a week by asking Google how to do things I already knew how to do in other languages. I even won a "fastest code wins" competition against a developer at a company I was trying to get on with. In a week.
The harder part is finding the pieces in the code and the output.
As I ruminated on that, I was reminded that while I was at Home Depot, I was given the directive that every person in the company shall be able to read and understand the summary portions of the test result output. Which we did.
And I realized that if I applied that same model, with all the pieces described below, not only would everybody be able to understand the output, it would also speed up any debugging we needed to do. The reasons will become clear in the examples below.
And if we implemented this same kind of output in all three of our frameworks, engineers who knew to look for these landmarks in one would be able to switch to another project, even one in a programming language new to them, and add value within a week. So maybe we can keep folks we might otherwise have to lay off.
Work has given me leave to roll out Uniform Results. This doesn't replace any existing thing. In fact, we're also looking at moving all the frameworks to Allure for at least some of the reporting. But this will add to the console output, and may also eventually serve as a springboard for deploying a logging server (since now all the traffic will come through one set of pipes).
BANNERS
A banner is a chunk of output which looks something like this (a mocked-up example; the exact fields vary with what failed):
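    ==================== SOFT ASSERT FAILED ====================
    Test       : test_login_page_title
    Assertion  : substring in string
    Expected   : 'Welcome back'
    Actual     : 'Welcome to Example Corp | Sign in'
    File       : tests/test_login.py, line 42
    Time       : 2024-03-14T10:22:31
    ============================================================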
This particular one is performing an assertion that a substring will be in a string. If it had instead been a failure from a selector not finding its element, the fields would include things like what the selector was, how long the timeout was, and similar information.
My own experience has been that in the past I've often had to run a failed test a second time to capture some missing piece of information. The point of the banner is to never have to run the test again for more information. If you find yourself needing something more, you add it to the template for that kind of banner.
Banners get emitted for every assertion failure (including soft ones) and for every error.
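As a sketch of how that can work (all names here are hypothetical, not our actual framework code): each kind of banner gets a template, and emitting a banner is just filling its template in.

    import datetime

    # One template per kind of banner. Adding a missing field later means
    # editing the template once, not hunting down every call site.
    BANNER_TEMPLATES = {
        "assert_in": (
            "Assertion  : substring in string\n"
            "Expected   : {expected!r}\n"
            "Actual     : {actual!r}"
        ),
        "selector_timeout": (
            "Selector   : {selector}\n"
            "Timeout    : {timeout}s\n"
            "Page       : {url}"
        ),
    }

    def emit_banner(kind, title, test, **fields):
        """Print a banner for an assertion failure or error."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print("=" * 60)
        print(title)
        print("Test       : " + test)
        print(BANNER_TEMPLATES[kind].format(**fields))
        print("Time       : " + stamp)
        print("=" * 60)

    # e.g., from inside a soft-assert helper:
    # emit_banner("assert_in", "SOFT ASSERT FAILED", "test_login_page_title",
    #             expected="Welcome back", actual=page_title)

Because every banner flows through one place, adding a field to a template upgrades every future failure report at once.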
TERMINAL REPORT
The Terminal Report comes at the end of the test run, and summarizes a lot of information. It comes in three parts:
TERMINAL REPORT: TEST ENVIRONMENT REPORT
This basically just lists anything about the test run which might be of value later when investigating issues. This is a completely faked-up example:
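    ============== TEST ENVIRONMENT REPORT ==============
    Framework version : 2.4.1
    Application build : storefront 2024.03.2 (staging)
    Target URL        : https://staging.example.com
    Browser           : Chrome 122.0.6261.94
    OS                : Ubuntu 22.04 (CI runner)
    Python            : 3.11.8
    Run started       : 2024-03-14T10:15:02Z
    =====================================================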
TERMINAL REPORT: TEST CASE STATUS

For each test case, this shows which end state it reached: Passed, Failed (an assertion did not hold, hard or soft), or Errored (the case blew up somewhere outside an assertion).
We can attach marks to test cases indicating that there's a known bug against that case. This lets us report Unchanged (the known bug is still there) and Fixed (it isn't anymore) as test statuses.
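Another mocked-up example:

    test_checkout_guest ............................... PASSED
    test_price_calculation (data set)
        row 6 ......................................... FAILED (assertion)
        row 11 ........................................ ERROR
        all other rows ................................ PASSED
    test_coupon_rounding [BUG-1234] ................... UNCHANGED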
This shows status information for three cases. The middle one is a data-set test: row 6 suffered an assertion failure, while row 11 suffered an error. The last one carries a bug mark, so it reports Unchanged instead of Failed.
TERMINAL REPORT: TEST RUN SUMMARY
This comes at the very end and summarizes the whole run. One more faked-up example:
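    =================== TEST RUN SUMMARY ===================
    Total cases : 212
    Passed      : 198
    Failed      : 6
    Errored     : 3
    Unchanged   : 4   (known bugs, still failing)
    Fixed       : 1   (known bugs, now passing)
    Elapsed     : 0:41:17
    ========================================================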