New tool: View Feature-files in your browser

I’m proud to announce a new tool: Featrz.

Featrz (featrz.com) can display the feature files from a repository without starting an IDE such as IntelliJ or Visual Studio.

Feature files are text files that describe the features of an application in a language called Gherkin. These feature files are often used by tools like Cucumber (Java) or SpecFlow (C#) to facilitate living documentation, i.e. specify the features of an application with examples and have those examples executed as automated checks in your continuous integration (CI) pipeline.
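
To give an idea, a minimal feature file might look like this (a hypothetical example, not taken from any actual project):

    Feature: Cash withdrawal
      As an account holder, I want to withdraw cash from an ATM,
      so that I can pay in shops that only accept cash.

      Scenario: Successful withdrawal within the balance
        Given my account balance is 100 euro
        When I withdraw 40 euro
        Then I receive 40 euro in cash
        And my account balance is 60 euro

A tool like Cucumber maps each Given/When/Then step to a piece of automation code, so the same scenario is both readable specification and executable check.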

In addition, Featrz offers the following benefits:
– Read the features in a browser, also as a non-techy person. Business Analysts, Product Owners, and Department Managers can see how an application works. And they know it really works like that, because the examples are executed as automated checks as well.
– Include images to illustrate the features. Sometimes a picture says more than words.
– Add hierarchy to distinguish high-level features from very detailed examples that cover, for example, only one of the microservices in an entire landscape of microservices.

I encourage you to have a look at featrz.com. Under ‘Project’ in the top-right corner, you can select one of the demo projects or enter the URL of your own open-source project with feature files.

New Techniques: Single Page Application

In my previous blog, I wrote that I’m developing a new tool: Featrz (featrz.com). Go ahead and take a sneak preview. You can see some demo features by selecting a project in the top-right corner.

Bootstrap

For the GUI, I built a Single Page Application. I was a bit hesitant at first, because it meant I had to learn a lot of JavaScript as well. But the architecture makes more sense to me: load all the HTML, CSS, and JavaScript once, and after that it is just REST messages going back and forth. At least it makes more sense for this application.

I chose Bootstrap, where it could have been Vue.js, Angular, or any other framework. At the time, Bootstrap was simply what I heard most about around me; nowadays I would probably pick Vue.js. It will be interesting to see whether Bootstrap can easily be replaced without harming the architecture, while giving me the same benefits (see below). But that’s for another time.

Unit Tests for JavaScript

Since JavaScript was new to me, I wanted to use it in a clean manner: not the hacky scripting way, but in a structured, object-oriented way. And I wanted to create automated checks for that code as well. I figured that if half of the lines of code of the entire tool are JavaScript, I must have unit tests for that JavaScript too. Using classes was the logical thing to do.

But I took it a bit further. I separated the data (model) from the services, and the services from the views and controllers, creating a Model-View-Service structure. It allowed me to test the services in a unit-test manner, i.e. well isolated, using Mocha.

Note that JavaScript does not prevent you from taking shortcuts. It comes down to simply not doing hacks, and a lot of self-discipline. Even if you stick to that, JavaScript will give you enough challenges with callbacks, Promises, and plain magic in some libraries. Many examples on the internet (yes, I google a lot) are not applicable to the way you want to use JavaScript (e.g. a different ES level, or browser-side vs. server-side). I learned a lot, but most importantly that JavaScript is BIG, and that many libraries and solutions exist for the same problem. Don’t try to master it all at first; rather keep it simple and don’t use things you don’t understand. Or else learn to understand them, and adding unit tests certainly helps with that.

Conclusion

I’m very happy with the Single Page Application setup in combination with JavaScript classes and the MVS structure. Since I switch a lot between Java and JavaScript, I had no problem switching back to JavaScript and quickly understanding the code. I don’t think I could have said the same if it had been one big single-file script.

But I’m also very pleased with the set of unit tests that run on the code. I’ve seen too many projects where unit tests were well in place on the server, but when it came to the GUI, the JavaScript logic was tested with Selenium in integration or end-to-end tests. The structure I used here checks the JavaScript logic at the right level, i.e. the unit level, resulting in quick feedback with low maintenance.

New Techniques: Quarkus

In the past year or so, I’ve been working on Featrz (featrz.com). Go ahead and take a sneak preview. You can see some demo features by selecting a project in the top-right corner.

What makes this application interesting are the techniques used to build it. Whenever I hear about a new technique at a conference or in a blog, I make a note of it with the intention of trying it out, to see if it is really useful and works in a real project: not a Hello-World application, but one with security, a real database, a CI pipeline, and of course automated checks. One of the techniques I noted down is Quarkus.

Quarkus is “supersonic subatomic Java” and “a Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards”. In my words: it allows Java applications to run inside a container and be fast. Also very handy: you can develop while the application is running and see the effect of your changes immediately.
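
To give an impression of what a Quarkus service looks like, here is a minimal sketch of a REST resource. The endpoint and class are hypothetical, not taken from the actual Featrz code, and depending on the Quarkus version the JAX-RS imports live under javax.ws.rs or jakarta.ws.rs:

    // A minimal JAX-RS resource, as Quarkus uses it.
    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    @Path("/features")
    public class FeatureResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String list() {
            // A real service would query a data store; a fixed JSON
            // string keeps the sketch self-contained.
            return "[{\"name\": \"Cash withdrawal\"}]";
        }
    }

Started in development mode (./mvnw quarkus:dev), the application picks up code changes on the next request, which is the live-development experience mentioned above.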

Featrz

The Featrz application is about showing feature files (Gherkin) in a browser. It starts with cloning a repository and converting the files with the .feature extension to JSON. These JSON blobs are then stored in a NoSQL database and used by the GUI for display.

There are four microservices: a frontend, a server, a converter, and a data-store service. All are built with Quarkus and, except for the data-store, all run in native mode, i.e. they are not compiled to Java bytecode to run on a JVM, but to an executable for a specific operating system. That’s not a step back, because each service runs in a container whose OS we know, and the container itself gives us the portability as well. The containers are deployed in Google Cloud and, when needed, they run using Cloud Run.

To illustrate how fast Quarkus is: the conversion happens in real time. I wanted to store the results to make it even faster, but it turns out that retrieving the data from the database is hardly any faster than converting the files again. At least not for the small projects that I have used so far.

Support for libraries

A drawback of Quarkus may be that not all libraries are supported when compiling in native mode, although the Quarkus project is working hard on supporting more of them. For example, instead of using Gson, I had to write my own package to convert my POJOs to JSON. I didn’t find that too troublesome either: it forced me to think carefully about the design, and as long as the POJOs are simple (which they are; they are my POJOs), the conversion is easy as well. If there is no other way, you can always run Quarkus in JVM mode, as I did with the data-store service.
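
As a sketch of what such a hand-rolled conversion can look like, here is a simplified example. The Feature class and its fields are hypothetical, not the actual Featrz POJOs:

    // Hypothetical POJO with its own JSON conversion. A production version
    // would also escape quotes and backslashes and handle null values;
    // keeping the POJOs simple keeps that manageable.
    public class Feature {
        private final String name;
        private final String description;

        public Feature(String name, String description) {
            this.name = name;
            this.description = description;
        }

        public String toJson() {
            return "{\"name\": \"" + name + "\", "
                 + "\"description\": \"" + description + "\"}";
        }
    }

For flat POJOs like this, the hand-written conversion stays small. The moment objects start nesting or carrying lists, a library like Gson earns its keep, which is exactly why it pays to keep the POJOs simple here.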

The use of containers also allows me to run the instances in a Kubernetes cluster. That makes the application very flexible in where and how it is deployed. Perhaps I will later release this tool, or a variant of it, for private Kubernetes clouds as well. Plus, I can test the application locally.

Conclusion

If you’re like me and want to try out something new in order to really find out if and how it works in a real project, I can definitely recommend trying out Quarkus. But also if you’re looking for ways to create microservices in a public or private cloud, Quarkus may be an interesting way to go.

“If you think you’re done, think again”-heuristic

When all tests are done and you think you’ve done enough to justify a release, think again. If there is still time, sleep on it for a night.

The extra time will allow you to defocus, to be creative, and to look for other ways. If you step back from your tests, you usually have a better eye for the bigger picture. It results in new “what happens if…” questions, new test approaches, and new test techniques.

How much extra time is needed depends on the project, the person, and the System Under Test. Sometimes a cup of coffee or a trip to the toilet is sufficient; in other cases a night or a weekend gives better results.

In my experience, new tests will pop up, and some of them will even reveal previously undiscovered bugs. It is often well worth the extra time.

Automated testing is more than automated checking

I was involved in an interesting discussion on Twitter the other day. It started with a tweet by @testingqa (Guy Mason): “Still of the opinion that ‘Automated Testing’ is a deceptive term,no testing is being performed,it should be called ‘Automated Checking’ #qa“. With that he probably referred to Michael Bolton’s blog about there being a difference between testing and checking.

After that blog, lots of people, mainly automation sceptics, stated that Automated Testing should be called Automated Checking. Although I acknowledge and agree that there is a difference between testing and checking, I don’t think it should be called Automated Checking. I’ll explain why not below, but first the rest of the Twitter conversation:

I responded to @testingqa’s tweet with: “@testingqa but that’s not true either. Automated Testing is more than checks. There are actions as well.”

(@michaelbolton: “. @testingqa I agree, but I think @AdamPKnight expressed things well. He specified checking, so I read “automat[ion assist]ed tests”. #qa“)

@testingqa: “@ArjanKranenburg Yes, actions are taken which verify behaviour is consistent with expectations. But that’s still a check.”

@arjankranenburg: “@testingqa I meant actions before you can start checking. You have to tickle the SUT before verifying the response.”

@arjankranenburg: “. @testingqa and then there are preparations, cleanup, reporting, etc. Automated Tests is so much more than just Checks. #testing #qa”

@michaelbolton: “. @ArjanKranenburg @testingqa Test automation (any use of tools to support #testing) should be much more than checks. Alas, often, it isn’t.”

@arjankranenburg: “. @michaelbolton @testingqa I think it often is. E.g clicking a button is an action, verifying the response is a check. #testing #qa”

@arjankranenburg: “. @michaelbolton @testingqa I understand you don’t want to call anyting automated a test, but it’s more than a check. #testing #qa”

@michaelbolton: “. @ArjanKranenburg Your assertion that I want to call anything automated a check is incorrect. #testing #qa”

@michaelbolton: “. @ArjanKranenburg Something is a check when it doesn’t involve cognitive engagement. Tools can extend cognitive engagement. #testing #qa”

@michaelbolton: “. @ArjanKranenburg #Testing tool use turns into checking when it *displaces* cognitive engagement. #CultOfTheGreenBar #qa”

@michaelbolton: “. @ArjanKranenburg Note that risk identification, design, and programming–the preparation–are #testing activities, requiring sapience. #qa”

@michaelbolton: “. @ArjanKranenburg Verifying *one factor* of the response is a check. Checking focuses on output; #testing on outCOME. Beizer might agree.”

@testingqa: “@ArjanKranenburg Automation does gather info & does allow one to verify response but distinction remains that it only checks to verify…”

@testingqa: “@ArjanKranenburg …if it meets expected outcome (or not). Preparing/cleanup does not reveal new information though and reports based on…”

@testingqa: “@ArjanKranenburg … checks only verifies against expectations still. As @michaelbolton said it can be used for more, but most times do not.”

To summarize Michael’s blog (but please read his whole blog series on testing vs. checking, because there is more to it): an important difference between testing and checking is that testing requires cognition to interpret the information revealed by one or more checks. But I’d like to extend that, as I think testing is more than checking plus interpretation.

And this becomes apparent when trying to automate a test. The base of a test consists of actions, checks, and interpretation of the retrieved information. Most Systems Under Test (SUTs) need to be tickled before they respond: you need to click a button, send a request, press a key, and so on. In theory, a SUT can do things without an external stimulus, but in most cases it doesn’t.

Then the SUT responds, that response can be checked for certain aspects, and the revealed information must be interpreted. If you state that Automated Testing should be called Automated Checking because the cognitive part can’t be automated, you’re ignoring the actions that can, and often need to, be automated as well.

And there is more:

  • Before starting the actual test, you need to prepare the SUT to make sure it is in the correct state for the test to execute.
  • Automated test cases are often run in a batch, so it is good practice to restore the SUT to its original state afterwards.
  • Since interpretation of the results is still needed, the results must be presented in a tester-friendly way. What you report, and how, is very important.
  • Etc.

All these activities can be automated as well and are often included in the automated test script.
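
To make this concrete, here is a deliberately schematic sketch of such a script. All names are hypothetical, and an in-memory stand-in replaces a real SUT so that the example is self-contained:

    // Deliberately schematic: the point is the structure, not the code.
    // Preparation, action, check, cleanup, and reporting are all part of
    // the automated script; only the check is the actual "checking".
    public class WithdrawalScript {

        private int balance; // stand-in for the state of a real SUT

        public static void main(String[] args) {
            WithdrawalScript script = new WithdrawalScript();
            script.prepare(); // bring the SUT into a known state
            script.act();     // tickle the SUT: the withdrawal itself
            script.check();   // verify one aspect of the response
            script.cleanup(); // restore the SUT for the next test in the batch
        }

        void prepare() { balance = 100; }

        void act() { balance -= 40; }

        void check() {
            // Reporting: the script only states what it observed;
            // interpreting a failure is still up to the tester.
            System.out.println(balance == 60
                ? "PASS: balance is 60"
                : "FAIL: expected balance 60, got " + balance);
        }

        void cleanup() { balance = 0; }
    }

Only the check method performs a check; everything around it is automation too, and none of it is covered by the term Automated Checking.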

My point is: if you don’t want to use the term Automated Testing, call it Automation Assisted Testing (I like that one), but Automated Checking simply doesn’t cover the activities done in Automated Testing.

Diversified Testing

Lately, it’s popular to have tests driven by something. We have long had Requirements-based and Risk-based testing, which would undoubtedly have been called Requirements-driven and Risk-driven Testing had they been invented in the past decade. Then there are Use Case-driven, Design-driven, Data-driven, Keyword-driven, Model-driven, and Business-driven testing. I’ve probably forgotten a few, but Common Sense-driven testing never seems to be an option.

What’s missing from the discussion is that in most cases it is best to combine approaches and techniques in order to get a diversified test approach. If you consider the types of bugs that can exist in your application, and that every type of bug has its own best way of being detected, it is common sense that different approaches and techniques should be applied. Some approaches will allow you to find the majority of the bugs, but they may not be the best way to find certain types of bugs.

‘Approach’ should be taken broadly here. Some bugs are easily detected by reviews, whiteboard sessions, or other static testing methods. What the ‘best way’ is depends on your product, your organization, and the circumstances, and it is certainly easier said than determined. But I don’t believe there are situations where only one approach is best. And trying to find a bug in a later phase may sometimes be the best option as well.

If you rely on a single approach, it gets harder and harder to find the next bug, and there is a substantial risk that not all bugs will be found. To minimize that risk, diversifying your test techniques and approaches is the logical thing to do.

This is why I don’t read test books

At some conferences, time and space are reserved for shameless book promotion. Sometimes entire reviews are published in magazines, and if you have written a book, you are almost automatically considered the authority on the subject. Test books are popular, and in these days of crisis it seems that even more books are being published.

Personally, I don’t read a lot of books, at least not on the subject of testing, and here is why:

  • A book is an old technology. There is no interaction, no feedback, no discussion. Especially on the subject of testing, interaction and discussion are very necessary.
  • The content is already old by the time the book is published. The IT world moves faster and faster these days, but books must be reviewed, edited, printed, distributed, etc. And a good book sparks discussion. That discussion is never printed (see the first point), and corrections resulting from it are only published in later editions of the book.
  • Authors of test books often write them for their own promotion. A book looks good on the author’s CV, not on the reader’s.
  • Test books are rarely based on solid research, e.g. the kind of research done at universities. This makes the foundation very thin and often applicable only to a very specific situation. What is left are opinions, and blogs and online fora are far more suitable for those.
  • I am, and this is a personal one, a slow reader. I simply don’t have the time to read boring books of 400+ pages.

This does not mean that I don’t educate myself. I read a lot of blogs and magazines, participate in online fora, and go to events and conferences when possible. For me these are valuable sources of information. They provide me with tips, new insights, hints, etc., and keep me up to date with the latest from the testing field. And in a much faster, more honest, and more direct manner.

No doubt there are exceptions: books that have none of the drawbacks mentioned above. Let me know if you’ve found one.

Context-Driven Testing

A test ideology that I totally agree with is Context-Driven Testing. Yet I will never describe my activities like that. In short, Context-Driven Testing says that the best way of testing depends on the context. There are so many external factors influencing your test activities that no single best practice will work for all of them.

So why won’t I use the term Context-Driven Testing?

Because it is a typical engineer’s answer to the question of how testing should be done: “it depends”. This is true, but the answer will not get you any further.

Of course it depends. Every project is different. Every team is different. The customer, the budget, the timing, and the available time are all different. So it makes sense that testing is different as well. Therefore, every test project should start with a good thinking session about how the test activities should be done. (Actually, the first question is whether testing needs to be done at all.)

And best practices, as well as experience, can help you choose a strategy, techniques, tools, etc. Think of them as a list of tips and tricks that can be used: pick the ones you think are suitable, or invent your own. Whether they are right for the job is your responsibility, not the author’s. No one ever claims that best practices are universally applicable.

Besides, best practices already carry a context with them. Testing in traditional engineering is far different from testing software. That’s why best practices are often presented as “Best Practices in Software Testing” or “Best Practices for Model Based Testing in an Embedded PLC Solution”. But unfortunately, that context is not always copied along, and the best practice ends up on a list of general Test Best Practices.

Nonetheless, people talk about testing as if it’s all the same. Often you hear a presentation about a strategy for testing a web application, and people ask questions with the testing of an embedded application in mind. When the context is forgotten, miscommunication is the result.

Model Based Testing

Model Based Testing (MBT) is a term frequently used these days. It sounds good, and just from the words it seems like a sensible thing to do. But is it really worth the hype?

Strictly defined, Model Based Testing means basing your testing on one or more models. This still sounds good, but less so if you consider the following:

  • A model is a simplification of reality. So if your tests are solely based on one or more models, you’re bound to leave some paths untested.
  • If code is also generated from the same model, you’re testing the generation process instead of the product or the model. And even if the code was not generated automatically, but the developer based the code on the same model, it is not likely that many faults will be found.
  • Models must first be created or provided, which takes time and effort.
  • Models can contain faults as well.

But lately the term is also used for tests described by models. The model still describes the system, because every test case does, but the primary purpose of the model is to describe the test case: what path is followed, what actions are to be taken, and what is to be expected. Although I wouldn’t call this Model Based Testing (Test Case Modeling is perhaps a better term), I do see a great benefit in it.

In earlier, pre-Agile times, a long description used to be written for every test case, followed by another description for the next test case that differed only in a few (essential) words. Later, Excel was used to describe test cases with keywords and action words. Test Case Modeling can be the next step in the evolution of describing test cases, visualizing them with the best possible type of model.