
Image comparing tool usage for the UKAD website testing

While testing the UKAD website manually, we found that it has over a hundred pages, so reviewing all of them by hand for each deployment takes too much time. Besides, people tend to make mistakes when doing large amounts of monotonous work. That's why we decided to use aShot, a pixel-by-pixel comparison tool that lets us verify the state of our GUI automatically.

Here is the list of tools we used:

  • Java 7
  • aShot
  • Selenium WebDriver
  • Jenkins
  • Maven
  • TestNG
  • Xenu

First of all, we need to define the exact list of pages to test. For this purpose we use Xenu. This tool simply checks the website URL, collects the list of URLs, and then filters out unnecessary entries (see the screenshot below).

URLs list
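Once Xenu has exported the list, the filtering step can be sketched in plain Java. This is a minimal illustration only: it assumes the URLs come in as a list of strings, and the filter rules below (dropping binary files, mailto links, and in-page anchors) are assumptions for the example, not the exact rules we apply in Xenu.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UrlListFilter {

    // Keep only entries that look like HTML pages. The extension list here
    // is an assumption for illustration, not our exact Xenu filter.
    static List<String> filterUrls(List<String> rawUrls) {
        List<String> pages = new ArrayList<>();
        for (String url : rawUrls) {
            String lower = url.toLowerCase();
            boolean isBinary = lower.endsWith(".png") || lower.endsWith(".jpg")
                    || lower.endsWith(".pdf") || lower.endsWith(".css") || lower.endsWith(".js");
            boolean isMail = lower.startsWith("mailto:");
            boolean isAnchor = lower.contains("#");
            if (!isBinary && !isMail && !isAnchor) {
                pages.add(url);
            }
        }
        return pages;
    }

    public static void main(String[] args) {
        List<String> raw = Arrays.asList(
                "https://ukad-group.com/",
                "https://ukad-group.com/services/",
                "https://ukad-group.com/img/logo.png",
                "mailto:info@ukad-group.com",
                "https://ukad-group.com/about/#team");
        // Only the first two entries survive the filter.
        System.out.println(filterUrls(raw));
    }
}
```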

Now, we have to check the site manually to ensure that our current GUI state has no issues. It's necessary because the reference images will be created based on the current state of the website.

If the current site version has no issues, we are ready to run the first part of the tool, called ‘UkadWebSite_CreateExpectedScreens’. We have a separate Jenkins job on a remote server for this purpose. This part performs the following steps in order:

  • Open each page from the list in the Chrome web browser
  • Take a screenshot of the whole page, from header to footer
  • Save the result on the server as a separate .png file with a specific name that includes the page name, the ‘expected’ label, and the window size

This way we get the reference files for our testing.
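The naming scheme for the saved files can be sketched as a small helper. The article does not show the exact pattern we use, so the format below (page name, label, and window size joined with underscores) is an assumption for illustration:

```java
public class ScreenshotNaming {

    // Build a file name from the page name, a label ("expected" or "actual"),
    // and the browser window size. The exact pattern is an assumption.
    static String fileName(String pageName, String label, int width, int height) {
        // Replace characters that are unsafe in file names (e.g. slashes).
        String safeName = pageName.replaceAll("[^A-Za-z0-9-]", "_");
        return safeName + "_" + label + "_" + width + "x" + height + ".png";
    }

    public static void main(String[] args) {
        System.out.println(fileName("services/web-development", "expected", 1920, 1080));
        // services_web-development_expected_1920x1080.png
    }
}
```

The same helper then serves the second part of the tool, which only swaps the label from ‘expected’ to ‘actual’, so the two files for a page differ by one token and are easy to pair up.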

Later, once the expected images are saved and changes have been made to the site, the second part of the tool comes into play. We have a separate Jenkins job for it, called ‘UkadWebSite_CompareScreens’. This part of the tool performs the following actions:

  • Open each page from the list in the Chrome web browser
  • Take a screenshot of the whole page, from header to footer
  • Save the result on the server as a separate .png file whose name includes the page name, the ‘actual’ label, and the window size
  • Compare the expected and actual files pixel by pixel
  • If some pixels of the reference and current files differ, the tool marks them in red
  • Save the test result as a separate .gif file that combines the expected, actual, and difference .png files; the differing areas are highlighted in red and blink in the gif
  • Send the combined .gif file, which includes the actual, expected, and difference images, to the email address saved in the system
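In our tool the comparison itself is done by aShot's image differ, but the core idea can be shown with the plain JDK. The sketch below compares two same-sized images pixel by pixel and paints every differing pixel red; it is a simplified illustration of the technique, not aShot's actual implementation.

```java
import java.awt.image.BufferedImage;

public class PixelDiff {

    static final int RED = 0xFFFF0000; // opaque red in ARGB

    // Compare two same-sized images pixel by pixel and return a copy of
    // `actual` in which every differing pixel is painted red.
    static BufferedImage markDiff(BufferedImage expected, BufferedImage actual) {
        BufferedImage marked = new BufferedImage(
                actual.getWidth(), actual.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < actual.getHeight(); y++) {
            for (int x = 0; x < actual.getWidth(); x++) {
                int pixel = actual.getRGB(x, y);
                marked.setRGB(x, y, pixel == expected.getRGB(x, y) ? pixel : RED);
            }
        }
        return marked;
    }

    public static void main(String[] args) {
        // Two tiny 2x2 images that differ in exactly one pixel.
        BufferedImage expected = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        BufferedImage actual = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        actual.setRGB(1, 1, 0xFF00FF00); // change one pixel to green
        BufferedImage diff = markDiff(expected, actual);
        System.out.println(Integer.toHexString(diff.getRGB(1, 1))); // ffff0000
        System.out.println(Integer.toHexString(diff.getRGB(0, 0))); // 0
    }
}
```

aShot wraps this idea in its `ImageDiffer`, which also reports whether any difference was found at all, so a page can be marked as failed without inspecting the marked image by eye.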

In the screenshot below we can see the actual, expected, and difference images from left to right. On the rightmost image, the difference between the actual and expected images is highlighted in red.

Screenshots

In this way we reduce the time manual QA spends testing each page of the site. Also, we can schedule our tests to run outside working hours: on weekends, at night, during holidays, and so on. One more thing to keep in mind is that the automated comparison flags every pixel-level difference it finds, which makes this approach far more reliable than manual checks.
