Rob Dixon 2nd Jun 2020

Software development: the importance of quality assurance testing

Whatever type of organisation is implementing computer software, ensuring that the application developed is of the highest quality is of paramount importance. Failure to employ at least the bare minimum of quality assurance (QA) best practice can cost a company its reputation, reduce revenue, delay the delivery of the application and of other projects in the pipeline, and make it a less attractive workplace for potential employees.

Best practice quality analysis requires a thorough understanding of stakeholder business goals to enable a clear comprehension of whether the proposed solution(s) will meet the desired goals. Having an agreed set of requirements with clearly defined acceptance criteria is essential to the delivery of a reliable, efficient, secure and maintainable quality solution – delivered on time and within budget. Any confusion at this stage will hamper the creative process, encourage decision making based on false assumptions, limit the ability to suggest better alternatives to meet the business requirements, and lead to increased costs and delayed delivery.

Distinct stages of the QA process

There are six distinct stages involved in the process of QA testing:

PLAN: Requirements gathering and analysis is the most important step in achieving the successful delivery of a solution. There must be a clear understanding of what conditions must be met and where there may be conflicting requirements from the various stakeholders involved. Fully documenting the analysis will facilitate the planning of what testing will be required and what test scripts will be necessary if automated tests are to be run. What is required? What are the objectives? What should the test plan comprise?

DESIGN: Based on the agreed requirements and acceptance criteria, and in conjunction with the software developers, the next step is to design how those requirements can be tested. This includes determining what types of tests must be run, if any can be automated and what the tests should encompass.

BUILD: The test plan and test cases are written, ensuring that the two align with one another. Development of automated test cases is completed and verified (a minimal example follows these six stages).

TEST: Now to prove that the objectives have been met and that the outcome of the test cases is correct. Any issues/errors should be highlighted and reported.

REVIEW: Any changes that are required as a result of the testing and checks performed should be implemented and re-tested.

LAUNCH: User acceptance testing is completed and the application is deployed to live.
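
To make the BUILD and TEST stages concrete, here is a minimal sketch of what a single automated test case might look like. It is illustrative only: pytest is assumed as the test framework, and calculate_order_total is a hypothetical function standing in for whatever the agreed acceptance criteria actually cover.

    # Minimal automated test case sketch (assumes pytest).
    # calculate_order_total is a hypothetical function used for illustration;
    # in practice each test should map back to an agreed acceptance criterion.
    import pytest

    from myapp.orders import calculate_order_total  # hypothetical module

    def test_total_includes_vat():
        # Acceptance criterion: a 100.00 net order attracts 20% VAT.
        assert calculate_order_total(net=100.00, vat_rate=0.20) == 120.00

    def test_negative_amount_rejected():
        # Acceptance criterion: invalid input is rejected, not silently accepted.
        with pytest.raises(ValueError):
            calculate_order_total(net=-1.00, vat_rate=0.20)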

Automated Testing vs. Manual Testing

You have done all the planning and all the requirements have been analysed. The plan is in place, ready for you to start designing the testing that will be performed. The decisions made affect not just the building of the test plan and its cases, but everything through to launch. So, on what basis do you choose between manual tests, automated tests or a combination of the two? Time and cost are invariably the major driving forces behind which method is used and to what degree. There are also several other key points to consider when deciding which method to use, including:

  • The nature of the application
  • Project requirements
  • Timeline
  • Budget
  • Expertise
  • Suitability

There are pros and cons to each. Test automation software is not cheap, nor are the people required to build the test cases that will be run. In the short term, test case development will require more time. Automation is also heavily dependent on there being clear, concise and complete requirements and specifications; a change to these will require further test case development time and additional cost.

For some applications it is simply impossible to automate any part of their operation. The key factor is determining which method is most relevant to what is being tested and at what stage in the application's development. Ideally, any test plan will include both manual and automated cases, but this will not always be possible. For example, the generation of SQL Server Reporting Services reports can be automated, and test cases written to cover the various data inputs leading to report generation, but the determination of whether the output is correct can only be made manually.
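
As a hedged illustration of that split, report generation can be driven automatically through the SSRS URL-access interface while a tester inspects the saved output by hand. The server name, report path, parameter and credentials below are hypothetical placeholders, and the requests and requests_ntlm Python packages are assumed.

    # Sketch: render an SSRS report to PDF for later manual inspection.
    # Assumes the requests and requests_ntlm packages; the server, report
    # path, parameter and credentials are hypothetical placeholders.
    import requests
    from requests_ntlm import HttpNtlmAuth

    REPORT_URL = (
        "http://reportserver/ReportServer"
        "?/Sales/MonthlySummary"            # hypothetical report path
        "&rs:Command=Render&rs:Format=PDF"
        "&Region=UK"                        # hypothetical report parameter
    )

    response = requests.get(REPORT_URL, auth=HttpNtlmAuth("DOMAIN\\tester", "password"))
    response.raise_for_status()

    # The automated part ends here; whether the rendered report is
    # correct remains a manual judgement.
    with open("monthly_summary.pdf", "wb") as f:
        f.write(response.content)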

Automated Testing

There are scenarios where automation facilitates predictable and reliable testing for:

  • Code version releases
  • Regression tests
  • Load tests
  • Performance tests

Automated tests are hugely beneficial when you want to simulate many users or put load and stress on many aspects of an application (code, database, network). Longer term, automated test cases are also cheaper: once written they are reusable, so they can be run as often as required without additional development cost. They can also run without any user interaction; for example, a test can be triggered automatically whenever a new build of a component of the application is produced.
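
As one illustration, simulating many concurrent users can be expressed in a few lines with a load-testing tool such as Locust (one option among many). The endpoints, host and user behaviour below are hypothetical placeholders.

    # Sketch of a load test using Locust (assumes the locust package).
    # The endpoints and weights below are hypothetical placeholders.
    from locust import HttpUser, task, between

    class TypicalUser(HttpUser):
        # Simulated users pause 1-3 seconds between actions.
        wait_time = between(1, 3)

        @task(3)
        def view_dashboard(self):
            self.client.get("/dashboard")

        @task(1)
        def run_search(self):
            self.client.get("/search", params={"q": "invoice"})

    # Run headless against a hypothetical host, e.g.:
    #   locust -f loadtest.py --host https://app.example.com \
    #          --users 500 --spawn-rate 25 --headless --run-time 10m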

Manual Testing

Manual testing, on the other hand, covers a wide range of circumstances. For instance:

  • Ad hoc testing: an unplanned test method that relies on the tester's knowledge for any results to be valid and to stand up to scrutiny.
  • Exploratory testing: this relies heavily on the testers’ expertise. When specifications are poor and time scales are short, manual skills are essential.
  • Usability testing: checking user-friendliness and convenience.
  • User acceptance testing: determining, with the people who will use the application, whether it fully meets their requirements and is usable.
  • Perceived efficiency of the application: performance tests will provide the metrics for how an application performs, but how does it feel to the user? Data can be returned from a table in seconds but if the form that presents the data is badly written, the user experience is not good.
  • Cross application data validation: testing whether the data shown in one part of the application is being rendered consistently in other parts of the application.
  • Cross report data validation: testing whether common calculated data is rendered consistently across multiple reports.
  • Document generation validity: testing whether data merged into a document is correctly represented, and whether the correct template has been used for the language applicable to the document.

Irrespective of what is being developed, for whom, or for what purpose, the results of testing must be reliable. Requirements must of course be satisfied and the user experience positive, but the likelihood of poor performance or failure from whatever cause must be minimised and mitigated. For example, if invalid data manages to get into a dataset, can the solution handle this in a way that will not cause the application to stall and will allow swift and easy recovery? If the data has the potential to contain control characters through text being pasted into form fields, or punctuation marks used in a name, for example, this has to be tested and measures taken to mitigate what a user could do in the real world.
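
A sketch of that kind of defensive test is shown below, assuming pytest; sanitise_name is a hypothetical validation helper standing in for whatever the application under test actually uses.

    # Sketch: checking that real-world input (pasted control characters,
    # punctuation in names) is handled safely. Assumes pytest;
    # sanitise_name is a hypothetical helper in the application under test.
    import pytest

    from myapp.validation import sanitise_name  # hypothetical module

    @pytest.mark.parametrize("raw, expected", [
        ("O'Brien", "O'Brien"),          # legitimate punctuation survives
        ("Smith-Jones", "Smith-Jones"),
        ("Anna\x00Lee", "AnnaLee"),      # pasted NUL control character stripped
        ("Bob\tMarley", "Bob Marley"),   # tab normalised to a space
    ])
    def test_sanitise_name_handles_real_world_input(raw, expected):
        assert sanitise_name(raw) == expected

    def test_empty_input_rejected_cleanly():
        # The application must fail fast, not stall on bad data.
        with pytest.raises(ValueError):
            sanitise_name("")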

Bug Reporting

Whatever method of testing is employed, good bug reporting will help make testing and resolution more efficient. There are many applications available on the market; some integrate with testing software, some are standalone, and some very good ones are even open source.

Clear and concise bug reporting helps stakeholders, testers and development engineers to clearly navigate their way to a solution. To this end, whatever application is used (Atlassian JIRA, Selenium, TestLink, Excel or OneNote to name but five) it should facilitate:

  1. Communication: to form a cornerstone between stakeholders, QA testers and developers.
  2. Concise bug description: so that the nature of the issue can be determined more quickly; this is also useful when searching for bugs with a similar description.
  3. Clear reporting: each bug report should address only a single issue. It is essential that the number of individual issues found, and the duration until their resolution, are clearly and quickly quantifiable.
  4. Issue reproduction: the steps to reproduce the issue need to be fully documented, and the context of the issue should be highlighted to avoid any confusion.
  5. Screenshots / screencasts: including screenshots, or even screencasts, to show how the issue manifests simplifies the developer's work in finding a fix.
  6. Postulate solutions: allow potential solutions to the issue raised to be documented, including the steps required.
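
Most bug trackers expose these same fields programmatically. As an illustration only, the sketch below raises a bug through the Jira Cloud REST API; the instance URL, credentials and project key are hypothetical placeholders.

    # Sketch: raising a bug via the Jira Cloud REST API (assumes the
    # requests package; URL, credentials and project key are hypothetical).
    import requests

    payload = {
        "fields": {
            "project": {"key": "QA"},      # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": "Report totals differ between dashboard and PDF export",
            "description": (
                "Steps to reproduce:\n"
                "1. Open the dashboard for March.\n"
                "2. Export the same period to PDF.\n"
                "Expected: totals match. Actual: PDF total is lower.\n"
                "Screenshot attached separately."
            ),
        }
    }

    resp = requests.post(
        "https://example.atlassian.net/rest/api/2/issue",  # hypothetical instance
        json=payload,
        auth=("qa.tester@example.com", "api-token"),       # hypothetical credentials
    )
    resp.raise_for_status()
    print("Created", resp.json()["key"])                   # e.g. QA-123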


Conclusion

Both automated and manual testing methods have their merits and disadvantages. Both are good in specific scenarios and less so in others. Neither is best at everything. What is certain is that every project, individual application, or even minor update needs to be thoroughly tested to ensure it meets all of the aspects covered in this blog.
