Software development: the importance of quality assurance testing
Whatever type of organisation is implementing computer software, ensuring that the application developed is of the highest quality is of paramount importance. Failure to employ at least the bare minimum of quality assurance (QA) best practice can cost a company its reputation, reduce revenue, delay delivery of the application and of other projects in the pipeline, and make it a less attractive workplace for potential employees.
Best practice quality assurance requires a thorough understanding of stakeholder business goals, so that it is clear whether the proposed solution(s) will meet them. Having an agreed set of requirements with clearly defined acceptance criteria is essential to the delivery of a reliable, efficient, secure and maintainable quality solution, delivered on time and within budget. Any confusion at this stage will hamper the creative process, encourage decision-making based on false assumptions, limit the ability to suggest better alternatives to meet the business requirements, and lead to increased costs and delayed delivery.
Distinct stages of the QA process
There are six distinct stages involved in the QA testing process:
PLAN: Requirements gathering and analysis is the most important step in achieving the successful delivery of a solution. There must be a clear understanding of what conditions must be met and where there may be conflicting requirements from the various stakeholders involved. Fully documenting the analysis makes it possible to plan what testing will be required and what test scripts will be necessary if automated tests are to be run. What is required? What are the objectives? What should the test plan comprise?
DESIGN: Based on the agreed requirements and acceptance criteria, and in conjunction with the software developers, the next step is to design how those requirements can be tested. This includes determining what types of tests must be run, whether any can be automated and what the tests should encompass.
BUILD: The test plan and cases are written, ensuring that both align with one another. Development work of automated test cases is completed and verified.
TEST: Now to prove that the objectives have been met and that the outcome of the test cases is correct. Any issues/errors should be highlighted and reported.
REVIEW: Any changes that are required as a result of the testing and checks performed should be implemented and re-tested.
LAUNCH: User acceptance testing is completed and the application is deployed to live.
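To make the BUILD and TEST stages concrete, here is a minimal sketch assuming a hypothetical acceptance criterion ("an order total must include VAT at 20%"); the function under test is invented purely for illustration:

```python
# A sketch of the BUILD/TEST stages, assuming a hypothetical acceptance
# criterion: "an order total must include VAT at 20%".
# calculate_order_total is illustrative, not a real API.
import pytest

VAT_RATE = 0.20

def calculate_order_total(net_amount: float) -> float:
    """Illustrative function under test."""
    return round(net_amount * (1 + VAT_RATE), 2)

# Each case maps back to an agreed acceptance criterion, so a failure
# points directly at the requirement that is not being met.
@pytest.mark.parametrize("net, expected", [
    (100.00, 120.00),  # simple whole amount
    (0.00, 0.00),      # boundary: empty order
    (19.99, 23.99),    # rounding behaviour
])
def test_order_total_includes_vat(net, expected):
    assert calculate_order_total(net) == expected
```

Writing the cases this way keeps the test plan and the test cases aligned: if a requirement changes, the affected cases are easy to find and update.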
Automated Testing vs. Manual Testing
You have done all the planning and all the requirements have been analysed. The plan is in place, ready for you to start designing the testing that will be performed. The decisions made affect not just the building of the test plan and its cases, but everything through to launch. So, on what basis do you choose between manual tests, automated tests or a combination of the two? Time and cost are invariably the major driving forces behind which method is used and to what degree. There are also several other key points to consider when deciding which method to use. These include:
The nature of the application
Project requirements
Timeline
Budget
Expertise
Suitability
There are pros and cons to each. Test automation software is not cheap, nor are the people required to build the test cases that will be run. In the short term, test case development will require more time, and it depends heavily on there being clear, concise and complete requirements and specifications. Any change to these will require yet more time for test case development, with more associated cost.
For some applications it is simply impossible to automate any part of their operation. The key factor is determining which method is most relevant to what is being tested and at what stage in the application's development. Ideally, any test plan will include both manual and automated cases, but this will not always be the case. For example, the creation of SQL Server Reporting Services reports can be automated, and test cases written to cover the various data inputs leading to report generation, but the determination of whether the output is correct can only be made manually.
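As a sketch of that mixed approach, the hypothetical test below exercises report generation automatically across a set of inputs but defers judgement on the output to a human reviewer; render_report is a placeholder invented here, not a real reporting API:

```python
# Hypothetical mixed test: generation is automated, correctness of the
# rendered output is judged manually. render_report is a placeholder,
# not a real SSRS API call.
from pathlib import Path

REVIEW_DIR = Path("reports_for_manual_review")

def render_report(region: str, year: int) -> bytes:
    """Placeholder: a real implementation would call the report server."""
    return b"%PDF-1.7 placeholder"

def test_sales_report_generates(region: str = "North", year: int = 2023):
    pdf = render_report(region, year)
    # Automated check: a report was produced at all.
    assert pdf, "report generation returned no output"
    # Manual step: a tester reviews the saved output for correctness.
    REVIEW_DIR.mkdir(exist_ok=True)
    (REVIEW_DIR / f"sales_{region}_{year}.pdf").write_bytes(pdf)
```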
Automated Testing
There are scenarios where automation facilitates predictable and reliable testing for:
Code version releases
Regression tests
Load tests
Performance tests
Automated tests are hugely beneficial when you want to simulate many users, or to put load and stress on many aspects of an application (code, database, network). Longer term, automated test cases are cheaper: once written they are reusable, so they can be run as often as required without additional development cost. They can also be made to run without any user interaction, for example whenever a new build of a component of the application is produced.
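As a minimal illustration of simulating many users without any human interaction, the sketch below fires a batch of concurrent requests at a hypothetical endpoint and summarises response times. A real project would typically use a dedicated load-testing tool, but the principle of unattended, repeatable load generation is the same:

```python
# Minimal load-simulation sketch: 50 concurrent "users" hit a
# hypothetical endpoint and response times are summarised.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8080/health"  # illustrative URL
USERS = 50

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(one_request, range(USERS)))
    print(f"{len(timings)} requests, "
          f"avg {sum(timings) / len(timings):.3f}s, "
          f"max {max(timings):.3f}s")
```

A script like this can be triggered automatically whenever a new build is produced, giving a repeatable performance baseline at no extra development cost.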
Manual Testing
Manual testing on the other hand covers a wide range of circumstances. For instance:
Ad hoc testing: an unplanned test method that relies on the tester's knowledge if its results are to be valid and to stand up to scrutiny.
Exploratory testing: this relies heavily on the testers’ expertise. When specifications are poor and time scales are short, manual skills are essential.
Usability testing: checking user-friendliness and convenience.
User acceptance testing: determining, with the people who will use the application, whether it fully meets their requirements and is usable.
Perceived efficiency of the application: performance tests will provide the metrics for how an application performs, but how does it feel to the user? Data can be returned from a table in seconds, but if the form that presents the data is badly written, the user experience is poor.
Cross application data validation: testing whether the data shown in one part of the application is being rendered consistently in other parts of the application.
Cross report data validation: testing whether common calculated data is rendered consistently across multiple reports.
Document generation validity: testing whether data merged into a document is correctly represented and whether the correct template has been used for the language applicable to the document.
Irrespective of what is being developed, for whom, or for what purpose, the result of testing must be reliable. Requirements must of course be satisfied and the user experience positive, but the likelihood of poor performance or failure from whatever cause must be minimised and mitigated. For example, if invalid data manages to get into a dataset, can the solution handle this in a way which will not cause the application to stall and will allow swift and easy recovery? If the data has the potential to contain control characters through the pasting of text into form fields, or punctuation marks used in a name for example, this has to be tested and measures taken to mitigate what a user could do in the real world.
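A sketch of how such real-world input might be tested, using a hypothetical clean_name sanitiser invented here purely so the cases have something to run against:

```python
# Illustrative tests for real-world input: control characters pasted
# into a form field and punctuation in names. clean_name is a
# hypothetical sanitiser, written here only so the cases can run.
import pytest

def clean_name(raw: str) -> str:
    """Strip non-printable characters, then trim surrounding whitespace."""
    return "".join(ch for ch in raw if ch.isprintable()).strip()

@pytest.mark.parametrize("raw, expected", [
    ("O'Brien", "O'Brien"),        # apostrophes must survive
    ("Anne-Marie", "Anne-Marie"),  # hyphens must survive
    ("Smith\x00\x07", "Smith"),    # pasted control characters stripped
    ("  Jones\t", "Jones"),        # stray whitespace trimmed
])
def test_clean_name_handles_real_world_input(raw, expected):
    assert clean_name(raw) == expected
```

Note that the cases cover both directions: hostile input must be neutralised, but legitimate punctuation in real names must not be damaged in the process.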
Bug Reporting
Whatever method of testing is employed, good bug reporting will help make testing and resolution more efficient. There are many applications available on the market: some integrate with testing software, some are standalone, and some very good ones are even open source.
Clear and concise bug reporting helps stakeholders, testers and development engineers to clearly navigate their way to a solution. To this end, whatever application is used (Atlassian JIRA, Selenium, TestLink, Excel or OneNote to name but five) it should facilitate:
1. Communication: to form a cornerstone between stakeholders, QA testers and developers.
2. Concise bug description: so that the nature of the issue can be determined more quickly; a concise description is also useful when searching for bugs with a similar description.
3. Clear reporting: each bug report should address only a single issue, so that the number of individual issues found, and their status through to resolution, are clearly and quickly quantifiable.
4. Issue reproduction: the steps to reproduce the issue must be fully documented, with the context of the issue highlighted to avoid any confusion.
5. Screenshots/screencasts: including screenshots, or even screencasts, showing how the issue manifests simplifies the developer's work in finding a fix.
6. Postulated solutions: potential solutions to the issue raised can be documented, including the steps required.
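As an illustration only, the six points map naturally onto a simple record; the field names below are invented for this sketch, and any tracker (JIRA, TestLink or even a spreadsheet) can capture the same information:

```python
# Illustration only: the six points above mapped onto a simple record.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str                     # 2: concise bug description
    steps_to_reproduce: list[str]  # 4: full reproduction steps
    expected: str
    actual: str
    attachments: list[str] = field(default_factory=list)  # 5: screenshots
    proposed_fix: str = ""         # 6: postulated solution

# One report per issue (3: clear reporting); the shared record itself
# is the communication cornerstone (1).
report = BugReport(
    title="Basket total shows -0.20 when the basket is empty",
    steps_to_reproduce=[
        "Log in as a standard user",
        "Open the basket without adding items",
        "Observe the displayed total",
    ],
    expected="Total displayed as 0.00",
    actual="Total displayed as -0.20",
    attachments=["empty_basket_total.png"],
)
```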
Conclusion
Both automated and manual testing methods have their merits and disadvantages. Both are good in specific scenarios and less so in others. Neither is best at everything. What is certain is that every project, individual application, or even minor update, needs to be thoroughly tested to ensure it meets all of the aspects covered in this blog.