Defect Testing

At the completion of each QA Test Cycle, QA Valley analyzes the results and classifies defects based on the criticality scheme established by the Client, User, or Development Team. Defects that result in software crashes, for example, are typically assigned the highest severity.
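As a minimal sketch of this kind of severity classification (the severity levels and trigger keywords below are illustrative assumptions, not QA Valley's actual scheme, which is defined per engagement by the Client or Development Team):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1   # crashes, data loss, no workaround
    MAJOR = 2      # feature broken, workaround exists
    MINOR = 3      # cosmetic or low-impact issue

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: Severity

def classify(summary: str) -> Severity:
    # Naive keyword triage; a real scheme follows the
    # client-defined criticality criteria, not string matching.
    text = summary.lower()
    if "crash" in text or "data loss" in text:
        return Severity.CRITICAL
    if "incorrect" in text or "broken" in text:
        return Severity.MAJOR
    return Severity.MINOR
```

In practice the mapping from symptom to severity is a documented rubric agreed with the client; the point of the sketch is only that every defect leaves the cycle with exactly one severity label.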

In addition to classification by error severity, defects may be segmented by operating system version, database, browser type, hardware device, form factor, and other third-party components. This presents a comprehensive picture of the software's performance and the gap between delivered behavior and user expectation, and in aggregate it can be used to gauge the readiness of the product for general release. Poor performance on selected platforms may imply that the product should initially be released only in those environments that have tested as functionally complete.
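The environment segmentation described above amounts to a simple aggregation over defect records. A hypothetical sketch (the defect IDs and environment labels are invented for illustration):

```python
from collections import defaultdict

# Hypothetical defect records: (defect_id, environment) pairs
defects = [
    ("D-101", "Windows 11 / Chrome"),
    ("D-102", "Windows 11 / Chrome"),
    ("D-103", "macOS 14 / Safari"),
    ("D-104", "Android 14 / mobile form factor"),
]

def segment_by_environment(records):
    """Count defects per environment to expose platform-specific weakness."""
    counts = defaultdict(int)
    for _, env in records:
        counts[env] += 1
    return dict(counts)
```

An environment with a disproportionate defect count is a candidate for exclusion from the initial release scope.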

Quantifiable metrics can be derived by correlating the Defect Statistics with the results from previous Test Cycles. Observations can be made regarding the improvement of the software from one Test Cycle to another, or conversely the quality of the software can be compared across previous Sprints or Builds. Financial decisions can then be made about whether to invest in another build prior to final release: if the quality difference between builds is not significant, another development cycle may yield only minor improvements.
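One simple metric of this kind is the per-severity change in defect counts between two cycles. A sketch, assuming defect statistics are kept as severity-to-count mappings (the counts below are illustrative):

```python
def quality_delta(prev_cycle, curr_cycle):
    """Per-severity change in defect counts between two test cycles.

    Negative values indicate fewer defects in the current cycle,
    i.e. improvement; values near zero across the board suggest
    another build may bring only marginal gains.
    """
    severities = set(prev_cycle) | set(curr_cycle)
    return {s: curr_cycle.get(s, 0) - prev_cycle.get(s, 0)
            for s in severities}
```

For example, comparing `{"critical": 5, "major": 10}` against `{"critical": 2, "major": 9}` shows a strong drop in critical defects but little movement on major ones, which is the kind of signal that feeds the release-versus-another-build decision.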

If it is determined that specific modules, code segments, or data classes are notoriously error prone, Project Managers and QA Managers may trace the affected code back to the original Programmer. This level of Defect Tracing to the root source can provide valuable insight into the performance of the human capital assigned to the project. It may reveal specific weaknesses within the staff, or failures in understanding the user requirements. Addressing these findings can significantly reduce delivery timelines on future projects.
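Identifying error-prone modules is again an aggregation over defect records, this time keyed by the module a defect was traced to. A hypothetical sketch (the module names and attribution data are invented; real attribution would come from the defect tracker and version-control history):

```python
from collections import Counter

# Hypothetical defect-to-module attributions from root-cause analysis
defect_modules = ["billing.py", "billing.py", "auth.py",
                  "billing.py", "report.py"]

def error_prone_modules(modules, top_n=2):
    """Rank modules by traced defect count to surface hotspots
    for code review or requirements re-validation."""
    return Counter(modules).most_common(top_n)
```

Once a hotspot is ranked, mapping it to the responsible programmer or team is a lookup against the repository's change history rather than anything the defect data alone provides.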