Verification is the process of checking a software product. This section discusses the place of verification among the software development processes and the main methods of software verification.

Verification and validation refers to the checking and review processes that establish that software conforms to its specification and to the customer's requirements. Verification and validation cover the full software life cycle: they begin at the requirements analysis stage and end with the verification of the program code during testing of the finished software system.

Verification and validation are not the same thing, although they are easy to confuse. Briefly, the difference between them can be defined as follows:

Verification answers the question of whether the system is properly designed;

Validation answers the question of whether the system works correctly.

According to these definitions, verification checks that the software conforms to the system specification, in particular to the functional and non-functional requirements. Validation is a more general process: it must ensure that the software product meets the customer's expectations. Validation is carried out after verification in order to determine how well the system meets not only the specification but also the customer's expectations.

As noted earlier, validation of system requirements is very important in the early stages of software development. Requirements often contain errors and omissions; in such cases the final product will probably not meet the customer's expectations. Of course, requirements validation cannot reveal all the problems in the requirements specification; sometimes gaps and errors in the requirements are discovered only after the implementation of the system is complete.

The verification and validation processes use two main techniques for checking and analyzing systems.

1. Software inspection: the analysis and checking of various representations of the system, such as the requirements specification documentation, architectural diagrams, or the source code of programs. Inspection can be performed at all stages of the software system development process. In parallel with inspection, automatic analysis of the source code and of related documents may be carried out. Inspection and automated analysis are static methods of verification and validation, because they do not require an executable system.

2. Software testing: running the executable code on test data and examining the outputs and the performance of the software product in order to check that the system operates correctly. Testing is a dynamic method of verification and validation, because it is applied to a running system.

Figure 20.1 shows the place of inspection and testing in the software development process. The arrows indicate the stages of development at which these methods can be applied. According to this scheme, inspection can be performed at all stages of the system development process, while testing is possible only once a prototype or an executable program has been created.

Inspection methods include program inspection, automatic source-code analysis, and formal verification. However, static methods can only check the conformance of programs to their specification; they cannot demonstrate that the system functions correctly in operation. Moreover, non-functional characteristics such as performance and reliability cannot be checked by static methods. Therefore, system testing is carried out to evaluate the non-functional characteristics.
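
As a small illustration of automatic source-code analysis (a toy sketch, not a real analyzer): without executing the program, one can scan its syntax tree for suspicious patterns, such as comparing an expression with itself. The source fragment being analyzed is invented for the example:

```python
import ast

SOURCE = """
def f(x):
    if x == x:      # suspicious: always true
        return 1
"""

# Walk the syntax tree and flag comparisons of an expression with itself,
# without ever running the program (a static check).
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Compare):
        left = ast.dump(node.left)
        if any(ast.dump(c) == left for c in node.comparators):
            print(f"line {node.lineno}: comparison of an expression with itself")
```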

Fig. 20.1. Static and dynamic verification and validation

Despite the widespread use of software inspection, testing is still the predominant method of verification and validation. Testing checks the operation of programs on data similar to the real data that will be processed during the operation of the system. The presence of defects and of inconsistencies with the requirements is detected by examining the output data and identifying anomalies among them. Testing is performed during the implementation phase of the system (to verify that the system meets the developers' expectations) and after its implementation is complete.

Different types of testing are used at different stages of the software development process.

1. Defect testing is conducted to detect inconsistencies between a program and its specification that are caused by errors or defects in the program. Such tests are designed to reveal errors in the system, not to simulate its operation (see the first sketch after this list).

2. Statistical testing evaluates the performance and reliability of programs, as well as the operation of the system in various operating modes. Tests are designed to mimic the actual operation of the system with real input data. The reliability of the system is estimated from the number of failures observed in the operation of the programs; performance is evaluated by measuring the total execution time and the system response time when processing the test data.
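
For illustration, a defect test deliberately probes the places where program and specification most often diverge, such as boundary values. The function and the test below are invented for the sketch:

```python
import unittest

def in_range(x, low, high):
    """Intended behaviour: return True iff low <= x <= high.
    Deliberate defect: the upper boundary is excluded."""
    return low <= x < high  # bug: should be x <= high

class DefectTests(unittest.TestCase):
    def test_boundaries(self):
        self.assertTrue(in_range(0, 0, 10))   # lower boundary: passes
        self.assertTrue(in_range(10, 0, 10))  # upper boundary: fails, exposing the defect

if __name__ == "__main__":
    unittest.main()
```

A statistical test, by contrast, drives the system with an input profile resembling real operation and measures failures and timing. In the sketch below the system under test is simulated, so the numbers mean nothing beyond demonstrating the mechanics:

```python
import random
import time

def process(request):
    """Stand-in for the system under test (simulated)."""
    time.sleep(random.uniform(0.001, 0.003))  # simulated processing time
    if random.random() < 0.01:                # simulated 1% failure rate
        raise RuntimeError("internal failure")
    return request * 2

N, failures = 1000, 0
start = time.perf_counter()
for _ in range(N):
    try:
        process(random.randint(1, 100))  # inputs drawn from an assumed usage profile
    except RuntimeError:
        failures += 1
elapsed = time.perf_counter() - start

print(f"observed failure rate: {failures / N:.2%}")
print(f"mean response time: {elapsed / N * 1000:.2f} ms")
```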

The main objective of verification and validation is to make sure that the system is "fit for purpose". The suitability of a software system for its intended purpose does not imply that it must be absolutely free of errors; rather, the system must be reasonably well suited to the purposes for which it was intended. The required level of confidence depends on the purpose of the system, the expectations of its users, and the market conditions for software products.

1. Purpose of software. The required level of confidence depends on how critical the developed software is according to certain criteria. For example, the level of confidence for safety-critical systems should be significantly higher than that for prototype systems developed to demonstrate a new idea.

2. User expectations. Sadly, it must be noted that most users currently have low expectations of software. They are so accustomed to failures during program operation that they are no longer surprised by them, and they are willing to tolerate system failures if the benefits of using the system outweigh the drawbacks. However, since the early 1990s users' tolerance of failures in software systems has been steadily declining, and building unreliable systems has lately become almost unacceptable. Software development companies therefore need to pay ever more attention to verification and validation.

3. Software market conditions. When evaluating a software system, the vendor must know the competing systems, the price the buyer is willing to pay for the system, and the planned time-to-market. If the development company has several competitors, the date of market entry may have to be set before testing and debugging are fully complete, otherwise competitors may reach the market first. If customers are unwilling to buy software at a high price, they may be willing to tolerate more system failures. All these factors must be taken into account when determining the costs of the verification and validation process.

As a rule, errors are found in the system during verification and validation, and changes are made to the system to correct them. This debugging process is usually integrated with the other verification and validation processes. Nevertheless, testing (or, more generally, verification and validation) and debugging are different processes with different goals:

1. Verification and validation is the process of detecting defects in a software system.

2. Debugging is the process of localizing defects (errors) and fixing them (Fig. 20.2).

Fig. 20.2. The debugging process

There are no simple methods for debugging programs. Experienced debuggers find bugs by comparing the patterns in test output with the output of the systems under test. Locating an error requires knowledge of error types, of output patterns, of the programming language, and of the programming process; knowledge of the software development process is very important. Debuggers know the most common programming errors (such as forgetting to increment a counter) and also take into account errors typical of particular programming languages, for example those associated with the use of pointers in C.

Locating bugs in program code is not always simple, because a bug is not necessarily located near the place in the code where the failure occurred. To isolate a bug, the debugger may develop additional tests that help identify its source in the program; manual tracing of the program's execution may also be needed.

Interactive debugging tools are part of the set of language support tools integrated with the code compilation system. They provide a special program execution environment through which one can access the symbol table and, from there, the values of variables. Users often control the execution of a program step by step, moving from statement to statement; after each statement, the values of variables are examined and possible errors are identified.
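
A minimal sketch of such statement-by-statement inspection using Python's standard pdb debugger (the function is illustrative). At the breakpoint, typical commands are n (next statement), s (step into a call), p total (print a variable's value), and c (continue):

```python
def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()  # drops into pdb here; inspect `total` and `values`
    return total / len(values)

if __name__ == "__main__":
    print(average([2, 4, 6]))  # expected output: 4.0
```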

The error found in the program is corrected, after which the program must be checked again. To do this, one can re-inspect the program or repeat the previous tests. Retesting is used to make sure that the changes made to the program have not introduced new bugs into the system, since in practice a high percentage of "bug fixes" are either incomplete or introduce new errors.

In principle, all tests should be run again after each fix, but in practice this approach is too expensive. Therefore, when planning the testing process, dependencies between parts of the system are determined and tests are assigned to each part. Program elements can then be traced using special test cases (control data) selected for those elements. If the trace results are documented, only a subset of the whole set of test data is needed to check a changed program element and the components that depend on it (a sketch of this selection follows).
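
The selection idea can be sketched as follows; the part names, test names, and dependency map are all illustrative:

```python
# part -> parts that depend on it
DEPENDS_ON = {
    "parser": ["analyzer", "report"],
    "analyzer": ["report"],
    "report": [],
}
# part -> its assigned tests
TESTS = {
    "parser": ["test_tokens", "test_syntax"],
    "analyzer": ["test_types"],
    "report": ["test_output"],
}

def affected(changed):
    """Transitive closure of parts affected by a change."""
    seen, stack = set(), [changed]
    while stack:
        part = stack.pop()
        if part not in seen:
            seen.add(part)
            stack.extend(DEPENDS_ON.get(part, []))
    return seen

def tests_to_rerun(changed):
    return sorted(t for part in affected(changed) for t in TESTS[part])

print(tests_to_rerun("parser"))
# ['test_output', 'test_syntax', 'test_tokens', 'test_types']
```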

Planning for verification and validation

Verification and validation is an expensive process. For large systems, such as real-time systems with complex non-functional constraints, half of the system development budget is spent on the verification and validation process. Therefore, the need for careful planning of this process is obvious.

Planning for verification and validation, as part of the development of software systems, should start as early as possible. Figure 20.3 shows a software development model that takes the test planning process into account; here, planning begins as early as the specification and design stages. This model is sometimes referred to as the V-model (to see the V, rotate Fig. 20.3 by 90°). The diagram also shows the division of the verification and validation process into several stages, with the corresponding tests performed at each stage.

Fig. 20.3. Test planning during development and testing

When planning verification and validation, it is necessary to determine the balance between static and dynamic methods of checking the system, define standards and procedures for software inspection and testing, approve the process chart for software reviews (see Section 19.2), and develop a software test plan. Whether inspection or testing matters more depends on the type of system being developed and on the experience of the organization. The more critical the system, the more attention should be paid to static verification methods.

The verification and validation plan focuses on the standards for the testing process rather than on descriptions of specific tests. The plan is intended not only for management; it is mainly addressed to the professionals who develop and test the system. It lets technical staff see the complete picture of system testing and plan their own work in this context. The plan also informs the managers responsible for ensuring that the testing team has all the necessary hardware and software.

The test plan is not a fixed document. It should be revised regularly, since testing depends on the progress of the system's implementation. For example, if the implementation of some part of the system is unfinished, the system build cannot be tested. The plan must therefore be reviewed periodically, so that testing staff can be redeployed to other work.

Saint Petersburg State Electrotechnical University

Department of MOEM

Report on the discipline "Software Development Process"

"Software Verification"

Saint Petersburg

    Purpose of verification

    Introductory remarks

    Specific and generic goals

    Expected practices by goal

SG1 Preparing for verification

SG2 Conducting peer reviews (expert assessments)

SG3 Implementing verification

    Appendix 1. Overview of automation tools for the verification process

    Appendix 2. Main modern approaches to verification

    References

Capability Maturity Model Integration (CMMI)

Verification (Maturity Level 3)

    Purpose of verification

The purpose of verification is to ensure that selected intermediate work products and the end product meet their specified requirements.

  1. Introductory remarks

Verification of software products means checking that the finished product or its intermediate versions meet the original requirements. This involves not only testing the program itself, but also auditing the design documentation, the user and technical documentation, and so on.

The purpose of software system verification is to identify and report errors that may have been introduced at any life-cycle stage. The main tasks of verification (one of which is sketched after the list) are:

    determining that the high-level requirements comply with the system requirements;

    determining that the high-level requirements are taken into account in the system architecture;

    determining that the source code conforms to the architecture and to the requirements imposed on it;

    determining that the executable code complies with the system requirements;

    determining that the means used to solve the above tasks are technically correct and sufficiently complete.
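
One of these tasks, tracing requirements through to the code, can be sketched mechanically. The sketch below is illustrative only; the requirement identifiers, module names, and trace structure are invented:

```python
# A traceability check: every high-level requirement (HLR) should be
# linked to system requirements and to implementing modules.
TRACE = {
    "HLR-1": {"system_reqs": ["SR-1", "SR-2"], "modules": ["auth.c"]},
    "HLR-2": {"system_reqs": ["SR-3"], "modules": []},  # gap: no code traced yet
}

for req, links in TRACE.items():
    if not links["system_reqs"]:
        print(f"{req}: not reflected in the system requirements")
    if not links["modules"]:
        print(f"{req}: not traced to source code")
```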

Verification covers checking the finished product and checking intermediate products against all selected requirements, including customer requirements, requirements for the finished product, and requirements for its individual components.

Verification is inherently an incremental process that accompanies the entire development of the product and all work on it. It begins with verification of the requirements, continues with verification of intermediate products at the various stages of their development and manufacture, and ends with verification of the finished product.

Verification of intermediate products at each stage of their development and manufacture significantly increases the likelihood that the final product will meet the requirements of the customer, the requirements for the finished product and the requirements for its individual components.

Verification and validation are closely related processes, aimed, however, at different results. The purpose of validation is to demonstrate that the finished product actually fulfils its original purpose; verification aims to make sure that the product exactly meets specified requirements. In other words, verification ensures that "you are doing it right", while validation ensures that "you are doing the right thing".

Verification should be introduced as early as possible in the relevant processes (such as supply, development, operation, or maintenance) in order to evaluate cost effectiveness and performance. This process may include analysis, review, and testing.

This process can be performed with varying degrees of independence. Responsibility may be distributed among different parties within one organization or among parties in different organizations. If the performing organization is independent of the supplier, developer, operator, and maintainer, the process is called independent verification.

Peer reviews (expert assessments) are an important part of verification and a well-proven means of effective defect removal. They also develop a deeper understanding of the working versions of the product and of the work processes, helping to identify possible defects and to create opportunities for improvement where necessary.

A peer review is a methodical examination of completed work by the performer's peers, carried out to identify defects and other required changes.

The main peer-review methods are:

    inspection

    structured walkthrough

As is well known, general-purpose computers can be programmed to solve the most diverse problems; this is one of their main features and is of great practical value. One and the same computer, depending on the program in its memory, can perform arithmetic calculations, prove theorems, edit text, manage the course of an experiment, design the car of the future, play chess, or teach a foreign language. However, the successful solution of all these and many other problems is possible only if the programs contain no errors that could lead to incorrect results.

It can be said that the requirement for the absence of errors in the software is quite natural and does not need to be substantiated. But how can you be sure that there are no errors? The question is not as simple as it might seem at first glance.

Informal methods of establishing program correctness include debugging and testing, which are a necessary component of all stages of the programming process, although they do not completely solve the problem of correctness. Significant errors are easy to find with appropriate debugging techniques (debug printouts, traces).

Testing is the process of executing a program with the intention of finding errors rather than confirming that the program is correct. Its essence is as follows: the program under test is run repeatedly on inputs for which the result is known in advance, and the result produced by the machine is compared with the expected one. If the results coincide in all test cases, there is some confidence that subsequent computations will not produce an erroneous result, i.e., that the original program works correctly.
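
In its simplest form, the comparison loop just described is table-driven. The function under test and the test cases below are illustrative:

```python
import math

# The program under test: integer square root via floating point.
def int_sqrt(n):
    return int(math.sqrt(n))

# Inputs paired with results known in advance.
CASES = [(0, 0), (1, 1), (15, 3), (16, 4), (10**14, 10**7)]

for n, expected in CASES:
    actual = int_sqrt(n)
    status = "ok" if actual == expected else f"MISMATCH: got {actual}"
    print(f"int_sqrt({n}) expected {expected}: {status}")
```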

We have already discussed the notion of program correctness in terms of the absence of errors. Intuitively, a program is correct if executing it achieves the result for which the program was written. By itself, the fact that a program terminated without crashing says nothing: it is quite possible that the program actually does something completely different from what was intended. Errors of this kind can occur for various reasons.

In what follows, we assume that the programs under discussion contain no syntax errors, so in establishing their correctness attention is paid only to the substantive side of the matter: whether the specific goal is achieved with the help of the given program. The goal can be regarded as the problem to be solved, and the program as a way of solving it. A program is correct if it solves the given problem.

The method of establishing the correctness of programs by rigorous means is known as program verification.

Unlike program testing, which analyzes the properties of individual executions of a program, verification deals with properties of the program itself, over all possible executions.

The verification method is based on the assumption that program documentation exists, conformance to which is to be proved. The documentation must contain the following (a small illustration follows the list):

input-output specification (description of data that does not depend on the processing process);

properties of relations between elements of state vectors at selected points of the program;

specifications and properties of structural subcomponents of the program;

specification of data structures depending on the processing process.
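
For instance (a standard textbook example, assumed here for illustration), an input-output specification for integer division can be written as a pair of assertions around a program fragment:

```latex
% input assertion (precondition) and output assertion (postcondition)
% for a fragment computing the quotient q and remainder r of x by y
\{\, x \ge 0 \,\wedge\, y > 0 \,\}\;\;
q := x \,\operatorname{div}\, y;\; r := x \bmod y\;\;
\{\, x = q \cdot y + r \,\wedge\, 0 \le r < y \,\}
```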

This method of proving the correctness of programs is the method of inductive assertions, formulated independently by R. Floyd and P. Naur.

The essence of this method is as follows:

1) input and output assertions are formulated: the input assertion describes all necessary conditions on the input data of the program (or program fragment), and the output assertion describes the expected result;

2) assuming the input assertion to be true, an intermediate assertion is constructed, derived from the semantics of the statements located between the input and output assertions; such an assertion is called a derived assertion;

3) a theorem (the verification condition) is formulated:

the output assertion follows from the derived assertion;

4) the theorem is proved; the proof establishes the correctness of the program (or program fragment).
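
A minimal worked instance of these four steps for a single assignment (an illustrative example, not from the source): take the input assertion x >= 1, the fragment y := x + 1, and the output assertion y >= 2.

```latex
% 1) input assertion: x \ge 1; output assertion: y \ge 2
% 2) derived assertion after the assignment y := x + 1:
\{\, x \ge 1 \,\wedge\, y = x + 1 \,\}
% 3) verification condition (the output assertion follows from the derived one):
x \ge 1 \,\wedge\, y = x + 1 \;\Rightarrow\; y \ge 2
% 4) proof: substituting y = x + 1 into y \ge 2 gives x \ge 1, which holds.
```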

The proof is carried out with well-developed mathematical methods, using the first-order predicate calculus.

Verification conditions can also be constructed in the opposite direction, i.e., assuming the output assertion to be true, obtain the corresponding input assertion and prove the theorem:

the derived assertion follows from the given input assertion.

This way of constructing verification conditions simulates execution of the program in reverse. In other words, the verification condition must answer the question: if some assertion is true after the execution of a program statement, what assertion must be true before it?
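
For example (an illustrative case), if the assertion x > 0 must be true after the statement x := x + 1, substituting x + 1 for x in it gives the assertion that must be true before the statement:

```latex
% backward construction: wp(x := x + 1,\; x > 0) = (x + 1 > 0) \equiv (x > -1)
\{\, x > -1 \,\}\;\; x := x + 1 \;\;\{\, x > 0 \,\}
```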

The construction of inductive assertions helps formalize one's intuition about the logic of the program. It is the most difficult step in proving program correctness. This is explained, first, by the need to describe all the substantive conditions and, second, by the need for an axiomatic description of the semantics of the programming language.

An important step in the proof process is the proof of program termination, for which some informal reasoning is sufficient.

Thus, the algorithm for proving the correctness of a program by the method of inductive assertions takes the following form (a worked example follows the list):

1) Build the structure of the program.

2) Write out the input and output assertions.

3) Formulate inductive assertions for all loops.

4) Make a list of the selected paths.

5) Construct the verification conditions.

6) Prove the verification conditions.

7) Prove that execution of the program terminates.
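
As a compact end-to-end illustration of steps 1-7 (a standard textbook example, assumed here), consider a loop summing the numbers 1..n:

```latex
% program: s := 0;\; i := 0;\; \textbf{while}\; i < n\; \textbf{do}\; i := i + 1;\; s := s + i\; \textbf{od}
% input assertion: n \ge 0; \quad output assertion: s = \sum_{k=1}^{n} k
% inductive assertion (loop invariant) at the loop head:
I \;\equiv\; \Bigl(s = \sum_{k=1}^{i} k\Bigr) \,\wedge\, 0 \le i \le n
% verification conditions:
%   entry:        n \ge 0 \;\Rightarrow\; I[\,0/i,\; 0/s\,]
%   preservation: I \,\wedge\, i < n \;\Rightarrow\; I[\,i+1/i,\; s+i+1/s\,]
%   exit:         I \,\wedge\, i \ge n \;\Rightarrow\; s = \sum_{k=1}^{n} k
% termination: the variant n - i decreases at each iteration and is bounded below by 0.
```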

This method is comparable to the usual process of reading program text (a walkthrough); the difference lies in the degree of formalization.

The advantage of verification is that the proof process is formalized to the point where it can be performed by computer. Research in this direction was carried out in the 1980s, and interactive automated systems were even created, but they did not find practical application.

In such an interactive system, the programmer must state the inductive assertions in the language of the predicate calculus. The syntax and semantics of the programming language are stored in the system as axioms in the predicate-calculus language. The system itself determines the paths through the program and builds the verification conditions.

The main component of such a proving system is the verification-condition builder, which provides operations for manipulating predicates and algorithms for interpreting program statements. The second component is the theorem-proving subsystem.

We note the difficulties associated with the method of inductive assertions. It is difficult to construct "a set of basic axioms, limited enough to avoid contradictions, but rich enough to serve as a starting point for proving statements about programs" (E. Dijkstra). The second difficulty is semantic and consists in formulating the assertions to be proved: if the problem for which the program is written has no rigorous mathematical description, it is harder to formulate verification conditions for it.

The listed methods have one thing in common: they consider the program as an already existing object and then prove its correctness.

The method formulated by C.A.R. Hoare and E.W. Dijkstra is instead based on the formal derivation of programs from the mathematical formulation of the problem.

Verification and validation (V&V) are designed to analyze and check that software is built correctly and conforms to its specifications and to the customer's requirements. As methods of checking the correctness of programs and systems, they mean respectively:

  • verification: checking that the system is built correctly, in accordance with its specification;
  • validation: checking that the system fulfils the requirements specified for it.

Verification makes it possible to draw a conclusion about the correctness of the system once its design and development are complete. Validation establishes that the specified requirements are satisfiable and includes a number of activities for obtaining correct programs and systems, namely:

  • planning the procedures for checking and controlling design decisions and requirements;
  • providing automation of program design using CASE tools;
  • checking the correct functioning of programs by testing them on sets of targeted tests;
  • adapting the product to the operating environment, etc.

Validation performs these activities by reviewing and inspecting the specifications and design outputs at the life-cycle stages, to confirm that the initial requirements have been correctly implemented and that the specified conditions and constraints are met. The tasks of verification and validation include checking the completeness, consistency, and unambiguity of the requirements specification and the correctness with which the system's functions are performed.

Verification and validation are subject to:

  • the main components of the system;
  • interfaces of components (software, technical and informational) and interactions of objects (protocols and messages) that ensure the implementation of the system in distributed environments;
  • means of access to the database and files (transactions and messages) and verification of means of protection against unauthorized access to data of different users;
  • documentation for the software and for the system as a whole;
  • tests, test procedures and input data.

In other words, the main systematic methods of ensuring program correctness are:

  • verification of the components of the software system (PS) and validation of the requirements specification;
  • inspection of the PS to establish the conformance of the program to the given specifications;
  • testing of the PS output code on test data in a specific operating environment, to identify errors and defects caused by various flaws, anomalies, hardware failures, or system crashes (see Chapter 9).

The ISO/IEC 3918-99 and 12207 standards include verification and validation processes; for them, goals, tasks, and activities are defined for checking the correctness of the created product (including working and intermediate products) at the life-cycle stages and for checking its conformance to the requirements.

The main task of the verification and validation processes is to check and confirm that the final software is fit for purpose and satisfies the customer's requirements. These processes make it possible to find errors in the work products of the life-cycle stages, without establishing the causes of those errors, and to establish the correctness of the software with respect to its specification.

These processes are interrelated and are commonly covered by the single term "verification and validation" (V&V).

Verification is carried out by:

  • checking the correctness of the translation of individual components into output code and of the interface descriptions, by tracing the relationships among components against the customer's stated requirements;
  • analyzing the correctness of access to files or to a database, taking into account the data-manipulation procedures adopted in the system tools used and the transmission of results;
  • checking the component protection facilities for compliance with the customer's requirements, and tracing them.

After the individual components of the system have been checked, they are integrated, and the integrated system is verified and validated. The system is run against a large set of test suites to determine whether those suites are adequate and sufficient for completing the testing and for establishing the correctness of the system.

The idea of an international project on formal verification was proposed by T. Hoare and discussed at a symposium on verified software in February 2005 in California. Then, in October of the same year, at the IFIP conference in Zurich, an international project was adopted, with a planned duration of 15 years, to develop a "holistic automated set of tools for checking the correctness of PS".

It formulated the following main tasks:

  • development of a unified theory of construction and analysis of programs;
  • building a comprehensive integrated set of verification tools for all production stages, including the development of specifications and their verification, the generation of test cases, refinement, analysis and verification of programs;
  • creation of a repository of formal specifications and verified software objects of various types and kinds.

The project assumes that verification will cover all aspects of the creation and correctness checking of software and will become a panacea for all the troubles associated with the constant appearance of errors in the programs being created.

Many formal methods for proving and verifying specified programs have been tested in practice. A great deal of work on standardizing the software verification and validation processes has been done by the international ISO/IEC committee within the ISO/IEC 12207:2002 standard. Formal checking of the correctness of various programming objects is a promising direction.

The repository is a store of programs, specifications, and tools used in development and testing, and of evaluations of finished components, tools, and method templates. Its general tasks are the following:

  • accumulation of verified specifications, proof methods, program objects, and code implementations for complex applications;
  • accumulation of various verification methods, organized in a form suitable for searching for and selecting an implemented theoretical idea for further application;
  • development of standard forms for stating and exchanging formal specifications of various programming objects, as well as tools and ready-made systems;
  • development of interoperability and interaction mechanisms for transferring finished verified products from the repository into new distributed and network environments for creating new PSs.

The project is expected to run for 50 years. Earlier projects set similar goals: improving software quality, formalizing service models, reducing complexity through the use of reusable components (PICs), creating debugging tools for visual diagnosis and elimination of errors, and so on. However, no fundamental change in programming has occurred, either in visual debugging or in achieving high software quality. The development process continues.

The new international software verification project requires of its participants not only knowledge of the theoretical aspects of program specification, but also highly qualified programmers to implement it in the coming years.

