Software testing/History of testing

From Wikiversity

time phases in testing

phases in the history of mankind

Since the beginning of time there have certainly been tests; here you see one classification from the last century.
This classification (there are of course more) originates from 1988 by D. Gelperin and W.C. Hetzel. They identified the following phases and goals for software testing:

  • Until 1956 came the debugging-oriented period, in which testing was often associated with debugging: there was no clear difference between testing and debugging.
  • From 1957-1978 came the demonstration-oriented period, in which debugging and testing were now distinguished; in this period the goal was to show that software satisfies its requirements.
  • The time between 1979-1982 is known as the destruction-oriented period, in which the goal was to find errors.
  • 1983-1987 is classified as the evaluation-oriented period: the intention here is to provide product evaluation and quality measurement throughout the software lifecycle.
  • From 1988 on it was seen as the prevention-oriented period, in which tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.

More about these phases can be read in the freely available PostScript document by G. T. Laycock.

phases in software development models

One possible classification is binary: either testing happens right from the beginning, or it does not (analogous to 0 and 1 in computers). Because this pattern seems too coarse for many, in theory the classification is done with test levels:

some test philosophies

note: at the moment not all test philosophies are listed

(only) our customers/users should test

Well, some people think: yeah, right! The customers/users will complain anyway if something does not work, so there is no need for us to test (at all) (that much). Let's not argue here about whether people with this approach are behind the times.
But there are also completely different testing philosophies.

Test Driven Development (TDD)

This is primarily a design process. Before source code is implemented, thought is given to how to prevent the source code from becoming "bad". It works like this: before writing any source code, unit tests are created, and after this, or in an iterative process, the system is implemented. The minimalistic claim is that the source code only has to fulfill the tests. Therefore there is a "natural" end to program creation: not when the developer has finished all features, but when all the demands made on the software by the unit tests are fulfilled.
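As a minimal sketch of this cycle (using Python's standard `unittest` module; the function `leap_year` and all names are purely illustrative, not from the text above), the tests are written first and specify the behaviour of code that does not yet exist:

```python
import unittest

# Step 1: write the tests first. They specify the desired behaviour
# of a function leap_year() that does not exist yet; at this point
# the test suite fails, which is expected in TDD.
class TestLeapYear(unittest.TestCase):
    def test_year_divisible_by_4_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_year_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

# Step 2: write just enough implementation to make the tests pass.
# When they are green, program creation has reached its "natural" end.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```

Note that the implementation stops as soon as the tests pass; any further feature would first need a new failing test.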

Here are some survey results (February 2006) about TDD usage: Methods & Tools survey: How is unit testing performed at your location?
(Please keep in mind that surveys do not necessarily represent reality (see also: statistical survey))

Extreme Programming (XP)

XP has existed since 1996; co-founders: Kent Beck, Ron Jeffries

Extreme programming is one of the agile methods. It comprises a set of software development practices, to which the TDD approach also belongs. In contrast to agile methods there are heavyweight, bureaucratic methods, e.g. the V-Modell or the Rational Unified Process (RUP).
With XP, the following improvements are intended in a software project:

  • communication: developers communicate with customers and other developers
  • simplicity: when designing the source code
  • courage: the system is delivered to the customer as early as possible, so that requested changes can be implemented
  • feedback: the software is tested right from the first day

One thought here is: the unit tests serve as the specification. Continuous integration of all unit tests is also required: they must run without errors at any time before development continues. Test frameworks (e.g. JUnit) are used for this.

Another aid is pair programming: here the old wisdom that four eyes see more than two is put to use. While one developer writes the source code, the other thinks along. After a role change, the tasks of both switch. Although at first view double the human resources are spent here, some figures show that the extra costs are only about 30%, while the quality of the created source code improves considerably. The pairs are regularly re-mixed in certain project phases. This has a useful side effect: not only one developer knows a certain piece of source code; there is at least one more developer available to contact if modifications have to be made.

The method proves effective only if BOTH participants are active. While one person is writing the code, the partner has to think along actively and contribute ideas.

Behaviour Driven Development (BDD)

co-founders: Dave Astels, Dan North

From a technical viewpoint, BDD is the same as TDD.
But BDD addresses the following aspect: because TDD uses a test-centered vocabulary (e.g. test case, test suite, test ...), it can lead people to think mostly about testing, although TDD is actually a design process. BDD tries to improve this with a common vocabulary, because language influences the way of thinking (Sapir–Whorf hypothesis). If, for example, you always use terms like "test", people perhaps tend to think only about verification. But when you speak of behaviour or interactions, it is no longer about proving that, say, the source code is correct; the focus then lies on: what should my source code actually do?

Because of this, BDD concentrates on specification. The focus is no longer on the structure of the source code (e.g. for class Y I also need a test class Test_Y); the focus lies on the behaviour of the source code. A specification should be created that describes the behaviour precisely, unambiguously, and in executable form. You no longer actually write tests; you write specifications.
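The shift in vocabulary can be sketched even with only the standard library: the checks below are ordinary executable assertions, but named as behaviour specifications rather than tests. (Dedicated BDD tools such as RSpec or behave go much further; the `Stack` class and all names here are illustrative assumptions.)

```python
import unittest

# The code whose behaviour we want to specify.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# BDD-flavoured naming: the class and method names read as a
# specification of behaviour ("a new stack should be empty"),
# not as a verification of structure ("Test_Stack").
class DescribeAStack(unittest.TestCase):
    def test_it_should_start_out_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_it_should_hand_back_pushed_items_in_reverse_order(self):
        stack = Stack()
        stack.push(1)
        stack.push(2)
        self.assertEqual([stack.pop(), stack.pop()], [2, 1])

if __name__ == "__main__":
    unittest.main()
```

Reading the method names aloud yields a rough specification of the stack's behaviour, which is exactly the point.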

what can testers expect?

anything - no kidding

I guess you have all heard of Murphy's law already?
"If there's more than one way to do a job, and one of those ways will result in disaster, then somebody will do it that way."

Think about it: haven't you yourself had a really bad day at some point? How must software feel that is trapped in a PC day after day? :-) Perhaps some older students also remember the film Tron?

a standard does not automatically lead to less anomalies

Some people (developers, testers, ...) think that once the source code has been compiled, many anomalies have already been found (so to speak, the worst is done: the testers can get on with their job and the developers are finished), because the compiler has checked the code and compilers are reliable; after all, the programming language (and its compiler) was discussed by a committee, and standardization leads to fewer errors.

Unfortunately, a compiler is just software. The advantages of standards are of course obvious (e.g. it is better to have a standard than not to have one). It should be noted, however, that some standards do not necessarily lead to fewer anomalies and can therefore also create a false sense of security.

Les Hatton describes the following situations in Static Analysis: ISO committees do not need 100% agreement to decide on an aspect; a decision can also be made with a 75% majority. Imagine 40-50 people who have to deliberate together and come to a decision. Don't you think that at some point, when you have had enough, you just want to wrap everything up and no longer listen to a few individuals? (Consider, as an analogy, the jury in the American court system: haven't innocent people also been sentenced to death?)

When the C programming language standard was published in 1990, it contained 201 items on which the committee could not sufficiently agree. The 1999 standard contained almost twice as many (namely 366). For many of these items, the behavior is not even defined, and there is no requirement or regulation to track these problems down.

If we consider the standardization of the programming language Fortran: the feature set has grown from Fortran77 via Fortran90 to Fortran95. But the new features created new problems, and backward compatibility with the old language versions also had to be maintained. Who should change the existing code (which ultimately costs resources)? So many of the anomalies persisted.

For this reason, static testing techniques are still recommended: some anomalies occur only under certain conditions but can have catastrophic effects, and such anomalies often cannot be found through reviews, for example, but only through static analysis.
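To make the idea of static testing concrete, here is a tiny sketch of a static check in Python: without executing the program, it walks the syntax tree and flags a known anomaly pattern. The chosen pattern (comparisons written as `x == None`, which in Python should be `x is None`) and the function name are illustrative assumptions; real static analyzers check hundreds of such rules.

```python
import ast

# A minimal static check: parse the source, walk the syntax tree,
# and report every comparison of the form "<expr> == None".
def find_eq_none(source):
    """Return the line numbers of all '== None' comparisons."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, ast.Eq)
                        and isinstance(comparator, ast.Constant)
                        and comparator.value is None):
                    findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    code = (
        "def f(x):\n"
        "    if x == None:\n"   # the anomaly sits on line 2
        "        return 0\n"
        "    return x\n"
    )
    print(find_eq_none(code))
```

The anomaly is found without ever running `f`, which is the defining property of a static technique.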

theory

Note: this chapter is under development.
So, do you have a question about it? Please ask one of the tutors here.
Don't lose heart! We tutors do not bite - well, only sometimes :-)