Software testing/Design technique

One possible classification of the design techniques (another could be black box, white box, grey box: see here) is into static and dynamic techniques.
The static techniques also test software (like the dynamic ones), but the software is not executed during this. One goal of both types is, among others, to find anomalies (more goals can be read here). We could also phrase it like this: during dynamic testing the software interacts and reacts, while for the static techniques the software just lies before us like a skeleton, and we can dissect it like a pathologist.
Examples of some of the techniques can be found in the Standard for Software Component Testing from BCS SIGIST.

manual

For the following design techniques, the human ability to analyze and think is of primary importance. Different kinds of reviews are used for this:

with tools

Tools for static analysis require the test object to have a formal structure/syntax (so a test object can also be something other than source code). The tools then search for certain patterns (see the sketch after this list). Such tools can be:

compilers: they check e.g. the syntax of the programming language used, or they calculate metrics
analyzers: e.g. this can also be "just" a program for spell checking
tools for modeling: they first create a model of e.g. the specification or the source code, so that other tools can build on it
also other open source and commercial tools (please see the discussion page)
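
A minimal sketch of such a pattern (an assumed example in C, not from the original text): a compiler with warnings enabled (e.g. gcc -Wall, depending on compiler and version) can report the following just by inspecting the source text, without ever executing the program:

 #include <stdio.h>

 int main(void)
 {
     int x; /* declared, but never initialized */

     /* reading x is an anomaly; a static check finds this pattern
        in the source text alone - no execution is needed */
     printf("%d\n", x);
     return 0;
 }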

dynamic design techniques

Complementary to the static techniques, the test object is executed during testing.

systematic techniques

An overview of black box, white box and grey box can be found in the article here.

White box design techniques can be grouped into control flow based and data flow based techniques.


control flow based

(todo: explaining advantages of control flow graphs and CSD)
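
One advantage can at least be sketched here: a control flow graph makes edges explicit that are invisible in the source text, e.g. the empty ELSE branch of an IF. A minimal sketch (an assumed example in C, with a hypothetical function clamp_to_zero):

 #include <assert.h>

 /* hypothetical function under test */
 static int clamp_to_zero(int x)
 {
     if (x < 0) {
         x = 0;   /* TRUE branch */
     }            /* the empty FALSE branch does not appear in the
                     text, but it is an explicit edge in the control
                     flow graph */
     return x;
 }

 int main(void)
 {
     assert(clamp_to_zero(-5) == 0); /* one test already gives 100%
                                        statement coverage */
     assert(clamp_to_zero(3) == 3);  /* branch coverage additionally
                                        requires the (empty) FALSE edge */
     return 0;
 }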


data flow based
Data flow based techniques build on and extend control flow based techniques. Data is stored in variables, and these variables have a defined lifecycle:

undeclared/undefined (u): the variable has no value and no memory
declared (d): the variable has no defined value, but memory is already assigned
initialized (i): a value is assigned to the variable
referenced (r): the value of the variable is used/read
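
A minimal sketch of this lifecycle (an assumed example in C, not part of the BubbleSort listing below):

 #include <stdio.h>

 int main(void)
 {
     /* before the next line, x is undeclared/undefined (u) */
     int x;             /* declared (d): memory is assigned, value undefined */

     x = 42;            /* initialized (i): the variable gets a value */
     printf("%d\n", x); /* referenced (r): the value is read */
     return 0;
 }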


There can then be many data flow anomalies, e.g. dr, id, ii. This shall be illustrated with a part of the source code of the BubbleSort algorithm:

 4.         temp, swaps : INTEGER;
..
13.                 IF ~(a[i] <= a[i+1]) THEN
14a.                     temp   := a[i];
14b.                     a[i]   := a[i+1]; 
14c.                     a[i+1] := temp;
15.                      INC(swaps);
16.                 END

Let's say the source code now contains anomalies (lines 14a - 14c):

 4.         temp, swaps : INTEGER;
..
13.                 IF ~(a[i] <= a[i+1]) THEN
14a.                                     (* line 14a is now deleted                                    *) 
14b.                     a[i+1] := temp; (* dr-anomaly: temp (declared in line 4) is    *)
                                         (*             referenced before it was ever   *)
                                         (*             initialized.                    *)
14c.                     a[i+1] := a[i]; (* ii-anomaly: a[i+1] is assigned twice in a   *)
                                         (*             row (lines 14b + 14c) without   *)
                                         (*             being read in between - this    *)
                                         (*             should make you wonder.         *)
15.                      INC(swaps);
16.                 END

grey box

Here the advantages of black box and white box techniques are combined to design better tests.
(TODO: examples)
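
One possible example (an assumption, not from the original text): from the BubbleSort source above, a tester knows that a swap only happens when a[i] > a[i+1] (line 13). Using this white box knowledge, they design black box input data for the public interface: an already sorted array (the swap branch is never taken) and a reverse-sorted array (the swap branch is taken in every comparison).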

tools for dynamic analysis

  • test execution tools
      • GUI capture and playback: actions on the screen are recorded in a test script. This type of tool has disadvantages, e.g. more effort when reusing, because the script has to be edited.
      • data-driven: the advantage here is that the same recorded test (saved in a test script, as above) can be repeated with different input data (which the user supplies e.g. via Excel).
      • keyword-driven/action-word-driven: the difference to data-driven is that each line of input data now has its own keyword, which tells what to do with the input data. Imagine it like this: the keyword is something like a function that tells what to do (see the sketch after this list).
      • interaction-driven: if something changes in a test script itself, the maintenance effort decreases, because other test scripts that work with this script are updated automatically. The scripts/modules can be taken from the database via drag and drop.
  • comparator/comparison tools
  • dynamic analysis tools
  • coverage tools
  • test frameworks
  • debugger
  • and others
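
A minimal sketch of the keyword-driven idea (an assumed example in C; all names such as deposit, withdraw and check are hypothetical and not tied to any concrete tool). Each line of the test script is a keyword plus input data, and the driver maps the keyword to an action:

 #include <assert.h>
 #include <stdio.h>
 #include <string.h>

 static int balance = 0;

 /* hypothetical actions the keywords are mapped to */
 static void deposit(int amount)  { balance += amount; }
 static void withdraw(int amount) { balance -= amount; }
 static void check(int expected)  { assert(balance == expected); }

 /* one line of a keyword-driven test script: keyword + input data */
 struct step { const char *keyword; int data; };

 static void run(const struct step *script, size_t n)
 {
     for (size_t i = 0; i < n; i++) {
         if      (strcmp(script[i].keyword, "deposit")  == 0) deposit(script[i].data);
         else if (strcmp(script[i].keyword, "withdraw") == 0) withdraw(script[i].data);
         else if (strcmp(script[i].keyword, "check")    == 0) check(script[i].data);
         else printf("unknown keyword: %s\n", script[i].keyword);
     }
 }

 int main(void)
 {
     /* the test script is pure data - it can be maintained without programming */
     const struct step script[] = {
         { "deposit",  100 },
         { "withdraw",  40 },
         { "check",     60 },
     };
     run(script, sizeof script / sizeof script[0]);
     return 0;
 }

The data-driven variant from above corresponds to keeping the action sequence fixed and only varying the data column; the keyword column is what makes the script itself describe what to do.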

non-systematic techniques
