% Comparison between tools and how fit the tools we chose were

% Discuss and classify the MBT tools that you used, compare them, compare with
% manual testing, and reflect on the theory:
% • What is the implementation relation?
% • Is there support for non-determinism, concurrency?
% • Are the generated tests sound, exhaustive? Why do you think so?
% • Which method of test selection is used?
% • What is the modeling notation, its expressiveness, and ease of use?
% • Do the tools use on/off-line testing (on-the-fly/batch)?
% • Do they support both: test input generation and output checking?

The SUT has been tested using two tools, JTorX and \GAST. Both tools allowed
for extensive testing of the SUT against the model of
Section~\ref{sec:modeling}, using the architecture of Figure~\ref{fig:arch}.

However, setting up both \GAST and JTorX required considerable effort. Before
\GAST could be used effectively, substantial changes had to be made to its
source code, in part because of problems with Clean's uniqueness typing when
using \GAST over TCP. The complete list of changes can be obtained by
comparing \GAST's source code to the source code in the attached
\texttt{code/Gast/} folder.

\GAST selects tests by trying all inputs from the alphabet in each reachable
state; a test case fails when the SUT responds in a way the model does not
allow. JTorX selects test cases on the fly, at each step randomly choosing
whether to stimulate the SUT with an input allowed by the model or to observe
the SUT's output.
\GAST's selection ensures that all possible traces up to the tested depth are
covered; however, this exhaustive search also generates very large test
suites for large models.

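To make the contrast concrete, the following sketch (written in Haskell for
brevity; the names and types are hypothetical and taken from neither tool)
contrasts \GAST-style exhaustive enumeration of input traces with JTorX-style
random step-by-step selection:

\begin{verbatim}
import System.Random (randomRIO)

-- GAST-style: all input sequences of exactly length n over the alphabet.
tracesOfLen :: [a] -> Int -> [[a]]
tracesOfLen _     0 = [[]]
tracesOfLen alpha n = [ i : t | i <- alpha, t <- tracesOfLen alpha (n - 1) ]

-- All input sequences up to length n: 1 + |alpha| + ... + |alpha|^n traces,
-- which explains the blow-up for large models.
upToDepth :: [a] -> Int -> [[a]]
upToDepth alpha n = concatMap (tracesOfLen alpha) [0 .. n]

-- JTorX-style: a single trace of length n, each step chosen at random.
randomTrace :: [a] -> Int -> IO [a]
randomTrace _     0 = pure []
randomTrace alpha n = do
  k    <- randomRIO (0, length alpha - 1)
  rest <- randomTrace alpha (n - 1)
  pure (alpha !! k : rest)
\end{verbatim}

For an alphabet of only three inputs, \texttt{upToDepth} already contains
$3^{10} = 59049$ traces of length ten alone, whereas the random strategy
explores a single trace per run.
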
% something about non-determinism here?

Both JTorX and \GAST use on-line (on-the-fly) testing: after each input is
sent, the observed output of the SUT is checked against the model, and the
state of the internal model is advanced in lockstep with the SUT.

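The following Haskell sketch shows the shape of such an on-line test loop for
a deterministic model; the model interface and driver function are assumptions
made for illustration, not the actual API of either tool:

\begin{verbatim}
data Verdict = Pass | Fail deriving Show

-- Drive the SUT with a list of inputs, checking each observed output
-- against the transitions the model allows and keeping the model state
-- synchronised with the SUT.
onlineTest
  :: Eq output
  => (state -> input -> [(output, state)])  -- model transition relation
  -> (input -> IO output)                   -- send input, observe output
  -> state                                  -- current model state
  -> [input]                                -- inputs still to apply
  -> IO Verdict
onlineTest _    _   _ []         = pure Pass
onlineTest step sut s (i : rest) = do
  o <- sut i                                -- stimulate the SUT
  case lookup o (step s i) of               -- output allowed here?
    Nothing -> pure Fail                    -- no: fail immediately
    Just s' -> onlineTest step sut s' rest  -- yes: sync state, continue
\end{verbatim}

Because the verdict is computed after every step, a failure is reported as
soon as the SUT deviates from the model, without first generating a complete
test suite.
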
% something about the ease of use of test model generation