The test process consists of several stages. The results of the first stage,
planning, are described in this document. On the basis of this document the
actual test cases will be designed. Afterwards the tests will be implemented
and executed. The results of these tests will then be evaluated against the
exit criteria found in Section~\ref{sec:exitcriteria}; depending on the
outcome of these evaluations further tests might be deemed necessary.

\subsection{Quality Characteristics}
The quality characteristics to be tested are described using ISO/IEC 25010
\cite{iso25010} as a guideline. In the following sections we discuss the
relevant \textit{product quality} and \textit{quality in use}
characteristics.

\subsubsection{Product quality}
Product quality is divided into eight main categories, each of which is
divided into several subcategories. Below we discuss the qualities relevant
to the SUT.
\begin{itemize}
\item \textbf{Functional suitability}\\
    As described in Section~\ref{sec:risks}, the SUT is core functionality of
    the networking capable system. Because many other programs running on the
    system may rely on it, it is very important that the SUT is functionally
    suitable. Therefore all three subcharacteristics of functional
    suitability (\textit{functional completeness, functional correctness,
    functional appropriateness}) are of vital importance. As mentioned in
    Section~\ref{sec:risks}, extra emphasis should be placed on testing
    \emph{functional correctness}, as recovery from failures in
    computer-to-computer systems is problematic.
\item \textbf{Performance efficiency} \label{sec:perf_eff}\\
    As the SUT runs as a service on a system alongside other programs, it
    must have efficient \emph{resource utilisation}. It cannot contain any
    memory leaks or use other resources more than necessary.
\item \textbf{Compatibility}\\
    \emph{Interoperability} is the key feature of the SUT, as its purpose is
    to communicate with other systems implementing the TCP protocol. It is
    therefore of vital importance that the SUT implements the TCP protocol
    correctly. Furthermore, it is very important that the SUT can
    \emph{co-exist} with other programs on the system it runs on, since it is
    used as a service by those programs. This means that the SUT has to
    handle preemption as well as multiple programs requesting its services at
    once.
\item \textbf{Reliability}\\
    As stated before, the SUT is used as a core service; this means it has to
    be very \emph{mature}: it needs to behave as expected under normal
    working conditions. As it can be requested continually, the SUT needs
    constant \emph{availability}. As the SUT relies on a potentially
    unreliable channel to send and receive data, it needs to be \emph{fault
    tolerant}: it must properly handle errors in received data or complete
    unavailability of the underlying channel.
\item \textbf{Security}\\
    \emph{Confidentiality} is an important aspect of the SUT. Received data
    should only be delivered to the program to which it is addressed.
\end{itemize}
This leaves three categories which are not relevant to the SUT; below we
briefly explain why, per category. \emph{Maintainability} is an important
aspect of any software system, but not a core aspect of the SUT: it is first
and foremost important that the implementation is correct, and the TCP
protocol itself does not change often. \emph{Usability} is not a core aspect
either, as the SUT is not used directly by humans but is a service addressed
by other programs when they need it. Neither is \emph{portability}, as the
SUT is installed on a system once and intended to work on that system.

\subsubsection{Quality in use}
Quality in use is divided into five subcategories. Below we discuss the
categories which are relevant to the SUT.
\begin{itemize}
\item \textbf{Effectiveness}\\
    This is the core aspect of the SUT: users (other programs) need to be
    able to use the SUT effectively to send and receive data.
\item \textbf{Efficiency}\\
    This issue has already been covered above under
    ``performance efficiency''~\ref{sec:perf_eff}.
\item \textbf{Satisfaction}\\
    It is important that programs using the SUT can \emph{trust} that the SUT
    provides the promised services. This means that data is sent and received
    reliably and that the SUT provides clear and unambiguous errors when this
    service cannot be provided.
\item \textbf{Context coverage}\\
    The SUT needs to behave as expected in all specified contexts
    (\emph{context completeness}).
\end{itemize}
This leaves \emph{freedom from risk}, which we consider not relevant: the SUT
itself does not pose any risks, and a correct implementation (which is
covered by the other categories) gives clear guarantees to programs using the
services of the SUT.

\subsection{Levels and types of testing} \label{levels}
The client will deliver a product for certification. This means our team will
only conduct acceptance testing, assuming that the client who requested
certification has conducted unit, module and integration testing. We will
only be conducting black-box testing, and the client is not required to hand
over any source code. Initially we will conduct several basic test cases
based on experience acquired from previous certification requests (error
guessing). If the product fails these basic tests we reject it and cease all
further activities. If the product is not rejected we will proceed with more
thorough testing. For each test we produce a test report. If any of the test
cases fail, the product is rejected, but in order to deliver usable feedback
to the client we will still produce a test report. For each test case a
performance analysis will be included.

\subsubsection{Test generation}
The basic tests mentioned in Section~\ref{levels} are conducted using a
checklist. If any of the checks fail we immediately reject the product.

\begin{enumerate}
\item Is the product complete?
\item Does the product come with a manual or quick start guide?
\item Is it possible to get the product into a usable state?
\item Can we use the product to initiate a connection in a corruption-free
      environment?
\item ....
\end{enumerate}

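The reject-on-first-failure policy above can be sketched as a small script.
The individual check functions and their names are hypothetical placeholders
for manual or scripted verification steps, not part of the plan itself.

```python
# Minimal sketch of the reject-on-first-failure checklist policy.
# Each check function is a hypothetical placeholder for a real
# (possibly manual) verification step.

def product_is_complete():
    return True  # placeholder: verify the delivery contents

def has_manual_or_quickstart():
    return True  # placeholder: check the delivered documentation

def reaches_usable_state():
    return True  # placeholder: install and start the SUT

def initiates_connection():
    return True  # placeholder: open a connection in a clean environment

CHECKLIST = [
    ("product complete", product_is_complete),
    ("manual or quick start guide present", has_manual_or_quickstart),
    ("product reaches a usable state", reaches_usable_state),
    ("connection possible in corruption-free environment", initiates_connection),
]

def run_basic_checks():
    """Return (accepted, failed_check); reject at the first failing check."""
    for name, check in CHECKLIST:
        if not check():
            return False, name
    return True, None

if __name__ == "__main__":
    accepted, failed = run_basic_checks()
    print("accept" if accepted else f"reject: {failed}")
```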
For the remaining tests we first use equivalence partitioning to reduce the
overall number of test cases.

\begin{enumerate}
\item Valid requests:
    \begin{enumerate}
    \item Single request.
    \item Multiple requests.
    \end{enumerate}
\item Invalid requests:
    \begin{enumerate}
    \item Single request.
    \item Multiple requests.
    \end{enumerate}
\end{enumerate}

For these requests we can introduce more cases using equivalence partitioning
for the different packets that are sent during one request.

\begin{enumerate}
\item Packets received in order.
\item Packets received out of order.
\end{enumerate}

For each individual packet we can specify the following equivalence classes.

\begin{enumerate}
\item Valid packet.
\item Corrupted packet.
\item Missing packet.
\end{enumerate}

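The cross product of the request, ordering and per-packet partitions can be
enumerated mechanically. A sketch of that enumeration follows; the class
labels are our own shorthand for the partitions listed above, not identifiers
fixed by this plan.

```python
# Enumerate abstract test cases as the cross product of the three
# equivalence partitions defined above. The string labels are shorthand
# for the partitions, chosen for this sketch.
from itertools import product

REQUESTS = ["valid-single", "valid-multiple", "invalid-single", "invalid-multiple"]
ORDERINGS = ["in-order", "out-of-order"]
PACKET_CONTENT = ["valid", "corrupted", "missing"]

def test_combinations():
    """Yield one abstract test case per combination of partitions."""
    for request, order, content in product(REQUESTS, ORDERINGS, PACKET_CONTENT):
        yield {"request": request, "order": order, "packet": content}

cases = list(test_combinations())
print(len(cases))  # 4 * 2 * 3 = 24 abstract test cases
```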
We will test all possible combinations of requests, packet order and packet
content. For each combination we will use boundary value analysis to reduce
the total number of test cases. Boundary values are constructed using the
following parameters:

\begin{enumerate}
\item Checksum: valid/invalid
\item Header: valid/invalid
\item Payload: valid/invalid
\item ...
\end{enumerate}

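One common way to keep the parameter matrix small is the single-fault
assumption: start from an all-valid packet description and invalidate one
parameter at a time. This reduction strategy is an assumption of the sketch
below, not something mandated by the plan; the parameter names follow the
list above.

```python
# Sketch of single-fault boundary cases: an all-valid baseline plus one
# case per individually invalidated parameter. The single-fault reduction
# is an assumption of this sketch.
PARAMETERS = ["checksum", "header", "payload"]

def boundary_cases():
    """Return the all-valid baseline plus one single-fault case per parameter."""
    baseline = {p: "valid" for p in PARAMETERS}
    cases = [baseline]
    for p in PARAMETERS:
        case = dict(baseline)
        case[p] = "invalid"  # invalidate exactly one parameter
        cases.append(case)
    return cases

for case in boundary_cases():
    print(case)
```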
\subsubsection{Test environment and automation}
Together with the SUT, the tools we are going to use give us the following
collection of software.

\begin{enumerate}
\item Windows, used as a host OS.
\item Linux, used as both a host and guest OS.
\item VirtualBox, used to run the guest OS containing the SUT.
\item Wireshark, used on the guest in order to capture and analyze network
      traffic.
\item Tcpdump, used to prepare network packets.
\end{enumerate}

All tests will be conducted in a virtual environment. We will use VirtualBox
to run a Linux distribution with the product installed. All tests are
performed from within the VirtualBox environment. When testing network
transmissions we will only analyze the packets sent to and received from the
guest OS. The host system is disconnected from the Internet and any other
network in order to prevent unnecessary traffic.
% This is not needed, because the traffic goes via loopback.

For each test case (except for the basic tests) a file containing previously
captured network traffic will be replayed using Wireshark. We will use
tcpdump to update the prepared packets with the MAC address of the guest
network adapter. The response packets coming from the guest OS will be
recorded and analyzed at a later stage. The valid packets are obtained by
capturing traffic between known working alternatives to the SUT. Invalid
packets are generated from this valid traffic using tcpdump. The boundary
values for the different parameters (fields in packets) are determined by
hand. Automated scripts will be written to generate packets with some fields
replaced with these boundary values. The performance analysis will consist of
measured latencies for all packets sent.

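The field-mutation scripts can be illustrated with the checksum parameter:
compute the standard 16-bit Internet checksum over a packet, then replace it
with its bitwise complement so the receiver's verification is guaranteed to
fail. This is only a sketch of the idea; the actual invalid packets are
derived from captured traffic as described above.

```python
# Sketch: compute the RFC 1071 ones'-complement Internet checksum over a
# byte string and derive a deliberately corrupted copy. Illustrative only;
# the real test data is generated from captured traffic.
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum over 16-bit big-endian words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def corrupt_checksum(checksum: int) -> int:
    """Flip all 16 bits so checksum verification is guaranteed to fail."""
    return checksum ^ 0xFFFF

payload = b"example tcp segment"
good = internet_checksum(payload)
bad = corrupt_checksum(good)
print(hex(good), hex(bad))
```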
% It would be nicer to turn these into footnotes, placed where each tool is
% first referenced.
\begin{enumerate}
\item VirtualBox, https://www.virtualbox.org/
\item Wireshark, https://www.wireshark.org/
\item Tcpdump, http://www.tcpdump.org/
\end{enumerate}