The test process consists of several stages. The results of the first stage,
planning, are described in this document. On the basis of this document the
actual test cases will be designed. Afterwards the tests will be implemented
and executed. The results of these tests will then be evaluated against the
exit criteria (see Section~\ref{sec:exitcriteria}); depending on the outcome
of this evaluation, further tests might be deemed necessary.

\subsection{Quality Characteristics}
The quality characteristics to be tested are described using
ISO/IEC 25010 \cite{iso25010} as a guideline. In the following sections we
discuss the relevant \textit{product quality} and \textit{quality in use}
characteristics.

\subsubsection{Product quality}
Product quality is divided into eight main categories, each of which is
further divided into several subcategories. Below we discuss the qualities
that are relevant to the SUT.
\begin{itemize}
\item \textbf{Functional suitability}\\
As described in Section~\ref{sec:risks}, the SUT provides core functionality
of a networking-capable system. Because many other programs running on the
system may rely on it, it is very important that the SUT is functionally
suitable. Therefore all three subcharacteristics of functional suitability
(\textit{functional completeness, functional correctness, functional
appropriateness}) are of vital importance. As was previously mentioned in
Section~\ref{sec:risks}, extra emphasis should be placed on testing
\emph{functional correctness}, as recovery from failures in
computer-to-computer systems is problematic.
\item \textbf{Performance efficiency} \label{sec:perf_eff}\\
As the SUT runs as a service on a system alongside other programs, it must
use resources efficiently (\emph{resource utilisation}). It must not contain
any memory leaks or use more resources than necessary.
\item \textbf{Compatibility}\\
\emph{Interoperability} is the key feature of the SUT, as its purpose is to
communicate with other systems implementing the TCP protocol. It is therefore
of vital importance that the SUT implements the TCP protocol correctly.
Furthermore, it is very important that the SUT can \emph{co-exist} with the
other programs on the system it runs on, since it is used as a service by
those programs. This means that the SUT has to handle preemption as well as
multiple programs requesting its services at once (a sketch of such a
concurrency check follows this list).
\item \textbf{Reliability}\\
As stated before, the SUT is used as a core service, which means it has to be
very \emph{mature}: it needs to behave as expected under normal working
conditions. As it can be requested continually, the SUT needs to have
constant \emph{availability}. As the SUT relies on a (potentially) unreliable
channel to send and receive data, it needs to be \emph{fault tolerant}. The
SUT needs to properly handle errors in received data as well as complete
unavailability of the underlying channel.
\item \textbf{Security}\\
\emph{Confidentiality} is an important aspect of the SUT. Received data
should only be delivered to the program to which it is addressed.
\end{itemize}
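The co-existence requirement above can be exercised with a simple black-box
concurrency check. The following sketch is only an illustration under stated
assumptions: it assumes the SUT is reachable through the standard socket API
and that a reference echo peer listens at \texttt{127.0.0.1:7000}; host, port
and payload are hypothetical placeholders, not part of any specification.
\begin{verbatim}
# Minimal sketch: have several client programs use the SUT's TCP service
# at the same time. Host/port are hypothetical placeholders for a
# reference echo peer reachable through the SUT.
import socket
import threading

HOST, PORT = "127.0.0.1", 7000   # assumed echo peer, not specified here

def one_client(client_id, results):
    payload = ("hello from client %d" % client_id).encode()
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(payload)
        results[client_id] = (s.recv(1024) == payload)

results = {}
threads = [threading.Thread(target=one_client, args=(i, results))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("all concurrent requests served correctly:", all(results.values()))
\end{verbatim}
A real test case would additionally fix timing and resource limits; the point
here is only that concurrent use of the service is observable from the outside.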
This leaves three categories that are not relevant to the SUT. Below we
briefly discuss, per category, why they are not relevant. \emph{Maintainability}
is an important aspect of any software system, but for the SUT it is not a
core aspect: it is first and foremost important that the implementation is
correct, and furthermore TCP does not change often. \emph{Usability} is not a
core aspect either, as the SUT is not used directly by humans but is a service
addressed by other programs when they need it. \emph{Portability} is not
relevant either, as the SUT is installed on a system once and intended to work
on that system.

\subsubsection{Quality in use}
Quality in use is divided into five subcategories. Below we discuss the
categories that are relevant to the SUT.
\begin{itemize}
\item \textbf{Effectiveness}\\
This is the core aspect of the SUT: its users (other programs) need to be
able to effectively use the SUT to send and receive data.
\item \textbf{Efficiency}\\
This aspect has already been covered above under \emph{performance
efficiency} (Section~\ref{sec:perf_eff}).
\item \textbf{Satisfaction}\\
It is important that programs using the SUT can \emph{trust} that the SUT
provides the promised services. This means that data is sent and received
reliably and that the SUT provides clear and unambiguous errors when this
service cannot be provided.
\item \textbf{Context Coverage}\\
The SUT needs to behave as expected in all specified contexts
(\emph{context completeness}).
\end{itemize}
This leaves \emph{freedom from risk}, which we consider not relevant: the SUT
itself does not pose any risks, and a correct implementation (which is covered
by the other categories) already gives clear guarantees to the programs using
the services of the SUT.

\subsection{Levels and types of testing} \label{levels}

The client will deliver a product for certification. This means our team will
only conduct acceptance testing and will assume that the client who requested
certification has already conducted unit, module and integration testing. We
will only conduct black-box testing, and the client is not required to hand
over any source code. Initially we will run several basic test cases based on
experience acquired from previous certification requests (error guessing). If
the product fails these basic tests we reject it and cease all further
activities. If the product is not rejected we proceed with more thorough
testing. For each test we produce a test report. If any of the test cases
fail, the product is still rejected, but in order to give the client usable
feedback we will still produce a test report.

\subsection{Test generation}

The basic tests mentioned in Section~\ref{levels} are conducted using a
checklist; a minimal sketch of one such check follows the list below. If any
of the checks fail we immediately reject the product.

\begin{enumerate}
\item Is the product complete?
\item Does the product come with a manual or quick start guide?
\item Is it possible to get the product in a usable state?
\item Can we use the product to initiate a connection in a corruption-free
environment?
\item \ldots
\end{enumerate}
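Check 4 can be illustrated with a short smoke test. The sketch below is a
minimal illustration under stated assumptions: the SUT is reachable through
the standard socket API, and a reference peer listens at the hypothetical
address \texttt{192.0.2.10:5000} on the corruption-free test network.
\begin{verbatim}
# Sketch of checklist item 4: initiate a connection in a corruption-free
# environment. The peer address is a hypothetical placeholder for a
# reference TCP implementation on the test network.
import socket

PEER = ("192.0.2.10", 5000)   # assumed reference peer

try:
    # Opening the connection exercises the SUT's three-way handshake.
    with socket.create_connection(PEER, timeout=5) as s:
        s.sendall(b"ping")
        print("connection established, reply:", s.recv(1024))
except OSError as error:
    print("basic connectivity check failed:", error)
\end{verbatim}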

For the remaining tests we first use equivalence partitioning to reduce the
overall number of test cases.

\begin{enumerate}
\item Valid requests:
\begin{enumerate}
\item Single request.
\item Multiple requests.
\end{enumerate}
\item Invalid requests:
\begin{enumerate}
\item Single request.
\item Multiple requests.
\end{enumerate}
\end{enumerate}

For each of these requests we can introduce further cases by applying
equivalence partitioning to the packets that are sent during a single request,
starting with the order in which they are received.

\begin{enumerate}
\item Packets received in order.
\item Packets received out of order.
\end{enumerate}

For each individual packet we can specify the following equivalence classes;
a sketch of how an input for the second class might be crafted follows the
list.

\begin{enumerate}
\item Valid packet.
\item Corrupted packet.
\item Missing packets.
\end{enumerate}

We will test all possible combinations of request type, packet order and
packet content. For each combination we will use boundary value analysis to
keep the total number of test cases manageable. Boundary values are
constructed using the following parameters (a sketch of how the resulting
combinations can be enumerated follows the list):

\begin{enumerate}
\item Checksum: valid/invalid
\item Header: valid/invalid
\item Payload: valid/invalid
\item \ldots
\end{enumerate}
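The combination step described above can be made concrete with a small
generator. The sketch below simply enumerates the cross product of the
partitions and boundary parameters listed in this section; the names mirror
the lists above, while the data layout is an illustrative assumption.
\begin{verbatim}
# Sketch: enumerate candidate test-case combinations from the equivalence
# classes and boundary parameters listed above.
from itertools import product

REQUESTS     = ["valid single", "valid multiple",
                "invalid single", "invalid multiple"]
PACKET_ORDER = ["in order", "out of order"]
PACKET_CLASS = ["valid", "corrupted", "missing"]
BOUNDARY     = list(product(["checksum valid", "checksum invalid"],
                            ["header valid",   "header invalid"],
                            ["payload valid",  "payload invalid"]))

test_cases = [
    (req, order, pkt, bnd)
    for req, order, pkt, bnd in product(REQUESTS, PACKET_ORDER,
                                        PACKET_CLASS, BOUNDARY)
]
print(len(test_cases), "candidate combinations before further selection")
\end{verbatim}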