From: Mart Lubbers
Date: Tue, 26 Jan 2016 20:34:05 +0000 (+0100)
Subject: tex opgeschoond'
X-Git-Url: https://git.martlubbers.net/?a=commitdiff_plain;h=cab2c8ddec042832d02bcd45190a1d1320f54eb8;p=tt2015.git

tex opgeschoond'
---
diff --git a/.gitignore b/.gitignore
index 7b58c03..92e6959 100644
--- a/.gitignore
+++ b/.gitignore
@@ -43,3 +43,4 @@ C:\\nppdf32Log\\debuglog.txt
 sn.txt
 *.exe
 *.swp
+*.eps
diff --git a/a4/Makefile b/a4/Makefile
index 418142e..b159295 100644
--- a/a4/Makefile
+++ b/a4/Makefile
@@ -2,6 +2,7 @@ LATEX:=latex
 DOCUMENT:=tt4
 MODELS=model.small.LStar.rand.eps model.partial.LStar.rand.eps model.full.LStar.rand.eps
+TEXS=question1.tex question2.tex question3.tex question4.tex
 
 .SECONDARY: $(DOCUMENT).fmt
 .PHONY: clean
 
@@ -11,7 +12,7 @@ all: $(MODELS) $(DOCUMENT).pdf
 
 %.pdf: %.dvi
 	dvipdfm $<
 
-%.dvi: %.tex %.fmt
+%.dvi: %.tex %.fmt $(TEXS)
 	$(LATEX) $<
 	$(LATEX) $<
diff --git a/a4/preamble.tex b/a4/preamble.tex
index a3a539e..02e1044 100644
--- a/a4/preamble.tex
+++ b/a4/preamble.tex
@@ -4,6 +4,8 @@
 \usepackage[dvipdfm]{hyperref}
 \usepackage{graphicx}
 \usepackage{longtable}
+\usepackage{booktabs}
+\usepackage{float}
 
 \author{%
 	Charlie Gerhardus\and
diff --git a/a4/question2.tex b/a4/question2.tex
index 8286ddb..2e098c1 100644
--- a/a4/question2.tex
+++ b/a4/question2.tex
@@ -1,26 +1,40 @@
-In order to be allow learnlib to learn the TCP model it was necessary to have a deterministic model.
-We accomplished this by modifying the adapter so it can reach a \emph{ERROR} or \emph{CLOSED} state. In these states all inputs are discarded and a default output is returned.
-In the case of a state where an input results in a non-deterministic output we jump to the \emph{ERROR} state for additional this given input. When the connection is successfully closed using a \emph{FIN} packet we move the adapter to the \emph{CLOSED} state.
+In order to allow LearnLib to learn the TCP model it was necessary to have a
+deterministic model. We accomplished this by modifying the adapter so it can
+reach an \texttt{ERROR} or \texttt{CLOSED} state. In these states all inputs
+are discarded and a default output is returned. Whenever an input would result
+in a non-deterministic output in some state, we instead jump to the
+\texttt{ERROR} state for that input. When the connection is successfully closed
+using a \texttt{FIN} packet we move the adapter to the \texttt{CLOSED} state.
 
-We divided the input alphabet into three sets, this way we can control the size of the model learned by learnlib.
+We divided the input alphabet into three sets; this way we can control the
+size of the model learned by LearnLib.
 
-\begin{longtable}{|c|l|}
-	\caption{Different input alphabets used during learning.} \\\hline
-	Alphabet & Inputs \\\hline \hline
-	small & SYN, ACK \\\hline
-	partial & SYN, ACK, DATA \\\hline
-	full & SYN, ACK, DATA, RST, FIN \\\hline
-\end{longtable}
+\begin{table}[H]
+	\begin{tabular}{cl}
+		\toprule
+		Alphabet & Inputs \\
+		\midrule
+		small & \texttt{SYN}, \texttt{ACK} \\
+		partial & \texttt{SYN}, \texttt{ACK}, \texttt{DATA} \\
+		full & \texttt{SYN}, \texttt{ACK}, \texttt{DATA}, \texttt{RST},
+			\texttt{FIN} \\
+		\bottomrule
+	\end{tabular}
+	\caption{Different input alphabets used during learning.}
+\end{table}
 
-Just as in our previous assignment the \emph{DATA} packet is actually a \emph{ACK} with an user data payload and the \emph{push} flag set.
-These input alphabets will influence the size of the model produced. \emph{small} will result in a 2 state model, \emph{partial} will be the full model without the \emph{CLOSED} state and \emph{full} should result in the full model as used in the previous assignment.
+Just as in our previous assignment, the \texttt{DATA} packet is actually an
+\texttt{ACK} with a user data payload and the \emph{push} flag set. These
+input alphabets influence the size of the model produced: \emph{small} will
+result in a 2-state model, \emph{partial} will result in the full model
+without the \texttt{CLOSED} state, and \emph{full} should result in the full
+model as used in the previous assignment.
 
 \paragraph{Model learned with small input alphabet}
-\includegraphics{model.small.LStar.rand.eps}
-
+%\includegraphics{model.small.LStar.rand.eps}
 
 \paragraph{Model learned with partial input alphabet}
-\includegraphics{model.partial.LStar.rand.eps}
+%\includegraphics{model.partial.LStar.rand.eps}
 
 \paragraph{Model learned with full input alphabet}
-\includegraphics{model.full.LStar.rand.eps}
\ No newline at end of file
+%\includegraphics{model.full.LStar.rand.eps}
diff --git a/a4/question3.tex b/a4/question3.tex
index 212b38d..d088a21 100644
--- a/a4/question3.tex
+++ b/a4/question3.tex
@@ -1,36 +1,44 @@
-The table below contains some statistics about all the different parameter configurations we ran learnlib with.
-All except \emph{RivestSchapire} using the Random test method result in the correct model being learned.
-When \emph{WMethod} is selected as the testing method \emph{RivestSchapire} is also able to learn the correct model.
-\emph{WMethod} does however increase the time needed to learn the model significantly, when a different learner is used there is no reason not to use the Random testing method.
-
-\begin{longtable}{| l | l | l | c | c | c |}
-	\caption{Learning parameters and resulting model size.} \\\hline
-	Alphabet & Method & Test method & States & Time \\\hline \hline
-	small & LStar & Random & 2 & 12 sec \\\hline
-	small & TTT & Random & 2 & 5 sec \\\hline
-	small & RivestSchapire & Random & 2 & 6 sec \\\hline
-	small & KearnsVazirani & Random & 2 & 5 sec \\\hline
-	small & LStar & WMethod & 2 & 35 sec \\\hline
-	small & TTT & WMethod & 2 & 32 sec \\\hline
-	small & RivestSchapire & WMethod & 2 & 33 sec \\\hline
-	small & KearnsVazirani & WMethod & 2 & 33 sec \\\hline
-
-	partial & LStar & Random & 4 & 18 sec \\\hline
-	partial & TTT & Random & 4 & 16 sec \\\hline
-	partial & RivestSchapire & Random & 4 & 13 sec \\\hline
-	partial & KearnsVazirani & Random & 4 & 22 sec \\\hline
-	partial & LStar & WMethod & 4 & 384 sec \\\hline
-	partial & TTT & WMethod & 4 & 390 sec \\\hline
-	partial & RivestSchapire & WMethod & 4 & 384 sec \\\hline
-	partial & KearnsVazirani & WMethod & 4 & 383 sec \\\hline
-
-	full & LStar & Random & 5 & 44 sec \\\hline
-	full & TTT & Random & 5 & 25 sec \\\hline
-	full & RivestSchapire & Random & 4 & 12 sec \\\hline
-	full & KearnsVazirani & Random & 5 & 19 sec \\\hline
-	full & LStar & WMethod & 5 & 2666 sec \\\hline
-	full & TTT & WMethod & 5 & 2632 sec \\\hline
-	full & RivestSchapire & WMethod & 5 & 2638 sec \\\hline
-	full & KearnsVazirani & WMethod & - & - \\\hline
-\end{longtable}
+The table below contains some statistics about the different parameter
+configurations we ran LearnLib with. With the Random test method, all
+learners except \emph{RivestSchapire} result in the correct model being
+learned. When \emph{WMethod} is selected as the testing method,
+\emph{RivestSchapire} is also able to learn the correct model.
+
+\emph{WMethod} does, however, increase the time needed to learn the model
+significantly; when a different learner is used there is no reason not to use
+the Random testing method.
+\begin{table}[H]
+	\begin{tabular}{lllcc}
+		\toprule
+		Alphabet & Method & Test method & States & Time \\
+		\midrule
+		small & LStar & Random & 2 & 12 sec \\
+		small & TTT & Random & 2 & 5 sec \\
+		small & RivestSchapire & Random & 2 & 6 sec \\
+		small & KearnsVazirani & Random & 2 & 5 sec \\
+		small & LStar & WMethod & 2 & 35 sec \\
+		small & TTT & WMethod & 2 & 32 sec \\
+		small & RivestSchapire & WMethod & 2 & 33 sec \\
+		small & KearnsVazirani & WMethod & 2 & 33 sec \\
+
+		partial & LStar & Random & 4 & 18 sec \\
+		partial & TTT & Random & 4 & 16 sec \\
+		partial & RivestSchapire & Random & 4 & 13 sec \\
+		partial & KearnsVazirani & Random & 4 & 22 sec \\
+		partial & LStar & WMethod & 4 & 384 sec \\
+		partial & TTT & WMethod & 4 & 390 sec \\
+		partial & RivestSchapire & WMethod & 4 & 384 sec \\
+		partial & KearnsVazirani & WMethod & 4 & 383 sec \\
+
+		full & LStar & Random & 5 & 44 sec \\
+		full & TTT & Random & 5 & 25 sec \\
+		full & RivestSchapire & Random & 4 & 12 sec \\
+		full & KearnsVazirani & Random & 5 & 19 sec \\
+		full & LStar & WMethod & 5 & 2666 sec \\
+		full & TTT & WMethod & 5 & 2632 sec \\
+		full & RivestSchapire & WMethod & 5 & 2638 sec \\
+		full & KearnsVazirani & WMethod & - & - \\
+		\bottomrule
+	\end{tabular}
+	\caption{Learning parameters and resulting model size.}
+\end{table}
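
The determinization described in a4/question2.tex can be summarised as a small wrapper around the adapter. The sketch below is illustrative only and works under stated assumptions: the Adapter interface, the class and method names, the TIMEOUT default output and the FIN/ACK close check are invented for the example and are not code from the tt2015 repository.

// Illustrative sketch only; the Adapter interface and all names below are
// hypothetical and do not come from the tt2015 repository.
import java.util.HashMap;
import java.util.Map;

interface Adapter {
    // Send one abstract input (SYN, ACK, DATA, RST, FIN) and return the observed output.
    String step(String input);

    // Reset the connection so a new membership query starts from the initial state.
    void reset();
}

// Wraps a raw TCP adapter so that the behaviour observed by the learner is deterministic.
class DeterministicAdapter implements Adapter {
    private enum Mode { NORMAL, ERROR, CLOSED }

    private static final String DEFAULT_OUTPUT = "TIMEOUT"; // returned in the sink states

    private final Adapter raw;
    private final Map<String, String> seen = new HashMap<>(); // input history -> first output
    private final StringBuilder history = new StringBuilder();
    private Mode mode = Mode.NORMAL;

    DeterministicAdapter(Adapter raw) {
        this.raw = raw;
    }

    @Override
    public String step(String input) {
        // In ERROR or CLOSED every input is discarded and a default output is returned.
        if (mode != Mode.NORMAL) {
            return DEFAULT_OUTPUT;
        }
        history.append(input).append(';');
        String output = raw.step(input);
        String earlier = seen.putIfAbsent(history.toString(), output);
        if (earlier != null && !earlier.equals(output)) {
            // The same input sequence produced a different output than before:
            // treat the response as non-deterministic and fall into the ERROR sink.
            mode = Mode.ERROR;
            return DEFAULT_OUTPUT;
        }
        if (input.equals("FIN") && output.contains("ACK")) {
            // The connection was closed successfully; move to the CLOSED sink.
            mode = Mode.CLOSED;
        }
        return output;
    }

    @Override
    public void reset() {
        raw.reset();          // start a fresh connection on the system under test
        history.setLength(0); // keep `seen`, so conflicts across queries are still caught
        mode = Mode.NORMAL;
    }
}

The two sink states make the observable behaviour input-enabled and deterministic, which is the property the report relies on; a fuller implementation would also cache answered queries so that a sequence flagged as non-deterministic is answered consistently on later runs.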