From 413e3cf706ac48714004588de72025a6de6cca27 Mon Sep 17 00:00:00 2001
From: Mart Lubbers
Date: Tue, 15 Mar 2016 09:39:40 +0100
Subject: [PATCH] yesterday done

---
 shorts1/yesterday.tex | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/shorts1/yesterday.tex b/shorts1/yesterday.tex
index 6a4e397..3ac5a3f 100644
--- a/shorts1/yesterday.tex
+++ b/shorts1/yesterday.tex
@@ -19,21 +19,42 @@ report about the algorithm.
 \subsubsection*{Strengths \& Weaknesses}
 %Strength (what positive basis is there for publishing/reading it?)
+The main strength of the paper is that it is an easy read. The reader is
+gradually introduced to the theoretical framework and then given clear
+real-world examples that show the capabilities of the algorithms.
 
 %Weaknesses
+A weakness is that the author makes assumptions about the data that are
+not supported. For example, page 255 states that in the worst case all
+$2^n$ configurations must be tested, but that in practice this is almost
+never the case. Furthermore, he cites almost no related work and, after
+examining a single related paper, assumes that no related work exists.
 
 \subsubsection*{Evaluation}
 %Evaluation (if you were running the conference/journal where it was published,
 %would you recommend acceptance?)
 The author is very clear about the strengths and weaknesses of the proposed
-methods.
+methods. The paper even provides a full implementation. I would recommend
+acceptance, but possibly only after more related work has been surveyed.
 
 %Comments on quality of writing
-The paper is an easy read and is a good mix of formal descriptions and
-natural language. Also the structure of the paper is clear and it navigates the
-reader in a natural order through the materials.
+Concerning the quality of writing: the paper is an easy read and offers a
+good mix of formal descriptions and natural language. The structure is
+also clear and guides the reader through the material in a natural order.
+The paper is, however, not deeply embedded in the literature, as was
+already mentioned in the introduction.
 
 \subsubsection*{Discussion}
 %Queries for discussion
-
+\begin{itemize}
+    \item Page 255 states that in the worst case all $2^n$ configurations
+    must be tested, but that in practice this is almost never the case.
+    Is it really almost never the case? This is not obvious, since in
+    other fields of computer science, such as time-complexity analysis,
+    the average complexity is usually closer to the maximal complexity
+    than to the minimal complexity.
+    \item Would it be better to research not so much the delta debugging
+    algorithm itself but the search heuristics, since different
+    clustering heuristics give significantly different results?
+\end{itemize}
 
 \end{document}
-- 
2.20.1