From: Kelley van Evert
Date: Wed, 30 Nov 2016 12:29:23 +0000 (+0100)
Subject: spelling
X-Git-Url: https://git.martlubbers.net/?a=commitdiff_plain;h=b54dd2d22b417f03a0772caba0cec3817614b503;p=ssproject1617.git

spelling
---

diff --git a/report/organization.tex b/report/organization.tex
index 51d3f36..e782083 100644
--- a/report/organization.tex
+++ b/report/organization.tex
@@ -14,7 +14,7 @@ Each of us has initially set up the \CMS{} and made ourselves familiar with
 the \CMS{}. This was easy, because one of us had made a \code{Dockerfile} for
 the others to use. This made running and installing the application trivially
 easy. Running the application made us understand the outline and components of
-the application. We could also find some spots were easy to find vulnerabilities
+the application. We could also find some spots where easy to find vulnerabilities
 could be expected. However, looking at the source code was more effective,
 especially when verifying that the \CMS{} \emph{passes} a requirement. Buggy
 code is easy to find. Bugless code is not.
@@ -34,18 +34,19 @@ When we had found that a requirement was not satisfied, we elaborate shortly
 and move on.
 
 This went well, because with five people the individual workload is just not
-that big. Furthermore, finding vulnerabilities is a lot easier that verifying
-the security in a lot of cases. This speeds up the auditing process, because
+that big. Furthermore, finding vulnerabilities is a lot easier than verifying
+the security in a lot of cases. This sped up the auditing process, because
 the \CMS{} turned out to not satisfy the ASVS in most cases.
 
 % Use of Fortify
 Because we were on track early, most of the audit was already done by when we
 were introduced to the Fortify tool. Nonetheless, we used it to verify our own
 verdicts. Some of us have installed and used the Fortify tool itself. These
-students have exported a PDF report, which the others could then use.
+students exported a PDF report and described the results, which the others
+could then use.
 
 % Double-checking process
-When we finished the report, each of us has reread each others' parts to check
+When we finished the report, each of us reread the other parts to check
 if things had been missed or reported incorrect. This may not have thorough, but
 because in the end five pairs of eyes have read all verdicts, we trust that, in
-the end, all verdicts are sufficiently checked.
+the end, we feel all verdicts are sufficiently checked.
diff --git a/report/reflection.testcms_code.tex b/report/reflection.testcms_code.tex
index 08416e3..44d647e 100644
--- a/report/reflection.testcms_code.tex
+++ b/report/reflection.testcms_code.tex
@@ -12,9 +12,11 @@ critical components. Input sanitization happens all over the place (and in some
 cases it does not happen at all). Middleware based design patterns could make
 the processing of input and output a somewhat less cluttered.
 
-Another thing that striked us about the TestCMS code is that all functionality
-was written by the programmer theirself. Although it may make the application
+Another thing that struck us about the TestCMS code is that all functionality
+was written by the programmer him/herself. Although it may make the application
 a bit slower, using a template engine (like Twig\footnote{\url{http://twig.sensiolabs.org/}})
 could make the application design clearer and more secure by design. While a
-template engine is not necesarry, we think that using the new \code{MySQLi} API
+template engine is not necessary, we think that using the new \code{MySQLi} API
 and in combination with prepared statements is a good change to start with.
+This would improve the security by default, as the designers of libraries
+like these usually have more security knowledge than the random programmer.
diff --git a/report/reflection.tools.tex b/report/reflection.tools.tex
index 1805b37..49c7654 100644
--- a/report/reflection.tools.tex
+++ b/report/reflection.tools.tex
@@ -1,7 +1,7 @@
 % How useful were code analysis tools?
 The usefulness of the Fortify Static Code Analysis tool turned out to be very limited.
 Since we had most verdicts ready before a license was provided we couldn't use
-the tool as an initial guide trough the code. This forced us to manually check
+the tool as an initial guide through the code. This forced us to manually check
 the application source which took quite some time. After the tool became available
 we didn't get any new insights regarding potential security risks, just more
 examples of problems we already detected. An example would be the use of the \code{crypt()} \PHP{}
@@ -18,7 +18,7 @@ In our opinion the tool could have proved very useful in pointing out certain se
 flaws in the initial stage of this project since we spent a lot of time scanning
 the application code-base. Since Fortify located relatively low-level problems
 we could have used these to locate potential hot-spots.
-Saving us from going trough every source file and trying to determine if they are part of the
+Saving us from going through every source file and trying to determine if they are part of the
 applications external access points.
 In order to improve upon the tool we suggest a larger focus on determining which parts of a
 application need to be secure and less on pointing out actual security flaws.