1a.
malloc might fail and thus return NULL.
strncpy is used with a length that is off by one, so the terminating null byte is not copied.
src is too small to fit domain.

To fix this, change the code to:
char *src = malloc(10);
if (src == NULL) {
    perror("malloc");
    exit(EXIT_FAILURE);
}
char *domain = "www.ru.nl";
strncpy(src, domain, 10);
1b.
Probably: this all happens in the same scope and the same control flow, so static analysis can catch these simple errors.
1c.
Yes. A return-to-libc attack overwrites the return address of a function with a pointer to some libc function. The overflow that achieves this usually also destroys the stack canary, so the program can act upon that, for example by refusing to continue execution.
1d.
With annotations, for sure. An off-by-one error is made in the initialize function. When buf is annotated with the fact that it must be of length len, PREfast will complain that at some point buf[len] will be written.

If the code is not annotated with SAL, it will probably not detect the error, since such complex inference is often difficult.
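A minimal sketch of what such an annotation could look like. The function body and the annotation choice are assumptions (the original code is not shown), and the SAL macro is defined away on non-Microsoft compilers so the sketch still compiles:

```c
#include <stddef.h>

/* SAL is only understood by MSVC/PREfast; define the annotation away
 * elsewhere so this sketch compiles everywhere. */
#ifndef _Out_writes_
#define _Out_writes_(n)
#endif

/* Hypothetical initialize() with the off-by-one bug: the loop also
 * writes buf[len], one element past the size promised by the
 * annotation, which is exactly what PREfast would flag. */
void initialize(_Out_writes_(len) char *buf, size_t len)
{
    for (size_t i = 0; i <= len; i++)   /* bug: should be i < len */
        buf[i] = 'x';
}
```
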
2.
The first type of memory safety is that you can never write to or read from memory you are not supposed to write to or read from.

The second type of memory safety is that there is no such thing as uninitialized memory: you can never read memory that has not been initialized.
3a.
In a normal SQL injection you try to input user data that modifies not only the value but the query itself.

A blind SQL injection also modifies the query, but the results are not directly visible; the information is gathered via side effects such as the type of the response or the lack of a response. By trying to trigger errors or long query times you can learn things about the server or the application.
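As an illustration of the first case, a sketch (with a made-up build_query helper) of code that pastes user input straight into the query text, which is exactly the vulnerability:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical vulnerable helper: the user input is formatted directly
 * into the SQL text instead of being passed as a query parameter. */
void build_query(const char *user_input, char *out, size_t outlen)
{
    snprintf(out, outlen,
             "SELECT * FROM users WHERE name = '%s'", user_input);
}
```

With the classic payload `' OR '1'='1` the WHERE clause becomes `name = '' OR '1'='1'`, which matches every row.
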
3b.
TOCTOU misuses the non-atomicity of operations. Often you first check whether you may do something and then do it; in between these two moments a clever attacker can change things. For example, a setuid program wants to write to a file. First it checks that the file is a regular file and not a symlink; after checking, it writes. However, in the meantime another process can replace the file with a symlink to, say, /etc/passwd, thus misleading the program.
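The check-then-use pattern above can be sketched with POSIX calls (the function name is made up); the race window sits between the two system calls:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical check-then-use: access() is the time of check,
 * open() is the time of use. An attacker who wins the race between
 * them makes the program write through a symlink it never checked. */
int write_if_allowed(const char *path, const char *msg)
{
    if (access(path, W_OK) != 0)               /* time of check */
        return -1;
    /* <-- attacker can replace path with a symlink here --> */
    int fd = open(path, O_WRONLY | O_APPEND);  /* time of use */
    if (fd < 0)
        return -1;
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
```

The fix is to drop the separate check and open the file atomically with flags such as O_NOFOLLOW, acting on the error instead.
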
3c.
Whitelisting means only allowing a known subset of inputs; blacklisting means disallowing a known subset of inputs. Whitelisting is safer but often more work, since the set of allowed inputs may be very large. Blacklisting is riskier because it is very difficult to know the exact set of patterns that must be disallowed.
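A whitelist check can be sketched as follows (the allowed character set is an assumption, chosen as if validating a host-name field):

```c
#include <ctype.h>

/* Hypothetical whitelist check: only letters, digits, '.' and '-' are
 * allowed; every other input is rejected. Note that we enumerate what
 * is allowed, not what is dangerous. */
int is_allowed(const char *s)
{
    for (; *s; s++)
        if (!isalnum((unsigned char)*s) && *s != '.' && *s != '-')
            return 0;
    return 1;
}
```
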
4a.
Deserializing an object means that you bypass all checks that the regular object creation techniques (such as constructors) would perform, so you can possibly violate invariants that those checks would have enforced.
4b.
If the program writes the object to a file and later reads it back in, the attacker can inspect the serialized object and change things before it is read back.
5a.
A class that is not privileged to execute a certain operation can hold an object of a class that does have the permission that is denied to it. By calling a method on that object it could do things it was not permitted to do itself, and thus violate the permission rule. When the entire stack is walked, the checker sees that a caller of the function does not have said permission and can therefore deny it.
5b.
try0 will fail because class U does not have permission for P.
try1 will fail because class I does not have the permission; class T does, but class I is also in the stack walk and the permission has not been explicitly granted.
try2 will succeed: class U does not have the permission, but class T grants it by calling enablePrivilege(P).
try4 will succeed in the first call to m2, but the later call to m1 will fail, because the enabled privilege is not kept when going back up the call stack.
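The stack-walking rule can be sketched as a toy model (the frame struct and the bottom-to-top array layout are assumptions, not a real runtime API): walk from the most recent frame down, fail at the first frame whose class lacks P, and stop early at a frame that called enablePrivilege(P).

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of stack inspection: each frame records whether its class
 * holds permission P and whether it called enablePrivilege(P). */
struct frame {
    bool has_p;      /* class of this frame holds permission P */
    bool enabled_p;  /* this frame called enablePrivilege(P)   */
};

/* stack[0] is the bottom frame, stack[depth-1] the most recent one. */
bool check_permission(const struct frame *stack, size_t depth)
{
    for (size_t i = depth; i-- > 0; ) {
        if (!stack[i].has_p)
            return false;   /* an unprivileged caller is on the stack */
        if (stack[i].enabled_p)
            return true;    /* privilege granted here; stop walking */
    }
    return true;            /* every frame on the stack holds P */
}
```
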
6a.
e : t    s1 : ok t    s2 : ok t
-------------------------------
  if e then s1 else s2 : ok t

6b.
Implicit information flow can reveal information about an H component when it occurs in the condition of an if statement. Take for example the statement "if (hi > 0) then lo1 else lo2". This seems harmless, since lo1 and lo2 are both supposed to be safe. However, execution-time analysis may reveal whether hi is bigger than zero, which must not happen since hi is confidential.
This is taken care of in the rule by also requiring e to be safe for type t.
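The same kind of implicit flow can be shown without any timing at all (the function name is made up for illustration): hi is never assigned to lo, yet lo ends up revealing whether hi > 0.

```c
/* Implicit flow: there is no direct assignment from hi to lo, but the
 * branch taken on the confidential value determines lo's final value. */
int implicit_leak(int hi)
{
    int lo;
    if (hi > 0)
        lo = 1;   /* "then lo1" branch */
    else
        lo = 0;   /* "else lo2" branch */
    return lo;
}
```
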
7a.
It means that there is no information leaking from high to low. For example, the statement "if (h > 0) then l=print(l) else print()" leaks information about h and therefore violates it.
7b.
When an exception is thrown, this can inform the attacker about information in h.
8.

9.
We can do generational fuzzing. We know the format and the fact that the messages are ordered, so to test the protocol we can send messages in the wrong order, in the wrong format, or even both.

Moreover, some dumb fuzzing can be done by sending a bunch of nonsense, or by sending it extremely slowly.
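The dumb-fuzzing idea can be sketched as a tiny mutator (the message layout and function name are made up): start from a valid message and flip a few random bits before sending it.

```c
#include <stdlib.h>

/* Dumb (mutation) fuzzing sketch: flip 'flips' random bits in a valid
 * protocol message, producing near-valid nonsense to send. */
void mutate(unsigned char *msg, size_t len, int flips)
{
    for (int i = 0; i < flips; i++)
        msg[rand() % len] ^= (unsigned char)(1u << (rand() % 8));
}
```

A generational fuzzer would instead build messages from the known format, deliberately violating one field or the ordering at a time.
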
10.
Almost all buffer overflows become malicious because the attacker can influence the data with which the buffer overflows. When a language is memory safe, writing outside the bounds of the buffer is prevented in the first place, so these attacks cannot happen.
11.
In the functions add, remove and contains, the value of i must be greater than or equal to 0 and smaller than 10. This can be done with a "requires i >= 0" and a "requires i < 10".

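A sketch of the contract, written here as ACSL-style annotation comments on a C version of the set (the exam presumably uses JML on Java; the array representation and bodies are assumptions, and remove is renamed removeElem to avoid clashing with the C library function):

```c
#include <stdbool.h>

static int set[10];   /* assumed representation: membership flags */

/*@ requires i >= 0;
  @ requires i < 10; */
void add(int i)        { set[i] = 1; }

/*@ requires i >= 0;
  @ requires i < 10; */
void removeElem(int i) { set[i] = 0; }

/*@ requires i >= 0;
  @ requires i < 10; */
bool contains(int i)   { return set[i] == 1; }
```
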
12.
The execution times of i and ii will be the same because PCC does not insert runtime checks: it only statically verifies, using the supplied proof, whether the code is safe, and then leaves it as it is. Binary iii will be slower because memory-safe languages like Java and C# introduce runtime checks for things like array bounds.