\documentclass[../thesis.tex]{subfiles}

\input{subfilepreamble}

\begin{document}
\input{subfileprefix}
\chapter{Implementation}%
\label{chp:implementation}
\begin{chapterabstract}
This chapter shows the implementation of the \gls{MTASK} system.
It is threefold: first it shows the implementation of the byte code compiler for \gls{MTASK}'s \gls{TOP} language, then it details the implementation of \gls{MTASK}'s \gls{TOP} engine that executes the \gls{MTASK} tasks on the microcontroller, and finally it shows how the integration of \gls{MTASK} tasks and \glspl{SDS} is implemented both on the server and on the device.
\end{chapterabstract}

Microcontrollers usually have flash-based program memory which wears out fairly quickly.
For example, the ATmega328P in the \gls{ARDUINO} UNO is rated for \num{10000} write cycles.
While this sounds like a lot, if new tasks are sent to the device every minute or so, the program memory wears out in less than seven days.
Hence, for dynamic applications, generating code at run time for interpretation on the device is necessary.
This byte code is then interpreted on MCUs with very little memory and processing power, thus saving precious write cycles of the program memory.

In order to provide the device with the tools to interpret the byte code, it is programmed with a \gls{RTS}, a customisable domain-specific \gls{OS} that takes care of the execution of tasks but also of low-level mechanisms such as communication, multitasking, and memory management.
Once the device is programmed with the \gls{MTASK} \gls{RTS}, it can continuously receive new tasks.

\subsection{Instruction set}
The instruction set is a fairly standard stack machine instruction set extended with special \gls{TOP} instructions.
\Cref{lst:instruction_type} shows the \gls{CLEAN} type representing the instruction set of which \cref{tbl:instr_task} gives detailed semantics.
Type synonyms are used to provide insight into the arguments of the instructions.
One notable instruction is the \cleaninline{MkTask} instruction: it allocates and initialises a task tree node and pushes a pointer to it on the stack.

\begin{lstClean}[caption={The type housing the instruction set.},label={lst:instruction_type}]
:: ArgWidth :== UInt8      :: ReturnWidth :== UInt8
:: Depth    :== UInt8      :: Num         :== UInt8
:: SdsId    :== UInt8      :: JumpLabel   =:  JL UInt16

//** Datatype housing all instructions
:: BCInstr
	//Jumps
	= BCJumpF JumpLabel | BCJump JumpLabel | BCLabel JumpLabel | BCJumpSR ArgWidth JumpLabel
	//Return instructions
	| BCReturn ReturnWidth ArgWidth | BCTailcall ArgWidth ArgWidth JumpLabel
	//Arguments
	| BCArgs ArgWidth ArgWidth
	//Task node creation and refinement
	| BCMkTask BCTaskType | BCTuneRateMs | BCTuneRateSec
	//Task value ops
	| BCIsStable | BCIsUnstable | BCIsNoValue | BCIsValue
	//Stack ops
	| BCPush String255 | BCPop Num | BCRot Depth Num | BCDup | BCPushPtrs
	//Casting
	| BCItoR | BCItoL | BCRtoI | ...
	//Arithmetic
	| BCAddI | BCSubI | ...
	...

//** Datatype housing all task types
:: BCTaskType
	= BCStableNode ArgWidth | BCUnstableNode ArgWidth
	//Pin io
	| BCReadD | BCWriteD | BCReadA | BCWriteA | BCPinMode
	//Interrupts
	| BCInterrupt
	//Repeat
	| BCRepeat
	//Delay
	| BCDelay | BCDelayUntil //* Only for internal use
	//Parallel
	| BCTAnd | BCTOr
	//Step
	| BCStep ArgWidth JumpLabel
	//Sds ops
	| BCSdsGet SdsId | BCSdsSet SdsId | BCSdsUpd SdsId JumpLabel
	//Rate limiter
	| BCRateLimit
	//Peripherals
	//DHT
	| BCDHTTemp UInt8 | BCDHTHumid UInt8
	...
\end{lstClean}

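To illustrate how such a stack machine processes these instructions, the following Python sketch interprets a handful of them.
It is purely illustrative: the instruction names mirror the \cleaninline{BCInstr} constructors, but the actual interpreter is part of the \gls{MTASK} \gls{RTS}, written in C, and handles many more details such as instruction widths and task trees.

```python
# Illustrative sketch (not the actual mTask RTS): a minimal stack machine
# interpreting a few of the instructions above. Instructions are modelled
# as tuples of an opcode name and its arguments.

def interpret(program):
    stack = []
    for instr, *args in program:
        if instr == "BCPush":        # push a literal value
            stack.append(args[0])
        elif instr == "BCPop":       # pop n values
            for _ in range(args[0]):
                stack.pop()
        elif instr == "BCAddI":      # integer addition
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr == "BCSubI":      # integer subtraction
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif instr == "BCDup":       # duplicate the top of the stack
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown instruction: {instr}")
    return stack

# (lit 3 +. lit 4) -. lit 2 compiles roughly to:
program = [("BCPush", 3), ("BCPush", 4), ("BCAddI",),
           ("BCPush", 2), ("BCSubI",)]
```

Running this program leaves a single cell on the stack: the value 5.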
\subsection{Compiler}
The byte code compiler for the \gls{MTASK} language is implemented as a monad stack containing a writer monad and a state monad.
The writer monad is used to generate code snippets locally without having to store them in the monadic values.
The state monad accumulates the code and stores the stateful data the compiler requires.
\Cref{lst:compiler_state} shows the data type for the state, storing:
the function the compiler currently is in;
the code of the main expression;
the context (see \todo{insert ref to compilation rules step here});
the code for the functions;
the next fresh label;
a list of all the used \glspl{SDS}, either local \glspl{SDS} containing the initial value (\cleaninline{Left}) or lifted \glspl{SDS} (see \cref{sec:liftsds}) containing a reference to the associated \gls{ITASK} \gls{SDS};
and finally a list of the peripherals used.

\begin{lstClean}[label={lst:compiler_state},caption={\Gls{MTASK}'s byte code compiler type.}]
:: BCInterpret a :== StateT BCState (WriterT [BCInstr] Identity) a
:: BCState =
	{ bcs_infun      :: JumpLabel
	, bcs_mainexpr   :: [BCInstr]
	, bcs_context    :: [BCInstr]
	, bcs_functions  :: Map JumpLabel BCFunction
	, bcs_freshlabel :: JumpLabel
	, bcs_sdses      :: [Either String255 MTLens]
	, bcs_hardware   :: [BCPeripheral]
	}
:: BCFunction =
	{ bcf_instructions :: [BCInstr]
	, bcf_argwidth     :: UInt8
	, bcf_returnwidth  :: UInt8
	}
\end{lstClean}
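To give an impression of how the compiler threads this state, the following Python sketch mimics a small part of \cleaninline{BCState}: an instruction accumulator playing the role of the writer monad, a fresh-label supply, and a function store.
All names are modelled after the listing above; the actual compiler is the \gls{CLEAN} monad stack shown there, and this sketch is only a hypothetical model of its bookkeeping.

```python
# Illustrative sketch (not the actual Clean code): a mutable compiler state
# mimicking BCState with an instruction accumulator, a fresh-label supply
# and a function store.
from dataclasses import dataclass, field

@dataclass
class BCState:
    instructions: list = field(default_factory=list)  # plays the writer part
    freshlabel: int = 0                               # bcs_freshlabel
    functions: dict = field(default_factory=dict)     # bcs_functions

    def fresh(self):
        """Hand out the next unused jump label."""
        label, self.freshlabel = self.freshlabel, self.freshlabel + 1
        return label

    def tell(self, instrs):
        """Append instructions, like the writer monad's tell."""
        self.instructions.extend(instrs)

s = BCState()
else_label, endif_label = s.fresh(), s.fresh()
s.tell([("BCJumpF", else_label)])
```

After these calls, two distinct labels have been handed out and one instruction has been accumulated.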

Executing the compiler is done by providing an initial state.
After compilation, several post-processing steps are applied to make the code suitable for the microprocessor.
First, all \cleaninline{BCReturn} instructions in tail positions are replaced by \cleaninline{BCTailcall} instructions to implement tail call elimination.
Furthermore, all byte code is concatenated, resulting in one big program.
Many instructions have commonly used arguments, so shorthands are introduced to reduce the program size.
For example, the \cleaninline{BCArg} instruction is often called with arguments \numrange{0}{2} and can be replaced by the \cleaninline{BCArg0}--\cleaninline{BCArg2} shorthands.
Furthermore, redundant instructions (e.g.\ a pop directly after a push) are removed as well in order not to burden the code generation with these intricacies.
Finally, the labels are resolved to represent actual program addresses instead of freshly generated identifiers.
After the byte code is ready, the lifted \glspl{SDS} are resolved to provide an initial value for them.
The byte code, \gls{SDS} specification, and peripheral specifications are the result of the process, ready to be sent to the device.
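Two of these post-processing steps can be sketched in Python: a peephole pass that drops a push directly followed by a pop, and a label-resolution pass that replaces labels by program addresses.
This is a simplified, hypothetical model: it assumes every instruction occupies exactly one address and omits the tail call rewrite and the shorthand introduction of the real compiler.

```python
# Illustrative sketch of two post-processing passes over a flat instruction
# list: peephole removal of BCPush/BCPop pairs and resolving BCLabel
# markers to actual program addresses.

def peephole(program):
    out = []
    for instr in program:
        # a push directly followed by a pop of one value is a no-op
        if instr == ("BCPop", 1) and out and out[-1][0] == "BCPush":
            out.pop()
        else:
            out.append(instr)
    return out

def resolve_labels(program):
    # first pass: record the address of every label (labels take no space)
    addresses, addr = {}, 0
    for instr in program:
        if instr[0] == "BCLabel":
            addresses[instr[1]] = addr
        else:
            addr += 1
    # second pass: drop the labels and patch the jump targets
    out = []
    for instr in program:
        if instr[0] == "BCLabel":
            continue
        if instr[0] in ("BCJump", "BCJumpF"):
            out.append((instr[0], addresses[instr[1]]))
        else:
            out.append(instr)
    return out

program = [("BCPush", 1), ("BCPop", 1), ("BCJump", "end"),
           ("BCPush", 7), ("BCLabel", "end")]
program = resolve_labels(peephole(program))  # [("BCJump", 2), ("BCPush", 7)]
```

The push/pop pair disappears and the jump now targets the address one past the last instruction, so the program falls through to the end.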

\section{Compilation rules}
This section describes the compilation rules, the translation from abstract syntax to byte code.
The compilation scheme consists of three schemes\slash{}functions.
When something is surrounded by $\parallel$, e.g.\ $\parallel{}a_i\parallel{}$, it denotes the number of stack cells required to store it.

Some schemes have a \emph{context} $r$ as an argument which contains information about the location of the arguments in scope.
More information is given in the schemes requiring such arguments.

\newcommand{\cschemeE}[2]{\mathcal{E}\llbracket#1\rrbracket~#2}
\newcommand{\cschemeF}[1]{\mathcal{F}\llbracket#1\rrbracket}
\newcommand{\cschemeS}[3]{\mathcal{S}\llbracket#1\rrbracket~#2~#3}
\begin{table}
	\centering
	\caption{The compilation schemes.}
	\begin{tabularx}{\linewidth}{l X}
		\toprule
		Scheme & Description\\
		\midrule
		$\cschemeE{e}{r}$ & Produces the value of expression $e$ given the context $r$ and pushes it on the stack.
			The result can be a basic value or a pointer to a task.\\
		$\cschemeF{e}$ & Generates the byte code for functions.\\
		$\cschemeS{e}{r}{w}$ & Generates the function for the step continuation given the context $r$ and the width $w$ of the left-hand side task value.\\
		\bottomrule
	\end{tabularx}
\end{table}

\subsection{Expressions}
Almost all expression constructions are compiled using $\mathcal{E}$.
The argument of $\mathcal{E}$ is the context (see \cref{ssec:functions}).
Values are always placed on the stack; tuples and other compound data types are unpacked.
Function calls, function arguments, and tasks are also compiled using $\mathcal{E}$ but their compilation is explained later.

\begin{align*}
\cschemeE{\text{\cleaninline{lit}}~e}{r} & = \text{\cleaninline{BCPush (bytecode e)}};\\
\cschemeE{e_1\mathbin{\text{\cleaninline{+.}}}e_2}{r} & = \cschemeE{e_1}{r};
	\cschemeE{e_2}{r};
	\text{\cleaninline{BCAdd}};\\
{} & \text{\emph{Similar for other binary operators}}\\
\cschemeE{\text{\cleaninline{Not}}~e}{r} & =
	\cschemeE{e}{r};
	\text{\cleaninline{BCNot}};\\
{} & \text{\emph{Similar for other unary operators}}\\
\cschemeE{\text{\cleaninline{If}}~e_1~e_2~e_3}{r} & =
	\cschemeE{e_1}{r};
	\text{\cleaninline{BCJmpF}}\enskip l_{else};\\
{} & \mathbin{\phantom{=}} \cschemeE{e_2}{r}; \text{\cleaninline{BCJmp}}\enskip l_{endif};\\
{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{else}; \cschemeE{e_3}{r}; \text{\cleaninline{BCLabel}}\enskip l_{endif};\\
{} & \text{\emph{Where $l_{else}$ and $l_{endif}$ are fresh labels}}\\
\cschemeE{\text{\cleaninline{tupl}}~e_1~e_2}{r} & =
	\cschemeE{e_1}{r};
	\cschemeE{e_2}{r};\\
{} & \text{\emph{Similar for other unboxed compound data types}}\\
\cschemeE{\text{\cleaninline{first}}~e}{r} & =
	\cschemeE{e}{r};
	\text{\cleaninline{BCPop}}\enskip w;\\
{} & \text{\emph{Where $w$ is the width of the left value and}}\\
{} & \text{\emph{similar for other unboxed compound data types}}\\
\cschemeE{\text{\cleaninline{second}}\enskip e}{r} & =
	\cschemeE{e}{r};
	\text{\cleaninline{BCRot}}\enskip w_1\enskip (w_1+w_2);
	\text{\cleaninline{BCPop}}\enskip w_2;\\
{} & \text{\emph{Where $w_1$ is the width of the left and $w_2$ of the right value,}}\\
{} & \text{\emph{similar for other unboxed compound data types}}
\end{align*}
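The scheme for conditionals can be illustrated with a small Python sketch that compiles an expression tree to a flat instruction list, emitting a conditional and an unconditional jump with two fresh labels as in the rules above.
The constructor and instruction names are simplifications of the real ones; this is a hypothetical model, not the \gls{CLEAN} implementation.

```python
# Illustrative sketch of the E scheme: compiling a tiny expression AST to
# a flat instruction list. The If rule emits BCJumpF/BCJump with two
# fresh labels, following the compilation scheme above.

fresh = iter(range(1000))  # fresh-label supply

def compile_expr(e):
    kind = e[0]
    if kind == "lit":                       # E[[lit e]] = BCPush e
        return [("BCPush", e[1])]
    if kind == "+.":                        # E[[e1 +. e2]]
        return compile_expr(e[1]) + compile_expr(e[2]) + [("BCAddI",)]
    if kind == "If":                        # E[[If c t e]]
        l_else, l_endif = next(fresh), next(fresh)
        return (compile_expr(e[1]) + [("BCJumpF", l_else)]
              + compile_expr(e[2]) + [("BCJump", l_endif), ("BCLabel", l_else)]
              + compile_expr(e[3]) + [("BCLabel", l_endif)])
    raise ValueError(kind)

code = compile_expr(("If", ("lit", True),
                           ("lit", 1),
                           ("+.", ("lit", 1), ("lit", 1))))
```

Both branches end up in the instruction stream, separated by the fresh labels, exactly as the $\mathcal{E}$ rule for \cleaninline{If} prescribes.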

Translating $\mathcal{E}$ to \gls{CLEAN} code is very straightforward: it basically means executing the monad.
Almost always, the type of the interpretation is not used, i.e.\ it is a phantom type.
To still have the functions return the correct type, the \cleaninline{tell`}\footnote{\cleaninline{tell` :: [BCInstr] -> BCInterpret a}} helper is used.
This function is similar to the writer monad's \cleaninline{tell} function but is cast to the correct type.
\Cref{lst:imp_arith} shows the implementation for the arithmetic and conditional expressions.
Note that $r$, the context, is not an explicit argument but is stored in the state.

\begin{lstClean}[caption={Interpretation implementation for the arithmetic and conditional classes.},label={lst:imp_arith}]
instance expr BCInterpret where
	lit t = tell` [BCPush (toByteCode{|*|} t)]
	(+.) a b = a >>| b >>| tell` [BCAdd]
	...
	If c t e = freshlabel >>= \elselabel->freshlabel >>= \endiflabel->
		c >>| tell` [BCJumpF elselabel] >>|
		t >>| tell` [BCJump endiflabel,BCLabel elselabel] >>|
		e >>| tell` [BCLabel endiflabel]
\end{lstClean}

\subsection{Functions}
Compiling functions occurs in $\mathcal{F}$, which generates byte code for the complete program by iterating over the functions and ending with the main expression.
When compiling the body of a function, the arguments of the function are added to the context so that their addresses can be determined when referencing arguments.
The main expression is a special case of $\mathcal{F}$ since it neither has arguments nor something to continue with.
Therefore, it is just compiled using $\mathcal{E}$.

\begin{align*}
\cschemeF{main=m} & =
	\cschemeE{m}{[]};\\
\cschemeF{f~a_0 \ldots a_n = b~\text{\cleaninline{In}}~m} & =
	\text{\cleaninline{BCLabel}}~f;\\
{} & \mathbin{\phantom{=}} \cschemeE{b}{[\langle f, i\rangle \mid i\in \{(\Sigma^n_{j=0}\parallel{}a_j\parallel{})..0\}]};\\
{} & \mathbin{\phantom{=}} \text{\cleaninline{BCReturn}}~\parallel{}b\parallel{}~n;\\
{} & \mathbin{\phantom{=}} \cschemeF{m};
\end{align*}
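The construction of the context in this scheme can be sketched as follows: every argument $a_i$ of a function $f$ is assigned a stack offset, counting down from the total argument width $\Sigma^n_{j=0}\parallel{}a_j\parallel{}$.
The Python function below is a hypothetical model of this bookkeeping only; names and the exact layout are simplifications of the real \gls{CLEAN} code.

```python
# Illustrative sketch of the context construction in the F scheme: each
# argument a_i (given by its width in stack cells, ||a_i||) is mapped to
# an offset counting down from the total argument width.

def make_context(fname, widths):
    """Map each argument index of function fname to its stack offset."""
    total = sum(widths)
    ctx, offset = [], total
    for i, w in enumerate(widths):
        # argument i occupies the w cells just below the running offset
        offset -= w
        ctx.append(((fname, i), offset))
    return ctx
```

For a function with arguments of widths 1, 2 and 1, the offsets run down from 3 to 0, so wider arguments simply claim more consecutive cells.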
%
%A function call starts by pushing the stack and frame pointer, and making space for the program counter (a) followed by evaluating the arguments in reverse order (b).
%On executing \Cl{BCJumpSR}, the program counter is set and the interpreter jumps to the function (c).
%When the function returns, the return value overwrites the old pointers and the arguments.
%This occurs right after a \Cl{BCReturn} (d).
%Putting the arguments on top of pointers and not reserving space for the return value uses little space and facilitates tail call optimization.
%
%\begin{figure}
%	\subfigure[\Cl{BCPushPtrs}]{\includegraphics[width=.24\linewidth]{memory1}}
%	\subfigure[Arguments]{\includegraphics[width=.24\linewidth]{memory2}}
%	\subfigure[\Cl{BCJumpSR}]{\includegraphics[width=.24\linewidth]{memory3}}
%	\subfigure[\Cl{BCReturn}]{\includegraphics[width=.24\linewidth]{memory4}}
%	\caption{The stack layout during function calls.}
%	\Description{A visual representation of the stack layout during a function call and a return.}
%\end{figure}
%
%Calling a function and referencing function arguments are an extension to $\mathcal{E}$ as shown below.
%Arguments may be at different places on the stack at different times (see Subsection~\ref{ssec:step}) and therefore the exact location always has to be determined from the context using \Cl{findarg}\footnote{%
%	\lstinline{findarg [l`:r] l = if (l == l`) 0 (1 + findarg r l)}
%}.
%Compiling argument $a_{f^i}$, the $i$th argument in function $f$, consists of traversing all positions in the current context.
%Arguments wider than one stack cell are fetched in reverse to preserve the order.
%
%\begin{compilationscheme}
%	\cschemeE{f(a_0, \ldots, a_n)}{r} & =
%		\text{\Cl{BCPushPtrs}};\\
%	{} & \mathbin{\phantom{=}} \cschemeE{a_n}{r}; \cschemeE{a_{\ldots}}{r}; \cschemeE{a_0}{r};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCJumpSR}}\enskip n\enskip f;\\
%	\cschemeE{a_{f^i}}{r} & =
%		\text{\Cl{BCArg} findarg}(r, f, i)\enskip \text{for all}\enskip i\in\{w\ldots v\};\\
%	{} & v = \Sigma^{i-1}_{j=0}\|a_{f^j}\|\\
%	{} & w = v + \|a_{f^i}\|\\
%\end{compilationscheme}
%
%Translating the compilation schemes for functions to Clean is not as straightforward as other schemes due to the nature of shallow embedding.
%The \Cl{fun} class has a single function with a single argument.
%This argument is a Clean function that---when given a callable Clean function representing the mTask function---will produce \Cl{main} and a callable function.
%To compile this, the argument must be called with a function representing a function call in mTask.
%Listing~\ref{lst:fun_imp} shows the implementation for this as Clean code.
%To uniquely identify the function, a fresh label is generated.
%The function is then called with the \Cl{callFunction} helper function that generates the instructions that correspond to calling the function.
%That is, it pushes the pointers, compiles the arguments, and writes the \Cl{JumpSR} instruction.
%The resulting structure (\Cl{g In m}) contains a function representing the mTask function (\Cl{g}) and the \Cl{main} structure to continue with.
%To get the actual function, \Cl{g} must be called with representations for the arguments, i.e.\ using \Cl{findarg} for all arguments.
%The arguments are added to the context and \Cl{liftFunction} is called with the label, the argument width and the compiler.
%This function executes the compiler, decorates the instructions with a label and places them in the function dictionary together with the metadata such as the argument width.
%After lifting the function, the context is cleared again and compilation continues with the rest of the program.
%
%\begin{lstlisting}[language=Clean,label={lst:fun_imp},caption={The backend implementation for functions.}]
%instance fun (BCInterpret a) BCInterpret | type a where
%	fun def = {main=freshlabel >>= \funlabel->
%		let (g In m) = def \a->callFunction funlabel (byteWidth a) [a]
%		in  addToCtx funlabel zero (argwidth def)
%		>>| liftFunction funlabel (argwidth def)
%			(g (findArgs funlabel zero (argwidth def))) Nothing
%		>>| clearCtx >>| m.main
%	}
%
%callFunction :: JumpLabel UInt8 [BCInterpret b] -> BCInterpret c | ...
%liftFunction :: JumpLabel UInt8 (BCInterpret a) (Maybe UInt8) -> BCInterpret ()
%\end{lstlisting}
%
%\subsection{Tasks}\label{ssec:scheme_tasks}
%Task trees are created with the \Cl{BCMkTask} instruction that allocates a node and pushes it to the stack.
%It pops arguments from the stack according to the given task type.
%The following extension of $\mathcal{E}$ shows this compilation scheme (except for the step combinator, explained in Subsection~\ref{ssec:step}).
%
%\begin{compilationscheme}
%	\cschemeE{\text{\Cl{rtrn}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCStable}}_{\|e\|};\\
%	\cschemeE{\text{\Cl{unstable}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCUnstable}}_{\|e\|};\\
%	\cschemeE{\text{\Cl{readA}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCReadA}};\\
%	\cschemeE{\text{\Cl{writeA}}\enskip e_1\enskip e_2}{r} & =
%		\cschemeE{e_1}{r};
%		\cschemeE{e_2}{r};
%		\text{\Cl{BCMkTask BCWriteA}};\\
%	\cschemeE{\text{\Cl{readD}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCReadD}};\\
%	\cschemeE{\text{\Cl{writeD}}\enskip e_1\enskip e_2}{r} & =
%		\cschemeE{e_1}{r};
%		\cschemeE{e_2}{r};
%		\text{\Cl{BCMkTask BCWriteD}};\\
%	\cschemeE{\text{\Cl{delay}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCDelay}};\\
%	\cschemeE{\text{\Cl{rpeat}}\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCRepeat}};\\
%	\cschemeE{e_1\text{\Cl{.||.}}e_2}{r} & =
%		\cschemeE{e_1}{r};
%		\cschemeE{e_2}{r};
%		\text{\Cl{BCMkTask BCOr}};\\
%	\cschemeE{e_1\text{\Cl{.&&.}}e_2}{r} & =
%		\cschemeE{e_1}{r};
%		\cschemeE{e_2}{r};
%		\text{\Cl{BCMkTask BCAnd}};\\
%\end{compilationscheme}
%
%This simply translates to Clean code by writing the correct \Cl{BCMkTask} instruction as exemplified in Listing~\ref{lst:imp_ret}.
%
%\begin{lstlisting}[language=Clean,caption={The backend implementation for \Cl{rtrn}.},label={lst:imp_ret}]
%instance rtrn BCInterpret where rtrn m = m >>| tell` [BCMkTask (bcstable m)]
%\end{lstlisting}
%
%\subsection{Step combinator}\label{ssec:step}
%The \Cl{step} construct is a special type of task because the task value of the left-hand side may change over time.
%Therefore, the continuation tasks on the right-hand side are \emph{observing} this task value and acting upon it.
%In the compilation scheme, all continuations are first converted to a single function that has two arguments: the stability of the task and its value.
%This function either returns a pointer to a task tree or fails (denoted by $\bot$).
%It is special because in the generated function, the task value of a task can actually be inspected.
%Furthermore, it is a lazy node in the task tree: the right-hand side may yield a new task tree after several rewrite steps (i.e.\ it is allowed to create infinite task trees using step combinators).
%The function is generated using the $\mathcal{S}$ scheme that requires two arguments: the context $r$ and the width of the left-hand side so that it can determine the position of the stability which is added as an argument to the function.
%The resulting function is basically a list of if-then-else constructions to check all predicates one by one.
%Some optimization is possible here but has currently not been implemented.
%
%\begin{compilationscheme}
%	\cschemeE{t_1\text{\Cl{>>*.}}t_2}{r} & =
%		\cschemeE{a_{f^i}}{r}, \langle f, i\rangle\in r;
%		\text{\Cl{BCMkTask}}\enskip \text{\Cl{BCStable}}_{\|r\|};\\
%	{} & \mathbin{\phantom{=}} \cschemeE{t_1}{r};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCMkTask}}\enskip \text{\Cl{BCAnd}};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCMkTask}}\\
%	{} & \mathbin{\phantom{=}} \enskip (\text{\Cl{BCStep}}\enskip (\cschemeS{t_2}{(r + [\langle l_s, i\rangle])}{\|t_1\|}));\\
%%
%	\cschemeS{[]}{r}{w} & =
%		\text{\Cl{BCPush}}\enskip \bot;\\
%	\cschemeS{\text{\Cl{IfValue}}\enskip f\enskip t:cs}{r}{w} & =
%		\text{\Cl{BCArg}} (\|r\| + w);
%		\text{\Cl{BCIsValue}};\\
%	{} & \mathbin{\phantom{=}} \cschemeE{f}{r};
%		\text{\Cl{BCAnd}};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCJmpF}}\enskip l_1;\\
%	{} & \mathbin{\phantom{=}} \cschemeE{t}{r};
%		\text{\Cl{BCJmp}}\enskip l_2;\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCLabel}}\enskip l_1;
%		\cschemeS{cs}{r}{w};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCLabel}}\enskip l_2;\\
%	{} & \text{\emph{Where $l_1$ and $l_2$ are fresh labels}}\\
%	{} & \text{\emph{Similar for \Cl{IfStable} and \Cl{IfUnstable}}}\\
%	\cschemeS{\text{\Cl{IfNoValue}}\enskip t:cs}{r}{w} & =
%		\text{\Cl{BCArg}} (\|r\|+w);
%		\text{\Cl{BCIsNoValue}};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCJmpF}}\enskip l_1;\\
%	{} & \mathbin{\phantom{=}} \cschemeE{t}{r};
%		\text{\Cl{BCJmp}}\enskip l_2;\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCLabel}}\enskip l_1;
%		\cschemeS{cs}{r}{w};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCLabel}}\enskip l_2;\\
%	{} & \text{\emph{Where $l_1$ and $l_2$ are fresh labels}}\\
%	\cschemeS{\text{\Cl{Always}}\enskip f:cs}{r}{w} & =
%		\cschemeE{f}{r};\\
%\end{compilationscheme}
%
%First the context is evaluated.
%The context contains arguments from functions and steps that need to be preserved after rewriting.
%The evaluated context is combined with the left-hand side task value by means of a \Cl{.&&.} combinator to store it in the task tree so that it is available after a rewrite.
%This means that the task tree is transformed as follows:
%
%\begin{lstlisting}[language=Clean]
%t1 >>= \v1->t2 >>= \v2->t3 >>= ...
%//is transformed to
%t1 >>= \v1->rtrn v1 .&&. t2 >>= \v2->rtrn (v1, v2) .&&. t3 >>= ...
%\end{lstlisting}
%
%The translation to Clean is given in Listing~\ref{lst:imp_seq}.
%
%\begin{lstlisting}[language=Clean,caption={Backend implementation for the step class.},label={lst:imp_seq}]
%instance step BCInterpret where
%	(>>*.) lhs cont
%		//Fetch a fresh label and fetch the context
%		= freshlabel >>= \funlab->gets (\s->s.bcs_context)
%		//Generate code for lhs
%		>>= \ctx->lhs
%		//Possibly add the context
%		>>| tell` (if (ctx =: []) []
%			//The context is just the arguments up till now in reverse
%			(  [BCArg (UInt8 i)\\i<-reverse (indexList ctx)]
%			++ map BCMkTask (bcstable (UInt8 (length ctx)))
%			++ [BCMkTask BCTAnd]
%			))
%		//Increase the context
%		>>| addToCtx funlab zero lhswidth
%		//Lift the step function
%		>>| liftFunction funlab
%			//Width of the arguments is the width of the lhs plus the
%			//stability plus the context
%			(one + lhswidth + (UInt8 (length ctx)))
%			//Body label ctx width continuations
%			(contfun funlab (UInt8 (length ctx)))
%			//Return width (always 1, a task pointer)
%			(Just one)
%		>>| modify (\s->{s & bcs_context=ctx})
%		>>| tell` [BCMkTask $ instr rhswidth funlab]
%
%toContFun :: JumpLabel UInt8 -> BCInterpret a
%toContFun steplabel contextwidth
%	= foldr tcf (tell` [BCPush fail]) cont
%where
%	tcf (IfStable f t)
%		= If ((stability >>| tell` [BCIsStable]) &. f val)
%			(t val >>| tell` [])
%		...
%	stability = tell` [BCArg $ lhswidth + contextwidth]
%	val = retrieveArgs steplabel zero lhswidth
%\end{lstlisting}
%
%\subsection{Shared Data Sources}
%The compilation scheme for SDS definitions is a trivial extension to $\mathcal{F}$ since there is no code generated as seen below.
%
%\begin{compilationscheme}
%	\cschemeF{\text{\Cl{sds}}\enskip x=i\enskip \text{\Cl{In}}\enskip m} & =
%		\cschemeF{m};\\
%	\cschemeF{\text{\Cl{liftsds}}\enskip x=i\enskip \text{\Cl{In}}\enskip m} & =
%		\cschemeF{m};\\
%\end{compilationscheme}
%
%The SDS access tasks have a compilation scheme similar to other tasks (see~Subsection~\ref{ssec:scheme_tasks}).
%The \Cl{getSds} task just pushes a task tree node with the SDS identifier embedded.
%The \Cl{setSds} task evaluates the value, lifts that value to a task tree node and creates an SDS set node.
%
%\begin{compilationscheme}
%	\cschemeE{\text{\Cl{getSds}}\enskip s}{r} & =
%		\text{\Cl{BCMkTask}} (\text{\Cl{BCSdsGet}} s);\\
%	\cschemeE{\text{\Cl{setSds}}\enskip s\enskip e}{r} & =
%		\cschemeE{e}{r};
%		\text{\Cl{BCMkTask BCStable}}_{\|e\|};\\
%	{} & \mathbin{\phantom{=}} \text{\Cl{BCMkTask}} (\text{\Cl{BCSdsSet}} s);\\
%\end{compilationscheme}
%
%While there is no code generated in the definition, the bytecode compiler stores the SDS data in the \Cl{bcs_sdses} field in the compilation state.
%The SDSs are typed as functions in the host language so an argument for this function must be created that represents the SDS on evaluation.
%For this, a \Cl{BCInterpret} is created that emits this identifier.
%When passing it to the function, the initial value of the SDS is returned.
%This initial value is stored as a bytecode encoded value in the state and the compiler continues with the rest of the program.
%
%Compiling \Cl{getSds} is a matter of executing the \Cl{BCInterpret} representing the SDS, which yields the identifier that can be embedded in the instruction.
%Setting the SDS is similar: the identifier is retrieved and the value is put in a task tree so that the resulting task can remember the value it has written.
%Lifted SDSs are compiled in a very similar way.
%The only difference is that there is no initial value but an iTasks SDS when executing the Clean function.
%A lens on this SDS converting \Cl{a} from the \Cl{Shared a} to a \Cl{String255}---a bytecode encoded version---is stored in the state.
%The encoding and decoding itself is unsafe when used directly but the type system of the language and the abstractions make it safe.
%Upon sending the mTask task to the device, the initial values of the lifted SDSs are fetched to complete the SDS specification.
%
%\begin{lstlisting}[language=Clean,caption={Backend implementation for the SDS classes.},label={lst:comp_sds}]
%:: Sds a = Sds Int
%instance sds BCInterpret where
%	sds def = {main = freshsds >>= \sdsi->
%		let sds = modify (\s->{s & bcs_sdses=put sdsi
%					(Left (toByteCode t)) s.bcs_sdses})
%				>>| pure (Sds sdsi)
%			(t In e) = def sds
%		in e.main}
%	getSds f = f >>= \(Sds i)-> tell` [BCMkTask (BCSdsGet (fromInt i))]
%	setSds f v = f >>= \(Sds i)->v >>| tell`
%		(  map BCMkTask (bcstable (byteWidth v))
%		++ [BCMkTask (BCSdsSet (fromInt i))])
%\end{lstlisting}
%
%\section{Run time system}
%
%The RTS is designed to run on systems with as little as 2kB of RAM.
%Aggressive memory management is therefore vital.
%Not all firmwares for MCUs support heaps and---when they do---allocation often leaves holes when not used in a Last In First Out strategy.
%Therefore the RTS uses a chunk of memory in the global data segment with its own memory manager tailored to the needs of mTask.
%The size of this block can be changed in the configuration of the RTS if necessary.
%On an Arduino UNO---equipped with 2kB of RAM---this size can be about 1500 bytes.
%
%In memory, task data grows from the bottom up and an interpreter stack is located directly on top of it growing in the same direction.
%As a consequence, the stack moves when a new task is received.
%This never happens during execution because communication is always processed before execution.
%Values in the interpreter are always stored on the stack, even tuples.
%Task trees grow from the top down as in a heap.
%This approach allows for flexible ratios, i.e.\ many tasks and small trees or few tasks and big trees.
%
%The event loop of the RTS is executed repeatedly and consists of three distinct phases.
%
%%TODO possibly remove the subsubsections
%\subsubsection{Communication}
%In the first phase, the communication channels are processed.
%The messages announcing SDS updates are applied immediately; the initialization of new tasks is delayed until the next phase.
%
%\subsubsection{Execution}
%The second phase consists of executing tasks.
%The RTS executes all tasks in a round-robin fashion.
%If a task is not initialized yet, the bytecode of the main function is interpreted to produce the initial task tree.
%The rewriting engine uses the interpreter when needed, e.g.\ to calculate the step continuations.
%The rewriter and the interpreter use the same stack to store intermediate values.
%Rewriting steps are small so that interleaving results in seemingly parallel execution.
%In this phase new task tree nodes may be allocated.
%Both rewriting and initialization are atomic operations in the sense that no processing on SDSs is done other than SDS operations from the task itself.
%The host is notified if a task value is changed after a rewrite step.
%
%\subsubsection{Memory management}
%The third and final phase is memory management.
%Stable tasks and unreachable task tree nodes are removed.
%If a task is to be removed, tasks with higher memory addresses are moved down.
%For task trees---stored in the heap---the RTS already marks tasks and task trees as trash during rewriting so the heap can be compacted in a single pass.
%This is possible because there is no sharing or cycles in task trees and nodes contain pointers to their parent.
%\subsection{Memory management}
%\subsection{Interpreter}
%\subsection{Rewrite engine}
%\section{Task rewriting}\label{sec:rewrite}
%Tasks are rewritten every event loop iteration and one rewrite cycle is generally very fast.
%This results in seemingly parallel execution of the tasks because the rewrite steps are interleaved.
%Rewriting is a destructive process that actually modifies the task tree nodes in memory and marks nodes that become garbage.
%The task value is stored on the stack and therefore only available during rewriting.
%
%\subsection{Basic tasks}
%The \Cl{rtrn} and \Cl{unstable} tasks always rewrite to themselves and have no side effects.
%The GPIO interaction tasks do have side effects.
%The \Cl{readA} and \Cl{readD} tasks query the given pin every rewrite cycle and emit it as an unstable task value.
%The \Cl{writeA} and \Cl{writeD} tasks write the given value to the given pin and immediately rewrite to a stable task of the written value.
%
%\subsection{Delay and repetition}
%The \Cl{delay} task stabilizes once a certain amount of time has passed by storing the finish time on initialization.
%In every rewrite step it checks whether the current time is bigger than the finish time and if so, it rewrites to a \Cl{rtrn} task containing the number of milliseconds that it overshot the target.
%The \Cl{rpeat} task combinator rewrites the argument until it becomes stable.
%Rewriting is a destructive process and therefore the original task tree must be saved.
%As a consequence, on installation, the argument is cloned and the task rewrites the clone.
%
%\subsection{Sequential combination}
%First the left-hand side of the step task is rewritten.
%The resulting value is passed to the continuation function.
%If the continuation function returns a pointer to a task tree, the step rewrites to that task tree and marks the original left-hand side as trash.
%If the function returns $\bot$, the step is kept unchanged.
%The step itself never yields a value.
%
%\subsection{Parallel combination}\label{ssec:parallelExecution}
%There are two parallel task combinators available.
%A \Cl{.&&.} task only becomes stable when both sides are stable.
%A \Cl{.||.} task becomes stable when one of the sides is stable.
%The combinators first rewrite both sides and then merge the task values according to the semantics given in Listing~\ref{lst:parallel_combinators}.
%
%\begin{lstlisting}[language=Clean,caption={Task value semantics for the parallel combinators.},label={lst:parallel_combinators}]
%(.&&.) :: (TaskValue a) (TaskValue b) -> TaskValue (a, b)
%(.&&.) (Value lhs stab1) (Value rhs stab2) = Value (lhs, rhs) (stab1 && stab2)
%(.&&.) _ _ = NoValue
%
%(.||.) :: (TaskValue a) (TaskValue a) -> TaskValue a
%(.||.) lhs=:(Value _ True) _ = lhs
%(.||.) (Value lhs _) rhs=:(Value _ True) = rhs
%(.||.) NoValue rhs = rhs
%(.||.) lhs _ = lhs
%\end{lstlisting}
%
%\subsection{Shared Data Source tasks}
%The \Cl{BCSdsGet} node always rewrites to itself.
%It reads the actual SDS embedded and emits the value as an unstable task value.
%
%Setting an SDS is a bit more involved because after writing, it emits the value written as a stable task value.
%The \Cl{BCSdsSet} node contains the identifier for the SDS and a task tree that, when rewritten, emits the value to be set as a stable task value.
%The naive approach would be to just rewrite the \Cl{BCSdsSet} to a node similar to the \Cl{BCSdsGet} but only with a stable value.
%However, after writing the SDS, its value might have changed due to other tasks writing it, and then the \Cl{setSds}'s stable value may change.
%Therefore, the \Cl{BCSdsSet} node is rewritten to the argument task tree which always represents a constant stable value.
%In future rewrites, the constant value node emits the value that was originally written.
%
%The client only knows whether an SDS is a lifted SDS, not to which iTasks SDS it is connected.
%If the SDS is modified on the device, it sends an update to the server.
%
%\section{Conclusion}

\input{subfilepostamble}
\end{document}