\documentclass[../thesis.tex]{subfiles}

\input{subfilepreamble}

\setcounter{chapter}{6}

\begin{document}
\input{subfileprefix}
\chapter{The implementation of mTask}%
\label{chp:implementation}
\begin{chapterabstract}
This chapter shows the implementation of the \gls{MTASK} system by:
\begin{itemize}
	\item showing the compilation and execution toolchain;
	\item showing the implementation of the byte code compiler for the \gls{MTASK} language;
	\item elaborating on the implementation and architecture of the \gls{RTS} of \gls{MTASK};
	\item and explaining the machinery used to automatically serialise and deserialise data to and from the device.
\end{itemize}
\end{chapterabstract}

The \gls{MTASK} system targets resource-constrained edge devices that have little memory, limited processing power, and limited communication capabilities.
Such edge devices are often powered by microcontrollers, tiny computers specifically designed for embedded applications.
The microcontrollers usually have flash-based program memory which wears out fairly quickly.
For example, the flash memory of the popular atmega328p powering the \gls{ARDUINO} UNO is rated for \num{10000} write cycles.
While this sounds like a lot, if new tasks are sent to the device every minute or so, the program memory is only guaranteed to last about seven days.
Hence, for dynamic applications, storing the program in the \gls{RAM} of the device and thus interpreting this code is necessary in order to save precious write cycles of the program memory.
In the \gls{MTASK} system, the \gls{MTASK} \gls{RTS}, a domain-specific \gls{OS}, is responsible for interpreting the programs.

Programs in \gls{MTASK} are \gls{DSL} terms constructed at run time in an \gls{ITASK} system.
\Cref{fig:toolchain} shows the compilation and execution toolchain of such programs.
First, the source code is compiled to a byte code specification; this specification contains the compiled main expression, the functions, and the \gls{SDS} and peripheral configuration.
How an \gls{MTASK} task is compiled to this specification is shown in \cref{sec:compiler_imp}.
This package is then sent to the \gls{RTS} of the device for execution.
In order to execute a task, first the main expression is evaluated in the interpreter, resulting in a task tree.
Then, using small-step reduction, the task tree is continuously rewritten by the rewrite engine of the \gls{RTS}.
At times, the reduction requires the evaluation of expressions, using the interpreter.
During every rewrite step, a task value is produced.
On the device, the \gls{RTS} may have multiple tasks active at the same time.
By interleaving the rewrite steps, parallel operation is achieved.
The design, architecture, and implementation of the \gls{RTS} are shown in \cref{sec:compiler_rts}.

\begin{figure}
	\centering
	\centerline{\includestandalone{toolchain}}
	\caption{Compilation and execution toolchain of \gls{MTASK} programs.}%
	\label{fig:toolchain}
\end{figure}

\section{Compiler}\label{sec:compiler_imp}
The byte code compiler for \gls{MTASK} is an interpretation of the \gls{MTASK} language.
In order to compile terms, instances of all \gls{MTASK} type classes are required for the \cleaninline{:: BCInterpret a} type.
Terms in \gls{MTASK} are constructed and compiled at run time, but type checked at compile time in the host language \gls{CLEAN}.
The compiled tasks are sent to the device for interpretation.
The result of the compilation is the byte code and some metadata regarding the used peripherals and \glspl{SDS}.
The compilation target is the interpreter of the \gls{MTASK} \gls{RTS}.
In order to keep the hardware requirements down, all expressions are evaluated on a stack.
Rewriting of tasks uses the same stack and also a heap.
The heap usage is minimised by applying aggressive memory management.
A detailed overview of the \gls{RTS}, including the interpreter and rewriter, is found in \cref{sec:compiler_rts}.

\subsection{Compiler infrastructure}
The byte code compiler interpretation for the \gls{MTASK} language is implemented as a monad stack containing a writer monad and a state monad.
The writer monad is used to generate code snippets locally without having to store them in the monadic values.
The state monad accumulates the code, and stores the state the compiler requires.
\Cref{lst:compiler_state} shows the data type for the state, storing:
the function the compiler is currently in;
the code of the main expression;
the context (see \cref{ssec:step});
the code for the functions;
the next fresh label;
a list of all the used \glspl{SDS}, either local \glspl{SDS} containing the initial value (\cleaninline{Left}) or lowered \glspl{SDS} (see \cref{sec:liftsds}) containing a reference to the associated \gls{ITASK} \gls{SDS};
and finally a list of the used peripherals.

\begin{lstClean}[label={lst:compiler_state},caption={The type for the \gls{MTASK} byte code compiler.}]
:: BCInterpret a :== StateT BCState (WriterT [BCInstr] Identity) a
:: BCState =
	{ bcs_infun      :: JumpLabel
	, bcs_mainexpr   :: [BCInstr]
	, bcs_context    :: [BCInstr]
	, bcs_functions  :: Map JumpLabel BCFunction
	, bcs_freshlabel :: JumpLabel
	, bcs_sdses      :: [Either String255 MTLens]
	, bcs_hardware   :: [BCPeripheral]
	}
:: BCFunction =
	{ bcf_instructions :: [BCInstr]
	, bcf_argwidth     :: UInt8
	, bcf_returnwidth  :: UInt8
	}
\end{lstClean}

Executing the compiler is done by providing an initial state and running the monad.
After compilation, several post-processing steps are applied to make the code suitable for the microprocessor.
First, all \cleaninline{BCReturn} instructions in tail positions are replaced by \cleaninline{BCTailCall} instructions to optimise the tail calls.
Furthermore, all byte code is concatenated, resulting in one big program.
Many instructions have commonly used arguments, so shorthands are introduced to reduce the program size.
For example, the \cleaninline{BCArg} instruction is often called with an argument in the range \numrange{0}{2} and can be replaced by the \cleaninline{BCArg0} to \cleaninline{BCArg2} shorthands.
Furthermore, redundant instructions, such as a pop directly after a push, are removed as well in order not to burden the code generation with these intricacies.
Finally, the labels are resolved to represent actual program addresses instead of the freshly generated identifiers.
After the byte code is ready, the lowered \glspl{SDS} are resolved to provide an initial value for them.
The byte code, \gls{SDS} specification, and peripheral specifications are the result of the process, ready to be sent to the device.

\subsection{Instruction set}
The instruction set is a fairly standard stack machine instruction set extended with special \gls{TOP} instructions for creating task tree nodes.
All instructions are housed in a \gls{CLEAN} \gls{ADT} and serialised to the byte representation using generic functions (see \cref{sec:ccodegen}).
Type synonyms and newtypes are used to provide insight into the arguments of the instructions (\cref{lst:type_synonyms}).
Labels are always two bytes long; all other arguments are one byte long.

\begin{lstClean}[caption={Type synonyms for instruction arguments.},label={lst:type_synonyms}]
:: ArgWidth :== UInt8      :: ReturnWidth :== UInt8
:: Depth    :== UInt8      :: Num         :== UInt8
:: SdsId    :== UInt8      :: JumpLabel   =:  JL UInt16
\end{lstClean}

\Cref{lst:instruction_type} shows an excerpt of the \gls{CLEAN} type that represents the instruction set.
Shorthand instructions such as instructions with inlined arguments are omitted for brevity.
Detailed semantics for the instructions are given in \cref{chp:bytecode_instruction_set}.
One notable instruction is the \cleaninline{BCMkTask} instruction: it allocates and initialises a task tree node and pushes a pointer to it on the stack.

\begin{lstClean}[caption={The type housing the instruction set in \gls{MTASK}.},label={lst:instruction_type}]
:: BCInstr
	//Jumps
	= BCJumpF JumpLabel | BCLabel JumpLabel | BCJumpSR ArgWidth JumpLabel
	| BCReturn ReturnWidth ArgWidth
	| BCTailcall ArgWidth ArgWidth JumpLabel
	//Arguments
	| BCArgs ArgWidth ArgWidth
	//Task node creation and refinement
	| BCMkTask BCTaskType | BCTuneRateMs | BCTuneRateSec
	//Stack ops
	| BCPush String255 | BCPop Num | BCRot Depth Num | BCDup | BCPushPtrs
	//Casting
	| BCItoR | BCItoL | BCRtoI | ...
	// arith
	| BCAddI | BCSubI | ...
	...

:: BCTaskType
	= BCStableNode ArgWidth | BCUnstableNode ArgWidth
	// Pin io
	| BCReadD | BCWriteD | BCReadA | BCWriteA | BCPinMode
	// Interrupts
	| BCInterrupt
	// Repeat
	| BCRepeat
	// Delay
	| BCDelay | BCDelayUntil
	// Parallel
	| BCTAnd | BCTOr
	//Step
	| BCStep ArgWidth JumpLabel
	//Sds ops
	| BCSdsGet SdsId | BCSdsSet SdsId | BCSdsUpd SdsId JumpLabel
	// Rate limiter
	| BCRateLimit
	////Peripherals
	//DHT
	| BCDHTTemp UInt8 | BCDHTHumid UInt8
	...
\end{lstClean}
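
To give an impression of how such instructions are executed on the device, the following sketch shows a stack-machine dispatch loop in \ccpp{} for a handful of opcodes.
It is a simplification with illustrative opcode names and encodings, not the actual interpreter of the \gls{RTS} (see \cref{sec:compiler_rts}).

\begin{lstArduino}[caption={A simplified sketch of a stack-machine dispatch loop (illustrative, not the actual \gls{RTS} interpreter).}]
enum { OP_PUSH, OP_POP, OP_ARG, OP_ADDI, OP_NOT, OP_JUMPF, OP_RETURN };

uint16_t interpret(uint8_t *code, uint16_t pc,
		uint16_t *stack, uint16_t sp, uint16_t fp) {
	for (;;) {
		switch (code[pc++]) {
		case OP_PUSH:   // push a one-cell literal
			stack[sp++] = code[pc++];
			break;
		case OP_POP:    // drop the given number of cells
			sp -= code[pc++];
			break;
		case OP_ARG:    // copy an argument relative to the frame pointer
			stack[sp++] = stack[fp + code[pc++]];
			break;
		case OP_ADDI:   // integer addition of the two topmost cells
			sp--;
			stack[sp - 1] += stack[sp];
			break;
		case OP_NOT:    // boolean negation of the top cell
			stack[sp - 1] = !stack[sp - 1];
			break;
		case OP_JUMPF:  // jump to a two-byte label when the top cell is false
			pc += 2;
			if (!stack[--sp])
				pc = (code[pc - 2] << 8) | code[pc - 1];
			break;
		case OP_RETURN: // leave the result on the stack for the caller
			return sp;
		}
	}
}
\end{lstArduino}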

\section{Compilation rules}
This section describes the compilation rules, i.e.\ the translation from \gls{AST} to byte code.
The compilation scheme consists of three schemes\slash{}functions.
Double vertical bars, e.g.\ $\stacksize{a_i}$, denote the number of stack cells required to store the argument.

Some schemes have a context $r$ as an argument which contains information about the location of the arguments in scope.
More information is given in the schemes requiring such arguments.

\begin{table}
	\centering
	\caption{An overview of the compilation rules.}
	\begin{tabularx}{\linewidth}{l X}
		\toprule
		Scheme & Description\\
		\midrule
		$\cschemeE{e}{r}$ & Generates code for expressions given the context $r$.\\
		$\cschemeF{e}$ & Generates the code for functions.\\
		$\cschemeS{e}{r}{w}$ & Generates the code for the step continuations given the context $r$ and the width $w$ of the left-hand side task value.\\
		\bottomrule
	\end{tabularx}
\end{table}

\subsection{Expressions}
Almost all expression constructions are compiled using $\mathcal{E}$.
The argument of $\mathcal{E}$ is the context (see \cref{ssec:functions}).
Values are always placed on the stack; tuples and other compound data types are unpacked.
Function calls, function arguments, and tasks are also compiled using $\mathcal{E}$, but their compilation is explained later.

\begingroup
\allowdisplaybreaks%
\begin{align*}
	\cschemeE{\text{\cleaninline{lit}}~e}{r} & = \text{\cleaninline{BCPush (bytecode e)}};\\
	\cschemeE{e_1\mathbin{\text{\cleaninline{+.}}}e_2}{r} & = \cschemeE{e_1}{r};
		\cschemeE{e_2}{r};
		\text{\cleaninline{BCAdd}};\\
	{} & \text{\emph{Similar for other binary operators}}\\
	\cschemeE{\text{\cleaninline{Not}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCNot}};\\
	{} & \text{\emph{Similar for other unary operators}}\\
	\cschemeE{\text{\cleaninline{If}}~e_1~e_2~e_3}{r} & =
		\cschemeE{e_1}{r};
		\text{\cleaninline{BCJmpF}}\enskip l_{else}; \mathbin{\phantom{=}} \cschemeE{e_2}{r}; \text{\cleaninline{BCJmp}}\enskip l_{endif};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{else}; \cschemeE{e_3}{r}; \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{endif};\\
	{} & \text{\emph{Where $l_{else}$ and $l_{endif}$ are fresh labels}}\\
	\cschemeE{\text{\cleaninline{tupl}}~e_1~e_2}{r} & =
		\cschemeE{e_1}{r};
		\cschemeE{e_2}{r};\\
	{} & \text{\emph{Similar for other unboxed compound data types}}\\
	\cschemeE{\text{\cleaninline{first}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCPop}}\enskip w;\\
	{} & \text{\emph{Where $w$ is the width of the right value and}}\\
	{} & \text{\emph{similar for other unboxed compound data types}}\\
	\cschemeE{\text{\cleaninline{second}}\enskip e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCRot}}\enskip (w_l+w_r)\enskip w_r;
		\text{\cleaninline{BCPop}}\enskip w_l;\\
	{} & \text{\emph{Where $w_l$ is the width of the left and $w_r$ of the right value,}}\\
	{} & \text{\emph{similar for other unboxed compound data types}}\\
\end{align*}
\endgroup
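
As a small worked example of applying these rules (an illustration only, not an additional rule), the conditional expression \cleaninline{If (Not e) (lit 1) (lit 0)} unfolds for some expression \cleaninline{e} as follows:

\begin{align*}
	\cschemeE{\text{\cleaninline{If}}~(\text{\cleaninline{Not}}~e)~(\text{\cleaninline{lit}}~1)~(\text{\cleaninline{lit}}~0)}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCNot}};
		\text{\cleaninline{BCJmpF}}\enskip l_{else};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCPush (bytecode 1)}}; \text{\cleaninline{BCJmp}}\enskip l_{endif};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{else}; \text{\cleaninline{BCPush (bytecode 0)}}; \text{\cleaninline{BCLabel}}\enskip l_{endif};\\
	{} & \text{\emph{Where $l_{else}$ and $l_{endif}$ are fresh labels}}\\
\end{align*}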

Translating $\mathcal{E}$ to \gls{CLEAN} code is very straightforward: it basically means writing the instructions to the writer monad.
Almost always, the type of the interpretation is not used, i.e.\ it is a phantom type.
To still have the functions return the correct type, the \cleaninline{tell`}\footnote{\cleaninline{tell` :: [BCInstr] -> BCInterpret a}} helper is used.
This function is similar to the writer monad's \cleaninline{tell} function but is cast to the correct type.
\Cref{lst:imp_arith} shows the implementation for the arithmetic and conditional expressions.
Note that $r$, the context, is not an explicit argument here but is stored in the state.

\begin{lstClean}[caption={Interpretation implementation for the arithmetic and conditional functions.},label={lst:imp_arith}]
instance expr BCInterpret where
	lit t = tell` [BCPush (toByteCode{|*|} t)]
	(+.) a b = a >>| b >>| tell` [BCAdd]
	...
	If c t e = freshlabel >>= \elselabel->freshlabel >>= \endiflabel->
		c >>| tell` [BCJumpF elselabel] >>|
		t >>| tell` [BCJump endiflabel,BCLabel elselabel] >>|
		e >>| tell` [BCLabel endiflabel]
\end{lstClean}

\subsection{Functions}\label{ssec:functions}
Compiling functions and other top-level definitions is done using $\mathcal{F}$, which generates byte code for the complete program by iterating over the functions and ending with the main expression.
When compiling the body of a function, the arguments of the function are added to the context so that their addresses can be determined when referencing arguments.
The main expression is a special case of $\mathcal{F}$ since it has neither arguments nor a continuation.
Therefore, it is just compiled using $\mathcal{E}$ with an empty context.

\begin{align*}
	\cschemeF{main=m} & =
		\cschemeE{m}{[]};\\
	\cschemeF{f~a_0 \ldots a_n = b~\text{\cleaninline{In}}~m} & =
		\text{\cleaninline{BCLabel}}~f; \cschemeE{b}{[\langle f, i\rangle, i\in \{(\Sigma^n_{j=0}\stacksize{a_j})..0\}]};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCReturn}}~\stacksize{b}~n; \cschemeF{m};\\
\end{align*}

A function call starts by pushing the stack and frame pointers, and making space for the program counter (\cref{lst:funcall_pushptrs}), followed by evaluating the arguments in reverse order (\cref{lst:funcall_args}).
On executing \cleaninline{BCJumpSR}, the program counter is set, and the interpreter jumps to the function (\cref{lst:funcall_jumpsr}).
When the function returns, the return value overwrites the old pointers and the arguments.
This occurs right after a \cleaninline{BCReturn} (\cref{lst:funcall_ret}).
Putting the arguments on top of the pointers and not reserving space for the return value uses little space and facilitates tail call optimisation.

\begin{figure}
	\begin{subfigure}{.24\linewidth}
		\centering
		\includestandalone{memory1}
		\caption{\cleaninline{BCPushPtrs}.}\label{lst:funcall_pushptrs}
	\end{subfigure}
	\begin{subfigure}{.24\linewidth}
		\centering
		\includestandalone{memory2}
		\caption{Arguments.}\label{lst:funcall_args}
	\end{subfigure}
	\begin{subfigure}{.24\linewidth}
		\centering
		\includestandalone{memory3}
		\caption{\cleaninline{BCJumpSR}.}\label{lst:funcall_jumpsr}
	\end{subfigure}
	\begin{subfigure}{.24\linewidth}
		\centering
		\includestandalone{memory4}
		\caption{\cleaninline{BCReturn}.}\label{lst:funcall_ret}
	\end{subfigure}
	\caption{The stack layout during function calls.}%
\end{figure}

Calling a function and referencing function arguments are extensions to $\mathcal{E}$, as shown below.
Arguments may be at different places on the stack at different times (see \cref{ssec:step}) and therefore the exact location is always determined from the context using \cleaninline{findarg}\footnote{\cleaninline{findarg [l`:r] l = if (l == l`) 0 (1 + findarg r l)}}.
Compiling argument $a_{f^i}$, the $i$th argument in function $f$, consists of traversing all positions in the current context.
Arguments wider than one stack cell are fetched in reverse to reconstruct the original order.

\begin{align*}
	\cschemeE{f(a_0, \ldots, a_n)}{r} & =
		\text{\cleaninline{BCPushPtrs}}; \cschemeE{a_i}{r}~\text{for all}~i\in\{n\ldots 0\}; \text{\cleaninline{BCJumpSR}}~n~f;\\
	\cschemeE{a_{f^i}}{r} & =
		\text{\cleaninline{BCArg}~findarg}(r, f, i)~\text{for all}~i\in\{w\ldots v\};\\
	{} & v = \Sigma^{i-1}_{j=0}\stacksize{a_{f^j}}~\text{ and }~ w = v + \stacksize{a_{f^i}}\\
\end{align*}

Translating the compilation schemes for functions to \gls{CLEAN} is not as straightforward as for the other schemes due to the nature of shallow embedding in combination with the use of state.
The \cleaninline{fun} class has a single function with a single argument.
This argument is a \gls{CLEAN} function that---when given a callable \gls{CLEAN} function representing the \gls{MTASK} function---produces the \cleaninline{main} expression and a callable function.
To compile this, the argument must be called with a function representing a function call in \gls{MTASK}.
\Cref{lst:fun_imp} shows the implementation for this as \gls{CLEAN} code.
To uniquely identify the function, a fresh label is generated.
The function is then called with the \cleaninline{callFunction} helper function that generates the instructions that correspond to calling the function.
That is, it pushes the pointers, compiles the arguments, and writes the \cleaninline{BCJumpSR} instruction.
The resulting structure (\cleaninline{g In m}) contains a function representing the \gls{MTASK} function (\cleaninline{g}) and the \cleaninline{main} structure to continue with.
To get the actual function, \cleaninline{g} must be called with representations for the arguments, i.e.\ using \cleaninline{findarg} for all arguments.
The arguments are added to the context using \cleaninline{infun}, and \cleaninline{liftFunction} is called with the label, the argument width, and the compiler.
This function executes the compiler, decorates the instructions with a label, and places them in the function dictionary together with metadata such as the argument width.
After lifting the function, the context is cleared again and compilation continues with the rest of the program.

\begin{lstClean}[label={lst:fun_imp},caption={The interpretation implementation for functions.}]
instance fun (BCInterpret a) BCInterpret | type a where
	fun def = {main=freshlabel >>= \funlabel->
		let (g In m) = def \a->callFunction funlabel (toByteWidth a) [a]
		    argwidth = toByteWidth (argOf g)
		in  addToCtx funlabel zero argwidth
		>>| infun funlabel
			(liftFunction funlabel argwidth
				(g (retrieveArgs funlabel zero argwidth)
				) ?None)
		>>| clearCtx >>| m.main
	}

argOf :: ((m a) -> b) a -> UInt8 | toByteWidth a
callFunction :: JumpLabel UInt8 [BCInterpret b] -> BCInterpret c | ...
liftFunction :: JumpLabel UInt8 (BCInterpret a) (?UInt8) -> BCInterpret ()
infun :: JumpLabel (BCInterpret a) -> BCInterpret a
\end{lstClean}

\subsection{Tasks}\label{ssec:scheme_tasks}
Task trees are created with the \cleaninline{BCMkTask} instruction that allocates a node and pushes a pointer to it on the stack.
It pops arguments from the stack according to the given task type.
The following extension of $\mathcal{E}$ shows this compilation scheme (except for the step combinator, explained in \cref{ssec:step}).

\begingroup
\allowdisplaybreaks%
\begin{align*}
	\cschemeE{\text{\cleaninline{rtrn}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCStable}}_{\stacksize{e}};\\
	\cschemeE{\text{\cleaninline{unstable}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCUnstable}}_{\stacksize{e}};\\
	\cschemeE{\text{\cleaninline{readA}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCReadA}};\\
	\cschemeE{\text{\cleaninline{writeA}}~e_1~e_2}{r} & =
		\cschemeE{e_1}{r};
		\cschemeE{e_2}{r};
		\text{\cleaninline{BCMkTask BCWriteA}};\\
	\cschemeE{\text{\cleaninline{readD}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCReadD}};\\
	\cschemeE{\text{\cleaninline{writeD}}~e_1~e_2}{r} & =
		\cschemeE{e_1}{r};
		\cschemeE{e_2}{r};
		\text{\cleaninline{BCMkTask BCWriteD}};\\
	\cschemeE{\text{\cleaninline{delay}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCDelay}};\\
	\cschemeE{\text{\cleaninline{rpeat}}~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCRepeat}};\\
	\cschemeE{e_1\text{\cleaninline{.\|\|.}}e_2}{r} & =
		\cschemeE{e_1}{r};
		\cschemeE{e_2}{r};
		\text{\cleaninline{BCMkTask BCOr}};\\
	\cschemeE{e_1\text{\cleaninline{.&&.}}e_2}{r} & =
		\cschemeE{e_1}{r};
		\cschemeE{e_2}{r};
		\text{\cleaninline{BCMkTask BCAnd}};\\
\end{align*}
\endgroup%
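
As a worked example of these rules (an illustration only, assuming the literals occupy a single stack cell), a parallel conjunction of two stable tasks compiles as follows:

\begin{align*}
	\cschemeE{\text{\cleaninline{rtrn}}~(\text{\cleaninline{lit}}~1)\text{\cleaninline{.&&.}}\text{\cleaninline{rtrn}}~(\text{\cleaninline{lit}}~2)}{r} & =
		\text{\cleaninline{BCPush (bytecode 1)}};
		\text{\cleaninline{BCMkTask BCStable}}_{1};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCPush (bytecode 2)}};
		\text{\cleaninline{BCMkTask BCStable}}_{1};
		\text{\cleaninline{BCMkTask BCAnd}};\\
\end{align*}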

This compilation scheme translates to \gls{CLEAN} code by first writing the arguments and then the correct \cleaninline{BCMkTask} instruction.
This is shown for the \cleaninline{.&&.} task in \cref{lst:imp_ret}.

\begin{lstClean}[caption={The byte code interpretation implementation for the parallel conjunction combinator.},label={lst:imp_ret}]
instance .&&. BCInterpret where
	(.&&.) l r = l >>| r >>| tell` [BCMkTask BCTAnd]
\end{lstClean}

\subsection{Sequential combinator}\label{ssec:step}
The \cleaninline{step} construct is a special type of task because the task value of the left-hand side changes over time.
Therefore, the task continuations on the right-hand side are \emph{observing} this task value and acting upon it.
In the compilation scheme, all continuations are first converted to a single function that has two arguments: the stability of the task and its value.
This function either returns a pointer to a task tree or fails (denoted by $\bot$).
It is special because, in the generated function, the task value of a task is inspected.
Furthermore, it is a lazy node in the task tree: the right-hand side may yield a new task tree after several rewrite steps, i.e.\ it is allowed to create infinite task trees using step combinators.
The function is generated using the $\mathcal{S}$ scheme that requires two arguments: the context $r$ and the width of the left-hand side, so that it can determine the position of the stability, which is added as an argument to the function.
The resulting function is basically a list of if-then-else constructions that check all predicates one by one.
Some optimisation is possible here but has currently not been implemented.

\begin{align*}
	\cschemeE{t_1\text{\cleaninline{>>*.}}e_2}{r} & =
		\cschemeE{a_{f^i}}{r}, \langle f, i\rangle\in r;
		\text{\cleaninline{BCMkTask}}~\text{\cleaninline{BCStable}}_{\stacksize{r}}; \cschemeE{t_1}{r};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCMkTask}}~\text{\cleaninline{BCAnd}}; \text{\cleaninline{BCMkTask}}~(\text{\cleaninline{BCStep}}~(\cschemeS{e_2}{(r + [\langle l_s, i\rangle])}{\stacksize{t_1}}));\\
\end{align*}

\begin{align*}
	\cschemeS{[]}{r}{w} & =
		\text{\cleaninline{BCPush}}~\bot;\\
	\cschemeS{\text{\cleaninline{IfValue}}~f~t:cs}{r}{w} & =
		\text{\cleaninline{BCArg}} (\stacksize{r} + w);
		\text{\cleaninline{BCIsNoValue}};\\
	{} & \mathbin{\phantom{=}} \cschemeE{f}{r};
		\text{\cleaninline{BCAnd}};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCJmpF}}~l_1;\\
	{} & \mathbin{\phantom{=}} \cschemeE{t}{r};
		\text{\cleaninline{BCJmp}}~l_2;\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}~l_1;
		\cschemeS{cs}{r}{w};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}~l_2;\\
	{} & \text{\emph{Where $l_1$ and $l_2$ are fresh labels}}\\
	{} & \text{\emph{Similar for \cleaninline{IfStable} and \cleaninline{IfUnstable}}}\\
\end{align*}

The step combinator has a task on the left-hand side and a list of continuations on the right-hand side.
First, the context is evaluated ($\cschemeE{a_{f^i}}{r}$).
The context contains arguments from functions and steps that need to be preserved after rewriting.
The evaluated context is combined with the left-hand side task value by means of a \cleaninline{.&&.} combinator to store it in the task tree so that it is available after a rewrite step.
This means that the task tree is transformed as seen in \cref{lst:context_tree}.
In this figure, the expression \cleaninline{t1 >>=. \\v1->t2 >>=. \\v2->...} is shown\footnote{%
\cleaninline{t >>=. e} is a shorthand combinator for \cleaninline{t >>* [OnStable (\\_->true) e]}.}.
Then, the right-hand side list of continuations is converted to an expression using $\mathcal{S}$.

\begin{figure}
	\begin{subfigure}{.5\textwidth}
		\includestandalone{contexttree1}
		\caption{Without the embedded context.}
	\end{subfigure}%
	\begin{subfigure}{.5\textwidth}
		\includestandalone{contexttree2}
		\caption{With the embedded context.}
	\end{subfigure}
	\caption{Context embedded in a virtual task tree.}%
	\label{lst:context_tree}
\end{figure}

The translation to \gls{CLEAN} is given in \cref{lst:imp_seq}.

\begin{lstClean}[caption={Byte code compilation interpretation implementation for the step class.},label={lst:imp_seq}]
instance step BCInterpret where
	(>>*.) lhs cont
		//Fetch a fresh label and fetch the context
		= freshlabel >>= \funlab->gets (\s->s.bcs_context)
		//Generate code for lhs
		>>= \ctx->lhs
		//Possibly add the context
		>>| tell` (if (ctx =: []) []
			//The context is just the arguments up till now in reverse
			(  [BCArg (UInt8 i)\\i<-reverse (indexList ctx)]
			++ map BCMkTask (bcstable (UInt8 (length ctx)))
			++ [BCMkTask BCTAnd]
			))
		//Increase the context
		>>| addToCtx funlab zero lhswidth
		//Lift the step function
		>>| liftFunction funlab
			//Width of the arguments is the width of the lhs plus the
			//stability plus the context
			(one + lhswidth + (UInt8 (length ctx)))
			//Body label ctx width continuations
			(contfun funlab (UInt8 (length ctx)))
			//Return width (always 1, a task pointer)
			(?Just one)
		>>| modify (\s->{s & bcs_context=ctx})
		>>| tell` [BCMkTask (instr rhswidth funlab)][+\pagebreak+]
toContFun :: JumpLabel UInt8 -> BCInterpret a
toContFun steplabel contextwidth
	= foldr tcf (tell` [BCPush fail]) cont
where
	tcf (IfStable f t)
		= If ((stability >>| tell` [BCIsStable]) &. f val)
			(t val >>| tell` [])
	...
	stability = tell` [BCArg (lhswidth + contextwidth)]
	val = retrieveArgs steplabel zero lhswidth
\end{lstClean}

\subsection{Shared data sources}\label{lst:imp_sds}
The compilation scheme for \gls{SDS} definitions is a trivial extension to $\mathcal{F}$.
While there is no code generated in the definition, the byte code compiler stores all \gls{SDS} data in the \cleaninline{bcs_sdses} field of the compilation state.
Regular \glspl{SDS} are stored as \cleaninline{Left String255} values.
The \glspl{SDS} are typed as functions in the host language, so an argument for this function must be created that represents the \gls{SDS} on evaluation.
For this, a \cleaninline{BCInterpret} is created that emits this identifier.
When passing it to the function, the initial value of the \gls{SDS} is returned.
In the case of a local \gls{SDS}, this initial value is stored as a byte code encoded value in the state and the compiler continues with the rest of the program.
The \gls{SDS} access tasks have a compilation scheme similar to other tasks (see \cref{ssec:scheme_tasks}).
The \cleaninline{getSds} task just pushes a task tree node with the \gls{SDS} identifier embedded.
The \cleaninline{setSds} task evaluates the value, lifts that value to a task tree node, and creates \pgls{SDS} set node.

\begin{align*}
	\cschemeF{\text{\cleaninline{sds}}~x=i~\text{\cleaninline{In}}~m} & =
		\cschemeF{m};\\
	\\
	\cschemeE{\text{\cleaninline{getSds}}~s}{r} & =
		\text{\cleaninline{BCMkTask}}~(\text{\cleaninline{BCSdsGet}}~s);\\
	\cschemeE{\text{\cleaninline{setSds}}~s~e}{r} & =
		\cschemeE{e}{r};
		\text{\cleaninline{BCMkTask BCStable}}_{\stacksize{e}};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCMkTask}}~(\text{\cleaninline{BCSdsSet}}~s);\\
\end{align*}

\Cref{lst:comp_sds} shows the implementation of the \cleaninline{sds} type class.
First, the initial \gls{SDS} value is extracted from the expression by bootstrapping the fixed point with a dummy value.
This is safe because the expression on the right-hand side of the \cleaninline{In} is never evaluated.
Then, using \cleaninline{addSdsIfNotExist}, the identifier for this particular \gls{SDS} is either retrieved from the compiler state or generated freshly.
This identifier is then used to provide a reference to the \cleaninline{def} definition to evaluate the main expression.
Compiling \cleaninline{getSds} is a matter of executing the \cleaninline{BCInterpret} representing the \gls{SDS}, which yields the identifier that can be embedded in the instruction.
Setting the \gls{SDS} is similar: the identifier is retrieved, and the value is evaluated and put in a task tree so that the resulting task can remember the value it has written.

% VimTeX: SynIgnore on
\begin{lstClean}[caption={Backend implementation for the \gls{SDS} classes.},label={lst:comp_sds}]
:: Sds a = Sds Int
instance sds BCInterpret where
	sds def = {main =
		let (t In e) = def (abort "sds: expression too strict")
		in  addSdsIfNotExist (Left $ String255 (toByteCode{|*|} t))
		>>= \sdsi-> let (t In e) = def (pure (Sds sdsi))
		            in e.main
	}
	getSds f = f >>= \(Sds i)-> tell` [BCMkTask (BCSdsGet (fromInt i))]
	setSds f v = f >>= \(Sds i)->v >>| tell`
		(  map BCMkTask (bcstable (byteWidth v))
		++ [BCMkTask (BCSdsSet (fromInt i))])
\end{lstClean}
% VimTeX: SynIgnore off

Lowered \glspl{SDS} are stored in the compiler state as \cleaninline{Right MTLens} values.
The compilation of the code and the serialisation of the data throw away all typing information.
The \cleaninline{MTLens} is a type synonym for \pgls{SDS} that represents the typeless serialised value of the underlying \gls{SDS}.
This is done so that the \cleaninline{withDevice} task can write the received \gls{SDS} updates to the corresponding \gls{SDS} while the \gls{SDS} is not in scope.
The \gls{ITASK} notification mechanism then takes care of the rest.
Such \pgls{SDS} is created using the \cleaninline{mapReadWriteError} function which, given a pair of read and write functions with error handling, produces \pgls{SDS} with the lens embedded.
The read function transforms the typed value to a typeless serialised value.
The write function will, given a new serialised value and the old typed value, produce a new typed value.
It tries to decode the serialised value; if that succeeds, the result is written to the underlying \gls{SDS}, otherwise an error is thrown.
\Cref{lst:mtask_itasksds_lens} shows the implementation for this.

% VimTeX: SynIgnore on
\begin{lstClean}[label={lst:mtask_itasksds_lens},caption={Lens applied to lowered \gls{ITASK} \glspl{SDS} in \gls{MTASK}.}]
lens :: (Shared sds a) -> MTLens | type a & RWShared sds
lens sds = mapReadWriteError
	( \r->   Ok (fromString (toByteCode{|*|} r))
	, \w r-> ?Just <$> iTasksDecode (toString w)
	) ?None sds
\end{lstClean}
% VimTeX: SynIgnore off

\Cref{lst:mtask_itasksds_lift} shows the code for the implementation of \cleaninline{lowerSds} that uses the \cleaninline{lens} function shown earlier.
It is very similar to the \cleaninline{sds} constructor in \cref{lst:comp_sds}, only now a \cleaninline{Right} value is inserted in the \gls{SDS} administration.

% VimTeX: SynIgnore on
\begin{lstClean}[label={lst:mtask_itasksds_lift},caption={The implementation for lowering \glspl{SDS} in \gls{MTASK}.}]
instance lowerSds BCInterpret where
	lowerSds def = {main =
		let (t In _) = def (abort "lowerSds: expression too strict")
		in  addSdsIfNotExist (Right $ lens t)
		>>= \sdsi->let (_ In e) = def (pure (Sds sdsi)) in e.main
	}
\end{lstClean}
% VimTeX: SynIgnore off

\section{Run-time system}\label{sec:compiler_rts}
The \gls{RTS} is a customisable domain-specific \gls{OS} that takes care of the execution of tasks.
Furthermore, it also takes care of low-level mechanisms such as communication, multitasking, and memory management.
Once a device is programmed with the \gls{MTASK} \gls{RTS}, it can continuously receive new tasks without the need for reprogramming.
The \gls{OS} is written in portable \ccpp{} and only contains a small device-specific portion.
In order to keep the abstraction level high and the hardware requirements low, much of the high-level functionality of the \gls{MTASK} language is implemented not in terms of lower-level constructs from the \gls{MTASK} language but in terms of \ccpp{} code.

Most microcontroller software consists of a cyclic executive instead of an \gls{OS}.
This single loop function is continuously executed and all work is performed there.
In the \gls{RTS} of the \gls{MTASK} system, there is also such an event loop function.
It is a function with a relatively short execution time that gets called repeatedly.
The event loop consists of three distinct phases.
After completing the three phases, the device goes to sleep for as long as possible (see \cref{chp:green_computing_mtask} for more details on task scheduling).
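
The overall structure of this event loop is sketched below in \ccpp{}; the phase function names are illustrative, not the actual \gls{RTS} identifiers.

\begin{lstArduino}[caption={Sketch of the \gls{MTASK} event loop (illustrative names).}]
void event_loop(void) {
	for (;;) {
		communication_phase();     // process all pending messages
		execution_phase();         // one rewrite step for every task that is due
		memory_phase();            // clean up stable tasks and compact the heap
		sleep_until_next_event();  // sleep as long as the task schedule allows
	}
}
\end{lstArduino}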

\subsection{Communication phase}
In the first phase, the communication channels are processed.
The exact communication method is a customisable device-specific option baked into the \gls{RTS}.
The interface is kept deliberately simple and consists of two layers: a link interface and a communication interface.
Besides opening, closing and cleaning up, the link interface has three functions that are shown in \cref{lst:link_interface}.
Consequently, implementing this link interface is very simple, but it is still possible to implement more advanced link features such as buffering.
There are implementations of this interface for serial and \gls{WIFI} connections on \gls{ARDUINO}, and for \gls{TCP} connections on Linux.

\begin{lstArduino}[caption={Link interface of the \gls{MTASK} \gls{RTS}.},label={lst:link_interface}]
bool link_input_available(void);
uint8_t link_read_byte(void);
void link_write_byte(uint8_t b);
\end{lstArduino}
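
For example, a serial implementation of this interface on an \gls{ARDUINO} could look as follows; this is a sketch assuming the standard \texttt{Serial} object and no buffering.

\begin{lstArduino}[caption={A possible serial implementation of the link interface (sketch).}]
bool link_input_available(void) {
	return Serial.available() > 0;
}

uint8_t link_read_byte(void) {
	while (Serial.available() == 0)
		; // block until a byte has arrived
	return (uint8_t)Serial.read();
}

void link_write_byte(uint8_t b) {
	Serial.write(b);
}
\end{lstArduino}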

The communication interface abstracts away from this link interface and is typed instead.
It contains only two functions, as seen in \cref{lst:comm_interface}.
There are implementations for direct communication and for communication using an \gls{MQTT} broker.
Both use the automatic serialisation and deserialisation shown in \cref{sec:ccodegen}.

\begin{lstArduino}[caption={Communication interface of the \gls{MTASK} \gls{RTS}.},label={lst:comm_interface}]
struct MTMessageTo receive_message(void);
void send_message(struct MTMessageFro msg);
\end{lstArduino}
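
For a direct connection, these functions can be little more than calls to the generated parsers and printers from \cref{sec:ccodegen}, with the link functions passed as callbacks.
The sketch below uses illustrative names (\texttt{parse\_MTMessageTo}, \texttt{print\_MTMessageFro}, and \texttt{mtask\_alloc}) for the generated and memory-management functions.

\begin{lstArduino}[caption={Sketch of a communication interface on top of the link interface (illustrative names).}]
struct MTMessageTo receive_message(void) {
	// the generated parser pulls bytes from the link and allocates
	// in the self-managed mTask memory
	return parse_MTMessageTo(link_read_byte, mtask_alloc);
}

void send_message(struct MTMessageFro msg) {
	// the generated printer pushes bytes directly onto the link
	print_MTMessageFro(link_write_byte, msg);
}
\end{lstArduino}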

Processing the received messages from the communication channels happens synchronously, and the channels are exhausted completely before moving on to the next phase.
There are several possible messages that can be received from the server (a sketch of the dispatch loop is given after the list):

\begin{description}
	\item[SpecRequest]
		is a message instructing the device to send its specification.
		It is received immediately after connecting.
		The \gls{RTS} responds with a \texttt{Spec} answer containing the specification.
	\item[TaskPrep]
		tells the device a task is on its way.
		Especially on faster connections, it may be the case that the communication buffers overflow because a big message is sent while the \gls{RTS} is busy executing tasks.
		This message allows the \gls{RTS} to postpone execution for a while, until the larger task has been received.
		The server sends the task only after the device has acknowledged the preparation by sending a \texttt{TaskPrepAck} message.
	\item[Task]
		contains a new task, its peripheral configuration, the \glspl{SDS}, and the byte code.
		The new task is immediately copied to the task storage but is only initialised during the next phase.
		The device acknowledges the task by sending a \texttt{TaskAck} message.
	\item[SdsUpdate]
		notifies the device of the new value for a lowered \gls{SDS}.
		The old value of the lowered \gls{SDS} is immediately replaced with the new one.
		There is no acknowledgement required.
	\item[TaskDel]
		instructs the device to delete a running task.
		Tasks are automatically deleted when they become stable.
		However, a task may also be deleted when the surrounding task on the server is deleted, for example when the task is on the left-hand side of a step combinator and the condition to step holds.
		The device acknowledges the deletion by sending a \texttt{TaskDelAck}.
	\item[Shutdown]
		tells the device to reset.
\end{description}
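
The dispatch loop for these messages can be pictured as follows; the message tags and handler functions are illustrative names, while the message struct itself corresponds to the generated types of \cref{sec:ccodegen}.

\begin{lstArduino}[caption={Sketch of the message dispatch in the communication phase (illustrative names).}]
void communication_phase(void) {
	while (link_input_available()) {
		struct MTMessageTo msg = receive_message();
		switch (msg.type) {
		case MSG_SPEC_REQUEST: send_specification();                      break;
		case MSG_TASK_PREP:    postpone_execution(); send_task_prep_ack(); break;
		case MSG_TASK:         store_task(&msg);     send_task_ack();      break;
		case MSG_SDS_UPDATE:   write_lowered_sds(&msg);                    break;
		case MSG_TASK_DEL:     delete_task(msg.task_id); send_task_del_ack(); break;
		case MSG_SHUTDOWN:     reset_device();                             break;
		}
	}
}
\end{lstArduino}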

\subsection{Execution phase}
The second phase performs one execution step for all tasks that wish for it.
Tasks are placed in a priority queue ordered by the time a task needs to execute.
The \gls{RTS} selects all tasks that can be scheduled; see \cref{sec:scheduling} for more details.
Execution of a task is always an interplay between the interpreter and the rewriter.
The rewriter scans the current task tree and tries to rewrite it using small-step reduction.
Expressions in the tree are always strictly evaluated by the interpreter.

When a new task is received, the main expression is evaluated to produce a task tree.
A task tree is a tree structure in which each node represents a task combinator and the leaves are basic tasks.
If a task is not initialised yet, i.e.\ the pointer to the current task tree is still null, the byte code of the main function is interpreted.
The main expression of an \gls{MTASK} program sent to the device always produces a task tree when evaluated.
Execution of a task consists of continuously rewriting the task until its value is stable.

Rewriting is a destructive process, i.e.\ the rewriting is done in place.
The rewriting engine uses the interpreter when needed, e.g.\ to calculate the step continuations.
The rewriter and the interpreter use the same stack to store intermediate values.
Rewriting steps are small so that interleaving results in seemingly parallel execution.
In this phase, new task tree nodes may be allocated.
Both rewriting and initialisation are atomic operations in the sense that no processing on \glspl{SDS} is done other than the \gls{SDS} operations of the task itself.
If a task value has changed after a rewrite step, the host is notified by sending a \texttt{TaskReturn} message.
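
One execution step for a single task therefore has roughly the following shape (a sketch in \ccpp{} with illustrative names, not the literal \gls{RTS} code):

\begin{lstArduino}[caption={Sketch of one execution step for a single task (illustrative names).}]
void execute_task(struct task *t) {
	if (t->tree == NULL)
		// first step: interpret the main expression to build the task tree
		t->tree = interpret_main(t);
	// perform one small-step rewrite of the task tree, in place
	struct task_value v = rewrite(t->tree);
	if (task_value_changed(t, &v))
		send_task_return(t->task_id, &v);  // notify the server
	if (is_stable(&v))
		mark_task_for_removal(t);          // cleaned up in the memory phase
}
\end{lstArduino}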

Take for example a blink task for which the code is shown in \cref{lst:blink_code}.

\begin{lstClean}[caption={Code for a blink program.},label={lst:blink_code}]
declarePin D13 PMOutput \ledPin->
fun \blink=(\st->delay (lit 500) >>|. writeD ledPin st >>=. blink o Not)
In {main = blink true}
\end{lstClean}

On receiving this task, the task tree is still null and the initial expression \cleaninline{blink true} is evaluated by the interpreter.
This results in the task tree shown in \cref{fig:blink_tree1}.
Rewriting always starts at the top of the tree and traverses to the leaves, the basic tasks that do the actual work.
The first basic task encountered is the \cleaninline{delay} task, which yields no value until the time, \qty{500}{\ms} in this case, has passed.
When the \cleaninline{delay} task has yielded a stable value after a number of rewrites, the task continues with the right-hand side of the \cleaninline{>>\|.} combinator by evaluating the expression (see \cref{fig:blink_tree2})\footnotemark.
\footnotetext{\cleaninline{t1 >>\|. t2} is a shorthand for \cleaninline{t1 >>*. [IfStable id \\_->t2]}.}
This combinator has a \cleaninline{writeD} task at the left-hand side that becomes stable after one rewrite step in which it writes the value to the given pin.
When \cleaninline{writeD} becomes stable, the written value is the task value that is observed by the right-hand side of the \cleaninline{>>=.} combinator.
Then the interpreter is used again to evaluate the expression, now that the argument of the function is known.
The result of the call to the function is again a task tree, but now with different arguments to the tasks, e.g.\ the state in \cleaninline{writeD} is inverted.

\begin{figure}
	\centering
	\begin{subfigure}[t]{.5\textwidth}
		\includestandalone{blinktree1}
		\caption{Initial task tree.}%
		\label{fig:blink_tree1}
	\end{subfigure}%
	\begin{subfigure}[t]{.5\textwidth}
		\includestandalone{blinktree2}
		\caption{Task tree after the delay has passed.}%
		\label{fig:blink_tree2}
	\end{subfigure}
	\caption{The task trees during reduction for a blink task in \gls{MTASK}.}%
	\label{fig:blink_tree}
\end{figure}

\subsection{Memory management}
The third and final phase is memory management.
The \gls{MTASK} \gls{RTS} is designed to run on systems with as little as \qty{2}{\kibi\byte} of \gls{RAM}.
Aggressive memory management is therefore vital.
Not all firmware for microprocessors supports heaps and---when it does---allocation often leaves holes when memory is not freed in a \emph{last in, first out} order.
The \gls{RTS} uses a chunk of memory in the global data segment with its own memory manager tailored to the needs of \gls{MTASK}.
The size of this block can be changed in the configuration of the \gls{RTS} if necessary.
On an \gls{ARDUINO} UNO---equipped with \qty{2}{\kibi\byte} of \gls{RAM}---the maximum viable size is about \qty{1500}{\byte}.
The self-managed memory uses a layout similar to that of \gls{C} programs, only with the heap and the stack switched (see \cref{fig:memory_layout}).

\begin{figure}
	\centering
	\includestandalone{memorylayout}
	\caption{Memory layout in the \gls{MTASK} \gls{RTS}.}\label{fig:memory_layout}
\end{figure}

A task is stored below the stack and consists of the task id, a pointer to the task tree in the heap (null if not initialised yet), the current task value, the configuration of \glspl{SDS}, the configuration of peripherals, the byte code, and some scheduling information.
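
One way to picture this per-task bookkeeping is the following struct; it is purely illustrative, as the actual \gls{RTS} stores the variable-length parts (\gls{SDS} and peripheral configurations and the byte code) directly after a fixed header.

\begin{lstArduino}[caption={A possible layout of the per-task data (illustrative).}]
struct task {
	uint16_t task_id;
	struct node *tree;            // current task tree in the heap, NULL if not initialised
	struct task_value value;      // last observed task value
	uint8_t num_sdses;            // followed by the SDS configurations,
	uint8_t num_peripherals;      // the peripheral configurations,
	uint16_t bytecode_length;     // the byte code,
	unsigned long next_execution; // and some scheduling information
};
\end{lstArduino}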

In memory, task data grows from the bottom up and the interpreter stack is located directly on top of it, growing in the same direction.
As a consequence, the stack moves when a new task is received.
This never happens during execution because communication is always processed before execution.
Values in the interpreter are always stored on the stack.
Compound data types are stored unboxed and flattened.
Task trees grow from the top down, as in a heap.
This approach allows for flexible ratios, i.e.\ many tasks and small trees or few tasks and big trees.

Stable tasks and unreachable task tree nodes are removed.
If a task is to be removed, tasks with higher memory addresses are moved down.
For task trees---stored in the heap---the \gls{RTS} already marks tasks and task trees as trash during rewriting, so the heap can be compacted in a single pass.
This is possible because there is no sharing or cycles in task trees and nodes contain pointers to their parent.
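
Removing a task from the task storage thus boils down to sliding the tasks above it down by one task size, for example as in the sketch below (\texttt{task\_size} and \texttt{task\_storage\_top} are illustrative names).

\begin{lstArduino}[caption={Sketch of task removal by compacting the task storage (illustrative names).}]
void remove_task(struct task *t) {
	size_t size    = task_size(t);                  // header, configurations and byte code
	uint8_t *start = (uint8_t *)t;
	uint8_t *rest  = start + size;                  // first byte of the tasks above t
	memmove(start, rest, task_storage_top - rest);  // slide them down
	task_storage_top -= size;
}
\end{lstArduino}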


\section{C code generation for communication}\label{sec:ccodegen}
All communication between the \gls{ITASK} server and the \gls{MTASK} device is type parametrised and automated.
From the structural representation of the type, a \gls{CLEAN} parser and printer are constructed using generic programming.
Furthermore, a \ccpp{} parser and printer are generated for use on the \gls{MTASK} device.
The technique for generating the \ccpp{} parser and printer is very similar to template metaprogramming and requires a rich generic programming library or compiler support that includes a lot of metadata in the record and constructor nodes.
Using generic programming in the \gls{MTASK} system, both serialisation and deserialisation on the microcontroller and the server are automatically generated.

\subsection{Server}
On the server, off-the-shelf generic programming techniques are used to create the serialisation and deserialisation functions (see \cref{lst:ser_deser_server}).
Serialisation is a simple conversion from a value of the type to a string.
Deserialisation is a bit different in order to support streaming\footnotemark.
\footnotetext{%
	Here the \cleaninline{*!} variant of the generic interface is chosen that has fewer uniqueness constraints for the compiler-generated adaptors \citep{alimarine_generic_2005,hinze_derivable_2001}.%
}
Given a list of available characters, a tuple is always returned.
The right-hand side of the tuple contains the remaining characters, the unparsed input.
The left-hand side contains either an error or a maybe value.
If the value is a \cleaninline{?None}, there was no full value to parse.
If the value is a \cleaninline{?Just}, the data field contains a value of the requested type.

\begin{lstClean}[caption={Serialisation and deserialisation functions in \gls{CLEAN}.},label={lst:ser_deser_server}]
generic toByteCode a :: a -> String
generic fromByteCode a *! :: [Char] -> (Either String (? a), [Char])
\end{lstClean}

\subsection{Client}
The \gls{RTS} of the \gls{MTASK} system runs on resource-constrained microcontrollers and is implemented in portable \ccpp{}.
In order to achieve more interoperation safety, the communication between the server and the client is automated, i.e.\ the serialisation and deserialisation code in the \gls{RTS} is generated.
The technique used for this is very similar to the technique shown in \cref{chp:first-class_datatypes}.
However, instead of using template metaprogramming, a feature \gls{CLEAN} lacks, generic programming is also used as a two-stage rocket.
In contrast to many other generic programming systems, \gls{CLEAN} allows for access to much of the metadata of the compiler.
For example, the \cleaninline{Cons}, \cleaninline{Object}, \cleaninline{Field}, and \cleaninline{Record} generic constructors are enriched with their arity, names, types, \etc.
Furthermore, constructors can access the metadata of the objects and fields of their parent records.
Using this metadata, generic functions are created that generate \ccpp{} type definitions, parsers, and printers for any first-order \gls{CLEAN} type.
The exact details of this technique can be found in a paper that is currently in preparation.

\Glspl{ADT} are converted to tagged unions, newtypes to typedefs, records to structs, and arrays to dynamically allocated, size-parametrised arrays.
For example, the \gls{CLEAN} types in \cref{lst:ser_clean} are translated to the \ccpp{} types seen in \cref{lst:ser_c}.

\begin{lstClean}[caption={Simple \glspl{ADT} in \gls{CLEAN}.},label={lst:ser_clean}]
:: T a = A a | B NT {#Char}
:: NT =: NT Real
\end{lstClean}

\begin{lstArduino}[caption={Generated \ccpp{} type definitions for the simple \glspl{ADT}.},label={lst:ser_c}]
typedef double Real;
typedef char Char;

typedef Real NT;
enum T_c {A_c, B_c};

struct Char_HshArray { uint32_t size; Char *elements; };
struct T {
	enum T_c cons;
	struct { void *A;
		struct { NT f0; struct Char_HshArray f1; } B;
	} data;
};
\end{lstArduino}

For each of these generated types, two functions are created: a typed printer and a typed parser (see \cref{lst:ser_pp}).
The parser functions are parametrised by a read function, an allocation function, and parse functions for all type variables.
This allows for the use of these functions in environments where the communication is parametrised and the memory management is self-managed, such as in the \gls{MTASK} \gls{RTS}.

\begin{lstArduino}[caption={Printer and parser for the \glspl{ADT} in \ccpp{}.},label={lst:ser_pp}]
struct T parse_T(uint8_t (*get)(), void *(*alloc)(size_t),
	void *(*parse_0)(uint8_t (*)(), void *(*)(size_t)));

void print_T(void (*put)(uint8_t), struct T r,
	void (*print_0)(void (*)(uint8_t), void *));
\end{lstArduino}
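
In the \gls{RTS}, these generated functions are wired to the link interface and to the self-managed memory.
For a concrete instantiation of the type variable, usage then looks as follows (a sketch; \texttt{print\_elem}, \texttt{parse\_elem}, and \texttt{mtask\_alloc} are illustrative placeholders).

\begin{lstArduino}[caption={Using the generated printer and parser with the link interface (sketch).}]
void send_T(struct T value, void (*print_elem)(void (*)(uint8_t), void *)) {
	// bytes produced by the printer are written directly to the link
	print_T(link_write_byte, value, print_elem);
}

struct T receive_T(void *(*parse_elem)(uint8_t (*)(), void *(*)(size_t))) {
	// bytes are read from the link and nodes are allocated in mTask memory
	return parse_T(link_read_byte, mtask_alloc, parse_elem);
}
\end{lstArduino}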

\section{Conclusion}
This chapter showed the implementation of the \gls{MTASK} byte code compiler, the \gls{RTS}, and the internals of their communication.
It is not straightforward to execute \gls{MTASK} tasks on resource-constrained \gls{IOT} edge devices.
To achieve this, the terms in the \gls{DSL} are compiled to compact domain-specific byte code.
This byte code is sent for interpretation to the lightweight \gls{RTS} of the edge device.
The \gls{RTS} first evaluates the main expression in the interpreter.
The result of this evaluation, a run-time representation of the task, is a task tree.
This task tree is rewritten according to small-step reduction rules until a stable value is observed.
Rewriting multiple tasks at the same time is achieved by interleaving the rewrite steps, resulting in seemingly parallel execution of the tasks.
All communication between the server and the \gls{RTS}, including the serialisation and deserialisation, is automated.
From the structural representation of the types, printers and parsers are generated for both the server and the client.

\input{subfilepostamble}
\end{document}