1 \documentclass[../thesis.tex]{subfiles}
2
3 \input{subfilepreamble}
4
5 \setcounter{chapter}{4}
6
7 \begin{document}
8 \input{subfileprefix}
9 \chapter{The implementation of mTask}%
10 \label{chp:implementation}
11 \begin{chapterabstract}
12 This chapter shows the implementation of the \gls{MTASK} system by:
13 \begin{itemize}
14 \item showing the compilation and execution toolchain;
15 \item elaborating on the implementation and architecture of the \gls{RTS} of \gls{MTASK};
16 \item showing the implementation of the byte code compiler for the \gls{MTASK} language;
\item and explaining the machinery used to automatically serialise and deserialise data to and from the device.
18 \end{itemize}
19 \end{chapterabstract}
20
The \gls{MTASK} system targets resource-constrained edge devices that have little memory, processing power, and communication bandwidth.
22 Such edge devices are often powered by microcontrollers, tiny computers specifically designed for embedded applications.
23 The microcontrollers usually have flash-based program memory which wears out fairly quickly.
For example, the flash memory of the popular ATmega328P powering the \gls{ARDUINO} UNO is rated for \num{10000} write cycles.
While this sounds like a lot, if new tasks are sent to the device every minute or so, the guaranteed lifetime is only about seven days.
26 Hence, for dynamic applications, storing the program in the \gls{RAM} of the device and interpreting this code is necessary in order to save precious write cycles of the program memory.
27 In the \gls{MTASK} system, the \gls{MTASK} \gls{RTS}, a domain-specific \gls{OS}, is responsible for interpreting the programs.
28
29 Programs in \gls{MTASK} are \gls{DSL} terms constructed at run time in an \gls{ITASK} system.
30 \Cref{fig:toolchain} shows the compilation and execution toolchain of such programs.
First, the source code is compiled to a byte code specification; this specification contains the compiled main expression, the functions, and the \gls{SDS} and peripheral configuration.
How an \gls{MTASK} task is compiled to this specification is shown in \cref{sec:compiler_imp}.
This package is then sent to the \gls{RTS} of the device for execution, as shown in \cref{sec:compiler_rts}.
On the device, the \gls{RTS} may have multiple tasks active at the same time.
35
36 \begin{figure}
37 \centering
38 \centerline{\includestandalone{toolchain}}
39 \caption{Compilation and execution toolchain of \gls{MTASK} programs.}%
40 \label{fig:toolchain}
41 \end{figure}
42
43 \section{Run-time system}\label{sec:compiler_rts}
44 The \gls{RTS} is a customisable domain-specific \gls{OS} that takes care of the execution of tasks.
It also handles low-level mechanisms such as communication, multitasking, and memory management.
46 Once a device is programmed with the \gls{MTASK} \gls{RTS}, it can continuously receive new tasks without the need for reprogramming.
47 The \gls{OS} is written in portable \ccpp{} and only contains a small device-specific portion.
In order to keep the abstraction level high and the hardware requirements low, much of the high-level functionality of the \gls{MTASK} language is implemented not in terms of lower-level \gls{MTASK} constructs but in terms of \ccpp{} code.
49
Most microcontroller software consists of a cyclic executive instead of an \gls{OS}: a single loop function is executed continuously and all work is performed there.
51 In the \gls{RTS} of the \gls{MTASK} system, there is also such an event loop function.
52 It is a function with a relatively short execution time that gets called repeatedly.
53 The event loop consists of three distinct phases.
After completing these three phases, the device goes to sleep for as long as possible (see \cref{chp:green_computing_mtask} for more details on task scheduling).
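
A sketch of this event loop in \ccpp{} is given below; the phase function names are illustrative and do not correspond one-to-one to identifiers in the actual \gls{RTS} sources.

\begin{lstArduino}[caption={Sketch of the event loop of the \gls{MTASK} \gls{RTS} (function names are illustrative).},label={lst:event_loop_sketch}]
void event_loop(void) {
    process_communication();  // phase 1: handle incoming and outgoing messages
    execute_tasks();          // phase 2: perform a rewrite step for every task that is due
    manage_memory();          // phase 3: reclaim memory of stable tasks and unreachable nodes
    sleep_until_next_event(); // sleep as long as possible (see the chapter on scheduling)
}
\end{lstArduino}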
55
56 \subsection{Communication phase}
57 In the first phase, the communication channels are processed.
58 The exact communication method is a customisable device-specific option baked into the \gls{RTS}.
59 The interface is kept deliberately simple and consists of two layers: a link interface and a communication interface.
60 Besides opening, closing and cleaning up, the link interface has three functions that are shown in \cref{lst:link_interface}.
Consequently, implementing this link interface is very simple, yet it is still possible to add more advanced link features such as buffering.
There are implementations of this interface for serial and \gls{WIFI} connections using \gls{ARDUINO}, and for \gls{TCP} connections on Linux.
63
64 \begin{lstArduino}[caption={Link interface of the \gls{MTASK} \gls{RTS}.},label={lst:link_interface}]
65 bool link_input_available(void);
66 uint8_t link_read_byte(void);
67 void link_write_byte(uint8_t b);
68 \end{lstArduino}
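
To illustrate, a minimal implementation of this link interface on top of the \gls{ARDUINO} serial port could look as follows; the actual implementation in the \gls{RTS} may differ, for example by adding buffering.

\begin{lstArduino}[caption={A possible serial implementation of the link interface (sketch).},label={lst:link_serial_sketch}]
bool link_input_available(void) {
    return Serial.available() > 0;
}

uint8_t link_read_byte(void) {
    while (Serial.available() == 0)
        ; // block until a byte has arrived
    return (uint8_t)Serial.read();
}

void link_write_byte(uint8_t b) {
    Serial.write(b);
}
\end{lstArduino}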
69
70 The communication interface abstracts away from this link interface and is typed instead.
71 It contains only two functions as seen in \cref{lst:comm_interface}.
There are implementations for direct communication or for communication using an \gls{MQTT} broker.
73 Both use the automatic serialisation and deserialisation shown in \cref{sec:ccodegen}.
74
75 \begin{lstArduino}[caption={Communication interface of the \gls{MTASK} \gls{RTS}.},label={lst:comm_interface}]
76 struct MTMessageTo receive_message(void);
77 void send_message(struct MTMessageFro msg);
78 \end{lstArduino}
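
For the direct communication variant, these functions can be thin wrappers that connect the link interface to the generated (de)serialisation functions of \cref{sec:ccodegen}. The sketch below assumes generated functions named \texttt{parse\_MTMessageTo} and \texttt{print\_MTMessageFro} and an allocator \texttt{mt\_alloc}; these names are illustrative.

\begin{lstArduino}[caption={Sketch of a direct implementation of the communication interface (names are illustrative).},label={lst:comm_direct_sketch}]
struct MTMessageTo receive_message(void) {
    // the generated parser pulls bytes from the link interface one by one
    return parse_MTMessageTo(link_read_byte, mt_alloc);
}

void send_message(struct MTMessageFro msg) {
    // the generated printer pushes bytes to the link interface one by one
    print_MTMessageFro(link_write_byte, msg);
}
\end{lstArduino}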
79
80 Processing the received messages from the communication channels happens synchronously and the channels are exhausted completely before moving on to the next phase.
There are several possible messages that can be received from the server, handled roughly as sketched after the following overview:
82
83 \begin{description}
84 \item[SpecRequest]
85 is a message instructing the device to send its specification and it is received immediately after connecting.
86 The \gls{RTS} responds with a \texttt{Spec} answer containing the specification.
87 \item[TaskPrep]
88 tells the device a task is on its way.
89 Especially on faster connections, it may be the case that the communication buffers overflow because a big message is sent while the \gls{RTS} is busy executing tasks.
90 This message allows the \gls{RTS} to postpone execution for a while, until the larger task has been received.
The server sends the task only after the device has acknowledged the preparation by sending a \texttt{TaskPrepAck} message.
92 \item[Task]
93 contains a new task, its peripheral configuration, the \glspl{SDS}, and the byte code.
94 The new task is immediately copied to the task storage but is only initialised during the next phase.
95 The device acknowledges the task by sending a \texttt{TaskAck} message.
96 \item[SdsUpdate]
97 notifies the device of the new value for a lowered \gls{SDS}.
98 The old value of the lowered \gls{SDS} is immediately replaced with the new one.
99 There is no acknowledgement required.
100 \item[TaskDel]
101 instructs the device to delete a running task.
102 Tasks are automatically deleted when they become stable.
103 However, a task may also be deleted when the surrounding task on the server is deleted, for example when the task is on the left-hand side of a step combinator and the condition to step holds.
104 The device acknowledges the deletion by sending a \texttt{TaskDelAck}.
105 \item[Shutdown]
106 tells the device to reset.
107 \end{description}
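
The handling of these messages can be sketched as the following dispatch loop; the constructor tags and helper functions are illustrative and do not necessarily match the actual \gls{RTS} sources.

\begin{lstArduino}[caption={Sketch of the message dispatch loop (tags and helpers are illustrative).},label={lst:msg_dispatch_sketch}]
void process_communication(void) {
    while (link_input_available()) {
        struct MTMessageTo msg = receive_message();
        switch (msg.cons) {
            case SpecRequest_c: send_message(make_spec());          break;
            case TaskPrep_c:    send_message(make_prep_ack());      break;
            case Task_c:        store_task(&msg);
                                send_message(make_task_ack(&msg));  break;
            case SdsUpdate_c:   write_lowered_sds(&msg);            break;
            case TaskDel_c:     delete_task(&msg);
                                send_message(make_del_ack(&msg));   break;
            case Shutdown_c:    reset_device();                     break;
        }
    }
}
\end{lstArduino}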
108
109 \subsection{Execution phase}
The second phase performs one execution step for all tasks that are due.
Tasks are stored in a priority queue ordered by the time at which they need to execute; the \gls{RTS} selects all tasks that can be scheduled (see \cref{sec:scheduling} for more details).
112 Execution of a task is always an interplay between the interpreter and the rewriter.
113
When a new task is received, it is not yet initialised, i.e.\ the pointer to its current task tree is still null; the byte code of the main expression is then interpreted, which always produces a task tree.
A task tree is a tree structure in which each node represents a task combinator and the leaves are basic tasks.
118 Execution of a task consists of continuously rewriting the task until its value is stable.
119
120 Rewriting is a destructive process, i.e.\ the rewriting is done in place.
121 The rewriting engine uses the interpreter when needed, e.g.\ to calculate the step continuations.
122 The rewriter and the interpreter use the same stack to store intermediate values.
123 Rewriting steps are small so that interleaving results in seemingly parallel execution.
124 In this phase new task tree nodes may be allocated.
Both rewriting and initialisation are atomic operations in the sense that no \gls{SDS} processing is done other than the \gls{SDS} operations of the task itself.
126 The host is notified if a task value is changed after a rewrite step by sending a \texttt{TaskReturn} message.
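
A task tree node can be pictured as a small tagged structure; the sketch below is illustrative and the field names are assumptions, but it reflects the properties used later on: nodes know their parent, trees contain no sharing, and nodes can be marked as garbage during rewriting.

\begin{lstArduino}[caption={Sketch of a task tree node (field names are illustrative).},label={lst:tasktree_node_sketch}]
struct task_tree_node {
    uint8_t type;                       // which combinator or basic task this node represents
    bool trash;                         // marked during rewriting, reclaimed during compaction
    struct task_tree_node *parent;      // trees contain no sharing or cycles, only parent links
    struct task_tree_node *children[2]; // child pointers for combinator nodes
};
\end{lstArduino}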
127
128 Take for example a blink task for which the code is shown in \cref{lst:blink_code}.
129
130 \begin{lstClean}[caption={Code for a blink program.},label={lst:blink_code}]
131 fun \blink=(\st->delay (lit 500) >>|. writeD d3 st >>=. blink o Not)
132 In {main = blink true}
133 \end{lstClean}
134
135 On receiving this task, the task tree is still null and the initial expression \cleaninline{blink true} is evaluated by the interpreter.
136 This results in the task tree shown in \cref{fig:blink_tree}.
137 Rewriting always starts at the top of the tree and traverses to the leaves, the basic tasks that do the actual work.
The first basic task encountered is the \cleaninline{delay} task, which yields no value until the given time, \qty{500}{\ms} in this case, has passed.
Once the \cleaninline{delay} task has yielded a stable value after a number of rewrites, execution continues with the right-hand side of the \cleaninline{>>\|.} combinator.
140 This combinator has a \cleaninline{writeD} task at the left-hand side that becomes stable after one rewrite step in which it writes the value to the given pin.
141 When \cleaninline{writeD} becomes stable, the written value is the task value that is observed by the right-hand side of the \cleaninline{>>=.} combinator.
142 This will call the interpreter to evaluate the expression, now that the argument of the function is known.
The result of the function is again a task tree, but now with different arguments to the tasks, e.g.\ the state argument of \cleaninline{writeD} is inverted.
144
145 \begin{figure}
146 \centering
147 \includestandalone{blinktree}
148 \caption{The task tree for a blink task in \cref{lst:blink_code} in \gls{MTASK}.}%
149 \label{fig:blink_tree}
150 \end{figure}
151
152 \subsection{Memory management}
153 The third and final phase is memory management.
154 The \gls{MTASK} \gls{RTS} is designed to run on systems with as little as \qty{2}{\kibi\byte} of \gls{RAM}.
155 Aggressive memory management is therefore vital.
Not all microcontroller firmware supports heaps and---when it does---allocation often leaves holes when memory is not freed in a \emph{last in, first out} fashion.
157 The \gls{RTS} uses a chunk of memory in the global data segment with its own memory manager tailored to the needs of \gls{MTASK}.
158 The size of this block can be changed in the configuration of the \gls{RTS} if necessary.
159 On an \gls{ARDUINO} UNO---equipped with \qty{2}{\kibi\byte} of \gls{RAM}---the maximum viable size is about \qty{1500}{\byte}.
The self-managed memory uses a layout similar to that of \gls{C} programs, only with the heap and the stack switched (see \cref{fig:memory_layout}).
161
162 \begin{figure}
163 \centering
164 \includestandalone{memorylayout}
165 \caption{Memory layout in the \gls{MTASK} \gls{RTS}.}\label{fig:memory_layout}
166 \end{figure}
167
A task is stored below the stack and its complete state is a \gls{CLEAN} record containing, most importantly, the task id, a pointer to the task tree in the heap (null if not yet initialised), the current task value, the configuration of \glspl{SDS}, the configuration of peripherals, the byte code, and some scheduling information.
169
170 In memory, task data grows from the bottom up and the interpreter stack is located directly on top of it growing in the same direction.
171 As a consequence, the stack moves when a new task is received.
172 This never happens within execution because communication is always processed before execution.
173 Values in the interpreter are always stored on the stack.
174 Compound data types are stored unboxed and flattened.
175 Task trees grow from the top down as in a heap.
176 This approach allows for flexible ratios, i.e.\ many tasks and small trees or few tasks and big trees.
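
A sketch of this self-managed memory block is given below; the identifiers are illustrative and the block size is a configuration option of the \gls{RTS}.

\begin{lstArduino}[caption={Sketch of the self-managed memory block (identifiers are illustrative).},label={lst:memory_block_sketch}]
uint8_t mtask_memory[MTASK_MEM_SIZE]; // one block in the global data segment

uint8_t *task_top; // task data and the interpreter stack grow upwards from the bottom
uint8_t *heap_top; // task tree nodes grow downwards from the top

// the free memory is the gap between the two regions
size_t mem_free(void) {
    return (size_t)(heap_top - task_top);
}
\end{lstArduino}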
177
Stable tasks and unreachable task tree nodes are removed.
179 If a task is to be removed, tasks with higher memory addresses are moved down.
180 For task trees---stored in the heap---the \gls{RTS} already marks tasks and task trees as trash during rewriting so the heap can be compacted in a single pass.
This is possible because there is no sharing or cycles in task trees and nodes contain pointers to their parent.
182
183 \section{Compiler}\label{sec:compiler_imp}
184 \subsection{Compiler infrastructure}
The byte code compiler interpretation of the \gls{MTASK} language is implemented as a monad stack containing a writer monad and a state monad.
186 The writer monad is used to generate code snippets locally without having to store them in the monadic values.
187 The state monad accumulates the code, and stores the state the compiler requires.
\Cref{lst:compiler_state} shows the data type for the state, storing:
\begin{itemize}
\item the function the compiler is currently in;
\item the code of the main expression;
\item the context (see \cref{ssec:step});
\item the code for the functions;
\item the next fresh label;
\item a list of all the used \glspl{SDS}, either local \glspl{SDS} containing the initial value (\cleaninline{Left}) or lowered \glspl{SDS} (see \cref{sec:liftsds}) containing a reference to the associated \gls{ITASK} \gls{SDS};
\item and finally a list of the used peripherals.
\end{itemize}
196
197 \begin{lstClean}[label={lst:compiler_state},caption={The type for the \gls{MTASK} byte code compiler.}]
198 :: BCInterpret a :== StateT BCState (WriterT [BCInstr] Identity) a
199 :: BCState =
200 { bcs_infun :: JumpLabel
201 , bcs_mainexpr :: [BCInstr]
202 , bcs_context :: [BCInstr]
203 , bcs_functions :: Map JumpLabel BCFunction
204 , bcs_freshlabel :: JumpLabel
205 , bcs_sdses :: [Either String255 MTLens]
206 , bcs_hardware :: [BCPeripheral]
207 }
208 :: BCFunction =
209 { bcf_instructions :: [BCInstr]
210 , bcf_argwidth :: UInt8
211 , bcf_returnwidth :: UInt8
212 }
213 \end{lstClean}
214
215 Executing the compiler is done by providing an initial state and running the monad.
216 After compilation, several post-processing steps are applied to make the code suitable for the microprocessor.
First, in all tail calls, \cleaninline{BCReturn} instructions are replaced by \cleaninline{BCTailcall} instructions to optimise them.
218 Furthermore, all byte code is concatenated, resulting in one big program.
219 Many instructions have commonly used arguments so shorthands are introduced to reduce the program size.
220 For example, the \cleaninline{BCArg} instruction is often called with argument \numrange{0}{2} and can be replaced by the \numrange[parse-numbers=false]{\cleaninline{BCArg0}}{\cleaninline{BCArg2}} shorthands.
221 Furthermore, redundant instructions such as pop directly after push are removed as well in order not to burden the code generation with these intricacies.
222 Finally the labels are resolved to represent actual program addresses instead of the freshly generated identifiers.
223 After the byte code is ready, the lowered \glspl{SDS} are resolved to provide an initial value for them.
The byte code, \gls{SDS} specification, and peripheral specifications are the result of the process, ready to be sent to the device.
225
226 \subsection{Instruction set}
227 The instruction set is a fairly standard stack machine instruction set extended with special \gls{TOP} instructions for creating task tree nodes.
228 All instructions are housed in a \gls{CLEAN} \gls{ADT} and serialised to the byte representation using generic functions (see \cref{sec:ccodegen}).
Type synonyms and newtypes are used to provide insight into the arguments of the instructions (\cref{lst:type_synonyms}).
Labels are always two bytes long; all other arguments are one byte long.
231
232 \begin{lstClean}[caption={Type synonyms for instructions arguments.},label={lst:type_synonyms}]
233 :: ArgWidth :== UInt8 :: ReturnWidth :== UInt8
234 :: Depth :== UInt8 :: Num :== UInt8
235 :: SdsId :== UInt8 :: JumpLabel =: JL UInt16
236 \end{lstClean}
237
238 \Cref{lst:instruction_type} shows an excerpt of the \gls{CLEAN} type that represents the instruction set.
239 Shorthand instructions such as instructions with inlined arguments are omitted for brevity.
240 Detailed semantics for the instructions are given in \cref{chp:bytecode_instruction_set}.
One notable instruction is the \cleaninline{BCMkTask} instruction: it allocates and initialises a task tree node and pushes a pointer to it on the stack.
242
243 \begin{lstClean}[caption={The type housing the instruction set in \gls{MTASK}.},label={lst:instruction_type}]
244 :: BCInstr
245 //Jumps
246 = BCJumpF JumpLabel | BCLabel JumpLabel | BCJumpSR ArgWidth JumpLabel
247 | BCReturn ReturnWidth ArgWidth
248 | BCTailcall ArgWidth ArgWidth JumpLabel
249 //Arguments
250 | BCArgs ArgWidth ArgWidth
251 //Task node creation and refinement
252 | BCMkTask BCTaskType | BCTuneRateMs | BCTuneRateSec
253 //Stack ops
254 | BCPush String255 | BCPop Num | BCRot Depth Num | BCDup | BCPushPtrs
255 //Casting
256 | BCItoR | BCItoL | BCRtoI | ...
257 // arith
258 | BCAddI | BCSubI | ...
259 ...
260
261 :: BCTaskType
262 = BCStableNode ArgWidth | BCUnstableNode ArgWidth
263 // Pin io
264 | BCReadD | BCWriteD | BCReadA | BCWriteA | BCPinMode
265 // Interrupts
266 | BCInterrupt
267 // Repeat
268 | BCRepeat
269 // Delay
270 | BCDelay | BCDelayUntil
271 // Parallel
272 | BCTAnd | BCTOr
273 //Step
274 | BCStep ArgWidth JumpLabel
275 //Sds ops
276 | BCSdsGet SdsId | BCSdsSet SdsId | BCSdsUpd SdsId JumpLabel
277 // Rate limiter
278 | BCRateLimit
279 ////Peripherals
280 //DHT
281 | BCDHTTemp UInt8 | BCDHTHumid UInt8
282 ...
283 \end{lstClean}
284
285 \section{Compilation rules}
286 This section describes the compilation rules, the translation from \gls{AST} to byte code.
287 The compilation scheme consists of three schemes\slash{}functions.
288 Double vertical bars, e.g.\ $\stacksize{a_i}$, denote the number of stack cells required to store the argument.
289
290 Some schemes have a context $r$ as an argument which contains information about the location of the arguments in scope.
291 More information is given in the schemes requiring such arguments.
292
293 \begin{table}
294 \centering
295 \caption{An overview of the compilation schemes.}
296 \begin{tabularx}{\linewidth}{l X}
297 \toprule
298 Scheme & Description\\
299 \midrule
300 $\cschemeE{e}{r}$ & Produces the value of expression $e$ given the context $r$ and pushes it on the stack.
301 The result can be a basic value or a pointer to a task.\\
302 $\cschemeF{e}$ & Generates the bytecode for functions.\\
303 $\cschemeS{e}{r}{w} $ & Generates the function for the step continuation given the context $r$ and the width $w$ of the left-hand side task value.\\
304 \bottomrule
305 \end{tabularx}
306 \end{table}
307
308 \subsection{Expressions}
Almost all expression constructs are compiled using $\mathcal{E}$.
310 The argument of $\mathcal{E}$ is the context (see \cref{ssec:functions}).
311 Values are always placed on the stack; tuples and other compound data types are unpacked.
Function calls, function arguments, and tasks are also compiled using $\mathcal{E}$, but their compilation is explained later.
313
314 \begin{align*}
315 \cschemeE{\text{\cleaninline{lit}}~e}{r} & = \text{\cleaninline{BCPush (bytecode e)}};\\
316 \cschemeE{e_1\mathbin{\text{\cleaninline{+.}}}e_2}{r} & = \cschemeE{e_1}{r};
317 \cschemeE{e_2}{r};
318 \text{\cleaninline{BCAdd}};\\
319 {} & \text{\emph{Similar for other binary operators}}\\
320 \cschemeE{\text{\cleaninline{Not}}~e}{r} & =
321 \cschemeE{e}{r};
322 \text{\cleaninline{BCNot}};\\
323 {} & \text{\emph{Similar for other unary operators}}\\
324 \cschemeE{\text{\cleaninline{If}}~e_1~e_2~e_3}{r} & =
325 \cschemeE{e_1}{r};
326 \text{\cleaninline{BCJmpF}}\enskip l_{else}; \mathbin{\phantom{=}} \cschemeE{e_2}{r}; \text{\cleaninline{BCJmp}}\enskip l_{endif};\\
327 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{else}; \cschemeE{e_3}{r}; \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{endif};\\
328 {} & \text{\emph{Where $l_{else}$ and $l_{endif}$ are fresh labels}}\\
329 \cschemeE{\text{\cleaninline{tupl}}~e_1~e_2}{r} & =
330 \cschemeE{e_1}{r};
331 \cschemeE{e_2}{r};\\
332 {} & \text{\emph{Similar for other unboxed compound data types}}\\
333 \cschemeE{\text{\cleaninline{first}}~e}{r} & =
334 \cschemeE{e}{r};
335 \text{\cleaninline{BCPop}}\enskip w;\\
336 {} & \text{\emph{Where $w$ is the width of the right value and}}\\
337 {} & \text{\emph{similar for other unboxed compound data types}}\\
338 \cschemeE{\text{\cleaninline{second}}\enskip e}{r} & =
339 \cschemeE{e}{r};
340 \text{\cleaninline{BCRot}}\enskip (w_l+w_r)\enskip w_r;
341 \text{\cleaninline{BCPop}}\enskip w_l;\\
{} & \text{\emph{Where $w_l$ is the width of the left value and $w_r$ of the right value, and}}\\
343 {} & \text{\emph{similar for other unboxed compound data types}}\\
344 \end{align*}
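
As a small worked example, applying these rules to the conditional expression \cleaninline{If (lit True) (lit 1) (lit 2)} yields the instruction sequence below, with the byte code encoding of the literals abbreviated.

\begin{align*}
	\cschemeE{\text{\cleaninline{If (lit True) (lit 1) (lit 2)}}}{r} & =
		\text{\cleaninline{BCPush True}};
		\text{\cleaninline{BCJmpF}}\enskip l_{else};
		\text{\cleaninline{BCPush 1}};
		\text{\cleaninline{BCJmp}}\enskip l_{endif};\\
	{} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}\enskip l_{else};
		\text{\cleaninline{BCPush 2}};
		\text{\cleaninline{BCLabel}}\enskip l_{endif};\\
\end{align*}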
345
Translating $\mathcal{E}$ to \gls{CLEAN} code is very straightforward: it basically means writing the instructions to the writer monad.
347 Almost always, the type of the interpretation is not used, i.e.\ it is a phantom type.
348 To still have the functions return the correct type, the \cleaninline{tell`}\footnote{\cleaninline{tell` :: [BCInstr] -> BCInterpret a}} helper is used.
This function is similar to the writer monad's \cleaninline{tell} function but is cast to the correct type.
350 \Cref{lst:imp_arith} shows the implementation for the arithmetic and conditional expressions.
351 Note that $r$, the context, is not an explicit argument here but stored in the state.
352
353 \begin{lstClean}[caption={Interpretation implementation for the arithmetic and conditional functions.},label={lst:imp_arith}]
354 instance expr BCInterpret where
355 lit t = tell` [BCPush (toByteCode{|*|} t)]
356 (+.) a b = a >>| b >>| tell` [BCAdd]
357 ...
358 If c t e = freshlabel >>= \elselabel->freshlabel >>= \endiflabel->
359 c >>| tell` [BCJumpF elselabel] >>|
360 t >>| tell` [BCJump endiflabel,BCLabel elselabel] >>|
361 e >>| tell` [BCLabel endiflabel]
362 \end{lstClean}
363
364 \subsection{Functions}\label{ssec:functions}
Compiling functions and other top-level definitions is done using $\mathcal{F}$, which generates byte code for the complete program by iterating over the functions and ending with the main expression.
366 When compiling the body of the function, the arguments of the function are added to the context so that the addresses can be determined when referencing arguments.
The main expression is a special case of $\mathcal{F}$ since it has neither arguments nor a continuation.
368 Therefore, it is just compiled using $\mathcal{E}$ with an empty context.
369
370 \begin{align*}
371 \cschemeF{main=m} & =
372 \cschemeE{m}{[]};\\
373 \cschemeF{f~a_0 \ldots a_n = b~\text{\cleaninline{In}}~m} & =
374 \text{\cleaninline{BCLabel}}~f; \cschemeE{b}{[\langle f, i\rangle, i\in \{(\Sigma^n_{i=0}\stacksize{a_i})..0\}]};\\
375 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCReturn}}~\stacksize{b}~n; \cschemeF{m};\\
376 \end{align*}
377
378 A function call starts by pushing the stack and frame pointer, and making space for the program counter (\cref{lst:funcall_pushptrs}) followed by evaluating the arguments in reverse order (\cref{lst:funcall_args}).
379 On executing \cleaninline{BCJumpSR}, the program counter is set and the interpreter jumps to the function (\cref{lst:funcall_jumpsr}).
380 When the function returns, the return value overwrites the old pointers and the arguments.
381 This occurs right after a \cleaninline{BCReturn} (\cref{lst:funcall_ret}).
382 Putting the arguments on top of pointers and not reserving space for the return value uses little space and facilitates tail call optimization.
383
384 \begin{figure}
385 \begin{subfigure}{.24\linewidth}
386 \centering
387 \includestandalone{memory1}
388 \caption{\cleaninline{BCPushPtrs}.}\label{lst:funcall_pushptrs}
389 \end{subfigure}
390 \begin{subfigure}{.24\linewidth}
391 \centering
392 \includestandalone{memory2}
393 \caption{Arguments.}\label{lst:funcall_args}
394 \end{subfigure}
395 \begin{subfigure}{.24\linewidth}
396 \centering
397 \includestandalone{memory3}
398 \caption{\cleaninline{BCJumpSR}.}\label{lst:funcall_jumpsr}
399 \end{subfigure}
400 \begin{subfigure}{.24\linewidth}
401 \centering
402 \includestandalone{memory4}
403 \caption{\cleaninline{BCReturn}.}\label{lst:funcall_ret}
404 \end{subfigure}
405 \caption{The stack layout during function calls.}%
406 \end{figure}
407
408 Calling a function and referencing function arguments are an extension to $\mathcal{E}$ as shown below.
Arguments may be at different places on the stack at different times (see \cref{ssec:step}) and therefore the exact location must always be determined from the context using \cleaninline{findarg}\footnote{\cleaninline{findarg [l`:r] l = if (l == l`) 0 (1 + findarg r l)}}.
410 Compiling argument $a_{f^i}$, the $i$th argument in function $f$, consists of traversing all positions in the current context.
411 Arguments wider than one stack cell are fetched in reverse to reconstruct the original order.
412
413 \begin{align*}
414 \cschemeE{f(a_0, \ldots, a_n)}{r} & =
415 \text{\cleaninline{BCPushPtrs}}; \cschemeE{a_i}{r}~\text{for all}~i\in\{n\ldots 0\}; \text{\cleaninline{BCJumpSR}}~n~f;\\
416 \cschemeE{a_{f^i}}{r} & =
417 \text{\cleaninline{BCArg}~findarg}(r, f, i)~\text{for all}~i\in\{w\ldots v\};\\
418 {} & v = \Sigma^{i-1}_{j=0}\stacksize{a_{f^j}}~\text{ and }~ w = v + \stacksize{a_{f^i}}\\
419 \end{align*}
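
For instance, a call to a unary function $f$ with the single one-cell argument \cleaninline{lit 42} results in the sequence below, where the first operand of \cleaninline{BCJumpSR} is the total stack width of the arguments.

\begin{align*}
	\cschemeE{f(\text{\cleaninline{lit 42}})}{r} & =
		\text{\cleaninline{BCPushPtrs}};
		\text{\cleaninline{BCPush 42}};
		\text{\cleaninline{BCJumpSR}}~1~f;\\
\end{align*}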
420
421 Translating the compilation schemes for functions to \gls{CLEAN} is not as straightforward as other schemes due to the nature of shallow embedding in combination with the use of state.
422 The \cleaninline{fun} class has a single function with a single argument.
423 This argument is a \gls{CLEAN} function that---when given a callable \gls{CLEAN} function representing the \gls{MTASK} function---produces the \cleaninline{main} expression and a callable function.
424 To compile this, the argument must be called with a function representing a function call in \gls{MTASK}.
425 \Cref{lst:fun_imp} shows the implementation for this as \gls{CLEAN} code.
426 To uniquely identify the function, a fresh label is generated.
427 The function is then called with the \cleaninline{callFunction} helper function that generates the instructions that correspond to calling the function.
That is, it pushes the pointers, compiles the arguments, and writes the \cleaninline{BCJumpSR} instruction.
The resulting structure (\cleaninline{g In m}) contains a function representing the \gls{MTASK} function (\cleaninline{g}) and the \cleaninline{main} structure to continue with.
430 To get the actual function, \cleaninline{g} must be called with representations for the argument, i.e.\ using \cleaninline{findarg} for all arguments.
The arguments are added to the context using \cleaninline{addToCtx}, and within \cleaninline{infun} the \cleaninline{liftFunction} function is called with the label, the argument width, and the compiler.
432 This function executes the compiler, decorates the instructions with a label and places them in the function dictionary together with the metadata such as the argument width.
433 After lifting the function, the context is cleared again and compilation continues with the rest of the program.
434
435 \begin{lstClean}[label={lst:fun_imp},caption={The interpretation implementation for functions.}]
436 instance fun (BCInterpret a) BCInterpret | type a where
437 fun def = {main=freshlabel >>= \funlabel->
438 let (g In m) = def \a->callFunction funlabel (toByteWidth a) [a]
439 argwidth = toByteWidth (argOf g)
440 in addToCtx funlabel zero argwidth
441 >>| infun funlabel
442 (liftFunction funlabel argwidth
443 (g (retrieveArgs funlabel zero argwidth)
444 ) ?None)
445 >>| clearCtx >>| m.main
446 }
447
448 argOf :: ((m a) -> b) a -> UInt8 | toByteWidth a
449 callFunction :: JumpLabel UInt8 [BCInterpret b] -> BCInterpret c | ...
450 liftFunction :: JumpLabel UInt8 (BCInterpret a) (?UInt8) -> BCInterpret ()
451 infun :: JumpLabel (BCInterpret a) -> BCInterpret a
452 \end{lstClean}
453
454 \subsection{Tasks}\label{ssec:scheme_tasks}
455 Task trees are created with the \cleaninline{BCMkTask} instruction that allocates a node and pushes a pointer to it on the stack.
456 It pops arguments from the stack according to the given task type.
457 The following extension of $\mathcal{E}$ shows this compilation scheme (except for the step combinator, explained in \cref{ssec:step}).
458
459 \begin{align*}
460 \cschemeE{\text{\cleaninline{rtrn}}~e}{r} & =
461 \cschemeE{e}{r};
462 \text{\cleaninline{BCMkTask BCStable}}_{\stacksize{e}};\\
463 \cschemeE{\text{\cleaninline{unstable}}~e}{r} & =
464 \cschemeE{e}{r};
465 \text{\cleaninline{BCMkTask BCUnstable}}_{\stacksize{e}};\\
466 \cschemeE{\text{\cleaninline{readA}}~e}{r} & =
467 \cschemeE{e}{r};
468 \text{\cleaninline{BCMkTask BCReadA}};\\
469 \cschemeE{\text{\cleaninline{writeA}}~e_1~e_2}{r} & =
470 \cschemeE{e_1}{r};
471 \cschemeE{e_2}{r};
472 \text{\cleaninline{BCMkTask BCWriteA}};\\
473 \cschemeE{\text{\cleaninline{readD}}~e}{r} & =
474 \cschemeE{e}{r};
475 \text{\cleaninline{BCMkTask BCReadD}};\\
476 \cschemeE{\text{\cleaninline{writeD}}~e_1~e_2}{r} & =
477 \cschemeE{e_1}{r};
478 \cschemeE{e_2}{r};
479 \text{\cleaninline{BCMkTask BCWriteD}};\\
480 \cschemeE{\text{\cleaninline{delay}}~e}{r} & =
481 \cschemeE{e}{r};
482 \text{\cleaninline{BCMkTask BCDelay}};\\
483 \cschemeE{\text{\cleaninline{rpeat}}~e}{r} & =
484 \cschemeE{e}{r};
485 \text{\cleaninline{BCMkTask BCRepeat}};\\
486 \cschemeE{e_1\text{\cleaninline{.\|\|.}}e_2}{r} & =
487 \cschemeE{e_1}{r};
488 \cschemeE{e_2}{r};
489 \text{\cleaninline{BCMkTask BCOr}};\\
490 \cschemeE{e_1\text{\cleaninline{.&&.}}e_2}{r} & =
491 \cschemeE{e_1}{r};
492 \cschemeE{e_2}{r};
493 \text{\cleaninline{BCMkTask BCAnd}};\\
494 \end{align*}
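
As a worked example, the \cleaninline{delay (lit 500)} task from the blink program in \cref{lst:blink_code} is compiled to a push of its argument followed by the creation of a delay node.

\begin{align*}
	\cschemeE{\text{\cleaninline{delay (lit 500)}}}{r} & =
		\text{\cleaninline{BCPush 500}};
		\text{\cleaninline{BCMkTask BCDelay}};\\
\end{align*}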
495
These schemes translate to \gls{CLEAN} code by writing the correct \cleaninline{BCMkTask} instruction, as exemplified in \cref{lst:imp_ret}.
497
498 \begin{lstClean}[caption={The byte code interpretation implementation for \cleaninline{rtrn}.},label={lst:imp_ret}]
499 instance rtrn BCInterpret
500 where
501 rtrn m = m >>| tell` [BCMkTask (bcstable m)]
502 \end{lstClean}
503
504 \subsection{Sequential combinator}\label{ssec:step}
505 The \cleaninline{step} construct is a special type of task because the task value of the left-hand side changes over time.
506 Therefore, the task continuations on the right-hand side are \emph{observing} this task value and acting upon it.
507 In the compilation scheme, all continuations are first converted to a single function that has two arguments: the stability of the task and its value.
508 This function either returns a pointer to a task tree or fails (denoted by $\bot$).
It is special because, in the generated function, the task value of the left-hand side is inspected.
510 Furthermore, it is a lazy node in the task tree: the right-hand side may yield a new task tree after several rewrite steps, i.e.\ it is allowed to create infinite task trees using step combinators.
511 The function is generated using the $\mathcal{S}$ scheme that requires two arguments: the context $r$ and the width of the left-hand side so that it can determine the position of the stability which is added as an argument to the function.
512 The resulting function is basically a list of if-then-else constructions to check all predicates one by one.
513 Some optimization is possible here but has currently not been implemented.
514
515 \begin{align*}
516 \cschemeE{t_1\text{\cleaninline{>>*.}}t_2}{r} & =
517 \cschemeE{a_{f^i}}{r}, \langle f, i\rangle\in r;
518 \text{\cleaninline{BCMkTask}}~\text{\cleaninline{BCStable}}_{\stacksize{r}}; \cschemeE{t_1}{r};\\
519 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCMkTask}}~\text{\cleaninline{BCAnd}}; \text{\cleaninline{BCMkTask}}~(\text{\cleaninline{BCStep}}~(\cschemeS{t_2}{(r + [\langle l_s, i\rangle])}{\stacksize{t_1}}));\\
520 \end{align*}
521
522 \begin{align*}
523 \cschemeS{[]}{r}{w} & =
524 \text{\cleaninline{BCPush}}~\bot;\\
525 \cschemeS{\text{\cleaninline{IfValue}}~f~t:cs}{r}{w} & =
526 \text{\cleaninline{BCArg}} (\stacksize{r} + w);
527 \text{\cleaninline{BCIsNoValue}};\\
528 {} & \mathbin{\phantom{=}} \cschemeE{f}{r};
529 \text{\cleaninline{BCAnd}};\\
530 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCJmpF}}~l_1;\\
531 {} & \mathbin{\phantom{=}} \cschemeE{t}{r};
532 \text{\cleaninline{BCJmp}}~l_2;\\
533 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}~l_1;
534 \cschemeS{cs}{r}{w};\\
535 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCLabel}}~l_2;\\
536 {} & \text{\emph{Where $l_1$ and $l_2$ are fresh labels}}\\
537 {} & \text{\emph{Similar for \cleaninline{IfStable} and \cleaninline{IfUnstable}}}\\
538 \end{align*}
539
540 First the context is evaluated.
541 The context contains arguments from functions and steps that need to be preserved after rewriting.
542 The evaluated context is combined with the left-hand side task value by means of a \cleaninline{.&&.} combinator to store it in the task tree so that it is available after a rewrite.
This means that the task tree is transformed as seen in \cref{lst:context_tree}.
544
545 \begin{figure}
546 \begin{subfigure}{.5\textwidth}
547 \includestandalone{contexttree1}
548 \caption{Without the embedded context.}
549 \end{subfigure}%
550 \begin{subfigure}{.5\textwidth}
551 \includestandalone{contexttree2}
552 \caption{With the embedded context.}
553 \end{subfigure}
554 \caption{Context embedded in a task tree.}%
555 \label{lst:context_tree}
556 \end{figure}
557
558 The translation to \gls{CLEAN} is given in \cref{lst:imp_seq}.
559
560 \begin{lstClean}[caption={Byte code compilation interpretation implementation for the step class.},label={lst:imp_seq}]
561 instance step BCInterpret where
562 (>>*.) lhs cont
563 //Fetch a fresh label and fetch the context
564 = freshlabel >>= \funlab->gets (\s->s.bcs_context)
565 //Generate code for lhs
566 >>= \ctx->lhs
567 //Possibly add the context
568 >>| tell` (if (ctx =: []) []
569 //The context is just the arguments up till now in reverse
570 ( [BCArg (UInt8 i)\\i<-reverse (indexList ctx)]
571 ++ map BCMkTask (bcstable (UInt8 (length ctx)))
572 ++ [BCMkTask BCTAnd]
573 ))
574 //Increase the context
575 >>| addToCtx funlab zero lhswidth
576 //Lift the step function
577 >>| liftFunction funlab
578 //Width of the arguments is the width of the lhs plus the
579 //stability plus the context
580 (one + lhswidth + (UInt8 (length ctx)))
581 //Body label ctx width continuations
582 (contfun funlab (UInt8 (length ctx)))
583 //Return width (always 1, a task pointer)
584 (Just one)
585 >>| modify (\s->{s & bcs_context=ctx})
586 >>| tell` [BCMkTask (instr rhswidth funlab)]
587
588 toContFun :: JumpLabel UInt8 -> BCInterpret a
589 toContFun steplabel contextwidth
590 = foldr tcf (tell` [BCPush fail]) cont
591 where
592 tcf (IfStable f t)
593 = If ((stability >>| tell` [BCIsStable]) &. f val)
594 (t val >>| tell` [])
595 ...
596 stability = tell` [BCArg (lhswidth + contextwidth)]
597 val = retrieveArgs steplabel zero lhswidth
598 \end{lstClean}
599
600 \subsection{Shared data sources}\label{lst:imp_sds}
The compilation scheme for \gls{SDS} definitions is a trivial extension of $\mathcal{F}$ since no code is generated, as seen below.
602
603 \begin{align*}
604 \cschemeF{\text{\cleaninline{sds}}~x=i~\text{\cleaninline{In}}~m} & =
605 \cschemeF{m};\\
606 \end{align*}
607
608 The \gls{SDS} access tasks have a compilation scheme similar to other tasks (see \cref{ssec:scheme_tasks}).
609 The \cleaninline{getSds} task just pushes a task tree node with the \gls{SDS} identifier embedded.
610 The \cleaninline{setSds} task evaluates the value, lifts that value to a task tree node and creates \pgls{SDS} set node.
611
612 \begin{align*}
613 \cschemeE{\text{\cleaninline{getSds}}~s}{r} & =
614 \text{\cleaninline{BCMkTask}} (\text{\cleaninline{BCSdsGet}} s);\\
615 \cschemeE{\text{\cleaninline{setSds}}~s~e}{r} & =
616 \cschemeE{e}{r};
617 \text{\cleaninline{BCMkTask BCStable}}_{\stacksize{e}};\\
618 {} & \mathbin{\phantom{=}} \text{\cleaninline{BCMkTask}} (\text{\cleaninline{BCSdsSet}} s);\\
619 \end{align*}
620
While no code is generated in the definition, the byte code compiler stores all \gls{SDS} data in the \cleaninline{bcs_sdses} field of the compilation state.
Regular \glspl{SDS} are stored as \cleaninline{Left String255} values.
The \glspl{SDS} are typed as functions in the host language, so an argument for this function must be created that represents the \gls{SDS} on evaluation.
For this, a \cleaninline{BCInterpret} is created that emits this identifier.
625 When passing it to the function, the initial value of the \gls{SDS} is returned.
626 In the case of a local \gls{SDS}, this initial value is stored as a byte code encoded value in the state and the compiler continues with the rest of the program.
627
628 \Cref{lst:comp_sds} shows the implementation of the \cleaninline{sds} type class.
629 First, the initial \gls{SDS} value is extracted from the expression by bootstrapping the fixed point with a dummy value.
630 This is safe because the expression on the right-hand side of the \cleaninline{In} is never evaluated.
631 Then, using \cleaninline{addSdsIfNotExist}, the identifier for this particular \gls{SDS} is either retrieved from the compiler state or generated freshly.
632 This identifier is then used to provide a reference to the \cleaninline{def} definition to evaluate the main expression.
633 Compiling \cleaninline{getSds} is a matter of executing the \cleaninline{BCInterpret} representing the \gls{SDS}, which yields the identifier that can be embedded in the instruction.
Setting the \gls{SDS} is similar: the identifier is retrieved and the value is evaluated and put in a task tree node so that the resulting task can remember the value it has written.
635
636 % VimTeX: SynIgnore on
637 \begin{lstClean}[caption={Backend implementation for the SDS classes.},label={lst:comp_sds}]
638 :: Sds a = Sds Int
639 instance sds BCInterpret where
640 sds def = {main =
641 let (t In e) = def (abort "sds: expression too strict")
642 in addSdsIfNotExist (Left $ String255 (toByteCode{|*|} t))
643 >>= \sdsi-> let (t In e) = def (pure (Sds sdsi))
644 in e.main
645 }
646 getSds f = f >>= \(Sds i)-> tell` [BCMkTask (BCSdsGet (fromInt i))]
647 setSds f v = f >>= \(Sds i)->v >>| tell`
648 ( map BCMkTask (bcstable (byteWidth v))
649 ++ [BCMkTask (BCSdsSet (fromInt i))])
650 \end{lstClean}
651 % VimTeX: SynIgnore off
652
653 Lowered \glspl{SDS} are stored in the compiler state as \cleaninline{Right MTLens} values.
The compilation of the code and the serialisation of the data throw away all typing information.
The \cleaninline{MTLens} is a type synonym for \pgls{SDS} that represents the typeless serialised value of the underlying \gls{SDS}.
This is done so that the \cleaninline{withDevice} task can write the received \gls{SDS} updates to the corresponding \gls{SDS} while the \gls{SDS} is not in scope.
The \gls{ITASK} notification mechanism then takes care of the rest.
Such \pgls{SDS} is created using the \cleaninline{mapReadWriteError} function which, given a pair of read and write functions with error handling, produces \pgls{SDS} with the lens embedded.
The read function converts the typed value to a typeless serialised value.
The write function will, given a new serialised value and the old typed value, produce a new typed value.
It tries to decode the serialised value; if that succeeds, the result is written to the underlying \gls{SDS}, otherwise an error is thrown.
662 \Cref{lst:mtask_itasksds_lens} shows the implementation for this.
663
664 % VimTeX: SynIgnore on
665 \begin{lstClean}[label={lst:mtask_itasksds_lens},caption={Lens applied to lowered \gls{ITASK} \glspl{SDS} in \gls{MTASK}.}]
666 lens :: (Shared sds a) -> MTLens | type a & RWShared sds
667 lens sds = mapReadWriteError
( \r-> Ok (fromString (toByteCode{|*|} r))
669 , \w r-> ?Just <$> iTasksDecode (toString w)
670 ) ?None sds
671 \end{lstClean}
672 % VimTeX: SynIgnore off
673
674 \Cref{lst:mtask_itasksds_lift} shows the code for the implementation of \cleaninline{lowerSds} that uses the \cleaninline{lens} function shown earlier.
675 It is very similar to the \cleaninline{sds} constructor in \cref{lst:comp_sds}, only now a \cleaninline{Right} value is inserted in the \gls{SDS} administration.
676
677 % VimTeX: SynIgnore on
678 \begin{lstClean}[label={lst:mtask_itasksds_lift},caption={The implementation for lowering \glspl{SDS} in \gls{MTASK}.}]
679 instance lowerSds BCInterpret where
680 lowerSds def = {main =
681 let (t In _) = def (abort "lowerSds: expression too strict")
682 in addSdsIfNotExist (Right $ lens t)
683 >>= \sdsi->let (_ In e) = def (pure (Sds sdsi)) in e.main
684 }\end{lstClean}
685 % VimTeX: SynIgnore off
686
687 \section{C code generation}\label{sec:ccodegen}
All communication between the \gls{ITASK} server and the \gls{MTASK} device is type parametrised.
689 From the structural representation of the type, a \gls{CLEAN} parser and printer is constructed using generic programming.
690 Furthermore, a \ccpp{} parser and printer is generated for use on the \gls{MTASK} device.
691 The technique for generating the \ccpp{} parser and printer is very similar to template metaprogramming and requires a rich generic programming library or compiler support that includes a lot of metadata in the record and constructor nodes.
Using generic programming in the \gls{MTASK} system, both serialisation and deserialisation on the microcontroller and the server are automatically generated.
693
694 \subsection{Server}
On the server, off-the-shelf generic programming techniques are used to create the serialisation and deserialisation functions (see \cref{lst:ser_deser_server}).
696 Serialisation is a simple conversion from a value of the type to a string.
697 Deserialisation is a little bit different in order to support streaming\footnotemark.
698 \footnotetext{%
699 Here the \cleaninline{*!} variant of the generic interface is chosen that has less uniqueness constraints for the compiler-generated adaptors \citep{alimarine_generic_2005,hinze_derivable_2001}.%
700 }
701 Given a list of available characters, a tuple is always returned.
702 The right-hand side of the tuple contains the remaining characters, the unparsed input.
703 The left-hand side contains either an error or a maybe value.
704 If the value is a \cleaninline{?None}, there was no full value to parse.
705 If the value is a \cleaninline{?Just}, the data field contains a value of the requested type.
706
707 \begin{lstClean}[caption={Serialisation and deserialisation functions in \gls{CLEAN}.},label={lst:ser_deser_server}]
708 generic toByteCode a :: a -> String
709 generic fromByteCode a *! :: [Char] -> (Either String (? a), [Char])
710 \end{lstClean}
711
712 \subsection{Client}
713 The \gls{RTS} of the \gls{MTASK} system runs on resource-constrained microcontrollers and is implemented in portable \ccpp{}.
714 In order to achieve more interoperation safety, the communication between the server and the client is automated, i.e.\ the serialisation and deserialisation code in the \gls{RTS} is generated.
715 The technique used for this is very similar to the technique shown in \cref{chp:first-class_datatypes}.
However, since \gls{CLEAN} lacks template metaprogramming, generic programming is used instead, again as a two-stage rocket.
717 In contrast to many other generic programming systems, \gls{CLEAN} allows for access to much of the metadata of the compiler.
718 For example, \cleaninline{Cons}, \cleaninline{Object}, \cleaninline{Field}, and \cleaninline{Record} generic constructors are enriched with their arity, names, types, \etc.
719 Furthermore, constructors can access the metadata of the objects and fields of their parent records.
720 Using this metadata, generic functions are created that generate \ccpp{} type definitions, parsers and printers for any first-order \gls{CLEAN} type.
The exact details of this technique can be found in a paper that is currently in preparation.
722
\Glspl{ADT} are converted to tagged unions, newtypes to typedefs, records to structs, and arrays to dynamically allocated, size-parametrised arrays.
For example, the \gls{CLEAN} types in \cref{lst:ser_clean} are translated to the \ccpp{} types seen in \cref{lst:ser_c}.
725
726 \begin{lstClean}[caption={Simple \glspl{ADT} in \gls{CLEAN}.},label={lst:ser_clean}]
727 :: T a = A a | B NT {#Char}
728 :: NT =: NT Real
729 \end{lstClean}
730
731 \begin{lstArduino}[caption={Generated \ccpp{} type definitions for the simple \glspl{ADT}.},label={lst:ser_c}]
732 typedef double Real;
733 typedef char Char;
734
735 typedef Real NT;
736 enum T_c {A_c, B_c};
737
738 struct Char_HshArray { uint32_t size; Char *elements; };
739 struct T {
740 enum T_c cons;
741 struct { void *A;
742 struct { NT f0; struct Char_HshArray f1; } B;
743 } data;
744 };
745 \end{lstArduino}
746
For each of these generated types, two functions are created: a typed printer and a typed parser (see \cref{lst:ser_pp}).
748 The parser functions are parametrised by a read function, an allocation function and parse functions for all type variables.
749 This allows for the use of these functions in environments where the communication is parametrised and the memory management is self-managed such as in the \gls{MTASK} \gls{RTS}.
750
751 \begin{lstArduino}[caption={Printer and parser for the \glspl{ADT} in \ccpp{}.},label={lst:ser_pp}]
752 struct T parse_T(uint8_t (*get)(), void *(*alloc)(size_t),
753 void *(*parse_0)(uint8_t (*)(), void *(*)(size_t)));
754
755 void print_T(void (*put)(uint8_t), struct T r,
756 void (*print_0)(void (*)(uint8_t), void *));
757 \end{lstArduino}
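
As an illustration, the sketch below shows how such a generated printer might be used on the device to send a value of type \cleaninline{T} instantiated with \cleaninline{Real} over the link interface; the existence of a generated \texttt{print\_Real} function is an assumption.

\begin{lstArduino}[caption={Possible use of the generated printer on the device (sketch).},label={lst:ser_pp_usage}]
// printer for the type variable of T, here instantiated with Real;
// print_Real is assumed to be generated alongside print_T
void print_elem(void (*put)(uint8_t), void *e) {
    print_Real(put, *(Real *)e);
}

void send_T(struct T value) {
    // bytes are written one by one through the link interface
    print_T(link_write_byte, value, print_elem);
}
\end{lstArduino}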
758
759 \section{Conclusion}
It is not straightforward to execute \gls{MTASK} tasks on resource-constrained \gls{IOT} edge devices.
761 To achieve this, the terms in the \gls{DSL} are compiled to domain-specific byte code.
762 This byte code is sent for interpretation to the light-weight \gls{RTS} of the edge device.
763 The \gls{RTS} first evaluates the main expression.
764 The result of this evaluation, a run time representation of the task, is a task tree.
765 This task tree is rewritten according to rewrite rules until a stable value is observed.
Rewriting multiple tasks at the same time is achieved by interleaving the rewrite steps, resulting in seemingly parallel execution of the tasks.
Furthermore, the \gls{RTS} automates communication and coordinates multitasking.
768
769 \input{subfilepostamble}
770 \end{document}