From: Mart Lubbers
Date: Fri, 10 Jun 2016 10:33:06 +0000 (+0200)
Subject: some updates
X-Git-Url: https://git.martlubbers.net/?a=commitdiff_plain;h=762caf4d755005c8d9a50fec24cba74482bf50c1;p=cc1516.git

some updates
---

diff --git a/deliverables/report/ext.tex b/deliverables/report/ext.tex
index 70ae2f6..446f7bf 100644
--- a/deliverables/report/ext.tex
+++ b/deliverables/report/ext.tex
@@ -53,7 +53,7 @@ makes sure the correct function is branched to and the caller can resume the
 normal execution path.
 
-\subsection{Syntactic sugars}
+\subsection{Syntactic sugars}\label{ssec:synsugar}
 Several small pieces of syntactic sugar have been added at several levels of
 processing to make writing programs easier.
 
diff --git a/deliverables/report/pars.tex b/deliverables/report/pars.tex
index 2bb4a35..c7df619 100644
--- a/deliverables/report/pars.tex
+++ b/deliverables/report/pars.tex
@@ -37,7 +37,7 @@ can be lexed as one token such as literal characters.
 As said, the lexer uses a \Yard{} parser. The parser takes a list of
 characters and returns a list of \CI{Token}s. A token is a \CI{TokenValue}
 accompanied by a position, and the \emph{ADT} used is shown in
 Listing~\ref{lst:lextoken}.
-Parser combinators make it very easy to account for arbitrary whitespace and it
+Parser combinators make it very easy to account for arbitrary white space and it
 is much less elegant to do this in a regular way. Choosing to lex with parser
 combinators decreases the speed of the phase. However, since the parsers are
 very simple, this slowdown is very small.
@@ -69,15 +69,15 @@ for matching literal strings. Comments and literals are the only
 exceptions that require a non-trivial parser.
 
 \subsection{Parsing}
-%On March 10 you have to give a very brief presentation. In this presentation you tell the other
-%students and us how your parser is constructed and demonstrate your parser and pretty printer.
-%You should mention things like implementation language used, changes to the grammar, and other
-%interesting points of your program.
-%For this demonstration you have to prepare at least 10 test programs in
-%SPL
-%. In your presentation
-%you have to show only the most interesting or challenging example. You can use the program
-%4
-%above as a starting point. Hand in the test programs, and a document containing the transformed
-%grammar as used by your parser. Indicate what parts of the semantic analysis are handled by your
-%scanner
+The parsing phase is the second phase of the compiler and is again implemented
+as a \Yard{} parser that transforms a list of tokens into an Abstract Syntax
+Tree (\AST{}). The full abstract syntax tree is listed in
+Listing~\ref{lst:ast}, which closely resembles the grammar.
+
+The parser uses the standard \Yard{} combinators. For clarity and ease, its
+structure closely follows the grammar. Due to the modularity of the
+combinators it is very easy to add functionality to the parser. The parser
+also handles some syntactic sugar (Section~\ref{ssec:synsugar}). For example,
+it expands list and string literals to the corresponding list or string
+representation. Moreover, it transforms let expressions into real functions
+representing constant values.
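As an aside, the combinator style used by the \Yard{} lexer and parser above can be illustrated with a minimal sketch. This is not the \Yard{} implementation (which is written in Clean); it is a hedged illustration in Python with hypothetical names, showing the core idea: a parser is a function from a token list to a result plus the remaining tokens, and sequencing and alternation compose such functions.

```python
# Minimal parser-combinator sketch (illustrative only, not Yard itself).
# A parser maps a list of tokens to (result, remaining_tokens), or None
# on failure.

def token(expected):
    """Parser that matches exactly one expected token."""
    def parse(toks):
        if toks and toks[0] == expected:
            return expected, toks[1:]
        return None
    return parse

def seq(p, q):
    """Run p, then q on the leftover input; succeed only if both do."""
    def parse(toks):
        r1 = p(toks)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return (v1, v2), rest2
    return parse

def alt(p, q):
    """Try p; if it fails, try q on the same input."""
    def parse(toks):
        return p(toks) or q(toks)
    return parse
```

Because combinators are ordinary functions, extending such a parser (the modularity mentioned above) amounts to composing new functions, e.g. `seq(token("var"), token("x"))` to match a two-token sequence.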
diff --git a/deliverables/report/report.tex b/deliverables/report/report.tex
index c770ac7..6de42e5 100644
--- a/deliverables/report/report.tex
+++ b/deliverables/report/report.tex
@@ -14,14 +14,15 @@
 \lstset{%
 	basicstyle=\ttfamily\footnotesize,
 	breaklines,
-	captionpos=b
+	captionpos=b,
+	frame=L
 }
 
 \newcommand{\SPLC}{\texttt{SPLC}}
 \newcommand{\SPL}{\texttt{SPL}}
 \newcommand{\SSM}{\texttt{SSM}}
 \newcommand{\Yard}{\textsc{Yard}}
-\def\AST/{\texttt{AST}}
+\newcommand{\AST}{\emph{AST}}
 
 \let\tt\texttt
 
@@ -45,4 +46,12 @@
 \subsection{Grammar}
 \lstinputlisting[label={lst:grammar}]{../../grammar/grammar.txt}
 
+\newpage
+\subsection{Abstract Syntax Tree}
+\lstinputlisting[
+	label={lst:ast},
+	language=Clean,
+	firstline=6,
+	lastline=42]{../../AST.dcl}
+
 \end{document}
diff --git a/deliverables/report/sem.tex b/deliverables/report/sem.tex
index 21a7cc4..f3325b2 100644
--- a/deliverables/report/sem.tex
+++ b/deliverables/report/sem.tex
@@ -19,11 +19,11 @@
 The semantic analysis phase assesses whether a (grammatically correct) program
 is also semantically correct and decorates the program with extra information it
-finds during this checking. The input of the semantic analysis is an \AST/ of
+finds during this checking. The input of the semantic analysis is an \AST{} of
 the parsed program; its output is either a semantically correct and completely
 typed
-\AST/ plus the environment in which it was typed (for debug purposes), or it is
-an error describing the problem with the \AST/.
+\AST{} plus the environment in which it was typed (for debug purposes), or it is
+an error describing the problem with the \AST{}.
 
 During this analysis \SPLC{} checks four properties of the program:
 \begin{enumerate}
@@ -35,11 +35,11 @@ During this analysis \SPLC{} checks four properties of the program:
 The first three steps are simple sanity checks, coded in slightly over two
 dozen lines of Clean code. The last step is a Hindley-Milner type inference
 algorithm, which is an order of magnitude more complex.
This last step also decorates the
-\AST/ with inferred type information.
+\AST{} with inferred type information.
 
 \subsection{Sanity checks}
 The sanity checks are defined as simple recursive functions over the function
-declarations in the AST. We will very shortly describe them here.
+declarations in the \AST{}. We briefly describe them here.
 
 \begin{description}
 	\item [No duplicate functions] This function checks that the program does
@@ -97,11 +97,11 @@ type that in itself can be a function.
 \end{lstlisting}
 
 \subsubsection{Environment}
-As all stages of \SPLC{}, the type inference is programmed in Clean. It takes an
-AST and produces either a fully typed AST, or an error if the provided AST
-contains type errors. The Program is typed in an environment $\Gamma : id
-\rightarrow \Sigma$ which maps identifiers of functions or variables to
-type schemes.
+As in all stages of \SPLC{}, the type inference is programmed in Clean. It takes
+an \AST{} and produces either a fully typed \AST{}, or an error if the provided
+\AST{} contains type errors. The program is typed in an environment $\Gamma :
+id \rightarrow \Sigma$, which maps identifiers of functions or variables to type
+schemes.
 
 In \SPLC{} type inference and unification are done in one pass. They are
 combined in the \tt{Typing} monad, which is
@@ -201,4 +201,4 @@ determined and are shown below.
 \begin{equation}
 	\infer[\emptyset]
 	{\Gamma \vdash [] \Rightarrow (\Gamma, \star_0, \alpha, [])}{}
-\end{equation}
\ No newline at end of file
+\end{equation}
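The unification that drives the Hindley-Milner inference described in the sem.tex hunks can be illustrated with a small first-order unification sketch. This is not the \SPLC{} implementation (which lives in Clean's \tt{Typing} monad); it is a hedged Python illustration with a hypothetical type representation, and it omits the occurs check for brevity.

```python
# Sketch of first-order unification as used in Hindley-Milner inference.
# Types are tuples: ("var", name) for a type variable, ("con", name) for a
# base type, ("fun", arg, res) for a function type. A substitution is a
# dict mapping variable names to types. Occurs check omitted for brevity.

def walk(t, s):
    """Follow substitution chains until t is not a bound variable."""
    while t[0] == "var" and t[1] in s:
        t = s[t[1]]
    return t

def unify(a, b, s=None):
    """Return a substitution unifying a and b, or None on a type error."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if a[0] == "var":          # bind the variable to the other type
        s[a[1]] = b
        return s
    if b[0] == "var":
        s[b[1]] = a
        return s
    if a[0] == b[0] == "fun":  # unify argument and result types in turn
        s = unify(a[1], b[1], s)
        if s is None:
            return None
        return unify(a[2], b[2], s)
    return None                # constructor mismatch: type error
```

For example, unifying the function types `a -> a` and `Int -> b` binds both `a` and `b` to `Int`, which mirrors how constraints propagate through the environment $\Gamma$ during inference.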