normal execution path.
-\subsection{Syntactic sugars}
+\subsection{Syntactic sugars}\label{ssec:synsugar}
Several small pieces of syntactic sugar have been added at several levels of
processing to make writing programs easier.
As mentioned, the lexer is a \Yard{} parser. The parser takes a list of characters
and returns a list of \CI{Token}s. A token is a \CI{TokenValue} accompanied
by a position; the \emph{ADT} used is shown in Listing~\ref{lst:lextoken}.
-Parser combinators make it very easy to account for arbitrary whitespace and it
+Parser combinators make it very easy to account for arbitrary white space and it
is much less elegant to do this in a regular way. Lexing with parser
combinators does make this phase slower, but since the individual parsers are
very simple the slowdown is negligible.
require a non trivial parser.
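To illustrate the idea of lexing with combinators and white-space handling, a
minimal Python sketch follows. The real lexer is a Clean \Yard{} parser; all
names below (\tt{satisfy}, \tt{many}, \tt{token}) are hypothetical
illustrations, not the actual \Yard{} API.

```python
# Hypothetical sketch: a parser is a function from (input, position) to
# (result, new position), or None on failure -- the essence of combinators.

def satisfy(pred):
    """Consume one character for which `pred` holds."""
    def p(s, i):
        return (s[i], i + 1) if i < len(s) and pred(s[i]) else None
    return p

def many(parser):
    """Apply `parser` zero or more times, collecting the results."""
    def p(s, i):
        out = []
        r = parser(s, i)
        while r is not None:
            out.append(r[0])
            i = r[1]
            r = parser(s, i)
        return (out, i)
    return p

def token(kind, pred):
    """Lex one token, recording its start position, after skipping an
    arbitrary run of white space -- trivial to express with combinators."""
    spaces = many(satisfy(str.isspace))
    chunk = many(satisfy(pred))
    def p(s, i):
        _, i = spaces(s, i)
        start = i
        chars, i = chunk(s, i)
        return ((kind, "".join(chars), start), i) if chars else None
    return p

# Example: an identifier followed by a number, any white space in between.
ident = token("IDENT", str.isalpha)
number = token("NUMBER", str.isdigit)
src = "  foo   42"
tok1, pos = ident(src, 0)
tok2, pos = number(src, pos)
# tok1 == ("IDENT", "foo", 2), tok2 == ("NUMBER", "42", 8)
```

Each token carries its start position, mirroring the \CI{Token} ADT that pairs
a \CI{TokenValue} with a position.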
\subsection{Parsing}
-%On March 10 you have to give a very brief presentation. In this presentation you tell the other
-%students and us how your parser is constructed and demonstrate your parser and pretty printer.
-%You should mention things like implementation language used, changes to the grammar, and other
-%interesting points of your program.
-%For this demonstration you have to prepare at least 10 test programs in
-%SPL
-%. In your presentation
-%you have to show only the most interesting or challenging example. You can use the program
-%4
-%above as a starting point. Hand in the test programs, and a document containing the transformed
-%grammar as used by your parser. Indicate what parts of the semantic analysis are handled by your
-%scanner
+The parsing phase is the second phase of the compiler and is again a \Yard{}
+parser, one that transforms a list of tokens into an Abstract Syntax Tree
+(\AST{}). The full abstract syntax tree is listed in Listing~\ref{lst:ast}
+and closely resembles the grammar.
+
+The parser uses the standard \Yard{} combinators. For clarity and ease of
+maintenance the parser closely follows the grammar. Due to the modularity of
+combinators it is very easy to add functionality to the parser. The parser
+also handles some syntactic sugar (Section~\ref{ssec:synsugar}). For example,
+the parser expands literal lists and literal strings to the corresponding
+list or string representation. Moreover, the parser transforms let
+expressions into real functions representing constant values.
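As an illustration of this kind of desugaring, the following hypothetical
Python sketch expands a literal list into cons cells and a literal string into
a list of character literals. The compiler performs the equivalent
transformation on the Clean \AST{}; the constructor names here are made up.

```python
# Hypothetical AST sketch: desugar literal lists and strings into cons
# cells, as the parser does for SPL list and string literals.

def desugar_list(elements):
    """[e1, e2, ...]  ==>  Cons e1 (Cons e2 (... Nil))"""
    tree = ("Nil",)
    for e in reversed(elements):
        tree = ("Cons", e, tree)
    return tree

def desugar_string(s):
    """A string literal is sugar for a list of character literals."""
    return desugar_list([("CharLit", c) for c in s])

# desugar_string("hi") ==>
#   ("Cons", ("CharLit", "h"), ("Cons", ("CharLit", "i"), ("Nil",)))
```

After this expansion the rest of the compiler only ever sees the cons-cell
representation, so no later phase needs to know about the sugar.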
\lstset{%
basicstyle=\ttfamily\footnotesize,
breaklines,
- captionpos=b
+ captionpos=b,
+ frame=L
}
\newcommand{\SPLC}{\texttt{SPLC}}
\newcommand{\SPL}{\texttt{SPL}}
\newcommand{\SSM}{\texttt{SSM}}
\newcommand{\Yard}{\textsc{Yard}}
-\def\AST/{\texttt{AST}}
+\newcommand{\AST}{\emph{AST}}
\let\tt\texttt
\subsection{Grammar}
\lstinputlisting[label={lst:grammar}]{../../grammar/grammar.txt}
+\newpage
+\subsection{Abstract Syntax Tree}
+\lstinputlisting[
+ label={lst:ast},
+ language=Clean,
+ firstline=6,
+ lastline=42]{../../AST.dcl}
+
\end{document}
The semantic analysis phase assesses whether a grammatically correct program is
also semantically correct and decorates the program with extra information it
-finds during this checking. The input of the semantic analysis is an \AST/ of
+finds during this checking. The input of the semantic analysis is an \AST{} of
the
parsed program; its output is either a semantically correct and completely typed
-\AST/ plus the environment in which it was typed (for debug purposes), or it is
-an error describing the problem with the \AST/.
+\AST{} plus the environment in which it was typed (for debug purposes), or it is
+an error describing the problem with the \AST{}.
During this analysis \SPLC{} checks four properties of the program:
\begin{enumerate}
The first three steps are simple sanity checks, coded in slightly over two
dozen lines of Clean code. The last step is a Hindley-Milner type inference
algorithm, which is an order of magnitude more complex. This last step also decorates the
-\AST/ with inferred type information.
+\AST{} with inferred type information.
\subsection{Sanity checks}
The sanity checks are defined as simple recursive functions over the function
-declarations in the AST. We will very shortly describe them here.
+declarations in the \AST{}. We describe them briefly here.
\begin{description}
\item [No duplicate functions] This function checks that the program does
\end{lstlisting}
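The duplicate-function check, for instance, amounts to counting declared names.
The following is a hypothetical Python sketch; the compiler implements it as a
simple recursive Clean function over the function declarations.

```python
from collections import Counter

# Hypothetical sketch of the "no duplicate functions" sanity check:
# collect every declared function name and report those seen twice.

def duplicate_functions(decls):
    """Return the names declared more than once, to be reported as errors."""
    counts = Counter(name for name, _body in decls)
    return sorted(n for n, c in counts.items() if c > 1)

decls = [("main", "..."), ("facR", "..."), ("main", "...")]
assert duplicate_functions(decls) == ["main"]
```

The other sanity checks follow the same pattern: a single traversal over the
declarations with a trivial accumulator.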
\subsubsection{Environment}
-As all stages of \SPLC{}, the type inference is programmed in Clean. It takes an
-AST and produces either a fully typed AST, or an error if the provided AST
-contains type errors. The Program is typed in an environment $\Gamma : id
-\rightarrow \Sigma$ which maps identifiers of functions or variables to
-type schemes.
+As in all stages of \SPLC{}, the type inference is programmed in Clean. It takes
+an \AST{} and produces either a fully typed \AST{}, or an error if the provided
+\AST{} contains type errors. The program is typed in an environment $\Gamma :
+id \rightarrow \Sigma$ which maps identifiers of functions or variables to type
+schemes.
In \SPLC{}, type inference and unification are done in one pass. They are
combined in the \tt{Typing} monad, which is
\begin{equation}
\infer[\emptyset]
{\Gamma \vdash [] \Rightarrow (\Gamma, \star_0, \alpha, [])}{}
-\end{equation}
\ No newline at end of file
+\end{equation}
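The unification underlying these judgements can be sketched as follows. This
is a hypothetical Python illustration of first-order unification as used in
Hindley-Milner inference; the real implementation lives inside the Clean
\tt{Typing} monad and, unlike this sketch, also performs an occurs check.

```python
# Hypothetical sketch of first-order unification: compute a substitution
# that makes two types equal, or fail with a type error.

def resolve(t, subst):
    """Follow variable bindings in `subst` until a non-bound head is found."""
    while t[0] == "Var" and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(t1, t2, subst=None):
    subst = dict(subst or {})
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == "Var":
        subst[t1[1]] = t2          # NB: no occurs check in this sketch
        return subst
    if t2[0] == "Var":
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        raise TypeError(f"cannot unify {t1} with {t2}")
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
    return subst

# Unifying  a -> Int  with  Bool -> b  yields  {a: Bool, b: Int}.
s = unify(("Fun", ("Var", "a"), ("Int",)),
          ("Fun", ("Bool",), ("Var", "b")))
assert s == {"a": ("Bool",), "b": ("Int",)}
```

Because inference and unification are fused in one pass, each inference step
threads the current substitution through the \tt{Typing} monad rather than
unifying in a separate phase.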