\section{Application overview}
The backend consists of several processing steps that the input has to go
through before it is converted to a crawler specification. These steps are
visualized in Figure~\ref{appinternals}. Every node is an important milestone
in the processing of the user data and the arrows indicate information
transfer between the steps. The figure is a detailed view of the
\textit{Backend} node in Figure~\ref{appoverview}.

\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{backend.pdf}
\strut\\
\caption{Main module internals}
\label{appinternals}
\end{figure}

\section{HTML data}
The raw data from the frontend, including the user markings, enters the
backend as an HTTP \textit{POST} request. This \textit{POST} request consists
of several data fields. These fields are either the contents of the static
description boxes in the frontend or raw \textit{HTML} data from the table
showing the processed RSS feed entries, which contains the markings made by
the user. The table is sent in whole precisely at the moment the user presses
the submit button. Before sending, markers are placed within the
\textit{HTML} data of the table. These markers make parsing the tables easier
and remove the need for an advanced \textit{HTML} parser to extract the
markings. The \textit{POST} request is not sent asynchronously. Synchronous
sending means that the user has to wait until the server has processed the
request. In this way the user is notified immediately when the processing has
finished successfully and can review and test the resulting crawler. All data
preparation for the \textit{POST} request is done in \textit{Javascript} and
thus happens on the client side. All processing afterwards is done in the
backend and thus happens on the server side. All descriptive fields are put
directly into the final aggregating dictionary; the raw \textit{HTML} data is
sent to the next step.

\section{Table rows}
The first conversion step is to extract the individual table rows from the
\textit{HTML} data. In the previous step markers were placed to make
identifying the table rows easy. Because of this, extracting the table rows
only requires rudimentary matching techniques that need very little
computational power. User markings are highlights of certain text elements.
The highlighting is done using \texttt{SPAN} elements and therefore all
\texttt{SPAN} elements have to be found and extracted. For every row all
\texttt{SPAN} elements are extracted, again with simple matching techniques,
first to obtain the color and then to remove the element so that the original
plain text of the \textit{RSS} feed entry is recovered. When this step is
done, a data structure containing all the text of the entries together with
the markings goes to the next step. All original data, namely the
\textit{HTML} data per row, is transferred to the final aggregating
dictionary.
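The extraction described above can be sketched with simple regular
expressions. The exact marker format and the \texttt{SPAN} markup used by the
frontend are assumptions made for this sketch; only the general approach,
rudimentary matching instead of a full \textit{HTML} parser, follows the text.

```python
import re

# Hypothetical row markers; the real markers are placed by the frontend
# Javascript just before the POST request is sent.
ROW_RE = re.compile(r'<!--ROW-->(.*?)<!--/ROW-->', re.S)
# Hypothetical highlight markup produced by the frontend.
SPAN_RE = re.compile(
    r'<span style="background-color: (?P<color>#[0-9a-f]{6})">'
    r'(?P<text>.*?)</span>', re.S)

def extract_rows(html):
    """Split the posted table HTML into rows using the pre-placed markers."""
    return ROW_RE.findall(html)

def extract_markings(row):
    """Return the plain text of a row plus a list of (color, text) markings."""
    markings = SPAN_RE.findall(row)
    # Removing the SPAN element recovers the original plain text of the entry.
    plain = SPAN_RE.sub(lambda m: m.group('text'), row)
    return plain, markings
```

For a row such as \texttt{19:00, 2014-11-12 - Foobar} with a highlighted date,
\texttt{extract\_markings} yields the original entry text together with the
color of every marking.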

\section{Node lists}
Every entry obtained from the previous step is processed into a so called
node-list. A node-list can be seen as a path graph in which every character
and marking corresponds to a transition. A path graph $G$ is defined as
$G=(V,n_1,E,n_i)$ where $V=\{n_1, n_2, \ldots, n_{i-1}, n_i\}$ and
$E=\{(n_1, n_2), (n_2, n_3), \ldots, (n_{i-1}, n_{i})\}$. A path graph is
thus a single linear path of nodes in which every node is connected to the
next, except for the last one. The last node is the only final node. The
transition between two nodes is either a character or a marking. As an
example we take the entry \texttt{19:00, 2014-11-12 - Foobar} and create the
corresponding node-list, shown in Figure~\ref{nodelistexample}. Characters
are denoted with single quotes, spaces with an underscore and markers with
angle brackets. Node-lists are the basic elements from which the DAWG will be
generated. These node-lists are also available in the final aggregating
dictionary to ensure consistency of the data and the possibility of
regenerating it.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{nodelistexample.pdf}
\caption{Node list example}
\label{nodelistexample}
\end{figure}
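The conversion of an entry into a node-list can be sketched as follows. The
representation of a marking as a \texttt{(start, end, name)} character range
is an assumption made for this sketch; the thesis only specifies that marked
ranges become single transitions.

```python
def to_node_list(text, markings):
    """Convert an entry's plain text plus its markings into a node-list.

    `markings` is a list of (start, end, name) character ranges (an assumed
    format): each marked range becomes a single <name> transition, every
    other character becomes its own transition.
    """
    tokens, i = [], 0
    for start, end, name in sorted(markings):
        tokens.extend(text[i:start])   # one transition per plain character
        tokens.append('<%s>' % name)   # one transition per marking
        i = end
    tokens.extend(text[i:])
    return tokens
```

Applied to the example entry with the time, date and title highlighted, this
produces the transition sequence
\texttt{<time> ',' \_ <date> \_ '-' \_ <title>}, matching the notation of
Figure~\ref{nodelistexample}.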

\section{DAWGs}
\subsection{Terminology}
\textbf{Parent nodes} are nodes that have an edge pointing to the child node.\\
\textbf{Confluence nodes} are nodes that have multiple parents.

\subsection{Datastructure}
We represent the user generated patterns as DAWGs by converting the
node-lists to DAWGs. Normally the edge labels of a DAWG are single letters
from an alphabet. In our implementation the alphabet of the DAWG contains all
letters, whitespace and punctuation, but also the user-specified markers.
These markers can be multiple characters long, but for the DAWG's purposes
they form a single transition in the graph.

DAWGs are graphs, but due to their constraints we can use a DAWG to check
whether it matches a word by checking whether a path exists whose
concatenated edge labels form that word. The first algorithm to generate
DAWGs from words was proposed by Hopcroft et al.~\cite{Hopcroft1971}. It
takes an incremental approach to generating the graph, meaning that the graph
is expanded entry by entry. Hopcroft's algorithm has the constraint that the
words must be added in lexicographical order. Later Daciuk et
al.~\cite{Daciuk1998,Daciuk2000} improved on the original algorithm, and
their algorithm is the one we use to generate minimal DAWGs from the node
lists.
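The matching-by-path-existence idea can be sketched with a DAWG stored as a
dictionary of dictionaries. The state names and the encoded word
(\texttt{abcd}) are illustrative assumptions, not the thesis data structure.

```python
# A DAWG sketched as a dict of dicts: state -> {edge label -> next state}.
# This particular graph encodes only the word "abcd".
DAWG = {
    'q0': {'a': 'q1'},
    'q1': {'b': 'q2'},
    'q2': {'c': 'q3'},
    'q3': {'d': 'q4'},
    'q4': {},
}
FINAL = {'q4'}

def accepts(word, state='q0'):
    """A word matches iff following its edge labels ends in a final state."""
    for label in word:
        if label not in DAWG[state]:
            return False
        state = DAWG[state][label]
    return state in FINAL
```

Checking a word is thus a single walk along the edge labels, linear in the
length of the word.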

\subsection{Algorithm}
The algorithm for building DAWGs is an iterative process that roughly
consists of four steps. We start with the null graph, which can be described
by $G_0=(\{q_0\},q_0,\emptyset,\emptyset)$: it contains a single node, no
edges and $\mathcal{L}(G_0)=\emptyset$. The first word added to the graph is
added in a naive way: the graph is simply replaced by the node-list of the
word, which is a path graph. We create a new node for every character
transition and mark the last node as final. From then on all words are added
using the four step approach described below. Pseudocode for this algorithm
can be found in Listing~\ref{pseudodawg} as the function
\texttt{generate\_dawg(words)}. A \textit{Python} implementation can be
found in Listing~\ref{dawg.py}.
\begin{enumerate}
	\item
		Say we add the word $w$ to the graph. Step one is finding the
		common prefix between $w$ and the words already in the graph.
		The common prefix is defined as the longest subword $w'$ for
		which a $\delta^*(q_0, w')$ exists. When the common prefix is
		found we move to the last state of the common prefix and are
		left with the suffix $w''$ where $w=w'w''$.
	\item
		We add the suffix to the graph, starting from the last node of
		the common prefix.
	\item
		When there are confluence nodes present within the common
		prefix, they are cloned starting from the last confluence node
		to avoid adding unwanted words.
	\item
		From the last node of the suffix up to the first node we
		replace the nodes by checking whether there are equivalent
		nodes present in the graph that we can merge with.
\end{enumerate}
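Step 1 can be sketched as a short helper that walks the existing graph as far
as the new word allows. The dict-of-dicts graph representation is an
assumption of this sketch.

```python
def common_prefix(dawg, start, word):
    """Step 1: return (last state of the common prefix, remaining suffix).

    The common prefix is the longest w' for which delta*(q0, w') is
    defined, i.e. the longest walk through the existing graph.
    """
    state = start
    for i, label in enumerate(word):
        if label not in dawg[state]:
            return state, word[i:]
        state = dawg[state][label]
    return state, ''
```

On a graph containing only \texttt{abcd}, the word \texttt{aecd} has common
prefix \texttt{a}, leaving the suffix \texttt{ecd} to be added in Step 2.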

\begin{algorithm}[H]
	\SetKwProg{Def}{def}{:}{end}
	\Def{generate\_dawg(words)}{
		register := $\emptyset$\;
		\While{there is another word}{%
			word := next word\;
			commonprefix := CommonPrefix(word)\;
			laststate := %
				delta\textsuperscript{*}(q0, commonprefix)\;
			currentsuffix := %
				word[length(commonprefix)\ldots length(word)]\;
			\If{has\_children(laststate)}{%
				replace\_or\_register(laststate)\;
			}
			add\_suffix(laststate, currentsuffix)\;
		}
		replace\_or\_register(q0)\;
	}
	\Def{replace\_or\_register(state)}{%
		child := last\_child(state)\;
		\If{has\_children(child)}{%
			replace\_or\_register(child)\;
		}
		\eIf{there is an equivalent state q}{%
			last\_child(state) := q\;
			delete(child)\;
		}{%
			register.add(child)\;
		}
	}
	\caption{Generating DAWGs pseudocode}
	\label{pseudodawg}
\end{algorithm}

\subsection{Example}
We visualize the algorithm with an example, shown in the subgraphs of
Figure~\ref{dawg1}, that builds a DAWG from the following entries:
\texttt{abcd}, \texttt{aecd} and \texttt{aecf}.

\begin{itemize}
	\item No words added yet\\
		Initially we begin with the null graph, shown in the figure as
		SG0. This DAWG does not yet accept any words.
	\item Adding \texttt{abcd}\\
		Adding the first entry \texttt{abcd} is trivial because we can
		just create a single path. The common prefix found in Step 1
		is empty, so the suffix is the entire word. Merging the suffix
		back into the graph is not possible either, since there are no
		nodes except for the first node. The result of adding the
		first word is visible in subgraph SG1.
	\item Adding \texttt{aecd}\\
		For the second entry we have to do some extra work. The common
		prefix found in Step 1 is \texttt{a}, which is already present
		in the graph. This leaves us in Step 2 with the suffix
		\texttt{ecd}, which we add. In Step 3 we see that there are no
		confluence nodes in our common prefix and therefore we do not
		have to clone nodes. In Step 4 we traverse from the last node
		back to the beginning of the suffix, find the common suffix
		\texttt{cd} and merge these nodes. In this way we can reuse
		the transition from $q_3$ to $q_4$. This leaves us with
		subgraph SG2.
	\item Adding \texttt{aecf}\\
		We now add the last entry, the word \texttt{aecf}. If we do
		this without checking for confluence nodes we introduce an
		unwanted extra word. In Step 1 we find the common prefix
		\texttt{aec}, which leaves us with the suffix \texttt{f} to
		add. This creates subgraph SG3, and we notice that an extra
		word is present in the graph, namely \texttt{abcf}. This extra
		word appeared because the confluence node $q_3$ in the common
		prefix introduced an unwanted path. Therefore, when we check
		for confluence nodes in Step 3, we see that node $q_3$ is a
		confluence node and needs to be cloned off the original path;
		by cloning we take the added suffix with it and create an
		entirely new route. Tracing the route back again we do not
		encounter any other confluence nodes, so we can start Step 4.
		For this word there is no common suffix and therefore no
		merging is applied. This results in subgraph SG4, which is the
		final DAWG containing only the words added.
\end{itemize}
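The final subgraph SG4 can be sketched as a small transition table. The state
names below are assumptions of this sketch (the actual numbering is in the
figure); the point is that the finished DAWG accepts exactly the three added
words and, thanks to the cloning in Step 3, rejects the unwanted word
\texttt{abcf}.

```python
# Sketch of the final DAWG SG4 for the words abcd, aecd and aecf.
DAWG = {
    'q0': {'a': 'q1'},
    'q1': {'b': 'q2', 'e': 'q3'},
    'q2': {'c': 'q4'},
    'q3': {'c': 'q5'},           # q5 is the clone made in Step 3
    'q4': {'d': 'q6'},
    'q5': {'d': 'q6', 'f': 'q6'},
    'q6': {},
}
FINAL = {'q6'}

def accepts(word, state='q0'):
    """Walk the edge labels; accept iff the walk ends in a final state."""
    for label in word:
        if label not in DAWG[state]:
            return False
        state = DAWG[state][label]
    return state in FINAL
```

Without the cloning step, \texttt{abcf} would also be accepted, which is
exactly the situation shown in subgraph SG3.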

\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{inccons.pdf}
\strut\\\strut\\
\caption{Incrementally constructing a DAWG}
\label{dawg1}
\end{figure}

\subsection{Application to pattern extraction}
The text data in combination with the user markings cannot be converted
automatically to a DAWG using the algorithm we described. This is because a
user marking is not necessarily a single character or word; in practice it is
an arbitrary sequence of characters. When we add a user marking, we insert a
kind of subgraph in place of the node with the marking. By doing this we can
introduce non-determinism into the graph. Non-determinism means that a single
node has multiple outgoing edges with the same label; in practice this means
that a word can be present in the graph via multiple paths. An example of
non-determinism in one of our DAWGs is shown in Figure~\ref{nddawg}. This
figure represents a generated DAWG with the following
entries: \texttt{ab<1>c} and \texttt{a<1>bc}.

In this graph the word \texttt{abdc} will be accepted and the user pattern
\texttt{<1>} will be filled with the subword \texttt{d}. However, if we try
the word \texttt{abdddbc}, both paths can be chosen. In the first case the
user pattern \texttt{<1>} will be filled with \texttt{dddb} and in the second
case with \texttt{bddd}. In such a case the program has to make the,
hopefully, smartest choice.

\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{nddawg.pdf}
\strut\\
\caption{Example of non-determinism}
\label{nddawg}
\end{figure}
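The non-deterministic matching above can be sketched with a small
backtracking matcher. The node layout below is an assumed encoding of the
example DAWG for the entries \texttt{ab<1>c} and \texttt{a<1>bc}; a category
edge such as \texttt{<1>} is allowed to consume any non-empty substring.

```python
# Edges are (label, next_node); a label is either a single character or a
# category marker such as '<1>'.  Assumed layout for ab<1>c and a<1>bc.
GRAPH = {
    0: [('a', 1)],
    1: [('b', 2), ('<1>', 4)],
    2: [('<1>', 3)],
    3: [('c', 6)],
    4: [('b', 5)],
    5: [('c', 6)],
    6: [],
}
FINAL = {6}

def match(word, node=0, bound=None):
    """Yield every way the word fits the graph, as {category: subword} dicts."""
    bound = bound or {}
    if not word and node in FINAL:
        yield dict(bound)
    for label, nxt in GRAPH[node]:
        if label.startswith('<'):           # category edge: try every split
            for i in range(1, len(word) + 1):
                yield from match(word[i:], nxt, {**bound, label: word[:i]})
        elif word[:1] == label:             # ordinary character edge
            yield from match(word[1:], nxt, bound)
```

For \texttt{abdc} there is a single match, while \texttt{abdddbc} yields the
two fillings described in the text, between which a heuristic must choose.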

\subsection{Minimality \& non-determinism}
The Myhill-Nerode theorem~\cite{Hopcroft1979} states that among all graphs
accepting the same language there is a unique graph with the least number of
states. Mihov~\cite{Mihov1998} has proven that the DAWG generation algorithm
is minimal in its original form. Our program converts the node-lists to DAWGs
that may contain non-deterministic transitions from nodes, so one can
question whether the Myhill-Nerode theorem and Mihov's proof still hold. Due
to the nature of the non-determinism this is not a problem and both hold: the
graph itself is only non-deterministic when the categories are expanded, and
thus only during matching.

When choosing the smartest path during matching, the program has to decide
deterministically between possibly multiple paths with possibly multiple
results. There are several heuristics to choose from.
\begin{itemize}
	\item Maximum fields heuristic\\

		This heuristic prefers the result that has the highest number
		of categories filled with actual text. Using this method the
		highest number of data fields is filled at all times. The
		downside of this method is that some data may not end up in
		the right field, because a suboptimal split has put data that
		belongs in one field into two separate fields.
	\item Maximum path heuristic\\

		The maximum path heuristic tries to find the match with the
		highest number of fixed path transitions. Fixed path
		transitions are transitions that do not occur within a
		category. The philosophy behind it is that because these paths
		are hard coded in the graph, they must be important. The
		downside of this method shows when overlap occurs between hard
		coded paths and information within the categories. For
		example, a band called \texttt{Location} could interfere
		greatly with a hard coded path that marks a location using the
		same words.
\end{itemize}

If we knew more about the categories, the maximum path heuristic would
automatically become the better choice. When, as in our implementation, there
is very little information available, both heuristics perform about the same.
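The two heuristics can be sketched as scoring functions over candidate
matches. The candidate representation, a dictionary of filled categories plus
a count of fixed transitions used, is an assumption of this sketch.

```python
def max_fields(candidates):
    """Maximum fields heuristic: prefer the match filling the most categories."""
    return max(candidates, key=lambda c: len(c['fields']))

def max_path(candidates):
    """Maximum path heuristic: prefer the match using the most fixed transitions."""
    return max(candidates, key=lambda c: c['fixed'])

# Two hypothetical candidate matches for one entry: one splits the data over
# more categories, the other follows more hard coded transitions.
candidates = [
    {'fields': {'time': '19:00', 'date': '2014-11-12'}, 'fixed': 4},
    {'fields': {'when': '19:00, 2014-11-12'}, 'fixed': 6},
]
```

Here \texttt{max\_fields} picks the first candidate and \texttt{max\_path}
the second, illustrating how the heuristics can disagree on the same input.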