\section{Application overview}
The backend consists of several processing steps that the input has to go
through before it is converted to a crawler specification. These steps are
visualized in Figure~\ref{appinternals}. All the nodes are important milestones
in the process of processing the user data. Arrows indicate information
transfer between these steps. The figure is a detailed explanation of the
\textit{Backend} node in Figure~\ref{appoverview}.

\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{backend.eps}
\strut\\
\caption{Main module internals}
\label{appinternals}
\end{figure}

\section{HTML data}
The raw data from the frontend with the user markings enters the backend as an
HTTP \textit{POST} request. This \textit{POST} request consists of several
data fields. These data fields are either fields from the static description
boxes in the frontend or raw \textit{HTML} data from the table showing the
processed RSS feed entries, which contains the markings made by the user. The
table is sent in whole precisely at the time the user presses the submit
button. Within the \textit{HTML} data of the table markers are placed before
sending. These markers make parsing the tables easier and remove the need for
an advanced \textit{HTML} parser to extract the markings. The \textit{POST}
request is not sent asynchronously. Synchronous sending means the user has to
wait until the server has processed the request. In this way the user is
notified immediately when the processing has finished successfully and is able
to review and test the resulting crawler. All the data preparation for the
\textit{POST} request is done in \textit{Javascript} and thus happens on the
client side. All the processing afterwards is done in the backend and thus
happens on the server side. All descriptive fields are also put directly into
the final aggregating dictionary. All the raw \textit{HTML} data is sent to
the next step.

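As a minimal sketch, the server-side entry point could split the request as
follows. The field names used here (\texttt{name}, \texttt{url},
\texttt{description}, \texttt{table}) are hypothetical, since the exact field
layout of the \textit{POST} request is not specified in this section.

```python
# Hedged sketch: descriptive fields go straight into the aggregating
# dictionary, the raw marked-up table HTML moves on to row extraction.
# All field names below are assumptions, not the actual field layout.
def handle_submit(post):
    descriptive = ('name', 'url', 'description')     # assumed field names
    dictionary = {k: post[k] for k in descriptive if k in post}
    raw_html = post.get('table', '')                 # marked-up RSS table
    return dictionary, raw_html
```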
\section{Table rows}
The first conversion step is to extract individual table rows from the
\textit{HTML} data. In the previous step markers were placed to make the
identification of table rows easy. Because of this, extracting the table rows
only requires rudimentary matching techniques that require very little
computational power. User markings are highlights of certain text elements.
The highlighting is done using \texttt{SPAN} elements and therefore all the
\texttt{SPAN} elements have to be found and extracted. To achieve this, all
\texttt{SPAN} elements in every row are extracted, again with simple matching
techniques, first to obtain the color and afterwards to remove the element and
retrieve the original plain text of the \textit{RSS} feed entry. When this
step is done, a data structure containing all the text of the entries together
with the markings goes to the next step. All original data, namely the
\textit{HTML} data per row, is transferred to the final aggregating
dictionary.

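The simple matching described above can be sketched with a regular expression.
The exact attribute layout the frontend emits is an assumption here: we take
highlights of the form \texttt{<span style="color:...">text</span>}.

```python
import re

# Assumed highlight format: <span style="color:red">text</span>.
# Group 1 captures the color, group 2 the highlighted text.
SPAN_RE = re.compile(
    r'<span[^>]*style="[^"]*color:\s*([^;"]+)[^"]*"[^>]*>(.*?)</span>',
    re.IGNORECASE | re.DOTALL)

def extract_markings(row_html):
    """Return the plain text of a table row together with its markings,
    each marking being a (color, highlighted text) pair."""
    markings = [(m.group(1).strip(), m.group(2))
                for m in SPAN_RE.finditer(row_html)]
    # Removing the SPAN element recovers the original plain entry text.
    plain = SPAN_RE.sub(lambda m: m.group(2), row_html)
    return plain, markings
```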
\section{Node lists}
Every entry gotten from the previous step is processed into a so-called
node-list. A node-list can be seen as a path graph in which every character
and marking has a node. A path graph $G$ is defined as $G=(V,n_1,E,n_i)$ where
$V=\{n_1, n_2, \cdots, n_{i-1}, n_i\}$ and $E=\{(n_1, n_2), (n_2, n_3),
\cdots, (n_{i-1}, n_{i})\}$. A path graph is basically a graph that is a
single linear path of nodes where every node is connected to the next node
except for the last one. The last node is the only final node. The transition
between two nodes is either a character or a marking. As an example we take
the entry \texttt{19:00, 2014-11-12 - Foobar} and create the corresponding
node-list, shown in Figure~\ref{nodelistexample}. Characters are denoted with
single quotes, spaces with an underscore and markers with angle brackets.
Node-lists are the basic elements from which the DAWG will be generated. These
node-lists will also be available in the final aggregating dictionary to
ensure consistency of the data and the possibility of regenerating it.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{nodelistexample.eps}
\caption{Node list example}
\label{nodelistexample}
\end{figure}

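The conversion of an entry into such a list of transitions can be sketched as
follows; the concrete marker syntax (\texttt{<1>}, \texttt{<2>}, \ldots) is an
assumption for illustration.

```python
import re

MARKER_RE = re.compile(r'<(\d+)>')   # assumed marker syntax, e.g. <1>

def to_node_list(entry):
    """Split a marked entry into transitions: every ordinary character is
    one transition, and every marker such as <1> is one transition too."""
    transitions, i = [], 0
    while i < len(entry):
        m = MARKER_RE.match(entry, i)
        if m:                                    # a marker: one transition
            transitions.append(('marker', m.group(1)))
            i = m.end()
        else:                                    # a plain character
            transitions.append(entry[i])
            i += 1
    return transitions
```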
\section{DAWGs}
\subsection{Terminology}
\textbf{Parent nodes} are nodes that have an edge to the child node.\\
\textbf{Confluence nodes} are nodes that have multiple parents.

\subsection{Data structure}
We represent the user generated patterns as DAWGs by converting the node-lists
to DAWGs. Normally DAWGs have single letters from an alphabet as edge labels,
but in our implementation the DAWG's alphabet contains all letters, whitespace
and punctuation, and also the user-specified markers, which can be multiple
characters in actual length but for the DAWG's sake form a single transition
in the graph.

DAWGs are graphs, but due to their constraints we can use a DAWG to check
whether a word matches by checking whether a path exists whose concatenated
edge labels form the word. The first algorithm to generate DAWGs from words
was proposed by Hopcroft et al.~\cite{Hopcroft1971}. It is an incremental
approach to generating the graph, meaning that the graph is expanded entry by
entry. Hopcroft's algorithm has the constraint of lexicographical ordering.
Later on Daciuk et al.~\cite{Daciuk1998,Daciuk2000} improved on the original
algorithm, and their algorithm is the one we used to generate minimal DAWGs
from the node lists.

\subsection{Algorithm}
The algorithm for building DAWGs is an iterative process that goes roughly in
four steps. We start with the null graph, which can be described by
$G_0=(\{q_0\},\{q_0\},\{\},\{\})$, contains one node and no edges, and has
$\mathcal{L}(G_0)=\emptyset$. The first word that is added to the graph is
added in a naive way: it basically replaces the graph by the node-list, which
is a path graph. We just create a new node for every character transition and
mark the last node as final. From then on all words are added using the four
step approach described below. Pseudocode for this algorithm can be found in
Listing~\ref{pseudodawg} as the function \texttt{generate\_dawg(words)}.
\begin{enumerate}
	\item
		Say we add word $w$ to the graph. Step one is finding the
		common prefix of $w$ with the words already in the graph. The
		common prefix is defined as the longest subword $w'$ for which
		there is a $\delta^*(q_0, w')$. When the common prefix is
		found we change the starting state to the last state of the
		common prefix and remain with the suffix $w''$ where $w=w'w''$.
	\item
		We add the suffix to the graph starting from the last node of
		the common prefix.
	\item
		When there are confluence nodes present within the common
		prefix they are cloned starting from the last confluence node
		to avoid adding unwanted words.
	\item
		From the last node of the suffix up to the first node we
		replace the nodes by checking whether there are equivalent
		nodes present in the graph that we can merge with.
\end{enumerate}

\begin{algorithm}[H]
\SetKwProg{Def}{def}{:}{end}
\Def{generate\_dawg(words)}{
	register := $\emptyset$\;
	\While{there is another word}{%
		word := next word\;
		commonprefix := CommonPrefix(word)\;
		laststate := %
			delta\textsuperscript{*}(q0, commonprefix)\;
		currentsuffix := %
			word[length(commonprefix)\ldots length(word)]\;
		\If{has\_children(laststate)}{%
			replace\_or\_register(laststate)\;
		}
		add\_suffix(laststate, currentsuffix)\;
	}
	replace\_or\_register(q0)\;
}
\Def{replace\_or\_register(state)}{%
	child := last\_child(state)\;
	\If{has\_children(child)}{%
		replace\_or\_register(child)\;
	}
	\eIf{there is an equivalent state q}{%
		last\_child(state) := q\;
		delete(child)\;
	}{%
		register.add(child)\;
	}
}
\caption{Generating DAWGs pseudocode}
\label{pseudodawg}
\end{algorithm}
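The pseudocode above can be sketched in Python as follows. This is a minimal
sketch of the sorted incremental construction over plain characters only (no
markers), so words must be inserted in lexicographical order; the class and
method names are ours, not those of the actual implementation.

```python
class Node:
    """One DAWG state: a finality flag plus outgoing labelled edges."""
    _ids = 0
    def __init__(self):
        Node._ids += 1
        self.id = Node._ids
        self.final = False
        self.edges = {}
    def signature(self):
        # Two states are equivalent iff they agree on this signature.
        return (self.final,
                tuple(sorted((c, n.id) for c, n in self.edges.items())))

class Dawg:
    def __init__(self):
        self.root = Node()
        self.register = {}    # signature -> registered (unique) state
        self.unchecked = []   # suffix of the previous word, not yet merged
        self.previous = ''
    def insert(self, word):
        assert word > self.previous, 'words must arrive in sorted order'
        # Step 1: find the common prefix with the previous word.
        common = 0
        for a, b in zip(word, self.previous):
            if a != b:
                break
            common += 1
        # Steps 3/4 for the part that fell outside the new common prefix.
        self._replace_or_register(common)
        # Step 2: add the remaining suffix as a fresh path.
        node = self.unchecked[-1][2] if self.unchecked else self.root
        for ch in word[common:]:
            new = Node()
            node.edges[ch] = new
            self.unchecked.append((node, ch, new))
            node = new
        node.final = True
        self.previous = word
    def finish(self):
        self._replace_or_register(0)
    def _replace_or_register(self, down_to):
        while len(self.unchecked) > down_to:
            parent, ch, child = self.unchecked.pop()
            sig = child.signature()
            if sig in self.register:              # equivalent state: merge
                parent.edges[ch] = self.register[sig]
            else:                                 # otherwise register it
                self.register[sig] = child
    def accepts(self, word):
        node = self.root
        for ch in word:
            if ch not in node.edges:
                return False
            node = node.edges[ch]
        return node.final
```

Feeding it the entries of the example below (\texttt{abcd}, \texttt{aecd},
\texttt{aecf}) yields a DAWG that accepts exactly those three words and, in
particular, rejects the unwanted word \texttt{abcf}.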

\subsection{Example}
We visualize this with an example, shown in the subgraphs in
Figure~\ref{dawg1}, that builds a DAWG with the following entries:
\texttt{abcd}, \texttt{aecd} and \texttt{aecf}.

\begin{itemize}
	\item Initial\\
		Initially we begin with the null graph. This graph is shown in
		the figure as SG0. This DAWG does not yet accept any words.
	\item \texttt{abcd}\\
		Adding the first entry \texttt{abcd} is trivial because we can
		just create a single path, which does not require any hard
		work. This is because the common prefix we find in Step 1 is
		empty and the suffix will thus be the entire word. Merging the
		suffix back into the graph is also not possible since there
		are no nodes except for the first node. The result of adding
		the first word is visible in subgraph SG1.
	\item \texttt{aecd}\\
		For the second entry we have to do some extra work. The common
		prefix found in Step 1 is \texttt{a}. This leaves us in Step 2
		with the suffix \texttt{ecd}, which we add. In Step 3 we see
		that there are no confluence nodes in our common prefix and
		therefore we do not have to clone nodes. In Step 4 we traverse
		from the last node back to the beginning of the suffix and
		find a common suffix \texttt{cd}, so we can merge these nodes.
		In this way we can reuse the transition from $q_3$ to $q_4$.
		This leaves us with subgraph SG2.
	\item \texttt{aecf}\\
		We now add the last entry, the word \texttt{aecf}. If we do
		this without the confluence node checking we introduce an
		unwanted extra word. In Step 1 we find the common prefix
		\texttt{aec}, which leaves us with the suffix \texttt{f} to
		add. This creates subgraph SG3, and we notice there is an
		extra word present in the graph, namely the word \texttt{abcf}.
		This extra word appeared because of the confluence node $q_3$,
		which was present in the common prefix, introducing an
		unwanted path. Therefore in Step 3, when we check for
		confluence nodes, we see that node $q_3$ is a confluence node
		and needs to be cloned off the original path; by cloning we
		take the added suffix with it to create an entirely new route.
		Tracing the route back again we do not encounter any other
		confluence nodes, so we can start Step 4. For this word there
		is no common suffix and therefore no merging is applied. This
		results in subgraph SG4, which is the final DAWG containing
		only the words added.
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{inccons.eps}
\strut\\\strut\\
\caption{Incrementally constructing a DAWG}
\label{dawg1}
\end{figure}

\subsection{Application to the extraction of patterns}
The text data in combination with the user markings cannot be converted
automatically to a DAWG using the algorithm we described. This is because the
user markings are not necessarily a single character or word: user markings
are basically one or more characters. When we add a user marking we insert a
character that is not in the alphabet, and later on we change the marking into
a kind of subgraph. When this is applied it is possible that non-determinism
is added to the graph. Non-determinism means that a single node has multiple
outgoing edges with the same transition; in practice this means that a word
can be present in the graph via multiple paths. This is shown in
Figure~\ref{nddawg} with the following words: \texttt{ab<1>c} and
\texttt{a<1>bc}. In this graph the word \texttt{abdc} will be accepted and the
user pattern \texttt{<1>} will be filled with the word \texttt{d}. However, if
we try the word \texttt{abdddbc}, both paths can be chosen. In the first case
the user pattern \texttt{<1>} will be filled with \texttt{dddb} and in the
second case with \texttt{bddd}. In such a case we need to make the hopefully
smartest choice.

\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{nddawg.eps}
\strut\\
\caption{Example of non-determinism}
\label{nddawg}
\end{figure}
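The ambiguity can be made concrete with a small backtracking matcher. This is
an illustrative sketch, not the actual implementation: a pattern is a sequence
of plain characters and \texttt{('marker', name)} slots, and a marker matches
one or more characters.

```python
def matches(pattern, text):
    """Enumerate every way `text` can be parsed against `pattern`.
    A plain character matches itself; a ('marker', name) item matches one
    or more characters. Yields one capture dict per possible parse."""
    def walk(pi, ti, captures):
        if pi == len(pattern):
            if ti == len(text):           # pattern and text both consumed
                yield dict(captures)
            return
        item = pattern[pi]
        if isinstance(item, tuple):       # a user marking: try every split
            name = item[1]
            for end in range(ti + 1, len(text) + 1):
                yield from walk(pi + 1, end,
                                captures + [(name, text[ti:end])])
        elif ti < len(text) and text[ti] == item:
            yield from walk(pi + 1, ti + 1, captures)
    return list(walk(0, 0, []))
```

For instance, against the patterns \texttt{ab<1>c} and \texttt{a<1>bc} the
word \texttt{abdddbc} parses both ways, capturing \texttt{dddb} or
\texttt{bddd} for \texttt{<1>}.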

\subsection{Minimality and non-determinism}
The Myhill-Nerode theorem~\cite{Hopcroft1979} states that among all graphs
accepting the same language there is a single graph with the least number of
states. Mihov~\cite{Mihov1998} has proven that the algorithm is minimal in
its original form. Our program converts the node-lists to DAWGs that can
possibly contain non-deterministic transitions from nodes, and therefore one
can argue about whether the Myhill-Nerode theorem and Mihov's proof still
hold. Due to the nature of the non-determinism this is not a problem and both
hold. In reality the graph itself is only non-deterministic when expanding
the categories, and thus only during matching.

Choosing the smartest path during matching means the program has to choose
deterministically between possibly multiple paths with possibly multiple
results. There are several possible heuristics to choose from.
\begin{itemize}
	\item Maximum fields heuristic\\
		This heuristic prefers the result that has the highest number
		of categories filled with actual text. Using this method the
		highest number of data fields will be filled at all times. The
		downside of this method is that some data might not be put in
		the right field because a suboptimal splitting occurred that
		put the data in two separate fields whereas it should be in
		one field.
	\item Maximum path heuristic\\
		The maximum path heuristic tries to find a match with the
		highest number of fixed path transitions. Fixed path
		transitions are transitions that do not occur within a
		category. The philosophy behind this is that because these
		paths are hard coded in the graph they must be important. The
		downside of this method shows when overlap occurs between hard
		coded paths and information within the categories. For
		example, a band that is called \texttt{Location} could
		interfere greatly with a hard coded path that marks a location
		using the same words.
\end{itemize}
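Both heuristics can be expressed as simple scoring functions over the
candidate parses; a sketch, assuming each candidate is a dict mapping category
names to the captured text:

```python
def max_fields(candidates):
    """Maximum fields heuristic: prefer the parse that fills the most
    categories with actual (non-empty) text."""
    return max(candidates, key=lambda c: sum(1 for v in c.values() if v))

def max_path(entry, candidates):
    """Maximum path heuristic: prefer the parse with the most fixed
    (non-category) transitions, i.e. the most characters of the entry
    left outside any category."""
    return max(candidates,
               key=lambda c: len(entry) - sum(len(v) for v in c.values()))
```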

If we knew more about the categories, the best heuristic would automatically
be the maximum path heuristic. When, as in our implementation, there is very
little information, both heuristics perform about the same.