\section{Introduction}
What do people do when they want to see a movie? Attend a concert? Find out which shows are playing in the local theater?

When the internet was in its early days and only starting to become accessible to the general public, information about entertainment was still obtained almost exclusively from flyers, books, posters and radio/TV advertisements. People had to search hard for information and could easily miss a show simply because they never crossed paths with it. Today the internet is used on a daily basis by almost everyone in western society, and one would expect that missing an event is impossible given the enormous amount of information available online. For information about leisure activities, however, the opposite is true.

Nowadays information about entertainment is offered on the internet via two main channels: individual venue websites and information bundling websites.

Individual venues put a lot of effort and resources into building beautiful, fast and, above all, modern websites that present their information with nice graphics, animations and gimmicks. There are also companies that bundle the information from different websites to provide an overview. These information bundling websites often use multiple individual venue websites as the source of their information, and as a result their information is usually incomplete. This is because individual venues tend to assume, for example, that the address of their venue is obvious, that everyone knows their ticket price is always fixed at \EURdig$5.-$, or that a membership is required to attend their events. Venues usually put this general information in a disclaimer or on a separate page, and information bundling websites therefore frequently miss it: they can crawl the individual events, but gathering the miscellaneous information is usually done by hand.

Combining the information from the different data sources turns out to be a hard task for such information bundling websites. It is hard because these websites do not have the time and resources reserved for it, and they therefore often serve incomplete information. Because of the complexity of obtaining complete information, there are not many websites that try to bundle entertainment information into a complete and consistent database. Hyperleap\footnote{\url{http://hyperleap.nl}} tries to achieve the goal of serving complete and consistent information.
\section{Hyperleap \& Infotainment}
Hyperleap is an internet company that already existed before the internet was widespread. Hyperleap, active since 1995, specializes in producing, publishing and maintaining \textit{infotainment}. \textit{Infotainment} is a portmanteau of \textit{information} and \textit{entertainment}: it denotes the combination of factual information and subjective information (entertainment) within a certain category or field. In the case of Hyperleap the category is the leisure industry, which encompasses all facets of entertainment, ranging from cinemas, theaters and concerts to swimming pools, bridge competitions and conferences. Within the leisure industry, factual information includes, but is not limited to, starting time, location, host or venue and duration. Subjective information includes, but is not limited to, reviews, previews, photos, background information and trivia.

Hyperleap manages the largest database of \textit{infotainment} about the leisure industry focussed on the Netherlands and surrounding regions. The database contains on average over $10.000$ categorized events per week, and the venue database contains over $54.000$ venues offering the leisure activities, ranging from theaters and music venues to petting zoos and fast food restaurants. All subjective information is obtained or created by Hyperleap, and all factual information is gathered from different sources and quality checked, and is therefore very reliable. Hyperleap is the only company of its kind with information of such high quality. The \textit{infotainment} is presented via several websites, each specialized in a genre or category, and some of these sites attract over $500.000$ visitors per month.
\section{Information flow}
The reason Hyperleap is the only company of its kind with data of such high quality is that Hyperleap spends a lot of time and resources on quality checking, cross comparing and consistency checking before the data enters the database. To achieve this, the data goes through several stages before it enters the database. These stages are visualized in Figure~\ref{informationflow} as an information flow diagram. In this diagram the nodes are processing steps and the arrows denote information transfer or flow.

\begin{figure}[H]
	\label{informationflow}
	\centering
	\includegraphics[scale=0.7]{informationflow.eps}
	\strut\\
	\strut\\
	\caption{Information flow Hyperleap database}
\end{figure}
\newpage
\subsection*{Sources}
The information that enters the database has to be quality checked, and this checking starts at the source of the data. There are several criteria the information from a source has to comply with before it can enter the database. A source is, for example, required to be reliable, consistent and free by licence. Event information from a source must contain at least the following fields.\\
\textbf{What:}\\
The \textit{What} field describes the content of the event; content is a very broad notion. In practice it can be the name of a concert tour, the title of a theater show, a movie title, a festival title and much more.\\
\textbf{Where:}\\
The \textit{Where} field is the location of the event. The location is often omitted because the organization behind the source presenting the information thinks it is obvious. This field can also include different sub locations, for example when a pop concert venue has its own building but organizes a festival in a park during the summer. This data is often assumed to be trivial and self-evident, but in practice this is not the case. In this example, the name of the park alone is often not enough for an outsider.\\
\textbf{When:}\\
The \textit{When} field is the time and date of the event. Hyperleap wants at minimum the date, the start time and the end time. In practice, end times are often omitted because they are not fixed or because the organization thinks they are obvious.\\
\strut\\
Venues often present incomplete data entries that do not comply with the requirements explained above. Within the information flow, categorizing and grading the source is the first step. Hyperleap processes many different sources and source types, and every source has different characteristics. Sources can be modern, such as websites or social media, but even today a lot of information arrives at Hyperleap via flyers, fax or email. Just as sources vary in content structure, they also vary in reliability. For example, the following entry is very well structured and probably generated, and thus probably also reliable. Such an entry could originate, for example, from the title of an item in an RSS feed. The example has a clear structure, and almost all required information is directly available from the entry.
\begin{flushleft}
	\texttt{2015-05-20, 18:00-23:00 - \textit{Foobar} presenting their new
	CD in combination with a show. Location: small salon.}
\end{flushleft}
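A well structured entry like the one above can be picked apart with a single regular expression. The sketch below is illustrative only and is not part of Hyperleap's actual pipeline; the field names \texttt{what} and \texttt{where} are hypothetical and simply mirror the \textit{What}/\textit{Where}/\textit{When} fields described earlier.

```python
import re

# Hypothetical pattern for entries of the form
# "YYYY-MM-DD, HH:MM-HH:MM - <what>. Location: <where>."
ENTRY_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}), "
    r"(?P<start>\d{2}:\d{2})-(?P<end>\d{2}:\d{2}) - "
    r"(?P<what>.+?)\. Location: (?P<where>.+?)\.$"
)

entry = ("2015-05-20, 18:00-23:00 - Foobar presenting their new "
         "CD in combination with a show. Location: small salon.")

match = ENTRY_RE.match(entry)
if match:
    print(match.group("date"))   # 2015-05-20
    print(match.group("start"))  # 18:00
    print(match.group("where"))  # small salon
```

Such a hand-written pattern works for exactly one source layout, which is precisely the maintenance problem this thesis addresses: every change in the source format requires a programmer to rewrite the pattern.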
130
131 An example of a terrible item could be for example the following text that
132 could originate from a flyer or social media post. This example lacks a
133 precise date, time and location and is therefore hard for people to grasp at
134 first, let alone for a machine. When someone wants to get the full information
135 he has to tap in different resources which might not always be available.
136
137 \begin{flushleft}
138 \texttt{\textit{Foobar} playing to celebrate their CD release in the%
139 park tomorrow evening.}
140 \end{flushleft}
141
\subsection*{Crawler}
Once the source has been determined and classified, the next step is to periodically crawl the source. At the moment crawling happens using two main methods.\\
\textbf{Manual crawling:} Manual crawling means that an employee accesses the source and puts the information directly into the database. This is common for non-digital sources and for very sudden events or event changes, such as surprise concerts or cancellations.\\
\textbf{Automatic crawling:} Some sites are so structured that a programmer can create a program that visits the website systematically and automatically extracts all new information. Not all digital sources are suitable for automatic crawling; those still require manual crawling. The programmed crawlers are always created specifically for one source or a small set of sources, and when, for example, the structure of a source changes, the programmer has to adapt the crawler, which is costly. Information from all crawlers first goes to the \textit{Temporum}.
\subsection*{Temporum}
The \textit{Temporum} is a big bin that contains raw data extracted from the different sources; this data has to be post processed before it is suitable for the actual database. The post-processing encompasses several possible tasks. The first task is checking the validity of the entries. This is not done for every entry; instead, random samples are taken and tested, to verify that the crawler is not malfunctioning and that no nonsense enters the data.

The second task is matching an entry to one or more objects. Objects, in the broadest sense, can be almost anything. In the case of a theater performance the matched object can be the theater show itself, describing the show for all locations. In the case of a concert the object can be the tour the band is doing. When the source is a ticket vendor, the object is the venue where the event takes place, since the source itself is not a venue. Many of these tasks are done by a human, possibly aided by a computer program. The \textit{Temporum} acts as a safety net: no data goes straight into the database, everything is first checked, matched and validated.
\subsection*{Database \& Publication}
After post-processing, the data is entered into the final database. The database contains all events that happened in the past and all events that are going to happen in the future, as well as information about the venues. The database is linked to several categorical websites that offer the information to users and accompany it with the other part of the \textit{infotainment}, namely the subjective information, usually presented in the form of trivia, photos, interviews, reviews, previews and much more.
\section{Goal \& Research question}
Crawling the sources and applying the preprocessing is the most expensive task in the entire information flow, because it requires a programmer to be hired. Currently only programmers can create crawlers and design preprocessing jobs, because all current programs are specifically designed to do one thing, usually for one source. Because sources change often, this task becomes very expensive and has a long feedback loop. When a source changes, the source is first preprocessed in the old way, sent to the \textit{Temporum} and checked and matched by a human. The human then notices the error in the data and contacts the programmer, who has to reprogram the specific crawler for the new structure. This feedback loop, shown in Figure~\ref{feedbackloop}, can take days and can be the cause of gaps and faulty information in the database.
\begin{figure}[H]
	\label{feedbackloop}
	\centering
	\includegraphics[scale=0.5]{feedbackloop.eps}
	\strut\\\strut\\
	\caption{Feedback loop for malfunctioning crawlers}
\end{figure}

The goal of the project is to relieve the programmer of repairing crawlers all the time and to make the task of adapting, editing and removing crawlers feasible for someone without programming experience. In terms of Figure~\ref{feedbackloop} this means replacing the dotted arrows with the dashed arrows.

For this project an application has been developed that provides an interface to a crawler generation system able to crawl RSS\cite{Rss} and Atom\cite{Atom} publishing feeds. The interface offers point-and-click interaction, meaning that no computer science background is needed to create, modify, test and remove crawlers. The current Hyperleap backend system that handles the data can, via an interface, generate XML feeds that contain the crawled data. The structure of the application is very modular and generic, and therefore it is easy to change things in the program without knowing much about the programming language used, because, for example, all visual elements such as buttons are defined in a human readable text file. In practice this means that one person, not necessarily a programmer, can be instructed to change the structure, which can also greatly reduce programmer intervention time.

The actual problem statement then becomes:
\begin{center}
	\textit{Is it possible to shorten the feedback loop for repairing and
	adding crawlers by making a system that can create, add and maintain
	crawlers for RSS feeds?}
\end{center}
\section{RSS/Atom}
RSS/Atom feeds, from now on simply called RSS feeds, are publishing feeds in the XML format\cite{Xml} that are used to publish events. Every event or entry consists of several standardized fields. The main data fields are the \textit{title} and the \textit{description} field; these store the raw data describing the event. Furthermore, there are several auxiliary fields that store the link to the full article, the publishing date, possibly some media in the form of an image or video URL, and a \textit{Globally Unique Identifier} (GUID)\footnote{A GUID is a unique identifier that in most cases is the permalink of the article. A permalink is a link that will always point to the article.}. An example of an RSS feed can be found in Listing~\ref{examplerss}; this listing shows a partly truncated RSS feed of a well known venue in the Netherlands. As the listing shows, an RSS feed is plain XML. Every RSS feed contains a \texttt{channel} tag, and within that tag there is some metadata and a list of \texttt{item} tags. Every \texttt{item} tag has a fixed number of fields. The most important fields for RSS within the leisure industry are the \texttt{title} and the \texttt{description} field.

\begin{listing}
	\caption{An example of a partly truncated RSS feed of a well known Dutch
	venue}
	\label{examplerss}
	\xmlcode{exrss.xml}
\end{listing}
RSS feeds are mostly used by news sites to publish their articles. The feed then contains only the headlines; if the user reading the feed is interested, he can click the so-called deeplink and is sent to the full website containing the article. Users often use programs that bundle user specified RSS feeds into one big summary feed that can be used to sift through a lot of news. Such an RSS feed reader uses the GUID to skip entries that are already present in its internal database.
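The mechanics of reading an \texttt{item} and deduplicating on the GUID can be sketched with Python's standard XML library. The feed fragment below is made up for illustration and is not Hyperleap's actual source; a real reader would fetch the feed over HTTP and persist \texttt{seen\_guids} between runs.

```python
import xml.etree.ElementTree as ET

# A made-up, heavily truncated RSS feed fragment, for illustration only.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example venue</title>
    <item>
      <title>2015-05-20, 18:00-23:00 - Foobar presenting their new CD</title>
      <description>Foobar presents the new CD in the small salon.</description>
      <guid>http://example.org/events/foobar-cd</guid>
    </item>
  </channel>
</rss>"""

seen_guids = set()  # the feed reader's internal database of known entries

root = ET.fromstring(FEED)
for item in root.iter("item"):
    guid = item.findtext("guid")
    if guid in seen_guids:
        continue  # GUID already known: skip the duplicate entry
    seen_guids.add(guid)
    print(item.findtext("title"))
```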
Generating RSS feeds by hand is a tedious task, but usually RSS feeds are generated by the Content Management System (CMS) the website runs on. With this automatic method the RSS feeds are generated from the content published on the website. Because the process is automatic, the generated RSS feeds are generally very consistent in their structure. In the entertainment industry venues often use a CMS for their website so that users with no programming or website background are able to post news items and event information.
\section{Why RSS?}
\label{sec:whyrss}
We described several different source formats: HTML, fax/email, RSS and XML. Because of the limited scope of the project and the time planned for it, we had to drop some of the input formats, since they all require different techniques. For example, when the input source is in HTML format, most probably a website, a great deal of the information extraction can be automated using the structural information that is characteristic of HTML. For fax/email, on the other hand, there is almost no structural information, and most automation techniques require natural language processing. We chose RSS feeds because RSS feeds lack inherent structural information about the event fields, but are still very structured. This structure exists because, as said above, RSS feeds are generated and therefore almost always look the same. Moreover, in RSS feeds most venues use particular structural identifiers in the text: they separate fields with vertical bars, commas, whitespace and other non-text characters. These field separators and keywords can be hard for a computer to detect, but people are very good at detecting them. With one look they can identify the characters and keywords and build a pattern in their head. Another reason we chose RSS is temporal consistency: RSS feeds are almost always generated, and because of that the structure of the entries is very unlikely to change. Basically, an RSS feed only changes structure when the CMS that generates it changes its generation algorithm. This property is useful because the crawlers then do not have to be retrained very often. Because the structure of the fields is not inherent to the format, entirely new strategies had to be applied.
\section{Directed Acyclic Graphs}
\paragraph{Directed graphs}
A directed graph is a mathematical structure that describes relations between nodes. A directed graph $G$ is defined as the tuple $(V,E)$ where $V$ is a set of named nodes and $E$ is the set of edges, with $E\subseteq V\times V$. An edge $e\in E$ is a pair $(v_1, v_2)$ with $v_1,v_2\in V$, and is drawn in a figure as an arrow from node $v_1$ to node $v_2$. Multiple connections between two nodes are possible. For example, the graph visualized in Figure~\ref{graphexample} can be described mathematically as:
$$G=(\{n_1, n_2, n_3, n_4\}, \{(n_1, n_2), (n_2, n_1), (n_2, n_3), (n_3, n_4), (n_1, n_4)\})$$
\begin{figure}[H]
	\label{graphexample}
	\scalebox{0.7}{
		\digraph[]{graphexample}{
			rankdir=LR;
			n1->n2;
			n2->n1;
			n2->n3;
			n3->n4;
			n1->n4;
		}
	}
	\caption{Example DG}
\end{figure}
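In code, a directed graph is conveniently represented as an adjacency mapping from each node to its set of direct successors. The sketch below, illustrative only, encodes the example graph from the figure and tests path existence with a breadth-first search.

```python
from collections import deque

# Edges of the example graph G from the figure
EDGES = {("n1", "n2"), ("n2", "n1"), ("n2", "n3"), ("n3", "n4"), ("n1", "n4")}

# Build the adjacency mapping: node -> set of direct successors
adj = {}
for src, dst in EDGES:
    adj.setdefault(src, set()).add(dst)

def reachable(start, goal):
    """Return True if there is a directed path from start to goal."""
    queue, visited = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return False

print(reachable("n1", "n3"))  # True: n1 -> n2 -> n3
print(reachable("n4", "n1"))  # False: n4 has no outgoing edges
```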
\paragraph{Directed acyclic graphs}
Directed Acyclic Graphs (DAGs) are a special kind of directed graph. A DAG is also defined as $G=(V,E)$, but with the restriction on $E$ that cycles are not allowed. Figure~\ref{dagexample} shows two graphs; the top graph contains a cycle and the bottom graph does not, so only the bottom graph is a valid DAG. A cycle can be defined as follows:
If an edge $e\in E$, written $(v_1,v_2)\in E$, is denoted $v_1\rightarrow v_2$, then $v_1\xrightarrow{+}v_2$ is defined as $v_1\rightarrow \ldots \rightarrow v_2$, meaning that there is a path of length at least $1$ from $v_1$ to $v_2$. In an acyclic graph the following holds: $\nexists v_1: v_1\xrightarrow{+}v_1$. In words this means that no node can be reached from itself by traveling along the edges. Adding the property of acyclicity to graphs lowers the computational complexity of testing path existence in the graph to $\mathcal{O}(L)$, where $L$ is the length of the path.
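Whether a directed graph is acyclic can be checked with a depth-first search that tracks the nodes on the current path: revisiting a node that is still on the path means a cycle was found. A minimal sketch, not taken from the Hyperleap code base, applied to the two graphs of the DAG example figure:

```python
def is_acyclic(adj):
    """DFS check whether a directed graph (node -> successors) has no cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {node: WHITE for node in adj}

    def visit(node):
        color[node] = GRAY
        for nxt in adj.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:
                return False          # back edge: cycle found
            if color.get(nxt, WHITE) == WHITE and not visit(nxt):
                return False
        color[node] = BLACK
        return True

    return all(color[n] != WHITE or visit(n) for n in list(color))

# The two graphs from the DAG example figure
cyclic  = {"n01": ["n02"], "n02": ["n03"], "n03": ["n01"]}
acyclic = {"n11": ["n12"], "n12": ["n13", "n14"]}
print(is_acyclic(cyclic))   # False
print(is_acyclic(acyclic))  # True
```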
\begin{figure}[H]
	\label{dagexample}
	\centering
	\scalebox{0.7}{
		\digraph[]{dagexample}{
			rankdir=LR;
			n01 -> n02;
			n02 -> n03;
			n03 -> n01;
			n11 -> n12;
			n12 -> n13;
			n12 -> n14;
		}
	}
	\caption{Example DAG}
\end{figure}
\paragraph{Directed Acyclic Word Graphs}
The type of graph used in the project is a special kind of DAG called a Directed Acyclic Word Graph (DAWG). A DAWG can be defined by the tuple $G=(V,v_0,E,F)$. $V$ is, as in directed graphs, the set of nodes. $E$ is the set of edges described by $E\subseteq V\times V\times A$, where $A$ is the alphabet used for edge labels. In words this means that an edge is a labeled arrow between two nodes; for example the edge $(v_1, v_2, a)$ can be visualized as $v_1\xrightarrow{a}v_2$. In a standard DAWG the alphabet $A$ contains all characters used in natural language, but in theory $A$ can contain anything. $F$ describes the set of final nodes; final nodes are nodes that can be the end of a sequence even if there are arrows leading out of them. In the example graph in Figure~\ref{exampledawg} the final nodes are visualized with a double circle as node shape. In this example the marking is purely cosmetic, because $n_6$ is a final node anyway as no arrows lead out of it. This does not need to be the case: for example, in the DAWG $G=(\{n_1, n_2, n_3\}, n_1, \{(n_1, n_2, a), (n_2, n_3, b)\}, \{n_2, n_3\})$ the final node marking has a distinct use. This graph accepts the words \textit{a} and \textit{ab}, and to express this both $n_2$ and $n_3$ are final. Finally, $v_0$ describes the initial node, which is visualized in figures by an incoming arrow. Because of the labeled edges, data can be stored in a DAWG: traversing a DAWG and concatenating the labels of the edges along the path constructs a word. Using graph minimization, big sets of words can be stored using a small amount of storage, because edges can be reused to specify transitions. For example, the graph in Figure~\ref{exampledawg} describes the language $L$ of all accepted words $w\in\{abd, bad, bae\}$. Testing whether a word is present in the DAWG uses the same technique as testing whether a node path is present in a normal DAG, and therefore also falls in the computational complexity class $\mathcal{O}(L)$: the cost grows linearly with the length of the word.
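Acceptance of a word by the example DAWG can be sketched as follows; the transition table simply transcribes the figure, and the lookup cost is one table access per character, i.e. linear in the word length.

```python
# Transition table of the example DAWG: (node, label) -> next node
TRANSITIONS = {
    ("n1", "a"): "n2", ("n2", "b"): "n3", ("n3", "d"): "n6",
    ("n1", "b"): "n4", ("n4", "a"): "n5",
    ("n5", "d"): "n6", ("n5", "e"): "n6",
}
INITIAL = "n1"
FINAL = {"n6"}  # final nodes, drawn with a double circle in the figure

def accepts(word):
    """Follow labeled edges from the initial node; O(len(word)) lookups."""
    node = INITIAL
    for char in word:
        node = TRANSITIONS.get((node, char))
        if node is None:
            return False  # no outgoing edge with this label
    return node in FINAL

print([w for w in ("abd", "bad", "bae", "ab", "add") if accepts(w)])
# ['abd', 'bad', 'bae']
```

Note how the edges into $n_6$ are shared by all three words, which is exactly the storage saving that minimization provides.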
\begin{figure}[H]
	\label{exampledawg}
	\centering
	\scalebox{0.7}{
		\digraph[]{graph21}{
			rankdir=LR;
			node [shape="circle"]; n1 n2 n3 n4 n5;
			n6 [shape="doublecircle"];
			n [style=invis];
			n -> n1;
			n1 -> n2 [label="a"];
			n2 -> n3 [label="b"];
			n3 -> n6 [label="d"];
			n1 -> n4 [label="b"];
			n4 -> n5 [label="a"];
			n5 -> n6 [label="d"];
			n5 -> n6 [label="e"];
		}
	}
	\caption{Example DAWG}
\end{figure}