\section{Introduction}
What do people do when they want to see a movie? Attend a concert? Find out
which shows are playing in their local theater?

When the internet was young and had only just become accessible to the general
public, information about entertainment was still obtained almost exclusively
from flyers, books, posters and radio or television advertisements. People had
to search hard for the information and could easily miss a show simply because
it never crossed their path.
Today the internet is used by almost everyone in western society on a daily
basis, and one would expect that missing an event has become impossible given
the flood of information people receive every day. The opposite is true.

Nowadays information about entertainment is offered via two main channels on
the internet: individual venue websites and information bundling websites.

Individual venues put a lot of effort and resources into building beautiful,
fast and, above all, modern websites that present their information with
attractive graphics, animations and gimmicks. There are also companies that
bundle the information from different websites. These information bundling
websites often use the individual venue websites as the source for their
information, and as a result their information is usually incomplete.
Individual organizations tend to assume, for example, that the address of
their venue is obvious, that their ticket price is always fixed at
\EURdig$5.-$ or that a membership is required to attend their events. They
usually put such general information in a disclaimer or on a separate page,
and information bundling websites therefore frequently miss it.

Combining the information from different data sources turns out to be a hard
task for such information bundling websites, because they do not have the time
and resources reserved for it, and therefore they too often serve incomplete
information. Because of the complexity of gathering complete information, few
websites attempt to bundle entertainment information into a complete and
consistent database. Hyperleap\footnote{\url{http://hyperleap.nl}} tries to
achieve the goal of serving complete and consistent information.

\section{Hyperleap \& Infotainment}
Hyperleap is an internet company that already existed before the internet was
widespread. Active since 1995, Hyperleap specializes in producing, publishing
and maintaining \textit{infotainment}. \textit{Infotainment} is a portmanteau
of the words \textit{information} and \textit{entertainment}: it denotes the
combination of factual information and subjective information (entertainment)
within a certain category. In the case of Hyperleap that category is the
entertainment industry, which encompasses all facets of entertainment, from
cinemas, theaters and concerts to swimming pools, bridge matches and
conferences. Within the entertainment industry, factual information includes,
but is not limited to, the starting time, date, host and venue or location of
an event. Subjective information includes, but is not limited to, reviews,
previews, photos, background information and trivia.

Hyperleap manages the largest database containing \textit{infotainment} about
the entertainment industry. The database contains on average over $10.000$
categorized events per week, and the venue database contains over $54.000$
venues delivering the entertainment, ranging from theaters and music venues to
petting zoos and fast food restaurants. All subjective information is obtained
or created by Hyperleap itself, and all factual information is gathered from
different sources and quality checked, and is therefore very reliable.
Hyperleap is the only company of its kind with information of such high
quality. The \textit{infotainment} is presented via several websites,
specialized per genre or category, some of which attract over $500.000$
visitors per month.

\section{Information flow}
The reason Hyperleap is the only company of its kind with such high quality
data is that it spends a lot of time and resources on quality checking, cross
comparing and consistency checking before data enters the database. To achieve
this, the data is inserted into the database in several stages, visualized in
Figure~\ref{informationflow} as an information flow diagram in which nodes
represent processing or publishing steps and arrows represent information
flow.

\begin{figure}[H]
\label{informationflow}
\centering
\scalebox{0.7}{
\digraph[]{graph111}{
rankdir=TB;
node [shape="rectangle",fontsize=10,nodesep=0.7,ranksep=0.75,width=1];
edge [weight=5.];
i0 [label="Website"];
i1 [label="Email"];
i2 [label="Fax"];
i3 [label="RSS/Atom"];
p1 [label="Preprocessing"];
p2 [label="Temporum: Postprocessing"];
o1 [label="Database: Insertion"];
o2 [label="TheAgenda"];
o3 [label="BiosAgenda"];
o4 [label="..."];
node [width=5]; p1 p2 o1;
i0 -> p1;
i1 -> p1;
i2 -> p1;
i3 -> p1;
p1 -> p2;
p2 -> o1;
o1 -> o2;
o1 -> o3;
o1 -> o4;
}
}
\caption{Information flow Hyperleap database}
\end{figure}

\paragraph{Sources}
The information that enters the database has to be quality checked. There are
several criteria the information and the source have to comply with before any
of it can enter the database. For example, the source has to be reliable,
consistent and free by licence, whereas the event entries have to contain at
least the following fields:
\begin{itemize}
\item[What]
The \textit{What} field describes the content of the event; content is a very
broad notion. In practice it can describe the name of a concert tour, the
title of a theater show, movie or festival, and much more.
\item[Where]
The \textit{Where} field is the location of the event. The location is often
omitted because the organization behind the source presenting the information
considers it obvious. This field can also include different sublocations, for
example when a pop concert venue has its own building but organizes a festival
in a park in the summer. This data is often assumed to be trivial and
inherent, but in practice it is not: in this example, the name of the park
alone is often not enough for an outsider.
\item[When]
The \textit{When} field is the time and date of the event. Hyperleap wants to
have at least the date, start time and end time. In practice end times are
often omitted because they are not fixed or because the organization considers
them obvious.
\end{itemize}

Venues often present incomplete data entries that do not comply with the
requirements explained above.

The first step in the information flow is the source. Hyperleap processes
different sources, and every source has different characteristics. Sources can
be modern, such as websites or social media, but even today a lot of
information arrives at Hyperleap via flyers, fax or email. Just as sources
vary in content structure, they also vary in reliability. For example, the
following entry is a very well structured and probably automatically
generated, and thus probably also reliable, event description. Such an entry
could originate from an RSS feed.

\begin{flushleft}
\texttt{2015-05-20, 18:00-23:00 - \textit{Foobar} presenting their new CD in
combination with a show. Location: small salon.}
\end{flushleft}

An example of a terrible entry is the following text, which could originate
from a flyer or a social media post.

\begin{flushleft}
\texttt{\textit{Foobar} playing to celebrate their CD release in the park
tomorrow evening.}
\end{flushleft}

This example lacks a precise date, time and location and is therefore hard to
grasp at first glance even for people, let alone for a machine. To obtain the
full information one has to tap into different resources, which might not
always be available.

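The contrast can be illustrated with a small sketch: the well structured entry
shown above can be extracted with a single regular expression, whereas the
flyer-like entry cannot. The pattern below is a hypothetical example, tailored
to exactly this layout; it is not part of the Hyperleap system.

```python
import re

# Hypothetical pattern for the "date, start-end - description. Location: ..."
# layout of the well structured example entry above.
ENTRY_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}), "
    r"(?P<start>\d{2}:\d{2})-(?P<end>\d{2}:\d{2}) - "
    r"(?P<what>.+?)\. Location: (?P<where>.+?)\.?$"
)

def parse_entry(text):
    """Extract date, start/end time, description and location, or None."""
    match = ENTRY_RE.match(text)
    return match.groupdict() if match else None

entry = ("2015-05-20, 18:00-23:00 - Foobar presenting their new CD in "
         "combination with a show. Location: small salon.")
fields = parse_entry(entry)
# fields["date"] == "2015-05-20", fields["where"] == "small salon"
```

The unstructured flyer entry would simply fail to match, which is exactly the
problem a fixed, hand-written pattern cannot solve.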
\paragraph{Crawler}
When the source has been determined and classified, the next step is to crawl
it periodically. At the moment the crawling happens using a couple of
different methods. Some sites are very structured, and a programmer creates a
program that visits the website systematically and extracts the information.
Other sources are very hard to crawl programmatically and are visited by
employees. The programmed crawlers are always created specifically for one or
a few sources, and when a source changes, for example in structure, the
programmer has to adapt the crawler, which is costly. Information from the
different crawlers then goes to the \textit{Temporum}.

\paragraph{Temporum}
The \textit{Temporum} is a big bin that contains raw data extracted from
different sources; this data has to be post processed before it is suitable
for the actual database. The post processing encompasses several possible
tasks. The first task is to check the validity of the entries. This is not
done for every entry: random samples are taken and tested to verify that the
crawler is not malfunctioning and that there is no nonsense in the data.
The second task is matching the entries to several objects. Objects in the
broadest sense can be basically anything. In the case of a theater show
performance the matched object can be the theater show itself, describing the
show for all locations. In the case of a concert it can be the tour the band
is doing. When the source is a ticket vendor the object is the venue where the
event takes place, since the source itself is not a venue.
Many of these tasks are done by a human or with the aid of a computer program.
The \textit{Temporum} acts as a safety net: no data goes straight through to
the database, it is first checked, matched and validated.

\paragraph{Database \& Publication}
After the data has been post processed it is entered into the final database.
The database contains all the events that happened in the past and all the
events that are going to happen in the future, as well as information about
the venues. The database is linked to several categorical websites that offer
the information to users and accompany it with the other part of the
\textit{infotainment}, namely the subjective information, usually presented in
the form of trivia, photos, interviews, reviews, previews and much more.

\section{Goal \& Research question}
Crawling the sources and applying the preprocessing is the most expensive task
in the entire information flow because it requires a programmer to be hired.
Currently only programmers can create crawlers and design preprocessing jobs,
because all current programs are specifically designed to do one thing, most
of the time for one source. Because a big group of sources changes often, this
task becomes very expensive and has a long feedback loop. When a source
changes, it is first preprocessed in the old way, sent to the
\textit{Temporum} and checked and matched by a human. The human then notices
the error in the data and contacts the programmer, who has to adapt the
specific crawler to the new structure. This feedback loop, shown in
Figure~\ref{feedbackloop}, can take days and can be the reason for gaps and
faulty information in the database.
\begin{figure}[H]
\label{feedbackloop}
\centering
\scalebox{0.5}{
\digraph[]{graph112}{
rankdir=LR;
node [shape="rectangle"];
source [label="Source"];
crawler [label="Crawler"];
temporum [label="Temporum"];
user [label="User"];
programmer [label="Programmer"];
database [label="Database"];
source -> crawler;
crawler -> temporum;
temporum -> user;
user -> database;
user -> programmer [constraint=false,style="dotted"];
user -> crawler [constraint=false,style="dashed"];
programmer -> crawler [constraint=false,style="dotted"];
}
}
\caption{Feedback loop for malfunctioning crawlers}
\end{figure}

The goal of the project is to relieve the programmer of repairing crawlers all
the time and to make the task of adapting, editing and removing crawlers
doable for someone without programming experience. In terms of
Figure~\ref{feedbackloop}, this means replacing the dotted arrows with the
dashed arrow.

For this project an application has been developed that provides an interface
to a crawler generation system able to crawl RSS\cite{Rss} and
Atom\cite{Atom} publishing feeds. The interface offers point and click
controls for creating, modifying, testing and removing crawlers, meaning that
no computer science background is needed to use it. The current Hyperleap
backend system that handles the data can, via an interface, generate XML feeds
containing the crawled data. The structure of the application is very modular
and generic, so it is easy to change things in the program without knowing
much about the programming language used: for example, all visual elements
such as buttons are defined in a human readable text file. In practice this
means that one person, not necessarily a programmer, can be instructed to
change the structure, which can greatly reduce programmer intervention time.

The actual problem statement then becomes:
\begin{center}
\textit{Is it possible to shorten the feedback loop for repairing and adding
crawlers by making a system that can create, add and maintain crawlers for
RSS feeds?}
\end{center}

\section{RSS/Atom}
RSS/Atom feeds, from now on called RSS feeds, are publishing feeds in the XML
format\cite{Xml} that are used to publish events. Every event or entry
consists of several standardized fields. The main data fields are the
\textit{title} and the \textit{description} field; these store the raw data
describing the event. Furthermore, there are several auxiliary fields that
store the link to the full article, the publishing date, some media in the
form of an image or video URL, and a \textit{Globally Unique Identifier}
(GUID)\footnote{A GUID is a unique identifier that in most cases is the
permalink of the article. A permalink is a link that will always point to the
article.}. An example of an RSS feed can be found in
Listing~\ref{examplerss}; this listing shows a partly truncated RSS feed of a
well known venue in the Netherlands. As visible in the listing, the structure
follows standard XML. Every RSS feed contains a \texttt{channel} tag, and
within that tag there is some metadata and a list of \texttt{item} tags. Every
\texttt{item} tag has a fixed number of different fields. The most important
fields for RSS within the leisure industry are the \texttt{title} and the
\texttt{description} field.

\begin{listing}
\caption{An example of a partly truncated RSS feed of a well known Dutch
venue}
\label{examplerss}
\xmlcode{exrss.xml}
\end{listing}

RSS feeds are mostly used by news sites to publish their articles. The feed
then contains only the headlines, and if the user reading the feed is
interested, he can click the so called deeplink and is sent to the full
website containing the article. Users often use programs that bundle user
specified RSS feeds into big summary feeds that can be used to sift through a
lot of news feeds. The RSS feed reader uses the GUID to skip entries that are
already present in its internal database.

Generating RSS feeds by hand is a tedious task, but usually RSS feeds are
generated by the Content Management System (CMS) the website runs on. With
this automatic method the RSS feeds are generated from the content published
on the website. Because the process is automatic, the RSS feeds are generally
very consistent in their structure. In the entertainment industry venues often
use a CMS for their website so that users with no programming or web
development background are able to post news items and event information.

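As a minimal sketch of how such a generated feed can be read programmatically,
the following example parses an RSS 2.0 fragment with the Python standard
library and uses the GUID to skip entries seen before. The feed content and
the URL in it are invented for illustration and do not come from a real venue.

```python
import xml.etree.ElementTree as ET

# Invented RSS 2.0 fragment; real feeds carry more metadata per item.
RSS = """<rss version="2.0"><channel>
  <title>Example venue</title>
  <item>
    <title>2015-05-20, 18:00-23:00 - Foobar presenting their new CD</title>
    <description>CD presentation with a show. Location: small salon.</description>
    <guid>http://example.org/events/foobar-cd</guid>
  </item>
</channel></rss>"""

def read_items(feed_text):
    """Return (title, description, guid) for every item in the channel."""
    channel = ET.fromstring(feed_text).find("channel")
    return [(item.findtext("title"),
             item.findtext("description"),
             item.findtext("guid"))
            for item in channel.findall("item")]

seen_guids = set()  # a feed reader skips items whose GUID was already stored
for title, description, guid in read_items(RSS):
    if guid not in seen_guids:
        seen_guids.add(guid)
```

This mirrors what the RSS readers described above do: the structured fields
come for free, and only the free text inside \texttt{title} and
\texttt{description} still needs to be interpreted.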
\section{Why RSS?}
\label{sec:whyrss}
We described a lot of different source formats, such as HTML, fax/email, RSS
and XML. Because of the limited scope of the project and the time planned for
it, we had to drop some of the input formats, because they all require
different techniques. For example, when the input source is in HTML format,
most probably a website, a great deal of the information extraction can be
automated using the structural information that is characteristic of HTML.
For fax/email, however, there is almost no structural information, and most
of the automation techniques require natural language processing. We chose
RSS feeds because RSS feeds lack inherent structural information but are
still very structured. This structure exists because, as said above, the RSS
feeds are generated and therefore almost always look the same. Also, in RSS
feeds most venues use particular structural identifiers in the form of
characters: they separate fields with vertical bars, commas, whitespace and
other non text characters. These field separators and keywords can be hard
for a computer to detect, but people are very good at detecting them. With
one look they can identify the characters and keywords and build a pattern in
their head. Another reason we chose RSS is temporal consistency: RSS feeds
are almost always generated, and because of that the structure of the entries
is very unlikely to change. Basically, an RSS feed only changes structure
when the CMS that generates it changes its generation algorithm. This
property is useful because the crawlers then do not have to be retrained very
often. Because the structure is not inherent, entirely new strategies had to
be applied.

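The separator characters described above can be exploited very simply. The
following sketch splits a hypothetical entry title on a few common non text
separators; both the separator set and the title format are assumptions for
illustration, not an inventory of real feeds.

```python
import re

# Split on vertical bars, commas or slashes, swallowing surrounding spaces.
SEPARATORS = re.compile(r"\s*[|,/]\s*")

def split_fields(title):
    """Split an entry title on common non-text separator characters."""
    return [part for part in SEPARATORS.split(title) if part]

split_fields("20-05-2015 | Foobar | CD release show, small salon")
# -> ['20-05-2015', 'Foobar', 'CD release show', 'small salon']
```

The hard part, which the rest of this thesis addresses, is not the splitting
itself but learning which separators and field positions a particular feed
uses, which is exactly what people recognize at a glance.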
\section{Directed Acyclic Graphs}
\paragraph{Directed graphs}
Directed graphs are a mathematical structure that can describe relations
between nodes. A directed graph $G$ is defined as the tuple $(V,E)$ where $V$
is a set of named nodes and $E$ is the set of edges defined by $E\subseteq
V\times V$. An edge $e\in E$ can be written as $(v_1, v_2)$ where $v_1,v_2\in
V$ and can be visualized in a figure as an arrow from node $v_1$ to node
$v_2$. Multiple connections between two nodes are possible. For example, the
graph visualized in Figure~\ref{graphexample} can be described as:
$$G=(\{n_1, n_2, n_3, n_4\},
	\{(n_1, n_2), (n_2, n_1), (n_2, n_3), (n_3, n_4), (n_1, n_4)\})$$

\begin{figure}[H]
\label{graphexample}
\scalebox{0.7}{
\digraph[]{graphexample}{
rankdir=LR;
n1->n2;
n2->n1;
n2->n3;
n3->n4;
n1->n4;
}
}
\caption{Example DG}
\end{figure}

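The tuple definition maps directly onto a program. The following sketch
represents the example graph literally as a set of nodes and a set of node
pairs; the helper function is an illustration added here, not part of any
library.

```python
# The example graph as the tuple (V, E) from the definition above.
V = {"n1", "n2", "n3", "n4"}
E = {("n1", "n2"), ("n2", "n1"), ("n2", "n3"), ("n3", "n4"), ("n1", "n4")}

def successors(node):
    """All nodes reachable from `node` via a single edge."""
    return {v2 for (v1, v2) in E if v1 == node}

# n1 and n2 are connected in both directions, mirroring the figure.
assert ("n1", "n2") in E and ("n2", "n1") in E
```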
\paragraph{Directed acyclic graphs}
Directed Acyclic Graphs (DAGs) are a special kind of directed graph. DAGs are
also defined as $G=(V,E)$, but with the restriction on $E$ that cycles are not
allowed. Figure~\ref{dagexample} shows two graphs; the top graph contains a
cycle and the bottom one does not. Only the bottom graph is a valid DAG. A
cycle can be defined as follows:

If $e\in E$ is written as $(v_1,v_2)\in E$ or $v_1\rightarrow v_2$, then
$v_1\xrightarrow{+}v_2$ is defined as $v_1\rightarrow \ldots
\rightarrow v_2$, meaning that there is a path of length at least $1$ between
$v_1$ and $v_2$. In an acyclic graph the following holds: $\nexists
v_1: v_1\xrightarrow{+}v_1$. In words this means that no node can be reached
again while traveling the graph. Adding the acyclicity property to graphs
lowers the computational complexity of testing path existence in the graph to
$\mathcal{O}(L)$, where $L$ is the length of the path.

\begin{figure}[H]
\label{dagexample}
\centering
\scalebox{0.7}{
\digraph[]{dagexample}{
rankdir=LR;
n01 -> n02;
n02 -> n03;
n03 -> n01;
n11 -> n12;
n12 -> n13;
n12 -> n14;
}
}
\caption{Example DAG}
\end{figure}

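The acyclicity restriction can be checked mechanically: a graph is a valid DAG
exactly when no node satisfies $v\xrightarrow{+}v$. The following sketch tests
this with a plain depth-first search over an edge set; it is an illustrative
implementation, not the one used in the project.

```python
def has_cycle(edges):
    """True if the directed graph, given as a set of (v1, v2) pairs,
    contains a node that can reach itself again."""
    succ = {}
    for v1, v2 in edges:
        succ.setdefault(v1, set()).add(v2)

    def reaches_itself(start):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for nxt in succ.get(node, ()):
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return any(reaches_itself(v) for v in succ)

# The two graphs from the figure: one with a cycle, one without.
cyclic = {("n01", "n02"), ("n02", "n03"), ("n03", "n01")}
acyclic = {("n11", "n12"), ("n12", "n13"), ("n12", "n14")}
```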
\paragraph{Directed Acyclic Word Graphs}
The type of graph used in the project is a special kind of DAG called a
Directed Acyclic Word Graph (DAWG). A DAWG can be defined by the tuple
$G=(V,v_0,E,F)$. $V$ is, as in directed graphs, the set of nodes. $E$ is the
set of edges described by $E\subseteq V\times V\times A$ where $A$ is the
alphabet used as edge labels. In words this means that an edge is a labeled
arrow between two nodes; for example, the edge $(v_1, v_2, a)$ can be
visualized as $v_1\xrightarrow{a}v_2$. In a standard DAWG the alphabet $A$
contains all the characters used in natural language, but in theory $A$ can
contain anything. $F$ describes the set of final nodes; final nodes are nodes
that can be the end of a sequence even if there is an arrow leading out. In
the example graph in Figure~\ref{exampledawg} the final nodes are visualized
with a double circle as node shape. In this example the marking is purely
cosmetic, because $n_6$ is a final node anyway as there are no arrows leading
out of it. But this does not need to be the case: for example, in
$G=(\{n_1, n_2, n_3\}, n_1, \{(n_1, n_2, a), (n_2, n_3, b)\}, \{n_2, n_3\})$
there is a distinct use for the final node marking. This graph accepts the
words \textit{a} and \textit{ab}, and to simplify the graph both $n_2$ and
$n_3$ are final. Finally, $v_0$ describes the initial node, visualized in
figures as an incoming arrow. Because of the labeled edges, data can be
stored in a DAWG: traversing a DAWG while saving all the edge labels
constructs a word. Using graph minimization, big sets of words can be stored
using a small amount of storage, because edges can be reused to specify
transitions. For example, the graph in Figure~\ref{exampledawg} describes the
language $L$ whose accepted words are $\{abd, bad, bae\}$. Testing whether a
word is present in the DAWG uses the same technique as testing whether a node
path is present in a normal DAG and therefore also falls in the computational
complexity class $\mathcal{O}(L)$, meaning that it grows linearly with the
length of the word.

\begin{figure}[H]
\label{exampledawg}
\centering
\scalebox{0.7}{
\digraph[]{graph21}{
rankdir=LR;
node [shape="circle"]; n1 n2 n3 n4 n5;
n6 [shape="doublecircle"];
n [style=invis];
n -> n1;
n1 -> n2 [label="a"];
n2 -> n3 [label="b"];
n3 -> n6 [label="d"];
n1 -> n4 [label="b"];
n4 -> n5 [label="a"];
n5 -> n6 [label="d"];
n5 -> n6 [label="e"];
}
}
\caption{Example DAWG}
\end{figure}
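The linear-time membership test can be sketched directly from the figure. The
transitions below transcribe the example DAWG; testing a word follows one
labeled edge per character, so the work grows linearly with the word length.
This is an illustrative transcription, not the data structure used in the
project implementation.

```python
# The example DAWG of the figure as labeled transitions (node, label) -> node.
TRANSITIONS = {
    ("n1", "a"): "n2", ("n2", "b"): "n3", ("n3", "d"): "n6",
    ("n1", "b"): "n4", ("n4", "a"): "n5",
    ("n5", "d"): "n6", ("n5", "e"): "n6",
}
INITIAL, FINAL = "n1", {"n6"}

def accepts(word):
    """True if the DAWG accepts `word`: one edge per character."""
    node = INITIAL
    for char in word:
        node = TRANSITIONS.get((node, char))
        if node is None:
            return False
    return node in FINAL
```

Note how the edge into $n_6$ is shared by \textit{abd}, \textit{bad} and
\textit{bae}: this reuse of edges is what makes minimized DAWGs compact.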