\toprule
Method & Correctly classified & Root relative squared error\\
\midrule
\emph{NaiveBayes} & $96.6449\%$ & $35.7222\%$\\
\emph{NaiveBayes (10FCF)} & $96.4352\%$ & $37.1926\%$\\
\emph{J48} & $96.6449\%$ & $34.9136\%$\\
\emph{J48 (10FCF)} & $96.4352\%$ & $36.5122\%$\\
\bottomrule
\end{tabular}
\caption{Results for \texttt{P1D} and \texttt{FD}\label{t1}}
& & & & \checkmark{} & $88.4436\%$\\
\bottomrule
\end{tabular}
\caption{\emph{NaiveBayes} on all sensible combinations\label{t2}}
\end{table}
\subsection*{Chapter 6: Exercises}
sequence, i.e.\ both the positions more than $n$ back and the following
positions.}
A probability is always based on the most probable preceding sequence;
there are no backpointers to all states. The probability is therefore
not based on all possible previous paths, and only the most likely
path can be recovered. The following positions also have an influence:
if a partial path does not belong to the most likely sequence, it is
not connected to the final path via a backpointer and is lost.

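The backpointer mechanism can be sketched as follows. This is a minimal toy example (a made-up weather/activity HMM with invented probabilities, not taken from the exercises): each state at each time step stores exactly one backpointer to its best predecessor, so after filling the trellis only the single most likely path can be followed back; all other partial paths are lost.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # delta[t][s]: probability of the best path ending in state s at time t
    # back[t][s]: the single predecessor state on that best path
    delta = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        delta.append({})
        back.append({})
        for s in states:
            # keep only the best predecessor; every other path is discarded
            prev, p = max(
                ((r, delta[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            delta[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # recover the single most likely path by following the backpointers
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), delta[-1][last]


# toy example (all numbers invented for illustration)
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
path, p = viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p)
# path == ["Sunny", "Rainy", "Rainy"], p == 0.01344
```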
\item\emph{Explain in your own words (at most 50) how the EM algorithm
works. I don't mean the mathematics, but the underlying concept.}
The \emph{Expectation-Maximization} (EM) algorithm searches for
parameter settings at which the likelihood is (locally) maximal. It
alternates between an expectation step, which estimates the hidden
variables from the current parameters, and a maximization step, which
re-estimates the parameters from those expectations, until the
likelihood converges.
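As an illustration of the concept, here is a minimal EM sketch for a made-up two-biased-coins problem (all data and starting values are invented for illustration): each round of flips was produced by one of two coins of unknown bias, and which coin was used is the hidden variable.

```python
# (heads, tails) observed in each round; which coin produced it is hidden
flips = [(5, 5), (9, 1), (8, 2), (4, 6), (7, 3)]

def em(data, theta_a, theta_b, iters=20):
    for _ in range(iters):
        # E-step: posterior probability that each round came from coin A,
        # given the current bias estimates
        heads_a = tails_a = heads_b = tails_b = 0.0
        for h, t in data:
            like_a = theta_a ** h * (1 - theta_a) ** t
            like_b = theta_b ** h * (1 - theta_b) ** t
            w = like_a / (like_a + like_b)
            heads_a += w * h
            tails_a += w * t
            heads_b += (1 - w) * h
            tails_b += (1 - w) * t
        # M-step: re-estimate the biases from the expected counts
        theta_a = heads_a / (heads_a + tails_a)
        theta_b = heads_b / (heads_b + tails_b)
    return theta_a, theta_b

theta_a, theta_b = em(flips, 0.6, 0.5)
# theta_a converges toward the heavier coin, theta_b toward the fairer one
```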
\end{itemize}
\end{document}