Learning, complexity and information density - Computer Science > Information Theory





Abstract: What is the relationship between the complexity of a learner and the randomness of his mistakes? This question was posed in \cite{rat0903}, which showed that the more complex the learner, the higher the possibility that his mistakes deviate from a true random sequence. In the current paper we report on an empirical investigation of this problem. We investigate two characteristics of randomness: the stochastic and algorithmic complexity of the binary sequence of mistakes. A learner with a Markov model of order $k$ is trained on a finite binary sequence produced by a Markov source of order $k^{*}$ and is tested on a different random sequence. As a measure of the learner's complexity we define a quantity called the \emph{sysRatio}, denoted by $\rho$, which is the ratio between the compressed and uncompressed lengths of the binary string whose $i^{th}$ bit represents the maximum \emph{a posteriori} decision made at state $i$ of the learner's model. The quantity $\rho$ is a measure of information density. The main result of the paper shows that this ratio is crucial in answering the above posed question. The result indicates that there is a critical threshold $\rho^{*}$ such that when $\rho \leq \rho^{*}$ the sequence of mistakes possesses the following features: (1) low divergence $\Delta$ from a random sequence, (2) low variance in algorithmic complexity. When $\rho > \rho^{*}$, the characteristics of the mistake sequence change sharply towards a high $\Delta$ and high variance in algorithmic complexity.
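The sysRatio described in the abstract can be approximated with any general-purpose compressor. The sketch below is illustrative only, not the paper's implementation: it uses Python's `zlib` as a stand-in compressor and a hypothetical string of MAP decision bits (one character per model state), computing $\rho$ as compressed length over uncompressed length.

```python
import random
import zlib

def sys_ratio(map_bits: str) -> float:
    """Approximate the sysRatio rho: compressed length divided by
    uncompressed length of the binary string of MAP decisions.
    zlib stands in for an ideal compressor, so this is only an
    upper-bound estimate of the true information density."""
    raw = map_bits.encode("ascii")
    compressed = zlib.compress(raw, 9)
    return len(compressed) / len(raw)

# A highly regular decision string compresses well -> low rho
# (low information density).
regular = "01" * 512
# Pseudo-random decisions compress poorly -> higher rho.
rng = random.Random(0)
noisy = "".join(rng.choice("01") for _ in range(1024))

print(sys_ratio(regular))
print(sys_ratio(noisy))
```

Comparing the two values shows the qualitative behaviour the paper relies on: the more structured the learner's decision string, the smaller $\rho$.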



Author: Joel Ratsaby

Source: https://arxiv.org/






