Learning transformation-invariant visual representations in spiking neural networks

Links & Downloads

- Oxford_Thesis_paper.pdf (PDF, 6.3 MB)
- XML (4.2 kB)


Abstract: This thesis aims to understand the learning mechanisms which underpin the process of visual object recognition in the primate ventral visual system. The computational crux of this problem lies in the ability to retain specificity to recognize particular objects or faces, while exhibiting generality across natural variations and distortions in the view (DiCarlo et al., 2012). In particular, the work presented is focused on gaining insight into the processes through which transformation-invariant visual representations may develop in the primate ventral visual system.

The primary motivation for this work is the belief that some of the fundamental mechanisms employed in the primate visual system may only be captured through modelling the individual action potentials of neurons; existing rate-coded models of this process therefore constitute an inadequate level of description for fully understanding the learning processes of visual object recognition. To this end, spiking neural network models are formulated and applied to the problem of learning transformation-invariant visual representations, using a spike-time dependent learning rule to adjust the synaptic efficacies between the neurons.

The ways in which the existing rate-coded CT (Stringer et al., 2006) and Trace (Földiák, 1991) learning mechanisms may operate in a simple spiking neural network model are explored, and these findings are then applied to a more accurate model using realistic 3-D stimuli. Three mechanisms are then examined through which a spiking neural network may solve the problem of learning separate transformation-invariant representations in scenes composed of multiple stimuli by temporally segmenting competing input representations. The spike-time dependent plasticity in the feed-forward connections is then shown to exploit these input-layer dynamics to form individual stimulus representations in the output layer. Finally, the work is evaluated and future directions of investigation are proposed.
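For orientation, the sketch below illustrates the two families of learning rule the abstract refers to: a rate-coded trace rule in the spirit of Földiák (1991), where the postsynaptic term is a decaying temporal trace of recent activity, and a pair-based exponential STDP window of the kind commonly used to adjust synaptic efficacies from individual spike times. This is a minimal illustration, not code or parameter values from the thesis; all function names, constants, and the pair-based form of the STDP window are assumptions made for the example.

```python
import numpy as np

def trace_update(w, x, y, y_trace_prev, alpha=0.1, eta=0.8):
    """One step of a rate-coded trace rule (in the spirit of Foldiak, 1991).

    w            : weight matrix, shape (n_post, n_pre)
    x, y         : presynaptic and postsynaptic firing rates at this time step
    y_trace_prev : previous postsynaptic trace
    alpha, eta   : illustrative learning rate and trace decay (not thesis values)
    """
    # Decaying temporal trace of postsynaptic activity
    y_trace = (1.0 - eta) * y + eta * y_trace_prev
    # Hebbian update using the traced postsynaptic term
    w_new = w + alpha * np.outer(y_trace, x)
    return w_new, y_trace

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau_plus=20.0, tau_minus=20.0):
    """Pair-based exponential STDP window (illustrative form).

    dt = t_post - t_pre in ms: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)
```

The intuition connecting the two is the same one the abstract relies on: because different transforms of an object tend to occur close together in time, a postsynaptic term that integrates over recent activity (the trace) or a spike-timing window that spans tens of milliseconds can bind those transforms onto a common output representation.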

Type of Award: DPhil
Level of Award: Doctoral
Awarding Institution: University of Oxford

Funders

Economic & Social Research Council

Grant number: PTA-031-2006-00182
Received by: Project

Item Description

Type: Thesis
Language: English
Keywords: Object Recognition; Visual System; STDP; Neuroscience; Spiking Neural Network
Subjects: Computational neuroscience


Author: Benjamin D. Evans (Affiliation: University of Exeter; Roles: Author, Copyright holder)
Supervisor: Simon M. Stringer

Source: https://ora.ox.ac.uk/objects/uuid:15bdf771-de28-400e-a1a7-82228c7f01e4


