Doctoral Dissertation Defense: Bryce Carey
Advisor: Dr. James Lo
Location
Mathematics/Psychology, Room 401
Date & Time
April 27, 2017, 10:00 am – 12:00 pm
Description
Title: Developing a computational model of neural networks into a learning machine
Abstract
The purpose of this dissertation work is to contribute to the development of a biologically plausible model of neural networks into a learning machine.
Temporal hierarchical probabilistic associative memory (THPAM) is a functional model of biological neural networks that performs a variant of supervised and unsupervised Hebbian learning to store information in synapses, uses dendritic trees to encode information, and communicates information via spike trains. THPAM can be viewed as a recurrent hierarchical network of processing units: neuronal compartments that serve as pattern recognizers.
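To make the Hebbian storage idea concrete, the following Python sketch shows the classical outer-product form of Hebbian learning. It is an illustration of the general principle only, not THPAM's actual synaptic mechanism, and all names in it are hypothetical.

    import numpy as np

    # Hebbian outer-product learning: each (input, output) pair is
    # stored by adding the outer product y x^T to the weight matrix,
    # so correlated pre- and post-synaptic activity strengthens a synapse.
    def hebbian_store(W, x, y):
        return W + np.outer(y, x)

    # Recall: project an input through the stored weights and take the
    # sign, as in bipolar associative memories.
    def hebbian_recall(W, x):
        return np.sign(W @ x)

    x = np.array([1, -1, 1, -1, 1])   # bipolar input pattern
    y = np.array([1, 1, -1])          # bipolar output pattern
    W = hebbian_store(np.zeros((3, 5)), x, y)
    print(hebbian_recall(W, x))       # recovers y: [ 1.  1. -1.]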
This work contributes supplemental theories and mechanisms, a parallel programming implementation, and several benchmark results pertaining to the processing unit architecture. These contributions confirm propositions contained in the original publications, enable alternative constructions of the processing unit's generalization component, and allow for an alternative generalization mechanism. The new generalization mechanism has a unique application in efficiently learning data clusters centered at a target input vector. The orthogonal expansion of a vector in the processing unit grows exponentially with the dimension of the vector (see the sketch below). Although there are ways to avoid vectors of large dimension, the parallel programming implementation proposed in this work alleviates some of the severe limitations this complexity imposes on serial machines. The scalability of the parallel program is examined on the maya cluster of the UMBC High Performance Computing Facility; for sufficiently large fixed problem sizes, the parallelized processing unit implementation reduces the run time from several hours to a few seconds.
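The exponential growth can be seen in a minimal Python sketch that builds an orthogonal expansion as a Kronecker product of per-component factors. This construction is consistent with the orthogonality property described in Lo's THPAM publications, but the dissertation's exact formulation may differ, so it should be read as an assumption-laden illustration.

    import numpy as np

    def orthogonal_expansion(v):
        # Expand a bipolar vector v in {-1, +1}^n into 2**n components:
        # the Kronecker product of the factors (1, v_k). Expansions of
        # distinct bipolar vectors are orthogonal, but the length
        # doubles with every added input dimension.
        expansion = np.array([1.0])
        for v_k in v:
            expansion = np.kron(expansion, np.array([1.0, v_k]))
        return expansion

    u = np.array([1, -1, 1, 1])
    w = np.array([1, -1, -1, 1])
    print(len(orthogonal_expansion(u)))                       # 2**4 = 16
    print(orthogonal_expansion(u) @ orthogonal_expansion(u))  # 16.0
    print(orthogonal_expansion(u) @ orthogonal_expansion(w))  # 0.0 (orthogonal)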
The performance of the processing unit as a pattern recognizer is demonstrated on sample data sets obtained from the UCI Machine Learning Repository; these data sets separately contain categorical data, missing data, and real-valued data. Several data encoding techniques are applied and examined to determine which best supports the predictive performance of the processing unit on the data sets considered. Differences in performance between particular encoding methods are thoroughly examined and discussed in relation to the processing unit mechanisms, and the effects of hyperparameter adjustments are examined precisely.
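As one hypothetical illustration of such an encoding step (the dissertation's specific encoders are not reproduced here), categorical features can be mapped to bipolar one-of-K codes, with a missing value encoded as a neutral all-zero vector:

    import numpy as np

    def encode_categorical(value, categories):
        # One-of-K bipolar encoding: +1 for the observed category,
        # -1 elsewhere; a missing value becomes an all-zero vector
        # so it neither matches nor contradicts any category.
        if value is None:
            return np.zeros(len(categories))
        code = -np.ones(len(categories))
        code[categories.index(value)] = 1.0
        return code

    colors = ["red", "green", "blue"]
    print(encode_categorical("green", colors))  # [-1.  1. -1.]
    print(encode_categorical(None, colors))     # [0. 0. 0.]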