
Graduate Students Seminar


Mathematics/Psychology: 106

Date & Time

April 12, 2023, 11:00 am – 12:00 pm


Session Chair: Alex Sheranko
Discussant: Dr. Draganescu

Speaker 1: Zainab Almutawa
Switching between synchronization and desynchronization in islets with coupled heterogenous beta cells: Finding switch cells
Beta cells are cells in the pancreas that produce and release insulin in response to blood glucose levels. Interactions between beta cells within their local network of an islet are important for regulating insulin secretion and enhancing the glucose-stimulated response. Beta cells are coupled through gap junctions and generate synchronous threshold-based oscillations of their membrane potential. Dysfunction of this coupling has been associated with diabetes. Experiments have suggested that individual beta cells can control synchronization. In this work we use a mathematical model of bursting in two triplet configurations, a chain and a triangle. Biological heterogeneity is introduced by varying the coupling and the rate of Ca²⁺ extrusion for each of the cells, which permits reproducing different types of frequencies. We measure the amplitude of a patched steady cell, and we investigate how the bursting of a high-frequency cell can change the behavior of the patched steady cell. To demonstrate that a switch cell exists, we silence it (setting its voltage to rest) or ablate it (disconnecting it from the other cells) and observe the loss of synchronization. We have found the range of coupling strengths and parameters that support a switch cell in the simplified system.
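The setup described above — a triplet of heterogeneous, gap-junction-coupled oscillators in which one cell can be ablated — can be illustrated with a minimal sketch. This is not the speaker's bursting model; it uses a generic FitzHugh–Nagumo-style cell as a stand-in, with the heterogeneous parameter `a` playing the role of the varying Ca²⁺ extrusion rate, and diffusive coupling standing in for gap junctions. All parameter values are hypothetical.

```python
import numpy as np

def simulate_chain(g_c=0.1, eps=0.08, ablate=None, steps=2000, dt=0.05):
    """Euler-integrate three FitzHugh-Nagumo-style cells in a chain
    (cell 0 - cell 1 - cell 2) with diffusive (gap-junction-like) coupling.
    Heterogeneity: each cell gets a different recovery parameter `a`,
    a stand-in for the varying Ca2+ extrusion rate in the abstract.
    `ablate`, if given, disconnects that cell from its neighbours."""
    n = 3
    a = np.array([0.7, 0.5, 0.3])      # heterogeneous parameter (hypothetical)
    v = np.array([-1.0, 0.5, 1.0])     # membrane-potential-like variable
    w = np.zeros(n)                    # recovery variable
    adj = np.array([[0, 1, 0],         # chain adjacency: 0-1, 1-2
                    [1, 0, 1],
                    [0, 1, 0]], float)
    if ablate is not None:             # ablation = cut all of that cell's links
        adj[ablate, :] = 0.0
        adj[:, ablate] = 0.0
    vs = np.empty((steps, n))
    for t in range(steps):
        # diffusive coupling: g_c * sum_j adj_ij * (v_j - v_i)
        coupling = g_c * (adj @ v - adj.sum(1) * v)
        dv = v - v**3 / 3 - w + 0.5 + coupling
        dw = eps * (v + a - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        vs[t] = v
    return vs

def synchrony(vs):
    """Crude synchrony measure: mean pairwise correlation of the traces."""
    c = np.corrcoef(vs.T)
    return c[np.triu_indices(3, 1)].mean()
```

Comparing `synchrony(simulate_chain())` against `synchrony(simulate_chain(ablate=1))` mimics the ablation experiment in the abstract: cutting the middle cell's connections leaves three independent heterogeneous oscillators, so any synchrony must come from the coupling.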

Speaker 2: Vahid Andalib
Data-Adaptive Binary Classifiers in High Dimensions Using Random Partitioning
Classification in high dimensions has received attention for the past two decades because Fisher's linear discriminant analysis (LDA) is not optimal when the sample size n is small compared to the number of covariates p, i.e., p > n, mostly due to the singularity of the sample covariance matrix. Rather than modifying how the sample covariance matrix and sample mean vector are estimated when constructing a classifier, we take a new perspective and propose two types of high-dimensional classifiers based on data splitting: single data splitting (SDS) and multiple data splitting (MDS). Moreover, we introduce a weighted version of the MDS classifier that improves classification performance, as illustrated in numerical studies. Each of the split data sets has fewer covariates than the sample size, so LDA is applicable, and the classification results are combined so as to minimize the misclassification rate. We conduct a wide range of simulations and analyze four microarray data sets, demonstrating that our proposed methods outperform some existing methods or achieve at least comparable performance.
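The splitting idea above can be sketched in a few lines. This is a hypothetical simplification, not the speaker's SDS/MDS method: it randomly partitions the p covariates into low-dimensional blocks so that ordinary LDA is well-posed on each block, then combines block predictions by unweighted majority vote across several random partitions. Block size, number of splits, and the ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_fit(X, y):
    """Fisher LDA on a low-dimensional block of covariates: returns (w, c)
    so that sign(x @ w - c) predicts class 1 vs class 0."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
    # pooled within-class covariance, with a tiny ridge for numerical safety
    S = Xc.T @ Xc / (len(X) - 2) + 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(S, m1 - m0)
    c = w @ (m0 + m1) / 2
    return w, c

def mds_predict(Xtr, ytr, Xte, n_splits=11, block=5):
    """Multiple-data-splitting-style classifier (illustrative sketch):
    for each split, randomly partition the p covariates into blocks of
    ~`block` features, fit LDA per block, and accumulate signed votes;
    the final label is the majority vote over all blocks and splits."""
    p = Xtr.shape[1]
    votes = np.zeros(len(Xte))
    for _ in range(n_splits):
        perm = rng.permutation(p)
        for blk in np.array_split(perm, max(1, p // block)):
            w, c = lda_fit(Xtr[:, blk], ytr)
            votes += np.sign(Xte[:, blk] @ w - c)
    return (votes > 0).astype(int)
```

Because each block has far fewer covariates than observations, every blockwise covariance matrix is invertible, sidestepping the p > n singularity that breaks full LDA; the vote aggregation is one simple stand-in for the combination step the abstract describes.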