Graduate Students Seminar

Location

Mathematics/Psychology : 103

Date & Time

March 13, 2024, 11:00 am – 12:00 pm

Description

Session Chair: Haoyu Ren
Discussant: Dr. Ansu Chatterjee

Speaker 1: Ji Li
Title
Bayesian Predictive Probabilities for Interim Analyses in Two-arm Studies
Abstract
In the realm of clinical trials, Bayesian methods offer a distinctive approach to statistical analysis, contrasting with traditional frequentist methods. A key aspect of the Bayesian approach is the Bayesian Predictive Probability (BPP), which allows for interim analysis in clinical trials by evaluating the probability of future trial outcomes based on current data and prior distributions. A simulation study illustrates how BPP is calculated and used to make decisions regarding the continuation or early termination of a trial based on efficacy or futility criteria.
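As a rough illustration of the idea (a minimal sketch, not the speaker's actual simulation study), the code below estimates the predictive probability of trial success for a two-arm trial with a binary endpoint, assuming independent Beta priors on each arm's response rate. The sample sizes, priors, and decision thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_probability(x_t, n_t, x_c, n_c, n_max,
                           a=1.0, b=1.0, theta=0.95,
                           n_outer=2_000, n_inner=1_000):
    """Monte Carlo estimate of the Bayesian predictive probability that the
    trial declares the treatment arm superior at the final analysis.

    x_t, n_t : interim successes and sample size in the treatment arm
    x_c, n_c : interim successes and sample size in the control arm
    n_max    : planned final sample size per arm (assumed equal arms)
    a, b     : Beta prior parameters for each arm's response rate
    theta    : posterior probability threshold for declaring success
    """
    wins = 0
    for _ in range(n_outer):
        # Draw response rates from the current (interim) posteriors.
        p_t = rng.beta(a + x_t, b + n_t - x_t)
        p_c = rng.beta(a + x_c, b + n_c - x_c)
        # Simulate the remaining patients in each arm.
        future_t = rng.binomial(n_max - n_t, p_t)
        future_c = rng.binomial(n_max - n_c, p_c)
        # Posterior draws at the final analysis, given all (observed + simulated) data.
        post_t = rng.beta(a + x_t + future_t, b + n_max - (x_t + future_t), size=n_inner)
        post_c = rng.beta(a + x_c + future_c, b + n_max - (x_c + future_c), size=n_inner)
        # Success if the final posterior probability P(p_t > p_c) exceeds theta.
        if (post_t > post_c).mean() > theta:
            wins += 1
    return wins / n_outer

# Hypothetical interim look: 12/20 responders on treatment vs 8/20 on control,
# with 40 patients per arm planned in total.
bpp = predictive_probability(x_t=12, n_t=20, x_c=8, n_c=20, n_max=40)
print(f"Predictive probability of trial success: {bpp:.3f}")
```

A prespecified monitoring rule could then, for example, stop the trial for futility when this probability falls below a low cutoff or stop early for efficacy when it exceeds a high one; the specific cutoffs are design choices fixed before the trial begins.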

Speaker 2: Saeed Damadi
Title
Sparsification of Neural Networks
Abstract
Neural networks are structured as vector-valued functions, which are not only easy to manage but can also be systematically expanded. Equipped with an appropriate loss function and minimization algorithm, they can learn from almost any dataset presented to them. This is not surprising to mathematicians, since a function can always be defined on a set of input-output pairs (x, y). The widespread adoption of neural networks is largely due to our ability to define such networks and approximate the mapping between a given domain and codomain. With this problem addressed, a subsequent question arises: is the approximated function the most efficient in terms of the number of parameters used? In simpler terms, can some parameters be eliminated without compromising the quality of the mapping? This leads to sparse optimization, where the goal is to solve the problem directly, without approximation, and thereby obtain an approximating function that is less complex while maintaining its performance. In computer science terms, solving the sparse optimization problem yields a sparse neural network that is as effective as a dense one. This presentation will develop the problem step by step and requires no prior knowledge of neural networks or optimization.
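To make the notion of a sparse network concrete, the sketch below applies magnitude pruning, a common heuristic baseline in which the smallest-magnitude weights of a layer are zeroed out. This is only an illustrative approximation; it is not the direct sparse-optimization approach the abstract refers to, and the layer shape and sparsity level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight matrix,
    keeping roughly the (1 - sparsity) fraction with the largest |w|.
    Returns the pruned weights and the boolean mask of kept entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# A toy dense layer: prune 90% of its weights by magnitude. In practice the
# surviving mask would be fixed and the remaining weights retrained.
W = rng.normal(size=(256, 128))
W_sparse, mask = magnitude_prune(W, sparsity=0.9)
print(f"Kept {mask.mean():.1%} of the weights")
```

The question the talk raises is whether such a sparse network can be found in a principled way, by solving the sparse optimization problem itself rather than applying a post-hoc heuristic like the one above.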