Graduate Students Seminar
Online on Blackboard Collaborate
Location
Online
Date & Time
April 14, 2021, 11:00 am – 12:00 pm
Description
Session Chair: Sumaya Alzuhairy
Discussant: Dr. Yehenew Kifle
Speaker 1: Yewon Kim
- Title
- Simultaneous hypothesis testing of grouped hypotheses
- Abstract
- Modern statistical inference commonly involves testing hundreds or thousands of hypotheses simultaneously. When testing many hypotheses at the same time, the main aim is to control the False Discovery Rate (FDR) introduced by Benjamini & Hochberg (1995) or the local FDR proposed by Efron (2012). In most cases, these procedures take into account the joint distribution of the test statistics or p-values, as well as additional information that can increase the power of the tests. To deal with grouped hypotheses in particular, two-stage methods are popular; they control the overall FDR at level alpha by exploiting the within-group and between-group structure. For example, Liu et al. (2016) propose a two-stage multiple testing procedure called the Two-fold Loop Testing Algorithm (TLTA) for grouped hypotheses. In this presentation, we will mainly review TLTA on medium-scale data and examine the limitations of applying TLTA to large-scale data.
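For readers unfamiliar with FDR control, the following is a minimal sketch of the Benjamini & Hochberg (1995) step-up procedure mentioned in the abstract (not the grouped two-stage TLTA itself); the function name and the toy data are purely illustrative.

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under the BH step-up rule."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)                      # rank p-values in ascending order
    sorted_p = p[order]
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank meeting its threshold
        rejected[order[: k + 1]] = True        # reject every hypothesis up to that rank
    return rejected

# Toy example: 1000 null p-values plus 50 strong signals
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=1000), rng.uniform(0, 1e-4, size=50)])
print(benjamini_hochberg(pvals, alpha=0.05).sum(), "rejections")
```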
Speaker 2: Saeed Damadi
- Title
- State-of-the-art deep neural network compression algorithms
- Abstract
- The original optimization problem of neural network sparsification is a constrained, stochastic, nonconvex, and non-differentiable problem with a very high-dimensional decision variable. If one were able to solve this problem, the resulting sparse network would have fewer parameters than the underlying dense network, which is over-parameterized and contains significant redundancy. As a result of sparsification, energy consumption is reduced, hardware requirements are relaxed, and responses to queries become quicker.
- Due to the difficult nature of this problem, theoretical advancements have not been as fruitful as numerical methods, and most of this numerical progress has come from the computer science community. To attack the theoretical problem, understanding the different numerical methods provides insight and helps us identify their commonalities so that theorems can be developed. To this end, I will go over seven recent papers and their algorithms to understand this problem better. The papers are listed below (a minimal pruning sketch follows the list).
- Rigging the Lottery: Making All Tickets Winners
- The lottery ticket hypothesis: Finding sparse, trainable neural networks
- Stabilizing the lottery ticket hypothesis
- Soft threshold weight reparameterization for learnable sparsity
- Single-shot network pruning based on connection sensitivity
- Winning the Lottery with Continuous Sparsification
- Picking winning tickets before training by preserving gradient flow
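The papers above study how to find such sparse subnetworks, for example by iterative or one-shot pruning. As a simple point of reference, here is a minimal sketch of one-shot global magnitude pruning, the baseline that lottery-ticket-style methods build on; the function name and the toy weight matrix are illustrative, and this is not the algorithm of any single paper listed.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping roughly the top (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(np.floor(sparsity * flat.size))     # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold           # keep only weights above the threshold
    return weights * mask, mask

# Toy example: prune a random 256 x 128 weight matrix to about 90% sparsity
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_sparse, mask = magnitude_prune(W, sparsity=0.9)
print("kept fraction:", mask.mean())
```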