# Stat Colloquium [In-Person]: Dr. Takeru Matsuda

### University of Tokyo

**Location**: Mathematics/Psychology 401

**Date & Time**: March 1, 2024, 11:00 am – 12:00 pm

**Description**

**Title**: Matrix estimation via singular value shrinkage

**Abstract**: In the estimation of a normal mean vector under quadratic loss, the maximum likelihood estimator (MLE) is inadmissible and dominated by shrinkage estimators (e.g., the James–Stein estimator) when the dimension is greater than or equal to three (Stein's paradox). In particular, generalized Bayes estimators with respect to superharmonic priors (e.g., Stein's prior) are minimax and dominate the MLE. James–Stein shrinkage has also been applied to develop adaptive minimax estimators in nonparametric statistics.
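For concreteness, the James–Stein estimate of a normal mean from a single observation can be sketched in a few lines of NumPy; the positive-part truncation below is a standard variant and an assumption of this sketch, not a detail stated in the abstract:

```python
import numpy as np

def james_stein(x):
    """Positive-part James-Stein estimate of the mean of N(theta, I_d), d >= 3."""
    d = x.size
    factor = max(0.0, 1.0 - (d - 2) / np.dot(x, x))  # shrink toward the origin
    return factor * x

x = np.array([3.0, 0.0, 0.0, 0.0, 0.0])  # single observation, d = 5
theta_hat = james_stein(x)
```

The estimate always has a smaller norm than the observation, which is the sense in which it "shrinks" the MLE (here simply `x` itself).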

In this talk, I will introduce recent work on generalizations of the above results to matrices. First, we develop a superharmonic prior for matrices that shrinks singular values, which can be viewed as a natural generalization of Stein's prior. This prior is motivated by the Efron–Morris estimator, an extension of the James–Stein estimator to matrices. The generalized Bayes estimator with respect to this prior is minimax and dominates the MLE under the Frobenius loss. In particular, since it shrinks towards the space of low-rank matrices, it attains large risk reduction when the unknown matrix is close to low rank (e.g., reduced-rank regression). This idea also leads to an empirical Bayes matrix completion algorithm.

Next, we construct a theory of shrinkage estimation under the "matrix quadratic loss", a matrix-valued loss function suitable for matrix estimation. A notion of "matrix superharmonicity" for matrix-variate functions is introduced, and the generalized Bayes estimator with respect to a matrix superharmonic prior is shown to be minimax under the matrix quadratic loss. The matrix-variate improper t-priors are matrix superharmonic, and this class includes the above generalization of Stein's prior.

Finally, we show that the blockwise Efron–Morris estimator attains adaptive minimaxity in a multivariate Gaussian sequence model, where adaptation is not only to unknown smoothness but also to arbitrary quadratic loss.
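To illustrate the singular value shrinkage at the heart of this line of work, here is a minimal NumPy sketch of a positive-part Efron–Morris estimator for an n × p observation matrix with n ≥ p + 2. In terms of the SVD X = UΣVᵀ, it replaces each singular value σᵢ with σᵢ − (n − p − 1)/σᵢ, so nearly low-rank matrices are shrunk strongly toward low rank. The positive-part truncation and the full-column-rank assumption are simplifications of this sketch:

```python
import numpy as np

def efron_morris(x):
    """Positive-part Efron-Morris estimate of an n x p mean matrix, n >= p + 2.

    Assumes x has full column rank (all singular values positive).
    """
    n, p = x.shape
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s_shrunk = np.maximum(s - (n - p - 1) / s, 0.0)  # shrink each singular value
    return (u * s_shrunk) @ vt  # reassemble with shrunken spectrum

x = np.zeros((5, 2))
x[0, 0], x[1, 1] = 4.0, 3.0  # singular values 4 and 3
m_hat = efron_morris(x)
```

Because the shrinkage term (n − p − 1)/σᵢ is largest for small singular values, small singular values are pulled hardest toward zero, which is why the estimator performs well when the unknown matrix is close to low rank.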
