Dear all,
This Thursday, *Dr. Chiyuan Zhang* from *Google Research* will give us a
guest lecture on Neural Network Memorization in Vision and Language Models.
The teaching staff of ECE 5984 (Trustworthy Machine Learning) invites
anyone from the VT community who is interested to attend.
You may attend the lecture via this Zoom link:
https://virginiatech.zoom.us/j/87681568527?pwd=RnpJNGlDYTlVNlRrSENoN0doUUcr…
Meeting ID: 876 8156 8527
Passcode: 024027
You may find information about the talk here:
*Title:* Neural Network Memorization in Vision and Language Models
*Abstract:* Deep learning algorithms are well known for their propensity
to fit the training data very closely, often fitting even outliers and
mislabeled data points. Such fitting requires memorization of training-data
labels, a phenomenon that has attracted significant research interest but
has so far lacked a compelling explanation. In this talk, we introduce a
notion of 'counterfactual memorization' that formally characterizes what
neural networks memorize during training. We present experiments in both
image classification and language modeling that demonstrate these
memorization behaviors. We further discuss the impact of memorization on
generalization performance, which sheds light on why large
overparameterized neural networks tend to generalize better despite
memorizing more.
*Bio:* Chiyuan Zhang is a research scientist at Google Research. He is
interested in analyzing and understanding generalization and memorization
in deep neural networks, and their connections to related areas such as
systematic generalization, reasoning, and differentially private learning.
He holds a Ph.D. from MIT (2017), and Bachelor's (2009) and Master's (2012)
degrees in computer science from Zhejiang University, China.
--
Yi Zeng http://www.yi-zeng.com/
Ph.D. student
Bradley Department of Electrical and Computer Engineering
Virginia Tech