Referenced Papers (2)
On Calibration of Modern Neural Networks
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
ICML, 2017
"This paper's figures are used to demonstrate that the softmax outputs of modern neural networks are not reliable for quantifying model uncertainty, as models are often overconfident and their confidence scores are not well-calibrated with their actual accuracy."
Referenced at: 02:39
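
The miscalibration Guo et al. document is usually summarized as expected calibration error (ECE): the confidence-weighted gap between predicted confidence and empirical accuracy across bins. Below is a minimal sketch of that metric in NumPy; the arrays, bin count, and toy data are illustrative placeholders, not the paper's implementation.

```python
# Minimal sketch of expected calibration error (ECE): the gap between
# softmax confidence and empirical accuracy that Guo et al. measure.
# Illustrative only: `confidences` and `correct` stand in for real model outputs.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: confidence-weighted average |accuracy - confidence| over bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in this bin
            conf = confidences[mask].mean()   # mean predicted confidence
            ece += mask.mean() * abs(acc - conf)
    return ece

# Toy example: a model that is ~90% confident but only ~60% accurate
rng = np.random.default_rng(0)
confidences = rng.uniform(0.85, 0.95, size=1000)
correct = rng.random(1000) < 0.60
print(f"ECE ~ {expected_calibration_error(confidences, correct):.3f}")  # large gap -> overconfident
```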
Adversarial Examples: Attacks and Defenses for Deep Learning
Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
IEEE Trans. Neural Netw. Learn. Syst., 2019
"The speaker uses an image from this paper (a panda misclassified due to adversarial noise) to illustrate that the guarantees of conformal prediction can break under noise, similar to how a model's classification can be compromised."
Referenced at: 04:34
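
For context, the widely reproduced panda figure originates from a fast gradient sign method (FGSM) perturbation. Below is a minimal one-step sketch of that attack, assuming PyTorch; `model`, `x`, and `y` are hypothetical placeholders, and the tiny linear model exists only so the snippet runs end to end.

```python
# Minimal FGSM-style sketch of the perturbation behind the panda example.
# Assumes PyTorch; `model`, `x`, and `y` are placeholders for a real
# classifier, input batch, and labels.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.007):
    """One-step fast gradient sign attack: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Small per-pixel noise that can flip the prediction (and thereby
    # violate the exchangeability assumption conformal prediction relies on).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a tiny linear "model" on random data
model = torch.nn.Linear(3 * 32 * 32, 10)
x = torch.rand(4, 3 * 32 * 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y)
print((model(x).argmax(1) == model(x_adv).argmax(1)).tolist())  # some flips likely
```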
