Title: Coarse-to-Fine Memory Matching for Joint Retrieval and Classification
Allen Schmaltz, Harvard University
April 5, 2021
UMass BioNLP group seminar (virtual): Data meets Healthcare; University of Massachusetts (Amherst and Lowell)

Deep neural networks have exhibited promising effectiveness on a number of research tasks, but challenges in interpreting the large numbers of non-identifiable parameters preclude their use in many settings. Concomitantly, the possibility of annotation errors in the massive datasets needed to train deep networks gives us further pause in considering such models in higher-risk settings, such as medicine.

In today's talk we examine an alternative approach for analyzing neural networks and for analyzing data, leveraging deep networks as metric learners. In this way, we can create an updatable dense database of the task environment, formed by "exemplar" memories/representations from a parametric neural network. The database then serves as a bridge between new instances and instances with known labels, accessible via non-parametric matching. We demonstrate this framework on fact verification, a retrieval-classification task.

Retrieval-classification tasks encompass a number of real-world settings, in which we seek a model that retrieves information from an external datastore and then makes a classification decision over the retrieved information. In this context, our modeling decisions yield a sequence-oriented classification model that is updatable via two distinct mechanisms: the retrieved information can be updated explicitly, and the model behavior can be modified via the dense exemplar database.
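To make the exemplar-matching idea concrete, the sketch below illustrates the general pattern (not the talk's actual implementation): instances with known labels are encoded into a dense database, and a new instance is classified by non-parametric nearest-neighbor matching against that database. The encoder, labels, and variable names here are illustrative assumptions; in the real setting the encoder would be a trained deep network acting as a metric learner.

```python
# Minimal sketch of exemplar memory matching (illustrative, not the paper's code).
import numpy as np


def encode(texts, dim=128):
    """Stand-in encoder mapping each instance to a dense vector.

    In practice this would be the representation produced by a parametric
    neural network; here we use a hashed bag-of-words projection purely
    so the example is self-contained and runnable.
    """
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    # L2-normalize so that a dot product equals cosine similarity.
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-8)


# 1) Build the dense exemplar database from instances with known labels
#    (toy fact-verification-style labels, purely for illustration).
train_texts = [
    "the evidence supports the claim",
    "the evidence refutes the claim",
    "there is not enough information to decide",
]
train_labels = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]
exemplar_db = encode(train_texts)

# 2) Classify a new instance via non-parametric matching against the database.
query = encode(["this evidence clearly refutes the stated claim"])
similarities = query @ exemplar_db.T      # cosine similarity to every exemplar
nearest = int(np.argmax(similarities))
print("Predicted label:", train_labels[nearest])
print("Matched exemplar:", train_texts[nearest])

# Updating model behavior amounts to adding or removing rows of exemplar_db
# (and their labels); the encoder itself need not be retrained for this step.
```

The matched exemplar also gives a human-inspectable justification for the prediction, which is one way such a database can serve as a bridge between new instances and labeled instances.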