I research how people express meaning in context, and how to model that computationally. I’m generally interested in how sentence-level representations of meaning (particularly propositional semantics: who is doing what to whom) can be connected to how we keep track of participants and events across a document or discourse.

My dissertation focuses on a particular subset of this: implicit semantic roles. If you hear someone say “We won!”, you can usually figure out which competition was won, even though it goes unstated. I focus on building models of how we actually “resolve” that kind of unstated participant by searching the prior context (and common ground) for what it might refer to. I take a data-driven, computational approach: building corpora, training machine learning models, and doing analysis that gives us insight into how this kind of phenomenon works.
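As a rough illustration of how I frame that resolution problem, here is a minimal sketch (not my actual dissertation system): implicit role resolution treated as ranking candidate antecedents from the prior discourse. The class and function names, features, and weights below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    mention: str      # surface form of a previously mentioned entity or event
    salience: float   # e.g. a recency/frequency-based prominence score
    role_fit: float   # e.g. a selectional-preference score for the target role

def score_candidate(c: Candidate, w_salience: float = 0.4, w_fit: float = 0.6) -> float:
    """Combine discourse salience and role fit into a single ranking score."""
    return w_salience * c.salience + w_fit * c.role_fit

def resolve_implicit_role(candidates, threshold: float = 0.5):
    """Return the best-scoring antecedent, or None if nothing is plausible enough
    (the role may simply have no recoverable filler in the prior context)."""
    if not candidates:
        return None
    best = max(candidates, key=score_candidate)
    return best if score_candidate(best) >= threshold else None

# "We won!" -- which competition was won? Rank entities from the prior context.
prior_context = [
    Candidate("the regional chess tournament", salience=0.8, role_fit=0.9),
    Candidate("my sister", salience=0.9, role_fit=0.1),
]
print(resolve_implicit_role(prior_context))  # -> the chess tournament candidate
```

The real models are learned rather than hand-weighted, but the basic shape is the same: gather candidates from the prior discourse, score them against the unfilled role, and allow for the possibility that no filler is recoverable at all.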

Building corpora for complicated semantic phenomena has been one focus of my work at CU that I aim to continue in the future. I have been involved in annotating a range of representations of within-sentence semantics (PropBank and Abstract Meaning Representation (AMR)), event coreference with temporal and causal structure, and coreference over AMRs, which involved the largest-ever annotation of these implicit semantic roles for English. I’m also interested in a range of theoretical questions about annotation and corpus building: how we annotate new, tricky phenomena (e.g. quantification, modality, aspect), and how methods such as active learning, annotation projection, and crowd-sourcing can make annotation practical for new tasks, new domains, and new languages.

Another ongoing direction of my research is connecting “deep learning”-derived representations to representations postulated in the linguistics literature. I believe there is a great deal of interesting work to be done in pivoting from complex, inscrutable neural networks toward complex neural networks whose deep, learned representations can be linked to genuinely interpretable ones. Unlike many fields that use machine learning, linguistics often provides very strong priors about the kinds of underlying representations we might expect to learn, which is both an opportunity for models to be more interpretable and an opportunity for us to test those linguistic theories. For example, with implicit semantic roles, I pre-train modules that handle linguistically interpretable sub-problems (such as selectional preference: what kinds of nouns are plausible fillers for a particular event role?), and then build models where an individual decision can be analyzed in terms of those more interpretable pieces.
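To make the selectional-preference piece concrete, here is a deliberately simplified sketch (it is not the neural module itself, and the counts and role labels below are invented): a tiny count-based scorer whose judgments can be inspected on their own, which is the property I want the learned modules to preserve.

```python
from collections import defaultdict

class SelectionalPreferenceModel:
    """Stand-in for a pre-trained selectional preference module; here it just uses
    normalized counts of (predicate, role, head-noun) triples."""

    def __init__(self):
        self.counts = defaultdict(int)
        self.totals = defaultdict(int)

    def observe(self, predicate: str, role: str, noun: str, count: int = 1):
        self.counts[(predicate, role, noun)] += count
        self.totals[(predicate, role)] += count

    def score(self, predicate: str, role: str, noun: str) -> float:
        total = self.totals[(predicate, role)]
        return self.counts[(predicate, role, noun)] / total if total else 0.0

# "Train" on invented observations, then inspect the module's judgments directly.
prefs = SelectionalPreferenceModel()
prefs.observe("win", "ARG1", "tournament", count=50)   # things that get won
prefs.observe("win", "ARG1", "lottery", count=20)
prefs.observe("win", "ARG1", "sandwich", count=1)

for noun in ("tournament", "sandwich"):
    print(noun, round(prefs.score("win", "ARG1", noun), 3))
```

A larger model can then consume scores like these as one of its inputs, so that when it resolves an implicit role we can ask how much of that decision came from selectional fit versus other factors.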

All of these individual interests fit into a larger methodological approach: I want to increase our understanding of how people do things with language, and I believe the best way to get at that is to actually build the datasets and computational models that put our ideas to the test.