Cognitively-Motivated Models of Semantic Compositionality

Project Summary

This summer, I worked with Dr. Chris Callison-Burch in the Computer Science department on natural language processing (NLP) research. NLP is the branch of computer science that deals with human-language tasks such as translation and chatbots. In most NLP tasks, word meanings are represented by embeddings: fixed-length vectors computed from a word’s co-occurrence with other words. Representing words as vectors makes evaluation straightforward: we take the cosine similarity between two embeddings as a measure of similarity in meaning, which can then be correlated with human judgements. We can also use vector arithmetic for analogies; for example, the vector corresponding to “queen” - “female” + “male” is closest to the embedding for “king.”
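The cosine-similarity evaluation and the analogy trick can be sketched in a few lines of NumPy. The embeddings below are made-up toy values, not vectors from a real model, but the arithmetic is the same:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-dimensional embeddings (illustrative values only).
emb = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "male":   np.array([0.1, 0.9, 0.1]),
    "female": np.array([0.1, 0.1, 0.9]),
}

# The analogy: "queen" - "female" + "male" should land nearest "king".
target = emb["queen"] - emb["female"] + emb["male"]
nearest = max((w for w in emb if w != "queen"),
              key=lambda w: cosine(target, emb[w]))
print(nearest)  # king
```

In practice, real systems search over a vocabulary of hundreds of thousands of words and exclude all three query words from the candidates, but the ranking step is exactly this cosine comparison.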

We use co-occurrence to compute embeddings because of the distributional hypothesis: words that occur in similar contexts tend to have similar meanings (e.g. “dog” and “cat” might both co-occur frequently with “pet” or “fur,” while “apple” would not).

I began the summer by continuing a project that other lab members and I started in the spring: mapping word embeddings to image embeddings to create multi-modal embeddings. These embeddings are cognitively motivated, as they combine textual information with visual information much as a human might when remembering the meaning of a word. Likely for this reason, they correlate better with human similarity judgements than text-only embeddings do.
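One simple way to map between the two spaces (a sketch of the general technique, not necessarily the lab’s exact method) is a least-squares linear map from text vectors to image vectors, after which the two views can be concatenated into a single multi-modal vector per word. The dimensions and random data below are hypothetical stand-ins for real text and image features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired data: 50 words, each with a 300-d text embedding
# and a 512-d image embedding (random stand-ins for real features).
text = rng.normal(size=(50, 300))
image = rng.normal(size=(50, 512))

# Fit a linear map W so that text @ W approximates the image embeddings.
W, *_ = np.linalg.lstsq(text, image, rcond=None)

# Concatenate the textual and (mapped) visual views per word.
multimodal = np.concatenate([text, text @ W], axis=1)
print(multimodal.shape)  # (50, 812)
```

Once a word has a multi-modal vector, similarity judgements are computed with cosine similarity exactly as for text-only embeddings.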

As we continued to develop our multi-modal embeddings, we realized that many of the images in our dataset correspond to phrases, not words. However, we did not have phrase embeddings that we could pair with these images. I thus began researching methods for getting phrase embeddings from word embeddings.

The linguistic principle of compositionality states that the meaning of a phrase can be composed from the meanings of its constituents. I explored vector-based compositionality methods to get the phrase embeddings we needed. By composing the multi-modal embeddings we had already created using existing composition functions, I was able to improve accuracy over previous work. I then used NLP tools to create syntax-based composition functions and further improve results.
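Two of the simplest composition functions from the literature are additive composition (summing the word vectors) and multiplicative composition (an element-wise product). The 4-dimensional embeddings below are hypothetical values chosen for illustration:

```python
import numpy as np

def compose_add(vectors):
    """Additive composition: phrase vector = sum of word vectors."""
    return np.sum(vectors, axis=0)

def compose_mult(vectors):
    """Multiplicative composition: element-wise product of word vectors."""
    return np.prod(vectors, axis=0)

# Hypothetical 4-d embeddings for the phrase "black dog".
black = np.array([0.2, 0.9, 0.1, 0.4])
dog   = np.array([0.8, 0.3, 0.7, 0.5])

print(compose_add([black, dog]))   # [1.  1.2 0.8 0.9]
print(compose_mult([black, dog]))  # [0.16 0.27 0.07 0.2 ]
```

Syntax-based functions generalize this by letting the combination depend on the phrase’s structure, for instance weighting or transforming the head noun and its modifier differently rather than treating all words identically.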

I ended the summer working on a very different project from the one I began with, although several other lab members are still working with multi-modal embeddings. Along the way, I learned that research isn’t always a straight path from problem to solution: sometimes you have to take a big detour or abandon your original plan completely. Sometimes, as in my case, you encounter a problem that you find even more fascinating than the one you were originally trying to solve.

Part of my interest in PURM stemmed from the fact that I was, at the beginning of the summer, considering pursuing a career in academia. The immersion in research and academic life provided to me by this program has made me even more certain not only that academia is right for me, but that I want to continue to study NLP and hopefully work in the field one day.