Authors: Shusen Liu, Peer-Timo Bremer, Jayaraman J. Thiagarajan, Vivek Srikumar, Bei Wang, Yarden Livnat, Valerio Pascucci
Abstract: Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationships between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embedding (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, together with tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information, allowing reliable interpretation. Combined, the different views address a number of domain-specific tasks that are difficult to solve with existing tools.
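To make the PCA-based two-dimensional embedding mentioned in the abstract concrete, the following is a minimal sketch using only numpy, with random toy vectors standing in for trained word embeddings (the word list and dimensionality are illustrative assumptions, not part of the paper):

```python
import numpy as np

# Toy stand-ins for word vectors; in practice these would come from a
# trained model (e.g., word2vec or GloVe). The words and the 50-dim
# size are purely illustrative.
rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "paris", "france", "rome", "italy"]
X = rng.normal(size=(len(words), 50))  # 8 words, 50-dimensional vectors

# PCA via SVD: center the data, then project onto the top-2 right
# singular vectors (the two directions of greatest variance).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords_2d = Xc @ Vt[:2].T  # shape (8, 2): one 2-D point per word

print(coords_2d.shape)  # (8, 2)
```

Each word then maps to a point in the plane, which is the kind of view the paper argues can expose linear relationships such as analogy directions, but can also mislead when the discarded dimensions carry semantic structure.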