Authors: Lindsey Sawatzky, Steven Bergner, Fred Popowich
Abstract: Recurrent Neural Networks are an effective and prevalent tool used to model sequential data such as natural language text.
However, their deep architecture and large number of parameters make it difficult to study precisely how they work.
We present a visual technique that gives a high-level intuition of the semantics of the hidden states within Recurrent Neural Networks.
This semantic encoding allows hidden states to be compared throughout the model, independent of their internal details.
The proposed technique is implemented in a proof-of-concept visualization tool, which we demonstrate on the natural language processing task of language modelling.