Text-based games are rich virtual environments in which the player interacts with the game world almost entirely through natural language. Since the underlying game states are not directly observable, but are conveyed only through textual descriptions, these games are especially challenging for automatic players. In our project, we attempt to solve a sub-domain of such games by learning the corresponding control policies. We employ a deep reinforcement learning framework to jointly learn state and action representations, using game rewards as feedback. We approximate the Q-function with an interaction function that combines separate embedding vectors for states and actions. We evaluate our approach on two game worlds, a simple one and one with more elaborate state descriptions. Our experiments demonstrate that the model is capable of extracting meaning from the game texts and thereby underline the importance of learning expressive representations.
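
To make the interaction-style Q-function concrete, the sketch below embeds the state text and the action text with separate encoders and scores a pair by a simple dot-product interaction. It is only an illustrative sketch under assumptions, not the exact architecture of the project: the class and parameter names, the GRU encoders, the dimensions, and the dot-product form are all placeholders chosen for clarity.

```python
import torch
import torch.nn as nn


class InteractionQNetwork(nn.Module):
    """Scores a (state, action) text pair by embedding each separately
    and combining the two vectors with a dot-product interaction."""

    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        # Shared word embeddings; states and actions get separate sequence encoders.
        self.word_embedding = nn.Embedding(vocab_size, embed_dim)
        self.state_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.action_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def _encode(self, encoder: nn.GRU, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> final hidden state: (batch, hidden_dim)
        _, hidden = encoder(self.word_embedding(token_ids))
        return hidden.squeeze(0)

    def forward(self, state_ids: torch.Tensor, action_ids: torch.Tensor) -> torch.Tensor:
        # Q(s, a) as the interaction (here: dot product) of the two embedding vectors.
        state_vec = self._encode(self.state_encoder, state_ids)
        action_vec = self._encode(self.action_encoder, action_ids)
        return (state_vec * action_vec).sum(dim=-1)


if __name__ == "__main__":
    q_net = InteractionQNetwork(vocab_size=1000)
    states = torch.randint(0, 1000, (2, 20))   # two tokenised state descriptions
    actions = torch.randint(0, 1000, (2, 4))   # two tokenised action phrases
    print(q_net(states, actions))              # Q-values, shape (2,)
```

In such a setup the network would be trained with the usual temporal-difference loss on game rewards, so that the state and action embeddings are learned jointly with the value estimates.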
