SuperSonicAI: Deep Reinforcement Learning for Low Dimensional Environments

Abstract

This project showcases an application that plays Sonic the Hedgehog Genesis (Sonic) using artificial intelligence and is designed to generalize to other games with similar mechanics and controls. The application runs a decision-making algorithm that plays the game by training neural networks to observe the same frames a human player sees, extract useful information, and choose actions to navigate the environment. It interfaces with the game through Gym Retro, an open-source platform for reinforcement learning training and visualization. To develop a strong agent for Sonic, the team built a stable software architecture as a platform for experimentation across implementations of reinforcement learning agents, computer vision image-processing techniques, and helper functions that support explainability. This experimentation led to a solution that completes the first level of Sonic and generalizes fairly well to unseen environments. The solution generates a synthetic dataset of images from the game to train a DeepLab V3 semantic segmentation model, then applies the trained model to emulator frames as a preprocessing step, feeding segmented images into a Deep Q-Learning agent. The agent is trained on several levels of Sonic to learn a policy over state-action pairs that generalizes to unseen environments.
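
As a rough illustration of the pipeline described above, the sketch below wraps a Gym Retro Sonic environment with a DeepLab V3 segmentation preprocessor (built on torchvision) and hands segmented frames to a Deep Q-Learning agent. This is a minimal sketch under stated assumptions, not the project's actual code: the SegmentationPreprocessor class, the agent.act()/agent.observe() interface, the number of segmentation classes, and the weights path are hypothetical, and the game/state identifiers are the standard Gym Retro names.

# Illustrative sketch: segmented frames from a Gym Retro Sonic environment
# are fed to a Deep Q-Learning agent. Assumptions are noted in comments.
import retro
import torch
import torchvision


class SegmentationPreprocessor:
    """Hypothetical preprocessor: DeepLab V3 over raw emulator frames."""

    def __init__(self, num_classes=8, weights_path=None):
        # DeepLab V3 with a ResNet-50 backbone; num_classes is an assumption.
        self.model = torchvision.models.segmentation.deeplabv3_resnet50(
            num_classes=num_classes
        )
        if weights_path is not None:
            # weights_path is a placeholder for the trained segmentation weights.
            self.model.load_state_dict(torch.load(weights_path))
        self.model.eval()

    def __call__(self, frame):
        # frame: H x W x 3 uint8 image from the emulator.
        x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            logits = self.model(x)["out"]       # 1 x C x H x W class scores
        return logits.argmax(dim=1).squeeze(0)  # H x W map of class labels


def run_episode(agent, preprocess):
    # Hypothetical driver loop; `agent` is assumed to expose act()/observe().
    env = retro.make(game="SonicTheHedgehog-Genesis", state="GreenHillZone.Act1")
    obs = env.reset()
    done = False
    while not done:
        state = preprocess(obs)                 # segmented observation
        action = agent.act(state)
        obs, reward, done, info = env.step(action)
        agent.observe(state, action, reward, done)
    env.close()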

Description

Keywords

Reinforcement Learning, Semantic Segmentation

Citation