Author: Khurana, Suhani
Date accessioned: 2023-07-01
Date available: 2023-07-01
Date issued: 2023-06-30
Identifier: vt_gsexam:37816
URI: http://hdl.handle.net/10919/115620

Abstract: Android is the most popular operating system, holding close to 70% of the market share. With the growth in Android usage, the number of games has also grown, and the Google Play store now hosts over 500,000 games. Android games are tested either manually or with existing tools that automate parts of the process. Manual testing requires a great deal of effort and can be expensive. The existing automation tools make no use of domain knowledge, which can make testing ineffective when a game involves complex strategies, intricate details, custom widgets, and so on. Tools such as Android Monkey and TimeMachine generate random Android events, including gestures like taps, swipes, and clicks, as well as other system-level events, across the application. Deep learning approaches such as Wuji were designed only for combat-style games. These limitations make it imperative to create a testing paradigm that uses domain knowledge and is easy to use for developers without machine learning or deep learning expertise. In this work, we develop a tool called DRAG (Deep Reinforcement learning based Android Gamer), which leverages Reinforcement Learning to learn the requisite domain knowledge and play a game much as a human would. DRAG uses a unified Reinforcement Learning agent and a unified Reinforcement Learning environment; only the action space is customized for each game. This generalization proceeds in two steps: 1) record an 8-minute demo video of the game and capture the underlying Android action log; 2) analyze the recorded video and action log to generate an action space for the Reinforcement Learning agent. The unified RL agent is trained using the game score and coverage as the reward and screenshots of the game as the observed states. We evaluated the tool on a set of 19 open-source games that differ in the actions they require: some need icon taps, some need swipes in arbitrary directions, and some need more complex actions that combine different gestures. In our evaluation, DRAG outperformed the state-of-the-art TimeMachine on all 19 games and outperformed Monkey on 16 of the 19 games. These results support the claim that Deep Reinforcement Learning can be used to test Android games and can provide better results than tools that make no use of domain knowledge.

Genre: ETD
Language: en
Rights: In Copyright
Subjects: Android; Testing; Reinforcement Learning; Software Engineering
Title: Android Game Testing using Reinforcement Learning
Type: Thesis
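Illustrative note: the abstract describes an RL setup with screenshot observations, a per-game discrete action space mined from a demo recording, and a reward combining score and coverage. The sketch below is a minimal illustration of what such an environment could look like, assuming the gymnasium API; the class name, stubbed screenshot/score hooks, and reward weighting are hypothetical and are not taken from DRAG's actual implementation.

```python
# Minimal sketch of a DRAG-style environment (assumed names, stubbed device I/O).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class AndroidGameEnv(gym.Env):
    """Toy environment: screenshot observations, a discrete per-game action set,
    and a reward built from the change in game score plus the change in coverage."""

    def __init__(self, actions, screen_shape=(84, 84, 3)):
        super().__init__()
        # `actions` would hold gestures (taps, swipes, ...) derived from the demo video/action log.
        self.actions = actions
        self.action_space = spaces.Discrete(len(actions))
        self.observation_space = spaces.Box(0, 255, shape=screen_shape, dtype=np.uint8)
        self._score = 0.0
        self._coverage = 0.0

    def _screenshot(self):
        # Placeholder: a real implementation would capture the device screen (e.g. via adb).
        return np.zeros(self.observation_space.shape, dtype=np.uint8)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._score, self._coverage = 0.0, 0.0
        return self._screenshot(), {}

    def step(self, action_idx):
        gesture = self.actions[action_idx]
        # A real implementation would dispatch `gesture` to the game, then read back
        # the new score and coverage from the game / instrumentation; stubbed here.
        new_score, new_coverage = self._score, self._coverage
        reward = (new_score - self._score) + (new_coverage - self._coverage)
        self._score, self._coverage = new_score, new_coverage
        terminated, truncated = False, False
        return self._screenshot(), reward, terminated, truncated, {}
```

Such an environment could then be trained with any standard deep RL algorithm that accepts image observations (e.g. a DQN- or PPO-style agent); the abstract does not specify which algorithm DRAG uses, so that choice is left open here.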