Testing is an integral part of the Android application development process: by regularly testing their apps, developers can verify correct behavior and usability before making the applications available to the public. Espresso is a testing framework that allows developers to write concise, reliable, and readable Android user interface (UI) test cases, and it is the only UI testing framework with widespread adoption among application developers. Several automated test generation tools have been proposed to assist developers in this task. However, many of these tools only report errors and do not produce executable test cases, and of those that do generate tests, only some support the Espresso format. This thesis focuses on improving the generation of Espresso test cases for Android applications. We begin by conducting an empirical study comparing the effectiveness of different evolutionary algorithms and show that such algorithms are not well suited to generating Android test cases, as they are often surpassed by purely random algorithms. Next, we analyze the challenges of generating Espresso test cases, using a translation-based approach that leverages the output of existing automated testing tools. We find that one of the main challenges is the lack of unique properties with which to unequivocally identify specific widgets in the UI. This is exacerbated by the fact that many tools rely on the Android Accessibility Service, which can return inconsistent information. Finally, this thesis presents a technique for generating Espresso test cases that are significantly more reliable than those produced by the translation-based approach, according to an experimental evaluation on $1,035$ Android apps. This technique includes novel algorithms for generating Espresso View Matchers that concisely select Android widgets and for creating Espresso View Assertions used in regression tests.
It also directly utilizes the Espresso framework to gather information and interact with the application under test.