Project Summary: The basic idea of the project is to develop a general understanding of how state agencies' policy announcements are interpreted, by taking streams of policy announcements and their interpretations, annotating (coding) the interpretations by hand, and then using machine learning (ML) and natural language processing (NLP) techniques to develop a model that generates annotations directly from announcements.

Specifically, we will use streams of two different types of policy announcements, each for two countries (three countries in total): one on foreign-policy-related issues and the other on central bank monetary policy. We will also use streams of one particular type of interpretation, namely journalistic accounts in ideologically different newspapers, and we will annotate those accounts in terms of attributed motives, conditional predictions, and other ways in which interpreters typically gloss policy announcements. We will then use ML techniques to develop and train deep learning models of textual entailment and inference that exploit syntactic and semantic information in the announcement texts (both general and domain-specific) to map from announcements to annotations. Those models will be examined to distinguish issue-specific and country-specific features of the interpretation of policy announcements from more general, cross-domain features, so that the latter can be applied to other issue domains.

The work brings together expert knowledge in political science and economics (particularly domain-specific knowledge about foreign policy and monetary policy) with methodological skills and expertise in computational linguistics; it will involve disparate tasks ranging from collecting archival materials to developing and applying textual annotation schemas and building computational models of textual entailment.
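To make the announcement-to-annotation mapping concrete, the core supervised step can be sketched as a toy text classifier. This is a deliberately simplified stand-in, not the deep entailment models the project proposes: it uses a standard-library naive Bayes classifier with bag-of-words features, and all example sentences and label names (`conditional_prediction`, `attributed_motive`) are hypothetical illustrations of the annotation categories mentioned above.

```python
# Toy sketch: learn a mapping from announcement text to annotation labels.
# Simplified stand-in for the proposed deep entailment models; data is invented.
import math
from collections import Counter, defaultdict


def tokenize(text):
    """Minimal whitespace tokenizer; a real system would use proper NLP tooling."""
    return text.lower().split()


def train(examples):
    """examples: list of (announcement_text, annotation_label) pairs."""
    label_counts = Counter()                 # how often each label occurs
    word_counts = defaultdict(Counter)       # per-label word frequencies
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab


def predict(model, text):
    """Return the most probable annotation label under naive Bayes with
    Laplace (add-one) smoothing."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label in label_counts:
        logprob = math.log(label_counts[label] / total)       # prior
        denom = sum(word_counts[label].values()) + len(vocab)  # smoothed total
        for w in tokenize(text):
            logprob += math.log((word_counts[label][w] + 1) / denom)
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label


# Hypothetical hand-annotated training pairs (announcement -> annotation label).
examples = [
    ("the bank will raise rates if inflation persists", "conditional_prediction"),
    ("officials may cut rates if growth slows", "conditional_prediction"),
    ("the ministry issued the statement to reassure allies", "attributed_motive"),
    ("the bank aims to signal confidence to markets", "attributed_motive"),
]
model = train(examples)
```

In the actual project, the hand-coded journalistic interpretations would supply the training labels, and the classifier would be replaced by entailment models over richer syntactic and semantic representations; the sketch only illustrates the supervised learning setup.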