ABSTRACT

Over the last decade, several middle powers have sought to strengthen their state capacity by digitalizing the provision of key governance services. Through public digital infrastructure, they have attempted to address problems of scale by fostering greater precision in domains such as healthcare, access to credit, and social security. These digital services – ‘middleware’ – have traditionally been built with the help of Application Programming Interfaces (APIs): standardized interfaces governing how data is fetched from public databases and shared or stored by public and private entities alike. Advances in AI present a further opportunity for these states to leapfrog bottlenecks associated with human resources development by building “intelligent” bots and service providers. AI-enabled middleware, too, is built atop APIs, relying on machine learning models to detect patterns and generate predictive responses. High degrees of standardization and centralization in the provision and upkeep of intelligent algorithms carry risks and vulnerabilities, including those of a strategic nature. Malicious actors could compromise an API and corrupt training data, thereby manipulating the AI-enabled service at the front end, with worrying consequences. This chapter surveys proposals by India, Singapore, and Brazil to build AI-enabled governance services and details the strategic risks associated with their deployment. While intelligent programs may allow middle powers to sidestep logistical and resource-based constraints, they also open new theatres of cyber conflict, inviting adversaries to disrupt or disable key engines of their economic growth.