ABSTRACT

There is growing concern among practitioners, academics, and researchers about biased decisions arising from the use of artificial intelligence, and machine learning in particular, to support decision-making. Several countries, including the United States, China, and France, have proposed common grounds to frame the implications of using artificial intelligence in daily life, in an attempt to govern how artificial intelligence solutions should be built. Their approach leans towards the ethical and sociological aspects of the problem to define what trustworthy artificial intelligence is supposed to be. However, there is still a need to explore how this approach intersects and harmonizes with the design-based engineering pursuit of fairer decisions. The aim of this study is to identify which elements within the emergent regulatory framework described as “principled artificial intelligence” may be influenced to achieve design-based trustworthy artificial intelligence. Our results highlight the importance of the language used in the definition of artificial intelligence principles and the need for policymakers and software developers to join efforts in framing what is a common issue for both camps. They also show the main difficulties the current approach faces in providing a methodological instrument for artificial intelligence designers.