ABSTRACT

Artificial intelligence (AI) will change many companies and industries. The pace at which AI is adopted in practice, however, is slowed by a lack of trust. Traditionally, trust has been rooted in family and friends and, in an extended form, in organizations or professional groups. Stakeholder choices rest on human ethical standards and on elements such as family, culture, religion, and community. Creating a framework for AI use and risk management may seem complicated, but the process resembles the creation of the controls, rules, and processes that already govern human conduct. The risks of AI depend on how the technology is used; it should be noted, however, that the technology remains under human control. The aim of this study is to assess stakeholder confidence in AI in relation to ethical standards and the degree of risk, drawing on information and data from the literature as well as the results of the authors' own research. The study finds that AI causes unease among some users. Only strict guidelines and high ethical standards can shift stakeholder attitudes toward trusting AI.