ABSTRACT

The ways artificial intelligence (AI) can enrich our lives are immense. Increased efficiency and lower costs, major improvements in healthcare and research, safer vehicles, and general convenience are just some of the promises of AI. But, as with any new technology, the opportunities of AI come with an array of challenges for our privacy and data security. In many cases, AI systems are designed with little consideration for personal data, making them a prime focus of new data protection regulations. Regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have heightened scrutiny of how large companies handle personal data, from its collection and storage to its use and transfer. This has created the need for new approaches to training AI models without violating these regulations. In 2017, Google introduced federated learning to train models with greater privacy and other security attributes, but full integration of data security properties is still to come. Blockchain technology has made significant contributions to security features such as transparency, traceability, and integrity. In this chapter, we discuss merging federated learning and blockchain so that AI models can be trained effectively and efficiently while complying with data security and privacy regulations.