ABSTRACT

We live in a time of unprecedented change. The digital revolution that we are only beginning to experience will soon reshape occupations and industries as well as the way we work and live. Disruptive innovations such as 3D printing, brain-computer interfaces, robots, drones, and driverless vehicles are rapidly transforming business processes, requiring new skills and mindsets (Bojanova, 2014). The new Airbus A350 XWB, for example, contains over 1,000 parts that were 3D printed from lightweight materials (Simmons, 2015). Newer technologies have enabled the digitization of nonroutine tasks, allowing autonomous robots to perform work that may even require the subtle judgment usually reserved for humans (Frey & Osborne, 2015). Cars have driven themselves over millions of miles in the last ten years, distinguishing trees from pedestrians, without causing a single accident. Certain computers can now see, hear, write, recognize, and understand nuances better than humans (Howard, 2014). They outperform human pathologists at detecting cancer, defining treatments, and predicting survival rates, and are even teaching humans new lessons as they work together (Howard, 2014; Rometty, 2015). IBM CEO Ginni Rometty (2015, p. 2) predicts that “in the future, every decision mankind makes is going to be informed by a cognitive system and as a result our lives are going to be better for it.” However, not everyone shares this optimistic outlook. Stephen Hawking, Elon Musk, and Bill Gates have all expressed concern about artificial intelligence (AI) escaping human control (McMillan, 2015). According to Bill Gates, “we should be worried about the threat posed by artificial intelligence” (Rawlinson, 2015). “Google, Facebook, Microsoft, and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate and putting hundreds of millions of dollars into the race for better algorithms and smarter computers” (McMillan, 2015, p. 2).