ABSTRACT

In 1960, Norbert Wiener discussed “Some Moral and Technical Consequences of Automation”. His first observation concerns learning and adaptive capabilities as prerequisites of human-like ‘intelligent’ computers: to be considered artificially intelligent, a machine must be able to learn from and act on its environment in ways that maximise the probability of achieving some pre-specified objective. The significance of Wiener’s second observation, which follows directly from the first, remained largely obscured to AI practitioners until the recent advances in AI and its shift from research laboratories into mainstream use. One important characteristic of AIED (Artificial Intelligence in Education) that is rarely recognised outside the immediate AIED community is that it relies on AI models not only to build intelligent learning environments that deliver educational solutions, but also to address the fundamental questions raised within the field.