ABSTRACT

This chapter presents Machine Learning (ML) techniques for learning analytics models in the form of networks, trees, and rules. We start with algorithms for learning Decision Trees (DTs). Next, we present methods for learning several probabilistic graphical models, namely, the Naïve Bayesian Classifier (NBC), the k-dependence NBC (kNBC), and Bayesian Belief Networks (BNs). Finally, we present a general rule-induction technique called Inductive Logic Programming (ILP). Each of these models has a structural component that is easily comprehensible to end users, and thus gives users an opportunity to tweak models using subjective knowledge. In contrast, a feed-forward Neural Network (NN) is a black box, exposing only its input-output interface and certain learning parameters. NN models (presented in the last chapter) are learned directly from the data and are then used for classification. DTs, ILP rules, and graphical NBC, kNBC, and BN models, on the other hand, can also be produced in consultation with subject-matter experts, even without any observational data from which to learn.