ABSTRACT

Machine learning (ML) is moving out of research laboratories and into the hands of human users, owing to the increasing availability of software tools with integrated ML capabilities. Humans and machines have unique strengths, and collaboration between the two has the potential to improve ML systems further. For any such collaboration, human users must be able to interpret the behavior of ML systems. Moreover, human users should have a means of giving feedback to ML systems based on their domain knowledge. Consequently, apart from accuracy, human interpretability and the ability to interact with human experts are becoming crucial parameters for ML systems. In recent years, a renewed interest in conferring interpretability has been observed in the ML community. Several model-specific and model-agnostic approaches have been proposed, but the field is still evolving. Interest in conferring the ability to interact with ML systems has also begun to grow. However, there is a lack of consensus on definitions, standard practices, and evaluation metrics. This chapter reviews the state of the art in interactive ML systems. The objectives of the proposed work include extracting principles and guidelines for the design of interactive ML systems, describing the algorithms and graphically representing the workflow involved, identifying metrics for evaluation, and proposing a human-feedback adaptive learning algorithm that adapts itself to incorporate human expert feedback. The proposed approach can report any conflict between human feedback and the data, along with an interpretable explanation. Moreover, it can accommodate human users with different levels of domain expertise. Establishing principles and guidelines for the design of interactive ML systems will help in standardization and in building consensus.
A basic algorithm for the design of interactive ML systems provides a common starting point for further improvements. A set of commonly agreed-upon metrics will aid the evaluation of interactive ML systems. An interface that can capture feedback from human experts with different levels of expertise will make human experiments more economical and enable the crowd to be leveraged in improving or verifying interactive ML systems.