ABSTRACT

Modern artificial intelligence models typically operate as black boxes. Once a model has been trained, it is put into use, and we rely on its accuracy as assessed during training, cross-validation, and testing. Within this approach, however, we cannot justify an individual prediction of the model: in many cases this is either impossible in principle (e.g., for convolutional neural networks, which generate features themselves) or extremely difficult (e.g., for the support vector machine, which is based on optimizing a functional rather than on estimating the probability that the data belong to a particular class). Understanding how a model works increases confidence in its results. In medical practice, this means that for an individual patient, their individual prognosis matters more than the average probability estimate for the group to which they belong. In this paper, we consider an important practical problem, the detection of change points in a multidimensional time series, and propose a novel method for solving it. Such time series are often generated by patient monitoring devices and require fast, unambiguous classification. A clear probabilistic interpretation of the method underlying this classification greatly enhances its value within the framework of explainable artificial intelligence.
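To make the problem setting concrete, the following is a minimal sketch of offline change point detection in a multivariate series. It is not the method proposed in this paper: it uses a generic CUSUM-type mean-shift statistic on synthetic data, and all names and parameters here (detect_change_point, true_cp, the shift vector) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 3-channel "monitoring" series: the mean shifts abruptly at t = 120.
    n, d, true_cp = 240, 3, 120
    series = rng.normal(0.0, 1.0, size=(n, d))
    series[true_cp:] += np.array([1.5, -1.0, 0.8])  # abrupt mean shift

    def detect_change_point(x):
        """Return the index maximizing a multivariate CUSUM-type statistic.

        For each candidate split t, compare the mean vector before and
        after the split, weighted so that splits near the edges of the
        series are not favored.
        """
        n = len(x)
        scores = np.full(n, -np.inf)
        for t in range(2, n - 2):
            diff = x[:t].mean(axis=0) - x[t:].mean(axis=0)
            # Weight by t(n - t)/n so the statistic is comparable across splits.
            scores[t] = (t * (n - t) / n) * np.dot(diff, diff)
        return int(np.argmax(scores))

    estimate = detect_change_point(series)
    print(f"true change point: {true_cp}, estimated: {estimate}")

On this synthetic example the estimate lands at or near the true shift at t = 120. The method proposed in the paper replaces this generic statistic with one that admits a clear probabilistic interpretation.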