Games are widely used as an instructional strategy in Software Engineering (SE) education. It is therefore essential to evaluate such games systematically in order to obtain evidence of their quality. However, few approaches currently support a systematic evaluation of these games, which limits the analysis of their quality. In this context, a prominent model for the evaluation of games for SE education is MEEGA (Model for the Evaluation of Educational GAmes). Yet, a large-scale analysis has identified opportunities to improve its validity and reliability. The objective of this chapter is therefore to present an evolution of the model, MEEGA+, which has been systematically developed by decomposing evaluation goals into measures and defining a measurement instrument to evaluate the quality of games for SE education. An analysis of the reliability and construct validity of the MEEGA+ measurement instrument, based on data collected from 29 case studies of 13 different games involving 589 students, indicates a satisfactory reliability of the MEEGA+ model (Cronbach’s alpha α = .929). In addition, the results of an exploratory factor analysis indicate that the quality of games for SE education is evaluated through two quality factors: player experience and usability. Our results thus provide a systematic, valid, and reliable model for evaluating SE games, assisting game creators, SE instructors, and researchers in the systematic development, improvement, and adoption of such games in practice.
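
The reliability figure above is a Cronbach's alpha coefficient, computed over the item scores of the measurement instrument. As a minimal illustrative sketch (not the chapter's actual analysis), the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance) can be implemented as follows; the `responses` data are hypothetical Likert-scale answers, not data from the MEEGA+ case studies:

```python
from statistics import pvariance


def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of k columns, one per questionnaire item,
    each holding the scores of the same n respondents.
    """
    k = len(items)
    # Sum of the per-item (population) variances.
    item_var_sum = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var_sum / total_var)


# Hypothetical 5-point Likert responses: 3 items, 4 respondents.
responses = [
    [4, 5, 3, 4],  # item 1
    [5, 5, 4, 4],  # item 2
    [4, 4, 3, 5],  # item 3
]
print(round(cronbach_alpha(responses), 3))  # ≈ 0.667
```

Values above roughly .7 are conventionally read as acceptable reliability, so an alpha of .929, as reported for MEEGA+, indicates high internal consistency of the instrument's items.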