In the previous chapter, I described some applications of the model to 3-D and virtual simulations. Simulations immerse the viewer as a participant to some degree; instructional videos, by contrast, engage the viewer visually while he or she performs certain tasks in conjunction with, or subsequent to, viewing the video. In this chapter, I apply attributes of the model to instructional video, specifically machinima video. Machinima video is a video product that integrates virtual environments, much like the environments described in the previous chapter. Although these videos are not as interactive as the simulators discussed in the previous chapter, they are used for instructional purposes, and the model’s principles can be applied to their analysis much as they were there. I illustrate these applications in this chapter to suggest further study of these kinds of instructional tools with the model.

Simulators like SL engage visual, aural, spatial, and behavioral modes of representation in an immersive, multisensory experience similar to actually performing the task in the real world. Further, demonstrations in SL can be recorded as videos and shown subsequently, allowing viewers to see the particular movements and motions associated with a task before they actually attempt it. Consequently, such videos can be analyzed in terms of intermodal sensory redundancy, temporal synchronicity, and attention-modal filtering. In particular, such video can be studied relative to its rhetoric and sensory experiences by applying new theories of multimodal assessment, many of which use concepts integrated into the model.

I apply the model to two case studies of machinima video instructional products: one developed by a student and another developed professionally. I have presented information about these products previously (Remley, 2010, 2014).
However, as with the cases presented in the previous two chapters, that discussion was framed within different theoretical foci. Nevertheless, these cases can be used to illustrate attributes of the model presented here.