While heterogeneous platforms based on many-core accelerators are well established in the embedded and high-performance computing domains, they still struggle to find applicability in real-time systems. This is mainly due to the unique characteristics of such computing systems. Real-time engineers build systems that must deliver a correct result at the correct instant in time, or at least within a given time bound. This crucial property is called predictability, and it is typically achieved by design, through a thorough analysis phase that involves both the hardware and the software components of the system. Unfortunately, when the hundreds or thousands of cores of modern accelerators come into play, the complexity quickly becomes unmanageable for traditional methods, which in some cases completely fail to model the system. To tackle this, methodologies and frameworks that can potentially handle such complexity in a more relaxed manner are being investigated. Research in the field is increasingly focusing on the shared resources of these architectures, such as caches, memories, and I/O devices, and both academia and industry are well aware that memory is now the scarce resource of the system, and hence the one that must be adequately managed.