Real-time resource management is becoming increasingly challenging as control technology advances, such as intelligent cruise control, advanced traction control, and, in the not-too-distant future, robotic cars. In modern automotive and avionics applications, the use of multiple sensors, and especially real-time imaging sensors, creates unprecedented workloads. From a computational perspective, multicore architectures have become mainstream and offer great potential to handle these new workloads. However, they also pose unprecedented challenges to real-time resource management theory and its supporting tools. Much of the real-time scheduling theory research has focused on the scheduling of CPUs. However, as the number of cores rapidly increases, the bottleneck among shared resources is no longer CPU cycles. As a multicore chip processes increasing volumes of real-time imaging data, the memory hierarchy (the DRAM and the cache hierarchy, especially the last-level cache shared among multiple cores) has become the bottleneck resource. For instance, due to the memory bottleneck, whenever a task suffers a cache miss, contention for access to main memory can significantly delay the data fetch, greatly increasing tasks' Worst-Case Execution Time (WCET). This problem is especially severe in multicore systems, since multiple cores can simultaneously compete for access to the shared cache and main memory; in fact, in the worst case, task execution time can grow linearly with the number of cores in the system. Thus, the memory hierarchy is becoming a serious bottleneck for real-time computing platforms.
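The linear growth described above can be illustrated with a simple, pessimistic contention model. This is only an illustrative sketch, not a model from any cited work; the function name and all parameter values below are assumptions chosen for the example.

```python
# Illustrative (hypothetical) contention model: in the worst case, every
# last-level-cache miss of a task can be delayed by one memory request
# from each of the other (num_cores - 1) cores.

def wcet_with_contention(wcet_alone, cache_misses, mem_latency, num_cores):
    """Pessimistic WCET bound: run-alone WCET plus, per cache miss,
    a wait behind (num_cores - 1) competing memory requests, each
    taking mem_latency seconds."""
    return wcet_alone + cache_misses * (num_cores - 1) * mem_latency

# Example: a task with a 10 ms run-alone WCET and 50,000 cache misses,
# assuming a 50 ns worst-case DRAM access, on 1, 2, 4, and 8 cores.
for cores in (1, 2, 4, 8):
    wcet = wcet_with_contention(10e-3, 50_000, 50e-9, cores)
    print(f"{cores} cores: WCET bound = {wcet * 1e3:.2f} ms")
```

Under these assumed numbers the bound grows from 10 ms on one core to 27.5 ms on eight, showing how a per-miss delay that scales with core count inflates the WCET linearly.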
Much of the real-time scheduling theory of the past two decades was based on the assumption that we can compute the WCET of each task when it executes alone. When tasks execute together, the scheduling theory would compute the worst-case response time as a function of the run-alone WCETs. In modern multicore chips, this is no longer valid: when tasks run together, the WCET of each task increases because contention can occur when multiple cores experience a cache miss at the same time. Due to this effect, as well as the timing complexities of the DRAM, the memory controller, and the shared interconnect, the cache miss stall time becomes effectively unpredictable, with only a very pessimistic, practically unusable upper bound. Thus, we see the need for a paradigm shift towards a modern memory-centric scheduling theory that can effectively co-schedule the use of the memory hierarchy, the cores, and the on-chip network, including the I/O channels.
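As a concrete example of the run-alone assumption, the classical fixed-priority response-time recurrence computes each task's worst-case response time purely from run-alone WCETs. The sketch below is a standard textbook formulation (not specific to this work); the task set at the bottom is an invented example.

```python
import math

def response_time(tasks, i):
    """Classical fixed-priority response-time analysis.

    tasks: list of (wcet, period) pairs, sorted highest priority first.
    Returns the worst-case response time of task i, or None if its
    deadline (taken equal to its period) is missed.

    The analysis is sound only if each wcet is the run-alone WCET --
    precisely the assumption that multicore memory contention breaks.
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        # Interference from all higher-priority tasks released in [0, R).
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:          # fixed point reached
            return R
        if R_next > T_i:         # deadline missed: unschedulable
            return None
        R = R_next

# Hypothetical task set: (WCET, period), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]
```

The point of the paragraph above is that once run-alone WCETs stop being valid upper bounds, the `C_j` terms in this recurrence become unreliable, and the whole analysis with them.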
MemGuard: Memory Bandwidth Reservation System for Efficient Performance Isolation in Multi-core Platforms, RTAS, 2013
Copyright (c) 2013 UIUC. All rights reserved.