Abstract
Computation offloading has been shown to be a viable solution for processing compute-intensive workloads by transferring them from low-power devices to nearby servers known as cloudlets. However, factors such as dynamic network conditions, concurrent user access, and limited resource availability often result in offloading decisions that negatively impact end users in terms of delay and energy consumption. To address these shortcomings, we investigate the benefits of using Machine Learning to predict offloading costs for a facial recognition service in a series of realistic wireless experiments. We also perform a set of trace-driven simulations that emulate a multi-edge protest crowd incident case study, and we formulate an optimization model that minimizes the time taken to complete all service tasks. Because optimizing offloading schedules for such a system is a well-known NP-complete problem, we use mixed-integer programming and show that our scheduling solution scales efficiently for a moderate number of user devices (10-100) and a correspondingly small number of cloudlets (1-10), a scale that is typically sufficient for public safety officials managing crowd incidents. Moreover, our results indicate that using Machine Learning to predict offloading costs leads to near-optimal scheduling in 70% of the cases we investigated and offers a 40% performance gain over baseline estimation techniques.