Intelligent task processing using mobile edge computing: processing time optimization



INTRODUCTION
Our world today is dominated by cloud computing, an efficient computing platform that has grown rapidly over the last few decades. Driven by the endless connected devices that make up the internet of things and their massive real-time computing and processing demands, as well as the astonishing evolution of communication and networking technologies, cloud computing infrastructures became unable to perform at an acceptable level in the face of these high quality requirements. Motivated by these challenges and by the move towards the 5th generation of cellular networks, mobile edge computing appeared, with storage and computational capacities that offer low latency, high bandwidth and real-time access [1]. The concept of edge computing is regarded as the mechanism that allows computation to be performed at the edge of the network [2]. However, the main challenge is that the internet of things connects many heterogeneous devices with limited local computing resources, which requires a strategy that enables these devices to offload their heavy tasks to a processing environment represented by the virtual machines deployed on nearby edge servers [3]. Therefore, computation offloading comes in useful, with the ability to overcome the resource constraints of user devices, especially for computation-intensive tasks [2].
Computation offloading is then considered as a way to offer powerful infrastructure resources to augment the computing capabilities of mobile devices [4].
In computation offloading, a mobile device can adopt one of three modes: local execution, partial offloading and full offloading [5]. This process, designed to achieve minimal task completion time and the least energy consumption, involves application partitioning, task execution and the offloading decision, on which, according to [6], the edge server depends to calculate the execution time of each mobile device's request. In this regard, since most of the existing literature considers only one or two of the said metrics, a variety of solutions have been proposed, and the related publications have, according to [7], increased in recent years, making joint optimization a promising research area. Furthermore, Shakarami et al. [8] produced a well-organized review of computation offloading mechanisms based on game theory methodology. Recent surveys [9], [10] examined many existing offloading methods, such as machine learning and artificial intelligence methodologies, which proved to have an important impact on the subject. Lin et al. [9] found that machine learning methods can solve the scalability issue in large-scale computation offloading based on a centralized mode to achieve an intelligent decision. Moreover, in [11] a detailed taxonomy of offloading mechanisms based on machine learning was proposed.
In order to improve the offloading rate and reduce the energy consumption of the device in a single-user multitask scenario, the authors of [12] dynamically adjusted the transmitting power and the local central processing unit (CPU) frequency. Adopting the same single-device multitask scenario, where each task has an execution time deadline and the device is constrained by limited energy, El Ghmary et al. [13], [14] aimed to minimize energy consumption by using simulated annealing to decide the tasks' offloading and the resource allocation. The offloading decision is obtained for a multitask scenario, since a single application running on a mobile device is generally divided into multiple tasks, which means the offloading decision concerns each one of these tasks [15]. In a more complex environment consisting of an edge server and multiple mobile devices, each having a set of tasks, Huang et al. [16] discussed computation offloading and formulated its optimization as a mixed-integer nonlinear programming problem, for which they proposed an algorithm based on time and energy consumption to optimize the computation offloading process. Moreover, the time consumption includes local and edge processing time as well as communication time, which covers uploading the input data and downloading the output result. However, most studies ignore the latter, assuming that the output data size is insignificant compared with the input data size [5], [12], [15]-[17].
Besides, knowing that internet of things sensors are generally powered by batteries with limited capacity, Khan et al. [18] added energy harvesting devices, which are increasingly considered to improve energy efficiency, alongside computation offloading in a scenario consisting of multiple mobile devices and edge servers; the authors then proposed an improved strategy based on integer linear programming to solve the energy consumption issue. Likewise, aiming at an energy-optimized model, Bi et al. [19] proposed an offloading decision optimization based on a genetic and simulated annealing method. Meanwhile, Zhang et al. [20] combined a simulated annealing method with the genetic algorithm for the purpose of selecting mobile edge servers automatically. Furthermore, in order to minimize costs, Kuang et al. [21] solved the offloading decision problem by comparing the genetic algorithm, the greedy strategy and backtracking, finding that the greedy strategy was the most suitable for their problem.
There are many tools available for researchers to simulate and obtain real-world experiment results, such as EdgeCloudSim [22], whose authors implemented various samples, each using a different architecture, to demonstrate the benefits of deploying the edge computing paradigm [23]. The existing simulation of computation offloading uses the virtual machine capacity to decide whether to offload or process locally. Given the constraints on the mobile device's energy consumption and the limited time a task should take to complete, we propose a new offloading decision making mechanism in which the local processing time is taken into consideration, as well as the mobile device's energy consumption, and compare it with the existing two-tier with edge orchestrator architecture. After introducing the theme and its literature, the paper is structured as follows. Section 2 is a description of the system model. The problem is formulated in section 3, together with the elaboration of its resolution. The simulation environment is described in section 4 and the results are presented and discussed in section 5. Finally, section 6 presents the paper's conclusion.

SYSTEM MODEL
Computation offloading in mobile edge computing was introduced to support the interconnection of resource-limited devices with the internet; it enables a mobile device to offload part of its computation to a nearby remote server in order to increase its capabilities and prolong its battery lifespan. However, computation offloading is a complicated process based on three key components: task partition, offloading decision and resource allocation [9]. It is also divided into two distinct modes: binary and partial offloading [9]. For the purpose of studying this technique, we consider an architecture where a single mobile device needs to process N independent tasks, either by executing each task locally or by offloading it to the mobile edge server according to a certain decision making mechanism. Figure 1 is an illustration of the general deployed architecture.
Figure 1. System model topology

The adopted system model involves, as said before, a single mobile device that contains a set of N independent tasks ready to be processed, either by the mobile device itself or by offloading them to a nearby edge server in case the local resources are not enough. The computation offloading decision can minimize the processing time [24], hence the response time will be drastically improved [25], as well as the energy consumption [26], [27]. Each task is represented by i ∈ N, where N = {1, 2, ..., N}, and identified by its data size D_i^size, input data size D_i^in and output data size D_i^out. In the context of studying computation offloading with a mobile edge computing server, we focus on the communication model while considering the usage of the mobile device. Assuming that tasks are independent, communication and computation costs are calculated separately, on both the mobile device and the mobile edge computing server.
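For concreteness, the task parameters above can be captured in a small data structure. This is an illustrative sketch: the class name, field names and sample values are our assumptions, not taken from the paper or from EdgeCloudSim.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # One of the N independent tasks held by the mobile device
    d_size: float  # total data size D_i^size to be processed
    d_in: float    # input data size D_i^in (uploaded when offloading)
    d_out: float   # output data size D_i^out (downloaded after processing)

# Illustrative workload of N = 5 identical tasks
tasks = [Task(d_size=4e6, d_in=1e6, d_out=2e5) for _ in range(5)]
```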

Local processing
Given D_i^size the data size of a task i ∈ N = {1, 2, ..., N}, and S^md the processing speed of the mobile device, the processing time is calculated as shown in (1):

T_i^md = D_i^size / S^md        (1)

The processing time T_i^md represents the time cost required for the mobile device to process and execute a given task i; it will be used later to calculate the local energy consumption.
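As a minimal sketch of (1), assuming data size and processing speed are expressed in consistent units (the function name is ours):

```python
def local_processing_time(d_size: float, s_md: float) -> float:
    # (1): T_i^md = D_i^size / S^md, the local execution time of task i
    return d_size / s_md

# e.g. a 4-unit task on a device processing 2 units per second takes 2 s
t_md = local_processing_time(4.0, 2.0)  # -> 2.0
```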

Edge processing
When the computation resources in the mobile device are not enough to process a certain task, it gets offloaded to the mobile edge computing server. Thus, the response time in this case involves a transmission delay, which covers both uploading the input and downloading the output, as well as the time for processing the task on the mobile edge computing server. In order to get the cost of the response time, we elaborate both the transmission delay and the processing time.
− Transmission delay: it includes both the time to upload the task's input data, denoted T_i^up, and the time to download its output data, denoted T_i^down; both depend on the transmission bandwidth bw, as shown in (2) and (3):

T_i^up = D_i^in / bw        (2)
T_i^down = D_i^out / bw        (3)

− Processing time: the time for the mobile edge computing server to process a given task, denoted T_i^mec, where S^mec is the processing speed of the mobile edge computing server:

T_i^mec = D_i^size / S^mec        (4)
− Total time cost: the total time cost in case a task i is offloaded to the mobile edge computing server, denoted T_i^off, is obtained from (2)-(4):

T_i^off = T_i^up + T_i^mec + T_i^down        (5)
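The edge-side time model above can be sketched directly, assuming the same consistent units as before (function names are ours):

```python
def upload_time(d_in: float, bw: float) -> float:
    return d_in / bw                 # (2): T_i^up = D_i^in / bw

def download_time(d_out: float, bw: float) -> float:
    return d_out / bw                # (3): T_i^down = D_i^out / bw

def edge_processing_time(d_size: float, s_mec: float) -> float:
    return d_size / s_mec            # (4): T_i^mec = D_i^size / S^mec

def offloading_time(d_in: float, d_out: float, d_size: float,
                    bw: float, s_mec: float) -> float:
    # (5): total offloading cost = upload + edge processing + download
    return (upload_time(d_in, bw)
            + edge_processing_time(d_size, s_mec)
            + download_time(d_out, bw))
```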

Local energy consumption
In the previous section we identified the equations to calculate time consumption at each level. Similarly, with each operation, whether it is processing, transmitting or waiting for a response, both the mobile device and the mobile edge computing server consume energy. In this paper, we focus on the energy consumed by the mobile device only; this consumption is composed of three parts. In case the task is processed locally, we calculate the energy cost of local processing; otherwise, the task gets offloaded and we calculate the energy cost of transmitting the data, which includes uploading the data and downloading the results, plus the idle energy, which represents the consumption while the mobile device is on hold, waiting for the task to be processed on the mobile edge computing server. The energy consumption model for the mobile device is thus divided into three modes: local processing, transmission and idle. The idle state, according to [28], corresponds to the connected-standby operation mode in which mobile devices often stay: they are idle yet connected to the communication channels for any eventual usage. This mode allows the mobile devices to run on low power, which increases battery life. The energy consumed by the mobile device while processing a task is denoted E_i^local and calculated by multiplying the time cost T_i^md by the power P_proc^md consumed by the mobile device while processing the task:

E_i^local = T_i^md × P_proc^md        (6)
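The local-processing energy formula can be sketched as below. The paper only states the local formula explicitly; the decomposition of the offloading energy into transmit, receive and idle terms is our assumption, following the three modes described above (names and power symbols are ours).

```python
def local_energy(t_md: float, p_md_proc: float) -> float:
    # (6): E_i^local = T_i^md * P_proc^md, energy of processing task i locally
    return t_md * p_md_proc

def offload_energy(t_up: float, t_down: float, t_mec: float,
                   p_tx: float, p_rx: float, p_idle: float) -> float:
    # Assumed decomposition of the three modes described above:
    # transmit while uploading, receive while downloading,
    # and stay idle while the edge server processes the task.
    return t_up * p_tx + t_down * p_rx + t_mec * p_idle
```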

Computation offloading decision
In computation offloading, the decision whether to process a certain task locally or send it to a more powerful device, the mobile edge server in our case, is crucial. This process consists of migrating computing tasks to more resourceful servers located at the edge of the network. Jaddoa et al. [25] proposed a dynamic offloading decision based on response time and energy consumption. Likewise, Liao et al. [29] implemented joint decision making based on both communication and computation resources. Meanwhile, in [24], [30], [31] the decision to offload was based on fuzzy logic, using minimum and maximum functions to determine the result of multiple combined rules within a set. In our paper, knowing that mobile devices have limited computing resources, the decision mechanism is based on setting a limit on the processing time in the mobile device in order to deliver an output within a reasonable response time, as well as balancing the usage of these resources by setting a CPU usage threshold. In a utilization-based approach, the CPU usage directly affects the consumed power [32], and it is considered that energy consumption increases linearly with CPU utilization [27], [33], [34].

PROBLEM FORMULATION AND RESOLUTION

Problem formulation
Given x_i ∈ {0, 1} the offloading decision for a certain task i, there are two possibilities: if x_i = 0, the task is processed locally; if x_i = 1, the computation is offloaded to the mobile edge server. In this paper, we aim to reduce the processing time of the tasks subject to a processing deadline and an energy consumption budget that the mobile device should not exceed. Hence, the optimization problem can be written as follows:

P1:  min_x Σ_{i∈N} [(1 − x_i) T_i^md + x_i T_i^off]
s.t. C1: x_i ∈ {0, 1}, ∀i ∈ N
     C2: T_i^md ≤ T_i^max, if x_i = 0
     C3: Σ_{i: x_i=0} E_i^local ≤ E^max
     C4: S_i^req ≤ S^mec, if x_i = 1

where the first constraint C1 states that the offloading decision x_i of a task i is a binary variable equal to 0 or 1. The second constraint C2 limits the local processing time T_i^md: if the processing time of a task i exceeds the maximum value T_i^max, which represents the maximum time cost for a single task to be processed, i.e., if T_i^md > T_i^max, the task i gets offloaded to an edge server. The third constraint C3 ensures, in case of local processing, that the energy consumed while processing the set of tasks of a certain application does not exceed a maximum value E^max, which represents the maximum energy consumption allowed on the mobile device.
Finally, using EdgeCloudSim, the load generator module generates a certain task i, and the orchestrator predicts the required virtual machine capacity and compares it with the resources available at that moment, whether on the mobile device or on the edge server. Therefore, the processing speed S_i^req required by a certain task is compared with the virtual machine processing speed S^mec in order to satisfy the fourth constraint C4. The value of S^md is defined before starting the simulation, along with the other metrics. Once the simulation begins, the load generator generates the tasks and predicts their required processing speed S_i^req in sequential order based on their start times; each task has the necessary characteristics such as data size D_i^size, input size D_i^in, output size D_i^out and the required computing resources. The objective function aims at minimizing the processing time subject to a deadline and a maximum energy consumption on the mobile device, as well as the required processing speed.
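The per-task feasibility checks behind constraints C1-C4 can be sketched as a single decision predicate. This is our illustrative reading of the mechanism, not code from EdgeCloudSim; all names are ours.

```python
def offloading_decision(t_md: float, t_max: float,
                        e_local: float, e_remaining: float,
                        s_req: float, s_mec: float) -> int:
    # Returns the binary decision x_i (C1). Offloading is triggered when
    # local processing would break the deadline (C2) or the remaining
    # energy budget (C3), and is only allowed when the edge VM's speed
    # can serve the task's requirement (C4).
    needs_offload = (t_md > t_max) or (e_local > e_remaining)
    can_offload = s_req <= s_mec
    return 1 if (needs_offload and can_offload) else 0
```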

Offloading decision mechanism
Assuming that the offloading decision is known, resolving problem P1 comes down to satisfying the processing time subject to a deadline and a maximum energy consumption in case of local processing; otherwise, in case the task is offloaded, the required processing speed is considered for the task's completion. S^md is given as a constant used to compute the local processing time, and therefore T_i^md is subject to C2. The CPU usage is calculated by EdgeCloudSim using a class that provides a realistic utilization model based on several metrics, mainly the usage percentage and the processing speed required by the task. Therefore, in case x_i = 0, the utilization model predicts the task's requirements and its CPU usage on the mobile device, and afterwards calculates the consumed energy to satisfy the third constraint C3. However, in case x_i = 1, P1 is solved by satisfying the task's processing time subject to the processing speed requirement only; hence P1 is subject to C4.
The latest version of EdgeCloudSim is provided with five different samples, each functioning differently from the others according to a certain processing scenario and orchestration mechanism. The purpose of this paper is, on one hand, to show the utility of computation offloading and its added value in terms of processing time, and on the other hand, to implement an algorithm that serves as a starting point for experimenting with different decision making mechanisms in order to obtain better results. The algorithm is an implementation of the objective function, aiming to minimize the processing time while considering a local processing deadline, a mobile device energy consumption threshold and a processing speed requirement.

Modified two tier with edge orchestrator algorithm
Our approach to the problem formulated in section 3.1 consists of a simple implementation of the objective function, which aims to minimize the processing time while considering a local processing deadline, a mobile device energy consumption threshold and a processing speed requirement. The modified two-tier with edge orchestrator algorithm is shown in Algorithm 1.
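Since the typeset listing of Algorithm 1 did not survive extraction, the following is a hedged sketch of the loop it describes: iterate over the tasks in order, set each decision x_i against the deadline, the cumulative energy budget and the edge speed requirement. All names and the exact bookkeeping are our assumptions.

```python
def modified_two_tier(task_costs, t_max, e_max, s_mec):
    # task_costs: list of (t_md_i, e_local_i, s_req_i) tuples, one per task.
    # Returns one binary decision x_i per task while tracking the
    # cumulative local energy consumption against the budget E^max.
    decisions, energy_used = [], 0.0
    for t_md, e_local, s_req in task_costs:
        over_deadline = t_md > t_max                    # C2
        over_budget = energy_used + e_local > e_max     # C3
        x = 1 if (over_deadline or over_budget) and s_req <= s_mec else 0
        if x == 0:
            energy_used += e_local                      # task stays local
        decisions.append(x)
    return decisions
```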
Algorithm 1: Modified two-tier with edge orchestrator algorithm
SIMULATION ENVIRONMENT

A number of simulation experiments have been conducted using the aforementioned simulation tool, which is an extended version of CloudSim [35] developed to match the edge computing environment and provide accurate results. A simulation starts with an initialization phase, loading the configuration files needed to run the scenario and generating a random set of tasks that will be processed sequentially according to their start times. For each task, a decision is made on whether to process it on the mobile device or offload it to an edge server; this decision is based on multiple factors.

RESULTS AND DISCUSSION
In order to compare the processing time of the generated tasks, the results were obtained by running each scenario multiple times. In each experiment, we adopted three different scenarios: Mobile, Edge and Hybrid. The first is a mobile-only processing scenario in which the tasks are executed locally on the mobile device. The second scenario depends on coarse-grained offloading [36] and offloads the entire set of tasks to an edge server. Finally, the third scenario depends on partial offloading, or what we can call fine-grained offloading [36]; in this case, only the tasks with heavy computation are offloaded, according to a decision making strategy, in order to optimize the response time. Moreover, in order to evaluate the adopted offloading decision making mechanism, we compare the results with a modified approach of the original Two-tier with Edge Orchestrator architecture. Figure 2 and Figure 3 are graphical representations of processing a load of independent tasks sequentially, in terms of response time, using the three different scenarios and opting for two different offloading decision making mechanisms. In Figure 2 we use the Two-tier with Edge Orchestrator mechanism, which depends on comparing the capacity required to process the task with the available resources. We then accumulate the time cost throughout the experiment, which resulted in total response time costs of 1272.5716, 544.3834 and 1055.8426 seconds for the Mobile, Edge and Hybrid scenarios respectively. Response time when the task is processed on the edge server is clearly better due to the available computing resources. Meanwhile, in the hybrid scenario, the mobile device depends mainly on its own resources until they are no longer enough, while at the same time offloading the tasks whose predicted usage percentage is demanding, which explains the high time consumption at first (local processing plus computation offloading transmission delays). Meanwhile, in Figure 3 the benefits of full or partial offloading to an edge server are obvious in terms of processing time.
Figure 2. Time consumption while processing a set of heavy tasks using a Two-tier with edge orchestrator architecture

Figure 3. Time consumption while processing a set of heavy tasks using a modified Two-tier with edge orchestrator architecture

Moreover, in case of partial offloading, as said before, the decision making is the crucial part. In this experiment, besides comparing the capacity required to process a task with the available resources, we rely on three metrics to decide whether to offload or not: the local processing time and the CPU usage, and hence the energy consumption. We obtained total response time costs of 1289.2961, 581.3376 and 711.8558 seconds for the Mobile, Edge and Hybrid scenarios respectively. Comparing the results obtained from the Two-tier with edge orchestrator architecture and the modified version, we find that 343.9868 seconds were saved when processing the tasks based on the customized offloading decision. Figure 4 represents the total processing cost in terms of time consumption and shows the improvement obtained by adopting the customized offloading decision making in the hybrid processing scenario. Hence, whether opting for the Two-tier with edge orchestrator architecture or a modified version of this strategy, the response time is optimized; moreover, the average response time was reduced by approximately 32.5% compared with the initial results in the hybrid scenario.
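The reported saving and relative reduction for the hybrid scenario follow directly from the two totals, as this small check shows (the figures are the ones reported above; everything else is ours):

```python
# Total hybrid-scenario response times (seconds) reported in the experiments
two_tier_hybrid = 1055.8426   # original Two-tier with edge orchestrator
modified_hybrid = 711.8558    # modified decision making mechanism

saving = two_tier_hybrid - modified_hybrid   # 343.9868 s saved
reduction = saving / two_tier_hybrid         # ~0.326, i.e. roughly 32.5%
```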

CONCLUSION
The aim of this study is to simulate the computation offloading process between a mobile device and an edge server in order to improve the processing time of completed tasks, by adopting an offloading decision that considers the virtual machine capacity, a local processing deadline, and a limited CPU usage on the mobile device. Compared with the results provided by the simulation tool EdgeCloudSim using the existing Two-tier with edge orchestrator architecture, the modified version of the said architecture reduces both processing time and energy consumption. This paper is a first step into computation offloading in edge computing, a field that has caught the attention of many researchers, and a first attempt to implement a strategy that could be extended in future work into an optimization problem where energy consumption, whether on the mobile device or on the mobile edge computing server, is also considered in the computation offloading decision. Including the cloud in our system model is also intended, to provide a larger ground for further experiments.


Figure 4. Comparing time consumption using two different offloading decision making mechanisms