Delay aware downlink resource allocation scheme for future generation tactical wireless networks

Received Apr 5, 2021; Revised Sep 21, 2021; Accepted Oct 2, 2021

Protecting the integrity of physical borders has long been considered a challenging task. Government organizations must support trade operations for economic growth while at the same time preventing malicious activity. Different resources such as drones, sensors, and radars are used for monitoring border areas, and their observations must be communicated to the remote border security force. Efficient wireless communication is required for conveying this information. However, these devices cannot connect to a centralized network directly; thus, they are connected in an ad-hoc fashion to reach the centralized server. Different tactical network applications require different quality of service (QoS); thus, efficient resource scheduling plays a very important role. Existing resource scheduling approaches adopting deep learning and reinforcement learning techniques fail to meet the quality of experience (QoE) of the user and do not assure access fairness among contending users. Further, they require network information in advance and incur long training times. To overcome these research issues, this paper presents a delay-aware downlink resource scheduling (DADRA) technique for future-generation networks. The optimization problem of reducing buffer overflow and improving scheduling QoS performance is solved using a genetic algorithm with an improved crossover function. Experimental outcomes show DADRA achieves much better throughput, slot utilization, and packet failure performance when compared with a standard resource allocation technique.


INTRODUCTION
Nowadays, the use of communication technologies in border surveillance has become inevitable. For this reason, several technologies have been proposed in the market, and each country has adopted the appropriate one according to the nature of the terrain, the climate, and the threats surrounding its territory. Border surveillance practices have moved from conventional approaches such as trenching, building insulation walls, installing barriers, and human patrols to innovative surveillance practices such as camera towers, radars, unmanned aerial vehicles (UAVs), and wireless sensor networks (WSNs). These innovative practices permit the amalgamation of a vast number of radars, satellites, UAVs, seismic sensors, and cameras. The objective of a tactical network combining the above elements is to monitor borders and construct a virtual fence [1], which produces an enormous quantity of video information that needs to be sent to a remote location on time. Thus, an efficient communication design is required. However, access to a centralized cellular network for communicating information is difficult; thus, the armed vehicles communicate information in an ad-hoc manner through intermediate devices [2] to connect to a centralized network such as a worldwide interoperability for microwave access (Wi-MAX) network [1], a long term evolution (LTE) network, or a cloud radio access network (C-RAN) [3]. This network provides dynamic quality of service (QoS) provisioning for allocating communication resources to end users.
Scheduling plays a vital part in resource management in wireless communication networks and is a major feature in provisioning quality-of-service prerequisites such as packet failure, throughput, and delay for diverse application service classes in future-generation cellular networks. Resource allocation consists of identifying the service order in accordance with priority or QoS and allocating radio resources to users. The objective is to assure fairness in provisioning resources, bound packet failures and delay, and reduce congestion. In comparison with wired networks, wireless networks exhibit dynamic behavior with respect to time; thus, scheduling resources is more challenging. Effective resource scheduling is important for utilizing scarce resources more efficiently. For enhancing application services, network schedulers have adopted QoS provisioning, as discussed in the literature below.
Zhang et al. [4] illustrated user channel access based on time slots. The set of channels is shared, and each user is allocated a transmission slot. The model presented there is time synchronized and throughput invariant. However, synchronization failure could result in packet failure, and they failed to consider the service prioritization requirements of various WiMAX services. Their model is applied without feedback, resulting in wasted bandwidth. Chang et al. [5] presented an optimal scheduling algorithm with dynamic programming (SADP) that increases the probability of spatial reuse and also increases the throughput of the network on the basis of network throughput and the uplink bandwidth request. To reduce the computational complexity, a heuristic scheduling algorithm is presented. Their model achieves better results than existing models in terms of throughput, drop rate, and time complexity; however, a performance evaluation of bandwidth utilization is not presented. In a communication network, guaranteeing QoS is a major performance factor, for which traffic policing, call admission control, and a scheduling mechanism should be present. Pranindito et al. [6] presented homogeneous scheduling strategies such as the weighted fair queuing (WFQ) scheduling algorithm and deficit round robin (DRR), as well as a hybrid strategy that merges the two. Dosciatti et al. [7] presented a scheduler for better QoS by adopting meta-heuristic swarm optimization. The optimization for provisioning QoS is done at the frame level. The model addresses packet drops by computing the time duration of the frame and tries to find the optimal value that provides better resource allocation for network users. Rahimzadeh and Ashtiani [8] analyzed the saturation throughput of a cognitive WLAN overlaid on an orthogonal frequency-division multiple-access (OFDMA) TDD network such as Wi-MAX.
Data packets are transmitted on the empty resource blocks of the primary network after contention among the secondary nodes. From the secondary network's perspective, the availability of empty resource blocks in the primary network follows a simple exponential on-off pattern. For this, they presented an analytical model comprising a discrete Markov chain and two interrelated open multi-class queuing networks to model the dynamic behavior of the secondary nodes. Owing to the random number of empty resource blocks in different frames, a random number of uplink and downlink data packets is transmitted on the wireless local area network (WLAN). They also included different resource allocations in the primary network. Experimental outcomes prove the model's accuracy under different conditions; however, scalability performance is not evaluated considering a varied number of subscribed users.
Despite significant work on QoS provisioning [9], [10], existing resource scheduling designs [11]-[14] predominantly consider improving only throughput and resource utilization under homogeneous networks; however, improving user quality of experience plays a very important role in future-generation cellular networks, which are heterogeneous in nature [15]-[17]. Recently, deep learning and reinforcement learning approaches have been adopted for resource provisioning in cellular networks [18]-[20]. However, these models require a huge amount of data in advance and long training times. Further, they do not address access-fairness issues [21] in provisioning resources to mobile subscribers, and they induce large delays when scheduling new packets [21]. Thus, to overcome these research challenges, this paper presents a delay-aware resource scheduling scheme. The delay-aware downlink resource scheduling (DADRA) scheme brings a good tradeoff between reducing backlogged packets in the buffer and improving the scheduling performance of newly arrived packets. The optimization is done through a genetic algorithm with an improved crossover mechanism.

DOWNLINK RESOURCE ALLOCATION SCHEME FOR FUTURE GENERATION TACTICAL NETWORK
This section presents the DADRA method for future-generation tactical networks. A tactical network is expected to connect different military defense vehicles through a cellular network, where these devices connect through a central network as well as in an ad-hoc manner [22]. Each vehicle requires a different QoS mechanism such as high throughput, low delay, high resource utilization, or a combination of these; thus, QoS provisioning for multiple services in a tactical network is challenging [23]. Therefore, this research aims at designing efficient downlink resource allocation considering different service classes. DADRA first presents a multi-objective performance metric for QoS provisioning when allocating downlink resources to end users. Then, a genetic algorithm is applied to set the decision-making parameters, bringing a good tradeoff between scheduling newly arrived packets and backlogged packets [24].

Downlink resource allocation metrics
This section presents buffer-aware resource allocation metrics and an algorithm for mobile subscribers that operate in unknown and complex territories. We first define the QoS metrics affecting downlink resource allocation performance. In the downlink resource allocation scheme, the base station first estimates the number of packets remaining in the downlink queue at the beginning of a new connection and decides what portion of the backlogged packets in the queue is scheduled along with new packets in the present communication slot. Let the buffer size of communication channel $c$ at the start of a new connection slot be represented as $q_c(0)$; then the number of packets $u_c(0)$ given access to communicate in the respective time slot is defined using (1):

$$u_c(0) = \min\bigl(\max\bigl(1, \lceil \beta_c\, q_c(0) \rceil\bigr),\, B_c\bigr), \qquad q_c(0) > 0, \tag{1}$$

where $B_c$ represents the maximal capacity constraint (i.e., number of packets) on the buffer size, and $\beta_c$ ($0 < \beta_c < 1$) is an optimization parameter dependent on communication channel $c$. An important thing to note here is that at least one packet is scheduled from every non-empty queue. This is done to prevent a slot from being unutilized for a long period of time because of light traffic.
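As a quick illustration of (1), the per-slot grant for a backlogged queue can be sketched as follows. This is a minimal sketch: the function name is illustrative, and the min/max form paraphrases the stated rule that at least one packet is served from a non-empty queue while the grant never exceeds the capacity constraint.

```python
import math

def packets_to_schedule(q, beta, cap):
    """Backlogged packets granted access in the current slot (cf. eq. (1)).

    q    -- queue backlog at the start of the slot
    beta -- channel-dependent tradeoff parameter, 0 < beta < 1
    cap  -- maximal capacity constraint (packets) on the buffer size
    At least one packet is served from a non-empty queue, so a slot is
    never left idle for long because of light traffic.
    """
    if q == 0:
        return 0
    return min(max(1, math.ceil(beta * q)), cap)
```

For example, with beta = 0.5 a backlog of 7 packets yields a grant of 4, while a very large backlog is clipped to the capacity constraint.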
Incoming packet arrivals are estimated through a Markov Poisson process (MPP). Let $A$ define the maximum number of packets that can arrive in an individual time slot. Because arrivals in successive slots are independent, every row of the transition probability matrix is the arrival distribution, giving (2):

$$P = \begin{bmatrix} p(0) & p(1) & \cdots & p(A) \\ \vdots & \vdots & & \vdots \\ p(0) & p(1) & \cdots & p(A) \end{bmatrix}, \tag{2}$$

where $0 \le h \le A$ and $p(h)$ is computed as (3):

$$p(h) = \frac{\lambda^h e^{-\lambda}}{h!}, \quad 0 \le h < A, \qquad p(A) = 1 - \sum_{h=0}^{A-1} p(h). \tag{3}$$

The matrix $P$ defines the state transition probabilities after the arrival of packets in the respective time slots. Truncating at $A$ induces a certain error with respect to the Poisson process, because the number of arrivals is unbounded in a real-time environment. Thus, in this work, the truncation error is bounded within $\epsilon$ as described in (4):

$$1 - \sum_{h=0}^{A} \frac{\lambda^h e^{-\lambda}}{h!} \le \epsilon. \tag{4}$$

The above equation bounds the maximum modelling error that may arise when the maximum number of incoming packets per slot is configured to $A$. Considering that the matrix $P$ is independent with respect to time, the stable-state probability vector $\gamma$ of incoming packets can be established using (5):

$$\gamma P = \gamma, \qquad \gamma \mathbf{1} = 1, \tag{5}$$

where $\mathbf{1}$ represents the column vector of ones with the correct dimension. Thus, the average incoming packet arrival rate $\delta$ can be obtained using (6):

$$\delta = \sum_{h=0}^{A} h\, \gamma_h. \tag{6}$$

From these quantities we compute QoS measures such as packet failure, throughput, and delay for provisioning different application services. It is important to achieve a good tradeoff between scheduling newly arrived packets and the backlogged packets kept in buffers. Note that an arriving packet might be put into the buffer due to a burst in arriving traffic; thus, an arriving packet can be placed in any buffer position. Let $w(0)$ define the re-blocking probability vector with respect to the packets seen ahead by an arriving packet, as in (7):

$$w(0) = \bigl(w_0(0), w_1(0), \ldots, w_{B+A-1}(0)\bigr), \tag{7}$$

where $w_j(0)$ $(0 \le j \le B + A - 1)$ is the probability that a newly arrived packet perceives $j$ packets ahead of it.
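The truncated arrival model of (2)-(6) can be sketched as below. The helper names are assumptions of this sketch, and the residual tail mass beyond the truncation point is lumped into the last state so the distribution still sums to one.

```python
import math

def truncated_poisson(lam, a_max):
    """Arrival distribution p(0..a_max) with rate lam, truncated at a_max;
    the residual Poisson tail mass is lumped into the last entry."""
    p = [math.exp(-lam) * lam**h / math.factorial(h) for h in range(a_max)]
    p.append(1.0 - sum(p))
    return p

def modelling_error(lam, a_max):
    """Tail probability Pr[arrivals > a_max] ignored by the truncation (cf. (4))."""
    return 1.0 - sum(math.exp(-lam) * lam**h / math.factorial(h)
                     for h in range(a_max + 1))

def mean_arrival_rate(p):
    """Average packets per slot under stationary distribution p (cf. (6))."""
    return sum(h * ph for h, ph in enumerate(p))
```

Because per-slot arrivals are drawn independently, the stationary vector of the arrival chain coincides with the arrival distribution itself in this sketch.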
Thus, the buffer overflow probability upon arrival of a new packet into the communication environment can be computed from the mass of the re-blocking vector beyond the buffer capacity, as in (8):

$$P_{of} = \sum_{j \ge B} w_j(0). \tag{8}$$

In this work, the delay induced for admitting packets is computed in a probabilistic manner from the re-blocking vector in (9). Similarly, the cumulative packet delay $\mathcal{E}$ without considering the arrival frame is obtained in (10). Next, using the above delay equations, the packet failure probability at the buffer level is computed in (11). However, that equation does not consider packet failure at the receiver side due to propagation environment factors. Then, using the packet failure probability, the throughput is computed without considering any error recovery mechanism as (12):

$$T = \delta \left(1 - \mathrm{PFP}\right), \tag{12}$$

where PFP defines the mean packet failure probability of the future-generation tactical network. Next, this work establishes the buffer size distribution and its delay. The bounded probabilistic delay with respect to buffer size is expressed in (13). Similarly, the mean buffer size $\bar{Q}$ is expressed in (14), and the mean delay is expressed in (15):

$$\bar{D} = \frac{\bar{Q}}{\delta\,(1 - \mathrm{PFP})}. \tag{15}$$
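The closing relations reduce to one-line computations: (12) scales the admitted arrival rate by the survival probability, and the mean delay follows a Little's-law form, which is an assumption of this sketch.

```python
def throughput(delta, pfp):
    """Throughput without error recovery (cf. (12)): mean arrival rate
    delta times the probability a packet is not lost (PFP = mean packet
    failure probability)."""
    return delta * (1.0 - pfp)

def mean_delay(mean_buffer, delta, pfp):
    """Mean packet delay from mean buffer size via Little's law on the
    effective (loss-adjusted) rate -- an assumed form for (15)."""
    return mean_buffer / (delta * (1.0 - pfp))
```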

Delay aware downlink resource allocation algorithm
DADRA uses the optimized outcome of the previous QoS measures. DADRA makes sure each individual user meets its QoS requirement with an access-fairness guarantee, i.e., without incurring delay to other contending users. Existing resource allocation models focus only on maximizing system throughput; thus, they may fail to guarantee QoS for the traffic flows of users. In contrast, DADRA enhances network performance for provisioning service flows, meeting QoS by assigning a smaller number of slots. Further, the scheduling mechanism assures access fairness (i.e., bounded delay) in provisioning service to end users.
Thus, in our work, both fairness and QoS are given equal importance in allocating resources to end users. Algorithm 1 is used to compute the QoS measures required for provisioning resources to a connection.

Algorithm 1. Compute the QoS measures for the respective connection during resource allocation for communication.
Input: the packet arrival features for the connection, the iteration size, and the quality-of-service prerequisites for the connection.
Output: the QoS parameters.
Step 3. Iterate.

The tradeoff parameter is used in our DADRA model for determining the minimal number of packets that must be granted resources from a connection at different instants. However, the actual number of packets that must be granted resources is established using Algorithm 2.

Genetic algorithm with improved crossover mechanism
The genetic algorithm [25] is used for setting an ideal parameter value that brings a good tradeoff between reducing buffer overflow and scheduling newly arrived packets. As discussed in [22], the genetic algorithm is very efficient at solving multi-objective decision making. Let there be a total of $N$ base stations and $M$ subscriber stations under each base station. Each chromosome consists of $(N + M)$ sub-chromosomes of a feasible solution. Each resultant sub-chromosome consists of sub-genes $(b_1, b_2, b_3, \ldots)$, denoting the bandwidth parameters allocated to the different subscriber stations, and $(q_1, q_2, q_3, \ldots)$, denoting the buffer size of each connection. The objective of the proposed fitness function is to reduce the backlogged packets in the buffer and maximize the scheduling of newly arrived packets. Hence, the following weighted fitness function is presented:

$$\mathcal{F} = w_1 \mathcal{F}_q + w_2 \mathcal{F}_s,$$

where $\mathcal{F}_q$ and $\mathcal{F}_s$ are the sub-fitness functions that minimize the backlogged packets and maximize the newly-arrived-packet scheduling efficiency, respectively; $w_1$ and $w_2$ are the weights for minimizing the backlogged queue size and maximizing the scheduling efficiency, respectively; $P_h$ is the queue size of sub-channel $h$; $\mathcal{H}$ is the number of sub-channels; $P_{max}$ is the maximum buffer size of a single channel; $A_{max}$ is the maximum number of newly arrived packets that can be scheduled in a single channel; and $a_h$ is the number of newly arrived packets in sub-channel $h$. For simplicity we set $w_1 = w_2 = 0.5$. Algorithm 2 performs packet scheduling, assigning slots to downlink connections within the currently available channel.
S ← number of slots available in the downlink sub-channel − number of slots assigned to downlink sub-channel overhead.
K ← number of downlink quality-of-service associations; [c1, …, cK] ← unique connection IDs of the downlink quality-of-service associations.
b ← bytes per slot with the modulation and coding scheme chosen for the connection; [q1, …, qK] are the respective buffer sizes.
For k ← 1 … K do. Step 13. s ← byte size of the topmost packets within the buffer of ck.
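The weighted fitness function presented above can be sketched as follows. The normalisation of the two sub-fitness terms to [0, 1] is an assumption of this sketch, since the text gives only the weighted sum with both weights set to 0.5.

```python
def fitness(queue_sizes, scheduled, buf_max, sched_max, w1=0.5, w2=0.5):
    """Weighted fitness: reward small backlog and high scheduling efficiency.

    queue_sizes -- backlog of each sub-channel
    scheduled   -- newly arrived packets scheduled on each sub-channel
    buf_max     -- maximum buffer size of a single channel
    sched_max   -- maximum newly arrived packets schedulable on a channel
    """
    h = len(queue_sizes)                                  # number of sub-channels
    f_backlog = 1.0 - sum(queue_sizes) / (h * buf_max)    # 1 when buffers empty
    f_sched = sum(scheduled) / (h * sched_max)            # 1 when fully utilised
    return w1 * f_backlog + w2 * f_sched
```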
Let the initial population be generated arbitrarily; it can be acquired at random within the stipulated parameter bounds. Then, based on the fitness parameter, the chromosomes of the next population are chosen. Chromosomes with a higher fitness parameter have a greater chance of being selected. In our case we adopt the roulette wheel selection methodology: based on the fitness parameter, the chromosomes are placed on a roulette wheel, and the selection regeneration likelihood of each chromosome is computed from the fitness function as follows:

$$Pr_i = \frac{F(i)}{\sum_{j=1}^{S} F(j)},$$

where $Pr_i$ depicts the selection regeneration likelihood of the $i$th chromosome, $F(i)$ is the fitness function computed for the $i$th chromosome, and $S$ represents the population size. An arbitrary parameter $r$ between zero and one is compared with the cumulative selection likelihood of the chromosomes; the chromosome whose cumulative likelihood first exceeds $r$ is selected and the others are removed. In this manner a new population is formed. The evolutionary operators (crossover and mutation) are crucial in a genetic algorithm. However, the genetic algorithms presented so far are not efficient, since the likelihoods are fixed during the evolutionary process; as a result, they are not suitable for multi-objective fitness optimization. This model presents an adaptive evolutionary scheme that adaptively updates the likelihoods based on the fitness parameter. The likelihood when the fitness exceeds the average fitness is computed as (18), and the likelihood otherwise is computed as (19), where $\mathcal{F}_{min}$, $\mathcal{F}_{max}$, and $\bar{\mathcal{F}}$ denote the minimum, maximum, and average fitness parameters, respectively, and $k_1$ and $k_2$ denote the likelihood update parameters of crossover and mutation. The crossover operation is a critical factor in identifying an optimal solution. The existing crossover operation is not efficient and is time consuming. To address this, this work presents an efficient and fast adaptive crossover selection scheme.
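The roulette wheel selection described above (selection probability proportional to fitness, compared against a random draw) can be sketched as:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Pick one chromosome with probability proportional to its fitness.

    A uniform draw r in [0, total fitness) is compared against the
    cumulative fitness; the first chromosome whose cumulative sum
    exceeds r is selected.
    """
    r = rng.random() * sum(fitnesses)
    acc = 0.0
    for chrom, fit in zip(population, fitnesses):
        acc += fit
        if r < acc:
            return chrom
    return population[-1]   # guard against floating-point round-off
```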
Two crossover operators are used, namely single-level crossover (SLC) and two-level crossover (TLC). In SLC, two chromosomes are chosen to carry out the crossover, and the likelihood of crossover is computed using (18) and (19). The crossover point is arbitrarily selected from the two parent chromosomes, except for the interior of sub-chromosomes. SLC permits the individual crossover as shown in Figure 1. Similarly, for a given sub-chromosome, the TLC process is required to increase the search speed and range. Each individual chromosome decides whether to carry out TLC based on the likelihood of TLC. If TLC is performed, only one sub-chromosome is chosen and its gene order is changed, as shown in Figure 2. This aids in improving the search range of TLC while assuring that the chromosome satisfies the optimization parameter constraints. The proposed adaptive crossover scheme (SLC and TLC) enhances the search space and convergence speed. Subsequently, in the next population, ideal genes are searched for as stable chromosomes. The mutation process plays an important role in avoiding sub-optimal solutions and finding the optimal solution. To represent a gene, the adaptive crossover searching model adopts a decimal encoding technique. Here, we adopt mutation on a per-bit basis: a chromosome chosen for mutation has an arbitrarily chosen bit flipped from 1 to 0 or 0 to 1. This helps the proposed scheme reduce computational complexity, so a large population size can be set at initialization. The flow of the proposed adaptive crossover searching scheme is presented below. The performance efficiency of the DADRA method over the existing resource allocation scheme [8] is evaluated in the next section.
Step 1. First sense the area for the set of inputs.
Step 2. The inputs are encoded to convert them into chromosomes.
Step 4. Add them to population ℝ.
Step 7. Add them to ℝ1. Step 8. End the for loop.
Step 11. Collect the crossover processing parameters.
Step 13. Offspring parameters in ℝ2 are updated.
Step 14. Collect the mutation processing parameters.
Step 15. Compute the crossover processing parameters and update the crossover rate.
Step 16. Compute the mutation processing parameters and update the mutation rate.
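The single-level crossover and per-bit mutation described in this section can be sketched as follows. The requirement that the crossover point avoid the interior of sub-chromosomes is enforced by cutting only at sub-chromosome boundaries; the function names and the gene encoding are illustrative assumptions.

```python
import random

def single_level_crossover(p1, p2, sub_len, rng=random):
    """SLC sketch: the crossover point is chosen only at sub-chromosome
    boundaries, so no per-connection gene group is split in the middle."""
    n_subs = len(p1) // sub_len
    cut = rng.randrange(1, n_subs) * sub_len   # a boundary, never the interior
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate_per_bit(chromosome, rng=random):
    """Per-bit mutation: flip one arbitrarily chosen bit (1 -> 0 or 0 -> 1)."""
    c = list(chromosome)
    i = rng.randrange(len(c))
    c[i] ^= 1
    return c
```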

SIMULATION RESULTS AND ANALYSIS
Here, experiments are conducted to evaluate the performance of DADRA over the existing resource allocation mechanism [8]. Both models are evaluated through simulation, implemented using a customized simulator written in the C# programming language. Different kinds of service classes are considered, such as constant bit rate, real-time polling, and non-real-time polling services. The metrics of packet loss rate, throughput, and slot utilization are considered for the performance evaluation of the DADRA method over the existing resource allocation mechanism.

Packet failure performance evaluation
Here, the experiment is conducted by varying the subscriber station size; the number of packets failed in the network is noted and is graphically shown in Figure 3. DADRA reduces packet failure by 81.74%, 70.79%, 70.02%, and 72.44% over the standard resource allocation method when the subscriber station size is set to 10, 20, 30, and 40, respectively. From the results we can state that packet failure increases with increasing subscriber station density. This shows how efficiently DADRA schedules newly arrived packets in a future-generation tactical network. An average packet failure reduction of 73.44% is attained by the DADRA method over the standard resource allocation scheme. From the overall results, the DADRA method is efficient irrespective of subscriber station density with the different QoS prerequisites of different applications.

Slot utilization performance evaluation
Here, the experiment is conducted by varying the subscriber station size; the slot utilization performance in the network is noted and is graphically shown in Figure 4. DADRA improves slot utilization performance by 8.736%, 11.58%, 13.92%, and 17.18% over the standard resource allocation method when the subscriber station size is set to 10, 20, 30, and 40, respectively. From the results we can state that the slot utilization performance achieved by DADRA is stable irrespective of subscriber station size; however, with standard resource allocation, slot utilization degrades significantly as the density of subscriber stations increases.

Throughput performance evaluation
Here, experiments are conducted by varying the subscriber station size and the speed of the subscriber stations; the throughput performance in the network is noted and is graphically shown in Figure 5 and Figure 6. DADRA improves throughput performance by 19.6%, 18.407%, 24.07%, and 34.47% over the standard resource allocation method when the subscriber station size is set to 10, 20, 30, and 40, respectively. From the results we can state that the throughput achieved by both DADRA and the standard resource allocation method increases as more packets are generated in the network; nonetheless, DADRA achieves much better throughput performance. Further, to study the impact of mobility, the experiment is conducted by varying the speed of the subscriber stations. DADRA improves throughput performance by 17.093%, 35.11%, 37.65%, and 32.32% over the standard resource allocation method when the subscriber station mobility speed is set to 3, 4, 5, and 6, respectively. From the results it can be stated that as the speed increases, the throughput performance is significantly impacted for both DADRA and the standard resource allocation (SRA) scheme; however, DADRA achieves much better performance than SRA. An average throughput improvement of 24.113% and 30.23% is attained by DADRA over the standard resource allocation scheme under varied subscriber station density and varied mobility speed, respectively. From the overall results, the DADRA method is efficient irrespective of subscriber station density and mobility speed with the different QoS prerequisites of different applications.

CONCLUSION
Providing quality of service and quality of experience together is a challenging task, especially when the application and the network are dynamic in nature. To achieve this, this work presented delay-aware resource allocation metrics based on a Markov decision process. Then, the optimization problem of reducing the buffer backlog while improving the scheduling performance of newly arrived packets was formulated and solved using a genetic algorithm with an improved crossover function. Experiments were conducted to evaluate the performance in terms of slot utilization and throughput. Packet failure performance is evaluated to validate the quality of experience of end users. Further, throughput performance is evaluated at varying speeds to study the impact of mobility on the overall throughput achieved in the network. The outcomes show DADRA reduces packet failure by 73.44% over the standard resource allocation scheme and improves slot utilization by 12.85%. DADRA improves throughput by 24.113% and 30.23% under varied subscriber station density and mobility speed, respectively. Future work will consider performance evaluation under more dynamic conditions with high mobility and traffic patterns in highly dense and noisy communication environments.