Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning

Ioan Sorin Comsa, Mehmet Aydin, Sijing Zhang, Pierre Kuonen, Jean–Frédéric Wagen
Copyright: © 2012 | Volume: 3 | Issue: 2 | Pages: 19
ISSN: 1947-3532 | EISSN: 1947-3540 | EISBN13: 9781466611764 | DOI: 10.4018/jdst.2012040103
Cite Article

MLA

Comsa, Ioan Sorin, et al. "Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning." IJDST, vol. 3, no. 2, 2012, pp. 39-57. http://doi.org/10.4018/jdst.2012040103

APA

Comsa, I. S., Aydin, M., Zhang, S., Kuonen, P., & Wagen, J.-F. (2012). Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning. International Journal of Distributed Systems and Technologies (IJDST), 3(2), 39-57. http://doi.org/10.4018/jdst.2012040103

Chicago

Comsa, Ioan Sorin, et al. "Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning," International Journal of Distributed Systems and Technologies (IJDST) 3, no. 2 (2012): 39-57. http://doi.org/10.4018/jdst.2012040103


Abstract

Intelligent packet scheduling is essential for efficient use of radio resources in recent high-bit-rate radio access technologies such as Long Term Evolution (LTE). The packet scheduler works with various dispatching rules, each with different behavior. In the literature, a single scheduling discipline is applied for the entire transmission session, and the scheduler's performance depends strongly on the discipline chosen. The method proposed in this paper instead builds the schedule within each transmission time interval (TTI) sub-frame using a mixture of dispatching disciplines per TTI rather than a single rule adopted across the whole transmission, with the aim of maximizing system throughput while assuring the best user fairness. This requires a policy for how to mix the rules and a refinement procedure to select the best rule each time. Two scheduling policies are proposed for mixing the rules, and a Q-learning algorithm is used to refine them. Simulation results indicate that the proposed methods outperform existing scheduling techniques, maximizing system throughput without harming user fairness.
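To make the per-TTI rule-selection idea concrete, the following Python sketch shows how one-step Q-learning might drive the choice of dispatching rule at each TTI. It is a minimal illustration, not the authors' implementation: the candidate rule set (Round Robin, Proportional Fair, Max C/I are common LTE disciplines), the state discretization, the throughput-times-fairness reward, and the `simulator.schedule_tti` call are all assumptions made for the example.

```python
# Illustrative sketch of per-TTI dispatching-rule selection via Q-learning.
# The rule set, state binning, reward shape, and simulator API are assumptions,
# not the paper's actual design.
import random
from collections import defaultdict

RULES = ["round_robin", "proportional_fair", "max_ci"]  # candidate dispatching rules

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

# Q-table: maps (discretized state, rule) -> estimated long-term value
q_table = defaultdict(float)

def discretize(throughput, fairness):
    """Map continuous system metrics to a coarse discrete state (assumed binning)."""
    return (int(throughput // 10), round(fairness, 1))

def choose_rule(state):
    """Epsilon-greedy selection of a dispatching rule for the current TTI."""
    if random.random() < EPSILON:
        return random.choice(RULES)
    return max(RULES, key=lambda r: q_table[(state, r)])

def update(state, rule, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, r)] for r in RULES)
    q_table[(state, rule)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, rule)])

def run_tti(simulator, state):
    """Schedule one TTI: pick a rule, apply it, observe throughput and fairness."""
    rule = choose_rule(state)
    throughput, fairness = simulator.schedule_tti(rule)  # hypothetical simulator call
    # Reward trades cell throughput off against user fairness (assumed form).
    reward = throughput * fairness
    next_state = discretize(throughput, fairness)
    update(state, rule, reward, next_state)
    return next_state
```

Selecting a rule per TTI rather than per session lets the scheduler adapt as channel conditions and queue states change, which is the intuition behind mixing disciplines within the transmission.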
