Efficient exploration in reinforcement learning-based cognitive radio spectrum sharing

This study introduces two novel approaches, pre-partitioning and weight-driven exploration, to enable an efficient learning process in the context of cognitive radio. Learning efficiency is crucial when reinforcement learning is applied to cognitive radio, because cognitive radio users cause a higher level of disturbance during the exploration phase. The study investigates how to carefully control the trade-off between exploration and exploitation so that a learning-enabled cognitive radio can learn efficiently from its interactions with a dynamic radio environment. In the pre-partitioning scheme, the potential action space of each cognitive radio is reduced by initially assigning it a random partition of the spectrum, so cognitive radios complete their exploration stage faster than under more basic reinforcement learning-based schemes. In the weight-driven exploration scheme, exploitation is merged into exploration: the knowledge gained during exploration influences action selection, which makes the exploration phase more efficient. Learning efficiency in a cognitive radio scenario is defined, and the learning efficiency of the proposed schemes is evaluated. Simulation results show that pre-partitioning and weight-driven exploration make the exploration of cognitive radios more efficient and improve system performance accordingly.
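
The two schemes lend themselves to a brief illustration. The Python sketch below is a minimal interpretation of the description above, not the authors' algorithm: pre_partition, WeightDrivenAgent, the weight-update constants and the toy channel model are assumptions introduced purely for illustration. It shows a cognitive radio that starts from a randomly assigned subset of the spectrum (pre-partitioning) and then selects channels with probability proportional to the weights it has learned (weight-driven exploration), so knowledge gained while exploring immediately shapes action selection.

import random


def pre_partition(n_channels, fraction=0.5):
    """Pre-partitioning (sketch): each cognitive radio is initially handed a
    random subset of the spectrum, so it has fewer channels to explore."""
    k = max(1, int(n_channels * fraction))
    return sorted(random.sample(range(n_channels), k))


class WeightDrivenAgent:
    """Cognitive-radio user that biases exploration by learned channel weights.

    Channel weights grow on interference-free transmissions and shrink on
    collisions; selection samples channels in proportion to those weights,
    so exploration already exploits what has been learned.
    """

    def __init__(self, channels, init_weight=1.0, reward=1.0, penalty=0.5):
        self.weights = {ch: init_weight for ch in channels}
        self.reward = reward
        self.penalty = penalty

    def select_channel(self):
        """Weight-driven exploration: pick a channel with probability
        proportional to its current weight, rather than uniformly at random."""
        total = sum(self.weights.values())
        r = random.uniform(0.0, total)
        cumulative = 0.0
        for ch, w in self.weights.items():
            cumulative += w
            if r <= cumulative:
                return ch
        return ch  # floating-point edge case: fall through to the last channel

    def update(self, channel, success):
        """Reinforce channels that gave interference-free transmission."""
        if success:
            self.weights[channel] += self.reward
        else:
            self.weights[channel] = max(self.weights[channel] - self.penalty, 0.1)


if __name__ == "__main__":
    # Toy run: 10 channels, agent pre-partitioned to half of them;
    # channel 2 (if present) always succeeds, the others fail half the time.
    channels = pre_partition(n_channels=10, fraction=0.5)
    agent = WeightDrivenAgent(channels)
    for _ in range(500):
        ch = agent.select_channel()
        success = (ch == 2) or (random.random() < 0.5)
        agent.update(ch, success)
    print("partition:", channels)
    print("learned weights:", {ch: round(w, 1) for ch, w in agent.weights.items()})

Because selection probabilities track the weights, channels that repeatedly cause collisions are visited less and less often, which is the sense in which exploitation is folded into the exploration phase.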

Inspec keywords: cognitive radio; learning (artificial intelligence)

Other keywords: cognitive radio spectrum; reinforcement learning; dynamic radio environment

Subjects: Radio links and equipment; Neural computing techniques
