Value-based deep reinforcement learning for adaptive isolated intersection signal control

IET Intelligent Transport Systems
http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2018.5170

Abstract

Intelligent transportation systems aim to characterize a smart city by improving the efficiency of road networks through advanced traffic signal control methods. Recently, owing to significant progress in artificial intelligence, machine-learning-based frameworks for adaptive traffic signal control have attracted considerable attention. In particular, the deep Q-learning neural network is a model-free technique that can be applied to optimal action-selection problems. However, variable green time is a key mechanism for responding to traffic fluctuations, so time steps in the reinforcement learning framework need not be fixed intervals. In this study, the authors propose a dynamic discount factor embedded in the iterative Bellman equation to prevent biased estimation of the action-value function caused by inconstant time-step intervals. Moreover, the action is added to the input layer of the neural network during training, and the output layer is the estimated action value for that action. The trained neural network can then be used as the agent's policy by generating, from a finite action set, the action with the optimal estimated value. Preliminary results show that the trained agent outperforms a fixed timing plan in all testing cases, reducing system total delay by 20%.
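
The dynamic discount factor can be made concrete with a semi-Markov-style Bellman target. The following is a sketch under the assumption (not spelled out in the abstract) that a per-unit-time discount gamma is raised to the power of the variable interval:

\[
Q(s_k, a_k) \leftarrow r_k + \gamma^{\Delta t_k} \max_{a' \in \mathcal{A}} Q(s_{k+1}, a')
\]

Here \(\Delta t_k\) is the elapsed time of the k-th (variable-length) green phase, so transitions of different durations are discounted consistently and the action-value estimate is not biased toward short or long steps.

Likewise, the action-in-the-input-layer design and the greedy policy over a finite action set can be sketched as below. This is a minimal illustration, not the authors' implementation: the PyTorch framework, layer sizes, one-hot action encoding, and the helper names are all assumptions.

# Minimal sketch (assumptions as noted above) of a Q-network that takes the
# action as part of the input and emits a single action-value, plus the
# variable-interval discounting in the training target.
import torch
import torch.nn as nn

class ActionInQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_actions = n_actions
        # Input = state features concatenated with a one-hot action encoding.
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q(s, a) for the denoted action
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        one_hot = nn.functional.one_hot(action, self.n_actions).float()
        return self.net(torch.cat([state, one_hot], dim=-1)).squeeze(-1)

    def greedy_action(self, state: torch.Tensor) -> torch.Tensor:
        # Policy: evaluate every action in the finite set, take the argmax.
        batch = state.shape[0]
        q_all = torch.stack([
            self.forward(state, torch.full((batch,), a, dtype=torch.long))
            for a in range(self.n_actions)
        ], dim=-1)                      # shape: (batch, n_actions)
        return q_all.argmax(dim=-1)

def td_target(q_net, reward, next_state, dt, gamma=0.99):
    # Dynamic discount: gamma raised to the elapsed (variable) green time dt,
    # so transitions of different durations are discounted consistently.
    with torch.no_grad():
        a_star = q_net.greedy_action(next_state)
        q_next = q_net(next_state, a_star)
    return reward + gamma ** dt * q_next

# Example: an 8-dimensional state and 4 candidate green-time actions.
net = ActionInQNet(state_dim=8, n_actions=4)
s = torch.randn(2, 8)            # a batch of two states
print(net.greedy_action(s))      # greedy action index for each state

At decision time the agent evaluates the trained network once per candidate action and picks the maximiser, which realises the "optimal estimated value within a finite set" policy the abstract describes.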
