Rules of photography for image memorability analysis

IET Image Processing

Photos have become increasingly widespread in the digital age: cameras, smartphones, and the Internet make large datasets of images available to a wide audience. Assessing the memorability of these photos is a challenging task, and finding a representative model of memorable images would enable memorability prediction. The authors develop a new approach to evaluating image memorability based on rules of photography. They use three groups of features: basic image features, layout features, and image composition features. In addition, they introduce a diversified panel of classifiers based on data mining techniques for memorability analysis. They evaluate the proposed approach and compare its results with those of state-of-the-art approaches to image memorability. The experimental results show that the models used in their approach are promising predictors of image memorability.
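To illustrate the kind of composition features the abstract refers to, the sketch below computes two toy photographic-rule descriptors on a grayscale image: a rule-of-thirds feature (intensity concentration around the four "power points") and a sharpness proxy (variance of a discrete Laplacian, a common no-reference blur measure). This is a minimal illustration under assumed conventions, not the authors' implementation; function names, the window size, and the normalisation are all hypothetical choices.

```python
# Hypothetical sketch (NOT the paper's implementation): two toy
# rule-of-photography features on a grayscale image, represented as a
# list of rows of pixel intensities in [0, 255].

def thirds_feature(img):
    """Mean intensity in windows around the four rule-of-thirds power
    points, normalised by the global mean intensity (>1 suggests the
    subject sits near a thirds intersection)."""
    h, w = len(img), len(img[0])
    points = [(h // 3, w // 3), (h // 3, 2 * w // 3),
              (2 * h // 3, w // 3), (2 * h // 3, 2 * w // 3)]
    win = max(1, min(h, w) // 10)  # window half-size (assumed choice)
    vals = []
    for r, c in points:
        patch = [img[i][j]
                 for i in range(max(0, r - win), min(h, r + win + 1))
                 for j in range(max(0, c - win), min(w, c + win + 1))]
        vals.append(sum(patch) / len(patch))
    global_mean = sum(sum(row) for row in img) / (h * w)
    return (sum(vals) / len(vals)) / (global_mean + 1e-9)

def sharpness_feature(img):
    """Variance of a 4-neighbour Laplacian response; higher values
    indicate a sharper (less blurred) image."""
    h, w = len(img), len(img[0])
    resp = [4 * img[i][j] - img[i - 1][j] - img[i + 1][j]
            - img[i][j - 1] - img[i][j + 1]
            for i in range(1, h - 1) for j in range(1, w - 1)]
    m = sum(resp) / len(resp)
    return sum((x - m) ** 2 for x in resp) / len(resp)
```

In the pipeline the abstract describes, such per-image feature vectors would then be fed to a panel of classifiers (e.g. SVMs or decision trees) to predict a memorability score.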

DOI: 10.1049/iet-ipr.2017.0631