Survey on visual sentiment analysis

Visual Sentiment Analysis aims to understand how images affect people in terms of the emotions they evoke. Although the field is relatively new, a broad range of techniques has been developed for various data sources and problems, resulting in a large body of research. This paper reviews the pertinent publications and presents an exhaustive overview of the field. After a description of the task and its related applications, the subject is examined under its main headings. The paper describes the principles of designing general Visual Sentiment Analysis systems from three main points of view: emotional models, dataset definition, and feature design. A formalization of the problem is discussed, considering different levels of granularity as well as the components that can affect the sentiment toward an image in different ways. To this aim, the paper considers a structured formalization of the problem that is commonly used for the analysis of text, and discusses its suitability in the context of Visual Sentiment Analysis. The paper also describes new challenges, evaluates progress toward more sophisticated systems and related practical applications, and summarizes the insights resulting from this study.

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2019.1270