
Contextual and social cues may dominate natural visual search

Published online by Cambridge University Press: 24 May 2017

Linda Henriksson
Affiliation:
Department of Neuroscience and Biomedical Engineering, Aalto University, 00076 AALTO, Finland; linda.henriksson@aalto.fihttps://people.aalto.fi/linda_henriksson
Riitta Hari
Affiliation:
Department of Neuroscience and Biomedical Engineering, Aalto University, 00076 AALTO, Finland; linda.henriksson@aalto.fihttps://people.aalto.fi/linda_henriksson Department of Art, Aalto University, 00076 AALTO, Finland. riitta.hari@aalto.fihttps://people.aalto.fi/riitta_hari

Abstract

A framework in which only the size of the functional visual field of fixations can vary can hardly explain natural visual-search behavior. In real-world search tasks, context guides eye movements, and task-irrelevant social stimuli may capture the gaze.

Type: Open Peer Commentary

Copyright © Cambridge University Press 2017

