
Don't admit defeat: A new dawn for the item in visual search

Published online by Cambridge University Press: 24 May 2017

Stefan Van der Stigchel
Affiliation: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, The Netherlands. s.vanderstigchel@uu.nl; http://www.attentionlab.nl
Sebastiaan Mathôt
Affiliation: Aix-Marseille University, CNRS, LPC UMR 7290, Marseille, 13331 Cedex 1, France. s.mathot@cogsci.nl; http://www.cogsci.nl/smathot

Abstract

Even though we lack a precise definition of “item,” it is clear that people do parse their visual environment into objects (the real-world equivalent of items). We review evidence that items are essential in visual search, and argue that computer vision, especially deep learning, may offer a solution to the lack of a solid definition of “item.”
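To make the argument concrete, here is a minimal sketch (ours, not the commentary authors' method) of how an off-the-shelf deep object detector could operationalize “item”: every sufficiently confident detection in a scene is treated as one candidate item. It assumes PyTorch and torchvision with a COCO-pretrained Faster R-CNN; the input file "scene.jpg" and the 0.5 confidence threshold are illustrative placeholders.

import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# COCO-pretrained detector; its detections stand in for "items".
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("scene.jpg")  # hypothetical input image
with torch.no_grad():
    pred = model([preprocess(img)])[0]  # dict of boxes, labels, scores

# Treat each detection above an (arbitrary) confidence cutoff as one item.
keep = pred["scores"] > 0.5
items = [weights.meta["categories"][i] for i in pred["labels"][keep].tolist()]
print(f"Parsed the scene into {len(items)} candidate items: {items}")

Whether such detections correspond to the “items” of visual-search theory is exactly the open question the commentary raises; the sketch only shows that deep networks already provide a workable, if imperfect, parse of a scene into discrete objects.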

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

