Monday, November 18, 2019

Which search are you on? Adapting to color while searching for shape

Abstract

Human observers adjust their attentional control settings when searching for a target in the presence of predictable changes in the target-defining feature dimension. We investigated whether observers also adapt to changes in a nondefining target dimension. According to feature integration theory, stimuli that are unique in their environment in a single feature dimension can be detected with little effort. In two experiments, we studied how observers searching for such singletons adapt their attentional control settings to a dynamic change in a nondefining target dimension. Participants searched for a shape singleton and freely chose between two targets on each trial. The two targets differed in color, and the ratio of distractors colored like each target varied dynamically across trials. A model-based analysis with a Bayesian estimation approach showed that participants adapted their target choices to the color ratio: They tended to select the target from the smaller color subset, and switched their preference both when the color ratio changed between gray and heterogeneous colors (Exp. 1) and when it changed between red and blue (Exp. 2). Participants thus tuned their attentional control settings toward color, although the target was defined by shape. We conclude that observers spontaneously adapted their behavior to changing regularities in the environment. Because adaptation was more pronounced when color homogeneity allowed for element grouping, we suggest that observers adapt to regularities that can be registered without attentional resources. They do so even if the changes are not relevant for accomplishing the task—a process presumably based on statistical learning.
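The model-based Bayesian estimation of target-choice proportions can be sketched with a simple Beta-Binomial model. This is a minimal illustration, not the authors' actual model: the trial counts and the uniform Beta(1, 1) prior below are assumptions.

```python
from math import isclose

def beta_posterior_mean(successes, failures, a=1.0, b=1.0):
    """Posterior mean of a Bernoulli choice rate under a Beta(a, b) prior."""
    return (a + successes) / (a + b + successes + failures)

# Hypothetical data: the observer chose the minority-color target
# on 70 of 100 trials under a given color ratio.
p_hat = beta_posterior_mean(70, 30)
```

With more trials the posterior mean converges toward the raw choice proportion; the prior mainly matters when per-condition trial counts are small.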

Configural superiority for varying contrast levels

Abstract

Observers can search for a target stimulus with a particular speed and accuracy. Adding an identical context to each stimulus can improve performance when the resulting stimuli form clearly discriminable configurations. This search advantage is known as the configural superiority effect (CSE). A recent study showed that embedding these stimuli in noise revealed lower contrast thresholds for part-stimuli compared to configural stimuli (Bratch et al., Journal of Experimental Psychology: Human Perception and Performance, 42(9), 1388–1398, 2016). This contrasts with the accuracy advantages traditionally associated with CSEs. In this study, we aimed to replicate the results of Bratch et al. and asked whether the benefit for part-stimuli held across the full psychometric function. Additionally, we tested whether embedding the stimuli in noise was crucial for obtaining their result and whether different contrast definitions affected the results. Furthermore, we used control stimuli that were more directly comparable. Our results showed a detection benefit for the Gestalt context stimuli in all conditions. Together, these results are in line with the literature on CSEs and do not seem to support the recent claim that Gestalts are processed less efficiently than part stimuli. Inspired by this, we sketch how contrast manipulations could be an additional tool to study how Gestalts are processed.
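A contrast threshold of the kind measured here is typically read off a fitted psychometric function. The sketch below fits a two-alternative Weibull function by grid search; the functional form, the fixed slope parameter, and the data are illustrative assumptions, not the study's actual fitting procedure.

```python
import math

def weibull_2afc(c, alpha, beta=3.0):
    """2AFC Weibull psychometric function: chance level 0.5, asymptote 1.0.
    alpha is the contrast threshold, beta the slope."""
    return 0.5 + 0.5 * (1.0 - math.exp(-(c / alpha) ** beta))

def fit_threshold(contrasts, accuracies, beta=3.0):
    """Grid-search the threshold alpha that minimizes squared error."""
    candidates = [a / 1000.0 for a in range(1, 500)]
    def sse(alpha):
        return sum((weibull_2afc(c, alpha, beta) - p) ** 2
                   for c, p in zip(contrasts, accuracies))
    return min(candidates, key=sse)

# Synthetic accuracies generated from alpha = 0.10 should recover ~0.10:
cs = [0.02, 0.05, 0.08, 0.10, 0.15, 0.25]
ps = [weibull_2afc(c, 0.10) for c in cs]
alpha_hat = fit_threshold(cs, ps)
```

Comparing full fitted functions, rather than a single threshold, is what lets one ask whether a part-stimulus benefit holds across the whole psychometric function.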

An analysis of the processing of intramodal and intermodal time intervals

Abstract

In this three-experiment study, Weber fractions in the 300-ms and 900-ms duration ranges are obtained with nine types of empty intervals resulting from the combinations of three types of signals marking the beginning and end of each interval: auditory (A), visual (V), or tactile (T). There were three types of intramodal intervals (AA, TT, and VV) and six types of intermodal intervals (AT, AV, VA, VT, TA, and TV). The second marker is always the same within Experiments 1 (A), 2 (V), and 3 (T). With an uncertainty strategy, in which the first marker is one of two sensory signals presented randomly from trial to trial, the study provides direct comparisons of the perceived length of the different marker-type intervals. The results reveal that the Weber fraction is nearly constant across the three types of intramodal intervals, but is clearly lower at 900 ms than at 300 ms in the intermodal conditions. In several cases, the intramodal intervals are perceived as shorter than the intermodal intervals, which is interpreted as an effect of the efficiency of detecting the second marker of an intramodal interval. There were no significant differences between the TA and VA intervals (Experiment 1) or between the AV and TV intervals (Experiment 2), but in Experiment 3 the AT intervals were perceived as longer than the VT intervals. The results are interpreted in terms of the generalized form of Weber's law, using the properties of the signals to explain the additional nontemporal noise observed in the intermodal conditions.
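The generalized form of Weber's law adds a constant nontemporal noise term to the duration-proportional noise, which is exactly what makes the Weber fraction fall from 300 ms to 900 ms when marker detection is noisy. A minimal sketch, with made-up parameter values:

```python
import math

def weber_fraction(T, w, sigma0):
    """Weber fraction under the generalized Weber's law:
    total SD = sqrt((w*T)^2 + sigma0^2), so WF = SD / T."""
    return math.sqrt((w * T) ** 2 + sigma0 ** 2) / T

# Purely temporal noise (sigma0 = 0): WF is constant across durations,
# as in the intramodal conditions.
wf_intra_300 = weber_fraction(300, 0.05, 0.0)
wf_intra_900 = weber_fraction(900, 0.05, 0.0)

# Added nontemporal (marker-detection) noise: WF shrinks as T grows,
# as reported for the intermodal conditions.
wf_inter_300 = weber_fraction(300, 0.05, 30.0)
wf_inter_900 = weber_fraction(900, 0.05, 30.0)
```

The constant term dominates at short durations and is diluted at long ones, so intramodal and intermodal Weber fractions converge as the base duration increases.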

Flexible weighting of target features based on distractor context

Abstract

Models of attention posit that attentional priority is established by summing the saliency and relevancy signals from feature-selective maps. The dimension-weighting account further hypothesizes that information from each feature-selective map is weighted based on expectations of how informative each dimension will be. In the current studies, we investigated whether attentional biases to the features of a conjunction target (color and orientation) differ when one dimension is expected to be more diagnostic of the target. In a series of color-orientation conjunction search tasks, observers saw an exact cue for the upcoming target, while the probability of distractors sharing a target feature in each dimension was manipulated. In one context, distractors were more likely to share the target color, and in another, distractors were more likely to share the target orientation. The results indicated that despite an overall bias toward color, attentional priority to each target feature was flexibly adjusted according to distractor context: RTs and accuracy were better when the diagnostic feature was expected than when it was unexpected. This occurred whether the distractor context was learned implicitly or explicitly. These results suggest that feature-based enhancement can occur selectively for the dimension expected to be most informative in distinguishing the target from distractors.
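The weighted summation of feature-selective maps described by the dimension-weighting account can be illustrated in a few lines. Everything below is invented for illustration: the map values, the weights, and the four-location display are not from the study.

```python
def priority_map(color_map, orientation_map, w_color, w_orient):
    """Sum feature-selective maps, each scaled by its dimension weight."""
    return [w_color * c + w_orient * o
            for c, o in zip(color_map, orientation_map)]

# Four display locations; the target (index 2) matches both cued features.
color_map  = [0.1, 0.9, 1.0, 0.2]   # similarity to the cued color
orient_map = [0.8, 0.1, 1.0, 0.3]   # similarity to the cued orientation

# In a context where distractors often share the target's color, color is
# less diagnostic, so its weight is turned down relative to orientation:
prio = priority_map(color_map, orient_map, w_color=0.3, w_orient=0.7)
target_wins = max(range(4), key=lambda i: prio[i]) == 2
```

Down-weighting the nondiagnostic dimension suppresses the priority of distractors that match the target only in that dimension, which is how the account explains the context-dependent RT and accuracy benefits.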

Visual noise consisting of X-junctions has only a minimal adverse effect on object recognition

Abstract

In 1968, Guzman showed that the myriad surfaces composing a highly complex and novel assemblage of volumes can readily be assigned to their appropriate volumes in terms of the constraints offered by the vertices of coterminating edges. Of particular importance was the L-vertex, produced by the cotermination of two contours, which provides strong evidence for the termination of a 2-D surface. An X-junction, formed by the crossing of two contours without a change of direction at the crossing, played no role in the segmentation of a scene. If the potency of noise elements to affect recognition performance reflects their relevancy to the segmentation of scenes, as was suggested by Guzman, gaps in an object's contours bounded by irrelevant X-junctions would be expected to have little or no adverse effect on shape-based object recognition, whereas gaps bounded by L-vertices would be expected to have a strong deleterious effect when they disrupt the smooth continuation of contours. Guzman's roles for the various vertices and junctions have never been put to systematic test with respect to human object recognition. By adding identical noise contours to line drawings of objects that produced either L-vertices or X-junctions, these shape features could be compared with respect to their disruption of object recognition. Guzman's insights that irrelevant L-vertices should be highly disruptive and that irrelevant X-junctions should have only a minimal deleterious effect were confirmed.

Interacting hands draw attention during scene observation

Abstract

In this study I examined the role of the hands in scene perception. In Experiment 1, eye movements during free observation of natural scenes were analyzed. Fixations to faces and hands were compared under several conditions, including scenes with and without faces, with and without hands, and without a person. The hands were either resting (e.g., lying on the knees) or interacting with objects (e.g., holding a bottle). Faces held an absolute attentional advantage, regardless of hand presence. Importantly, fixations to interacting hands were faster and more frequent than those to resting hands, suggesting attentional priority to interacting hands. The interacting-hand advantage could not be attributed to perceptual saliency or to the hand-owner (i.e., the depicted person) gazing at the interacting hand. Experiment 2 confirmed the interacting-hand advantage in a visual search paradigm with more controlled stimuli. The present results indicate that the key to understanding the role of attention in person perception is the competitive interaction among faces, hands, and the objects with which the person interacts.

The space contraction asymmetry in Michotte’s launching effect

Abstract

Previous studies have found that, compared with noncausal events, spatial contraction occurs between the causal object and the effect object due to perceived causality. The present research aims to examine whether the causal object and the effect object contribute equally to this spatial contraction. A modified launching effect, in which a bar bridges the spatial gap between the final position of the launcher and the initial position of the target, was adopted. Experiment 1 validates the absolute underestimation of the bar's length between the launcher and the target. Experiment 2a finds that in the direct launching effect, the perceived position of the bar's trailing edge, which was contacted by the launcher in its final position, was displaced along the objects' direction of movement. Meanwhile, the perceived position of the bar's leading edge, which was contacted by the target in its initial position, was displaced in the direction opposite to the movement. The magnitude of the former displacement was significantly larger than that of the latter, revealing a significant contraction asymmetry. Experiment 2b demonstrates that the contraction asymmetry did not result from the launcher remaining in contact with the edge of the bar. Experiment 3 indicates that the contraction asymmetry showed a type of postdictive effect; that is, to some extent, the asymmetry depends on what happens after contact. In conclusion, the space between the causal object and the effect object contracts asymmetrically in the launching effect, which implies that the causal object and the effect object are perceived as shifting toward each other nonequidistantly in visual space.

Dwelling on simple stimuli in visual search

Abstract

Research and theories on visual search often focus on visual guidance to explain differences in search. Guidance is the tuning of attention to target features and facilitates search because distractors that do not show target features can be more effectively ignored (skipping). As a general rule, the better the guidance is, the more efficient search is. Correspondingly, behavioral experiments have often interpreted differences in efficiency as reflecting varying degrees of attentional guidance. But other factors such as the time spent on processing a distractor (dwelling) or multiple visits to the same stimulus in a search display (revisiting) are also involved in determining search efficiency. While there is some research showing that dwelling and revisiting modulate search times in addition to skipping, the corresponding studies used complex naturalistic and category-defined stimuli. The present study tests whether results from prior research can be generalized to simpler stimuli, where target-distractor similarity, a strong factor influencing search performance, can be manipulated in a detailed fashion. Thus, in the present study, simple stimuli with varying degrees of target-distractor similarity were used to deliver conclusive evidence for the contribution of dwelling and revisiting to search performance. The results have theoretical and methodological implications: They imply that visual search models should not treat dwelling and revisiting as constants across varying levels of search efficiency and that behavioral search experiments are equivocal with respect to the responsible processing mechanisms underlying more versus less efficient search. We also suggest that eye-tracking methods may be used to disentangle different search components such as skipping, dwelling, and revisiting.
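The point that efficiency differences conflate several mechanisms can be made with a toy decomposition of a trial's search time into skipped, dwelled-on, and revisited items. All item counts and durations below are fabricated for illustration:

```python
def trial_time(n_items, visited, dwell_ms, revisits, revisit_dwell_ms):
    """Total search time for one trial: skipped items (never fixated)
    cost nothing; every visit and every revisit adds its dwell time."""
    skipped = n_items - visited
    assert skipped >= 0
    return visited * dwell_ms + revisits * revisit_dwell_ms

# Good guidance: most distractors are skipped and dwells are short.
t_guided = trial_time(n_items=12, visited=3, dwell_ms=200,
                      revisits=0, revisit_dwell_ms=0)

# Weak guidance, longer dwells, and revisits all inflate search time,
# so a steep RT-by-set-size slope alone cannot identify the mechanism.
t_unguided = trial_time(n_items=12, visited=10, dwell_ms=300,
                        revisits=2, revisit_dwell_ms=300)
```

Eye tracking separates the three terms directly, because skipping, dwell duration, and revisit counts are each observable in the gaze record, whereas manual RTs only reveal their sum.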

Comparable search efficiency for human and animal targets in the context of natural scenes

Abstract

In a previous series of studies, we have shown that search for human targets in the context of natural scenes is more efficient than search for mechanical targets. Here we asked whether this search advantage extends to other categories of biological objects. We used videos of natural scenes to directly contrast search efficiency for animal and human targets among biological or nonbiological distractors. In visual search arrays consisting of two, four, six, or eight videos, observers searched for animal targets among machine distractors, and vice versa (Exp. 1). Another group searched for animal targets among human distractors, and vice versa (Exp. 2). We measured search slope as a proxy for search efficiency, and complemented the slope with eye movement measurements (fixation duration on the target, as well as the proportion of first fixations landing on the target). In both experiments, we observed no differences in search slopes or proportions of first fixations between any of the target–distractor category pairs. With respect to fixation durations, we found shorter on-target fixations only for animal targets as compared to machine targets (Exp. 1). In summary, we did not find that the search advantage for human targets over mechanical targets extends to other biological objects. We also found no search advantage for detecting humans as compared to other biological objects. Overall, our pattern of findings suggests that search efficiency in natural scenes, as elsewhere, depends crucially on the specific target–distractor categories.
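The search slope used here as an efficiency proxy is simply the regression slope of mean RT on set size. A minimal sketch with fabricated RTs (the set sizes of 2, 4, 6, and 8 are the ones reported above):

```python
def search_slope(set_sizes, mean_rts):
    """Ordinary least-squares slope (ms per item) of RT over set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical condition means: RT rises 20 ms per added video.
slope = search_slope([2, 4, 6, 8], [520.0, 560.0, 600.0, 640.0])
```

Flatter slopes indicate more efficient search; comparing slopes between target-distractor category pairings is what licenses the "comparable efficiency" conclusion.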

Is it impossible to acquire absolute pitch in adulthood?

Abstract

Absolute pitch (AP) refers to the rare ability to name the pitch of a tone without an external reference. It is widely believed to be reserved for the select few with a rare genetic makeup and early musical training during the critical period, and acquiring AP in adulthood is therefore held to be impossible. Previous studies have not offered a strong test of the effect of training because of issues such as small sample sizes and insufficient training. In three experiments, adults learned to name pitches in a computerized, gamified, and personalized training protocol for 12 to 40 hours, with the number of pitches gradually increased from three to twelve. Across the three experiments, the training covered different octaves, timbres, and training environments (inside or outside the laboratory). AP learning showed classic characteristics of perceptual learning, including generalization of learning dependent on the training stimuli and sustained improvement for at least one to three months. Fourteen percent of the participants (6 out of 43) were able to name twelve pitches at 90% or better accuracy, comparable to that of 'AP possessors' as defined in the literature. Overall, AP remains learnable in adulthood, which challenges the view that AP development requires both a rare genetic predisposition and learning within the critical period. The finding calls for a reconsideration of the role of learning in the occurrence of AP, and pushes the field to pinpoint and explain the differences, if any, between the aspects of AP that are trainable in adulthood and the aspects that are potentially exclusive to the few exceptional AP possessors observed in the real world.
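The pitch-naming task itself has a simple formal core: under equal temperament, any frequency maps to a nearest pitch class. A sketch of that mapping (the A4 = 440 Hz reference and the MIDI convention are standard music conventions, not details of the study's protocol):

```python
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_name(freq_hz, ref_a4=440.0):
    """Name the nearest equal-tempered pitch for a frequency.
    MIDI note 69 corresponds to A4 at the reference frequency."""
    midi = round(69 + 12 * math.log2(freq_hz / ref_a4))
    octave = midi // 12 - 1
    return f"{NAMES[midi % 12]}{octave}"

a4 = pitch_name(440.0)      # A4
c4 = pitch_name(261.626)    # middle C
```

Naming twelve pitches at 90% accuracy, as the best-performing trainees did, means reliably resolving adjacent semitones, a frequency ratio of only about 1.059.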
