News

New publication in JEP:LMC: The factor structure of executive functions measured with electrophysiological correlates: An event-related potential analysis

A new article from the research group has been published in the Journal of Experimental Psychology: Learning, Memory, and Cognition:

Löffler, C., Sadus, K., Frischkorn, G. T., Hagemann, D., & Schubert, A.-L. (in press). The factor structure of executive functions measured with electrophysiological correlates: An event-related potential analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition. https://doi.org/10.1037/xlm0001549

Abstract: The three-factor model of executive functions is widely employed in cognitive control research. However, recent studies have revealed psychometric problems with commonly used difference scores in behavioral measures of executive functions. Examining behavioral scores, several studies were unable to find a coherent factor structure for executive functions or identify significant individual differences in specific executive function abilities. These findings have raised questions about the utility of established measurement scores for executive functions. Our study sought to reassess the three-factor model proposed by Miyake et al. (2000), employing event-related potentials from electroencephalography as a means to directly probe underlying cognitive processes, leveraging the electroencephalography’s high temporal resolution. We conducted an analysis of the factor structure of the three executive functions (updating, shifting, and inhibition) in a sample of 148 participants. We employed Bayesian structural equation models to examine the relationships between the mean amplitudes of the N2 and P3 components, obtained from a battery of nine executive function tasks. Our results indicate that amplitudes of the event-related potential components measured in executive function tasks almost exclusively represent variance related to general processes rather than executive function-specific variance. Notably, no task demonstrated variance uniquely attributable to individual differences in executive function processes added through experimental manipulations. These results cast doubt on the validity of current executive function tasks in accurately reflecting individual differences in these processes.
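
To make the measurement concrete: the mean amplitudes referred to above are averages of the event-related potential voltage in a fixed time window at selected electrodes. Below is a minimal sketch of how such N2 and P3 mean amplitudes could be extracted from preprocessed, epoched EEG data with MNE-Python; the file name, condition label, electrode sites, and time windows are illustrative assumptions, not the settings used in the study.

```python
# Minimal sketch: extract N2/P3 mean amplitudes for one participant with MNE-Python.
# File name, condition label, electrodes, and time windows are illustrative
# assumptions, not the parameters used in the published study.
import mne

epochs = mne.read_epochs("sub-01_task-flanker_epo.fif")  # hypothetical preprocessed epochs
evoked = epochs["incongruent"].average()                  # ERP for one assumed condition label

def mean_amplitude(evoked, channel, tmin, tmax):
    """Mean voltage (in microvolts) at one channel within a time window."""
    idx = evoked.ch_names.index(channel)
    cropped = evoked.copy().crop(tmin=tmin, tmax=tmax)
    return cropped.data[idx].mean() * 1e6  # MNE stores data in volts

n2 = mean_amplitude(evoked, "FCz", 0.20, 0.35)  # assumed frontocentral N2 window
p3 = mean_amplitude(evoked, "Pz", 0.30, 0.60)   # assumed parietal P3 window
print(f"N2: {n2:.2f} µV, P3: {p3:.2f} µV")
```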


Award-winning research: Graduate Travel Awards for Jan Göttmann and Meike Steinhilber

Jan Göttmann and Meike Steinhilber have each received a Graduate Travel Award from the Psychonomic Society. The awards recognize their recent research contributions:

  • Jan Göttmann: A Neurocognitive Psychometric Approach to Modeling Individual Differences in Working Memory – an approach that combines neurocognitive models with psychometric precision to better capture individual differences in working memory.

  • Meike Steinhilber: Early Stopping in Sequential ANOVA: How Reliable Are Fast Decisions? – an analysis of the reliability of early stopping decisions in sequential statistical procedures.

Meike and Jan at PS 2025.


New publication in Communications Psychology: Improving Statistical Reporting in Psychology

A new article from the research group has been published in Communications Psychology:

Schubert, A.-L., Steinhilber, M., Kang, H., & Quintana, D. S. (2025). Improving statistical reporting in psychology. Communications Psychology, 3(1), 156. https://doi.org/10.1038/s44271-025-00356-w

Abstract: Transparent and comprehensive statistical reporting is critical for ensuring the credibility, reproducibility, and interpretability of psychological research. This paper offers a structured set of guidelines for reporting statistical analyses in quantitative psychology, emphasizing clarity at both the planning and results stages. Drawing on established recommendations and emerging best practices, we outline key decisions related to hypothesis formulation, sample size justification, preregistration, outlier and missing data handling, statistical model specification, and the interpretation of inferential outcomes. We address considerations across frequentist and Bayesian frameworks and fixed as well as sequential research designs, including guidance on effect size reporting, equivalence testing, and the appropriate treatment of null results. To facilitate implementation of these recommendations, we provide the Transparent Statistical Reporting in Psychology (TSRP) Checklist that researchers can use to systematically evaluate and improve their statistical reporting practices (https://osf.io/t2zpq/). In addition, we provide a curated list of freely available tools, packages, and functions that researchers can use to implement transparent reporting practices in their own analyses to bridge the gap between theory and practice. To illustrate the practical application of these principles, we provide a side-by-side comparison of insufficient versus best-practice reporting using a hypothetical cognitive psychology study. By adopting transparent reporting standards, researchers can improve the robustness of individual studies and facilitate cumulative scientific progress through more reliable meta-analyses and research syntheses.


New publication in Vision Research

A new article from the research group has been published in Vision Research:

Hunt-Radej, C., Schubert, A.-L., & Meinhardt, G. (2025). Feature synergy enhances detection but not recognition of shape from texture cues. Vision Research, 235, 108660. https://doi.org/10.1016/j.visres.2025.108660

Abstract: Texture regions that differ from their surroundings in more than one local feature are more easily detected. Recent findings show that a low-level summary statistic, net contrast energy, predicts this double-cue advantage, suggesting early-stage integration during image analysis. We investigated whether this advantage also applies to more complex, texture-defined shape discrimination beyond figure-ground segregation. Using both a figure detection task and a more demanding shape identification task, we calibrated d' sensitivity to fixed baseline levels with single-cue targets defined by orientation or spatial frequency contrast. We then measured performance for double-cue targets at these baselines. Contrary to earlier results reported for simpler shape discriminations, we found a reduced double-cue advantage in the shape identification task. Specifically, double-cue sensitivity was notably lower than the algebraic sum of the single-cue sensitivities, a level achieved consistently in the detection task. Control tests with high feature contrast showed perfect detection performance for both single and combined cues. However, shape identification saturated at accuracy levels well below ceiling, while gray-shaded figures yielded perfect performance, suggesting that unique shape representations could not be built from single or combined texture cues. These findings suggest that texture cue summation enhances texture segregation and segmentation but does not improve higher-level recognition of 2D texture shapes.
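
To unpack the summation benchmark used in the paper: sensitivity d′ is computed from hit and false-alarm rates, and the "algebraic sum" prediction says a target defined by two cues should yield a d′ equal to the sum of the two single-cue d′ values. The sketch below illustrates that comparison; the hit and false-alarm rates are invented for demonstration and are not data from the study.

```python
# Illustration of the "algebraic sum" benchmark for double-cue sensitivity.
# All hit/false-alarm rates below are invented for demonstration only.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

d_orientation = d_prime(0.80, 0.20)  # single cue: orientation contrast
d_frequency = d_prime(0.78, 0.22)    # single cue: spatial-frequency contrast
d_double = d_prime(0.95, 0.10)       # double-cue performance (hypothetical)

summation_prediction = d_orientation + d_frequency
print(f"single-cue d': {d_orientation:.2f} and {d_frequency:.2f}")
print(f"double-cue d': {d_double:.2f} vs. algebraic-sum prediction: {summation_prediction:.2f}")
```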


Poster award for Simon Schaefer

We warmly congratulate Simon Schaefer on winning the poster award at this year's SMiP Summer School! His poster, "Modeling flanker task performance using deep neural networks", convinced the jury with its innovative approach and outstanding scientific quality. We are delighted about this recognition and congratulate Simon on his success!


New publication in JEP:General

A new article from the research group has been published in the Journal of Experimental Psychology: General:

Schubert, A.-L., Löffler, C., Jungeblut, H. M., & Hülsemann, M. (2025). Trait characteristics of midfrontal theta connectivity as a neurocognitive measure of cognitive control and its relation to general cognitive abilities. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001780

Abstract: Understanding the neurocognitive basis of cognitive control and its relationship with general cognitive ability is a key challenge in individual differences research. This study investigates midfrontal theta connectivity as a neurocognitive marker for individual differences in cognitive control. Using electroencephalography, we examined midfrontal global theta connectivity across three distinct cognitive control tasks in 148 participants. Our findings reveal that midfrontal theta connectivity can be modeled as a trait-like latent variable, indicating its consistency across tasks and stability over time. However, the reliability of the observed measures was found to be low to moderate, suggesting substantial measurement error. We also replicated previous results, finding a strong correlation (r = 0.64) between midfrontal theta connectivity and cognitive abilities, especially during higher order stages of information processing. We disentangled the specific cognitive processes contributing to this relationship by employing a task-cueing paradigm with distinct cue and target intervals. The results indicated that only theta connectivity during response-related processes, not during cue-evoked task-set reconfiguration, correlated with cognitive abilities. These insights significantly advance theoretical models of intelligence, highlighting the critical role of specific aspects of cognitive control in cognitive abilities.
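
The abstract does not spell out how midfrontal global theta connectivity is quantified, but a typical pipeline estimates phase-based connectivity in the theta band (roughly 4–8 Hz) between a midfrontal electrode and all other electrodes and then averages. The sketch below uses the weighted phase lag index from mne_connectivity as one common choice; the metric, seed channel, file name, and frequency band are assumptions for illustration, not necessarily the settings used in the paper.

```python
# Sketch of one common way to quantify midfrontal global theta connectivity:
# theta-band (4-8 Hz) wPLI between FCz and all other channels, averaged.
# Metric, seed channel, file name, and band are illustrative assumptions.
import numpy as np
import mne
from mne_connectivity import spectral_connectivity_epochs

epochs = mne.read_epochs("sub-01_taskcueing_epo.fif")  # hypothetical preprocessed epochs

con = spectral_connectivity_epochs(
    epochs, method="wpli", mode="multitaper",
    fmin=4.0, fmax=8.0, faverage=True, verbose=False,
)

con_matrix = con.get_data(output="dense")[:, :, 0]   # channels x channels, band-averaged
con_matrix = np.maximum(con_matrix, con_matrix.T)    # dense output may fill only one triangle
seed = epochs.ch_names.index("FCz")
global_theta = np.delete(con_matrix[seed], seed).mean()  # average over all non-seed channels
print(f"Midfrontal global theta connectivity (FCz seed): {global_theta:.3f}")
```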


Doctorate for Christoph Löffler

We warmly congratulate Christoph Löffler on successfully defending his dissertation at Heidelberg University and on completing his doctorate!


New publication in the Journal of Mathematical Psychology

A new article from the research group has been published in the Journal of Mathematical Psychology:

Nunez, M. D., Schubert, A.-L., Frischkorn, G. T., & Oberauer, K. (2025). Cognitive models of decision-making with identifiable parameters: Diffusion Decision Models with within-trial noise. Journal of Mathematical Psychology, 125, 102917. https://doi.org/10.1016/j.jmp.2025.102917

Abstract: Diffusion Decision Models (DDMs) are a widely used class of models that assume an accumulation of evidence during a quick decision. These models are often used as measurement models to assess individual differences in cognitive processes such as evidence accumulation rate and response caution. An underlying assumption of these models is that there is internal noise in the evidence accumulation process. We argue that this internal noise is a relevant psychological construct that is likely to vary over participants and explain differences in cognitive ability. In some cases a change in noise is a more parsimonious explanation of joint changes in speed-accuracy tradeoffs and ability. However, fitting traditional DDMs to behavioral data cannot yield estimates of an individual’s evidence accumulation rate, caution, and internal noise at the same time. This is due to an intrinsic unidentifiability of these parameters in DDMs. We explored the practical consequences of this unidentifiability by estimating the Bayesian joint posterior distributions of parameters (and thus joint uncertainty) for simulated data. We also introduce methods of estimating these parameters. Fundamentally, these parameters can be identified in two ways: (1) We can assume that one of the three parameters is fixed to a constant. We show that fixing one parameter, as is typical in fitting DDMs, results in parameter estimates that are ratios of true cognitive parameters including the parameter that is fixed. By fixing another parameter instead of noise, different ratios are estimated, which may be useful for measuring individual differences. (2) Alternatively, we could use additional observed variables that we can reasonably assume to be related to model parameters. Electroencephalographic (EEG) data or single-unit activity from animals can yield candidate measures. We show parameter recovery for models with true (simulated) connections to such additional covariates, as well as some recovery in misspecified models. We evaluate this approach with both single-trial and participant-level additional observed variables. Our findings reveal that with the integration of additional data, it becomes possible to discern individual differences across all parameters, enhancing the utility of DDMs without relying on strong assumptions. However, there are some important caveats with these new modeling approaches, and we provide recommendations for their use. This research paves the way to use the deeper theoretical understanding of sequential sampling models and the new modeling methods to measure individual differences in internal noise during decision-making.
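
The unidentifiability discussed in the paper can be seen directly in simulation: multiplying drift rate, boundary separation, starting point, and within-trial noise by the same constant leaves the predicted choices and response times unchanged, so behavioral data alone cannot recover all of these parameters at once. Below is a minimal Euler–Maruyama sketch of a two-boundary diffusion process illustrating this scaling property; the parameter values are arbitrary, and the code is not the estimation approach developed in the paper.

```python
# Scaling unidentifiability of the diffusion decision model: the parameter sets
# (v, a, z, s) and (k*v, k*a, k*z, k*s) produce the same choice/RT distributions.
# Parameter values are arbitrary; this illustrates the identifiability problem only.
import numpy as np

def simulate_ddm(v, a, z, s, n_trials=20000, dt=0.001, max_t=5.0, seed=0):
    """Euler-Maruyama simulation of a diffusion process between boundaries 0 and a."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, float(z))       # accumulated evidence, starts at z
    rt = np.full(n_trials, max_t)         # response time (max_t if no boundary is reached)
    choice = np.zeros(n_trials, dtype=int)
    active = np.ones(n_trials, dtype=bool)
    t = 0.0
    while active.any() and t < max_t:
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(active.sum())
        t += dt
        upper = active & (x >= a)
        lower = active & (x <= 0.0)
        choice[upper] = 1
        rt[upper | lower] = t
        active &= ~(upper | lower)
    return rt, choice

k = 2.0  # common scaling factor
rt1, ch1 = simulate_ddm(v=1.0, a=2.0, z=1.0, s=1.0)
rt2, ch2 = simulate_ddm(v=k * 1.0, a=k * 2.0, z=k * 1.0, s=k * 1.0, seed=1)

print(f"accuracy: {ch1.mean():.3f} vs. {ch2.mean():.3f}")      # nearly identical
print(f"mean RT:  {rt1.mean():.3f} s vs. {rt2.mean():.3f} s")  # nearly identical
```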


New publication in Psychological Methods

A new article from the research group has been published in Psychological Methods:

Steinhilber, M., Schnuerch, M., & Schubert, A.-L. (2024). Sequential analysis of variance: Increasing efficiency of hypothesis testing. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000677
Abstract: Researchers commonly use analysis of variance (ANOVA) to statistically test results of factorial designs. Performing an a priori power analysis is crucial to ensure that the ANOVA is sufficiently powered, however, it often poses a challenge and can result in large sample sizes, especially if the expected effect size is small. Due to the high prevalence of small effect sizes in psychology, studies are frequently underpowered as it is often economically unfeasible to gather the necessary sample size for adequate Type-II error control. Here, we present a more efficient alternative to the fixed ANOVA, the so-called sequential ANOVA that we implemented in the R package “sprtt.” The sequential ANOVA is based on the sequential probability ratio test (SPRT) that uses a likelihood ratio as a test statistic and controls for long-term error rates. SPRTs gather evidence for both the null and the alternative hypothesis and conclude this process when a sufficient amount of evidence has been gathered to accept one of the two hypotheses. Through simulations, we show that the sequential ANOVA is more efficient than the fixed ANOVA and reliably controls long-term error rates. Additionally, robustness analyses revealed that the sequential and fixed ANOVAs exhibit analogous properties when their underlying assumptions are violated. Taken together, our results demonstrate that the sequential ANOVA is an efficient alternative to fixed sample designs for hypothesis testing.
Impact Statement: In scientific research, the analysis of variance (ANOVA) is frequently used to assess statistical differences in mean values across multiple groups. Essential to this process is an a-priori sample size calculation, ensuring that the researchers collect enough data to be able to detect an effect size of interest with a high enough chance. However, accurately determining the required sample size can be challenging. Moreover, finding small differences requires a lot of data, making it expensive and sometimes not feasible to collect enough data. We introduce an alternative method implemented in the R package “sprtt,” termed sequential ANOVA, as a more resource-efficient alternative to the traditional fixed ANOVA. The sequential ANOVA, based on the sequential probability ratio test (SPRT), uses a likelihood ratio to compare two competing hypotheses while adjusting for long-term error rates. The sequential ANOVA accumulates evidence iteratively until sufficient evidence is collected to accept one of the two hypotheses. Our simulations confirm that the sequential ANOVA outperforms the traditional fixed ANOVA in efficiency and maintains the long-term error control. In cases where the underlying assumptions are not met, the sequential ANOVA is as robust as the fixed ANOVA. Consequently, our findings support the use of sequential ANOVA in studies that have limitations on sample size, offering a robust and resource-efficient solution for hypothesis testing.
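
For readers unfamiliar with the mechanics: the stopping rule behind the sequential ANOVA is Wald's sequential probability ratio test, in which the log-likelihood ratio of the two hypotheses is updated after each observation and compared against two thresholds derived from the desired error rates, and sampling ends as soon as either threshold is crossed. The sketch below illustrates this rule for the simplest textbook case, two simple hypotheses about a normal mean with known variance; it is not the likelihood-ratio ANOVA statistic implemented in the R package "sprtt".

```python
# Wald's SPRT for the simplest case: H0: mu = 0 vs. H1: mu = delta with known sigma.
# This only illustrates the sequential stopping rule; the sequential ANOVA in the
# sprtt R package uses an ANOVA likelihood ratio as its test statistic instead.
import numpy as np

def sprt_normal_mean(data_stream, delta, sigma, alpha=0.05, beta=0.05):
    upper = np.log((1 - beta) / alpha)  # cross this threshold -> accept H1
    lower = np.log(beta / (1 - alpha))  # cross this threshold -> accept H0
    llr = 0.0
    n = 0
    for n, x in enumerate(data_stream, start=1):
        llr += (delta * x - delta**2 / 2) / sigma**2  # log-likelihood ratio increment
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", n

rng = np.random.default_rng(42)
decision, n_used = sprt_normal_mean(rng.normal(0.5, 1.0, size=1000), delta=0.5, sigma=1.0)
print(decision, f"after {n_used} observations")
```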

New publication in Behavior Research Methods

A new article from the research group has been published in Behavior Research Methods:

Schubert, A.-L., Frischkorn, G. T., Sadus, K., Welhaf, M. S., Kane, M. J., & Rummel, J. (2024). The brief mind wandering three-factor scale (BMW-3). Behavior Research Methods. https://doi.org/10.3758/s13428-024-02500-6
Abstract: In recent years, researchers from different fields have become increasingly interested in measuring individual differences in mind wandering as a psychological trait. Although there are several questionnaires that allow for an assessment of people’s perceptions of their mind wandering experiences, they either define mind wandering in a very broad sense or do not sufficiently separate different aspects of mind wandering. Here, we introduce the Brief Mind Wandering Three-Factor Scale (BMW-3), a 12-item questionnaire available in German and English. The BMW-3 conceptualizes mind wandering as task-unrelated thought and measures three dimensions of mind wandering: unintentional mind wandering, intentional mind wandering, and meta-awareness of mind wandering. Based on results from 1038 participants (823 German speakers, 215 English speakers), we found support for the proposed three-factorial structure of mind wandering and for scalar measurement invariance of the German and English versions. All subscales showed good internal consistencies and moderate to high test–retest correlations and thus provide an effective assessment of individual differences in mind wandering. Moreover, the BMW-3 showed good convergent validity when compared to existing retrospective measures of mind wandering and mindfulness and was related to conscientiousness, emotional stability, and openness as well as self-reported attentional control. Lastly, it predicted the propensity for mind wandering inside and outside the lab (as assessed by in-the-moment experience sampling), the frequency of experiencing depressive symptoms, and the use of functional and dysfunctional emotion regulation strategies. All in all, the BMW-3 provides a brief, reliable, and valid assessment of mind wandering for basic and clinical research.