Author: Anna-Lena Schubert

New publication in the Journal of Mathematical Psychology

A new article from our research group has been published in the Journal of Mathematical Psychology:

Nunez, M. D., Schubert, A.-L., Frischkorn, G. T., & Oberauer, K. (2025). Cognitive models of decision-making with identifiable parameters: Diffusion Decision Models with within-trial noise. Journal of Mathematical Psychology, 125, 102917. https://doi.org/10.1016/j.jmp.2025.102917

Abstract: Diffusion Decision Models (DDMs) are a widely used class of models that assume an accumulation of evidence during a quick decision. These models are often used as measurement models to assess individual differences in cognitive processes such as evidence accumulation rate and response caution. An underlying assumption of these models is that there is internal noise in the evidence accumulation process. We argue that this internal noise is a relevant psychological construct that is likely to vary over participants and explain differences in cognitive ability. In some cases a change in noise is a more parsimonious explanation of joint changes in speed-accuracy tradeoffs and ability. However, fitting traditional DDMs to behavioral data cannot yield estimates of an individual’s evidence accumulation rate, caution, and internal noise at the same time. This is due to an intrinsic unidentifiability of these parameters in DDMs. We explored the practical consequences of this unidentifiability by estimating the Bayesian joint posterior distributions of parameters (and thus joint uncertainty) for simulated data. We also introduce methods of estimating these parameters. Fundamentally, these parameters can be identified in two ways: (1) We can assume that one of the three parameters is fixed to a constant. We show that fixing one parameter, as is typical in fitting DDMs, results in parameter estimates that are ratios of true cognitive parameters including the parameter that is fixed. By fixing another parameter instead of noise, different ratios are estimated, which may be useful for measuring individual differences. (2) Alternatively, we could use additional observed variables that we can reasonably assume to be related to model parameters. Electroencephalographic (EEG) data or single-unit activity from animals can yield candidate measures. 
We show parameter recovery for models with true (simulated) connections to such additional covariates, as well as some recovery in misspecified models. We evaluate this approach with both single-trial and participant-level additional observed variables. Our findings reveal that with the integration of additional data, it becomes possible to discern individual differences across all parameters, enhancing the utility of DDMs without relying on strong assumptions. However, there are some important caveats with these new modeling approaches, and we provide recommendations for their use. This research paves the way to use the deeper theoretical understanding of sequential sampling models and the new modeling methods to measure individual differences in internal noise during decision-making.
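The unidentifiability described in the abstract follows from a simple scaling property: multiplying drift rate, boundary separation, and within-trial noise by the same constant leaves the predicted choice probabilities and mean decision times unchanged. The sketch below illustrates this invariance for an unbiased diffusion process using standard closed-form expressions; the helper names are ours, not the authors' code.

```python
import math

def ddm_accuracy(v, a, s):
    """P(correct) for an unbiased diffusion model (starting point z = a/2)
    with drift rate v, boundary separation a, and within-trial noise s."""
    return 1.0 / (1.0 + math.exp(-v * a / s**2))

def ddm_mean_dt(v, a, s):
    """Mean decision time for the same unbiased diffusion model."""
    return (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s**2))

base = dict(v=1.0, a=1.5, s=1.0)
scaled = {k: 2.0 * x for k, x in base.items()}  # scale all three parameters jointly

# Both parameter sets predict identical behavior, so v, a, and s
# cannot all be recovered from choices and response times alone.
assert math.isclose(ddm_accuracy(**base), ddm_accuracy(**scaled))
assert math.isclose(ddm_mean_dt(**base), ddm_mean_dt(**scaled))
```

Because every such scaled triplet predicts the same data, fitting a traditional DDM requires fixing one of the three parameters (conventionally the noise), which is exactly the practice the paper scrutinizes.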

Published on | Published in Aktuelles

New publication in Psychological Methods

A new article from our research group has been published in Psychological Methods:

Steinhilber, M., Schnuerch, M., & Schubert, A.-L. (2024). Sequential analysis of variance: Increasing efficiency of hypothesis testing. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000677
Abstract: Researchers commonly use analysis of variance (ANOVA) to statistically test results of factorial designs. Performing an a priori power analysis is crucial to ensure that the ANOVA is sufficiently powered; however, this often poses a challenge and can result in large sample sizes, especially if the expected effect size is small. Due to the high prevalence of small effect sizes in psychology, studies are frequently underpowered, as it is often economically unfeasible to gather the necessary sample size for adequate Type-II error control. Here, we present a more efficient alternative to the fixed ANOVA, the so-called sequential ANOVA, which we implemented in the R package “sprtt.” The sequential ANOVA is based on the sequential probability ratio test (SPRT), which uses a likelihood ratio as a test statistic and controls long-term error rates. SPRTs gather evidence for both the null and the alternative hypothesis and conclude this process when a sufficient amount of evidence has been gathered to accept one of the two hypotheses. Through simulations, we show that the sequential ANOVA is more efficient than the fixed ANOVA and reliably controls long-term error rates. Additionally, robustness analyses revealed that the sequential and fixed ANOVAs exhibit analogous properties when their underlying assumptions are violated. Taken together, our results demonstrate that the sequential ANOVA is an efficient alternative to fixed sample designs for hypothesis testing.
Impact Statement: In scientific research, the analysis of variance (ANOVA) is frequently used to assess statistical differences in mean values across multiple groups. Essential to this process is an a priori sample size calculation, ensuring that researchers collect enough data to detect an effect size of interest with sufficiently high probability. However, accurately determining the required sample size can be challenging. Moreover, detecting small differences requires a lot of data, making it expensive and sometimes infeasible to collect enough. We introduce an alternative method implemented in the R package “sprtt,” termed sequential ANOVA, as a more resource-efficient alternative to the traditional fixed ANOVA. The sequential ANOVA, based on the sequential probability ratio test (SPRT), uses a likelihood ratio to compare two competing hypotheses while controlling long-term error rates. It accumulates evidence iteratively until sufficient evidence is collected to accept one of the two hypotheses. Our simulations confirm that the sequential ANOVA outperforms the traditional fixed ANOVA in efficiency and maintains long-term error control. In cases where the underlying assumptions are not met, the sequential ANOVA is as robust as the fixed ANOVA. Consequently, our findings support the use of sequential ANOVA in studies with limitations on sample size, offering a robust and resource-efficient solution for hypothesis testing.
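The sequential testing logic that the sprtt package builds on can be illustrated with a toy SPRT. The sketch below tests a Bernoulli success probability instead of the ANOVA's F-based likelihood ratio, so the function name and its parameters are illustrative only, not the package's API:

```python
import math

def sprt_bernoulli(data_stream, p0, p1, alpha=0.05, beta=0.05):
    """Sequential probability ratio test of H0: p = p0 vs. H1: p = p1
    on a stream of 0/1 observations, with long-term error rates alpha and beta."""
    upper = math.log((1 - beta) / alpha)  # crossing this accepts H1
    lower = math.log(beta / (1 - alpha))  # crossing this accepts H0
    llr, n = 0.0, 0
    for x in data_stream:
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n  # data ran out before either threshold was crossed

print(sprt_bernoulli([1] * 20, p0=0.5, p1=0.7))  # -> ('accept H1', 9)
```

With alpha = beta = .05, the thresholds are log(19) and -log(19); sampling stops as soon as the accumulated log likelihood ratio crosses either one, which is what makes the procedure more efficient than a fixed-N design on average.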

New publication in Behavior Research Methods

A new article from our research group has been published in Behavior Research Methods:

Schubert, A.-L., Frischkorn, G. T., Sadus, K., Welhaf, M. S., Kane, M. J., & Rummel, J. (2024). The brief mind wandering three-factor scale (BMW-3). Behavior Research Methods. https://doi.org/10.3758/s13428-024-02500-6
Abstract: In recent years, researchers from different fields have become increasingly interested in measuring individual differences in mind wandering as a psychological trait. Although there are several questionnaires that allow for an assessment of people’s perceptions of their mind wandering experiences, they either define mind wandering in a very broad sense or do not sufficiently separate different aspects of mind wandering. Here, we introduce the Brief Mind Wandering Three-Factor Scale (BMW-3), a 12-item questionnaire available in German and English. The BMW-3 conceptualizes mind wandering as task-unrelated thought and measures three dimensions of mind wandering: unintentional mind wandering, intentional mind wandering, and meta-awareness of mind wandering. Based on results from 1038 participants (823 German speakers, 215 English speakers), we found support for the proposed three-factor structure of mind wandering and for scalar measurement invariance of the German and English versions. All subscales showed good internal consistencies and moderate to high test–retest correlations and thus provide an effective assessment of individual differences in mind wandering. Moreover, the BMW-3 showed good convergent validity when compared to existing retrospective measures of mind wandering and mindfulness and was related to conscientiousness, emotional stability, and openness as well as self-reported attentional control. Lastly, it predicted the propensity for mind wandering inside and outside the lab (as assessed by in-the-moment experience sampling), the frequency of experiencing depressive symptoms, and the use of functional and dysfunctional emotion regulation strategies. All in all, the BMW-3 provides a brief, reliable, and valid assessment of mind wandering for basic and clinical research.

New podcast episode: How can you become more intelligent?

In the sixth episode of our science podcast on intelligence, we ask what you can actually do to increase your intelligence: What do cognitive training programs achieve? How can I adjust my diet to boost my mental performance? What is the truth behind the myth of smart drugs? And which other lifestyle factors influence intelligence? Listen to the episode here: https://denkbar.letscast.fm/episode/006-wie-kann-man-intelligenter-werden

An overview of all previously published episodes can be found here.


New publication in Human Brain Mapping

A new article from our research group has been published in Human Brain Mapping:

Wehrheim, M. H., Faskowitz, J., Schubert, A.-L., & Fiebach, C. J. (2024). Reliability of variability and complexity measures for task and task-free BOLD fMRI. Human Brain Mapping, 45(10), e26778. https://doi.org/10.1002/hbm.26778
Abstract: Brain activity continuously fluctuates over time, even if the brain is in controlled (e.g., experimentally induced) states. Recent years have seen an increasing interest in understanding the complexity of these temporal variations, for example with respect to developmental changes in brain function or between-person differences in healthy and clinical populations. However, the psychometric reliability of brain signal variability and complexity measures—which is an important precondition for robust individual differences as well as longitudinal research—is not yet sufficiently studied. We examined split-half and test–retest correlations for task-free (resting-state) BOLD fMRI as well as split-half correlations for seven functional task data sets from the Human Connectome Project. We observed good to excellent split-half reliability for temporal variability measures derived from rest and task fMRI activation time series (standard deviation, mean absolute successive difference, mean squared successive difference), and moderate test–retest correlations for the same variability measures under rest conditions. Brain signal complexity estimates (several entropy and dimensionality measures) showed moderate to good reliabilities under both rest and task activation conditions. We calculated the same measures also for time-resolved (dynamic) functional connectivity time series and observed moderate to good reliabilities for variability measures, but poor reliabilities for complexity measures derived from functional connectivity time series. Global (i.e., mean across cortical regions) measures tended to show higher reliability than region-specific variability or complexity estimates. Larger subcortical regions showed similar reliability as cortical regions, but small regions showed lower reliability, especially for complexity measures.
Lastly, we also show that reliability scores depend only minimally on differences in scan length, and we replicate our results across different parcellation and denoising strategies. These results suggest that the variability and complexity of BOLD activation time series are robust measures well suited for individual differences research. Temporal variability of global functional connectivity over time provides an important novel approach to robustly quantifying the dynamics of brain function.
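The split-half logic behind these reliability estimates can be sketched in a few lines: compute a measure (here the mean squared successive difference) on the odd- and even-numbered time points of each subject's series, correlate the two halves across subjects, and apply the Spearman-Brown correction. The function names and simulated data below are an illustrative toy example, not the authors' analysis pipeline:

```python
import math
import random

def mssd(x):
    """Mean squared successive difference, one of the variability measures examined."""
    return sum((b - a) ** 2 for a, b in zip(x, x[1:])) / (len(x) - 1)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def split_half_reliability(series_per_subject, measure):
    """Apply a measure to the odd- and even-numbered time points of each subject's
    series, correlate the halves across subjects, then step up the half-length
    correlation with the Spearman-Brown formula."""
    first = [measure(s[0::2]) for s in series_per_subject]
    second = [measure(s[1::2]) for s in series_per_subject]
    r = pearson(first, second)
    return 2 * r / (1 + r)

# toy data: each simulated subject has a stable noise level driving their variability
random.seed(0)
subjects = [[random.gauss(0, sigma) for _ in range(400)]
            for sigma in [random.uniform(0.5, 2.0) for _ in range(50)]]

print(round(split_half_reliability(subjects, mssd), 2))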
Veröffentlicht am | Veröffentlicht in Aktuelles

Special Issue "Strengthening derivation chains in cognitive neuroscience" abgeschlossen

Im abschließenden Editorial des Special Issues "Strengthening Derivation Chains in Cognitive Neuroscience" in Cortex betonen die Gastherausgeber/-innen Daniel Mirman, Anne Scheel, Anna-Lena Schubert und Robert McIntosh die Bedeutung solider Ableitungsketten in der kognitiven Neurowissenschaft. Wir argumentieren, dass starke methodische und konzeptionelle Verbindungen zwischen empirischen Daten und wissenschaftlichen Aussagen entscheidend für die Reproduzierbarkeit und Robustheit von Forschungsergebnissen sind. Das Editorial hebt Beiträge aus der Ausgabe hervor, die starke Ableitungsketten veranschaulichen, und soll weitere Fortschritte im Bereich durch methodische Präzision und konzeptionelle Klarheit fördern.

Weitere Informationen finden sich im Originalartikel: https://doi.org/10.1016/j.cortex.2024.04.004

Veröffentlicht am | Veröffentlicht in Aktuelles

Neuer Mitarbeiter Simon Schaefer

Wir begrüßen Simon Schaefer als neuen Mitarbeiter in der Abteilung für Analyse und Modellierung komplexer Daten. Herr Schaefer hat bereits Psychologie an der JGU studiert und war als studentische Hilfskraft in unserer Abteilung sowie den Abteilungen Allgemeine Psychologie und Sozialpsychologie tätig. Wir freuen uns sehr, ihn als Doktoranden begrüßen zu können! Er wird sich in seiner Promotion mit der Messung individueller Unterschiede in Aufmerksamkeitskontrollprozessen mittels mathematischer Modelle beschäftigen.

Veröffentlicht am | Veröffentlicht in Aktuelles

Neue Podcastfolge: Wie erforscht man eigentlich Intelligenz?

In der 4. Folge unseres Wissenschaftspodcasts zur Intelligenz beschäftigen wir uns mit Intelligenzforschung: Wie lässt sich Intelligenz eigentlich beforschen und womit beschäftigen wir uns in unserer Forschung eigentlich? Wie können wir untersuchen, welche elementaren Prozesse Intelligenzunterschieden kausal zugrunde liegen? Hier geht's zur Folge: https://denkbar.letscast.fm/episode/003-gibt-es-noch-andere-formen-von-intelligenz-ausser-der-kognitiven

Eine Übersicht über alle bisher erschienen Folgen findet sich hier.

Veröffentlicht am | Veröffentlicht in Aktuelles

Neue Veröffentlichung in Psychological Research

In der Zeitschrift Psychological Research ist ein neuer Artikel aus der Arbeitsgruppe erschienen:

Löffler, C., Frischkorn, G. T., Hagemann, D., Sadus, K., & Schubert, A.-L. (2024). The common factor of executive functions measures nothing but speed of information uptake. Psychological Research. https://doi.org/10.1007/s00426-023-01924-7

Abstract: There is an ongoing debate about the unity and diversity of executive functions and their relationship with other cognitive abilities such as processing speed, working memory capacity, and intelligence. Specifically, the initially proposed unity and diversity of executive functions is challenged by discussions about (1) the factorial structure of executive functions and (2) unfavorable psychometric properties of measures of executive functions. The present study addressed two methodological limitations of previous work that may explain conflicting results: The inconsistent use of (a) accuracy-based vs. reaction time-based indicators and (b) average performance vs. difference scores. In a sample of 148 participants who completed a battery of executive function tasks, we tried to replicate the three-factor model of the three commonly distinguished executive functions shifting, updating, and inhibition by adopting data-analytical choices of previous work. After addressing the identified methodological limitations using drift–diffusion modeling, we only found one common factor of executive functions that was fully accounted for by individual differences in the speed of information uptake. No variance specific to executive functions remained. Our results suggest that individual differences common to all executive function tasks measure nothing more than individual differences in the speed of information uptake. We therefore suggest refraining from using typical executive function tasks to study substantial research questions, as these tasks are not valid for measuring individual differences in executive functions.

Veröffentlicht am | Veröffentlicht in Aktuelles

Neue Veröffentlichung in Developmental Science

In der Zeitschrift Developmental Science ist ein neuer Artikel aus der Arbeitsgruppe erschienen:

Vermeent, S., Young, E. S., DeJoseph, M. L., Schubert, A.-L., & Frankenhuis, W. E. (2024). Cognitive deficits and enhancements in youth from adverse conditions: An integrative assessment using Drift Diffusion Modeling in the ABCD study. Developmental Science, e13478. https://doi.org/10.1111/desc.13478

Abstract: Childhood adversity can lead to cognitive deficits or enhancements, depending on many factors. Though progress has been made, two challenges prevent us from integrating and better understanding these patterns. First, studies commonly use and interpret raw performance differences, such as response times, which conflate different stages of cognitive processing. Second, most studies either isolate or aggregate abilities, obscuring the degree to which individual differences reflect task-general (shared) or task-specific (unique) processes. We addressed these challenges using Drift Diffusion Modeling (DDM) and structural equation modeling (SEM). Leveraging a large, representative sample of 9–10 year-olds from the Adolescent Brain Cognitive Development (ABCD) study, we examined how two forms of adversity—material deprivation and household threat—were associated with performance on tasks measuring processing speed, inhibition, attention shifting, and mental rotation. Using DDM, we decomposed performance on each task into three distinct stages of processing: speed of information uptake, response caution, and stimulus encoding/response execution. Using SEM, we isolated task-general and task-specific variances in each processing stage and estimated their associations with the two forms of adversity. Youth with more exposure to household threat (but not material deprivation) showed slower task-general processing speed, but showed intact task-specific abilities. In addition, youth with more exposure to household threat tended to respond more cautiously in general. These findings suggest that traditional assessments might overestimate the extent to which childhood adversity reduces specific abilities. By combining DDM and SEM approaches, we can develop a more nuanced understanding of how adversity affects different aspects of youth's cognitive performance.

Veröffentlicht am | Veröffentlicht in Aktuelles