Colloquium Spring 2023


The Quantitative Psychology colloquium series meets weekly in the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics), both within and outside The Ohio State University, also frequently present on contemporary quantitative methodologies. Colloquium meetings additionally feature discussions of quantitative issues in research and of recently published articles.

Faculty coordinator: Dr. Jolynn Pek
Venue: Psychology 35 and online
Time: 12:30-1:30pm

Please contact Dr. Jolynn Pek if you would like information on how to attend these events over Zoom.

 

January 9, 2023
Organizational Meeting

 

January 16, 2023
Martin Luther King Day

 

January 23, 2023
Paper Discussion
Cummings, N., & Cummings, D. (2021). Historical chronology: Examining psychology’s contributions to the belief in racial hierarchy and perpetuation of inequality for people of color in the U.S. American Psychological Association.

 

January 30, 2023
Speaker: Dr. Clare Evans
Department of Sociology, University of Oregon
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Multilevel Models of Intersectional Inequalities
Abstract: How can we incorporate intersectional thinking into our research, evaluation, and promotion of Diversity, Equity, and Inclusion? Intersectional MAIHDA (multilevel analysis of individual heterogeneity and discriminatory accuracy) is a quantitative method recently developed in the field of social epidemiology (Evans et al., 2018). It has been hailed as the “new gold standard for investigating health disparities” (Merlo, 2018). Intersectional MAIHDA is a tool for engaging in inclusive, justice-focused research and practice, with potential applications across the social and population health sciences. Empirical and simulation studies have shown that MAIHDA has numerous methodological and theoretical advantages over conventional methods, such as regression models with high-dimensional interaction parameters (Bell et al., 2019; Mahendran et al., 2022a, 2022b; Evans et al., 2020). At its heart, MAIHDA is a reimagining of multilevel (hierarchical) models for a new purpose: quantitative intersectional analysis. In this session, I will introduce intersectional MAIHDA, explore examples to showcase its potential, and address practical issues to help you consider whether it would be useful in your own work.
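As a rough illustration of the core machinery, the sketch below fits MAIHDA's null model, an intercept-only multilevel model with intersectional strata as the grouping factor, and computes the variance partition coefficient (VPC) that quantifies discriminatory accuracy. This is a minimal frequentist sketch on simulated data using statsmodels; published MAIHDA analyses are typically fit in a Bayesian framework, and all variables and parameter values here are hypothetical.

```python
# Minimal MAIHDA-style sketch: individuals nested within intersectional
# strata, fit as a random-intercept model. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000

# Simulate three social identity dimensions and a continuous outcome.
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "race": rng.integers(0, 4, n),
    "income": rng.integers(0, 3, n),
})
# Each unique combination of identities defines an intersectional stratum.
df["stratum"] = (df["gender"].astype(str) + "_"
                 + df["race"].astype(str) + "_"
                 + df["income"].astype(str))
effects = {s: rng.normal(0, 0.5) for s in df["stratum"].unique()}
df["y"] = df["stratum"].map(effects) + rng.normal(0, 1.0, n)

# Null (intercept-only) multilevel model.
fit = smf.mixedlm("y ~ 1", df, groups=df["stratum"]).fit()

# VPC: the share of outcome variance lying between strata, MAIHDA's
# measure of the discriminatory accuracy of the intersectional strata.
var_between = fit.cov_re.iloc[0, 0]
print(f"VPC: {var_between / (var_between + fit.scale):.3f}")
```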

 

February 6, 2023
Speaker: Marco Chen
Department of Psychology & Neuroscience, University of North Carolina at Chapel Hill
Title: Modeling Growth When Measurement Properties Change Between Persons and Within Persons Over Time: A Bayesian Regularized Second-Order Growth Curve Model
Abstract: A common interest in educational and psychological measurement is to examine change over time through longitudinal assessments. To accurately capture true change in an underlying construct of interest, we must also account for changes in the way the construct manifests itself over time. One essential approach is longitudinal measurement models that analyze construct change over time and evaluate item characteristics at each timepoint. However, limitations exist for traditional longitudinal measurement and second-order growth models, such as an inability to incorporate time-varying covariates (TVC) that possess different values among individuals at a given timepoint. We propose an alternative model drawing on the advantages of regularized moderated nonlinear factor analysis (MNLFA; Bauer et al., 2021). This setup follows the MNLFA framework in using covariate moderation on item parameters to represent differential item functioning (DIF). The proposed model is more parsimonious than the traditional second-order growth model and one of the first setups to estimate DIF effects from both time-varying and time-invariant covariates. Additionally, this model can address DIF effects from multiple covariates simultaneously without imposing a priori item equality constraints. It does so by applying Bayesian regularization to DIF effects and identifying the model without using anchor items (Chen et al., 2022). The current study evaluates the performance of the proposed regularized longitudinal MNLFA model through a simulation and presents an empirical example on adolescent delinquency and early alcohol use. This study demonstrates the feasibility and importance of including both time-varying and -invariant covariate effects in longitudinal measurement evaluation and growth modeling.
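To make the moderation idea concrete, the toy simulation below generates responses to a single item whose intercept is a function of a time-varying covariate, which is how the MNLFA framework represents DIF; in the proposed model, Bayesian regularization would shrink small moderation effects such as `nu1` toward zero rather than relying on anchor items. All parameter values are hypothetical, and this is not the authors' code.

```python
# Toy sketch of DIF as covariate moderation of measurement parameters.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_times = 500, 3

eta = rng.normal(0, 1, (n_persons, n_times))  # latent construct over time
x = rng.normal(0, 1, (n_persons, n_times))    # time-varying covariate (TVC)

nu0, nu1 = 0.0, 0.4    # baseline intercept and intercept-DIF effect
lam0, lam1 = 0.8, 0.0  # baseline loading; lam1 != 0 would give loading DIF

# Observed item response: measurement parameters moderated by x.
y = (nu0 + nu1 * x) + (lam0 + lam1 * x) * eta \
    + rng.normal(0, 0.5, (n_persons, n_times))

# With nu1 != 0, responses shift with x even at fixed levels of eta,
# so ignoring the moderation would conflate DIF with construct change.
print(np.corrcoef(x.ravel(), (y - lam0 * eta).ravel())[0, 1])
```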

 

 

February 13, 2023
Speaker: Kathryn Hoisington-Shaw
Department of Psychology, The Ohio State University
Title: There and Back Again: A Sequential Testing Tale
Abstract: Planning a study requires researchers to make a number of decisions, not the least of which is what sample size to choose. The gold standard in psychological research is to run a power analysis to calculate sample size, which requires an effect size as input. Results of a recent meta-science study show that the effect sizes used in these power analyses are most commonly sourced from previously collected data. However, using these point-estimated effect sizes does not account for sampling variability and results in imprecise estimates. There are modern power methods that account for this sampling variability, but Monte Carlo simulations show that these approaches often result in unrealistically high sample sizes. Given these limitations, this talk moves away from statistical power and focuses instead on the potential of the sequential probability ratio test (SPRT) for study design and analysis. How to use the SPRT in practice will be discussed, along with two new extended SPRT approaches that take the sampling variability of estimates into account when using previously collected data to plan future research. Results of Monte Carlo simulations evaluating and comparing these methods will be presented, along with a comparison to the modern power approaches and final recommendations for study design.
Discussant: Jingdan (Diana) Zhu
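For intuition, here is a minimal Wald SPRT sketch for a one-sample test of a normal mean with known standard deviation: each observation updates a log-likelihood ratio, and sampling stops as soon as the ratio crosses a decision boundary. The hypothesized effect size, error rates, and data are hypothetical, and the extended SPRT approaches proposed in the talk are not shown.

```python
# Minimal Wald SPRT: H0: mu = 0 vs. H1: mu = 0.5, sigma known.
import numpy as np

def sprt(data, mu0=0.0, mu1=0.5, sigma=1.0, alpha=0.05, beta=0.05):
    """Return ('H0' | 'H1' | 'inconclusive', n observations used)."""
    lower = np.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = np.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(data, start=1):
        # Log-likelihood ratio increment for one normal observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", len(data)

rng = np.random.default_rng(2)
print(sprt(rng.normal(0.5, 1.0, 200)))  # data generated under H1
```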

 

February 20, 2023
Speaker: Shannon Jacoby
Department of Psychology, The Ohio State University
Title: Assessing linearity via monotonicity and polynomials: A simulation study
Abstract: Selecting an appropriate modeling framework is one of the first tasks researchers face after data collection and cleaning have been completed. After making this choice, model diagnostics are typically run to evaluate adherence to the model's assumptions. Across a wide array of content areas, many researchers choose to operate within a linear modeling framework, and more specifically within the regression approach. A central assumption of this approach is linearity of the regression function, and the current diagnostic criterion for this assumption is visual inspection of a scatterplot or residual plot for the absence of systematic nonlinearity. Given that violation of the linearity assumption can lead to biased estimates of parameters and error variances, our current work endeavors to create an additional avenue for examining the linearity assumption that goes beyond graphical analysis. Borrowing inspiration from the ANOVA tradition, we apply three different contrast analyses (trend, Helmert, and adjacent) to simulated data in an effort to identify potential inflection points, which further informs our understanding of the monotonic quality of the relationship between the independent and dependent variables. Criteria for evaluating this method will be defined and possible limitations addressed. Additionally, advantages and drawbacks of a continuous approach to simulating data will be explored.
Discussant: Hanrui Mei
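As a small illustration of the approach, the sketch below applies the three contrast families to hypothetical cell means at four ordered levels of a predictor: sizeable quadratic or cubic trend contrasts flag departure from linearity, while adjacent contrasts that change sign suggest an inflection point. The weights and means are illustrative, not the study's simulation design.

```python
# Trend, Helmert, and adjacent (successive-difference) contrasts, k = 4.
import numpy as np

means = np.array([1.0, 2.1, 2.9, 4.2])  # hypothetical cell means

trend = np.array([[-3, -1,  1, 3],      # linear
                  [ 1, -1, -1, 1],      # quadratic
                  [-1,  3, -3, 1]])     # cubic
helmert = np.array([[3, -1, -1, -1],    # level 1 vs. mean of levels 2-4
                    [0,  2, -1, -1],    # level 2 vs. mean of levels 3-4
                    [0,  0,  1, -1]])   # level 3 vs. level 4
adjacent = np.array([[1, -1,  0,  0],   # successive differences
                     [0,  1, -1,  0],
                     [0,  0,  1, -1]])

for name, C in [("trend", trend), ("Helmert", helmert),
                ("adjacent", adjacent)]:
    print(f"{name:>8}: {C @ means}")
```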

 

February 27, 2023
Speaker: Dr. Soojin Park
Graduate School of Education, University of California – Riverside
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Estimation and Sensitivity Analysis for Causal Decomposition: Assessing Robustness Toward Omitted Variable Bias
Abstract: A key objective of decomposition analysis is to identify risks or resources (‘mediators’) that contribute to disparities between groups of individuals defined by social characteristics such as race, ethnicity, gender, class, and sexual orientation. In decomposition analysis, scholarly interest often centers on estimating how much a disparity (e.g., the health disparity between Black women and White men) would be reduced, or would remain, if we set the mediator (e.g., education) distribution of one social group equal to that of another. However, causal identification of the disparity reduction and the disparity remaining depends on the assumption of no omitted mediator-outcome confounding, which is not empirically testable. In this talk, we discuss a flexible way to 1) estimate the disparity reduction and disparity remaining and 2) assess the robustness of the estimates to possible violation of this assumption. We apply the proposed methods to an empirical example examining the contribution of education to reducing health disparities across race-gender groups. Our proposed methods are available as open-source software (the ‘causal.decomp’ R package).
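As a rough illustration of the estimand, the simulation below computes a disparity in an outcome between two groups, then re-simulates the disadvantaged group with the other group's mediator distribution to obtain the disparity remaining and, by subtraction, the disparity reduction. The data-generating model is entirely hypothetical; this plug-in sketch illustrates the quantities being estimated and is not the causal.decomp package's API or the authors' estimator.

```python
# Toy causal decomposition: disparity, remaining, and reduction.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def simulate(group, m=None):
    # Mediator (e.g., education) distribution differs by group
    # unless a counterfactual mediator m is supplied.
    if m is None:
        m = rng.binomial(1, 0.7 if group == 0 else 0.4, n)
    return 1.0 * m - 0.5 * group + rng.normal(0, 1, n)  # outcome

y0, y1 = simulate(group=0), simulate(group=1)
disparity = y1.mean() - y0.mean()                  # about -0.80

# Counterfactual: give group 1 the mediator distribution of group 0.
y1_cf = simulate(group=1, m=rng.binomial(1, 0.7, n))
remaining = y1_cf.mean() - y0.mean()               # about -0.50

print(f"disparity: {disparity:+.2f}")
print(f"remaining: {remaining:+.2f}")
print(f"reduction: {disparity - remaining:+.2f}")
```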

 

March 6, 2023
Speaker: Jacob Coutts
Department of Psychology, The Ohio State University
Title: A Methodological Perspective on Teaching, Mentorship, and Research
Abstract: As a faculty member, it is important to have a strong teaching, mentorship, and research philosophy. In this job talk, I describe my personal philosophy in these domains and provide evidence of how I incorporate methodology and diversity in each. I conclude with a teaching demo and follow-up activity that synthesize the content of the talk and demonstrate my commitment to active learning in the classroom. The structure of this job talk, for an Assistant Teaching Professor position in Psychology, is based on what was requested by the search committee at Northern Arizona University.

 

March 13, 2023
Spring Break

 

March 20, 2023
Speaker: Dr. Zachary Fisher
College of Health and Human Development, Pennsylvania State University
Title: Structured Estimation of Time Series from Multiple Individuals
Abstract: Data arising from high-dimensional time-dependent systems are increasingly common in the health, social, and behavioral sciences. Despite the many benefits these data provide, relatively little work has addressed the strong structural heterogeneity present in many processes involving human behavior. To address this gap in the literature, I will discuss recent work on modeling time series data arising from multiple individuals in which both qualitative and quantitative differences in the structure of individual dynamics are accommodated.
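As a toy illustration of the setting, the sketch below simulates VAR(1) time series for two individuals whose transition matrices share some paths (common structure) and differ in others (individual structure), then recovers each person's dynamics by ordinary least squares. The structured, heterogeneity-aware estimation discussed in the talk is not shown; all matrices and values are hypothetical.

```python
# Two individuals' VAR(1) dynamics: shared paths plus person-specific paths.
import numpy as np

rng = np.random.default_rng(4)
T, p = 300, 3
common = np.array([[0.5, 0.2, 0.0],
                   [0.0, 0.4, 0.0],
                   [0.0, 0.0, 0.3]])
unique = [np.zeros((p, p)),
          np.array([[0.0, 0.0, 0.3],
                    [0.0, 0.0, 0.0],
                    [0.2, 0.0, 0.0]])]

for person, U in enumerate(unique):
    A = common + U
    x = np.zeros((T, p))
    for t in range(1, T):                 # simulate x_t = A x_{t-1} + e_t
        x[t] = A @ x[t - 1] + rng.normal(0, 1, p)
    # Per-person OLS estimate of the transition matrix.
    A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
    print(f"person {person} estimated A:\n{A_hat.round(2)}")
```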

 

March 27, 2023
Speaker: Dr. Lisa Wijsen
Faculty of Social and Behavioral Sciences, University of Amsterdam
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Values in Psychometrics
Abstract: Measuring psychological attributes has become a routine part of our lives. In the Netherlands, children are tested at 12 years old to help with choosing an appropriate secondary school; job applicants are often expected to complete a wide range of psychological assessments; and when we go to a clinical psychologist, it is not unusual to be given a diagnostic measurement tool such as the BDI. The role of testing and measurement in our society is often a topic of debate and can be considered a moral choice. Do we want to measure children at an early age? Is it desirable to have test scores determine whether we end up in university or not? What is the right thing to do? For the technical field of psychometrics, however, the moral dimension is less clear. Psychometricians often work on highly complex models that estimate individual differences from observed scores. The language psychometricians use is strongly model- and mathematics-based, and often strongly separated from applications of psychometrics. Even though this technical work does not seem to depend on any type of moral value, when we look closely, we can still discern several (moral) values. In this talk, I will discuss the following four values: that individual differences are quantitative (rather than qualitative), that measurement should be objective in a specific sense, that test items should be fair, and that the utility of a model is more important than its truth. The goal of this talk is not to criticize psychometrics for supporting these values, but rather to bring them into the open and to encourage psychometricians to enter the debate on the moral dimensions of their field.

 

April 3, 2023
Speaker: Dr. Wes Bonifay
College of Education & Human Development, University of Missouri
Title: The Hidden Depths of Complexity in Statistical Modeling
Abstract: It is no secret that statistical model complexity affects goodness-of-fit to the observed data (i.e., accommodation) and generalizability to future data (i.e., prediction). Less obvious is that typical methods of addressing complexity fail to tell the whole story: Familiar statistics such as BIC account for parametric complexity (due to the presence of many model parameters) while configural complexity (due to the particular configuration of model variables) remains undetected. To obscure matters further, a model may be simultaneously contaminated by both sources of complexity, and the knowledge that can be gained from application of such a model will be severely limited. In this talk, I will present several methods that have been recently developed to detect and counteract the effects of configural complexity, and to thereby improve inference, generalizability, and replication in model-based research.
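One way to see configural complexity empirically is through fitting propensity: two models with the same number of parameters can differ in how well they fit arbitrary data patterns, purely because of their functional form. The sketch below compares two hypothetical two-parameter models on random data; it illustrates the idea only and is not one of the methods from the talk.

```python
# Fitting propensity: equal parameter counts, unequal flexibility.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
x = np.linspace(0.1, 1.0, 20)

models = {
    "linear": lambda x, a, b: a + b * x,     # two parameters
    "power": lambda x, a, b: a * x ** b,     # two parameters
}

sse = {name: [] for name in models}
for _ in range(200):
    y = rng.uniform(0, 1, x.size)            # arbitrary (random) data
    for name, f in models.items():
        try:
            est, _ = curve_fit(f, x, y, p0=[0.5, 0.5], maxfev=2000)
            sse[name].append(np.sum((y - f(x, *est)) ** 2))
        except RuntimeError:
            pass                             # skip non-converged fits

# Lower average error on random data = higher fitting propensity.
for name, vals in sse.items():
    print(f"{name}: mean SSE on random data = {np.mean(vals):.3f}")
```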

 

April 10, 2023
Speaker: Hanrui Mei
Department of Psychology, The Ohio State University
Title: Comparing the diffusion model and the accumulator model with a cross-fitting method
Abstract: Sequential sampling models are widely used to predict both response time (RT) and choice probability data in behavioral decision tasks. Different models in this domain make different assumptions about how information is accumulated in cognitive processes. For example, diffusion models assume that evidence accumulation for two choice alternatives is a single process, while accumulator (or racing) models accumulate the evidence for the two alternative responses as two separate diffusion processes. However, these models can exhibit model mimicry, performing similarly well in predicting behavioral data despite their different underlying assumptions. In this study, we employ both data-informed and data-uninformed cross-fitting methods to examine model mimicry between the diffusion model and the racing model. The cross-fitting method generates data from one model and fits it to the other; the second model should fit the data well if it can capture the pattern of data generated by the first.
Discussant: Jacob Coutts
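For intuition about the two model classes being compared, the sketch below simulates one trial from each: a single relative-evidence diffusion process versus two independent racing accumulators, each producing a choice and a response time. The parameter values are hypothetical, and the cross-fitting step itself (fitting each model to data generated by the other) is not shown.

```python
# Toy generators for the diffusion model and the racing accumulator model.
import numpy as np

rng = np.random.default_rng(6)
dt, bound, noise = 0.001, 1.0, 1.0

def diffusion_trial(drift=1.0):
    x, t = 0.0, 0.0
    while abs(x) < bound:   # one relative-evidence process, two boundaries
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t             # (choice, RT)

def race_trial(drifts=(1.2, 0.8)):
    x, t = np.zeros(2), 0.0
    while x.max() < bound:                    # two independent accumulators
        x += np.array(drifts) * dt + noise * np.sqrt(dt) * rng.normal(size=2)
        t += dt
    return int(np.argmax(x)), t               # (choice, RT)

print("diffusion:", diffusion_trial())
print("race:     ", race_trial())
```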

 

April 17, 2023
Speaker: Dr. Rachel Fouladi
Department of Psychology, Simon Fraser University
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
CANCELLED

 

April 24, 2023
Brief presentations on external talks by the following students:
Jacob Coutts
Shannon Jacoby
Hanrui Mei
