Colloquium Fall 2022


The Quantitative Psychology colloquium series meets weekly during the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics), both within and outside The Ohio State University, often present on contemporary quantitative methodologies. Colloquium meetings also frequently include discussions of quantitative issues in research and of recently published articles.

Faculty coordinator: Dr. Jolynn Pek
Venue: Psychology 35 and online
Time: 12:30-1:30 pm

Please contact Dr. Jolynn Pek if you would like information on how to attend these events over Zoom.

 

August 29, 2022
Organizational Meeting

 

September 5, 2022
Labor Day

 

September 12, 2022
Speaker: Dr. Dakota Cintron
Evidence for Action National Program Office, Robert Wood Johnson Foundation
*joint Quantitative Brownbag event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Advancing Fairness and Equity in Measurement: An Intersectional Approach to Measurement Invariance Testing
Abstract: Measurement Invariance (MI) across groups is one indicator of the appropriateness of a measure for individuals from different populations. MI indicates that scores are invariant across groups, implying that the same construct is being measured across diverse groups. However, the evaluation of MI is typically limited to comparing independent groups defined by a single demographic variable (e.g., men vs. women or no bachelor’s degree vs. with bachelor’s degree). This approach treats social categories as independent and mutually exclusive. However, intersectionality theory dictates that we consider the intersection of social categories (e.g., females with a bachelor’s degree vs. females without a bachelor’s degree). Using intersectionality as a guiding theoretical framework prompts investigations to excavate how a person’s multiple identities and social positions are embedded within systems of inequality. Building on the recommendations in Han et al. (2019), we consider the evaluation of intersectional MI using the alignment method (AM), which was designed to evaluate MI across many groups (Asparouhov & Muthén, 2014). This research demonstrates an approach for using MI testing for evaluating the intersectional construct validity of an instrument, thereby facilitating instrument development on intersectionally defined population subgroups.

 

September 19, 2022
Speaker: Dr. Greg Allenby
Fisher College of Business, The Ohio State University
Title: Is Your Sample Truly Mediating? Bayesian Analysis of Heterogeneous Mediation (BAHM)
Abstract: Mediation analysis is used to study the relationship between stimulus and response in the presence of intermediate, generative variables. The traditional approach to the analysis utilizes the results of an aggregate regression model, which assumes that all respondents go through the same data-generating mechanism. We introduce a new approach that is able to uncover the heterogeneity in mediating mechanisms and provides more informative insights from mediation studies. The proposed approach provides individual-specific probabilities to mediate as well as a new measure of the degree of mediation as the prevalence of mediation in the sample. Covariates in the proposed model help describe the variation in the probability to mediate among respondents. The empirical examination of published studies demonstrates the presence of heterogeneity in mediating processes and supports the need for this new approach. We present evidence that the results of our more flexible heterogeneous mediation analysis do not necessarily agree with traditional aggregate measures. We find that the conclusions from the aggregate analysis are neither sufficient nor necessary to claim mediation in the presence of heterogeneity. A web-based application allowing researchers to analyze the data with the proposed model in a user-friendly environment is developed.

 

September 26, 2022
Speaker: Diana Zhu
Department of Psychology, The Ohio State University
Title: Hierarchical Clustering for Measurement Invariance with Many Groups
Abstract: When measurement invariance (MI) is violated across many groups, one solution is to find clusters of groups within which MI holds and to compare the measured constructs within those clusters (Davidov et al., 2014). In this talk, a common unsupervised learning algorithm, hierarchical clustering, is proposed as a way to organize and view the information from measurement models across many groups within the factor analytic framework.
Hierarchical clustering is a connectivity-based cluster analysis that uses pairwise distances between groups to form a hierarchy of clusters of groups. There are two reasons one might consider clustering methods. (1) They are simple exploratory methods that are widely implemented in standard software and can serve as the second stage of a two-stage analysis, where the first stage is an EFA- or CFA-based factor analysis. (2) Clustering methods are appropriate when one assumes that MI violations have discrete patterns across groups rather than a continuous/random pattern; even if that assumption does not hold, they may still suggest ways to interpret the MI violations.
An analysis of 17 groups from a collaborative project investigating survey translation methods will be presented. This research is part of my dissertation work, and a report of the presented analysis will be submitted to the Journal of Applied Psychology.
Discussant: Kathryn Hoisington-Shaw
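A minimal sketch of the clustering step, in Python, assuming group-specific factor loadings have already been estimated (e.g., from a configural model); the group labels and loading values below are purely hypothetical.

```python
# Hierarchically cluster groups by their estimated factor loadings.
# Assumes per-group loadings were obtained elsewhere (e.g., a configural model);
# the group labels and loading values here are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

groups = ["G1", "G2", "G3", "G4", "G5"]      # hypothetical group labels
loadings = np.array([                        # rows: groups; columns: item loadings
    [0.72, 0.65, 0.80, 0.55],
    [0.70, 0.66, 0.78, 0.57],
    [0.50, 0.48, 0.60, 0.40],
    [0.71, 0.64, 0.79, 0.54],
    [0.52, 0.47, 0.61, 0.42],
])

# Pairwise distances between the groups' loading patterns, then
# agglomerative (average-linkage) clustering to build the hierarchy.
dist = pdist(loadings, metric="euclidean")
tree = linkage(dist, method="average")

# Cut the tree into two clusters; groups sharing a cluster have similar
# loading patterns and are candidates for within-cluster comparisons.
cluster_labels = fcluster(tree, t=2, criterion="maxclust")
for g, c in zip(groups, cluster_labels):
    print(g, "-> cluster", c)
```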

 

October 3, 2022
Speaker: Junyeong Yang
Quantitative Research, Evaluation, and Measurement (QREM), College of Education
Title: Performance of the Bias-Corrected Bootstrap Confidence Interval for the Parameter k Method in the Actor-Partner Interdependence Model (APIM)
Abstract: In the APIM, various dyadic patterns between an actor and a partner can be examined using the parameter k, defined as the ratio of the partner effect to the actor effect. It can be tested by including a phantom variable in the model and inspecting whether the bias-corrected (BC) bootstrap confidence interval (CI) for k includes 1, 0, or -1. However, no study has examined the performance of the BC bootstrap CI in the APIM context. This study examines the performance of the BC bootstrap CI for the parameter k method under various conditions of sample size and k ratio. Results showed that the convergence rate increased with larger sample sizes and decreased with smaller values of the k ratio. For smaller sample sizes and larger values of the k ratio, the width of the CI increased, indicating an increased possibility of including two or more of the values -1, 0, and 1. The width of the CI became stable when the sample size was large enough.
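A compact numerical sketch of the bias-corrected (BC) bootstrap CI for a ratio of a partner effect to an actor effect, with simulated data and ordinary regression standing in for the phantom-variable SEM setup described above; the sample size, effect values, and variable names are assumptions for illustration only.

```python
# Bias-corrected (BC) bootstrap CI for k = partner effect / actor effect.
# Simulated data and OLS stand in for the APIM/phantom-variable setup; all
# values below are illustrative assumptions, not the study's simulation design.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200                                      # number of dyads (assumed)
a_true, p_true = 0.50, 0.25                  # actor and partner effects (assumed), so k = 0.5
x_actor = rng.normal(size=n)
x_partner = rng.normal(size=n)
y = a_true * x_actor + p_true * x_partner + rng.normal(size=n)

def k_ratio(xa, xp, y):
    """OLS estimates of actor and partner effects; returns k = partner / actor."""
    X = np.column_stack([np.ones_like(xa), xa, xp])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b[2] / b[1]

k_hat = k_ratio(x_actor, x_partner, y)

# Nonparametric bootstrap, resampling dyads with replacement
B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)
    boot[i] = k_ratio(x_actor[idx], x_partner[idx], y[idx])

# Bias correction: z0 captures the median bias of the bootstrap distribution,
# and the percentile endpoints are shifted accordingly (BC, no acceleration term).
z0 = norm.ppf(np.mean(boot < k_hat))
alpha = 0.05
lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
ci = np.quantile(boot, [lo, hi])
print(f"k_hat = {k_hat:.3f}, BC 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Checking whether the resulting interval covers -1, 0, or 1 mirrors the decision rule described in the abstract.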

 

October 10, 2022
Speaker: Dr. Viji Sathy & Dr. Abigail Panter
University of North Carolina at Chapel Hill
*joint Quantitative Brownbag event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Using a Quantitative Mindset to Advance DEI in Higher Ed
Abstract: This session will illustrate systems-level approaches to advance equity and inclusion in undergraduate education using a quantitative mindset. Drawing upon our leadership experiences in the Office of Undergraduate Education at UNC-Chapel Hill, we will share some of the innovative projects we’ve conducted to promote equity in student success. First, we will share projects that are designed to increase student engagement and learning for all our students. Next, we will discuss programmatic efforts to support faculty in their teaching and scholarship of teaching, as well as administrators in monitoring efforts to help students succeed. Finally, we will share some research projects we are currently undertaking to advance the scholarship of inclusive teaching and provide evidence of the educational benefits of diversity.

 

October 17, 2022
Speaker: Dr. Menglin Xu
Laboratory for Investigatory Imaging
College of Medicine, The Ohio State University
Title: Two-Method Measurement Planned Missing Data with Purposefully Selected Samples
Abstract: Two-method measurement (TMM) planned missing data designs are an increasingly popular tool for designing efficient research under budget constraints. Previous TMM research has typically focused on the missing completely at random (MCAR) mechanism, leaving the question of TMM under missing at random (MAR) mechanisms unaddressed. This study fills the gap with a Monte Carlo simulation varying the missing data design and the size of the autoregressive path. Results suggest that (a) a higher correlation between the auxiliary variable and the model variable is associated with greater estimation bias in factor loadings and autoregressive paths; (b) including the auxiliary variable in the analysis model largely recovers estimation accuracy; and (c) statistical power to detect the treatment effect is affected by the size of the autoregressive path.
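A minimal sketch of how a MAR pattern of this kind can be imposed in simulation: the expensive measure is observed only for a subsample selected on an inexpensive auxiliary variable. The variable names, sample size, and correlations are illustrative assumptions, not the conditions of the study.

```python
# Impose MAR planned missingness on an "expensive" measure, with selection
# driven by an observed, inexpensive auxiliary variable. All values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 500
aux = rng.normal(size=n)                                  # cheap measure, observed for everyone
expensive = 0.6 * aux + rng.normal(scale=0.8, size=n)     # costly "gold standard" measure

# MAR selection: higher auxiliary scores are more likely to receive the expensive measure.
p_select = 1 / (1 + np.exp(-1.5 * aux))                   # logistic selection on the observed auxiliary
selected = rng.random(n) < p_select

expensive_obs = np.where(selected, expensive, np.nan)     # planned missingness
print(f"Expensive measure observed for {selected.mean():.0%} of the sample")
# Because selection depends only on the observed auxiliary variable, including
# that variable in the analysis model keeps the mechanism MAR, consistent with
# finding (b) above.
```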

 

October 24, 2022
Speaker: Jacob Coutts
Department of Psychology, The Ohio State University
Title: Should we be doing this? A deep dive into conditional indirect effects
Abstract: Researchers interested in understanding causal relationships must not only test whether X causes Y, but also how and/or when X causes Y. Mediation analysis is a tool that allows researchers to identify the mechanism(s) by which one variable causes another, whereas moderation analysis allows researchers to detect when one variable's effect is heterogeneous across levels of another variable (or multiple variables). Although these analyses lead to a deeper understanding of an observed relationship, they are often too simplistic in isolation to properly model real-world effects. Combining mediation and moderation into a single analysis allows one to study conditional indirect effects, that is, cases in which an indirect effect of X on Y varies across the levels of a moderator. Methodological researchers have paid much attention to how to test for conditional indirect effects. However, considerably less work has been devoted to evaluating the performance of these proposed methods. A review of prior simulation studies reveals that current methods perform relatively poorly except under the most optimistic combinations of effect size and sample size. Despite this, many substantive researchers continue to use these methods and rely on them for dichotomous decisions about, and interpretations of, such effects. A simulation study was conducted to compare inferential tests for conditional indirect effects with more plausible combinations of sample size, effect size, and variable type (e.g., dichotomous vs. continuous independent variables). All methods had low Type I error rates and low power in many of the study conditions. These limitations in statistical performance can be ameliorated by a more careful presentation of the model results. Clear guidelines are presented for substantive researchers on how best to specify, test, and interpret conditional indirect effects. Future directions for methodological researchers are also discussed.

Discussant: Frank Leyva Castro
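A minimal sketch of the core computation for a first-stage conditional indirect effect, with percentile-bootstrap CIs at low and high values of the moderator; the simulated data, effect sizes, and variable names are assumptions for illustration, and the abstract's point is precisely that such intervals can perform poorly outside optimistic conditions.

```python
# Conditional indirect effect (a1 + a3*w) * b from a first-stage moderated
# mediation model, with percentile-bootstrap CIs at +/- 1 SD of the moderator.
# Simulated data and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
w = rng.normal(size=n)
m = 0.4 * x + 0.2 * w + 0.3 * x * w + rng.normal(size=n)   # a-path moderated by W
y = 0.5 * m + 0.1 * x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "w": w, "m": m, "y": y})

def cond_indirect(data, w_val):
    """(a1 + a3 * w_val) * b: indirect effect of X on Y through M at W = w_val."""
    a = smf.ols("m ~ x * w", data=data).fit().params
    b = smf.ols("y ~ m + x", data=data).fit().params["m"]
    return (a["x"] + a["x:w"] * w_val) * b

B = 1000
for label, w_val in {"-1 SD": -1.0, "+1 SD": 1.0}.items():
    est = cond_indirect(df, w_val)
    boot = [cond_indirect(df.sample(n, replace=True, random_state=int(s)), w_val)
            for s in rng.integers(0, 1_000_000, size=B)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"W at {label}: indirect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```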

 

October 31, 2022
Speaker: Frank Leyva Castro
Department of Psychology, The Ohio State University
Title: Residual Diagnostics in Structural Equation Models
Abstract: Structural Equation Modeling (SEM) is a popular framework in the social sciences. As a linear modeling framework, it shares similar assumptions with Multiple Linear Regression (MLR), Multilevel Models (MLM), and Factor Analysis (FA). However, unlike for these models, the SEM literature on assumption assessment is surprisingly scarce, focusing mainly on outliers and influential cases. Because MLR, MLM, and FA can be fitted as special cases of SEM, we deem it tenable to apply diagnostic approaches developed in those literatures within the scope of SEM. The focus of this presentation is to showcase several approaches to residual computation and analysis, as well as to propose further development of these procedures.

Discussant: Shannon Jacoby

 

November 7, 2022
Speaker: Dr. Nathan Kuncel
University of Minnesota
*joint Quantitative Brownbag event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Moving Toward Evidence-Based Practice in Graduate Admissions
Abstract: The validity and fairness of admissions decisions are driven by the quality of the information considered and the decision-making process used to combine that information. Ideally, decision makers should consider multiple, highly valid predictors. Each of these predictors should provide incremental information over the others. Finally, the predictor information should be combined consistently and weighted to maximize the predictive power of the available information. Unfortunately, most graduate school admissions processes are the exact opposite of this ideal. Typically, information sources with near-zero predictive power are seriously considered and discussed. This information is then subjectively weighted at the whim of the decision maker. Finally, the ultimate decision is often based on undisciplined group discussions with little to no follow-up or accountability. Most holistic admissions processes are unintended and well-intentioned shams. In this talk, I will provide evidence for this thesis and discuss what to do about it.

 

November 14, 2022
Speaker: Dr. Y. Andre Wang
University of Toronto, Scarborough
Title: Power Analysis for Parameter Estimation in Structural Equation Modeling
Abstract: Despite the widespread and rising popularity of structural equation modeling (SEM) in psychology, there is still much confusion surrounding how to choose an appropriate sample size for SEM. Dominant guidance on this topic primarily consists of sample-size rules of thumb that are not backed up by research and power analyses for detecting model misspecification. Missing from most current practices is power analysis for detecting a target effect (e.g., a regression coefficient between latent variables). In the first part of my talk, I distinguish between power to detect model misspecification and power to detect a target effect, report the results of a simulation study on the latter type of power, and introduce a user-friendly Shiny app, pwrSEM, for conducting power analysis for detecting target effects in structural equation models. In the second part of my talk, I reflect on the pros and cons of building a user-friendly statistical tool, and I consider epistemological reasons for (vs. against) conducting power analysis for complex models in the first place.
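A minimal sketch of the Monte Carlo logic underlying power analysis for a target effect: simulate data under assumed population values, fit the model, and record how often the focal parameter is significant. A simple observed-variable regression stands in for the latent-variable model here; pwrSEM applies this kind of logic to effects within a full structural equation model.

```python
# Monte Carlo power for detecting a target (standardized) slope. This is a
# simplified observed-variable stand-in for power analysis of a target effect;
# the effect size, sample sizes, and number of replications are assumptions.
import numpy as np
from scipy import stats

def power_for_slope(n, beta=0.20, n_sims=2000, alpha=0.05, seed=0):
    """Estimated power to detect a standardized slope `beta` at sample size `n`."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
        hits += stats.linregress(x, y).pvalue < alpha
    return hits / n_sims

for n in (100, 200, 400):
    print(f"n = {n}: estimated power = {power_for_slope(n):.2f}")
```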

 

November 16, 2022
*Extra event on a Wednesday
Speaker: Dr. Denny Borsboom
University of Amsterdam
Title: Measurement Theory and Psychometrics: A Network Perspective
Abstract: Psychological measurement has traditionally been approached through the lens of psychometric models: statistical structures that form a bridge between substantive psychological theory and empirical data. For most of the 20th century, the dominant model in psychometrics was the latent variable model, in which test scores are viewed as effects of a latent psychological construct. In recent years, however, psychological constructs have increasingly been interpreted in terms of networks of interacting beliefs, abilities, affect states, and behaviors; such conceptualizations have gained considerable momentum in psychopathology research and are also on the rise in research on attitudes, intelligence, and personality. From this perspective, a psychological construct is not seen as a latent variable that underlies or determines observable behaviors, but as a property that emerges from the interaction between network components. A novel psychometric modeling tradition associated with this idea has developed statistical structures to serve as a bridge between network theory and data: network psychometrics. In this talk, I will explain how network psychometrics relates to traditional psychometric perspectives and how it changes some pivotal elements in thinking about psychological measurement.
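A minimal sketch of one common network-psychometric estimator: a regularized Gaussian graphical model in which edges are nonzero partial correlations between items. The simulated items and the use of scikit-learn's graphical lasso are illustrative choices, not the speaker's specific method.

```python
# Estimate a partial-correlation network among items via the graphical lasso.
# The simulated item data are illustrative; in practice the columns would be
# symptom, attitude, or ability measures.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(11)
n, p = 400, 6
common = rng.normal(size=(n, 1))                          # shared structure among items
items = 0.6 * common + rng.normal(scale=0.8, size=(n, p))

model = GraphicalLassoCV().fit(items)
precision = model.precision_

# Convert the precision matrix to partial correlations (the network's edge weights).
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)
print(np.round(partial_corr, 2))
```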

 

November 21, 2022
Speaker: Dr. Natasha Bowen & Dr. Gerald Bean
The Ohio State University
Title: Confirmatory Factor Analysis and Item Response Theory as Complementary Approaches to Scale Development
Abstract: Proponents of CFA and IRT often use their preferred approach to the exclusion of the other. The presenters see this reliance on one method as a lost opportunity to enhance the validity argument for the use of scales in research and in practice settings. Drs. Bean and Bowen illustrate how CFA and IRT can be used in combination to provide more complete information on the functioning of scales and their individual items. The presentation will demonstrate ways that corresponding elements of CFA and IRT results can increase confidence in the quality of scales; for example, model fit statistics, dimensionality information, CFA factor loadings and IRT slope parameters, and CFA thresholds and IRT response category probabilities. The presentation will also demonstrate how statistics unique to each approach contribute to a comprehensive evaluation of scales; for example, CFA correlated errors and scale reliability, and IRT conditional reliabilities and scale information values. The presenters will highlight how analysis results inform scale improvement efforts.
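One concrete point of correspondence mentioned above is between CFA loadings and IRT slopes. Under a normal-ogive model with a standardized latent trait and a categorical-indicator CFA in the delta parameterization (an assumption about parameterization, not something stated in the abstract), the usual conversion is

```latex
a_j = \frac{\lambda_j}{\sqrt{1 - \lambda_j^2}}, \qquad b_j = \frac{\tau_j}{\lambda_j}
```

where \(\lambda_j\) is the standardized loading, \(\tau_j\) the threshold, \(a_j\) the IRT discrimination, and \(b_j\) the difficulty.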

 

November 28, 2022
Panel Speakers:
Dr. Jessica Logan, Vanderbilt University
Dr. Amanda Montoya, University of California, Los Angeles
Chris Strauss, University of North Carolina at Chapel Hill
*joint Quantitative Brownbag event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia

Title: Teaching for Diversity in Quantitative Courses
Abstract: In our increasingly diverse and multicultural society, it is more important than ever for teachers to incorporate culturally responsive instruction in the classroom. Quantitative methodology instructors have a unique opportunity to incorporate instruction that focuses on Diversity, Equity, and Inclusion (DEI). The three invited panel speakers have incorporated DEI in their teaching activities and will discuss ways to engage students in conversations around issues concerning DEI.

 

December 5, 2022
Brief presentations on external talks by the following students:

Francisco Leyva-Castro
Diana Zhu
Kathryn Hoisington-Shaw

 

Robert Wherry Speaker Series

Colloquium Archive