Colloquium Fall 2021

The Quantitative Psychology colloquium series meets weekly during the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics), both within and outside The Ohio State University, often present on contemporary quantitative methodologies. Colloquium meetings also frequently include discussions of quantitative issues in research and of recently published articles.

Faculty coordinator: Dr. Jolynn Pek
Venue: Psychology 35 or over Zoom
Time: 12:30-1:30 p.m.

Please contact Dr. Jolynn Pek if you would like information on how to attend these events over Zoom.

 

August 30, 2021
Organizational Meeting

 

September 6, 2021
Labor Day

 

September 13, 2021
Speaker: Dr. Edgar Merkle
Department of Psychological Sciences, University of Missouri
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Some progress and problems in Bayesian SEM
Abstract: The talk will involve a combination of research and software on Bayesian structural equation models, with some material being relevant to general Bayesian modeling in psychology. I will describe how the blavaan package was started and where it is going in the future. I will then turn to research that has stemmed from the development of blavaan, including some problems that are not fully solved. Topics include the speed and efficiency of model estimation, methods for computing Bayesian information criteria, and methods for specifying proper prior distributions. Finally, I will discuss the collaborative work that can arise from open-source software.
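
One talk topic, computing Bayesian information criteria, can be made concrete with a small example. Below is a minimal sketch (in Python, for illustration; blavaan itself is an R package, and this is not its implementation) of WAIC computed from pointwise log-likelihoods evaluated at posterior draws:

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (n_draws, n_obs) matrix of pointwise log-likelihoods
    evaluated at posterior draws (Watanabe-Akaike information criterion)."""
    # lppd: log pointwise predictive density
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # p_waic: effective number of parameters (pointwise log-lik variance)
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)

# Toy input: 2000 posterior draws, 50 observations
rng = np.random.default_rng(1)
log_lik = rng.normal(loc=-1.0, scale=0.1, size=(2000, 50))
print(waic(log_lik))
```

In practice the lppd term is computed with a log-sum-exp trick to avoid numerical underflow; the version above favors readability.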

Dr. Ed Merkle’s research involves a mix of psychometric modeling and the experimentation/modeling that arises from cognitive science and mathematical psychology. His specific research includes Bayesian latent variable models, forecasting and subjective probability, psychometric measurement, and statistical modeling.

 

September 20, 2021
Speaker: Dr. Brittany Shoots-Reinhard
Department of Psychology, The Ohio State University
Title: The Improving Numeracy Project: Efficacy-building activities in a college statistics course
Abstract: Introductory statistics students in Psych 2220 completed a number of activities intended to improve numeric self-efficacy, numeracy, and other outcomes. Numeric self-efficacy and numeracy were associated with better course and life outcomes. Students completing more (vs. fewer) activities had more positive outcomes, but our manipulations did not produce the hypothesized differences. This suggests a potential limit to the effectiveness of educational interventions designed to increase self-efficacy.

 

September 27, 2021
Speaker: Dr. Michael Schell
Program of Biostatistics and Bioinformatics, Moffitt Cancer Center
Title: The 10 Best Kept Secrets in Statistics
Abstract: Statistical practice is obviously very important, but it sadly lags far behind theoretical advancements. A top-10 list of widely applicable statistical techniques is provided. While some, such as the Holm adjustment for multiple comparisons, are already highly cited, they deserve to be far more widely known and used; the Holm adjustment, for instance, uniformly outperforms the more universally known Bonferroni adjustment. Arguments are presented for the other nine improvements as well, in the hope that the statistical practice patterns of the attendees will be altered where these tools have heretofore remained a secret.
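
For attendees unfamiliar with the Holm adjustment, the procedure is short enough to sketch directly (the toy p-values below are made up; production analyses should use an established multiple-testing routine). Holm steps through the ordered p-values with successively less strict thresholds, so it rejects everything Bonferroni rejects, and sometimes more:

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm step-down procedure: test the k-th smallest p-value against
    alpha / (m - k), stopping at the first non-rejection."""
    p = np.asarray(pvals)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for k, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - k):
            reject[idx] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject

pvals = [0.012, 0.02, 0.002, 0.20]
print(holm(pvals))                              # rejects the first three
print(np.asarray(pvals) <= 0.05 / len(pvals))   # Bonferroni rejects only two
```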

 

October 4, 2021
Speaker: Dr. Hudson Golino
Department of Psychology, University of Virginia
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: On networks and online Russian trolls: How can the total entropy fit index be applied to optimize the number of embedded dimensions used in dynamic exploratory graph analysis, and why does it matter?
Abstract: The current presentation will show how a new fit index for dimensionality analysis, termed the total entropy fit index (TEFI), can be applied to tune the number of embedded dimensions used in the dynamic exploratory graph analysis (DynEGA) technique. DynEGA uses dynamical systems and network psychometrics to estimate the number of (dynamic) latent factors in multivariate time series of continuous or categorical data. For each time series, generalized local linear approximation (GLLA) is used to compute nth-order derivatives for each individual. The stacked matrix of derivatives (combined row-wise) is then used to estimate a network structure in which communities represent dynamical factors. GLLA requires the user to set the number of embedded dimensions used to transform each time series into a time-delay embedding matrix. In a Monte Carlo simulation, we show that TEFI can be used in a grid search to find the optimal number of embedded dimensions. In an applied example, we performed DynEGA with TEFI optimization on a large dataset of Twitter posts from state-sponsored right- and left-wing trolls during the 2016 U.S. presidential election. DynEGA revealed factors (in this case, latent topics) that were pertinent to several consequential events in the election cycle, demonstrating the coordinated effort of trolls capitalizing on current events in the U.S. This example demonstrates the potential power of our approach for revealing temporally relevant information from qualitative text data.
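
The embedding step described above is simple to illustrate. The sketch below builds a time-delay embedding matrix for a univariate series and applies GLLA-style polynomial weights to estimate derivatives; it is a toy illustration under simplifying assumptions (lag 1, evenly spaced observations), not the authors' DynEGA code:

```python
import math
import numpy as np

def time_delay_embedding(x, n_embed, tau=1):
    """Stack lagged copies of x into rows
    (x_t, x_{t+tau}, ..., x_{t+(n_embed-1)*tau})."""
    x = np.asarray(x)
    n_rows = len(x) - (n_embed - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n_rows]
                            for i in range(n_embed)])

def glla_weights(n_embed, order=2, delta_t=1.0):
    """GLLA-style weights: project the embedded rows onto a polynomial
    basis in the time offsets to estimate derivatives up to `order`."""
    v = (np.arange(n_embed) - (n_embed - 1) / 2) * delta_t
    W = np.column_stack([v**j / math.factorial(j) for j in range(order + 1)])
    return W @ np.linalg.inv(W.T @ W)

x = np.sin(np.linspace(0, 4 * np.pi, 200))  # toy time series
E = time_delay_embedding(x, n_embed=5)
derivs = E @ glla_weights(5)                # columns: x, x', x''
print(E.shape, derivs.shape)                # (196, 5) (196, 3)
```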

 

October 11, 2021
Speakers: Dr. Jolynn Pek, Dr. Duane T. Wegener, & Kathryn Hoisington-Shaw
*joint event with University of North Carolina at Chapel Hill
Title: A discussion on the uses of power
Abstract: Statistical power continues to be much researched and applied owing to concerns about the credibility and replicability of psychological findings. We consider power calculations in the abstract (i.e., without data) as well as calculations that incorporate sampling variability when effect sizes are estimated from collected data. Power calculated from estimated effect sizes has been used to (a) design future research, (b) characterize the power of designs used in a literature that served as input to the power calculation, or (c) evaluate whether the obtained results can be trusted. We end with an open discussion on these uses of power.
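
As a concrete anchor for the discussion, a power calculation "in the abstract" treats the effect size as a fixed, known input. The sketch below computes approximate power for a two-sided, two-sample t test using a normal approximation (an assumption made for brevity; exact calculations use the noncentral t distribution):

```python
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t test with
    standardized effect size d, via the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # approximate noncentrality under H1
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

print(round(power_two_sample(0.5, 64), 3))  # ~0.80 for d = 0.5
```

Because power is a steep function of d, feeding an estimated effect size into such a calculation without accounting for its sampling variability can be misleading, which is one thread of the discussion.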

 

October 18, 2021
Speaker: Dr. Alex Wasserman
Department of Psychology, The Ohio State University
Title: Quantitative Approaches to Testing the Dual Systems Model of Adolescent Risk-Taking
Abstract: Developmental theories often highlight complex intra-individual changes as the impetus for age-graded differences in risk-taking behavior. The dual systems model, for example, proposes that adolescents engage in higher rates of risk-taking behaviors compared to adults in part due to a maturational imbalance between impulse control (e.g., capacity to inhibit prepotent responses) and sensation seeking (e.g., propensity for engaging in novel and thrilling experiences). In my presentation, I consider the different approaches that the field has used to test the dual systems model as well as the challenges therein. I will also present recent attempts to quantify the “maturational imbalance” (e.g., difference scores) and have a discussion on how to bridge the gap between applied research and quantitative methodology.

 

October 25, 2021
Speaker: Dr. Kathleen Keeler
Fisher College of Business, The Ohio State University
Title: Lost in translation? A review and empirical examination of different approaches to translation and the factors that influence translation quality
Abstract: It is becoming increasingly common for Industrial/Organizational psychologists and management scholars to utilize non-English-speaking samples. Most researchers translate existing measures rather than develop new measures in the target language. This means that researchers need to ensure that the meaning of the construct is captured in the translation. Translation quality is affected by many factors, and using a particular approach does not ensure measurement equivalency. We review leading general management and applied and cross-cultural psychology journals to examine how authors describe their translation process and what steps, if any, they take to ensure equivalency between original and translated versions of their measures. We then present an empirical examination of three approaches to translation (i.e., forward translation, back-translation, and the committee approach) using a sample of 4,000 Chinese participants and 250 US participants. We also empirically evaluate how variations in translator expertise influence translation quality. Specifically, we explore the influence of four different levels of translator expertise (bilingual, bilingual with content knowledge, professional translator, and professional translator with content knowledge) on translation quality.

 

November 1, 2021
Speaker: Dr. Felix Thoemmes
Department of Human Development, College of Human Ecology
Department of Psychology, College of Arts and Sciences
Cornell University
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Estimating bias and sensitivity of front-door models
Abstract: The front-door criterion is a heavily underused analytic method for estimating causal effects in the presence of unobserved confounding. One potential reason this approach has been largely ignored is the strong set of assumptions that must be invoked. The front-door estimator yields unbiased total effects between a putative cause and an outcome of interest by decomposing the total effect into unbiased component effects. However, this guarantee of unbiasedness rests strictly on a set of assumptions that can be violated in practice. To illuminate these assumptions and the severity of bias due to their violation, we derive exact bias formulas for each possible violation. We further compare the performance of the front-door estimate under violations with the performance of a naive estimator (one that simply regresses the outcome on the putative cause). We show that violations of the assumptions can lead not only to simple confounding bias but also to collider bias and bias amplification. We derive all biases analytically, but also supplement our analysis with an extensive simulation in which we compare biases over a very wide range of parameter values. Using the results from our analysis and simulation, we dissect and explain the nature of bias in the front-door estimate. Finally, we present a simple method for conducting sensitivity analyses using phantom variables in structural equation models.
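
The linear special case of the front-door estimator is easy to demonstrate by simulation. In the sketch below (a toy data-generating process constructed for illustration, not taken from the talk), the naive regression of the outcome on the cause is biased by an unobserved confounder, while the front-door estimate, the product of the cause-to-mediator coefficient and the mediator-to-outcome coefficient adjusting for the cause, recovers the true total effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(size=n)               # unobserved confounder
x = 0.8 * u + rng.normal(size=n)     # putative cause, confounded by u
m = 0.6 * x + rng.normal(size=n)     # mediator: x affects y only through m
y = 0.7 * m + 0.8 * u + rng.normal(size=n)

def ols(regressors, outcome):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(n)] + regressors)
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

naive = ols([x], y)[1]       # biased by the confounder u
a_hat = ols([x], m)[1]       # x -> m (unconfounded)
b_hat = ols([m, x], y)[1]    # m -> y, adjusting for x
print(naive, a_hat * b_hat)  # ~0.81 vs ~0.42; true effect = 0.6 * 0.7
```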

 

November 8, 2021
Speaker: Selena Wang
Department of Psychology, The Ohio State University
Title: Resilience to stress in bipartite networks: Application to the Islamic State recruitment network
Abstract: Networks are often resilient to internal failures and external attacks. This resiliency is frequently beneficial, but there are scenarios in which the collapse of a social system, network, or organization would benefit society, such as the dismantlement of terrorist, rebel, or organized crime groups. In this work, we develop a methodology to estimate the effect of node knockouts and apply our method to the Islamic State recruitment network. Using our novel methodology, we demonstrate how coordinated attacks against recruiters might reduce the Islamic State's ability to mobilize new fighters. This analysis has direct implications for studies of network resilience and terrorist recruitment.
Discussant: Kathryn Hoisington-Shaw
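
A bare-bones version of a knockout analysis can be illustrated on a toy bipartite graph (the recruiter and recruit labels below are hypothetical; the talk's methodology is a statistical estimate of knockout effects, not this simple deletion exercise):

```python
import networkx as nx

# Toy recruiter-recruit network (hypothetical labels)
G = nx.Graph([("r1", "a"), ("r1", "b"), ("r1", "c"),
              ("r2", "c"), ("r2", "d"), ("r3", "e")])
recruiters = ["r1", "r2", "r3"]

def largest_component_after(G, targets):
    """Remove the target nodes, then return the size of the largest
    remaining connected component."""
    H = G.copy()
    H.remove_nodes_from(targets)
    return max((len(c) for c in nx.connected_components(H)), default=0)

# Targeted knockouts: remove the highest-degree recruiters first
by_degree = sorted(recruiters, key=G.degree, reverse=True)
for k in range(len(by_degree) + 1):
    print(k, largest_component_after(G, by_degree[:k]))
```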

 

November 15, 2021
Speaker: Inhan Kang
Department of Psychology, The Ohio State University
Title: Modeling Conditional Dependence of Response Accuracy and Response Time with the Diffusion Item Response Theory Model
Abstract: In this talk, we propose a model-based method for studying conditional dependence between response accuracy and response time (RT) with the diffusion IRT model. We extend the earlier diffusion IRT model by introducing variability across persons and items in cognitive capacity (the drift rate of the evidence accumulation process) and variability in the starting point of the decision process. We show that the extended model can explain the behavioral patterns of conditional dependency found in previous psychometric studies. Variability in cognitive capacity can predict positive and negative conditional dependency and their interaction with item difficulty. Variability in the starting point can account for early changes in response accuracy as a function of RT, given the person and item effects. By combining the two variability components, the extended model can produce the curvilinear conditional accuracy functions that have been observed in psychometric data. We also provide a simulation study to validate parameter recovery of the proposed model and present two empirical applications showing how to implement the model to study the conditional dependency underlying response accuracy and RT data.
Discussant: Yiyang Chen
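
The core mechanism, across-trial variability in drift rate inducing dependence between accuracy and RT, can be illustrated with a small simulation of a two-boundary diffusion process (a generic Euler scheme written for this illustration, not the authors' estimation code):

```python
import numpy as np

def diffusion_trial(drift, boundary, start, ndt=0.3, dt=0.001, rng=None):
    """Simulate one two-boundary Wiener diffusion trial; return
    (accuracy, response time), with ndt a non-decision time."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = start, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + np.sqrt(dt) * rng.normal()
        t += dt
    return int(x >= boundary), t + ndt

rng = np.random.default_rng(7)
# Drift varies across trials: slow responses tend to come from
# low-drift trials, so accuracy and RT become negatively related.
trials = [diffusion_trial(rng.normal(1.5, 1.0), 2.0, 1.0, rng=rng)
          for _ in range(500)]
acc, rt = map(np.array, zip(*trials))
print(acc.mean(), np.corrcoef(acc, rt)[0, 1])
```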

 

November 22, 2021
Speaker: Yiyang Chen
Title: Modeling the continuous performance task for sustained attention
Abstract: The continuous performance task (CPT) is widely used to assess deficits in sustained attention among people with psychological disorders. In particular, individuals with first-episode psychosis perform more poorly on the CPT than individuals without first-episode psychosis, reflecting a potential attentional deficit associated with psychosis, but it is not clear exactly which factors contribute to these between-group differences. We build a theory-based hierarchical Bayesian model for the CPT, which allows us to identify potential mechanisms underlying performance deficits on the CPT by interpreting changes in the model's estimated parameters. We apply this model to a data set consisting of individuals with and without first-episode psychosis, tested on the CPT from the MATRICS cognitive battery. Modeling results reveal that individuals with first-episode psychosis might have a reduced ability to attend to task-relevant information compared with people without first-episode psychosis, which likely contributes to their lower response accuracy on the CPT.
Discussant: Inhan Kang

 

November 29, 2021
Speaker: Dr. Minjeong Jeon
Department of Education, University of California, Los Angeles
*joint event with University of Maryland, College Park; University of North Carolina at Chapel Hill; University of Notre Dame; Vanderbilt University; University of South Carolina; and University of Virginia
Title: Mapping item-response interactions: A latent space approach to item response data with interaction maps
Abstract: In this talk, I introduce a novel latent space modeling approach to psychological assessment data. In this approach, respondents' binary responses to test items are viewed as a bipartite network between respondents and items, where a tie is made when a respondent gives a correct (or positive) answer to an item. The resulting latent space model provides a window into respondents' performance on the assessment, placing respondents and test items in a shared metric space referred to as an interaction map. The interaction map approach can help assess students' strengths and weaknesses from cognitive assessment data and identify patients' symptom profiles from clinical assessment data. I will illustrate the utility of the proposed approach, focusing on how the interaction map can help derive insightful diagnostic information on items and respondents.
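
In generic form (the exact specification varies across latent space models, so the version below should be read as an illustrative assumption rather than the presenter's model), the interaction map enters the response model through a distance penalty on the logit:

```python
import numpy as np

def response_prob(theta, b, z, w, gamma=1.0):
    """P(y_ij = 1) under a distance-penalized IRT model:
    logit P = theta_i - b_j - gamma * ||z_i - w_j||, where z_i and w_j
    are respondent and item positions in the interaction map."""
    dist = np.linalg.norm(z[:, None, :] - w[None, :, :], axis=-1)
    logit = theta[:, None] - b[None, :] - gamma * dist
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(3)
theta = rng.normal(size=4)    # respondent abilities
b = rng.normal(size=5)        # item difficulties
z = rng.normal(size=(4, 2))   # respondent positions in a 2-D map
w = rng.normal(size=(5, 2))   # item positions in the same map
print(response_prob(theta, b, z, w).round(2))
```

Items close to a respondent in the map incur a smaller distance penalty, so the map encodes respondent-item interactions beyond what overall ability and difficulty capture.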

 

December 6, 2021
Brief presentations on external talks by the following students:
Ivory Li
Selena Wang
Jacob Coutts
Yiyang Chen
Kathryn Hoisington-Shaw

 
