Colloquium Spring 2021


The Quantitative Psychology colloquium series meets weekly in the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Often, guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics), both within and external to The Ohio State University, present on contemporary quantitative methodologies. Colloquium meetings also often include discussions of quantitative issues in research and of recently published articles.

Faculty coordinator: Dr. Jolynn Pek
Venue: Online
Time: 12:30-1:30pm
Please contact Dr. Jolynn Pek if you would like information on how to attend these events over Zoom.

 

January 11, 2021
Organizational Meeting

 

January 18, 2021
Martin Luther King Day

 

January 25, 2021
Speaker: Dr. Wolfgang Wiedermann
College of Education, University of Missouri
*Joint event with the University of Maryland, College Park, Measurement, Statistics and Evaluation Program; the University of North Carolina at Chapel Hill; and the University of Notre Dame.
Title: Direction Dependence Analysis: A Statistical Framework to Test the Causal Direction of Effects in Observational Data
Abstract: Direction dependence analysis (DDA; www.ddaproject.com) is a recently proposed statistical framework that addresses the need for more sophisticated tools to evaluate causal mechanisms. In observational data settings, at least three possible explanations exist for the association of two variables x and y: 1) x is the cause of y (Model I), 2) y is the cause of x (Model II), or 3) an unmeasured confounder is present (Model III). DDA makes use of non-normality of variables to detect potential confounding and to probe the causal direction of linear variable relations. The “true” predictor is assumed to be a continuous non-normal exogenous variable. DDA involves the evaluation of three properties of the data: 1) observed distributions of the variables, 2) residual distributions of competing models, and 3) independence properties of predictors and residuals of competing models. Under non-normality, DDA components can be used to uniquely identify each explanatory model (Models I – III). Statistical inference methods for model selection are presented and implementations of DDA in SPSS and R are provided. The application of DDA is illustrated in the context of identifying mediators of a classroom behavior management training program on student academic competence (Wiedermann et al., 2020). The study involved a group randomized controlled trial with 105 teachers and 1818 students (K-3rd grade) in a large urban school district. DDA suggests that only student prosocial skill development causally mediated the intervention effects on student academic competence. Limitations and potential future directions of direction dependence modeling are discussed.
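A rough R illustration of one ingredient of DDA, the residual-distribution comparison (a minimal sketch with simulated data, not the full framework or its SPSS/R implementations at www.ddaproject.com): when the true predictor is non-normal, the residuals of the correctly specified model should look closer to normal than those of the reversed model.

```r
# Minimal DDA-flavored sketch: compare residual skewness of the two
# competing regressions when x (non-normal) truly causes y (Model I).
set.seed(123)
n <- 500
x <- rchisq(n, df = 3)          # non-normal "true" predictor
y <- 0.5 * x + rnorm(n)         # Model I (x -> y) is the true model

skew <- function(e) mean((e - mean(e))^3) / sd(e)^3

r_I  <- resid(lm(y ~ x))        # residuals of Model I (x -> y)
r_II <- resid(lm(x ~ y))        # residuals of Model II (y -> x)

# The causally correct direction should leave residuals closer to normal
c(skew_model_I = skew(r_I), skew_model_II = skew(r_II))
```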

 

February 1, 2021
Speaker: Ivory Li
Department of Psychology, The Ohio State University
Title: Filling Gaps in the Research on IRTree Theoretically and Practically: A Comprehensive Taxonomy and a User-Friendly R Package
Abstract: Since being proposed about a decade ago (De Boeck & Partchev, 2012; Böckenholt, 2012), item response tree (IRTree) models have drawn increasing interest in psychometrics. Despite their great potential for investigating the response processes underlying observed categorical responses, applications of IRTree models remain limited for several reasons. The first is the lack of a comprehensive taxonomy of IRTree models, which keeps researchers from recognizing their many plausible applications. The second is the difficulty of data preparation and model formulation for practitioners who are not experts in psychometrics. In addition, IRTree models have been studied mainly within the IRT framework, although they can be extended to the factor analysis framework and even the more general structural equation modeling framework, opening up further applications. Accordingly, this project aims to establish a comprehensive taxonomy of IRTree models and to develop a user-friendly R package, which together are expected to alleviate these limitations. With the taxonomy and R package, we show some applications of IRTree models in both the IRT framework and the factor analysis framework.
(Note: this is a dissertation proposal, so the taxonomy and R package have not been fully built and the applications have not been studied yet).
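To make the data-preparation step concrete, here is a minimal R sketch of the pseudo-item recoding that underlies IRTree modeling, using an illustrative two-node tree (direction, then extremity) for a 4-point scale in the spirit of De Boeck and Partchev (2012); the proposed package is intended to automate exactly this kind of step.

```r
# Illustrative IRTree data expansion: recode a 4-point response into
# binary pseudo-items, one per node of the tree.
# Node 1: agree (1) vs disagree (0); Node 2: extreme (1) vs mild (0).
mapping <- rbind(             # rows = response categories 1..4
  c(node1 = 0, node2 = 1),    # 1 = strongly disagree
  c(node1 = 0, node2 = 0),    # 2 = disagree
  c(node1 = 1, node2 = 0),    # 3 = agree
  c(node1 = 1, node2 = 1)     # 4 = strongly agree
)

responses <- c(1, 4, 2, 3)    # toy responses to one item
pseudo <- mapping[responses, ]  # pseudo-item data fed to the tree model
pseudo
```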
Discussant: Diana Zhu

 

February 8, 2021
Speaker: Jacob Coutts
Department of Psychology, The Ohio State University
Title: It makes a difference: An exploration of methods for comparing indirect effects in multiple mediator models
Abstract: Establishing a cause–effect relationship between two variables is a fundamental goal of scientific research. It is valuable to know that one variable causally influences another. Just as important, however, is establishing how, or through what mechanism(s), this effect operates. Mediation analysis is a popular method used to answer such questions. A simple application of the mediation model looks at how one intervening variable (a “mediator”) can explain the relationship between two others. The quantification of an effect through a mediator is called an indirect effect. Real-world processes are complex, however, and effects are often transmitted by more than one mechanism. Consequently, it can be beneficial to simultaneously examine multiple mediators that could explain the connection between an antecedent variable and its consequent. As multiple mediator models continue to grow in popularity, it is theoretically and practically useful to explore whether one mechanism is “stronger” or “more important” in producing an effect than another. This can be done by comparing the relative sizes of the indirect effects. Although several methods for comparing indirect effects have been proposed in the methodological literature, little to no work has explored whether any one of them is preferable. A simulation study was designed to compare the merits of various approaches to comparing indirect effects. Additionally, the properties of three different bootstrap confidence intervals commonly used for inference were compared. The results suggest that calculating the difference of the absolute values of two indirect effects and using percentile bootstrap confidence intervals for inference could be actionable advice for substantive researchers exploring multiple pathways of causation.
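A minimal R sketch of the recommended procedure, using simulated data and hypothetical variable names: estimate the contrast |a1b1| − |a2b2| in a parallel two-mediator model and form a percentile bootstrap confidence interval for it.

```r
# Percentile bootstrap CI for the difference in absolute indirect effects
set.seed(2021)
n <- 300
x  <- rnorm(n)
m1 <- 0.5 * x + rnorm(n)
m2 <- 0.2 * x + rnorm(n)
y  <- 0.4 * m1 + 0.3 * m2 + 0.1 * x + rnorm(n)
dat <- data.frame(x, m1, m2, y)

contrast <- function(d) {
  a1 <- coef(lm(m1 ~ x, data = d))["x"]
  a2 <- coef(lm(m2 ~ x, data = d))["x"]
  b  <- coef(lm(y ~ m1 + m2 + x, data = d))
  abs(a1 * b["m1"]) - abs(a2 * b["m2"])   # |a1*b1| - |a2*b2|
}

boots <- replicate(2000, contrast(dat[sample(n, replace = TRUE), ]))
quantile(boots, c(0.025, 0.975))           # percentile bootstrap 95% CI
```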
Discussant: Kathryn Hoisington-Shaw

 

February 15, 2021
Speaker: Allison Devito
Title: Copyright Considerations for Research and Education
Abstract: Participants will be introduced to materials offered by University Libraries and learn best practices for using, sharing, and creating copyrighted content.

 

February 22, 2021
Speaker: Dr. David MacKinnon
Department of Psychology, Arizona State University
*Joint event with the University of Maryland, College Park, Measurement, Statistics and Evaluation Program; the University of North Carolina at Chapel Hill; and the University of Notre Dame.
Title: How do we know that our statistical methods should work? Benchmarks, Plasmodes, and Statistical Mediation Analysis
Abstract: This presentation describes a benchmark method for validating statistical methods by analyzing data on a known or established empirical effect. Benchmark validation has aspects that complement mathematical derivations and simulations. The method may be useful for evaluating the accuracy of causal conclusions drawn from a statistical method. I apply the method to statistical mediation analysis of the process by which imagery increases recall of words. I discuss strengths and limitations of the method.
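A minimal sketch of the benchmark logic in R (an illustrative simulation with a built-in known effect, not the talk's imagery-and-recall data): embed a known indirect effect and check that the product-of-coefficients method recovers it.

```r
# Benchmark idea in miniature: analyze data with a known effect and
# verify that the method's estimates land on the truth.
set.seed(7)
true_ab <- 0.6 * 0.5                       # known indirect effect a*b = 0.30
est <- replicate(1000, {
  x <- rbinom(200, 1, 0.5)                 # randomized instruction (0/1)
  m <- 0.6 * x + rnorm(200)                # mediator with known a = 0.6
  y <- 0.5 * m + rnorm(200)                # outcome with known b = 0.5
  a <- coef(lm(m ~ x))["x"]
  b <- coef(lm(y ~ m + x))["m"]
  a * b                                    # product-of-coefficients estimate
})
c(truth = true_ab, mean_estimate = mean(est))  # bias should be near zero
```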

 

March 1, 2021
Speaker: Dr. Michael Walker
Educational Testing Service
Title: Equity in college admissions: Is it possible?
Abstract: This talk explores whether it is ever possible to achieve equity in college admissions, given the subgroup disparities in educational opportunities in primary and secondary school. Starting with the observation that subgroups differ in average performance on college entrance examinations, we will review three broad categories of attempted solutions. The first perspective treats the test as a barrier, leading to several proposals to fix or circumvent it (e.g., the “Test Optional” movement). The second perspective uses tests to gain actionable information about students (e.g., AP Potential). Finally, we will discuss some selection strategies that have attempted to remedy subgroup disadvantages (e.g., affirmative action). Throughout the talk, we will critically evaluate what equity means and demands in this context.

 

Robert Wherry Speaker
March 5, 2021
Friday 3:00-4:00pm
Speaker: Dr. Geert Molenberghs
Department of Biostatistics, Hasselt University and KU Leuven in Belgium
Title: A Biostatistician’s Perspective on SARS-CoV-2: Data on Virus and People
Abstract: The COVID-19 pandemic, caused by the SARS-CoV-2 virus, is literally a rare event in the course of history: for a comparable, even worse pandemic we must go back to the Spanish Flu (H1N1) of 1918, although there was also a tuberculosis pandemic in the interbellum, the Russian flu of 1890 (possibly also a coronavirus rather than influenza), and the plague, which haunted the world for several centuries.
The time-honored non-pharmaceutical interventions (suppression, mitigation, and herd immunity) are sketched. We discuss the modeling of known quantities (e.g., confirmed cases, hospitalizations, deaths), the estimation of key quantities (e.g., the reproduction number, the increment function), and the prediction of unknown quantities (e.g., the true number of cases, the likelihood and timing of second and later peaks). We discuss the collection of crucial supporting information, such as population surveys gauging people's behavior, opinions, well-being, and professional activity, and the (spatial) analysis of such data.
I-BioStat has been involved in the response to the COVID-19 crisis, ranging from mathematical and statistical modeling, through day-to-day monitoring, to scientific and government committee work and policy making. We place the mathematical and statistical work against the background of its real-time use for policy making, public communication, and outreach.
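As a small illustration of one estimation task mentioned above, the following R sketch (toy data and an assumed generation interval, not I-BioStat's models) estimates the epidemic growth rate from daily case counts and converts it to a rough reproduction number via the exponential-growth approximation R ≈ exp(rT):

```r
# Estimate the per-day growth rate with a Poisson GLM, then apply the
# classic exponential-growth approximation to get a rough R.
cases <- c(120, 141, 169, 198, 240, 281, 335, 397, 462, 551)  # toy daily counts
day   <- seq_along(cases)

fit <- glm(cases ~ day, family = poisson(link = "log"))
r   <- coef(fit)["day"]          # per-day exponential growth rate

T_gen <- 5.2                     # assumed mean generation interval (days)
unname(exp(r * T_gen))           # rough reproduction number
```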

 

March 15, 2021
Speaker: Kathryn Hoisington-Shaw
Department of Psychology, The Ohio State University
Title: Alternative Approaches to Study Design Using a Flexible N
Abstract: Emphasis on statistical power analyses to justify sample size (N) has seen a resurgence, largely due to questions regarding the credibility of psychological findings. A practical implication of this emphasis is that researchers may face a required N so high that it is unrealistic to collect with the resources available (especially for difficult-to-reach populations). Alternatives to the classical approach (i.e., determining a fixed N from a power analysis) exist, but they remain largely fringe concepts in psychology. Sequential testing is one such alternative, in which N is a random variable (i.e., a flexible N): statistical information is monitored during data collection in the hope of terminating the study early, and thus with a smaller N. The present talk will focus on a taxonomy of sequential testing that divides methods by the type of statistical information the test monitors: spending-function-based or likelihood-ratio-based methods. Differences between these groups will be highlighted, with a special focus on likelihood-ratio-based methods. Specifically, a method called the sequential probability ratio test will be discussed in detail, including a brief tutorial on how it may be used.
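As a preview of the method's flavor, here is a minimal R sketch of Wald's sequential probability ratio test for a normal mean with known standard deviation (illustrative hypotheses and error rates):

```r
# Wald SPRT for H0: mu = 0 vs H1: mu = 0.5, known sd = 1
alpha <- 0.05; beta <- 0.20
upper <- log((1 - beta) / alpha)    # cross above -> decide for H1
lower <- log(beta / (1 - alpha))    # cross below -> decide for H0

set.seed(1)
mu0 <- 0; mu1 <- 0.5; s <- 1
logLR <- 0; n <- 0
repeat {
  n <- n + 1
  x <- rnorm(1, mean = 0.3, sd = s)           # one new observation per step
  logLR <- logLR + dnorm(x, mu1, s, log = TRUE) - dnorm(x, mu0, s, log = TRUE)
  if (logLR >= upper || logLR <= lower) break  # stop at the first boundary hit
}
c(n = n, decision = if (logLR >= upper) "decide H1" else "decide H0")
```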
Discussant: Ivory Li

 

March 22, 2021
Speaker: Dr. Li Cai
Department of Education, University of California, Los Angeles
*Joint event with the University of Maryland, College Park, Measurement, Statistics and Evaluation Program; the University of North Carolina at Chapel Hill; and the University of Notre Dame.
Title: Reflections on the fading standardized testing requirement in college admissions
Abstract: In this talk, I will draw on some of my personal experiences serving on the University of California's Standardized Testing Task Force (STTF) from 2018 to 2020 and ponder the future role of educational assessments in college admissions. The format will be conversational.

It would be helpful to review the executive summary of the STTF report and the additional statement here.

 

March 29, 2021
Speaker: Dr. Jason Rights
Department of Psychology, University of British Columbia
Title: Quantifying explained variance in cross-sectional and longitudinal multilevel models with any number of levels
Abstract: Multilevel models (MLMs) are commonly used by psychologists to analyze nested data (e.g., students nested within schools or repeated observations nested within persons). To aid researchers in quantifying explained variance for these models, Rights & Sterba (2019, 2020) recently developed an integrative framework for R-squared computation that subsumed existing measures, clarified equivalencies among existing measures, and filled gaps by supplying new measures to answer key substantive questions. This original framework, however, did not readily accommodate modeling choices common to longitudinal contexts, nor did it generalize to hierarchical data structures beyond two levels. In this talk, I first describe this recent framework of Rights & Sterba, and proceed to delineate generalizations of this framework to accommodate longitudinal models and/or models with three or more levels. I also discuss a recently developed R package (r2mlm) to further aid researchers in implementing this framework and computing R-squared measures in practice.
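A minimal usage sketch (hypothetical data and variable names; assuming the r2mlm package's r2mlm() function, which takes a fitted lme4 or nlme model as input, as its documentation describes):

```r
# Compute the framework's R-squared measures for a two-level model
library(lme4)
library(r2mlm)

# Students nested within schools; random intercept and SES slope
# (school_data, math, ses, and school are hypothetical names)
fit <- lmer(math ~ ses + (ses | school), data = school_data)

r2mlm(fit)  # decomposition of explained variance into the framework's measures
```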

 

April 5, 2021
Title: Panel Discussion: History and Future of Quantitative Psychology
Abstract: Quantitative psychologists study and develop methods and techniques for the measurement of human behavior and other attributes. Their work involves the statistical and mathematical modeling of psychological processes, the design of research studies, and the analysis of psychological data. The research questions that concern the field have changed over time, and this panel discussion will focus on which questions have unified the field historically and which research questions should be answered in the future.

Panel Speakers
Dr. Gary McClelland
Professor Emeritus, Department of Psychology and Neuroscience, University of Colorado Boulder
Dr. Joe Rodgers
Professor Emeritus, Department of Psychological Sciences, Vanderbilt University
Dr. David Thissen
Professor Emeritus, Department of Psychology & Neuroscience, University of North Carolina at Chapel Hill

 

April 12, 2021
Speaker: Selena Wang
Department of Psychology, The Ohio State University
Title: What can we learn from social media data? An example using a joint modeling framework for social networks and high-dimensional attributes
Abstract: In the era of modern “big data,” as data collection abilities have increased, datasets comprising multimodal networks and multivariate measurements have become available in many scientific domains, including the social sciences, psychology, education, economics, neuroscience, and genomics. There is a pressing need to understand and visualize the information in such big data. In this presentation, we will explore the structures of user networks and behaviors in multimodal social media systems such as Instagram and YouTube using a joint model for social networks and attributes. We construct interactive joint latent spaces containing both user nodes and behavior nodes. We explore the impact of social relationships on user behaviors by comparing joint latent spaces built from the behavior data alone versus those that also include the social networks. We investigate the influence of networking on career trajectory, political views, and social status in a small circle of French financial elites. An R package, jlsm, was developed to fit the models proposed in this presentation and is publicly available from the CRAN repository: https://cran.r-project.org/web/packages/jlsm/jlsm.pdf.
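For a feel for joint latent spaces, here is a crude stand-in in R (a plain SVD embedding of column-bound network and attribute matrices with toy data; this is not the jlsm model, whose interface is documented at the CRAN link above):

```r
# Joint 2-D embedding of users from both network ties and behaviors
set.seed(42)
n <- 50; p <- 10
A <- matrix(rbinom(n * n, 1, 0.1), n, n)      # toy follower network
A <- 1 * ((A + t(A)) > 0); diag(A) <- 0       # symmetrize; no self-ties
X <- matrix(rbinom(n * p, 1, 0.3), n, p)      # toy binary behaviors

Z <- svd(cbind(A, scale(X, scale = FALSE)))$u[, 1:2]  # joint latent positions
plot(Z, xlab = "Dimension 1", ylab = "Dimension 2",
     main = "Users embedded by ties and behaviors jointly")
```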
Discussant: Inhan Kang

 

April 19, 2021
Speaker: Dr. Riet van Bork
Department of Psychological Methods, University of Amsterdam
*Joint event with the University of Maryland, College Park, Measurement, Statistics and Evaluation Program; the University of North Carolina at Chapel Hill; and the University of Notre Dame.
Title: A causal interpretation of psychometric models
Abstract: Psychometrics heavily relies on the use of statistical models to measure psychological attributes such as cognitive abilities, attitudes, personality traits, and mental disorders. Latent variable theory and psychological network theory diverge in how observable behaviors are related to each other and to the attribute that is being measured. While the theories are different, the statistical models that are used in these two frameworks are statistically similar and in some cases even equivalent. This similarity raises the question of whether these models should be seen as competing explanations for the data or as merely different statistical representations of the same joint distribution of the data. I argue for a causal interpretation of these models in which the models represent different explanations. A consequence of this interpretation of psychometric models is that it becomes important to compare alternative explanations. To further the comparison of network models and latent variable models, I investigate different forms of model simplicity that not only consider their differences as statistical models, but also account for their differences as causal models.

 
