Colloquium Spring 2020

The Quantitative Psychology colloquium series meets weekly in the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics), both within and outside The Ohio State University, often present on contemporary quantitative methodologies. Colloquium meetings also frequently include discussions of quantitative issues in research and of recently published articles.

Faculty coordinator: Dr. Jolynn Pek
Venue: 35 Psychology Building
Time: 12:30-1:30pm

 

January 6, 2020
Organizational Meeting
*pizza and drinks will be served!

 

January 13, 2020

Title: Ethics Seminar: Committee on Academic Misconduct (COAM)
Abstract: The Committee on Academic Misconduct (COAM) is charged with maintaining the academic integrity of The Ohio State University. The committee establishes and oversees procedures for investigating reported cases of alleged academic misconduct by students. This talk will cover topics such as what constitutes academic misconduct, how to report an incident, and COAM’s procedures and rules surrounding a report of alleged academic misconduct.
Discussant: Dr. Trish Van Zandt

 

January 20, 2020
Martin Luther King Jr. Day

 

January 27, 2020

Speaker: Dr. Duane Wegener
Department of Psychology, The Ohio State University

Title: Classic and Contemporary Assessments of the Implications of (Lack of) Statistical Power
Abstract: In substantive research, the traditional approach was for statistical power to be considered when designing a study (especially for inclusion in a grant application) but not when evaluating it after the fact. That approach changed when methodologists began claiming that low power increases the likelihood of Type I errors in a published literature, a claim that has driven new journal submission and evaluation guidelines aimed at increasing rates of successful replication. This talk will examine the rationale linking power levels to Type I error rates and how that rationale relates to the types of empirical cases typical of the research literature.
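
One common version of that rationale can be made concrete with a little arithmetic. The sketch below (an illustration of the standard argument, not material from the talk; all numbers are hypothetical) computes the share of statistically significant findings in a literature that are false positives, as a function of study power and the base rate of true effects:

```python
# Sketch: how power affects the share of *significant* results that are
# Type I errors. Hypothetical numbers, for illustration only.

def false_discovery_rate(power, prior_true, alpha=0.05):
    """Proportion of significant results that are false positives, given
    the power of each study and the base rate of true effects tested."""
    true_positives = prior_true * power          # real effects detected
    false_positives = (1 - prior_true) * alpha   # nulls rejected by chance
    return false_positives / (true_positives + false_positives)

# With half of tested effects real, lowering power from .8 to .2 more
# than triples the share of false positives among significant results:
for power in (0.8, 0.5, 0.2):
    fdr = false_discovery_rate(power, prior_true=0.5)
    print(f"power = {power:.1f}: {fdr:.1%} of significant results are false positives")
```

Note that the per-test Type I error rate stays fixed at alpha here; what low power changes is the composition of the significant literature, which is precisely the linkage the talk interrogates.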

 

February 3, 2020
Speaker: Dr. Rebecca Andridge
Division of Biostatistics, College of Public Health, The Ohio State University

Title: Measures of Selection Bias for Proportions Estimated from Non-Probability Samples
Abstract: The proportion of individuals in a finite target population that has some characteristic of interest is arguably the most commonly estimated descriptive parameter in survey research. Unfortunately, the modern survey research environment has made it quite difficult to design and maintain probability samples: the costs of survey data collection are rising, and high rates of nonresponse threaten the basic statistical assumptions about probability sampling that enable design-based inferential approaches. As a result, researchers are more often turning to non-probability samples to make descriptive statements about populations. Non-probability samples do not afford researchers the protection against selection bias that comes from the ignorable sample selection mechanism introduced by probability sampling, and descriptive estimates based on non-probability samples may be severely biased as a result. In this seminar I describe a simple model-based index of the potential selection bias in estimates of population proportions due to non-ignorable selection mechanisms. The index depends on an inestimable parameter that captures the amount of deviation from selection at random; this parameter ranges from 0 to 1 and naturally lends itself to a sensitivity analysis. I describe both maximum likelihood and Bayesian approaches to estimating the index, and illustrate its use via simulation and via application to real data.
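
As a generic illustration of how such a sensitivity analysis works (this is not Dr. Andridge's index or estimator, and all numbers are hypothetical), one can sweep the inestimable parameter, here called phi, from 0 (selection at random given observed covariates) to 1 (selection driven entirely by the outcome) and report the implied range of estimates:

```python
import numpy as np

# Hypothetical sensitivity analysis for a proportion from a
# non-probability sample. The bias model below is a simple linear
# interpolation chosen for illustration, not the index from the talk.

p_sample = 0.62         # unadjusted proportion in the non-probability sample
p_ignorable = 0.55      # hypothetical estimate if selection is ignorable
p_nonignorable = 0.40   # hypothetical estimate if selection depends on the outcome

for phi in np.linspace(0, 1, 6):
    p_adj = (1 - phi) * p_ignorable + phi * p_nonignorable
    print(f"phi = {phi:.1f}: adjusted proportion = {p_adj:.3f}, "
          f"implied selection bias = {p_sample - p_adj:+.3f}")
```

Reporting the whole range of adjusted estimates over phi, rather than a single number, is what makes the unidentified parameter usable in practice.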

 

February 10, 2020

Speaker: Ivory Li
Department of Psychology, The Ohio State University

Title: Modeling Conditional Dependence between Response and Response Time in the Testlet Model
Abstract: Research on response accuracy (RA) and response time (RT) has contributed to our understanding of the response process in educational tests and measurements. One topic in this literature is RA-RT conditional dependence: dependence that remains between RA and RT after controlling for the latent variables of ability and speed. In testlet-based assessments, RTs are often registered per testlet rather than per item, either because of technological barriers or because RT dependence within testlets makes item-level RTs unreliable, so that the summed RT per testlet is used instead. For such testlet data, two models are proposed to address RA-RT conditional dependence at the testlet level (between the testlet effect and the RT per testlet) and at the item level (between each item's RA and the RT per testlet), respectively. The former model is expected to perform better when the RA-RT conditional dependencies within a testlet arise from a common source, and the latter when they arise from different sources. The models were evaluated in a simulation study and applied to a real data set. The results showed that the models account well for RA-RT conditional dependence in testlet models.
Discussant: Diana Zhu
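
A small generative sketch may help fix ideas. Assuming a Rasch-type accuracy model and lognormal testlet RTs (my simplification, not the two models proposed in the talk), testlet-level RA-RT conditional dependence can be induced by letting the same testlet effect enter both the accuracy and the RT equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_testlets, items_per_testlet = 500, 4, 5

theta = rng.normal(size=n_persons)                           # ability
tau = rng.normal(size=n_persons)                             # speed
gamma = rng.normal(scale=0.5, size=(n_persons, n_testlets))  # testlet effects
b = rng.normal(size=(n_testlets, items_per_testlet))         # item difficulties
delta = 0.3  # strength of testlet-level conditional dependence (hypothetical)

# Accuracy: Rasch model with a testlet effect on each item
logit = theta[:, None, None] + gamma[:, :, None] - b[None, :, :]
accuracy = rng.random(logit.shape) < 1 / (1 + np.exp(-logit))

# One RT per testlet: lognormal in speed, with the *same* testlet effect
# leaking into log-RT -- the source of the conditional dependence here
log_rt = 1.0 - tau[:, None] + delta * gamma \
         + rng.normal(scale=0.3, size=(n_persons, n_testlets))
rt = np.exp(log_rt)
```

With delta = 0 the dependence vanishes after conditioning on ability and speed; the two models in the talk differ in whether such dependence operates at the testlet level or separately for each item within a testlet.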

 

February 17, 2020

Speaker: Matthew Galdo
Department of Psychology, The Ohio State University

Title: Towards a Quantitative Framework for Detecting Transfer of Learning
Abstract: Transfer of learning (transfer) refers to how learning in one context influences performance in a different context. A well-developed theory of transfer is paramount to understanding educational outcomes. Yet a thorough understanding of transfer has been frustratingly elusive, with some researchers arguing that meaningful transfer rarely occurs or that attempts to detect transfer are futile. In spite of this pessimism, we explore a model-based account of transfer. Building on the laws of practice (learning curve models), we develop a scalable, quantitative framework to detect transfer (or lack thereof). We perform a parameter recovery analysis and find that the identifiability of transfer model parameters is contingent on the order in which practice is observed. We then use our modeling framework to explore a large-scale gameplay dataset from Lumosity. Preliminary results suggest our models provide a reasonable account of the data and that the added complexity of transfer is justified.
Discussants: Yiyang Chen & Inhan Kang
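
One concrete way to build transfer into a law of practice (a hypothetical parameterization, not necessarily the talk's models) is to let practice on a related task contribute discounted trials to a task's power-law learning curve:

```python
import numpy as np

def power_law_rt(n_own, n_other, a=0.4, b=1.2, rate=0.5, transfer=0.0):
    """Expected response time after n_own trials of this task and n_other
    trials of a related task. transfer in [0, 1] scales how much the other
    task's practice counts; transfer = 0 recovers the ordinary law of
    practice. Hypothetical parameterization for illustration."""
    effective_practice = n_own + transfer * n_other
    return a + b * (1 + effective_practice) ** (-rate)

trials = np.arange(50)
baseline = power_law_rt(trials, n_other=100, transfer=0.0)
transferred = power_law_rt(trials, n_other=100, transfer=0.5)
print(f"RT on first trial: {baseline[0]:.3f} without transfer vs "
      f"{transferred[0]:.3f} with transfer = 0.5")
```

The identifiability point in the abstract is visible even here: if the two tasks are always practiced in the same fixed order, n_own and n_other move in lockstep, so the transfer parameter trades off against the learning rate.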

 

February 24, 2020

Speaker: Dr. Augustine Wong
Department of Mathematics and Statistics, York University

Title: A Simple and Accurate Likelihood-Based Inference Method
Abstract: Statistical inference is a process of evaluating how far an estimate of a parameter obtained from an observed sample is from the true value of that parameter. In most cases, a confidence region for the parameter or a probability that measures the distance between the estimate and the true value is reported. Exact inferential methods are available for some specific problems, but not for many realistic problems of interest, so asymptotic methods are needed in these cases. In this presentation, the standard likelihood-based asymptotic methods (Wald, Rao, and Wilks) are reviewed. Since these methods require large sample sizes, a simple improvement to the Wilks method is proposed with the aid of the bootstrap. Simulation results show that the proposed procedure gives extremely accurate coverage even when the sample size is small.

Dr. Augustine Wong is a Professor of Mathematics and Statistics at York University. His research focuses on computational methods in statistics, foundation of inference, likelihood-based asymptotic inference, and statistical theory with applications to econometrics, finance, and survival data analysis.
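
To make the setting concrete, the following snippet sketches a parametric-bootstrap calibration of the Wilks (likelihood ratio) statistic for the mean of an exponential sample. This is one standard way to combine Wilks with the bootstrap, offered for illustration; it is not necessarily the specific procedure proposed in the talk.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)

def lr_stat(x, theta0):
    """Wilks statistic 2{l(theta_hat) - l(theta0)} for Exponential(mean theta)."""
    n, xbar = len(x), np.mean(x)
    loglik = lambda th: -n * np.log(th) - np.sum(x) / th
    return 2 * (loglik(xbar) - loglik(theta0))

x = rng.exponential(scale=2.0, size=10)  # small sample; true mean is 2
theta0 = 2.0
w_obs = lr_stat(x, theta0)

# First-order asymptotics: compare the statistic to chi-square(1)
p_asymptotic = chi2.sf(w_obs, df=1)

# Parametric bootstrap: simulate the statistic's actual null distribution
boot = np.array([lr_stat(rng.exponential(scale=theta0, size=len(x)), theta0)
                 for _ in range(5000)])
p_bootstrap = np.mean(boot >= w_obs)
print(f"asymptotic p = {p_asymptotic:.3f}, bootstrap p = {p_bootstrap:.3f}")
```

Inverting the bootstrap test over a grid of theta0 values yields a confidence region whose coverage is typically much closer to nominal in small samples than the chi-square calibration.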

 

March 2, 2020

Speaker: Dr. Gary H. McClelland
Department of Psychology and Neuroscience, University of Colorado Boulder

Title: Things I’ve Learned (So Far) Following Jacob Cohen’s Footsteps
Abstract: In 1990, Jacob Cohen published an interesting paper entitled “Things I’ve Learned (So Far)” in the American Psychologist. Cohen emphasized the importance of the general linear model, the value of statistical power analysis, the evils of median splits, and the need for better graphics. Although I never met Cohen, I’ve found myself often following in his footsteps. But he wasn’t always an infallible guide. We will examine some errors in his approach to power analysis that are sowing confusion today, see that median splits are even worse than Cohen imagined, and flesh out what his suggestions for better statistical graphs might have been. It will be a light-hearted and graphical trip in Cohen’s footsteps.

Dr. Gary McClelland is Professor Emeritus of Psychology at the University of Colorado Boulder. His primary research interests are (1) statistical methods and (2) judgment and decision making. He is a founding member of the Society for Judgment and Decision Making (SJDM), a founding fellow of the Association for Psychological Science, and a winner of the American Psychological Association’s Jacob Cohen Award for distinguished contributions to teaching and mentoring (with Charles Judd). He has more than 100 scholarly publications (including a graduate-level textbook now in its third edition) and more than 15,000 citations identified by Google Scholar. His methodological work is motivated by issues concerning power analysis in relation to mediation and moderation designs. He has also been successful in the promotion of practical statistical procedures and the development of user-friendly interactive demonstrations for teaching statistical concepts. His JDM work intersects with his methodological interests, and has often focused on aptly applying statistical methods to study judgment in the contexts of public policy (e.g., environmental valuation, attitudes toward taxation) and marketing (e.g., online shopping and the advantages of using less information).
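
The median-split point lends itself to a quick simulation. The sketch below (illustrative settings of my own, not material from the talk) compares the power of a correlation test using a continuous predictor with the same test after a median split:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n, r, reps = 60, 0.3, 2000  # hypothetical sample size, effect size, replications

hits_continuous = hits_split = 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = r * x + np.sqrt(1 - r**2) * rng.normal(size=n)
    hits_continuous += pearsonr(x, y)[1] < 0.05
    # Median split: dichotomize x before testing, discarding information
    hits_split += pearsonr((x > np.median(x)).astype(float), y)[1] < 0.05

print(f"power with continuous predictor: {hits_continuous / reps:.2f}")
print(f"power after median split:       {hits_split / reps:.2f}")
```

Dichotomizing a normal predictor discards roughly a third of the effective sample, which is the classic cost of dichotomization that the talk argues is, if anything, understated.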

 

March 9, 2020
Spring Break

 

The remaining events have been cancelled.