Welcome

240K Lazenby Hall
Department of Psychology
1827 Neil Avenue
Columbus, OH 43210
Phone: 614 292 4940
Email: pek.5@osu.edu

I am an Associate Professor in the Quantitative program within the Department of Psychology at Ohio State University, currently serving as the Director of the Graduate Program in Quantitative Psychology.

Research Interests

My research is motivated by promoting sound methodological practice in data generation, exploration, and analysis. I work on two broad themes related to the replicability of research findings. The first focuses on quantifying and potentially reducing uncertainty in statistical results. The second centers on bridging methodological developments and their use by substantive (i.e., non-methods) researchers.

Quantifying uncertainty. Many sources of uncertainty challenge strong scientific claims. My early research quantified the effect of influential cases on results and constructed well-performing confidence intervals for linear and nonlinear models. I also examined exchangeable weights (EWs): alternative sets of weights that describe the data practically as well as optimal weights (e.g., ordinary least squares weights). Highly variable EWs signal model descriptions that are highly sensitive to the data, and vice versa. I remain interested in how alternative weights perform across modeling frameworks and research goals (e.g., explanation vs. prediction).
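To make the idea concrete, here is a minimal sketch, not the formal EW machinery from this literature: it randomly perturbs ordinary least squares weights and keeps those whose R-squared falls within a small (hypothetical) tolerance of the optimum. The spread of the retained weights hints at how loosely or tightly the data pin down the model's description.

```python
# Minimal sketch (hypothetical tolerances; not the formal EW methodology):
# search for alternative regression weights that fit nearly as well as OLS.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)     # optimal (OLS) weights
sst = ((y - y.mean()) ** 2).sum()
r2_ols = 1 - ((y - X @ b_ols) ** 2).sum() / sst

# Randomly perturb the OLS weights; keep sets losing < .01 in R-squared.
cand = b_ols + rng.normal(scale=0.1, size=(20_000, p))
resid = y[None, :] - cand @ X.T
r2 = 1 - (resid ** 2).sum(axis=1) / sst
keep = cand[r2 > r2_ols - 0.01]

print(f"OLS R^2 = {r2_ols:.3f}; {len(keep)} near-optimal weight sets kept")
print("per-weight ranges among kept sets:")
print(np.c_[keep.min(axis=0), keep.max(axis=0)].round(2))
```

Even with a loss of only .01 in R-squared, each weight can range noticeably around its OLS value, which is the kind of sensitivity that variable EWs are meant to flag.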

My recent work examines uncertainty in analyses of statistical power (the probability, over repeated samples, of obtaining significant results when an effect exists in the population). Power analysis is promoted as a method to improve research replicability, but incorporating uncertainty about the unknown effect size yields calculated power values (and associated sample sizes) that are too uncertain to reasonably guide study planning. My ongoing work focuses on what utility power analysis retains (for designing studies, not evaluating completed ones) and on how we can better communicate about power analysis to limit misconceptions.
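The sketch below illustrates the core point with hypothetical numbers (a pilot estimate of d = 0.40 from 30 participants per group, and a planned two-sample t-test); it is an illustration of the general phenomenon, not a reproduction of analyses from my papers. Propagating the sampling uncertainty of the pilot effect size through a standard power formula produces a very wide interval of plausible power values.

```python
# Minimal sketch with hypothetical numbers: how sampling uncertainty in a
# pilot effect-size estimate propagates into calculated power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test at standardized effect size d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)             # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

d_hat, n_pilot = 0.40, 30                         # hypothetical pilot study
# Approximate standard error of Cohen's d with two groups of n_pilot each
se_d = np.sqrt(2 / n_pilot + d_hat**2 / (4 * n_pilot))
d_draws = rng.normal(d_hat, se_d, size=10_000)    # plausible true effects

powers = power_two_sample_t(d_draws, n_per_group=100)
print(f"power at d-hat: {power_two_sample_t(d_hat, 100):.2f}")
print(f"95% interval of calculated power: "
      f"[{np.percentile(powers, 2.5):.2f}, {np.percentile(powers, 97.5):.2f}]")
```

A single point estimate of d suggests a reassuring power value, but the interval of power values consistent with the pilot data spans nearly the whole unit interval, which is why such calculations are a shaky basis for choosing sample sizes.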

I have also started to consider uncertainty in the conception of constructs, which provides an alternative explanation for variability in results (cf. irreplicability) and motivates new modeling techniques. In relation to measuring constructs, I have begun to examine what kinds of scores (summed vs. factor) aptly reflect constructs and to examine the reliability of such scores from different perspectives. Much of this work grapples with the lack of clarity about how classical test theory, ANOVA models, factor models, and item response theory (IRT) models relate to one another.
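As a toy illustration of the summed vs. factor score distinction (with hypothetical loadings, not values from any particular study), the sketch below simulates a one-factor model, computes both score types along with coefficient alpha, and compares how well each score recovers the latent construct.

```python
# Minimal sketch (hypothetical loadings): summed scores vs. regression
# factor scores under a simulated one-factor model.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1_000, 6
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.3])   # hypothetical
eta = rng.normal(size=n)                               # latent construct
eps = rng.normal(size=(n, k)) * np.sqrt(1 - loadings**2)
X = eta[:, None] * loadings + eps                      # item responses

sum_score = X.sum(axis=1)

# Regression factor scores from the known model: E[eta | x] = l' S^{-1} x
S = np.outer(loadings, loadings) + np.diag(1 - loadings**2)
factor_score = X @ np.linalg.solve(S, loadings)

# Coefficient alpha for the summed score
item_var = X.var(axis=0, ddof=1)
alpha = k / (k - 1) * (1 - item_var.sum() / sum_score.var(ddof=1))

print(f"coefficient alpha = {alpha:.2f}")
print(f"corr(sum, factor score) = {np.corrcoef(sum_score, factor_score)[0, 1]:.2f}")
print(f"corr with true eta: sum = {np.corrcoef(sum_score, eta)[0, 1]:.2f}, "
      f"factor = {np.corrcoef(factor_score, eta)[0, 1]:.2f}")
```

With unequal loadings the two score types diverge, and each implies a different answer to what the score's reliability is, which is one face of the conceptual ambiguity this work addresses.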

Bridging methodology and practice. Methodologists have difficulty communicating their developments to substantive researchers, and substantive researchers misconstrue statistical concepts. My research in this area has focused on correcting misconceptions about statistical power by emphasizing that power analyses are "what if" scenarios that project forward rather than look back at completed results. Additionally, I have used simulations to show that flexible analysis strategies (p-hacking) have less influence than previously shown when results are aggregated across methodologically consistent studies, and that scores based on statistical partialing (effects controlling for other measures) reduce conceptual clarity. Such misconceptions may arise from the use of statistical heuristics, and I have put forward intuitively attractive, accessible, and relevant concepts that can counter these misconceptions and help substantive researchers understand statistical concepts. I continue to be interested in how quantitative psychologists can improve the conduct of psychological science through method development as well as pedagogy.

Curriculum Vitae
Google Scholar