Colloquium Spring 2019

Spring 2019

The Quantitative Psychology colloquium series meets weekly in the Autumn and Spring semesters. The primary activity is the presentation of ongoing research by students and faculty of the quantitative psychology program. Often, guest speakers from allied disciplines (e.g., Education, Statistics, and Linguistics) within and external to The Ohio State University present on contemporary quantitative methodologies. Additionally, discussions of quantitative issues in research and of recently published articles are often conducted during colloquium meetings.

Faculty coordinator: Dr. Jolynn Pek
Venue: 35 Psychology Building
Time: 12:30-1:30pm

 

January 7, 2019
Speaker: Nicholas Rockwood
Department of Psychology, The Ohio State University

Title: Modeling individual differences using multilevel structural equation models
Abstract: Over the past decade, there has been an increase in longitudinal data collection via experience sampling and ecological momentary assessments (EMAs). Because measured variables that fluctuate over time contain both within-person and between-person variability, multilevel modeling (MLM) is typically employed to analyze such data, with repeated measurements modeled as nested within individuals. The types of effects that can be modeled include within- and between-person processes, as well as individual differences in means (via random intercepts), within-person covariances (via random slopes), and within-person variances (via random variances). In this talk, I present the multilevel structural equation modeling (MSEM) framework as an extension of MLM that allows for modeling multivariate responses, between-person response variables, measurement error, and structural relations among between-person random effects/latent variables. Because the MSEM model does not typically have a closed-form likelihood function, maximum likelihood (ML) estimation requires approximating the function using numerical integration. I will demonstrate how the likelihood function can be reformulated to reduce the dimension of numerical integration required, which results in more accurate and efficient ML estimation. Example analyses will be provided to indicate potential applications of the model for health and behavioral science research.
Discussant:
Jack DiTrapani
Note: This talk will be 1 hour long instead of the usual 40 minutes, followed by a Q&A session.
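For readers less familiar with why numerical integration arises here, a generic two-level marginal likelihood (a textbook form under assumed cluster-level random effects u_j, not the speaker's specific reformulation) is

\[
L(\theta) \;=\; \prod_{j=1}^{J} \int p\!\left(y_j \mid u_j, \theta\right)\, p\!\left(u_j \mid \theta\right)\, du_j ,
\]

where the dimension of u_j (the cluster-level random effects or latent variables) determines the dimension of the integral; reducing that dimension is what makes ML estimation faster and more accurate.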

 

January 14, 2019
Organizational Meeting
*pizza and drinks will be served!

 

January 21, 2019
Martin Luther King Day

 

January 28, 2019
Title: Ethics Seminar: Predatory Journals
Abstract: Predatory journals are a new threat to the integrity of academic publishing. These parasitic publishers exploit the open-access model by subverting the peer-review process, which is minimal or even absent. Their primary motivation is to extract publication fees from authors, who succumb to the pressure to publish or perish. Predatory journals have led to research on their identification (e.g., Beall's blacklist), academic pranks (e.g., fake editors, Sorokowski, Kulczycki, Sorokowska, & Pisanski, 2017; fake submissions, Bohannon, 2013), and studies of who publishes in predatory journals (Demir, 2018). This seminar will discuss ethical aspects of predatory journals.
Discussants: Dr. Trish Van Zandt & Dr. Jolynn Pek

 

February 4, 2019
Speaker: Seo Wook Choi
Department of Psychology, The Ohio State University
Title: Optional stopping with Bayes factor
Abstract: It is common practice for researchers to collect more data until they obtain a satisfying result. Such optional stopping in data collection can be seen either as a legitimate research effort or as an abuse of statistical procedure. A standard way of carrying out optional stopping would therefore be useful for experimental research, and the Bayes factor can provide such a solution. Simulation studies will be presented that mimic experimental situations of testing a mean difference and an interaction effect between two variables. Often, the sample size might not be large enough to make a decision, so a way to calculate the number of samples required will be provided. Finally, I will discuss a limitation of the optional stopping rule related to bias in the estimated coefficients; ideas for correcting this bias systematically would make for a very meaningful discussion.
Discussant: Inhan Kang
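As a point of reference for the procedure described above, here is a minimal sketch of sequential monitoring with a Bayes factor. It uses the BIC approximation to the Bayes factor for a one-sample mean test (Wagenmakers, 2007) rather than the speaker's method, and the batch size and stopping thresholds are purely illustrative assumptions:

import numpy as np

def bf10_one_sample(x):
    # BIC-approximate Bayes factor for H1: mu != 0 vs H0: mu = 0 with normal data
    # (Wagenmakers, 2007); requires at least two observations.
    n = len(x)
    rss0 = np.sum(x ** 2)                    # residual sum of squares with the mean fixed at 0
    rss1 = np.sum((x - x.mean()) ** 2)       # residual sum of squares with the mean estimated
    bic0 = n * np.log(rss0 / n)
    bic1 = n * np.log(rss1 / n) + np.log(n)  # H1 has one extra parameter
    return np.exp((bic0 - bic1) / 2)

rng = np.random.default_rng(1)
data = np.empty(0)
batch = 10                                   # illustrative batch size
while True:
    data = np.append(data, rng.normal(loc=0.3, scale=1.0, size=batch))  # true effect of 0.3 SD
    bf = bf10_one_sample(data)
    if bf > 10 or bf < 1 / 10 or len(data) >= 500:  # illustrative stopping thresholds
        break
print(f"stopped at n = {len(data)}, BF10 = {bf:.2f}")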

Speaker: Yiyang Chen
Department of Psychology, The Ohio State University
Title: Modeling progressive ratio task with Bayesian methods
Abstract: The progressive ratio task (PRT) has recently been applied to quantify motivational deficits in human participants. By progressively increasing the effort that participants have to exert for a fixed amount of reward, the PRT obtains the maximum amount of effort each participant is willing to give before quitting the task. In common practice, researchers often use only the breakpoint, a single statistic related to the maximum amount of exerted effort, to measure motivation, which discards other helpful information obtained from PRT studies. I seek to build a hierarchical Bayesian model that uses a richer pool of information to measure motivation, including response time, effort in previous trials, and decision patterns from participants.
Discussant: Bob Gore
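For readers unfamiliar with the term, a hierarchical Bayesian model in this setting has the generic form below; the concrete likelihood and predictors are part of the speaker's ongoing work and are not shown, so the notation is only a schematic assumption:

\[
y_{pt} \sim f\!\left(y \mid \theta_p, x_{pt}\right), \qquad
\theta_p \sim g\!\left(\theta \mid \phi\right), \qquad
\phi \sim \pi(\phi),
\]

so that each participant p has their own motivation-related parameters \theta_p, which are in turn tied together by population-level parameters \phi, and trial-level data y_{pt} (e.g., quit/continue decisions and response times at trial t) inform both levels simultaneously.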

 

February 11, 2019
Speaker: Ottavia Epifania
Department of Psychological Sciences, University of Padova
Title: Rasch gone mixed: A mixed model approach to the Implicit Association Test
Abstract: Advantages resulting from a Rasch analysis of Implicit Association Test (IAT) responses have already been outlined in the literature. However, given the cross-classified structure of the IAT, the application of the Rasch model might be problematic. This issue can be overcome by applying linear mixed-effects models (LMMs). Generalized LMMs and LMMs were applied to accuracy responses and log-time responses of two different IATs (i.e., a Race IAT and a Chocolate IAT) to obtain Rasch and lognormal model estimates, respectively. For both IATs, generalized LMMs resulted in the estimation of condition-specific stimulus easiness parameters and overall respondent ability parameters, while LMMs resulted in overall stimulus time-intensity parameters and participants' condition-specific speed parameters. Results allowed a deeper investigation of the contribution of each stimulus to the IAT effect, along with information on stimulus category representativeness. Detailed information on participants' accuracy and speed performance was available as well, allowing investigation of the components of the D-score, the classic score used for the IAT. The capacity of the parameters to predict a behavioural outcome was investigated and compared with the predictive capacity of the D-score. The condition-specific speed parameters proved to have better predictive performance than the D-score in terms of chocolate choices correctly identified. Implications of the results, future directions, and limitations are discussed.

Ottavia Epifania is a PhD student in Psychological Sciences at the University of Padova. Her research focuses on the Implicit Association Test (IAT), specifically on modelling its responses with an IRT approach. To take into account the fully-crossed design characterizing the IAT, she uses linear mixed-effects models to estimate the parameters of Rasch and lognormal models from accuracy and log-transformed time responses, respectively. She is also developing a Shiny web app and an R package for the computation of the D-score for the IAT.
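One way to write the crossed random-effects formulation described in the abstract, with illustrative notation in which p, s, and c index respondents, stimuli, and IAT conditions:

\[
\operatorname{logit} \Pr\!\left(Y_{psc} = 1\right) = \theta_p + b_{sc}
\quad \text{(accuracy: overall respondent ability } \theta_p \text{, condition-specific stimulus easiness } b_{sc}\text{)}
\]
\[
\ln T_{psc} = \delta_s - \tau_{pc} + \varepsilon_{psc}
\quad \text{(log time: overall stimulus time intensity } \delta_s \text{, condition-specific respondent speed } \tau_{pc}\text{)}
\]

Estimating the first equation as a generalized LMM with crossed random effects for respondents and stimuli recovers Rasch-type parameters; estimating the second as an LMM on log-transformed response times recovers lognormal-model parameters.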

 

February 18, 2019
Speaker: Feifei Huang
School of Psychology, South China Normal University
Title: Research on the Test Equating of Explanatory Item Response Models
Abstract: Test equating is a statistical process that is used to adjust scores on test forms so that scores on the forms can be used interchangeably (Kolen & Brennan, 2004). Among the various possible equating designs, the non-equivalent groups with anchor test (NEAT) design is the most widely used in large-scale testing. However, optimal score equating results may be impossible to obtain because exposure to the common items may positively affect examinee performance. New alternative designs that do not use common items need to be explored and investigated for test equating in security-risk populations. Based on De Boeck and Wilson (2004) (also see De Boeck, Cho, & Wilson, 2016; De Boeck & Wilson, 2016), a new method can be developed. The approach assumes that the item parameters can be reasonably well predicted from item covariates in the framework of explanatory IRT models. There are three types of explanatory IRT models: 1) person explanatory, 2) item explanatory, and 3) doubly explanatory. Preliminary research has shown that the explanatory IRT approach can be useful for equating, as it requires neither overlap of respondents nor overlap of items. However, there is no assurance that this alternative method would function as well as the traditional design for test equating. A simulation study will be conducted to investigate the performance of the explanatory IRT approach.

Feifei Huang is a PhD student in the School of Psychology at South China Normal University. Her research concerns test equating in item response theory. The NEAT design is a widely used data collection design for test equating; however, the risk of common-item exposure becomes more serious in the internet era, where test takers are able to share test items online after they have taken the test (Wei & Morgan, 2016). Her current work explores alternative designs and methods for test equating that do not use common items, specifically test equating with explanatory item response models.
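As one concrete instance of an item-explanatory model of the kind described above, the linear logistic test model (LLTM; Fischer, 1973), presented in De Boeck and Wilson (2004), replaces free item difficulties with a linear combination of item covariates:

\[
\operatorname{logit} \Pr\!\left(Y_{pi} = 1\right) = \theta_p - \sum_{k=1}^{K} \beta_k q_{ik},
\]

where q_{ik} is the value of covariate k for item i and \beta_k is its effect. Because item parameters are predicted from covariates rather than estimated freely per item, two forms built from the same covariate specification can in principle be linked without common items, which is the idea the proposed equating research builds on.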

 

February 25, 2019
Speaker: Dr. David Hothersall
Department of Psychology, The Ohio State University
Title: An Illustrated Early History of the Ohio State University and its Department of Psychology
Abstract: The founding of Ohio State and the first 80 years of the Department of Psychology will be presented. Special attention will be paid to significant figures and their contributions to the university and the department.

 

March 4, 2019
Speaker: Ivory Li
Department of Psychology, The Ohio State University
Title: The Differentiation of Three Types of Conditional Dependence
Abstract: Conditional dependence is defined as dependence between items that cannot be fully explained by the latent variables and their correlation(s). We focus on parallel data (e.g., response times and responses for the same items) with a multidimensional model in which each type of data has its own latent variable. There are three possible types of conditional dependence. Assuming item Y1 loads on factor F1 and item Y2 loads on factor F2, the first type of conditional dependence derives from the effect of the expected values of Y1 on Y2. In this case, the dependence can be explained through a cross-loading of Y2 on factor F1. The second type is based on the effect of the observed values of Y1 on Y2, which implies a direct effect of Y1 on Y2. The third type comes from the effect of the residual of Y1 on Y2, in which case the dependence is captured through a residual correlation between Y1 and Y2. Relevant derivations show that the three types of dependence result in different correlations among variables, indicating the possibility of differentiating the types of dependence. An empirical study was first conducted to compare models with different types of conditional dependence. Even though the model with direct effects has the best goodness of fit and is theoretically reasonable, its model-fit criterion values are close to those of the model with residual correlations. Possible reasons are that the models were too complicated or that the data were not informative enough. As a further investigation, a simulation study was conducted to find under which conditions the three types of dependence can be differentiated and to explore possible factors that influence the differentiation.
Discussant: Diana Zhu
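Using the abstract's notation, with baseline measurement equations Y_1 = \lambda_1 F_1 + \varepsilon_1 and Y_2 = \lambda_2 F_2 + \varepsilon_2, the three types of conditional dependence can be written as follows (the notation is illustrative):

\[
\text{Type 1 (cross-loading):} \quad Y_2 = \lambda_2 F_2 + \lambda_{21} F_1 + \varepsilon_2
\]
\[
\text{Type 2 (direct effect):} \quad Y_2 = \lambda_2 F_2 + \beta Y_1 + \varepsilon_2
\]
\[
\text{Type 3 (residual correlation):} \quad Y_2 = \lambda_2 F_2 + \varepsilon_2, \qquad \operatorname{Cov}(\varepsilon_1, \varepsilon_2) \neq 0
\]

The three specifications imply different model-based covariances among Y1, Y2, and the factors, which is what makes differentiating them possible in principle.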

 

March 11, 2019
Spring Break

 

Time Change: 12:10-1:30pm
March 18, 2019
Speaker: Dr. Richard S. John
Department of Psychology, University of Southern California
Title: On human detection of deception on social media
Abstract: False posts to social media following extreme events such as natural disasters and terror attacks have become a pervasive concern for both crisis responders and the public. The current study assesses how well individuals can identify false social media posts, extending previous research on lie detection in oral communication. We report 3 experiments in which over 1,000 US participants were presented with a series of actual Tweets posted within 48 hours following soft-target terrorist attacks in the US or Europe. Experiment 1 contains three conditions in which the base rates of false information were 25%, 50%, and 75%, respectively. In Experiments 2 and 3, respondents were incentivized using one of three payoffs varying in the relative cost of false positives and false negatives. In each experiment, respondents were randomly assigned to one of three conditions and provided a binary judgment of the authenticity of information for 20 separate Tweets. ROC analysis showed that respondents performed only slightly better than chance (AUC statistics ranged from 0.49 to 0.56 across the three experiments), consistent with previous meta-analyses of lie detection research. Furthermore, sensitivity and specificity shifted across the three conditions in accordance with the manipulations, yet not as much as predicted from an optimal threshold analysis. Participants were responsive to the manipulations, but overall poor at detecting false information. Participants who self-identified politically as conservatives performed worse than liberals and moderates across all three experiments.
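For readers unfamiliar with the ROC summary used here, below is a minimal illustrative sketch, with simulated data rather than the study's data, of how sensitivity, specificity, and AUC can be computed from binary authenticity judgments; with a single binary judgment per Tweet, the empirical AUC reduces to (sensitivity + specificity) / 2:

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)                          # 1 = authentic Tweet, 0 = false (simulated)
judged = np.where(rng.random(200) < 0.55, truth, 1 - truth)   # a judge who is right about 55% of the time

tn, fp, fn, tp = confusion_matrix(truth, judged).ravel()
sensitivity = tp / (tp + fn)        # proportion of authentic Tweets judged authentic
specificity = tn / (tn + fp)        # proportion of false Tweets judged false
auc = roc_auc_score(truth, judged)  # equals (sensitivity + specificity) / 2 for binary judgments

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, AUC = {auc:.2f}")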

 

March 25, 2019
Speaker: Joonsuk Park
Department of Psychology, The Ohio State University
Title: Identifiability analysis of sequential sampling models
Abstract: A parametric statistical model is said to be identifiable if there exists only a single set of model parameters that is consistent with the data (Bamber & Van Santen, 2000). Put another way, for identifiable models, two distinct parameter sets always yield different probability distributions defined on the outcome space of the experiment. Identifiability is a critical condition for the parameter estimates of a model to be meaningfully interpreted; were it not the case, different parameter sets would provide equally good fit to a given dataset, making it impossible to choose which to interpret. Unfortunately, it is not yet clear whether popular sequential sampling models such as the diffusion decision model (DDM) and the linear ballistic accumulator (LBA) are identifiable, due to the lack of identifiability analyses of these models. This situation has to be addressed because the models' parameter estimates are often given substantive interpretations. Interestingly, it was recently reported that parameter recovery of the DDM was poor, especially for the so-called across-trial variability parameters, suggesting the possibility that the DDM is unidentifiable, which has motivated this study (Boehm et al., 2018). In this presentation, I will discuss the concepts, issues, and preliminary results regarding the identifiability analyses of the DDM and LBA. In addition, future research directions and implications of the possible results are discussed.
Discussant: Yiyang Chen
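The definition quoted in the abstract can be stated formally as follows: a parametric family \{P_\theta : \theta \in \Theta\} is (globally) identifiable if

\[
P_{\theta_1} = P_{\theta_2} \;\Longrightarrow\; \theta_1 = \theta_2 \qquad \text{for all } \theta_1, \theta_2 \in \Theta,
\]

that is, distinct parameter values always induce distinct probability distributions over the experiment's outcome space.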

Speaker: Inhan Kang
Department of Psychology, The Ohio State University
Title: Decomposition of sources of noise in perception and cognition
Abstract: Information processing in perception and cognition is inherently noisy. The noise can arise from internal sources (e.g., moment-to-moment fluctuation) or external sources (e.g., different configurations of the nominally same stimulus). However, many mathematical models of psychological processes implement only internal noise. In this presentation, direct evidence of external sources of noise obtained from five double-pass experiments will be reported. In double-pass experiments, the exact same stimuli are presented twice and the degree of agreement in the subject's responses is analyzed. Predictions from the linear ballistic accumulator model were used to decompose internal and external sources of noise. Based on the results, it appears that external sources of noise account for a large proportion of the total noise in many cases, supporting the claim that cognitive models should consider external noise to reproduce behavioral patterns from perceptual decision-making tasks.
Discussant: Joonsuk Park
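A minimal sketch of the double-pass agreement statistic used in such experiments, with simulated and purely illustrative data: if responses were driven entirely by the stimulus, identical stimuli would yield identical responses on both passes, so agreement below that ceiling reflects internal noise, and the joint pattern of accuracy and agreement is what the model-based decomposition exploits.

import numpy as np

# Simulated double-pass data: binary responses to the same 100 stimuli presented twice.
rng = np.random.default_rng(2)
pass1 = rng.integers(0, 2, size=100)
pass2 = np.where(rng.random(100) < 0.8, pass1, 1 - pass1)  # about 80% of responses repeat across passes

agreement = np.mean(pass1 == pass2)   # proportion of identical responses across the two passes
print(f"double-pass agreement = {agreement:.2f}")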

 

April 1, 2019
Speaker: Jacob Coutts
Department of Psychology, The Ohio State University
Title: SEM, OLS regression, and their application to mediation analysis in distinguishable dyadic data
Abstract: Establishing the process or mechanism by which effects operate is one of the fundamental goals of social science. Although it is important to know that two variables are associated, at least as important is understanding how they are associated. The analysis of the mechanism(s) by which an effect operates through an intervening variable—also known as mediation analysis—is a popular analytical technique for testing hypotheses about mechanisms. This technique can be applied to models measuring dyads—an important unit of human interaction found in many settings in life (e.g., marriages, workplaces, families). Measuring variables on two persons (as opposed to just one individual) increases the complexity of analyses that seek to understand how dyad members affect each other. Computational methods depend on whether dyad members are theoretically distinguishable or indistinguishable, with the former—the focus of this presentation—being more difficult to analyze and interpret. Typically, dyadic mediation analysts use structural equation modeling (SEM) procedures that require special software and training in how to set up a model, impose estimation constraints if desired, and test the difference in fit of various models. Current practices with mediation analysis in distinguishable dyadic data are presented, as well as a new regression-based computational macro that greatly simplifies analysis in these models (including pairwise contrasts of indirect effects). Limitations and advantages of the two approaches are discussed.
Discussant: Kathryn Hoisington-Shaw
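For readers new to mediation analysis, here is a minimal single-level (non-dyadic) illustration of the regression-based product-of-coefficients logic behind an indirect effect; it is not the dyadic macro described in the talk, and the variable names and simulated effects are assumptions for illustration only:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)                       # predictor
m = 0.5 * x + rng.normal(size=n)             # mediator: a-path set to 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome: b-path 0.4, direct effect 0.2

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                  # a-path estimate
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]                      # direct effect and b-path estimates

indirect = a * b   # product-of-coefficients estimate of the indirect effect of x on y through m
print(f"indirect effect = {indirect:.3f}, direct effect = {c_prime:.3f}")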

 

April 8, 2019
Speaker: Kathryn Hoisington-Shaw  
Department of Psychology, The Ohio State University
Title: Statistical power: Interpreting power for design vs. power for evaluation
Abstract: Statistical power analysis has received much attention due to questions regarding the robustness of psychological studies (e.g., Bakker, van Dijk, & Wicherts, 2012). Two categories of statistical power have developed as a result: power for design (Cohen, 1988) and power for evaluation (e.g., see Onwuegbuzie & Leech, 2004). Although power for design is better understood, the extent of its use in practice is not as clear. Additionally, power for evaluation remains elusive in its interpretation and has been surrounded by misconceptions (O'Keefe, 2007). Two studies were conducted to further investigate these concepts of statistical power. First, a sample of 100 journal articles published in Psychological Science in 2017 was reviewed for the statistical models used as well as any reports of statistical power. The average article was found to contain 2.4 studies, and power analysis was not reported for the majority of statistical tests used (e.g., 75% of linear models were not accompanied by a power analysis). Second, multiple Monte Carlo simulations were conducted to evaluate possible interpretations of power for evaluation. One such interpretation is that power for evaluation can be used as an estimate of power for design. However, the simulations suggested that power for evaluation is a biased and inconsistent estimator, suggesting that this interpretation is not useful. Future directions include investigating power for evaluation as it relates to multiple studies within a single journal article, as well as exploring additional possible interpretations of power for evaluation.
Discussant: Selena Wang
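As a minimal illustration of power for design computed by Monte Carlo simulation (the test, effect size, and sample size below are illustrative assumptions, not the study's materials):

import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size, alpha=0.05, reps=5000, seed=0):
    # Proportion of simulated two-sample t-tests that reject H0 at level alpha.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        g1 = rng.normal(0.0, 1.0, n_per_group)
        g2 = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(g1, g2)
        rejections += p < alpha
    return rejections / reps

print(simulated_power(n_per_group=64, effect_size=0.5))  # roughly 0.80 for a medium effect (d = 0.5)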

Speaker: Diana Zhu
Department of Psychology, The Ohio State University
Title: Cross-level lack of metric invariance as an explanation for lack of multi-group scalar invariance
Abstract: In this study, we explain how a violation of cross-level metric invariance leads to a violation of multi-group scalar invariance. We reason from a violation of cross-level invariance to violations of measurement invariance in a multi-group model, whereas other authors have focused on multi-group conditions for cross-level invariance (e.g., Jak & Jorgensen, 2017); the two are two sides of the same coin. The reasoning is as follows: we can derive expected item means for level-2 units based on their position on the level-2 factors. Without cross-level metric invariance, the group differences in item means cannot be explained by the group differences in multi-group factor means. In the absence of cross-level metric invariance, level 2 implies another source of group differences in item means. This also means that the specific scalar invariance violations in a between-group model can be predicted from the level-2 factors.
In this presentation, I will go through the theoretical derivation, show an attempted illustration with real data from an annual employee opinion survey, and briefly discuss future simulation research directions.
Discussant: Saemi Park
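One way to formalize the argument, assuming a standard two-level factor model; the notation below is an illustrative reconstruction, not the speaker's derivation:

\[
y_{ij} = \nu + \Lambda_W \eta_{Wij} + \varepsilon_{Wij} + \Lambda_B \eta_{Bj} + \varepsilon_{Bj},
\qquad
E\!\left(\bar{y}_{\cdot j}\right) = \nu + \Lambda_B \eta_{Bj},
\]

so the difference in expected item means between two level-2 groups with factor-mean difference \alpha_1 - \alpha_2 is \Lambda_B(\alpha_1 - \alpha_2). A multi-group model fit to the item-level data uses the within-level loadings \Lambda_W; if \Lambda_W = \Lambda_B (cross-level metric invariance), the factor-mean difference reproduces the item-mean differences, but if \Lambda_W \neq \Lambda_B, the leftover part (\Lambda_B - \Lambda_W)(\alpha_1 - \alpha_2) must be absorbed by group-specific intercepts, which is exactly a violation of scalar invariance.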

 

April 15, 2019
Speaker: Saemi Park
Department of Psychology, The Ohio State University
Title: A Relation between Multi-group Multidimensionality and Uniform DIF
Abstract: This study explores a relation between multi-sample multidimensionality and uniform DIF, revisiting the multidimensional model of DIF (MMD) proposed by Shealy and Stout (1993). One of the complications test developers unceasingly encounter is ensuring that items in a test truly measure the target latent trait of interest and that no other unwanted traits are triggered and measured by the items. One reason this is not simple is that cognitive abilities are interwoven, in the sense that one kind of cognitive development depends on and facilitates other kinds of cognitive development to some degree. If a test measures one or more secondary latent traits in addition to the target latent trait, chances are that this leads to DIF. The MMD explicates how an additional latent trait plays a role in creating uniform DIF by considering three factors: the impact (group mean difference) on the primary dimension, the impact on the additional dimension, and the correlation between the two dimensions. An interrelation among the three factors defines a function named 'DIF potential'. We adopt it to investigate in what systematic way DIF potential and item discrimination affect three properties of uniform DIF: occurrence, magnitude, and direction. It is found that an interaction between DIF potential and the item discrimination on the primary dimension causes false positives for unidimensional items, that null DIF potential can suppress uniform DIF of multidimensional items, and that an interaction between DIF potential and the item discriminations on both dimensions anticipates the three properties of uniform DIF.
Discussant: Ivory Li
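To give a sense of where DIF potential comes from, consider a simplified version of the MMD setting in which the primary ability \theta and secondary ability \eta are bivariate normal with unit variances and correlation \rho within each group g (this is an illustrative simplification; the Shealy-Stout formulation is more general):

\[
E\!\left(\eta \mid \theta, G = g\right) = \mu_{\eta g} + \rho\left(\theta - \mu_{\theta g}\right),
\]

so that at any fixed level of the primary ability the reference-focal gap in expected secondary ability is (\mu_{\eta R} - \mu_{\eta F}) - \rho(\mu_{\theta R} - \mu_{\theta F}). When this quantity is nonzero, items that also discriminate on \eta favor one group uniformly across \theta, which is uniform DIF; it is zero when the impact on the secondary dimension is exactly what the correlation with the primary dimension would predict.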

 

April 22, 2019
Brief presentations on external talks
*pizza and soda will be served

Inhan Kang
Jack DiTrapani
Seo Wook Choi
Jacob Coutts
Kathryn Hoisington-Shaw
Joonsuk Park

April 23, 2019
Distinguished Alumni Award Colloquium
Time & Venue: 3:30pm, PS 35 (Reception to follow)
Speaker: Dr. Keith Widaman
Graduate School of Education
University of California, Riverside
Title: Modeling Data Using Regression Models: Testing Theoretical Conjectures Strongly
Abstract: Typical hypothesis testing in psychology and the behavioral sciences uses an exploratory approach to the specification and testing of models developed over 80 years ago. This well-ingrained method involves the development of two mutually exclusive and exhaustive hypotheses – a null hypothesis and an alternative hypothesis – and serves as the basis for t-tests of differences in means, F-tests in ANOVA, and other common analytic chores. This approach presumes implicitly that the researcher is ignorant with regard to likely outcomes of experiments and associated data analyses, so any patterns of non-chance results can be captured. The time has come for a new approach to analyses, introducing the model-testing approach developed in structural equation modeling into regression analysis. Rather than using completely off-the-shelf, exploratory methods, modeling of data requires one to consider carefully the process generating the data and then to specify and test whether theoretically formulated models adequately explain the data. Several examples will illustrate the new insights that can arise using this revised approach. The key advantage of this approach to model testing is that it enables one to test strongly the theoretical conjectures motivating research.

 
