Yiqing Xu

Assistant Professor at
Department of Political Science
Stanford University

Welcome!

I am an Assistant Professor in the Department of Political Science at Stanford University. I was recently promoted to Associate Professor (with tenure), effective July 1, 2026.

I work in political methodology (causal inference) and comparative politics (with a focus on China).

I received a PhD in Political Science from the Massachusetts Institute of Technology (MIT) in 2016, an MA in Economics from the National School of Development (NSD) at Peking University in 2010, and a BA in Economics from Fudan University in 2007. I taught at the University of California San Diego (UCSD) from July 2016 to September 2019.

I am an associate director of the Stanford Causal Science Center (SC2) and a faculty affiliate of the Stanford Center on China’s Economy and Institutions (SCCEI), the Stanford Center for Open and REproducible Science (CORES), the Stanford King Center on Global Development, the Stanford Center for East Asia Studies (CEAS), and the 21st Century China Center (21CCC) at UCSD.

My work has appeared in the American Political Science Review, the American Journal of Political Science, The Journal of Politics, Political Analysis, the Journal of the American Statistical Association, the Journal of Economic Perspectives, and Nature Human Behaviour, among other academic outlets.

I have won several professional awards and fellowships.

You can reach me via email: yiqingxu [at] stanford.edu.

Recent Articles.

  • Scaling Reproducibility: An AI-Assisted Workflow for Large-Scale Reanalysis with Leo Yang Yang.

    Reproducibility is central to research credibility, yet large-scale reanalysis of empirical data remains costly because replication packages vary widely in structure, software environment, and documentation. We develop and evaluate an agentic AI workflow that addresses this execution bottleneck while preserving scientific rigor. The system separates scientific reasoning from computational execution: researchers design fixed diagnostic templates, and the workflow automates the acquisition, harmonization, and execution of replication materials using pre-specified, version-controlled code. A structured knowledge layer records resolved failure patterns, enabling adaptation across heterogeneous studies while keeping each pipeline version transparent and stable. We evaluate this workflow on 92 instrumental variable (IV) studies, including 67 with manually verified reproducible 2SLS estimates and 25 newly published IV studies evaluated under identical criteria. For each paper, we analyze up to three two-stage least squares (2SLS) specifications, totaling 215 specifications. Across the 92 papers, the system achieves 87% end-to-end success overall. Conditional on accessible data and code, reproducibility is 100% at both the paper and specification levels. The framework substantially lowers the cost of executing established empirical protocols and can be adapted in empirical settings where analytic templates and norms of transparency are well established.

  • Factorial Difference-in-Differences with Anqi Zhao and Peng Ding. Journal of the American Statistical Association, forthcoming.

    We formulate factorial difference-in-differences (FDID), a research design that extends canonical difference-in-differences (DID) to settings in which an event affects all units. In many panel data applications, researchers exploit cross-sectional variation in a baseline factor alongside temporal variation in the event, but the corresponding estimand is often implicit and the justification for applying the DID estimator remains unclear. We frame FDID as a factorial design with two factors, the baseline factor G and the exposure level Z, and define effect modification and causal moderation as the associative and causal effects of G on the effect of Z, respectively. Under standard DID assumptions of no anticipation and parallel trends, the DID estimator identifies effect modification but not causal moderation. Identifying the latter requires an additional factorial parallel trends assumption, that is, mean independence between G and potential outcome trends. We extend the framework to conditionally valid assumptions and regression-based implementations, and further to repeated cross-sectional data and continuous G. We demonstrate the framework with an empirical application on the role of social capital in famine relief in China.
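    The canonical DID estimator at the heart of this design compares outcome changes across levels of the baseline factor. A minimal sketch with made-up group means (not drawn from the paper's famine-relief application):

```python
# Hypothetical illustration of the difference-in-differences (DID)
# estimator: mean outcomes for two baseline groups (G = 1 vs. G = 0)
# before and after an event that exposes all units. All numbers here
# are invented for illustration.

def did(y1_post, y1_pre, y0_post, y0_pre):
    """DID estimate: the change for G=1 minus the change for G=0."""
    return (y1_post - y1_pre) - (y0_post - y0_pre)

# Under parallel trends, this difference of changes identifies effect
# modification; causal moderation additionally requires factorial
# parallel trends (mean independence of G and potential outcome trends).
estimate = did(y1_post=7.0, y1_pre=4.0, y0_post=5.0, y0_pre=3.0)
print(estimate)  # (7 - 4) - (5 - 3) = 1.0
```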

  • The Credibility Revolution in Political Science with Carolina Torreblanca, William Dinneen, and Guy Grossman.

    How has the credibility revolution reshaped political science? We address this question by using a large language model to classify 91,632 articles published between 2003 and 2023 across 174 political science journals, focusing on causal research designs, transparency practices, and citation patterns. Design-based studies—research strategies that make explicit a research design and the assumptions required for causal identification—have become increasingly common, displacing regression-based analyses that rely primarily on modeling assumptions. Yet as of 2023, studies without an explicit identification strategy still constitute nearly 40% of empirical quantitative work. Within design-based research, survey experiments dominate, while field experiments and quasi-experimental approaches have grown more modestly. Transparency practices such as placebo tests and power analysis remain rare. Design-based studies are concentrated in top journals and among authors at highly ranked institutions, and enjoy a persistent citation premium. The credibility revolution has meaningfully reshaped the discipline, though unevenly and incompletely.

  • User Location Disclosure Fails to Deter Overseas Criticism but Amplifies Regional Divisions on Chinese Social Media with Leo Yang Yang.

    We examine the behavioral effects of a user location disclosure policy implemented by Sina Weibo, China’s largest microblogging platform, using a high-frequency dataset of uncensored user engagement—including tens of thousands of comments—on 165 prominent government and media accounts. Exploiting the platform’s abrupt rollout of IP-based location tags on April 28, 2022, we compare user behavior in comment sections before and after the policy change. Although the policy was publicly justified as a measure to curb misinformation and counter foreign influence, we find no decline in participation by overseas users. Instead, it significantly reduced domestic engagement with local issues outside users’ home provinces, particularly among critical comments. Evidence suggests this effect was not driven by generalized fear or concerns about credibility, but by a rise in regionally discriminatory replies that increased the social cost of cross-provincial engagement. Our findings indicate that identity disclosure tools can produce unintended consequences by activating existing social divisions in ways that reinforce state control without direct censorship.

  • A Practical Guide to Estimating Conditional Marginal Effects: Modern Approaches with Jiehan Liu and Ziyi Liu. Prepared for Elements in Quantitative and Computational Methods for the Social Sciences, Cambridge University Press.

    This Element offers a practical guide to estimating conditional marginal effects—how treatment effects vary with a moderating variable—using modern statistical methods. Commonly used approaches, such as linear interaction models, often suffer from unclarified estimands, limited overlap, and restrictive functional forms. This guide begins by clearly defining the estimand and presenting the main identification results. It then reviews and improves upon existing solutions, such as the semiparametric kernel estimator, and introduces robust estimation strategies, including augmented inverse propensity score weighting with Lasso selection (AIPW-Lasso) and double machine learning (DML) with modern algorithms. Each method is evaluated through simulations and empirical examples, with practical recommendations tailored to sample size and research context. All tools are implemented in the accompanying interflex package for R.
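    The restrictive functional form critique can be seen in a toy example. In a linear interaction model Y = a + bD + cX + dDX, the conditional marginal effect of D at moderator value x is b + dx, which is forced to be a straight line in x regardless of the data. A minimal sketch with hypothetical coefficients (not output from interflex):

```python
# In a linear interaction model Y = a + b*D + c*X + d*D*X, the marginal
# effect of treatment D at moderator value x is b + d*x. The coefficients
# below are made up; the point is that the linear model constrains the
# conditional marginal effect to be exactly linear in x, which the more
# flexible estimators discussed above relax.

def marginal_effect(b, d, x):
    """Conditional marginal effect of D at moderator value x."""
    return b + d * x

b, d = 2.0, -0.5  # hypothetical interaction-model coefficients
effects = [marginal_effect(b, d, x) for x in (0, 2, 4)]
print(effects)  # [2.0, 1.0, 0.0] -- linear in x by construction
```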

  • Causal Panel Analysis under Parallel Trends: Lessons from a Large Reanalysis Study with Albert Chiu, Xingchen Lan, and Ziyi Liu. American Political Science Review, Vol. 120, Iss. 1, February 2026, pp. 245–266.

    Two-way fixed effects (TWFE) models are widely used in political science to establish causality, but recent methodological discussions highlight their limitations under heterogeneous treatment effects (HTE) and violations of the parallel trends (PT) assumption. This growing literature has introduced numerous new estimators and procedures, causing confusion among researchers about the reliability of existing results and best practices. To address these concerns, we replicated and reanalyzed 49 studies from leading journals using TWFE models for observational panel data with binary treatments. Using six HTE-robust estimators, diagnostic tests, and sensitivity analyses, we find: (i) HTE-robust estimators yield qualitatively similar but highly variable results; (ii) while a few studies show clear signs of PT violations, many lack evidence to support this assumption; and (iii) many studies are underpowered when accounting for HTE and potential PT violations. We emphasize the importance of strong research designs and rigorous validation of key identifying assumptions.
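    For readers unfamiliar with the baseline method under scrutiny, the TWFE estimator can be computed via the within transformation: demean the outcome and the treatment by unit and by period, then regress one residual on the other. A minimal sketch with invented data (in the 2-unit, 2-period case it coincides with simple DID):

```python
# Sketch of the two-way fixed effects (TWFE) estimator via double
# demeaning. Data are stored as dicts keyed by (unit, period); all
# numbers are made up for illustration.

def twfe(y, d, units, periods):
    unit_mean = lambda v, i: sum(v[i, t] for t in periods) / len(periods)
    time_mean = lambda v, t: sum(v[i, t] for i in units) / len(units)

    def demean(v):
        grand = sum(v.values()) / len(v)
        return {(i, t): v[i, t] - unit_mean(v, i) - time_mean(v, t) + grand
                for i in units for t in periods}

    y_til, d_til = demean(y), demean(d)
    # OLS slope of demeaned outcome on demeaned treatment
    return (sum(d_til[k] * y_til[k] for k in y_til)
            / sum(d_til[k] ** 2 for k in d_til))

units, periods = (1, 2), (1, 2)
y = {(1, 1): 3, (1, 2): 7, (2, 1): 2, (2, 2): 4}
d = {(1, 1): 0, (1, 2): 1, (2, 1): 0, (2, 2): 0}  # unit 1 treated in period 2
print(twfe(y, d, units, periods))  # 2.0, the same as the 2x2 DID estimate
```

With staggered adoption and heterogeneous effects, this same regression implicitly weights unit-period cells in ways that can even flip signs, which is what motivates the HTE-robust estimators evaluated in the paper.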


    (Please see the Erratum, which addresses a typesetting error in the published article.)

  • Decentralized Propaganda in the Era of Digital Media: The Massive Presence of the Chinese State on Douyin with Yingdan Lu, Jennifer Pan, and Xu Xu. American Journal of Political Science, forthcoming.

    The rise of social media in the digital era poses unprecedented challenges to authoritarian regimes that aim to influence public attitudes and behaviors. In this paper, we argue that authoritarian regimes have adopted a decentralized approach to producing and disseminating propaganda on social media. In this model, tens of thousands of government workers and insiders are mobilized to produce and disseminate propaganda, and content flows in a multi-directional, rather than a top-down, manner. We empirically demonstrate the existence of this new model in China by creating a novel dataset of over five million videos from over 18,000 regime-affiliated accounts on Douyin, the Chinese counterpart of TikTok. This paper supplements prevailing understandings of propaganda by showing theoretically and empirically how digital technologies are changing not only the content of propaganda, but also the way in which propaganda materials are produced and disseminated.

  • Comparing Experimental and Nonexperimental Methods: What Lessons Have We Learned Four Decades After LaLonde (1986)? with Guido Imbens. Journal of Economic Perspectives, Vol. 39, No. 4, pp. 173–202, Fall 2025.

    In 1986, Robert LaLonde published an article comparing nonexperimental estimates to experimental benchmarks (LaLonde 1986). He concluded that the nonexperimental methods at the time could not systematically replicate experimental benchmarks, casting doubt on their credibility. Following LaLonde’s critical assessment, there have been significant methodological advances and practical changes, including (i) an emphasis on the unconfoundedness assumption separated from functional form considerations, (ii) a focus on the importance of overlap in covariate distributions, (iii) the introduction of propensity score-based methods leading to doubly robust estimators, (iv) methods for estimating and exploiting treatment effect heterogeneity, and (v) a greater emphasis on validation exercises to bolster research credibility. To demonstrate the practical lessons from these advances, we reexamine the LaLonde data. We show that modern methods, when applied in contexts with sufficient covariate overlap, yield robust estimates for the adjusted differences between the treatment and control groups. However, this does not imply that these estimates are causally interpretable. To assess their credibility, validation exercises (such as placebo tests) are essential, whereas goodness-of-fit tests alone are inadequate. Our findings highlight the importance of closely examining the assignment process, carefully inspecting overlap, and conducting validation exercises when analyzing causal effects with nonexperimental data.
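    One of the propensity-score methods referenced above can be sketched in a few lines. With outcomes y, treatment indicators d, and propensity scores e (assumed known here; in practice they are estimated), inverse propensity weighting (IPW) reweights each group before differencing. All numbers below are invented, not from the LaLonde data:

```python
# Hypothetical sketch of the inverse propensity weighting (IPW) estimator
# of the average treatment effect. With a constant propensity score it
# reduces to a simple difference in means, which makes the toy example
# easy to check by hand.

def ipw_ate(y, d, e):
    n = len(y)
    treated = sum(di * yi / ei for yi, di, ei in zip(y, d, e)) / n
    control = sum((1 - di) * yi / (1 - ei) for yi, di, ei in zip(y, d, e)) / n
    return treated - control

y = [5.0, 6.0, 2.0, 3.0]
d = [1, 1, 0, 0]
e = [0.5, 0.5, 0.5, 0.5]  # constant propensity -> difference in means
print(ipw_ate(y, d, e))   # (5 + 6)/2 - (2 + 3)/2 = 3.0
```

The doubly robust estimators mentioned in the abstract combine this weighting step with an outcome regression, remaining consistent if either model is correct; and, as the paper stresses, overlap (e bounded away from 0 and 1) is what keeps these weights stable.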

  • Disguised Repression: Targeting Opponents with Non-Political Crimes to Undermine Dissent with Jennifer Pan and Xu Xu. The Journal of Politics, Vol. 88, No. 1, January 2026, pp. 282–298.

    Why do authoritarian regimes charge political opponents with non-political crimes when they can levy charges directly related to opponents’ political activism? We argue that doing so disguises political repression and undermines the moral authority of opponents, minimizing backlash and mobilization. To test this argument, we conduct a survey experiment, which shows that disguised repression decreases perceptions of dissidents’ morality, decreases people’s willingness to engage in dissent on behalf of the dissident, and increases support for repression of the dissident. We then assess the external validity of the argument by analyzing millions of Chinese social media posts made before and after a large crackdown on vocal government critics in China in 2013. We find that individuals with larger online followings are more likely to be charged with non-political crimes, and those charged with non-political crimes are less likely to receive public sympathy and support.

  • How Much Should We Trust Instrumental Variable Estimates in Political Science? Practical Advice Based on 67 Replicated Studies with Apoorva Lal, Mac Lockhart, and Ziwen Zu. Political Analysis, Vol. 32, Iss. 4, October 2024, pp. 521–540.

    Instrumental variable (IV) strategies are widely used in political science to establish causal relationships, but the identifying assumptions required by an IV design are demanding, and assessing their validity remains challenging. In this paper, we replicate 67 articles published in three top political science journals between 2010 and 2022 and identify several concerning patterns. First, researchers often overestimate the strength of their instruments due to non-i.i.d. error structures such as clustering. Second, the commonly used t-test for two-stage least squares (2SLS) estimates frequently underestimates uncertainties, resulting in uncontrolled Type-I errors in many studies. Third, in most replicated studies, 2SLS estimates are significantly larger than ordinary least squares estimates, with their ratio negatively correlated with instrument strength in studies with non-experimentally generated instruments, suggesting potential violations of unconfoundedness or the exclusion restriction. We provide a checklist and software to help researchers avoid these pitfalls and improve their practice.
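    In the just-identified case (one instrument, one treatment), the 2SLS estimate discussed above reduces to the Wald estimator, the ratio of the reduced-form to the first-stage covariance. A minimal sketch with made-up data, making no claim about any replicated study:

```python
# Just-identified IV (Wald) estimator: beta_IV = cov(Z, Y) / cov(Z, D).
# A weak first stage means a small denominator, which is why estimates
# blow up when instrument strength is overestimated. Data are invented.

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (z - mb) for x, z in zip(a, b)) / len(a)

def iv_wald(z, d, y):
    return cov(z, y) / cov(z, d)

z = [0, 0, 1, 1]        # binary instrument
d = [0, 1, 1, 1]        # endogenous treatment
y = [1.0, 3.0, 4.0, 5.0]
print(iv_wald(z, d, y)) # 5.0 = (E[Y|Z=1]-E[Y|Z=0]) / (E[D|Z=1]-E[D|Z=0])
```

Inference for this ratio is exactly where the paper finds trouble: the usual t-test ignores the denominator's sampling noise under weak instruments, hence the recommendation of weak-instrument-robust procedures.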

See All Papers

Software.

ivDiag: Estimation and Diagnostics for IV Designs

ivDiag is a toolkit for estimation, diagnostics, and visualization with instrumental variable designs.

hbal: Hierarchically Regularized Entropy Balancing

hbal addresses the shortcomings of entropy balancing by hierarchically regularizing higher-order moment constraints of observed covariates.

fect: Fixed Effect Counterfactual Estimators

Counterfactual estimators for panel data with binary treatments address the weighting problem of fixed effects models and can potentially relax strict exogeneity.

tjbal: Trajectory Balancing

Using panel data with binary treatments, trajectory balancing draws causal inference by balancing on kernelized features from pretreatment periods.

interflex: Flexible Interaction Models

interflex conducts diagnostic tests and offers flexible estimation strategies for nonlinear interaction effects. It accommodates both continuous and discrete outcomes.

panelView: Visualizing Panel Data

panelView visualizes the treatment and missing-value status of observations in a panel dataset and plots variables of interest in a time-series fashion.

See All Software

Teaching.

  • POLI 158. AI Technologies for Social Applications

    Artificial intelligence is becoming increasingly central to how societies organize information, design policies, and deliver services. This course introduces undergraduates to the core concepts and applications of machine learning (ML) and artificial intelligence (AI), with a particular focus on their use in social and political contexts. Students will learn the underlying concepts needed to understand what these systems can and cannot do, but the primary goal is to help students develop practical habits of incorporating AI into their work, evaluating its strengths and limitations, and imagining creative applications in nonprofit and civic settings such as NGOs, media, philanthropy, political campaigns, and health organizations.

    By the end of the quarter, students will be able to explain the principles behind core AI technologies, assess their opportunities and risks in real-world applications, and design a project that demonstrates how AI might address a social or political challenge.

  • Short Course on Causal Inference with Panel Data

    This workshop series gives an overview of recently developed causal inference methods for panel data (with dichotomous treatments). We start our discussion with a review of the difference-in-differences (DiD) method and conventional two-way fixed effects (2WFE) models. We then discuss the drawbacks of 2WFE models from a design-based perspective and clarify the two main identification regimes: one under the strict exogeneity (SE) assumption (or its variants) and one under the sequential ignorability (SI) assumption. In Lecture 2, we review the synthetic control method and discuss its extensions. In Lecture 3, we introduce the factor-augmented approach, including panel factor models, matrix completion methods, and Bayesian latent factor models. In Lecture 4, we take a different route and discuss matching and reweighting methods that achieve causal inference goals with panel data under the SE or SI assumptions. We also discuss hybrid methods that enjoy doubly robust properties.

    Lecture 1. Difference-in-Differences and Fixed Effects Models
    Lecture 2. Synthetic Control and Extensions
    Lecture 3. Factor-Augmented Methods
    Lecture 4. Matching/Balancing and Hybrid Methods

  • POLI 450A. Political Methodology I

    This is the first course in a four-course sequence on quantitative political methodology at Stanford Political Science. Political methodology is a growing subfield of political science that deals with the development and application of statistical methods to problems in political science and public policy. The subsequent courses in the sequence are 450B, 450C, and 450D. By the end of the sequence, students will be capable of understanding and confidently applying a variety of statistical methods and research designs that are essential for political science and public policy research.

    This first course provides a graduate-level introduction to regression models, along with the basic principles of probability and statistics which are essential for understanding how regression works. Regression models are routinely used in political science, policy research, and other disciplines in social science. The principles learned in this course also provide a foundation for the general understanding of quantitative political methodology. If you ever want to collect quantitative data, analyze data, critically read an article that presents a data analysis, or think about the relationship between theory and the real world, then this course will be helpful for you.

    You can only learn statistics by doing statistics. In recognition of this fact, the homework for this course will be extensive. In addition to the lectures and weekly homework assignments, there will be required and optional readings to enhance your understanding of the materials. You will find it helpful to read these not only once, but multiple times (before, during, and after the corresponding homework).

  • POLI 150A. Data Science for Politics

    Overview. Data science is quickly changing the way we understand and engage in politics, how we implement policy, and how organizations across the world make decisions. In this course, we will learn the fundamental tools of data science and apply them to a wide range of political and policy-oriented questions. How do we predict presidential elections? How can we guess who wrote each of the Federalist Papers? Do countries become less democratic when leaders are assassinated? These are just a few of the questions we will work on in the course.

    Learning Goals. The course has three basic learning goals for students. At the end of this course, students should:

    1. Be comfortable using basic features of the R programming language.
    2. Be able to combine political data with statistical concepts to answer political questions.
    3. Know how to create visual depictions of statistical patterns in data.

    Learning Approach. Statistical and programming concepts do not lend themselves to the traditional lecture format, and in general, experimental research on teaching methods shows that combining active learning with lectures outperforms traditional lecturing. We will teach each concept in lectures using applied examples that encourage active learning. Lectures will be broken up into small modules; first, I will explain a concept, and then we will write code to implement the concept in practice. Students are asked to bring their laptops to class so that we can actively code during lectures. This will help students “learn by doing” and it will ensure that the transition from lecture to problem sets is smooth.

See All Teaching