Work in Progress

Inference without randomization or ignorability: A stability-controlled quasi-experiment on the prevention of tuberculosis (with Werner Maokola & Ami Wulf). [draft]

When determining the effectiveness of a new treatment, randomized trials are not always possible or ethical, or we may wish to estimate the effect a treatment has actually had, among a population that already received it, through an unknown selection process. The stability-controlled quasi-experiment (SCQE) (Hazlett, 2019) replaces randomization with an assumption on the outcome’s “baseline trend,” or more precisely, the change in average non-treatment potential outcome across successive cohorts. We describe and extend this method, and provide its first direct application: examining the real-world effectiveness of isoniazid preventive therapy (IPT) to reduce tuberculosis (TB) incidence among people living with HIV in Tanzania. Since IPT became available in the clinics we studied, 27% of new patients received it, selected through an unknown process. Within a year, 16% of those not on IPT developed TB, compared to fewer than 1% of those taking IPT. We find that (i) despite this compelling naive comparison, if the baseline trend is assumed to be flat, the effect of IPT on TB incidence would be -2 percentage points (pp) with a confidence interval of [-10, 5]; (ii) to argue that IPT was beneficial requires believing that the (non-treatment) incidence rate would have risen by at least 0.5pp per year in the absence of the treatment; and (iii) to argue IPT was not harmful requires arguing that the baseline trend did not fall by more than 1pp per year. We also find that those who were given treatment may have been less likely to develop TB anyway. This illustrates how the SCQE approach extracts valid causal information from observational data while protecting against over-confidence.

Wildfire Exposure Increases Pro-Climate Political Behaviors (with Matto Mildenberger). [draft]

 

Despite the climate threat's severity, global policy responses remain anemic. One political challenge has been the temporal mismatch between short-term climate policy costs and long-term climate policy benefits. Will this policymaking obstacle weaken as the impacts of climate change begin to materialize? Here we analyze the impact of a climate-related hazard on public support for costly climate reforms. Using a natural experiment based on randomness in the timing of California wildfires, we link, for the first time, threat exposure to realized political behavior rather than self-reported attitudes or behavioral intentions. We find that census block groups within 15 km of a wildfire have approximately 4 to 6 percentage points higher support for costly pro-climate ballot measures. The effects are stronger for block groups closest to wildfires, dropping by approximately 1.7 percentage points for every 10 km of distance. Moreover, the effect is concentrated among census block groups with large or medium concentrations of Democratic voters; by contrast, voters in Republican-dominated census block groups are largely unresponsive to wildfires. Our results suggest that experienced climate threats may only enhance willingness-to-act in areas where the public already holds pro-climate identities.

 

Credible or Confounded? What we (do not) know about who supports peace with the FARC (with Francesca Parente). [draft]

 

Social scientists pose important questions about the effects of potential causes, but often cannot eliminate all possible confounders in defense of causal claims. Sensitivity analyses can be useful in these circumstances, providing a route to rigorously investigate causal questions despite imperfect identification. Further, if more widely adopted, these tools have the potential to improve upon standard practice for communicating the robustness of causal claims, while suggesting new ways for readers and reviewers to judge research. We illustrate these uses of sensitivity analysis in an application that examines two potential causes of support for the 2016 Colombian referendum for peace with the FARC. Conventional regression analyses find "statistically and substantively significant" estimated effects for both causes. Yet, sensitivity analyses reveal that very weak confounders could overturn one cause (exposure to violence), while extremely powerful confounders would be needed to overturn the other (political affiliation with the deal's champion).

 

Displaced Loyalties: The effects of indiscriminate violence on attitudes among Syrian refugees in Turkey (with Kristin Fabbe & Tolga Sinmazdemir). [draft]

How does violence during conflict affect the political attitudes of civilians who leave the conflict zone? Using a survey of 1,384 Syrian refugees in Turkey, we exploit a quasi-experiment arising from the inaccuracy of barrel bombs to examine the effect of having one's home destroyed on political loyalties. We find that refugees who lose a home to regime-inflicted barrel bombs are less supportive of the opposition and more likely to say no armed group in the conflict represents them. Suggestive evidence supports two explanations for this: individuals may blame the opposition for provoking regime violence, and they may feel more generally "pro-peace", withdrawing support from any group that employs violence. These findings diverge from the expectations of existing theories, which assume that civilians are captive in the conflict zone and must choose sides for protection.

 

Trajectory Balancing: A general reweighting approach to causal inference with time-series cross-sectional data. (with Yiqing Xu). [draft]

We introduce trajectory balancing, a general reweighting approach to causal inference with time-series cross-sectional (TSCS) data. We focus on settings where one or more units are exposed to treatment at a given time, while a set of control units remains untreated. First, we show that many commonly used TSCS methods imply an assumption that each unit’s non-treatment potential outcomes in the post-treatment period are linear in that unit’s pre-treatment outcomes and its time-invariant covariates. Under this assumption, we introduce the mean balancing method, which reweights control units such that the averages of the pre-treatment outcomes and covariates are approximately equal between the treatment and (reweighted) control groups. Second, we relax the linearity assumption and propose kernel balancing, which seeks approximate balance on a kernel-based feature expansion of the pre-treatment outcomes and covariates. The resulting approach inherits the ability of synthetic control and latent factor models to tolerate time-varying confounders, but (1) improves feasibility and stability with reduced user discretion; (2) accommodates both short and long pre-treatment time periods with many or few treated units; and (3) balances on the high-order “trajectory” of pre-treatment outcomes rather than their period-wise average. We illustrate this method with simulations and two empirical examples.
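A toy sketch may help fix ideas for the mean balancing step. The snippet below (simulated data; the function name and implementation are ours, not the paper's) finds minimum-norm weights on control units so that the weighted means of their pre-treatment outcomes and covariates match the treated-group averages:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows are control units, columns are pre-treatment
# outcomes plus time-invariant covariates.
X_control = rng.normal(size=(50, 4))
x_treated_mean = rng.normal(size=4)  # hypothetical treated-group averages

def mean_balancing_weights(Xc, target):
    """Minimum-norm weights on controls whose weighted means hit
    `target` and sum to one (non-negativity not enforced here)."""
    n = Xc.shape[0]
    # Stack the moment conditions: weighted means = target, weights sum to 1.
    A = np.vstack([Xc.T, np.ones(n)])
    b = np.append(target, 1.0)
    # Minimum-norm exact solution to A w = b via the pseudoinverse.
    return np.linalg.pinv(A) @ b

w = mean_balancing_weights(X_control, x_treated_mean)
print(np.allclose(X_control.T @ w, x_treated_mean))  # balance achieved
```

In practice one would also want non-negative weights (e.g., via quadratic programming); the pseudoinverse solution above is only the simplest exactly-balancing choice.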

Published (2014 onward)

Making Sense of Sensitivity: Extending omitted variable bias (with Carlos Cinelli). Forthcoming, Journal of the Royal Statistical Society, Series B. [manuscript]

In this paper we extend the familiar "omitted variable bias" framework, creating a suite of tools for sensitivity analysis of regression coefficients and their standard errors to unobserved confounders that: (i) do not require assumptions about the functional form of the treatment assignment mechanism or the distribution of the unobserved confounder(s); (ii) can be used to assess the sensitivity to multiple confounders, whether they influence the treatment or the outcome linearly or not; (iii) facilitate the use of expert knowledge to judge the plausibility of sensitivity parameters; and, (iv) can be easily and intuitively displayed, either in concise regression tables or more elaborate graphs. More precisely, we introduce two novel measures for communicating the sensitivity of regression results that can be used for routine reporting. The "robustness value" describes the association unobserved confounding would need to have with both the treatment and the outcome to change the research conclusions. The partial R-squared of the treatment with the outcome shows how strongly confounders explaining all of the outcome would have to be associated with the treatment to eliminate the estimated effect. Next, we provide intuitive graphical tools that allow researchers to make more elaborate arguments about the sensitivity of not only point estimates but also t-values (or p-values and confidence intervals). We also provide graphical tools for exploring extreme sensitivity scenarios in which all or much of the residual variance is assumed to be due to confounders. Finally, we note that a widespread informal "benchmarking" practice can be misleading, and introduce a novel alternative that allows researchers to formally bound the strength of unobserved confounders "as strong as" certain covariate(s) in terms of the explained variance of the treatment and/or the outcome. We illustrate these methods with a running example that estimates the effect of exposure to violence in western Sudan on attitudes toward peace.
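Both reporting measures can be computed from a regression table alone, using the estimate's t-statistic and residual degrees of freedom. A minimal sketch (function names and the illustrative numbers are ours, not the paper's):

```python
import math

def partial_r2(t_stat, dof):
    """Partial R-squared of the treatment with the outcome,
    recovered from its t-statistic and residual degrees of freedom."""
    return t_stat**2 / (t_stat**2 + dof)

def robustness_value(t_stat, dof):
    """Robustness value: the share of residual variance a confounder
    would need to explain, of BOTH treatment and outcome, to drive
    the estimated effect to zero."""
    f2 = t_stat**2 / dof  # partial Cohen's f^2 of treatment with outcome
    return 0.5 * (math.sqrt(f2**2 + 4 * f2) - f2)

# Hypothetical regression output: t = 4.0 with 500 residual dof.
print(round(partial_r2(4.0, 500), 4))        # -> 0.031
print(round(robustness_value(4.0, 500), 4))  # -> 0.1636
```

A confounder explaining about 16% of the residual variance of both treatment and outcome would, in this hypothetical, suffice to eliminate the estimate; weaker confounders could not.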

Angry or Weary? The effect of personal violence on attitudes towards peace in Darfur. Forthcoming, Journal of Conflict Resolution (2019). [manuscript]

Does exposure to violence motivate individuals to support further violence, or to seek peace? Such questions are central to our understanding of how conflicts evolve, terminate, and recur. Yet, convincing empirical evidence as to which response dominates, even in a specific case, has been elusive, owing to the inability to rule out confounding biases. This paper employs a natural experiment based on the indiscriminacy of violence within villages in Darfur to examine how refugees' experiences of violence affect their attitudes toward peace. The results are consistent with a pro-peace or "weary" response: individuals directly harmed by violence were more likely to report that peace is possible, and less likely to demand execution of their enemies. This provides micro-level evidence supporting earlier country-level work on "war-weariness," and extends the growing literature on the effects of violence on individuals by including attitudes toward peace as an important outcome. These findings suggest that victims harmed by violence during war can play a positive role in settlement and reconciliation processes.

Estimating causal effects of new treatments despite self-selection: The case of experimental medical treatments. Journal of Causal Inference (2019). [paper]

Providing terminally ill patients with access to experimental treatments, as allowed by recent “right to try” laws and “expanded access” programs, poses a variety of ethical questions. While practitioners and investigators may assume it is impossible to learn the effects of these treatments without randomized trials, this paper describes a simple tool to estimate the effects of these experimental treatments on those who take them, despite the problem of selection into treatment, and without assumptions about the selection process. The key assumption is that the average outcome, such as survival, would remain stable over time in the absence of the new treatment. Such an assumption is unprovable, but can often be credibly judged by reference to historical data and by experts familiar with the disease and its treatment. Further, where this assumption may be violated, the result can be adjusted to account for a hypothesized change in the non-treatment outcome, or to conduct a sensitivity analysis. The method is simple to understand and implement, requiring just four numbers to form a point estimate. Such an approach can be used not only to learn which experimental treatments are promising, but also to warn us when treatments are actually harmful – especially when they might otherwise appear to be beneficial, as illustrated by example here. While this note focuses on experimental medical treatments as a motivating case, more generally this approach can be employed where a new treatment becomes available or has a large increase in uptake, where selection bias is a concern, and where an assumption on the change in average non-treatment outcome over time can credibly be imposed.
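A sketch of the four-number point estimate under the stated stability assumption. The estimator's form follows the description above; the incidence values in the example are hypothetical, not results from any of the papers:

```python
def scqe_estimate(y_pre, y_post, p_treated_post, delta):
    """Stability-controlled point estimate of the effect on the treated.

    y_pre          : average outcome in the cohort before the treatment existed
    y_post         : average outcome in the cohort after it became available
    p_treated_post : proportion of the post cohort that took the treatment
    delta          : assumed baseline trend (change in average non-treatment
                     outcome between the two cohorts)
    """
    return (y_post - y_pre - delta) / p_treated_post

# Hypothetical illustration: incidence falls from 13% to 12% after the
# treatment becomes available, with 27% uptake and a flat baseline trend.
print(round(scqe_estimate(0.13, 0.12, 0.27, 0.0), 3))  # -> -0.037
```

Varying `delta` over a plausible range then traces out the sensitivity analysis described above: a larger assumed rise in the non-treatment outcome makes the treatment look more beneficial, and vice versa.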

 

Kernel Balancing: A flexible non-parametric weighting procedure for estimating causal effects. Forthcoming, Statistica Sinica. [advance copy] [supplement]

Matching and weighting methods are widely used to estimate causal effects when adjusting for a set of observables is required. Matching is appealing for its non-parametric nature, but with continuous variables, is not guaranteed to remove bias. Weighting techniques choose weights on units to ensure pre-specified functions of the covariates have equal (weighted) means for the treated and control group. This assures unbiased effect estimation only when the potential outcomes are linear in those pre-specified functions of the observables. Kernel balancing begins by assuming the expectation of the non-treatment potential outcome conditional on the covariates falls in a large, flexible space of functions associated with a kernel. It then constructs linear bases for this function space and achieves approximate balance on these bases. A worst-case bound on the bias due to this approximation is given and is the target of minimization. Relative to current practice, kernel balancing offers one reasoned solution to the long-standing question of which functions of the covariates investigators should attempt to achieve (and check) balance on. Further, these weights are also those that would make the estimated multivariate density of covariates approximately the same for the treated and control groups, when the same choice of kernel is used to estimate those densities. The approach is fully automated up to the choice of a kernel and smoothing parameter, for which default options and guidelines are provided. An R package, KBAL, implements this approach.

          For R users, KBAL can be installed from my github repository:

    > devtools::install_github("chadhazlett/kbal")
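To make the main steps concrete, here is a hypothetical sketch (simulated data; ours, not the KBAL implementation): form a Gaussian kernel matrix over all units, take its leading eigenvectors as approximate bases for the kernel's function space, and choose control weights that match the treated-group means on those bases:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))     # covariates for all units
treated = np.arange(100) < 20     # hypothetical treatment indicator

def gaussian_kernel(X, b):
    """Gaussian kernel matrix with bandwidth b."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / b)

K = gaussian_kernel(X, b=2 * X.shape[1])  # illustrative bandwidth choice
# Leading eigenvectors serve as linear bases for the feature expansion.
vals, vecs = np.linalg.eigh(K)
bases = vecs[:, -10:]                     # keep the top 10 (arbitrary here)

# Minimum-norm control weights achieving balance on the kernel bases.
target = bases[treated].mean(axis=0)
Bc = bases[~treated]
A = np.vstack([Bc.T, np.ones(Bc.shape[0])])
w = np.linalg.pinv(A) @ np.append(target, 1.0)

print(np.allclose(Bc.T @ w, target, atol=1e-6))  # approximate balance holds
```

The paper's method additionally chooses the number of bases to minimize a worst-case bias bound; the fixed cutoff above is purely illustrative.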

A Persuasive Peace: Syrian refugees' attitudes towards compromise and civil war termination (with Kristin Fabbe and Tolga Sinmazdemir). Journal of Peace Research (2019). [paper]

 

Civilians who have fled violent conflict and settled in neighboring countries are integral to processes of civil war termination. Contingent on their attitudes, they can either back peaceful settlements or support warring groups and continued fighting. Attitudes toward peaceful settlement are expected to be especially obdurate for civilians who have been exposed to violence. In a survey of 1,120 Syrian refugees in Turkey conducted in 2016, we use experiments to examine attitudes towards two critical phases of conflict termination -- a ceasefire and a peace agreement. We examine the rigidity/flexibility of refugees' attitudes to see if subtle changes in how wartime losses are framed or in who endorses a peace process can shift willingness to compromise with the incumbent Assad regime. Our results show, first, that refugees are far more likely to agree to a ceasefire proposed by a civilian as opposed to one proposed by armed actors from either the Syrian government or the opposition. Second, simply describing the refugee community's wartime experience as suffering rather than sacrifice substantially increases willingness to compromise with the regime to bring about peace. This effect remains strong among those who experienced greater violence. Together, these results show that even among a highly pro-opposition population that has experienced severe violence, willingness to settle and make peace is remarkably flexible and dependent upon these cues.

 

Covariate Balancing Propensity Score for a Continuous Treatment: Application to the efficacy of political advertisements (with Christian Fong and Kosuke Imai). Annals of Applied Statistics (2018). [paper] [R package]

Propensity score matching and weighting are popular methods when estimating causal effects in observational studies. Beyond the assumption of unconfoundedness, however, these methods also require the model for the propensity score to be correctly specified. The recently proposed covariate balancing propensity score (CBPS) methodology increases the robustness to model misspecification by directly optimizing sample covariate balance between the treatment and control groups. In this paper, we extend the CBPS to a continuous treatment. We propose the covariate balancing generalized propensity score (CBGPS) methodology, which minimizes the association between covariates and the treatment. We develop both parametric and nonparametric approaches and show their superior performance over the standard maximum likelihood estimation in a simulation study. The CBGPS methodology is applied to an observational study, whose goal is to estimate the causal effects of political advertisements on campaign contributions. We also provide open-source software that implements the proposed methods.

         For R users, CBPS can be installed from CRAN:

    > install.packages("CBPS")

Stress-testing the affect misattribution procedure: Heterogeneous control of affect misattribution procedure effects under incentives (with Adam Berinsky).  British Journal of Social Psychology, 2017. [paper]

The affect misattribution procedure (AMP) is widely used to measure sensitive attitudes towards classes of stimuli, by estimating the effect that affectively charged prime images have on subsequent judgements of neutral target images. We test its resistance to efforts to conceal one’s attitudes, by replicating the standard AMP design while offering small incentives to conceal attitudes towards the prime images. We find that although the average AMP effect remains positive, it decreases significantly in magnitude. Moreover, this reduction in the mean AMP effect under incentives masks large heterogeneity: one subset of individuals continues to experience the "full" AMP effect, while another reduces their effect to approximately zero. The AMP thus appears to be resistant to efforts to conceal one’s attitudes for some individuals but is highly controllable for others. We further find that those individuals with high self-reported effort to avoid the influence of the prime are more often able to eliminate their AMP effect. We conclude by discussing possible mechanisms.

Global progress and backsliding on gasoline taxes and subsidies. (with Michael Ross and Paasha Mahdavi). Nature Energy, 2017. [paper]

To reduce greenhouse gas emissions in the coming decades, many governments will have to reform their energy policies. These policies are difficult to measure with any precision. As a result, it is unclear whether progress has been made towards important energy policy reforms, such as reducing fossil fuel subsidies. We use new data to measure net taxes and subsidies for gasoline in almost all countries at the monthly level and find evidence of both progress and backsliding. From 2003 to 2015, gasoline taxes rose in 83 states but fell in 46 states. During the same period, the global mean gasoline tax fell by 13.3% due to faster consumption growth in countries with lower taxes. Our results suggest that global progress towards fossil fuel price reform has been mixed, and that many governments are failing to exploit one of the most cost-effective policy tools for limiting greenhouse gas emissions.

KRLS: A Stata package for kernel-based regularized least squares. (with Jens Hainmueller & Jeremy Ferwerda). Journal of Statistical Software, 2017. [paper]

       For R users, KRLS can be installed from CRAN:

    > install.packages("KRLS")

       For STATA users, it can be installed from the SSC repository:

    > ssc install krls, all replace

Kernel Regularized Least Squares: Reducing misspecification bias with a flexible and interpretable machine learning approach (with Jens Hainmueller). Political Analysis, 2014. [paper] [R package] [appendix]

We propose the use of Kernel Regularized Least Squares (KRLS) for social science modeling and inference problems. KRLS borrows from machine learning methods designed to solve regression and classification problems without relying on linearity or additivity assumptions. The method constructs a flexible hypothesis space that uses kernels as radial basis functions and finds the best-fitting surface in this space by minimizing a complexity-penalized least squares problem. We argue that the method is well-suited for social science inquiry because it avoids strong parametric assumptions, yet allows interpretation in ways analogous to generalized linear models while also permitting more complex interpretation to examine non-linearities, interactions, and heterogeneous effects. We also extend the method in several directions to make it more effective for social inquiry, by (1) deriving estimators for the pointwise marginal effects and their variances, (2) establishing unbiasedness, consistency, and asymptotic normality of the KRLS estimator under fairly general conditions, (3) proposing a simple automated rule for choosing the kernel bandwidth, and (4) providing companion software. We illustrate the use of the method through simulations and empirical examples.

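The KRLS fit has a simple closed form: with kernel matrix K and regularization parameter lambda, minimizing the complexity-penalized least squares problem gives coefficients c = (K + lambda*I)^(-1) y. A minimal sketch on simulated data (the bandwidth and lambda choices here are simplified stand-ins for the paper's automated rules):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(scale=0.1, size=200)  # nonlinear target

def krls_fit(X, y, lam=0.5):
    b = 2 * X.shape[1]  # bandwidth tied to dimension (stand-in for the paper's rule)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / b)                               # Gaussian kernel matrix
    c = np.linalg.solve(K + lam * np.eye(len(y)), y)  # c = (K + lam*I)^{-1} y
    return K @ c                                      # in-sample fitted values

yhat = krls_fit(X, y)
# The flexible fit captures the nonlinear signal better than an
# intercept-only predictor, without specifying the functional form.
print(np.mean((y - yhat) ** 2) < np.var(y))  # True
```

Pointwise marginal effects, the variance estimators, and the automated bandwidth rule described above are all additional machinery beyond this bare-bones fit.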
 

The Epidemiology of Lethal Violence in Darfur: Using micro-data to explore complex patterns of ongoing armed conflict. (with Alex de Waal, Christian Davenport, and Joshua Kennedy). Social Science & Medicine, 2014.

This article describes and analyzes patterns of lethal violence in Darfur, Sudan, during 2008–09, drawing upon a uniquely detailed dataset generated by the United Nations–African Union hybrid operation in Darfur (UNAMID), combined with data generated through aggregation of reports from open-source venues. These data enable detailed analysis of patterns of perpetrator/victim and belligerent groups over time, and show how violence changed over the four years following the height of armed conflict in 2003–05. During the reference period, violent incidents were sporadic and diverse and included: battles between the major combatants; battles among subgroups of combatant coalitions that were ostensibly allied; inter-tribal conflict; incidents of one-sided violence against civilians by different parties; and incidents of banditry. The conflict as a whole defies easy categorization. The exercise illustrates the limits of existing frameworks for categorizing armed violence and underlines the importance of rigorous microlevel data collection and improved models for understanding the dynamics of collective violence. By analogy with the use of epidemiological data on infectious diseases to design emergency health interventions, we argue for improved use of data on lethal violence in the design and implementation of peacekeeping, humanitarian and conflict resolution interventions.

 

 
