Rand Issues Commentary: Challenges and Prospects for Estimating Joint Effects of Gun Policies
SANTA MONICA, California, March 10 -- Rand issued the following commentary on March 9, 2026, by James Murphy, computational social scientist and policy researcher, and Andrew R. Morral, Greenwald Family Chair in Gun Policy and senior behavioral scientist:

* * *
Challenges and Prospects for Estimating Joint Effects of Gun Policies
Criminal justice researchers often estimate the joint effects of multiple gun policies, given the challenges of isolating the effects of individual policies. In estimating the effects of multiple policies, researchers typically take one of two approaches: They use a composite index of policies, or they build a statistical model that includes multiple gun policy indicators at once, sometimes with interactions between them.
Each of these approaches has significant limitations. Composite indices--which are typically based on either unweighted counts of gun laws or weights that are assigned to those laws by experts--assume that gun policies are unidimensional, interchangeable, and independent in their effects. These assumptions often oversimplify the complex and multidimensional nature of firearm policy environments. Statistical modeling, while more flexible, can suffer from reduced statistical power and challenges in interpreting interactions between policies. Recent advances in machine learning (ML) offer promising tools to address these limitations but require careful application to ensure causal validity.
In this essay, we critically examine the practice of estimating the joint effects of gun policies. We argue that researchers should avoid using broad, holistic indices that obscure the diversity of policy mechanisms. While the field does not have definitive solutions to the challenges outlined above, we suggest some avenues that may improve the transparency and rigor of studies of joint effects.
Introduction
Gun policy researchers have typically focused on estimating the effects of individual policies on outcomes, such as mass shootings, homicides, and suicides. While researchers have successfully done so for a few policies--such as stand-your-ground or safe firearm storage laws--the evidence for most individual policies remains weak or inconsistent across studies (Smart et al., 2024). Because of the difficulty in isolating these individual effects, a growing body of research literature has moved toward estimating the joint effects of multiple policies. The rationale for this shift is that the combined impact of a policy regime may be stronger, more synergistic, and more easily detectable than the effects of any single policy in isolation.
This essay provides a critical examination of the conceptual and methodological issues raised by current approaches to studying joint effects: specifically, the use of composite indices and the use of statistical modeling that includes multiple policy indicators, sometimes with interactions between them. The essay is intended for applied quantitative researchers who must navigate the challenges of identifying causal effects in observational data.
This essay proceeds as follows. First, we discuss the data context of gun policy research, focusing on the aspects that are especially relevant for studying joint effects. We then review the most common approach to studying joint effects: using composite indices that are typically unweighted counts of policies or expert-weighted scales developed by advocacy organizations. We also discuss the implicit assumptions underlying those measures and the challenges that those assumptions create for interpreting study results. Next, we review statistical modeling approaches that either aggregate the effects of multiple individual policies or examine the interactions of policies to study situations in which the efficacy of a policy may depend on the existence of other policies. We argue that statistical models can avoid some of the assumptions of policy scales, but this can come at the cost of reduced statistical power to identify causal effects. Finally, although the field lacks definitive solutions to all of the issues we raise, we conclude by suggesting some methodological "guardrails": constructive steps toward more-rigorous and more-transparent research on the joint effects of gun policies.
The Challenges Facing Gun Policy Research
All gun policy research must address a variety of challenges related to identifying causal effects in observational data. In other work as part of RAND's Gun Policy in America initiative, RAND researchers have reviewed the general statistical challenges of evaluating gun policies (Schell, Griffin, and Morral, 2018; Schell, Smart, and Morral, 2022). Among these challenges, data sparsity makes it difficult to conduct any causal evaluation of state-level gun policies, but it is especially problematic for studying joint effects. Studies are typically based on a relatively small set of jurisdictions (i.e., 50 states) to identify variation across jurisdictions and across time. This sparsity limits statistical power. The fact that many key outcomes of interest, such as firearm homicides and mass shootings, are relatively rare in a statistical sense further compounds this problem and increases measurement variance, making detecting policy effects difficult.
As a result, causal estimates often exhibit high uncertainty. This increases the risk that null-hypothesis statistical significance tests will fail to detect true policy effects, overstate the magnitude of statistically significant effects, or suggest that significant effects are in the opposite direction of the true effect. Studying the interactions of policies exacerbates these problems: It challenges researchers to find sufficient variation not just for one policy but for combinations of policies.
A second, related problem is that states rarely adopt policies in isolation. States often enact conceptually related policies across short spans of time. In this context, statistical models have fewer observations with which to distinguish individual from joint policy effects (i.e., a smaller area of "common support"; Matthay et al., 2021). As the lag between policy adoptions shortens, the difficulty of distinguishing individual from joint effects worsens. Simulations in other policy contexts suggest that having many policies adopted in short succession inflates the variance of individual policies' causal effect estimates, worsening statistical power problems (Griffin et al., 2023). These difficulties are greatly exacerbated for studies trying to establish interaction effects between policies, because one needs sufficient lags between different policy combinations.
The study of gun policy effects also poses measurement challenges. State laws are rarely identical to each other. To understand the effect of a type of law implemented in multiple states, researchers must decide how to "lump together" similar laws or draw distinctions between state laws by splitting them into distinct law classes. These tasks require specifying what makes two laws sufficiently alike to be treated as members of the same class. Such definitions have important implications because they determine how many states can be counted as having adopted a given law class, and the number of such adopters versus non-adopters affects uncertainty and statistical power (Ben-Michael et al., 2024; Schell, Smart, and Morral, 2022; Schuler et al., 2021). For co-occurring policies, lumping versus splitting in turn affects the area of common support: the number of jurisdictions that experience different combinations of policies.
These related challenges--statistical power, co-occurrence, and measurement (defining law classes)--pertain to the two primary strategies researchers use to estimate joint effects, which we review in this essay. Composite indices, which we turn to next, are an extreme example of a "lumping" strategy. They help address data sparsity and the collinearity of co-occurring policies, and they can conserve statistical power. This approach, however, comes at the cost of strong--and typically unstated--assumptions. Using individual indicators of policies, which we discuss in a later section, avoids the strong assumptions of indices but runs into the statistical challenges of low power and inflated variance when trying to isolate the effects of co-occurring policies (Griffin et al., 2023; Schuler et al., 2021).
The First Approach: Composite Indices
In gun policy research, the predominant approach that criminal justice researchers take to study the joint effects of gun policies is to create a single number to summarize the co-occurrence of multiple policies--which we refer to as an index--and then to treat that index value as a measure of the state's policy interventions for the purpose of estimating the effects of policies in a statistical model.
Indices can be useful summaries of complex, multidimensional policy arenas. Indices provide a holistic--though possibly crude or imprecise--view of the policy environment for comparing jurisdictions and tracking change over time (Organisation for Economic Co-operation and Development, European Union, and Joint Research Centre, 2008).
Gun policy research has relied on three types of indices: those based on unweighted counts of laws; those with weights constructed from expert opinion; and those weighted based on their statistical relationship to an outcome, such as firearm deaths. Table 1 summarizes findings from longitudinal studies that have used composite indices to estimate causal effects of gun policies on a variety of outcomes.
How Researchers Construct Three Types of Gun Policy Indices
Unweighted Counts
Perhaps the most intuitive idea for measuring joint effects of gun laws is to simply count the number of gun laws in effect in a state. Common sources for counts of gun laws include the State Firearm Law Database compiled by Michael Siegel and colleagues (hereafter, "the Siegel database"; Siegel, 2020; Siegel et al., 2017a) and the RAND State Firearm Law Database (Hoch et al., 2026). The most recent version of the Siegel dataset (Martinez et al., 2024) includes 72 state gun laws covering the period from 1976 to 2024./[1] RAND's dataset covers 20 law classes and dozens of subclasses and currently covers the years 1979 to 2024. Researchers also create unweighted counts from sources that originate as expert-weighted scales, such as the Giffords Law Center's Annual Gun Law Scorecard and Everytown for Gun Safety's Gun Law Rankings (Giffords Law Center to Prevent Gun Violence, undated-l; Everytown for Gun Safety Support Fund, 2025a). For an example, see Kivisto, Ray, and Phalen (2017).
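As a concrete sketch of this approach, the count for a given state-year can be tallied from a long-format law database. All states, laws, and dates below are invented, and the field names are illustrative rather than the actual schema of the Siegel or RAND databases:

```python
# Hypothetical long-format law records: one entry per (state, law), with
# effective and repeal years. None means the law is still in effect.
laws = [
    {"state": "CA", "law": "universal_bc",      "effective": 1991, "repealed": None},
    {"state": "CA", "law": "waiting_period",    "effective": 1996, "repealed": None},
    {"state": "CA", "law": "cap",               "effective": 1992, "repealed": None},
    {"state": "MO", "law": "cap",               "effective": 1981, "repealed": 2007},
    {"state": "MO", "law": "stand_your_ground", "effective": 2016, "repealed": None},
]

def law_count(records, state, year):
    """Unweighted count: number of laws in effect in `state` during `year`."""
    return sum(
        1
        for r in records
        if r["state"] == state
        and r["effective"] <= year
        and (r["repealed"] is None or r["repealed"] > year)
    )

print(law_count(laws, "CA", 2000))  # 3
print(law_count(laws, "MO", 2010))  # 0: the CAP law was repealed in 2007
```

The count treats every law as a unit, regardless of its content, which is precisely the assumption criticized in the next subsection.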
* * *
Table 1. Example Studies Using Composite Indices to Estimate Gun Policy Effects
* * *
Expert Weighting
The chief criticism of using counts of gun laws is the assumption that all laws are equally effective at reducing firearm violence. An alternative approach is to weight policies based on their anticipated effects on gun access or gun violence. The most prominent expert-weighted scale that receives ongoing updates is the Annual Gun Law Scorecard created by the Giffords Law Center (Giffords Law Center to Prevent Gun Violence, undated-l), the successor to scores previously published by the Brady Campaign to Prevent Gun Violence. Other expert-weighted indices include Everytown for Gun Safety's Gun Law Rankings (Everytown for Gun Safety Support Fund, 2025a), the Cato Institute's Freedom in the 50 States index (Cato Institute, undated), and the Traveler's Guide to the Firearm Laws of the Fifty States (Kappas, 2019).
The creators of expert-weighted indices share similarities in their general procedure. The creators track state gun legislation, assign each law a point value, and aggregate the results for each state, usually on an annual basis. Typically, index creators convert these scores into rankings, and, in some cases, states are assigned letter grades. Researchers have used these scores and derived grades in diverse ways. In addition to using the scale scores themselves, many researchers use the resultant letter grades by converting them to scales similar to grade point averages, assigning scores of 4.0 to A, 3.7 to A-, etc. (e.g., Smith and Spiegler, 2020; Stevens et al., 2023), or segmenting states into categorical groups based on rankings (e.g., Fleegler et al., 2013; Kwon and Baack, 2005). Given that these scorecards typically rate only recent state law environments, time spans for studies using expert scores tend to be short compared with those of studies using unweighted counts.
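The grade-to-GPA conversion described above can be sketched as a simple lookup. The mapping follows the convention noted in the text (4.0 for A, 3.7 for A-, and so on); the letter grades assigned to particular states here are invented for illustration, not taken from any scorecard:

```python
# GPA-style values for scorecard letter grades (conventional scale).
GRADE_POINTS = {
    "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

# Illustrative (invented) grades for three states.
state_grades = {"CA": "A", "NY": "A-", "TX": "F"}
state_scores = {s: GRADE_POINTS[g] for s, g in state_grades.items()}
print(state_scores)  # {'CA': 4.0, 'NY': 3.7, 'TX': 0.0}
```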
Data-Driven Weighting
A less common alternative for constructing policy weights is to base them on an empirical analysis of the laws' association with some external criterion. Chien et al. (2022) provides an example of this strategy. Using indicators of 134 laws from the Siegel database from 1999 to 2018, Chien et al. used a weighted quantile sum regression to model homicide rates based on all laws while adjusting for state demographics, violent crime, law enforcement officer rate, and household firearm ownership. The authors added the coefficients corresponding to each state's mix of policies to create an aggregate measure that they labeled as a "gun law strength index." The index is essentially the expected homicide rate for a given mix of gun policies based on Chien et al.'s model. In this sense, the research design is formally indistinguishable from studies using multiple policy indicators to estimate combined causal effects; however, the results are framed as an index rather than causal effects. Researchers using the index in subsequent studies might examine whether, in addition to predicting state homicide rates, it can predict other outcomes.
Data-driven weighting is more common in other policy areas, such as cross-national measures of well-being (Greco et al., 2019; Organisation for Economic Co-operation and Development, European Union, and Joint Research Centre, 2008). In these domains, some researchers use multivariate statistical techniques, such as principal components analysis, to measure the associations between different potential measures of a construct, such as well-being or the firearm policy environment. Principal components analysis can identify underlying dimensions in states' firearm policies that explain the most variation in the distribution of the observed indicators (e.g., gun policies) based on their correlation with each other. An index (or indices) can then be constructed by weighting policies by how closely they are related to the underlying dimension that explains the most variance. Although researchers studying gun violence have rarely used data-driven weighting, studies of policy adoption have applied it to the co-presence of gun policies. McLean and Sorens (2019), for instance, applied nonlinear factor analysis to 29 gun policies between 1986 and 2006 to examine how correlated gun policies were to each other and to more-general policy liberalism of states.
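A minimal sketch of principal-components weighting on a toy state-by-policy matrix may make the mechanics concrete. The states, policies, and values below are invented, and real analyses use many more laws and state-years:

```python
import numpy as np

# Illustrative binary state-by-policy matrix (1 = law in effect).
# Rows are states; columns are four hypothetical policies.
X = np.array([
    [1, 1, 1, 0],   # heavily regulating state
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],   # permissive state
    [0, 0, 1, 1],
], dtype=float)

# Center the columns, then extract the first principal component via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt[0]        # each policy's weight on the first component
index = Xc @ loadings   # data-driven index value for each state

print(np.round(loadings, 2))
print(np.round(index, 2))
```

Policies that co-vary strongly with the others receive larger (absolute) loadings, so the resulting index weights each law by how well it reflects the dominant underlying dimension, consistent with a reflective-indicator interpretation.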
The Limitations of Composite Indices
There are conceptual and empirical challenges to using indices. Researchers often do not directly address these limitations. Most fundamentally, researchers often do not articulate what they believe an index measures, and sometimes this is ambiguous. Moreover, most indices implicitly adopt assumptions that are probably not true, such as the assumption that all gun policies fall along a single continuum of restrictiveness (unidimensionality) or that policies are interchangeable, as discussed below. Finally, the procedures through which indices are constructed are not always clearly documented. The assumptions and procedures used by advocacy organizations to construct indices may evolve, creating ambiguity about whether change over time reflects changes in a state's policy environment or changes in the measurement approach.
What Are We Measuring?
A deceptively simple question when constructing a gun policy index is, "What are we measuring?" Studies that use indices frequently do not address this question explicitly, but the construction of indices implies a theory of change. Social scientists often construct scales out of indicators of a latent construct that they are trying to measure. The underlying construct of interest (e.g., personality traits) is believed to cause observable phenomena (e.g., self-reports of days feeling sad) that can be measured and collected to reveal information about the latent construct. These types of indicators are referred to as reflective indicators (Bollen and Diamantopoulos, 2017); that is, the measure's creator has designed it to reflect the construct of interest. In contrast, formative indicators (e.g., income, education, occupation) cause the trait that a researcher intends to measure (e.g., socioeconomic status). This contrast is depicted schematically in Figure 1.
* * *
Figure 1. Conceptual Representation of Formative and Reflective Indicators
* * *
One can interpret indices used in gun policy research in multiple ways. Unweighted counts of gun laws hold a particularly ambiguous status as formative or reflective indicators. One may treat an unweighted count as a formative measure if the construct of interest is the sheer volume of legislative regulation--each law is an equal building block forming the whole of the policy environment. Conversely, the same count can be interpreted as a reflective measure if each law is seen as an indicator of an underlying latent trait, such as a state's "willingness to regulate." From this view, a higher count might reflect a stronger regulatory predisposition on the part of the state or its legislators. Attempts to estimate the effect of gun laws on, say, violence outcomes that use such a scale cannot, therefore, distinguish whether any observed effects are attributable to the volume of indexed laws (a law effect) or to a state's predisposition to regulate (a state characteristic that affects the passage of laws and, possibly, violence outcomes).
More-sophisticated data-driven strategies, such as factor analysis or principal components analysis, are more clearly reflective in nature. McLean and Sorens (2019), for instance, treats gun policy laws as indicators of a state's willingness to regulate. The authors' nonparametric factor analysis derives an index from the associations among the policies. The key assumption is that an underlying willingness to regulate drives these associations. Policies that are more central to this underlying dimension and highly correlated with other laws receive more weight in the index, as they would be stronger reflections of the latent state willingness to regulate.
Expert-weighted indices, by contrast, are clearer in treating policies as formative indicators. They typically imply one of two theories of change. The first is that restrictiveness is the central dimension distinguishing gun laws' efficacy. That is, laws that limit access to or use of guns by the general public or specific groups will have effects starkly different from laws designed to reduce barriers to access and use of guns. A second common theory of change is that we can distinguish individual laws in terms of their effects on outcomes of interest, such as firearm homicides, firearm suicides, mass shootings, social costs of the laws, and the laws' effects on civil liberties.
The most-popular expert scales adopt the second theory, weighting gun laws by their perceived efficacy in reducing gun violence. Such organizations as the Giffords Law Center and Everytown for Gun Safety assign points based on their perceptions of a law's efficacy in reducing gun violence. From a different ideological orientation, the Cato Institute's Freedom in the 50 States index attempts to measure "overall average freedom" and "the direct cost of gun laws to gun owners and dealers" (Cato Institute, undated). Both approaches treat policies as formative indicators.
Efficacy-based and data-driven weights (calibrated on specific outcomes, such as homicide) risk circularity. Because their developers construct them to prioritize certain policies, an analysis using the index to predict outcomes primarily tests the index's validity. If a study detects no association between an index score and an outcome, this does not imply that the policies included in the index have no effect on the outcome; rather, it implies that they may not affect them in the way the index's authors thought they would.
Hidden Assumptions
Beyond ambiguity of what an index truly measures, the way in which a gun policy index's creator constructs it can rely on several significant assumptions that are rarely scrutinized. In studies that include an index as a regressor, the implied causal contrast is the effect on the outcome of moving a jurisdiction from one index value to another. To interpret this contrast causally, the index must satisfy a version of the stable unit treatment value assumption (SUTVA) at the level of the index: specifically, that jurisdictions assigned the same index value correspond to the same (or a sufficiently similar) counterfactual outcome.
Although SUTVA alone suffices to interpret untreated jurisdictions as counterfactuals in causal analysis, SUTVA itself cannot be tested. However, because policy indices are composite measures, SUTVA claims rest on additional measurement assumptions about how individual laws combine to form the index--assumptions that can be empirically explored but rarely are. Below, we discuss three such assumptions--unidimensionality, exchangeability, and conditional independence--that are sufficient to render the index-level treatment well defined:
* Unidimensionality is the assumption that the disparate laws reflect a single latent trait. If this assumption is false, the index conflates different mechanisms.
* Exchangeability is the assumption that different policies can substitute for each other. If this assumption is false, the composition of the bundle matters more than the score.
* Conditional independence is the assumption that laws do not interact. If this assumption is false, the "sum" of the laws does not equal their combined effect.
Assumption 1: Policies Are Unidimensional
As currently practiced, construction of a single composite index assumes that indicators reflect a common dimension of restrictiveness or efficacy in affecting an outcome of interest (Greco et al., 2019; Organisation for Economic Co-operation and Development, European Union, and Joint Research Centre, 2008). However, gun laws are diverse. Some concern which guns may be sold; others determine who may buy them. Some concern how guns may be carried or stored, and others determine when police may divest an owner of their weapon. This diversity may reveal different dimensions of gun laws that affect violence or other outcomes in different ways, in which case identifying and distinguishing these underlying dimensions might lead to a better understanding of how gun laws affect outcomes jointly or individually. Thus, an alternative data-driven strategy is to focus on empirical associations among the policies themselves to produce indices corresponding to the underlying dimensions revealed by their associations (McLean and Sorens, 2019). Such multidimensionality could reveal, for instance, that two states with equal numbers of laws might differ in the types of laws they have selected; one state might concentrate on policies restricting gun purchases (e.g., universal background checks or waiting periods) while the other focuses on policies regulating use or carrying (e.g., concealed-carry or stand-your-ground laws). In that case, using a single measure risks conflating two different ways that state policies might affect gun violence. If, however, states that place heavier restrictions on purchases also more stringently restrict carrying, then a single index might represent between-state differences in gun policy restrictions well./[2]
Assumption 2: Policies Are Exchangeable
Regardless of how an index weights policies, a researcher must decide how to aggregate them into a single score. The approach used by all of the indices reviewed earlier is to simply sum the weighted policy indicators (i.e., linear aggregation; Greco et al., 2019)./[3] Linear aggregation carries with it two additional assumptions. The first is that if a state removed a policy that has weight X, it could adopt any combination of policies with weights that sum to X without changing the expected effects or character of its gun policy regime (compensability). For example, the 2013 Brady Campaign Scorecard allocates 11 points to states that require background checks for all unlicensed gun sales, 6 points to those that require reporting of lost or stolen firearms, and 5 points to those with may-issue laws, which allow officials discretionary authority to deny permits to carry concealed firearms. Compensability implies that if a state simultaneously repealed its may-issue and reporting requirements laws but adopted a new law for universal background checks, there would be no change in the state's overall gun policy effectiveness (Greco et al., 2019; Organisation for Economic Co-operation and Development, European Union, and Joint Research Centre, 2008).
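The compensability assumption can be made concrete with the 2013 Brady point values cited above. In this sketch, swapping the may-issue and reporting laws (5 + 6 points) for universal background checks (11 points) leaves the index value unchanged, even though the two policy regimes are substantively quite different:

```python
# Point values from the 2013 Brady Campaign Scorecard cited in the text.
weights = {
    "universal_background_checks": 11,
    "lost_stolen_reporting": 6,
    "may_issue": 5,
}

def score(policies):
    """Linear aggregation: sum the weights of the policies in effect."""
    return sum(weights[p] for p in policies)

before = score({"lost_stolen_reporting", "may_issue"})  # 6 + 5 = 11
after = score({"universal_background_checks"})          # 11
print(before, after)  # 11 11
assert before == after  # compensability: the index cannot tell these regimes apart
```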
Assumption 3: Policy Effects Are Independent of Each Other
Summing policy weights (including using counts of laws) also implies that the restrictiveness or effectiveness of a given policy is not dependent on other policies being in place (conditional independence; Greco et al., 2019; Organisation for Economic Co-operation and Development, European Union, and Joint Research Centre, 2008). For example, research on right-to-carry laws suggests that they increase homicides and other violent crimes (Smart et al., 2024). It is plausible, however, that training requirements for firearm purchasers could offset a portion of violent crime resulting from right-to-carry laws by improving the ability of gun owners to store and operate their guns safely. If so, then the effects of right-to-carry laws depend on the existence of training requirements. Simply adding together the existence of these laws in a single index would violate the conditional independence assumption.
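A toy numerical illustration of this point follows. All effect sizes are invented for the sketch, not estimates from the literature: suppose right-to-carry (RTC) laws raise the outcome, training requirements alone do nothing, but training offsets part of the RTC effect when both are in place. A linearly aggregated index cannot represent the offset:

```python
def true_effect(rtc, training):
    """Hypothetical data-generating process with an interaction term."""
    return 3.0 * rtc + 0.0 * training - 2.0 * (rtc * training)

def additive_index_effect(rtc, training, w_rtc=3.0, w_training=0.0):
    """What any linearly aggregated index implies: effects simply add up."""
    return w_rtc * rtc + w_training * training

print(true_effect(1, 1))            # 1.0: training offsets part of the RTC effect
print(additive_index_effect(1, 1))  # 3.0: the additive index misses the offset
```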
The conditional independence assumption requires believing that the firearm policy regimes of the states that typically implement a given law are not relevant for the likely effects of the law. For instance, one would have to assume that a restrictive law, such as firearm registration requirements, in a state with a restrictive firearm policy regime (for instance, New Jersey) would have effects equivalent to the effects it would have in a state with permissive gun laws (such as Missouri). But because no such oddities as a registration law in a permissive state actually occur in contemporary U.S. state law, assuming conditional independence requires extrapolating empirical effect estimates from states where the presence of a law makes sense within the states' overall firearm policy regime to states where implementing such a law would be unusual in the extreme and where we have no recent historical evidence on which to base more-reasonable estimates.
Similarly, the assumption of conditional independence ignores evidence that state firearm laws are sometimes designed to compensate for or reinforce other laws. For instance, the U.S. Supreme Court declared in its 2022 Bruen decision that states cannot require an individual to prove "proper cause" to obtain a license to carry a concealed handgun in public (New York State Rifle and Pistol Association v. Bruen, 2022). Affected states quickly adopted laws expanding the breadth of locations in which concealed carry would be illegal to include, for instance, public transportation and parks, bars and restaurants, and even all private property unless the owner consents to having concealed weapons on the premises. Thus, state legislators used different gun policies to compensate for and counteract a permissive expansion of concealed-carry permits by limiting where holders of those permits could carry their guns or by ensuring minimum competency among permit-holders.
Conditional independence may be too strong an assumption for use in evaluating firearm policies. Nevertheless, it is widely assumed not only in studies using indices to measure policy effects but also in studies of the effects of individual gun laws.
Approaches that interpret index values as heterogeneous mixtures of policy bundles are not logically incoherent: Such approaches define an estimand equal to the average effect of the particular mix of policy bundles observed at a given index value. However, these estimands are inherently context dependent; vary with the composition of the mixture; and are therefore difficult to generalize across jurisdictions, across time, or to concrete policy choices.
In sum, policy-relevant causal interpretation of a gun-law index requires that each index value correspond to a coherent and stable treatment rather than to an unstable mixture of substantively different policy regimes. This can be ensured by either (1) assuming index-level SUTVA directly or (2) imposing additional structure on how individual policies combine. The assumptions of unidimensionality, exchangeability, and conditional independence are not separate identifying assumptions; rather, they are alternative ways of making the treatment encoded by the index interpretable and stable.
Comparability Across Time and Transparency
Assessing a policy's causal effects requires longitudinal data to compare outcomes before and after a policy's implementation. Consequently, indicators of policy should be consistently measured across time. Researchers should not take for granted that a composite index's meaning is consistent across time, particularly for expert-weighted indices created by advocacy organizations. Expert-weighted indices may change over time because of either actual changes in policy or experts' reevaluation of policies' importance. Index creators do not always release methodological reports that describe the process leading to the weights, describe how the resulting scores are converted to grades or ranks, or document changes since previous versions. This leads some researchers to rely on the letter grades or to convert the ranks or grades into a smaller number of categories. These solutions are unlikely to solve the comparability problem, since they do not guarantee that differences between states mean the same thing over time.
The Second Approach: Multiple Policy Indicators
Statistical modeling of joint effects with multiple indicators can take two approaches. The first, like common composite index strategies, assumes the additivity of the treatment effects. For example, Schell et al. (2020) used an additive Bayesian time series model to estimate the joint effects of right-to-carry, stand-your-ground, and child-access prevention laws on firearm homicides, firearm suicides, and total firearm deaths. In addition to reporting estimated effects of the individual policies, the authors also reported the joint effect of having the most restrictive policy regime versus the least restrictive.
A second approach to modeling joint effects, and one that relaxes the conditional independence assumption, is to use multiplicative interaction effects instead of purely additive effects.[4] That is, the researcher estimates how one policy amplifies or diminishes the effect of another. For example, Fridel, Zimmerman, and Arrigo (2024) found that between 1991 and 2020, the implementation of permit-to-purchase laws and the implementation of minimum age laws requiring handgun purchasers to be at least 21 years old were each associated with a reduction in firearm suicides by 18- to 20-year-olds, but having both laws in place was associated with even greater reductions. (The researchers found no statistically significant association with non-firearm suicides for either policy, independently or jointly.)
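The interaction setup described here, and formalized in note 4, can be sketched numerically. The coefficients below are invented for illustration only; they are not estimates from Fridel, Zimmerman, and Arrigo (2024) or any other study cited in this essay.

```python
# Hypothetical two-policy interaction model:
#   E(Y) = b0 + b1*P1 + b2*P2 + b3*(P1*P2)
# All coefficient values are illustrative, not empirical estimates.

def expected_outcome(p1: int, p2: int,
                     b0: float = 10.0, b1: float = -1.5,
                     b2: float = -1.0, b3: float = -0.8) -> float:
    """Expected outcome rate given binary policy indicators p1 and p2."""
    return b0 + b1 * p1 + b2 * p2 + b3 * (p1 * p2)

# The joint effect of adopting both policies is b1 + b2 + b3, which
# exceeds the simple sum b1 + b2 only because the interaction b3 != 0.
joint_effect = expected_outcome(1, 1) - expected_outcome(0, 0)  # -3.3
```

The additive model in the first approach is the special case b3 = 0, which is exactly the conditional independence assumption this second approach relaxes.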
Challenges in Statistical Modeling with Multiple Policy Indicators
The same considerations discussed for composite indices apply to modeling with multiple policy indicators, but the latter approach offers more flexibility in potentially addressing them. The clearest example of this flexibility involves relaxing assumptions of compensability and conditional independence. A researcher can potentially achieve this by including interactions between two or more policies. The price one pays for this greater flexibility is reduced statistical power: the probability that a model correctly rejects the null hypothesis that a policy has no effect when the policy in fact has one. Statistical power is a function of sample size, the prevalence of the policy and the outcome, the policy's true effect size, and the number of other covariates in the model. Without sufficient statistical power, a model may fail to detect a policy's true effect, may overestimate the magnitude of the effect, or may even infer that it is in the opposite direction of the true effect (Gelman and Carlin, 2014). Underpowered regression-based studies are a pervasive problem in gun policy research that assesses the effects of even a single policy (Schell, Griffin, and Morral, 2018; Schell, Smart, and Morral, 2022). Power considerations may limit the number and types of laws for which joint effects can be reasonably estimated to just those that multiple states have implemented, particularly when using the regression-based models that are standard in gun policy research today (Schell, Griffin, and Morral, 2018).
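The dependence of power on effect size and sample size can be illustrated with a minimal sketch. This uses the textbook normal approximation for a two-sample test, not any model from the gun policy literature, and the effect sizes and sample sizes are purely illustrative.

```python
# Minimal sketch: approximate power of a two-sided z-test for a
# difference in means, under the normal approximation. Illustrative only.
import math
from statistics import NormalDist

def power_two_sample(effect: float, sd: float, n_per_group: int,
                     alpha: float = 0.05) -> float:
    """Probability of rejecting H0 when the true difference is `effect`."""
    nd = NormalDist()
    se = sd * math.sqrt(2.0 / n_per_group)      # standard error of the difference
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)      # two-sided critical value
    z_eff = effect / se
    return (1.0 - nd.cdf(z_crit - z_eff)) + nd.cdf(-z_crit - z_eff)

# A modest effect is far harder to detect than a large one at the same n:
p_small = power_two_sample(effect=0.2, sd=1.0, n_per_group=50)  # ~0.17
p_large = power_two_sample(effect=0.5, sd=1.0, n_per_group=50)  # ~0.71
```

The same logic is why adding interaction terms, which further split the effective sample, so quickly erodes power in state-level policy panels.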
Using multiple policy indicators can also give a false sense of security regarding construct validity ("What are we measuring?"), transparency, and comparability over time because of the "lumping" versus "splitting" measurement choices discussed earlier. While splitting reduces statistical power for detecting modest effects, lumping can obscure consequential heterogeneity in design or implementation (Ben-Michael et al., 2024; Schell, Smart, and Morral, 2022). Some researchers (e.g., Sharkey and Kang, 2023a) cite the ambiguity in the timing and implementation of individual policies as a justification for using composite indices.
Moreover, sorting policies adopted in different states and at different times into a relatively small number of categories is inevitably an approximation that requires expert judgment. To ensure transparency and comparability over time, detailed documentation of the qualitative coding of laws is still essential so that readers may evaluate for themselves whether the specific policy classes or "lumps" are appropriate (Schell, Smart, and Morral, 2022).
The use of multiple policy indicators sidesteps the question of dimensionality that arises for composite indices: if the coding scheme has struck the right balance between lumping and splitting, each separate policy is simply treated as its own dimension. Moreover, as discussed in the next section, recent methods not yet applied in gun research perform dimension reduction in the process of estimating an individual treatment's effect on an outcome. Because adoption of specific gun policies is highly correlated with adoption of certain other policies, models that include multiple policy indicators are necessarily estimated on observed policy combinations that do not cover the universe of possible ones. Nevertheless, it would be easy--and possibly a mistake--to extrapolate the results of such models to never-before-seen policy combinations. In one study (Morral et al., 2024), the authors used their model of firearm law effects to estimate the joint effects of implementing six restrictive firearm policies. Although a few states in the researchers' data have had all six policies, the sparsity of data on such combinations and the ease with which similar estimates could be constructed for never-before-seen policy combinations suggest that caution is warranted in assessing the credibility of joint effects of laws estimated in this way.
Machine Learning: Useful but Not a Solution
From a statistical vantage point, the main drawback of using multiple policy indicators is that it can dilute statistical power. A second drawback concerns communication: clearly and concisely conveying the implications of many interactions between discrete policies is considerably more arduous than doing so in work that uses composite indices.
Recently, ML approaches to causal inference have garnered excitement in other policy areas (Brand, Zhou, and Xie, 2023; Lechner, 2023). A major motivation for using ML is situations in which there are many more plausible covariates and interactions that could reasonably be included in a model than there are data points. This is the dilemma over statistical power faced by gun policy researchers using conventional parametric regression-based techniques. The most-salient contrasts between ML and conventional parametric regression-based techniques are differences in emphasis (Grimmer, Roberts, and Stewart, 2021; Molina and Garip, 2019). Researchers use conventional parametric techniques to quantify the direction and magnitude of an association between predictor variables (e.g., policies) and some outcome. Traditionally, ML researchers have been concerned with the prediction of the outcome at the expense of interpreting the individual associations between the predictors and the outcome. One consequence of this has been ML's greater emphasis on out-of-sample prediction as the standard of quality: developing a model with one set of observations before testing how well it predicts the outcome in a separate set of observations (Hofman et al., 2021).
Most ML methods employ a regularization strategy that might improve gun policy research, particularly in the context of estimating multiple, joint, and interactive policy effects while accounting for important covariates. Regularization entails constraining some model parameters or penalizing large but uncertain effect estimates to improve the generalizability of the model to new data that were not used during its original estimation.
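Regularization can be seen in its simplest form in one-variable ridge regression, where the penalized estimate has a closed form that pulls the ordinary least squares estimate toward zero. The sketch below uses tiny synthetic data invented for illustration; it is not an analysis of any gun policy dataset.

```python
# One-variable ridge shrinkage: with penalty lam, the ridge slope is
# Sxy / (Sxx + lam), which shrinks the OLS slope Sxy / Sxx toward zero.
# Data are synthetic and purely illustrative (centered, no intercept).

def ols_slope(x, y):
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return sxy / sxx

def ridge_slope(x, y, lam):
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return sxy / (sxx + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]       # centered "policy exposure"
y = [-2.1, -0.9, 0.1, 1.2, 1.9]       # noisy outcome
b_ols = ols_slope(x, y)               # = 1.01
b_ridge = ridge_slope(x, y, lam=5.0)  # shrunk toward zero
```

Shrinking estimates this way trades a little bias for a reduction in variance, which is exactly the property that makes regularized models generalize better to data not used in estimation.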
A recent strand of ML research focuses expressly on multiple treatments and the interactions between them by identifying subgroups of units that respond to a set of treatments in similar ways (Egami and Imai, 2019; Goplerud, 2021; Goplerud, Imai, and Pashley, 2025). Most recently, Goplerud, Imai, and Pashley (2025) developed a "mixture of experts" model in which exogenous characteristics are included in a two-part model that groups units together based on both the similarity of their treatment response patterns and exogenous characteristics predicting those patterns.
In the gun policy context, this approach might involve identifying different treatment "recipes," or patterns of law adoption, and then modeling what types of states are expected to follow each adoption pattern.[5] Subgroup identification methods perform regularization that distinguishes between treatment conditions and other covariates. However, the form of regularization differs from the typical goal of pushing small, noisy effects toward zero; instead, these techniques push the differences between effects toward zero. The result is that different levels of a treatment are fused together if their main effects and interactions with other treatment variables are similar. For example, if permit-to-purchase requirements for handguns and long guns have similar effects on homicides and similar interactions with the existence of a stand-your-ground law, then the algorithm would encourage treating handgun and long-gun permit-to-purchase laws as identical. This can greatly reduce the complexity of the model and ease interpretation.
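The fusion idea can be caricatured with a toy rule: merge any effects whose estimates are close enough. This is emphatically not the Goplerud, Imai, and Pashley (2025) algorithm, which fuses effects through a penalized model rather than a threshold, and the effect values below are invented.

```python
# Toy stand-in for fusion regularization: greedily merge estimated
# effects that differ by less than tol, collapsing similar treatment
# levels into one shared effect. Effect values are fabricated.

def fuse_similar(effects: dict, tol: float) -> dict:
    """Group effects whose sorted values are within tol of a neighbor."""
    items = sorted(effects.items(), key=lambda kv: kv[1])
    fused, group = {}, [items[0]]
    for name, val in items[1:]:
        if val - group[-1][1] < tol:
            group.append((name, val))
        else:
            fused["+".join(n for n, _ in group)] = (
                sum(v for _, v in group) / len(group))
            group = [(name, val)]
    fused["+".join(n for n, _ in group)] = sum(v for _, v in group) / len(group)
    return fused

effects = {"ptp_handgun": -0.11, "ptp_longgun": -0.10, "syg": 0.07}
fused = fuse_similar(effects, tol=0.05)
# handgun and long-gun permit-to-purchase are fused into one effect;
# the dissimilar stand-your-ground effect stays separate
```

The real methods make this decision jointly with the interaction structure, but the payoff is the same: a smaller, more interpretable set of effective treatments.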
The potential utility of causal ML for gun policy research is untested, and there is reason for skepticism. First, scholars developed these methods for applications with larger sample sizes than are typical in gun policy research. Second, techniques discussed in this section were predominantly developed using either experimental or cross-sectional observational data, not the longitudinal data necessary for policy evaluation. Progress is being made on longitudinal extensions for ML methods, such as for the detection of interactions between a single treatment and exogenous covariates (McJames, O'Shea, and Parnell, 2024; Wang, Martinez, and Hahn, 2024); additional work is required to accommodate multiple treatments (Hill, Perrett, and Dorie, 2023). The methods that focus on identifying mixtures of treatment effect patterns already accommodate repeated observations for the same unit (e.g., Goplerud, Imai, and Pashley, 2025) but were developed in experimental applications.
ML techniques may also offer helpful complements to traditional parametric methods used when studying interaction effects. With their emphasis on prediction, ML techniques can establish an upper bound against which more-interpretable parametric models can be compared (Cranmer and Desmarais, 2017). In this spirit, Verhagen (2024) suggests that researchers compare their theoretically informed parametric model's predictive performance with the performance of more-flexible ML algorithms using the same set of variables. If the ML model outperforms the parametric model, comparing predictions can reveal useful changes in the functional form of the parametric model, such as adding new interactions between policies.
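The comparison workflow suggested by Verhagen (2024) can be sketched in miniature: score a theory-driven model against a richer alternative on held-out combinations and see where the predictive gap concentrates. The data-generating process and both candidate models below are invented for illustration.

```python
# Toy holdout comparison: an additive-only candidate versus one that
# includes the interaction, scored against a synthetic "truth" in which
# the interaction is real. All coefficients are fabricated.

def truth(p1, p2):
    # Synthetic data-generating process with a genuine interaction.
    return 8.0 - 1.0 * p1 - 0.6 * p2 - 0.5 * p1 * p2

def additive(p1, p2):
    # Theory-driven candidate that omits the interaction term.
    return 8.0 - 1.0 * p1 - 0.6 * p2

holdout = [(0, 0), (0, 1), (1, 0), (1, 1)]

def mse(model):
    return sum((model(a, b) - truth(a, b)) ** 2 for a, b in holdout) / len(holdout)

mse_additive = mse(additive)  # error comes entirely from the (1, 1) cell
mse_full = mse(truth)         # 0.0 by construction here
```

In practice the flexible benchmark would be a fitted ML model rather than the truth itself, but the diagnostic is the same: a gap localized to particular policy combinations points to a missing interaction in the parametric specification.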
Ways Forward
This essay has detailed the conceptual and methodological challenges inherent in estimating the joint effects of gun policies. We have shown that common approaches--whether using composite indices or conventional statistical models--rest on a series of powerful and often unstated assumptions that can undermine the validity of researchers' conclusions. When presenting their results, researchers should be clear about these assumptions and justify them. Beyond increased transparency, the following recommendations are intended to provide researchers with a constructive path forward. These recommendations outline a more rigorous and transparent approach to designing and interpreting quantitative studies of the multifaceted gun policy environment.
Do Not Use Single, Holistic Indices
We urge researchers to avoid using holistic indices that reduce the gun policy environment to a single number. Although the use of holistic indices preserves statistical power compared with the use of multiple policy indicators, for the purposes of policy evaluation, the assumptions required to produce a valid measure are heroic. Different firearm policies target different mechanisms to achieve effects. These causal pathways are masked by the unidimensionality assumption. Moreover, the presumptions that policies from different policy domains are interchangeable and that there are no interactions among policies require strong justification. Finally, the practical utility of these estimates is questionable because they provide no direct guidance on what policies to adopt.
Tailor Aggregate Measures
Our warnings about holistic indices do not imply that researchers should never aggregate policies into composite measures. Indeed, many law classes are already aggregations of multiple different policies. Such classes as background checks on private sales, for instance, often combine permit-to-purchase systems with laws requiring all such transactions to occur through a licensed dealer. Such aggregation is useful if it correctly reflects a theory about the mechanism through which the laws affect particular outcomes--for example, "any state laws designed to ensure that all firearm transactions are subject to a background check have substantially similar effects." Aggregating laws according to specific theories of action makes the assumptions of unidimensionality and exchangeability more defensible, as the included policies target similar causal mechanisms. Aggregation also increases the number of treated units, improving statistical power.
Use Regularization When Modeling Multiple Policy Effects or Interactions
Statistical modeling avoids many of the undesirable assumptions of composite indices and facilitates studying interactions between policies. But complex models come at the cost of statistical power. When directly modeling the effects of multiple policies and their interactions, researchers should use regularization to improve the stability and credibility of their estimates. This practice is particularly valuable for gun policy research, in which including the many necessary policy indicators and interaction terms can easily overwhelm the available data, leading to noisy or unreliable results. Bayesian parametric models are one approach to regularization that can be useful in gun policy research (Schell, Smart, and Morral, 2022). Regularization explicitly helps manage the bias-variance trade-off that researchers face when including multiple policies. By penalizing large but uncertain parameters, regularization helps stabilize estimates and reduce the high variance that can result from complex, underpowered models (Schell, Smart, and Morral, 2022). For example, a researcher might wish to examine whether the effect of a "red flag" law--which allows courts to temporarily remove firearms from individuals deemed a danger to themselves or others--depends on the presence of a preexisting licensing requirement. In a traditional regression, including these two policies along with their interaction term--while also controlling for numerous state-level covariates--drastically reduces the degrees of freedom. This often results in unstable coefficients that are highly sensitive to small changes in the data. By applying a regularizing prior, the researcher can "shrink" the interaction term toward zero unless the data provide strong evidence for its existence. This avoids the assumption that the laws' effects are independent while mitigating the risk that any detected complementarities are merely a by-product of an overfitted, low-powered model.
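The "shrink unless the data insist" behavior of a regularizing prior has a simple closed form in the conjugate normal case: the posterior mean is a precision-weighted compromise between the prior mean (zero) and the data. The estimates, standard errors, and prior scale below are hypothetical.

```python
# Closed-form shrinkage under a N(0, tau^2) prior and a normal
# likelihood summarized by (estimate, se). Numbers are hypothetical.

def shrunk_estimate(estimate: float, se: float, tau: float) -> float:
    """Posterior mean of a coefficient under a N(0, tau^2) prior."""
    w = tau ** 2 / (tau ** 2 + se ** 2)  # weight placed on the data
    return w * estimate

# A noisy interaction estimate is shrunk almost to zero...
noisy = shrunk_estimate(estimate=-0.40, se=0.50, tau=0.20)
# ...while a precisely estimated one is left nearly intact.
precise = shrunk_estimate(estimate=-0.40, se=0.05, tau=0.20)
```

The same point estimate is treated very differently depending on its uncertainty, which is precisely how regularization guards against reading overfitted interaction terms as real complementarities.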
Furthermore, Bayesian priors can avoid problematic trade-offs discussed in composite indices' weighting schemes. Rather than assuming that every law in a category has an identical impact--like unweighted counts--or relying on experts' evaluations, researchers can specify families of related policies that share a common hyperprior. The model "borrows strength" across related laws to stabilize estimates. This framework allows the collective evidence from a policy family to inform estimates for individual policies. Each policy can still have a unique effect, but those effects are pulled toward the group average unless the data provide strong evidence to the contrary.
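Borrowing strength across a policy family can be sketched with a simple partial-pooling rule: each policy's estimate is pulled toward the family average, with the pull governed by how noisy that estimate is relative to the assumed between-policy spread. All estimates, standard errors, and the spread parameter below are invented for illustration.

```python
# Toy partial pooling: shrink each estimate toward the family mean,
# weighting by tau^2 / (tau^2 + se^2). Values are fabricated; a full
# hierarchical model would also estimate tau from the data.

def partial_pool(estimates, ses, tau):
    mean = sum(estimates) / len(estimates)
    pooled = []
    for est, se in zip(estimates, ses):
        w = tau ** 2 / (tau ** 2 + se ** 2)
        pooled.append(w * est + (1 - w) * mean)
    return pooled

family = [-0.12, -0.02, -0.30]  # three related background-check laws
ses = [0.05, 0.05, 0.25]        # the third estimate is much noisier
pooled = partial_pool(family, ses, tau=0.10)
# the noisy outlier (-0.30) moves most of the way toward the family mean
```

Each policy keeps a distinct effect, but imprecise estimates lean heavily on the collective evidence, which is the "borrowing strength" described above.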
When considering newer ML methods, researchers should also be aware that many popular techniques developed purely for prediction--as opposed to explanation--can underestimate causal effects (Belloni, Chernozhukov, and Hansen, 2014; Hahn et al., 2018). Recent advances in regularization are designed to mitigate this problem by distinguishing between the policy variables whose effects researchers want to estimate and the background factors for which researchers want to control, especially when effects are small or heterogeneous (Hahn, Murray, and Carvalho, 2020).
Carefully Consider Whether the Data Can Support Identification of Interaction Effects
Statistical techniques--including regularization--can never excuse researchers from knowing their data (Hemenway, 2025). Just as estimating the independent effects of any two policies requires variation in the timing of their adoption, interactions require variation in the combination of policies adopted and a lag between when different combinations are adopted. If all states that adopt Policy A adopt Policy B at the same time, there is no variation with which to identify an interaction effect. When studying interaction effects, researchers should inspect their data to assess how much variation in timing exists for the interaction effect of interest. If the policy combination is rare or the policies are almost never adopted in isolation from each other, any estimates of their effects will be highly uncertain and unreliable. Further research is needed into how much variation in policy combinations must exist for reliable testing of interaction effects in policy evaluations. A researcher simply may not be able to study a particular combination of policies because of the combination's rarity. One must make this decision on a study-by-study basis.
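A first diagnostic of the kind recommended here is simply to tabulate how often each combination of two policies appears in the state-year panel before attempting to estimate their interaction. The panel below is fabricated for illustration.

```python
# Cross-tabulate policy combinations in a (fabricated) state-year panel.
# Empty (1, 0) or (0, 1) cells would mean the policies are never observed
# in isolation, leaving the interaction effect unidentified.
from collections import Counter

panel = [
    # (state, year, policy_a, policy_b)
    ("X", 2015, 0, 0), ("X", 2016, 1, 0), ("X", 2017, 1, 1),
    ("Y", 2015, 0, 0), ("Y", 2016, 0, 1), ("Y", 2017, 1, 1),
    ("Z", 2015, 0, 0), ("Z", 2016, 0, 0), ("Z", 2017, 0, 0),
]
combos = Counter((a, b) for _, _, a, b in panel)
```

Even when all four cells are occupied, cells with only a handful of state-years (here, the lone (1, 0) observation) signal that any interaction estimate will rest on very thin evidence.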
Summary and Conclusions
Estimating the combined effects of state gun policies is a critical but methodologically challenging task. This essay has shown that the most-common approaches--composite indices and statistical modeling with multiple policy indicators--are laden with significant, often unstated, assumptions that can compromise the validity and interpretability of research findings. We have argued that holistic indices, while convenient, are conceptually ambiguous and rest on significant assumptions about policies: specifically, that they lie along a single continuum of regulation (unidimensionality), can be substituted for each other in a simple fashion (exchangeability), and do not interact with each other (conditional independence). While direct statistical modeling can relax these assumptions, it can come at the cost of statistical power as models become more complex.
The path forward for gun policy research requires a shift toward greater methodological transparency and adoption of new statistical practices. We urge researchers to move away from single, holistic indices in favor of measures that are more tailored and conceptually grounded. These include narrowly focused indices on coherent policy sub-domains. Alternatively, researchers may directly model the effects of multiple policies and--when the model specification is theoretically motivated--their interactions. Incorporating regularization can help manage the trade-off between model complexity and statistical power. Additionally, the data context of gun research warrants humility: some effects simply cannot be estimated well. Ultimately, producing credible and useful evidence for policymakers depends on researchers being more transparent and deliberate in the strategies they use to approach questions about the joint effects of policies.
* * *
Acknowledgments
We thank our quality assurance reviewers, Denis Agniel and Elizabeth Stuart, as well as Rosanna Smart, Terry Schell, Heather Schwartz, and Laurie Martin for thoughtful comments on earlier drafts.
Funding for the Gun Policy in America initiative was originally provided through unrestricted gifts from RAND supporters and income from operations. Since June 2018, this initiative has been supported by a grant from Arnold Ventures. This work was conducted in the Infrastructure and Justice Program of RAND Education, Employment, and Infrastructure, a division of RAND that aims to improve educational opportunity, economic prosperity, and civic life for all. For more information, visit www.rand.org/eei or email EEI@rand.org.
Notes
1. The studies we reviewed in this essay that used the Siegel database relied on an earlier version with 134 laws categorized into 14 domains spanning the period from 1991 to 2020 (Siegel, 2020).
2. Typically, the creators of gun policy indices categorize laws into different types, which allows researchers to use sub-scales for particular policy domains. This strategy entails the same considerations as overall indices. Sub-scales may address multidimensionality where it exists; however, if gun laws can be adequately represented by a single, overall index, then using sub-scales unnecessarily dilutes statistical power.
3. The chief alternative to linear aggregation is geometric aggregation, which uses the geometric mean of the policies' weights to reduce the importance of outlier policies. We know of no gun policy studies that have used such an approach.
4. The typical way of expressing an interaction between two policies is E(Y) = B1 × POLICY1 + B2 × POLICY2 + B3 × POLICY1 × POLICY2, where E(Y) is the expected value of the outcome of interest (e.g., homicides) and POLICY1 and POLICY2 are policy indicators that equal 1 if the policy is in place and 0 otherwise. B1 is POLICY1's effect on Y if POLICY2 is not in place, and B2 is POLICY2's effect on Y if POLICY1 is not in place. B3 is the additional effect if both policies are in place. The total effect on Y if both policies are in place is B1 + B2 + B3.
5. Because these partitioning methods were developed for experimental applications, they have not yet been designed to control for the effects of possible confounds, which is essential for causal modeling with observational data.
* * *
James Murphy is a computational social scientist and policy researcher at RAND.
Andrew R. Morral is the Greenwald Family Chair in Gun Policy and a senior behavioral scientist at RAND.
* * *
Original text here: https://www.rand.org/about/people/m/murphy_james_p.html
[Category: ThinkTank]
Ifo Institute: East German Companies Particularly Dissatisfied With Economic Policy
MUNICH, Germany, March 10 -- ifo Institute issued the following news release on March 9, 2026: * * *
East German Companies Particularly Dissatisfied With Economic Policy
Companies in Eastern Germany are more dissatisfied with the economic policy of the German government than West German companies. That's according to a recent survey by the ifo Institute. While companies in Western Germany give the government an average grade of 4.14, the grade in Eastern Germany is worse at 4.27. "Far more companies in the East than in the West give the government a grade of 6. Construction companies and retailers are particularly critical," says Joachim Ragnitz, Managing Director of the Dresden branch of the ifo Institute.
In the ifo Business Survey, 13 percent of East German companies gave a grade of 6, while the figure was only 8 percent in Western Germany. East German construction companies and retailers were more critical than their West German counterparts (average grade of 4.48 (construction) and 4.46 (retail)), while there were hardly any differences between East and West in other sectors such as wholesale, industry and services. The assessment of government policy was critical in all the policy areas examined. Social and pension policy scored particularly poorly, with an overall grade of 4.62, which matches the ratings from Western Germany. There were clear differences in the areas of fiscal policy (East: 4.18, West: 4.00), energy and climate policy (East: 4.32, West: 4.16), and industrial policy (East: 4.30, West: 4.11). "The results show that reforms are urgently needed in almost all areas," says Marcel Thum, Director of the ifo Institute's Dresden branch.
* * *
Original text here: ifo.de/en/detail-suche?size=n_10_n&filters%5B0%5D%5Bfield%5D=bundle.keyword&filters%5B0%5D%5Bvalues%5D%5B0%5D=pressemitteilung&filters%5B1%5D%5Bfield%5D=bundle.keyword&filters%5B1%5D%5Bvalues%5D%5B0%5D=pressemitteilung&filters%5B2%5D%5Bfield%5D=bundle.keyword&filters%5B2%5D%5Bvalues%5D%5B0%5D=pressemitteilung&filters%5B3%5D%5Bfield%5D=bundle.keyword&filters%5B3%5D%5Bvalues%5D%5B0%5D=pressemitteilung&sort-field=_score&sort-direction=desc
[Category: ThinkTank]
Hudson Institute Issues Commentary: China May Economically Surpass the United States in the Next Decade
WASHINGTON, March 10 -- Hudson Institute, a research organization that says it promotes leadership for a secure, free and prosperous future, issued the following commentary on March 9, 2026, by Asia-Pacific Security Chair Patrick M. Cronin to the International Economy Symposium: * * *
China May Economically Surpass the United States in the Next Decade
China is a peer economic power and may economically surpass the United States in the next decade. Yet the level of international disorder, combined with recent demonstrations of leaders' agency, suggests the outcome for the foreseeable future remains too close to call.
Claims of China's economic primacy can be parsed in many ways. Depending on the indicator, such as manufacturing scale or trade volume, China has more than closed the gap. Yet this trajectory is not immutable. The United States, particularly if it acts in concert with allies and partners, retains the ability to reverse unfavorable trends through policies to revitalize American industry while harnessing a collective innovation and advanced manufacturing ecosystem. Even so, we cannot say with confidence how this economic rivalry ends, because we don't yet know the level of decoupling, how vital technological changes will unfold, or the impact of these changes on society and humanity.
An equally consequential issue as economic primacy is how power should be measured in the first place. Whether one country posts a larger GDP, attracts more foreign direct investment, spends more on research and development, or produces more steel, ships, or semiconductors matters less than whether those resources translate into usable power. Strategic outcomes matter more than raw outputs. What ultimately counts is power capable of delivering decisive effects, whether by reshaping the rules of the global trading system or generating the industrial capacity required to prevail in a protracted conflict. Perceptions of state power change slowly. Confirmation bias makes it difficult to absorb new evidence and fosters overconfidence, and these barriers to clear-eyed estimates affect both the American and Chinese perspectives.
Even discarding China's narrative of inexorable rise and America's terminal decline, the world remains transfixed by the rise of the East. China has come a long way since Mao's revolutionary era of mass killing and ideologically driven economic experimentation. Deng Xiaoping, Jiang Zemin, and Hu Jintao nurtured development while stopping well short of full liberalization. Xi Jinping, by contrast, has re-centralized economic authority.
The forthcoming Fifteenth Five Year Plan is likely to codify what has been evident since 2012: politics and security now matter far more than growth. New productive forces and mastery of key technologies are intended to offset slowing growth and to prevent economic malaise from delegitimizing the party and provoking social unrest. Recent analyses suggest China will undershoot its growth ambitions, but Beijing still appears capable of muddling through, even if a collapse of domestic cohesion should not be ruled out.
The tendencies toward confirmation bias and overconfidence are also evident in the United States. It is now accepted wisdom, as many in the Trump administration have long claimed, that earlier elites too blithely assumed that engagement, international institutions, and global trade rules would gradually liberalize China. Still, the current administration's overconfidence is visible even in carefully considered documents such as the recently released National Security Strategy. While the NSS rightly elevates economic competition, it also lapses into a bit of magical thinking by asserting that the United States will preserve peace and order through "overmatch," a term so elastic as to mean almost anything.
The absence of a clear advantage today does not preclude a decisive shift tomorrow. Strategic balances can change abruptly if either side competes inefficiently, neglects foundational strengths, or stumbles into the extremes of conflict or capitulation. China has benefited from watching the United States erode its soft power and long-term investment in science and technology. At the same time, Beijing has obscured its massive rural poverty, structural dependence on exports, and deep resistance to economic reform that might challenge party authority.
My reasons for optimism would require another, lengthier article. However, if both countries choose to race to the bottom, it should not surprise us if, by mid-century, the global economic hierarchy looks very different indeed.
Read in The International Economy Symposium (http://www.international-economy.com/TIE_F25_ChinaOvertakeSymp.pdf).
* * *
At A Glance:
Patrick M. Cronin is the Asia-Pacific security chair at Hudson Institute. His research analyzes salient strategic issues related to US national security goals in the Indo-Pacific region and globally.
* * *
Original text here: https://www.hudson.org/economics/china-may-economically-surpass-united-states-next-decade-patrick-cronin
[Category: ThinkTank]
Hudson Institute Issues Commentary to Religion Unplugged: USCIRF's Religious Freedom Report Sparks Dispute Over US Policy Critique
WASHINGTON, March 10 (TNSrep) -- Hudson Institute, a research organization that says it promotes leadership for a secure, free and prosperous future, issued the following commentary on March 9, 2026, by Paul Marshall, senior fellow at the Center for Religious Freedom, to Religion Unplugged: * * *
USCIRF's Religious Freedom Report Sparks Dispute over US Policy Critique
The U.S. Commission on International Religious Freedom's 2026 annual report (https://www.uscirf.gov/annual-reports), released on March 4, focused on international issues -- but was overshadowed by controversy over its critique that other branches of government have undercut protections for religious freedom.
It criticized, for example, cuts to USAID programs, since many of those programs were specifically aimed at protecting religious freedom.
The report stated that the State Department's suspension of foreign aid "left hundreds of victims of religious persecution receiving support in immediate need of lifesaving assistance in countries such as Afghanistan, Burma, Egypt, Nigeria, and Vietnam." It added that U.S. funding cuts "contributed to 11.6 million refugees, including many fleeing religious persecution, losing access to humanitarian assistance," and that as a result 130,000 refugees, including about 15,000 registered Iranian Christians, remain in limbo.
The USCIRF -- established as an independent, bipartisan legislative branch agency by the 1998 International Religious Freedom Act -- has a mandate to monitor violations of religious freedom globally and make recommendations to the president, secretary of state, and the U.S. Congress.
Republican members of the commission officially dissented from these statements. They did not say that the findings were wrong but that they went far beyond the scope of USCIRF's own mandate, which is specifically international in focus.
My own view is that the criticism that many of the Trump Administration's acts have undercut religious freedom is important and correct, but that it falls outside USCIRF's authority and scope.
It is disturbing that this issue has divided the Commission on partisan grounds. Hitherto, it has been successfully bipartisan, and in its international focus continues to be so.
The commission is required to recommend to the State Department that governments that have "engaged in or tolerated particularly severe violations of religious freedom," defined as "systematic, ongoing, and egregious," be designated as "Countries of Particular Concern."
Other, slightly lesser offenders should be put on a "Special Watch List." Finally, it recommends designating violent non-government bodies as "Entities of Particular Concern"; most of these are Islamist terrorist groups in Africa, along with the Iran-funded Houthis in Yemen.
One highlight is its special attention to the hideous violence in Nigeria. Since 2009, USCIRF has recommended that the U.S. Government designate Nigeria a CPC. The Trump administration did so in 2020, but in 2021, while the situation worsened, the Biden administration removed the designation without offering any explanation. President Trump re-designated Nigeria in October 2025, and this has helped focus attention and action on the tens of thousands killed there in religion-related violence.
The commission recommended that Burma, China, Cuba, Eritrea, Iran, Nicaragua, Nigeria, North Korea, Pakistan, Russia, Saudi Arabia, Tajikistan and Turkmenistan continue to be designated as CPCs and that another five countries -- Afghanistan, India, Libya, Syria and Vietnam -- be added to the list.
It also urged that Algeria and Azerbaijan continue to be on the SWL and that nine countries be added: Egypt, Indonesia, Iraq, Kazakhstan, Kyrgyzstan, Malaysia, Qatar, Turkey and Uzbekistan.
It also proposed adding the Rapid Support Forces in Sudan to the EPC list.
Note that these are recommendations made to the State Department, which is the body that actually makes the final decision on a country's status. Despite the Commission's proposals in 2025, the State Department did not add India, Nigeria and Afghanistan as CPCs. Similarly, apart from Algeria and Azerbaijan, the Department did not follow USCIRF's SWL recommendations.
Even if the U.S. Government does give a CPC designation, it is not required to apply any sanctions to the offending country. Pakistan, Saudi Arabia, Tajikistan and Turkmenistan are CPCs but have been granted "national interest waivers," a declaration that any proposed sanctions would not be in the U.S. national interest.
Now, the U.S. Government is highly unlikely to accept the new recommendation that Qatar be placed on the SWL. The country often works closely with the U.S. and is home to the Al Udeid Air Base, the largest U.S. military installation in the Middle East, and is central to the war on Iran. Similarly, America will not want to antagonize India, with whom it is seeking increased partnership.
The result is that the U.S. largely ends up sanctioning those, such as Iran, China, and North Korea, whom it is already sanctioning on other grounds.
Does this mean that these designations are merely virtue signaling without real results? Not at all. Many countries do not want to be embarrassed by appearing on the State Department or USCIRF lists and take affirmative steps to avoid it.
The report also creates and shapes media coverage. There are many, usually small, NGOs that faithfully report on religious freedom worldwide but do not receive much coverage, especially because they often do not fit dominant media interests or narratives. Reports by a bipartisan government agency draw attention to many otherwise ignored conditions around the world.
In addition, this year's USCIRF report is both deeper and more expansive than previous ones. Its overview highlights Syria, China, India, Central Asia, Cuba, Nicaragua, Venezuela, Saudi Arabia and Iran. Since the U.S. State Department has not issued its own legally required annual religious report for almost two years now, USCIRF's report is the best available global overview. It remains an indispensable resource.
Read in Religion Unplugged (https://religionunplugged.com/news/2026-uscirf-religious-freedom-report-sparks-dispute-us-policy-critique).
* * *
At A Glance:
Paul Marshall is a senior fellow at Hudson Institute's Center for Religious Freedom.
* * *
Original text here: https://www.hudson.org/religious-freedom/uscirfs-religious-freedom-report-sparks-dispute-over-us-policy-critique-paul-marshall
[Category: ThinkTank]
CSIS Issues Critical Questions Q&A: What Does the New Cyber Strategy Really Mean?
WASHINGTON, March 10 -- The Center for Strategic and International Studies issued the following Critical Questions Q&A on March 9, 2026, involving Emily Harding, director of the Intelligence, National Security and Technology Program and vice president of the CSIS Defense and Security Department: * * *
What Does the New Cyber Strategy Really Mean?
The Trump administration dropped its long-awaited cybersecurity strategy late on Friday, March 6. Its tone is largely commendable--it is a declaration of a more robust, more aggressive, and more proactive cyber policy. It is marked by its brevity, clocking in at four pages, plus a cover letter. However, it is more a statement of goals than a strategy. It is remarkable for what it lacks: a conversation about matching resources to these goals. Still, there are more details to come, including some executive orders reportedly in the pipeline and a robust implementation plan that remains embargoed.
In many ways, it is a statement whose time is overdue. The recently published CSIS Intelligence, National Security, and Technology Program report, A Playbook for Winning the Cyber War, called for just such a declaration to adversaries. The administration's document is a version of the Playbook's recommendation to put adversaries on notice that the United States will no longer view cyberattacks as one-offs, or the cost of doing business, but for what they are: a critical national security threat. It is past time to restore deterrence in this vital domain.
Q1: What are the strong points of the document?
A1: The document leads with a list of the Trump Administration's successes in the cyber domain, followed by a threat:
Whether destroying online scammers' networks and seizing $15 billion of their stolen money, supporting a globe-spanning operation to obliterate Iran's nuclear infrastructure, or leaving our adversaries blind and uncomprehending during a flawless military operation to bring international narco-terrorist Nicolas Maduro to justice, adversaries are on notice that America's cyber operators and tools are the best in the world and can be swiftly and effectively deployed to defend America's interests.
Where previous administrations have kept largely quiet about the United States' offensive capabilities, this strategy document is the opposite. The Trump administration has come out swinging. There is a clear statement of posture: "We will act swiftly, deliberately, and proactively to disable cyber threats to America."
Then, the document makes an absolutely critical point, for which the CSIS Playbook also advocated: "We will not confine our responses to the 'cyber' realm." U.S. strength is wasted in a proportional cyber-for-cyber campaign. U.S. adversaries are willing to engage in practices that the United States is not, such as cyber attacks on water systems and hospitals. Even in times when Washington has apparently shut down power grids, the effects have been targeted and temporary. These moral restraints are right and proper, but they also prevent Washington from attaining escalation dominance in a pure cyber-for-cyber fight. U.S. strength lies in its other tools.
Resilience is prominently featured. This is the right answer. There is no such thing as 100 percent security in the cyber domain, so the better approach is to assume there will be breaches and establish ways to work despite them. Fail and recover is a far better strategy than building ever-higher castle walls.
The strategy also leans into the necessity of pushing hard on emerging tech, including quantum and AI. Staying ahead of cyberattacks means staying ahead in critical technologies. Whoever wins the race toward AI-enabled cyber offense gains a significant, if temporary, advantage, but whoever wins the race to quantum computing wins a massive coup in decryption capability, suddenly making data stolen over the last decade readable and thus valuable.
Finally, there is a clear statement of cyber requiring a collective defense. With highly intertwined software supply chains and interdependent systems, cyber is truly a team sport. One chink in the armor can have far-reaching implications for the global economy and national security. This line is highly encouraging: "Every American should take practical steps to protect themselves and their families in cyberspace, but America's citizens do not stand alone." This sounds like a common sense approach, but it is actually a shift. Other governments, globally, have largely left defense to the victims. This more robust strategy recognizes that individuals and businesses cannot be left to manage sophisticated adversaries on their own.
Q2: What is the document lacking?
A2: The strategy gets a bit muddled in a couple of places. Most oddly, it does not name any adversary, even though China has been on a hacking spree for more than a decade. This may be a deliberate decision ahead of some high-stakes meetings between President Trump and President Xi. Further, there's a comment about fighting the curtailment of free speech, which would be better suited to a different policy line of effort. There are also some unnecessary swipes at previous administrations, which did take cybersecurity seriously, if more gently than the Trump administration.
The document could have gone far deeper on the rise of cyber conflict as well. The Russian invasion of Ukraine, the 12-day Iran-Israel conflict, and even use of cyber operations in Venezuela and Iran by the United States demonstrate how cyber capabilities are shaping conflict. A critical initiative for this administration should be cementing how cyber policy as a field should operate in the larger geostrategic environment.
The document spends some time discussing talent, but some of the language lacks clarity or specifics. For example: "We will eliminate roadblocks that prevent industry, academia, government, and the military from aligning incentives and building a highly skilled cyber workforce." What this might mean remains unclear.
Finally, the biggest gap is of course how the United States should implement this sweeping statement of intent. There is much "we will" language and very little on the actual steps to make it so. The New York Times reported that a series of executive orders will accompany the strategy, but only one is out. There is an implementation plan, much of which is classified, that will draw out these points, but unless some of it is released to the public, it is hard to inspire real action.
Q3: What are other key points?
A3: One quite controversial point will be the strategy's direction to unleash companies to "disrupt adversary networks." This sounds much like the administration has decided to support the idea of offensive cyber by private entities, or "hack back." Such activity is currently illegal, having been categorized in the same bucket as vigilante justice. Victims are generally discouraged from seeking retribution or recovery of assets on their own, lest the state lose control of its monopoly on force. This document suggests there may be a move toward something akin to an ancient practice--letters of marque, whereby the government allows companies to engage on its behalf.
There is a brief but important callout to the importance of securing federal networks: "We will accelerate the modernization, defensibility, and resilience of federal information systems by implementing cybersecurity best practices, post-quantum cryptography, zero-trust architecture, and cloud transition." This is a key line: "We will work to elevate the importance of cyber in government leadership and in the board room." Civilian, non-intelligence parts of the federal government have woefully underfunded cybersecurity for decades, in part because leadership of departments and agencies have not seen cyber as important or their job. Further, the document says, "Working across the government to modernize and create competitive procurement processes, we will remove barriers to entry so that the government can buy and use the best technology." This echoes themes from the National Defense Strategy and other Trump administration statements, which encourage adopting more off-the-shelf capabilities and doing so with minimal friction. This could be a real opportunity for U.S. cybersecurity stars to do more to help the federal government.
Q4: What Needs to Happen Next?
A4: The Playbook for Winning the Cyber War makes recommendations for how to progress from the current state of relative complacency to a robust defense and a far more active offense. Those recommendations include the following:
1. Create six new frameworks for a new era.
* Reframe offensive operations--think like an octopus. Offensive cyber tools, at their best, are flexible, inventive, and opportunistic, akin to how an octopus hunts in the wild. Cyber offense must combine long-term planned campaigns and instant opportunism--like an octopus's central brain and tentacles.
* Redefine proportionality and escalation to include the big picture. Policymakers' view of proportionality must expand beyond the most recent incident and consider the aggregate costs of a pattern of attacks, the long-term economic and security consequences of those attacks, and the message sent by inaction. A new policy, which could be called "cyber first-cyber optional," must begin with explicit principles that the United States is redefining proportionality in the cyber domain, bolstering defense, and putting adversaries on notice that in the future the United States will retaliate for the overall pattern of behavior, not any one attack in isolation, and will use all tools at its disposal.
* Lay the groundwork for deterrence. Defining international norms of behavior will establish a clear baseline to facilitate future action, making it a worthwhile exercise, even if many states are likely to ignore them. Further, demonstrated will is critical to deterrence. A strong U.S. and allied response to the first cyberattack after an explicit policy goes into place will help set a new tone.
* Get comfortable with being uncomfortable about the level of attribution. There may be moments when it is necessary, even prudent, to act before definitive attribution. Establishing consequences for malign actors is a worthwhile goal, and the benefits of sending a strong message of response could outweigh the relatively small risks of misattribution. Make a plan to act in the face of uncertainty.
* Reimagine the cyber warriors. Cyber war is largely fought on private networks with combatants who do not wear uniforms. The cyber domain needs its own service--a Cyber Force that can be built for purpose. It should tilt heavily toward reserve service, and its physical requirements should be utterly different from those of the Marines, for example. Further, the United States must view private sector partners as real partners. It should put in place protections for cyber operators who act in conjunction with the U.S. government, as so many from the private sector did in Ukraine.
* Focus on defense as a no-fail mission. A stronger cyber defense at home is a worthy goal in itself, but it is also the key to an unleashed U.S. offense. As long as policymakers worry that the home front is vulnerable to adversary attacks, they will hesitate to retaliate. To flip the script, the United States must make its adversaries believe that a cyberattack, particularly on U.S. critical infrastructure, will do minimal long-term damage to the United States and that retaliation, in whatever form, will be swift and painful. To create that stronger defense, the Cybersecurity and Infrastructure Security Agency needs leverage beyond its convening and cajoling efforts. Departments and agencies must be held accountable for their investments--or lack thereof--in cyber defense.
2. Establish a methodology for decisionmaking in a crisis. Researchers ran war games as a part of the playbook project, and those games demonstrated that policymakers are still unsure how to think about a response to a cyberattack. The Playbook suggests starting with answering seven questions. These questions will illuminate aggravating circumstances and suggest a set of responses that establish escalation dominance and create deterrence.
3. Run the playbook. Be bold. Match creative policy responses to the pain points of the particular attacker. Demonstrate that the United States will view a cyberattack that causes damage as just as serious as a kinetic attack. Plan for success on offense, confident in the strength of defense.
Further, if the administration is serious about pursuing these goals, it needs to fix two CISAs: first, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, which was gutted in the early days of Trump II. It needs people and better tech to conduct its mission. Second, the administration must throw its weight behind reauthorizing the Cybersecurity and Information Sharing Act of 2015, a critical piece of legislation that Congress extended briefly but is due to lapse again in September.
The new cyber strategy is a departure from old versions in several important ways. It meets the moment by declaring that the United States is taking a far stronger approach to ending cyberattacks, restoring deterrence, and proactively defending U.S. interests. For it to be successful, however, it needs to emulate at least one part of past administrations' strategies: measurable, achievable steps toward its bold goals.
* * *
Emily Harding is director of the Intelligence, National Security, and Technology (INT) Program and vice president of the Defense and Security Department (DSD) at the Center for Strategic and International Studies (CSIS).
* * *
Original text here: https://www.csis.org/analysis/what-does-new-cyber-strategy-really-mean
[Category: ThinkTank]
CSIS Issues Commentary: Pioneering Quantum-Supercomputing Integration - U.S. Leadership in the Next Computing Era
WASHINGTON, March 10 -- The Center for Strategic and International Studies issued the following commentary on March 9, 2026, by fellow Hideki Tomoshige and program manager and research associate Shruti Sharma, both of the Renewing American Innovation Program: * * *
Pioneering Quantum-Supercomputing Integration: U.S. Leadership in the Next Computing Era
Integrating quantum computers into U.S. world-class supercomputers is now a strategic imperative for U.S. technological leadership in the next era of computing. Hybrid systems will synergize classical and quantum computing and deliver breakthroughs faster in optimization, simulation, and scientific discovery.
While the United States leads in supercomputing and quantum computing, it lags behind Europe and Japan in developing hybrid quantum-supercomputing systems. Increased federal investment in testbeds, open-source software stacks, and workforce development can position the United States to shape and maintain competitiveness in quantum-supercomputing systems and capitalize on quantum breakthroughs when practical quantum advantages emerge.
Strategic Importance of Computational Power
The U.S. government has long leveraged computing power for national security missions, including ballistics and nuclear weapons simulation, signals intelligence, and scientific breakthroughs. The Department of Energy's (DOE) exascale supercomputers generate great research value across government, academia, and industry in fields including weather forecasting, materials design, and drug development.
Computational power remains a critical strategic resource in the twenty-first century. The United States stands at the pinnacle of classical computing power. U.S. exascale supercomputing capabilities available through facilities like Frontier at Oak Ridge National Laboratory, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore National Laboratory highlight the country's technological leadership.
However, classical computational power faces three challenges:
1. Slowing performance gains from miniaturization (the limits of Moore's law)
2. Power consumption per transistor no longer falling with transistor size (the end of Dennard scaling)
3. Unsustainable power requirements of current technology systems
These challenges are driving investment in disruptive computational approaches, including quantum-centric supercomputing.
What Are Quantum-Centric Supercomputers?
Quantum-centric supercomputers exploit the strengths of both classical and quantum systems in an integrated workflow. Quantum computers serve as specialized accelerators rather than stand-alone replacements for classical supercomputers. Classical computers manage data preparation and post-processing analysis, while quantum processors address classically intractable problems, including optimization and quantum simulations.
This hybrid approach aims to expand the range of addressable problems and enhance the precision and efficiency of calculations. When supercomputers are tightly integrated and colocated with quantum devices, classical systems can apply real-time feedback loops, error correction, and noise reduction for quantum computers. This greatly improves result reliability.
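This division of labor can be illustrated with a minimal Python sketch (not from the commentary). A classical outer loop tunes a parameter and post-processes results, while a function standing in for the co-located quantum processor evaluates the hard objective; the function names and the cosine objective are hypothetical placeholders, not a real quantum API.

```python
import math

def quantum_expectation(theta: float) -> float:
    """Stand-in for a call to a co-located quantum processor.

    A real hybrid workflow would submit a parameterized circuit and
    return a measured expectation value; cos(theta) is a classically
    simulated placeholder objective.
    """
    return math.cos(theta)

def hybrid_minimize(theta: float = 0.1, lr: float = 0.4, steps: int = 100) -> float:
    """Classical outer loop steering repeated quantum evaluations.

    Finite-difference gradient descent: the classical side prepares
    parameters and merges results; the 'quantum' side only evaluates
    the classically intractable objective.
    """
    eps = 1e-4
    for _ in range(steps):
        grad = (quantum_expectation(theta + eps)
                - quantum_expectation(theta - eps)) / (2 * eps)
        theta -= lr * grad  # classical update between quantum calls
    return theta

theta_star = hybrid_minimize()
print(round(quantum_expectation(theta_star), 3))  # converges near the minimum, -1.0
```

The tight loop between classical updates and quantum evaluations is what makes colocation and low-latency links valuable: each iteration round-trips between the two systems.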
This complementarity is leading supercomputer centers around the world to host early-stage quantum computers alongside supercomputers. The centers see no need to wait for utility-scale quantum computers before deploying them. This quantum hybrid system is a pragmatic near-term path to create value from early quantum computing efforts while building more sophisticated quantum systems over the longer term.
Strategic Considerations for Quantum-Centric Supercomputing
Early efforts to integrate quantum computing and supercomputers are building new human expertise and institutional knowledge. Scientists and engineers are exploring different approaches to integrating quantum and supercomputing, including designing quantum algorithms for hybrid workflows and optimizing resource allocation. The first nations to field operational systems are securing a first-mover advantage in quantum-centric supercomputing research and workforce expertise. Early deployment of quantum high-performance computing (HPC) systems also drives demand for quantum hardware vendors, offering governments a means to support vendors whose technologies align with national priorities. For these reasons, the United States should invest in quantum infrastructure and provide financial support to vendors whose technologies align with U.S. national priorities.
These and other strategic considerations call for a more detailed review of the current policy and technological landscape of quantum-centric supercomputing.
Global Quantum-Centric Supercomputer Deployment
The United States maintains leadership in the field of classical computing but faces gaps in quantum supercomputer deployment compared to international competitors such as Europe and Japan.
For example, eight European supercomputer centers in Czechia, France, Germany, Italy, Poland, Spain, the Netherlands, and Luxembourg are currently working to deploy on-site quantum computers, which will be tightly integrated with classical supercomputing infrastructure under the European High Performance Computing Joint Undertaking. To date, six of these eight countries have already signed procurement contracts for these quantum computers, and most have installed them already, while the Netherlands and Luxembourg are still evaluating vendor selections. These procurements ensure a diversity of quantum modalities to avoid lock-in and create both redundancy and comparative learning across different quantum hardware platforms.
Likewise, Japan has taken early steps to integrate quantum computing and supercomputing. For example, in 2023, the Fugaku supercomputer at RIKEN, a national research and development institute, was paired with a quantum computer developed by Fujitsu and RIKEN. Japan is now building a national quantum-supercomputer platform that connects RIKEN's new Quantinuum Ion-Trap quantum computer and IBM's superconducting quantum computer with the Fugaku supercomputer and supercomputers at the University of Tokyo and Osaka University. This has created a distributed quantum-supercomputing infrastructure system across Japan that Japan's premier research institutions are able to access.
Beyond RIKEN, the ABCI-Q supercomputer currently being developed at Japan's National Institute of Advanced Industrial Science and Technology will be paired with three different quantum computers in 2025: a neutral-atom computer from QuEra, a photonics computer from OptQC, and a superconducting computer from Fujitsu.
Meanwhile, U.S. efforts to develop hybrid quantum-supercomputing systems remain nascent. The DOE's Oak Ridge National Laboratory (ORNL) is emerging as the flagship institution for quantum-supercomputer integration, with a $125 million budget through 2030. In 2025, ORNL announced the installation of two quantum computers: IQM's 20-qubit superconducting quantum computer (scheduled) and Quantum Brilliance's diamond quantum processing units (installed). Nine DOE laboratories are moving forward with efforts to integrate quantum computing and supercomputing, as reflected in Nvidia's recent announcement about the adoption of NVQLink, a high-speed physical architecture that links quantum processors with classical computers.
In 2024, the state of Massachusetts announced a two-year, $16 million matching-fund project with $11 million from QuEra to install and deploy a quantum computer at the Massachusetts Green HPC Center. This initiative is important in that it represents the first significant state-level quantum-centric supercomputer investment.
Nevertheless, U.S. efforts to integrate quantum computing and supercomputing remain limited in scope compared to the coordinated national efforts underway in Europe and Japan, and legislative commitments in the United States have not kept pace with international developments and technological change. For example, while the U.S. National Quantum Initiative Act has been essential in building the United States' quantum research foundation, it predates the quantum-supercomputing integration imperative and does not include explicit budget authority for large-scale hybrid deployments.
The Role of Software in Quantum-Supercomputing
Software is another important area of focus for driving U.S. technological leadership in quantum-supercomputing. Without a comprehensive software stack, quantum computers and classical computers cannot operate together efficiently, even if they are physically colocated and connected through high-speed networks. One proposed quantum-centric supercomputer software architecture expands traditional HPC software stacks to address the unique properties of quantum systems. A hybrid coordination layer would provide the architecture required to allow different quantum platforms and classical systems--two fundamentally different computing paradigms--to function together in a unified workflow.
The layered architecture of a quantum-centric supercomputer software stack runs from upper layers, closer to the user, down to lower layers, closer to the hardware.
Lessons from HPC Software Development
The HPC community's decades of experience in building scalable, flexible, and efficient operating system interfaces reveal several principles for developing comparable interfaces for quantum-supercomputer integration. HPC software typically includes the following features:
* Layered Architecture: HPC software uses layered designs. Developers can modify or improve individual layers without a full system redesign. The same approach should be applied to quantum supercomputer interfaces for updates and system stability.
* Standardization: The HPC community benefits from established standards, such as an operating system interface standard and a programming standard, which allow applications to run across different systems with minimal configuration. The quantum computing community is starting to follow this path, though its standards remain nascent and fragmented. Standards that the quantum computing community has adopted include OpenQASM (Quantum Assembly Language), which provides a common language for expressing quantum circuits, and Quantum Intermediate Representation, which enables quantum algorithms to be translated between different hardware platforms. Moving too quickly toward rigid standards could lock immature quantum computers into suboptimal designs and reduce the flexibility needed for innovation. Delaying standardization fragments the ecosystem into incompatible approaches. Neither outcome is desirable. The quantum community should collaborate and coordinate tightly and proactively to achieve the right balance.
* Orchestration: Supercomputers orchestrate thousands of processors, GPU accelerators, and storage systems across distributed clusters, ensuring correct resource allocation, timely job queuing, and efficient load balancing. The orchestration layer for hybrid systems must manage which tasks run on classical chips, which go to quantum processors, and how their results should be merged meaningfully. For instance, diagnosing and correcting quantum errors requires high-bandwidth, low-latency communication between quantum processing units and classical CPUs or GPUs.
* Hardware-Software Codesign: Top-performing supercomputers have resulted from close collaboration and harmonization between chip designers, software architects, and application developers--not just better chips or better code. Likewise, the development of quantum-centric supercomputers will require a codesign approach and even tighter feedback loops between hardware and software developers. A codesign approach ensures that hardware constraints are considered from the start and that software optimizes and fully utilizes next-generation quantum devices.
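The hybrid orchestration pattern described above can be sketched in a few lines of Python. Everything in this snippet is illustrative rather than drawn from any real quantum SDK: `BELL_QASM`, `MockQuantumBackend`, and `orchestrate` are hypothetical names, and the "quantum" results are simulated classically. The sketch shows the basic workflow the commentary describes: a circuit expressed in a common language (here, an OpenQASM 3 string), dispatched to a quantum resource, with the raw measurement counts merged and post-processed on the classical side.

```python
import random

# Hypothetical OpenQASM 3 circuit preparing a Bell pair. In a real stack this
# string would be compiled and dispatched to a quantum processing unit (QPU).
BELL_QASM = """
OPENQASM 3.0;
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;
"""

class MockQuantumBackend:
    """Stand-in for a QPU: returns perfectly correlated Bell-state counts."""

    def run(self, qasm: str, shots: int) -> dict:
        counts = {"00": 0, "11": 0}
        for _ in range(shots):
            # An ideal Bell measurement yields only "00" or "11".
            counts[random.choice(["00", "11"])] += 1
        return counts

def orchestrate(backend: MockQuantumBackend, qasm: str, shots: int = 1000) -> float:
    """Hybrid workflow: dispatch the circuit to the quantum resource, then
    post-process the raw counts classically (here, estimating how often the
    two qubits agree)."""
    counts = backend.run(qasm, shots)
    correlated = counts.get("00", 0) + counts.get("11", 0)
    return correlated / shots

if __name__ == "__main__":
    fraction = orchestrate(MockQuantumBackend(), BELL_QASM)
    print(f"fraction of correlated outcomes: {fraction:.2f}")
```

Because the mock backend only ever returns correlated outcomes, the post-processed fraction is 1.0 by construction; on a real, noisy device this classical post-processing step is exactly where the high-bandwidth error diagnosis described above would enter.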
The Promise of Open-Source Software for Quantum-HPC
While quantum-centric supercomputer software stacks are in their early stages, open-source software stacks developed by universities, national labs, and companies are helping to accelerate progress by allowing researchers to experiment, share and critique ideas, and test algorithms on different systems instead of working in proprietary silos. For example, the Munich Quantum Toolkit, developed by the Technical University of Munich and the Munich Quantum Software Company, has been downloaded free of charge over 2 million times by users in both academia and industry. Other notable examples of open-source software include:
* The Open Quantum Toolchain for Operators and Users, developed by Open-Source Quantum Computer Operations from Fujitsu; the University of Osaka; Systems Engineering Consultants Co., Ltd.; and TIS Inc.
* The XACC framework led by ORNL
* The HybridQ platform, developed by the Flatiron Institute
Intermediary organizations like the Unitary Foundation and IEEE Quantum Computing Working Groups connect stakeholders, facilitate standardization discussions, and coordinate development efforts. These connective tissues allow rapid collective learning and faster progress toward mature quantum-classical software stacks.
U.S. Advantages and Strategic Imperatives
The United States has distinctive strengths, including in quantum computing hardware research and development, supercomputing infrastructure, and a robust open-source software culture. These national strengths are without equivalent internationally. To leverage them and attract diverse stakeholders, however, the United States will need to take focused policy action. Such a commitment to quantum supercomputing will attract global talent and drive innovation.
It remains uncertain when or in what areas quantum computing will have commercial and scientific utility. Quantum computers have not yet demonstrated clear advantages over classical computing systems for real-world applications outside narrow laboratory domains. It could take a decade or more for quantum commercialization to occur.
This technological uncertainty strengthens rather than diminishes the case for investment in quantum-centric supercomputers. The United States should develop integrated systems now and be ready to capitalize immediately when quantum breakthroughs occur, including by promoting the development of open software stacks, encouraging networking among players in the standards community, expanding workforce development that bridges the two computing systems, and constructing new quantum-supercomputing research and development infrastructure that is available to researchers from universities, national laboratories, and industry in the United States.
With appropriate investments, the United States can maintain operational experience and human expertise in quantum-centric supercomputing, regardless of the timing and scope of quantum advantage. This is a hedging strategy: Invest now in the infrastructure and knowledge required to seize future quantum breakthroughs, while strengthening classical supercomputing capability independently.
The United States built its post-World War II research enterprise on federal investment in computing infrastructure, enabling decades of scientific discovery and innovation and creating the foundation for U.S. economic competitiveness and national security advantage. Quantum-centric supercomputing is the next generation of computing infrastructure. The United States should take strategic action to establish quantum-centric supercomputer ecosystems that will benefit from the most pragmatic approach to quantum computing for the next decade and shape the trajectory of quantum innovation for decades.
* * *
Hideki Tomoshige is a fellow with the Renewing American Innovation (RAI) Project at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Shruti Sharma is a program manager and research associate with Renewing American Innovation at CSIS.
The authors would like to thank Sebastian Hassinger, former principal specialist with Amazon Web Services, for his contribution to this article.
* * *
Original text here: https://www.csis.org/analysis/pioneering-quantum-supercomputing-integration-us-leadership-next-computing-era
[Category: ThinkTank]
CSIS Issues Commentary: Containment Doctrine - How the U.S. Is Using Trade to Stop Digital Regulation
WASHINGTON, March 10 -- The Center for Strategic and International Studies issued the following commentary on March 9, 2026, by Director Philip A. Luck and research intern Duc Minh Nguyet "Moon" Nguyen, both of the Economics Program and Scholl Chairs in International Business:* * *
The New Containment Doctrine: How the United States Is Using Trade to Stop Digital Regulation
In January 2026, the Department of State imposed visa restrictions on five European officials involved in drafting the EU Digital Markets Act (DMA) and Digital Services Act (DSA), an escalation in the Trump administration's campaign against digital regulation. These visa actions may be the most direct measures targeting the European Union stemming from frictions over the DMA, but they are by no means the only front in this growing conflict. At the same time it has acted directly against the European Union, the administration has also sought to embed anti-digital regulation clauses into bilateral trade agreements with other partners, including Malaysia, Indonesia, Cambodia, Argentina, Guatemala, and El Salvador, with similar language appearing in framework deals with Ecuador, Thailand, and others.
The strategy marks a shift. Unable to roll back EU regulations at their source, the administration is trying to contain their spread. Using tariff threats as leverage, the United States is transforming what has been a domestic regulatory choice into a negotiable trade concession, seeking to establish precedents that will shape digital governance negotiations for years.
The Trade Deal Campaign
The administration's campaign extends beyond Brussels. The United States has embedded anti-regulation clauses into bilateral trade agreements and frameworks with at least nine countries across Latin America, Southeast Asia, and Europe. The agreements share nearly identical language. Specifically, Section 3 of these agreements states that these countries:
shall not impose digital services taxes, or similar taxes, that discriminate against U.S. companies in law or in fact.
shall facilitate digital trade with the United States, including by refraining from measures that discriminate against U.S. digital services or U.S. products distributed digitally [ensuring the free transfer of data across trusted borders for the conduct of business, and collaborating with the United States to address cybersecurity challenges.]
The clauses do not explicitly mention digital market regulation, but they are clearly aimed at DMA-style frameworks. The DMA requires large platforms to allow interoperability, prohibits self-preferencing in search and app stores, and mandates data portability--rules that the United States argues disproportionately target U.S. firms and undermine their business models. The deliberately broad language in these trade deals--"measures that discriminate"--gives Washington wide latitude to define what counts as a violation.
* * *
Figure 1: U.S. Opposition to the European Union's DMA and DSA Escalated in 2025
* * *
Signed agreements with Argentina, El Salvador, Guatemala, Malaysia, Indonesia, and Cambodia contain binding versions of these clauses, though none have entered into force, as the signatories are still completing their internal legal procedures. Frameworks with Ecuador and Thailand include similar language. While nonbinding, these frameworks typically become templates for final agreements. Switzerland and Liechtenstein signed frameworks with narrower language, covering only "harmful digital services taxes" without the broader prohibition on discrimination.
Only Malaysia's agreement includes a carve-out: "Malaysia has the right to regulate in the public interest." Whether this provides meaningful policy space or just political cover remains unclear. What matters more is that this reservation is absent from all other agreements--the United States appears to have tightened language in subsequent negotiations.
For smaller economies, the calculation is straightforward: accept restrictions on future digital regulation in exchange for immediate tariff relief on agriculture, consumer goods, or other priority sectors. Indonesia, with a GDP of around $1.4 trillion in 2024, is the largest economy to accept such clauses so far. These agreements serve as test cases, demonstrating that the United States can extract regulatory concessions when tariff leverage is high and a country's digital trade volume is low.
* * *
Figure 2: U.S. Trade Agreements Containing Anti-Digital Market Regulation Clauses
Countries with anti-digital market regulation clauses, by continent
* * *
Why This Strategy Now
The Trump administration's aggressive stance departs sharply from the Biden years. While Biden's Federal Trade Commission pursued domestic antitrust enforcement against big tech, his trade team took a more measured approach to allied digital regulation, even withdrawing U.S. support for certain World Trade Organization e-commerce proposals to avoid constraining domestic regulatory choices. The Trump administration has so far rejected this deference. It views trade policy as "a critical component to national security" and frames digital regulation as a competitive battleground, showing less concern for sustaining multilateral consensus.
This more aggressive approach has been empowered by the Trump administration's expansive use of tariffs as leverage. If a trading partner adopts digital rules that Washington interprets as discriminatory, whether targeted taxes, data localization requirements, or platform conduct mandates, the administration can threaten or impose costs outside the digital sector through tariffs on goods. In August 2025, Trump threatened "additional tariffs" on countries with "digital taxes, legislation or regulations," explicitly portraying such measures as designed to harm U.S. technology companies. It should be noted that even with the loss of the International Emergency Economic Powers Act (IEEPA) as a policy tool, digital trade objectives can still be pursued through credible threats of tariffs or investigations under Section 301 or Section 232 of U.S. trade law.
The urgency demonstrated by the Trump administration likely stems from the perceived acceleration of the Brussels Effect. The EU Digital Markets Act has become a template for many others. Brazil introduced comprehensive digital markets legislation in September 2025. South Korea advanced platform fairness rules. India, Japan, and the United Kingdom are all considering similar frameworks. When the world's second-largest economic bloc establishes platform rules that predominantly bind U.S. companies and other major economies follow, the U.S. executive branch has strong incentives to respond. The methods range from negotiating carve-outs to threatening retaliation to deter adoption.
Where the Strategy Works--and Where It Does Not
Success with small economies does not mean the Brussels Effect is contained. Digital trade volume matters: the combined digital markets of Argentina, El Salvador, Guatemala, Malaysia, Indonesia, and Cambodia are negligible compared to those of Brazil, South Korea, or the European Union. As the administration targets larger markets with more developed regulatory proposals, the strategy faces stiffer resistance.
* * *
Figure 3: Limited Digital Trade in Countries with Anti-DMA-DSA Clauses
Figure 4: The Administration's Strategy Faces Stronger Resistance from Larger Markets with More Developed Regulatory Proposals
* * *
Brazil is testing the limits. In September 2025, Brazil's government sent a bill on fair competition in digital markets to Congress. The proposal would create a specialized department to oversee "platforms with systemic relevance"--language that predominantly captures American companies like Google, Meta, and Amazon. The bill includes interoperability and data sharing provisions mirroring DMA requirements.
As a result, U.S. lawmakers pushed back. The House Judiciary Committee sent a letter to Brazil's Ministry of Finance arguing the bill would "mainly capture American platforms" and constitute a nontariff trade barrier. The Office of the U.S. Trade Representative (USTR) simultaneously launched a Section 301 investigation into Brazil's trading practices, including digital trade and electronic payment services. Despite this pressure, Brazil has not withdrawn the proposal. Tariff threats lose potency when regulatory stakes are high, and domestic political constituencies support intervention.
South Korea took a different approach but still drew fire. Rather than the DMA's broad dominance-based regime, South Korea's proposed Online Platform Fairness Act focuses narrowly on platform-merchant relationships: contract terms, transparency requirements, and intermediation conduct. The "fairness" framing was partly strategic--an attempt to avoid DMA-style confrontation.
Yet this has not insulated South Korea from U.S. pressure. The bill has surfaced repeatedly as a friction point in U.S.-South Korea trade conversations, with Washington claiming such rules function as discriminatory nontariff barriers regardless of framing. The dispute strains the alliance: South Korea is a critical security partner, yet even narrowly tailored digital regulation triggers U.S. pressure.
The European Union itself remains out of reach. The United States and European Union signed a framework agreement in August 2025 committing to address "unjustified digital trade barriers," but the European Commission made clear the DMA and DSA are not part of trade negotiations. In March 2025, USTR designated both acts as digital trade barriers in its annual National Trade Estimate report. The Department of State's visa restrictions on five EU officials involved in drafting the legislation marked an escalation, but Brussels shows no signs of retreat. The European Union's control of the world's second-largest digital market and unified member states' support for regulations mean tariff threats from Washington carry less weight.
* * *
Figure 5: The European Union Controls the World's Largest Digital Market
Figure 6: The European Union's Digital Trade Volume Is Nearly Twice That of the Combined Previous 18 Countries
* * *
What This Means
The administration's trade deal campaign has secured initial wins, embedding anti-regulation clauses into agreements with at least eight countries and establishing precedents that may outlast this administration. But the strategy's limits are visible. Large economies like Brazil and South Korea demonstrate that tariff threats weaken when regulatory stakes are high and domestic constituencies support intervention.
Even in those jurisdictions where these clauses have been added, critical questions remain unanswered. How would the United States actually enforce these clauses, especially as the U.S. Supreme Court struck down IEEPA tariffs? The "discriminate against U.S. companies" language is deliberately vague. Does it prohibit any regulation that disproportionately affects large platforms? Could countries challenge enforcement through WTO dispute settlement? Would a future administration honor these commitments or quietly abandon enforcement? The agreements do not specify.
The approach treats digital regulation as purely a trade issue. Such a framing works for small economies seeking tariff relief, but breaks down when countries pursue what they see as legitimate governance goals. Platform power concentration is a political issue across democracies. The DMA catalyzed a global conversation about whether and how to regulate digital gatekeepers. That conversation will not end because Washington embeds anti-regulation language in trade deals with Cambodia and El Salvador.
The administration can lock in commitments from smaller partners. It cannot stop larger economies from regulating when domestic political pressure demands it. The Brussels Effect spreads through economic gravity--firms comply with EU rules because the market is too large to ignore, then apply those standards globally because fragmented compliance is expensive. Trade deal clauses might slow adoption in countries with minimal digital markets, but they will not reverse the regulatory tide in economies that matter.
What the strategy does accomplish is establishing a precedent: digital regulation as a negotiable trade concession. Future administrations will inherit these agreements and the principle they embody. Whether that principle survives depends on factors beyond this administration's control--including whether the European Union's regulatory model delivers the benefits Brussels promises, and whether U.S. technology companies maintain the market dominance that makes this fight worth having.
* * *
Duc Minh Nguyet (Moon) Nguyen is a research intern with the Economics Program and Scholl Chair in International Business at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Philip A. Luck is director of the Economics Program and Scholl Chair in International Business at CSIS.
* * *
Original text here: https://www.csis.org/analysis/new-containment-doctrine-how-united-states-using-trade-stop-digital-regulation
[Category: ThinkTank]
