Countering the Adversary: Effective Policies or a DIME a Dozen?
Stephen M. Shellman, Brian Levey, Hans H. Leonard
Violent Intranational Political ...
Similarly, DIME's Information instrument resembles the Intelligence instrument articulated in DIMEFIL, though the former entails the effective use of information (about oneself and one's adversary) to shape public opinion and the enemy's perspective on the conflict. Intelligence, by contrast, concerns the mechanisms for collecting information relevant to a real or potential conflict.
While intelligence operations were certainly an important part of Cold War strategies, DIMEFIL highlights the importance of Intelligence in counterinsurgency operations as a unique instrument of power. Whereas the Cold War offered a degree of stability via a balance-of-power logic, military counterinsurgency doctrine emphasizes the flexible and thereby unpredictable character of insurgent operations, justifying a more prominent role for intelligence. Likewise, Law Enforcement is regarded as an instrument of power unique to counterinsurgency, where national and international laws can be brought to bear to restore order domestically and ensure the legitimacy of a friendly government under pressure from insurgents (NSCT, 2006). While we have plans to explore many of these various instruments of power, for the purposes of this study we focus on U.S. military and diplomatic actions.
3.0 The General Model

Figure 1 illustrates our contention that political dynamics affect popular support for the host government and competing dissident organizations, and vice versa. Specifically, government and dissident interactions are public events, and the population makes value judgments concerning those public interactions. As the dynamic of politics changes on the ground, how do organizations adapt and shift their tactics and strategies (military attacks v. attacks on civilians v. negotiations, etc.)? How do such dynamics affect public opinion (i.e., sentiment), and how does public opinion affect tactics and strategies? Finally, which U.S. tactics and strategies help win the support of the population and aid in defeating the insurgency? We discuss some of our preliminary results using this framework below to illustrate the utility that our approach and technological developments can have for understanding IW and SSTRO.
4.0 Data

The majority of our current data come from Shellman's (2008) Project Civil Strife (PCS) datasets and the Integrated Crisis Early Warning System (ICEWS) datasets, which were compiled under National Science Foundation (NSF) and Defense Advanced Research Projects Agency (DARPA) funded projects, respectively. In total the projects generate several different but related datasets for 29 countries in the U.S.
Pacific Command's (PACOM) South and Southeast Asia region from 1997-2009. Without listing all 29, the dataset contains countries as diverse as Russia, Australia, Thailand, Indonesia, India, Nepal, the Solomon Islands, and the Philippines. The event data (dataset #1) contain daily records of "who did what to whom." The actors are disaggregated by individuals, groups, and branches of government, while the events are disaggregated by tactic and run the gamut from statements to negotiations to protests to armed clashes. Finally, the data also include international actions by all actors in the 29 countries, as well as the United States and Europe. This dataset is two orders of magnitude larger than any other events dataset generated to date. While other global databases are available, the ICEWS/PCS dataset contains information from over eight million news reports (over 25 gigabytes of English and translated foreign-language text) from over 75 different news agencies.
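Event records of this "who did what to whom" form can be rolled up into the monthly counts and intensity-weighted scores used later in the paper. Below is a minimal Python sketch with toy records; the weights are illustrative stand-ins, not the published Goldstein-scale values, and all names are hypothetical:

```python
from collections import defaultdict

# Illustrative stand-in weights, NOT the published Goldstein (1992) values.
WEIGHTS = {"armed_clash": -10.0, "protest": -6.5, "negotiate": 7.0}

events = [  # toy daily records: (month, actor, event_type)
    ("2004-01", "dissidents", "protest"),
    ("2004-01", "dissidents", "armed_clash"),
    ("2004-02", "dissidents", "negotiate"),
]

counts = defaultdict(int)      # raw event counts per (month, actor)
weighted = defaultdict(float)  # intensity-weighted sums per (month, actor)
for month, actor, etype in events:
    counts[(month, actor)] += 1
    weighted[(month, actor)] += WEIGHTS[etype]

print(counts[("2004-01", "dissidents")])    # -> 2
print(weighted[("2004-01", "dissidents")])  # -> -16.5
```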
A DARPA seedling project enabled SAE to develop an automated sentiment software package to generate sentiment data (dataset #2). The software generates "polling"-type data in near real time from electronic sources such as blogs, diaspora sources, and news reports. In short, the package applies a bag-of-words technique to quantify perceptions of actors, policies, and activities after a document classifier has sorted texts into subtopics and sub-issues. Moreover, we built upon our event coding technologies to develop a dyadic sentiment coder with which we can collect information about one actor's attitudes towards another. No other packages that we know of currently generate this type of data;
most utilize the bag-of-words technique and code the overall sentiment of a document without deciphering the actor doing the talking or the target of the sentiment. We have successfully integrated these data into various models to address how attitudes yield shifts in group goals, tactics, and strategies. In this project our goal is to assess what types of government behavior and policies produce shifts in various populations' attitudes and, subsequently, how such shifts in attitudes affect changes in dissident behavior (e.g., tactics and strategies). We use the dyadic sentiment data in this study.
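To illustrate the distinction, here is a deliberately minimal sketch of dyadic scoring. The lexicons, actor lists, and simple "X verb Y" word-order heuristic are toy assumptions, far simpler than the actual coder:

```python
# Minimal sketch of dyadic, bag-of-words sentiment scoring (hypothetical
# lexicons and actor lists; the production coder is far more sophisticated).
POSITIVE = {"praised", "supported", "welcomed"}
NEGATIVE = {"condemned", "attacked", "denounced"}
ACTORS = {"public", "government", "insurgents"}

def dyadic_sentiment(sentence):
    """Return (source, target, score) for a simple 'X verb Y' sentence."""
    tokens = sentence.lower().strip(".").split()
    actors = [t for t in tokens if t in ACTORS]
    if len(actors) < 2:
        return None  # cannot form a dyad without a source and a target
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return actors[0], actors[1], score

print(dyadic_sentiment("The public condemned the government."))
# -> ('public', 'government', -1)
```

A document-level bag-of-words coder would return only the score; attaching the source and target actors is what makes the measure dyadic.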
4.1 Operational Indicators

In this section, we briefly sketch how we operationalize our concepts. In terms of the events data, we aggregate the actions of individuals within groups, groups themselves, domestic governments, and international actors (governments, NGOs, and IGOs). For dissident actors and social groups we create violent, nonviolent, and cooperative event counts and weighted event counts, using well-known scales (e.g., Goldstein 1992) to measure the intensity of actions (see Shellman et al. 2010 and Shellman 2006a, 2006b for examples).

To operationalize U.S. government tactics, we use several indicators. To begin, we use Item Response Theory (IRT) scaling (outlined in Horne, Shellman, and Stewart 2008) to create scales of relevant actors' diplomatic (D), information (I), military (M), economic (E), financial (F), intelligence (I), and law enforcement (L), or DIMEFIL, activities. Essentially, the technique allows one to derive a latent variable (e.g., diplomacy) from the various "diplomatic events" contained in the events dataset. We can generate such DIMEFIL activity scales for the domestic government as well as for foreign governments, most notably the U.S.

We estimate these dimensions using a two-parameter Bayesian IRT model. This framework allows us to estimate both the discrimination and the difficulty of the events along a latent dimension. That is, the two-parameter model estimates how 'different' various events are from each other as well as how extreme or mild certain events are. This allows us to estimate scales that range, for example, from low-level military conflict to more intense military conflict through the discrimination parameter, with the rank order of events on the scale given by the difficulty parameter. The Bayesian framework also allows us to incorporate any prior information we may have regarding the scales we develop.
This setup is also considerably more flexible than frequentist methods of dimension extraction, which often assume normality and linearity among scale items.
Given the distribution of the events data, these traditional assumptions are almost certainly violated in ways that lead to biased results. Two-parameter Bayesian IRT models have been used to estimate political parties' left-right policy preferences (Bakker forthcoming; Armstrong and Bakker 2006), levels of democracy using Polity data (Treier and Jackman 2007), and measures of civil rights (Armstrong 2009). The resultant scales are essentially Likert-like scales ranging from -10 to +10 (strongly oppose to strongly support type measures).
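The response function at the heart of the two-parameter model can be sketched as follows. The full Bayesian estimation (priors over latent positions, discrimination, and difficulty, fit by MCMC) is omitted, and all parameter values are illustrative:

```python
# Sketch of the two-parameter (2PL) IRT response function underlying the
# scaling: P(case i exhibits event j) = logistic(a_j * (theta_i - b_j)),
# where a_j is the discrimination and b_j the difficulty of event j.
import math

def p_2pl(theta, a, b):
    """Probability that a case at latent position theta exhibits item (a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of 0/1 responses to a list of (a, b) items."""
    ll = 0.0
    for (a, b), y in zip(items, responses):
        p = p_2pl(theta, a, b)
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

# A 'mild' event (low difficulty b) is more likely than an 'extreme' one
# (high b) for a case at a moderate point on the latent conflict scale.
print(p_2pl(0.5, a=1.0, b=-1.0) > p_2pl(0.5, a=1.0, b=2.0))  # -> True
```

In a Bayesian fit, this likelihood would be combined with priors on theta, a, and b and sampled; the prior is where any scale information the analyst already holds enters the model.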
In addition to creating scaled variables along a continuum, we isolate specific actions and estimate their impacts on adversarial activities. For example, in this study we examine the effects of specific military training exercises on the intensity of violent political conflict over time.
For this study, we aggregated the sentiment data into monthly temporal measures of the public's attitudes towards specific actors. For example, we created dyadic measures representing the public's attitudes towards the U.S. military, the Indian government, and Indian separatist groups. Having sketched the ways in which we operationalize the data, we turn to our modeling capabilities.
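The monthly aggregation can be sketched as follows, with hypothetical field names and toy daily records:

```python
# Sketch: aggregating daily dyadic sentiment records into monthly means
# for each (source, target) pair; records and field layout are toy examples.
from collections import defaultdict

records = [  # (date, source, target, score) -- toy daily dyadic scores
    ("2004-01-03", "public", "U.S. military", 0.4),
    ("2004-01-17", "public", "U.S. military", -0.2),
    ("2004-02-05", "public", "U.S. military", 0.6),
]

totals, ns = defaultdict(float), defaultdict(int)
for date, source, target, score in records:
    key = (date[:7], source, target)  # truncate YYYY-MM-DD to YYYY-MM
    totals[key] += score
    ns[key] += 1

monthly = {k: totals[k] / ns[k] for k in totals}  # mean sentiment per month
print(monthly[("2004-01", "public", "U.S. military")])
```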
5.0 Empirical Models: Impact Assessment Models & Counterfactual Models

What would happen to an ongoing insurgency if the United States began training the host government's military? How would positive diplomatic actions affect violent events on the ground? Such questions address complex cause-and-effect relationships which ultimately result in only one observable set of outcomes. Either the U.S. decides to train a host nation's military to help quell an insurgency or it doesn't; a government either chooses to engage in positive diplomatic behavior or it doesn't. In either case, the observable data reflect only the course of action taken, but policy-makers may be deeply interested in the (potential) outcome of the road not taken: what if the U.S. had not trained the host nation's military or rebuilt infrastructure? Ultimately, we want to know the observable effects of the road taken compared to the road not taken.
These "what if" questions are quite common in the biological sciences and are often answered using controlled experiments. For example, in a pharmaceutical trial one group of patients is given a new drug and designated the treatment group. A similar group of patients (in terms of characteristics, medical history, etc.), the control group, is administered only a placebo. Doctors can then estimate the average effect of treatment by comparing outcomes of the treatment and control groups. Social science questions often do not lend themselves to these kinds of experiments; anyone would agree that designing a foreign policy agenda around an experiment is a foolish course of action. However, using historical data we can leverage the insights of an experiment through case matching followed by statistical analyses. We can then estimate the effects of specific actions in various contexts.
What we have described above is termed "counterfactual analysis" and rests on the assumption that every individual has both an observed outcome and a potential (unobserved) outcome.
We observe the effect of a treatment on individuals in the treatment group and assume that the outcome would have been different had that group not received the treatment. Likewise, we observe outcomes for the control group and assume that outcomes would have been different had a treatment been applied. More formally, we can write y_i1 and y_i0 for every individual: y_i1 is the outcome for individual i under treatment and y_i0 is the outcome for individual i under no treatment.
If our subject has been given a treatment, then we observe y_i1, and y_i0, which is unobserved (counterfactual), is estimated from a model. The difference between these two outcomes is the treatment effect (T) for this subject: T_i = y_i1 - y_i0.
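In code, this computation is direct; predict_untreated below is a hypothetical stand-in for the fitted model that supplies the counterfactual y_i0:

```python
# Sketch of the counterfactual treatment-effect computation: for a treated
# case we observe y_i1 and estimate the unobserved y_i0 from a fitted model.
def predict_untreated(covariates):
    """Hypothetical fitted model: predicted attacks absent treatment."""
    return 2.0 * covariates["repression"] + 5.0  # illustrative coefficients

def treatment_effect(y_observed, covariates):
    """T_i = y_i1 (observed under treatment) - y_i0 (model counterfactual)."""
    y_counterfactual = predict_untreated(covariates)
    return y_observed - y_counterfactual

# E.g., 6 attacks observed after treatment vs. a model prediction of 8
# attacks absent treatment implies a treatment effect of -2.
print(treatment_effect(6.0, {"repression": 1.5}))  # -> -2.0
```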
Estimating treatment effects in experimental studies where the researcher can randomly assign the treatment is different from estimating effects with observational data – such as the data typically used to address our original questions regarding military training or positive diplomatic actions. For this kind of analysis, we combine counterfactual analysis with a case matching procedure. A concrete example will illustrate this process.
Suppose we want to know the effect of a military training exercise in India on violent attacks by insurgents. We have observational data on violent attacks and we know when military training exercises were conducted. Using military training as our treatment, we can look at the impact of training on the number of violent attacks per month before and after training. Of course, military training is not the only factor that might influence the number of violent attacks we observe. Models of political conflict suggest that government repression, government crackdown on insurgent groups, public sentiment, the economic and social environments, and other variables also shape conflict. Thus, we must control for such factors.
To isolate the independent effect of our treatment – military training – we use matched case analysis, matching observations on these control variables.
We begin by fitting a model on all cases. We seek a model whose predicted values correlate highly with the actual values (.80-.99). Such a model provides increased confidence that we have not omitted important variables and serves as a starting point for matching cases.
The goal of matched case analysis is to choose cases that are as similar as possible on all confounding factors except the treatment variable. So, for instance, we would match pre-treatment months to post-treatment months that have similar values on government repression, government violence toward insurgents, and public sentiment. Combining a matched case analysis approach with counterfactual modeling increases our confidence that observed differences in the number of attacks before and after treatment are due to the treatment, in this case military training, and not to confounding factors.
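A minimal sketch of the matching step, using nearest-neighbor Euclidean distance over the confounders as one simple choice (the covariate values and field names are illustrative):

```python
# Sketch: match a post-treatment month to its most similar pre-treatment
# month on the confounding covariates, then compare attack counts.
import math

def distance(a, b):
    """Euclidean distance between two covariate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(post_month, pre_months):
    """Return the pre-treatment month most similar on the covariates."""
    return min(pre_months, key=lambda m: distance(m["covs"], post_month["covs"]))

pre = [  # covs = (repression, violence toward insurgents, public sentiment)
    {"month": "2004-03", "covs": (0.2, 0.1, -0.5), "attacks": 12},
    {"month": "2004-04", "covs": (0.8, 0.7, -0.9), "attacks": 20},
]
post = {"month": "2004-07", "covs": (0.75, 0.65, -0.85), "attacks": 9}

m = match(post, pre)
print(m["month"], post["attacks"] - m["attacks"])  # -> 2004-04 -11
```

In practice the covariates would be standardized first so no single confounder dominates the distance; propensity-score or Mahalanobis matching are common alternatives.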
Figure 2 illustrates the basic tenets of the process. First, we determine the dependent variable we wish to analyze. For our purposes it could be a stability indicator or the number of violent attacks; we use violent attacks for illustrative purposes here. Second, we fit a model to that dependent variable.