Vladimir Chlouba

Philosophy of (Political) Science

I have recently come across a book that addresses one of the most fundamental questions troubling philosophers and social scientists alike: what is it that makes our enquiry scientific? The book I intend to engage in this text bears the title A Model Discipline and was authored by Kevin A. Clarke and David M. Primo, both of whom are professors of political science at the University of Rochester. There is much in their work that I happen to disagree with, but it also raises some very good points. In order to offer a constructive discussion of the book, it is imperative that we carefully distinguish between these various points. Before we delve into the issues, however, let us first briefly review the book’s main argument, then critique it, and finally draw some larger lessons that tap into an important debate about the nature of social science which could be of interest to anyone pondering the topic.

At the very beginning of their work, the authors claim that their “book is about how to think about models and the roles they play in our discipline [political science].”[1] To further explain the centrality of models, they go on to assert that “the approach we [the authors] take is known as the model-based or model-theoretic view, which holds that models, not theories, are the primary units of scientific interest.”[2] We are also told that models should not be judged based on their ability to predict but rather, on their “usefulness” for a particular purpose. One might wonder what exactly the authors mean by a “model.” In the article which gave rise to the discussed book, the reader is given some guidance: “Maps are models. Maps are not reality, nor are they isomorphic to reality. Rather, they are representations of reality. Furthermore, maps are physical objects, not linguistic entities. It therefore does not make sense to ask whether maps are true or false any more than it makes sense to ask if other physical objects – tea kettles, toy airplanes, or gas grills – are true or false.”[3]

The approach that, according to Clarke and Primo, has led social scientists to overemphasize the ability of models to predict is termed hypothetico-deductivism (H-D). H-D is a familiar, sequential procedure. First, a hypothesis or theory is formulated. Second, a prediction or observable claim is deduced from it. Third, the hypothesis is tested with real-world data and, depending on the result of the test, either confirmed or disconfirmed.[4] Let us conclude this brief overview of the authors’ argument by noting that they suggest five types of models which do not necessarily need to be tested to be useful: foundational (providing general insights into problems), structural (organizing known facts), generative (telling us where to search next), explicative (investigating causal mechanisms), and predictive (advancing forecasts) models.[5]
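For readers who prefer a concrete caricature, the H-D sequence just described can be sketched in a few lines of code. Everything in the sketch — the coin-flip hypothesis, the sample size, and the two-standard-error band — is invented for illustration and is not drawn from Clarke and Primo.

```python
import math
import random

random.seed(0)  # make the toy example repeatable

# Step 1: formulate a hypothesis -- "the coin is fair" (p_heads = 0.5).
p_hypothesized = 0.5

# Step 2: deduce an observable prediction -- in n flips, the observed
# proportion of heads should fall within roughly two standard errors of 0.5.
n = 1000
se = math.sqrt(p_hypothesized * (1 - p_hypothesized) / n)
lower, upper = p_hypothesized - 2 * se, p_hypothesized + 2 * se

# Step 3: test the prediction against data (here, simulated flips of a coin
# that really is fair).
flips = [random.random() < 0.5 for _ in range(n)]
observed = sum(flips) / n

# Step 4: reject or fail to reject the hypothesis.
verdict = "fail to reject" if lower <= observed <= upper else "reject"
print(observed, verdict)
```

Note that the final step can only reject or fail to reject, never confirm — the asymmetry flagged in note [4] below.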

We are now ready to proceed to a critique of Clarke and Primo’s ideas. The main argument I will advance in the course of this text is that although the chief contribution Clarke and Primo claim to make is their departure from a flawed philosophy of (political) science, it is in this realm that their arguments remain least convincing. Conversely, it is in the discussion of the particular methods and practices of political science that their lasting contribution is most clearly felt. The treatment of models as maps offers a fitting point of departure for our discussion.

The authors dwell at length on the point that a model’s worth is to be measured by its usefulness. We are told that models are objects and as such cannot always be tested. They represent, and they are useful. Maps represent well, they are useful, and, on this view, no one in their right mind cares whether maps are particularly accurate. In fact, Clarke and Primo tell us, maps are false. Unfortunately, this depiction of models in science underestimates the undeniable link between truth and usefulness: the degree to which anything in the scientific enterprise is useful depends on the degree to which it is representative of actual phenomena that exist in the real world and that we care about. Representativeness, in turn, depends on the degree to which representative objects deviate from reality. Reality, representativeness, and usefulness are more intimately related than Clarke and Primo’s metaphor is prepared to admit. Think of maps again. Maps are useful, but they are only useful because they truthfully represent reality. If they did not — indeed, if they were removed from reality to a sufficient degree — they would be utterly useless. What models and maps really do is bring out a particular feature that is of interest to us. In the case of maps, we care about the spatial relationships between various objects, not about the colors of the different buildings and streets that maps show. But this changes nothing about the fact that there is always a strong link between the scientific enterprise, models as its vehicles, and the real world that scientists seek to learn about. Once that link is abandoned, we set forth on a journey of irrationalism. Such a journey may still be fun (useful, in a sense), but it has little to do with science.

As has been clarified above, A Model Discipline does more than criticize particular methods of deriving and testing hypotheses. It challenges the very primacy of the H-D approach: “What if we were to press deeper and ask just what it is about this three-step method – propose, derive, and test – that makes it scientific?”[6] This, indeed, is a crucial question, because the answer we are about to give illustrates that the various kinds of models Clarke and Primo write about (foundational, structural, generative, explicative, and predictive) cannot be separated as neatly as the authors suggest. Why is it, then, that theory implies prediction? If one understands the internal dynamics, structure, and underlying logic of a given phenomenon, then one should be able to produce a prediction. If a prediction cannot be made, then there are other variables at work whose importance has not been grasped. In other words, a theory which cannot ultimately yield predictions is incomplete at best. What the authors likely refer to when they write about the different models are the different stages of scientific enquiry. It is quite correct to observe that we first need to think in terms of basic, foundational models so that we can better describe structures, generate new avenues of thought, come up with explanations, and ultimately predict. Prediction, as it were, validates the entire process. It is true that all of these models — or, as I have called them, stages of scientific enquiry — are useful and contributive in themselves. But they ultimately need to provide opportunities for determining their validity. They need to be tested. To put it more crudely: what could ever be scientific about coming up with a tale which can be neither proved nor disproved and which, essentially, forever remains a tale that one might choose to believe or ignore but that can never be rationally adjudicated?

That this interpretation of Clarke and Primo’s book is not a figment of my imagination can be illustrated by their treatment of explanation in social science. The authors argue that “the desire to choose between explanations is concomitant with the desire to test theoretical models. Much of the time, however, choosing between explanations simply is not necessary.”[7] The argument is illustrated (let us for now abstain from the word ‘explained’) by the following set of questions:

  1. Why did *Germany* invade Poland in 1939?

  2. Why did Germany invade *Poland* in 1939?

  3. Why did Germany invade Poland in *1939*?

  4. Why did Germany *invade* Poland in 1939?
Clearly, question 1 asks why Germany, rather than, for instance, France, invaded Poland. The second question is truly interested in why Germany invaded Poland rather than some other territory, and so on. What conclusion do Primo and Clarke derive from this exercise? They tell us that “multiple explanations can coexist without any single explanation being false. (...) There is no need to choose between these explanations, however, for each answers a different question.” Yet this last point, i.e. that each explanation answers a different question, is of crucial importance. The four questions, despite their near-identical wording, enquire about different phenomena. While question 2 is concerned with the geographical dimension of Germany’s expansionism in 1939, question 3 focuses on its temporal aspect. And because these questions, as the authors themselves state, are different, so are their explanations. The explanation to question 1 can coexist with the explanation to question 2 only as long as we are concerned with the questions’ surface appearance. Once we are concerned with their actual meaning, i.e. what these questions really ask, their explanations cannot coexist. The explanation that ‘Germany invaded Poland because Poland was Germany’s neighbor’ certainly cannot answer questions 1, 3, and 4; it can only answer question 2. What Clarke and Primo’s exercise really shows is that superficially similar (and, obviously, poorly formulated) questions, not factually distinct explanations, can coexist. To suggest that the explanations to questions 1 and 4 are somehow intimately related is an imprecise analysis of the matter at hand, for it relies on the questions’ surface similarity rather than on their factual meaning.

The crucial point is that, whether or not we have access to it, there necessarily exists one single and complete explanation of a given phenomenon. This is because truth still matters in the scientific enterprise. The unfortunate feature of Clarke and Primo’s argument is that they devote a large part of their argumentation to relativizing this fact of life while elaborating much less on why our view of the truth is dim and our methods imprecise. It is worth asking why the political economy of publishing often prizes ephemeral claims of complete explanation over partial, humbler, but also more durable accounts of social phenomena; yet this point receives comparatively little attention in A Model Discipline.

Another argument Clarke and Primo employ to explain why testing theoretical models may not always be appropriate holds that theoretical models are not tested with data but with models of data. What the authors mean is that drawing a sample from an underlying population does not give us complete knowledge of that population; it only provides a representation of it, a model. Since these models of data are not themselves necessarily true, they cannot properly test theoretical models. The careful reader, once again, has to demand clarity about the precise location of this imprecision, whose existence is fully admitted. The source of the imprecision lies not in the philosophical foundation of model-based testing; rather, it has everything to do with the practice’s methodology. Let me illustrate what I mean. To say that theoretical models are tested with models of data means that we do not have direct access to the world as it is; we work with samples, means, and other statistical tools that represent, ideally quite closely, the entirety of a given phenomenon. Because these statistical tools are not flawless, we end up testing our theoretical models not with real data but merely with models of data. Let us grant for a while that this is true, as it does describe much of the work of political scientists quite well. But it does not follow that if we were able to obtain data of sufficient quality, we could not use it to test our theories. That we cannot obtain such data at present does not imply that the philosophy of testing itself is wrong; it merely means that we need to find better ways of gathering and analyzing data. Yet this is not what Clarke and Primo’s publication suggests. It argues that there is something inherently wrong with the entire philosophy of testing theories with data.
This is not surprising given that, in the article that later gave rise to their book, Clarke and Primo speak of “the disjunction between usefulness and prediction.”[8] I claim that this “disjunction” is a false dichotomy: ultimately, there is no disjunction between usefulness and prediction. Although different in their particular emphases, the two concepts are tightly connected by the external reality, that is, by truth.

But the authors are onto something. The political economy of publishing encourages one particular strand of methodology: quantitative methods. There are many ways to learn about the world, and the quantitative way is not the only method of studying social phenomena. Editors of many political science journals are nowadays reluctant to publish case studies, formal models, and purely theoretical work in general without an accompanying empirical (very often quantitative) section. Given the complex nature of the social world, it is perfectly legitimate to ask why the obverse does not hold true. Indeed, let us ask why purely quantitative studies, some of which aim to explain relationships between phenomena as complex as democracy and development, need not be accompanied by case studies demonstrating the mechanics of a quantitatively derived conclusion in a particular setting.

Clarke and Primo also have a point when they write that “the only people who still care about the scientific status of political science are political scientists motivated by a largely unnecessary academic inferiority complex. For those who remain concerned, we therefore stipulate that political science is scientific.”[9] It is true that the emulation of the natural sciences by their social counterparts has at times been driven by the desire to prove the social sciences’ scientific rigor. And although our analysis of the social world has gained much by borrowing methods from other fields of enquiry, we should not forget the crucial distinction between the social and the natural sciences, a topic to which we turn next.

The aforementioned crucial distinction between the social and natural sciences lies in the subject of their study. While the natural sciences study relatively stable environments that lend themselves to abstraction (except under extreme conditions, such as those of relativistic speeds or quantum mechanics), societies are infinitely more complex. To be more precise, the natural world, too, is complex, but it is possible to isolate individual forces through abstraction, and most natural forces and phenomena are (again, under normal circumstances) remarkably stable. That is why the number of studied variables is reasonably small and why we can arrive at natural “laws.” Societies can hardly be reduced to a few forces; the number of variables at play is simply overwhelming. In addition, it is difficult to gauge the importance of individual variables, partly because this very importance can change. Individual human beings, whether acting alone or in groups, are naturally very important factors in the social sciences (often they are the subjects of interest themselves). Doing social science is akin to forecasting the weather: there are too many variables to predict the outcome easily. That does not change the fact that both weather and social outcomes could, in principle, be fairly well predicted if one had the complete picture. Instead of searching for laws of human behavior, we opt to look for patterns of human behavior, recurring developments that occur in societies. Our goal need not be the discovery of an underlying force that governs all of social reality; we simply want to learn more about the particular urges, motives, and causes (understood rather loosely) at play in specific situations.

There are good reasons for attempting to revive the case study. Social scientists often aim at providing total explanations: instead of examining what unfolded in a particular case, we see attempts to arrive at complete accounts, which are hard to come by. While modern papers may be more scientific by virtue of their advanced methodologies, it does not follow that they are also necessarily more scholarly. The one key method that has over the ages yielded the most precious results has not changed: the scholarly mind’s curiosity, creativity, and capacity for analytic rigor.

[1] CLARKE, Kevin A. and David M. PRIMO. A Model Discipline: Political Science and the Logic of Representations. Oxford: Oxford University Press, 2012. ISBN 978-0-19-538220-4, p. 1.

[2] ibid.

[3] CLARKE, Kevin A. and David M. PRIMO. Modernizing Political Science: A Model-Based Approach. Perspectives on Politics. 2007, 5(4). DOI: 10.1017/S1537592707072192, p. 742.

[4] Technically speaking, a hypothesis can never be confirmed. We can only, based on the available data, reject or fail to reject a hypothesis.

[5] CLARKE, Kevin A. and David M. PRIMO. Modernizing Political Science: A Model-Based Approach. Perspectives on Politics. 2007, 5(4). DOI: 10.1017/S1537592707072192, p. 743.

[6] CLARKE, Kevin A. and David M. PRIMO. A Model Discipline: Political Science and the Logic of Representations. Oxford: Oxford University Press, 2012. ISBN 978-0-19-538220-4, p. 21.

[7] ibid., p. 164

[8] CLARKE, Kevin A. and David M. PRIMO. Modernizing Political Science: A Model-Based Approach. Perspectives on Politics. 2007, 5(4). DOI: 10.1017/S1537592707072192, p. 743.

[9] CLARKE, Kevin A. and David M. PRIMO. A Model Discipline: Political Science and the Logic of Representations. Oxford: Oxford University Press, 2012. ISBN 978-0-19-538220-4, p. 10.
