
Chapter 2
What Can We Expect From Science:
The Case Against Scientific Realisms

Shapin and Schaffer (1985), Latour (1990) and Stengers (2023) vividly tell the story of how sociopolitical crises played an important role in the so-called scientific revolution. According to their narrative, the need to create consensus led people to consult an authority, if possible one powerful enough to settle disputes and convince everyone, so that crises like bloody wars might come to an end. Nature itself, objective reality devoid of any subjective opinion, emerged as such an authority, and scientists were to become its bearers as people who could speak with nature through their experiments, delivering nothing less than facts and truth. This is the basis of modern science’s epistemic sovereignty.

 

The view of science as delivering facts and truth has been dominant in academia and among the public. Pretty much anything quickly becomes a matter of evidence or data, and if not supported by them it is dismissed as speculative or even dogmatic. But of course there have been disagreements, too. The twentieth century in particular witnessed important debates around what science can tell us.

 

In this chapter, I will first summarize the realist stance and its arguments. Then I will explain the criticisms directed at the realist stance by the anti-realist one and its overall arguments, and I will propose that one should take an anti-realist stance. In doing so, I will tap into scientific realism, selective realisms, constructive empiricism, the pessimistic argument, underdetermination, perspectival realism, sociological and historical approaches to science, and the practice approach to science.

 

Note that my aim in this chapter is mainly to answer the question of “what can we expect from science.” The answer to this question will later constrain our more local questions and answers in particular fields of science; for the case of this thesis, in cognitive science.

2.1  Setting the stage for scientific realism 

The controversy as to how we should think about the reality of our theories goes back at least to the 17th-century dispute between Galileo and Bellarmine over the status of the Copernican theory (Feyerabend, 1975). Likewise, the 19th century witnessed a heated debate around similar questions and concerns, with Duhem, Poincaré and Mach at centre stage (Duhem, 1991; Mach, 2013; Poincaré, 2014). The debate continued into the 20th century, especially with the drive of novel claims by logical positivism (Richardson and Uebel, 2007), and it was later taken up by Putnam, Boyd and others (Boyd, 1973, 1983; Putnam, 1962, 1975a, 1975b).

As is the case with most topics in philosophy, it has a long history that cannot be dealt with adequately in a single thesis, let alone in a single section of a thesis. Therefore, in this section I will try to confine the discussion within reasonable limits and start off with the more recent and more relevant formulations of the debate.

Scientific realism is basically the idea that scientific theories are true (or approximately so) and/or that they give us, to put it a bit metaphorically, the ‘picture’ of a mind-independent reality (Chakravartty, 2017). At first glance, one might think that the fallibility of scientific theories contradicts this idea and that it therefore cannot be correct, but of course scientific realists acknowledge that fallibility, which is the reason for the mention of “approximate truth.” Note that this is most importantly a claim about theoretical terms as opposed to observational terms: the latter refer to things that we can observe — construed in a broad sense here — whereas the former are the terms we employ to explain the observational ones (Chakravartty, 2017). This distinction itself has a long history and a literature full of various stances, which is beyond the scope of this thesis.[1] Suffice it to say that there are scholars who use ‘theoretical terms’ to include any term employed by scientific theories, whether observational or not (Chakravartty, 2017). Relatedly, some prefer the observable/unobservable distinction as more apt and take the discussion as revolving mainly around unobservables, entities that we cannot ‘directly observe.’ Note, though, that there is no consensus on how we should construe the meaning of ‘direct observation.’ But for most people electrons or many components of String Theory are good and accepted examples of unobservables.

Scientific realism has three main commitments: ontological (or metaphysical), epistemological and semantic. The ontological one is the commitment that there is a mind-independent reality, whereas the epistemological commitment states that we can and do come to know that independent reality through science. Lastly, semantically, realists commit to the view that scientific statements are and should be construed literally (as referring to the mind-independent reality). Discussions of scientific realism feature plenty of different stances depending on how one formulates these commitments more specifically.

Hilary Putnam was one of the most important analytic philosophers of the 20th century, and his ideas on these matters are foundational for contemporary debates on the topic. Putnam takes the semantic commitment as crucial to securing the others and gives a causal-historical theory of reference. According to the model that would later also come to be known as the Putnam-Kripke Model (Kripke, 1980; Putnam, 1973, 1975b), a name’s reference, its extension, is fixed at the moment of ‘baptism’, namely the moment the name is given to the relevant object by convention. That name then becomes ‘the rigid designator’ of its object, and it is passed on to other people and other generations, which links all later references causally to the first act of naming, the so-called moment of ‘baptism’. For a better understanding of the underlying idea, we can consider the famous Twin Earth thought experiment proposed by Putnam (1973). Suppose that there is another earth which is exactly the same as ours except for just one small difference: ‘water’ is not constituted by H2O there but by some mixture that is unknown to us, called, hypothetically, XYZ. Notice that this small difference does not change anything in terms of water’s appearance or how it is sensed, perceived, etc. So, Putnam argues, if it were the appearance of ‘water’, or how it is perceived, sensed or conceptualized by us, that determined the reference, then we would be referring to the same thing on both our earth and the twin earth even though the extensions are not the same. However, this poses a contradiction, since different extensions lead to different truth-conditions, which in turn contradicts talk of ‘the same reference.’ This would also pose a challenge for ‘mind-independence’, as here the reference would be mind-dependent (depending on how ‘water’ is sensed, perceived, conceptualized, etc.). One further consequence, for example, is that we would not be able to refer to the same thing throughout history or across different cultures when using the same ‘names’, which, in turn, obviously undermines the claims of scientific realism. Consequently, to solve this, Putnam argues that the meaning, the reference of a name, must be fixed causally, which makes it determined by the object’s nature, its mind-independent causal powers, because only then would it be possible to refer to the same thing with the same name throughout history and across cultures. Accordingly, for Putnam, science discovers the appropriate extensions of natural kinds. Water is a natural kind, in this sense, as it is fixed by H2O’s causal powers.[2] What is true and false for H2O is also true and false for water. This is how science reveals a true picture of the mind-independent reality.

It may be useful to elaborate further on this point, especially since the ideas discussed here might appear incompatible with Putnam’s functionalist views, which we will address later. Putnam might be expected to argue that the meaning of water emerges from its causal relations and emergent structure. In fact, some commentators have derived precisely this conclusion from the Twin Earth thought experiment (Searle, 1983). Putnam's main point here is an attempt to demonstrate that certain concepts used across different scientific theories may refer to “the same thing,” even if the theories themselves, and the semantic frameworks that give rise to them, are different. In this way, Putnam seeks to show that we can talk about the same entities while subscribing to different theories and explanations concerning them, and that the meanings of these entities do not originate from the theories themselves. While explaining the thought experiment, Putnam notes that if an Earthling were to travel to Twin Earth and discover that the substance called "water" there is not H₂O but XYZ, they would report: “On Twin Earth, the word ‘water’ means XYZ.” Symmetrically, a Twin Earthling visiting Earth would say, “On Earth, the word ‘water’ means H₂O.” Similarly, from this perspective, drawing on Kripke, it can be argued that the sentence “Bertrand Russell is a famous chef” is false, even if the speaker mistakenly believes that Russell was a famous chef rather than a renowned philosopher. What makes the sentence false is not the speaker’s internal conceptualization or belief, but the fact that the expression "Bertrand Russell" refers to the philosopher, regardless of the speaker’s mental concepts.

Two points need to be emphasized here. First, this view, much like Frege’s, draws a distinction between sense and reference, and it treats reference as constitutive of meaning. For example, people who, for years, did not know that the Morning Star and the Evening Star are the same celestial object, Venus, nonetheless used both expressions and the referent of these expressions remained Venus, irrespective of that ignorance. In a similar fashion, for readers less familiar with the philosophy of language, it may be helpful to briefly reference the intension–extension distinction. In this context, extension refers to the things an expression picks out, while intension refers to the manner in which it picks them out. In rough terms, extension corresponds to reference, and intension to sense. To illustrate, imagine a room containing 20 cube-shaped objects and 10 round ones. By coincidence, the round objects are colored green and blue, while all the cubes are red, and nothing else in the room is red. In this case, both “the red objects in the room” and “the cube-shaped objects in the room” are expressions with different intensions, but their extensions are identical. The second point I wish to stress is that reference, that is, on this account, meaning, is not individual, but collective and historical. This is one of the most commonly misunderstood aspects of Putnam’s and Kripke’s theories. The core claim here is that when a term is introduced, regardless of the reason behind its introduction, and begins to circulate within a linguistic community, either orally or in writing, its reference remains causally fixed by that initial usage, even if all associated senses later change. Precisely for this reason, someone using the term with a particular sense or intension may still succeed in referring to the original object, even if they are unaware of its reference. Likewise, even if the individual who first introduced the term did so with an incomplete or mistaken sense or intension, the referent remains fixed in a collective and historical manner.

However, this account has been criticized in different ways. We can briefly mention three of them: 1) The metaphysics of chemistry suggests that we cannot claim water = H2O, since it is never that simple and there are many more factors and constituents (VandeWall, 2007). 2) Many scientific realists do not share this commitment to correspondence and find it unnecessary for scientific realism. For example, Dupré makes a case against natural kinds using biological taxonomy as a counter-example (Dupré, 1993). 3) There is an entirely different tradition in philosophy of language with a different account of language, especially since Wittgenstein’s Philosophical Investigations (Hallett, 1991; Hanfling, 1984, 2000; Loomis, 2017). This tradition equates meaning with use, which is context-dependent, and rejects ‘natural kinds’ as neither necessary nor possible.

An elaboration of the third point is in order here. The concept of language games is central to Wittgenstein and Wittgensteinian philosophy. Accordingly, language is constituted by the act of speaking in a particular situation, where the situation itself provides the relevant rules for the act. The act of speaking, in this sense, is continuous with other forms of action, and the relevant rules are not explicit most of the time, just as in many of the games we encounter. Therefore, the act of speaking can be construed as a ‘move’ in a game, where the game itself, namely which game it is, its rules, etc., can only be apparent or emergent locally and contextually, albeit only vaguely. Thus, mind-independence or truth becomes irrelevant here.[3] Perhaps the following passage from Philosophical Investigations can be more explanatory [4] (Wittgenstein, 1953, aphorism 23):

23. But how many kinds of sentence are there? Say assertion, question, and command?—There are countless kinds: countless different kinds of use of what we call “symbols,” “words,” “sentences.” And this multiplicity is not something fixed, given once for all; but new types of language, new language-games, as we may say, come into existence, and others become obsolete and get forgotten. (We can get a rough picture of this from the changes in mathematics.)

 

Here the term “language-game” is meant to bring into prominence the fact that the speaking of language is part of an activity, or of a form of life. Review the multiplicity of language-games in the following examples, and in others: Giving orders, and obeying them, describing the appearance of an object, or giving its measurements, constructing an object from a description (a drawing), reporting an event, speculating about an event ... forming and testing a hypothesis,  presenting the results of an experiment in tables and diagrams, making up a story; and reading it, play-acting, singing catches, guessing riddles, making a joke; telling it, solving a problem in practical arithmetic, translating from one language into another, asking, thanking, cursing, greeting, praying...

As mentioned above, this way of thinking about language opened up new avenues, and there is a rich literature, growing day by day, in which Putnam’s claims regarding natural kinds and their semantics are challenged.

On the other hand, the causal-historical theory of reference, or the Putnam-Kripke model, is not Putnam’s only argument in favor of scientific realism. Putnam put forward another famous argument, the “No Miracle Argument” or “Miracle Argument,” which has been fundamental to the debate. The name refers to Putnam’s statement that realism “is the only philosophy that does not make the success of science a miracle” (Putnam, 1975). To put it briefly, the idea turns on the question of how our best scientific theories could be successful if they were not true or approximately true; and, the argument continues, only the realist stance can explain this. However, for realists to back this argument up, they need to show how our best theories can and do approximate truth. For this, selective realisms might have an answer.

2.2  Selective realisms: explanationist realism, entity realism and structural realism

Selective realism is the collection of proposals which maintain that theories change throughout the history of science but that select parts of theories are retained, thanks to which a particular kind of accumulation is achieved. In turn, the accumulation here might be thought of as approximation to truth. Another important factor here is realists’ desire to reconcile scientific realism with the widely accepted idea that most theories, even if not all, are false. The aforementioned accumulation might provide an answer as to which parts of theories are true and which parts are not.

There are different kinds of selective realism depending on what they take as being retained. For the present discussion, though, I will broadly tackle what their basic claims are, how they mainly diverge and what their common problem is.

As stated in the section title, there are mainly three selective realisms: explanationist realism, entity realism and structural realism. Explanationist realism holds that what is retained in theory change are those parts of the previous theory that were essential and ‘necessary’ for explaining/predicting the relevant phenomena. In this vein, Kitcher proposes that there are “idle parts” or “presuppositional parts” of theories along with the “working parts” which do the ‘real’ work (Kitcher, 1993).

           

On the other hand, entity realists argue mainly that, through experimentation using different techniques and methods, it is possible to see whether we can manipulate the relevant unobservable entity to create certain predicted kinds of effects, and to the extent that we can do this, it is safe and reasonable to believe in the mind-independent existence of that entity (Hacking, 1983; Miller, 2016). Notice that although not every entity realist commits to the causal-historical theory of reference, entity realism is relatively more compatible with it than other approaches are.

           

Lastly, structural realism, historically also known as relationism, is the view that even though the unobservable entities proposed in scientific theories cannot correspond to the mind-independent reality, the structure between them can do so. And just as with other selective realisms, we can see how some ‘structure’ is retained through theory changes (Chakravartty, 2007; Frigg & Votsis, 2011; Psillos, 2006).

           

Every selective realism faces a particular kind of challenge, and these challenges are voiced and leveraged against each other by the proponents of each realism. For the present purposes, though, the important thing is what is common to all selective realisms: they all infer approximation to truth, or truthlikeness if you like, from some form of continuity in the history of science. Whether this inference is justified, or whether it is problematic in terms of approximation and truthlikeness, will be discussed below. However, selective realists are not the only ones making inferences from history. Anti-realists are also famous for their historical inference in the exact opposite direction: the pessimistic meta-induction, or shortly, meta-induction.

2.3  Pessimistic meta-induction

The pessimistic meta-induction or pessimistic induction (sometimes called the disastrous induction; Psillos, 2022) was famously proposed by Larry Laudan in his 1981 paper[5] (Laudan, 1981). Laudan argues that the history of science is a graveyard of theories; what is more, it is also a graveyard of successful theories; even worse, a graveyard of successful theories that were thought to be the best theories of their own time. Laudan, generalizing from this observation, claims that just as our past successful theories turned out to be false, we do not have any compelling reason to think that our currently successful theories will not turn out to be false in the light of new theories in the future. Thus, Laudan does not share the ‘optimism’ of realism and argues against the No Miracle Argument.

           

The argument has received different criticisms and led to much controversy. One such criticism takes the pessimistic argument to be fallacious reasoning in that it commits the base-rate fallacy. The idea stems from how induction is normally calculated and claims that without knowing the distribution of successful and unsuccessful theories among the totality of theories — which is impossible to know for obvious reasons — it is mathematically impossible to carry out such an induction (Lewis, 2001; Mizrahi, 2013, 2016). In fact, this argument is also valid in the case of the No Miracle Argument when it is understood as an induction from the historical record in favor of the claim that successful theories are true or approximately true (Howson, 2000). Thus, the base-rate fallacy objection is better seen as an argument against historical inductions where the base rate is unknown.
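
To see the base-rate point more concretely, the objection is often cast in Bayesian terms. The following is a minimal sketch in my own notation, not a reconstruction of Lewis’s or Mizrahi’s formulations. Let $S$ stand for “the theory is successful” and $T$ for “the theory is (approximately) true”:

$$
P(T \mid S) \;=\; \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}
$$

Here $P(T)$ is the base rate of (approximately) true theories among all theories ever proposed. Without some estimate of this base rate, the posterior probability on the left cannot be computed, so neither the pessimist’s induction against realism nor the optimist’s induction in its favor can be evaluated from the historical record alone.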

           

Independently of the base-rate fallacy objection, Chang (2003) criticises the scientific realist’s alleged inference from continuity in theory changes to approximate truth or truthlikeness, as there is no logical link between the two. Being aware of these objections, Psillos (2009) takes the realist argument as constituted by two distinct steps: the first concerns itself with showing that there is continuity in theory changes, and since this is not enough to establish approximation to truth or truthlikeness, the second comes to the rescue in the form of inference to the best explanation (IBE). More clearly, the second step is the argument that the continuity at hand, which is shown by the first step, cannot be explained without assuming approximation to truth or truthlikeness. However, whether IBE in this philosophical context is a legitimate argumentative move is a further controversial issue for various reasons. For instance, one might reject the claim that the first step of the argument has been successful and has established continuity in theory change explanation-wise, structure-wise or entity-wise. Or someone, like Van Fraassen (1989), could argue that theory changes might be subject to some form of Darwinian selection, where the selection dynamics, e.g. factors that are important to the scientific community, lead to such continuity, thus removing even the need to develop a further explanation. Or, in parallel with the socio-historical approaches to science that I will mention below, it is possible to argue that the alleged continuity in theory changes is actually an empirical phenomenon that can and should be explained by socio-historical processes in science. One can also claim that scientific realism’s assumption of approximate truth or truthlikeness is not “explanatory” either, and even if we did make an inference to the ‘best’ explanation, it would not be in favor of scientific realism.

           

On the other hand, once the argument is construed in two steps and the direct link between continuity in theory changes and approximation to truth or truthlikeness becomes allegedly unnecessary, another logical possibility appears. If it is not the continuity in theory changes that gives us realism, then maybe it is the discontinuity in theory changes. This is the view defended by Devitt (2007), Doppelt (2007) and Fahrbach (2011). We can summarize it in three branches: 1) Our current science is in some sense incomparably better than that of the past, and our best theories are confirmed to a degree that allows us to say there is a qualitative difference. 2) The last century has witnessed exponential growth both in the number of papers published and in the number of scientists. Thus, for the currently successful theories in science, it is safe to say that they are much better confirmed by evidence. 3) It would be difficult for the anti-realist to find a theory which was accepted in the last century and later abandoned. For Fahrbach (2011), this shows that our current best theories are qualitatively different from the previous ones, as it would have taken a shorter time to discover that they were false and abandon them.

           

However, the first point can be criticized on the grounds that past theories will always look worse through the lens of current theories. As for the second point, Wray (2013) notes that the argument from exponential growth would have been available at any point — or at most points, or at least at some past points — in time, which undermines the argument itself, showing that exponential growth cannot establish any qualitative change in favor of current theories. As for the last point, it can be argued that such an argument can only show that science has matured in some ways, but nothing more. Moreover, considering how much easier it is for anyone to publish something compared to a century or more ago, the growth becomes even less suggestive of approximate truth or truthlikeness and instead becomes an empirical matter of sociohistorical factors. Alai (2017) also asks: if there were a qualitative difference between the theories of current science and those of the past, how could we explain that past theories had been successful even to some degree? Consequently, the argument from discontinuity requires its proponents to show how there is a qualitative and conceptual difference between the failed theories of the past and the successful ones of our current science. The burden of proof is on their shoulders, and it seems there is no compelling case yet.

 

Let us get back to the pessimistic meta-induction. The problem that led us to this sidetrack was mainly the criticism of inductive inferences from the history of science, whether realist or anti-realist. However, might it also be possible, or even more accurate, to construe the pessimistic induction not as an inductive argument but as a deductive one? Wray (2015) does exactly this and claims that the pessimistic induction is actually not an induction but a deduction and that, in fact, just one single successful but false theory, whether in the history of science or in current science, would be enough to reject scientific realism’s claim. On this view, the historical record plays only a supportive role in showing that there is of course one, and possibly more than one, successful but false theory, which challenges the claim that there is a link between success and approximate truth or truthlikeness.
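
Put schematically, and in my own rendering rather than Wray’s formalism, the deductive reading targets the realist’s general claim rather than projecting a historical trend. Writing $S(T)$ for “theory $T$ is successful” and $A(T)$ for “$T$ is approximately true”, the realist link

$$
\forall T\,\big(S(T) \rightarrow A(T)\big)
$$

is refuted by a single counterexample, i.e. by establishing

$$
\exists T\,\big(S(T) \land \neg A(T)\big),
$$

and on this reading the history of science serves merely to supply such counterexamples.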

2.4  Underdetermination and Stanford’s New Induction

More recently, Kyle Stanford (2006) proposed a new inductive argument from the history of science, which he aptly calls the “New Induction.” Since the New Induction makes reference to the good old issue of underdetermination, we should tackle that first.

           

The problem of underdetermination can be traced back to the 19th-century physicist Pierre Duhem. Duhem observed that there is not, and cannot be, a ‘crucial experiment’ that could either confirm or refute a given theory once and for all. This is because all hypotheses and all experiments involve auxiliary assumptions and background beliefs that might act as confounders in the result of experimentation, which means that it is never possible to test a single hypothesis or theory in isolation. Even if one tried to isolate further possible confounders and test them, those further tests would also rely on other auxiliary assumptions and background beliefs, resulting in an endless chase. It thus becomes clear that experiment plus logic is never conclusive and cannot determine our choice of one theory over another. The term ‘underdetermination’ refers to this situation, in which theories are underdetermined by the evidence. This poses a challenge for realism because it shows that there might not be a rational or logical, and in this sense ‘universal’, way of theory choice, which opens up a space for possible factors of an arbitrary, or at least mind-dependent, nature. Even though Duhem initially proposed this idea only for theoretical physics, scholars realized it is applicable to pretty much every field of science.
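
Duhem’s point can be put schematically. The following is a minimal logical sketch of the standard Duhem–Quine rendering, in my notation rather than Duhem’s own: if a hypothesis $H$ together with auxiliary assumptions $A_1, \dots, A_n$ entails an observation $O$, and the observation fails, logic alone licenses only the rejection of the whole conjunction:

$$
\big((H \land A_1 \land \dots \land A_n) \rightarrow O\big),\;\; \neg O \;\;\vdash\;\; \neg(H \land A_1 \land \dots \land A_n) \;\equiv\; \neg H \lor \neg A_1 \lor \dots \lor \neg A_n
$$

That is, the failed prediction tells us that the hypothesis or at least one of the auxiliary assumptions is false, but not which one.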

           

Later, in the mid-20th century, Quine (1951, 1975a, 1975b) famously extended this to the totality of our knowledge with his concept of the web of beliefs. Accordingly, not just our scientific hypotheses but all of our beliefs, all of our knowledge, constitute an interconnected web. The following quote from Quine himself may be helpful (Quine, 1951):

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges (p. 42)

Thus, although we can somehow register that something in this web of beliefs is false, we cannot know what exactly it is, since the web ‘impinges on experience only along the edges’, whether in the context of scientific epistemology or epistemology in general. Of course, one can argue against this by pointing out that in both science and other contexts of knowledge we can revise and change our web of beliefs according to some criteria, some rules, etc. This is what Laudan basically claims (Laudan, 1977, 1984). However, as a counterpoint, this would miss Quine’s main claim, which is that even those criteria or rules are already within the web of beliefs. Moreover, those criteria or rules might not be able to determine the exact nature of the revision, in that alternative revisions might be ‘reasonable’ at the same time.

           

This is a radical stance. The challenge for realism here is mainly the idea that this web of beliefs is subjective; it varies from person to person, from culture to culture or from one historical period to another, and it seems that there is no way we can construct or discover a procedure, algorithm, criterion or meta-constraint of some other kind that would be applicable to all possible webs of beliefs. If we did find such a meta-constraint, it would help us bridge the gap between the web of beliefs and a mind-independent reality and select certain types of webs of beliefs over others. This radical scepticism has been taken up by the socio-historical approaches to science that we will mention below. Proponents of these approaches claim that webs of beliefs are socio-historically contingent, whereas Quine would take another route and state that the nature of human psychology is the definitive factor in this context.

           

However, this radical holism is not the only way to interpret the problem of underdetermination, and much of the discussion has revolved around a more modest reading. In fact, Stanford (2023) distinguishes between the two readings as holist underdetermination and contrastive underdetermination. Holist underdetermination arises when we cannot test a hypothesis in isolation and determine whether we obtained the result because of our hypothesis or because of some other confounding variable such as auxiliary assumptions or background beliefs. Contrastive underdetermination, on the other hand, is the situation where a given set of data or evidence can be accounted for by more than one, and possibly conflicting, alternatives. Even though these two are obviously related to an important degree, they differ in their emphases. The former emphasizes the challenge of singling out a factor when faced with some data or evidence, whereas the latter emphasizes the difficulty of choosing between alternative theories or ‘webs of beliefs’ given some data or evidence. One implication of the distinction is that even if we did single out a relevant factor and determine whether it led to a given result (data/evidence), there would still possibly be more than one hypothesis leading to that result. Notice that ‘contrastive underdetermination’ is closely related to the notion of ‘empirically equivalent alternatives’ that has been central to recent debates in philosophy of science.

           

The issue of empirically equivalent alternatives or rivals is, as the name suggests, that there might always be some other alternative which is empirically equivalent to the theory we believe in, thereby undermining its claim to truth. However, critics contend that this issue is mostly proposed as a possibility in principle. In actuality, on the other hand, it is either the case that there are no alternatives available at a given time, or the proposed alternatives are not strong enough to be challenging.

 

Stanford (2006) claims in this context that underdetermination need not be restricted to empirically equivalent rivals: it would still be problematic if there were empirically inequivalent but equally confirmed alternatives. Notice that the difference is all about unobserved observables. Two theories might make different predictions regarding unobserved observables, yet they can both be equally confirmed by the evidence at hand. Stanford claims that we almost always witness such a weak form of underdetermination, both currently and throughout the history of science. What is more, on Stanford’s view, this is a recurrent situation: if, say, we chose an alternative theory over the one at hand, then this weak underdetermination (another alternative) would arise for the newly chosen theory as well. He further claims that there might also be such alternatives (empirically inequivalent but equally confirmed) that are presently unconceived and can only later occur to researchers. His New Induction makes the case that the Problem of Unconceived Alternatives has always been there throughout the history of science, from Aristotle to String Theory. Scholars literally could not conceive of equally confirmed alternatives at the time, and when they did, the new alternative replaced the past theory. Hence, he thinks, we are most probably in the same situation with respect to our current theories.

2.5  Socio-historical approaches

There have been both sociological and historical turns in the philosophy of science. Since they are related and similar in some of their views, I will present them in a single section.

           

The so-called historical turn in philosophy of science is taken to start with Kuhn (1962/1970). In his monumental work, The Structure of Scientific Revolutions, Kuhn claims that when we take the history of science seriously and look closely at how science works, we see that there are actually two different phases: normal science and crisis. During normal science there is nothing peculiar, and scientists work on local problems without the need to consider the theoretical or philosophical questions behind them. However, during a crisis phase, most of the time triggered by contradictory evidence piling up, scientists grow more concerned about the theoretical and philosophical aspects, which later results in a paradigm shift, where a new philosophical and theoretical framework, i.e. a paradigm, replaces the older one. This cyclic theme in the history of science is actually the norm that we encounter all the time, Kuhn claims. A paradigm brings with it its own ontology, its own methodology, techniques, phenomena, explanations and even its own conceptualizations and perceptions. For Kuhn, not just phenomena are theory-laden; everything is laden with everything else, which may remind us of Quinian holistic underdetermination. In this regard, one of Kuhn’s central claims was the incommensurability between paradigms, in the sense that two scientists working in different paradigms are said to be living in different worlds; they cannot even communicate properly, since they mean different things even when they use the same terms.

           

The issue of incommensurability between paradigms has been a controversial topic and there are different stances. After Kuhn, the contributions of two important figures, Lakatos and Laudan, became prominent in keeping the valuable insights of Kuhn’s work while providing a considerably less radical view. Lakatos proposed the notion of research programmes, instead of paradigms, which contain a hard core and auxiliary hypotheses. Roughly, the idea is that the hard core is the essential part of a programme, which cannot be abandoned without abandoning the entire programme as a whole, whereas the auxiliary hypotheses are susceptible to change. Newton’s three laws of motion are an example of a hard core in this context. The three laws of motion and the law of universal gravitation do not, by themselves, specify what one will observe. To generate testable predictions, these laws must be supplemented with a range of auxiliary assumptions about the positions, masses, and relative velocities of celestial bodies, including that of the Earth.

Laudan, on the other hand, proposed research traditions, whose components are only loosely related in comparison to both Kuhn’s and Lakatos’s views, in that elements might enter or leave the core from time to time, or a later update of the theory may account for fewer phenomena than before for various reasons, which is sometimes called Kuhnian loss. Laudan also maintained that there might be more than one research tradition in a given field at a given time, as opposed to Kuhn’s paradigms. Moreover, these traditions might be in competition and can become dominant in the field in different periods. Notice that Laudan’s picture still admits that research traditions have their own ‘way of doing things’ in terms of metaphysical understanding, methodology or other institutional practices, even though these are loosely related here. Hence the problem concerning mind-independent reality arises again, since it is an open question how to attain some mind-independent truth within a particular research tradition, or within a set of research traditions that often conflict with one another at a given moment. Laudan only refers to the criteria that a scientific community holds at a given time, but those criteria themselves would fall under a given paradigm or a given web of beliefs, to speak in a Kuhnian or Quinian sense. This is pretty much the same issue that we mentioned above in the section on Quine’s holistic underdetermination.

 

Notice that the trend we are talking about here takes the ‘conventions in science’ seriously. In parallel with that, the questions of what the conventions in science are and how scientific knowledge comes about conventionally gained prominence. Thus the sociological approach emerged. The sociological approach focuses on how social settings, political factors, the different practical conventions in different institutions, or competition between individuals, institutions or entire schools of thought affect or even determine scientific knowledge. [6] Even though this can be compatible with realist commitments, the real controversy arises when one considers the Strong Programme in the sociology of science (Barnes, Bloor & Henry, 1996; Bloor, 1976). Proponents of the Strong Programme have two main commitments: 1) One should be totally neutral as to the success, truth or falsity, or epistemic status of theories, findings, interpretations, claims, beliefs, practices, etc., to avoid possible biases that might stem from intra-theoretic or intra-conventional leanings. That is to say, for proponents of the Strong Programme, one should investigate what is taken to be true and what is taken to be false all alike. 2) Social factors not only affect scientific processes, including the choice of one theory over another; they determine them.

           

The second commitment can be seen as related to the first one, because when one explains how both so-called true and false beliefs are produced by the same social principles and the same social dynamics, truth becomes irrelevant in a mind-independent sense. This stance also undermines the controversial distinction between scientific knowledge and knowledge in general, including the everyday knowledge or knowledge-related practices of different cultures and social settings, in line with the contextualist views that we touch on a couple of times in this thesis.

           

In its claims, the Strong Programme makes use of case studies to an important extent. This approach is often criticized on the grounds that case studies cannot support general conclusions about science, or that the case studies employed as examples are frequently misrepresented. However, even though these criticisms are methodologically important, the claim that some or all case studies conducted by sociologists of science turn out to be flawed cannot give an a priori reason to think that social factors have little importance. This is especially so considering that most scholars, even on the realist side of the debate, acknowledge that conventions are important to a varying extent. Therefore, to think that the conventions taking place in social situations might be affected or even determined by social and/or political factors seems reasonable. It is enough to show it only once to raise suspicion. Hence, the burden of proof is on the shoulders of the realist, or anyone claiming that knowledge-related dynamics are not affected by such factors. Given that there are so many decisions and practices, and even theory changes, throughout the history of science, this might be practically impossible, which would again be a challenge for the realist side of the debate.

 

2.6  Perspectival realism

Acknowledging the problems and the controversy surrounding realist claims and the other topics in philosophy of science that we have also tried to cover to some extent, Massimi (2018, 2022) offers a minimal account of realism that bypasses most of the issues about theory change, continuity in the history of science, etc. She claims that despite the alleged problems of the realist stance we should not drop the concepts of ‘truth’ and ‘reality’ but should try to preserve them in a new framework called ‘perspectival realism.’ She grounds truth in ‘perspectival cross-evaluation.’ On Massimi’s view, of course there are different perspectives, different cultures, different conventions in terms of ‘knowledge’, and yes, they often conflict with each other. However, she maintains, they do not simply conflict with each other; at the same time, they are also compatible with one another and can establish some agreement. This agreement is actually what we can take to be truth, albeit in a perspectival fashion. For her, confrontation between different perspectives may lead to the discovery of such truths.

           

Admittedly, this position has an advantage over the previous ones in that it takes as its central point the fact that there are many, and often conflicting, perspectives, belief systems, etc. in the world. However, it shares several problems with the previous positions as well. First of all, perspectival realism cannot give us a mind-independent truth or reality, or an approximation thereof. It can only show that we can establish consensus to some extent between perspectives, a consensus which might also be cumulative in nature.[7] Massimi herself admits this, and she further states that perspectival realism rejects the very conception of objectivist, mind-independent truth. I take this to be a problematic move, as the conception of truth seems to be chosen ad hoc. It is certain that we are, and always have been, able to define or conceptualize any concept as we wish. However, the controversy between scientific realism and anti-realism seems to stem from the historical consensus in the field as to the relevance of mind-independence to realism. Without doing justice to this observation, it seems to me an arbitrary move to make the problem appear resolved and to reduce it to a mere semantic convention. On the other hand, if the perspectival realist chooses to take this route, they must at the very least explain why we should not simply abandon the traditional concept of (mind-independent) truth altogether and replace it with an alternative. This is what philosophical integrity and linguistic hygiene would require. Moreover, truth and objective reality have always been put forward as a meta-perspectival or non-perspectival authority to appeal to when faced with a conflict. If truth is not going to serve this function, why insist on keeping it?

           

One last problem with perspectival realism concerns cross-perspectival evaluation. For cross-perspectival evaluation to work, it should be possible to assess a given statement from both perspectives, and the statement should satisfy the performance-adequacy standards[8] of both. For this, an account of translation between the perspectives must be established. This is because, even though translation and cross-perspectival evaluation seem possible between near perspectives or narrow contexts, they might be nearly impossible, or even outright impossible, between perspectives that are far from each other. What I mean by the distance between perspectives might be best illustrated with a few examples. The Atkinson-Shiffrin model of memory (Atkinson & Shiffrin, 1968) and the distinctiveness theory of memory (Glenberg & Swanson, 1986; Glenberg, 1997; Nairne, 2006; Schmidt, 1991) can be said to be near perspectives, whereas the ecological theory of visual perception (Gibson, 1979) and accounts from the deep learning approach to neuroscience (Richards et al., 2019) are considerably farther from each other. Further still, the Aristotelian conception of the soul and current cognitive science constitute very distinct and distant perspectives. To take it to a further extreme, the distance between the perspective of a non-industrialized society, say, an Amazonian people, and the perspective of particle physics would be extreme. Therefore, I argue that translation, and thus cross-perspectival evaluation, is a problematic step in the argument that needs to be cashed out.

2.7  Constructive empiricism

Before going into the concluding section of this chapter, I want to explain why I do not adopt a constructive empiricist approach. This is important since constructive empiricism might be the most renowned stance in the anti-realist camp (Van Fraassen, 1980, 2001, 2002).

           

Constructive empiricism was proposed by the philosopher Van Fraassen. His main motivation for the framework is to propose an empiricist view of science which does not fall into the traps of logical positivism. That is, he tries to give an account with minimal metaphysical commitments while also giving as much credit as possible to scientific knowledge by grounding it in experience, as is typical of empiricism. The observable/unobservable distinction plays a key role here, because for Van Fraassen science does not aim to give a true picture of mind-independent reality; it aims to give empirically adequate theories, to save the phenomena. And on his view, theories are empirically adequate to the extent that their statements about observable phenomena are true. Then the question arises: what is observable and what is unobservable?

             

Van Fraassen’s observability criterion is a central aspect of his constructive empiricism, and it is defined with respect to the limits of human perceptual capabilities. According to Van Fraassen, an entity is observable if it is, in principle, accessible to human perception without the aid of instruments. For instance, distant stars or planets qualify as observable entities, not because we currently perceive them directly, but because if they were close enough, we could. Their observability is tied not to current accessibility but to the modal condition of potential unaided observation. In contrast, microscopic entities such as bacteria do not satisfy this condition. We can see them only through technological instruments like microscopes, whose reliability and function rely on further scientific assumptions. Thus, the observability criterion draws a line between what can be said to fall within our natural reach and what depends on theoretical constructions and instruments whose epistemic status is not theory-neutral. This distinction has been one of the most debated aspects of constructive empiricism, raising multiple objections, particularly those of relativity and circularity. Before addressing these concerns, it is worth emphasizing an important clarification Van Fraassen himself offers. He acknowledges that there is no sharp a priori boundary between observable and unobservable phenomena. Instead, what counts as observable ultimately depends on the actual capacities of epistemic agents and the context in which observation is situated. In other words, the line between observable and unobservable is drawn not from some metaphysical or purely conceptual demarcation, but from a contingent and empirical consideration of what some particular community of observers is able to perceive, given their biological, technological, and social circumstances. This point becomes clearer when Van Fraassen introduces the famous thought experiment involving an epistemic community whose members are born with electron microscopes biologically integrated into their sensory systems (Van Fraassen, 1985). In such a world, microscopic entities like molecules or bacteria would be observable for them, just as trees and rocks are for us. Thus, the observability criterion is not fixed across all contexts or agents. It is, in a sense, relative to the epistemic community. However, this dependence brings us to the first major objection, the problem of relativity.

 

The relativity critique suggests that if observability is relative to the epistemic community or to the contingent capabilities of observers, then it risks becoming theory-dependent. That is, any scientific theory could stipulate its own observability criterion compatible with its commitments, and thereby justify or exclude entities accordingly. In such a scenario, the observability criterion no longer functions as an independent constraint but collapses into the very theoretical framework it was meant to regulate. This undermines one of the main attractions of constructive empiricism, namely its attempt to avoid metaphysical commitment while retaining epistemic humility. To address this issue, Van Fraassen asserts that the observability criterion is theory-independent. He claims that it is not the theory that determines what is observable, but rather our biological capacities and perceptual modalities. Yet this response invites a second, closely related objection, circularity. If our knowledge about what is observable is itself derived from scientific theories, such as our understanding of how microscopes work or how visual perception operates, then using that very knowledge to define a supposedly theory-independent criterion seems circular. That is, we use theoretical commitments to draw the line between observable and unobservable, only to then justify those same commitments based on what we define as observable.

 

This circularity becomes particularly pressing when we consider the role of instruments. For example, microscopes are only trustworthy epistemic tools if we accept the theories that explain how they work and validate the reliability of their outputs. But those theories are, in turn, justified by scientific practices that rely on observable outcomes. In rejecting small-scale entities as observable because they require theory-laden instruments, Van Fraassen appears to be engaged in a circular maneuver: appealing to theory to limit what theory can be committed to. As such, the claim of theory-independence becomes increasingly difficult to sustain. One way to interpret Van Fraassen’s insistence on theory-independence is to suggest that he sees the observability criterion as belonging to a meta-scientific or philosophical domain, something like a conceptual framework external to scientific theory itself. On this reading, the criterion functions as a philosophical constraint on scientific practice, and thus its independence is secured by its non-scientific nature. However, Van Fraassen explicitly rejects this interpretation. He states clearly that he does not propose the observability criterion as a philosophical or conceptual tool external to science (Van Fraassen, 1980, Ch. 3, Sec. 7). He insists that his empiricism is grounded within scientific practice, not above it. Therefore, this first interpretation fails to capture his intent. The alternative interpretation is that the observability criterion is scientific in nature and that science itself determines what counts as observable. Yet this reintroduces the circularity problem. If the scientific framework is used to define what is observable, then observability can no longer constrain science from the outside; it becomes one of its internal assumptions. Moreover, if Van Fraassen maintains both that observability is theory-independent and that theory itself determines it, he seems to be offering two incompatible claims. Either observability is external and constraining, or it is internal and relative. Holding both simultaneously appears contradictory.

 

This tension has been acknowledged by Van Fraassen himself. In his later exchange with Morton (Morton & Van Fraassen, 2003), he concedes that the issue of circularity cannot be dismissed so easily. He admits that the observability criterion’s claim to theory-independence does not, in fact, resolve the charge of circularity. As such, he concludes that some level of circularity may simply be unavoidable if one adopts the constructive empiricist view. That is, if we accept that science should be committed only to what is observable, and yet our knowledge of what is observable is itself shaped by science, then we must accept this feedback loop as a structural feature of our condition.

 

In light of these considerations, I argue that the observable/unobservable distinction, so fundamental to constructive empiricism, cannot be maintained in a principled way. Therefore, I reject the distinction and, by extension, the foundational role it plays in constructive empiricism. [9]

           

In line with concerns already noted in the literature, I want to raise an additional worry about the observable–unobservable distinction. As discussed earlier, this distinction appears arbitrary, particularly in light of the relativity and circularity problems. What further strengthens this concern is that, especially in relation to the relativity problem, the distinction between what counts as observable and unobservable is troubled by a continuity between scientific and folk ontologies or, more broadly, between scientific and natural languages. It seems difficult, if not impossible, to draw a clear line and say, “this part is scientific,” and “beyond this point is just folk ontology.” Philosophers like Quine, Sellars, the Churchlands, and Rorty have pointed out this continuity. Natural languages themselves fall under the category of theoretical languages, and the folk ontologies they express are not theory-independent; they are themselves theoretical constructs. Therefore, the anti-realist arguments we have discussed in relation to scientific realism also apply to these domains. This undermines any attempt to say that observations made through unaided perception are ‘real’ while those involving theoretical instruments or concepts are ‘unreal.’ Observations via unaided perception are still mediated by natural language and folk ontology. In this light, it seems difficult to sustain a non-ad hoc position that is anti-realist about science but realist in a broader metaphysical sense.

 

Constructive empiricism is, in fact, much broader than the part we have covered here. I just wanted to present the part essential to our debate about realism and my criticism thereof. For a thorough understanding of the position, one can consult the original sources cited.

 

2.8  Conclusion

 

In this chapter I gave an overview of the debate between scientific realism and anti-realism, along with my comments and stance on it. In doing so, I aimed for breadth over depth, since my motivation was to show that even though there has been a heated debate and various arguments, most of them collapse into some particular version which ultimately either does not give us an account of mind-independence or trivializes the concepts of realism and truth by outright rejecting mind-independence as a prerequisite for realism.

           

We mentioned how the Putnam-Kripke model of reference is problematic from the standpoint of the Wittgensteinian tradition in philosophy of language, noting that the concept of language games and Wittgensteinian contextualism will remain implicit in our further discussions. Then we saw how selective realisms all collapse into some particular form of consensus, as they aim to show a historical continuity that might very well be the result of certain types of conventions, and this is itself not enough to establish mind-independence or approximate truth. Likewise, we dealt with perspectival realism and found it problematic on three grounds: (1) it trivializes the concept of truth in an ad hoc move, (2) it is only capable of establishing a particular kind of cross-perspectival consensus[10] and (3) it leaves an important gap in the argument concerning the translatability of statements, especially between perspectives that are distant from each other.

 

On the other hand, we found the discontinuity thesis about current science and its success problematic for two reasons: (1) it can only show that science has matured in some ways, and (2) there does not seem to be a compelling case in favor of a conceptual difference between past science and current science. Likewise, in the anti-realist camp, constructive empiricism suffers from a contradiction in its observable/unobservable distinction.

           

Quinian holism, Stanford’s New Induction and the Problem of Unconceived Alternatives, and the sociohistorical approaches continue to challenge the realist stance. In parallel with this, I take science to be a particular kind of cultural/social practice[11], an emergent institution developed over centuries and embedded in the other practices of our modern world. Even though this makes it unable to legitimately claim mind-independent truth or an approximation to it, and contrary to the common first reaction to this claim, it is not the case that such a conception of science renders it equivalent to other practices like religion, spiritualism, etc. Every practice is a particular kind of game, with its own implicit and explicit rules, its own functions, its own results and implications. Therefore, science can still be science without claiming hold of mind-independent truth in any non-trivial sense, because we know and see what we can do with science in medicine, in technology, in education, etc. How science serves its alleged function without realism is a big question for history, sociology and philosophy of science altogether. An important stance in this regard is that science gives us explanations of our world, and a particular kind of explanation at that.

 


[1] For more details, see: (Churchland, 1985; Hacking, 1985; Teller, 2001; Van Fraassen, 1985; 2005)

[2] I am aware that this is not the only account of natural kindhood. I do not have enough space to dive into that discussion, though suffice it to say that there are theories of natural kinds that are compatible with anti-realism.

[3] This view, which takes language as a form of action, as something used as a tool to change things in the world rather than to attain truth or reality, has been defended more recently by the embodied trends in cognitive science (Baggs, 2015). Relevantly, Vygotsky is another prominent historical figure who is being rediscovered lately (Vygotsky, 2012). In a similar vein, see also Christopher Gauker (1994).

[4] Even though it will not be at centre stage argumentatively, a Wittgensteinian kind of contextualism will be lurking in the background throughout the present thesis. I do not commit to some particular reading of Wittgenstein, and I broadly construe and borrow the idea of games or language games as a way of thinking.

[5] For a similar argument called “Principle of No Privilege,” see: Hesse (1976)

[6] As we mentioned above, one might think this can also be formulated as ‘how webs of beliefs are constructed via social factors and conventions’, as opposed to Quine’s appeal to the nature of human psychology as the ground.

[7] Notice that the arguments in favor of scientific realism from the continuity in history of science are also of this nature. They can only show that there is a consensus throughout history that is also cumulative to some extent. We mentioned this above with reference to Psillos (2009).

[8] For more information on the working details of perspectival realism, see Massimi (2018, 2022).

[9] Note that I do not find this particularly problematic, since any philosophical account broad enough to cover the entirety of science, or anything on a par with that, would probably fall into some sort of circularity in the end. I view the relativity problem as the bigger concern here.

[10] This might be called synchronic consensus whereas continuity theses like selective realisms that try to show how science keeps some specific parts of theories in theory changes throughout history are diachronic consensus.

[11] For some formulations of the sociohistorical/cultural practice approach to science, see: (Bourdieu, 2004; Collins, 1981; Collins & Pinch, 1993; Knorr-Cetina, 1981; Latour & Woolgar, 1979; Pickering, 1984). I did not include them in this chapter since their crucial commitments for the realism-antirealism debate are pretty similar to those of the sociohistorical approaches.

