Chapter 4
Integrating Science at Its Joints
Most debates within research traditions in science, or more specifically in cognitive science, revolve around questions of realism, i.e., the realism of different entities or theoretical posits. Hence, throughout the earlier chapters we looked into whether science could give us an answer concerning the reality of anything at all. My answer was no. One consequence of the discussion about realism versus anti-realism presented here is the practice approach to science, which takes science as a cultural practice, as some sort of cultural institution. Just like other cultural practices, it cannot grant a logical link to mind-independent reality or truth per se, but it may very well have its own distinct rules, either implicit or explicit, that distinguish it from other practices and at the same time give it its uniqueness. This reference to explicit or implicit rules was inspired by a Wittgensteinian contextualism that is already implicit in the sociohistorical and cultural approaches to science. But contrary to most takes in the philosophy of science, we refrained from appealing to a mere instrumentalism according to which those rules are determined by the local contexts and needs of the given epistemic community; instead, we stated that one rule as to how science explains its subject matter might be the mechanistic explanations construed by the New Mechanism. There are two ways to interpret the New Mechanism. On the narrow reading, the New Mechanism is both a descriptive and normative framework for a particular way of doing science, albeit not the only one. On the broad reading, it is a descriptive and normative framework aimed at covering the entirety of science. I argued that even though I employ the broad interpretation here, the scope of the present thesis does not require it, and an explanatory pluralist might still find the present argument useful or inspiring for a relatively local context. One possible concern about taking mechanistic explanation as a rule in an anti-realist conception of science would be the dominance of scientific realism among the New Mechanist philosophers. Relying on the work of Colombo, Hartmann and Van Iersel (2015), we saw that there is no logical and necessary incompatibility between the two. In this treatment, though, the concept of coherence, as opposed to truth/reality, became prominent in accepting a particular explanation in science. As opposed to Colombo et al.'s (2015) broad way of construing coherence by referring only to evidence and background knowledge, which makes it a matter of empirical investigation alone, I proposed that there is a need for another coherence criterion concerning the conceptual aspects of research traditions, because what kinds of beliefs we form, what kinds of tests we apply to them, and, as a result, what we accept and what we do not are all determined, or at least importantly influenced, by such conceptual constraints.[1] We do not ask empirical questions about fairies and angels but about particular kinds of things and events, so what leads to the distinction at hand here?[2]
In this chapter, I am going to deal with this concern and investigate what the conceptual criterion of coherence would be in an anti-realist science, a task in which the New Mechanism will prove helpful. But before that, I will first tackle the mainstream view in cognitive science and the problems it faced according to the framework presented here. That discussion will further help us navigate the subject of the conceptual criterion as well. I will argue that there are three foundational commitments that historically and logically gave rise to the current fundamental debates in cognitive science: 1) the multiple realizability of mental states, 2) the autonomy of the special sciences, 3) the naturalizability of the mental. I will argue that these three are in fact interdependent and lead to the same kinds of thorny problems for pretty much every framework in the field. Then I will propose a conceptual coherence criterion and explain how it can bypass these problems.
4.1 Foundations of the science of mind
When you open up a textbook, a dictionary or a common internet source to see what cognitive science is, you will likely see something like this: “Cognitive science is the interdisciplinary study of mind (and/or intelligence).” George Miller, one of the founding figures of cognitive science, even says that they were reluctant to use “mind” because of the dominance of behaviorism, so they used “cognition” in its stead (Miller, 2003). So, cognitive science has been defined in terms of the concept of mind/cognition.[3] But what is mind? What was mind back at the time when cognitive science emerged?
In the early and mid twentieth century, there were two main strands in the philosophy of mind answering that very question: behaviorism and the brain-identity theory. Roughly, behaviorism was the claim that mental concepts mean nothing more than and beyond the dispositional characteristics of an entity. Just as a glass is disposed to break when it falls from a certain height, when we say someone believes something we only refer to their disposition to do something or to act in some particular way in certain contexts; namely, we take their probability of doing or acting in that particular way to be higher than the alternatives[4] (Ryle, 1949). On the other hand, the brain-identity theory refers to the idea that mental concepts refer to brain states (Feigl, 1958; Place, 1956; Smart, 1959). Notice that this does not mean that mind and the brain are exactly the same thing, as clearly exemplified by our ability to talk about how many neurons there are in the brain in contrast to our inability to do the same about the mind. There are no neurons in the mind. One may construe this as a constitution relation like Place (1954, 1956) or as different senses with the same reference (Feigl, 1958; Smart, 1959).
In a series of influential papers, Putnam (1960, 1963, 1967) criticized these views. His criticisms were grounded in a couple of interrelated arguments. In his 1963 article, although he initially approaches behaviorism through the concept of translatability, he quickly dismisses this perspective by noting that virtually no one still defends it. He then reformulates the view, proposing that behaviorism rests on two fundamental claims: 1) While mental terms and behavioral terms may not be fully translatable into one another, there exists a certain type of entailment relation between them. Even if these entailments are not strictly analytic, they still emerge in some way from the meanings of the terms involved. 2) The lack of one-to-one translatability stems from the fact that mental terms are of lower resolution compared to behavioral terms. Behavioral terms are more precise and varied, making them capable of marking distinctions that mental terms cannot fully capture (Putnam, 1963, p. 327). Then he goes on to criticize behaviorism.
Putnam’s response builds on an example and a thought experiment. He draws his example from medicine, arguing that whether something counts as a disease is determined by the relevant symptoms, which may have different causes. Moreover, it is possible to discover that what was once thought to be the cause of a disease is not its actual cause, and that other factors are responsible. However, changes in the identified causes do not affect the definition of the disease or the determination of whether a person has the disease. He uses multiple sclerosis (MS) as an example to illustrate this point. In essence, we do not equate causes with effects. The effect and the cause are distinct entities, and a disease is simply the name we give to a particular effect, regardless of its underlying causes.[5]
Based on the first example, Putnam argues that a mental state, such as "pain," is not merely a cluster of symptoms or responses but rather the cause of a cluster of symptoms or responses. He supports this claim by appealing to our ability to conceive of possible worlds where a mental state like pain causes no observable responses at all. In the thought experiment he proposes, we can imagine "super-Spartans" who, due to cultural norms, never express their pain outwardly, regardless of how much they feel it. Moreover, we could even imagine that after many generations, this trait evolves to become entirely biological rather than culturally conditioned. In this scenario, super-Spartans could still verbally report feeling pain when asked, but they would exhibit no other external responses associated with pain. If such a scenario does not seem "nonsensical" or inherently self-contradictory, then behaviorism's claim cannot be correct. According to Putnam, this possibility shows that mental states could, at least in principle, be entirely independent of observable behaviors. Therefore, behaviorism's fundamental claim that there is an entailment relation between mental terms and behavioral terms becomes untenable.[6]
Before turning to Putnam’s positive argument about how the mind should be conceptualized, let us consider his critique of identity theory. In fact, rather than developing a detailed critique, he makes a few brief remarks in his 1967 paper (Putnam, 1967). For example, he suggests that saying “heat is mean molecular kinetic energy” is more plausible than asserting “pain is…” or “being in pain is being in a particular brain state.” This is because, in the latter case, the spatiotemporal relation changes, making the identity claim less plausible, since mind is not something spatiotemporal. We cannot say that the mind has such and such height, length or weight, nor can we measure it by any other spatiotemporal measure.[7]
In addition, there are arguments suggesting there may be creatures to which we might reasonably attribute pain or other mental categories, even if the attribution does not correspond to any particular type of reductionist relation. Evolution shows significant variation in this respect. For instance, it seems plausible that there could be creatures showing similar behaviors or engaging in seemingly identical mental activities while sharing little to nothing in terms of evolutionary or physiological mechanisms. Given the diversity of life and the different evolutionary trajectories, such possibilities seem likely. Following this consideration, Putnam shifts from offering an a priori critique of identity theory to presenting his own alternative, functionalism, or more specifically machine functionalism, which he defends as a more reasonable "hypothesis." According to this view, a mental state is neither a behavioral state nor a brain state but rather a functional state. What does this mean?
To explain and better illustrate this point, Putnam refers to the Turing Machine. Just as the states of a Turing Machine are functional in the sense that each state directs us to another state based on specific instructions in the machine's instruction table, without being dependent on the physical substrate that the Turing Machine runs on, mental states are also functional states. In this framework, the type of hardware on which the Turing Machine operates or the materials from which it is made is irrelevant. What matters is that the machine implements the relevant rules, regularities, and relations between states. This notion, in fact, marks the emergence of the computer metaphor and the computationalist approach to mind (Chalmers, 2011; Milkowski, 2018; Newell, 1980; Pylyshyn, 1980, 1984; Searle, 1990; Shagrir, 2006). Each state in the system is interrelated and causally interdependent, yet "independent" from lower-level physical substrates in a certain sense. According to Putnam, the critical point is that if a mental state M1 leads to another mental state M35 in response to a specific input, and there exists a causal relation of this sort with a particular output, then attributing the corresponding mental ascription to any system exhibiting such a relationship is justified. Note that these relations need not be entirely deterministic; they could be probabilistic, but that is a trivial issue, one that can be addressed during the process of modeling or theorizing. This causal structure remains entirely independent of the system's physical substrate. As readers familiar with the subject will recognize, this is precisely what the concept of multiple realizability corresponds to.
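To make this concrete, here is a minimal sketch of the idea, assuming nothing beyond the prose above; the state names (M1, M35), the stimuli, and the two "realizer" classes are hypothetical illustrations of mine, not Putnam's own formalism:

```python
# Instruction table: (current_state, input) -> (next_state, output).
# What individuates a mental state here is its position in this table,
# not the stuff that implements it.
TABLE = {
    ("M1", "tissue damage"): ("M35", "wince"),
    ("M35", "analgesic"): ("M1", "relax"),
}

def step(state, stimulus):
    """Transition purely by the functional table, never by the substrate."""
    return TABLE[(state, stimulus)]

class NeuronRealizer:
    substrate = "carbon/neurons"
    def run(self, state, stimulus):
        return step(state, stimulus)

class SiliconRealizer:
    substrate = "silicon/transistors"
    def run(self, state, stimulus):
        return step(state, stimulus)

# Both systems occupy the same functional state and transit identically:
for realizer in (NeuronRealizer(), SiliconRealizer()):
    print(realizer.substrate, "->", realizer.run("M1", "tissue damage"))
# carbon/neurons -> ('M35', 'wince')
# silicon/transistors -> ('M35', 'wince')
```

Both realizers print the same transition, and on machine functionalism that shared position in the table, rather than the substrate, is what individuates the mental state.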
Notice that while we have discussed functionalism and machine functionalism together, it is not a logical necessity for a functionalist to refer to the Turing Machine. The relevant functional relations can be conceptualized in different ways. In other words, at least in principle, one can be a functionalist without being a computationalist.[8] In fact, many non-computationalist or even anti-computationalist frameworks in cognitive science are examples of this, as long as they allow the multiple realizability of mind.
It is not possible to deal with the entire history of functionalism and the extensive debates surrounding it within the scope of this thesis. Rather, my aim here is to highlight some of the fundamental concepts, approaches, and tendencies that played a central role in the emergence of cognitive science. I will then mention how these ideas persist, even in opposing camps within contemporary debates, and explain what the thesis developed here contributes to these conceptual axes. I hope that the discussion so far has conveyed the main motifs of functionalism. However, cognitive science also involves other important issues that run parallel to or stem from functionalism, which must be addressed as well. For instance, around the same time as the developments mentioned above, Jerry Fodor, building on these ideas, developed the notion of the autonomy of the special sciences and the language of thought hypothesis. As for the latter, more specifically, he argued for the need to integrate the computational theory of mind with the representational theory of mind, laying the foundation of cognitivism. These two ideas, rooted in functionalism, have become some of the most influential views in cognitive science. Even in later embodied and radically embodied approaches that oppose computational and representational theories, we see that much of the debate still unfolds on this conceptual ground. After addressing these issues, we will proceed to the core propositions of this thesis.
As we just mentioned, another important and related idea is the autonomy of the special sciences (Fodor, 1974; 1997). The autonomy of the special sciences refers to the idea that specialized scientific disciplines, such as biology, psychology, or economics, operate independently of more fundamental sciences like physics or chemistry. This independence is rooted in the unique entities, properties, and regularities that these sciences study, which cannot always be fully reduced to or explained by the principles of the more basic sciences. For instance, while physics underpins all physical phenomena, the concepts and explanations used in biology, such as evolution or ecosystems, are specific to living systems and cannot be entirely derived from physics alone. So far, so good... But as with pretty much every idea in philosophy, it seems completely plausible when explained in a short paragraph like this one. So let us explore it a bit more.
Fodor (1974) begins his article with the statement, “Every science implies a taxonomy of the events in its universe of discourse.” According to him, there are natural kinds to which lawlike generalizations apply. For example, while a predicate like “moving an object over a distance of 3 kilometers” might apply to many objects and instances, it is nevertheless not part of any scientific vocabulary because the things it applies to are arbitrary, contingent, and disjunctive. Its relationship to its domain, to the objects and instances it applies to, is not lawlike. However, generalizations in psychology or economics, for Fodor, do not share this characteristic. In these fields, we can formulate highly effective generalizations that hold, yield new predictions, and allow us to discover new regularities as well. This implies that the taxonomies in these fields employ natural kinds. Notice that this view posits a strong relation between lawlike regularities and natural kinds, as Fodor himself acknowledges. Even though this might be debatable, it is beyond the scope of this discussion.
These natural kinds, in turn, determine the vocabulary and taxonomy employed by the special sciences. Although Fodor does not use this term, for the purposes of our discussion, we might refer to these taxonomies and/or vocabularies as special ontologies. Special ontologies are the ontologies posited by the special sciences. As the name implies, special ontologies are special; that is, they cannot be reduced to lower levels. Where Fodor diverges from reductionism is in the requirement, according to reductionism, that the natural kinds of the special sciences must be coextensive with physical natural kinds, i.e., the natural kinds in lower-level sciences like physics[9]. This would be analogous to the case of heat being equivalent to mean molecular kinetic energy. However, Fodor argues that such coextension is impossible for psychology and cognitive science. For example, he claims that many regularities in psychology are not coextensive with neurological processes in the way heat is coextensive with mean molecular kinetic energy. For any given psychological function, there is potentially an unlimited, or if not unlimited, a highly complex and disjunctive set of neurological correlates. As a result, the laws of the special sciences cannot be mapped onto the laws of the physical sciences, as the coextension criterion required for such mapping cannot be fulfilled. Fodor takes this argument so far as to list certain disciplines and assert that the work being conducted within them is problematic:
“If psychology is reducible to neurology, then for every psychological natural kind predicate there is a co-extensive neurological natural kind predicate, and the generalization which states this co-extension is a law. Clearly, many psychologists believe something of the sort. There are departments of 'psycho-biology' or 'psychology and brain science' in universities throughout the world whose very existence is an institutionalized gamble that such lawful co-extensions can be found. Yet, as has been frequently remarked in recent discussions of materialism, there are good grounds for hedging these bets.” (Fodor, 1974, p. 105)
When considered in the context of cognitive science and psychology, concepts such as belief, attention, memory, perception, mental representation, and, depending on the context, encoding and computation are part of the special ontologies of these fields. Note the connection between this discussion and Putnam’s functionalism and the thesis of multiple realizability. While Putnam critiques reductionism in the context of the philosophy of mind and advocates for functionalism and the independence of mental terms from their physical bases, Fodor adopts a similar strategy within the philosophy of science. Fodor emphasizes that reduction is not appropriate from the perspective of the philosophy of science, and he argues that higher-level phenomena must be addressed on their own terms, with their own ontologies and methodologies. Furthermore, the regularities that the special sciences enable us to discover, and the predictions they allow us to make, should be evaluated based on their own internal criteria. For instance, later in Psychosemantics (1987), Fodor defends folk psychology as fundamental to cognitive science and psychology. For him, if we agree with a friend to meet at a specific place, at a specific time, on a specific day a year from now, then, ceteris paribus, this agreement will hold; folk psychology works remarkably well in such contexts. While I find this perspective compelling, particularly regarding the relationship between reduction and the interrelation between sciences, I also have certain concerns that make me somewhat skeptical of this view.
First, as Fodor himself acknowledges but ultimately rejects, there is a common reductionist narrative according to which progress in science, including in the so-called special sciences, often occurs through the increasing association of the phenomena of the special sciences with sub-disciplines and their eventual reduction to them. Namely, as science progresses, the special sciences are expected to disappear, since they are being reduced to the lower, reducing sciences through this progress. Notice that here the backbone of scientific explanation is reduction. In Fodor’s case, however, it remains unclear how progress is to be achieved or what meta-scientific criteria we should adopt in theory evaluation. This becomes particularly evident when we consider Fodor’s claim that appealing to lower, ‘reducing’ disciplines to explain exceptions and anomalies in the special sciences is both normal and necessary.[10] This opens the door, in principle, to ad hoc arguments against any given counter-evidence in disputes and competition between theories. After all, any theory or framework, when evaluated by its intra-theoretical criteria, can appear highly explanatory. What exposes the explanatory limits of a theory is often a potential new theory, sometimes accompanied by a significantly revised ontology, that provides a reference for comparison. Fodor’s proposed framework creates a significant gap in this regard. There might even be a case to argue that the current theoretical and methodological crises in cognitive science and psychology are related to these meta-scientific views (Gigerenzer, 2010; Hommel & Colzato, 2015; Sanches de Oliveira & Baggs, 2023). On the other hand, whether the proper relationship between sciences in this context should indeed be reduction is a distinct matter, to which we will return later. However, before doing so, we will address a major critique of Fodor’s position and his response to it, thereby concluding the discussion on the autonomy of the special sciences.
Famously, Jaegwon Kim disagrees with Fodor (Kim, 1989; 1992). For Kim, non-reductive physicalism[11] is untenable: such a view must either collapse into a form of reductive physicalism or lead to eliminativism. Otherwise, it would require positing non-physical causes, which is problematic since all causal processes depend on physical[12] events. Without this dependence, we lose any justification for excluding "spooky" concepts and explanations. Thus, explaining something through non-physical means when physical explanations are available becomes arbitrary and violates the principle of parsimony. For instance, Kim argues that to explain a psychological and/or mental phenomenon, such as pain, we must engage in local reductions. While Fodor is correct in asserting that what we call "pain" could operate on entirely unrelated physical mechanisms or principles for a Martian, an octopus, an insect, and a human, making it impossible to reduce all instances of pain to a single, unified explanation or reduction, we can still reduce each instance separately. That is, we could reduce Martian pain, human pain, octopus pain, and so on individually, treating them as distinct types of pain. Why should we not do this? If we choose not to reduce these phenomena in this way, then we might decide not to call them "pain" at all. Instead, we could address the behavioral phenomena using different and newly formulated concepts, eliminating mental concepts altogether. However, this approach would no longer be non-reductive physicalism; it would amount to eliminativism.
According to this view, the special sciences are not autonomous, and in any case, explaining phenomena within these disciplines requires progressively appealing to lower-level regularities, which is entirely common scientific practice. For Kim, the fact that this does not yield a lawlike regularity between lower-level and higher-level phenomena is not a problem. A common criticism is that it would prevent us from reaching generalizations about psychological and mental phenomena, leaving our explanations confined to local instances; after all, according to these critics, one of the primary goals of science is to arrive at generalizations. Whether the aim of science is to achieve generalizations is a long-standing and complicated debate in the philosophy of science. Even though the debate lies outside the scope of this thesis, suffice it to say that the dominance of local explanations rather than broad generalizations in many areas of biology often serves as a common counter-example in these discussions. On the other hand, this criticism is closely related to the notion of multiple realizability. According to Kim’s views, as outlined above, there is no single, multiply realized instance of pain or any other mental state for that matter. Consequently, it would be unreasonable to expect observations about one instance of, say, Martian pain, to lead to generalizations about another, say, octopus pain. Therefore, even if the goal of science is to achieve generalizations, Fodor’s claims do not provide a plausible framework for reaching such generalizations.
Kim illustrates his critique of multiple realizability most famously with the example of jade. Kim asks us to consider that, with the progress of science, we discovered that what we commonly refer to as jade is actually composed of two distinct substances: jadeite and nephrite. This means that, despite their striking similarity in macro-physical properties, being nearly identical in appearance and in most other macrophysical observations, they differ at the microphysical level. If someone attempts to sell nephrite as jade, they would be committing fraud because it is not genuine jade. Kim then asks us to assume, for the sake of discussion, that all our observations of jade thus far have been based either on jadeite or on nephrite, without any observations of the other. Given that we now know these two substances are distinct, would it be justified to make predictions about one based on observations of the other simply because they share significant similarities in macro-physical properties and have been falsely classified under the same kind, "jade"? Similarly, how can we make predictions about a Martian’s experience of pain or an octopus’s pain based on findings from studies on human psychological phenomena, such as pain? For Kim, there is no singular, unified "thing" being realized across these cases. Consequently, we cannot project from one onto the other. Kim goes even further, arguing that Fodor’s argument cannot lead to the conclusion that psychology is autonomous but rather to the conclusion that psychology lacks a proper subject matter. Ultimately, based on this argument, Kim concludes that multiple realizability is a problematic conception.
Fodor’s response to this critique is somewhat interesting (Fodor, 1997). He actually agrees that jade is not multiply realizable. However, he argues that this is not because jadeite and nephrite fail to be projectible but rather because jade is not multiply realizable in the first place. He claims that jade has a nomological relationship with jadeite in all possible worlds, much like water’s relationship to H₂O[13]. In contrast, this is not the case for pain or other mental states. For Fodor, it is the other way around: we do not arrive at multiple realizability because making inferences from one instance to another is justified; rather, if something is multiply realizable, then making inferences from one instance to another is justified. He goes even further and asserts that, for a functionalist, mental states are already multiply realizable. Therefore, for a functionalist, it is justified to infer from human pain to octopus pain or Martian pain, precisely because mental states are multiply realizable in the first place. In contrast, jade is not multiply realizable. But this means that Fodor has not actually proven anything, nor has he provided any argument to support the claim that mental states are multiply realizable in the way and to the extent he suggests. In fact, by presupposing this from the outset, he appears to be drawing a circular picture.
Let us recall that Putnam also presented functionalism not as a grounded theory but as a "possible alternative hypothesis." Fodor makes the same move here, and after years of debate, we find ourselves back where we started. Thus, in its current state, functionalism remains an arbitrary theoretical preference. The debate over multiple realizability is an exceptionally long and complex one, and it is impossible to cover it in its entirety here. It spans a vast field of conceptualizations at the intersection of the philosophy of language, philosophy of science, metaphysics, and philosophy of mind. Moreover, there are more recent approaches, as well as alternative perspectives and variations, that we cannot delve into here but which certainly require separate discussion. The aim of this thesis is not, by any means, to make a definitive claim about whether multiple realizability is true or false. However, the discussion so far suggests the following: some things may indeed be multiply realizable, while others may not. In some cases, the material and/or physical substrates may be indispensable for defining a phenomenon or a concept. Yet claiming that mental states are multiply realizable and defining the mind functionally is neither a neutral nor a given perspective in cognitive science; it is a theoretical choice and should be recognized as such.
We have discussed above how the autonomy of the special sciences and multiple realizability are closely associated.[14] While we refrained from adopting a general stance on multiple realizability, we concluded that functionalism is an ungrounded and unjustified thesis. Similarly, we noted that these two theses are not theory-neutral propositions for cognitive science; rather, they are theoretical choices and, as such, can be criticized or rejected in the context of theoretical disputes. Let me clarify that I remain entirely agnostic about the status of multiple realizability or functionalism in the context of other disciplines or metaphysical debates. Within the scope of this thesis, I focus exclusively on their relevance to cognitive science. This is because, even if multiple realizability and functionalism are entirely reasonable and valid for many objects and concepts, they may not hold for mind. It is possible, and indeed likely, that even if the relationships between higher-level and lower-level sciences one day become well-grounded for various objects and concepts at the human scale, mind could represent an exception in this regard. For instance, we do not speak of a "table-atom problem,[15]" yet we do discuss the "mind-body problem." In this context, I argue that even if we resolve how to ground and scientifically explain all other objects and concepts at the human scale, understanding the relationship between higher-level and lower-level phenomena without any problematic residue, it would still be unjustified to assume that this framework also applies to mind. This stance, which I would term exceptionalism about mind, is of course rooted in the mark-of-the-mental debate. To further engage with this issue, let us explore another topic arising from the two foundational pillars of functionalism discussed above, and its implications for cognitive science and philosophy of mind. Ultimately, this discussion will lead us to the hard problem of content. [16]
Let us remember Putnam’s machine functionalism, a.k.a. computational theory of mind. Putnam conceptualizes mental states in terms of functional roles, analogous to the states of a Turing machine. He argues that mental states are not reducible to specific physical or biological structures but are instead defined by their causal roles in a system. Just as the states of a Turing machine are defined by the functional relations between inputs, outputs, and transitions governed by its instruction table, mental states are defined by their functional interrelations within a broader system of inputs, behavioral outputs, and other mental states. This perspective is also endorsed by Fodor.
Later, however, Fodor finds this view inadequate and proposes necessary add-ons (Fodor, 1975, 1981, 1983, 1987, 2008). He argues that, if approached solely in this way, it would be scientifically trivial, as it would allow for proposing an if-then instruction table for every single behavior without any criteria to constrain it. Moreover, he contends that the goal of a scientific inquiry into mind cannot be an endless effort to provide if-then explanations for every single behavior. It is worth noting the parallels between this critique and the earlier discussion of local explanations versus generalizations. For Fodor, any theory of human thought must account for two key features: productivity and systematicity. According to this view, human thought and behavior are potentially infinite and exhibit systematic relationships. A classic example of systematicity is the relationship between the sentences "John loves Mary" and "Mary loves John." Anyone capable of forming or understanding the first sentence should also be able to form or understand the second, demonstrating a systematic relationship between the two. Fodor emphasizes the need for a principle or constraint that accounts for this systematicity and introduces the Language of Thought (LoT) hypothesis. Assuming that LoT is well known in this context, suffice it to say that Fodor’s view holds that thought is regulated by language-like systematic and productive relationships (see, for more on LoT: Quilty-Dunn, Porot & Mandelbaum, 2023; Fodor, 1975, 2008; Gomila, 2011; Lupyan, 2023; Rescorla, 2024). At the core of this are semantic representations on which the "machine" performs computations. These semantic representations possess combinatorial properties that establish the rules and constraints governing their permissible combinations. These rules and constraints, in turn, underlie both systematicity and productivity. The best example of this can be found in natural language: despite having a finite set of elements, natural language allows for the creation of an infinite number of systematic expressions and sentences thanks to such rules and constraints.
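As a rough illustration of this combinatorial picture, consider the following toy sketch; it is my own illustration rather than Fodor's formalism, and the tiny lexicon and rules in it are hypothetical:

```python
# A finite lexicon plus combinatorial rules generates a systematically
# related and potentially unbounded set of expressions.
from itertools import product

NAMES = ["John", "Mary"]
VERBS = ["loves", "fears"]

def atomic_thoughts():
    """Combinatorial rule NAME-VERB-NAME: finite parts, systematic whole."""
    return [f"{a} {v} {b}" for a, v, b in product(NAMES, VERBS, NAMES)]

def embed(thought):
    """Recursive rule: any thought can itself be embedded (productivity)."""
    return f"Mary believes that {thought}"

thoughts = atomic_thoughts()
# Systematicity: the same rule that yields one sentence yields its mirror.
assert "John loves Mary" in thoughts and "Mary loves John" in thoughts
# Productivity: unbounded nesting from finite means.
print(embed(embed("John loves Mary")))
# Mary believes that Mary believes that John loves Mary
```

The same rule that generates "John loves Mary" generates "Mary loves John" (systematicity), and the recursive embedding rule yields an unbounded set of expressions from finite means (productivity).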
The LoT hypothesis, often referred to as Fodorian Cognitivism, has led to extensive debates and has branched into numerous subfields. However, at its core lies a critical question that has posed a serious challenge for cognitive science and the philosophy of mind: where and how do these representations acquire their semantic content? Remember, according to functionalism, mental representations qua mental can be realized by different substrates, hence making different physical substrates with identical content possible. These physical substrates are called the vehicles of representation. So the question is: how can these different sorts of vehicles acquire their content? This issue has led to significant theoretical crises, as exemplified by the Chinese Room argument (Searle, 1980) and the symbol grounding problem (Harnad, 1990), both of which are closely related to this question. While various theories of content have been proposed, the rise of embodied approaches, particularly since the 1990s, can be seen as a response to this crisis. These approaches, which emphasize the role of the body and the environment in cognition, have generated a wide array of responses across different schools of thought and theoretical traditions. Notice that the problem of content historically goes back to the question of intentionality and the mark of the mental (Brentano, 1995; Crane, 1998). That is, what we take to be contentful is minded and what we take to be minded is contentful. If we can answer this question, then we can know the categorical difference between a table or a chair and a human or some other creature’s mind. In this context, some favoured something in the vicinity of embodied representations that take their content through sensorimotor information, as opposed to classical cognitivism’s amodal representations (Barsalou, 1999, 2008; Glenberg, 1997; Goldman, 2013). On the other hand, some approaches rejected the notion of representation altogether, if conceptualized in cognitivism’s way (Van Gelder, 1995; Varela, Thompson & Rosch, 1991). Among such views are enactivism (Varela, 1979; Varela, Thompson & Rosch, 1991) and radical embodied cognitive science (REC) (Chemero, 2009). Enactivism proposes the life-mind continuity thesis and grounds the mark of the mental in terms of autopoiesis.[17] For an enactivist, every living system is also an autopoietic system, and every living system is minded. Even though enactivism is most commonly classified as an anti-representational stance, most enactivists are open to some conceptualizations of representation that depend on autopoiesis and/or organism-environment interactions. Proponents of REC, on the other hand, reject any need for any kind of representation. There are also content eliminativists like Chomsky and Egan who accept representations but reject content, which gives representations only a heuristic status (Chomsky, 1995; Egan, 2014).[18] And there are separate theories of mental content that usually appeal to information-theoretic (Dretske, 1981; Eliasmith, 2005; Lloyd, 1989; Rupert, 1999, 2008; Usher, 2001) and teleological approaches (Millikan, 1984, 2004, 2017; Neander, 2013, 2017; Shea, 2018; Stampe, 1977). These approaches have been viewed as the most successful and promising theories of content in the mainstream.
However, they are criticized on the grounds that they all employ information-as-covariance and thus, as the consensus in the field acknowledges, cannot account for content, since covariance does not constitute content (Hutto & Myin, 2012). The main trouble arises specifically in the context of content that is semantic, normative and/or truth-bearing, which has been taken to be crucial for mindedness. No kind of covariance, no kind of correlation, can give us semantics, truth or normativity by itself. The hard problem of content is pretty much this idea: natural phenomena and natural explanations are extensional, whereas semantics, truth and normativity are intensional, so the two come apart in a principled way. In other words, according to the hard problem of content, content cannot be naturalized.[19] Most cognitive scientists or philosophers would argue against such a claim by saying that semantic content can be grounded in some natural explanation or natural concept. The various embodied cognition approaches or the different theories of mental content mentioned above would, of course, claim that they have either resolved the issue or are on the verge of doing so. However, the problem lies in the fact that choosing one or more regularities found in nature and labeling them as "semantics," "content" or "normativity" is neither compelling nor sufficient to convince an impartial observer to accept this attribution. All these stances remain self-attributive and relatively arbitrary.
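The covariance point can be made concrete with a toy numeric sketch; the fire/smoke pairing here is my own hypothetical example, not drawn from the literature cited above:

```python
# Two variables can covary perfectly, yet the statistic itself has no
# truth-conditions and cannot misrepresent anything.
import random

random.seed(0)
fire = [random.random() < 0.3 for _ in range(10_000)]
smoke = list(fire)  # smoke perfectly covaries with (carries information about) fire

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

print(covariance(fire, smoke))  # positive: a purely extensional relation
```

The computed number records a perfectly extensional relation between the two series; nothing in it is true or false "about" fire, and nothing in it can get fire wrong. Labeling such a relation "content" is the extra, contested step that the hard problem of content targets.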
In this short review, I have outlined the contours of these complicated debates within cognitive science and illustrated the primary issues and challenges along the major conceptual axes. What I have aimed to show thus far is that the two foundational functionalist views central to the establishment of cognitive science, i.e., multiple realizability and the autonomy of the special sciences, have given rise to the problem of mental causation and the hard problem of content. Moreover, I have argued that these two problems cannot be addressed in isolation from one another for a conceptually complete and coherent cognitive science. Finally, I have shown that different frameworks within cognitive science offer varied responses to these issues, but all of them work under functionalist assumptions, trying to provide cognitive science, assumed by them to be a special science, with a corresponding special ontology. Accordingly, nearly all schools of thought in cognitive science begin by establishing their metaphysics of mind and their approach to understanding it. Based on these primary answers, they then proceed to determine how cognitive science should be conducted. This is true not only for functionalism and cognitivism but also for the post-cognitivist approaches grouped under the umbrella term embodied cognition.
At this point, we are gradually approaching the central argument of this thesis. My question is: can we not take a couple of steps back and think of a different trajectory for cognitive science, one that is entirely agnostic regarding the metaphysics of mind and the problems associated with such ontological stances? Building on our discussions of anti-realism and the New Mechanism in science, I will argue below that such a trajectory is indeed possible and even needed.
4.2 Naturalistic Coherence and sidestepping the mental
It has been a long journey, so let us take a moment to recall where we started. Beginning with the observation that the most debated topic among different schools of thought in cognitive science is the reality of certain theoretical entities, we questioned whether prioritizing the reality of any entity, or focusing on this question during theoretical conflicts, is justified. At the conclusion of that discussion, we found an anti-realist stance to be inevitable, treating (scientific) concepts as non-real constructs. Within this framework, we conceptualized science as a social and cultural practice that produces some particular subset of these constructs. However, just as every social and cultural practice has its own unique boundaries, rules, and forms, we acknowledged that science too has its distinct form, procedures, and limits. Given that there is no theory-neutral method to resolve disputes between theories and frameworks regarding the reality of entities, we explored in the second chapter how such conflicts might be addressed and sought to understand the practical boundaries of science. This led us to examine the New Mechanism approach in the philosophy of science. We noted that this approach can be interpreted in two ways: under the broad interpretation, New Mechanism describes the entirety of science and offers a normative framework for it; under the narrow interpretation, it provides a descriptive and normative account only for certain fields or aspects of science. For the purposes of this thesis, we argued that both interpretations are compatible. Those who adopt the broad interpretation might view the proposal here as saying, "Cognitive science must follow this model because science as a whole is such and such." In contrast, those who find the narrow interpretation more plausible might see it as "a coherent and potentially new way to address certain issues in cognitive science." Although many proponents of New Mechanism are realists, we briefly argued that New Mechanism is, in principle, compatible with anti-realism. We emphasized that for an anti-realist perspective, coherence is more important than reality. In this context, we distinguished between empirical coherence and conceptual coherence. We outlined that the central focus of this thesis is to introduce the concept of conceptual coherence and to explore its implications for cognitive science. Along the way, we took a detour to summarize the major debates and foundational issues in cognitive science that are relevant to our discussion. Now, having reviewed these key issues in cognitive science, we are finally in a position to turn our attention to the concept of conceptual coherence.
Let’s remember the question of the present discussion: can we not take a couple of steps back and think of a different trajectory for cognitive science, one that is entirely agnostic regarding the metaphysics of mind and the problems associated with such ontological stances? Given that a) we think that the autonomy of the special sciences rests on controversial assumptions and is thus as yet unconvincing; b) we differ from functionalism regarding the goals of science and what constitutes a scientific explanation; c) we argue that the autonomy account fails to provide a framework for how scientific progress occurs and leaves room for ad hoc justifications in theoretical disputes; d) we find the hard problem of content argument convincing and approach the naturalizability of the mental with skepticism; e) we do not take it for granted that there is a categorical difference between sciences with different domains at different scales; and f) we do not consider it necessary to view cognitive science as a special science that must first be provided with a special ontology: how can we proceed from here?
As discussed in the third chapter, according to the New Mechanism, when we encounter a phenomenon in science, what we observe is essentially Event A and Event B. Let us assume that the available evidence and the scientific framework suggest that these two events are causally related. To explain this relationship, we can adopt two approaches. The first approach is to zoom in and examine the components of Event A and Event B, focusing both on their internal interactions and organization and on the possible relations between the two events at this subcomponent level. This involves using relevant methods and techniques to formulate hypotheses based on existing theories and evidence, which are then tested. The second approach is to zoom out and examine how Event A and Event B, as components, relate to the larger events or mechanisms they are part of, or how those larger mechanisms interact with the subcomponents of the mechanisms underlying Event A and Event B. This, too, requires developing and testing hypotheses using similar methodologies and techniques. For the New Mechanism, neither of these approaches is inherently superior to the other. Science progresses by moving in both directions: the questions we pose lead to new questions, and we navigate between scales or levels according to the demands of these inquiries. The resulting descriptions that emerge at different though interdependent levels or scales are what we refer to as mechanistic explanations.
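A schematic toy model of this zooming procedure may help fix ideas; it is my own illustration, not a formalism found in the New Mechanist literature, and the reflex-arc example and all names in it are hypothetical:

```python
# Mechanisms as nested components whose organized activities are what the
# explanation describes at each level.
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    name: str
    activity: str                        # what this component does
    parts: list = field(default_factory=list)

    def zoom_in(self):
        """One level down: the component activities and their organization."""
        return [(p.name, p.activity) for p in self.parts]

synapse = Mechanism("synapse", "releases neurotransmitter")
fiber = Mechanism("muscle fiber", "contracts on depolarization")
# Event A (stimulus) -> Event B (contraction), explained as a reflex mechanism:
reflex = Mechanism("reflex arc", "maps stimulus to contraction", [synapse, fiber])
# Zooming out: the same arc is a part of a larger coordinating mechanism.
circuit = Mechanism("spinal circuit", "coordinates reflexes", [reflex])

print(reflex.zoom_in())    # zooming in on the A-B relation
print(circuit.zoom_in())   # zooming out to the containing mechanism
```

Zooming in decomposes the A-B relation into organized component activities; zooming out treats the same mechanism as a component of a larger one, with no level privileged in advance.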
When we proceed this way, namely starting with Event A and Event B while our knowledge of their relations to higher and lower levels remains limited, in other words when these territories are known-unknowns, the explanations we develop, however localized, necessitate introducing new elements, processes, and events that were previously outside the scope of scientific knowledge. Notice that I am making no claims about whether these entities are observable or unobservable by any specific criterion. This distinction is not relevant to the current discussion. Within the natural progression of scientific inquiry, the new "things" we propose to explain various phenomena are inherently tied to ontology and incrementally expand our conceptual framework, both as a scientific community and individually. For instance, centuries ago, concepts like microbes, electrons, or quarks were absent from our discussions. Today, despite their content, definitions, and the different "language games" that vary according to context, these entities have secured places in both scientific and, to some extent, folk ontologies. This is a natural outcome of scientific progress. As late 20th-century philosophy of science and philosophy of language have shown, there is no categorical boundary between the empirical and the conceptual. Notice the contrast with the special sciences and their special ontologies. For a special science to exist in the first place, it must rely on a shaky special ontology: one whose origins are unclear, whose foundations may be unstable, whose reliability is uncertain, and upon which we cannot impose any criteria to determine whether it misleads or wrongfoots us. Moreover, as a natural consequence of its autonomy, the explanatory criteria of this special science are tied to its special ontology, which, as noted earlier, will always appear successful and useful when evaluated by intra-theoretical standards. When faced with explanatory challenges, such as an inability to account for certain phenomena or counterexamples to its explanations, these special disciplines, as Fodor himself acknowledges, tend to appeal to lower-level regularities to explain away such issues. As a result, the discipline, regardless of its particular content, is perpetually regarded as successful and sufficiently scientific, drifting according to trends without any clear criteria for progress. In contrast, the process outlined above, grounded in mechanistic explanation, suggests that there are no clear-cut divisions between levels of explanation. Instead, science progresses by filling in gaps and increasing the resolution of its understanding, like an ever-sharpening image. As it does so, the localized explanations we develop, along with the new theoretical entities and constructs we propose, contribute continuously to the development of our conceptual frameworks.
At this point, two crucial questions arise for a researcher or a scientific community: 1) When dealing with known-unknowns, and when theoretical and empirical needs require us to introduce a new construct, what should we consider? What are the relevant constraints? For instance, why, in our explanatory practices in science, say, when trying to explain a highly localized, low-level phenomenon in biochemistry, do we not invoke "spooky" concepts like spirits, ghosts, or fairies? Why do we instead even prefer to name these new constructs using sequences of letters and numbers that sometimes might seem entirely meaningless (e.g., H5N1, FOXP2, HD 209458, OGLE-2005-BLG-390Lb)? 2) Beyond the methodological and epistemological constraints of a given scientific community at a particular time, are there any ontological, or more precisely ontic, constraints that compel us to stop at a certain scale, level of explanation, or point, leading us to conclude, "Beyond this, no further explanation is possible; we must stop here"?
This is precisely where conceptual coherence becomes relevant. In light of the discussion above, let us define conceptual coherence as follows:
Conceptual Coherence (CC): When introducing a new concept in a given domain for any reason, the concept must, by its definition, be coherent with the conceptual framework of the domain and consistent with the rest of it.
Conceptual Coherence can be applied to different conceptual frameworks and need not necessarily be limited to scientific or naturalistic[20] contexts. However, since this thesis focuses on naturalistic science, we will also define a specific case of conceptual coherence:
Naturalistic Coherence (NC): In a naturalistic domain and/or (a naturalistic scientific) discipline, when introducing a new concept for any reason, the concept must, by its definition, be naturalistic[21] and consistent with the rest of the domain.
Finally, in cases where this coherence fails, which may be obvious in some cases and subtle in others, I will refer to the resulting mismatch and its attendant issues as conceptual discordance.[22] For example, the mind/body problem is a clear case of such conceptual discordance.
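For illustration only, the naturalistic special case can be given a deliberately toy formalization; the vocabulary, the concepts, and the set-inclusion test below are all hypothetical simplifications of mine, not a serious proposal for operationalizing NC:

```python
# A new concept passes only if every predicate in its definition already
# belongs to the domain's naturalistic vocabulary.
NATURALISTIC_VOCAB = {"physical", "chemical", "causal", "spatiotemporal"}

# Hypothetical concept definitions, reduced to sets of defining predicates:
CONCEPTS = {
    "microbe": {"physical", "causal", "spatiotemporal"},
    "spirit": {"immaterial", "supernatural"},
}

def naturalistically_coherent(concept):
    """Toy NC check: do all defining predicates fall inside the vocabulary?"""
    return CONCEPTS[concept] <= NATURALISTIC_VOCAB

for concept in CONCEPTS:
    verdict = "admissible" if naturalistically_coherent(concept) else "conceptual discordance"
    print(f"{concept}: {verdict}")
# microbe: admissible
# spirit: conceptual discordance
```

On this cartoon version, "spirit" triggers conceptual discordance not because of any empirical finding but by the very definition of the concept, which is exactly the sense in which the criterion is conceptual rather than empirical.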
Let us return to the questions. The answer to the first question lies in people's tendency to adhere to conceptual coherence (CC). However, since this tendency is implicit, it largely depends on the intuition of individuals and researchers, and as with any complex matter, it is unlikely to function without any issues or ambiguities. Therefore, even though such a tendency exists, we can still observe incompatible instances. The answer to the second question is that there are no inherent ontic constraints in nature itself. Some may claim otherwise, but one wonders how they would overcome the anti-realist arguments presented in the second chapter, as our anti-realist commitments render such ontic attributions implausible. However, in a conceptual sense, conceptual discordance may nevertheless arise. Such limitations and constraints, rather than stopping research or imposing restrictions on science, can and should only push us to reconsider and potentially revise the conceptual framework we employ in our scientific and philosophical inquiries. This is because, as long as the conceptual framework remains unchanged, the relevant issues will persist. The distinction between the intensional and the extensional in the context of the hard problem of content serves as an example here as well. If we cannot separate mental/normative concepts from semantics, or from notions of truth and falsehood, in a non-trivial and non-ad-hoc way, which is something we actually cannot do, then there is no point in insisting on using these concepts either. Instead, it would be more reasonable to propose a new conceptual framework, or one that is already compatible with the existing naturalistic framework. In other words, rather than insisting on naturalizing what is "unnatural" or attempting to force our way through these kinds of conceptual impasses, we should expand the domain of what is already natural. [23]
Both Conceptual Coherence (CC) and Naturalistic Coherence (NC) are, in fact, both descriptive and normative. However, as previously discussed, their normative aspect has remained implicit due to their intuitive nature. I aim to make this implicit tendency explicit and postulate it as a normative criterion: the Naturalistic Coherence Criterion (NCC). I propose this as a conceptual criterion both for evaluating theories or conceptual frameworks in science and as a constraint in proposing such theories or conceptual frameworks. By adopting this criterion, we can continue to conduct science without leading to conceptual discordance, sustain scientific progress, enhance resolution, and expand the naturalistic domain.
To better illustrate my point, let us briefly consider an example from the history of developmental biology.[24] The central question in developmental biology is: how is it that cells with the same genetic base "know" what to do, evolving into different tissues with different functions to form distinct body parts, resulting in what we call "development"? At the dawn of developmental biology in the late 19th and early 20th centuries, Hans Driesch reached an important finding while researching this question. He observed that when the cells of sea urchin embryos were separated at an early stage, each separated cell was capable of forming an entirely new and healthy sea urchin. To explain this phenomenon, Driesch introduced a concept he called entelechy, inspired by Aristotle's notion of telos. According to Driesch, entelechy is an immaterial entity that cannot be reduced to physical processes or explained mechanistically. It directs the cells by providing them with the "knowledge" and "representation" of how and what to do in the process of development. In his view, the cells carried the "purposeful" knowledge of the organism’s developmental process (hence telos). Entelechy was directly related to "life" itself, and it had no connection or relationship to lower-level processes.
Notice that the perspective here is strikingly similar to the arguments developed by Putnam and Fodor regarding multiple realizability and the autonomy of the special sciences. In fact, it seems plausible that if Driesch’s debate had occurred a few decades later, it might have been framed as an argument in favor of treating developmental biology as a special science. Consequently, the necessity and inevitability of a special ontology for this field, one that would include irreducible entities like entelechy, might have been proposed. However, contrary to the vitalist view advanced by Driesch at the time, the history of developmental biology has shown that these developmental processes could indeed be explained mechanistically, even though we had to wait almost a century. Initially, it was thought that morphogens[25] within cells directed developmental pathways in a straightforward, unidirectional manner. Over time, however, it became clear that morphogens, whose exact nature and functioning were not well understood at the time, actually interacted through complex feedback mechanisms with the cells and tissues they helped create. These interactions revealed that the developmental process was far from unidirectional. The vaguely defined space where these mechanisms were thought to occur was termed the morphogenetic field, inspired by physics. In the early 20th century, there were still vitalists who maintained that morphogenetic fields were irreducible and could not be explained mechanistically. However, Huxley and De Beer (1934) speculated that certain metabolic factors might regulate these fields, pointing toward a potential mechanistic explanation. By the late 1990s, with developments in molecular biology, we learned how proteins such as Bone Morphogenetic Proteins (BMPs) and Wnt influence gene expression through specific receptors, thus playing crucial roles in developmental processes. Mechanistic explanations of these processes have enabled the development of mathematical models. Research in this field continues, and future discoveries may of course change our understanding of these processes. But the point I wish to emphasize, and which I believe is apparent here, is that scientific progress, regardless of how long it takes or which extra-scientific factors (e.g., technological, political, or economic) it depends upon, has been possible only through a mechanistic assumption and adherence to the Naturalistic Coherence Criterion (NCC). It is mechanistic because we consistently operate within constraints regarding inter-level relationships, which guide our investigations and prevent arbitrary drifts. It adheres to the NCC because, in the process of advancing scientific explanations, we do not propose concepts that, by their definition, extend beyond the domain in question or introduce theoretical and philosophical gaps that hinder rather than advance scientific explanation; the concepts we propose are already defined or approached within a naturalistic framework. I suggest that the acceptance of vitalist views during that period, relying as they did on assumptions about irreducible, immaterial, and mechanistically inexplicable phenomena, could have undermined progress. Here, I leave it to the reader to consider what the history of developmental biology might have looked like if a view like the autonomy of the special sciences had been widely accepted in the developmental biology of that era.
Let us revisit the two questions mentioned earlier. The first question asked: when dealing with a low-level scientific question, why do we not invoke "spooky" entities, propose such entities, or feel the need for them? Recall my responses referring to Conceptual Coherence (CC) and Naturalistic Coherence (NC). Introducing a concept upon observing some known-unknown phenomenon and defining it as irreducible and mechanistically unexplainable is a theoretical and philosophical choice. Conversely, as Huxley and De Beer did, considering what a potential mechanistic explanation might look like, proposing how-possibly explanations, and continuing the research is also a theoretical and philosophical choice. We cannot know in advance whether such research will yield any meaningful results, or whether we will reach those results in fifty, one hundred, or even five hundred years. However, as far as I can see, this is the only way for science to exist, expand the naturalistic domain, and contribute to its explanatory power.[26] Specifically for cognitive science, I argue that following the assumptions of the functionalist dogma and trying to understand the mental via multiple realizability and the autonomy of special sciences is a choice of the first kind. Returning to the second question: when faced with a phenomenon we cannot yet explain, i.e., a known-unknown, do we have any ontic (as opposed to epistemological or methodological) reason to conclude that it cannot be explained or further connected to lower levels? Setting aside our anti-realist commitments, which we have already discussed, I argue that there is no such reason and that making such an assumption contradicts the very nature of scientific explanation and progress. In the present example, I fail to see how accepting that entelechy is irreducible, mechanistically unexplainable, and, in a sense, "ontologically special" could possibly contribute to scientific progress.
To push the point further, we can appeal to a thought experiment similar to classic ones found in the literature. As usual, consider a curious alien, or even a Martian scientist, who seeks to understand the behavior of Earth's living beings. Let us further imagine that in the scientific community of these Martians, and indeed across their entire species, including their everyday and social interactions, there are no mental or even teleological concepts. Assume that their language is entirely causal and that they conduct all their communication this way. While their scientific or everyday language may include concepts at different levels of abstraction, resolution, vagueness, or concreteness, they are completely unfamiliar with any intensional, semantic, or contentful concepts; neither their language nor their thinking incorporates such things. Now suppose further that these Martian scientists are centuries or even millennia ahead of us in terms of their mathematics, observational tools, and experimental methods. With this in mind, imagine that they set out to study and explain the behavior of Earth's living beings, from the simplest to the most complex. As they come across various phenomena and mechanisms that require them to propose new concepts, what would compel these scientists, who have no prior acquaintance with mental or teleological concepts, to posit such notions? What would force them to invent these concepts, such that the phenomena would otherwise be impossible to explain? If we cannot demonstrate a categorical necessity here, and I think we cannot, then the view I advocate is a reasonable one.
Before closing this chapter, I want to address two potential misunderstandings. The first is that the notion of coherence discussed in this thesis should not be conflated with epistemic coherentism, which is often the first association the term evokes within analytic philosophy (for a concise review, see Olsson, 2022). Although there are some similarities in structural outline and emphasis, the two notions serve different functions and belong to distinct domains. Coherentism in epistemology is a theory of justification, often presented as an alternative to foundationalism. It holds that a belief is justified if and only if it coheres with a system of mutually supportive beliefs. Epistemic coherentism typically rejects the idea of basic, self-justifying beliefs (as in foundationalism) and instead emphasizes holistic evaluation, mutual support, and the systemic integrity of doxastic networks. The NCC, on the other hand, is a meta-theoretical tool for constraining the introduction of new concepts and assessing the internal compatibility of theoretical frameworks. Its purpose is pragmatic and methodological: to guide the development of scientific theories that avoid conceptual discordance (such as the mind–body problem) by ensuring that their components integrate into a broader, naturalistic explanatory web. In other words, the NCC is not doxastic. More clearly, in epistemology coherence is about justification; that is, it tells us when a belief is warranted. Here, coherence is a (meta-)methodological criterion for conceptual legitimacy, if I may say so; it tells us when a scientific concept fits within the broader explanatory scheme. Lastly, traditional coherentism has been criticized for its relationship to truth, in that coherence does not guarantee correspondence with reality. The proposal here sidesteps this problem by rejecting realist assumptions altogether: instead of claiming that coherence somehow relates to truth, I claim that truth is not the right criterion for science to begin with.
The second is that what I am proposing here is not that we should stop practicing "philosophy of mind." Rather, I claim that when certain concepts in our current framework, for various reasons, do not align with one another, fail to progress cohesively, and create problems for a specific scientific discipline (in the context of this thesis, cognitive science), it may be a better approach to develop a new conceptual framework. This framework should be consistent with the rest of naturalistic science and, in a sense, with its "better-functioning" parts. Such concepts should also be proposed in light of various phenomena, considering local theoretical and empirical needs. To give an example from biology again, as seen in the literature, attempts to ground "life" or "living systems" may refer to DNA, far-from-equilibrium thermodynamics, or other candidates (Mariscal, 2021). However, neither DNA nor far-from-equilibrium thermodynamics was proposed explicitly to "ground" or explain "life" or "living systems," nor do they derive their explanatory justification from this question. What is more, biology as a discipline existed long before any of these were introduced, and it has progressed for centuries without a properly grounded concept of "life." (For the argument that grounding the concept of "life" is unnecessary for biology, see Tirard et al., 2010; Machery, 2012; Cleland & Chyba, 2002.) Similarly, in cognitive science, grounding "mind" or defining the field on the basis of a particular metaphysics of mind is neither necessary nor desirable. Cognitive scientists should not be required to frame their field around such metaphysical commitments (for similar but somewhat different views, see Allen, 2017; Rupert, 2013; Villalobos & Palacios, 2021). Of course, philosophers of mind, just like philosophers of biology and life, may learn from cognitive science and debate whether its concepts and findings can be useful for their philosophical discussions. They may use cognitive science to engage in science-informed philosophy. However, these questions should not form the core theoretical distinctions within cognitive science itself. In summary, the mind–body problem should be a concern only for the philosopher of mind, not the cognitive scientist. The cognitive scientist should be able to conduct their work solely within the framework of res extensa.
To do this, a critical question remains: what kind of conceptual framework could be proposed for cognitive science that would allow us to address phenomena traditionally considered mental without attributing any mental concepts to nature itself? More importantly, what conditions must such a framework satisfy, what components must it include, and what must it exclude? These are crucial questions, and I believe that further research in this area will be decisive for the future of cognitive science. The answers, I suppose, could provide a way to unify the various conceptual traditions within cognitive science without relying on the metaphysical debates that currently divide them, and to establish a form of epistemological ground zero for these schools and their associated research.
So far, I have attempted to summarize my claims. In the upcoming conclusion, I will briefly review the discussion thus far and clarify what I am not claiming, in order to preempt potential misunderstandings.
[1] Notice that this is in the vicinity of Laudan's distinctions between the theoretical/conceptual, the empirical/methodological, and the institutional. A somewhat parallel distinction can be found in Sanches de Oliveira and Baggs (2023). Though both accounts make a tripartite distinction, there is a case to be made for a quadripartite layering that also distinguishes between the theoretical as in empirical theories and the theoretical as in philosophical theories, e.g., local ontologies, conceptual frameworks, etc. A useful example comes from cognitive science, where two opposing models of memory, the Atkinson-Shiffrin Model of Memory (Atkinson & Shiffrin, 1968) and the Distinctiveness Theory of Memory (Glenberg & Swanson, 1986; Glenberg, 1997; Nairne, 2006; Schmidt, 1991), share the same philosophical/ontological/conceptual background, i.e., computationalism, but differ in their empirical contents and predictions. Likewise, the debate between classical cognitive science and connectionist cognitive science would lie somewhere between the two, as it is partly a philosophical/ontological/conceptual matter and partly an empirical one.
[2] Notice also that this concern is in line with Sider (2011) and with the discourse around carving nature at its joints in general. However, since the present anti-realist stance prevents us from talking about carving nature objectively, the title of the chapter is "integrating science at its joints." We will return to this point later.
[3] Throughout this thesis, mind and cognition are used interchangeably if not stated otherwise.
[4] Notice that this is different from other kinds of behaviorism, sometimes called psychological and methodological behaviorism. Here we mention only philosophical or logical behaviorism, as it concerns the subject matter of philosophy of mind, namely what the mind is.
[5] In fact, Putnam seems to have already assumed the conclusion he intends to argue for. It remains unclear why the mind should be compared to diseases, or whether different examples from the history of medicine could challenge this metaphor, a question best left to medical experts rather than tackled here. However, at least in some cases, it does not seem entirely implausible that a disease might be identified not only by its symptoms but also by its causes. When we learn that the causes differ, it is conceivable that we might introduce a new taxonomic category based on the cause rather than the symptom alone. In fact, as we will see later, this objection touches on the same point raised by Jaegwon Kim with his example of "jade." However, let us set this issue aside for now.
[6] The status of conceivability arguments and how we should evaluate them is a controversial meta-philosophical issue. For further discussion, see Gendler & Hawthorne (2002) and Hallett (1991).
[7] I find this reasonable as well. For precisely this reason, I will defend the exceptionalism of the mental in the context of mental causation and multiple realizability below. This argument aligns with Feyerabend's (1963) and, in a certain sense, with the phenomenological tradition. However, adopting this perspective is not a prerequisite for the present thesis. One could accept the argument presented here without considering the mental exceptional in this regard, though I believe doing so would lead to other problems independent of the thesis's central issues. These concerns will be addressed in the conclusion.
[8] In fact, even Putnam himself acknowledges that saying "the mind is a machine" is somewhat trivial, as pretty much anything can be considered a machine when viewed through this lens. This perspective helps explain the rise of pancomputationalist and panmentalist views in the field, particularly over the past two decades, an unsurprising development given that such stances follow naturally from functionalism.
[9] This distinction between lower-level and higher-level sciences is of course not unproblematic, most simply because physics is not concerned only with small-scale phenomena; it also aims to explain astrophysical events. But the distinction, at least for the discussion at hand, is conventional.
[10] Fodor supports this claim by pointing out that there are exceptions to the laws of the special sciences, whereas the laws of physics or chemistry have no exceptions. So, when an exception is observed in the special sciences, it is natural to refer to the more powerful and exceptionless laws of the lower levels. However, this seems to be a false dichotomy, as it is doubtful whether there is a clear-cut, categorical distinction between the natural sciences and the higher-level special sciences in terms of laws and exceptions. Physical laws seem exceptionless because we deal with them only under strictly controlled conditions. The problem with the laws of the special sciences is that we often lack the tools to impose such strict controls, and these tools may, in principle, be impossible to develop due to ethical and epistemological constraints. Therefore, making a categorical distinction between explanatory practices in science on this basis is unjustified and problematic. Once this distinction is removed, various pathways for bridging the special sciences and the natural sciences become apparent, pathways that bypass the typical categorizations of reduction and emergence. As we will explore below, mechanistic interlevel integration may offer a promising pathway in this context.
[11] Functionalism was proposed as a stance that is non-reductive and yet still physicalist, hence the name "non-reductive physicalism." Later, however, non-reductive physicalism gained a trajectory of its own.
[12] Notice that this does not mean that every phenomenon belongs to the domain of physics as a science. It just means that in naturalistic explanations we do not refer to supernatural or extra-natural elements.
[13] I mentioned Putnam’s microessentialist views in the second chapter and found them problematic.
[14] While these two positions are often found together and have historically developed in close parallel, it is important to stress that there is no necessary or logical entailment between them. Rather, they have been mutually suggestive. The plausibility of one often lends support to the other.
[15] Similar issues are discussed in the context of metaphysical identity. However, there is a categorical difference between grounding a table, or other objects at that scale, and grounding the mind, particularly in terms of causation and the philosophy of science. The former is merely a matter of scale, whereas the latter involves fundamentally different problems beyond the issue of scale. Even if we were able to fully understand the relationship between higher-level and lower-level phenomena, we would still be left with an unanswered question: what exactly sets apart ordinary objects, like tables and chairs, which exist at a scale comparable to human behavior, from things that are considered mental? Notice that this is directly related to the mark of the mental, i.e., intentionality, and to the hard problem of content.
[16] The discussions on multiple realizability, higher-level and lower-level phenomena, the unity of science, and similar topics often involve realist assumptions, such as those concerning the fundamental level of reality. Therefore, while the sketch presented here is as concise and aligned with the mainstream narrative as possible, there are numerous points that I might disagree with upon deeper consideration. Consequently, the reader should take this sketch with a grain of salt.
[17] Autopoiesis is an important and theoretically loaded term that is central to enactivism. I cannot give a full review of the relevant discussions here, but suffice it to say that it roughly denotes the combination of self-organization and autonomy, in the sense that the organism is autonomous and can act despite its environment.
[18] Note that neither Chomsky nor Egan is a proponent of embodied cognitive science. Their positions rest on arguments different from those of embodied cognitive scientists.
[19] Even though I am on the same page with Hutto and Myin on the hard problem of content, I am skeptical of both their distinction between basic minds and linguistic minds and their claim that language and representationality are intrinsically related, issues that are both beyond the scope of this thesis. See, for example, Van den Herik (2014) on the latter issue. Also, as Villalobos and Silverman (2018) point out, positing a semiotic relation in basic minds instead of representationality does not make much difference in terms of the metaphysics of mind and the relevant philosophical problems.
[20] Notice that I do not say 'natural science,' as that has connotations regarding the distinction between the natural sciences and the social sciences. What I mean by 'naturalistic science' here is in fact the entirety of disciplines that are both scientific and naturalistic, which may or may not cover the social disciplines and/or some other so-called 'special sciences.'
[21] Here I will briefly refer to Rorty's definition of naturalism: "I define naturalism as the claim that (a) there is no occupant of space-time that is not linked in a single web of causal relations to all other occupants and (b) that any explanation of the behavior of any such spatiotemporal object must consist in placing that object within that single web" (Rorty, 1998).
[22] In fact, the particular word I have in mind is the Turkish "cızırdama." It lies at the intersection of discordance, scratching, and sizzling.
[23] Feyerabend (1963), in a two-page commentary, proposes something very similar in outline.
[24] This example and historical narrative are taken from Weber (2022).
[25] The concept of a morphogen was originally introduced by Alan Turing in his 1952 paper "The Chemical Basis of Morphogenesis." However, even earlier, there were other terms used in a similarly vague sense to refer to "entities or molecules present in the cell, whose presence, structure, or various regularities influence or determine the developmental process." In this context, I am using the term morphogen in this general sense, albeit somewhat ahistorically.
[26] In saying this, I adopt, albeit not entirely, a view similar to that of Feest (2010), who is also cited by Weber. The view in question suggests that concepts in science are research tools. I believe this follows naturally from my defense of anti-realism, my references to Wittgensteinian contextualism, and my view of science as a socio-cultural practice.