Chapter 3
When Mechanistic Explanations Went Unreal
During the last few decades, the New Mechanism tradition in philosophy of science has gained prominence. This tradition departed from the mainstream strands of 20th century philosophy of science, eschewing discussions of nomological explanation, theories composed of lawlike statements, and inter- or intra-theoretic reduction. The New Mechanists placed mechanisms and mechanistic explanations at centre stage and built a new way of looking at how science explains.
One might interpret the New Mechanism framework in two ways, broad or narrow. In its broad interpretation, the mechanistic view of science is an overarching framework, both descriptive and normative of how scientific explanation works and how it should work, covering science in its entirety. The narrow interpretation is a modest one in that it takes the mechanistic framework only as a particular way of doing science among other alternatives (Glennan, 2017). I am committed to the broad interpretation for reasons that will become clear at the end of this chapter. However, the claims of the present thesis are compatible with the narrow version as well, since the thesis can be read either as a bold normative claim relevant to all scientists or merely as a contribution to the mechanistic way of doing science for those who adopt the narrow interpretation of the mechanistic outlook.
In this chapter, I will explain the basic claims and commitments of the New Mechanism. I will first show what mechanisms are, or what mechanistic explanations are taken to be. Then I will explain how mechanistic explanations come about, how we arrive at them. Next, I will focus on how mechanistic levels can be illuminating in terms of the relations between scientific disciplines and emergence. Lastly, I will clarify the main misconceptions about mechanistic explanations, which mostly concern the compatibility of mechanisms with abstraction/idealization, their alleged linearity, and the distinction between mechanisms and machines. At the end of the chapter, I will address how mechanistic explanations are compatible with an anti-realist view of science.
3.1 Introducing mechanisms and mechanistic explanations
Machamer, Darden and Craver’s 2000 article has been very influential in the resurgence of New Mechanism. Even though they do not give a formal definition of mechanisms, they characterize them as follows: “Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions.” However, this characterization has later been updated by other mechanists, mostly on the grounds that it is too restrictive and that there is no a priori reason for a mechanism to be regular or to have set-up and termination conditions[1] (Bechtel, 2011, 2022; Glennan, 2009; Krickel, 2014; Leuridan, 2010; Machamer, 2004). For example, Bechtel and Abrahamsen (2005) argue that “a mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena.” However, this characterization has in turn been found problematic, since the concept of function is a metaphysically loaded term and there is no consensus as to how to situate it in mechanistic science[2] (Craver, 2013; Garson, 2013). Thus I will take Glennan’s minimal mechanism (Glennan, 2017, p. 92) as the most recent and widely accepted characterization of mechanisms: “A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon.” So there is a phenomenon and a mechanism for it. Before going into the details, it will be helpful to look into this distinction.
At this point, Glennan (2017) makes use of Giere’s account of models (Giere, 2004, 2006) and mostly follows him. Giere proposes a liberal conception of models according to which there is no definitive characteristic or property of being a model, except that something is used by the modeler as a model to represent something else for a particular purpose. This conception stems from the practice that, both in philosophy and in the sciences, there are countless ways in which something is taken to be a model: equations, algorithms, simulations, diagrams, or even living creatures, as in model organisms or model animals.[3] Note that the representational relation stated here crucially relies on the modeler’s agency and purpose.[4] Accordingly, there are only models of models of models ad infinitum... Glennan goes on to state that mechanistic explanations are models as well, sometimes called mechanistic models or mechanical models. Moreover, they have two parts: a phenomenal description and a mechanism description (Glennan, 2005, 2017). The phenomenal description only gives us the cause and effect pairing, namely the explanandum in a sense.[5] The mechanism description, though, gives us how that cause and effect come about as a result of particular kinds of components, their relations to one another, their activities and their overall organization. It is important to realize, though, that these two descriptions cannot be separated very clearly in most mechanistic models and can only serve as an idealized picture here. In this context, mechanism descriptions can come in different forms. For instance, some phenomena might be the result of a temporal process, whereas others might be synchronically realized or constituted by the mentioned mechanism (Craver and Darden, 2013). In this sense, mechanisms are said to produce, underlie or maintain the phenomenon. Producing refers to situations where the mechanism produces the result as the end-point of a temporal sequence, e.g. protein synthesis. On the other hand, a mechanism can underlie or maintain the phenomenon, as is the case with the organelles and their activities that make up a cell, or a metabolic mechanism maintaining certain regulatory processes in an organism’s body. Another parallel distinction is between etiological explanations and constitutive explanations. Etiological explanation pretty much corresponds to cases where the phenomenon is produced, whereas constitutive explanation applies when the mechanism constitutes the phenomenon. Processes such as protein synthesis, or a neuron's reception of current through its dendrites, its loading and firing, the transmission of the signal across the synapse, the subsequent release of neurotransmitters at the terminal point, and the resulting activation of another neuron via a different dendrite, can serve as examples of etiological mechanistic explanations. In contrast, constitutive mechanisms can be illustrated through the earlier example of cells and organelles. This last point has led to much debate about how to demarcate constitution and causality in mechanistic explanations. Since this debate has no definitive relevance to the present discussion, I will not focus on it much, except for a few sentences in the passages about levels below. A quote from Glennan (2017) can summarize the upshot of the present discussion:
Mechanistic models are the vehicles for mechanistic explanation. A mechanistic model characterizes both the phenomenon to be explained and how the organized activities and interactions of some set of entities produce or underlie that phenomenon. So mechanistic models show how the phenomenon is caused and constituted by a mechanism. It is for this reason that I call mechanistic explanations how explanations.[6] (p. 68)
I said above that it may not be easy to distinguish between the phenomenal description and the mechanism description of a mechanistic model. Another related issue is that mechanistic models are always incomplete to differing degrees.[7] For example, there might be models in which a subset of components or activities corresponds to mechanistic parts, whereas other details do not correspond to anything as such and are only posited for theoretical or formalistic reasons. This is especially apparent when one considers how scientists come to posit mechanism descriptions. There are two important distinctions that can be read as orthogonal[8] to each other. The first distinction is between how-possibly models and how-actually models, while the second is between mechanism-sketches and mechanism-schemas (Craver, 2007). How-possibly models are proposed as conjectures or hypotheses[9] about how a mechanism works. If enough empirical tests or the background knowledge at hand confirm the model, then it becomes a how-actually model. Of course, this is not decided at a single moment by the relevant epistemic and/or scientific community; rather, as the evidence grows, the community increasingly builds consensus. In this context, it is always possible for another how-possibly model, with more detail or different content, to emerge and supplant the current how-actually model. It is also possible that a scientist does not know whether the stated components, activities, parts and organization are correct at the beginning, before any substantial process of empirical testing.[10] On the other hand, the distinction between mechanism-sketches and mechanism-schemas is about how precise and/or detailed a mechanistic model is. Craver tends to view these two distinctions in a parallel fashion, in that he claims the more a model approaches being a how-actually model, the more it also moves from being a mechanism-sketch to being a mechanism-schema: it becomes more detailed and fine-grained. However, even though that might be a common tendency for models, it does not have to be so, since we can easily think of thoroughly fine-grained mechanistic models that are false and of coarse-grained, sketchy models that are more correct. Therefore, as Glennan suggests, it is more accurate to conceive of these distinctions as orthogonal to each other. Remember, however, that these are all about giving a mechanism description for the phenomenal description. So, where does this phenomenal description come from? In fact, it probably[11] comes from some previous mechanism description proposed to account for another phenomenon. As Glennan suggests, all mechanistic models contain gaps, components and activities that need to be further accounted for; hence every mechanistic model creates new known-unknowns to be explained (Glennan, 2005). Mechanistic models are continually being refined and modified. Appropriately, Craver even argues that this constant filling in of the gaps found in mechanistic models is what drives scientific progress.
The emphasis on filling in the gaps and always approximating more detail in mechanistic models, made especially by Craver and Darden (2013), has led to much criticism (Levy & Bechtel, 2013). The criticism rests on problematizing the blurred line between a complete and an incomplete model while emphasizing the role and importance of abstraction and idealization in science. Opponents ask whether we should go down to the atomic level or even below just to explain some phenomenon in, say, social cognition. Even though the debate contains various arguments and cannot be fully captured here, suffice it to say that the New Mechanists have no problem with using explanations that are incomplete to differing degrees whenever it suits the researcher’s pragmatic concerns. For them, an explanation is fully complete only when science is complete, i.e. when all explanations are complete. This is obviously an impossible ideal, but it is an ideal consistent with the practice of science and scientists, since science always aims to broaden its scope and updates itself indefinitely. Hence, it is no problem for them to admit incompleteness.
Another issue is that Craver and Darden, sometimes with David Kaplan, have stated multiple times that their views situate abstraction and idealization in a place no less important than their opponents do (Craver, 2013; Craver & Darden, 2013; Craver & Kaplan, 2020). They maintain that every mechanism is a mechanism of some phenomenon, which requires the researcher to ignore details irrelevant to the phenomenon at hand. But their emphasis on filling in the gaps also stems from the concern that if we took functional explanations, black boxes and other explanations with many gaps and unexplained subcomponents as complete, then there would not be any iterative or progressive way to advance science in a coherent and consistent manner.
Agreeing with Craver, Darden, and Kaplan, I would add that every explanation is, by its nature, a model, and every model is necessarily an abstraction, because it is not the thing it represents. We can only understand the system in question through concepts that are already abstracted, along with the relations and activities we propose between its parts. In this sense, abstraction is not something we can avoid; it is an essential part of how we explain anything at all. “The more detail the better” is true in the context of science as a whole, but it is always possible to use an explanation at a preferred level according to the context or the relevant theoretical or practical needs. For instance, in neuroscience, one might explain the transmission of a signal across a neural network by referring to action potentials and synaptic transmission without invoking the molecular details of ion channel behavior or neurotransmitter synthesis. In some contexts, this high-level mechanistic account is not only sufficient but preferable, such as when modeling large-scale neural activity in computational neuroscience. Thus, while further detail might enrich the explanation in principle, abstraction at a certain level can be pragmatically more fruitful, depending on the context.[12]
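To make the point about preferred levels of abstraction concrete, consider the following minimal sketch (my own illustration, not drawn from the cited works): a leaky integrate-and-fire neuron simulated in plain Python. The model reproduces spiking behavior purely at the level of membrane potential and threshold crossings, abstracting away ion channel kinetics and neurotransmitter chemistry entirely; all parameter values are illustrative placeholders rather than empirical constants.

```python
# A minimal illustrative sketch (not from the cited works): a leaky
# integrate-and-fire neuron. Only the membrane potential and threshold
# crossings are modeled; ion channel gating and transmitter chemistry
# are deliberately abstracted away. All values are placeholders.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=1.0):
    """Return the time steps at which the model neuron 'spikes'."""
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        # Leaky integration of the membrane potential: this single line
        # stands in for the entire molecular story of channel dynamics.
        dv = (-(v - v_rest) + resistance * i_ext) / tau
        v += dv * dt
        if v >= v_threshold:
            spike_times.append(step)  # a spike, modeled as a threshold event
            v = v_reset               # reset instead of modeling repolarization
    return spike_times

if __name__ == "__main__":
    constant_drive = [20.0] * 1000  # constant input current, arbitrary units
    print(simulate_lif(constant_drive))
```

Such a model can be perfectly adequate for studying large-scale network activity, even though a more detailed, ion-channel-level description is always available in principle.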
The hierarchical structure of mechanistic explanation has become apparent in the above discussion. This is an aspect of the New Mechanism that contrasts with its antecedents, which almost exclusively focused on etiological mechanisms, whereas levels are implicit in the new mechanistic science.[13] We explain some phenomenon by providing either an etiological or a constitutive mechanism, and ideally both. Then we further explain the components within that very mechanism. This iterative process goes on as far as our epistemic capacities enable us. However, this process does not only run from upper levels to lower ones; we also look upward to see whether our mechanism is part of another phenomenon at some other level (Bechtel, 2009). This intuitively and intrinsically follows from the characterization of mechanisms and how they explain.
In this context, we can clarify what we mean by "level" with an example. If we take the cell as our example again, the components that make up a cell, namely the organelles and their activities, constitute one level, whereas the interactions in which the cell, as a whole, participates with other cells constitute a different level. To describe these intercellular interactions, it is not necessary to refer to the internal subcomponents of the cell, at least not as a matter of logical necessity. Whether it is an empirical necessity may depend on the specific phenomenon in question. The intercellular interactions of a cell may indeed be influenced by, and even depend on, its internal processes. However, someone who only studies the organelles without also investigating the cell's intercellular interactions will not be able to establish the relation between the two. In other words, studying organelles alone may not be sufficient for predicting the intercellular behavior of the cells they constitute. This discussion may bring to mind, for those familiar with the topic, the concept of emergence. The debate over emergence versus reduction has a long history and, roughly speaking, stems from the claim that all sciences can ultimately be reduced to physics. Against this view, it is often argued, especially in fields such as biology and psychology, that many processes cannot be reduced to physics, and that no matter how well we understand the physical level, this understanding will not suffice to predict what happens at higher levels. A crucial distinction here is whether this limitation is ontic or epistemic in nature. The former option is associated with the notion of strong emergence, which holds that the structure of the universe is genuinely multilayered and that causal relations at levels above physics are just as real as those within it. The latter, known as weak emergence, maintains that irreducibility stems from our cognitive or methodological limitations rather than from the nature of reality itself.
Notice that, even though it is not entirely incompatible, there is little room for strong emergence here. Mechanistic explanations are mostly taken to suggest a minimal account of emergence, where organizational emergence is possible but strong emergence is something to be explained further, if possible. Some mechanist philosophers argue against spooky emergence on the basis that the resulting properties of the emergent entity should be the result of the explaining lower-level mechanism, even though they might be unpredictable from it. Spooky or strong emergence, by contrast, opens up a space for new properties that are not the result of the underlying[14] mechanism. On a related note, this is where the mechanistic approach departs from mainstream reductionism, in that it makes no claim regarding the predictability of higher levels from lower ones, as reductionism would have it. Instead, it is also open to the idea of epistemic emergence, where the reason we cannot predict the novel properties of the emergent level might be our restricted ability to know, whether due to our technologies and/or methodologies or to our cognitive capacities.
Notice that mechanistic levels also have implications for the integration of different disciplines. Disciplines working on different phenomena at different levels might naturally be integrated in an inter-level way via such a mechanistic outlook (Bechtel, 2009; Craver & Darden, 2013; Tabery, 2014). A caveat is that different disciplines might explain the same or related phenomena via different mechanisms, which would be problematic for an anti-pluralist but not for a pluralist. However, even for an anti-pluralist this is a theoretical and empirical problem that might eventually be solved by further research and testing. This hierarchical structure of mechanistic explanation and the mechanistic conception of emergence (mechanistic emergence henceforth) and “levels” will be crucial for the argument I will develop in the next chapter.
3.2 Misconceptions and criticisms about mechanisms
There are a number of misconceptions about the mechanistic framework. Some of them are outright novel misunderstandings, whereas others stem from the commitments of the historical precedents of the New Mechanism. The words “mechanism” and “mechanistic” carry certain connotations due to that historical background. However, the New Mechanists have repeatedly disavowed most of those problematic commitments. Especially since those misconceptions also feed much controversy and possibly arguments against the New Mechanism, it is important to mention them and make some clarifications.
One such prominent misconception concerns the alleged incompatibility of abstraction/idealization with mechanisms. This concern mostly stems from some mechanists’ emphasis on the idea that more detail is always better. We already discussed above why this is not and cannot be the case, given that mechanistic models are always models of some phenomenon and always ignore some components out there while retaining others.
Another visible misconception is taking mechanisms to be identical with machines (Craver & Tabery, 2023; Darden, 2006). This leads to the criticism that most things, for example organisms, are not machines and hence cannot be treated mechanistically (Nicholson, 2018). However, the New Mechanists have expressly rejected this interpretation and stated that even though the concepts of mechanism and machine are etymologically and historically related, they are not necessarily the same thing. Something that is not a machine may still contain or contribute to some sort of mechanism. To put it as a weaker thesis, something that is not a machine can still be modeled in virtue of its component parts, activities and organization. In fact, we can witness this kind of modelling practice throughout the entire history of science. A salient example is the cardiovascular system in humans. While the heart is not a machine in the literal sense, it is often modeled mechanistically as a pump that drives the flow of blood through a system of vessels, regulated by valves and pressure gradients. Such a model allows for causal explanation and intervention without presupposing that the system is a machine in the strict engineering sense. What matters is not mechanical construction but the presence of organized components and their coordinated activity.
One other controversial issue in this regard is the claim that mechanisms must be stable. Stability here means that the arrangement of parts, activities and organization is fixed and cannot change. However, scholars like Glennan (2009) speak of ephemeral mechanisms, in which the parts, their activities and their organization can change through time. Notice that this does not lead to a contradiction, since those parts, activities and organization, as well as the changes in them, still take place in space and time. How and why such change occurs in a given mechanism is itself something to be explained by some mechanistic model.
Another widespread misconception, which sometimes becomes a central line of critique, is the claim that mechanistic explanations, as characterized by the New Mechanism, are inherently linear and thus incapable of accounting for phenomena involving feedback, circular causality, or dynamic regulation. This critique, however, rests on a misreading. The New Mechanism does not commit to linearity as a necessary feature of mechanisms. Rather, it emphasizes the organized interaction of entities and activities that jointly produce a phenomenon. In fact, feedback loops are fully compatible with mechanistic explanations, as long as the loop itself can be decomposed into or explained in terms of underlying organized components that sustain the behavior in question. Servomechanisms, for instance, constitute a classic case in which feedback is inherent to the system’s capacity to actively maintain certain variables within bounds, producing apparent stasis or equilibrium as a result of continuous internal regulation (Bechtel, 2011, 2022). What may seem like a static output is in fact an emergent outcome of dynamically interacting mechanistic components. The New Mechanism is well equipped to capture such dynamics, provided that the relevant explanatory models remain committed to uncovering the constituent parts and their modes of interaction that give rise to such regulated patterns. Therefore, criticisms grounded in the assumption that mechanistic models must be linear or cannot accommodate feedback overlook the flexibility and scope of the New Mechanism.
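As a purely illustrative sketch of this point (my own, not taken from Bechtel or any other cited work), consider a thermostat-like servomechanism written in Python: the regulated variable looks static from the outside, yet that apparent stasis is produced by a feedback loop whose components (sensing, comparison, corrective activity) are explicitly decomposed. All numerical values are illustrative placeholders.

```python
# A minimal illustrative sketch of a servomechanism: a proportional
# feedback loop holding a value near a setpoint despite constant
# environmental drift. Apparent stability is the product of continuous
# internal regulation, not the absence of change.

def regulate(setpoint=37.0, initial=30.0, steps=50,
             gain=0.3, ambient_drift=-0.2):
    """Simulate a proportional feedback loop keeping a value near setpoint."""
    value = initial
    history = []
    for _ in range(steps):
        error = setpoint - value             # sensing and comparison
        correction = gain * error            # corrective activity of the controller
        value += correction + ambient_drift  # the environment keeps perturbing it
        history.append(round(value, 2))
    return history

if __name__ == "__main__":
    trace = regulate()
    # The final values hover near the setpoint: the loop's components jointly
    # maintain the phenomenon that, from the outside, looks like equilibrium.
    print(trace[-5:])
```

The point of the sketch is simply that the "stable" output is fully decomposable into organized components and their interactions, which is exactly what a mechanistic model of a feedback system is expected to provide.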
As a last remark in this context, and considering the above discussion, one might ask: what is it that is incompatible with mechanisms? Is the concept of mechanism so trivial that it is compatible with pretty much everything? If mechanism covers all of these, then what does it not cover? The primary clarification for such a concern is that being comprehensive to a great extent is not surprising for a framework that claims to explain, at least for some proponents, how science actually works. We mentioned that there is a broad interpretation that takes New Mechanism as both a descriptive and a normative framework capturing the entirety of science and the ontology of explanation. On this reading, other kinds of models are explanatory only in so far as they capture and take into account the mechanistic activities and components. For example, Kaplan and Bechtel (2011) claim that dynamical models are illuminating in terms of the temporal organization of interactive mechanisms, whereas others think that they are only descriptive models and do not provide an explanation (for an application of this argument in cognitive science, see Erdin, 2020). This is also the case for other so-called explanations such as topological explanations and network explanations. For mechanists, these fail to distinguish between good but entirely fictional predictive models and genuinely explanatory ones, and when they do not fail to make that distinction, it is only through some implicit or explicit mechanistic model. One could argue that this is precisely what distinguishes an explanation from a simulation. Otherwise, just as in the case of deep learning models, we could endlessly increase the predictive power of our model by adding countless elements whose correspondence to any structure or causal organization in the target system or phenomenon (e.g., the brain or a given cognitive function) remains unspecified or unknown. Yet this would not suffice to explain the system. As the mechanists argue, in order to legitimately claim that we have explained a system, the components or processes used in the model must correspond to actual entities or processes within the system itself. In this sense, the model must be medium-dependent (Bechtel & Abrahamsen, 2010; Milkowski, 2011, 2016).
Accordingly, to clarify the discussion, it is safe to say that an explanation must be medium-dependent and that what makes a model explanatory is its content rather than its form or predictions. Therefore, network models, topological models, dynamical models, etc. are all cases of medium-independent models.
However, since scholars who take a narrow interpretation of mechanism mostly do not think that a mechanistic explanation is compatible with any of those mentioned above, this triviality objection would not arise in their case. Nevertheless, we can refer to a short list given by Craver and Tabery (2023) as examples of what are not mechanisms: entities, objects, correlations, inferences, reasons, arguments, symmetries, fundamental laws, relations of logical and mathematical necessity.
For the present discussion, the following quotation from the same source might be helpful as well:
The idea of mechanism is a central part of the explanatory ideal of understanding the world by learning its causal structure. The history of science contains many other conceptions of scientific explanation and understanding . . . Some have held that the world should be understood in terms of divine motives. Some have held that natural phenomena should be understood teleologically. Others have been convinced that understanding the natural world is nothing more than being able to predict its behavior. Commitment to mechanism as a framework concept is commitment to something distinct from and, for many, exclusive of, these alternative conceptions. If this appears trivial rather than a central achievement in the history of science, it is because the mechanistic perspective now so thoroughly dominates our scientific worldview (Craver & Tabery, 2023).
3.3 New Mechanism goes unreal
I argued for an anti-realist conception of science in Chapter 2. However, most mechanists commit to a realist view in that they take it that in science we discover mechanisms (out there) in nature (Bechtel & Abrahamsen, 2005; Glennan, 2005). That might possibly lead to a confusion that needs to be dealt with here.
First of all, I should make something clear: the discussion about realism versus anti-realism is mainly concerned with science in its entirety, meaning that it does not say anything in particular about any specific modeling practice in science. It concerns what any modeling practice in science can achieve in terms of mind-independent truth and/or reality.
On the other hand, mechanist philosophers’ commitment to some form of realism might not stem from their commitment to mechanistic modelling but from realism being the dominant philosophical stance in philosophy of science. In this regard, Colombo, Hartmann and van Iersel’s article (2015) takes up the task of showing why and how the modelling practice in mechanistic science does not guarantee realist commitments just by itself, as there is no necessary relation between the two. In doing so, they first show why mere empirical adequacy cannot give us realist commitments, in a similar vein to the argument above. Then they argue that the fact that a mechanistic model cannot be true mind-independently does not prevent models from being useful, insightful and explanatory according to the criteria set by the relevant scientific community at the time. They maintain that those criteria are also already used in determining whether a model is true in the first place, whether construed realistically or anti-realistically. Therefore, they claim that what makes a mechanistic model true, or in more accurate terms correct or accepted, is the empirical coherence of that model with the background knowledge of the scientific community. So far so good... Yet they mostly construe coherence in an empirical way, that is, in terms of beliefs and evidence. Even though this is not problematic for their paper’s aims and scope, because they want to show that mechanistic modelling is in principle equivalent with or without a realist belt, it has a missing part. We know that science is something more than belief plus evidence, as most clearly portrayed by the underdetermination discussion. Which beliefs we form, namely which concepts we employ, what our ontology, our “conceptual toolkit”, consists of, are also matters of coherence. Obviously, we do not form beliefs about angels or fairies and test them in science; we do so about particular kinds of things and events. We only take particular kinds of concepts to be relevant to scientific investigation. Might there be a coherence criterion that can help us clarify this observation further?
[1] There are different conceptualizations of regularity. Following Craver and Tabery (2023), I take it that a mechanism is regular when it works the same way given the same background conditions. This also might relate to a counterfactual understanding of causality, though not necessarily. However, it does not follow from this that, when we cannot provide the same conditions more than once for contingent reasons, say, when some event in history occurs only once, we cannot attribute a mechanism to it. It is only a contingent complication of that specific event that it does not happen more than once. This contingency does not pose any problem to the claim that the event is a result of some mechanism or that there might be some mechanisms accounting for that event.
[2] A caveat about functions: I am sympathetic towards Craver’s perspectival account (Craver, 2013) and I take functions to be heuristic attributions to systems/mechanisms. The main reason is that even though functional thinking may appear to enrich our theorization and modelling processes, by itself it makes no contribution to whether some model is explanatory or whether an explanation is successful. A description of a mere causal chain without any attribution of function would not lose explanatory power. Consider explaining some functional phenomenon, or translating a functional description into a description of a set of particular events without any normativity (as opposed to function-talk, since functions always come with malfunction and normativity), in cases where we have enough knowledge. It would still be possible, albeit practically very difficult or even infeasible. Moreover, one might even say that functions themselves, when put forward as explanatory, become further explananda in mechanistic science. This way of thinking can be traced back to the well-known distinction between the context of discovery and the context of justification, conventionally attributed to Reichenbach (1938). I will not focus on the debate about functions since it is not relevant to the main claims of the present thesis.
[3] Natural language sentences, words, idioms or metaphors etc. might also be taken as models in this picture.
[4] Notice that the modeler here might be more than one person, it can even be an entire community.
[5] I use the word explanandum with a grain of salt here just to illustrate that phenomenal description gives us the thing or process to be explained. It should not be confused with the original meaning of the word that comes from covering-law explanations.
[6] In Bechtel and Abrahamsen’s (2005) words “mechanistic explanations explain why by explaining how.”
[7] An example of a partial and incomplete model is the Hodgkin-Huxley model discussed by Craver (2007) and Glennan (2017).
[8] Note that we owe this remark on orthogonality to Glennan (2017).
[9] Glennan (2017) notes that how-possibly models can be interpreted more broadly, as models of logical possibilities that may or may not be conjectures. One benefit of this interpretation is that it accommodates scientific practices like exploratory simulations or testing hypothetical scenarios known to be factually false just to make idealized inferences. Glennan then adds another category in between, how-roughly models, which are conjectures or hypotheses.
[10] Notice that this is not much different from theorization in other kinds of explanatory frameworks. It is still possible and legitimate to posit unobservable or yet-unobserved constructs into the model as well.
[11] I say probably because that is usually the case, but one can imagine situations where one just observes or imagines something as an explanandum out of the blue.
[12] I used the terms idealization and abstraction interchangeably here, following Craver and Kaplan (2020), because everything that is said about one of them also applies equally to the other in this context.
[13] There is also the issue of whether this has an ontic aspect, i.e. whether it involves a claim about the structure of reality itself. However, in line with my argument for anti-realism in the previous chapter, this will no longer be relevant for this discussion. I obviously do not commit to such an ontic view.
[14] I am using the word colloquially here.


