
This thematic issue shows how the concept of dynamic capabilities (DCs) can be furthered by a microfoundational perspective. In this position paper, we address underlying discussions about methodological and epistemological issues. We zoom out from conceptual debates about DCs and from recent debates about the micro-foundations approach to propose specific improvements to research practices. We promote nothing but a piece of “small-m” methodology: a set of practical recommendations, articulated around multi-level and situational analysis, to improve data collection and data analysis. Our goal is to clear the ground for a more systematic use of the microfoundations approach in strategic management.

DCs have by now become one of the central, organizing concepts in management research, finding wide application in studies that address entrepreneurship, innovation, competitiveness, learning and leadership. In other words, the concept is not limited to strategic management research, its field of origin (Teece, Pisano and Shuen, 1997), but informs large areas of macro-management research. In their bibliometric and historiographic analyses of DCs, Peteraf et al. (2013) and Di Stefano et al. (2014) explain that research on DCs has developed under the influence of two complementary yet partly contradictory contributions to the theory of DCs, namely Eisenhardt and Martin (2000) and Teece, Pisano and Shuen (1997). DCs represent the capacity to integrate, to construct, and to reconfigure knowledge and resources. As such, the notion of DC has become an essential element for the understanding of firm behavior.

Peteraf et al. (2013) and Di Stefano et al. (2014) explain that the field is “socially constructed”. They also show that the literature on DCs is consistent with the VRIN conditions of sustainable competitive advantage introduced by Barney (1991). Teece (2007) explains that DCs rely on the capacity to identify new opportunities, take advantage of them, and transform them to capture value. Teece et al. (1997) focus on adaptation[1]. Eisenhardt and Martin (2000) draw a distinction between “moderately dynamic markets” and “high velocity markets” and explain that DCs resemble routines in the former and “experiential and fragile processes” in the latter. They add that the focus is on variation in the former and on selection in the latter. The tension between the field’s theoretical roots materializes in a divide pointed out by Di Stefano, Peteraf and Verona (2014: 317): is the aim of DCs “to achieve advantage over market rivals, or […] to adapt to changing conditions (or create them)”? Constantly adapting the organization to achieve new forms of competitive advantage is the heart of the concept of DCs. Di Stefano, Peteraf and Verona (2014: 317) conclude from their bibliometric analysis that the research domain diverges in its understanding of the concept rather than converging around a consistent definition of how DCs should be framed. Peteraf, Di Stefano and Verona (2013: 1394) insist on discrepancies about boundary conditions to explain different logical links between DCs and sustainable competitive advantage. They subsequently conclude that scholars need to take into account the contingencies of each test domain to understand where Eisenhardt’s and Teece’s respective approaches best apply. Peteraf, Di Stefano and Verona (2013: 1407) and Di Stefano, Peteraf and Verona (2014: 311) dichotomize the theoretical roots of DCs between behavioral theory (and Eisenhardt’s orientation toward organizational issues, managerial mechanisms and the analysis of (simple) routines) and the resource-based view (which they associate with Teece’s orientation, with a focus on more complex routines and organizational mechanisms). The last paragraph in Peteraf, Di Stefano and Verona (2013), however, mentions that all mechanisms are equally important and at work within firms, either simultaneously or sequentially. The authors mention that all these aspects are required in the analysis and try to articulate the two approaches together.

In Section 1 of this article, we propose another interpretation of the divide in the field: the heterogeneity of the literature on DCs relates to methodological debates, or practices. We suggest that scholars working in the field of DCs adopt different positions regarding levels and units of analysis, and fail to converge on the proper unit of data collection for field research. The remainder of this article relates these aspects to methodological and epistemological debates. In the subsequent sections, we clarify the links between the micro-foundations approach and multi-level analysis, or situational analysis, and we stress the importance of articulating social facts (e.g. institutions), social outcomes, individual actions, and the underlying epistemic and decision-making processes. Section 3 clarifies multi-level analysis. Section 4 comments on the unit of analysis issue in multi-level analysis and points out that research on DCs has difficulties coping with it. Section 5 repositions these debates within broader epistemological perspectives. Section 6 concludes this position paper with final comments and practical recommendations about how to implement the microfoundations approach in empirical research on DCs. We explain that a reference to microfoundations may help progress in conceptual debates.

The level of analysis issue: what is the locus of dynamic capabilities?

According to Peteraf, Di Stefano and Verona (2013), the foundational DCs articles by Eisenhardt and Martin (2000) and Teece, Pisano and Shuen (1997) opened up two different vistas in DC research with regard to boundary conditions, sustainable advantage, and competitive advantage. They argue that Teece, Pisano and Shuen (1997) (henceforth, the “Teece view”) and their followers focus on complex environments and managerial behaviors, while Eisenhardt and Martin (2000) (henceforth, the “Eisenhardt view”) and those following their lead instead emphasize explanations based on routines and organizational coordination issues. Di Stefano, Peteraf and Verona (2014) argue that the Eisenhardt versus Teece views on DCs differ in five key dimensions: the nature of the concept, the locus (the agent), the nature of action, the object of the action, and the aims of the construct. Trying to reconcile the Eisenhardt and Teece positions, they identify a difference in perspective yet conclude that scholars need to take into account the relevant contingencies in their empirical research. We accept this point.

Peteraf, Di Stefano and Verona (2013) conclude that the DC concept requires a comprehensive view articulating both approaches, and an exploration of the underlying “dynamic bundles” as a whole. They define “bundles” as a sort of interaction between simple and complex routines that cannot be discussed in isolation from an explanation of the attainment of sustainable competitive advantage. The “bundles” represent their attempt at articulating the individual and organizational levels of analysis. Di Stefano, Peteraf and Verona (2014: 318-9) then propose to articulate these aspects into an “organizational drivetrain” similar to the mechanism that makes it possible to pedal a bicycle. In their metaphor, the front gears [crankset] correspond to simple rules (and individual-level actions); the chain to linking mechanisms articulating ways of working; the back gears [freewheel] to complex routines and any other organizational-level action; and the derailleur to some sort of coupling and uncoupling mechanism used to shift gears, and therefore to “cope with the challenges of a changing landscape.” They explicitly describe the top management as activating the front gears, driving the bicycle, and introducing constraints on the action taking place at “the more complex organizational level”. The chain coordinates the two levels of action. The derailleur, or back gears, allows for the flexible adjustments described by Martin (2011). Di Stefano, Peteraf and Verona (2014: 320) illustrate the “drivetrain” with the CISCO case already discussed by Eisenhardt and Sull (2001), who stress the importance of simple rules. The case describes how CISCO acquires startups to foster its related diversification strategy. The authors re-analyze the data displayed by Eisenhardt and Sull and try to overcome the Eisenhardt-Teece theoretical divide in two respects. First, the case illustrates that the references to complex routines versus simple rules “need not be thought of as contradictory or opposing visions”; they mention that this theoretical divide does not have much practical meaning for actual decision makers or managers. Second, they mention that complex routines and simple rules are respectively associated with the organizational and individual levels of analysis. They note, at the end, the importance of “interconnectedness, [of the] reliance on tacit knowledge, and [of the] complexity of a system involving a variety of moving parts” to promote the importance of focusing on the entire dynamic bundle. We agree that a comprehensive view of the entire dynamic bundle is important.

Our point is different. The literature on DCs faces analytical challenges with practical implications: how to handle the unit of analysis associated with the concept, and how to make the individual, team and organizational levels consistent with one another? Using the metaphor of the organizational drivetrain and of the bicycle again, these questions translate very easily. Who is actually pedaling the bicycle and activating the DCs? Who is selecting the front gears? Who is actually introducing the constraints on the back gears? Who is activating the derailleur? How does the chain coordinate the front and back gears?

The metaphor of the drivetrain cannot make us forget the conceptual challenges and open debates still present in the academic literature on the nature and contents of the front and back gears. We can, however, picture that the front gears emerge from strategy-building initiatives developed by owners, boards, and/or some part of the top management (as in Teece, 2017). We can also understand that the back gears elaborate on ways of working that become available after long and complex evolutionary processes, and on some sort of interaction between middle managers and the other layers of the organization (as illustrated in Merindol and Versailles, 2018). The discussion in Peteraf, Di Stefano and Verona (2013) only shows where the rider pedals the bike, and that a contingency perspective may make it possible to reconcile Eisenhardt’s and Teece’s approaches. The analysis remains partial, however, because they do not address any of the questions about the nature of the rider, or the actual dynamics present in or around the “drivetrain”. To put it in different words, the aspects characterizing the locus of DCs are not directly addressed even though epistemic and connectedness issues are explicitly identified.

We note that recent publications by Eisenhardt (e.g. Davis et al., 2009) and Martin (2011) still focus on the issue of rules and routines. In more recent works, Teece (2014) and Teece, Peteraf and Leih (2016) conversely refer to investigations of the rationales for sensing, seizing and reconfiguring located at each managerial level. They document organizational agility. In his 2017 article, Teece is only explicit about the locus of “higher order” dynamic capabilities, which he analyzes at the individual level with top managers only. Other aspects are not discussed. The locus of DCs seems to be an open question, even for one of the founding fathers of the field.

We use the drivetrain metaphor to point out another typical conceptual issue ignored in the Peteraf et al. (2013) and Di Stefano et al. (2014) articles: how does the “system” or “socially complex and hard to imitate dynamic bundle of resources and capabilities” pictured as a “drivetrain” actually emerge? Is it possible to provide analytically separate explanations of its emergence and of its operational activation? The references to simple rules, to complex routines, to actual ways of working and behaviors (already present in Teece’s exposition of how DCs orchestrate resources (2007; 2014)), and to adaptive systems under conditions of change (Eisenhardt, Furr and Bingham, 2010; Martin, 2011) already address parts of the question. In reality, the description of CISCO’s competitive advantage is only accepted ex post in Di Stefano, Peteraf and Verona’s (2014) analysis and proposition. They list the rules and routines used by CISCO to select startups for its external acquisition strategy and to hire or retain talent for its R&D, but they explain neither the relationship with the VRIN conditions of competitive advantage nor its sustainability. They take these aspects for granted and do not analyze how they relate to rules, routines, ways of working and decision-making processes. The effectiveness of the contribution of these startups to CISCO’s diversified portfolio is not discussed. The analysis is also silent about possible selection (or accommodation) errors with these startups. To paraphrase Foss and Klein (2012: 221), the drivetrain analogy is “descriptively tractable but silent on key analytical problems”.

The drivetrain analogy leaves a practical question wide open. How is it possible to provide explanations for DCs consistent with a real-time perspective on managerial decision-making processes? Explanations have to be consistent with the tools, information, behaviors, rules and routines prevailing in each layer of the organization when actual activities develop. Explanations must be consistent with the actual time frames in which decision-makers are situated, and with the information and knowledge available for decisions at the different moments of such time frames. When scholars reinterpret a decision-making process and have extensive knowledge about all the direct and indirect consequences of a managerial decision, they draw on much more data and information than the actual manager had available when making the decision. They suffer neither the time constraints in place when the actual managers had to make the decision, nor the psychological constraints prevailing during the interpretation of data and information, nor the uncertainty prevailing in business in general. It is typically difficult to walk in the decision makers’ shoes and reconstruct (real-time or ex ante) decisions or perceptions. As in most evolutionist theories, the investigation starts with a confusing tangle of events, where the researcher tries to make sense of some visible outcomes of a complex process. Researchers draw on a preanalytical narrative to erase all bifurcations and provide a linear vision of the different steps towards the elaboration of decisions, like observers who only see the visible part of an iceberg and cannot even imagine that bifurcations exist in the evolutionary process. Their stories do not match the situation that practitioners confront, and often justify decisions and local perspectives retrospectively. When one watches a movie and already knows the end of the scenario, it is easy to make sense of each scene and to spot misinterpretations, errors and irrelevant behaviors. We need a conceptual apparatus that takes into account the complexities of (real) time, radical uncertainty and ignorance (O’Driscoll and Rizzo, 1985).

Multi-level analysis and explanation in social sciences

The notion of multi-level analysis lies behind the methodological issues described in the previous section. It has been described in numerous research fields, most notably sociology, epidemiology, education research, human geography, and demography, where the discussion was introduced in parallel by scholars who treated it independently (Courgeau and Baccaïni, 1998). It is also present in management science and in economics. This point relates to the possibility of reconciling macro- and micro-perspectives in these research fields. Physics had to wait until Heisenberg to learn about discrepancies between “Newton’s physics” and “Einstein’s physics” in this respect. The macro-versus-micro debate, conversely, has always been present in the social sciences. In all research fields, at any period of time, the very same questions emerge again about how to reconcile conclusions generated at the individual versus the collective level, and how to make them consistent with each other. Debates always address the interaction between individual and aggregated characteristics, and their respective influences. This question echoes very old debates claiming an epistemological difference between, on one side, physics and the other sciences of matter and, on the other side, the social sciences and all life sciences. In epistemology, the debates cover in particular the truth status of the laws produced in each research field and their potential for generalization (universality). Separate methodological debates also address all aspects related to the design of research protocols, namely data collection, data codification and data analysis.

The term multi-level analysis was introduced at the very moment when sociologists were discussing the truth-status of the first conclusions generated with descriptive statistics. Robinson (1950) coined the expression when investigating the statistical protocols present in Durkheim’s analysis of suicide. He pointed out that different correlations emerge when statistics are processed at the individual versus the aggregated level. He concluded that such correlations are not substitutes for one another, and generate contradictory explanations, or logical links with different variables.

Selvin (1958) developed the criticism further. He coined the term “ecological fallacy” to stress the importance of the context, or the importance of computing “as many variables as necessary” to explain a phenomenon both at the group and the individual level. In Robinson’s or Durkheim’s words, the term “ecology” is synonymous with “group” and generates the associated correlations (hence the title of Robinson’s 1950 article). Research then tried to identify potential correlations between suicide and other variables such as religion, age, or community at the aggregate level, without paying attention to the underlying rationales at the individual level. Selvin insists that “the main important point is not substantive, but methodological” (ibid., 1958: 611); it relates to the method used for collecting and processing data and generating explanations. In pointing out the “ecological fallacy”, Selvin refers to the initial conditions leading to (sociological) phenomena, to the dependencies on context effects, to interactions in the ecosystem, and to the specificities of the case impacting the potential for explanation and for generalization. He justifies the importance of the context when analyzing data because social scientists “can seldom randomize” (ibid, 613): all data are context dependent.

The same issues apply to management science. All are present in the previous section when we mention rules, routines, decision-making processes, and any other part of the “drivetrain”. Selvin pointed out the necessity of going deep into the variables explaining the context[2] to understand the potential for generalization or replication of any conclusion. The “ecological fallacy”, as he calls it, emerged from the failure of descriptive statistics[3] to generate the same correlations at the individual and aggregate levels, and to refer to the same explanatory variables. At the other end of the methodological debate, the “atomistic” fallacy deserves the very same criticism and shows the same shortcomings. The debates presented in the previous section illustrate that the “ecological fallacy” exists in any research context where data can be collected either at the individual or the collective level, and that it does not depend on the collection method or on the tool used for the analysis (descriptive statistics in Durkheim (1897), case study analysis in Helfat and Peteraf (2015)).

Following these considerations, multi-level analysis seems to represent the solution for appraising a phenomenon in its entirety and explaining it through considerations about the situation in which it occurs. The analysis develops at all levels simultaneously, without giving more importance to any one of these levels (Courgeau and Baccaïni, 1998: 40). The explanation should cover individual and aggregate aspects to increase its relevance, as well as the modalities of interaction between individual and aggregate aspects.

An important step of the discussion develops around the reference to Coleman’s trapezoid (or “bathtub”, see Figure 1 adapted from Coleman, 1986; 1990), which Bunge (1996; 2003: 127) already discussed as the Boudon-Coleman diagram to acknowledge Boudon’s commitment to methodological individualism in sociology (1979 [1981]).

This discussion has direct consequences for the design of data collection protocols.

In Coleman’s terminology, micro refers to individuals; macro refers to social systems (such as a family, a city, a business firm, or a society) and to “collective” entities. Macro-conditions in Coleman are rephrased as “social facts” (e.g. institutions) in Felin, Foss and Ployhart (2015); similarly, macro-outcomes (Coleman) are rephrased as “social” outcomes. The “conditions of individual actions” describe the environment where behaviors and decision-making processes take place (including norms and cultural aspects). “Individual action” and decisions represent an important, yet not unique, part of the explanation of social outcomes. Arrow 1 “represents assumptions on how social conditions affect these variables”. Arrow 2 covers the theories explaining individual behavior and decision-making. Arrow 3 represents assumptions on “how actors’ behaviors generate macro-outcomes”. Abell, Felin and Foss (2008) show that causal chains connecting two macro-phenomena always involve “macro-to-micro” and “micro-to-macro” links, and cannot be explained or understood with direct “macro-to-macro” links.

FIGURE 1

General model from social science explanation

Source: Felin, Foss, Ployhart, 2015: 591
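
For readers who prefer a schematic rendering, the sketch below is our own minimal illustration (not part of Coleman’s or Felin, Foss and Ployhart’s exposition): it encodes the nodes and arrows of Figure 1 as a simple data structure, with labels taken from the description above, and shows that a macro-to-macro explanation is obtained only by traversing arrows 1, 2 and 3 in sequence.

# Minimal, hypothetical sketch of Coleman's "bathtub" (Figure 1).
# Node and arrow labels follow the description above; this is an
# illustration only, not a formalization proposed by the cited authors.

NODES = {
    "A": "macro-conditions (social facts, e.g. institutions)",
    "B": "conditions of individual action",
    "C": "individual action and decisions",
    "D": "macro-outcomes (social outcomes)",
}

ARROWS = {
    1: ("A", "B", "assumptions on how social conditions affect individual-level variables"),
    2: ("B", "C", "theories explaining individual behavior and decision-making"),
    3: ("C", "D", "assumptions on how actors' behaviors generate macro-outcomes"),
}

def macro_to_macro_explanation():
    """A macro-to-macro explanation must traverse arrows 1, 2 and 3;
    the diagram contains no direct A -> D (macro-to-macro) arrow."""
    return [ARROWS[i] for i in (1, 2, 3)]

if __name__ == "__main__":
    for source, target, reading in macro_to_macro_explanation():
        print(f"{NODES[source]} -> {NODES[target]}: {reading}")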

We will return to these aspects below and explain how the microfoundations perspective applies to DCs.

The unit of analysis issue in multi-level analysis

The methodological difficulty identified in the DCs debate relates to the necessity of handling different levels of analysis at the same time, as illustrated in the drivetrain metaphor. Key critical issues described in relation to organizational capabilities (Winter, 2003) and the capabilities-based literature focus on collective-level constructs such as routines and capabilities, and on the challenges of linking these constructs to other constructs at different levels (Felin and Foss, 2005: 443). Nelson and Winter (1982) focus on routines as the unit of analysis; they figuratively refer to individual skills and collective routines, but they give primary emphasis to organizational routines over individual behavior (1982: 9, 134-5). Eisenhardt and Martin (2000) adopt this very same option, while Teece (2007, 2014) mixes the approach with the individual behaviors of different categories of managers. When the literature on routines and capabilities moves in a more literal direction, explicitly independent from individuals, it adds implicit assumptions about individual homogeneity: the organization’s members are malleable beings influenced by the organization’s identity, and learning is primarily limited to internalizing from the social context (Spender, 1996: 53). Felin and Foss (2005: 443) explain that this line of reasoning places the explanatory burden on context and environment (the “contingencies” in Peteraf et al., 2013) over individual causation.

Di Stefano et al. (2014) describe the evolution of the DC field with a figure showing two axes and many interactions: the horizontal axis connects the nature of DCs, the subsequent action, and the associated ultimate goal; on the vertical axis, agent(s) serve as antecedent(s) of action, and the object of DCs as a consequence. They list “bifurcations” for the five elements yet never try to propose a comprehensive analysis of the interactions between them. They only show a flat list of items. The nature of DCs alternates between a latent action (e.g. an ability, a capacity) and constituent elements (as in a process, a routine, or patterns). The difference lies in the degree of observability of phenomena. The two perspectives on action show an even split over whether DCs change an existing base or create something new. The object of the action of DCs recalls the debate of capabilities (or resources) versus opportunities. Di Stefano et al. (2014) write that the aim of DCs seems the most critical yet divided issue, and therefore suggestive of complexities: they oppose a focus on competitive performance as a whole to a “more general notion of helping an organization to respond to changing conditions”. Agents are located upstream of action and of the object of DCs on the vertical axis: the focus shifts from managers to organizations. Di Stefano et al. (2014: 315) maintain that these two views “may not be incompatible” and “that the two views may have different appeals for different types of audience”. The organizational view should lead to “more robust theoretical foundations” while the managerial version puts “more emphasis on practice” and “shows more concern for real world utility” (sic). The analysis of the agents of DCs remains the only moment where Di Stefano et al. (2014) identify an issue with the level of analysis. We agree that this point is important, but we cannot understand why this issue does not pervade the analysis of all the bifurcations mentioned in the article. Discrepant units of analysis impact the action, the aim, the nature and the object of DCs. Consistency issues between these levels must be considered as a part of the analysis for each perspective, whereas the “bifurcations” are listed in isolation from the other aspects. Di Stefano et al. (2014) never address how these items may fit together. We suggest that the problem of the field lies precisely in this consistency issue. Multi-level analysis is a mandatory aspect of research on DCs.

Felin and Foss (2005: 446-7) identify four important methodological problems related to multi-level analysis in strategic management, which are applicable here:

  1. Multi-level theories most often borrow specific results from psychology and apply them to higher levels of analysis (for instance: behaviorism, stimulus-response theory) without even questioning their scientific status in the original field (expanded in Felin and Foss, 2011; 2012). Consistent with Petroni (1991), we add that the reference to psychology rests on either a methodological or an epistemological preference for reductionism.

  2. Multi-level theories most often view the analyses available at each level as complementary to each other, and as equally valid for the explanation. They implicitly advocate for some sort of pluralism without considering that it might represent a form of relativism. This option relates to “coherentism” (i.e. the coherentist theory of justification), which is a view about the structure and system of knowledge. Avoiding relativism and opportunistic research protocols requires an analysis of how the concepts and theories available at the different levels complement each other. We supplement this point: multi-level theories do not address the question of transformation, that is, the rigorous transposition of investigations from one level of analysis to the other. This point covers another theoretical demand: the global consistency between the explanations provided at each level (e.g. from the individual to the collective level).

  3. Multi-level theories fall into the trap of infinite regress, or regressive[4] argument. At each level, the explanation depends on another point introduced at a different level. This is for instance the case when explanations about the organization logically depend on the business unit level, which in turn justifies its content with a point made at the team level, and then at the micro-individual level, which then looks for justifications in behavioral psychology, then in cognitive psychology or in neuroscience, etc.[5]. The infinite regress emerges because it is always necessary to go further back, and there is no end to the provision of justifications. An important point here is the difference between explanation and justification. When any part of the sequence of arguments is investigated, its truth-status (or its “proof”) depends either on a justification or on the infinite regress towards another justification; the same applies to the proof itself, and to any other proof at any other level. No level provides an actual demonstration[6] (with the reference to safe internal and external logic). This issue is not easy to solve, because it relates at the same time to recurring debates in philosophy and epistemology (for instance about reductionism), and to practical considerations about the design of research protocols. Albert (1968) and Bartley (1984) have discussed the articulation between these different categories. We do not want to open such Pandora’s boxes in this article. We point out that the confusion between justification and explanation (or demonstration) represents an actual challenge for the design of research protocols, and for data analysis. The growth of scientific knowledge elaborates on the confrontation between theories and observations, and on the demarcation between “certainty” and “truth”.

  4. Regressive arguments lead to another very important question: where does the added value/contribution of a specific research field lie, if its concepts cannot autonomously provide an explanation for a phenomenon? Felin and Hesterly (2007) discuss the point in relation to captive audiences and academic insularity. We prefer to point out other aspects. At the scientific level, concepts automatically reduce (and possibly lose) their explanatory power and, therefore, their relevance if they logically depend on another field. At the methodological level, nesting together many explanatory layers generates as many “protective belts”, thus immunizing the theories.

Helfat and Peteraf (2015) illustrate all these methodological problems at the same time. To analyze “cognitive managerial capabilities” and DCs, they import the analytical description of individual cognition and the subsequent learning processes from psychology without considering that the Posner and Rothbard (2007) article is only one of many contributions to the conversations in this field. When they use the article without acknowledging its status as a non-conclusive and highly debated proposition, they also fall into the regression issue. Helfat and Peteraf (2015) build a bridge between the individual, team and collective levels to claim an analysis of the effectiveness of top managers’ impact on their organizations. They emphasize the role of managers’ cognitive capabilities in DCs. However, they do not look for an explanation of the respective influences between individuals, teams and collective entities. They place the managers in isolation, not in interaction, and refer to the notion of “cognitive capabilities” as a sort of black box whose explanation is eventually “delegated to” psychology. Some of the research questions for which they use Posner and Rothbard (2007), however, exist in conversations within management science: the twin reference to mental models and mental activities discussed in the strategic management of knowledge in Boisot (1998) or Boisot and Canals (2004); team coordination and team performance (Marrone, 2010); team effectiveness (Salas, Goodwin and Burke, 2009) and team cognition (Salas and Fiore, 2004). The list is not exhaustive. Research on team cognition or team effectiveness illustrates how to build a multidisciplinary debate, to jointly go as deep as possible into psychology and management science at the same time, and not to transform either field into a sort of black box for the other one.

The question of multi-level articulation is illustrated by Teece’s (2017) article and by the already mentioned difference between “dynamic” and “ordinary” capabilities (2017: 1-2). “Ordinary” capabilities relate to functions such as routine, administration and basic governance activities. DCs in turn separate into higher-order versus second-order categories. Second-order DCs relate to “astute decision-making under uncertainty”, and implicitly relate to tasks performed by middle managers. Higher-order DCs build the top managers’ originality, materialize in the sensing, seizing and reconfiguring actions, and “guide”, “aggregate” and “direct” both ordinary capabilities and second-order DCs. Even if he does not clearly state it, Teece adheres to multi-level analysis in distinguishing three levels in the organization: ordinary capabilities for basic governance and basic administration (routine back office), middle managers for the daily front office, and top management making strategic decisions. Teece even uses the word “microfoundations” to describe the relation between ordinary capabilities and second-order DCs on the one hand, and higher-order capabilities on the other hand. Most of Teece’s conclusions show how cognitive and decision-making processes situated at the top-manager level build the interaction between DCs and the definition of a firm’s grand strategy. Teece locates his 2017 contribution at the top management level by making this a genuine consequence of his definitions of capabilities and of “sensing”, “seizing” and “reconfiguring”, not of his research protocol (as the proponents of the micro-foundations approach would do). These definitions assign properties and prerogatives to each category of manager (in each layer), which in turn feed the investigation of individual epistemic mechanisms. Teece’s reference to microfoundations therefore represents a way to open the black box of (individual) epistemic mechanisms and decision-making rationales without using any justification borrowed from psychology. However, Teece does not go into the details of the actual ways of running the “drivetrain” pictured by Di Stefano et al. (2014), and never discusses how to articulate the organizational levels together. Focusing on the “drivetrain” would require questioning the relevance of the definitions. It would mandate a confrontation between these definitions and the articulation between individual competencies, decision making and epistemic processes. It would also require an analysis of the process through which collective competences are built inside each layer of the organization, and a comparison between the contents of the definitions and the actual processes prevailing between layers. In the 2017 article, Teece remains silent on the processes bridging individual competencies with collective competences.

Update on methodological debates about multi-level analysis

The debate on the relevant unit of analysis for DCs is one of the numerous illustrations of such (methodological) difficulty in the social sciences. It shows, if proof were needed, that the debate still has an important impact on research. Notable contributions were recently published in the Journal of Institutional Economics in 2010 and 2011, with a debate between Felin and Foss, and Winter, Pentland, Hodgson and Knudsen. In 2010, Erkenntnis published Vromen’s analysis of the Abell, Felin and Foss (2008) proposition to use Coleman’s “bathtub” (1990) as a methodological reference for multi-level theories, and their reply. More recently, in Sociological Theory, Jepperson and Meyer (2014) introduced a critique of micro-level, mechanism-based explanations aimed at the same authors, who replied later in Sociologica (2014) (with subsequent comments by Barbera and Negri, 2014). Jepperson and Meyer develop their points with reference to Bunge (1996, 2000) and elaborate on his rejection of methodological individualism. All (including Barbera and Negri) ignore that Foss and Felin explicitly refer to Agassi (1975, 1987), who explained that any individualism is by (logical) definition a form of institutionalism and, at the minimum, a form of interactionism. Both Bunge (1996; 2000) and Jepperson and Meyer (2014) only comment on forms of radical individualism that are not present in the microfoundations approach. They miss the references to Agassi’s “institutional individualism” and to Popper’s situational analysis even though Agassi explains that such concepts and Bunge’s “systemism” (2000) “cover the same ground” (2011: 545).

One point lies at the foundation of the discussion and the controversy: there can be no such thing as direct macro-to-macro causation. This is a core argument extensively discussed by all promoters and detractors of methodological individualism, either in philosophy of science and methodology, or in various sciences (most notably sociology, economics and management). Felin and Foss (2012: 10) give a very explicit presentation of these methodological requirements: the microfoundations program expects “to unpack, where possible, the central constituents, processes and interactions among individuals” in order to explain collective entities, such as norms, routines, capabilities, etc. To stress the point, they systematically refer to such collective entities as “outcomes”.

Felin and Foss also invoke the Popperian reference to “clouds” and “clocks” (Popper, 1972) to remind us that the “contingencies” (in the words of Peteraf et al., 2013) or “situational features” (in Latsis’ words, 1983: 140) do not force particular behaviors in the way behaviorism postulates, namely that agents are programmed to automatically follow experience-based heuristics, rules, routines, etc. With this reference, they show their consistency with the philosophical (ontological) posture in favor of indeterminism that is also present in Menger (1883) and in Popper (1972). The arguments nevertheless remain logically independent from each other, as Agassi (1987), Bunge[7] (2000) and Jepperson and Meyer (2014) commented from very different perspectives.

It is also important to note that the methodological prescriptions “to unpack” quoted earlier for the microfoundations approach remain independent from any form of ontological individualism, i.e. the thesis that collective processes are produced and reproduced by individuals only, and that methodological and conceptual prescriptions would be required because actions are enacted by individuals (Bunge, 2000: 385). In his 2011 comments on Felin and Foss (2011), Pentland was already calling for a close examination of the “ontology of organizational routines” (2011: 285) and was indirectly addressing the holism versus individualism debate through the prism of macro-level entities, and of possible macro-to-macro causation. The “ontological truism” represents neither an explanation of, nor a reference for, the microfoundations approach. It is however possible to reverse the argument and identify that Jepperson and Meyer (2014) make an implicit (philosophical and/or ontological) argument in favor of the autonomy (the primacy) of macro-level entities over individuals. This is the reason why these authors link “action” [“actorhood” in their own words] with cultural and social constructions without direct reference to individuals (as already present in Jepperson and Meyer (2000) and in Abell et al. (2014)). Barbera and Negri (2014: 7) point out these shortcomings of Jepperson and Meyer’s development when they comment on the bottom-left part of Coleman’s diagram (the “conditions of individual action”).

A specific difficulty arises with the explanatory power of the microfoundations approach: the logical possibility of exhausting the explanation of a phenomenon through it. Jepperson and Meyer introduce this point with the difference between “microfoundations of social-organizational and institutional pathways” and “causal arguments at the level of individuals conceived as actors” (2011: 67). They explain that the micro-composition of a truly collective-institutional process is not equivalent to the sequence of arrows 1, 2 and 3 on Coleman’s diagram, referred to as the “macro-micro-macro” explanation. Genuine individualistic causalities would emerge from individual-level decisions, behaviors, and their respective effects. Microfoundations conversely relate to more complex sets of enactments rooted in interactions between individuals, and in the reference to institutional structures (e.g. roles, routines, processes). We accept Jepperson and Meyer’s (2014) points that (1) microfoundations relate to the “lower (or lowest-relevant-level) causal process within a multi-level system” (2014: 68) and (2) microfoundations do not conflate with causality in general. We disagree that social-organizational and institutional properties “can be connected via a micro-level pathway” (ibid, our emphasis) because we maintain that they should always be explained, supported or contextualized via such pathways. We disagree that macro-properties “may also be connected via distinct macro-level causal pathways” (ibid, our emphasis) for two reasons: first, we hold that only individuals can be the actors of the “actions” explained with the causalities identified in social science in general, and in management in particular; second, we maintain that any macro-level pathway should always be contextualized in its situation to avoid “ecological fallacies” (Selvin, 1958). The same holds for the “meso” pathway indicated by Jepperson and Meyer (2014: 66, 68) in their adaptation of Coleman’s diagram.

Abell, Felin and Foss consider social complexity, emergence and multi-level processes as “mere re-statements of the problem of social theory rather than a basis for explanation” (2014: 2). They do not situate the microfoundations approach in any sort of (micro-)reductionist methodological perspective. This difference is important. In their replies published in the Journal of Institutional Economics in 2012 and in Sociologica in 2014, Abell, Felin and Foss always use the same words: “to unpack constituent and component parts”. From a conceptual perspective, these words align with Coleman’s “macro-micro-macro” pathway. Consistent with institutional individualism, these words mean that macro and social outcomes emerge[8] from individual actions and from interactions (Ullman-Margalit, 1977) as “the result of human action, but not of human design” (Watkins, 1987: 432). From a methodological perspective, it is important to point out that the words used by Abell, Felin and Foss do not have any technical content in relation to the reductionist debate underlying the issue of causation in individualism. Tracing phenomena back to their constituent and component parts translates into modern words the research protocol described by Menger (1883[1996]: 45). The founding father of the Austrian school drew a distinction between the technical meaning of the word “to reduce” (reduzieren) and vernacular uses of the word (zurückführen). Carl Menger used zurückführen.[9]

Abell, Felin and Foss (2008) identify “missing links” and claim a “causal incompleteness” that Vromen criticizes as an amalgam between causal links and constituents. Constitutive relations, analyzed by Petroni (1991: 42), Ylikoski (2013) and Bulle (2018), represent elements of the relation between parts and wholes: to state that a part is a constituent of a whole is not the same as asserting that the part causes the whole. It is easy to state that the whole would be different if made of different parts, but the causal link critically depends on a logically causal and temporal precedence. Abell, Felin and Foss (2010) accept this distinction as a missing aspect in their 2008 article. They point out that the properties of the part should be analytically separable to play a role in the explanation of the whole and, also, of inter-level relations. While Vromen (2010) uses this argument to support the conclusion that “macro-to-macro” explanations may eventually hold, Abell, Felin and Foss (2010) nevertheless maintain three points: (1) inter-level relations can be causal; (2) there are no macro-level causal mechanisms; and (3) specific definitions or implementations of the concepts of routines and capabilities may lead to interpreting them as macro causes inasmuch as it is possible to define their content independently from micro-variables. Boland (2003: 257) has already shown that the inclusion of macro-foundations (i.e. the macro-to-micro dimension in Coleman’s diagram) violates neither Popper’s situational analysis nor Agassi’s institutional individualism. Boland’s demonstration supports Abell, Felin and Foss’s (2010) third point.

Ylikoski (2014; Section 7.4) shows that causal explanations track causal dependencies, and constitutive explanations track constitutive dependencies. Ylikoski (2013) explains that what has to be explored and explained to build constitutive explanations is always located at the micro-scale. Using logical and ontological arguments, he explains that organizations and “higher level” structures most often exhibit many properties that are not those of their members. With different arguments, Ylikoski reaches a conclusion already pointed out by Petroni (1991) and Agassi (1975) about Popper’s discussion of “situational analysis” and of the “rationality principle” (1994). Popper’s demonstration of the impossibility of reducing[10] any decision to the decision maker’s psychology automatically leads to the conclusion that it is methodologically impossible to propose explanations of social phenomena (“social outcomes”) that solely elaborate on individualistic drivers. Multi-level analysis is a necessity.

Practical recommendations to implement the micro-foundations approach: “small-m” methodology

Macro-outcomes follow from an explanans comprising, at the same time and with the same level of importance, assumptions on individual behavior, macro-conditions, macro-to-micro relations (“bridge assumptions” in Raub and Voss’ words), and micro-to-macro relations (“transformation rules”). These aspects should all be investigated in the protocol set up by anyone who expects to adhere to the micro-foundations approach, and documented with data collection, data codification and data analysis. Felin and Foss (2012) have already commented on these aspects and called for more emphasis on choice, foresight, anticipation and rationality, and for the necessity of acknowledging in the analysis that individuals are not rigid followers of rules and routines. The most important part of the microfoundations approach is the necessity to produce a comprehensive picture of the entire “bathtub” and “to unpack” all the aspects underlying the arrows and nodes. We call for a theory of intentional action and of purposive factors, associated with the collection of data allowing for an appreciation of the logic of the situation. To establish a comprehensive perspective on the situation, we also call for a triangulation of data sources focusing on different categories of actors and decision-makers, and for a systematic contextualization of the action with the different stakeholders in the ecosystem.

We conclude this article by introducing practical methodological recommendations on how to implement the microfoundations approach in data collection, data codification, and data analysis. Table 1 recapitulates all the aspects of field research design in the microfoundations approach. The research question may focus on any of the nodes or arrows of the “bathtub”, with the aim of establishing causalities and logical linkages, typologies, taxonomies, etc. This perspective directly supports several types of field research protocols: inductive and abductive, but also hypothetico-deductive or deductive ones. It is consistent with either qualitative or mixed methods, with longitudinal or cross-sectional analysis, and aligns perfectly with the recommendations of the inductive (Glaser) or abductive (Strauss) interpretation of Grounded Theory (Glaser and Strauss, 1967).

Table 1

Documenting the methodological options with the micro-foundation approach
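
As an illustration of these recommendations on data codification, the sketch below is a hypothetical example of ours (not drawn from Table 1 or from any of the cited studies): it shows one possible way to tag coded excerpts from triangulated data sources by organizational level and by the node or arrow of the “bathtub” they document, and to check that every element of the diagram is covered by at least one source.

from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch: tagging coded excerpts from triangulated sources by
# organizational level and by the element of Coleman's diagram they document.
# Field names and example values are illustrative assumptions only.

@dataclass
class CodedExcerpt:
    source: str           # e.g. "interview: CEO", "board minutes", "observation"
    level: str            # "individual", "team" or "organizational"
    diagram_element: str  # "arrow 1", "arrow 2", "arrow 3" or a node label
    code: str             # analytical code, e.g. "sensing", "routine adaptation"
    excerpt: str          # verbatim quote or field note

def coverage(excerpts):
    """Return, for each element of the diagram, the (level, source) pairs that
    document it -- a minimal triangulation check across data sources."""
    table = defaultdict(set)
    for e in excerpts:
        table[e.diagram_element].add((e.level, e.source))
    return {element: sorted(pairs) for element, pairs in table.items()}

# Illustrative usage with invented data:
data = [
    CodedExcerpt("interview: CEO", "individual", "arrow 2",
                 "sensing", "We screen startups every quarter..."),
    CodedExcerpt("interview: project team", "team", "arrow 3",
                 "routine adaptation", "The integration checklist changed after..."),
]
print(coverage(data))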

The micro-foundations approach currently lacks quantitative research articles, most notably in the domain of DCs and routines, even though mixed methods applying quantitative tools to data collected from questionnaires, or text mining from interviews, would be easy to implement. It is conversely easy to illustrate these recommendations with articles using qualitative methods. Table 2 describes the field research protocols offered in two articles that discuss questions in strategic management and comply with the microfoundations approach: Merindol and Versailles (2018, EMR) and Barney, Foss and Lyngsie (2018).
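
To illustrate how easily a simple quantitative tool could be combined with such qualitative material, the following sketch is again a hypothetical example (not taken from the articles discussed in Table 2): it computes term frequencies per organizational level from interview segments that have already been tagged.

from collections import Counter

# Hypothetical sketch: a basic quantitative pass (term frequencies by
# organizational level) over interview segments already tagged by level.
# The segments below are invented for illustration.

tagged_segments = [
    ("individual", "we sense new opportunities before the budgeting cycle"),
    ("organizational", "the acquisition routine was reconfigured last year"),
    ("individual", "I decide which startups to screen and when"),
]

def term_frequencies_by_level(segments):
    """Aggregate word counts for each organizational level."""
    frequencies = {}
    for level, text in segments:
        frequencies.setdefault(level, Counter()).update(text.lower().split())
    return frequencies

for level, counts in term_frequencies_by_level(tagged_segments).items():
    print(level, counts.most_common(3))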

These practical aspects lead to an important point.

The microfoundations approach is an instance of applied methodology, or “small-m” methodology as Boland coins it (2003: 4sq; 308; chapter 18). Boland contrasts the establishment of an autonomous body of research working primarily on new models and new questions in the methodology of the social sciences, and of economics in particular, with activities addressing actual research questions in economics. Boland points out that economists look for the support of “methodological plumbers” (sic, 2003: 4); they do not need any promoter of a new form of heterodoxy. It is easy to transpose Boland’s comments to management science. The microfoundations approach does not intend to develop as an autonomous subfield in management research. Our purpose is to serve the advancement of research on key concepts (for instance DCs) with recommendations on three aspects: the design of field research protocols, data collection, and data analysis. We point out that this agenda cannot ignore the underlying connections to framing and recurrent debates in epistemology and philosophy but, as noted by Boland, the domain of “small-m” methodology is already very large. In the 1978 introduction to the first edition of Die beiden Grundprobleme der Erkenntnistheorie[11] ([1978] 2009: xxv), Popper identified two different topics that had not been properly considered in his previous publications: deciding whether a statement is decidable (its confrontation with field research and the discussion of its scientific status) versus the question of its truth (i.e. its demarcation from non-science, most often cited as the falsifiability principle, cf. Popper, [1978] 2009: xxvi, point 9). Decidability depends on the proper design of field research protocols, and on data collection. This is precisely where we intend to locate our effort.

Table 2

Our methodological recommendations in practice

This article intended to zoom out from current debates around the microfoundations approach and to explain how research on DCs might benefit from this reference. We have also introduced practical recommendations for the design of research protocols consistent with the microfoundations approach. This does not mean that all issues have been cleared. The sections of this article illustrate, on the contrary, that the microfoundations approach relates to very old controversies in the social sciences while being situated, at the same time, within lively debates in modern epistemology and methodology. These arguments are not obstacles for management science. We consider them as opportunities to foster the development of better-defined research protocols in management science in general, and on DCs in particular.