Document Type : Commentary
Author
Division of History, Heritage, and Politics, University of Stirling, Stirling, UK
In principle, the aim of Squires et al1 is laudable since they have a great general point: let’s not be too vague when we describe policy-making ‘context.’ They note that many people use ‘context’ imprecisely, often as a catch-all term for the things we suspect to be important explanations for variations across different cases. To solve this initial problem, they identify how some scholars have used this term, followed by interviews with expert practitioners in multiple countries to see what they mean by ‘context.’ The result is an impressive list, which operationalises a key idiom – forewarned is forearmed – and adds value to similar lists in the knowledge translation field. In practice, there are multiple issues worthy of further attention.
While Squires and colleagues’1 (p. 16) initial purpose is to foster research, they also hope to use the results to guide implementation by helping ‘change agents’ to identify ‘the important features of context to consider when choosing, designing and implementing interventions.’ This aim exposes a difference between two connected objectives: to (1) describe or (2) engage in policy-making. As a description, a long and thorough-looking list can be interesting without being overwhelming: the reader will soon realise that one could not possibly incorporate all of these factors into policy design. Then, they might explore, for example, how policy designers simplify the list or the consequences of only taking into account a small proportion of factors. Either way, a long list is comforting to have if we do not need to use it. As an aid to prescription, the potential to be practical is outweighed by the likelihood that such lists become overwhelming.
As such, the modern history of implementation studies provides a relevant cautionary tale, albeit told in more or less positive ways.2-6 Put most provocatively, implementation studies was ruined by an excessively long list. So-called 'first generation' studies took a 'top-down' approach, focusing on implementation gaps with reference to a small number of key factors.7 These factors combined to represent a manageable research agenda (to study implementation gaps) and a practical heuristic (to try to minimise them). The latter involved making sure that your aims are clear and well communicated to skilful and committed staff, while devoting sufficient resources, maintaining stakeholder support, minimising the number of actors or steps essential to the delivery chain, and hoping that external events or socioeconomic conditions do not undermine your plans. The former involved identifying a tendency – largely in case study research – for these aims not to work out in practice.
It was followed by a ‘second generation’ of studies that examined these dynamics from the ‘bottom up.’ Such studies noted that the aim to close an ‘implementation gap’ from a top-down perspective was misguided empirically (the centre does not control implementation networks) and normatively (the ‘gap’ may be a legitimate deviation from central government aims).8,9
Then came the 'third generation' of implementation scholars, who sought to move beyond case studies to foster large-n studies. This task required them to turn (1) a huge shopping list of the factors that might be crucial to implementation into (2) a shortlist that was parsimonious enough to produce a manageable research agenda. The aim may have been to quantify the combined impact of key factors, but the result was limited interest in the call for a third generation.
Overall, what began as a simple and practical heuristic became an overwhelming list of variables. It seemed to prompt many scholars to move on to other concepts (with great potential to reinvent the wheel). Further, those who remained seemed to offer more rigour and more studies but less to say to (a diverse group of) practitioners (p. 310).4
In that context, Squires and colleagues’ aim is laudable, but we should beware the unintended consequences of their attempt to solve the problem, and the possibility of creating a bigger one. This problem may be compounded by seeking new data from practitioners without first learning from previous approaches to comparable issues, including not only implementation studies but also studies of policy analysis and design.10
Implementation studies sought to operationalise key variables to help quantify the extent to which each explained implementation issues. It struggled to manage so many variables and, crucially, was not able to establish how they interacted to produce emergent outcomes in complex systems.
In comparison, Squires and colleagues' study categorises many factors, including culture, geography, governance, political climate, and leadership. Each of these concepts comes with its own literature, which describes its ambiguous and multi-faceted nature. If so, much like the sorcerer's apprentice, we may be in danger of replacing one big vague term – context – with many smaller ones.
For example, 'governance' can be associated with a normative stance: the requirement for 'good' governance. This term is highly contested, such as when 'new public management' ideas based on private sector methods face some challenge from 'new public governance' ideas based on concepts such as collaborative governance and public value.11-13 Or, governance is a shorthand term used to make an empirical and conceptual point: to describe the inadequacy of the word 'government' to capture how policy-making actually works. Or it is little more than a catch-all term used to introduce a wide range of empirical studies in different ways.14
Further, perhaps the most important category of all is 'System complexity,' accompanied by the quotation: 'I think a key challenge is related to the complexity or the under-estimation of the complexity of the system involved. … if you've done any work in a complex system, when you shift something in one place, something moves elsewhere that was unexpected' (p. 9).1 This category not only adds to the list of concepts that need to be unpacked to be useful, but also exposes a major division in attitudes to policy-making context. One use of 'systems thinking' for policy design is to seek the ability to use policy levers to produce a disproportionate impact: 'if we engage in systems thinking effectively, we can understand systems well enough to control, manage, or influence them' (p. 130).10 An alternative focus in policy studies is to describe the policy outcomes that 'emerge' from complex policy-making systems in the absence of central control: 'we need to acknowledge these limitations properly, to accept our limitations, and avoid the mechanistic language of "policy levers"' (p. 130).10
This distinction between types of systems thinking introduces a wider point about many of Squires and colleagues' categories: they describe functional requirements rather than actual policy-making dynamics. In the list of 'context features,' we find requirements to design a well-functioning organisational structure and networks in which many actors interact, build trust through beneficial social interaction, secure organisational readiness for change, find effective local champions, secure buy-in from partners, and have sufficient capacity to deliver (including enough people well trained in implementation or 'translation science'). There is a big difference between listing requirements and securing them in practice. As in the study of policy analysis and design, these factors may be more useful to help explain gaps between expectations and outcomes.
The authors have produced a very useful intellectual exercise, prompting scholars and respondents to take care not to use 'context' too loosely. This is a welcome service to the profession, which could be extended by comparing responses across different countries and political systems. However, it is not yet clear how these categories would help 'change agents' engage more effectively during policy implementation.
Ethical issues
Not applicable.
Competing interests
Author declares that he has no competing interests.
Authors' contributions
PC is the single author of the paper.