Comment: The debates about evidence-based research, practice and policy appear to be never-ending. This ongoing contestation is both necessary and predictable, given what is at stake. Under consideration, basically, are the roles researchers can expect to play in influencing governing practices. I have been interested in this topic for some time.

In 2008 I wrote an article examining debates in the 1970s about “research utilization” and attempts by researchers to influence policy making (Bacchi 2008). I traced the changing dynamics of these debates through to later discussions of the so-called “know-do gap” in relation to the social determinants of health (1990s, 2000s and continuing). To summarize a rather complicated argument, I highlighted the institutional constraints on researchers – how the topics they investigated (the “problems” set for study) were linked to a large extent to their bases of support. I described a shift, from the 1970s to today, toward ever-greater reliance on government and other external forms of funding. My argument, put starkly, was that sources of funding influenced topics of research, shaping and limiting the research agenda.

This argument links directly to the evidence-based practice debates, but in a way that is not immediately obvious. Current concerns around evidence tend to focus on questions about “whose evidence?”, challenging, for example, the “hierarchy of evidence” that privileges RCTs (Randomized Controlled Trials). I ask a question that precedes this concern. I ask: “evidence for what?”

Put briefly, evidence-based policy enshrines a form of applied problem solving, based on a rational policy model (see Howlett 2010, p. 19). In this model, what goes under-examined are the “problems” that are assumed to launch both the policy-making and research exercises. In evidence-based practice, the focus is on “what works”, assuming that the goals – the research “problems” – set for testing interventions are legitimate and non-prejudicial. Given the commitment in the “What’s the Problem Represented to be?” (WPR) approach to interrogating assumed policy “problems”, my concern at the willingness to accept and work within such constraints can be anticipated.

Some researchers (Biesta 2007; Sanderson 2003, 2009) who raise critical questions about the “what works” approach to research embrace a version of pragmatism, drawing on Dewey (see Research Hub entry, 30 April 2018). They make an extremely important point – that critical scrutiny needs to target not just the means of research but the ends as well. However, this goal is compromised, I believe, by the lack of attention to how a problem-solving mindset – which both authors are loath to abandon – continues to assume the existence of legitimate, predetermined “problems” as research foci.

In this vein Biesta (2007, pp. 15, 18), drawing on Dewey, endorses a form of “reflective experimental problem solving” or “professional problem solving”, and continues to assert that the goal of policy is to “successfully solve problems” (p. 19; emphasis added). Along similar lines, Sanderson (2009, p. 699; emphasis added) identifies as one intellectual pillar in thinking about evidence-based policy “social scientific knowledge and its role in guiding action to address social problems”.[1] Drawing on Dewey, he asserts that “the foundation for the development of knowledge about the world lies in active engagement with concrete problems” (Sanderson 2009, p. 709; emphasis added). This engagement, says Sanderson (p. 709), involves “commitment to the methods of scientific inquiry and to the principle of experimentation, subjecting hypotheses to empirical test in an active engagement with experience”.

Sanderson’s aspiration is to produce a modified (“neo-modern”) approach to policy making that incorporates “a broader perspective” based on dialogue and argumentation. He (p. 710) insists that “our knowledge is always fallible and open to further interpretation and criticism”. As a result, he recommends that we “introduce pilots or trials, evaluate their success and move forward cautiously” (p. 714).

WPR intervenes in these debates by pointing to the non-innocence of the “problems” set for researchers. It argues that the failure to question the “problems” set for analysis – or for pilot studies! – severely limits the scope of any critique. Put simply, I argue that, in order to interrogate the ends of research – which both Biesta and Sanderson strongly endorse – one needs to interrogate the beginnings: the “problems” set for investigation.

I want to ask, for example: just how many more studies of “obesity” do we need? How has “obesity” come to be constituted as a health “problem”? How does it function to shape health research practices? What fails to be attended to in such research? What is not being investigated? On the implications for researchers, I can do no better than quote Merton and Lerner’s (1951, p. 306) caution:

The scientist is called upon to contribute information useful to implement a given policy, but the policy itself is “given”, not open to question … So long as the social scientist continues to accept a role in which he (sic) does not question policies, state problems, and formulate alternatives, the more does he (sic) become routinized in the role of bureaucratic technician.

A first step toward challenging this designated and compromised role, I suggest, is to disrupt the problem-solving mindset that dominates the current intellectual and policy landscape. WPR aims to contribute to this goal.


Bacchi, C. 2008. The politics of research management: Reflections on the gap between what we “know” [about SDH] and what we do. Health Sociology Review, 17(2): 165-176.

Bacchi, C. 2016. Problematizations in Health Policy: Questioning how “Problems” are Constituted in Policies. Sage Open, April-June: 1-16. DOI: 10.1177/2158244016653986.

Biesta, G. 2007. Why “What Works” Won’t Work: Evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1):

Howlett, M. 2010. Designing Public Policies: Principles and Instruments. Routledge.

Merton, R. K. and Lerner, D. 1951. Social scientists and research policy. In D. Lerner and H.D. Lasswell (eds) The Policy Sciences: Recent Developments in Scope and Method. Stanford, California: Stanford University Press, pp. 282-363.

Sanderson, I. 2003. Is it “what works” that matters? Evaluation and evidence-based policy-making. Research Papers in Education, 18(4): 331-345.

Sanderson, I. 2009. Intelligent Policy Making for a Complex World: Pragmatism, Evidence and Learning. Political Studies, 57: 699-719.

[1] Sanderson’s (2009, p. 699) other intellectual pillar comprises “our developing knowledge about complex adaptive systems”. I have raised qualms about the “turn to complexity” in Bacchi 2016.