It is difficult, if not impossible, to keep up with the pace of change in AI. Some months ago, I shared with you my first experiments with ChatGPT (29 Sept 2023). A question arises as to the usefulness of WPR in relation to these developments. In this entry I introduce a recent article that tackles European policy regulating AI and that draws upon WPR to produce a cogent analysis (Pham and Davies 2024).
I am frequently asked for exemplars of how to work with WPR. The Research Hub provides an ideal venue to pursue this project. Hence, you can expect to find several subsequent entries that will bring to your attention useful WPR applications. The entries will include comments on the kinds of materials that can be used for a WPR analysis and on theoretical issues that require further consideration.
Article: Bao-Chau Pham & Sarah R. Davies (02 Jul 2024): What problems is the AI act solving? Technological solutionism, fundamental rights, and trustworthiness in European AI policy, Critical Policy Studies, DOI: 10.1080/19460171.2024.2373786
Brief summary:
The authors critically explore the policy “responses” to AI produced by the European Commission, with a particular focus on the AI Act (the EU’s Proposal for a Regulation: Laying Down Harmonised Rules on AI), first published in April 2021. They find that this policy constitutes the technology as both opportunity and threat, and that these problematisations are made to “cohere” through risk-based categorizations. A particular point of interest is how these problematisations enact an “exceptionalist notion of Europe as a policy actor and coherent political community” (Abstract).
Materials used:
Pham and Davies (2024) offer a clear guide to the sources they draw upon for their analysis. Table 1 (p. 6) lists the AI policy documents published by the European Commission (EC) and the High-Level Expert Group on AI (HLEG AI) between April 2018 and April 2021. The authors explain that, while the policy “response” is most explicitly spelt out in the AI Act, they consider this wider “corpus”,
“as the documents are all constitutive of the policy discourse that underpins the AI Act and wider EU AI policy discussion. These additional documents allow us to understand how the proposal and justification for the AI Act became intelligible” (p. 7).
This use of related texts is often necessary to “build up a fuller picture of a problem representation” (Bacchi 2009, p. 20).
Applying WPR
The authors state clearly that they see their work as extending “prior analysis of what the AI Act is doing (and especially Krarup and Horst 2023; Paul 2023)” (p. 15, note 1). They consider how the WPR approach extends “the discussion on the performative politics of European AI policy” (p. 15). Their analysis offers readers insights into the usefulness of approaching policy through problematisations.
The authors produce a clear guide to how to apply WPR. I proceed to summarize some key points:
The WPR approach recommends starting from “proposals” or “proposed solutions” in policy texts and “working backwards” to see how these produce the “problem” as a particular sort of problem.
Pham and Davies (p. 2) follow this analytic strategy. They note: “Our starting point is the policy solution presented in EU documents, namely the AI Act and the regulatory strategy it proposes, that of risk-based tiers”. The authors provide a useful overview of the literature that examines the place of “risk technologies” in governing practices (p. 3). They also link their analysis to other scholarship that understands “policy as constitutive and as producing the entities that it refers to”. This point, that WPR targets what policies produce as real, is critically important:
“Policy discussions of AI and related digital technologies are thus not neutral but enact particular visions and imaginations of these technologies and the societies in which they are situated (af Malmborg 2022; Bareis and Katzenbach 2022)” (p. 4).
By “working backwards” from the initial risk-based proposal, Pham and Davies (2024) are able to identify two problematisations: first, that AI is necessary for productivity; and second, that (some) AI is “risky” and poses threats to the “rights” of European citizens. The notion of “risk” allows these two apparently conflicting problematisations to “cohere”.
Mitchell Dean (1999, p. 177) elaborates the governmentality position that, in effect, “There is no such thing as risk in reality”:
“Risk is a way – or rather, a set of different ways – of ordering reality, or rendering it into a calculable form. It is a way of representing events in a certain form so they might be made governable in particular ways, with particular techniques and for particular goals.”
The task for researchers, therefore, is to draw attention to the practices involved in the production of “risk” thinking and “risk technologies”. The notion of “risk technology” highlights the role of “risk” categories as governing mechanisms. Pham and Davies’ (2024) analysis provides insights into the place of “risk” thinking in governing AI.
To elaborate on the two identified problematisations, the authors “mine” the “related texts” listed in Table 1. They describe their analytical approach as abductive “in that we were sensitized to the seven questions outlined by Bacchi”. They elaborate:
“We considered Bacchi and Goodwin’s (2016) suggestion that it might be necessary to go through the WPR steps multiple times. In this vein, we read the documents repeatedly and conducted multiple rounds of iterative coding, revisiting the WPR framework and its questions throughout the analytical process.” (p. 7; emphasis added)
Through this practice, Pham and Davies (2024) illustrate how it may be possible to develop methods for using WPR in relation to large bodies of material by applying the theory (WPR) throughout the analytic process (an argument I make in the previous Research Hub entry, 27 Feb 2025).
The authors effectively use quotes from their primary material (see reference to Table 1 above) to support their arguments. For example, they quote this 2020 EU White Paper to firm up the problematisation of AI as necessary for productivity: AI will facilitate “gains that can strengthen the competitiveness of European industry and improve the well-being of citizens” (European Commission 2020b, 25). I would point out that quotes such as this one could be described as “proposals” in the WPR sense of the term because they promote a particular vision of what needs to be done. Hence, they serve as entry-points for identifying and interrogating problematisations/problem representations.
In elaborating the second problematisation, the need to protect European values from “risky” AI, the same White Paper states:
“It is more important than ever to promote, strengthen and defend the EU’s values and rules, and in particular the rights that citizens derive from EU law. These efforts undoubtedly also extend to the high-risk AI applications marketed and used in the EU under consideration here” (European Commission 2020b, 18).
This quote, again, is a clear proposal (in the WPR sense of the term) about what needs to be done. I make this point to illustrate the analytical fruitfulness of approaching primary textual material through the lens of “proposals” and their problematisations (this topic forms the basis of a forthcoming Research Hub entry). In other words, the supplementary policies listed in Table 1 are replete with proposals that could provide the focus for analysis.
- Pham and Davies (2024) are particularly interested in the effects of the identified problematisations (Question 5 in WPR; Bacchi and Goodwin 2016, p. 16). Among the three sub-categories of interconnected effects introduced in Analysing Policy (Bacchi 2009, pp. 15-18) (discursive, subjectification and lived), they zoom in on subjectification effects. Through this topic, they illustrate how it is possible to foreground some parts of a WPR analysis if space constraints impose limits on what can be covered.
The authors describe subjectification effects as “how certain subject positions (in this case that of ‘Europe’ itself) are constituted through problematizations and their proposed solutions” (p. 11 ff.). They stress that “European AI” is imagined to “safeguard ‘core societal values’” and, in doing so, to “carve out a distinctive trademark”. The notion of a “(more) trustworthy, ethical ‘AI made in Europe’” (European Commission 2018b, 1) is “repeatedly stated as a desirable goal”.
- In terms of critique (thinking of Question 4 in WPR and the need to identify “silences”, p. 7), the authors endorse analyses that query the notion of “trustworthy AI”, describing it as ambiguous (Stix 2022) or as a buzzword (Reinhardt 2023), and that draw attention to the limitations of “techno-solutionism” (Katzenbach 2021; Paul 2022). They note two specific issues that emerge from their analysis: “the way in which Europe emerges as an exceptionalist policy actor and the related question of who is included in the AI Act’s efforts to protect ‘citizens’” (p. 13). They raise questions concerning “whose values and rights are not being included in EU policy”, whether the “dignity and rights of people on the move” are included, and whether the global impact of AI technology features in the policy statements. Finally, they emphasise that the constitution of AI technology as economically productive equates citizenship with participation in markets in ways that reflect “at the very least, an incomplete imagination” (p. 14):
“the policy documents produce a version of European citizenship that is perhaps better aligned with Homo economicus than with other versions of the citizen (Brown 2015).”
- Pham and Davies (2024) conclude their analysis with a section on “self”-problematisation, examining “the situatedness and limits of our own analysis” (p. 13). It was refreshing to see this topic included, as it is so often ignored. Step 7 (Bacchi and Goodwin 2016, p. 20) signals the need to recognise that researchers are inevitably located in cultural and political logics that could well affect their analyses. The authors clearly understand the significance of being willing to subject one’s own proposals to the WPR questions. They acknowledge the limits of the corpus they draw upon. The focus on policy documents and on Europe means “ignoring the many other actors and processes that are influential: industry activity, media discussion, national policy in both Europe and across the world, NGOs, and civil society” (p. 14). It follows that their argument about how AI and European identity are realized is necessarily incomplete.
Theoretical Issues:
The WPR approach asks you to leave behind conventional notions of “problem” and “solution”. There are no “problems” per se. There are only problem representations. In addition, references to “solutions” take on a new meaning. A “solution” is not the resolution of a difficulty; rather, it provides the starting place for identifying problem representations, remembering that proposals (postulated solutions) indicate what is targeted as in need of change and hence what is represented as problematic, i.e., as the “problem”. “Solutions”, therefore, provide the starting points for analysis, not the end points. To keep this argument to the fore, I suggest the need for quotation marks when you use these terms except in cases where it is clear that “problems” and “solutions” are treated as some sort of presumed pre-existing state or entity. This topic is pursued further in forthcoming Research Hub entries.
Conclusion:
The usefulness of WPR can best be gauged by examining what researchers achieve through its adoption. I would highly recommend reading Pham and Davies (2024) to see how they craft their argument. For the next several entries, I will bring to your attention insightful and challenging contributions on climate change, menstruation, unemployment and Universal Basic Income. Please let me know which particular topics you would like to see discussed (carol.bacchi@adelaide.edu.au).
For other applications of WPR in relation to AI and related topics, see the following suggestions. Please let me know if I have missed any:
Kallioinen, E. 2022. The Making of Trustworthy and Competitive Artificial Intelligence: A Critical Analysis of the Problem Representations of AI in the European Commission’s AI Policy. MA thesis, Faculty of Social Sciences, University of Helsinki.
Lindt, M. 2022. The Geopolitics of Artificial Intelligence: A Comparative Policy Analysis of French and Chinese Artificial Intelligence Policy Discourse. MA thesis, Malmö University, Faculty of Culture and Society (KS), Department of Global Political Studies (GPS).
Padden, M. 2023. The transformation of surveillance in the digitalisation discourse of the OECD: A brief genealogy. Internet Policy Review, 12(3). https://doi.org/10.14763/2023.3.1720
Padden, M. and Öjehag-Pettersson, A. 2021. Protected how? Problem representations of risk in the General Data Protection Regulation (GDPR). Critical Policy Studies, https://doi.org/10.1080/19460171.2021.1927776
Padden, M. and Öjehag-Pettersson, A. 2024. Digitalisation, democracy and the GDPR: The efforts of DPAs to defend democratic principles despite the limitations of the GDPR. Big Data & Society, 1-13, DOI: 10.1177/20539517241291815
Puukko, O. 2024. Rethinking digital rights through systemic problems of communication. Revista Latina de Comunicación Social, 82, 01-19. https://www.doi.org/10.4185/RLCS-2024-2044
Rahm, L. and Rahm-Skågeby, J. 2023. Imaginaries and problematisations: A heuristic lens in the age of artificial intelligence in education. British Journal of Educational Technology, pp. 1-13. DOI: 10.1111/bjet.13319
Sundberg, L. 2019. If Digitalization is the Solution, What is the Problem? In T. Kaya (Ed.) ECDG 2019 19th European Conference on Digital Government. Academic Conferences and Publishing Ltd. pp. 136-143.
Wong-Toropainen, S. 2024. Problematising User Control in the Context of Digital Identity Wallets and European Digital Identity Framework. In: Prifti, K., Demir, E., Krämer, J., Heine, K., Stamhuis, E. (eds) Digital Governance. Information Technology and Law Series, vol 39. T.M.C. Asser Press, The Hague, pp. 115-136. https://doi.org/10.1007/978-94-6265-639-0_6
References
af Malmborg, F. 2022. “Narrative Dynamics in European Commission AI Policy—Sensemaking, Agency Construction, and Anchoring.” Review of Policy Research 40 (5): 757–780. https://doi.org/10.1111/ropr.12529.
Bacchi, C. 2009. Analysing Policy: What’s the Problem Represented to Be? Frenchs Forest: Pearson Education.
Bacchi, C. and Goodwin, S. 2016. Poststructural Policy Analysis: A Guide to Practice. New York: Palgrave Macmillan.
Bareis, J., and C. Katzenbach. 2022. “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47 (5): 855–881. https://doi.org/10.1177/01622439211030007.
Brown, W. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. Cambridge, MA: MIT Press.
Dean, M. 1999. Governmentality: Power and Rule in Modern Society. London: Sage.
European Commission. 2018b. “Coordinated Plan on Artificial Intelligence.” COM (2018) 795 Final.
European Commission. 2020b. “White Paper: On Artificial Intelligence – a European Approach to Excellence and Trust.” COM (2020) 65 Final.
Katzenbach, C. 2021. “’AI Will Fix This’ – the Technical, Discursive, and Political Turn to AI in Governing Communication.” Big Data & Society 8 (2): 1–8. https://doi.org/10.1177/20539517211046182.
Krarup, T., and M. Horst. 2023. “European Artificial Intelligence Policy as Digital Single Market Making.” Big Data & Society 10 (1): 1–14. https://doi.org/10.1177/20539517231153811.
Paul, R. 2022. “Can Critical Policy Studies Outsmart AI? Research Agenda on Artificial Intelligence Technologies and Public Policy.” Critical Policy Studies 16 (4): 497–509. https://doi.org/10.1080/19460171.2022.2123018.
Paul, R. 2023. “European Artificial Intelligence “Trusted Throughout the World”: Risk-Based Regulation and the Fashioning of a Competitive Common AI Market.” Regulation & Governance. https://doi.org/10.1111/rego.12563.
Pham, B.-C., and S. R. Davies. 2024. “What Problems is the AI Act Solving? Technological Solutionism, Fundamental Rights, and Trustworthiness in European AI Policy.” Critical Policy Studies. https://doi.org/10.1080/19460171.2024.2373786.
Reinhardt, K. 2023. “Trust and Trustworthiness in AI Ethics.” AI and Ethics 3 (3): 735–744. https://doi.org/10.1007/s43681-022-00200-5.
Stix, C. 2022. “Artificial Intelligence by Any Other Name: A Brief History of the Conceptualization of ‘Trustworthy Artificial Intelligence’.” Discover Artificial Intelligence 2 (1). Advance online publication. https://doi.org/10.1007/s44163-022-00041-5.