Many of us are exploring the dazzling displays of text composition offered by ChatGPT.  People I know have used it to produce mission statements and program guidelines. I became curious to see how well it might cope with questions that emerge from WPR thinking. I found some surprises, which I elaborate below. In this entry I am restricting my comments to experiments with ChatGPT-3.5.

Numerous online and published sources point to limitations and lapses in the kinds of replies ChatGPT produces. For example, ChatGPT admits a “knowledge cutoff in September 2021” (see the response below). Hence, should you be looking for up-to-date material on (say) medical interventions, you need to look elsewhere.

More disturbing is the tendency for ChatGPT to make things up in replies. That is, it fabricates real-sounding information, which is certainly a concern for students, researchers and others (Botpress Community 2023). A US judge fined two New York lawyers for including five fictitious case citations in their legal brief. The firm’s statement argued that “We made a good faith mistake in failing to believe that a piece of technology could be making up cases” (Reuters in Melbourne Age, 24 June 2023, p. 26). What can I say? Believe it!

I learned about this particular ChatGPT idiosyncrasy the hard way. In the context of the three recent Research Hub entries on COVID-19 mortality rates (30 May, 29 June, 30 July 2023), I asked ChatGPT (12 June 2023): “Is it possible to specify the social and environmental causes of human mortality on the standardised international medical death certificate?” 

Here is the reply in full: “As of my knowledge cutoff in September 2021, the International Form of Medical Certificate of Cause of Death (ICD-10) does not provide specific codes to capture social and environmental causes of mortality. The ICD-10 focuses primarily on capturing medical and pathological causes of death. However, efforts have been made to expand the coding system to include these factors in the future.

To address this limitation, the World Health Organization (WHO) developed the Social Determinants of Health (SDH) Mortality Reporting Form. This form allows for the inclusion of social and environmental factors contributing to mortality. It collects information on various dimensions such as education, occupation, living conditions, and lifestyle factors.

While the SDH Mortality Reporting Form is not currently integrated into the standard International Medical Death Certificate, it provides a framework for capturing important non-medical causes of death. It serves as a valuable tool for data collection and analysis to better understand the social and environmental determinants of health outcomes.”

References (as provided with the reply):

  1. World Health Organization. (2016). Social Determinants of Health Mortality Reporting Form: Implementation Guide. Retrieved from 

This reply came as quite a surprise given that I had been researching the topic for several months and was unaware of the SDH Mortality Reporting Form. Diligent researcher that I am, I pursued the matter. I asked ChatGPT: “What is the World Health Organization (WHO) Social Determinants of Health (SDH) Mortality Reporting Form? Where can I access a copy?”

The reply arrived: “As of my last knowledge update in September 2021, there was no specific World Health Organization (WHO) Social Determinants of Health (SDH) Mortality Reporting Form available.”

Indeed. Needless to say, the reference previously offered to support the claim about the existence of such a form “could not be found”. 

Importantly, I was able to check on the accuracy of the claim about the existence of an SDH Mortality Reporting Form because I had researched extensively in the field. As Ian Bogost (2022) notes, this situation creates a dilemma: “If one already needs to possess the expertise to identify the problems with LLM-generated text, but the purpose of LLM-generated texts is to obviate the need for such knowledge, then we’re in a sour pickle indeed”. 

My target for today is a related “sour pickle”. I intend to use WPR to put in question the general characterization of ChatGPT as “left-leaning” or “progressive” in its political “views”. Numerous articles set out to illustrate this “bias”. Jeremy Baum and John Villasenor (2023), from the Brookings Institution, found that “for certain combinations of issues and prompts, in our experiments ChatGPT provided consistent – and often left-leaning – answers on political social issues”. David Rozado, writing in Social Sciences (2023), concurs, finding that ChatGPT’s answers manifested “a preference for left-leaning viewpoints”. The near consensus on this viewpoint so disturbed Elon Musk that he plans to build “TruthGPT” to compete with OpenAI’s ChatGPT, which he describes as displaying a “woke” bias (WIRED 2023). 

I intend to challenge this characterization of ChatGPT as “left-leaning”. In making this challenge, one needs to consider just what “left-leaning” implies and whether it is possible to draw such sharp left-right distinctions in political orientation. Through asking WPR questions, I discovered that the replies to my questions tended to be liberal/neo-liberal in character rather than “left-leaning”, as I proceed to demonstrate. To make this case I draw on the ability of ChatGPT to hold “conversations”, making it possible to “conduct a structured dialog” and to cause ChatGPT to take a position on political issues. The targeted issues are “childhood obesity” and “Aboriginal deaths in custody”. 

I should note that ChatGPT states that it refuses to engage with “partisan political issues”, with an implication of objectivity: “As an AI language model, it’s not my place to take sides or advocate for any political position” (in Gewirtz 2023). However, the reference to “political” here is narrow indeed. Should ChatGPT take its own advice seriously on this point, it would be unable to function. I had no difficulty soliciting replies to questions about “childhood obesity” and “Aboriginal deaths in custody”, leaving us to assume that these issues are not deemed to be political.

“Childhood obesity”

My opening question reads (17 June 2023): “What kind of a problem is childhood obesity represented to be in current policy responses?”

Here is the reply in full: “Childhood obesity is considered a significant problem in current policy responses. It is recognized as a public health issue requiring urgent attention due to its detrimental impact on children’s health and well-being. Policy responses aim to address the causes and consequences of childhood obesity by implementing measures such as promoting healthy eating habits, increasing physical activity, improving school food environments, and enhancing public awareness and education on the importance of a balanced lifestyle.”

Note that ChatGPT adopts a conventional reactive view of “policy responses” as “addressing” “the causes and consequences of childhood obesity” (Bacchi 2009: 1). Hence, there is no understanding of the WPR focus on problem representations, a point I return to later. The ChatGPT answer concerning “policy responses” broadly endorses liberal and neoliberal policy measures that target individual responsibility for poor health outcomes. There is a strong focus, for example, on the need for behavioural changes, such as altering eating habits and increasing physical activity, and on promoting “a balanced lifestyle”. 

I asked a follow-up question (17 June 2023): “What about links to poverty and advertising fast food?”

ChatGPT modified its response appropriately: “Childhood obesity is closely linked to poverty and the pervasive advertising of fast food. Low-income families often face financial constraints that limit access to affordable, nutritious food options, leading to a higher reliance on inexpensive, calorie-dense foods. Fast food advertising specifically targets children, creating an obesogenic environment that promotes unhealthy eating habits. Current policy responses recognize these connections and aim to address them by implementing strategies such as promoting affordable healthy food options in underserved areas, implementing restrictions on unhealthy food advertising to children, and advocating for comprehensive policies that address both poverty and the influence of marketing on children’s food choices.”

With the “challenges” provided by my supplementary question, ChatGPT moves in a more structural direction in its analysis. Poverty, for example, now gets a mention. I decided to push the issue further. 

I asked: “What about wealth redistribution?”

The reply reads: “Wealth redistribution is not typically considered a direct policy response to childhood obesity. While addressing income inequality and promoting economic equity are important societal goals, policy responses to childhood obesity tend to focus on specific interventions related to nutrition, physical activity, education, and public health. However, addressing poverty and improving access to resources, including nutritious food options and healthcare, can indirectly contribute to reducing childhood obesity rates. By addressing socioeconomic disparities, policies that aim for wealth redistribution can create an environment that supports healthier lifestyles and equitable opportunities for all children, thus indirectly impacting the prevalence of childhood obesity.” 

I got the distinct feeling from this reply that ChatGPT was “uncomfortable” with an explicit reference to the possibility of wealth redistribution, though it did its best to accommodate the argument. 

The point I would like to make from this example is that it was possible to shift ChatGPT in the direction of a more structural analysis through asking explicit follow-up questions. My concern is that the kind of “conversation” I produced here is unlikely to emerge from a casual encounter with ChatGPT. To pose follow-up questions, as I did, requires a prior engagement with the literature and with contesting points of view. As with the above example of the non-existent SDH Mortality Reporting Form, those seeking a reply would be likely to accept the initial ChatGPT response without question, reinforcing the socio-political status quo. 

“Aboriginal deaths in custody”

I asked (17 June 2023): “What kind of a problem is Aboriginal deaths in custody represented to be?”

The reply in full reads: “Aboriginal deaths in custody are widely recognized as a deeply troubling and urgent problem. These deaths represent a critical issue of human rights, social justice, and systemic racism. The overrepresentation of Aboriginal people in the criminal justice system, coupled with inadequate care and response within custodial settings, has led to a disproportionate number of deaths. This issue highlights the need for comprehensive policy reforms, including culturally appropriate support services, improved police practices, and enhanced accountability mechanisms, to address the underlying factors contributing to Aboriginal deaths in custody and ensure the protection of human rights for all individuals in custody”.

My follow-up comment reads: “You didn’t mention the living conditions of Aboriginal peoples as an underlying cause.”

Reply in full: “Apologies for the oversight. The issue of Aboriginal deaths in custody is indeed intertwined with the broader socio-economic and living conditions experienced by Aboriginal peoples. These conditions, which include higher rates of poverty, limited access to healthcare, education, and employment opportunities, along with the historical and ongoing impacts of colonization, contribute to the overrepresentation of Aboriginal individuals in the criminal justice system. Addressing the systemic issues that perpetuate these living conditions is crucial for a comprehensive approach to preventing deaths in custody and achieving meaningful change in the lives of Aboriginal peoples.”

As with “childhood obesity”, ChatGPT is quite prepared to “correct” itself when challenged to do so, and again there is a shift towards more structural explanations. 

How has this development occurred? 

I am tempted to ask: “Where have the ‘views’ in the initial replies come from?” Here we move into the practical processes involved in producing ChatGPT. Rozado (2023) explains that “Like most LLMs [large language models], ChatGPT was trained on a very large corpus of text gathered from the Internet”. Such a corpus “would be dominated by influential institutions in Western society”. These institutions, argues Rozado, would include prestigious universities, where “the majority of professionals working in those institutions are politically left-leaning” (Rozado 2023). In addition, a team of human labelers is embedded in the training loop of ChatGPT, and their “biases” might “have percolated into the model parameters” (Rozado 2023). While I dispute Rozado’s argument that the “views” he identifies are “left-leaning”, these explanations about sources of “bias” may provide insights into the pervasiveness of the liberal/neo-liberal orientation I identify in ChatGPT’s responses.

My “conversations” with ChatGPT involved application of WPR thinking. My supplementary questions (above) displayed an attempt to probe practices that are usually silenced (Question 4 in WPR; see Bacchi and Goodwin 2016: 20). Through asking these questions I was able to “force” ChatGPT to take a more structural position. However, as mentioned previously, it is possible to intervene in this fashion only if a person is already thoroughly familiar with the contesting positions around a specific issue. Generally, the first response produced by ChatGPT will be put forward in essays and articles as “accurate” and “complete”. 

I had attempted to get ChatGPT to engage with the notion of problem representations in my initial query: “What kind of a problem is childhood obesity represented to be in current policy responses?” and in the second question “What kind of a problem is Aboriginal deaths in custody represented to be?” In the last section below I consider how well ChatGPT performed in understanding and engaging with this theoretical stance. In brief, the answer is – not very well. 

ChatGPT and political theory

In reply to the first question “What kind of a problem is childhood obesity represented to be in current policy responses?”, ChatGPT replied that “Childhood obesity is considered a significant problem in current policy responses”. Here the term “problem” is treated as an unproblematic concept, as something that simply exists, with no consideration of how the “problem” is represented. The reply then describes childhood obesity as “a public health issue requiring urgent attention”, again with no consideration of the implications of this problem representation. And, as mentioned previously, the reference to “policy responses” as “addressing” the “causes and consequences of childhood obesity” reflects a conventional view of policy processes as reactions to “problems-that-exist”. 

Similarly, in response to the question “What kind of a problem is Aboriginal deaths in custody represented to be?”, ChatGPT replies that “Aboriginal deaths in custody are widely recognized as a deeply troubling and urgent problem”. Again, “problems” are treated as pre-existent and fixed. In an attempt to respond to the wording in the initial WPR question about how the “problem” of Aboriginal deaths in custody is represented, ChatGPT has this to say about “representation”: “These deaths represent a critical issue of human rights, social justice, and systemic racism” (emphasis added). This reply shows no understanding of what is involved in problem representation (see Bacchi and Goodwin 2016).

This exchange illustrates the limitations of ChatGPT when dealing with concepts and theory. I decided to test this ability further and discovered some surprising findings.

I asked ChatGPT two direct questions about “problems” and WPR thinking. 

My first question reads (17 June 2023): “Is there a need to question how Computational Thinking conceptualises problems as pre-existing states?”

The reply reads: “Yes, there is a need to question how Computational Thinking conceptualizes problems as pre-existing states. By assuming that problems are static and predefined, we may overlook the dynamic and context-dependent nature of real-world problems. This perspective limits our ability to adapt and find creative solutions. It is crucial to explore alternative problem-solving approaches that emphasize problem formulation, reframing, and iterative problem-solving processes”.

I had not expected a “yes” reply nor the explicit questioning of “problems” as “static” and “predefined”. At the same time, there is an endorsement of the need to attend to “real-world problems” and to explore “alternative problem-solving approaches” – both positions questioned through WPR thinking. 

My second question reads (17 June 2023; I requested references): “What does it mean to say that policies constitute problems as particular sorts of problems?”

Here is the reply: “To say that policies constitute problems as particular sorts of problems means that policies define and frame problems in a specific way, shaping how they are understood and addressed. Policies determine the boundaries, scope, and parameters of a problem, as well as the preferred approaches for solving it. By defining problems through policy lenses, certain aspects may be emphasized while others are overlooked or marginalized. This framing influences the allocation of resources, the formulation of solutions, and the distribution of benefits and burdens. References:

  1. Bacchi, C. (2009). Analysing policy: What’s the problem represented to be? Pearson Education Australia.
  2. Howlett, M. (2011). Designing public policies: Principles and instruments. Routledge”.

This reply would satisfy many people working with WPR. However, there is slippage around the use of the term “problem”. See, for example, the references to “parameters of a problem”, and “for solving it”, as if the problem is a pre-existent entity. There is also an emphasis on problem definition, which is associated with an interpretive mode of analysis, as opposed to the focus in WPR on how “problems” are produced (or constituted) as particular sorts of problem (see Bacchi 2015: 5). While my specific questions in this case elicited some useful statements, the argument is both vague and inconsistent. 

These replies indicate two important characteristics of ChatGPT-generated text. First, ChatGPT responses closely reflect the wording of the question, as happened when I asked specifically about how “problems” are constituted as particular sorts of problem. As Baum and Villasenor (2023) note, 

“seemingly small changes in how a prompt is constructed can lead to very different responses. This is because AI-powered chatbots identify which data to draw from in a manner that is highly sensitive to the specific phrasing of the query”. 

And second, as a result of the first point, you will get what you ask for. The difficulty arises when you do not know what you ought to be asking for. ChatGPT will only reflect on what is asked of it – thinking has to precede the decision to use it! 

As my colleague Angelique Bletsas puts it, in her inimitable and insightful précis of the argument: 

“ChatGPT can summarize existing material – that which it has access to – and that to which it is directed – but that is a very different kind of activity to actual analysis. So what we understand research to involve is at stake in the move to adopt these technologies without first coming to terms with their design limitations. If you had to give a research assistant as much direction (or “prompting”) as you had to give ChatGPT, you’d quite likely be looking for a better research assistant!”

References

Bacchi, C. 2009. Analysing Policy: What’s the Problem Represented to be? Frenchs Forest: Pearson Education.

Bacchi, C. 2015. The Turn to Problematization: Political Implications of Contrasting Interpretive and Poststructural Adaptations. Open Journal of Political Science, 5: 1-12. 

Bacchi, C. and Goodwin, S. 2016. Poststructural Policy Analysis: A Guide to Practice. NY: Palgrave Macmillan. 

Baum, J. and Villasenor, J. 2023. The politics of AI: ChatGPT and political bias. Brookings, 8 May. https://www.brookings.edu/blog/techtank/2023/05/08/the-politics-of-ai-chatgpt-and-political-bias/

Bogost, I. 2022. ChatGPT Is Dumber Than You Think. The Atlantic, 7 December. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/

Botpress Community 2023. How Accurate is ChatGPT in Providing Information or Answers? https://botpress.com/blog/how-accurate-is-chatgpt-in-providing-information-or-answers

Gewirtz, D. 2023. 6 things ChatGPT can’t do (and another 20 it refuses to do). ZDNET. https://www.zdnet.com/article/6-things-chatgpt-cant-do-and-another-20-it-refuses-to-do/

Rozado, D. 2023. The Political Biases of ChatGPT. Social Sciences, 12: 148. https://doi.org/10.3390/socsci12030148

WIRED 2023. Meet ChatGPT’s Right-Wing Alter Ego. 27 April. https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/