ChatGPT Makes Some Decisions in Ways Similar to People, Although There Are Interesting Differences

Let me first preface this post by saying two things: 1) the state of research in this area is very young (e.g., most citations are from 2022 and 2023), and 2) my summary risks oversimplifying things and missing some nuance.

The students I have at Cornell Tech are really sharp and energized. In one class, an interesting question was raised about whether AI could be used as part of the testing process, such as to A/B pre-test interventions (a rough sketch of that idea is below). To get a better understanding of this space, I did a little research into the extent to which AI decision making resembles human decision making. So today I shared findings from a working paper that I recently read (Chen et al., 2023). The paper covers 18 common human biases relevant to operational decision making (e.g., judgments regarding risk, evaluation of outcomes, and heuristics in decision making, such as System 1 versus System 2 thinking).
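
To make the class question a bit more concrete, here is a minimal sketch of what "A/B pre-testing with an AI" could look like: ask a model to role-play a respondent and react to two versions of a message. Everything in it, including the prompts, the shopper persona, and the model name, is an assumption for illustration; none of it comes from the paper.

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
# The message variants, persona, and model name are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()

variants = {
    "A": "Sign up today and save 20% on your first order.",
    "B": "Don't miss out: your 20% first-order discount expires tonight.",
}

def simulated_reaction(message: str) -> str:
    """Ask the model to role-play a shopper and react to one message variant."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model would work
        messages=[
            {"role": "system",
             "content": "You are role-playing a typical online shopper. "
                        "In one sentence, say whether this message would make you click, and why."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

for label, text in variants.items():
    print(label, "->", simulated_reaction(text))
```

Whether such simulated respondents are a reasonable stand-in for people depends on exactly the question above: how closely their judgments, and their biases, track ours.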

Here’s a summary of differences between ChatGPT and humans:

  • Judgments Regarding Risk – ChatGPT seems to mostly maximize expected payoffs, with risk aversion demonstrated only when expected payoffs are equal (see the worked example after this list). It does not appear to understand ambiguity. Also, ChatGPT exhibits high overconfidence, perhaps due to its large knowledge base.
  • Evaluation of Outcomes – ChatGPT is sensitive to framing, reference points, and the salience of information. It shows no sensitivity to sunk costs or the endowment effect (e.g., it may not have a concept of physical or psychological ownership).
  • Heuristics in Decision Making – More research is needed, although aspects such as confirmation bias are present. Additionally, ChatGPT can generate both classic System 1 responses (the incorrect answers humans typically give when relying on fast, automatic thinking) and System 2 responses (the correct answers that typically require slower, more reflective thinking).
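
To make the first bullet concrete, here is a small worked example of the expected-payoff pattern the paper describes. The dollar amounts are hypothetical, chosen only to illustrate the claim, not taken from the paper.

```python
# Hypothetical lottery choices illustrating the expected-payoff pattern described above.
# The payoffs are made up for illustration and are not from Chen et al. (2023).

def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

sure_thing = [(1.0, 40)]               # $40 for certain
gamble     = [(0.5, 100), (0.5, 0)]    # 50% chance of $100, otherwise $0

print(expected_value(sure_thing))      # 40.0
print(expected_value(gamble))          # 50.0 -> higher expected payoff

# With these payoffs, the paper's pattern predicts ChatGPT takes the gamble (higher EV),
# even though many risk-averse humans would take the sure $40. Only if the sure payoff
# is raised to $50, so that the expected values tie, does risk aversion reportedly kick in.
```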

While the reasons for these response patterns are not fully understood, it seems as though ChatGPT is extremely logical when it comes to things like maximizing expected value. However, perhaps because it tries to be conversational and responds to salient information provided by the user, it can be overly sensitive to framing effects (e.g., the same choice described in terms of gains versus losses can elicit different answers).

There are surely a lot of things to think about, opportunities to explore, and research to pursue.

Reference: Chen, Y., Andiappan, M., Jenkin, T., & Ovchinnikov, A. (2023, March 6). A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do? Available at SSRN: https://ssrn.com/abstract=4380365 or http://dx.doi.org/10.2139/ssrn.4380365