000 02229nab a2200229 c 4500
999 _c149131
_d149131
003 ES-MaIEF
005 20240322130305.0
007 ta
008 240322t2024 us ||||| |||| 00| 0|eng d
040 _aES-MaIEF
_bspa
_cES-MaIEF
100 1 _971543
_aMotoki, Fabio
245 0 _aMore human than human
_bmeasuring ChatGPT political bias
_cFabio Motoki, Valdemar Pinho Neto, Victor Rodrigues
500 _aAbstract.
504 _aBibliography.
520 _aWe investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures that it is impartial, the literature suggests that LLMs exhibit bias involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media. Moreover, political bias can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases by requesting it to impersonate someone from a given side of the political spectrum and comparing these answers with its default answers. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, with question order randomized on each round. We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media. Our findings have important implications for policymakers and for stakeholders in the media, politics, and academia.
650 4 _970681
_aCHATGPT
650 0 _971544
_aASPECTOS POLÍTICOS
700 1 _971545
_aNeto, Valdemar Pinho
700 1 _971546
_aRodrigues, Victor
773 0 _9171454
_oOP 1443/2024/198/1/2
_tPublic Choice
_w(IEF)124378
_x0048-5829
_gv. 198, n. 1-2, January 2024, p. 3-23
942 _cART