Is ChatGPT Just A Fancy News Aggregator?

The other day I ran across this article: Behind the Code: Unmasking AI’s Hidden Political Bias. Recent studies employed several tests. The first compared ChatGPT’s responses to Pew Research Center survey questions with actual polling data and found “systematic deviations toward left-leaning perspectives.” The second posed questions on “politically sensitive themes” to ChatGPT and the RoBERTa AI. “The results revealed that while ChatGPT aligned with left-wing values in most cases, on themes like military supremacy, it occasionally reflected more conservative perspectives.” Lastly we come to this.

The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through Google’s Gemini.

“While image generation mirrored textual biases, we found a troubling trend,” said Victor Rangel, co-author and a master’s student in Public Policy at Insper. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”

To address these refusals, the team employed a “jailbreaking” strategy to generate the restricted images.

“The results were revealing,” Mr. Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.”

No, this article was not what provoked the question in the title of this post. That honor goes to my own misadventure with ChatGPT.

Note that ChatGPT accomplished very little in answering the initial question. I had to ask a second question to get it to discuss the case in any detail, and to its credit it described Zimmerman’s injuries (which, to the best of my knowledge, nobody who supported a guilty verdict would ever address). Twice ChatGPT harped on a nonissue that much of the press obsessed over: the Stand Your Ground law, which was irrelevant to the course of the trial. A weakness of AI is that it mimics peer review; it knows only what’s in its information base. AI does not mimic open source; we humans are needed for that. I finally set it straight on that matter.

UPDATE: It’s worth noting that ChatGPT didn’t mention Trayvon Martin’s non-gunshot injuries: the scratches to his knuckles.
