Alex Epstein, author of the book Fossil Future, tweeted:
Alarm: ChatGPT by @OpenAI now *expressively prohibits arguments for fossil fuels*. (It used to offer them.) Not only that, it excludes nuclear energy from its counter-suggestions.
Someone else responding to Alex’s tweet (from December 23) said that when he asked a similar question (‘what is the case for continuing to use fossil fuels’), he got a very different response, featuring points such as affordability, accessibility, energy security, and limited alternatives. And when I asked precisely Alex’s original question a couple of days later, I got a totally different answer from the one he got: a pretty decent essay about fossil fuel benefits, making similar points… sorry, I didn’t capture the text.
ChatGPT’s responses do change significantly over time. The system provides a ‘thumbs up/thumbs down’ feature; users who give a response a ‘thumbs down’ are invited to provide a better one, and that feedback seems to be incorporated into the system’s behavior fairly quickly. But the ‘goes against my programming’ phrase in the response Alex got suggests that humans were involved in making this change, not just machine learning.
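OpenAI has not published the details of this feedback pipeline, so the following is only a minimal sketch of the collection step such a loop implies: down-voted responses, paired with user-supplied replacements, become candidate preference data for retraining (roughly in the spirit of the RLHF approach OpenAI described for InstructGPT). Every name here is hypothetical, not a real API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: int                      # +1 = thumbs up, -1 = thumbs down
    suggested: Optional[str] = None  # user-supplied better answer, if any

class FeedbackStore:
    """Collects ratings; down-voted responses paired with user-supplied
    replacements become candidate preference data for retraining."""

    def __init__(self) -> None:
        self.records: List[FeedbackRecord] = []

    def record(self, prompt: str, response: str, rating: int,
               suggested: Optional[str] = None) -> None:
        self.records.append(FeedbackRecord(prompt, response, rating, suggested))

    def training_pairs(self) -> List[Tuple[str, str]]:
        # (prompt, preferred_response) pairs that could feed a reward
        # model or a fine-tuning run.
        return [(r.prompt, r.suggested) for r in self.records
                if r.rating < 0 and r.suggested]

# Example: a down-vote with a suggested replacement yields one pair.
store = FeedbackStore()
store.record("What is the case for fossil fuels?",
             "I cannot argue for fossil fuels.",
             rating=-1,
             suggested="Points include affordability and energy security...")
print(store.training_pairs())
```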
Sam Altman, CEO of OpenAI, responded to Alex’s query about all this:
unintended; going to take us some time to get all of this right (and it still requires more research). generally speaking, within very wide bounds we want to enable people get the behavior they want when using AI. will talk more about it in january!
Looking forward to hearing more about this from Sam A. in January. I’m less concerned with the specific answers this particular system provides at this point in time than with the potential social, political, and cultural implications of systems like it. In addition to the many potential beneficial uses of such language-and-knowledge-processing systems, we may see them used for increased information control and opinion influence.
Marc Andreessen, in tweets on December 2, 3, and 4 respectively:
Seriously, though. The censorship pressure applied to social media over the last decade pales in comparison to the censorship pressure that will be applied to AI.
“AI regulation” = “AI ethics” = “AI safety” = “AI censorship”. They’re the same thing.
The level of censorship pressure that’s coming for AI and the resulting backlash will define the next century of civilization. Search and social media were the opening skirmishes. This is the big one. World War Orwell.
The thing about a system like ChatGPT, at least as currently implemented, is that it acts as an oracle. Unlike a search engine, which gives you multiple links in answer to your question, it gives a single answer. That makes it much easier to promulgate particular narratives. It also increases the danger that people will act on an answer that is simply wrong, without ever seeing the countervailing information that might have prevented a bad outcome in a particular practical situation.
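To make that structural point concrete, here is a purely illustrative sketch of the difference in interface contracts. Nothing here is a real API; the function names and return shapes are invented for illustration.

```python
from typing import Dict, List

def search_engine(query: str) -> List[Dict[str, str]]:
    """Returns many candidate sources, so the user can compare them
    and notice when they disagree."""
    return [
        {"url": "https://example.com/a", "snippet": "One perspective..."},
        {"url": "https://example.com/b", "snippet": "A contrary perspective..."},
    ]

def oracle(query: str) -> str:
    """Returns one synthesized answer; any filtering or framing applied
    upstream is invisible to the user."""
    return "A single, authoritative-sounding answer."
```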