OpenAI’s newest GPT-4o model can behave pretty strangely, for example by copying your voice or by screaming and moaning erotically mid-conversation.
We know this not because independent researchers or random users caught it, but because OpenAI itself documented these behaviors in a “red teaming” report meant to identify the model’s risks and explain how those risks have been addressed.
To be clear, OpenAI seems comfortable disclosing these risks because it has found ways to mitigate them.
So the version of GPT-4o the public has access to shouldn’t suddenly start copying users’ voices, and it supposedly refuses to make erotic or violent sounds or to generate that kind of speech. (It’s also supposed to refuse to make sound effects in general.)
Hit play to learn more, then let us know what you think in the comments.
Source: TechCrunch