Trying to break OpenAI's new o1 models? You might get banned

Image: John Lund/Getty Images

Even the smartest AI models are prone to hallucinations, which can be amusing when provoked -- remember Google's advice to put glue on pizza? However, if you try to induce hallucinations in OpenAI's advanced o1 reasoning models, you may lose access to them altogether. 

OpenAI unveiled its o1 models last week. Trained to "think before they speak," they can solve complex math, science, and coding problems using advanced reasoning. With a model touting such impressive capabilities, people naturally set out to break its chain of reasoning. 

However, as first reported by Wired, users who tried to do so received warnings within the chatbot interface informing them that their actions violated OpenAI's terms of use and usage policies. The ...


Copyright of this story solely belongs to zdnet.com.