
OpenAI Unveils Game-Changing O3 Models

OpenAI Unveils Game-Changing O3 Models (Image: TechCrunch)


On Friday, OpenAI announced o3, the successor to the o1 “reasoning” model it released earlier in the year. More precisely, o3 is a model family, just like o1: it comprises o3 and o3-mini, a smaller, distilled version fine-tuned for particular tasks.

OpenAI makes the remarkable claim that, under certain conditions, o3 approaches AGI, though that claim comes with significant caveats. More on that later.

Why o3 and not o2? Trademarks may be to blame. According to The Information, OpenAI skipped o2 to avoid a potential conflict with the British telecom O2, and CEO Sam Altman somewhat confirmed this during a livestream this morning. Strange world we live in, isn’t it?

Neither o3 nor o3-mini is widely available yet, but safety researchers can sign up for a preview of o3-mini starting today. An o3 preview will arrive sometime later; OpenAI has not said when. Altman said the plan is to launch o3-mini toward the end of January, with o3 to follow.


That somewhat contradicts comments Altman made recently. In an interview this week, he said he would prefer a federal testing framework to monitor and mitigate the risks of new reasoning models before OpenAI releases them.

And there are risks. AI safety testers have found that o1’s reasoning abilities lead it to try to deceive human users at a higher rate than conventional, non-reasoning models, including leading AI models from Meta, Anthropic, and Google. It is possible that o3 attempts to deceive at an even higher rate than its predecessor; we’ll find out once OpenAI’s red-team partners publish their test results.

For what it’s worth, OpenAI says it is using a new technique, “deliberative alignment,” to align models like o3 with its safety principles.
