OpenAI introduces the new GPT-4o model

OpenAI has announced its latest artificial intelligence large language model, which it says will be easier and more intuitive to use. With GPT-4o, ChatGPT becomes a more capable assistant, and it is free for all users.

The new model, called GPT-4o, is an update to the previous GPT-4 model that OpenAI introduced just over a year ago. GPT-4o will be available to free users, which means that everyone can access OpenAI's most advanced AI model.

Free and faster model for all ChatGPT users

GPT-4o will enable ChatGPT to interact using text, voice and images. This means it will be able to view and comment on screenshots, photos, documents or graphics uploaded by users. OpenAI Chief Technology Officer Mira Murati said ChatGPT will now also have memory capabilities, meaning it can learn from previous conversations with users, and that it can translate in real time.

During the event at the company's San Francisco headquarters, Murati said: ‘For the first time, we're really taking a big step forward when it comes to ease of use. Interaction is becoming much more natural and much, much easier.’

For example, users can ask the GPT-4o-powered ChatGPT a question and interrupt it while it responds. OpenAI says the model offers ‘real-time’ responsiveness and can even pick up on the emotion in the user's voice and respond in ‘different emotional styles’. GPT-4o also enhances ChatGPT's vision capabilities. Given a photo or a desktop screenshot, ChatGPT can now quickly answer related questions such as ‘What does this software code say?’ or ‘What brand of shirt is this person wearing?’.
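GPT-4o is also exposed to developers through OpenAI's API, not just the ChatGPT app covered in this article. As a minimal sketch of the kind of image question-answering described above, assuming the OpenAI Python SDK (openai >= 1.0) and a hypothetical image URL, a request might look like this:

# Minimal sketch of a GPT-4o vision request via the OpenAI Python SDK.
# The image URL below is hypothetical; any publicly reachable URL (or a base64 data URL) works.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What brand of shirt is this person wearing?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)

The same content list can mix several text and image parts in a single user message, which is how a question about a screenshot or document would be phrased.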

GPT-4o will be available starting today on ChatGPT's free plan and to OpenAI's paid ChatGPT Plus and Team subscribers, who get ‘5x higher’ message limits; Enterprise options are ‘coming soon’. OpenAI says it will make the enhanced voice experience powered by GPT-4o available to Plus users within the next month.
