OpenAI’s secret model, codenamed ‘Strawberry’, has been officially introduced as “OpenAI o1”. According to the company, OpenAI o1 is a model that can genuinely reason and think before giving an answer.
Reasoning is one of the most fundamental tools the human mind has for grasping reality. In philosophical terms, reasoning is the process of systematically analyzing and making sense of our thoughts, perceptions, and experiences. It requires critical thinking to draw the line between truth and illusion. Yet the real power of reasoning lies not only in drawing conclusions but also in remaining open to the unknown and questioning deeply. That ability pushes the boundaries of consciousness and connects us with the layered meanings of existence. So, can artificial intelligences really achieve this power of reasoning? According to OpenAI, known for systems such as ChatGPT and Sora, it already has. The company recently introduced a new model called “OpenAI o1”, which it describes as a reasoning model that thinks before it answers.
OpenAI o1 is an artificial intelligence model developed by OpenAI and designed to spend more time thinking before it answers. The OpenAI o1 family can reason through complex tasks and solve harder problems than previous models in science, coding, and mathematics.
The models were made available on September 12 through ChatGPT and the API. However, OpenAI underlines that OpenAI o1 is a “preview” and that regular updates and improvements are on the way.
Until now, OpenAI has numbered its models 1, 2, 3, 4, most recently introducing the multimodal GPT-4o. OpenAI o1, however, represents a significant advance in complex reasoning and a new kind of AI capability. For that reason, the counter has been reset to 1 and the series is called OpenAI o1.
The new OpenAI o1 series models have been trained to spend more time thinking through the solution process before answering a problem. Along the way, the models try different strategies, recognize their mistakes, and refine their reasoning. Like humans, the model works through queries using a “chain of thought”, which is also displayed in the interface.
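To make the idea concrete, here is a toy sketch of what “thinking in steps” looks like. This is not how o1 works internally; it only illustrates the difference between blurting out an answer and laying out explicit intermediate steps (the example problem and numbers are invented for illustration).

```python
# A toy illustration of "chain of thought" style problem solving.
# This is NOT o1's internal mechanism; it only shows the idea of
# breaking a question into visible intermediate steps before answering.

def solve_with_steps(apples_per_box: int, boxes: int, eaten: int) -> int:
    steps = []
    total = apples_per_box * boxes
    steps.append(f"Step 1: {boxes} boxes x {apples_per_box} apples = {total} apples in total")
    remaining = total - eaten
    steps.append(f"Step 2: {total} - {eaten} eaten = {remaining} apples left")
    for step in steps:
        print(step)       # the "visible" chain of thought shown to the user
    return remaining      # the final answer

print("Answer:", solve_with_steps(apples_per_box=6, boxes=4, eaten=5))
```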
Tests show that the next model update in this series performs similarly to doctoral students on challenging tasks in fields such as physics, chemistry, and biology.
Great progress has been made in mathematics and coding in particular. In a qualifying exam for the International Mathematical Olympiad, the GPT-4o model answered only 13 percent of the questions correctly, while the new reasoning model o1 scored 83 percent. In coding, it reached the 89th percentile in Codeforces competitions. These are remarkable, unprecedented results.
However, since the OpenAI o1 series is a preview, it does not yet have many of the features that make ChatGPT useful, such as browsing the web for information and uploading files and images. For many common cases, GPT-4o will continue to be more capable or functional in the near term.
In this series, the models’ reasoning capabilities are also used to make them follow safety rules more effectively. The o1-preview model scored 84 out of 100 on “jailbreaking” tests designed to make the model violate its safety rules. This is a significant improvement over GPT-4o’s score of 22 on the same tests.
Jerry Tworek, OpenAI’s research lead, says the training behind o1 is fundamentally different from that of its predecessors. Although the company remains tight-lipped about the details, Tworek says o1 was trained with an entirely new optimization algorithm and a new training dataset built specifically for it.
OpenAI taught previous GPT models to imitate patterns in their training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and punishments. As explained above, o1 then uses a “chain of thought” to process queries, similar to the way humans work through problems step by step.
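To give a rough feel for what “rewards and punishments” means, here is a minimal reinforcement learning sketch: a toy agent that learns, from reward alone, that a careful step-by-step strategy pays off more often than guessing. The strategies, reward numbers, and epsilon-greedy rule are all invented for illustration and have nothing to do with OpenAI’s actual training setup.

```python
import random

# A minimal, invented sketch of reinforcement learning with rewards and
# punishments. It is NOT OpenAI's training method; it only illustrates the
# general idea: actions that earn rewards become more likely to be chosen.

strategies = ["guess quickly", "work step by step"]   # hypothetical actions
values = {s: 0.0 for s in strategies}                 # learned value estimates
counts = {s: 0 for s in strategies}

def reward(strategy: str) -> float:
    # In this toy world, careful step-by-step work succeeds far more often.
    success_rate = 0.9 if strategy == "work step by step" else 0.2
    return 1.0 if random.random() < success_rate else -1.0   # reward or punishment

for episode in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(strategies)
    else:
        action = max(strategies, key=lambda s: values[s])
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running average

print(values)  # "work step by step" ends up with the clearly higher value
```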
GPT-4o and OpenAI o1 were given the same math problem.
OpenAI says the new o1 series hallucinates far less often, though the problem has not disappeared entirely. In other words, o1 is less prone to making things up than GPT-4o. The biggest difference between the two, however, is in solving complex problems such as coding and math.
OpenAI o1 finds the right answer in a long series of operations.
However, when it comes to factual knowledge about the world, GPT-4o remains the more capable model.
The OpenAI o1 series was developed specifically for professionals who tackle complex problems in science, coding, and mathematics. For example, health researchers can use these models to annotate cell sequencing data, physicists to generate complex mathematical formulas for quantum optics, and software developers to create and execute multi-step workflows.
OpenAI has also released a smaller and faster version of the o1 series, the o1-mini model. This version, which is 80% cheaper than the o1-preview model, is particularly effective for coding tasks. It stands out as a powerful option for applications that do not require extensive knowledge of the world but do require reasoning.
ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting today. Both o1-preview and o1-mini can be manually selected in the model selector. During the release period, weekly rate limits will be 30 messages for o1-preview and 50 messages for o1-mini.
ChatGPT Enterprise and Edu users will have access to both models starting next week.
Developers who qualify for API usage tier 5 can start prototyping with both models in the API today, with a rate limit of 20 RPM (requests per minute). OpenAI says it will raise all of these limits over time.
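For developers who have that access, a minimal sketch of calling the preview models with the official `openai` Python package might look like the following. The model names come from the announcement above; everything else (the prompt, the environment setup) is illustrative, and exact parameters may change while the models are in preview.

```python
# A minimal sketch of calling o1-preview through the API with the official
# `openai` Python package (pip install openai). It assumes an OPENAI_API_KEY
# environment variable and the required usage tier; keep request volume
# within the 20 RPM preview limit mentioned above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the cheaper, faster variant
    messages=[
        {
            "role": "user",
            "content": "A train travels 120 km in 1.5 hours. What is its average speed?",
        }
    ],
)

print(response.choices[0].message.content)
```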
OpenAI will also bring its new o1-mini model to free ChatGPT users, although the company has not yet announced when that access will begin. There are currently no plans to offer the top-end o1 model to free ChatGPT users.
Although OpenAI has moved to a brand-new series called o1, this does not mean the GPT series has been abandoned. The company says it will continue to develop and release new GPT models alongside the OpenAI o1 series.
According to OpenAI, the new o1 model really thinks. But this is not thinking as we know it. The design of o1’s interface and its human-like messages such as “I’m thinking” and “Okay, let me take a look” play on our perceptions. So why does OpenAI present it this way? The answer is actually simple: no, o1 does not really think, but the model’s behavior is becoming more and more human-like, and the difference between the two is gradually blurring.
Ultimately, the OpenAI o1 model is still a large language model (LLM), and LLMs are not “intelligent”. We covered this issue in detail in our article “How do artificial intelligences work”. In short: today’s AI systems answer you by predicting word sequences based on patterns learned from huge amounts of data. Their answers are entirely probability-based. That is why I think it makes more sense to call them “probability robots” rather than “artificial intelligence”. In fact, the “hallucination” problem we keep talking about stems from this basic working principle.
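As a crude illustration of that principle, the snippet below picks a “next word” by sampling from a made-up probability table. Real models score tens of thousands of tokens with learned parameters, but the mechanism, choosing continuations by probability rather than by understanding, is exactly the point made above.

```python
import random

# A toy illustration of next-word prediction. The words and probabilities
# below are invented; a real LLM computes such a distribution over its whole
# vocabulary at every step using learned parameters.

context = "The capital of France is"
next_word_probs = {
    "Paris": 0.92,
    "Lyon": 0.04,
    "beautiful": 0.03,
    "banana": 0.01,   # low-probability continuations are still possible,
}                     # which is one root of the "hallucination" problem

words = list(next_word_probs)
weights = list(next_word_probs.values())
next_word = random.choices(words, weights=weights, k=1)[0]
print(context, next_word)
```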
On the other hand, there is a big “however” here. If researchers manage to unlock genuine reasoning in artificial intelligence, we will move significantly closer to real, human-level artificial intelligence.
We definitely recommend watching the video examples below to gain more insight into the model.