OpenAI is developing an AI that plans for the future

OpenAI, the leading company in artificial intelligence, is reportedly working on a new technology: according to reports, the Microsoft-backed company is developing a new approach to reasoning.

OpenAI, the Microsoft-backed developer of ChatGPT, Sora and DALL-E, is working on a secretive project codenamed "Strawberry". With this project, the company is reportedly trying to bring a new approach to how it develops its artificial intelligence models.

With the Strawberry project, OpenAI is said to be adding advanced reasoning capability to its large language models. According to Reuters, the company keeps its plans for Strawberry, and the details of how it works, tightly guarded even internally. It is currently unclear when Strawberry will be announced to the public.


OpenAI is developing a reasoning technology that will revolutionise AI

With Strawberry, the aim is for AI models not only to produce answers to queries, but also to plan far enough ahead to navigate the internet autonomously and reliably, using a mechanism the company calls "deep research". There is another, more important detail: Strawberry is rumoured to be the mysterious Q* (pronounced Q-Star) project reported on in the past. In recent months, Q* has been described as a potential turning point in the pursuit of superintelligence, also referred to as artificial general intelligence (AGI).

OpenAI also demonstrated a research project at an internal meeting last Tuesday that it claimed shows new human-like reasoning skills. It is unclear whether this is the Strawberry project, but the indications point to it.

Reasoning is one of the weakest points of today's artificial intelligence systems. Large language models (the systems that power chatbots like ChatGPT) can summarise dense texts and produce fluent prose, but they routinely fail at questions that require intuition, common sense or, more precisely, thought. Confronted with such questions, the models produce fabricated information referred to as "hallucinations".

AI systems that are strong at reasoning would also be better at planning ahead, understanding the physical world and solving challenging multi-step problems. Strengthening reasoning in AI models could therefore revolutionise their ability to do everything from making major scientific discoveries to planning and building new software applications. In a January conversation with Bill Gates, OpenAI CEO Sam Altman said that "the most important areas of progress will be around reasoning" in AI.

What OpenAI is working on is also said to resemble a method developed at Stanford in 2022 called "Self-Taught Reasoner", or "STaR". STaR is an approach in which a model iteratively generates its own training data: it produces reasoning chains for problems, keeps the ones that lead to correct answers, and fine-tunes itself on them, bootstrapping itself to higher levels of capability. In theory, this approach could be used to push language models beyond human-level intelligence.
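For readers curious about the mechanics, the sketch below shows roughly how a STaR-style bootstrapping loop could look in Python. It is an illustration of the general idea only, not OpenAI's or Stanford's actual code: the Problem class and the generate_rationale and fine_tune helpers are hypothetical placeholders for a model's sampling and training steps.

```python
# Illustrative sketch of a STaR-style self-training loop.
# generate_rationale and fine_tune are hypothetical placeholders,
# not real library calls.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Problem:
    question: str
    gold_answer: str


def star_round(
    model,
    problems: List[Problem],
    generate_rationale: Callable[[object, str], Tuple[str, str]],
    fine_tune: Callable[[object, List[Tuple[str, str, str]]], object],
):
    """One round: generate rationales, keep those that reach the correct
    answer, then fine-tune the model on its own successful reasoning."""
    kept = []
    for p in problems:
        rationale, answer = generate_rationale(model, p.question)
        if answer.strip() == p.gold_answer.strip():
            # Only reasoning chains that end in the right answer are reused
            # as new training examples.
            kept.append((p.question, rationale, answer))
    return fine_tune(model, kept)


def star_train(model, problems, generate_rationale, fine_tune, rounds: int = 3):
    """Repeat the generate / filter / fine-tune cycle for several rounds."""
    for _ in range(rounds):
        model = star_round(model, problems, generate_rationale, fine_tune)
    return model
```

Each round the model is trained only on reasoning it produced itself that turned out to be correct, which is the "self-taught" aspect the paragraph above describes.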
