OpenAI has released a large multimodal model: GPT-4

After months of anticipation, OpenAI has released GPT-4, an artificial intelligence model that can understand both images and text, marking a new milestone for the company. GPT-3.5, the model behind OpenAI's hugely popular ChatGPT chatbot, could only process and generate text.

ChatGPT is now smarter with GPT-4

According to OpenAI, GPT-4 can accept both image and text input and performs at a "human level" on a variety of professional and academic benchmarks. The company reports that GPT-4 passed simulated exams (such as the Uniform Bar Exam, the LSAT, and the GRE) with scores "around the top 10 percent of test takers," a marked improvement over GPT-3.5. OpenAI also says the new system delivers its best results yet on "factuality, steerability, and refusing to go outside of guardrails." Although GPT-4 accepts image inputs, it can only return text for now. OpenAI warns that the system retains many of the same problems as earlier language models, including a tendency to fabricate information and the capacity to produce violent and harmful text.

Speculation about GPT-4 and its capabilities had been building for months, with many suggesting it would be a huge leap over previous systems. In practice, GPT-4 appears to be a model focused on fixing shortcomings and improving performance rather than a radical departure. As before, its training data does not cover anything after September 2021. OpenAI appears to have prepared GPT-4 primarily for developers who want to integrate it into their own systems and applications, and the company emphasizes that the new model is 82 percent less likely than GPT-3.5 to respond to requests for disallowed content.
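
For developers considering such an integration, the sketch below shows roughly how a combined image-and-text prompt could be sent to GPT-4 through OpenAI's chat completions API. This is a minimal illustration rather than anything from OpenAI's announcement: the model name gpt-4-vision-preview and the example image URL are placeholder assumptions, and the exact request format may vary with SDK version and account access.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and a
# vision-capable GPT-4 model; the model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name; check availability
    messages=[
        {
            "role": "user",
            # A single user turn can mix text parts and image parts.
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)

# GPT-4 returns text only, even when the input includes images.
print(response.choices[0].message.content)
```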
