Important announcement from OpenAI about banned ChatGPT

OpenAI, the creator of ChatGPT, has made public statements about the security of artificial intelligence and how it tries to keep its products safe. As will be remembered, sensitive information belonging to some ChatGPT users was leaked as the result of a very simple mistake. Following this incident, last week Italy became the first western country to ban the use of ChatGPT, citing privacy concerns, and the service came under scrutiny in several other European countries. While OpenAI says it is confident it complies with existing laws, Italy's move raises the possibility that other countries will follow suit and restrict the use of AI until its safety is assured. The blog post on the company's website might have been expected to allay fears about AI models that have made significant strides in the past few months. Does it relieve those worries? Not really.

"GPT-4 tested 6 months before release"

In the post, OpenAI states that it rigorously tests any new system with external experts before it is introduced to the public, and that it uses human feedback and reinforcement learning to make improvements. The company claims to have tested its latest model, GPT-4, for six months before releasing it, and calls for regulation: "We believe that powerful AI systems should be subject to rigorous security evaluations. Regulation is needed to ensure the adoption of such practices, and we are actively working with governments on the best form that such regulation can take."

OpenAI states that despite the extensive research and testing it has rigorously conducted, what can be learned in the lab is limited, so it cannot predict every beneficial use or every avenue of abuse. For that reason, the company argues that public testing is a must for the development of such systems. OpenAI says it can monitor abuse of its services and take immediate action based on real-world data. As these explanations show, OpenAI develops nuanced policies against the real risks posed by technologies such as ChatGPT, while in practice still allowing people to use them for beneficial or harmful purposes.

The company also said it is considering verification options that would restrict access to users over 18, or over 13 with parental consent. Italy's ban decision also emphasized the absence of any such verification option.
