OpenAI GPT-4 With Image Input Feature and Better Factual Responses
On 14 March, OpenAI confirmed the rumours about an updated version of ChatGPT by officially announcing its new large language model, GPT-4.
The new model brings several enhancements, focusing on improved accuracy, more creative responses, collaboration on writing tasks, and safer output.
Starting today, developers can test the new model via the API.
Among its new features, GPT-4 adds an extra input capability. Usually we enter a text prompt and get back the content we asked for, but this time you can also upload an image, and the model will respond in text about what it sees. GPT-4 will also deliver more creative and factual content when given a more detailed prompt.
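To make the image-input idea concrete, here is a minimal sketch of how a combined text-and-image request could be structured. It assumes the message format of OpenAI's Chat Completions API, in which a user message's content can be a list mixing text parts and image parts; the model name and image URL below are placeholders, and actually sending the request would require the `openai` client library and an API key.

```python
def build_vision_request(prompt: str, image_url: str, model: str = "gpt-4") -> dict:
    """Build a chat-style request body that pairs a text prompt with an image.

    This only constructs the payload; sending it to the API is a separate
    step that needs the `openai` package and valid credentials.
    """
    return {
        "model": model,  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


# Example: ask the model to describe a (hypothetical) photo.
request = build_vision_request(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",
)
print(request["messages"][0]["content"][0]["text"])
```

The payload keeps the familiar text-prompt workflow intact: the text part carries the question, while the image part supplies the picture the model should reason about, and the response still comes back as plain text.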
Another improvement is capacity: GPT-4 can handle over 25,000 words of text at a time, far more than the prior model, which managed only around 3,000 words.
GPT-4 was trained on Microsoft Azure AI supercomputers, and OpenAI spent six months making it "safer and more aligned." According to OpenAI, the model is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.
Several companies worked with OpenAI on GPT-4, including Duolingo, which used it to improve language-learning interactions; Be My Eyes, which built visual-accessibility features; and Stripe, which applied it to fighting fraud.