ChatGPT’s new GPT-4 model powers a feature for the visually impaired

OpenAI, the company behind the AI chatbot ChatGPT, has upgraded the technology that powers the bot. The transition from the GPT-3.5 model to the GPT-4 model brings a range of improvements to the chatbot, including the ability to accept image prompts alongside text.
The new capabilities of the GPT-4 model have also made their way to Be My Eyes, an app for the visually impaired. More specifically, the app is set to employ GPT-4’s image-to-text capability to power a ‘Virtual Volunteer’ AI feature.

As per the Be My Eyes announcement, “Our new Virtual Volunteer tool, currently in beta testing, will push us further toward achieving our goal to improve accessibility, usability, and access to information globally, and aligns us with OpenAI’s stated principles on developing safe and responsible AI.”
How Virtual Volunteer improves the app
Be My Eyes is an app that connects visually impaired people with a community of volunteers and company representatives via video call. The platform lets users get help from volunteers with various daily needs, such as reading small text or distinguishing between colours. However, because the app is inherently community-driven, its users depend on others being available to help them.
