Social media giant Facebook has started using artificial intelligence (AI) to describe the content of photos to blind and visually impaired users, a media report said.
Starting Monday, the feature, called “automatic alternative text”, generates descriptions of photo content for these users.
Created by Facebook’s five-year-old accessibility team, the feature is launching on iOS, with Android and the web to follow, and recognises objects in photos using machine learning, The Verge reported.
Machine learning helps build artificial intelligence by training algorithms on data to make predictions, the report said.
For example, if the software is shown enough pictures of dogs, it will in time learn to identify a dog in a new photograph. The feature identifies objects in Facebook photos this way, then uses the iPhone’s VoiceOver feature to read the resulting descriptions out loud to users.
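The pipeline described above could be sketched as follows. This is a minimal illustration only: the tag names, confidence threshold, and “Image may contain” phrasing are assumptions for the example, not Facebook’s actual implementation.

```python
# Illustrative sketch: turning an image classifier's predicted labels
# into a sentence a screen reader can speak. All specifics here
# (threshold, wording) are assumed, not taken from Facebook's system.

def build_alt_text(predictions, threshold=0.8):
    """Keep tags the model is confident about and phrase them as alt text.

    predictions: list of (label, confidence) pairs from an image classifier.
    """
    confident = [label for label, score in predictions if score >= threshold]
    if not confident:
        return "Image may contain: no description available."
    return "Image may contain: " + ", ".join(confident) + "."

# A screen reader such as VoiceOver would then speak this string aloud.
tags = [("dog", 0.97), ("outdoor", 0.91), ("car", 0.42)]
print(build_alt_text(tags))  # Image may contain: dog, outdoor.
```

Filtering by confidence matters here: announcing a low-certainty tag to a user who cannot verify it against the photo would be worse than saying nothing.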
“We need a solution to that problem if people who cannot see photos and understand what’s in them are going to be part of the community and get the same enjoyment and benefit out of the platform as the people who can,” Matt King, a Facebook engineer who is visually impaired, said.
Facebook is not alone in using machine learning to understand photos. Similar technology powers keyword searches in Google Photos and Flickr, but the technology is still prone to errors.