At Facebook's F8 developer conference Wednesday, the company plans to address some of the more future-focused aspects of its business, including artificial intelligence and 360 video. And, as part of that, it's revealing a bit more to the public about how its AI works.
At Facebook, AI powers projects like its virtual assistant, M, but the company is also using it to make photos more accessible to the blind by describing aloud what appears in them.
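Facebook hasn't published the code behind this feature, but the basic idea is straightforward: an image-recognition model emits a list of detected concepts, and those labels are composed into a sentence a screen reader can speak. The sketch below is purely illustrative; the label list and `compose_alt_text` function are invented, though the "Image may contain" phrasing matches what Facebook's automatic alt text actually says.

```python
# Illustrative sketch only: compose a spoken-style description from the
# object labels an image-recognition model might return. The labels and
# compose_alt_text are hypothetical, not Facebook's actual API.

def compose_alt_text(labels):
    """Turn a list of detected concepts into a readable description."""
    if not labels:
        return "Image may contain: no recognizable objects"
    return "Image may contain: " + ", ".join(labels)

detected = ["two people", "smiling", "outdoor", "trees"]
print(compose_alt_text(detected))
# -> Image may contain: two people, smiling, outdoor, trees
```

In practice the hard part is the recognition model itself; turning its output into speech is handled by the phone's built-in screen reader once the text is in place.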
Here's one video, for instance, that shows Facebook's AI reading off what it sees in an image:
In a blog post, the company said the same technology may also be applied to search: "Using image segmentation we will be able to build way more immersive experiences for the blind with 'talking images' you can read with your fingertips, as well as way more powerful ways to search images. In our case here the ability to search for 'a photo of us five on skis on the snow, with a lake in the background and trees on both sides.'"
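To see why segmentation enables that kind of query, note that once every photo carries a set of detected labels, search reduces to matching query terms against those labels. Here is a minimal sketch using an inverted index; all data and function names are invented for illustration and do not reflect Facebook's actual search infrastructure.

```python
# Hypothetical sketch: if image segmentation tags each photo with the
# concepts it contains, a query like "snow ... lake" becomes a set
# intersection over an inverted index from label to photo IDs.

from collections import defaultdict

def build_index(photo_labels):
    """Map each label to the set of photo IDs tagged with it."""
    index = defaultdict(set)
    for photo_id, labels in photo_labels.items():
        for label in labels:
            index[label].add(photo_id)
    return index

def search(index, query_labels):
    """Return photos whose labels include every queried term."""
    results = [index.get(label, set()) for label in query_labels]
    return set.intersection(*results) if results else set()

photos = {
    "img1": {"skis", "snow", "lake", "trees", "people"},
    "img2": {"beach", "people"},
    "img3": {"snow", "trees"},
}
index = build_index(photos)
print(search(index, {"snow", "lake"}))  # -> {'img1'}
```

A real system would rank results and handle free-text queries rather than exact labels, but the core move, indexing photos by machine-generated tags, is the same.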
Here's another video, where Facebook's AI reads through what's happening inside the app:
"Facebook has built an AI backbone that powers much of the Facebook experience and is used actively by more than 25% of all engineers across the company," the company said. "Teams across the company are running 50x more AI experiments per day than a year ago, which means that research is going into production faster than ever."
So get ready to see more of this type of functionality in Facebook's products, perhaps very soon.
Alex Kantrowitz is a senior technology reporter for BuzzFeed News and is based in San Francisco. He reports on social and communications.