A Year After Pledging Openness, Apple Still Falls Behind On AI

A year ago, Apple pledged it would engage more with the academic research community. But today, AI experts say it’s still not enough.

“Apple is the NSA of AI.”

That was how storied computer scientist and Stanford University adjunct professor Jerry Kaplan described Apple's unconventional approach to artificial intelligence research in 2016. A year later, his assessment of the company is largely the same — despite Apple's pledge to more fully engage with the research community that drives innovation in the field.

“My observation is that Apple tends not to be as heavily involved in the academic AI world as other companies that are well-known for being involved,” Kaplan told BuzzFeed News this week.

Two years ago, Apple’s penchant for keeping its artificial intelligence research secret was notorious throughout the industry. But since then, Apple has made a big show of ramping up its efforts in AI. The company hired Russ Salakhutdinov, a highly respected Carnegie Mellon professor, to be its first AI director, and it publicly pledged it would engage more with academia. This past July, it debuted an official blog covering its progress in AI and machine learning.

Apple will start publishing, according to @rsalakhu at #nips2016

Kaplan admitted that he was not aware of these developments at Apple because he does not closely watch the company. But that’s also the point. One year after Apple’s pledge to be more open with the AI research community, the company is still facing much of the same criticism in a crucial and extremely competitive sector of the tech industry. “Other companies have very strong outreach, ties, and interaction [with universities like] Stanford,” Kaplan explained.

BuzzFeed News interviews with a dozen AI experts suggest that Apple is opening up a bit more, but that a disconnect remains between the academic AI community’s values and Apple’s way of doing business. The company’s obsessive focus on AI applications in Apple products can make working there less desirable to some talented experts, who have no shortage of options, researchers said. And that’s bad news for Apple, which faces an uphill battle in attracting the people it needs to become a true frontrunner in AI among the giants of tech.

“If Apple doesn’t publish, fewer talented people will join.”

Nowadays, the race among tech giants to lead in artificial intelligence — a term generally used for software that allows computers to learn and improve at tasks on their own — is heating up, and Apple clearly wants to be in the running. AI underpins self-driving cars and voice assistants like Siri and Amazon’s Alexa, and tech giants like Google, Facebook, Amazon, and Microsoft are in fierce competition to recruit the top minds in AI and offer the most advanced applications of the technology. (According to a recent report in the New York Times, pay for AI experts typically runs from $300,000 to $500,000 a year, with the field’s big names commanding millions.) For years, Apple was perceived as lagging in the field, largely because of its tight-lipped approach to research. Now, with top AI talent increasingly scarce amid booming demand, the company has sought to show it can adapt to the conventions of the AI community, where publishing frequently is a must.

“There is a stark contrast in how Apple deals with its AI research as opposed to other companies like Google, Microsoft, and Facebook,” Shreshth Gandhi, a research scientist at the Canadian biotech company Deep Genomics and a former machine learning graduate student at the University of Toronto, told BuzzFeed News. “Compared to its competition, Apple doesn't seem to be doing enough to promote new research in AI.”

“If Apple doesn’t publish, fewer talented people will join,” said University of Toronto machine learning student Yuhuai Wu, who studied under Salakhutdinov as a PhD candidate and whom Apple has attempted to recruit. “My biggest concern is whether I can still be visible in the research community [if I worked at Apple], and whether I can do the research that I want to do.”


One former computer vision AI engineer for Apple still remembers the aura of secrecy surrounding his job. He described how engineers knew only as much about any given project at the company as they needed to know. “It’s very common where, in a team of 10 people, five of us are disclosed to a project while five of us are not disclosed to a project,” the engineer said. “So you could be sitting beside this person, working together, and he has some information that you don’t have, while you have some information he doesn’t have.” That, the ex-Apple engineer said, “created a bit of a bad feeling, because you don’t know what the big picture is. You don’t know — ‘Why am I doing this?’”

To be fair, this was about two years ago, when things at Apple were at peak secrecy. But it underscores how locked down Apple’s culture had been — and how much the company has tried to open up in the years since. In January, Apple joined the Partnership on AI, a group dedicated to developing best practices for AI research, along with Facebook, Microsoft, and other tech companies. In October, it hosted BayLearn 2017, a Bay Area machine-learning symposium. When Apple brought Salakhutdinov aboard, the announcement made waves throughout the AI community as yet another buzzy AI hire by a top tech company, following New York University’s Yann LeCun, who joined Facebook in 2013, and the University of Toronto’s Geoffrey Hinton, who joined Google the same year. (Salakhutdinov joined Carlos Guestrin, a notable University of Washington professor whose machine learning company Apple acquired in August 2016.)

At the iPhone event this past September, people watching for clues about Apple’s AI progress were treated to a slew of AI-infused features in the new iPhones. Apple executives and presenters explained the iPhone X’s features using terminology familiar to the AI-literate: The company’s new A11 Bionic chip includes a so-called “Neural Engine” designed for artificial intelligence processing tasks, like mathematically modeling the human face for the iPhone X’s new Face ID authentication feature. Crucially, the chip uses machine learning to evolve its recognition of a face over time, so it can still identify you even if you grow a beard or start wearing glasses.
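Apple hasn’t published how Face ID’s adaptation actually works, but the underlying idea is easy to sketch. The toy Swift snippet below, with an invented embedding, similarity threshold, and update rule, shows one way a stored face template could gradually absorb changes in a user’s appearance:

```swift
import Foundation

// Toy sketch only: Apple has not disclosed Face ID's algorithm.
// The idea illustrated here is that each successfully matched
// capture nudges the stored template, so gradual changes (a beard,
// new glasses) are absorbed over time instead of causing failures.
struct FaceTemplate {
    var vector: [Double]      // hypothetical face embedding
    let learningRate = 0.05   // how quickly the template adapts

    // Cosine similarity between the stored template and a new capture.
    func similarity(to embedding: [Double]) -> Double {
        let dot = zip(vector, embedding).reduce(0) { $0 + $1.0 * $1.1 }
        let normA = sqrt(vector.reduce(0) { $0 + $1 * $1 })
        let normB = sqrt(embedding.reduce(0) { $0 + $1 * $1 })
        return dot / (normA * normB)
    }

    // Accept the capture if it is close enough, then shift the
    // template toward it so recognition evolves with the user's face.
    mutating func match(_ embedding: [Double], threshold: Double = 0.8) -> Bool {
        guard similarity(to: embedding) >= threshold else { return false }
        for i in vector.indices {
            vector[i] = (1 - learningRate) * vector[i] + learningRate * embedding[i]
        }
        return true
    }
}

var template = FaceTemplate(vector: [0.9, 0.1, 0.4])
print(template.match([0.85, 0.15, 0.45]))  // true; template shifts slightly
```

The point is simply that each accepted match moves the template a little, so a slowly changing face never falls outside the threshold all at once.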

No other company is as well-positioned to pull off features like this, said Hannes Bretschneider, a deep learning expert and cofounder of Deep Genomics — especially because they inform the user experience, or how consumers interact with Apple products, which is the company’s forte. “Apple really thinks about the product first, then they find the tech to enable those products,” he said. (Eddy Cue, Apple’s senior vice president of internet software and services, echoed the sentiment in a 2016 interview with Wired: “We are driven by a vision of the end result,” he said.) That strategy includes acquiring companies whose technology Apple wants to build into its products, such as the eponymous startup behind its voice assistant Siri, and PrimeSense, which developed the 3D-sensing technology behind Microsoft’s motion-sensing Kinect system before Apple acquired it, and whose work is likely behind the dot-projection system used in Face ID’s face recognition. The approach has its pros and cons: Acquiring companies for their technology can be expensive, but the tech sees more immediate application. Funding basic research, by contrast, pushes the far-out edges of AI technology — but applications may not materialize until five or 10 years down the line, if at all.

The machine learning blog Apple debuted in July was also meant to better align the company with the practices of the AI research community. But in the five months it has existed, the blog has published only seven entries, covering Siri, face detection, handwriting recognition, and text labeling — all of which reveal a product-centric focus. What’s more, the entries credit teams at Apple rather than individual researchers, a break from the norms of peer-reviewed publishing in the field.

“That blog is completely useless.”

“That blog is completely useless,” an AI professor of an elite university, who asked to remain anonymous because they did not want their name attached to criticisms of an influential tech company, told BuzzFeed News a few weeks ago. “There are absolutely no details, for example, in Apple’s post about AI in handwriting recognition. It amounts to bragging and it is impossible to actually learn anything from it. It feels like they realized most big-name institutions have blogs and created one, but didn't do it in a way that adds any value. I would contrast it with Google’s post about neural networks for language understanding, which has many more details and points to public code along with walkthrough explanations.”

Of Apple’s most recent post on face detection, published last Thursday, the AI professor said: “Not bad. There’s still no code released, but not bad. Certainly better than the other stupid one. They’re improving!”

This past July, Apple won a Best Paper award at the 2017 Conference on Computer Vision and Pattern Recognition — one of the most influential conferences in the field of AI. That’s an impressive accomplishment on its face, but what Apple has published formally pales in comparison to the output of other tech giants. In 2016, Facebook posted 125 articles to arXiv, the free-to-access repository of academic preprints, and a company spokesperson said that number was expected to grow to about 200 in 2017. A Microsoft spokesperson said the company published 847 AI papers in 2016, and that the tally stood at 394 by August of this year. A Google spokesperson, meanwhile, told BuzzFeed News that the company had not yet compiled its numbers for 2017, but that it published 133 AI papers in 2014, 171 in 2015, and 203 in 2016, and that it expected 2017 to follow the same general trend. Apple declined to tell BuzzFeed News how many AI papers it has published in recent years. But beyond its award-winning computer vision paper, BuzzFeed News was able to find only three others on arXiv with Apple researchers listed as authors — bringing the unofficial tally to four.

Apple also noted that it presented three peer-reviewed papers at the International Speech Communication Association’s Interspeech conference in Sweden this past August. According to AI researchers, peer-reviewed venues like these carry more weight, because arXiv imposes no standards of rigor. “Anyone can put a paper on arXiv and there is no review — peer or otherwise — to assess quality,” Georgia Tech AI researcher Mark Riedl said. “There are only a few papers that I can think of that are considered groundbreaking that were never submitted to or accepted into a highly competitive peer-reviewed conference.” But two researchers, including Riedl, told BuzzFeed News that the acceptance rate at these competitive conferences matters. “If the acceptance rate is less than 50% I'd say it's good, less than 25% is great,” said one AI professor. In 2015, ISCA said Interspeech’s overall acceptance rate was 51%.

More to the point, Apple still publishes very infrequently compared to other tech giants. “To be serious in AI, you have to publish at the main peer-reviewed artificial intelligence, machine learning, computer vision, and natural language conferences,” said Bart Selman, a Cornell University AI professor. Selman enumerated the conferences: NIPS, ICML, IJCAI, AAAI, CVPR, and ACL. “Other tech giants publish dozens of papers at those venues every year. So, Apple has a long way to go.” Of those conferences, Apple has only presented at ICML and CVPR. (Salakhutdinov did appear at NIPS in 2016 to announce that Apple planned to start publishing, but the company didn't present research at the conference.)

“I think this kind of secrecy is antithetical to the open research culture in the AI community.”

Apple’s business-first approach to AI development is fundamentally at odds with how the AI research community is used to exploring new ideas and new science, some researchers told BuzzFeed News. “One of my friends had a recruitment interaction with Apple, and they refused to give him even a vague idea of the kind of work he might be doing if he joins,” Gandhi wrote in an email to BuzzFeed News. “I think this kind of secrecy is antithetical to the open research culture in the AI community where most research by universities and companies alike is shared.”

AI researchers who BuzzFeed News spoke with acknowledged that there’s a big debate over the merits of publishing consistently versus having your research be deployed in millions of Apple devices the world over. “Many folks believe in deployment over publishing,” said the anonymous AI professor. “I think both are fair.”

But as Chris Nicholson, CEO and cofounder of deep-learning startup Skymind, points out, AI is a field dominated by academics and researchers, and “those people like to publish. Publishing is like breathing to them: You do it or you die,” he said. “So if you try to recruit AI researchers by promising lots of money and zero peer recognition, you won't get very far. There are some people who will never join Apple for that reason.”


Apple has also reportedly fumbled its AI projects in areas where it has been ahead of the curve. A recent Wall Street Journal article described how former Siri team members said progress on the voice assistant was slowed by a failure to set ambitious goals, changing strategies, and a dominating focus on developing the iPhone. “Siri is Apple’s biggest Achilles’ heel in AI,” said Bretschneider. “In terms of ability, it’s long been outgunned by Google and Amazon.” Unlike Amazon’s Alexa, which allows developers to code custom “skills” across a wide range of applications, Apple opened Siri up to only seven types of apps, including payment and ride-sharing, during its developer conference last year.
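The contrast shows up at the code level. Below is a minimal Swift sketch of a SiriKit handler for the payments domain, one of the handful of categories Apple sanctioned. The response logic is a placeholder, but it illustrates how iOS developers fill in Apple’s fixed intents rather than defining their own voice commands the way Alexa skill builders can:

```swift
import Intents

// A minimal sketch of developing for one of Siri's sanctioned
// domains. SiriKit exposes a fixed set of intents (here, sending a
// payment); developers implement handlers for those intents rather
// than defining arbitrary voice "skills." The success response below
// is a placeholder, not a real payment flow.
class PaymentIntentHandler: NSObject, INSendPaymentIntentHandling {
    func handle(intent: INSendPaymentIntent,
                completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        // A real app would run its own payment logic here, then
        // report the outcome back to Siri.
        completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
    }
}
```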

Other limitations stem from Apple’s emphasis on privacy. At its developers conference in 2016, the company made a big deal of being the first to apply a research technique called “differential privacy” at scale — essentially a way for Apple to analyze aggregate user data, like what users are accessing on the web via their iPhones, without revealing anything about any individual. “Apple’s goal there is to have it both ways,” explained Bretschneider. “For a long time, it just collected very little data from their users to begin with. [Using differential privacy] allows Apple to collect more data of their users now without changing their fundamental agreement.” While from the outside it’s hard to know how effectively this strategy is helping Apple develop its AI, Bretschneider said it is apparent that Apple’s AI efforts are “more targeted and limited,” whereas a company like Google “essentially wants to be an AI company.”
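Differential privacy itself is a well-studied academic technique. As a rough illustration (and not Apple’s actual implementation, which relies on a local, on-device variant), here is the textbook Laplace mechanism in Swift, which hides any individual’s contribution to a statistic behind calibrated random noise:

```swift
import Foundation

// The classic Laplace mechanism, a standard way to achieve
// differential privacy. This is not Apple's deployed system; it
// just illustrates the core idea of masking individuals by adding
// calibrated noise to an aggregate statistic.
func laplaceNoise(scale: Double) -> Double {
    // Invert the Laplace CDF using a uniform sample from (-0.5, 0.5).
    let u = Double.random(in: -0.5..<0.5)
    let sign: Double = u < 0 ? -1 : 1
    return -scale * sign * log(1 - 2 * abs(u))
}

// Release a noisy count. One user changes a count by at most 1, so
// noise scaled to 1/epsilon hides any individual's presence while
// keeping large aggregates roughly accurate.
func privateCount(trueCount: Int, epsilon: Double) -> Double {
    return Double(trueCount) + laplaceNoise(scale: 1.0 / epsilon)
}

// Usage: report how many users visited some site today, privately.
print(privateCount(trueCount: 10_000, epsilon: 0.5))
// Prints a value near 10,000, but no one can tell from the output
// whether any particular user is included in the tally.
```

Smaller values of epsilon mean more noise and stronger privacy guarantees, at the cost of less accurate statistics.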

Doubling down on its commitment to privacy, Apple also keeps most user data on the phone itself and deletes it after a few months. But Eugenio Culurciello, a Purdue University professor who works on machine learning hardware, said that while on-device AI processing is better than it has ever been, limitations on power and memory bandwidth still make a mobile device no match for the cloud-based AI that Google and Amazon rely on. (Those companies keep user data until users explicitly request that it be discarded.)

“AI at Apple is hobbled by the way they handle information.”

Essentially, Skymind’s Nicholson added, Apple is accepting a commercial disadvantage based on its business model. “AI at Apple is hobbled by the way they handle information,” said Nicholson. Apple takes the data privacy of its users very seriously — quite possibly the most seriously of any major company in Silicon Valley. It likes to tout that this is because it is a hardware business, not a company whose business ultimately relies on advertising, like Facebook and Google, which sell ads based on user data.

But it also means that it's harder for Apple to benefit from so-called “data effects,” where products improve based on the sheer volume of data you collect. “AI benefits from data effects,” Nicholson said. “It needs massive amounts of data, and once you have that, you can build a superior product, attract more users, expose your AI to even more data, and embark on a virtuous cycle. Apple's not fully participating in that cycle.” (Apple, for its part, said the right data is more important to the company than having the most data, and it is satisfied with the data it collects in privacy-preserving ways.)

Apple is a wildly successful company: the world’s most valuable publicly traded company, the world’s top retailer in sales per square foot, and the world’s most profitable company ever. But as the rise and fall of the world’s biggest corporations tells us, no one player has a permanent claim to No. 1. And if the pundits are correct, the fortunes of today’s most successful companies may well turn on AI, a technology that is already radically transforming essential human industries, from medicine and finance to labor and art.

“Apple has not shown the world it's a leader in AI,” said Nicholson. “And its strategy may mean that it won't own the future of AI.”
