It was the final night of classes at Singularity University’s March 2013 Executive Program, and we, the students, had been given a valedictory assignment: Predict the future.
For the past six days, the 63 of us had been immersed in lectures on the nearly limitless potential of artificial intelligence, robotics, nanotechnology, and bioinformatics, and now the moment had arrived for us to figure out what we really believed and ponder the big questions. Was a transhuman future — the Singularity — really only three decades away, as SU’s chancellor and co-founder Ray Kurzweil had prophesied? Were we really on the brink of a cure for all viruses and an era of radical energy abundance? Would we soon be able to choose to live forever? How many glasses of wine would it take until our group of entrepreneurs, executives, and hippie mystics got impatient and just resolved to build a time machine?
Inside Singularity University’s airy classroom on the campus of NASA’s Ames Research Center in Mountain View, California, the SU staff distributed about 50 sheets of paper, many bearing newspaper headlines from this radical but not-too-distant future. (A future that, shockingly, still included a print newspaper industry.) We were instructed to break up into small groups to decide when in the next 20 years these world-changing milestones would come to pass.
“LIFE EXPECTANCY REACHES 150 IN AMERICA” blared the first headline. I stared at it incredulously. Life expectancy in the United States was currently 79. For the life expectancy to hit 150, that would mean… I started to do some back-of-the-napkin calculations. One of my fellow classmates, the 66-year-old chairman of an international law firm, was quicker to formulate his answer. “According to a gerontologist in England, the first person to live to 1,000 has already been born,” he told us. “I’m not sure I believe that, but everyone thinks that the first person to live to 150 has already been born. I’d even say the first person to live to 200 has been born.” The other five of us nodded our heads. We collectively decided that U.S. life expectancy would reach 150 within the next 10 to 15 years.
The next headline declared: “ROBOT LEAVES EARTH, MINES OTHER PLANET AND BRINGS MATERIAL BACK TO EARTH.” A number of us exchanged sidelong glances. Progress in space had slowed dramatically in the post-Apollo era. A group of planetary exploration enthusiasts — among them the film director James Cameron and SU co-founder Peter Diamandis — had recently backed a company seeking to mine asteroids, but it was hard to imagine those missions were imminent. Then Diane Murphy, Singularity’s PR executive, sidled up to our table. “Elon Musk already announced a plan to create an 80,000-person colony on Mars starting in 15 years,” she said. Gently admonished, we decided that planet-mining robots would be operational in the next decade.
After half an hour, the entire class reconvened in SU’s main lecture hall to compile our predictions into a master chronology. This is the future we foresaw: Five years from now, a majority of medical doctors will consult with artificial intelligence before making a diagnosis; a genetic-engineering service for fetuses will be an increasingly popular resource for expectant parents; and a synthetically manufactured virus will be found spreading in the wild. Five years after that, 100 million people will have watched the World Cup via virtual reality glasses, and laptops, tablets, and mobile devices will have been abandoned in favor of more immersive computer systems. Jump another half-decade ahead, and an AI will be given lead authorship of a scientific paper and a zoo will open that houses 10 species that went extinct more than 15,000 years ago. By 2033, our collective vision became murky: One small group predicted that synthetic grass would have cleaned up 100% of excess carbon dioxide in the atmosphere, arresting the progress of global warming and beginning to reverse its pernicious effects. Another group decided to write their own headline: “WORLD ENDS.”
“So how many people here had arguments about something they barely knew existed six days ago?” asked Kathryn Myronuk, SU’s director of research. Hearty laughter broke out across the room.
The evening was now a haze, with most of us buzzed on Merlot. It was 10 p.m. The lights dimmed in the conference room. Music came crashing over the speakers, and a few students launched into a sweaty dance. In an adjoining room where a late-night bull session on the nature of consciousness had transpired earlier in the week, more bottles of wine were emptied, with a pride of leonine European venture capitalists leading the bacchanal. The party carried on into the wee hours. The future looked bright.
Techno-utopianism is hardwired into Singularity University. Kurzweil and Diamandis founded the organization in 2009, at a time when both men were coming off career triumphs. Kurzweil — an inventor most famous for pioneering the flatbed scanner, creating reading machines for the blind, and developing a line of synthesizers popular in the ’80s and ’90s (he is, unsurprisingly, friends with Stevie Wonder) — had gained a new level of fame in 2005 with the publication of The Singularity Is Near: When Humans Transcend Biology, a best-selling manifesto making a scientific case for a merger of man and machine that would collapse distinctions between physical and virtual reality and even life and death. (Kurzweil sets the date for this event horizon at 2045.)
Meanwhile, Diamandis had seen the X Prize Foundation, a passion project he founded in 1995, spur the successful development of the first private spaceship to make multiple passenger-carrying flights. On a hiking trip in Chile in 2006, Diamandis was toting The Singularity Is Near in his backpack when he decided to found a school based on its ideas. When he returned home, Kurzweil agreed to join him.
Kurzweil and Diamandis had little trouble amassing powerful friends. Genentech, Google, Cisco, Nokia, and Autodesk lined up to be founding partners of Singularity University, and Google CEO Larry Page played a key role in shaping the organization’s mission. At SU’s founding conference in September 2008, Page leapt up and told the assembled group that Google would back the organization if it dedicated itself to “addressing humanity’s grand challenges.” Kurzweil and Diamandis were happy to oblige, and “improving the lives of a billion people within a decade” became a key plank of SU’s founding platform. (This January, Page hired Kurzweil at Google to help develop software that better understands natural language, how humans actually communicate.)
SU continues to expand the scope of its grand ambitions. It remains an educational institution, offering weeklong executive education courses like ours and a 10-week summer immersion primarily for young entrepreneurs. It has also become an elite Silicon Valley conclave, staging a three-day invitation-only schmoozefest on the Fox Studios lot in Los Angeles that brings together big-time CEOs like LinkedIn’s Reid Hoffman, science heroes like Apollo 11 astronaut Buzz Aldrin, and tech-curious celebrities like Ashton Kutcher, Will.I.Am, Jodie Foster, and Seth Green. And as of last year, Singularity University is a startup incubator, surrendering its nonprofit status to take an equity stake in companies developing emergent technologies like lab-generated beef and laser-printed DNA. Upon SU’s launch in 2009, Peter Diamandis told the Associated Press, “We expect the next generation of multibillion-dollar companies to come out [of] this university.” It’s perhaps telling of this healthy self-regard that the organization’s logo, a serifed “S” inlaid in a shield, recalls nothing so much as the emblem Superman wears on his chest.
It was a beaming Saturday afternoon in mid-March when the Executive Program attendees arrived at Singularity University’s principal classroom, a low-slung building on the 2,000-acre campus of NASA’s Ames Research Center. The skeleton of Hangar One — a 1930s airship garage that was once the world’s largest freestanding structure — loomed nearby. Many of us were jet-lagged after flights from São Paulo, Amsterdam, Tel Aviv, and Mumbai, but nearly everyone seemed giddy. We were an almost comically eclectic group: a square-jawed brigadier general in the U.S. Marine Corps; a fried-chicken magnate from Cali, Colombia; a Brazilian fashion researcher; the tan and buff mayor of a small California city; a former Goldman Sachs partner who described himself as “an explorer on a journey”; a Canadian geophysicist who had spotted “four or five” UFOs as a young man; a puckish 40-year-old named Malek who had toured the world as the indie-rock act Jupiter Sunrise; and two business magnates who were among the first to purchase flights on Richard Branson’s Virgin Galactic space airlines. The seven-day Singularity University Executive Program, it seems useful to mention at this point, costs $12,000.
Singularity University, we were told immediately and often, wanted to teach us to “recognize the power of exponential technology” by “rewiring our brains” to “think exponentially.” Exponential technologies, as we learned on that first afternoon, are simply technologies whose principal measures of capability increase by multiples — instead of growing linearly (1, 2, 3, 4, 5, 6), they grow exponentially (1, 2, 4, 8, 16, 32). In computing, exponential growth has long been observed in a phenomenon called Moore’s Law, which states that the number of transistors on a silicon chip doubles every 18 months or so. Such rapid enhancement is the reason that your smartphone is a million times less expensive and a thousand times more powerful than the supercomputers of 40 years ago.
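The distinction the instructors drew can be sketched in a few lines of Python. The 18-month doubling period and 40-year horizon below simply restate the figures from the lecture; this is a back-of-the-envelope illustration, not a precise model of chip economics:

```python
# Linear growth adds a constant step; exponential growth multiplies.
linear = [1 + n for n in range(6)]        # 1, 2, 3, 4, 5, 6
exponential = [2 ** n for n in range(6)]  # 1, 2, 4, 8, 16, 32

# Moore's-Law-style compounding: one doubling every 18 months,
# sustained for 40 years, as in the smartphone comparison.
doublings = 40 / 1.5                      # ~26.7 doublings
improvement = 2 ** doublings              # ~1e8-fold gain
```

Roughly 27 doublings compound to about a hundred-million-fold improvement, the same order of magnitude as the combined claim of “a million times less expensive and a thousand times more powerful.”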
Kurzweil’s contribution to the theory of exponential growth has been to take the principles behind Moore’s Law and apply them to everything from cellular biology to interstellar space travel. Kurzweil calls this “The Law of Accelerating Returns,” and it gets very radical, very fast. “In the 21st century,” he writes in The Singularity Is Near, “we will witness on the order of 20,000 years of progress (again, when measured by today’s rate of progress), or about 1,000 times greater than what was achieved in the 20th century.”
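Kurzweil’s “20,000 years of progress” figure can be roughly recovered from the doubling assumption itself. As a hedged sketch (the decade-long doubling period is an assumed rule of thumb here, not an exact law), integrate an exponentially growing rate of progress over the century:

```python
import math

doubling_period = 10  # years; assumed interval for the rate of progress to double
horizon = 100         # the 21st century, in years

# Total progress, measured in "years of progress at today's rate":
# the integral of 2**(t / doubling_period) from t = 0 to t = horizon.
total = (doubling_period / math.log(2)) * (2 ** (horizon / doubling_period) - 1)
# total comes out near 15,000 -- the same order as "20,000 years"
```

The exact figure depends on the assumed doubling period, but any period near a decade lands in the tens of thousands of “today’s-rate” years, which is the point of the claim.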
Later that day, as we celebrated our initiation into this brave new world with a feast of paella under the stars, I sought out the Canadian geophysicist and UFO spotter, figuring he might have an interesting take on what it means to experience “20,000 years of progress” in 100 years. It turned out this was his second stint at Singularity this year. He’d made the trip from Toronto to Silicon Valley only two months earlier for a weekend-long workshop with Kurzweil and Diamandis. “The whole thing is fucking addictive,” he said. “Ray and Peter are the high priests. I wanted to come back to get more of the practical side.”
I asked him about his UFO sightings.
“I was out prepping a 40-pound drill one night doing geochemical sampling in Saskatchewan,” he said. “Then I saw two brilliant, luminous, soundless discs. They combined into one. And then they were gone.”
“Do you believe in the Singularity?” I asked.
“Oh, I have a theory about what the Singularity might look like.”
I asked him what it was. “Well, I can’t tell you,” he chortled. “I might write a book about it!”
The next morning, the 63 of us arrived back at the main classroom, plowed through a gourmet breakfast buffet, and plopped into our pastel-colored swivel chairs for 14 hours to listen to a marathon of lectures. We learned that artificial intelligence was all around us — Pandora, Siri, IBM’s Watson, Amazon.com recommendations, credit card fraud detection, shoot-‘em-up video games like Call of Duty — but that soon we’d be able to create a single, massive artificial intelligence that would, as one instructor put it, “master the accelerating wave front of human knowledge.”
We learned that mapping the human genome took the U.S. government 13 years and $2.7 billion, now takes a lab as little as three days and $2,000, and in 10 years will be done instantly for less than it costs to flush a toilet (under a penny). We learned that in 18 years some people will be able to live for “an arbitrarily long period of time,” and that we ought to consider cryogenic freezing if we fail to make that cut. We learned that the nation-state is an outdated model at risk of being disrupted and that the future will belong to city-states — the Bay Area, to pick a random example — and global corporations. We watched videos of robots serving as pack mules over broken terrain and heard that wars in the future would be fought entirely by AI-enhanced machines.
But all of that was just the beginning. A day after our lectures on AI, bioinformatics, and robotics, Ralph Merkle, a legend of Silicon Valley — a cryptography innovator, cryonics enthusiast, nanotechnology evangelist, and potbellied reminder that computer geeks didn’t always aspire to tech-brohood — arrived at SU to imagine the 21st century for us.
Nanotechnology, he said, would soon enable you to hold your breath at the bottom of a pool for an hour (via red-blood-cell-replacing respirocytes), allow for reverse aging (via nanobots called chromallocytes that would insert fresh chromosomal material into your cells), and end global warming (via solar-powered diamond trees that transform carbon dioxide into oxygen). “We’ll have materials that are over 50 times stronger than steel for the same weight,” Merkle said. “A single-stage-to-orbit space vehicle would weigh about 3,000 kilograms including fuel. That’s about a VW bus. You hop in — you and four passengers and a bit of luggage — and you take off into low-Earth orbit. You’ll be able to go anywhere in the solar system in a month for a few thousand dollars. I don’t think we’ll have these kinds of technologies in 20, 25 years; I think it’ll be more 30, 35, maybe 40.”
“What could block it?” someone shouted from the back.
“Oh, I don’t know, nuclear war would block it pretty effectively,” Merkle said drolly.
“So basically it’s going to happen,” the student shot back.
Would it surprise you to hear that multiple lecturers used examples from the films Minority Report, Gattaca, and Prometheus to explain their subjects? Standing in the lunch line one day, a Marin County entrepreneur named Kent and I got into a conversation about all the futuristic talk we’d been hearing. He showed me his iPhone’s Kindle app, where he’d amassed a collection of titles by cult writers like “techno-thriller” novelist Daniel Suarez and William Hertling, author of A.I. Apocalypse and Avogadro Corp: The Singularity Is Closer Than It Appears.
“A lot of what we’ve been hearing sounds like science fiction to me,” I said to Kent. “It’s taking the concepts of science and extrapolating them into fantasy.”
“No,” he politely corrected. “This is science catching up with fantasy.”
Why had this group of students, ranging in age from their mid-twenties to late seventies, all come to SU? Few if any of us were planning to start companies that would manufacture nanobots or artificial brains, and not one of us was going to gain enough insight from a couple dozen 90-minute lectures to become anything close to an expert in any of these fields.
As the week wore on, I put the simple question “Why are you here?” to more and more of my fellow students. The answers divided the class into two camps. There were the people like Kent — mostly Americans, mostly with some tie to Silicon Valley — who plainly lived and breathed this stuff. And there were the skeptics — mostly foreign, many with some tie to finance — who seemed to be at SU, mostly, as one New Zealander explained to me, to bask in “the pedigree of Silicon Valley.”
Over lunch one afternoon, I put the question to two women, a Danish venture capitalist and an Indian-born, Germany-based solar executive. Both were fed up with the European welfare state — they couldn’t believe how many workers stayed home for sick days and they marveled at the chutzpah of male employees who used paternity leave as an excuse to shirk their responsibilities. Europe was dead. America and Silicon Valley in particular were the models for a more productive capitalism.
But what about the Singularity? I asked them. Did they have opinions on our transhuman future? The Indian woman curled her upper lip and let out a short huff. The Dane rolled her eyes. All that sci-fi talk, they made clear, was something to be endured, not enjoyed. (Still, it was undeniably part of the attraction. At the end of the week, the Indian woman confessed that she’d liked SU precisely because it wasn’t the same as a typical b-school course. It was weirder. Sometimes weird is good.)
One night in the middle of the week, I found myself sitting at the dinner table next to Malek, the Jupiter Sunrise frontman. Malek filled me in on his open-sourced, improvisational music project inspired by the films of John Cassavetes, told me about his various residencies — San Francisco, San Diego, a little town near Joshua Tree — and dropped that he was friends with Peter Diamandis. Malek had introduced himself to the group by revealing that after a devastating car accident he had lost hearing in his left ear. Returning to the subject, he called his choice of material “a little Machiavellian.”
“I very deliberately mentioned that I’d had an accident; it draws people toward me,” Malek explained. “Typically people don’t think exponentially until they’ve had something traumatic happen in their lives. This guy next to me,” he pointed to Kent, “I knew within two words something traumatic had happened to him.”
“This,” Malek continued, gesturing toward the attendees and faculty seated across the dining room, “is the scientific adjunct of the San Francisco New Age community. Two thousand years ago these people would have been Gnostics. All these people, their underlying motivation is to relieve their own suffering. Ray is a healer. He’s like a preacher. If you feel validated the first time, you’ll come back next week. If you feel validated again, you’ll donate. If you feel validated a third time, then you’ll evangelize. The whole system and everything that Ray is doing is operating on that church model with some math to prove it.”
Malek didn’t necessarily mean this as a critique, and the longer the week went on, the more his comments proved prescient. The day after that dinner, a ginger-haired British attendee confessed that he’d spent much of his adult life waging a battle against depression. (He ran a life-coaching business that promised potential clients the power to change their worlds into better places.) Then another man revealed that his wife suffered from multiple sclerosis. Then a former gymnast shared that she’d recently had two tumor scares. Malek had been right about Kent too: His father had died of MS two years before. When we divided up into student-led seminars one night, the most popular class was a meditation session taught by a lay Zen Buddhist monk who doubled as a Silicon Valley marketing consultant.
It wasn’t until the fourth night of the program, though, that the therapeutic, the spiritual, and the technological began to converge. I had been eagerly anticipating the moment when Ray Kurzweil would finally appear before our class, but when his speaking slot arrived, it was David Roberts — the director of SU’s graduate studies program, former military special agent, and current Air Force reservist — who took the stage. Roberts began much like the other lecturers, excitedly describing how disruption had made technology cheaper and better and more democratic.
“Unfortunately,” Roberts sighed, “technology doesn’t bring us ethics. And here’s the really bad news: As much as the disruptive technology enables us to create extraordinary things from very small numbers of people, it also allows very small numbers of people to do unbelievably dangerous and bad and evil things in a way that was never possible before. Within 20 years, one or two people could probably end up being capable of destroying the Earth.”
Roberts hit the remote that controlled the large display monitors scattered throughout the room. The 72-million-viewed YouTube video “Battle at Kruger” came onto the screens. In that safari footage, a young wildebeest is grabbed by a pride of lions and a hungry crocodile before being rescued by its herd. In a calm, preacherly tone, Roberts analyzed the scene.
“There’s a self-preservation instinct, and there is a herding instinct, and I think that’s about the level animals can think at,” Roberts said. “But I think human beings are different. C.S. Lewis does an unbelievable job of explaining this: We actually have this third thing. If you hear somebody yell down a dark alley, there is a self-preservation instinct that says, No, I don’t want to go down there, and there is a herding instinct that says, You should go down that alley, and there’s a third thing that tells you that the right thing is to go down that alley and see what’s going on.”
“What are we? What are you?” Roberts asked, his expression grave to make clear his extreme seriousness. Flipping to his next slide, a photograph of a large white palm cradling a tiny withered black hand, he continued, “We can become brilliantly smart and live forever, and not become something that we’re proud of. And technology doesn’t make that change for us, we make that change through a decision that is independent of that line and growth path. And that is my hope for all of us.”
After the lecture, I found Roberts outside SU’s classroom building talking with about a dozen students under a large oak tree. They were discussing C.S. Lewis’s “third thing,” an explicitly Christian concept, and pondering whether it was possible to be ethical citizens while also embracing runaway technological progress. As I was standing on the outside of the circle listening, a jovial Brazilian telecom executive walked up to us and seized upon a pause in the conversation. “I have just a short question,” he asked. “God?”
Roberts drew in a breath and leaned back on his heels, gazing upward through the gently swaying branches, a beatific smile spreading across his face. “I used to be a big atheist,” he chuckled. “I dressed up as God for Halloween.” He turned to the Brazilian. “You know, on the left you have all of these concepts we’ve been discussing, and on the right, you have God,” he said. “There are indicators both ways. I think both of these things are really possible.”
Singularity University doesn’t actually take alms, and if it’s operating on “the church model,” it’s only in the most metaphorical way. But faith plays a crucial role at SU.
Kevin Kelly — the former editor and publisher of the Whole Earth Catalog, a founder and former executive editor of Wired, and, it turns out, a devout Christian — had appeared in Transcendent Man, a 2009 documentary about Kurzweil. In the film, Kelly is interviewed about the Singularity, and while he treats Kurzweil sympathetically, he ultimately declares that his ideas are wishful thinking. I was curious if Kelly had written anything that might shed some light on “the third thing” and SU’s particular brand of faith. Late one night, I found a piece he’d written for Science in 1998 titled “The Third Culture.”
“In the third culture, the way to settle the question of how the mind works is to build a working mind,” he wrote. “Scientists would measure and test a mind; artists would contemplate and abstract it. Nerds would manufacture one. Creation, rather than creativity, is the preferred mode of action. One would expect to see frenzied, messianic attempts to make stuff, to have creation race ahead of understanding, and this we see already. In the emerging nerd culture a question is framed so that the answer will usually be a new technology.”
This was SU! This was its value system! This was the value system that believes in hacking more than studying, in entrepreneurs more than scientists, in private industry more than governments, and, for all the lip service paid to the consequences of technology, in innovation more than ethics. It is the value system of Mark Zuckerberg and Larry Page and Steve Jobs before them. And much more than in 1998, it is the value system of our world. What “thinking exponentially” really means is believing in the righteousness of this “third culture.”
Early on the sixth day of the Executive Program, Kurzweil himself finally addressed the class. He was immediately identifiable, wearing as many gold rings as Liberace and speaking with a slightly nasal voice that recalls the off-kilter cadence of Christopher Walken (the two men grew up in Queens, New York, less than five years apart).
“Look at how predictable that is,” Kurzweil droned, pointing at a graph he’d assembled of the exponential growth curve of innovation over the last 100 years. “Think of all the history that happened in the 20th century — two world wars, the Cold War, the Great Depression. People say, ‘If it happens so inexorably, why don’t we all sit back and relax and go on vacation, and just let it happen?’ Then it wouldn’t happen! Actually what’s predictable is human passion to solve problems and come up with ideas to continue this exponential progression.”
Kurzweil’s faith, then, isn’t really in machines but in man, and his laws don’t describe physical properties so much as human passions. In March, the British philosopher Colin McGinn (since embroiled in a very terrestrial sexual harassment scandal) wrote a scathing takedown of Kurzweil in The New York Review of Books, arguing that because it only measured mankind’s ingenuity, Kurzweil’s Law of Accelerating Returns was “just a fortunate historical fact about the twentieth century…not written into the basic workings of the cosmos.” Kurzweil plainly believes otherwise. The Singularity isn’t the final stage of evolution, as he sees it. It is merely the fifth stage. In the sixth and final stage, he writes in The Singularity Is Near, “the ‘dumb’ matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence…This is the ultimate destiny of the Singularity and of the universe.” In other words, the “human passion to solve problems” leads inexorably, if not entirely logically, to total cosmic consciousness.
“Does God exist?” Kurzweil muses, at the end of Transcendent Man. “Well, I would say, not yet.”
On the final afternoon of the Executive Program, after the hangover of our drunken predictions had faded, I decided to seek out Malek once more. He’d proven a brilliant forecaster, laying out for me on that second night the mix of group therapy, New Age spirituality, and technological boosterism that the week would become. How, I wanted to know, had he seen the future so clearly?
“Since I was 2 years old, I was trained to follow the signs of this happening — the junction of science and magic,” Malek said in a voice barely above a whisper. “Science is a philosophy. And religion is about looking back into the deepest past. How ironic that this place is called Singularity University — the Singularity is the oldest thing in the universe.”
“I’m part of other communities that are doing this,” he continued, “but instead of with data and technologies, they’re doing it with emotions. If this were one of those communities, we’d be sitting on beanbags and everyone would hug at the end.”
There was no group hug at the end of the SU Executive Program. Instead many of the attendees got dressed up — some men put on jackets, some women wore a bit more makeup — and we all mounted the stage to receive our SU diplomas. We were being welcomed into an exclusive club. “There’s a reality distortion field around SU,” Rob Nail, Singularity University’s Tesla-driving dude of a CEO, told us. “Our mission is to get everyone else into this circle and share our view of the optimistic future we can create.”
The convocation also included a magic show. One might think that after a weeklong immersion in superhuman AIs, nanobot-facilitated immortality, and the apocalypse, something as seemingly pedestrian as Robert Strong, “The Comedy Magician,” performing a few parlor tricks would be greeted with a yawn if not outright hostility, but Strong wowed the room. First he guessed the first and last name of a girl who had been one student’s first kiss, then he asked all of the attendees to write their names, secret talents, and lucky numbers on slips of paper, fold them up, and place them in a Tupperware bowl. At least to the eye, he never unfolded a single slip, but he managed to single out members of the group and reveal everything they’d written down.
Toward the middle of the show, Strong asked a middle-aged Brazilian woman to concentrate on a number. Then he wrote a series of numerals on a 4-by-4 square, asking her if any of them matched what she’d picked. She shook her head. “OK,” Strong said, “don’t let me finish the show without getting it.”
After a few more tricks, Strong pulled the Brazilian woman back onstage. Ever since he had asked her to pick her number, he’d been drawing something on a small notepad. Now he turned toward the audience for all to see. It read “43” in big bubbly numerals. The Brazilian woman gasped. It was her number. Then Strong delivered the coup de grâce. He crossed the stage to the 4-by-4 square and proceeded to sum all rows, columns, diagonals, and four-square boxes. Each combination added up to 43. The trick was a magic square, a centuries-old innovation that utilizes linear arithmetic, not exponential technology. The 63 of us burst into applause.
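Strong’s finale is a classic “birthday magic square”: a 4-by-4 template whose cells are fixed offsets from the chosen number, so that every row, column, diagonal, and corner quadrant lands on the target. A minimal sketch in Python — this particular template is one common construction, not necessarily the one Strong used:

```python
def magic_square(n):
    """Build a 4x4 square whose rows, columns, both diagonals,
    and corner 2x2 quadrants all sum to n (valid for n >= 22)."""
    return [
        [n - 20,      1,     12,      7],
        [    11,      8, n - 21,      2],
        [     5,     10,      3, n - 18],
        [     4, n - 19,      6,      9],
    ]

sq = magic_square(43)  # the Brazilian woman's number
rows = [sum(r) for r in sq]
cols = [sum(sq[i][j] for i in range(4)) for j in range(4)]
diags = [sum(sq[i][i] for i in range(4)),
         sum(sq[i][3 - i] for i in range(4))]
# Every combination sums to 43 -- linear arithmetic, no exponentials required.
```

Only four cells actually depend on the chosen number; the other twelve are constants, which is why the performer can fill in the grid almost instantly once the target is known.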