
We Worked Out How To Generate The Trippiest Images With Google's Deepdream

The computer program mimics what the human mind sees. We got inside Google’s brain and took our own pictures.


Last month Google published "Inceptionism", a blog post about its latest experiments with a computer program that attempts to detect and re-create patterns in images using techniques similar to those of the human brain.

In the same way our brain identifies shapes and forms in clouds in the sky, Google's artificial neural network has been trained to recognise common features such as doors, dogs, and bicycles. The result is a new image, enhanced with information from what the network thinks it has seen.

Each layer of the program deals with features at a different level of abstraction. As the article explains, the first layer looks for information such as edges, surfaces, and orientation. Intermediate layers interpret the basic features to look for overall shapes or components, and the final layers recognise high-level features like the human body, creating complete interpretations.
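
To make that abstraction hierarchy concrete, here's a minimal sketch of how the layer choice is expressed in code. It assumes Caffe and the BVLC GoogLeNet model are installed and that the deepdream() helper from Google's reference notebook has been pasted into scope; the file paths are placeholders, not our exact setup.

```python
import numpy as np
import PIL.Image
import caffe

# Hypothetical local paths to the BVLC GoogLeNet model files.
model_path = 'models/bvlc_googlenet/'
net = caffe.Classifier(model_path + 'deploy.prototxt',
                       model_path + 'bvlc_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet channel means
                       channel_swap=(2, 1, 0))                  # RGB -> BGR

img = np.float32(PIL.Image.open('sky.jpg'))

# The `end` argument names the layer whose activations get amplified:
# early layers enhance edges and textures, while deeper layers
# hallucinate whole objects such as eyes, dogs, and buildings.
textures = deepdream(net, img, end='inception_3b/5x5_reduce')  # lower layer
objects  = deepdream(net, img, end='inception_4c/output')      # deeper layer
```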

Google went on to open-source the code under the name "deepdream", and the internet went on an acid trip, producing a myriad of beautiful, sometimes unsettling, and always strangely compelling images, GIFs, and videos.

Keen developers and web services quickly put up portals to let people explore deepdream themselves; however, it is an intensive process and many services were quickly overwhelmed. We wanted a thorough insight into what deepdream was capable of, so Paul, a BuzzFeed developer, set it up the hard way.

Here are some examples of people's #deepdream creations on Twitter:

A rare photography of The Puppyslug Nebula from the Hubble Telescope. #deepdream

#deepdream manifested a Darwinian menagerie on the shore, boats on an ocean, and a cosmic web of life in the sky.

First, we wanted to get a feel for how deepdream responded to a variety of different images, using the default filter, "inception_4c/output". This would guide us on which kinds of image could be made more beautiful, which induced the most vivid and abstract "dreams", and whether any simply did nothing at all.
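
A sweep like that can be sketched as a simple loop, continuing the setup above (the directory names are hypothetical):

```python
import glob
import os

# Dream every image in inputs/ with the default layer and save the results.
for path in glob.glob('inputs/*.jpg'):
    img = np.float32(PIL.Image.open(path))
    dream = deepdream(net, img)  # `end` defaults to 'inception_4c/output'
    result = PIL.Image.fromarray(np.uint8(np.clip(dream, 0, 255)))
    result.save(os.path.join('dreams', os.path.basename(path)))
```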

Now that we had a good idea of how deepdream responded to varying levels of detail, we went on to run a single image through all 54 filters available to us. This would tell us which filters produced the most interesting results.
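
In code, this is the mirror image of the previous loop: one image, iterated over the network's layers. Exactly which layers make valid deepdream targets depends on the model, so the filtering below is an assumption rather than our exact script:

```python
# Enumerate candidate layers from the loaded network and dream on each one.
img = np.float32(PIL.Image.open('portrait.jpg'))
for layer in net.blobs.keys():
    if 'split' in layer:  # skip Caffe's internal helper blobs
        continue
    dream = deepdream(net, img, end=layer)
    fname = 'filters/%s.jpg' % layer.replace('/', '_')
    PIL.Image.fromarray(np.uint8(np.clip(dream, 0, 255))).save(fname)
```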

The variety of interpretations the network produced was stunning. Combining what we'd learned about which images work well with the 13 filters that produced the most interesting results, we ran a selection of images through deepdream at high resolution. Our machine took a 48-hour processing nap; here are the dreams it had:

This is our final selection of images, each run through our favourite four filters.

Here is a photo by Marcelo Del Pozo of a man riding through the flames during the "Luminarias", a religious tradition that dates back 500 years, in Alosno, Spain.

Here's how we did it:

You'll need to be comfortable on the Linux or OS X command line to set this up. We configured deepdream using image-dreamer, then modified the Vagrantfile to give the virtual machine all the CPU and RAM the host could spare, as we found the size of our input images drastically affected the processing power required.
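
With the VirtualBox provider, those resource limits live in the Vagrantfile; the numbers below are illustrative rather than our exact settings:

```ruby
Vagrant.configure("2") do |config|
  # ... existing image-dreamer settings ...
  config.vm.provider "virtualbox" do |vb|
    vb.cpus   = 4     # host CPU cores to allocate
    vb.memory = 8192  # RAM in MB
  end
end
```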

We wrote two shell scripts along these lines: one to see which images "dreamed" most effectively (one filter, many images), and one to find the most interesting filters in the set (one image, many filters). You're welcome to use them as you wish.

This user on Imgur provides a useful reference to the available filters if you'd like to save time!

You can view the full gallery with which Google introduced deepdream to the world here.
