We taught a computer how to say “Go Bears!” then oil painted it

I guess I enjoy creating cool stuff, and art is pretty cool.

Training montage of neural style transfer of ‘The Great Wave off Kanagawa’ onto a Lion

What constitutes “art” is super broad. As a computer graphics nerd, my interests lie in creating computational and generative art. As an artist, my tastes lean towards the abstract. This article highlights a recent project by my partner in art Jacky Lu and me (check out his linked art Instagram), in which we use deep learning to computationally generate digital art, which we then use as inspiration for abstract, physical mixed media art. As far as we are aware, mixing art generated by artificial intelligence with traditional, physical art is a fairly new concept, and an artistic space that we’re excited to continue exploring.

What if computers were the target audience of our art?

While this question is admittedly quite absurd, who knows, perhaps one day with the advent of artificial general intelligence, instead of our technology serving to entertain us, we might serve as entertainment providers for them…

As humans, we find the final generated, stylized images aesthetically pleasing (for example the final lion at the end of the training montage at the top of this article). But have you ever wondered what this “aesthetically pleasing-ness” looks like to a computer? Here is your answer:

2D representations of the Gram matrices of the 5 style layers of the neural net, as well as the learned “style” they correspond to (bottom right)

Each of the 5 black and white images corresponds to a rendered 2D representation of a Gram matrix (a high-level feature representation of “style” in our input style image) from one of the 5 style layers of our trained neural net. The bottom right image is the learned style texture they generate. Or, phrased more simply, the first 5 images are what the computer sees and the last image is what we as humans see.
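For the curious, a Gram matrix is simple to compute: flatten each feature map of a layer into a vector, then take all pairwise inner products. Here is a minimal NumPy sketch; the layer shape (64 channels at 32×32) is illustrative, not our actual network's, and the normalization convention varies between implementations.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's activations.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) matrix of inner products between
    flattened feature maps — it captures which features co-occur
    ("style") while throwing away their spatial arrangement.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # one row per feature map
    return flat @ flat.T / (c * h * w)     # normalized co-occurrence

# toy activations standing in for a real conv layer's output
feats = np.random.rand(64, 32, 32)
g = gram_matrix(feats)
print(g.shape)  # (64, 64)
```

Rendering a matrix like `g` in grayscale gives exactly the kind of square black-and-white image shown above, one per style layer.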

Neural style transfer is not a new technique

It has been around since 2015 (Gatys et al.), thus we claim no novelty on this front. The sequence below is an example of how Jacky and I personally applied the transfer of style from one image (painting by Zlatko Music Art) to the content of another (bear photo) to generate an entirely new, stylized image. Technical details of our implementation of neural style transfer (which differs in various ways from Gatys et al.’s) to generate images such as the one shown below can be found here, and further examples of our style transfer work (including onto videos and 360 videos!) can be found here.
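At its core, the Gatys et al. formulation works by iteratively updating the generated image to minimize a weighted sum of two losses: a content loss (match the deep-layer activations of the content photo) and a style loss (match the Gram matrices of the style painting across several layers). The sketch below illustrates that loss on toy arrays; the shapes, layer count, and weights are illustrative and not those of our actual implementation.

```python
import numpy as np

def gram(f):
    # Gram matrix of (channels, height, width) activations
    c, h, w = f.shape
    flat = f.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def total_loss(gen, content, style_pairs, alpha=1.0, beta=1e3):
    """gen, content: activations of one deep "content" layer for the
    generated image and the content photo; style_pairs: a list of
    (generated, style) activation pairs, one per style layer.
    alpha/beta trade off content fidelity against style."""
    content_loss = np.mean((gen - content) ** 2)
    style_loss = sum(np.mean((gram(g) - gram(s)) ** 2)
                     for g, s in style_pairs)
    return alpha * content_loss + beta * style_loss

# toy activations standing in for real conv-net features
rng = np.random.default_rng(0)
gen_c, tgt_c = rng.random((8, 4, 4)), rng.random((8, 4, 4))
pairs = [(rng.random((8, 4, 4)), rng.random((8, 4, 4))) for _ in range(5)]
loss = total_loss(gen_c, tgt_c, pairs)
```

In practice the generated image's pixels are then optimized (e.g. by gradient descent) to drive this loss down, which is what the training montage at the top of the article shows frame by frame.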

Left: Adobe stock image of a bear, Center: Artwork by Zlatko Music Art, Right: Our neural style transfer generated art

But can we really consider these generated images as novel pieces of art? If so, who is the artist?

Is it the machine (i.e. the trained neural net)? The human who trained the machine with specific parameters and who chose the input and training images? Or the photographers and artists who create the input images (used for style, content and training)?

At the end of the day, it’s not my place to tell you or anyone else what is or isn’t art. All I know is that the coder has a great degree of flexibility and control over the outcome of the final image, through clever input image selection, network architecture design, and the tuning of various parameters — and the results look really cool. Ultimately, isn’t that the goal of an artist? To use creativity to mould a work to the artist’s liking by fine-tuning various elements of a medium that are under their control? The modern artist is spoilt for choice when it comes to media for expression, and creative coding must be regarded as a valid artistic skillset, just as one would regard watercolour painting or pottery. The computer is the paintbrush of the 21st century, and the coder the artist.

The novel step: Machine Learning as a muse

We coded a neural style transfer algorithm to generate a computationally stylized bear, found that to be pretty cool, and are currently working on new generative art projects using neural networks. However, for this particular project, it is the next step that we consider to be the novel one: we used our own computationally generated image as inspiration to paint our own mixed media art!

“bear” — mixed media spray and oil painting by Ivan Jayapurna & Jacky Lu

Any computer science nerd who knows basic machine learning and has a GPU can implement style transfer, and any art nerd can oil and spray paint. Jacky and I exist at the intersection of the two, and going forward we hope to blur the boundaries between them even further and create some really cool stuff. Go bears!



Ivan Jayapurna

I’m a University of California, Berkeley PhD student who writes code, makes biodegradable plastics, and blogs about other things entirely.