Deep Learning and Art


A couple of years ago I went to Paris and, of course, visited Le Louvre. While I cannot say it was boring, most of the artworks displayed there were a bit too… “classical” for my taste. It was only later, when I visited Le Musée d’Orsay, the sanctuary of Impressionist art, that I was really blown away. It is the things we see around us, a basket of apples or a sunset on a summer day, that I have always found fascinating.

The only thing I always felt sorry about is that those grand masters of art have passed away, and we cannot recreate their art in our day…

… but surprisingly, with the help of deep learning methods (the Keras library, which runs on top of TensorFlow), we can transfer the pattern of one image onto another (a technique called ‘neural style transfer’; see section 8.3 in [1]), giving us some very nice results.

Have a look at a few examples, each of which shows the “pattern” image, the original image, and the new image we created using the neural network model.

We will explain some of the math behind this below.

Which I got from taking a cherry tree (“sakura” in Japanese):

And Van Gogh’s Starry Night:

Which I got from taking Tel Aviv’s coastline:

And one of Monet’s Water Lilies:

Which I got from taking Central Park in NY:

And Dalí’s The Persistence of Memory:

Which I got from taking a flower vase from my home:

And Miro’s The Garden:

Which I got from taking HaYarkon Park in Tel Aviv:

And one of Sisley’s Grain Fields on the Hills of Argenteuil:

A few words on Deep Learning

Neural Networks, the algorithm behind “Deep Learning”, are supervised models that consist of layers of nodes with weights (the parameters of the model) and activation functions (which govern how information passes from one layer to the next).
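
To make this tangible, here is a tiny network of this kind written in Keras; the layer sizes, activations, and input dimension are arbitrary choices for illustration, unrelated to the models used below:

```python
import tensorflow as tf
from tensorflow import keras

# Two layers of weighted nodes, each followed by an activation function.
model = keras.Sequential([
    keras.Input(shape=(4,)),                        # 4 input features
    keras.layers.Dense(16, activation="relu"),      # hidden layer of 16 nodes
    keras.layers.Dense(1, activation="sigmoid"),    # output layer
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy")
model.summary()
```

The weights of the two Dense layers are exactly the parameters the training procedure below adjusts.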

The ultimate goal of a neural network is to minimize a cost function, called the loss of the network. This is usually done by an algorithm called backpropagation, which, like all good algorithms, uses the gradient of the function we wish to minimize and iteratively moves the parameters closer to an optimum.
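
As a toy illustration of that idea, here is gradient descent on a made-up one-dimensional loss; the function, learning rate, and iteration count are arbitrary choices for demonstration:

```python
# Minimal gradient descent on a toy loss L(w) = (w - 3)^2,
# whose gradient is dL/dw = 2 * (w - 3); the minimum is at w = 3.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0              # initial parameter value
learning_rate = 0.1  # step size of each update

for _ in range(100):
    w -= learning_rate * grad(w)  # step against the gradient

print(w, loss(w))  # w ends up very close to 3, the minimizer
```

Backpropagation does essentially this, except the “parameter” is the whole set of network weights and the gradient is computed layer by layer using the chain rule.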

The Math behind the Art

The idea behind these creations is to use a pre-trained NN model for image recognition, called the VGG19 network. We then run this network on the two images and minimize a loss function that produces the desired result. The key observation for building this loss function rests on some properties of NNs. Simply put, it has three components (a code sketch follows the list):

  • Global similarity between the images: compare (in l2 distance) the high-level activations of the network on the two images.
  • Local/pattern similarity between the images: compare the local correlations (inner products, i.e. Gram matrices) between the high-level and low-level layers.
  • Regularization factor: encourage continuity in the generated image, to avoid a pixelated result.
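
Here is a minimal Keras/TensorFlow sketch of these three components, loosely following the approach of section 8.3 in [1]. The function names, normalization constants, and tensor shapes are my own illustrative assumptions, not the exact code behind the images above:

```python
import tensorflow as tf
from tensorflow.keras.applications import vgg19

# Pre-trained VGG19 without its classification head: we only use its
# convolutional layers as feature extractors for the two images.
model = vgg19.VGG19(weights="imagenet", include_top=False)

def content_loss(base_features, generated_features):
    # Global similarity: l2 distance between high-level activations.
    return tf.reduce_sum(tf.square(generated_features - base_features))

def gram_matrix(features):
    # Channel-to-channel inner products ("local correlations") of a
    # feature map of shape (height, width, channels).
    f = tf.reshape(features, (-1, features.shape[-1]))  # (h*w, c)
    return tf.matmul(f, f, transpose_a=True)            # (c, c)

def style_loss(style_features, generated_features):
    # Pattern similarity: compare the Gram matrices of the two images.
    s = gram_matrix(style_features)
    g = gram_matrix(generated_features)
    channels = int(style_features.shape[-1])
    size = int(style_features.shape[0]) * int(style_features.shape[1])
    return tf.reduce_sum(tf.square(s - g)) / (4.0 * channels**2 * size**2)

def total_variation_loss(image):
    # Regularization: penalize differences between neighbouring pixels of
    # the generated image (shape (1, height, width, 3)) to keep it smooth.
    a = tf.square(image[:, :-1, :-1, :] - image[:, 1:, :-1, :])
    b = tf.square(image[:, :-1, :-1, :] - image[:, :-1, 1:, :])
    return tf.reduce_sum(tf.pow(a + b, 1.25))
```

The total loss is a weighted sum of the three terms, and the generated image itself is updated by gradient descent on that sum; the relative weights control how strongly the pattern dominates the original content.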

References

[1] Deep Learning with Python, by François Chollet.
