Blackmetal x GANs

In the spirit of the grainy, low-quality black & white imagery that Norwegian black metal bands purposely used on their album covers in the 90s, I decided to see whether I could generate such visuals with neural networks. Without thinking too much, just aiming to depict nature in its pure rawness and darkness.

There were times when I would have to apply several effects to video footage, or even draw some frames by hand, to achieve that unpolished, dirty look. Having thought and talked so much about the creative potential of GANs lately, I decided to test them in my own workflow and see whether neural networks could make this kind of work easier in the future.

Original footage

I started with a nature/landscape video, which I ran through the DeepLab model in Runway ML. This model created a semantic segmentation map for each frame of the video – something SPADE could make sense of and use to generate nature sceneries. The DeepLab model was chained directly to the SPADE-Landscapes model, which took the frames from DeepLab and turned them into generated images. In this case the results were saved straight to an image directory, but the model could also have been chained directly to the third model I used, Arbitrary Image Stylization. The last step was the ESRGAN model, which upscales the images 4x, giving me quite acceptable resolution as a result.
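The chaining above boils down to a per-frame pipeline where each model's output feeds the next model's input. Here is a minimal sketch of that idea – not Runway ML's actual API; the three stage functions are hypothetical placeholders standing in for DeepLab, SPADE-Landscapes, and ESRGAN, using trivial numpy operations just to keep the shapes honest.

```python
import numpy as np

def chain(frame, stages):
    """Run one frame through a list of model stages, output feeding input."""
    for stage in stages:
        frame = stage(frame)
    return frame

# Hypothetical placeholders for the actual Runway ML models:
def deeplab_segment(frame):
    # Real DeepLab would return a semantic map (one class label per pixel).
    return frame.mean(axis=-1, keepdims=True).astype(np.uint8)

def spade_generate(seg_map):
    # Real SPADE would synthesize a landscape image from the semantic map.
    return np.repeat(seg_map, 3, axis=-1)

def esrgan_upscale(img, factor=4):
    # Real ESRGAN upscales 4x with a GAN; nearest-neighbour stands in here.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

video_frame = np.zeros((270, 480, 3), dtype=np.uint8)
result = chain(video_frame, [deeplab_segment, spade_generate, esrgan_upscale])
print(result.shape)  # (1080, 1920, 3) – 4x the input resolution
```

Chaining the real models works the same way in spirit: each stage only needs to agree with its neighbours on the image format passing between them.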

DeepLab / Runway ML
SPADE-Landscapes / Runway ML
Generated nature sceneries with SPADE-Landscapes

So I got the generated nature sceneries, but I wanted them to look a bit more black-metal-ish. I tried several style images and settings in the style-transfer phase.

First attempt to transfer the style from a B&W image of a forest didn’t do much
(left: result, right: input image)

After trying several black & white misty forests and not getting really satisfying results (even though the style transfer itself was pretty decent), I wanted to try more extreme inputs. Darkthrone’s famous Transylvanian Hunger album cover already made some nice changes to the scenery:

Another test using the cover of Darkthrone – Transylvanian Hunger for style transfer
(left: result, right: input image)

When I noticed that the style transfer was able to grab the most iconic part of the visual and reinterpret the scenery with it, I immediately got curious what it would do with just a simple, classic black & white black metal logo. There it goes:

Arbitrary Image Stylization / Runway ML
Final version using the Darkthrone logo as an input for style transfer
(left: result, right: input image)

Amazing! Now the scenery is all black and white, and every shape is rendered with the branch-like structure of black metal logos!
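There is a reason stylization can "grab" a texture like the logo's spiky strokes: neural style transfer is generally built around matching statistics of convolutional feature maps rather than copying pixels, and the classic statistic is the Gram matrix, which keeps channel co-occurrence (texture) while averaging away spatial layout. A minimal sketch of that idea, with a random array standing in for a real network activation:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a feature map of shape (H, W, C).
    Captures which feature channels fire together – texture, not layout."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 32, 8))  # stand-in for a conv activation
g = gram_matrix(feats)
print(g.shape)               # (8, 8): all spatial positions averaged away
print(np.allclose(g, g.T))   # True: the matrix is symmetric
```

Because every spatial position is pooled into one C×C matrix, a style loss based on it rewards reproducing the logo's stroke texture anywhere in the frame, which is exactly the behaviour visible in the result.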

To wrap this up: I think there’s definitely a future for generated visuals in artistic practice. Searching for the right footage to work with used to be the biggest pain: there’s always a problem with image licences, and it always took hours of googling, browsing databases, or creating your own material.

There’s often a prejudice against generated visuals, because it usually wasn’t possible to really direct the final output, and such artwork had to count on some level of randomness and a number of “happy little accidents”. That might not be the case anymore. During this process I had quite a clear idea of what I wanted to achieve, and by combining various models I was able to get there quite fast. Yes, it will still take some time for the workflow and the quality of the outputs to meet expectations. But the state of the art as it stands now is definitely the tip of a really, really huge iceberg, creeping somewhere under the dark waters of a fjord of possibilities!