Brainstorming future scenarios for synthetic visual media with a group of hackers and designers

During the Hackers & Designers Summer Academy 2019, I gave a workshop together with Pavol Rusnak dedicated to the creative power of GANs and Bob Ross's legacy: “Bob Ross Lives!”.

trippy video documentation from the day of the workshop

The goal of the workshop was to speculate together about possible scenarios for visual media generated by neural networks, as well as to get our hands dirty with the production of visual fakes ourselves.

In the context of the latest developments in deep learning, and specifically in Generative Adversarial Networks (GANs), we read a lot about the negative side of these tools being widely accessible. The media focus mainly on the threat of deepfakes and their consequences for politics or privacy, which generates unreasonable fear of and prejudice towards any kind of new synthetic media. The prevailing opinion that advancements in AI-powered visual generative tools will bring only harm and confusion to visual communication will not help us avoid these scenarios – quite the opposite. What our society needs instead is to see how it might also benefit from appropriating these tools and engaging in the exploration of their new creative possibilities as soon as possible. There are many as-yet-unimagined positive applications of deep learning models capable of generating photorealistic visuals, but they are difficult to see when we focus only on the malicious uses.

In order to create a balanced discussion about the consequences of synthetic media becoming a natural part of visual communication, the participants were divided into two groups based on their preference for exploring either good or malicious use cases of synthetic media. As Bob Ross used to say: “You need the dark in order to show the light” – so we too tried to place the dark right next to the light, to be able to highlight the positive and creative potential of GANs.

Quite a few peculiar scenarios emerged during the brainstorming session. The group of “bad guys” focused on various malicious uses of fake images, such as fake eBay advertisements with generated text and visuals, “missing cat” posters with StyleGAN-ed cat photos, and even faked terrorist executions of face-swapped celebrities staged to collect ransom. The “good guys” saw potential, for example, in letting AI re-interpret the satellite imagery of Google Maps to show what our cities would look like if we don’t address climate change right now, in image analysis that could help diagnose a skin rash from a photo, or in the speculative idea of bringing Jesus back to life. Remarkably, many ideas balanced on the edge between good and evil – or rather, we couldn’t clearly label them as genuinely good or absolutely malicious. Maybe it is the novelty of synthetic media generated by neural networks that fuels our inability to apply current moral criteria to them.

After the brainstorming session we introduced the available tools for producing synthetic visuals, such as Runway ML and Google Colab. By a nice coincidence, just a few days before the workshop Runway ML had introduced the feature we had all been waiting for: model chaining. This lets you connect the output of one model directly to the input of another, and therefore execute several steps at once. While using Google Colab requires a certain level of proficiency in Python, Runway ML lets you experiment with machine learning models instantly, without any coding. That not only makes machine learning accessible to a wider audience, it also rapidly speeds up the prototyping of spontaneous, workshop-triggered ideas, with the immediate satisfaction of almost instant visual output.
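To make the chaining idea concrete, here is a minimal sketch of what connecting two locally hosted models could look like in Python. The port numbers, the “/query” route and the JSON field names are my own illustrative assumptions, not Runway ML’s documented interface (inside Runway itself you chain models visually, without writing any code).

```python
import base64
import requests

# Hypothetical local endpoints for two models running side by side.
# The ports, the "/query" route and the JSON keys are assumptions for
# illustration only, not Runway ML's documented API.
MODEL_A = "http://localhost:8000/query"   # e.g. an image-to-segmentation model
MODEL_B = "http://localhost:8001/query"   # e.g. a segmentation-to-image model

with open("input.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Step 1: send the source image to the first model.
step_1 = requests.post(MODEL_A, json={"image": image_b64}).json()

# Step 2: feed the first model's output straight into the second model.
# This hand-off is exactly what Runway's chaining feature automates for you.
step_2 = requests.post(MODEL_B, json={"image": step_1["image"]}).json()

with open("result.png", "wb") as f:
    f.write(base64.b64decode(step_2["image"]))
```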

But why Bob Ross? Bob Ross was one of the first YouTubers before YouTube existed, spreading the joy of painting by making it as accessible as possible. While a proper art education is not an option for most people, anyone could follow his simple tutorials on TV and experience the gratification of achieving something seemingly so complex so quickly. I see a nice parallel with what Runway ML is doing for people who never had access to machine learning education. Being able to test your own ideas and understand a bit more of what neural networks can do is an important addition to any creative thinker’s, maker’s, artist’s or designer’s skillset. But while Bob Ross could give you only a very limited space to experiment on canvas, Runway ML lets you push the boundaries further than you might think.

playing with SPADE-COCO in Runway ML

I was curious how the participants would approach this tool and whether they’d be able to create the generated visuals they had brainstormed in the first part of the workshop. We had only three hours for the production phase, so I was sceptical about getting any results at all. A shortage of time is a common problem (or advantage?) of most workshops. While time pressure should act as an accelerator of the design thinking process, it can also ruin the final production phase and forever lock the concepts in an abstract realm. Usually a workshop focuses either on brainstorming and fast documentation of ideas (those are the workshops with lots of post-its) or on acquiring a new skill through guided learning by doing. In our workshop we merged both, because we consider the conceptual and the hands-on value equally important. If you only show the tools, you’ll end up with mostly low-hanging-fruit ideas made for the sake of trying out what the tools can do. If you don’t show the tools, you’ll get ideas with more conceptual value, but in the form of speculative sketches that often lack applied potential. With artificial intelligence already being so abstract, and the media offering only a very narrow view of it, it’s crucial to engage with it on a practical level, while also being conscious of the implications of such production.

generated Bob Ross by Rogier Klomp
half dog half human StyleGAN by Soyun Park

Even though some participants didn’t continue working on their concepts from the brainstorming session and instead prioritized undirected experimentation with the new tools, the discussion that followed set the ground for further conceptualizing their experiments. Somehow the opportunity to play with your own generated self in Google Colab (StyleGAN-Encoder + StyleGAN) overshadowed the much wider space of possibilities offered by the various models in Runway ML. Frankly, seeing your own face moving along the age and gender vectors is really impressive, and no wonder everyone wanted to try it out.
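For anyone wondering what “moving along the age and gender vectors” means in practice, here is a minimal sketch of the underlying latent-space arithmetic. It assumes you already have an encoded latent for your own face (for example from a StyleGAN encoder notebook) and a learned direction vector; the file names and shapes below are placeholders, not part of any specific notebook.

```python
import numpy as np

# Placeholder files: an encoded face latent and a learned semantic direction.
latent = np.load("my_face_latent.npy")        # e.g. shape (18, 512) in W+ space
age_direction = np.load("age_direction.npy")  # same shape, learned beforehand

def move_along(latent, direction, strength):
    """Shift a latent code along a semantic direction by a given amount."""
    return latent + strength * direction

# Negative strengths make the face look younger, positive ones older.
# Each edited latent is then rendered with the StyleGAN generator.
for strength in (-3.0, -1.5, 0.0, 1.5, 3.0):
    edited = move_along(latent, age_direction, strength)
    np.save(f"face_age_{strength}.npy", edited)
```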

This excitement naturally led to the idea of the “Bae2Babe App” – a concept for an app envisioning what the child of two (or even more) people might look like. Jonas Bohatsch set about showing us random combinations of workshop participants’ facial features, each resulting in a toddler’s face. Imagine this as a feature of dating apps such as Tinder!
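A purely hypothetical sketch of how such a combination could work under the hood: average the encoded latents of the “parents” and push the result toward “younger” along the same kind of age direction as above. This is my guess at the mechanics rather than Jonas’s actual code, and all file names are placeholders.

```python
import numpy as np

# Placeholder latents for two encoded parent faces plus a learned age direction.
parents = [np.load("parent_a.npy"), np.load("parent_b.npy")]
age_direction = np.load("age_direction.npy")

child = np.mean(parents, axis=0)       # naive blend of the parents' latents
child = child - 4.0 * age_direction    # push strongly toward a toddler's age
np.save("bae2babe_child.npy", child)   # render with the StyleGAN generator
```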

Generating babies by Jonas Bohatsch

Just as Rorschach inkblots can tell us something about the unconscious parts of our personality and our emotional functioning, computer vision combined with generating images from words might do the same. In this light, Nadia Piet created “GANschach”, a set of AttnGAN-generated images used to probe the unconscious parts of human and machine perception and association. The process combines AttnGAN (in Runway) with the Google Cloud Vision API and compares the results with what people saw in the image. Nadia ends her project presentation with a question: “Did they just create their own encrypted language that I/we cannot understand?”
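A rough sketch of the comparison step, assuming the google-cloud-vision Python client is installed and credentials are configured; the image path and the “human” answers are placeholders I made up for illustration, and the exact post-processing in Nadia’s project may well differ.

```python
from google.cloud import vision  # pip install google-cloud-vision

def machine_associations(path):
    """Ask Cloud Vision which labels it 'sees' in a generated inkblot image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return {label.description.lower() for label in response.label_annotations}

human = {"bird", "storm", "two dancers"}            # what a participant reported
machine = machine_associations("ganschach_01.png")  # what the API sees

print("shared associations:", human & machine)
print("machine-only associations:", machine - human)
```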

GANschach by Nadia Piet

The models that describe what is in an image, and those that generate an image from words, attracted notable attention from the participants. Together with generated selfies and gender swaps, these fall into the category of necessary first tryouts and the initial excitement of seeing the first few outputs. Perhaps it was the same at the beginning of the Internet. Do you remember what you did the first time you were online? I personally didn’t know where to start or what to do – there was a search bar in front of my face and I didn’t know what to search for… After an awkwardly long hesitation I entered the name of my then-favorite band, got several fan pages back – and I was A-MA-ZED! Compare that with what we do with the Internet today, and how seamlessly it is embedded in our daily lives. The only difference is that we’ll reach the point of laughing at our first experiments with AI in considerably less time.

Image generated from text input “graphic design” – Marianne Noordzij

Bob Ross was a huge inspiration for this workshop, and he also received a very nice tribute in the form of a conceptual project by Selby Gildemacher. Selby let the neural networks re-interpret several of Bob Ross’s paintings by combining the magic of the SPADE and im2txt models. What is so nice about this experiment is that, by creating a segmentation map from the original painting and generating a synthetic one from that map, the neural networks were actually painting along with Bob Ross’s tutorial (and therefore unknowingly participating in the Joy of Painting).

im2txt generates sentence descriptions of images.

The generated paintings were uploaded to the web database of Bob Ross’s paintings, where people also upload their own versions. You can find them under the user account called Rob Boss. It’s a generated masterpiece, consistent in every step – including the generated descriptions of the paintings, and even the description of Rob Boss himself!

Final Reflections: Bob Ross vs. Rob Boss by Selby Gildemacher
Bubbling Stream: Bob Ross vs. Rob Boss by Selby Gildemacher

Soon after uploading, people started commenting. And they love it!

comments on the generated painting at twoinchbrush.com

Not only did the AI follow Bob Ross’s tutorial, it was also able to fool a bunch of real fans with how convincing the result looks. I think it opens up an interesting question: where does an AI-generated fake painting stand among other fake paintings made by humans? Is it fake or is it not? Is it morally OK to upload generated paintings to this website without explicitly saying they are generated? But does it even matter in this case, on a website full of fakes?

This is exactly the kind of conflicting scenario I was hoping to get from this workshop, because there will only be more of them popping up in the future. Neither genuinely bad nor explicitly good, synthetic media will soon circulate the Internet naturally, alongside human-made content, and we will probably not even be aware of it. In discussions like this one, we test these situations and try to see the consequences. Some scenarios seem less scary once we expose ourselves to them. Others might surprise us with unexpected biases and strike us to the point that we want to change things and have our say in, for example, the way datasets are produced. Some questions might remain hard to answer at this point. But getting our hands dirty with AI-driven technologies, before they are definitively set in stone, is as important as browsing random lame fan pages on the 90s Internet. Back then it made us undeniably more critical and laid the base for the crucial digital literacy we possess today.

I would like to encourage all creators to go ahead and experiment with machine learning models without hesitation, simply trying out what they do and conceptualizing the experiments later on. Because, however you think it should be, that’s exactly how it should be.

If you have any interesting tryouts or thoughts on this topic, please share them with me!