Part Two: What might your creativity look like in a Generative world?

In the second part of our dive into Generative AI, we look at some potentially unexpected – and positive – outcomes of its use in creative industries.
[Image: a side view of an illuminated human head, illustrated in bright white and blue on a dark background]

Written by Sarah Vloothuis

Senior Manager External Communications

In part one, we acknowledged the natural feelings of excitement and trepidation that creative communities have towards Generative AI. We considered whether those fears are well founded, our shared baselines for creativity and what our role might be in keeping the world’s content fresh and colourful. In part two, we look at GenAI ‘out in the wild’. That is, how it’s being naturalised into our familiar spaces and some potentially unexpected – and positive – outcomes of its deployment in creative industries.

Indeed, just as every technological revolution makes certain jobs superfluous, it creates new ones, and ‘Prompt Engineer’ seems to be the position du jour. It’s important to establish here that this role is a broad church: at the high end of the salary range (some at $100,000+ a year), it looks more like testing and evaluating the strengths and weaknesses of AI models, while the low end seems to be simply ‘create huge volumes of content to brief using ChatGPT’ (a quick scout around Upwork puts this end of the service spectrum at around $50 a day). With Microsoft, Adobe, Shutterstock and other big hitters welcoming Generative AI into their professional toolboxes, learning to prompt effectively will be an essential skill. Author and prompt wizard Guy Parsons gives a crash course on prompts for art and ‘faux-tography’ using DALL-E on the Microsoft website, advising, “it's best to imagine your image already exists in some kind of online gallery, and then write the kind of short caption you might imagine appearing with it.”

Remember, when Photoshop was released in 1990, some speculated that it would destroy photography forever. That seems laughable now, but the arrival of Generative AI in the creative space certainly feels similar. Today, Adobe’s Generative Fill (which is still in Beta and cannot be used commercially) brings the same tools found in Midjourney, DALL-E, Stable Diffusion and others into the Photoshop platform – which millions of creatives around the world use almost exclusively. In a simple text bar, you type descriptive prompts to automatically add, extend, or remove content. The AI is trained on Adobe’s database of stock images. And the weirdest thing? It’s early days, but people seem to really enjoy using it. Could this be because it offers the heady blend of a clearly defined data source and the potential to cleanly automate tiresome jobs without eliminating the truly creative ones? Has Adobe hit gold with digital creators?

[Image: three people sat side by side, their expressions suggesting they are watching something gripping or about to be scary; the person in the middle’s hand hovers over a popcorn carton]

Will heavy regulation of Artificial Intelligence mean that satire, sex, war, or any potentially triggering content that requires a sensitive treatment would be the sole preserve of humans?

A new kind of ‘Uncanny Valley’?

So far, the internet is ablaze with “OMG, look at this cool thing I did with AI!” and “Can you tell which photo is real?” fighting for airtime against claims of copyright infringement, legal precedents and employment anxiety. But once the novelty has worn off and AI-generated content becomes mainstream, will we all just end up so weirded out by it that we simply don’t respond as required? Back in the 1970s, the pioneering roboticist Masahiro Mori coined the term ‘Uncanny Valley’ to describe a strange phenomenon he encountered when measuring people’s emotional responses as they interacted with robots. He observed that human-like robots are very appealing – to a point. When they become almost human, yet not quite, people abruptly shift from empathy to revulsion. On a graph, this looks like a sudden ‘dip’ (the ‘Uncanny Valley’) from a positive response (such as you might have to a cuddly toy) to a place of abject fear (akin to encountering a dead body).

Shocking? Not so much, when you think about it. Clever old evolution has given us myriad ways to sense when things are off. Have you ever seen or read something that just doesn’t hit right? The pores in a photo are just too smooth, the grammar too perfect. It lacks the whiff of imperfection that we unknowingly expect from other humans. In this context, some psychologists use the term ‘Violation of Expectation’ – we know what we expect, but what we see does not match. Others point to our understanding of the ‘inner mind’: when we encounter something that should be human but lacks the essence of human experience, feelings of discomfort are triggered. When AI work is in widespread commercial use, will brands notice the connection with their customers loosening as audiences fall into this new Uncanny Valley?

Regulating humans into a darker place

Regulating Artificial Intelligence is a global priority at the moment, and well it might be, because the short, mid and long-term implications range from bog-standard misinformation to Large Language Models (such as GPT-3 and GPT-4) being able to create and run their own code. However, over the past months we have seen companies begin to self-regulate their AI, attempting to remove biases and ‘hallucinations’ (the AI term for ‘content that is untrue or makes no sense’) and prohibiting output that is violent, of a sexual nature or just hateful. OpenAI has gone so far as to forbid any representations of politicians, conspiracies, or current political events. Obviously, not all developers will have such rules in place, so tight regulation of developers, deployers and users will no doubt be incoming.

When regulations ensure that AI is trained to be ‘good’, it may no longer be used to explore the darker side of the human experience. That, weirdly, could be left to us. Does this mean that satire, sex, war, or any potentially triggering content that requires a sensitive treatment would be the sole preserve of humans? It would certainly make sense for it to be so. Firstly, only we have genuine experience of such things and, secondly, history shows us that we are darkly excellent at it.

“A world where we use text-based prompts to bring ideas to life could democratise creativity for millions of disabled people.”

Imagination > skills

As a species, we frequently tell ourselves that there are no limits to our imaginations. But there are limits to how we realise the things that we imagine. Today, bringing our ideas to life often requires technical ability: to write, paint, compose, design, film, photograph or any number of other physical skills. But what if you are unable to do these things? What about those who cannot lift a pen, type on a keyboard or use a Wacom tablet? What is the outlet for their exquisite imaginations? Generative AI is already giving power to those who have been neglected. Autistic people are reportedly embracing ChatGPT for its ability to help with articulating emotions, modelling conversations and immersing themselves in special interests. And a world where we use text-based prompts to bring ideas to life could democratise creativity for millions of disabled people. At a time when authentic storytelling has never been more valuable, there is no question that this is world-changing for everyone.

So, does our creativity have value in a Generative world? In short – yes. But there is no doubt that it will look a little different going forward. Jobs that can be easily automated will be, but new, clever creative talent will be able to filter through from previously unseen places. We can, and will, hold on to our traditional ways of creating – these will never go away – but Generative AI will take us in new directions. Those directions may be good, they may be terrible, but we will experiment and learn. Many of us will enjoy the tools it gives us and perhaps find that they raise our ambitions. Others may find them intolerable. We may find ourselves delving ever deeper into the human experience, seeking out the places that AI cannot reach.

But we will never stop. Because human creativity matters and the stories we share, individually and commercially, are galvanising.

Even when we use a little AI help to tell them.
