What is AI art and what does it mean for the future of content moderation?
October 14, 2022 | Image Moderation
Artificial intelligence (AI) has been taking off in recent decades. As AI grows smarter, the human role in many tasks changes or even disappears altogether. In a field like content moderation, humans will most likely always be needed to understand the nuances of moderation decisions, but AI can still be used to flag potentially offensive content. AI has become a leading force in the current state of technology, but what about our creativity?
AI art is on the rise, raising the fascinating question of whether creativity has to be innately human. AI can now craft art using only a text prompt and other images from the internet. It's an amazing feat, but what does it mean for the future of art and for ethical issues on the internet?
Let’s take a closer look!
What is AI-generated art?
In recent years, several companies have emerged with AI software that helps users create images from text prompts. While programs like Midjourney and DALL-E have been available for some time, AI-generated art has really started to boom since the release of Stable Diffusion. The company Stability.ai released Stable Diffusion in August of this year, making it available for virtually anyone to use. By drawing on images from across the internet, this incredible software can create art in the style of any artist, using any subject or background. Anyone can use the program to create beautiful artwork and graphic designs without ever picking up a pencil or brush.
Many people are even creating integrations with other software, like Photoshop, to enhance the art they make with Stable Diffusion. AI isn't only being used to create funny images to share; it's also producing impressive pieces of artwork. Because software like Stable Diffusion pulls images from across the web to generate art, many ethical questions have arisen about whether artists are being credited for their work. Some artists are even concerned that AI will completely replace humans in the art world.
While AI art is a remarkable technological achievement, it raises important questions about the future of content moderation, as well as the future of art as a career in the digital space.
The future of art as a career
Plenty of jobs have been replaced or have the possibility to be replaced by advances in tech. Between the self-checkout at the grocery store and the emergence of self-driving cars, we’re seeing the extent to which technology can do some jobs more efficiently than humans. Now that AI can actually create art that passes for human-made art, artists and graphic designers are concerned about their careers.
AI art is far cheaper than commissioning a trained artist. Especially for smaller projects, companies might begin cutting corners, using software like Stable Diffusion instead of paying an artist. This could leave entry-level artists out of work and unable to gain the experience they need to progress in their careers. Some marketing and advertising agencies are already exploring how to leverage AI art to create compelling graphics for their clients.
Beyond entry-level opportunities, professional artists are also concerned about how AI could impact their careers. Notably, someone recently won the Colorado State Fair's fine arts competition in the "emerging digital artists" category with a piece created using the AI art generator Midjourney. Unsurprisingly, many artists were outraged at this news and worried that human artists may eventually become obsolete. What was once a craft that took years to master is becoming something that can be done at the push of a button. Their fears may not be unfounded, either. Recently, a national art exhibit opened in the Faroe Islands showcasing only artwork created with AI.
Some artists, however, feel this new technology could be useful. If AI art could expedite their creative process, they'd be able to create more pieces in a shorter amount of time. Even the artist who won the competition using Midjourney spent 80 hours getting the piece just right, proving that there is still a need for human creativity, even when working with AI software.
Executives at some emerging AI companies have spoken out about the idea that creative AI will take jobs from humans. Several don't believe AI could ever fully take the place of human creation; rather, they expect it to make processes easier and remove some of the tedium from creators' jobs. Only time will tell how advanced AI will become and whether creativity is an innately human skill. Even if AI doesn't take roles away from creative humans, however, the world of AI-generated art remains nebulous when it comes to the legality and morality of using an artist's work to generate new artwork.
Copyright ambiguity with AI-generated art
The next question that AI art poses is: can artists sue over their “style” being used in artwork made with AI? The creators of software like Stable Diffusion believe that the onus is on individuals to use the program responsibly and not commit copyright infringement. The laws regarding the use of an artist’s work when generating new artwork as well as the copyrights of the resulting artwork are currently unclear.
Back in February, the U.S. Copyright Office denied an artist the copyright to art created using a platform called Creativity Machine, stating that "human authorship is a prerequisite to copyright protection." Recently, however, a graphic novelist submitted a graphic novel created using AI for copyright registration and received it, further complicating the legal picture around AI artwork.
While the laws remain ambiguous, companies like Getty Images feel they have no choice but to ban AI artwork from their websites. With the responsibility falling on the individual user, it can be challenging to know when you're on the right side of the law, and companies want to protect their users from accidentally breaking it as best they can.
But copyright laws are not the only laws digital artists using these AI art generators may have to contend with. There is also the question of what types of content can and should be created. Unfortunately, when these AI art platforms are used for unethical acts like copyright infringement or creating deepfake images, the user is not the only person affected; the artist or subject of the artwork also suffers. Deepfakes can lead to defamation, which raises an interesting question about moderation.
Content moderation and AI-generated art
This new AI medium also introduces new challenges to the content moderation industry. While companies that enable users to create art with AI have tried to ensure their tools can't produce offensive content such as pornography, users have easily found ways around these safeguards. Discord servers such as Unstable Diffusion have cropped up to circumvent them, allowing people to generate images using words that Stable Diffusion deems harmful. This can become very dangerous as people begin to request abusive content such as child pornography.
Users can also create artwork on these AI platforms that spreads misinformation. They could produce images and videos of political figures and celebrities endorsing things that they never endorsed. Users of the infamous site 4chan have already begun using AI art generators to create deepfake images like fake pornography, which can have real-life consequences for victims. One investigative journalist in India was the victim of one such creation: a pornographic video was created using her likeness, and the abuse she received was so awful and unending that the United Nations had to step in and insist that the Indian government protect her.
While offensive content such as pornography is easily recognized as inappropriate, deepfakes of political figures saying things they never said, for example, are significantly more challenging to detect. This type of content poses a dangerous threat because it can be used to spread misinformation. How can you monitor content that is fraudulent, but not necessarily offensive? As AI becomes more advanced and more accessible, we, as content moderation service providers, continue to adapt our approaches, just as we always have, to address the new user-generated content threats that accompany each evolution of technology. As an example, we recently launched our new VR moderation studio to keep VR communities safe; VR is another tremendous tech leap that poses new moderation challenges.
Artificial intelligence has been on the rise in recent decades and continues to change the world around us. While AI has been effective at automating many technologies, we are now seeing it change the way we approach creative projects, too. AI is growing faster than ever, with some predicting that AI-generated artwork will change the art world as we know it within the next year. As AI becomes more integrated into our society and the way we do things, more questions about ethics and moderation will arise. While AI can help somewhat with moderation, having a human in the loop to spot complex and nuanced content like misinformation and deepfakes remains crucial. Content moderation platforms like WebPurify will need to remain ever vigilant as people increasingly rely on AI to create content that is not only offensive but also inaccurate or bordering on copyright infringement.