The spread of AI frightens professionals in creative industries because it threatens their jobs. Drawing tablet manufacturer Wacom found itself at odds with its core users, designers, after using AI-generated illustrations in its New Year campaign. The designer community protested loudly, and Wacom withdrew the illustrations, acknowledging that it had offended and alienated its target audience.
Fashion brand Selkie likewise came under fire from its customers after a Valentine's Day clothing line, partly created using AI, was posted on Instagram. Customers noticed that some of the puppies in the garment prints had more paws than they should. Many Selkie fans expressed disappointment that the brand had chosen the technology over hiring an artist, pointing out that this contradicted its self-proclaimed values of artistic self-expression and ethical production.
The cautious ones
Some companies view such tools with caution. Major technology giants such as Apple and Samsung have prohibited their employees from using ChatGPT, citing confidentiality and the need to prevent any leakage of sensitive information. Telecommunications company Verizon and some Wall Street banks have taken similar measures, noting that company policy prohibits such tools to guard against customer data breaches. However, where there is a problem, there are solutions: OpenAI offers companies the option of creating a so-called private ChatGPT sandbox, ensuring that data and information remain within the company when employees work with the tool.
Artificial Intelligence Regulations
In March, the EU approved the Artificial Intelligence Act, the world's first comprehensive AI regulation. Delfi reports on what it will include, for example:
- Biometric categorization systems based on sensitive characteristics will be banned, as will the untargeted collection of facial images from the internet or CCTV footage to create facial recognition databases.
- AI systems will be banned for emotion recognition in workplaces and schools, social scoring of users, and similar purposes.
- Clear obligations are set for high-risk AI systems, which can cause serious harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. High-risk areas include critical infrastructure, education and vocational training, employment, essential private and public services (healthcare, banking), as well as some law enforcement systems, migration and border management, and judicial and democratic processes (e.g. influencing elections).
- AI-generated content such as images, audio, and video, including so-called deepfakes, must be labeled. The social media platform TikTok already called on users in September to mark artificially created content on the platform with a specially designed logo.
We are living through a period of history that is both exciting and frightening, because it is unknown how humanity will adapt to this technological revolution. The first copyright lawsuits against AI have already emerged, and their number is expected to grow. Lawsuits over invasion of privacy may follow.
Perhaps we will face an abundance and oversaturation of what AI produces, but at the same time a vacuum of creativity as what AI uses becomes more templated and standardized. Maybe AI will be used to mitigate the effects of other AI. Or we might see a new trend like AI detox, where we seek to reconnect with authenticity.
Many unknowns remain, but what is clear is that for some this will be a new opportunity, and for others a disruptive force. And we are already part of this important experiment in the world's development.