The tools most commonly used among agencies include Adobe’s AI-enhanced suite, ChatGPT, Bard, DALL-E, Midjourney, and Stable Diffusion.
FuturePedia does an excellent job of tracking AI tools broadly.
The Brandtech Group powers a curated landscape dubbed BrXnd Scape that aims to be “a landscape of the world’s best companies at the intersection of brands and AI.”
While many stakeholders have been calling for regulation establishing clear guidelines for the development and use of AI, AI legislation is still in its early stages in the U.S. An interesting new federal bill on the horizon is the bipartisan NO FAKES Act, an AI deepfakes bill that would establish a federal right of publicity.
To fill the perceived AI legislation void left by Congress, nearly 200 AI-related bills have been introduced in state legislatures nationwide so far in 2023 – more than a four-fold increase over 2022. So far only 14 have become law. Deepfake bills are the most popular theme, and the most likely to pass. Many of the bills focus on how state governments will use AI and whether to mandate impact assessments to mitigate the risks of certain types of AI. Worth noting: there is strong overlap between legislators focused on data privacy and those focused on AI.
The White House recently (10/30/23) issued an “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” that directs government agencies to establish new standards and regulations for AI. The official fact sheet can be found here. It will take time for the various federal agencies to execute against the order – and with 2024 being a presidential election year, things may proceed even more slowly – but a firm call to action from the President is welcome.

This action follows previous efforts from the Administration, including an initiative to secure voluntary commitments from 15 leading companies (including Meta, OpenAI, Alphabet, Amazon, Microsoft, and more) to drive safe, secure, and trustworthy development of AI. The White House Office of Science and Technology Policy has also published a blueprint for an AI bill of rights that highlights five key principles and considerations for AI regulation.

To attempt to establish a much-needed construct for legal and usage rights for AI-generated content, in August 2023 the U.S. Copyright Office launched a notice of inquiry (NOI) to examine the copyright law and policy issues raised by AI technology. The NOI covers the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. In a similar proceeding, the U.S. Patent and Trademark Office sought public comment earlier this year on the current state of AI technologies and inventorship issues, likely foreshadowing future regulations in the area.
For those agencies operating outside the U.S., the EU has taken a much more aggressive approach: the European Parliament approved the world’s first comprehensive AI law, the Artificial Intelligence Act, earlier this summer.
Below AdAge summarizes the implications of the White House Executive Order for the advertising industry.
The most pressing legal risks facing agencies today surround IP and copyright. Details on the current status can be found in this 4A’s webinar, and further information on the legal risks of AI facing agencies is available in this 4A’s whitepaper.
In the most recent update to its Media Buying Contract template, the ANA stipulates that “an agency must obtain the advertiser’s prior consent to use any artificial intelligence applications in the delivery of services.” AI usage should also be covered in detail in any vendor contracts.
The 4A’s has developed a high-level template for agency policies on artificial intelligence, highlighting key issues and including sample language.
Some companies have gone so far as to prohibit employees from using ChatGPT and other generative AI tools, often out of fear that confidential information will be unintentionally entered into a tool, thereby violating confidentiality agreements and potentially contributing training data to the models that power the tools. Platform terms and conditions vary but tend to skew in favor of the platform itself, sometimes going so far as to claim that any data entered into the system becomes the property of the platform and can be used to train models. In the case of ChatGPT, for example, unless a user specifically opts out of having their data used for training, any and all inputs/prompts submitted through non-API channels can be used for training. It is important to read and understand the terms of service and related policies for each platform your agency chooses to engage with.
When it comes to agencies, AdWeek recently reported that holding companies are winning business on the back of sound AI policies. Some variation on “play, don’t publish” seems to be the overarching guidance. Beyond that, we have generally seen policies that cover the following areas of concern:
IPG and McCann Worldgroup have joined the Partnership on AI to Benefit People and Society (PAI), a nonprofit organization that aims to advance responsible AI. They join more than 100 organizations already on board, including most of the big tech companies.
Publicis became the first advertising holding company to join the Coalition for Content Provenance and Authenticity (C2PA), a foundation dedicated to setting standards for content authentication.
The NIST (National Institute of Standards and Technology) AI Risk Management Framework (AI RMF) is intended for voluntary use and aims to improve organizations’ ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Microsoft has a website on Responsible AI, with articles covering the latest on AI policy, research, and engineering.
Anthropic, Google, Microsoft, and OpenAI recently launched the Frontier Model Forum, with the aim of “ensuring the safe and responsible development of frontier AI models.”
While it is still very early days and most businesses appear to be in the experimentation phase (at least with GenAI), the potential use cases are broad, spanning the entire lifecycle of marketing program development. Below is an initial list that will undoubtedly grow over time. In the near term, most GenAI use cases require human oversight and collaboration. The potential benefits and implications of these use cases are equally broad, including:
AI Overview for Brands and Agencies – The Good, Bad, and Ugly. What Marketers Need to Know About AI for Agency Scopes.
This AdAge article provides a high-level overview of how major agencies are using AI.
What’s Next is Everything (WNIE) is a blog cataloging examples of brands using emerging tech, including AI and generative AI.
Insights
Ideation
(use caution here as IP and copyright questions abound – see legal risks section above)
Writing
Media Generation
Generative AI can be used to power chatbots that engage directly with consumers, or as conversational AI interfaces that help human agents resolve issues faster.
Our Experts
Kari Shimmel, CEO of Campbell Ewald, and other agency pros explain CX journey mapping in depth in this workshop video.
Becky Getz of Amazon discusses the importance of CX for both the client and agency business.
How to approach CX Measurement from The 4A’s CX Council.