January 15, 2024

How two industry execs are approaching responsible AI

Transparency, accuracy, consent, and ownership are just a few key areas where guardrails need to be put in place in order to practice responsible AI.

Recently a well-known American sports magazine was accused of using AI to generate bogus writer profiles, including fake bios, hobbies, headshots and article content. Why does this have everybody so riled up?

In a nutshell: transparency. And this is where we’re starting. 

When reading online media—specifically journalism—there’s both a precedent and an expectation that content is going to be created by humans. And if that isn’t the case, audiences expect that to be clear from the get-go. The same logic applies to brands and marketers using AI-generated content for creative imagery, written copy, music or any other form of media.

I recently sat down with Lindsay Hong, CEO and co-founder of SmartAssets, and Mansoor Basha, CTO of Stagwell Marketing Cloud, to talk about the general concerns that marketers have around AI and how they can be addressed. At the top of the list was transparency, but accuracy, consent and ownership were some of the other main areas for concern. 

I started by asking Hong whether these topics aligned with the discussions that she had been part of: 

“Absolutely. I think different brands are at different stages in their engagement with AI and what stage they're at is driven by a number of factors on their end: how digitally native they are, whether they were online first versus whether they are a legacy brand, their general history of adoption of new technology and their comfort with that,” she explains. 

“Ultimately all brands want the efficiency gains and the increase in effectiveness, but they're also aware of their responsibility to their brand, and to their consumers, to put things out in a responsible way.” Lindsay Hong

We can all agree marketing objectives aren’t too different from those of other business units: improving processes, output, and effectiveness. And doing so responsibly. Where the role of marketing diverges, however, is that a huge part of output is external-facing. This is where the AI guidelines can get a little blurry and put external comms professionals slightly on edge. So, what advice is out there for marketers looking for guidelines on navigating AI? 

“I highly recommend looking to Sue Turner. She runs an organization called AI Governance Limited in the UK and she has a brilliant training plan. It’s easy to understand and digestible for busy executives who just need to understand AI and more importantly the implications of it. I would highly recommend that companies consider putting any decision makers through that training because ultimately those decision makers are in their role because they already know their brand, understand the business objectives, and have been making effective decisions in the pre-AI world,” relates Hong.  

Challenges for Content Creation

TLDR

The conversation around AI is still evolving, and legislation can vary by location. While governing bodies are focusing on building relevant guidelines, the main thing to get right is transparency. Developing a content plan means putting consumers at the center of your strategy and addressing data privacy, consent, and other audience concerns. Resources are available to guide the key decision makers within businesses.

Responsible content creation can pose a real challenge for businesses looking to improve and scale their content production engine. 

There are solutions for all types of content—image creation and optimization, music, video, code, written copy—you name it, there’s probably already an AI solution for it. As generative AI becomes more sophisticated, we can only expect it to become increasingly ubiquitous, which raises the concern that gen-AI content is reaching the point of being almost indistinguishable from its human-generated counterpart.

Many companies are already out in the field, tackling the issues of transparency and ownership head-on. Basha from Stagwell Marketing Cloud explains that brands “are addressing these concerns by trying to define and set up watermarks on what is generated and what is not. But we can’t force everyone to focus on that. It’s more important to focus on how transparency translates into customer experience and sales.”
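To make the watermarking idea concrete, here is a minimal sketch of a visible disclosure label, assuming Python's Pillow imaging library; the function name, label text, and badge placement are illustrative assumptions, not a description of how any of these brands actually does it.

```python
# A hypothetical disclosure stamp for AI-generated imagery (sketch only).
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    """Stamp a visible disclosure badge in the bottom-right corner of an image."""
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Measure the label text so the badge fits around it.
    _, _, text_w, text_h = draw.textbbox((0, 0), label)
    x, y = img.width - text_w - 16, img.height - text_h - 12
    # A semi-transparent dark badge behind white text keeps the label legible.
    draw.rectangle((x - 8, y - 6, img.width - 4, img.height - 4), fill=(0, 0, 0, 160))
    draw.text((x, y), label, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)
```

A visible badge is the bluntest form of disclosure; in practice it would likely be paired with embedded metadata or an invisible watermark, since an on-pixel label can simply be cropped out.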

AI is far from the first technology to revolutionize the way we work (think: the steam engine, the printing press, computers…the internet). Basha relates this back to software such as Adobe’s: “This completely changed how design professionals work. They either had to build around or on top of the new software.” What this means is that content production professionals will need to learn how to use new AI tools, to leverage them to improve their output, and to adapt the software to the needs of their workload. Whether it’s via new applications, plugins, or prompts, history has a tendency to repeat itself and “it’ll be the same for Gen-AI tools,” he concludes.

But what does this change for consumers mean and why might it cause an issue? 

Volume and omnipresence. AI simply allows us to scale content like never before, and it allows companies to target users at the opportune moment of their conversion journey, on their preferred platform. Hong agrees that some content deserves higher scrutiny: “If you’re creating a master campaign, you need to be extremely clear on what the message is that you’re trying to communicate, extremely clear that the content you're developing is going to land that message and create the feelings that you want to be created.”

But on the question of whether consumers are going to be able to tell that content has been generated by artificial intelligence, the answer is probably not. “I think the thing about AI is that it's trained on data—most of which has been produced by humans—and it has imperfections, as does any system that human beings create. Sometimes you can tell, lots of times you can't. It also depends on the type of content,” says Hong.

Drawing from examples of non-AI-generated content, she explains how tech is still being used to dub over advertising campaigns without any adaptation to the acting or people in the ad. “It's really rough, but for some brands that's sufficient and that still goes on.”

“I think if we're in a world where brands think it's okay to dub over local language–and they're still getting the engagement that they're expecting–then it's probably also okay for them to test out some AI content.”

Getting it Right 

TLDR

Humans are not flawless, and neither is AI. Creating safety nets and multi-step verification systems is key to making sure that data is handled in a correct and compliant way. Accuracy matters for all types of content, but especially for data-driven communications and external campaigns. Specialized organizations can help with this.

Humans make mistakes, and so can AI. 

But for both alike there should be a question of accountability, and of safeguard setting, to ensure that outward-facing content and information is fact-checked before publication. If you’re thinking about including stats in your next big client presentation, uploading content to your website, or citing data in your ad campaigns, there should always be a multi-step verification process to ensure that the information is accurate.
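What a multi-step verification could look like in code terms is sketched below, assuming a simple in-house editorial workflow; the Draft structure and the check names are hypothetical illustrations, not a reference to any particular tool.

```python
# A hypothetical pre-publication gate (sketch only); names are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    body: str
    sources_verified: bool = False   # every cited source confirmed
    stats_checked: bool = False      # numbers re-derived or traced to origin
    human_approved: bool = False     # explicit editorial sign-off

def failed_checks(draft: Draft) -> list[str]:
    """Return the list of failed checks; an empty list means safe to publish."""
    checks = {
        "sources_verified": draft.sources_verified,
        "stats_checked": draft.stats_checked,
        "human_approved": draft.human_approved,
    }
    return [name for name, passed in checks.items() if not passed]

draft = Draft(body="Our campaign lifted engagement by 42%.", stats_checked=True)
print(failed_checks(draft))  # ['sources_verified', 'human_approved']
```

The shape matters more than the code: each gate is explicit, failures are reported rather than silently skipped, and human sign-off is itself one of the gates.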

But sometimes this just slips through the net. AI blunders are often caused by a phenomenon known as ‘hallucination’, a term which draws its meaning from the human experience of seeing things that aren’t really there. Hallucinations are a result of biases or errors in the data that machines have been trained on, which then lead to fabricated information. And this can produce wide-ranging results: from reported accounts of impossible and fictitious foot crossings of the English Channel to realistic-looking images of three-headed cats. The next time you’re bored, feel free to Google a couple of examples.

I asked Basha for some advice on how we can address the issue of AI accuracy, and it bears repeating that fact-checking and post-editing are a must. Basha suggests that we should be asking ourselves more questions such as “what is it that we want AI to do for us and does it need to be fact checked?” 

He goes on to speak about highly visible content requiring a high-level of accountability: “If we take the example of a presidential speech, then yes. The guardrails need to be set up to manage accuracy and ensure that the citations—and thought processes—of the offering are validated by AI. It’s like asking a student to show you their work.”

“The measure is on a scale: complicated information needs to have strong validation, checks, and balances in place.” Mansoor Basha 

“As the scale drops, the results can even be amusing. For example, when Dall-E purposely generates humans with six fingers, it gives us an idea of how AI thinks it can be funny, and spurs creative ideas.”

Hong has a similar view, alluding to a growing need for quality-assurance systems that ensure content is generated effectively. There are not yet widely accepted compliance guidelines or ISO standards, though she foresees that these will follow in due course. In the meantime, there are solutions that marketers can take advantage of to ensure that their AI-generated content is up to scratch.

“Companies need to know that the AI that they're using is legitimate, is good quality, and is being well-maintained, and there are platforms to help you do that. Impact AI is a company that helps make sure that the LLMs that are being developed will actually do what they say they can do. I think that's important,” comments Hong.

“And then the other thing [that companies should pay attention to] is obviously data. You can't really have AI without data. It’s about making sure that your requirements around security and data management are up to scratch when working with suppliers. If you have any data that you want excluded from these models, there are processes for that. There’s a company called Private AI that works to exclude certain pieces of data from the data which feeds into it.”

This information could be anything from PII (personally identifiable information) to other data that you wouldn’t want fed into an external LLM. The companies that engage in these data exchanges should be taking steps to ensure that the data they are handing over is compliant.

For example, if one company hands over masses of data to another company to feed into its LLM and produce the results it wants, policing can become difficult. And, as we mentioned earlier, humans are liable to make mistakes. “They might put their phone number in there by mistake. They might attach something that shouldn't have been attached, but you can use tools and systems that build rules to identify that and strip it out to make sure that you maintain those levels of security. There's a whole load of companies bringing support to clients with quality AI and data security, and you should be looking at those,” advises Hong.
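As a toy illustration of the kind of rule-building Hong describes, here is a minimal sketch using only Python's standard library; the two patterns are deliberately simplistic assumptions, and real redaction tools cover many more PII categories and edge cases.

```python
# A toy rule-based redactor (sketch only); real services cover far more cases.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before data is shared."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Reach Jane on +44 20 7946 0958 or jane@example.com"))
# -> "Reach Jane on [REDACTED PHONE] or [REDACTED EMAIL]"
```

Rules like these typically sit alongside ML-based entity detection, since regexes alone will miss names, addresses, and context-dependent identifiers.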

The Marketing Balance

TLDR

Upskilling and becoming familiar with AI is essential to avoid falling behind. Certain roles, such as top-end content strategy, will continue to be human-dominated. AI is not perfect, but it is constantly evolving, so regular check-ins with the technology are needed. Marketers and other professionals need to take advantage of the tools at their disposal to drive better efficiency.

From ad personalization to creative generation, and click prediction to retargeting, AI is transforming the way that marketing departments work. But that isn’t to say that AI will replace the professionals and marketers who are driving the industry. In fact, if the Jevons paradox and demand elasticity are anything to go by, comparable technologies have historically had the opposite effect and generated more jobs, albeit jobs that require a slightly different skill set (read: upskilling).

Hong’s advice to those involved in the creation of generic content is that “you should be upskilling in understanding how AI works and how you can add value to that process. As we've said, brands are concerned about how to implement it. Create a role with your experience of content creation that supports brands, or supports your customers, in using AI tools effectively and safely, taking them on a journey to adopting them. That will be a hugely valuable role, and more than just the executional element.”

There are still many roles at the top end of content production strategy which will be driven by human thought processes. Ultimately, people still like to interact with people, despite finding robotics and AI novel. And what’s great is that the two don’t have to be mutually exclusive. While AI is incredible in certain circumstances, it isn’t necessarily going to replace amazing and creative ideas. “It can learn and come up with its own ideas, but: would AI suggest to Chanel to get rid of corsets? Would AI have said to Apple to change headphones to white? I'm not sure,” quips Hong.

And if we strip marketing back to its core function, it’s about showcasing and promoting the products and services of a company with the aim of attracting more customers, engagement and loyalty. It’s also a support channel for the sales department, nurturing and educating customers so that they can make a choice about the products they wish to purchase. It’s decorating the digital front door and display windows of your business.

And AI can help with all of this. 

Basha reiterates that “marketing is a functional area; it is not sales and product development. So, budgets need to be spent as effectively as possible to get the most value. That will never go away. The companies that are able to maximize this, win in the marketplace. And if AI can help to drive these goals in an effective and efficient manner, then marketers will use AI to drive value.”

Next Steps

AI adoption is a journey that can be taken in incremental and responsible steps, in line with individual comfort levels and understanding of evolving technologies. Marketers can always start by looking at the type of content they create, the value it delivers, how generic or specific it is, and lastly what its purpose is.

By evaluating the different types of content you wish to create, and aligning this with your goals, you can create a more detailed content roadmap and priority list. This means that you could shift your attention from more monotonous or generic work that could be handled by AI to more value-driven tasks.

Looking into the near future, there could be various role shifts in content generation which require a new set of AI-adoption skills. If we compare the advent of AI to other technologies like the internet, it’s easy to see how the way that we work has been transformed: from travel and communications, to information access and remote work. To the same extent that we have become IT and internet literate, we should be working to upskill our AI capabilities to remain not just relevant, but also competitive. 

The mechanical cat is out of the bag and nobody is putting it back.

Daniel Purnell

Daniel Purnell is the Marketing Manager for Stagwell Marketing Cloud.
