August 24, 2023

What I learned about responsible AI from Google, Microsoft, and OpenAI

A look at responsible AI through the lens of the biggest players in the space.

Unless you have a Tom Hanks in Cast Away level of separation from the news, you’ve read headlines both heralding the positive, life-changing impact AI is going to have on our everyday lives and warning that an AI-driven future will render humans useless, or, even more alarming, become a threat to humanity.

As always, we can’t predict the future. AI’s impact on society is yet to be known. But players in the space are taking the further development and exploration of AI seriously, many of them putting pen to paper to outline the principles that will guide their work. 

This desire to continue developing artificial intelligence with intention has led to the practice of responsible AI. 

What is responsible AI?

Responsible AI refers to the ethical and accountable development, deployment, and use of AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-line statement released by the Center for AI Safety. Over 350 executives, engineers, and researchers signed the organization’s statement, among them executives from OpenAI, Microsoft, and Anthropic.

This is where responsible AI comes into play. The thinking goes: if we leverage this technology thoughtfully and with guardrails in place, it will be a force for good, not evil.

Why is responsible AI relevant to marketers?

Most marketers aren’t sitting at their desks, hands-on-keyboard, developing or deploying AI technology. 

But that doesn’t mean this isn’t relevant to them. 

As tech companies scramble to make sure their products are built on steady foundations that value privacy, security, and transparency, these tools still require a certain level of discernment to use well, and much of the software marketers are coming to rely on is driven by AI.

For marketers, responsible AI might look less broad and more tactical. Tools like ChatGPT, which are trained on fixed datasets that are by no means fact-checked, can produce inaccurate information. It’s on the user to make sure they’re not spreading that misinformation in a blog or social media post.
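To make that concrete, here’s a minimal sketch of what a human-in-the-loop check could look like, assuming the openai Python package (the pre-1.0 API current as of this writing) and an OPENAI_API_KEY environment variable. The prompt wording and the [VERIFY] tag are illustrative conventions, not built-in features of the API:

```python
# A minimal sketch of a fact-check step for AI-generated marketing copy.
# Assumes the openai Python package (pre-1.0 API) and an OPENAI_API_KEY
# environment variable. The system prompt and the [VERIFY] tag convention
# are illustrative assumptions, not standard features of the API.
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")


def draft_copy(prompt: str) -> str:
    """Ask the model for marketing copy, instructing it to flag uncertain claims."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a marketing copywriter. Tag every factual claim "
                    "you are not certain of with [VERIFY] so an editor can check it."
                ),
            },
            {"role": "user", "content": prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]


draft = draft_copy("Write a short blog intro about responsible AI in marketing.")
if "[VERIFY]" in draft:
    print("Draft contains unverified claims; route to a human fact-checker.")
print(draft)
```

The specific tooling isn’t the point; the point is that a verification step sits between the model’s output and anything you publish.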

There’s also a question of disclosure and transparency: As copy and images generated by AI proliferate across all marketing channels, what are the best practices for disclosure? Do agencies that use AI on their clients’ deliverables need to disclose this? Should a blog post generated using AI carry a disclaimer at the top?

Not to mention copyright issues, racial and gender bias, and data and security concerns… the list goes on.

Ultimately, practicing responsible AI is a cross-functional effort that requires getting every arm of the business on board. Engineering, product, legal, HR, marketing, sales: Everyone is impacted by this technology, and companies keen on practicing responsible AI should be vigilant about creating AI principles, educating their organizations about them, and requiring acknowledgment of the practices they put in place.

Google, Microsoft, and OpenAI on responsible AI

Some good news: There’s a lot of consensus among the biggest players in the space on how to approach responsible AI.

I read through responsible AI documents from Google, Microsoft, and OpenAI (so you don’t have to) to see what the biggest points of alignment are when it comes to best AI practices. 

While each of these organizations may have a slightly different way of saying these things, here are the main principles they all have in common: 

Safety

Safety first. AI should be built to avoid unintended harmful results. 

OpenAI focuses specifically on long-term safety in their charter and the potential harm that could come from moving too quickly: “We are concerned about late-stage AGI [artificial general intelligence] development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years.’”

Ultimately, everyone agrees that putting safety precautions in place, improving system functionality, and testing AI technologies in a variety of environments leads to the production of better products. 

Social benefit

From promising broadly distributed benefits to making sure that products are inclusive of people of all abilities, each company makes a promise to society in their guidelines. 

“As we consider the potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides,” Google pledges.

“Our primary fiduciary duty is to humanity,” says OpenAI. “We anticipate needing to marshal significant resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.” 

“Our primary fiduciary duty is to humanity.” —OpenAI

Microsoft, for their part, specifically calls out “fairness” as one of their principles: AI systems should allocate opportunities, resources, and information fairly across society.

The idea across the board, though, is to benefit humanity. AI should reach everyone, bolster every industry, and be inclusive.

Transparency 

From sharing results on alignment techniques (techniques used to make sure AI is aligned with human intent and values) to providing transparency around data collection or overall processes, transparency comes up in a few different use cases for these tech giants. 

Transparency is one of Microsoft’s six principles. To guide their thinking around it, they ask: “How might people misunderstand, misuse, or incorrectly estimate the capabilities of the system?”

Additional principles, resources, and education

Outside of these principles, each company has some of their own ideals around subjects like “scientific excellence” and “cooperative orientation.” 

For leaders in the AI space, publishing content related to responsible AI doesn’t stop with their principles. They are also creating resources and documents to help others take a similar approach.

Google, for instance, has published a report every year for the last four years on how its principles actually showed up in its work over the course of the year.

In 2022, the Translate team worked to prevent unfair gender bias in translations between English and Spanish, and English and German. This project resulted in a new dataset, the Translated Wikipedia Biographies Dataset, which was used to train the model on Wikipedia biographies with an even split of female, male, and non-binary subjects, helping it more accurately maintain the integrity of pronouns.

And Microsoft has turned their principles into an actionable public playbook for building responsible systems. Microsoft’s Responsible AI Standard operationalizes their six principles, outlining goals and requirements for AI development. 

Stagwell Marketing Cloud’s AI Principles

At Stagwell Marketing Cloud, we’re committed to the practice of responsible AI. Not unlike Google, Microsoft, and OpenAI, we keep safety, transparency, and social impact top of mind when making decisions around the implementation of AI in our products. Here are our AI principles: 

  1. Human-centered: We will build all AI systems to ultimately empower and benefit humanity. Our AIs are designed to be a force multiplier for human creativity, production, and efficiency.
  2. Privacy and data protection: We will protect people’s privacy, personal data, and digital identities. We will be transparent about how data is collected and used, and we will respect individual privacy rights and data protection laws.
  3. Bias and fairness: We will work to identify, address, and mitigate unfair bias of all kinds to ensure our AI systems treat all people fairly.
  4. Safety and transparency: We will ensure that our AI systems are safe, secure, and operate as intended, and that any limitations are clearly communicated to users. Our AIs will be transparent, and we will aim to explain their predictions and recommendations.
  5. Accountability: We will implement governance practices to ensure responsibility and accountability for our AI systems and their impact. We will monitor and audit our AIs to evaluate them for unfairness, unintended consequences, and other risks.
  6. Do No Harm: We will consider the broad implications and unintended consequences of our AI technologies to ensure that they do not adversely impact individuals or society as a whole. We aim for our AIs to have a net positive impact.
  7. Continuous learning: We commit to ongoing education and skills development to ensure our teams have the multidisciplinary expertise to develop AI responsibly. We will re-train and expand “AI University” programs to transform marketing with AI.

Whatever the future has in store, implementing thoughtful guardrails and building AI technology with intention is the surest way to achieve AI that is safe, transparent, and beneficial to all.

—————————————————————————————————————————————————————————————————————————

Sarah Dotson

Sarah Dotson is the Editorial Content Manager for Stagwell Marketing Cloud.
