
Framing the issues: Ushering in the era of trusted AI

The explosive growth of generative AI has drawn unprecedented attention to ethics and safety in technology.

Author: Sabastian Niles, President and Chief Legal Officer, Salesforce

The benefits of generative AI, such as enhanced productivity, innovation, and decision-making, could unlock a revolution for businesses, governments, academia, and more, as long as they take the necessary steps now to build responsible, ethical, and trusted AI systems.

At Salesforce, we're leading the way by embedding ethical principles directly into our governance structure, ensuring Trust thrives across all operations. Our commitment to responsible AI began in 2015 with the establishment of our groundbreaking AI Research Lab, where we explore the ethical implications of model development and deployment. Today, the Salesforce AI Research Lab continues to advance AI research and recently launched its first hub outside the U.S., in Singapore.

Since the creation of our Office of Ethical and Humane Use in 2018, debates over the impacts of new technologies, and how best to manage those impacts, have only intensified across our industry, governments, and civil society. Over the past five years, we've defined what responsible technology looks like in the enterprise, and now we're doing it again for AI. We've built an infrastructure for the responsible development and deployment of AI, including guidelines and guardrails, policies and processes, and ethical product features. In the last year alone, we've released generative AI guidelines, established an AI Acceptable Use Policy (AUP) with a new set of protections, helped pioneer the Einstein Trust Layer to protect customer data, and built ethical guardrails for AI development based on five principles: accuracy, safety, honesty, empowerment, and sustainability. The frameworks we have established to bring an ethics lens to the development of our products and the deployment of our technology have prepared us to meet this moment, and we are heartened to see so many other enterprises working to improve transparency and ethical use policies within their walls.

A growing consensus is emerging among AI leaders, who understand that getting it right means the difference between realizing the benefits of AI and creating systems that consumers and businesses alike cannot trust. Organizations that fail to reckon with the ethical implications of data privacy, security, and transparency will lose their customers' trust. Customers and stakeholders are increasingly demanding transparency in AI systems, and businesses need to be proactive to meet those expectations. Furthermore, companies should ensure their board members understand how AI is being used, which high-risk use cases are at play, and how to ensure sound data governance. This means taking a "both-and" approach that prioritizes the sustained success of internal and external stakeholders. In anticipating and managing risk, we should apply a lens that requires AI to be an enabler of smart growth, increased productivity, and wise decision-making.

Harnessing the power of AI in a trusted way will require regulators, businesses, and civil society to work together. To ensure responsible AI development and usage, Salesforce supports risk-based AI regulation that differentiates between use-case contexts, protects individuals, builds trust, and encourages innovation. The AI revolution hinges on building trust and safety, starting with robust privacy protections and clear user awareness. As AI continues to evolve, we need responsible practices at every stage, from data sourcing to model choices and application design. Smaller, targeted models with transparent decision-making processes can support both innovation and environmental sustainability. While future risks loom, addressing today's challenges with ethical frameworks and practical guidelines will pave the way for responsible AI advancements that benefit all. Governments have a crucial role in setting guardrails, fostering privacy-preserving datasets, and promoting transparency that unlocks the full potential of AI without compromising trust.

As regulatory momentum continues, from comprehensive EU legislation to the US Executive Order on AI, we should continue embracing risk-based frameworks that distinguish systems from models, mitigate the risks of AI, and encourage innovation. In addition to building trustworthy ethical frameworks, fostering diversity in AI development teams, and monitoring AI models on an ongoing basis to identify and rectify bias, businesses should actively engage in policy discussions to help regulators and lawmakers understand how to assess risk and create effective guardrails for AI. It's also crucial to champion a collaborative approach, early and throughout the AI development process, that brings in a wide range of voices and perspectives from diverse global stakeholders to further foster trust and ensure AI empowers all. However, mere legal compliance is insufficient. Companies need to aim higher to build trust by understanding and working to exceed the expectations of customers and stakeholders.

As we welcome 2024 and the many AI developments on the horizon, businesses must recognize that the benefits of generative AI come with a responsibility to manage the associated risks. Proactively addressing these challenges will build trust, drive innovation, and power economic growth. As the AI landscape continues to evolve, striking the delicate balance between innovation and risk mitigation must be a core concern of business and government leaders. Salesforce is eager to share our experience and learnings to help usher in a new era of trusted AI.


The ideas and opinions expressed in this article are those of the author and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.