If you’re asking whether you need to implement generative artificial intelligence (GAI) tools to support your business, you’re not alone. This technology can boost employee productivity, but is it safe? While these tools can help with everything from marketing to customer service to data insights, business leaders have raised concerns about AI’s potential dangers to society, and some are calling for generative AI regulations.
What you need to know
- A global AI regulatory response has started to coalesce; in the U.S., lawmakers met with tech leaders in mid-September and declared universal agreement on the need for AI regulation.
- The EU has already begun to audit AI algorithms and underlying data from the major platforms that meet certain criteria.
- As a business decision maker, you need to understand GAI — and how it impacts your work with other companies and consumers.
“Most countries are just trying to ensure generative AI is subject to existing measures around privacy, transparency, copyright, and accountability,” said Danielle Gilliam-Moore, Director, Global Public Policy at Salesforce.
What your company can do now
- Review generative AI products on the market and see what makes sense for your business.
- Ask: Do I need to build it internally or work with a third-party vendor, like Salesforce, to add its products to our tech stack?
- Be aware of the risks these tools can introduce, from privacy to copyright.
The exec summary
The landscape around GAI is shifting at breakneck speed, and regulators are trying to understand how the technology may affect businesses and the public. Here are some recent headlines:
- Pope Francis calls for global treaty to regulate AI after a deepfake of him wearing a puffer coat goes viral.
- EU reaches landmark deal on world’s first comprehensive AI regulation.
- IBM and Meta launch 50-member AI Alliance with large corporations, start-ups, and universities around the world to push for responsible AI.
- US government’s new AI rules push companies to show how their tech is safe.
The backstory on generative AI regulations
Concerns around artificial intelligence (AI) date back years, with discussions covering possible job loss, inequality, bias, security issues, and more. With the rapid growth of generative AI after the public launch of ChatGPT in November 2022, new concerns include:
- Privacy issues and data mining: Companies need to have transparency around where they’re gathering data and how they’re using it.
- Copyright concerns: Because GAI tools pull from vast data sources, the chance of plagiarism increases.
- Misinformation: False information could spread more quickly with AI chatbots, which have also produced entirely fabricated content known as hallucinations.
- Identity verification: Is what you’re reading created by a human or a chatbot? There is a growing need to verify the origin of articles, social media posts, art, and more.
- Child protection: There’s been a call to ensure children and teenagers are protected against alarming, AI-generated content on social media.
This has all prompted regulators worldwide to investigate how GAI tools collect data and produce outputs, and how companies train the AI they’re developing. In Europe, countries have been swift to apply the General Data Protection Regulation (GDPR), which impacts any company working within the EU. It’s one of the world’s strongest legal privacy frameworks; the U.S. does not have a similar overarching privacy law. That may change, with calls for more generative AI regulations.
“These are a lot of the same concerns we’ve seen previously wash up on the shores of the technology industry,” Gilliam-Moore said. “Right now, regulatory efforts, including investigations, seem to focus on privacy, content moderation, and copyright concerns. A lot of this is already addressed in statute, so regulators are trying to make sure that this is fit for purpose for generative AI.”
What considerations do businesses need to make?
Companies continue to wonder how these tools will impact their business. It’s not just what the technology is capable of, but also how regulation will play a role in how businesses use it. Where does the data come from? How is it being used? Are customers protected? Is there transparency?
No matter where your company does business or who you interact with — whether developing the technology for other companies to use or interacting directly with consumers — ensure you speak with lawyers who are following generative AI regulations and can help guide you through your process.
“Talking with your trusted advisers is always a good first step in all of this,” Gilliam-Moore said. “Innovation is happening at an incredible speed. So the conversations we’re having now could become stale in the next six months.”
Regulators have been concerned about how companies collect data and how that information gets delivered to users. Having an acceptable use policy (an agreement between two or more parties, such as a business and its employees or a university and its students, outlining proper use when accessing a corporate network or the internet) can help safeguard compliance. In addition, it is important to show data provenance: a documented trail that can prove data’s origins and where it currently sits.
“Without data, none of this works,” Gilliam-Moore said.
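For teams that want to make data provenance concrete, here is a minimal sketch in Python of what a provenance record might track. The ProvenanceRecord class, its field names, and the example values are illustrative assumptions for this article, not a standard schema or any vendor’s product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: field names and structure are assumptions,
# not a standard provenance schema or any vendor's product feature.
@dataclass
class ProvenanceRecord:
    dataset: str                    # name of the dataset being tracked
    source: str                     # where the data originated
    collected_at: datetime          # when the data was gathered
    current_location: str           # where the data sits today
    processing_steps: list = field(default_factory=list)

    def add_step(self, description: str) -> None:
        """Append a timestamped note each time the data is touched."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.processing_steps.append(f"{stamp}: {description}")

# Example: documenting a customer dataset's trail end to end
record = ProvenanceRecord(
    dataset="customer_feedback",
    source="opt-in web survey",
    collected_at=datetime(2023, 11, 1, tzinfo=timezone.utc),
    current_location="encrypted cloud data warehouse",
)
record.add_step("removed personally identifiable information")
record.add_step("used to ground an internal support chatbot")
```

Even a lightweight record like this makes it easier to answer the questions regulators keep asking: where the data came from, how it has been used, and where it lives now.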
How can small businesses stay compliant?
Larger corporations can often invest in the research and development around the technology, especially to stay compliant. Smaller businesses may not have the resources to do their due diligence, so asking vendors and technology partners in their ecosystem the right questions becomes important.
While Salesforce is taking steps to develop trusted generative AI for its customers, those customers also work with other vendors and processors. They need to stay aware of potential harms rather than trusting blindly. Gilliam-Moore said smaller companies should ask questions including:
- Are you GDPR compliant?
- Are you compliant with HIPAA, or whichever law regulates your industry?
- Do you have an acceptable use policy?
- What are your certifications?
- What are your practices around data?
- Do you have policies that try to provide guardrails around the deployment of this technology?
“If you’re a smaller company, you may need to rely upon the due diligence of your third-party service providers,” Gilliam-Moore said. “Look at the privacy protocols, the security procedures, what they identify as harms and safeguards. Pay close attention to that.”
The need for trusted AI regulation
AI development can come with risks. This is why Salesforce supports tailored, risk-based AI regulation that differentiates among contexts and uses of the technology, protects individuals, builds trust, and encourages innovation.