Data Protection, Trust, and the Rise of AI Technologies

Artificial intelligence (AI) is transforming how we do business across sales, marketing and service, improving productivity and customer experiences. But with this exciting potential comes a greater need for security and privacy. Organisations must adopt and integrate robust trust frameworks to protect their business and their customers from cybersecurity threats.

Our recent Leadership Circle Dinner Series featured leaders from various industries across Australia and New Zealand. During the dinner, leaders discussed their experiences tackling the complex challenges at the intersection of AI, data and security.

We acknowledged that while AI presents significant cybersecurity threats, the key to stepping up to these challenges is to create a powerful framework of trust throughout every aspect of your organisation, so you can use AI to supercharge productivity and innovation, and deliver the service your customers crave.

“As we prepare for the future, there will be major consequences of not considering safety. While we need the skills to work with new technology, we also need to be able to deal with the associated risks head-on,” said one information and security leader at a major Australian retail organisation.

“One of the most important skills to deal with that is creating a culture of trust, not only within our organisations, but with our customers as well.” 

The rise of generative AI solutions amplifies the need for trust

Generative AI is a game changer across sales, marketing and service. Combined with CRM, it can boost efficiency and improve both customer and employee experiences. But these transformational benefits amplify an already urgent focus on trust: between customers and organisations, and between employees and employers.

The State of the Connected Customer report found 88% of customers believe trust becomes a priority in times of change, and 74% say communicating honestly and transparently is more critical now than before the pandemic. This is not to say customers don’t want to share their data in exchange for a more personalised experience; in fact, most are open to it, provided its use is clearly explained. And currently, 64% of consumers say most companies aren’t transparent about how they use personal information. This concern is even more pronounced when it comes to AI, with 72% of customers worried about its ethical use.

Most generative AI bots, like ChatGPT, don’t guarantee data privacy. As exciting as the buzz around ChatGPT has been, many customers remain both impressed by and wary of generative AI: 84% say AI can be a force for good or bad, depending on its use. While customers are increasingly embracing generative AI-powered tools like chatbots and self-service portals for the speedy resolution of straightforward queries, their awareness of the potential for AI misuse is also growing.

Businesses are under pressure to meet customer demand for more personalised, omni-channel experiences. Doing so depends on access to customer data, and on using that data appropriately, being transparent about its use, and keeping it secure, all while still leveraging the benefits of generative AI tools.

Need help with your generative AI strategy?

Whether you’re just starting out with AI or already innovating, this guide is your roadmap to delivering a trusted program blending data, AI and CRM. The goal? Helping your teams focus on high-value tasks and build stronger customer relationships. 

How AI contributes to data protection challenges

The rise of virtual spaces has provided new opportunities for cyber attacks, data leaks and company breaches. According to the Australian Cyber Security Centre, Australian businesses reported more than $33 billion in total losses from cybercrime in 2021. While AI is not the sole cause of those losses, its increasing sophistication could fuel a rise in cybercrime. AI, for instance, is already being used in online voice scams, and its ability to process large datasets gives it the power to crack passwords and find weak points in malware detection software.

Another problematic area to regulate when it comes to AI and data privacy is data profiling. Bias in AI applications can have genuine consequences for, say, people applying for jobs or insurance. It’s no surprise that 62% of customers report being concerned about bias in AI. Those consequences can also impact your business if they alienate customers and erode trust, undermining your values as an organisation. In some cases, they could even result in legal ramifications.

What does trust look like?

For many customers, the fear that their data could be compromised is real, whether inadvertently by the businesses that use it to provide personalised customer experiences or by malign outside actors wanting to use it for nefarious purposes.

Addressing that fear requires both a practical framework to safeguard against such breaches, or at least to reduce their impact when they happen, and a culture of trust in which transparency and accountability rule.

“Trust is built into our brand promise,” shared a leader at an Australian energy company. “Trust is an organisation-wide issue. It can’t just live with an IT department — trust must be owned and discussed at the board level, reflected in our values and how we work together as a company.”

Building a culture of trust is everyone’s job

This means that security is not just the responsibility of your IT department but of the entire organisation, including leadership and key people at all levels. It’s why, for example, trust leads our five core values here at Salesforce.

“Salesforce is truly a values-driven company and we place trust at the very top of those values,” says Srini Tallapragada, President and Chief Engineering Officer at Salesforce. 

“Those values extend beyond our company culture into how we handle our customers’ data and how we help them achieve success.”

Trust must be baked into every aspect of a business, including product design, compliance, security and the employee experience. This level of trust and security will involve technologies like multi-factor authentication and malware protection to protect data, but it must also involve critical human supervision to avoid bias in algorithms, and training that helps employees identify potential scams, anomalies and vulnerabilities.

After all, says Tallapragada, trust is hard to gain and easy to lose. Companies must stay transparent with their customers and continually work to rebuild trust if something happens to undermine it. 

Using your CRM to nurture trust

As the heart of your customer experience, your CRM must be a shining example of your organisation’s ethical framework for gathering and activating customer data. You can use your CRM to build trust by collecting, storing, and using your customers’ data based on the preferences they share with you. The more transparent you are about your data collection strategy and its contribution to customers’ experiences, the more secure they will feel about the data they provide and how it will be used. Prioritise:

  • Considering your audience: Ensure your AI algorithms aren’t set up to make assumptions about which demographics to target.
  • Ethics at every level: In Australia, there are rules and regulations across technology; however, we also need to consider “community rules” and bake ethics into everything we do, from customer service to language chatbots.
  • Collecting and respecting communication preferences: The more transparent you are about your personalisation strategy, the easier it is for customers to understand what data they should provide. Once you have their consent and use their information only in the ways they prefer, their trust in your brand will shine through. 

Sixty-nine percent of customers say they’re open to the use of AI to improve their experiences, and 75% trust companies to act with customers’ best interests in mind. Customers will find it reassuring when you’re transparent about your intentions and strategies; it’s even worth sharing your concerns and uncertainties. Data breaches have become an expected cost of operating online, so make sure your business is both upfront about and well-prepared to recover from any such crisis.

Don’t forget your employees

Customers care about how you treat your employees. In fact, 85% say it influences their decision to buy from a company — that’s second only to how a business treats its customers. 

The employee experience plays a critical role in the culture of trust you establish with consumers. Your employees are also customers, which makes the way your business uses AI even more important. Employees will often have the closest connection with your customers and will want to know the business is doing the right thing by them. They must be trained, empowered and enabled with the right technology and opportunities to deliver on the organisation’s promise of trust and transparency. Whatever you say in your marketing material or vision statement is only meaningful if your employees can deliver on it.

Your employees are uniquely placed to be alert to potential cyber threats or vulnerabilities in your data protection systems — but only if they have received the proper training and are engaged enough in their jobs to care. 

As the General Manager of Customer Sales and Strategy at TPG Telecom, Chris Russo believes that establishing this culture of trust is non-negotiable. 

“Often we talk about trust in an abstract way. But as leaders, it’s crucial we’re always coaching and mentoring our sellers on trust and making it habitual in the way they interact on a daily basis,” she says. 

The good news? Generative AI can help with that by delivering efficient and engaging ways to upskill, and by reducing the burden of manual tasks so employees can spend more time on higher-value work.

When building a culture of trust to create an organisation robust enough to withstand and recover from security breaches, ensure your employees play a central role. 

Start building trust today

Customers face a dual reality with the rise of AI solutions: the promise of enhanced experiences and concern for protecting their data. 

As one leader from an Australian energy company explains, it’s all about finding a balance. “Tech companies are balancing cybersecurity, productivity and innovation. It’s a great juggle because you shouldn’t prioritise one over another — the balance is where you want to land.” 

To navigate this challenge, businesses must nurture trust and strengthen security. That means implementing a robust trust framework that integrates technology safeguards and fosters a culture of transparency and accountability. Start with a customer-centric approach to data collection and usage, coupled with clear communication, and prioritise the employee experience so you can deliver on your trust commitments.

What does an ethical AI maturity model look like?

Your customers expect you to use AI responsibly. You need to implement an ethical AI practice to develop and operationalise principles like transparency, fairness, responsibility, accountability, and reliability.
