We’re living in strange times. With the rise of fake news, deepfake videos, artificial intelligence (AI), and ransomware attacks, it’s no wonder that consumers are growing more distrustful of institutions and increasingly hesitant to share their data. Mistrust is the new attitudinal default, and it shows up repeatedly in consumer surveys. For example, our State of the Connected Customer Report found that 32% of customers globally don’t trust companies to respect their privacy or use their information responsibly, and 43% don’t trust companies to use AI ethically.
Given these attitudes, it’s easy for some data-dependent marketers to lose hope or feel it’s too late to recapture lost trust. However, our report also shows that consumers are comfortable sharing data as long as the following conditions are met:
- Transparency. Consumers want to understand what data brands have and how they’re using it.
- Value. Consumers want to see a clear link between the data they’re sharing and the value they’re receiving.
- Control. Consumers want and expect the ability to control their personal data, retain the right “to be forgotten”, and actively manage their privacy preferences.
We stand at the doorstep of a completely transformed way of doing business, with brands increasingly using AI to enhance operations. But with mistrust rife, it’s a crucial time to check in: how can companies responsibly implement AI to ensure trust and data privacy? Let’s take a look.
Take the Next Step with the World’s #1 CRM for AI
Unleash the benefits of AI while protecting the privacy and security of your data with our Einstein 1 Platform.
Strategies for Enhancing Consumer Trust in Data Privacy and Personalisation
Brands that advertise are going to have to master the art of earning consumer trust. How? We asked consumers how marketers could best balance consumers’ need for privacy with their desire for real, personalised communication from brands. Here’s what we heard:
1. Don’t talk about people behind their back
It turns out that we dislike gossip online just as much as we do in real life. How we feel about sharing information depends both on the type of information and the way it’s shared. Consumers are more comfortable with open “direct sharing” than they are with covert sharing, which is when companies use our data without our explicit permission. Using transparent data collection methods can restore interest and rebuild trust.
Bottom line: tell people directly how you’re gathering their data and why.
2. Give users more control over their personal data
As businesses develop their AI strategies, it’s worth remembering that 49% of customers globally say they’re more likely to trust a business with their data when they’re given control over what’s collected about them. Customers want transparency into their privacy options and the agency to make their own decisions. Under laws such as the General Data Protection Regulation (GDPR), you must be able to provide customers with their data when they request it, and delete it when they ask, to stay compliant. Customers must have a say in how you use their data; after they provide you with their preferences, be sure to honour them.
Bottom line: give customers greater control over their data privacy and preferences.
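To make the “honour their preferences” point a little more concrete, here’s a minimal, hypothetical sketch of checking a customer’s recorded consent before personalising anything. The field names and logic are illustrative assumptions only, not any specific CRM’s data model.

```python
from dataclasses import dataclass

@dataclass
class PrivacyPreferences:
    # Hypothetical consent flags a customer might set in a preference centre.
    allow_personalisation: bool
    allow_third_party_sharing: bool
    deletion_requested: bool

def can_personalise(prefs: PrivacyPreferences) -> bool:
    """Use data for personalisation only if consent exists and no deletion request is pending."""
    return prefs.allow_personalisation and not prefs.deletion_requested

prefs = PrivacyPreferences(
    allow_personalisation=True,
    allow_third_party_sharing=False,
    deletion_requested=False,
)

message = (
    "Personalised content may be generated for this customer."
    if can_personalise(prefs)
    else "Fall back to generic, non-personalised content."
)
print(message)
```

The point is simply that preference checks belong in the code path that touches customer data, not only in a policy document.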
3. Explain the benefits of sharing first-party data in concrete, positive terms
Our State of the Connected Customer Report found that, globally, 80% of business buyers and 68% of consumers say they’d be more likely to trust a company with their personal data if its use were clearly explained. It’s up to businesses to describe the privacy value exchange as concretely and positively as they can; stay positive and benefits-focused, since there’s evidence that mentioning risks makes people nervous. It’s important to strike a balance between privacy and personalisation: building a rewarding, relevant experience within an environment that makes data sharing ethical and safe.
Bottom line: paint a picture of the tangible benefits resulting from sharing data.
4. Remember, people are different
It’s often assumed that views on online privacy and ad targeting are demographically determined. While Millennial and Gen Z consumers are thought to be “cultural paranoids”, and Boomers and Gen Xers more relaxed, it turns out that these attitudes are more closely linked to personality than age. So, the privacy conversation will be different with different groups, and how we feel about privacy is unlikely to be segmented strictly by age, gender, or income. The groups that are actively concerned about their privacy might simply need more information — and the more the better. Those who are resistant to all forms of data-driven personalisation need more reassurance. It’s up to your team to figure out which psychographic segment each consumer inhabits.
Bottom line: use your customer insights and data science teams to discover which groups are the most sceptical and which are open to personalisation.
Remember, you can always try something different. Ask people to share data after you’ve given them something of value. Be explicit. Ask them how they feel. Give them the remote. The human rules still apply; trust is something that is earned, not just given.
But winning over mistrustful customers to ensure data privacy isn’t always easy. AI can help.
Winning Over Mistrustful Customers with AI
As AI is increasingly integrated into day-to-day operations, customers — wary of the risks — demand a thoughtful approach built on trust. In fact, 80% of customers globally say it’s important for humans to validate AI’s outputs. This hesitation creates an opportunity for companies to drive differentiation through tech strategies grounded in trust and human connection.
At Salesforce, we understand the importance of trust and data privacy. Our Einstein Trust Layer promotes the responsible use of AI across organisations by protecting the privacy and security of data and improving the safety and accuracy of AI results. With features like dynamic grounding, zero data retention, and data masking, it’s designed to help you unleash the power of AI without compromising your safety or security standards.
- Dynamic Grounding. Dynamic grounding lets you ground AI prompts in factual data and relevant context from your own records, which helps prevent AI hallucinations, or incorrect responses not grounded in fact or reality.
- Zero-Data Retention. With zero-data retention, no customer data is stored outside your organisation. Generative AI prompts and outputs are never stored or retained by third-party large language models (LLMs); they simply vanish.
- Data Masking. Data masking hides sensitive data types, such as personally identifiable information (PII) and payment card information, before AI prompts are sent to LLMs, and you can configure masking settings to your organisation’s needs (a generic sketch of the idea follows below).
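As a rough illustration of the masking idea, the sketch below is a generic example, not the Einstein Trust Layer implementation: it uses simple regular expressions to replace email addresses and card numbers with placeholder tokens before a prompt ever leaves your systems. The patterns and tokens are illustrative assumptions and far from exhaustive.

```python
import re

# Illustrative regex patterns for a few common PII types (assumptions, not exhaustive).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{8,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace detected sensitive values with placeholder tokens before the text leaves your systems."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw_prompt = (
    "Draft a renewal reminder for jane.doe@example.com, "
    "whose card 4111 1111 1111 1111 expires next month."
)

# Only the masked prompt would be sent to the external LLM provider.
masked_prompt = mask_pii(raw_prompt)
print(masked_prompt)
# Draft a renewal reminder for [EMAIL], whose card [CARD_NUMBER] expires next month.
```

A production masking layer would, of course, cover many more data types and be configured per organisation, but the principle is the same: sensitive values are swapped out before the prompt reaches the external model.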
Data privacy is not just a nice-to-have. It’s a crucial element that takes centre stage in building trust. With AI implemented responsibly, you can deliver strong, effective data privacy, governance, and safeguards, so data stays private and customers stay satisfied.
Build Trust and Data Privacy with Einstein Trust Layer
Unleash the power of generative AI without compromising your safety or security standards.