How To Turn AI From a Cybersecurity Risk to a Source of Strength for Your Business

[Image: A knight in a business suit stands with an AI shield, symbolising a Chief Information Security Officer (CISO) working towards AI-driven cybersecurity leadership. Adobe Stock | Rowan Clancy, Salesforce]

Businesses are investing in AI to transform productivity. For the CISO, AI needs to enhance security, not compromise it.

As a cybersecurity leader in your organisation, it’s your role to neutralise risks so your colleagues can work, collaborate, and innovate freely. New technology integrations need to be resilient — they can’t be allowed to dampen defences. 

William MacMillan, former Chief Information Security Officer (CISO) of the Central Intelligence Agency (CIA), explains: “Cybersecurity is all about enabling the business. There can’t be any distinction between getting the job done and getting the job done securely.”

Today, this means adopting AI securely.

For your business, the advantages of AI are numerous. In customer service, it improves case resolution times. In marketing, it can create personalised content at scale. Processes across your business can be optimised like never before. For example, Santander quadrupled daily customer conversations by using Salesforce’s AI tool, Einstein 1, to empower relationship managers with valuable, personalised, and real-time insights.

For the CISO, however, AI requires a rethink of security strategy. After all, you’re not the only one with access to it. Hackers can use generative AI to craft more believable phishing emails, landing pages, and other fake brand collateral much faster. This makes it more likely your colleagues will be fooled, and harder for your team to keep attackers out.

What are the key risks and technical challenges of AI adoption for security professionals?

Is your data protected?

Are your colleagues feeding sensitive data into the AI tools they’re using? Are the companies behind these tools using (or even selling) that data? Data leaks of this kind could be embarrassing and expensive. Salesforce’s Generative AI Snapshot Research shows that almost 60% of people planning to use generative AI said they don’t know how to do so using trusted data sources or while ensuring sensitive data is secure.

How prepared are your employees?

Whether your organisation has an AI strategy or not, its use is most likely growing organically among your employees. Salesforce research showed generative AI is already used by nearly a third of the UK population. You need to start talking to your colleagues, both to learn how they’re using AI and to educate them about security. A well-informed workforce is a secure workforce. For the CISO, AI know-how needs to be part of the culture.

Can you trust its outputs?

You’ve heard of hallucinations, where an AI confidently generates information that is false or fabricated. If these outputs reach customers or the public, they could make your company look unprofessional.

How do you make AI a point of strength?

1. Reduce attack vectors 

Usually, when you add new technologies to your IT infrastructure, you add new points of vulnerability for attackers to exploit. But rather than bolting an AI layer on top of your data, you can reduce both the friction and the attack surface of new tools by combining data, AI, CRM, development, and security into a single, comprehensive platform.

Bringing services together has many benefits: stronger data security with fewer points of vulnerability, user familiarity, IT visibility, and operational value. For example, when Heathrow Airport used Einstein 1 to connect its digital services, it boosted digital revenues by 30%.

2. Use generative AI in the SOC

Generative AI can make detection engineering more robust by allowing Security Operations Centre (SOC) analysts and other security professionals to push out new detections at a much higher rate, analyse much larger data sets, and simulate attack scenarios at scale. It also gives your team more time for analysis by automating their routine manual tasks, such as generating reports on permissions and access.
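To make this concrete, here is a minimal sketch of what automating one routine SOC task (a permissions and access report) might look like. The `ask_llm` helper, the sample events, and the prompt wording are illustrative assumptions, not part of any specific product or API.

```python
# Minimal sketch: drafting a permissions/access summary for SOC review.
# `ask_llm` is a placeholder for whatever approved LLM endpoint your platform provides.
from datetime import date


def ask_llm(prompt: str) -> str:
    # Placeholder only: wire this up to your organisation's approved LLM service.
    raise NotImplementedError("Connect to your approved LLM endpoint")


def build_permissions_report_prompt(access_events: list[dict]) -> str:
    """Turn raw permission-change events into a prompt an analyst can review."""
    lines = [
        f"- {e['user']} was granted '{e['permission']}' on {e['resource']} at {e['timestamp']}"
        for e in access_events
    ]
    return (
        f"You are assisting a SOC analyst. Today is {date.today()}.\n"
        "Summarise the following permission changes and flag anything that looks "
        "like privilege escalation or unusual access:\n" + "\n".join(lines)
    )


if __name__ == "__main__":
    sample_events = [
        {"user": "j.smith", "permission": "admin", "resource": "billing-db", "timestamp": "2024-05-01T09:12Z"},
        {"user": "svc-backup", "permission": "read", "resource": "hr-files", "timestamp": "2024-05-01T09:40Z"},
    ]
    prompt = build_permissions_report_prompt(sample_events)
    print(prompt)               # review the prompt your automation would send
    # report = ask_llm(prompt)  # then hand the draft report to an analyst for review
```

The key design choice is that the model drafts and an analyst decides: automation speeds up routine work without removing human judgement from the loop.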

3. Deploy zero-data-retention architecture

The foundation of secure AI deployment is strong data governance, so look for software with zero-data-retention architecture. This provides seamless privacy and data controls, with secure data retrieval, dynamic grounding, and data masking, allowing teams to leverage AI benefits without compromising customer data.

For example, Salesforce partners with OpenAI and Azure OpenAI to enforce a zero-data-retention policy. No data is used by the third-party LLMs for model training or product improvements. No data is retained by the third-party LLMs. And no human being at the third-party provider looks at data sent to their LLM.
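As a simple illustration of the data-masking idea (a generic sketch, not how any vendor's trust layer actually works), the snippet below strips obvious personal data from a prompt before it leaves your trust boundary. The regex patterns and placeholder tokens are assumptions for demonstration only; production masking is far more sophisticated.

```python
# Minimal sketch: masking obvious PII before a prompt is sent to a third-party LLM.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def mask_pii(text: str) -> str:
    """Replace e-mail addresses and phone numbers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


prompt = "Draft a reply to anna.kovacs@example.com, who called from +44 20 7946 0958 about her order."
print(mask_pii(prompt))
# Draft a reply to [EMAIL], who called from [PHONE] about her order.
```

Combined with contractual zero-retention guarantees like those described above, masking means that even the data that does cross the boundary carries as little sensitive detail as possible.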

4. Implement robust AI governance and training

Learning to use AI tools can be awe-inspiring and, dare we say, fun. Learning about security protocols? Less so. Integrate training on AI security, policy, and ethics into your general AI training so the information sticks. It may also be worth establishing an oversight committee to monitor AI deployment and ensure compliance with policies and regulations.

5. Clean up toxicity

Some data architectures can mitigate toxicity and harmful outputs through measures like toxicity detection and auditing: no output is shared until a human accepts or rejects it, and every step is recorded as metadata for your audit trail, simplifying compliance.
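A minimal sketch of that human-in-the-loop pattern might look like the following. The keyword-based `toxicity_score` is a naive stand-in for a real toxicity detector, and the metadata fields are illustrative rather than any particular product's audit schema.

```python
# Minimal sketch: gate AI output behind human review and keep an audit trail.
import json
from datetime import datetime, timezone

FLAGGED_TERMS = {"idiot", "stupid", "hate"}  # toy lexicon, stand-in for a real detector


def toxicity_score(text: str) -> float:
    """Fraction of words that hit the flagged lexicon (illustrative only)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)


def review_output(draft: str, reviewer: str, approved: bool) -> dict:
    """Record the human decision as metadata; only approved drafts are released."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "toxicity_score": round(toxicity_score(draft), 3),
        "approved": approved,
        "released_text": draft if approved else None,
    }
    print(json.dumps(record))  # in practice, append to your audit log instead
    return record


review_output("Thanks for your patience, your refund is on its way.", "a.chen", approved=True)
```

Because every decision is captured as structured metadata, the same records that protect customers also become your compliance evidence.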

6. Drill your incident response plans

Part of cybersecurity is accepting attacks will happen and being ready to get operations running again as quickly as possible. This is no different with AI. Create and practise rapid response plans specifically for AI-related threats and vulnerabilities.

7. Partner with AI experts

Your role is to protect your organisation — you can’t dedicate all your time to becoming an AI security expert. By working with trusted partners, you’ll stay informed about emerging threats and armed with innovative security solutions.

Looking ahead: The future of AI and security

In the era of AI, your role as a security leader is more vital than ever. By focusing on robust data governance, a security-aware culture, and the right tools, CISOs can not only mitigate the risks associated with AI but actually use it to boost security.

The challenges of AI are outweighed by the unprecedented opportunities for innovation, efficiency, and growth. If you can balance these opportunities with security, you can ensure your organisation steps into the future on a sustainable foundation of AI safety.

Discover Trusted Generative AI Strategies

Learn how to prepare for the future with our guide on trusted generative AI. Explore key data security strategies and how the Einstein 1 Platform can enhance your AI initiatives.
