
Building Trust With Technology: AI, Innovation and Ethics

As incredible new technologies enter the market, they bring new opportunities for both businesses and individuals. The challenge is balancing these with ethics.

One of my favourite quotes is from Arthur C Clarke: “Any sufficiently advanced technology is indistinguishable from magic”. At World Tour Sydney recently, we heard many inspirational stories of the implementation of AI and other Fourth Industrial Revolution (4IR) technologies, and I couldn’t help but think that people living a generation ago would describe them as some kind of sorcery.

More importantly, each was unique and original, shining a light on the new reality of business done well in a world where everything is connected and intelligent.

At some Marriott hotels, for example, guests can check in on their mobile phones before they arrive, bypass the front desk, go directly to their room, unlock the door with their phone and find their favourite drink waiting for them.

We heard about Kone and its remarkable escalators that not only vary their speed depending on the number of people using them, but also let service technicians know when they need maintenance – before they break down.

The machines are proactively saying, ‘Come and fix me!’ And the technology in the background dispatches the nearest field service agent who has the required part.
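To make that dispatch step concrete, here is a minimal sketch in Python. It is purely illustrative, with hypothetical names and data rather than KONE’s or Salesforce’s actual system: given a maintenance alert, it picks the nearest available field service agent who carries the required part.

```python
# Illustrative sketch only – not KONE's or Salesforce's actual system.
# Shows the dispatch idea described above: route the job to the nearest
# available field service agent who has the required part on hand.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    location: tuple       # (x, y) coordinates, a stand-in for real geolocation
    parts_on_hand: set
    available: bool

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def dispatch(agents, site_location, required_part):
    """Return the nearest available agent carrying the required part, or None."""
    candidates = [
        a for a in agents
        if a.available and required_part in a.parts_on_hand
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda a: distance(a.location, site_location))

# Example: an escalator at (3, 4) reports it needs a "drive belt".
agents = [
    Agent("Priya", (1, 1), {"drive belt", "handrail"}, True),
    Agent("Marco", (5, 5), {"drive belt"}, True),
    Agent("Lee",   (2, 2), {"controller"}, True),
]
print(dispatch(agents, (3, 4), "drive belt").name)  # -> "Marco"
```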

Kayo told us about its service bots, which not only provide 24/7 customer support but are also training themselves to do it better.


Technology: the double-edged sword

Incredible new technologies are entering the market regularly, including AI, robotics, cloud computing, 3D printing and precision medicine. The collision of such technologies creates fascinating possibilities that have the potential to disrupt existing businesses.

It’s this collision, this mix of technologies, that distinguishes the 4IR from the previous three. It is also creating unique challenges for businesses. One of these challenges is trust.

A recent report from PwC Australia said 68 per cent of CEOs in Australia believe it’s more important than ever to develop a strong corporate purpose reflecting the business’s values, behaviours and culture.

At the same time the AI Now Institute, in its December 2018 AI Now Report, says there are too few answers when the “cascading scandals around AI” raise questions of accountability.

“Who is responsible when AI systems harm us?” the report asks. “Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective?

“As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight — including basic safeguards of responsibility, liability, and due process — is an increasingly urgent concern.”

Lead by example

At Salesforce, when we think about purpose and our core values, trust is number one. Technologies powering the 4IR are still in their infancy, meaning the values of the individuals and businesses building those technologies will determine their outcomes. Trust in the technologies, and in the businesses that use them, will be built or destroyed over the coming years.

Because of this, we have recently hired our first Chief Ethical and Humane Use Officer, to head up our Office of Ethical and Humane Use of Technology. The position has been developed to ensure that as we continue to build capabilities, the business is always thinking about how those capabilities are deployed, governed and used.

The role of the office is to build a regulatory and governance framework to ensure trust is at the core of everything we do. Why? Because in some areas these new technologies are already raising issues of concern. The AI Now Institute identifies numerous real-world examples, including the use of AI in widespread surveillance.

“This is seen in the growing use of sensor networks, social media tracking, facial recognition, and affect recognition. These expansions not only threaten individual privacy, but accelerate the automation of surveillance, and thus its reach and pervasiveness. This presents new dangers, and magnifies many longstanding concerns,” the report says.

“The use of affect recognition, based on debunked pseudoscience, is also on the rise. Affect recognition attempts to read inner emotions by a close analysis of the face and is connected to spurious claims about people’s mood, mental health, level of engagement, and guilt or innocence. This technology is already being used for discriminatory and unethical purposes, often without people’s knowledge.

“Facial recognition technology poses its own dangers, reinforcing skewed and potentially discriminatory practices, from criminal justice to education to employment, and presents risks to human rights and civil liberties in multiple countries.”

And so at Salesforce we are taking the trust issue very seriously by ensuring there is the appropriate rigour around our work.

What practical effects will the new office have? It will impact our organisation from top to bottom. It will impact the way we build things, the way we test things, the standards we set and the way we sell our products.

So while we are thrilled with the innovation we’re seeing, we’re also looking forward to the further development of that framework within Salesforce, our partner businesses, other technology companies, clients and governments.

If we are able to get this right, we certainly will be able to create magic.

We’ve recently partnered with the Economist Intelligence Unit to find out more about how AI and other Fourth Industrial Revolution technologies are impacting our world. Download the Navigating The Fourth Industrial Revolution: Is All Change Good? report.

