MARC BENIOFF:
Now, probably one of the fastest things that we've seen really happen is the growth of this incredible new generative AI.
But what I'd like to show you today is this idea of where our AI capabilities have come from.
And for those
of you who have
been with us
since 2014, you
saw us first
launch Einstein.
That really came out of where we all had kind of an existential freakout at Salesforce. That's when we started to think this AI revolution was going to happen.
We started to acquire a lot of amazing companies and bring a lot of incredible minds into the company.
And we built this amazing
platform called Einstein.
And that was very much the beginning. Today is another critical step on that journey.
But I'd really like Srini to kind of walk us through what we've been doing since we first launched Einstein.
SRINI TALLAPRAGADA:
Thank you, Marc.
So I think Einstein
is AI for CRM.
We've been pioneering
AI for CRM since 2014.
And today, Einstein does about a trillion predictions. So what does that mean, a trillion predictions?
If you are a Sales Cloud customer and you use our lead scoring app, Einstein Lead Scoring, you can convert your leads more effectively.
Just in Thanksgiving week, if you are a Commerce Cloud customer, we generate more than 50 billion AI product recommendations, and customers see a 300 million GMV uplift.
Now, to do this, we
had to first invest
a lot in hiring
world class talent.
We acquired some
great companies
like MetaMind,
RelateIQ, PredictionIO,
and coupled it with a
lot of organic hiring.
So we built a world class AI team, with researchers, data scientists, data engineers, and more, and invested a lot.
That's one thing
we had to do.
We also had to solve a lot of fundamental problems, because one of the things you run into in AI, especially in the predictive case, is that as you're doing machine learning, all the data scientists want data. They say, "Srini, give me access to all your data."
So what we had to do was invent new techniques, like auto feature engineering and auto feature selection. And we open sourced a lot of these newly invented technologies.
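As a rough illustration of what automatic feature selection means, here is a minimal, hypothetical sketch (the function and field names are mine, not Salesforce's actual APIs, and real systems are far more sophisticated): drop near-constant features, then drop one feature from each highly correlated pair.

```python
# Hypothetical sketch of automatic feature selection (illustrative only):
# drop near-constant features, then drop one of each highly correlated pair.
from statistics import pvariance

def _corr(xs, ys):
    # Pearson correlation, written out so the sketch has no dependencies.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def auto_select_features(table, min_variance=1e-6, max_corr=0.95):
    """table maps feature name -> list of numeric values."""
    # Step 1: remove near-constant features, which carry little signal.
    kept = {n: v for n, v in table.items() if pvariance(v) > min_variance}
    # Step 2: greedily keep a feature only if it is not nearly redundant
    # with a feature we already kept.
    selected = []
    for name in kept:
        if all(abs(_corr(kept[name], kept[s])) < max_corr for s in selected):
            selected.append(name)
    return selected

features = {
    "deal_size": [10, 20, 30, 40, 50],
    "deal_size_usd": [10.1, 20.2, 30.1, 40.3, 50.2],  # near-duplicate
    "constant_flag": [1, 1, 1, 1, 1],                 # no variance
    "days_open": [5, 3, 9, 2, 7],
}
print(auto_select_features(features))  # ['deal_size', 'days_open']
```

The point of automating this step is exactly the one Srini makes: the data scientist no longer has to look at the raw data by hand to decide which columns matter.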
Some of the other things we learned as we were doing this: as we deeply invested with our customers, new use cases started coming. This is when we started investing in LLMs.
And some of the earlier papers that we published were suggesting how prompting could be a new technology.
And then what we
did is we said,
if this is where
the world is going,
let's invest deeply in
these LLM technologies.
And we learned, very interestingly, that these LLMs are auto-regressive techniques.
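"Auto-regressive" simply means the model predicts each next token from the tokens that came before it, one step at a time. A toy sketch, purely for intuition, with a bigram counter standing in for a real LLM:

```python
# A toy illustration of "auto-regressive" generation (a bigram counter stands
# in for a real LLM; this is purely for intuition, not a production model).
from collections import Counter, defaultdict

def train_bigrams(tokens):
    # Count which token follows which; this is our stand-in "model".
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, prompt, n_tokens):
    out = list(prompt)
    for _ in range(n_tokens):
        context = out[-1]          # condition on what was generated so far
        if context not in counts:
            break
        # Greedy decoding: emit the most likely next token, then repeat.
        out.append(counts[context].most_common(1)[0][0])
    return out

corpus = "the model predicts the next token given the previous token".split()
model = train_bigrams(corpus)
print(generate(model, ["the"], 3))  # ['the', 'model', 'predicts', 'the']
```

A real LLM replaces the bigram table with a neural network conditioned on the whole context, but the generation loop is the same shape, which is why any sequence, including a protein, can be generated this way.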
And then we said, really interestingly, we could use them for protein generation.
And then we published a lot of protein generation papers in Nature and elsewhere. So we started seeing this trend early.
So in predictive AI, what we did is bring all these technologies together and abstract all of that technology, so that you can get business wins.
What we are going to do in the new world of generative AI is the same thing: we still have to solve the trust problems, security problems, scale problems, privacy, ethics. We'll solve all of that and bring it to you. That's what we are excited to bring you in the future, and that's what today is about.
MARC BENIOFF:
Thank you so much, Srini.
I want to just stay on this slide for one moment.
And I'll tell you, I
don't think a customer
interaction or story
or meeting today
starts or ends
without this idea
of what's really happening
with generative AI.
And the story usually
goes like this.
I know about these new
large language models.
Many of these
models are amazing.
A lot of us have used ChatGPT and this kind of technology.
A lot of the models that have been created by our own engineering teams are amazing as well.
And many of the startups
who are in this room
are also building
these amazing models.
And so these CEOs and CIOs are so inspired.
And they're
like, what we're
going to do is we're
going to start working
with these
models, and we're
going to take all of
our corporate data.
We're going to put it into the LLM.
And then all of a sudden, we're going to have an instant, intelligent company.
And it's not
quite like that.
It's not quite like that.
Now, the reason why they think that is, they already know how these other, more public models work.
They take these huge
vacuum cleaners,
and they're just vacuuming
up all of the data
off the internet
that they can get.
So they're all just taking whatever they can: if there's publicly available data, or just data that's out there on the internet that can be scraped, they're taking it and training their models with as much data as they can bring down.
And then they're
taking that data,
and then they're
turning on their LLM.
And whatever comes
out of it is great.
And if there is this concept of hallucination, or if the LLM basically starts to lie, well, that's not really their responsibility.
Their responsibility is to give you the best case they can with their generative AI.
That's not exactly a world of trust.
And recently, I was with one of our largest customers right here in New York.
And the CEO made it very clear that they want to use LLMs to become much more productive in mortgages, in account service, and in all the things that a bank does.
Can they just take all of their account information, all of the history of all the accounts, and everything, and put it into one of these models? Well, I don't think that's going to work out very well when it comes to the regulated industries and the way that data works in large companies.
That's why the onus really becomes on us to give our customers this next generation of trusted AI. And that's what you're going to see today.
Because, as I
mentioned earlier,
and this is going to
ground our presentation
today, trust is our number
1 value at Salesforce.
We saw that when Parker and I first designed the product, way back in 1999: we came up with something called the Sharing Model. And that idea was that every person in the company who gets Salesforce access also gets controls around what data they can see, not see, or use.
Because we know, especially for a bank, that every bank executive cannot see what every other bank executive can see. What one health care executive can see, another health care executive may not be able to see.
We all understand the
enterprise sharing model.
We all understand what that means in a company.
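The sharing model being described can be pictured with a tiny sketch (purely illustrative; this is not Salesforce's actual implementation): a user sees a row only by owning it or through an explicit share.

```python
# A minimal sketch of the row-level sharing idea (illustrative only; not
# Salesforce's actual implementation). A user sees a record only by owning
# it or by an explicit share grant.
RECORDS = [
    {"id": 1, "account": "Acme Bank", "owner": "alice"},
    {"id": 2, "account": "Globex", "owner": "bob"},
]
SHARES = {("bob", 1)}  # record 1 has been explicitly shared with bob

def visible_records(user, records, shares):
    """Return only the rows this user owns or has been granted."""
    return [r for r in records
            if r["owner"] == user or (user, r["id"]) in shares]

print([r["id"] for r in visible_records("bob", RECORDS, SHARES)])    # [1, 2]
print([r["id"] for r in visible_records("alice", RECORDS, SHARES)])  # [1]
```

The key property is that the check happens on every read, per user, which is exactly what a model trained on an undifferentiated pool of data cannot do on its own.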
And we also
understand that a lot
of us in
enterprise IT come
from the relational model.
And the relational model is the rows and columns model of data, where security goes all the way down to the cell. Not only do readers not block writers, but cells can be locked by one particular user so that the cell cannot be seen by anyone else.
That is not the kind
of generative AI
that we're all
experiencing.
Because the way these
large language models
work is they're taking
down all that data
that they grab,
and then they're
amalgamating and
tokenizing that data
and then using their algorithms to generate their intelligence.
So that idea of
cell-based security
is not in the current
generative model.
That's the breakthrough that we're really going to try to show you today, where we are really leveraging work starting back in 2016, when we came up with our first trust model.
Now, Parker and I were
joking about this today,
because I always called
it anonymous predictions.
Because the idea
is that when
our systems, when our
applications, when
our platform looks at
all of your data and then
uses machine
intelligence or machine
learning or deep learning,
which are the three
primary artificial
intelligence
techniques that
really existed
before the current
generative AI techniques,
we don't look
at your data.
We're able to provide
you those predictions
and that AI capability
without actually looking
inside the data by just
keeping it anonymous.
And now, with generative AI, what we're able to do is take the same technology and the same idea to create what we call a GPT trust layer, which you're going to see today and which we're about to roll out to our customers, so they have the ability to use generative AI without sacrificing their data privacy and data security.
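One common way to build the kind of trust layer being described is a masking step. This is an assumed design sketch, not the actual Einstein GPT Trust Layer: sensitive values are replaced with placeholders before the prompt leaves your systems, and the substitution is reversed on the response.

```python
# Assumed design sketch of a masking-style trust layer (illustrative; not
# the actual Einstein GPT Trust Layer): sensitive values are swapped for
# placeholders before the prompt goes to an external LLM, then restored.
def mask(prompt, sensitive_values):
    mapping = {}
    for i, value in enumerate(sensitive_values):
        token = f"<MASK_{i}>"
        mapping[token] = value
        prompt = prompt.replace(value, token)
    return prompt, mapping

def unmask(text, mapping):
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = "Draft a renewal email to Jane Doe about account 99-1234."
masked, mapping = mask(prompt, ["Jane Doe", "99-1234"])
print(masked)  # Draft a renewal email to <MASK_0> about account <MASK_1>.
# ...the masked prompt goes to the external LLM; its reply comes back
# with placeholders, which we restore locally...
reply = "Hi <MASK_0>, your account <MASK_1> is up for renewal."
print(unmask(reply, mapping))
```

Because the mapping from placeholders to real values never leaves your systems, the external model can generate useful text without ever seeing, or being trained on, the sensitive data itself.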
This is critical for
each and every one
of our customers
all over the world.
Every transaction and
every conversation
at Salesforce begins and
ends with the word trust.
So we understand
that very well.
And there's one other critical part to this.
It's not just
about trusted AI
and delivering
the technology
to the right person
at the right time.
But it's also about
responsibility.
As we're all going to learn, because we're now all on a societal AI journey, there is going to be a lot to figure out about responsibility.
We've all seen the movies.
And we've all seen
where this can go.
We all have
these crazy ideas
in our head of
what could happen.
There's many different
possible scenarios.
So that's why responsible
AI use is so critical
and why I'm so
excited that we have
an incredible AI
ethics team that
has been in place at Salesforce.
And Kathy Baxter is here, who wrote an incredible article on ethics in AI that got published in HBR this week. And I asked her to speak on our panel.
And Kathy, would you
just ground us right now
in responsibility
of AI before we
begin our technical
journey today?
KATHY BAXTER:
Thank you so much. In 2019, we published our trusted AI principles.
And the very first
principle is responsible.
We believe that
at the core,
our responsibility
is to ensure
that we are protecting human rights, as well as the data that all of you, our customers, entrust to us. And this first line: your data is not our product.
That is a key
differentiator.
Your data is
not our product.
At the beginning
of this year,
as we began delving even
more into generative AI,
we recognized
that we needed
more specific guidelines
than the principles that
have been driving
all of our work.
And so we created these five guidelines that are specific not just to generative AI, but to generative AI in an enterprise context.
First and foremost,
it has to be accurate.
You're making
business decisions
based on the content
that's being generated.
We need to assess it
for bias and toxicity.
It needs to be
transparent.
We are ensuring that all of the data that trains our models has clear provenance and is fully consented, and we are transparent whenever content is AI-generated.
We believe that AI needs to be empowering: keeping humans in control, and giving them the tools they need to ask, is this content useful, is it accurate?
And then finally,
sustainability
is one of our core values.
And so when we are building models, we are going to ensure that they are sustainable. It's not about creating the biggest model possible; it's about creating the right size model that is going to be most accurate for our customers, but that also takes into consideration the carbon and water footprint.
MARC BENIOFF:
We're so lucky to have Kathy and also our incredible ethics team.
And guiding us, I think, is going to be a huge burden for them as we start to move forward very rapidly.
We already
understand this is
going to be a
critical part
of every single
thing that we do.
In the world that we live in, and in these industries that we participate in, like banking or health care or media, and so many of the industries that are here in this room with us and watching us all over the world, they like to know exactly where their data is. And they want to know where that data is at all times.
And that is not how
generative AI works.
Generative AI works as a kind of expansion of the deep learning principles, where deep learning gave us these amazing neural networks and these amazing insights. And now the networks just got bigger. And as the networks expanded, all of a sudden, the ability of the models expanded too. But as those expansions happen, the weights and how they're moving all become much harder to see and control.
And that is
really the burden
of our AI team over here.
They have to really
be able to use
these next
generation models
but have that capability
to deliver a trusted
experience to
our customer.
We're already
the number 1 CRM.
All of our
customers are doing
a trillion transactions
a week using Einstein.
There's no company that's
even close to what we're
doing in the customer
relationship management
area with artificial
intelligence.
We understand that the burden there must be on us as we're going forward. We're already trying to do all of this.
To maximize return on investment for our customers, to deliver them the fastest time to value, to innovate the fastest with low code or no code capabilities.
But now, giving them
the trusted productivity
that they're
demanding with
generative artificial
intelligence.
How will they augment their employees' productivity, just as we heard from our friends at Gucci?
Now, our taste of that really started to accelerate last year, when
we introduced
our customers
to this incredible
new product
that we have
called Data Cloud.
And Data Cloud has become our fastest growing cloud.
And one of the reasons
why this is becoming such
an important cloud
for our customers
is, as every customer is preparing for generative AI, they must get ready. They must organize and prepare their data.
So creating a data
cloud is so important.
But the problem for a
lot of our customers
is that they might be
creating data clouds
but with teams or
with technologies
that are outside of the
Salesforce ecosystem.
And that's why we extended our Salesforce core platform with this product that is intelligent, real-time, and automated.
We introduced it at
Dreamforce last year.
It's taken off
like a rocket ship.
And we've seen this incredible capability: we're already delivering 30 trillion transactions per month with Data Cloud. And we're importing, I think, about 12 trillion components of data already on a fully annualized basis.
It's incredible what
our customers are doing.
I mean, here's one of the amazing stories that we have. We work so closely now with Ford on these next generation cars and trucks that they're building.
I have one of these new Lightnings. It's just dripping telemetry, it drips data, because of all the technology inside it.
But it needs a
receptacle for that data.
It needs the ability to have all the data so that Ford can provide the next generation sales experience, a service experience, a marketing experience. I just bought a second truck, actually, because it was so amazing. All of those things are then generated out of that data.
So the Data Cloud sets the stage for every customer's AI journey.
And then, with our primary vision, the vision that Parker and I set 24 years ago, we're ready to begin connecting with our customers in a whole new way.
This idea that we
want to then provide
that customer 360
experience from sales,
to service, to marketing,
to commerce, to Tableau,
to Slack, to all
the capabilities
of Salesforce on
the 360, but then
augmented by
generative AI.
We already know generative AI is taking off. We already can see the huge growth.
How many people here have
already used ChatGPT?
OK, so I didn't have to call for hands. It's the fastest growing consumer product in history.
We all know the story, and it's amazing. But we've all seen the limits as well.
And we all know that
all the data that we're
putting in there,
as we sit there
at home and play with things or whatever, well, it's getting stored and you're training that model.
And if you want to
understand that,
talk to some of
the AI experts
in the room today, and
they'll explain to you
why that consumer model
is so powerful for OpenAI.
But does that work
in our enterprise?
Does that give us that trusted experience we need?
Is it going to give us
trusted productivity?
We want the augmentation.
We want that next generation capability.
But what about the trust?
How are we going to store that data, when what the LLM desires is to take as much of that data as it can and put it into its weights? But is that what we're going to allow it to do with all of our enterprise data?
Will we be able to preserve our sharing model? Will we be able to preserve our security model? Will we be able to lock down, for each and every customer and each and every employee, what they need?
That's why we already know that every CEO needs AI.
We've been talking about
that for over a decade.
And we have the best team when it comes to AI.
But when we look at these new models that everyone's going to roll out, we see something. And that is: there's a pretty big gap. And it shows up in every conversation.
When we talk about privacy, when we talk about hallucinations, when we talk about data control, when we talk about bias: these are technical terms, actually, in AI. They're not just kind of societal terms. These are actual technical explanations of things that are happening inside these models.
We realize that over here, we have this desire to rapidly move forward, to have more productivity. But over here, we have this need for trust.
And what we
hope to do today
is to really
close that gap.
That's why, as I mentioned, we already delivered that first generation of our trust layer, with our predictive model, in 2016 when we rolled out Einstein.
We already crossed
that bridge.
And every customer who's in this room and online who uses Einstein has already benefited from that trust model. We never had to say to anyone, "Oh, yes, we had to look at all your data." We didn't have to do that.
And we also know that we have to apply it to all of our applications.
And we also need to then say: we're going to take all of this to not only be the number 1 AI CRM in predictions, but the number 1 AI CRM when it comes to this incredible generative AI capability.
So that is why, today, we're introducing trusted enterprise AI: built for CRM, built for Salesforce, built for our customers. Implementing these key technologies:
Using our Einstein
GPT trust layer,
which is about
to get rolled out
for all of our
customers worldwide.
Allowing all of
our applications
to deploy that capability.
And augmenting our
own capabilities
to be the number
one AI CRM.