We are going to
start with something
And that is our
forward-looking statement.
You might have
seen this before.
We are going to
be talking today
about some
forward-facing technology
So if you are responsible
for any purchasing
decisions,
please make those
based on currently
available products
The real place I want
to start is a thank you.
So thank you
for all of you
for making it
over to this room.
Michelle and I are so
excited to speak with you
And I also just wanted
to say thank you
to our larger
Trailblazer Community.
As you're going
to see today,
we're going to be
covering content that
is based on lots and
lots of research studies,
and those are really only
possible if Trailblazers
are willing to
speak with us
and share their insights
and their experiences.
So we are super grateful
to the Trailblazers
out there, including
yourselves.
And I wanted to start
by making that clear.
Today, you are here for
Gen AI Beyond the Hype,
Practical Insights
for Your Business.
And you can think
of this as sort
of a
behind-the-scenes look
at what other customers
are really saying
There is just
a lot of buzz
and a lot of hype about
generative AI tech,
and it can be hard,
frankly, to understand
And so what we're going
to talk about today
is the key
takeaways that we
have learned as this
technology has really
moved from an
idea to a reality.
I am a research
architect on our Research
and Insights team,
and I am joined today
by the illustrious
Michelle
Tabart who is a principal
researcher on our team.
And she and I, along
with a lot of colleagues
from Research
and Insights,
have spent the
last year really
talking to lots and
lots of customers
And we're going to
take that information
and try to help you walk
away from this with three
First, we want you to
walk away understanding
Secondly, we
want to help you
know how to think
differently about testing
And lastly, we want
to help you understand
how to enable your
end users to drive
This is a slide I
showed on a stage
just like this in the
Admin Track at Dreamforce
And what I told the
attendees at that talk
is there's just a lot
of really big feelings
Part of that was driven
by the tech being new,
and part of it was the
media coverage and all
And the feelings
that people
were having were
really expansive,
so it was everything
from huge excitement.
This is one of the
most shocking moments
in computer science
in my lifetime
to some genuine
worry, like,
should I be
worried that this
And a lot has really
happened since then.
We've seen a big
evolution in the AI stack
So there's usually
a UI layer.
Often it's a web
interface where
you're using
natural language
to ask a generative AI
tool to do something
for you, maybe
generate some code
or answer a
question, maybe write
That's all
driven by models,
and there's lots and
lots of models now.
Some of those are closed
source, like OpenAI's.
Some of them
are open source,
like Meta's Llama
family of models.
And then the real
secret sauce of this
is it's powered
by tons of data.
And so for consumer
apps, that's
There's articles, and
books, and podcasts.
And so that has really
enabled some pretty
But what we're hearing
from enterprise customers
is it makes them
nervous when they think
about using
that technology
for their own
organization.
When you are working
at an enterprise,
you're also worried about
trust, and security,
You're worried about
proprietary data
in your organization,
maybe what
it's going to take to
train your employees.
And so Salesforce
has really
spent the last year
very much focused
on those types of
considerations,
how we can take
this technology
and make it available
for enterprise use.
This is a visualization of
all of our currently GA AI
And hopefully,
as you're seeing,
if you're walking
around Dreamforce,
there's lots and lots
more where this came from.
So in the next
couple releases,
you're just
going to continue
to see this kind
of explosion
And as this has
been happening,
we've done over 100
research studies
with customers from all of
these different products
and all of these
different industries,
talking to them about
their expectations for
and their experiences
with the technology,
and what it's been like
moving it from an idea
to something
that's really being
adopted inside
their organization.
And so now, I'm going to
let Michelle start talking
to you about what
we're hearing
So first, customers
have told us
that, yes, the hype
is indeed real,
but it's not
necessarily enough.
Customers have
been inundated
with solutions both
at work and at home
there are so many
different solutions
out there that they
don't know necessarily
what's noise
and where they
We've actually heard a
lot of customers feeling
a little bit
skeptical that some
of these flashy AI
tools are really
just an idea in
search of a problem.
So what we're seeing
is that customers now
are really looking
for business value,
specifically focused on
ROI for their unique use
And why that's
really critical
is that by demonstrating
that value,
they're able to move these
gen AI implementations
and deployments along a
lot more quickly as well.
It just takes a little
bit of confidence
and ROI being shown
to leadership,
where you're seeing that
momentum move forward.
Customers have also
been telling us
that they're thinking
a little bit more
When we were
here a year ago,
everybody was
really talking
about how gen AI can
really help improve
And that was the key
metric that people
were focusing on,
alongside some financial
metrics around,
how can we use
this to reduce cost
or maybe increase
But over the past
year, customers
have really been
actually talking to us
more about customer
experience metrics.
How can we actually
improve our CSAT scores,
our NPS, by leveraging
these technologies, as
well as their own internal
process improvements,
such as, how can we bring
in more high quality
coaching and increase the
accuracy of our processes
They're also surfacing
some more emerging metrics
around different
types of value
that they've been
thinking about
as they've seen these
technologies in action,
including things like
impacts to employee stress
levels, job satisfaction,
and work-life balance,
as well as the
frequency with which
valuable but
time-consuming tasks can
be completed,
and the ability
to grow and
continue to scale
a business without
necessarily
scaling headcount
at the same rate.
And I actually want to
share some of those value
stories in the words
of our customers
from some of the
research that we've
So a marketing automation
specialist actually
shared that Engagement
Studio metrics
are really important to
her and really valuable.
She wants to see them, but
she doesn't necessarily
have the time to go in
and do that everyday.
So having an AI
to really help
surface those metrics
and insights actually
really increase the access
she has to that highly
This is one that I found
really interesting.
A service technician,
who goes on site
to do field visits,
actually shared with us
that they spent a
lot of their time
actually writing up notes
in their personal time,
So they really saw
AI as an opportunity
to help actually improve
their work-life balance.
Lastly, we spoke to a head
of international customer
service at a growing
retail organization.
And they were
really excited
about this technology
as an opportunity
to actually help grow
and scale that business
while retaining the
service staff that they
have but not
necessarily bringing
on a lot of new headcount
to move with that growth
And he actually
shared this
with his current
service team,
with the goal that the
work that those service
agents will work on or
service employees will
work on is really
that high value,
complex, interesting
service tasks.
They want to be able to
bring AI into support more
of the more sort of core
FAQs and things like that,
so looking at it both from
the headcount to growth
ratio but also from the
employee satisfaction
So I wanted to share
these because they may all
be different kinds
of ways that we're
thinking about
value that are
outside the traditional
framing of productivity
And as we mentioned, this
is a conversation today
about getting a little bit
more practical about that.
So I wanted to share
some examples of how
our customers
have been thinking
about and measuring this
kind of emerging value
So we've really
been seeing,
it's absolutely critical,
with these new types
of technologies,
to actually build
a benchmark for any
of those metrics
This may include some of
those emerging metrics
I just spoke about,
where you might think,
oh, I don't
necessarily have
some of that
information today.
So what we
really recommend
is thinking about running
a quick pulse survey
with employees
to capture some
of those quantitative
benchmarks
before you actually
deploy these technologies.
So that way,
you're actually
able to show the
value, not just
from the traditional
metrics that you might
already have, like average
handling time or case
deflection numbers,
but also from this more
expanded point of
view, as you really
think differently
about the value
that you want to bring
with these technologies
So now that we've talked a
little bit about how we're
thinking about
assessing value,
I'm going to hand
over to Sarah
to talk about where our
customers are getting
We see a lot of
customers starting with
out-of-the-box use cases,
often for use cases that
they deem to be a
little bit lower risk.
So maybe they are
more tightly scoped.
Maybe they have data
that is less sensitive.
And by doing this,
they're able to try it
out and figure
out some things.
They can see, how
does this technology
work with the data at
my organization, which
I have yet to meet
a customer who
tells me their data is tip
top, everything's great.
Also, they can try
out their testing
and monitoring approaches
and kind of see,
does this work with
this new technology,
or do I need to
make some changes?
They can think
about value.
Are they going to
be able to track
the kinds of value
that they're seeking?
And what changes
do they need
to make to make that work?
And then lastly, are
there any processes
that they're
doing that need
to change because of
the way the technology
By doing that with, as
one customer put it,
some of those low-hanging
fruit use cases,
they can start
seeing that value
and building
that confidence,
and then they can
really expand upon that.
Another customer told
us, they put it this way,
not everybody wants
to jump in head first.
A lot of us want
to dip a toe in.
And we do that by really
immersing ourselves
in the technology but
for a very particular use
And so here's what
that might look like.
It might look like a
service agent speeding up
case resolution
by seeing, how
were other cases like
this solved in the past?
It might look
like a sales rep
creating really rich and
personalized introduction
Maybe it comes in the form
of meeting preparation,
or quickly writing code,
or generating marketing
content, or maybe
you're generating
We had a customer, a
transformation director,
say, it's really all about
matching the right AI
to the right problem
and then doing
this cycle of pilot,
test, learn, and rollout.
And so that idea that
you're going to try it,
you're going to
learn from that,
and you're going to
build on it really
boosts confidence
in your organization
that you can roll
this out successfully.
I do want to be
clear, though,
not everyone is starting
with out-of-the-box use
It's not necessarily
a linear journey
where you must
start out of the box
and then grow in
increasing complexity.
We have some
customers who tell
us, for our
organization, we
have a really
clear business need
where it makes
sense for us
to build something custom.
And so they're using our
Prompt Builder technology
to maybe build
a custom action
that they can invoke in an
existing business process.
We have other
customers who
want to start
right with agents.
So maybe our agent for
a service agent, perhaps
their service organization
has a really clear need
And their organization
is interested in trying
generative AI
to solve that.
We have people
coming at this
from different
directions, but the key
is really wherever
you start,
What you want to avoid and
what successful customers
have told us is, you don't
want to peanut butter it.
You don't want to put AI
everywhere and launch it
all at the same
time without really
understanding how to
make those individual
implementations
successful.
On that note, let's talk--
On that note-- OK, let's
talk about that sort
So we've talked about
value and how customers
I'm going to talk to you
a little bit about how you
can get ready to actually
deploy these technologies.
So first things
first, we really
want to say to
everyone in this room
that implementing AI
is really a team sport.
So you don't have
to go it alone.
This is definitely
something that we've seen.
A lot of customers
are starting
to form AI work groups
or committees to come
together to really bring
diverse skill sets,
to be able to do a
few things actually.
So the first one could be
just around identifying
the ideas that internally
you might have around gen
AI and thinking about, is
this the right use case,
and is AI the
right solution
Second thing is around
testing and evaluation.
So I know a lot of
folks in this room
are more on the
technical side
and really thinking
about that testing lens.
But we've seen
increasingly,
that people are
needing to work
with folks within
the business,
within security, within
legal to actually
really make that testing
successful and robust
Lastly, when it
comes to wanting
to progress these
technologies
into adoption, really
want to think about some
of the governance and
enablement strategies
and having those
set in place
early with
collaborative partners
can really help speed
that implementation
Now, I'm going to
talk about everybody's
absolute favorite
topic, which
is risk management,
proactive risk management.
But in all
seriousness, we know
with this kind
of new technology
that there are
risks, and we
don't want you to
shy away from that.
In fact, we
really want you
to encourage talking about
risks both internally
and externally because you
really can't address what
However, I also want you
to be realistic about risk
So one thing we've
seen customers do
is potentially
boil the ocean
a little bit by trying to
think of every single risk
We, instead,
recommended, where
we've seen customers
have success,
is really focusing
risk management
on the specific use case
that you're looking at.
So really doing a
deep dive there on,
what's the type
of data that's
needed for this use case?
What are the integrations?
What are the types
of other systems
this would need
to interact with,
and what are those
data policies?
So really going deep dive
on that individual use
case, it helps you create
targeted risk mitigation
And then again, you can
bring in those folks
from your AI workgroup
or other areas
to consider those broader
things around governance,
testing, monitoring,
and usage as well.
So once you've
identified the risks,
the next critical
step is actually
ensuring your data
is ready for AI.
So we've spoken to
a lot of customers
over the past year,
and our early adopters
have really
consistently told us
that optimizing and
getting your data ready
is what can really
drive those quality AI
So I've put together
some lessons
learned from our
early adopters
So first things
first, we really
encourage, if you
have any terms
or specific statements,
that you want to make sure
that the data that
you're using explicitly
I don't know
about you all,
but at Salesforce, we
have a lot of acronyms,
we have a lot of business
terminology that we use.
And while we might
all intrinsically
understand that,
it's really important
you explicitly state that
to get the best quality
Similarly, you
want to separate
any sort of overlapping
topics or information
that you have so that
AI can process that
You want to remove any
outdated or contradictory
This might kind of
seem like a no-brainer,
but we've had a
lot of customers
go through this
process and think
that the outputs are sort
of hallucinating or coming
out with the
wrong response,
when actually, they
go back and look
and say, oh, OK, we
forgot to update policy
information in this
knowledge article
that we leveraged
to go through that.
So really having
that done early,
optimizing that
data will also
prevent some of
those hiccups
And of course, if there's
any sensitive information
that you have or
internal policy data
that you don't want
AI to leverage,
making sure that you're
separating that out
from the rest
of the knowledge
articles or other
ways that you're
So once your data
is optimized,
the next step is to really
get it ready for testing.
So you can start by using
a sample of real world
data, like past email
threads or articles,
to see how the AI
generates the outputs.
Ideally, you
want to import
like 25 to 100
records and then
be able to run
those against 50
And you want to be
able to identify,
what are those common
business issues that
come up, and be able to
test how the AI is sharing
So now that we've got
data readiness down,
let's dive into testing
and implementation.
So we've found that our
customers are thinking
quite differently
about gen AI
testing and
implementation than they
And one thing that
actually really surprised
me over the past
year was that we
have a lot of
customers who
are testing with
great enthusiasm
but without a clearly
defined testing strategy.
And what we found is, and
similarly, a study by IBM
found, is that
40% of companies
are getting stuck
in the sandbox.
So they're starting
this testing process.
But without a clear,
defined testing strategy,
it can really lead to
those elongated periods
And similarly, I talked
about those benchmarks
and setting those
benchmarks before.
That's another
area where we
see folks who
might be testing
from the technical
side, but they're
having executive
leaders ask them, OK,
is this proving that ROI?
They're having to go
back and do testing again
to really identify
against the benchmarks,
So a few key tips
there to make
sure you don't get stuck
in the sandbox yourself.
I also want to talk
a little bit in depth
about testing too,
because what we've learned
is that it's not
just about getting
It's really
multi-layered in the fact
that testing looks
at technical,
operational, and strategic
elements as well.
So at the foundation
of this pyramid,
it's really
critical to test
how AI aligns with your
business's data, security,
legal, compliance, and
cost requirements as well.
We've actually seen a lot
more admins and technical
leaders being brought
into these conversations
earlier just because
of the novelty
of this technology
and really
trying to think
how it intersects
with these other
elements as well.
The next layer
includes things
like performance testing
and those accuracy checks
that you might
typically do in sandbox.
But we're also
seeing folks
doing more red team
testing to really assess--
I wonder if I can
use this screen?
Things like toxicity,
bias, and anything
that's coming up against
business policies
or potentially going
against those guardrails.
So red teaming is
where you can really
try and break down
the AI and make
sure it's working against
the designed structures
of guardrails
and the policies
that you put in
place as well.
From there, you want to
be able to roll it out,
as Sarah mentioned
earlier, that piloting,
testing, learning phase
to a subset of users.
And all of this
is actually
really working to build
that sense of ROI.
So if this is where
you're able to use
those benchmarks,
capture that information
to track ROI and
be able to show
that value, and
not just value
from a business
value perspective
but also from a
trust perspective.
Are we able to responsibly
deploy this technology?
Do we have confidence
in that too?
So I've really
been thinking,
not only did these
testing strategies aim
to build evidence,
but something
I'm really
encouraging customers
is aim to build
confidence.
How are you
building confidence
with your leadership, how
you're building confidence
with your end users
through the testing
Again, I wanted to give a
little bit of an example
to try and ground
that for us.
So I've chosen
Service Replies,
which is one of our
generative AI features
for service at the
moment, embedded use case.
So the goal with
this feature
is to really boost
agent productivity
by generating email auto
responses so that you can
So if you wanted
to test that,
you'll want to focus
on a few key areas.
Do the replies align with
your organization's brand
How quickly are
they generated?
Do they
appropriately match
And is the output
appropriate following
So some of the ways
that you can test that
is, as I mentioned
before, you
can start by pulling a
set of those historical
transcripts or
conversations
and running them through
to make sure that it's
acting accurately
against metrics
that you might think,
like average handling time
But then once you get to
the next level of testing
in that pyramid
and you deploy it
to some of your
service reps,
this is where
you want to have
some of those reps
that have that domain
expertise because
they're really going
to be able to identify
those inconsistencies
with tone, with
latency and speed,
Having those employees
be able to check
And finally, you
want to review things
like your customer
satisfaction metrics.
So you're showing that
expanded version of value
as you build that
confidence in your testing
So we've also found that
testing doesn't actually
Generative AI operates
with a probabilistic
nature, so it means that
the outputs can vary.
That's why we
really encourage
ongoing monitoring
as critical.
Because those
outputs can change,
you actually want to
maintain and ensure
that they remain accurate,
relevant, and trustworthy
So unlike other solutions
where you might just
test it in
sandbox, and then
briefly with
some pilot users,
and then feel
confident to go,
because of the dynamic
nature of this technology,
we really encourage
that ongoing monitoring
And one way to do this is
by leveraging the audit
trail alongside
user feedback.
And this will allow
you to then fine tune
any instructions and data
sources to really drive
So while ongoing
monitoring
ensures quality, it's
also really important
to consider the user
experience during rollout.
And I'm going to
hand back to Sarah
So one thing that it's
important to keep in mind
is that this
is a little bit
different from
other technology
you might have rolled
out in your organization.
So not only is
the technology
still fairly new,
but it actually
feels really
new to end users
in a couple
important ways.
One of them, Michelle
just talked about.
The output can vary, so
a lot of enterprise users
are used to, every time
I do A and B, I get C.
And with generative
technology, every time
you do A and B, you'll get
slight variations in C.
And that can just
feel different to get
The second is
a lot of people
assume, with their
enterprise tools,
that they're going to
get a final output.
And with generative
AI, sometimes it's
It's a great draft that
you can start with,
then you might
make some changes
before sending
it along its way
to its final destination.
It is conversational,
so you
can use natural language.
And so sometimes, it
feels to end users
like they're
talking to a human
And you have to make
sure to be careful to not
think of it as a
human because that
can have some weird and
unintended consequences.
So an example of that
might be over relying
on it and assuming it can
do things that it cannot
do because it's not
actually a human that
And then the
last, and this one
is actually
personally for me,
one of the most exciting,
is that it allows
So it allows some
people to take on tasks
that maybe they couldn't
do before because they
were lacking a particular
sort of niche skill set.
And so you can
think differently
about what different
roles in your organization
But that also means you do
have to think differently
about what the roles
in your organization
are able to do and whether
that impacts any changes
to job descriptions,
or responsibilities,
And so when
you're thinking
about enabling those
end users on how
to deal with this
technology that does feel
new, there's a
couple of things
that we have seen
almost universally
across customers
seeing success.
The first is helping
employees actually
build their knowledge
about understanding
That knowledge is what
builds that comfort
There was a
recent Slack study
that showed that those
who are trained in AI
are up to 19 times
more likely to report
that AI is improving
their productivity.
And that makes
sense, right?
The more you
understand something,
the more you know how
to use it effectively
and really integrate
it into your workflow.
And so as one
customer told us,
like in a lot of
organizations,
this is the dream, but
it's just not there yet.
It's not that formalized.
And so they said,
like, people
are probably just
casually using it
and sort of slowly
learning how it works.
But if we were to be
more thoughtful and more
specific in how we
think about enabling
our employees, we could
really get ourselves
to that more
rapid efficiency
and effectiveness
and comfort.
The second piece is you
build their knowledge
with how it
works, and then
they really need to
get a feel for it.
They need to try it and
get their hands on it.
Now you are all
at Dreamforce,
so you have a really
unique opportunity
to do hands-on activities.
This Dreamforce has
more hands-on activities
than I have ever
seen at a Dreamforce,
and I've been to
a lot of them.
These are three that
are happening tomorrow.
You can also scan that
code and see lots more.
But I would
really encourage
you to take advantage of
that because we hear over
and over and over
from customers
who are using this that
that was a difference
Learning about it and
then actually trying it
really was super powerful.
You might also feel
a little overwhelmed
at the idea of having
your end users start
And we want to remind
you that it doesn't
We had Chrissie Arnold,
the Director of Future
of Work Programs for
the Workforce Lab,
say, actually, we've had
really amazing results
from just 10 minutes a
day of AI micro learning.
So even small doses
of helping people
get used to it,
trying out a prompt,
perhaps trying out one
of the new Trailhead
modules that has
hands-on capabilities
will help people start
to build that confidence.
And then the
third piece here
is actually a
little more nuanced.
You help people
understand the tech,
you help them get
a feel for it,
and then you
have to help them
think about
themselves in relation
When we were talking
about this last year,
we were encouraging people
to think of themselves
as humans in the
loop, so you might
As the technology
has evolved,
our understanding of how
that relationship should
People tell us, it's going
to take me a little bit,
honestly, to fully
trust the technology.
I need to see
it working over
time in repeated
interactions that are
And during that time, we
are encouraging humans
to really take the lead
in that interaction as
opposed to just
being in the loop.
The technology has grown
by leaps and bounds.
I think we all
agree with that.
But we still can't expect
it to be 100% accurate all
the time, or
to never, ever
show any signs of bias,
or never, ever, ever
have anything that
is misinformation.
We also know though,
that practically
and reasonably, you cannot
ask a human to be involved
in every single
AI interaction.
And so how we're
thinking about
it now is really
in two ways.
One, when we are asking
humans to be involved,
we want those
interactions to feel
empowering and
enhancing, not tedious.
So we want the
humans to be
able to really understand
what's going on.
How is the AI
recommending this?
How does this
technology work?
We want the humans
to be building up
their own personal
skill sets.
So maybe they're learning
how to write effective
And then third,
we want the humans
to be able to be
giving feedback.
I was really surprised
how well it handled this.
And let's think
about, is it the way
they we're asking
the question?
Do we need
additional data?
Being able to
take ownership
of giving that feedback
is really empowering.
The second piece
of this, and this
is true for all use cases,
even autonomous ones,
is we need to have
lots of controls
for things like
governance and monitoring.
That way, we can give
humans the peace of mind
that they can let
some of those actions
take place with
appropriate guardrails
in place so that
they can spend
their time on the
high-judgment use cases,
where they're most needed.
So another way to put
this is we're not always
going to be having
humans row the boat,
but they very much should
be steering the ship.
Well, that was a
lot of information.
And before we go
into Q&A, I just
wanted to wrap
up with three
s first things first,
we really talked about
getting started with those
out-of-the-box use cases.
And we want you to
think about making sure
that you're measuring, not
just traditional metrics
but some of those
emerging metrics as well,
to really be able to show
the full scope of value
that you're seeing
with these technologies
as you implement them
in your organization.
Second, we wanted you
to embrace a new mindset
Gen AI, it's not a
traditional software.
It's dynamic, it's
probabilistic,
And as such, it requires
more thoughtful oversight,
It's a continuous process.
And we really
want to encourage
you to take that
continuous monitoring
mindset so that
you can drive
that accuracy, relevancy,
and reliability,
and be really
confident in that, too.
And remember that
implementing AI is
You don't have to do
it all by yourself.
It's definitely
working with
those cross-functional
partners,
is what we've seen has
really actually improved
speed to adoption
as well, internally.
Finally, invest in
hands-on learning
That's what you're all
here at Dreamforce, doing
We really encourage
you to take that back
into your organization
for those folks who
are going to be
leveraging these tools.
We know that that
combination of knowledge
and practical
experience is really
what's driving
productivity
with these tools as well.
So we see these are
really the key steps
to moving beyond
the hype and truly
making gen AI work
within your organization.
We wouldn't have
been able to share
any of the learnings
that we just
did over the
past 30 minutes
or so with all of
you if it wasn't
for amazing Trailblazers
who've shared
So I really encourage
you to sign up
for our research
program, was
part of the Research
and Insights team.
Feedback from
users like yourself
is really crucial to
ensuring that we're
building the
right products
and prioritizing
the right features.
All of our
research studies
are hosted online as well,
so you can participate
Don't forget
to scan that QR
code if you want to chat
more with Sarah and I.
Lastly, did everybody
know that it's pumpkin
So the first 4,000
attendees to provide
feedback will receive
a Starbucks gift card.
We'd also personally
love to hear
It really helps
us prioritize
the types of
content that we can
So please don't forget
to fill this one out.
And now, we
want to actually
just take some questions.
We have a mic
down the front.
We'd love to open up
Q&A for anyone who
has questions from
the discussion.
You don't have to be shy.
Reuven Shelef, Out of
the Box Consulting.
From the customers
that you surveyed,
how many actually
completed
doing what you're
talking about 100%?
Meaning they have
currently gen
AI in their
Salesforce instances,
that they are relying
on for production work
Yeah, that's a
great question.
We have hundreds
of customers
who have rolled out
all of the solutions
that Sarah shared
earlier, that large list.
And that's where we've
learned all these lessons
So it's-- yeah, in the
hundreds of customers,
maybe even more that
we've been able to speak
And what is the
effort involved?
What's the range of
number of men or women
hours that was invested
to make it happen?
I would say I
think, there's
a lot of variables
that play into that.
I don't think
I have a range.
I mean, what
I would say is
part of it depends on your
organizational readiness.
So if your data is in
a really good place
to start training it,
it's a lot less time.
We have some customers
who were pretty quickly,
able to say, train
a service agent
We have others who,
it took a lot of back
and forth because
they'd see an answer,
and then they'd have to
go find if something's
missing, or they'd
see an answer,
and they'd have to
go make some changes.
So part of it depends
on your maturity.
And I think it's
also around that,
Sarah mentioned before,
about the testing, pilot,
I think being able to do
that quickly is really--
it depends on,
as you mentioned,
like, a people
progressing, like what's
I think having a
process in place that's
allowed people to move
through that has really
created that confidence
with their executive
leadership to invest
more and continue
So I think that's
another area where we've
seen customers
who've moved fast,
have been able to bring
in those types of testing
and learning
iterative motions.
So it would be great to
get some numbers around
that because in
my observation,
without doing it yet,
it's a mountain that
We'll never get to
actual accurate--
I'm talking current
technology today,
if we fast forward
and complete it today.
The quality
won't be there.
So I'm looking for proof
that actually, people
did all that you're
saying, which
looks like it
would take more
than a year in a typical
Salesforce implementation
to accomplish what
you are saying
So I'm really
suspecting that--
or I wanted to see
how these hundreds
of customers actually
did everything
that you're talking about
and how much time it took,
because it looks to me
that it's questionable
if it's even possible
to accomplish
in a reasonable time
to get the ROI that
We could definitely
share that feedback.
I think you're probably
not the only person
I will say we have heard
from customers-- they've
done it in less than a
year because some of these
haven't been out
a year, so they
haven't been
able to implement
this in less than a year.
But also, we've
been hearing
from a lot of them, the
configuration and setup
has taken less time
than they expected
because some of it, it
doesn't require some
of the dev skills
or things that have
And the testing
and monitoring
has sometimes taken
a little longer.
So people will have a
few more eyeballs on it.
Legal is maybe a little
bit more interested
in this one than some
of the other things.
So if you're
factoring in how
you want to
allocate your time,
I would maybe--
you probably
need to allocate a
little less to the setup
and a little more
to the testing.
Of course, and the data--
I mean, just having
clean data without AI--
If you are able to
get Salesforce clients
to actually get to a level
of clean data at the level
that you can
do this for AI,
that's a huge
accomplishment.
I've been in the
Salesforce consulting
business for
over 20 years now
with hundreds of
completed projects,
and there is only two
companies that I can say
They were willing
to do what it takes
So my statistics
seem to be
very different from
yours, and that's
why I'm interested
to learn.
And I appreciate that
feedback because one thing
that we've heard from
our early adopters
that I shared earlier
is how much the--
customers who
really invested
in optimizing their
data were the ones
who were then able to move
very quickly with this.
So I think that we do have
a lot of early adopters
who joined with
us quite quickly
and had a lot of
enthusiasm and energy
And we're able to put
that investment into place
to make these things
successful too.
But I think one
thing I really
want to get across
to a lot of folks
that we talk
about is really
focusing on the
specific use case
that you're
looking at, for
rather than
trying to clean
every single piece
of data that you have
in your organization to
start with one use case
and really focus
on, what are
the pieces of
information and data
that I need to make this
use case successful?
Another thing I would
maybe just throw in there
too, is these
organizations--
like I said, nobody
has perfect data.
That's just a known
fact about the world.
And they're still
operating just fine.
So their service
agents, their sales reps
are able to operate
pretty effectively
And so some of it
is tweaking these,
so they understand
the guardrails
that your human agents
are kind of using.
And so they know
when they can operate
with imperfect data and
when they should say like,
mm, this is
something I need
And so I was in a session
on Monday with some MVPs
and Gary Brandeleer,
who's a PM, who's
been working on a
lot of our agents,
was saying it's sort of
like teaching a child how
You give it some
instructions,
you watch it go,
then you say like,
oh, too much,
that's too far.
Let me go in and give you
a few more instructions.
And then it comes
back, and then you
say, actually, I should
expand your capabilities
And so there
is a little bit
But some of it
is figuring out
how to put those
guardrails in,
so it can operate
within perfect data,
sort of in the same way
your human agents are.
You have the slide
with the bar code,
And there is, like,
should you lose this link,
if you go to the
Salesforce Sessions web
page, you can filter
by hands-on and AI,
and then you'll get
this same content.
So the hands-on filter
is really useful.
And I have been to a
lot of Dreamforces,
and this is by far,
the most hands on I
Michelle and I were
working this morning
at Camp Trailhead, where
people were building
their first agent, and the
feedback coming out of it
is overwhelmingly
positive.
And there are a lot of
opportunities, again,
kind of different from
previous, where you can
take those demo
works home with you
and keep them for a while.
So you can go back
to your organization
and play with
it and really
build that feel
for how it's
working so that when some
of this newer technology--
we have a lot
of GAs coming,
Generally Available
technology coming
in October, you're ready.
You've had time
to think about how
it's going to work
in your organization.
And again, it'll speed
that time to value.
Thank you so much
for the question.
Well, thank you so
much for coming today.
We really appreciate
your time.
And hopefully, we will
speak with some of you