- Up next, our panel on Building
Inclusive and Ethical AI.
Please welcome to the stage
our panelists. Rachel Gillum,
Vice President of Ethical
and Humane Use at Salesforce.
Matt Rosenberg, Chief Revenue Officer
and Head of Grammarly
Business. Catherine Nichols,
Vice President of the Office
of Accessibility, Salesforce.
And our panel moderator, Kathy Baxter, Principal Architect, Ethical AI Practice at Salesforce.
- I am so excited to be here with this amazing group of people today. This is going to be fantastic. You guys are in for a huge treat.
So I want to kick things off
with a question for everyone,
and we'll start with you, Catherine.
So as we begin our conversation today,
how would you define
inclusive and ethical AI
and what are the core principles
that guide your work in this area?
- Thank you. To start off, I would say ethical and inclusive AI is accessible to people with disabilities.
And it also includes the expertise
and the diverse perspectives of people
with disabilities when building
those systems throughout the
entire product lifecycle.
So from designing to testing to
customer support, to marketing
and implementation, as well
as re-releases and bug fixes
and all of the aspects in between.
So we're always including people with disabilities in the process. It's also useful and beneficial to most of our lives.
- Yeah, absolutely. It ties back to the earlier panel: building with us, not for us. It is so incredibly important. Absolutely. Matt?
- Yeah, hi. Hi everyone. Thanks for having me. It's great to be here. And I think we're going to have a lot in common in the way we view this. For me, you know, ethical and inclusive AI really comes down to three things: fairness, accountability, and transparency. Fairness in terms of, are we taking bias out of the models? Are we really understanding the data that's going into the models? And when we think about accountability, it's just that: are we actually accountable for the outcomes that come out of the models by understanding the inputs?
And I think, you know, transparency is incredibly important in understanding everything that's happening, because these models have huge impacts on people, and the speed and rate at which these models are producing outcomes means you really have to be protective of groups. You have to be protective of language; you have to be aware of what's going in and what's going out.
And I think ultimately it's all of our responsibility to be responsible with respect to AI. So for us, it's about safety, right? Making sure you understand how to keep people safe. It's about accountability, knowing what's going into the models. It's about transparency, but it's also about user control, really giving individuals the ability to control what's happening with the output. And lastly, it's always going to come down to privacy. Don't sell the data; treat people's data respectfully.
And in the end, if you build
on those core principles,
you'll build a more inclusive
and ethical AI solution, I believe.
We talk about human at the helm; that control and empowerment is central. Rachel?
- Yeah. And per the panel, I also think about ethical and inclusive AI as how you build it. Exactly. And you know, for us, we try to do this by design from the beginning, not something bolted on at the end, to make sure that we're intentionally embedding our ethical and humane use principles into the very design and development of the product.
And for us, a lot of
those principles come down
to things like human rights,
privacy, as you mentioned,
safety, honesty, and inclusion.
And it's really about
elevating those voices
that are not traditionally
represented in the room when
these products are being built,
when they're being developed,
and making sure that
that is included, again,
from the very beginning.
It brings incredible
value to our customers
and has a positive impact on society.
- It really, really does. Now, as VP of Policy in our Office of Ethical and Humane Use, how are you ensuring that ethical considerations are really included and integrated throughout the entire AI development lifecycle, so that we have safe AI from the beginning?
- Yeah, I mean, it really
starts with our principles.
That's really where we start
all of our work, right?
And even our company values.
And that's what then drives
the development of the policies
and of the products and features, right?
And right now in this moment,
I think the number one focus is on trust,
our number one company value, right?
Making sure that trust can keep
pace with this rapid change
of technological development as we go.
And it's so important as we're
moving from generative AI
to agentic AI where AI is now
taking actions on our behalf,
that we're really putting this at the
forefront of the development.
And so for us, this appears in various products and features that we build and help develop as a team and as a company, starting with the trust layer, right? Listening to our customers, data privacy and security are a top concern. And so with the trust layer, we're enabling that secure data retrieval, able to ground the answers and activity of the AI in your trusted customer data in a way that you don't have to worry about data retention and retraining by these other models. And there are other features that we build in, like the toxicity detectors and things like this, right?
And then even as we move into agentic AI, right, as you mentioned, making sure the human stays in control and has that bird's-eye view of how the system is working while still being able to delegate tasks and work smarter and safer.
And so this appears
through different features
that our team develops that
we call trust patterns, right?
Yeah. And so we think about our AI audit trail, so you can really have that high-level view: how's the system working? Be able to spot problems should they arise and address them. Things like our prompt builder, right? Really allowing the human to come in and tweak and adjust and get the AI outputs that you want and expect throughout the process.
Yeah, there are other mindful friction features that we put in. There are some moments where we need humans to lean in on those high-risk, high-judgment areas and make a decision as well. And so we're building that into the product throughout. So I think it's, again, a mix of letting our principles and values drive the design and of mitigating throughout the process.
- Yes, it has to be put throughout the entire system. There can't be just a single point. It has to be throughout everything, built in by design. Matt, how can AI products help level the playing field to support a broader population?
- Yeah, well, I think a theme you just heard from the three of us is it has to be from inception, right? From design. It has to be a clear decision. And I think when you do it that way and you think about inclusivity, you can really build a product that brings AI to everyone.
And so, and I'll speak from a Grammarly experience standpoint, though it sounds like my colleagues at Salesforce do the same: we very much view product development through the lens of the curb cut phenomenon. I don't know if folks are familiar with that.
Just a quick primer on it.
If you walk the streets of San Francisco,
you notice when you get to
a curb, there's a curb cut.
It was built for
wheelchair access, right?
It tends to benefit everyone,
not just people in wheelchairs,
it benefits people
carrying suitcases, parents
with strollers, you name it,
everyone benefits from it.
That is a design decision that benefits everyone. And when you think about leveling the playing field, if you think about that from the product inception, you can build a product that is more inclusive.
So for example, there are lots of people with neurodiversity, right? Fifteen to twenty percent of the workforce in the world today is neurodivergent. If you build a product that has that in mind and you think about somebody with ADHD, for example, if you start popping things out of the AI product at that user, it may actually be detrimental to the user, as opposed to keeping them in the flow of their work.
If you think about, you know, someone with autism, where in an async world tone sometimes is difficult and you're communicating asynchronously, how do you correct for tone? How do you think about English-as-an-additional-language learners? How do you create an AI product that incorporates that type of skillset, bringing the right tone?
If you're thinking about that from the inception of the product, you're gonna end up with a product that I think includes all. And, hopefully we touch on this in the panel, the economics of that are actually extraordinary.
Oftentimes we, and I say we meaning business, think about this as, oh, we should consider it. No, you must consider it, because if 15 to 20% of the workforce is neurodivergent, as an example, and you're able to improve their ability to do work by 10, 15, 20%, think about the impact of that.
And if businesses started to
think along those lines about
how do I optimize for that population,
there is a huge uplift in productivity
that people will receive.
And also just employee
satisfaction, customer satisfaction.
I think the byproduct of that decision
that you make in the very
beginning when designing products
for all has a huge lift for everyone.
- I think there's not enough recognition of that. Too often compliance roles, whether that's privacy, ethics, or security, can be seen as cost centers. But more and more organizations are demanding evidence that the technologies really do work for everyone. Yeah. That they really are safe, they really are trusted. So there is a serious business benefit there.
- Yeah. And I'll make one more point. I think in a world where it's kind of like a bolt-on of AI to an existing software product, which we see a lot of, we see a lot of companies just wanting to raise their hand and say yes, we have AI. And they're bolting software on. That's actually a risky behavior, right?
I think, when you think about inclusivity, because what is the model? What's the data that's training it? Who's actually monitoring and regulating that? Now it's not a Salesforce product, where it's built from the ground up. It's not a Grammarly product; it's some other product. And I'm not meaning to disparage the product, but it's like, what's the AI for the AI? You know?
- And who's going to make sure that, if it's a generative AI component, the language coming out is inclusive? How do you regulate that? How do you create that layer that actually ensures inclusion? Because if the economic benefits are there, which I think we're saying there are, there has to be scrutiny of what you're actually putting into the ecosystem, and of who's regulating that.
- I love it. Catherine, building off of that,
how can AI be leveraged
to improve accessibility
for individuals with disabilities?
And what challenges must
be addressed to ensure
that these benefits do reach everyone?
- Yeah. And that really does come off of everything that you were just saying: AI can really even the playing field for people with disabilities. And when you're talking about the curb cut effect, we have audiobooks, keyboards, voice assistants, so many examples that were all developed originally for people with disabilities, but we all benefit from those technologies now.
And that can be the same
and is the same for AI.
And I think you heard up here earlier with Slack: we actually heard from our neurodiversity community within Salesforce that Slack felt noisy and disruptive, it was really hard to prioritize your work, and it was difficult to find what you were looking for. Now we have Einstein AI built into Slack, where we can get recaps from channels that maybe aren't urgent, but you still want to keep track of what happened that day.
So we can get a summary at the end of the day of everything that happened across those channels.
So if I missed an entire thread
that was happening while I was
working on another priority
project, I can get a summary
instead of having to scroll
through that entire thread
and try to catch up.
And the search functionality, you may have seen it in Google now, where you can search and it gives you a summarized answer drawn from across the internet right there. That's what we have in Slack now too.
And it's really helpful to kind
of crowdsource the best answer
across Slack instead of
having to kind of search
through every single channel
and every single thread.
So I think there's a great evening of the playing field there.
The challenges we face, though, are that you have to hire and include people with disabilities in your companies. The other is teaching accessibility in our education system. I have a degree in computer science, and at no point in my college career did I get told anything about accessibility in my design classes and programming classes. And we partner with an organization called Teach Access to incorporate accessibility into the curriculum at our higher institutions, and we're working on lower education as well, to make sure that the future workforce is including accessibility when they design and build, and isn't just hearing about it the second they show up at Salesforce or at your company.
- Yeah. I think there are so many skills, sometimes they're referred to as softer skills, but they really are critical skills, that are needed in computer science or other technical programs. Accessibility being one of them. Ethics, you know, I may be a little biased there, but an understanding of ethics: what is your responsibility? And so including these in curriculums from the beginning creates what I call that ethical spidey sense, or an accessibility and inclusivity sense.
Very good. Well, Matt, Grammarly is a pretty amazing communication tool. I know, I use it frequently throughout my day.
- Thank you.
- So in your experience, what are the key ethical challenges that arise when developing AI-driven communication tools, and how can organizations proactively address these challenges?
- Yeah, that's a great question. And again, thank you for using Grammarly. For those that don't, you should. And for those that aren't familiar with us, we really do help people communicate more efficiently and effectively; we improve communication.
And so with that said,
communication is incredibly complex.
If you just think about any
of your interpersonal relationships,
think about your significant
other, your best friend,
your parent, how often that
communication can go awry, even
with good intention, right?
And, well, look at this political cycle, and the impact of communication, what that impact has on people. This is a very significant responsibility: 88% of knowledge workers' time is spent communicating. And so that is a lot of moments where communication could potentially alienate, isolate, or make someone feel pretty bad.
And when you think about AI generating outcomes where you don't understand the data sets, you don't understand the models being used, there's no regulation of what's happening, and it's just spewing outcomes, that creates a lot of risk. It creates a lot of dissatisfaction, and worse, it perpetuates stereotypes.
And so, to your question, I think it's the responsibility of companies to really have responsible AI practices. I really believe this, and I'm always amazed when companies don't. Because if you go back to the beginning and think about building a product, into that product build there has to go a layer of protection around what the AI is doing: building these responsible AIs, having review cycles, having feedback loops both with employees and with customers.
And that's the way you're going to actually help companies regulate this. Because without all those checks and balances, without understanding, and it goes back to the data, what the data is, what's going into the models, and having checks against it, there is risk. Because 88% of time spent communicating is a lot of time.
And we wanna make sure that employees feel safe, that people feel safe, that there aren't groups that are isolated or prejudiced against. Language is complex, so that's what companies need to do.
- Yes, language is a living, changing thing. And if you're not aware of how quickly language is changing, talk to anyone in the tween-to-teen zone. My daughter regularly communicates with me using words that I thought I knew the meaning of. And when I repeat some of the things that she says, I am quickly informed, no, Mom, that is not for you.
- So language is incredibly difficult to get right, particularly between generations. And that's a really hard thing for AI, well, for humans to keep up with, and then for AI to keep up with and be able to give those signals.
- Absolutely.
- Like, you know,
emojis mean different things.
Sometimes you think
it's just a smiley face
and no, you have just horrifically
offended your coworker.
- Yeah. And think about non-native speakers. I mean, there are...
- Yes!
- Words that have different meanings, and what may be a well-intentioned word actually lands very differently. And if AI isn't picking up on the subtleties and doesn't understand the nuance of the speaker, what their native tongue is versus the language they're translating into, that can create a whole host of issues. So it's not just teen-to-adult language, which, by the way, I suffer from as well. And I've learned never to repeat my kids, because I don't understand the intention. But yeah, it's insane.
- I like to do it just 'cause it makes her so uncomfortable. True, true. Yes. Catherine, what advice would you give to organizations looking to foster a culture of accessibility and inclusivity when implementing AI technologies?
- Yeah, and I'd just like to jump off of that too: language is so important when we're talking about people's identities as well. So we need to make sure that that is part of the language that we're using in our systems too.
Advice, again, I'm gonna say
hire people with disabilities.
I think that that is crucial.
We partner with an organization called Inclusively. It's a job board for people with disabilities. We've hired individuals from there, and you all are welcome to as well. There are many more like it, but there's a huge talent pool out there of people with disabilities who are wonderful resources for your companies.
And then make your company accessible, so that those folks who you do wanna employ, and whose perspectives and expertise you want, can thrive at the company.
I'd say another one is making sure you're building, as we've talked about, from the very beginning, not trying to retrofit it at the end. There's an analogy, and I think we heard something about a recipe and baking earlier on: it's a blueberry muffin, and not trying to put the blueberries into the muffin after it's already baked. You end up with a less attractive, worse-tasting product. And then it's also time-consuming to put blueberries into every single muffin, right? So bake it in from the beginning, for your own benefit and time as well as for your customers.
- Yeah. That is amazing.
- I did not make it up. That was so good. It was Cordelia, I think.
- Yeah. Well, Rachel, your role and your team, I just, yeah, again, I'm biased, but your team rolled out our AI Acceptable Use Policy last year and continues to update it.
So talking about living,
breathing, changing things,
what are some of the common
pitfalls organizations face when
trying to implement ethical AI practices
and how can they avoid them?
- Oh, it's a good question. I'll start with an overarching statement: don't wait for regulation to tell you what to do, number one, because you'll be behind, first of all, and not to mention, as we were saying, this is actually a business imperative and it's the right thing to do. So don't wait for the government to tell you what to do. I'll start with that, but there's a lot to say to answer this question.
Obviously we have a whole office that is dedicated to this work.
every company has that,
but I would actually start
with some really basic things.
Number one is get your data ready, right? Agents and AI are fully dependent on trusted, clean, well-structured data. If you want reliable, useful results, number one, make sure you have a really strong data collection and management process in place. It sounds basic, but it's really foundational to this work.
Number two, relatedly, really having a strong and robust data security and privacy system set up. So making sure that, as your agents and your AI are taking in your customers' data, you're compliant with the various data privacy regulations, and that you're doing regular security audits, right? These are just basic maintenance. If you want to do AI responsibly, this is the baseline.
And then thirdly, I would say, know that you're going to need humans to oversee and monitor. As we were saying, these things change constantly, so you need to be watching these systems and using that to iterate and improve your product constantly. So I think it's just having those mechanisms set up, and then, building on that, making sure you're hiring people who have different experiences, so that as you are iterating, you're able to identify these different problems and challenges and unintentional harms and be able to address them. So that's what I would put out.
- You've heard it here first: you need to be hiring these people. Matt, how can businesses balance innovation and ethical responsibilities in AI development without stifling creativity or market competitiveness? I think we've heard that sometimes doing this work stifles innovation, but I'm willing to bet you disagree.
- Yeah, I mean, I think you can have it both ways. In fact, I think you need to have it both ways. Just go back to the stat: 15 to 20% of the world's population is neurodivergent. If you extrapolate that to the US workforce, as Deloitte did, that's 15 to 20% of your people. Now, they may not come forward and say they have this issue, but it's just the math of the workforce.
And so the business case is there. It is really important that we get businesses to start thinking in terms of the business case.
When businesses think that way,
then software companies
will build that way.
And I think what we're
hearing is you have companies
that are responsibly
approaching the issue.
And we need every company to
responsibly approach the issue.
And it always starts with the customer. If the customer understands the great unlock that is in front of them, and that is the opportunity to enable a workforce, regardless of whether they're disabled, neurodivergent, whatever it is, you need to do it, because again, think through a 10, 15, 20% uplift in productivity.
And if you think about that at the very beginning, what would you do differently? What would you unlock?
What would that mean for your customers
who will buy that product?
And then you get into
a cycle of innovation
and sensitivity, which I
think is what we all need.
- Yeah, Leah McGowen-Hare, who was speaking earlier, has talked in many talks about the amplification effect. These are all examples of one plus one equals three. And when you can recognize that and bring those things together, we really do have something that is just much, much better than when it's a homogenous group of people coming up with a homogenous solution that works awesome for them and then not so well for everyone else.
- And I'd say the younger generation is being more vocal about being neurodivergent. They're asking for what they need in their education, and for what they need to be successful in their educational journey, proudly saying that they are dyslexic or autistic, asking for what they need, and asking the world for products that work for them. We have the benefit of being able to engage with them, so that they are stepping forward and saying, no, I want you to hear my voice. I want to be represented. I want to be involved in this process.
- Well, I have one final question for all of you, and maybe we'll start with you, Rachel. Looking ahead:
What is one actionable step
that organizations can take today
to build a more inclusive
and ethical AI landscape for the future?
- Yeah, I'm actually going to build on the others' answers: I think training the next generation is so important, giving us the tools to be able to deploy and use AI, making sure that it is accessible across the board, and really investing in that.
I know at Salesforce we're investing in our employees, but I'm also really proud that we are, you know, offering the premium Trailhead AI training for free, a huge investment by our company. I'm really proud of that. As well as the hands-on training, like we're having here today, to build your agents, as well as in the Tower.
So that's what I would call out as an action item for folks in different organizations: where can you plug in, where can you give back and help really equip, you know, the broader workforce with this skillset?
- Yeah, a theme we heard was from the ground up: build product from the ground up with ethical AI in mind. And I'll go back to your point on data: really understand the data, understand what's in the data, understand where it's coming from, make sure there are loops to understand if there's bias in the data, and make sure there's an ability to capture that bias and weed it out. And I think you will build better products.
- Yeah. I think you
know what I'm gonna say.
Hire people with disabilities.
We partner with an organization called the Blind Institute of Technology, and speaking of Leah, she presented the golden hoodie to the founder of the Blind Institute of Technology at Dreamforce a couple years ago. It trains people who are blind or have low vision to become Salesforce admins and developers.
And it is a great resource for implementing Salesforce, as well as for training up the next workforce in your own organization as Salesforce admins and developers.
- I just want to thank all of you for being here today and sharing your wisdom with everybody. And if everyone can please join me in giving a hand to these amazing individuals.