The year 2023 will be remembered as the beginning of a new era in the relationship between humans and machines. Powerful AI woke from its long slumber and fully entered the public consciousness. Released at the end of last year, OpenAI’s ChatGPT reached 100 million monthly active users in just six weeks. In contrast, YouTube and Facebook took four years to reach that level of usage.
Like the introduction of the World Wide Web in 1991, which made the internet usable for billions of people, generative AI will give everyone an intelligent assistant with nearly everything on the internet in its virtual brain.
ChatGPT, Anthropic’s Claude, Google’s Bard, and other generative AI large language models (LLMs) aren’t the result of immaculate conception. They are the culmination of decades of scientific research and innovation that have brought what was once thought of as science fiction to real life. It’s been nearly 70 years since John McCarthy, a legendary computer scientist, first introduced the term artificial intelligence. He and his colleagues proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Rudimentary chatbots, such as Eliza, were developed in the decades that followed, but it took more than half a century before computers could detect cats in YouTube videos or before Apple’s Siri and Amazon’s Alexa could tell you the time. The current incarnation of generative AI can help users by saving time on routine tasks, gathering and summarizing research, writing code, and more.
Now, with the combination of massive amounts of accessible data, vast increases in computing power, and advances in deep learning algorithms, AI has the potential to largely transform how we live and work, and provide a massive boost to innovation and economic output.
This moment has been presaged in science fiction, in books ranging from Isaac Asimov’s classics to William Gibson’s Neuromancer; in “Star Trek” and movies like “2001: A Space Odyssey” and “Minority Report”; and of course, voluminously through academic journals.
It’s also been evangelized in corporate marketing. Knowledge Navigator, a concept video created by Apple and released in 1987, envisioned an AI future. In the video, a professor organizes his day, searches for information, and connects with colleagues using what could now be described as an iPad-like tablet computer, manipulated with gestures and voice, and guided by a highly advanced, Siri-like digital software agent that knows everything about anything.
Fast forward 36 years to the video Salesforce has now created (watch it here or at the top of the page). In it, Jordan and her co-workers are working through a product marketing and supply chain challenge with the 2030 version of Salesforce Einstein. You can see how the underlying concept hasn’t changed much: AI-powered software agents functioning with intelligence exceeding that of a human.
Of course, the hardware has been updated and the tasks undertaken are far more complex than in Apple’s Knowledge Navigator. In Salesforce’s video, Einstein seamlessly orchestrates complex business processes, managing a matrix of autonomous agents that perform specific tasks and interoperate with one another. Einstein connects AI agents, collaborating with their human counterparts, across sourcing, logistics, marketing, sales, operations, legal, and, based on its values configuration, sustainability, to handle the end-to-end process of meeting rising demand for a product.
“Of all the things we felt we needed to wrestle with and articulate, we saw multi-agent autonomous assignments with human in the loop interaction as the most worthy of exploration and elaboration,” said Mick Costigan, VP of Salesforce Futures, who worked on the video.
While the Einstein “Copilot” portrayed in the video may not be fully ready by 2030, it’s worth recognizing that technology is moving faster now. It won’t take another 36 years to make this vision for the AI future a reality and reshape our world in ways that we haven’t yet imagined.
However, many challenges lie ahead as AI rapidly becomes more powerful and pervasive. A majority of people don’t believe that AI is safe and secure; they worry about data privacy, misinformation, bias, and toxicity. They don’t trust the technology, and they fear job displacement and the potential for widespread harm from AI.
That’s why Salesforce supports tailored, risk-based AI regulation that differentiates contexts and uses of the technology and ensures the protection of individuals, builds trust, and encourages innovation. Salesforce Einstein includes the Einstein Trust Layer, which protects sensitive data while letting companies use their trusted data to improve generative AI responses. What’s clear is that until policies and regulatory standards that help ensure trustworthy AI are in place, highly intelligent, conversant digital assistants will be standing by for duty.
More information:
- See Salesforce’s response to the United States’ AI Executive Order and this recent post with a view on AI regulation
- Read Salesforce’s five guidelines for responsible generative AI development
- Learn about the Einstein 1 Platform