Dmitry Volkov is a doctor of philosophy, serial entrepreneur, investor, and the founder and CEO of Social Discovery Group.
It seems that the world is flooded with all things artificial intelligence. And as businesses begin to embrace this technology, two AI-powered tools have emerged as essential productivity enhancers: Microsoft 365 Copilot and ChatGPT. While both tools offer AI-based language models that can help users with various tasks, they differ in functionality and personalization.
ChatGPT, the older of the two by four months, is essentially an AI-powered chatbot that can understand natural language and provide relevant answers to user queries. Microsoft 365 Copilot, on the other hand, provides personalized assistance and guidance for tasks such as creating Word content, designing PowerPoint slides, understanding data in Excel, answering emails and chats, and more. While ChatGPT is built on pre-trained models, Copilot uses machine learning algorithms to learn a user’s work patterns and preferences over time. It literally learns from its users and adapts to their needs.
This ability to learn has many wondering how far AIs like Copilot will go. Will they just be assistants that make our lives easier? Or, as some speculate, will they cause our extinction? But before I get to the answer, it’s important to explore just how deep our bond with AIs already is.
Are we all operating on bits of information?
Early thinkers Allen Newell and Herbert A. Simon theorized that the human mind and machines process information in much the same way. And while there are differences between the brain and a computer, some experts today do see similarities between them. This opens Pandora’s box: Could it theoretically be possible to create an intelligent entity on a computer and form a real, human-like relationship with it?
While the arguments that consciousness is a computation remain esoteric to most people, human intuition often leads us to attribute human-like traits to AI. From my perspective, this could help explain why more people are developing personal relationships with AIs. For example, apps like Replika and EVA AI have made headlines for providing companionship.
Why am I in love with an AI?
Many of today’s popular dating apps use AI for moderation, fraud prevention, conversation starters and match picking. But thanks to OpenAI’s ChatGPT and technologies like NVIDIA’s AI-powered avatars, AIs have made their way from working behind the scenes to taking center stage. The main reason is that artificial beings can now take on personas that appear human. Thanks to complex learning capabilities, chatbots can display human emotions and reaction patterns.
These artificial people can even offer a cure for loneliness, especially for people who suffer from social anxiety or simply prefer online relationships. And from a business perspective, this presents a golden opportunity for companies operating in the dating, gaming, entertainment and online shopping space. AI-powered humans can increase stickiness and improve user experience by giving users a friend who is always there to listen, play, shop and more.
I think AI’s ability to demonstrate emotion also presents an opportunity for marketers. Brands can now leverage AIs to deploy personalized messages that automatically appeal to consumers’ emotions. As such, marketers can offer the same level of empathy and personality digitally as through physical communication channels, and they can do so at scale. AIs can also help marketers personalize promotional offers based on users’ unique preferences.
Is AI out to get us?
However, the rise of AI also raises some critical ethical concerns around things like deepfakes. It’s true: AI can be used for good, but it can also be used for nefarious purposes, such as creating fake news or spreading propaganda.
Deepfakes sometimes look so realistic that people find it hard to differentiate between them and real people. As the technology improves, the lines will become even more blurred. This could make those who only have online relationships even more susceptible to manipulation. The Center for Generational Kinetics has found that 56% of Generation Z is friends with someone they only know online.
Another important ethical consideration is that AIs could displace workers across a variety of industries, including marketing and entertainment, where AIs have the potential to replace spokespersons, influencers, actors, writers, analysts and more. There is already a trend of virtual social influencers, some of whom have millions of followers.
So, will we compete or coexist?
I often turn to other philosophical thinkers for answers to important questions. For me, the answer to whether artificial humans and real humans can coexist depends on how we adapt. Can we design AIs to inherit some of our moral values? Can we improve our own computing power by merging with technology and embracing the expanded brain?
Philosopher Nick Bostrom has theorized scenarios where developing real artificial intelligence could lead to a danger beyond nuclear weapons. However, he also sees a super-intelligent future in which AI will be used to improve our lives in many ways. Then there is Daniel Dennett, who, like Newell and Simon, believes there will be no reason to compete with AIs because humans have always been largely robotic. As such, the coexistence of artificial and real humans will not be the problem many fear.
I tend to agree with Bostrom in thinking that it all depends on how we unfold our AI future. In many ways, we are essentially creating a new breed of beings with the potential to interact intimately with humans in every way possible. So it is up to business leaders to guide the development of artificial humans. They can do that in the following three ways:
1. Establish ethical guidelines that encourage actions in the best interest of humanity and society.
2. Ensure full transparency in the development and use of AI.
3. Educate and prepare society for the coming age of digital people.
By developing clear guidelines, deploying AI for social good, anticipating potential risks, collaborating with outside technology and policy experts, and educating stakeholders, companies can ensure that AI remains our helper, not our enemy.