By Dianna Booher—
You’ll have a hard time picking up a publication from the past few months that does not include an article about artificial intelligence, especially OpenAI and its ChatGPT capabilities, as well as Google’s Bard. Instead of speculating about every possible application of AI, I’ll take a different, non-technical tack here.
Recently, Elon Musk and Steve Wozniak, along with hundreds of other technologists, researchers, and CEOs, called on AI labs to pause work on these AI systems for at least six months so the world can assess the risks these technologies pose.
Admittedly, ChatGPT can draw enthusiastic applause with capabilities like these:
- Holding a logical conversation with a human
- Brainstorming ideas for ad copy
- Generating a complete blog article or book based solely on a user’s question or idea
- Answering a question about how many pets have died in natural disasters around the world in the past seven years
- Translating a German contract into English
- Softening the tone of an angry email before you hit “Send”
- Routing incoming calls at large companies, letting the bot decide which department can resolve a customer’s issue
- Suggesting six good listening habits to improve your relationship with your partner
So why should anyone care about such AI technology? After all, we’ve been typing questions and topics into Google for years, receiving information and opinions to help with our physical ailments, finances, and family relationships. We use GPS to drive to grandma’s house for vacations. We ask Siri for the location of the nearest Sonic.
So how does using ChatGPT (and other similar AI products) differ?
Let me count the ways, especially those that most affect the reputations of authors, speakers, consultants, and even large companies. These issues may even get in the way of a career or business.
Think accuracy, plagiarism, copyright and trademark infringement, cheating, misinformation reaching the world en masse, and, most important, personal and business credibility. Let’s look at them one by one:
AI accuracy
Unlike Google, which returns answers to your question with a long, long list of sources from which you as a user can judge the correctness and reliability of the answers, ChatGPT gives you no idea where its information comes from. You may be reading a list of six causes or treatments for pancreatitis from the Mayo Clinic or from Uncle Billy Bob’s blog about his own diagnosis.
ChatGPT scrapes information from undisclosed sources. In fact, users can enter the same question and get completely different answers, as happened when three of my speaker friends debated the issue in a live online demonstration. All three typed in the same question, and all three received and posted widely differing answers.
AI plagiarism and copyright and trademark infringement
Imagine the situation where two different marketing teams use AI to generate ideas for an ad campaign, and ChatGPT sends both teams the same ideas. In an Adweek article, Trishla Ostwal reminds us that AI tools are currently unable to establish trademarks and copyrights.
Because ChatGPT does not disclose the sources of its information, authors and speakers may inadvertently quote other writers’ or speakers’ work, or use their brand names, without realizing the infringement. That, of course, opens the door to lawsuits over copyright and trademark infringement.
Authors distinguish themselves by their expertise and writing style. Speakers brand themselves through what they call their “signature stories.” Neither takes lightly those who copy or steal their work.
Professors and public school teachers are complaining that students are using AI tools to write their essays. Often they become aware of this only after receiving three or four very similar papers with the same key points. In a recent discussion, a college professor made an excellent point: teachers and professors need to develop better ways of evaluating their students’ work, based not on repetition of historical or scientific facts but on creativity and analytical thinking.
In fact, a court recently ruled that images created with AI tools cannot be copyrighted or trademarked.
For authors, speakers, or consultants, adding a byline to articles and books generated by AI (and lightly “edited” by the purported author) and claiming the work as their own destroys credibility. Yes, even books! I’ve seen an ad from a book coach encouraging clients and prospects to “write” their entire book by using AI to generate the copy in minutes!
Claiming AI-generated ideas and texts as your own boils down to a matter of personal integrity – or lack thereof.
AI and misleading information
Can you imagine a world flooded with misleading and inaccurate information on every social media platform in existence, and media that report opinions as “facts”? What about the potential for AI to change the words of political leaders by manipulating or faking video clips? Or to alter photos so that what you see never really happened?
We’ve had such technology for a while. Remember the occasions when Osama Bin Laden and ISIS members released videos to the world, and our political leaders told us to “verify authenticity” before such videos went out to news outlets?
Imagine a world where we need an agency or board to verify the authenticity of every video, audio clip, or image that could affect our health or physical safety.
The time crunch
When I hear frequent users talk about the time they invest “chatting” with ChatGPT, asking questions, entering data, and probing the tool to tease out nuances about their topic or problem, the classroom comes to mind. Students often spend hours and hours devising schemes and making cheat sheets for tests; had they spent the same amount of time learning the required information, they would have been much better off.
Becoming proficient at using AI tools, even for the best of reasons, can become a time crunch.
Personal and business credibility
At this point in the article, you may be thinking, wow, this author seems to be stuck in the way-back machine, resisting useful technology that many claim will change the world.
So let me be clear about where I stand: I embrace technology that makes our lives better. And we now have AI tools that can do both good and harm. We can use AI for both noble and evil purposes. (Those with malicious intent are already bragging about how they’ve put misleading, harmful information into the stratosphere.)
As with so many other tools and techniques, the way AI tools are used, and the claims made about their output, come down to personal integrity and corporate credibility. The future will reveal the moral bearings of individuals and our culture.
Dianna Booher is the best-selling author of 50 books, including Communicate Like a Leader. She helps organizations communicate clearly and individuals expand their influence. Follow her at BooherResearch.com and @DiannaBooher.