ZDNET’s key takeaways
- How you talk to AI may shape how you treat people.
- Rudeness to machines can normalize command-driven behavior.
- Politeness to AI is really about self-discipline and well-being.
In 2018, Lauren Johnson, a Lynn, Massachusetts mom whose then-six-year-old child is named Alexa, created a website called Alexa is a human.
On it, she says, “Imagine your child is being bullied, but you can’t help them. Imagine if the bullying continues from school, to home, to the car, and to stores, and you can’t find a safe place. Imagine it evolves into complete strangers continually bullying your child in public. Just imagine. We don’t have to.”
Little Massachusetts Alexa isn’t alone. A 2021 Washington Post article by Alexa Juliana Ard (who now goes mostly by Juliana) spotlighted how people named Alexa endure insults, degrading comments, and treatment as servants, and have even been forced by their employers to change their names so as not to interfere with Alexa devices in their offices.
Normalization of command
Not only are people treating human Alexas like robots, they’re learning what scientists call “normalization of command.”
Also: 10 ChatGPT prompt tricks I use – to get the best results, faster
Researchers at the University of Cambridge’s School of Clinical Medicine report that interacting with AI voice assistants like Alexa during early childhood can license impoliteness and erode empathy, as children grow accustomed to giving orders.
In other words, as tech investor Hunter Walk described it, “Amazon Echo is magical. It’s also turning my kid into an asshole.”
Essentially, the fact that these AI assistants are machines makes it okay to be rude to them. They’re certainly frustrating enough at times for the rudeness to seem justified. But there are other implications, as well.
For example, a UNESCO (United Nations Educational, Scientific and Cultural Organization) report published before the pandemic shows how the prevalence of female-based identities in AI assistants “reinforces gender biases and encourages users to treat feminine entities as subservient.”
Online disinhibition effect
But this behavior isn’t limited to human-to-AI. As early as 2004, psychologist John Suler published a paper titled “The Online Disinhibition Effect” in the journal CyberPsychology & Behavior.
In it, he looked at why people behave differently online than when interacting face-to-face. He contends that factors like anonymity, invisibility, and lack of immediate social consequences reduce psychological restraints and give people the feeling that it’s okay to behave with more hostility or rudeness.
Now, let’s zoom up to the present day, when we don’t just have command-and-respond assistants like Siri and Alexa, we have full chatbots like ChatGPT and agentic AI tools like Claude Code. Here, the online disinhibition effect can be in full flower.
My interest in this isn’t really about how we talk to our AIs. Rather, it’s about what talking to AIs is conditioning us to do when we communicate overall.
The Overton Window
The Overton Window is a political and psychological concept described in the mid-1990s by Joseph P. Overton at the Mackinac Center for Public Policy. It originally referred to the range of ideas the public considers acceptable, and therefore safe for politicians to promote.
Over time, however, the Overton Window has been used to describe how the scope of what we’re comfortable with, whether politics-related or otherwise, broadens or shrinks. As the window expands, behaviors or concepts we might otherwise have been uncomfortable with in the past now become both commonplace and acceptable.
Also: Want better ChatGPT responses? Try this surprising trick, researchers say
My concern, and the impetus for this article, is that how we behave and interact with AIs may inform how we behave and interact with other humans. After all, the experience of chatting with a chatbot isn’t all that dissimilar to the experience of chatting with a colleague, client, or boss over Slack.
I lost my cool with ChatGPT last year when it took me down a deeply frustrating rabbit hole. I’m not proud of that experience. I let myself use profanity and demonstrate annoyance to the AI in a way I hope I never would with a colleague. In fact, it was that experience that provided the inspiration for the behavior practices I’m going to explore in the rest of this article.
Why I’m always polite to AIs
My concern is that some combination of the normalization of command and the online disinhibition effect practiced regularly with AIs could expand my Overton Window of practiced behavior and thereby leak into how I behave with other humans.
I don’t want to get habituated to the point where it’s normal or acceptable to behave rudely to my AIs. More to the point, I want to maintain my practice of being polite, respectful, and friendly to the humans I interact with, and the easiest way to keep that up is to behave the same way with robots.
Also: I got 4 years of product development done in 4 days for $200, and I’m still stunned
Context switching between being polite to colleagues and being demanding with AIs seems like it would be easy enough. But, like decision fatigue, where too many decisions wear you down mentally and emotionally, context switching between AI and human modes can also be taxing.
I don’t want to add the mental load (and behavioral risk) of having to remember that now I’m talking to my client and need to be polite, vs. now I’m talking to the AI and can let loose with whatever crankiness I have bottled up.
Also: Claude Code made an astonishing $1B in 6 months – and my own AI-coded iPhone app shows why
Another reason I strive to always be polite to AIs is that it keeps me in a mindset of collaborative investigation, where I treat an AI as another team member. I have been enormously successful using that approach with OpenAI Codex to create four powerful WordPress security plugins, and with Claude Code to build a complex and unique iPhone app for managing my 3D printer filament inventory and workflow.
Besides, crankiness takes a mental toll all on its own. It can lead to increased anxiety, depression, lowered self-esteem, poor focus, emotional exhaustion, poor life satisfaction, and even physical symptoms like headache, fatigue, insomnia, high blood pressure, and a weakened immune system.
Keeping calm and carrying on, whether with people or with AIs, is better for my own mental and physical health. If a little politeness to an unfeeling machine can help keep me sane and healthy, what’s not to love?
But does the AI care?
Do AI tools have different performance characteristics when treated politely? Studies don’t agree. As a confounding factor to the premise of this article, ZDNET’s Lance Whitney wrote about a study by Penn State University researchers that showed how some AIs can be more accurate when talked to more rudely.
On the other hand, a study presented at the 2024 Conference on Empirical Methods in Natural Language Processing by researchers at Tokyo’s Waseda University found that moderate politeness can sometimes increase AI compliance and effectiveness. However, the researchers warn that over-politeness and obsequiousness can produce worse outcomes.
Finally, the New York Times reports that Sam Altman, CEO of OpenAI, stated that the company incurs a multi-million-dollar cost because some people use “please” and “thank you” in AI chats. That’s because every token in a prompt adds processing load, and across millions of conversations, that extra load consumes additional power and water.
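It’s easy to see how a few courtesy tokens per chat could add up to real money at scale. The sketch below is a back-of-envelope estimate; the figures in it (extra tokens per pleasantry, daily message volume, per-token price) are illustrative assumptions, not OpenAI’s actual numbers.

```python
# Back-of-envelope estimate of the aggregate cost of courtesy tokens.
# All figures below are illustrative assumptions, not OpenAI's real numbers.

extra_tokens_per_message = 4        # e.g. "please" + "thank you" in a prompt
messages_per_day = 2_000_000_000    # assumed daily message volume
cost_per_million_tokens = 0.50      # assumed blended processing cost, $/1M tokens

extra_tokens_per_day = extra_tokens_per_message * messages_per_day
daily_cost = (extra_tokens_per_day / 1_000_000) * cost_per_million_tokens
annual_cost = daily_cost * 365

print(f"Extra tokens per day:  {extra_tokens_per_day:,}")
print(f"Estimated daily cost:  ${daily_cost:,.2f}")
print(f"Estimated annual cost: ${annual_cost:,.2f}")
```

Even with these modest assumptions, four "wasted" tokens per message works out to thousands of dollars a day, which is how pleasantries become a line item worth a CEO’s attention.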
I figure that if you feel comfortable with the resource usage of asking an AI to talk like a pirate, you should probably be okay with using a little more power and water to stay human.
It’s about people
My real conclusion has nothing to do with AI resource utilization, or even how AIs perform. It’s that I want to always be polite when talking to AIs because it is better for my relationships with other humans, my personal cognitive performance, and my overall well-being.
Bottom line: Being polite to AIs isn’t about the AIs. It’s about people. As The Bard of Avon wrote, “This above all: to thine own self be true, and it must follow, as the night the day, thou canst not then be false to any man.”
What about you? Do you find yourself being polite or blunt when interacting with AI tools, and has that changed how you communicate with people elsewhere? Do you worry that habits formed with chatbots or voice assistants can spill over into work, family, or online conversations?
If you have kids, have you thought about what using command-driven assistants might be teaching them? Have you noticed any difference in your own focus, mood, or productivity based on how you engage with AI? And does the idea that politeness has a real resource cost change how you think about it? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
