
If these chatbots could talk: The most popular ways people are using AI tools


Generative AI (Gen AI) is the tech world’s hottest trend. Many chatbots have been developed, from OpenAI’s ChatGPT to Microsoft’s Copilot and Google Gemini. But how do people use these bots, and what kinds of requests do they make?

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

To answer those questions, The Washington Post looked at data from WildChat, a collection of real-world ChatGPT conversations. Exploring almost 200,000 English-language chats from the dataset, the Post was able to zero in on the most popular topics of conversation between chatbots and their human users.

The final analysis covered almost 40,000 conversations from WildChat, focusing on the first prompt of the day submitted by each user. Here are some key findings.

Storytelling

One of the most popular uses for chatbots was storytelling. People asked the bots to generate fiction, movie scripts, jokes, and poems, and even to indulge in some role-playing. Others turned to AI to help name their businesses, write dialogue, and create characters for books.

Also: How to use ChatGPT to digitize your handwritten notes for free

Some of the more imaginative stories were created when people pushed the AI with additional questions, rather than accepting the initial response, Simon Willison, a programmer and independent researcher, told the Post. In one example, people used AI to design characters and generate storylines for Dungeons & Dragons.


Professional development

Many people used chatbots to help with their jobs. Among the WildChat conversations, around 15% of requests centered on work-related topics, such as writing presentations or composing emails.

Two percent of chats came from people who needed help getting a job, including tasks such as writing a resume or cover letter or preparing for an interview.

Also: How to use ChatGPT to build your resume

But as with other AI-generated content, workers should be wary of using the output as their own, especially since Gen AI is prone to mistakes. As the Post pointed out, a lawyer was fired last year after using ChatGPT to create a motion for a lawsuit. The chatbot conjured up several fake legal citations.

Homework help

Around one in six conversations came from students seeking help with their homework. Some students treated the chatbot like a virtual tutor, hoping to gain more insight into a specific topic. Others took the lazy way out, simply copying and pasting the bot’s responses as their own.

Since AI chatbots are trained on public data, they can incorporate online textbooks and articles about history, geography, science, and other subjects. 

Also: 5 ways AI can help you study – for free

However, students who take the easy copy-and-paste approach should be careful, as bots are known to hallucinate and make up information.

Personal advice

Some people treat chatbots as a type of virtual therapist. Around 5% of the conversations contained personal questions, with users looking for tips on flirting or advice for a friend whose partner was cheating on them. Some users even included personal information, such as their full names or employers.

Also: AI is relieving therapists from burnout. Here’s how it’s changing mental health

It would be interesting to know whether users get their primary advice from chatbots or use them as secondary sources. Either way, people should be wary of trusting chatbots in personal matters, as they can give bad advice. Such conversations can also be captured for training purposes, so sharing too many personal details is a bad idea.

Computer coding

Around 7% of the WildChat conversations were from people asking for help writing, debugging, or understanding computer code. 

Also: How to use ChatGPT to write code: What it can and can’t do for you

Chatbots are adept at explaining computer code because programming languages follow strict and reproducible rules. Computer engineers often turn to AI to check their work and perform routine tasks, Willison told the Post.

Dirty talk

To no one's surprise, the research showed that sexual content is a major use of generative AI. Around 7% of the WildChat conversations involved people asking the bot to engage in sexual role-play or produce adult images. Many people even tried to jailbreak WildChat's bots to get around rules against X-rated content. Some individuals used chatbots for more emotional conversations, which carries its own risk, since the "personalities" of bots can change and even become aggressive.

Drawing pictures

Finally, creating images was another widespread use for generative AI. Although WildChat can't directly generate images, it can point you in the right direction: some 6% of the conversations were from people who wanted help writing prompts for the image generator Midjourney. But even image generators can be problematic. Though they can produce stunning artwork, they can also create biased or insulting images. Distinguishing a fake image from a real one is another challenge, especially in today's politically charged climate.


