
AI USERS are sounding the alarm after concluding that human beings are facilitating a machine learning "takeover."

"We're getting played by AI, and we don't even know it," one netizen professed in a Reddit post Tuesday.

Some chatbot users are raising concern about the potential over-use of artificial intelligence tools, with one user claiming humans are "getting played by AI". Credit: Getty

"We're not even consuming content the way it was meant to be consumed anymore. We're letting some AI decide what's important for us."

In addition to denouncing text summarizers, the user took shots at generative AI.

"Content creators are now using AI to pump out their stuff too. So now we've got AI creating content on one end, and AI summarizing it on the other," he professed.

"Where the hell do we fit into this picture? We're turning into the ultimate middle men, but in our own conversation. It's like we're playing telephone, but both ends of the line are robots, and we're just passing the message along."

Some users disputed the original poster's claims, including the assertion that the majority of people are using AI or that its usage is prevalent enough to justify concern.

Others argued that the tools are ideally suited to serve users and only pose a threat when they deviate from their intended purpose.

"If I'm using AI to summarize articles, it probably means I'm looking for something," one Redditor wrote. "Where AI gets nasty is when it's pretending to be another human."

One argument that went undisputed was the potential for data privacy violations.

Artificial intelligence - including tools that summarize articles - learns from enormous swathes of data pulled from the Internet.

Much of the appeal of chatbots like ChatGPT hinges on their ability to replicate the patterns unique to human speech.

Microsoft VALL-E 2 is a text-to-speech generator that can replicate human speech with eerie precision

To do so, the models must first be trained on real human conversation.

Meta is just one example of a company training AI models on information pulled from social media.

Suspicion arose in May that the company had changed its privacy policies in anticipation of the backlash it would face for scraping content from billions of Instagram and Facebook users.

As controversy mounted, the company insisted it was training the AI only on content users chose to make public, not on private messages, and that accounts of users under 18 years old were never included.

And what happens when humans are no longer needed to facilitate machine learning?

Models like OpenAI's GPT-4o pull vast swathes of content from the Internet to mimic patterns found in human writing and conversation. Credit: Getty

A phenomenon known as MAD, or model autophagy disorder, describes what happens when AI learns from AI-generated content.

A model might train on its own outputs, or on the outputs of other models.

Researchers at Rice and Stanford universities were among the first to show that, without a constant stream of new, real data, the quality and diversity of a model's responses decline.

MAD poses a problem as more and more AI-generated content floods the web. It is increasingly likely that such material is being scraped and used in training datasets.
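The feedback loop is easy to see even in a toy setting. The Python sketch below is a deliberately simplified stand-in, not the Rice and Stanford researchers' actual experiment: the "model" merely learns word frequencies, then is retrained on its own samples each generation. Because rarely sampled words drop out and never return, the vocabulary collapses within a few generations.

```python
import random
from collections import Counter

# Toy illustration of model autophagy: a "model" that only learns a word
# frequency distribution, repeatedly retrained on its own samples.
# (Hypothetical stand-in for a real language model, for illustration only.)

def train(corpus):
    """'Train' by counting word frequencies in the corpus."""
    return Counter(corpus)

def generate(model, n):
    """Sample n words in proportion to their learned frequencies."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n)

# Fresh, real data: 500 distinct words, each seen once.
real_data = [f"word{i}" for i in range(500)]

corpus = real_data
for generation in range(10):
    model = train(corpus)
    corpus = generate(model, n=500)  # next generation trains on outputs only
    print(f"gen {generation}: {len(set(corpus))} distinct words remain")
```

In this toy loop, each generation loses the words it happened not to sample, and lost words can never reappear. Mixing fresh real data back into each generation's corpus halts the collapse, which is why the researchers stress the need for a constant stream of new, real data.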

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers counter that the issue is an ethical one, as generative AI tools are trained on their work and wouldn't function otherwise.

Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. Lawmakers have responded: in 2016 the EU adopted the General Data Protection Regulation to protect personal data, and similar laws are in the works in the United States.

Misinformation - As AI tools pull information from the Internet, they may take things out of context or hallucinate, producing nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects, such as AI dispensing incorrect health advice.

NewsGuard, a platform that rates the credibility of news sites, has been tracking "AI-enabled misinformation" online.

By the end of 2023, the group had identified 614 unreliable AI-generated news and information websites. As of last week, the number had risen to 987.

The websites have generic names that make them appear to be legitimate news sites. Some contain incorrect information about politics and current events, while others fabricate celebrity deaths.


One Reddit user aptly summarized the discourse.

"Used correctly, AI can be an amazing editing tool," he wrote. "But too many people are lazy and trying to keep the old tech cycle going and using it as sole content creator and editor all in one."
