ChatGPT is already causing problems.
The OpenAI text generator can quickly produce passable language for almost any query, even if it isn't especially good at producing writing that is factually accurate or engaging. That alone is pretty amazing. And even with a plethora of built-in filters, it can still be quite risky.
Unsurprisingly, users have already found less-than-ideal uses for the technology. Anything with the capacity to generate content out of nothing is bound to produce something harmful eventually. It was really just a matter of time.
We’ve gathered six of the creepier—or at least dubious—uses people have already found for the app. Remember, this is all happening while the app is still in its infancy, before it has even become widely used.
1. Producing Malware
The fact that ChatGPT can produce malware is genuinely alarming. Not because malware is particularly novel, but because ChatGPT can perform this function indefinitely. AIs don’t sleep. Cybersecurity researchers “were able to construct a polymorphic program that is highly elusive and difficult to detect,” Infosecurity Magazine reported. To be clear, I am not an authority on cybersecurity.
"By utilizing ChatGPT's ability to generate various persistence techniques, Anti-VM modules and other malicious payloads, the possibilities for malware development are vast."https://t.co/F13gW0Kjzt
— Adam Levin (@Adam_K_Levin) January 19, 2023
In essence, however, the researchers could use the software to produce malware code and then modify that code to make it harder to detect or stop.
Researchers told Infosecurity Magazine, “In other words, we can change the output on the fly, making it unique every time.”
2. Academic Fraud
Okay, this one is less frightening, but still predictable. A program that can generate text about anything is ideal for a kid trying to cheat in class.
Kids also love to cheat. Teachers have already reported catching students in the act, and school districts around the country have banned the app. Though it’s hard to imagine the trend slowing, it’s likely AI will simply become another educational tool students are free to use.
3. Spamming On Dating Apps
Okay, so perhaps spam isn’t the right word, but people are using ChatGPT to chat with their Tinder matches.
People are letting an AI handle the getting-to-know-you portion of the conversation. Even if it isn’t inherently frightening, it’s rather unsettling to consider that you might be chatting with an app rather than a potential partner. Dating has turned into growth hacking.
4. Scamming and Phishing
This one is harder to show has already happened, but it stands to reason that ChatGPT or other AI writing tools would be ideal for phishing. Phishing messages are frequently easy to recognize by their poor language, and ChatGPT can fix all of that. Experts say this is a pretty obvious use case for the app.
To test whether this was genuinely feasible, Mashable asked it to clean up the sloppy English in a real scam email. Not only did it do so in a matter of seconds, but one of the outputs also spontaneously began blackmailing the fictitious reader. So amiable!
5. It Can Deceive Hiring Managers
Everyone knows how hard applying for jobs is. It’s a seemingly endless and often discouraging process that, ideally, ends with the satisfaction of a good job. But if you’re out there seeking employment, an AI app might land you the position of your dreams.
According to one consultant, ChatGPT outperformed humans at writing applications by 80%. It’s easy to see how ChatGPT could hit all the key phrases that appeal to hiring managers or, more likely, slip past the filters used by HR software.