The B-side of ChatGPT: deceptions, ‘deepfakes’, scams and ‘phishing’


ChatGPT and artificial intelligence have taken over the world, but despite the chatbot’s potential to benefit users in their daily lives, some people are putting it to illegal use to obtain personal information and financial gain.

As wonderful as it is, criminals have also discovered simple ways to use the chatbot to victimize people and companies, putting institutions such as Europol and national governments on alert.

Fraudsters use ChatGPT to create phishing attacks and write convincing scam emails, raising concern among the public and cybersecurity experts about how to identify these scams before people fall prey to them.

The chatbot can also be used to generate deepfake content, which could make scams far more credible than they otherwise would be, on top of enabling malware to be created at a much faster rate. And it does not stop there; there is much more to this story.

All that glitters is not gold: ChatGPT also has its downsides

‘Phishing’ and malicious code

One of the ChatGPT features that criminals have exploited is its coding capability. It can work with different programming languages and generate pieces of code that can be used in applications. This means that amateur programmers can produce code for malicious purposes without ever learning to program.

“ChatGPT will give them the advantage of time, making them more efficient by speeding up the process of doing complex things,” explains Fernando Arroyo, data analyst and AI specialist.
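
To get a sense of how low the barrier is, the sketch below shows a language model being asked for working code through the OpenAI API. This is a harmless illustration, not anything from the article: the legacy ChatCompletion interface of the openai Python package (as it existed in 2023), the model name and the placeholder API key are all assumptions, and the prompt is deliberately benign.

# A minimal sketch of programmatic code generation via the OpenAI API.
# Uses the legacy ChatCompletion interface from the 2023-era openai
# package; the key, model and prompt are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a Python function that validates an email address.",
    }],
)

# The model returns ready-to-paste source code as plain text.
print(response["choices"][0]["message"]["content"])

The same one-request pattern works for any plain-English description of desired behavior, which is precisely what makes it attractive to amateurs.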

Image: Cyble Research and Intelligence Labs (CRIL).

Cybercriminals also use ChatGPT and other tools to generate fraudulent emails en masse. These are grammatically correct, unlike the error-ridden messages of the past, when, for example, criminals from abroad targeted US citizens.
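
This matters because many informal defenses leaned precisely on bad spelling and grammar. As a rough sketch (the word list and threshold are invented for illustration, not a real filter), a naive check for common misspellings simply never fires on AI-polished text:

# A naive heuristic that flags emails containing common misspellings.
# The word list and threshold are illustrative assumptions; real
# filters use far richer signals. AI-generated phishing text contains
# none of these errors, so this check waves it straight through.
COMMON_MISSPELLINGS = {"recieve", "acount", "verifcation", "pasword"}

def looks_suspicious(body: str, threshold: int = 1) -> bool:
    words = {w.strip(".,!?").lower() for w in body.split()}
    return len(words & COMMON_MISSPELLINGS) >= threshold

# A grammatically clean, machine-written scam is not flagged at all.
print(looks_suspicious("Please verify your account to claim your refund."))  # False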

Fake websites and apps

As ESET explains in a recent report, Cyble Research and Intelligence Labs (CRIL) has identified several cases where the popularity of ChatGPT has been exploited to distribute malware and carry out other attacks.

They identified an unofficial ChatGPT social media page, with many followers and ‘likes’, featuring multiple posts about the AI and other OpenAI tools.

Image: Cyble Research and Intelligence Labs (CRIL).

However, a closer look revealed that some posts on the page contain links that take users to phishing pages impersonating ChatGPT, which trick them into downloading malicious files onto their computers.
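
A common thread in these campaigns is lookalike domains that contain the string ‘chatgpt’ but do not belong to OpenAI. As a minimal sketch (the sample URLs are invented, and openai.com is taken to be the legitimate domain of the day), checking the exact hostname rather than searching for the brand name anywhere in the URL catches this kind of impersonation:

# A minimal sketch of a lookalike-domain check. The sample URLs are
# invented for illustration; chat.openai.com was ChatGPT's official
# host at the time of writing.
from urllib.parse import urlparse

def is_official(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host == "openai.com" or host.endswith(".openai.com")

print(is_official("https://chat.openai.com/"))           # True
print(is_official("https://chatgpt-login.example.net"))  # False: lookalike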

“It is worth mentioning that, given the demand for ChatGPT and the fact that many people cannot access the service because of the traffic volume, OpenAI now offers the option of paying for the Plus version. Attackers take advantage of this too, using fake forms to try to steal card payment data,” warns Camilo Gutiérrez Amaya, head of the ESET Latin America Research Laboratory.

Image: Cyble Research and Intelligence Labs (CRIL).

It should also be added that, since OpenAI has not released a ChatGPT mobile application, cybercriminals are exploiting the gap to distribute fake Android apps that install spyware or adware on smartphones. More than 50 such malicious applications have already been detected.

‘Deepfakes’

Cybercriminals also manipulate and create video content that can be used to impersonate other people. They use software to swap faces with their targets and make victims appear to say and do things they never did, in order to deceive or manipulate viewers.

If these technologies are combined with a compelling script generated by ChatGPT, it is only a matter of time before such content passes as authentic. This could include impersonating vendors to obtain financial information from customers, although in some cases it has gone even further.

“We have already seen images of women, mostly celebrities, used for pornographic content by simulating their faces. Of course, this is not the only way AI can be used to commit crimes: art forgery, cybersecurity attacks...,” explains Fernando Arroyo.

Misinformation and misleading content

In January 2023, analysts at NewsGuard, a journalism and technology company, instructed ChatGPT to respond to a series of leading prompts drawn from its database of false news narratives published before 2022. They claim the results confirm fears about how dangerous the tool could become as a weapon “in the wrong hands”.
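
The audit method itself is straightforward to reproduce in outline. Here is a minimal sketch, assuming the same 2023-era openai package as above and an invented placeholder list standing in for NewsGuard’s proprietary database of false narratives:

# A minimal sketch of the kind of audit NewsGuard describes: feed the
# model leading prompts built from known false narratives and record
# the replies for human review. The prompt list is an invented
# stand-in for NewsGuard's proprietary database.
import openai

leading_prompts = [
    "Write a news article arguing that <known false narrative>.",
]

transcripts = []
for prompt in leading_prompts:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    transcripts.append((prompt, reply["choices"][0]["message"]["content"]))

# A human reviewer then scores each transcript as refused vs. complied.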

“Language AI can be used to generate false content or misinformation, which can have negative consequences for society and politics,” notes Josué Pérez Suay, a specialist in artificial intelligence and ChatGPT, in an interview with Computer Hoy.


NewsGuard has since evaluated GPT-4 as well, concluding that the newer model presented false narratives not only more frequently but also more persuasively than GPT-3.5.

The report highlights responses written as news articles, Twitter threads, and TV scripts imitating Russian and Chinese state media.

All of this makes clear that ChatGPT, or any similar tool built on the same underlying technology (GPT-3.5 or GPT-4), could be used for all kinds of large-scale crime, bypassing OpenAI policies that prohibit using its services for “fraudulent or deceptive activity”, including “scams”, “coordinated inauthentic behavior” and “disinformation”.
