According to Facebook owner Meta, it has found more than 10 malware families and over 1,000 malicious links being promoted as tools offering access to OpenAI's ChatGPT.
Meta, the parent company of Facebook and Instagram, has announced that it discovered a large number of threat actors leveraging public interest in AI tools such as ChatGPT to trick people into downloading malicious apps and browser extensions that can steal sensitive data and even cryptocurrency.
While some of these fake ChatGPT tools failed to deliver any chatbot functionality, others offered an experience close enough to fool users into believing they were using a trusted product. The fully functional fakes are reported to have planted malicious files on users' devices, which then go on to access sensitive data.
Hackers have previously used a similar tactic, exploiting user interest in cryptocurrencies with malicious crypto offers that at times cost victims their entire wallets.
Speaking at a press briefing, Meta Chief Information Security Officer Guy Rosen said that for hackers, “ChatGPT is the new crypto.”
Discussing the security risks posed by generative AI, Rosen said the company is already preparing its defenses against the technology, since it can easily produce human-like writing, music and art.
When asked whether AI could be used to create disinformation campaigns, Rosen said it is too early for AI to feature in information operations; however, he expects “bad actors” to use generative AI tools to speed up or even scale their operations.