Play the very organised thief online
4/16/2023

Over the course of many generations, humankind has reaped enormous rewards from the advancements made possible by science and technology. How can the relentless pursuit of knowledge be viewed as bad? The truth is that any tool can be used for good or ill, depending on who wields it. This pursuit, fuelled by our curiosity, has brought us to the point where we are trying to decipher and mimic the human mind. The rise of ChatGPT, a text-based Large Language Model (LLM) AI chatbot, represents yet another milestone on this journey.

The real question is not "what can it do?" but "what can it not do?". From accurately fixing coding bugs and creating 3D animations to generating cooking recipes and even composing entire songs, the chatbot has showcased the immense power of AI to unlock a world of incredible new abilities.

On the flip side, AI is widely regarded as a double-edged sword, and nowhere is that more apparent than in cybersecurity. On one hand, cybersecurity experts today have access to AI-powered security tools and products that let them tackle large volumes of incidents with minimal human interference. On the other, amateur hackers can leverage the same technology to develop intelligent malware programs and execute increasingly sophisticated stealth attacks.

Are there challenges with the new chatbot?

Since the launch of ChatGPT in November, tech experts and commentators worldwide have grown increasingly concerned about the impact AI-generated content tools will have, particularly with respect to cybersecurity. According to a recent survey, ransomware and malware are among the top expected sources of cyberattacks in 2023. The question on everyone's mind is: can AI democratise cybercrime?

Now, with the rising popularity of not only ChatGPT but also other players such as Google's Bard, and the first versions of AI-powered search, one can fully expect the barrier to entry for bad actors to be lowered even further than when Ransomware-as-a-Service (RaaS) groups initially shot up in popularity. Glimpses of this were already evident when Singapore's Government Technology Agency demonstrated AI crafting better phishing emails and messages than any human actor could.

Using OpenAI's GPT-3 platform and other AI-as-a-service products, the researchers produced phishing emails generated from personality analysis and customised to their colleagues' backgrounds and individual characters. Eventually, they developed a pipeline that groomed and refined the emails before sending them to their intended targets. To their surprise, the platform also automatically supplied highly relevant details, such as mentioning a Singaporean law when instructed to generate content for their targets.

The makers of ChatGPT have stated clearly that the AI-driven tool has in-built controls to challenge incorrect premises and reject inappropriate requests. Yet while the system technically has guardrails designed to prevent actors using it for straightforwardly malicious ends, a few creative prompts were enough for it to generate a near-flawless phishing email that sounded "weirdly human". Such instances make it clear that for individual bad actors, LLMs like ChatGPT can fill a gap that previously had to be bridged either by skill or by entities such as RaaS groups. As things stand, the cost of deploying such AI tools for nefarious purposes at scale remains too high for individual cyberattackers to justify.