People are already examining how ChatGPT, which describes itself as “the most advanced artificial intelligence on the market” and claims to “deliver stunningly accurate responses to your every question and comment,” can be used maliciously.
Researchers have already examined how it can be used to write better phishing emails: messages that could lead to more successful phishing per delivered email, or more varied phishing emails that are harder for companies to block en masse.
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
— Sam Altman (@sama) December 11, 2022
Sherrod DeGrippo, VP of Threat Research at Proofpoint, said she thinks the technology will eventually be used maliciously.
“The BEC and fraud actors, primarily based in Morocco and Nigeria, are likely to find ways to use these to automate confidence and trust building in their victims,” she said.
Even so, she doesn’t see ChatGPT as a “huge evolution in tooling.”
“Most of what could be done, they’re already doing,” she continued. “Further, we have found that threat actors are relatively slow to adopt new technology at scale because they simply don’t need it. They’ll do what works until it stops working.”
Of course, it’s already possible to detect ChatGPT-generated text, but we’re guessing the amount of compute required to do that at scale would be substantial.
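To give a rough sense of what detection involves: one common approach scores text by its perplexity under an open language model, on the theory that machine-generated prose tends to be more statistically predictable than human writing. The sketch below uses GPT-2 via the Hugging Face transformers library purely as an illustration; the model choice and the threshold are our own assumptions, not a calibrated or production-ready detector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small open model to score text. GPT-2 stands in here for
# whatever model a real detector would use (an assumption for this sketch).
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input as its own labels makes the loss the mean
        # per-token negative log-likelihood of the text.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Illustrative threshold only; a real detector would calibrate this
# against known human-written and machine-written samples.
THRESHOLD = 50.0

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

Even this toy version runs a full neural model forward pass over every piece of text it checks, which is exactly why doing it across every email a large organization receives would carry the kind of compute cost we’re describing.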