Over the past two years, generative-AI has made significant advances, particularly in its ability to generate realistic-looking outputs. This includes large language models like ChatGPT, as well as text-to-image and text-to-video generators, whose results are truly impressive. This does not mean there is no room for improvement, but that is a topic for another conversation. Some generative-AI technologies are already boosting our productivity, for example by helping software developers write code more efficiently or by helping creative professionals quickly generate and test new ideas. As these technologies continue to mature, more areas are likely to see similar benefits.
Products based on generative-AI, such as chatbots like ChatGPT, will likely be incorporated into more and more software products. One factor currently limiting the integration of large language models directly into devices is the size of these models and the hardware required to run them, but both are improving rapidly. Generative-AI products are set to become the personal assistants we were promised more than a decade ago, and the technology finally appears to be catching up to the expectations. This adoption is likely to happen so quickly that, in just a few years, most of us will find it as unthinkable to perform everyday activities without these assistants as it is to be disconnected from the Internet.
Equally impressive as our capacity for creating new technology is our capacity to use it maliciously against others. As I mentioned previously, generative-AI can tremendously boost our productivity; however, it also has the potential to harm. For instance, ‘deepfakes’ have existed for some time but have only recently achieved high quality, especially in video, with fewer limitations on their real-time use. This technology allows the creation of video and audio content impersonating real individuals based on existing material such as audio clips, photos, or videos, or it can fabricate entirely fictitious persons.
The motivations behind these deceptions vary. Criminals often aim to defraud individuals or organizations of money and other assets, but the aim can just as easily be to defame someone or to influence a group of people. For example, a former athletic director in the US was recently arrested and charged with using voice-synthesis software to impersonate a school principal, leading the public to believe that the principal had made racist and antisemitic comments. Similarly, hostile states may try to sway public opinion or influence election outcomes, and generative-AI greatly enhances the capabilities of such attackers.
Furthermore, personal frauds are becoming increasingly sophisticated. A common scam involves fraudsters mimicking a relative in distress during a phone call, tricking victims into urgently sending money. Coupled with voice synthesis, it is very likely that many people would fall for this type of fraud in the heat of the moment. Romance scams also utilize real-time ‘deepfakes’ to establish fake relationships for financial gain; for more information on this, do an Internet search using the term “Yahoo Boys”. In essence, as generative-AI technologies advance, the authenticity of any digital content becomes increasingly questionable, and even the most digitally literate among us will find it increasingly difficult to detect scams and frauds.
As in warfare, where the advancement of one technology typically prompts the development of countermeasures, the rise of generative-AI based fraud poses a similar challenge. The question is whether we can fight fire with fire, using other types of AI to detect fraud, or whether a different technology will emerge to validate the authenticity of digital material and to certify that we are who we claim to be online. For example, if digital content created by generative-AI carried a distinct ‘fingerprint’, our personal assistants, themselves AI-based, could potentially detect it and verify the content’s authenticity. Or perhaps, as happened with computer viruses, a completely new industry will emerge to protect us from AI-based fraud.
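To make the ‘fingerprint’ idea concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not any provider's actual scheme: it assumes a provider computes a keyed fingerprint (HMAC-SHA256) over generated content, which a verifier holding the same key can later check. Real provenance efforts, such as the C2PA standard, use public-key signatures instead so that anyone can verify without a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared key; a real system would use public-key certificates
# rather than a secret shared between provider and verifier.
SECRET_KEY = b"provider-signing-key"

def fingerprint(content: bytes) -> str:
    """Compute a keyed fingerprint (HMAC-SHA256) over the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, claimed_fp: str) -> bool:
    """Return True only if the fingerprint matches the content exactly."""
    return hmac.compare_digest(fingerprint(content), claimed_fp)

generated = b"AI-generated transcript"
fp = fingerprint(generated)

assert verify(generated, fp)                 # untampered content passes
assert not verify(b"edited transcript", fp)  # any modification fails
```

The key property this illustrates is that even a one-character edit to the content invalidates the fingerprint, which is what would let an AI assistant flag tampered or unlabeled material.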
#ai #generativeai #fraud #scam