Artificial intelligence (AI) is rightly being touted as a transformative technology, with some experts seeing its potential impact as comparable to that of the steam engine, electricity, and the internet.

Huge amounts of money have already been poured into the technology’s development, reaching over £30 billion last year. From this investment, we’re seeing tangible applications of AI in action, and its influence is already being felt.

The pace of change brought by the technology means that in less than a year’s time, it is estimated that 80% of emerging technologies will be built primarily on AI foundations. In fact, over a third (37%) of organisations have already adopted and deployed AI in some way, according to Gartner.

About the author

Dr. Nicolai Baldin is CEO and founder of Synthesized

One specific growth area for AI, especially as the world becomes more digitised, has been preventing identity theft and fraud. For example, payments giant Visa recently launched an “Advanced Identity Score” tool, underpinned by AI, that helps organisations combat fraud relating to credit and loan applications in real time.

Yet, as technology gets more advanced in dealing with such theft, so do bad actors: 69% of organisations say it is getting harder to proactively manage security threats. The scale of the problem is not to be underestimated, as one in five Europeans have admitted to experiencing identity fraud in the last five years.

More worryingly, the threat is no longer confined to traditional areas like finance; the medical and healthcare industry is being heavily targeted, with the sector suffering the second-highest number of security breaches in 2019. What makes this trend particularly troublesome is that medical records contain nearly as much identifiable data as financial records, if not more, and criminals are using this information to commit insurance fraud.

The impact of these breaches goes beyond mere reputational damage: research has found that revenues drop significantly and, for listed companies, stock prices can fall by up to 5% in the immediate aftermath.

Added to these ever-increasing security and reputational challenges, a significant regulatory compliance burden has been placed on companies to protect customers’ identities. To make the situation even more complex, companies face country-specific as well as supranational data protection laws.

In Germany, there is the Federal Data Protection Act (FDPA), while France has the Data Protection Act, and a host of other countries operate similar regimes. Meanwhile, at a European level, the General Data Protection Regulation (GDPR), introduced in 2018, imposes heavy penalties on companies at fault for data breaches, with fines of up to £18 million or 4% of annual worldwide turnover for the previous financial year, whichever is higher. Despite such potential fines, research from Crown Records Management revealed that over 75% of organisations are struggling with GDPR compliance.

AI would seem like the perfect, automated solution to the myriad challenges organisations are facing right now.

While it is true that the technology offers immediate potential to protect customers’ identities, its effectiveness depends on the quality of data an organisation has. Currently, however, 60% of production data is not used to its full potential, and data scientists still spend more than 50% of their time on data collection and preparation. This is primarily down to companies lacking a data strategy and relying, to date, on original or anonymised data. In essence, original data contains personally identifiable information (PII), while anonymised data retains transactional information; each presents problems from a security standpoint.

However, the emergence of a new approach, synthesized data, offers a real fix to these issues, and it comes with some critical and transformational benefits. Crucially, identifiable information is removed from synthesized data, thanks to randomised changes that AI makes to the original data.

With such changes to the original data, the risk of identity theft from PII is eliminated, while compliance with data regulations like GDPR is ensured. Critically, when implemented correctly, synthesized data gives the same analytical results as real data, according to research.
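To make the idea concrete, here is a deliberately simplified sketch of the principle behind synthesized data: instead of copying real records, a generator fits statistical parameters to the original columns and samples brand-new rows from them, so no synthetic row corresponds to a real customer. This toy example (Gaussian marginals only, hypothetical field names) is an illustration of the general technique, not a description of Synthesized’s actual engine, which models data far more faithfully.

```python
import random
import statistics

def synthesize(records, n):
    """Generate n synthetic records by sampling each numeric field
    from a Gaussian fitted to the original column. A real synthetic
    data engine would also preserve correlations between fields."""
    fields = records[0].keys()
    # Fit per-column mean and standard deviation to the original data.
    params = {
        f: (statistics.mean([r[f] for r in records]),
            statistics.stdev([r[f] for r in records]))
        for f in fields
    }
    # Sample fresh values; no synthetic row is tied to a real person.
    return [
        {f: random.gauss(mu, sigma) for f, (mu, sigma) in params.items()}
        for _ in range(n)
    ]

# Toy "original" dataset (hypothetical fields, fabricated values).
original = [
    {"age": 34, "balance": 1200.0},
    {"age": 51, "balance": 8300.0},
    {"age": 29, "balance": 450.0},
]
fake = synthesize(original, 5)
```

Because the synthetic rows only reflect aggregate statistics, they can be shared with development and testing teams without exposing any individual’s record.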

In addition, it speeds up the sharing of previously sensitive information, making data available ‘on demand’ without the associated security headaches, with the AI engine able to crunch millions of transactions in just ten minutes. This dramatically reduces the time needed to develop and test fraud prevention products and systems.

Threats are appearing at a near-constant pace, and many organisations are looking for actionable, immediate solutions. The truth is that the promise of AI is already being realised through synthesized data, and it offers companies a viable approach right now to tackle evolving security issues quickly and effectively.
