Regulatory Compliance in Digital Screening

An international view of the emerging challenges and opportunities for digital screening

The bulk of global statutory instruments and regulatory controls concerning data security and the protection of personal data are outdated and inadequate. These controls can only be amended through prescribed legislative processes, so their ability to change, respond and remain relevant will inevitably lag behind advances in digital technologies.

For example, the European GDPR was in development from around 2010, entered into force on 24 May 2016 and has applied since 25 May 2018 across the EU, including the UK. Digital advances over this timeline have been exponential and were never envisaged or covered within the original drafting.

As the compliance landscape changes, it provides new challenges and opportunities for Neotas and our peers. The UK-based Information Commissioner's Office (ICO) has been examining the way AI is used not just to market products and services but also to gain unfair competitive advantage and to commit fraud, identity theft and other crimes.

Organisations will be increasingly concerned about AI: what is real and what is fake, what they can trust and what they cannot. Screening services need to be seen to be authentic in every way. Building an image based on trust will be the key to doing business with regulated organisations.

It is ironic that the Information Commissioner is now using AI to examine company cookies and chatbots, helping to decide whether personal data is being captured and stored without consent and whether unfair competitive advantage is being gained through new technologies such as bots and AI.

The UK regulators are piloting a multi-agency advice service called the “AI and Digital Hub”, helping innovators such as Neotas to develop products that meet current and projected regulatory requirements. The hub will provide tailored advice to help businesses navigate the development process and remain compliant.

The regulators have also created the Regulatory Sandbox, a “safe space” where organisations can work together with ICO experts on an innovative product or service. ICO staff are on hand to help organisations work through any tricky data protection issues they encounter. The Sandbox also offers a way to stress-test a product or service before releasing it to the wider market, helping to iron out potential problems before they occur and resulting in a smoother process for organisations and their clients.

The Regulators want organisations to recognise and manage the issues that impact trust in AI as well as benefiting from its adoption.  

Emerging regulatory challenges, particularly in the realm of digital screening and compliance, necessitate a proactive approach from organisations. With regulators increasingly scrutinising AI applications and data protection practices, building trust across supply chains becomes paramount. Opportunities abound for screening and verification services to navigate evolving regulations, mitigate risks, and foster compliance. A collaborative effort between regulators and industry players is essential to harness the potential of AI while ensuring privacy and accountability.

Regulatory Challenges (GDPR and PECR) 

In the UK (and internationally) there are regulations that sit alongside GDPR legislation, such as the Privacy and Electronic Communications Regulations (PECR). PECR has been in place for many years but has recently been updated and given new focus by the ICO, largely due to the impact of AI. PECR is mainly used to protect individuals from the adverse effects of electronic digital marketing. Screeners need to be aware of regulatory changes arising from these overlapping regulations.

To send electronic marketing messages or use cookies, bots or similar technologies, an organisation must comply with both PECR and the UK GDPR. There will be some overlap, given that both aim to protect personal privacy. This duality will particularly impact organisations within a supply chain, which often share data, including personal information.

Regulators are therefore focusing on the impact of AI within the supply chain. They want to understand how AI models function and what information is used for machine learning. The challenge for the regulators will be to prevent unwanted bias or discrimination against minority groups or those who are less widely represented in society, and to ensure, as data protection regulators, that these biases are not carried over into subsequent AI models. This presents an opportunity for OSINT (open-source intelligence) and EDD (enhanced due diligence) providers through the adoption and application of techniques and technology such as pattern identification, sentiment analysis and semantic analysis.
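To illustrate one of those techniques, a minimal lexicon-based sentiment score can be sketched in a few lines of Python. This is an illustration only, not a description of how any particular provider's models work: the word lists below are invented for the example, and a production screening pipeline would use trained, bias-audited models rather than a hand-written lexicon.

```python
# Minimal lexicon-based sentiment scoring sketch (illustrative only).
# The POSITIVE/NEGATIVE word lists are invented for this example.

POSITIVE = {"trusted", "compliant", "transparent", "reliable"}
NEGATIVE = {"fraud", "breach", "fine", "noncompliant", "misuse"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative lexicon hits,
    normalised by total hits (0.0 if no lexicon terms appear)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Regulator issues fine over data breach and misuse"))  # -1.0
print(sentiment_score("A trusted, transparent and compliant supplier"))      # 1.0
```

In practice, scores like these would only ever be one weak signal among many, reviewed by a human analyst before any adverse finding is recorded.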

Read Neotas Case Study on Social Media Screening and GDPR

Digital Regulation Cooperation Forum (DRCF) 

Increased collaboration between regulators is now recognised as a fundamental requirement. An example is the creation of the Digital Regulation Cooperation Forum (DRCF) in the UK. Recognising that the exponential growth of AI will drive rapid change in the compliance landscape, the DRCF consists of four founding regulators whose declared aim is to deliver a coherent approach to digital regulation for the benefit of people and businesses online. The regulators' mandates are diverse but necessarily overlap:

  • The Competition and Markets Authority (CMA) 
  • The Financial Conduct Authority (FCA) 
  • The Information Commissioner’s Office (ICO) 
  • Ofcom (the communications regulator) 

In summary, the DRCF is developing a joined-up approach to address the impact of digital innovation (including AI advances) when applied to personal data. This cross-regulatory group ensures there is a cohesive and collaborative approach to issues that affect society. Such collaboration is fundamental to regulators' effectiveness in identifying issues, producing timely guidelines, sharing knowledge and considering cross-regulatory issues affecting citizens.

Read Neotas Case Study on GDPR and FCRA implications of Social Media Background Checks

Artificial Intelligence, Fear and Trust 

The ICO defines AI as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human interaction. Decisions made using AI can be fully automated or involve a ‘human in the loop’. As with any other form of decision-making, those impacted by an AI-supported decision must be able to hold someone accountable for it.

According to the ICO, people have been generally supportive of the benefits that AI brings. But the research isn't all positive. The US-based Pew Research Center reported in August 2023 that people are becoming less trusting of AI: 52% of those surveyed were more concerned than excited about AI, up from 37% in 2021 and 38% in 2022.

The results are insightful: there is recognition that AI is fast, powerful and potentially very useful, but it can also be negative or even dangerous depending on the viewpoint and the use case. One example, cited by an OpenAI employee discussing unregulated access to open-source AI, was the potentially catastrophic impact on humanity if an unregulated AI system created and operated its own biological laboratory. Another, already visible, example is the use of open-source AI models within disinformation campaigns intended to disrupt democratic elections, which has occurred several times in recent years.

Building trust across the entire client supply chain will be a fundamental requirement in applying these advances in technology. Organisations will be increasingly cautious about who they do business with, and supply chain security will become a fundamental requirement of a successful brand or business.

Read Neotas Article on AI-Based Social Media Checks Without Human Intervention

Opportunities for providers of screening and verification services 

We are all used to the risks of cyber-attacks and data scraping, but AI brings new challenges and opportunities. The more that regulators clamp down on organisations, the more opportunity will be created for compliant providers.

A common theme is that regulators have stated they are not against organisations using AI; they will, however, ensure that AI is used in a sensible, privacy-respectful manner. For example, the ICO recently issued a preliminary enforcement notice against Snap Inc over concerns that it had failed to properly assess the privacy risks posed by its generative AI chatbot “My AI”. The ICO also issued a £7.5m fine to facial recognition database company Clearview AI for non-compliant collection and storage of images of UK residents.

AI will provide benefits to organisations and individual citizens, for example through new innovations to improve customer service, better safety features for online services, or quicker resolutions of common technical issues. There will inevitably be organisations that adopt AI for nefarious purposes, misusing technological advances to harvest data or treat their customers unfairly. There will also be incompetent organisations that fail to respect personal information and use AI to gain an unfair advantage over their competitors.

The regulators will want to ensure non-compliance is not profitable. Persistent misuse of customers' information, or misuse of AI to gain a commercial advantage, will be punished.

For the screening and verification sector, opportunities will continue to grow as AI-based technology develops. Regulators will struggle to enforce or apply outdated regulations, which will create opportunities for them to utilise the skills of trusted third parties such as Neotas. Brand- and reputation-conscious organisations will also utilise the expertise of these third-party products and services as a vital component of their own security programmes. Self-regulatory organisations will emerge, as the NAPBS (now the PBSA) did back in 2003.

This can never be a static environment, as successful organisations will continue to launch or acquire new products and services for delivery to their client base. Ensuring that their product portfolio is fit for purpose and compliant with regulatory requirements will be an increasing but unavoidable cost of doing business for screeners.

Opportunities arising from AI and regulatory pressures will occur in the application of EDD across the three key components of any organisation: people, processes and technology. Some things never change.


In the dynamic landscape of regulatory compliance, the emergence of digital screening presents both challenges and opportunities on an international scale. As global statutory instruments struggle to keep pace with rapid technological advancements, organisations face evolving complexities in data security and privacy protection. Amidst these shifts, regulatory bodies are scrutinising AI applications for potential risks of fraud and privacy breaches. Collaboration between regulators through forums like the Digital Regulation Cooperation Forum (DRCF) aims to foster cohesive approaches to digital innovation governance. Balancing the promise of AI with regulatory vigilance underscores the imperative for trustworthy screening and verification services in an increasingly digitised world.


Schedule a call today! We highlight behavioural risks identified across social media profiles and the wider internet, supplementing the background screening process. Learn more about how we can help you conduct social media screening and background checks in a safe and compliant manner.


Related Content on Social Media Screening and Social Media Background Check

Neotas Social Media Screening and Online Reputation Screening Services:


Neotas Enhanced Due Diligence

Neotas Enhanced Due Diligence covers 600Bn+ archived web pages, 1.8Bn+ court records, 198M+ corporate records, global social media platforms, and more than 40,000 media sources from over 100 countries to help you screen and manage risks.

Book a Demo

Explore Neotas Enhanced Due Diligence

Stay ahead of financial crime threats and compliance challenges.

  • Learn about the amendments made to Money Laundering Regulations in 2023 aimed at bolstering the AML framework.
  • Gain insights into the significant increase in SARs and its implications for compliance.
  • Explore the implications of new legislative measures, including the Economic Crime and Corporate Transparency Act.
  • Discover innovative solutions for compliance that promise to streamline processes and enhance efficiency.

Stay resilient in the face of regulatory challenges. Download the whitepaper today to empower your compliance strategy for 2024.