
Service Provider Due Diligence Guidance – Data Protection and Artificial Intelligence
1. Welcome and Purpose
We share data with a wide range of Service Providers across our business. In many cases, Service Providers are given access to data or interact with systems that contain personal data as part of delivering their services.
Our approach to data protection and AI due diligence is designed to ensure that data made available to Service Providers, or accessed on our behalf, is handled in a controlled, transparent and accountable way from the point of access through to its final use and removal.
This approach is grounded in applicable data protection and AI laws and regulations and reflects established industry practice. It also supports the standards we expect as a business in maintaining customer trust, protecting our brand reputation, and ensuring consistent operational control.
2. Our Approach
Before we grant access to our data, and at ongoing intervals throughout the relationship, we apply a structured, risk-based approach determined by the nature of the data involved and how it is accessed or used.
This enables us to apply greater scrutiny to higher-risk engagements while taking a proportionate approach to lower-risk services.
The objective is to ensure clarity on what data is involved, visibility of how it is accessed, and confidence that appropriate controls are established and operating in practice.
3. What We Ask of Our Service Providers
Service Providers are expected to demonstrate that appropriate controls aligned to global data protection and AI laws are not only documented but actively functioning.
This includes lawful handling of personal data, clear limitation of purpose, transparency in service delivery, and effective technical and organisational safeguards.
If we contact a Service Provider to request due diligence, it is because we believe personal data processing or the use of AI is in scope.
This is not a self-certification exercise – it is an evidence- and risk-based assessment, and Service Providers must demonstrate that these controls are embedded in day-to-day operations. Statements should be supported by documentation or clear descriptions showing how controls are applied in practice. Where full documents are commercially sensitive, relevant extracts may be provided.
4. Evidence Expectations
We are seeking evidence that processes are understood, implemented, and consistently followed.
Examples of robust evidence include approved or signed policies, extracts of relevant sections, system diagrams, descriptions of access controls, and explanations of how processes operate in practice. The focus is on clarity and credibility rather than volume.
Responses that rely on generic statements without supporting detail are unlikely to be sufficient.
5. What Good Looks Like
Strong responses are clear, specific, and directly aligned to the service being provided. Rather than presenting theoretical frameworks or generic policy statements, effective submissions demonstrate how controls are actually applied in practice. Documentation should reflect real operational processes, with examples or descriptions that show how requirements are met on a day-to-day basis. Examples include policy extracts, system diagrams, process descriptions, and relevant certifications.
We look for consistency between what is documented in written policies and what happens in practice. Where there is a gap between stated policy and operational reality, this will be identified and addressed during the review process. Service Providers should ensure that the evidence provided reflects current working practices rather than aspirational standards.
Transparency is essential. Service Providers should be open about how data is accessed, from which locations, and by whom. Where access arrangements are complex or involve multiple parties, clear explanations supported by diagrams or process descriptions are encouraged. Ambiguity or incomplete information will result in follow-up questions and may delay the assessment process.
6. The Basics
The following terms are used throughout this document and within the due diligence process. Understanding these concepts is important for providing accurate and complete responses.
Personal data refers to any information relating to an identified or identifiable individual. This includes obvious identifiers such as names and contact details but also extends to any data that could be used, alone or in combination with other information, to identify a specific person.
Processing includes any operation or set of operations performed on personal data. This is not limited to active manipulation – it includes collection, storage, access, viewing, retrieval, use, disclosure, transfer, and deletion. Even read-only access to personal data constitutes processing.
Sensitive data (also known as special category data) includes information revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, and health data. Processing of sensitive data requires enhanced controls and, in many jurisdictions, explicit consent or another specific legal basis.
Sub-processor refers to any third party engaged by a Service Provider to assist in the processing of personal data on our behalf. Sub-processors must be disclosed as part of the due diligence process, and Service Providers remain responsible for ensuring that sub-processors meet the same data protection standards.
Artificial Intelligence (AI) refers to technologies and systems designed to analyse data, extract insights, generate content, or make predictions or decisions with varying degrees of autonomy. This includes machine learning models, natural language processing tools, and automated decision-making systems. Where AI is used in the delivery of services, specific transparency and oversight requirements apply.
7. Core Requirements
At a minimum, we expect the following:
- A documented privacy programme must be operationalised, including privacy by design, data subject rights, and staff training.
- A comprehensive breach response plan must be established.
- Personal data must be used lawfully and only for clearly defined purposes.
- Access should be limited to what is necessary for the service.
- Appropriate security measures must be in place to protect data.
- Sub-processors must be disclosed and appropriately managed.
- Cross-border access must be identified and controlled.
8. AI Expectations
Our expectations of Service Providers in relation to AI depend on the nature of the engagement. We distinguish between two scenarios:
Where You Are the AI Provider
Where your service incorporates or relies upon AI – for example, an AI-enabled platform, tool, or feature – we expect you to be transparent about how AI is used within the service. This includes:
- A clear description of the AI functionality and its role within the service.
- Confirmation of whether our data is used to train, fine-tune, or improve AI models.
- Details of any human oversight, review, and/or intervention mechanisms in place.
- Information on how outputs are generated, including any known limitations or risks.
- Evidence that the appropriate technical and organisational measures are in place to ensure the accuracy, robustness and security of the AI throughout its lifecycle.
- Evidence of compliance with applicable AI regulations, including the EU AI Act where relevant.
Where You Are Developing AI for Us
Where you are engaged to design, build, or deploy AI solutions on our behalf, we expect you to align with our internal AI policy. A high-level summary of that policy is set out below.
Our AI policy is founded on the following principles:
- AI solutions must be developed with transparency, ensuring that their purpose, function, and decision-making processes are explainable.
- Fairness and the avoidance of bias must be considered throughout the design and development lifecycle.
- Data used for training, testing, and validation must be lawfully sourced, fit for purpose, and subject to appropriate quality controls.
- Human oversight must be maintained at appropriate stages, with clear accountability for decisions supported or made by AI.
- Security and resilience must be built into AI systems from the outset, including protections against adversarial attacks, data poisoning, and unauthorised access.
- Ongoing monitoring, testing, and evaluation must be in place to ensure continued compliance and performance.
Service Providers developing AI for us will be expected to demonstrate alignment with these principles as part of the due diligence process and on an ongoing basis throughout the engagement.
9. The Due Diligence Process and What We Will Ask You
The process begins once you are selected to provide services.
- An assessment will be triggered through the OneTrust platform and you will receive a link. There is no password required – simply click the link and you will be taken to a questionnaire.
- Complete the questionnaire, attach the requested evidence, and press submit. If you are unable to select submit, it is because some questions remain unanswered or attachments are missing.
- Once your submission has been reviewed, there may be follow-up questions, and you will receive a further OneTrust link to respond.
- Once diligence is satisfied, you will be notified of this and we will move to contracting.
As part of the questionnaire, we will ask you to provide the following:
- Information about your service and how it interacts with data.
- Types of data accessed or used.
- Locations from which data is accessed, together with the legal entity and relationship structure. For example, if you have a subsidiary in another location, we will need to understand the legal entity structure and the mechanisms in place for internal transfers, typically Binding Corporate Rules or an intra-group transfer agreement.
- Use of AI within the service.
- Sub-processors and third parties involved.
- Supporting evidence demonstrating how controls operate.
It is important to highlight that unless due diligence is completed satisfactorily, you will not be able to enter into an agreement with us involving the processing of personal data, as doing so would breach our regulatory obligations.
10. Common Misunderstandings
Processing is not limited to hosting. Any access to or interaction with personal data constitutes processing, even where the personal data is hosted in our environment.
This means that Service Providers delivering support or troubleshooting, or accessing systems remotely, are processing personal data, regardless of the nature of their role.
Similarly, accessing personal data from another country constitutes a cross-border transfer, regardless of where the personal data is physically stored.
The determining factor is the location of the individual accessing the personal data, not the location of the servers or systems.
11. Help and Clarification
For clarification, Service Providers should contact their business owner or primary contact within our organisation. This ensures alignment on the service, data involved, and how it is used.
12. Legal References
Below are the key data protection obligations with which this process is designed to demonstrate compliance:
- GDPR Article 4 defines key concepts including personal data and processing, confirming that access and use fall within scope.
- GDPR Article 5 establishes core principles such as purpose limitation and data minimisation.
- GDPR Article 6 requires a lawful basis for processing.
- GDPR Article 28 outlines obligations for processors acting on behalf of a controller.
- GDPR Article 32 requires appropriate security measures.
- GDPR Articles 44–49 govern international transfers, including remote access from another country.
- UAE PDPL Articles 1, 4, 5, 13 and 22 establish definitions, principles, lawful use, security requirements and cross-border transfer rules.
- KSA PDPL Articles 1, 5, 6, 9 and 19 establish definitions, purpose limitation, minimisation, lawful processing and cross-border transfer restrictions.
- The EU AI Act introduces a risk-based approach to AI and includes requirements for transparency and oversight.
