An AI Overview for DPOs
This note outlines how HiveHR has assessed and mitigated the data protection risks associated with its new AI-powered features, which are designed to help organisations move from employee feedback to meaningful insight and confident action, safely and efficiently.
1. Purpose and Functionality
Broadly, there are two distinct AI functions:
Chat-style AI that helps users explore and refine ideas for workplace improvement based on aggregated survey results and open feedback. It supports them in logging actions derived from:
- Aggregated survey scores (no identifiable data)
- Free-text feedback (e.g. Open Door)
- Optional user prompts and action descriptions
Insight AI that helps users by turning survey results and open feedback into insights, drawing on:
- Aggregated survey scores
- Free-text feedback (e.g. Open Door)
- Optional organisational context – such as specific words that might be part of an organisation’s vocabulary (co-workers rather than employees)
2. Why a DPIA was Completed
We identified the use of AI as high-risk processing under UK GDPR Article 35 due to:
- The use of AI/LLM models in decision support, which introduces novel risks around transparency, explainability, and control.
- Processing of unstructured free-text feedback, which may contain incidental personal or special category data.
- The potential for perceived or actual influence on HR decisions, even if final control remains with the user.
- The importance of demonstrating robust governance and accountability for AI deployment in a workplace context.
This assessment reflects a precautionary, best-practice approach, in line with guidance from the UK ICO, the 2023 UK AI Regulation White Paper, and the EU AI Act (2024).
3. Data Categories Involved
AI may process:
- Aggregated survey question scores
- Open text feedback from employees
- User-entered prompts or context
- Suggested actions and task metadata
- Organisational context
No special category data is required or expected, but given the free-text nature of employee feedback and prompts, incidental inclusion cannot be entirely ruled out. Appropriate mitigations are in place, including user guidance, prompt warnings, and safeguards on output visibility.
Example: An employee may write, “As someone with long-term anxiety, I’ve found the recent changes helpful.”
While this is special category data, it is stored by Hive in its original form (as part of the feedback record) but is not exposed to or processed by the AI model. The AI sees only anonymised aggregates and summaries, not raw text linked to individuals.
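As a simplified illustration of what this aggregation boundary means in practice (this is a hypothetical sketch, not Hive's actual pipeline; the function and field names are invented for the example), only statistics such as counts and mean scores, never individual responses or free text, would cross into the AI layer:

```python
from statistics import mean

def aggregate_scores(responses):
    """Reduce individual survey responses to anonymous aggregates.

    Each response is a dict like {"question": str, "score": int}.
    The output contains no respondent identifiers and no free text,
    so this is the only shape of survey data the model would see.
    """
    by_question = {}
    for r in responses:
        by_question.setdefault(r["question"], []).append(r["score"])
    return {
        q: {"count": len(scores), "mean_score": round(mean(scores), 2)}
        for q, scores in by_question.items()
    }

responses = [
    {"question": "wellbeing", "score": 7},
    {"question": "wellbeing", "score": 9},
    {"question": "workload", "score": 5},
]
aggregates = aggregate_scores(responses)
```

Anything below a minimum group size could additionally be suppressed before reaching the model, a common extra safeguard in engagement analytics.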
4. Lawful Basis
HiveHR acts as a data processor. The feature processes data under the Customer’s existing lawful basis for employee feedback and engagement analytics — typically legitimate interests (Article 6(1)(f)).
AI-powered Actions does not introduce automated decision-making with legal or similarly significant effects under Article 22.
5. Technical and Organisational Safeguards
Our AI features are powered by the latest Large Language Models developed by Anthropic, accessed exclusively via AWS Bedrock EU, a highly secure, enterprise-grade foundation model platform.
- The LLMs operate entirely within AWS infrastructure and do not run outside the secure Bedrock environment.
- No data is shared with or stored by Anthropic.
- This model isolation provides a key security advantage: data used for AI is processed entirely within AWS’s secure boundary, without exposure to third-party model providers.
- AWS Bedrock’s built-in guardrails suppress unsafe content or model behaviour.
- We implement input sanitisation, output review, and usage logging to monitor system behaviour and prevent unsafe or biased outputs.
- AI invocation logs (including prompt inputs and outputs) are stored in AWS CloudWatch with no expiry currently set. Application-level logs (28 days in hot storage, retained indefinitely in cold storage) contain only high-level invocation details.
- AI-generated suggestions are stored indefinitely in tenant-specific collections for metrics and feedback.
Note: This applies specifically to AI. Hive shares limited customer data (e.g. email addresses) with approved subprocessors for platform functionality — see MSA Schedule 4 for full details.
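To illustrate what an input-sanitisation step of this kind can look like, here is a minimal sketch (the patterns and function name are hypothetical examples, not Hive's actual implementation, and a production system would use a far broader PII-detection service): a user prompt is screened for obvious personal identifiers before it reaches the model, and any hits would trigger a warning to the user.

```python
import re

# Hypothetical patterns for two common identifier types; real
# sanitisation would cover many more categories (names, IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"(?<!\d)(?:\+44|0)\d{9,10}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a user prompt.

    A non-empty result would block or warn before the prompt is
    forwarded to the model.
    """
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

hits = screen_prompt("Jo said alex.smith@example.com seemed unhappy")
```

Combined with output review and usage logging, this kind of pre-flight check reduces the chance of incidental personal data reaching the model in the first place.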
6. Subprocessors and Data Transfers
- AWS is an existing Hive subprocessor, approved via MSA Schedule 4.
- Anthropic LLMs are accessed via AWS Bedrock and involve no transfer of data to Anthropic.
7. User and Manager Guidance
We’ve developed targeted guidance for:
- HR Directors — to explain the ethical guardrails and control features
- People Managers — to support safe, appropriate usage
- Data Protection Leads — i.e. this document
8. Rights and Transparency
- All actions are logged, and outputs are editable by the user.
- No decisions are automated — users remain in control. AI outputs are presented as suggestions only; users review and decide whether to act on them. There is no automated creation, assignment, or escalation of tasks.
- Users are reminded not to input personal data or identifiable details in prompts.
- Input and output logging is limited to monitoring model performance, debugging, and identifying misuse — not for evaluating individual manager behaviour. Access to logs is restricted and subject to audit.
- Model performance is monitored through the percentage of suggestions accepted versus ignored. Users can flag a suggestion as harmful/inappropriate, triggering immediate notification to the support team.
- Free-text inputs are derived from aggregated summaries of employee feedback, not raw or individually attributable text.
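The acceptance-rate monitoring described above amounts to a simple calculation over suggestion outcomes. As an illustrative sketch (the field names and outcome labels are invented for the example):

```python
def acceptance_rate(events):
    """Fraction of AI suggestions accepted by users.

    events: list of dicts like
    {"suggestion_id": str, "outcome": "accepted" | "ignored" | "flagged"}
    """
    if not events:
        return 0.0
    accepted = sum(1 for e in events if e["outcome"] == "accepted")
    return accepted / len(events)

def flagged_ids(events):
    """Suggestions flagged as harmful or inappropriate.

    These would trigger immediate notification to the support team.
    """
    return [e["suggestion_id"] for e in events if e["outcome"] == "flagged"]
```

A falling acceptance rate, or any flagged suggestion, would prompt review of model behaviour under the governance process in section 9.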
Further Information
If you require a copy of Hive’s DPIA or want to understand technical controls in more detail, please contact your Customer Success Manager or Hive’s Privacy Lead at Becky.Kangurs@hive.hr
9. Ongoing Model Governance
Any future changes to the AI models used in AI-powered Actions — including switching provider, adding new capabilities, or deploying additional Bedrock models — will be subject to Hive’s internal AI risk review process. Where relevant, updates will be reflected in the DPIA and communicated to customers.