Ten years ago, the employee data available to a typical HR function was relatively limited. It often consisted of personnel files, attendance records, performance appraisal scores, and payroll history. In short, the basics.
Today, the picture is vastly different. HR teams sit at the centre of a data ecosystem that most employees are not even fully aware of. This includes productivity monitoring software that tracks how many hours an employee's laptop is active and which applications they spend time in, and badge access systems that log every entry to and exit from the office for every employee.
And now, AI sits on top of all of it, synthesising signals from multiple data sources, identifying patterns, making predictions, and flagging insights about individual employees and teams that no human analyst could have produced at that speed or scale.
HR professionals are, right now, operating in a space where the technology has moved significantly faster than the ethical frameworks, the legal standards, and, most importantly, the conversations within organisations about what is and is not acceptable. That gap is where trust gets broken, and when trust between employees and their employer breaks down, no amount of data insight will repair it. This blog is about where the line of trust is, why it matters, and what HR leaders need to do to stay on the right side of it.
What employee data do HR teams commonly collect?
Before discussing where the line is, it is worth being clear about what employee data HR teams actually collect.
- Performance and productivity data: This includes formal performance review scores, objective completion rates, project delivery metrics, and output-based measures. Most employees are aware that this data exists. What has changed is the method of collection. Software that monitors keystrokes, mouse activity, and screen content goes well beyond measuring outcomes; it measures the process of working, in real time, at a level of detail that would have been technically impossible a decade ago.
- Attendance and movement data: Building access systems generate time-stamped records of physical location throughout the working day. For remote employees, virtual equivalents exist, including login times, meeting attendance records, and camera-on tracking during video calls. Some organisations have deployed desk-booking systems that create detailed records of where every employee chooses to sit on any given day.
- Sentiment and engagement data: This data is often captured through pulse surveys, always-on feedback tools, and increasingly, through AI-powered sentiment analysis. Some platforms claim to detect early signs of disengagement or burnout from changes in an employee's digital communication style, without the employee submitting any explicit feedback at all.
How can HR teams differentiate between data that supports employees and data that surveils them?
The most important distinction for HR teams is the difference between using employee data to support employees and using it to surveil them. It sounds simple, but in practice, the line is less obvious than it appears, and the same data can sit on either side of it depending entirely on how it is used, by whom, and with what transparency.
Data used to support employees is data that ultimately benefits the employee it is about: identifying that a team is consistently working beyond contracted hours, so that a conversation can be had about workload; spotting that an employee's engagement scores have declined over two consecutive quarters, so that their manager can check in with genuine intent; or using learning data to recommend development opportunities aligned with interests the employee has expressed. In each of these cases, the data creates an opportunity for the organisation to do something that helps the employee, and the employee would likely consider its use reasonable.
Data used to surveil employees is data that is collected and used primarily to monitor, assess, or control behaviour, often without the employee being aware of what is being tracked. Keystroke monitoring that generates a productivity score used in performance reviews, sentiment analysis applied to emails without employee knowledge or consent, and using badge data to verify whether employees are arriving and departing within acceptable windows. In each of these cases, the data is being used against the employee's interests, or at least, without their informed awareness that this is what the data is for.
The challenge is that organisations rarely design surveillance systems with that intent explicitly stated. They begin with a legitimate-sounding rationale, such as productivity insight, wellbeing monitoring, or workforce planning, and the scope and application of the data gradually expand beyond the original purpose. For instance, a dashboard built to identify teams under pressure becomes a tool for assessing individual productivity.
HR leaders need to apply a consistent test: if every employee in the organisation knew exactly what data was being collected about them, how it was being analysed, and what decisions it was influencing, would they consider that use fair and reasonable? If the answer is no, or even probably not, the organisation is on the wrong side of the line.
The consent problem: Why HR telling employees is not the same as informing them
Organisations that are mindful of employee data ethics often treat disclosure as the solution: they tell employees what data is being collected, the purpose behind it, and the inferences they expect to draw from it. Disclosure may satisfy legal requirements, but the ethical obligation goes further, and HR teams often make the error of treating the two as the same.
Genuinely informing employees requires that they actually understand what they are agreeing to, in terms clear enough to make the disclosure meaningful. A privacy notice that runs to fourteen pages of legal language, buried in the onboarding pack a new employee signs on their first day, discloses, but it does not inform.
This distinction matters because employees who discover, months into their employment, that data has been collected and used in ways they did not explicitly consent to, even if they technically signed a document that disclosed it, view that discovery as a betrayal.
The questions HR leaders need to ask themselves are: are we really being transparent with employees about what data we collect? And, more fundamentally, do we need to collect that data in the first place?
How has the introduction of AI compounded the risk factor for HR teams?
HR functions have been using people analytics data to understand their workforce for a long time. They track attrition rates, analyse engagement survey results, and model the financial impact of compensation changes. What is new is the scale, the speed, and the nature of the inferences that AI makes possible.
AI-powered people analytics can synthesise signals from dozens of data sources simultaneously and flag patterns that no human analyst would identify. It can predict with reasonable confidence which employees are at risk of leaving, which teams are approaching burnout, and which managers are most strongly correlated with high attrition on their teams. These are genuinely useful insights, and when used responsibly, they can support better decisions for both the organisation and its employees.
AI compounds existing data risks in three specific ways:
The first compounded risk is inference. An AI system that analyses an employee's email communication patterns, meeting attendance, and badge data simultaneously can infer things about that employee's mental state, their level of engagement, their relationships with colleagues, and their likelihood of seeking other employment. Even if the employee consented to their badge data being recorded, it is highly unlikely that they gave consent to an AI system to use it, in combination with other data, to profile their psychological state.
The second compounded risk is opacity. Traditional analytics produces outputs that a human can trace back to their source. An AI system produces predictions and scores whose derivation may be opaque even to the HR professionals using the tool. When an AI-generated attrition risk score influences how a manager is advised to approach a particular employee, and that employee later asks why they were treated in a certain way, ‘the AI flagged you as a flight risk’ is not an adequate explanation.
The third risk is automation bias. When AI flags an insight, like ‘this employee is disengaged,’ humans tend to give it more credibility than they would give the same conclusion reached through human observation. The AI's output feels objective, but it is not: a model trained on historically biased data will not flag that bias; it will reproduce and potentially amplify it, presenting the results with false objectivity.
The appropriate use of AI-powered people analytics is as a signal-detection tool that surfaces situations worth investigating. An AI that flags rising attrition risk in a particular team should prompt a human-led conversation and investigation, not automatically trigger interventions, adjust performance ratings, or influence who gets development opportunities. The AI provides the signal; humans interpret it, investigate it, and decide what to do.
How can HR teams build a data ethics framework that doesn’t violate employee privacy?
Acknowledging that the line exists is necessary. Knowing where it is and building organisational processes to stay on the right side of it consistently requires something more structured: a data ethics framework specific to people data, owned by HR, and embedded into how decisions about data collection and use are actually made.
Most organisations have a general privacy and data protection policy. But very few have a coherent framework for evaluating decisions about employee data against ethical criteria as well as legal ones. A functional people data ethics framework has several components.
- Central data inventory: Organisations should be aware of what employee data they collect, where it is stored, who has access to it, what it is used for, and how long it is retained. But in practice, most organisations cannot fully answer these questions without significant effort, because data collection has accumulated across systems, teams, and time without central oversight. The inventory is the foundation on which everything else depends.
- Purpose test for new data collection: Before any new form of employee data collection is introduced, it should be evaluated against a clear set of questions. What is the specific purpose this data will serve? Is that purpose genuinely legitimate? Is this data proportionate to that purpose, or could the same goal be achieved with less intrusive collection? Who will have access, and what controls exist on that access? This evaluation should involve HR, legal, and, where appropriate, employee representatives before a decision is made.
- Employee transparency standard: Beyond legal disclosure requirements, the organisation should commit to genuine transparency: employees have a right to know what data the organisation holds about them, what it has been used for, and how to raise a concern if they believe it has been misused. This requires HR teams to build accessible channels through which employees can exercise their data rights.
- HR governance over employee data decisions: HR teams should have a clear role in evaluating and approving data collection practices that affect employees, and a standing mandate to raise concerns about practices that may be legally compliant but ethically questionable.
- A regular review process: Data ethics is not a one-time exercise; the technology evolves and the legal landscape changes. An annual review of the employee data inventory, checking what is being collected, whether each item still meets the purpose test, and whether the transparency standard is being met, is the minimum needed to keep the framework operational rather than theoretical.
Getting this right is not simple, and it is not quick. But the organisations that invest in protecting the privacy of their employees build something genuinely valuable: a workforce that trusts its employer with data, because that employer has demonstrated it deserves to be trusted with it. In a competitive talent market, that trust is a real differentiator, and unlike technology, it cannot be purchased. It has to be earned through consistent and deliberate choices about what data to collect, how to use it, and who it ultimately serves.
Key Takeaways
- Data capability without ethical discipline is a liability. The value of people analytics depends entirely on whether the data is collected and used in a way employees can trust.
- Disclosure is not the same as transparency. A summary of privacy rules buried in the onboarding papers does not constitute genuine transparency. Employees need to understand, in plain language, what is collected, why, and how it affects decisions about them.
- The line between support and surveillance is a question of intent and application. The same data can sit on either side depending on how it is used, by whom, and with what honesty to the employee about its purpose.
- AI compounds the risks. AI-powered analytics can infer things from data that the data does not explicitly contain, thereby creating legal and ethical obligations that go well beyond what traditional analytics produced.
- Culture is at stake. Extensive monitoring changes how employees behave and what they believe about their employer. Organisations that over-monitor consistently see the consequences in engagement and retention.
- Build a framework. A data inventory, a purpose test for new collection, a genuine transparency standard, HR governance over people data decisions, and a regular review process are the building blocks of ethical people data practice that actually holds over time.



































