When a bot closes the deal, who gets the credit and the pay rise?

Team peopleHum
March 9, 2026
6 mins

Imagine this scenario: A salesperson hits 140% of their quarterly target. Their manager nominates them for the top performer bonus. The recognition goes out in the all-hands meeting. The pay rise follows in the next review cycle.

But here is what the numbers did not show. Sixty per cent of that salesperson's outreach was written by an AI tool. The lead scoring that told them which prospects to prioritise was generated by an AI model. The proposal that closed the big deal was drafted by AI and lightly edited before it was sent. The salesperson made the calls, managed the relationships, and exercised judgment at key moments, but it was AI that did most of the heavy lifting. So does the salesperson fully deserve the recognition and the pay rise?

This is not a comfortable question, and most organisations are not asking it yet. They are still measuring output the way they always have: by what the employee produces, not by how much of that production is theirs versus their AI tools. The implications reach into the foundations of how HR manages performance and compensation. If two employees produce the same output but one is doing it through significant AI augmentation and the other through their own skill and effort, are they equally high performers?

HR has not solved these questions yet. But the organisations that start working through them now will be significantly better positioned than those that wait until the tension becomes a grievance, a retention problem, or a pay equity challenge they cannot explain. This blog works through the five dimensions of the problem that HR leaders need to understand and act on.

How do current performance management systems assign credit?

Most performance management systems in use today were designed around a straightforward assumption: hit your target, deliver your projects on time, meet your KPIs, and your performance rating would reflect those results. 

This logic held up reasonably well when the tools employees used were simply additional helpers: a faster computer, a better CRM, a more efficient workflow. Those tools helped employees complete work more quickly, but they did not fundamentally change the nature of the employee's contribution.

But AI tools are a different story altogether. An AI tool does not just help an employee work faster; it contributes directly to the quality, volume, and complexity of what they produce. A copywriter using AI can produce three times the content output they could produce unaided, and a financial analyst using AI can run models and generate insights that would previously have required a team.

The practical consequences are already visible in some organisations. Employees who adopt AI tools early and use them in their daily workflow are outperforming peers on traditional metrics and being rewarded accordingly, while the performance management system records this as a difference in individual capability rather than a difference in tool adoption. Meanwhile, equally skilled employees who have not yet adopted the same tools, or who work in roles where AI assistance is less applicable, appear to underperform in comparison, through no deficit of their own skill or effort.

This is not equitable. And it is not an accurate picture of where organisational performance is actually coming from. HR needs to acknowledge that performance management frameworks built for a pre-AI world need rethinking — not wholesale replacement, but deliberate adaptation to account for a new reality.

The credit attribution problem: When human and AI work is entangled

Even if HR leaders agree that AI contribution needs to be accounted for in performance measurement, the practical challenge of doing so is significant. In most real working situations, what the human contributed and what the AI contributed are deeply entangled.

Consider a product manager who uses AI to synthesise customer research, generate a draft product requirements document, and model the financial impact of different feature prioritisation decisions. Then, by adding their own insights, they rewrite significant portions of the requirements document, challenge the financial model based on their market knowledge, and take the product through to a successful launch. The question then becomes: What proportion of the outcome is theirs?

The honest answer is that the question may not be the right one. Human and AI contributions in knowledge work are increasingly collaborative. The AI did not produce the outcome independently, and neither did the human. Trying to allocate a precise percentage of credit to each is probably both impossible and not particularly useful.

What is useful is shifting the performance question from "what did you produce?" to "what did you contribute?" And contribution, in an AI-augmented environment, looks different from output.

Contribution includes the quality of judgment the employee exercised about when and how to use AI tools. It includes their ability to critically evaluate AI outputs rather than accepting them uncritically: catching errors, applying context the AI lacked, and making decisions that required human insight. It includes the relationships, trust, and influence they built that no AI tool can replicate. And it includes the skill they brought to directing, prompting, and refining AI outputs to a standard that the tool alone could not have reached.

These are genuine human contributions. These are also contributions that traditional performance metrics do not capture well. Building a performance framework that can see and value these things will be one of the more important HR design challenges of the next few years.

What does AI augmentation mean for pay benchmarking and role value?

Here is a question that most HR teams have not yet been asked to answer: if an AI tool enables a mid-level employee to produce the output of a senior specialist, what does that do to the market value of the senior specialist's role?

In some sectors, this is no longer hypothetical. In areas like legal research, financial modelling, content production, and data analysis, AI tools are already enabling employees with less experience to produce work that previously required significantly deeper expertise. The capability gap between junior and senior roles, which compensation structures are largely built around, has started narrowing in some roles.

For HR leaders, this creates a genuine benchmarking challenge. Market pay data reflects what organisations are currently paying for roles, but it is a lagging indicator. It reflects the value placed on human capability before AI augmentation was widespread. As AI tools become standard, the market will adjust, but unevenly, at different speeds in different sectors and functions, and in ways that are currently difficult to predict.

There are two risks to manage simultaneously. The first is overpaying for roles whose value is being displaced by AI. The second is underpaying for the new human capabilities that AI augmentation requires: the judgment, critical evaluation, AI direction, and contextual intelligence that genuinely skilled employees bring to AI-augmented work, and that are increasingly the differentiating factor between average and excellent output.

Pay structures built around role level and years of experience are particularly vulnerable. The employee with ten years' experience who resists AI adoption may genuinely be producing less value than the employee with three years' experience who has mastered AI-augmented working. Compensation frameworks that cannot see this difference will create retention problems at both ends, losing the AI-forward talent who feel underrewarded, and overpaying for tenure-based seniority that is no longer producing commensurate output.

There is no right answer here as the market is genuinely in flux. But HR leaders who are not actively monitoring how AI adoption is affecting productivity by role level and function are flying blind on one of the most consequential compensation variables currently in play.

Should HR reward AI adoption, and how?

Some organisations have started to respond to the AI performance question by explicitly rewarding employees for AI adoption. HR teams have been tasked with building AI tool usage into performance criteria, recognising employees who demonstrate high AI proficiency, and adding AI-related incentives to variable pay. 

The intention is right, but problems arise in implementation. An employee who learns that their performance review rewards AI tool engagement may start routing work through AI tools even when that is not the most effective approach for the task. They may generate AI outputs and present them with minimal critical evaluation, because the incentive is for usage, not quality. The result is that the employee becomes dependent on AI assistance for tasks where they could, and should, develop their own capability.

HR teams must understand that the thing to incentivise is not AI usage itself, but the outcomes that effective AI use produces: higher quality work, greater volume, faster turnaround, better decisions, and more sophisticated analysis.

Lastly, there is also a fairness dimension. Not all roles have equal access to AI tools that meaningfully augment performance. Rewarding AI adoption without accounting for differential access by role, team, or seniority level creates inequity.

Building a compensation and recognition framework fit for AI-augmented work

The existing compensation and recognition infrastructure needs to be updated to reflect the reality of AI-augmented work.

  • Performance criteria need to include human-specific contributions: Alongside output metrics, performance frameworks need to assess the quality of judgement, critical evaluation, stakeholder relationships, and contextual intelligence that employees bring to their work. These are the contributions that AI cannot replicate, and they need to be visible in the performance system.
  • AI proficiency needs to be defined as a capability: Competency frameworks should describe what good AI-augmented working looks like in terms of outcomes and judgment quality. This gives HR teams something tangible to assess and gives employees a clear standard to develop toward.
  • Recognition programmes need to be recalibrated: If top performer recognition is based purely on output metrics, it will systematically over-recognise AI-augmented performance and under-recognise human contribution that is harder to count. HR teams must recognise that these human contributions are real and valuable, and that current recognition frameworks largely cannot see them.

Key Takeaways

  • Current performance management frameworks were not built for AI-augmented work. Measuring output alone, without accounting for how much of that output AI tools contributed, produces an inaccurate picture of individual performance, and rewards tool adoption as if it were human capability.
  • The credit attribution problem is real, but "what did you produce?" is the wrong question. Trying to split credit precisely between human and AI contribution is largely unworkable. The more useful shift is to ask "what did you contribute?", which covers judgment, critical evaluation, relationship management, and the skill to direct AI effectively.
  • Reward AI outcomes, not AI usage. Incentivising tool adoption without tying it to quality and judgment creates perverse behaviours. Competency frameworks should define AI proficiency in terms of what good looks like, not how often the tool is opened.
  • The compensation infrastructure needs an update. Performance criteria, competency frameworks, benchmarking frequency, recognition programmes, and variable pay design all need targeted adaptation. The organisations that do this now will avoid the pay equity challenges, retention problems, and performance management breakdowns that those who delay will face later.