Ask yourself this: when your organisation last selected an AI platform, how much time did the HR team spend evaluating the vendor's values, governance structures, and ethical commitments? Now compare that with how much time it spent on features, pricing, and implementation timelines.
Most HR teams would admit they focused far more on the second question. That imbalance carries a risk, one that has become impossible to ignore in light of the ongoing OpenAI scandal.
OpenAI was founded in 2015 on an explicit ethical commitment: that artificial general intelligence should benefit humanity as a whole, not a select few. This was not just a tagline but a binding principle that its co-founders formally endorsed.
A decade later, that principle is being contested in a courtroom. One of those co-founders, Elon Musk, is suing Sam Altman and the organisation they built together, alleging that OpenAI has become a profit-driven company in breach of its founding principles. OpenAI denies this, and the courts will now decide. But the scandal has already answered one question, regardless of the verdict: stated values and actual behaviour are not the same thing.
For HR professionals, this is not a courtroom drama but a direct challenge to how HR evaluates, adopts, and governs the AI tools it brings into the organisation. This blog examines what the OpenAI situation reveals about ethical AI adoption, and what HR must do differently as a result.
What does the OpenAI restructuring reveal to HR teams?
OpenAI began as a non-profit. Its founding documents prohibited the use of its assets for private gain. This was a deliberate legal structure designed to ensure that the organisation's work could not be captured by commercial interests.
Over time, OpenAI introduced a capped-profit structure. Then it pursued a fuller commercial restructuring, launched ChatGPT as a consumer product, and became a commercial technology company operating in one of the most competitive and lucrative markets in the world.
Each of these steps was presented, at the time, as necessary to fulfil the mission. More funding meant more research. More research meant better AI. Better AI meant greater benefit to humanity. The mission was not being abandoned. It was being resourced.
Elon Musk's lawsuit challenges this framing directly. He alleges that the restructuring did not serve the mission but replaced it, and that the commercial incentives now governing OpenAI's decisions are incompatible with the founding commitment to benefit humanity rather than shareholders.
The court will determine the legal answer to that question. But HR professionals do not need to wait for the verdict to learn an organisational lesson. When an organisation's structure changes significantly, its values often change with it, whether or not anyone intended that outcome.
The OpenAI lesson for HR vendor selection
Most HR vendor selection processes are designed to answer operational questions: Does the platform do what we need it to do? Is the implementation manageable? Is the pricing competitive? Can it integrate with our existing systems? These are legitimate questions, but they are insufficient. The OpenAI situation adds a set of questions that HR must now ask of every AI vendor it evaluates.
- Who owns the organisation, and what are their incentives? An AI company backed primarily by investors seeking commercial returns operates under different incentive structures than one built around a clear and consistent people-first purpose. HR must understand the ownership structure of the vendors it works with and must ask directly how commercial pressure is managed when it conflicts with the stated values of the product. peopleHum, for instance, has built its entire platform around a single orienting principle: that technology should serve people, not the other way around. Its product decisions, from how it handles employee data in its HRIS software to how it shows insights in its performance analytics module, reflect that principle consistently.
- How has the organisation's structure changed over time? A vendor that has undergone significant structural changes, including new investors, new ownership, or new commercial partnerships, may be a different organisation from the one that originally built the product HR is evaluating. HR teams must ask about the vendor’s history and understand what changed and why.
- What happens to your data if the organisation is acquired or restructured? OpenAI's restructuring raised serious questions about the governance of the data it holds. HR teams that process sensitive employee data through AI platforms, covering everything from payroll records and attendance data to performance reviews and people analytics, must understand what happens to that data if the vendor's ownership, structure, or mission changes. HR teams must demand clear, contractual answers before purchasing any services.
- Does the organisation's behaviour match its stated values? HR teams must look beyond the marketing materials and ask their vendor questions like: What decisions have they made when commercial interest and stated values came into conflict? How transparent are they about those decisions? A vendor that answers these questions openly, that can point to specific product and governance decisions that reflect its stated values, is a vendor that has earned the right to be trusted with the most sensitive data an organisation holds.
Ethical AI adoption: What HR must build internally
Most organisations adopt AI tools through a process that is primarily technical and commercial: the IT team assesses security and integration, the finance team assesses cost, and HR assesses the people implications of implementation.
No one is assessing the ethical governance of the technology itself. This needs to change. And the OpenAI situation provides the most compelling argument for why.
- HR must own the ethical governance: The tools that process employee data, make recommendations about performance, flag attrition risk, or influence hiring decisions are systems that exercise influence over employees’ working lives and careers. HR is the function best positioned to govern that influence responsibly.
- HR teams must establish an AI ethics framework: An AI ethics framework must be clear and specify what values the organisation requires AI tools to reflect, what data governance standards must be met, what the process is for reviewing vendor behaviour over time, and what the threshold is for discontinuing a vendor relationship if governance concerns emerge. This framework should exist before the next platform is selected and not after a problem is detected.
- HR teams must build a vendor review system: An organisation that evaluated OpenAI as a vendor in 2018 was evaluating a different entity from the one that exists in 2025. To avoid such a scenario, HR teams must build regular, structured vendor reviews that assess not just whether the platform is performing operationally, but whether the vendor's values, governance, and behaviour continue to meet the organisation's ethical standards.
- HR teams must develop their AI literacy: HR professionals who do not understand how AI tools make decisions cannot govern those decisions responsibly. The ethical governance of AI adoption requires HR to develop a working understanding of how the tools it uses generate outputs, where their limitations and biases lie, and what the conditions are under which their recommendations should be trusted or questioned.
Key Takeaways
- The OpenAI situation delivers a direct lesson for HR: stated values and actual behaviour are not the same thing. When an organisation's structure changes significantly, its values often change with it.
- Most HR vendor selection processes focus on features, pricing, and implementation. They do not ask the questions that matter most: who owns the organisation and what are their incentives, how has the vendor's structure changed over time, and what happens to your employee data if the vendor is acquired or restructured. These questions must become standard before any AI platform is selected.
- In most organisations, no one is assessing the ethical governance of the AI tools being adopted. HR is the function best positioned to fill this gap. The tools that process employee data, influence performance decisions, and flag attrition risk exercise real influence over employees' working lives.
- HR teams must build an AI ethics framework before the next platform is selected. This framework should define what values AI tools must reflect, what data governance standards must be met, and what the threshold is for ending a vendor relationship if governance concerns emerge.
- Conduct regular, structured vendor reviews that assess not just operational performance but whether the vendor's values, governance, and behaviour continue to meet the organisation's ethical standards. The vendor HR evaluated five years ago may be a fundamentally different organisation today.