Tyblog

All the posts unfit for blogging
blog.tjll.net

The Human Resources Alignment Problem

  • 18 June, 2025
  • 692 words
  • 2 minute read time

[Figure 1: The modern programmer's dilemma (two-buttons.png)]

Today I realized that – distinct from the idea of AI misalignment – human resources departments have misaligned incentives around how employees will use (or are already using) large language models.

Maybe you're like me: an employee – not an employer – and so you trade your time and effort for money. Maybe you've also gotten better at using large language models and they've made you noticeably more productive. In fact (I agree with Thomas here) I can leverage them to make myself significantly more productive.1

Here's Bobby knowledge worker: he's found a way to increase his output by 30%! Maybe even 50%!

Now we come to the biased part of this piece of writing, but it's drawn from years of being an employee, and maybe your experience is the same as mine:

Bobby knowledge worker, armed with his drastically increased efficiency, has two choices he can make:

  • Tell his employer and take on 30–50% more work for the same paycheck.
  • Keep quiet, deliver the same output as before, and reclaim the freed-up time for himself.

Which would you rather do?

Commentary

You're usually not supposed to preemptively address rebuttals to your own argument, but I can't help myself here:

This is immoral!

Yes, I know. I'm not advocating for it; I'm describing the moral decision that many knowledge workers will likely face soon.

Employees will just fall behind their peers!

Eventually. Once the rising tide of LLM assistance permeates everywhere, the productivity gains will force a reckoning over how different people have chosen to handle them.

But organizations are chronically slow, and despite AI buzzwords radiating across nearly every space, most of this is window-dressing signalling, emphatically not part of a well-developed strategy for integrating these tools and measuring their effectiveness. Sorry: above a certain level of required competence, most organizations are fumbling around in the dark.

These productivity claims are lies!

Maybe. You can choose to believe or disbelieve my claims about my own experience with AI, but you can find numerous examples of very competent, publicly visible engineers vouching for its legitimacy and effectiveness in their own workflows. I'd point to Mitchell Hashimoto, Jessie Frazelle, and Simon Willison as examples of highly experienced engineers doing high-quality work who are open about the positive leverage they get from LLM tools.

After writing this piece, Xe published a post arguing persuasively that although LLMs might help you produce more, the output may be of lower quality. I don't have enough exposure to LLM-generated code to hold a strong opinion here, but the argument is compelling.

Delivering business value will trickle down!

I intentionally used the terms "employee" and "employer" in the second sentence of the introduction. LLM productivity is a massive boon if you can trade a shiny feature for a big check and take a cut of it. That equation doesn't hold if you're not in control of that business process.

I'm telling your boss!

Personally? I'm navigating this thought experiment with moderate use of code-generation tools, and I'm not a time thief yet.

The market will correct this disparity!

Ideally, yes: employees will leverage their increased ability to find employers who will compensate it accordingly.

In reality, I think this will happen:

  • Employment is sticky. I don't see people jumping ship constantly to achieve optimal compensation equilibrium.
  • Compensation already lags ability: most people can only get a significant pay increase by securing more lucrative offers elsewhere, outside their own company's labyrinthine promotion cycles. Big shifts in productive output will just exacerbate the problem.

My argument falls apart if HR departments notice their more productive workers and hand them all raises. But I doubt that will happen.

Footnotes:

1

None of this blog post was written with AI/LLMs.