The People Half of AI Transformation



Enterprise AI transformation has split into two halves. The systems half — forward-deployed engineers, agents, production deployments — just absorbed roughly $5.5B of fresh capital and is being aggressively built out by OpenAI, Anthropic, and the major consultancies. Every account executive in the market wants to talk about it.

The people half — getting your workforce ready to design with, work with, and govern these systems — is the constraint that decides whether any of that investment actually pays off. Almost nobody is talking about it at the same volume.

That asymmetry is a mistake. If you only buy the systems half, you are buying a dependency cycle dressed up as a transformation.

What is a forward-deployed engineer, and why is every AI lab now building one?

A forward-deployed engineer is a technical specialist embedded inside a customer's operations to design, build, and ship AI systems against that customer's specific data, workflows, and constraints. The term was popularized by Palantir, where FDEs sat inside intelligence agencies and financial institutions for months or years at a time. The model has now been adopted, in different flavors, by OpenAI's solutions and applied teams, Anthropic's applied AI organization, and every Tier 1 consulting firm with an AI practice — Accenture, Deloitte, McKinsey QuantumBlack, BCG X, and IBM Consulting among them.

The pitch is straightforward: most enterprises cannot get from a working model to a production AI system on their own, so a vendor sends senior engineers to do it for them.

The pitch is also accurate. FDEs can absolutely compress time-to-first-deployment from twelve months to twelve weeks. That is real value, and we are not arguing it away.

We are arguing that almost no enterprise has done the math on what comes next.

What is the catch with an FDE-led AI strategy?

A fully loaded forward-deployed engineer from a top-tier vendor costs $400,000 to $800,000 per year, and serious engagements typically require teams of four to eight engineers running for twelve to twenty-four months. A single mid-sized program lands in the $5M–$15M range before you have trained a single person inside your own company to operate what the FDEs built.
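The cost claim above is simple enough to check. Here is a sketch of the arithmetic using only the figures in this section; the specific team shape in the example is an illustrative assumption, not vendor data:

```python
# Back-of-envelope cost of an FDE engagement, using only the figures
# quoted above: $400k-$800k fully loaded per engineer per year,
# teams of 4-8 engineers, engagements running 12-24 months.
def engagement_cost(engineers, annual_cost_per_fde, months):
    """Total fully loaded cost of an embedded FDE team over the engagement."""
    return engineers * annual_cost_per_fde * (months / 12)

# A representative mid-sized program: 6 engineers at $600k for 18 months.
mid_sized = engagement_cost(engineers=6, annual_cost_per_fde=600_000, months=18)
print(f"${mid_sized / 1e6:.1f}M")  # → $5.4M, inside the $5M-$15M range quoted above
```

Push the inputs to the top of the quoted ranges (8 engineers, $800k, 24 months) and the same function returns $12.8M, which is why serious programs land where they do.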

That is not the worst part. The worst part is what happens when the FDEs leave.

The FDE model produces a working system, deep tacit knowledge of how it was wired together, and a small number of internal stakeholders who learned to depend on a phone number they can call when something breaks. The first stays. The second leaves with the engineers. The third quietly recurs on your P&L for the next decade.

This is not theoretical. It is one of the best-documented patterns in enterprise technology.

What does FDE dependency look like in other industries?

The pattern has played out in at least three prior waves.

Palantir in defense and intelligence.

Palantir's FDE motion is the original and the best executed. It is also why federal agencies have signed twenty-year contracts to keep Palantir engineers embedded — not because the software is irreplaceable in principle, but because the institutional knowledge of how the software was configured to that agency's specific mission is held by Palantir, not by the agency. Several of the most ambitious in-house efforts to replicate Palantir capabilities have failed for exactly this reason.

ERP implementation in the Fortune 500.

Every CFO who has lived through an SAP or Oracle rollout knows the second wave. The software vendor sells the license. The implementation partner — Accenture, Deloitte, IBM — sells the rollout. Then the implementation partner sells the operations, the upgrades, the integrations, and the optimization. Twenty years later, the consulting overhang on a single ERP deployment routinely exceeds the original license cost by 10x. The reason is identical: the configuration knowledge lives outside the company.

The Salesforce ecosystem.

A more recent version. Salesforce as a product is genuinely usable by trained business operators. But most large enterprises outsourced their initial Salesforce build to certified partners, never trained their own administrators to a real level of fluency, and now run permanent retainers with implementation firms to make changes a half-decent in-house admin could make in an afternoon.

In every case the pattern is the same: a vendor compresses time-to-first-value by inserting their experts into your operations, those experts build something only they fully understand, and the expertise asymmetry becomes a permanent line item.

AI is going to be worse, not better, because AI systems are non-deterministic. They drift. They need to be re-evaluated against new data, new edge cases, new regulatory expectations, and new business contexts. A system that needs an outside expert every time it drifts is a system you do not actually own.

Why is the workforce the binding constraint on AI RoI?

The argument, compressed

The return on an AI system is determined by how many of your people use it well, not by how technically impressive the system is on the day it ships.

Three concrete examples — the kind you can hold against your own portfolio.

1. A regional bank deploys an AI commercial underwriting assistant.

The FDE team builds a strong model. The model recommends approvals and flags risk patterns the bank's analysts could not see before. Twelve months later the bank measures impact and finds two things: the loan officers do not trust the model's recommendations on deals above $5M, and they override it on roughly 70% of the deals where it matters most. The system technically works. The RoI is roughly zero, because the people authorized to act on the system's output were not trained to interpret it, calibrate against it, or escalate appropriately when they disagreed with it.

2. A national retailer deploys AI demand forecasting in merchandising.

Same shape. The model is genuinely better than the prior process. The merchandisers are not trained to read the model's confidence intervals, so they treat the output as either gospel or garbage depending on their mood. Stock-outs and overstock both increase in the first year. The retailer concludes "AI does not work for our category." What did not work was the deployment-without-enablement strategy.

3. A pharmaceutical company deploys AI inside clinical operations.

The FDEs build it. Six months later an FDA reviewer asks who validated the model, who monitors it, and what the company's internal governance protocol looks like. The company does not have a credible answer because the answer is "the engineers we hired from a vendor who already rotated to their next account." This is not a marginal issue. It is a regulatory existential issue.

In all three cases the system is fine. The constraint is people: not enough of them trained to design with AI, work with it, override it intelligently, and govern it.

What does the math look like — FDE-led versus workforce-led AI investment?

Take a $5M AI budget for a 5,000-person enterprise. The comparison below shows what each investment approach returns over twelve months.

$5M / 5,000-person enterprise / 12 months

FDE-led: 1 major + 1–2 narrow production systems. Internal team supporting it: 5–20 people. Adoption beyond that team is incidental.

Workforce-led: 2,000–5,000 people fluent in AI. Hundreds of small productivity gains and dozens of business-unit-led process redesigns. Almost no production systems shipped.
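One way to read the comparison is cost per person the budget actually makes effective. A small sketch of that division, using the headcount ranges from the comparison itself; the division is the only thing added, nothing here is measured data:

```python
BUDGET = 5_000_000  # the $5M budget in the comparison

# People who end up working effectively with AI under each approach,
# using the ranges quoted above.
fde_led_users = (5, 20)               # internal team supporting the FDE-built system
workforce_led_users = (2_000, 5_000)  # employees trained to real fluency

def cost_per_enabled_person(budget, user_range):
    """(best case, worst case) spend per person the budget enables."""
    low, high = user_range
    return budget / high, budget / low

print(cost_per_enabled_person(BUDGET, fde_led_users))        # → (250000.0, 1000000.0)
print(cost_per_enabled_person(BUDGET, workforce_led_users))  # → (1000.0, 2500.0)
```

On these numbers the gap between the two approaches is two to three orders of magnitude per enabled person, which is the asymmetry this section is describing, though the two spends return different kinds of value.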

Neither extreme is right. The point is that the two halves return very different things, and almost every enterprise is currently overweighted on the first and underweighted on the second by a ratio of roughly 10:1.

Rebalancing toward workforce enablement is what makes the systems half compound. A production AI agent used confidently by 3,000 people generates an order of magnitude more value than the same agent used reluctantly by 30.

How do you actually structure both halves?

A few principles that hold up across our work with enterprise clients.

  • Pick your systems partner on merit. OpenAI, Anthropic, and the big consultancies all have credible FDE motions. The differences matter at the technical margin and far less at the workforce margin.
  • Decouple the workforce build from the systems partner. The people half of your transformation should not be tied to whichever model lab won your last RFP. Model providers change. Vendor relationships change. Your workforce is the durable asset.
  • Sequence enablement to precede deployment, not follow it. Training people after a system ships is how you get the bank underwriting story above. Training before and during deployment is how you avoid it.
  • Define governance ownership inside your company, not at your vendor. A vendor cannot own the regulatory or reputational risk of an AI system you deployed. Your people have to be capable of doing that work.

Forward-deployed engineers are not the problem. An FDE-only AI strategy is. It compresses your time-to-first-deployment at the cost of compounding your dependency for the next decade, and it leaves the actual constraint on AI RoI — your workforce's ability to use these systems well — completely unaddressed. The systems half has $5.5B chasing it. The people half is where the returns get realized or lost.

We do the people half. We work alongside whichever systems partner you choose.

Publish date: May 15, 2026
