The grid’s biggest bottleneck isn’t technology. It’s everything surrounding it.
The generator interconnection queue is broken.
That’s not a controversial statement. It’s an industry-wide acknowledgment. Queues stretching on for years, studies bottlenecked for months, and skilled engineers buried in administrative work that technology should have eliminated a decade ago.
The instinct, understandably, is to reach for better tools. Faster solvers. AI.
But the bottleneck is almost always upstream: in how problems are defined, how data flows, and how work is handed off between humans and systems. Outdated processes, incompatible tools, and inconsistent data compound into deep systemic friction that no single software purchase resolves.
And this is happening against an already punishing backdrop. Aging infrastructure. Accelerating renewables and DER integration. Growing complexity of regulatory requirements.
Weatherization pressures. Rising demands to keep power costs down. The industry was already under strain before the queue became a crisis.
In short, better software deployed into a broken process doesn’t accelerate the queue. It accelerates the chaos.

The Problem With Starting With the Solution
We see a particular pattern repeatedly in highly regulated, physically complex industries. An organization identifies a pain point (simulation runs keep failing to converge, stakeholder data arrives incomplete, report generation takes too long) and moves quickly to procure a solution.
A new tool. A new platform. Everyone’s favorite: an AI tool.
The tool gets deployed. One pain point is relieved. Yet the backlog of interconnection requests persists, and planning study timelines stay stubbornly long.
An incremental approach to problem-solving has value. But implementing solutions without understanding their place in the larger system, and how they tie to business goals, produces an array of point solutions that make no meaningful dent in the overall problem. Without a holistic grasp of the system, point solutions never add up to more than the sum of their parts.
Generator interconnection is a textbook case. The study lifecycle isn’t one bottleneck. It’s a chain of them, compounding each other.
Fragmented data across dozens of legacy systems. Stakeholders submitting incomplete or technically incorrect information through inconsistent channels.
Engineers forced into manual validation loops for months before a single simulation can even begin. Physics-based solvers that consume days of compute time and then fail to converge, sending the process back to the start.
These aren’t isolated frustrations. They’re cascading failures. Poor data quality and fragmented handoffs create compounding uncertainties that ripple through the entire study lifecycle. And here’s the uncomfortable truth: these upstream process failures do more to delay transmission upgrade decisions and slow load integration than the technical demands of grid physics ever will.
Each pain point in the study process can masquerade as a discrete technical problem. And, when treated in isolation, each yields a discrete technical fix. But the compounding effect (weeks added here, re-work triggered there) is a systemic problem that emerges at the intersection of process, technology, and people throughout the entire study lifecycle.
Addressing systemic problems requires a different starting point, one that keeps business outcomes from being sacrificed for flashy, point-solution wins. By developing a deep grasp of the end-to-end system, it becomes possible to look beyond the loudest symptoms to the actual root causes, selecting solutions that create cumulative impact and drive meaningful business outcomes from initial data intake to final infrastructure investment.
Start With the System, Not the Solution
The instinct is to skip straight to solutions. But in our experience, the cost of incomplete discovery isn’t paid upfront. It’s paid later, in failed implementations, re-work, and technology that solves the wrong problem at scale.
Structured discovery starts before any solution is proposed. Not an IT requirements gathering exercise, but a genuine end-to-end mapping of how work actually flows: the human actions, the software dependencies, the data assets consumed and produced at every stage, and critically, where handoffs break down.
In our experience, this process typically takes weeks, not months. The output isn’t a report that sits on a shelf. It’s a prioritized map of friction that directly drives the intervention roadmap.
In complex grid planning environments, this involves extensive interviews with engineers, planning teams, and external stakeholders. It means shadowing model building and simulation workflows, not just documenting them. It surfaces the gap between how a process is supposed to work and how it actually works, a gap that is almost always wider than anyone expects.
Diagnosis itself isn’t new. Fragmented data and broken intake processes have been on the to-do list at most transmission organizations for years.
What changes the outcome isn’t identifying these problems. It’s sequencing the fixes deliberately and ensuring that each structural improvement is paired with interventions that deliver immediate value. Discovery is what makes that sequencing possible.
What emerges isn’t a shopping list of tools. It’s a clear view of which bottlenecks are structural prerequisites for everything downstream, and which interventions will have compounding rather than isolated impact: the difference between a roadmap that compounds and one that stalls.
Build and Realize. Then Build and Realize Again.
The temptation in large modernization programs is to front-load all the groundwork (clean the data, build the architecture, govern the systems) and only then start claiming value. That’s not progress. That’s a multi-year infrastructure project with a promise attached.
The alternative isn’t to skip the groundwork. It’s to be deliberate about which interventions you pair together, and in what order.
Not all improvements are equal. They differ in value, impact, and deployment effort.
Not all improvements are independent, either. Some only deliver value once something else exists beneath them.
Deploying AI-powered data validation is powerful, but only if there’s a standardized, governed data layer for it to operate on. Accelerating simulation runs matters enormously, but less so if the models being fed into the solver are still built from fragmented, manually reconciled, error-prone data.
This is an agile approach to systemic change: a continuous loop in which every structural improvement unlocks a tangible win, and every tangible win builds the case for the next layer of investment. To structure that loop, we use a simple distinction.
Facilitators are the structural prerequisites: the unglamorous groundwork that makes advanced technology possible. A unified submission portal that centralizes data intake. A computational architecture that processes data and supports workflows across siloed processes and systems.
A solver-agnostic single source of truth that gives engineers confidence in what they’re working with. These don’t generate headlines, but they determine whether the headline-generating technology actually delivers.
Realizers are the high-impact applications that sit on top of that base: AI-driven anomaly detection at the point of data submission, AI-solved grid models that reduce simulation run times from weeks to hours, automated report generation that turns a months-long documentation burden into a same-day task.
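To make the distinction concrete, here is a minimal sketch in Python. The schema, field names, voltage classes, and validation rules are all hypothetical; the point is the dependency. The governed intake schema is the Facilitator, and the automated validation that runs at the moment of submission is the Realizer that only exists because of it.

```python
from dataclasses import dataclass

# Facilitator: a single governed schema for interconnection submissions.
# Field names and units here are illustrative, not a real standard.
@dataclass
class InterconnectionRequest:
    project_id: str
    capacity_mw: float              # nameplate capacity in MW
    point_of_interconnection: str
    voltage_kv: float

# Realizer: automated validation at the point of submission.
# Only possible because every request now arrives in one agreed shape.
def validate(request: InterconnectionRequest) -> list[str]:
    issues = []
    if request.capacity_mw <= 0:
        issues.append("capacity_mw must be positive")
    if request.voltage_kv not in (69.0, 115.0, 138.0, 230.0, 345.0, 500.0):
        issues.append(f"unrecognized voltage class: {request.voltage_kv} kV")
    if not request.point_of_interconnection:
        issues.append("missing point of interconnection")
    return issues  # flagged at intake, not months later in a study loop

# Example: a flawed submission is caught immediately.
bad = InterconnectionRequest("GEN-042", -150.0, "", 113.0)
for issue in validate(bad):
    print(issue)
```

The validation rules themselves are trivial. The value is that they can run at intake at all, which only happens once the Facilitator beneath them exists.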
The temptation is to lead with Realizers. They’re compelling. But the organizations that get the most durable value deploy both tracks in parallel, securing quick wins that deliver immediate impact while simultaneously building the base that allows those wins to scale.
The goal is never to choose between proving value today and future-proofing for tomorrow. When the sequencing is deliberate, you don’t have to.
Why Regulated Environments Demand This Approach
The problem-first, consultative approach matters more in energy and transmission planning than in, say, a software business.
The regulatory and reliability stakes are real. Errors in interconnection and transmission studies don’t just slow a project. They create compliance exposure, erode stakeholder trust, and in the worst cases, affect grid stability.
In this context, deploying AI that isn’t grounded in auditable, governed data isn’t just ineffective. It’s a liability.
Design thinking, applied with care, is a risk management discipline. By mapping the full system before intervening in any part of it, organizations can identify where automation introduces risk as well as where it creates value. They can build auditability in from the start rather than retrofit it later, deploy AI where it genuinely outperforms manual processes, and preserve human judgment where it still matters most.
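As a hypothetical sketch of that principle in Python (the names, confidence floor, and log format are all illustrative): every automated screening decision writes an auditable record, and anything below the confidence floor is routed to a human rather than decided silently.

```python
import json
from datetime import datetime, timezone

# Illustrative threshold: below this, an engineer decides, not the model.
CONFIDENCE_FLOOR = 0.90

def screen_submission(request_id: str, score: float, model_version: str) -> dict:
    decision = "auto-accept" if score >= CONFIDENCE_FLOOR else "human-review"
    record = {
        "request_id": request_id,
        "decision": decision,
        "confidence": score,
        "model_version": model_version,  # which model made the call
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only audit trail: built in from the start, not retrofitted.
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

print(screen_submission("GEN-042", 0.97, "anomaly-v1"))  # auto-accept
print(screen_submission("GEN-043", 0.62, "anomaly-v1"))  # human-review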
The goal isn’t to replace engineering expertise. It’s to stop wasting it on work that shouldn’t require it.
The Queue Problem Is Solvable
The generator interconnection backlog isn’t inevitable. It’s the accumulated result of systemic friction that organizations have treated symptom by symptom rather than addressed at its root.
The organizations that will clear their queues fastest aren’t the ones that move quickest to procure AI. They’re the ones willing to slow down first: to map the full system, sequence their interventions, and build the base that makes advanced technology actually perform.
In engagements applying this approach, the most significant study delays we uncover consistently originate not in the simulation itself, but in the intake and validation processes that precede it. The physics is hard. The process friction is fixable.
That’s a consultative and design-led approach. And in our experience, it’s the only one that meaningfully compounds.
Method works with transmission and generation planning organizations to apply structured service design and AI strategy to complex interconnection and planning workflows. If you’re attending IEEE this year and working through queue backlogs or planning modernization, we’d welcome the conversation.
