Eleven Percent of a 350 Megawatt-Hour Battery Does Not Exist
A grid-scale battery storage facility in Europe, 350 MWh of lithium iron phosphate, is missing 15 to 40 megawatt-hours of tradable energy every day. The capacity is installed. The cells are wired. The battery management system reports the system as operational. But 11 percent of the facility’s energy cannot be dispatched to the market, because the cells holding that energy are too far out of balance with one another for the system to access it.
The gap was identified by Volytica, a battery monitoring platform, after operators noticed chronic shortfalls between forecasted and delivered energy. What Volytica found was not a manufacturing defect or a degradation problem. It was a measurement problem. The facility’s BMS was entirely failing to detect the imbalance, chronically overestimating the volume of energy available for trading.
The arithmetic. When the most fully charged cell in a string sits at 100 percent and the least charged cell sits at 75 percent, the system cannot discharge to its rated depth without risking damage to the weakest cells. The BMS protects the hardware by curtailing output. That curtailment is invisible to the market. The battery bids energy it believes it has. The grid expects delivery. The delta shows up as balancing costs: five figures per week at this single facility, according to the analysis. Over a year, that compounds into millions of euros in costs that did not appear in the project’s financial model.
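The constraint can be sketched in a few lines. This is an illustrative simplification, not Volytica's method or any real BMS logic: it only encodes the fact that in a series string, discharge must stop when the lowest cell reaches its cutoff, using the 100/75 percent spread from the paragraph above and the facility's 350 MWh rating. The cutoff value is a hypothetical assumption.

```python
# Illustrative sketch of series-string curtailment. All numbers except
# the 350 MWh rating and the 100%/75% spread are assumptions.

RATED_MWH = 350.0

def deliverable_fraction(soc_min: float, lower_cutoff: float = 0.0) -> float:
    """Discharge halts when the LOWEST cell hits its cutoff; charge
    still held in stronger cells cannot be safely released."""
    return soc_min - lower_cutoff

usable = deliverable_fraction(soc_min=0.75) * RATED_MWH
stranded = RATED_MWH - usable
print(f"usable: {usable:.1f} MWh, stranded: {stranded:.1f} MWh")
```

A 25-point spread strands far more than the 11 percent observed at the facility; the reported gap corresponds to a smaller, but chronic, divergence.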
The BMS blind spot. Battery management systems are designed to prevent thermal runaway, manage charge and discharge cycles, and report system health. They are not designed to detect slow, cumulative divergence between cells. Cell imbalance is not a sudden failure. It is a drift: one cell ages slightly faster, operates at a slightly different temperature, or carries a marginally different internal resistance from its neighbors. Over months, these differences compound. By the time the gap is large enough to affect system output, it has already been eroding revenue for weeks.
This is not a chemistry-specific problem. The facility in question uses LFP, the chemistry that dominates grid-scale deployments globally and accounts for nearly all new US installations. LFP’s flat voltage curve, the characteristic that makes it stable and long-lived, also makes state-of-charge estimation harder. The voltage difference between 30 percent and 70 percent charge is so small that conventional BMS algorithms struggle to distinguish between them. The same trait that makes LFP safe makes it difficult to monitor precisely.
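The plateau problem is easy to see numerically. The open-circuit-voltage points below are approximate, hand-picked values for a generic LFP cell, chosen only to show the shape of the curve; real curves vary by cell and come from characterization data.

```python
# Toy illustration of why voltage-based SoC estimation struggles on LFP.
# OCV values are approximate assumptions for a generic cell.

LFP_OCV = {  # state of charge -> open-circuit voltage (V), approximate
    0.10: 3.20,
    0.30: 3.29,
    0.50: 3.30,
    0.70: 3.31,
    0.90: 3.33,
}

delta_mv = (LFP_OCV[0.70] - LFP_OCV[0.30]) * 1000
print(f"OCV spread between 30% and 70% SoC: ~{delta_mv:.0f} mV")
# On a plateau this flat, a measurement error of a few millivolts can
# shift the voltage-derived SoC estimate by tens of percentage points.
```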
Scale of exposure. US manufacturing capacity alone is projected to reach 145 GWh by year-end, according to Canary Media’s analysis of Energy Storage Council data. LG Energy Solution Vertech is converting automotive battery plants in Tennessee and Michigan to grid storage production. The deployment pipeline is enormous and accelerating.
None of these capacity figures account for cell imbalance losses. If the 11 percent stranding rate observed at this European facility is even partially representative, the global fleet may be carrying significant phantom capacity: energy that appears on datasheets and in market bids but cannot actually be delivered.
There is no public data on fleet-wide imbalance rates. Battery operators do not report stranded capacity as a separate metric. Offtake agreements and capacity market bids are based on nameplate ratings, not on independently verified deliverable energy. The gap between rated and actual capacity is, for now, an accounting problem that individual operators absorb through balancing market penalties.
The rebalancing tradeoff. Rebalancing, the process of realigning all cells in a system to the same state of charge, corrects the problem. The challenge is one of timing and frequency: how often should a commercial system take itself offline to rebalance, and how much revenue does it forfeit during that downtime versus the revenue it recovers by restoring full capacity?
For a system earning revenue in wholesale energy and ancillary services markets, every hour offline is a direct cost. The operator faces a choice between accepting chronic underperformance or accepting periodic planned outages. Neither option appears in the project finance model that secured the facility’s funding.
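The tradeoff reduces to a break-even calculation. Every input below is a hypothetical assumption; the article reports only "five figures per week" in balancing costs, not prices, shortfalls, or downtime.

```python
# Back-of-the-envelope rebalancing tradeoff. All inputs are assumptions.

def breakeven_downtime_hours(daily_shortfall_mwh: float,
                             penalty_eur_per_mwh: float,
                             lost_revenue_eur_per_hour: float,
                             days_between_rebalances: float) -> float:
    """Hours of planned outage at which one rebalance pays for itself
    via avoided balancing penalties over the interval between rebalances."""
    avoided = (daily_shortfall_mwh * penalty_eur_per_mwh
               * days_between_rebalances)
    return avoided / lost_revenue_eur_per_hour

h = breakeven_downtime_hours(daily_shortfall_mwh=15,
                             penalty_eur_per_mwh=100,
                             lost_revenue_eur_per_hour=2000,
                             days_between_rebalances=30)
print(f"rebalance pays for itself up to ~{h:.1f} h of downtime")
```

Under these assumed figures the operator could afford roughly a day of planned outage per month before the cure costs more than the disease; with different prices the answer flips, which is why neither option is obviously correct ex ante.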
The procurement gap. System integrators compete on cost per kilowatt-hour, energy density, and warranty terms. Monitoring granularity, the ability to detect cell-level drift before it becomes rack-level stranding, is not a standard procurement criterion. Most grid-scale BMS platforms report at the module or rack level. Cell-level telemetry exists but adds cost and complexity that developers under margin pressure are reluctant to specify.
The result is a fleet being built to a cost target, not a performance target. As long as capacity payments and revenue projections are based on nameplate ratings, the economic incentive to invest in higher-resolution monitoring remains weak. The operator who discovers a 15 MWh daily shortfall a year into operations has no contractual remedy against the integrator who delivered a system that met its rated specifications on commissioning day.
The battery storage industry has spent a decade solving the cost problem. LFP cell prices have fallen to levels that make grid-scale storage economic in most US wholesale markets without subsidies. Manufacturing capacity is scaling past demand. The deployment bottleneck has moved from “can we build enough” to “can we build it fast enough.”
The 350 MWh facility in Europe suggests there is a different question the industry has been slower to ask: of the capacity already built, how much of it is actually there?
Sources
When Battery Imbalance Turns 11% of Capacity Into Stranded Energy (ESS News)
US Capacity for Storage Cell Factories (Canary Media)
US Battery Manufacturing Will Surpass 100% of Demand (Energy-Storage.News)