ArthurEld
Solar Wizard
Dzl, all of your theories are true. From the sound of it, I don't think you have any reason to be hugely disappointed, or at least not more than a few % disappointed.
I think there is a big gap between being hugely disappointed in your purchase, and acknowledging and documenting its shortcomings or misrepresentations while being reasonably upset/frustrated.
This is true, so most people testing at well below 0.5C or 1C should be getting more than 280, not less.
One reason why it matters (beyond the obvious, getting what you paid for): suppose you plan to use 85% (95%-10%) of your capacity to maximize cycle life, and now imagine your lowest capacity cell is 10% below its siblings. Your pack (without any bandwidth limiting) is limited to 90% of nominal capacity from the get-go. But you planned to cycle only between 95% and 10% to maximize cycle life, and if you want to stick to that, your new 95-10% range is 95-10% of 90%.
Using a 100Ah pack as an example:
100Ah nominal, with a 90Ah weak cell = 90Ah usable capacity
Applying a 95-10% bandwidth to your pack now = 76.5Ah usable capacity due to the weak cell.
This is roughly the same % capacity loss as would occur if you planned to use all 100Ah.
A lot of people erase the undercapacity in their minds with the fuzzy math of limiting themselves to 80% SOC or so (I thought that way for a long time too; it wasn't until I drew it out visually that I understood my error). But the % capacity loss is the same whether your bandwidth is 100% or 80%, it's just less visible in the 80% example, since the buffers kinda 'absorb the missing capacity' mentally.
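The arithmetic above can be sketched in a few lines. The 100Ah/90Ah numbers are the example from this post; the point is that the % shortfall is the same no matter what SOC bandwidth you apply:

```python
# Weak-cell math from the 100Ah example above.
NOMINAL_AH = 100.0    # rated pack capacity
WEAK_CELL_AH = 90.0   # lowest-capacity cell limits the whole series pack
BANDWIDTH = 0.85      # cycling only between 95% and 10% SOC (95% - 10%)

planned = NOMINAL_AH * BANDWIDTH    # what you expected from the bandwidth
actual = WEAK_CELL_AH * BANDWIDTH   # what the weak cell actually allows

loss_pct = (planned - actual) / planned * 100
print(f"planned {planned:.1f}Ah, actual {actual:.1f}Ah, loss {loss_pct:.1f}%")
# The % loss equals the cell mismatch (10%) regardless of bandwidth,
# which is why the 80%-SOC mental buffer doesn't actually save you anything.
```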
Is this making sense? This is the way I think about it, but I'm glaringly aware of my ability to get shit wrong.
All that said, a few % isn't usually a huge deal in practice, most people think of the capacities of the Lishen and EVE and ETC cells pretty interchangeably I think, even though they differ by a few %.
More even wear rates and more potential usable capacity are what come to mind.
One thing I've been thinking about lately (and I'm not 100% sure I've got it right, but I'm going to throw it out there and see what the consensus is) is that one issue with mismatched (capacity) cells is that the cell that is already the lowest capacity gets cycled marginally harder than the rest, meaning more wear, meaning slightly accelerated degradation/capacity loss: sort of a slow, gradual vicious cycle.
Of course, if the difference in capacity is small, or the use case is very gentle/low intensity, or the pack is seldom cycled all the way to near zero, it's maybe not a big enough deal to even care about. And that describes the situation for most of us.
But I am curious, in theory does my thinking hold up?
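A rough way to see the idea: in a series pack every cell passes the same Ah per cycle, so the lowest-capacity cell runs at a deeper depth of discharge (DoD) than its siblings. This is a sketch with made-up illustrative numbers (a hypothetical 4-cell pack reusing the 90Ah weak cell from earlier), not a degradation model:

```python
# Each cycle draws the same Ah through every cell in a series string,
# so DoD per cycle = cycle_ah / cell_capacity. The weak cell cycles deeper.
cells_ah = [100.0, 100.0, 100.0, 90.0]  # hypothetical pack, one weak cell
cycle_ah = 76.5                         # Ah drawn per cycle (85% of the weak cell)

for capacity in cells_ah:
    dod = cycle_ah / capacity * 100
    print(f"{capacity:.0f}Ah cell: {dod:.1f}% DoD per cycle")
# The 90Ah cell sees ~85% DoD while its siblings see ~76.5%, so it wears
# faster, widening the mismatch over time: the slow vicious cycle above.
```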
But this is also true. It is cheaper to buy more cells than it is to try to squeeze all of the amp hours out of cells.