Which is worse, discharging to empty or charging to full?

fafrd

To extend the lifetime of LiFePO4, the general recommendation is to never discharge below ~10% or charge above ~90%, but those two distinct events are presumably not equally damaging.

So which will result in faster cycle life degradation:

-never discharging below 10% but charging to 100% (3.65V/cell) [top-balanced battery]

-never charging above 90% but discharging to 0% (2.5V) [bottom-balanced]

Also, since the cells are rated for 2500 to 3500 cycles from 0% (2.5V) to 100% (3.65V) before capacity degrades to 80% of initial rated capacity, cycling between 0% and 80% of rated capacity should be guaranteed to deliver over 2500 to 3500 cycles. That brings up a second question: will there be any significant difference in lifetime between:

-discharging to 0% (2.5V) but only charging to 80% (~3.325V)

[should be well over 2500 to 3500 cycles before charging needs to continue to 3.65V to deliver a full 80% of initial rated capacity]

-only discharging to 10% (3.0V) but charging to 90% (~3.35V)

[should also be well over 2500 to 3500 cycles before discharge needs to continue down to 2.5V in order to still cycle a full 80% of initial rated capacity by 3.65V]
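
To make the arithmetic concrete, here is a minimal sketch of the reasoning, assuming a hypothetical 280Ah cell and the 2500 to 3500 full-cycle rating quoted above (the cell size and names are illustrative, not from any datasheet):

```python
# Hypothetical numbers for illustration only.
RATED_AH = 280               # assumed cell rating
RATED_CYCLES = (2500, 3500)  # full 0-100% cycles until capacity fades to 80%

def usable_ah(low_soc: float, high_soc: float, capacity_ah: float) -> float:
    """Ah delivered per cycle when cycling between two SOC limits."""
    return (high_soc - low_soc) * capacity_ah

# Cycling only 80% of rated capacity per cycle (10%-90%, or 0%-80%):
per_cycle = usable_ah(0.10, 0.90, RATED_AH)  # 224 Ah

# Even after fade to 80% of rating, the full 2.5-3.65V range still holds
# 0.80 * RATED_AH = 224 Ah, so an 80%-of-rating window remains available
# for at least the full rated cycle count -- the point of the question.
print(f"{per_cycle:.0f} Ah/cycle, for at least {RATED_CYCLES[0]} cycles")
```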
 
It should be interesting to see what kind of responses you get.

I'm sure it goes without saying, but because of the way the charge / discharge curve works on LFP - very steep past the knee - charging a multi-cell pack to 3.65V/cell or discharging to 2.5V/cell can actually be pretty difficult. In the steep part of the curve it is pretty likely that cell voltages will get out of balance. This is obviously harder for a 16-cell pack than it is for a 4-cell pack.

I have read that LFP can actually take repeated charging to over 3.65V (even up to 4.0V) without any real impact on cycle life. I certainly wouldn't do it, as there has to be a reason that 3.65V is universally called the max by cell manufacturers.

I'm not as sure about discharging below 2.5V.

I don't think there is universal acceptance of where 80% or 90% is in terms of voltage. I know @Off-Grid-Garage showed in one of his videos that you can still charge a cell to 100% by only going to 3.40V, but it will require a pretty long absorption period, so the current will stay high well after you get to 3.40V. If I recall correctly, he showed that at 3.45V there is very little absorption time, and at 3.5V there was essentially no absorption time.
 
It should be interesting to see what kind of responses you get.

I'm sure it goes without saying, but because of the way the charge / discharge curve works on LFP - very steep past the knee - charging a multi-cell pack to 3.65V/cell or discharging to 2.5V/cell can actually be pretty difficult.
Because the ‘slump’ under 3.0V is a much gentler slope than the ‘climb’ above 3.4V, my experience is that it is far easier to discharge a bottom-balanced battery to 0% (or whatever ‘empty’ limit you are aiming for) than it is to charge a top-balanced battery to 3.65V (or whatever ‘full’ limit you are aiming for).
In the steep part of the curve it is pretty likely that cell voltages will get out of balance. This is obviously harder for a 16-cell pack than it is for a 4-cell pack.
Again, in my experience, maintaining bottom-balance is far easier than maintaining top-balance (assuming you have a system that can function with a bottom-balanced battery, meaning the battery is drained nightly).

With well-matched cells, balancing is easy to maintain either way, but I am speaking about maintaining balance in the face of typically-mismatched cells with some mismatch in self-discharge rates that needs to be compensated for.

With a top-balanced battery, it is only while the battery is charged up into the knee that balancing can occur. Whether using the passive balancing typically activated above 3.5V in a typical BMS or an active balancer with a programmable activation voltage, the period of time that a battery is ‘held’ at 100% SOC (or whatever ‘full’ SOC is being used) is often very short (certainly in a self-consumption application like mine). So there is just not much time for balancing to occur.

Contrast that with a bottom-balanced battery, where the balance function can be activated once the battery has been drained and can remain in effect until charging begins the next morning.

Between the gentler slope and the much longer balance duration, I’ve found maintaining a battery so that all cells discharge to 2.5V (or rather, 2.8V in my case) without ever triggering the BMS LVD to be far easier than attempting to keep cells balanced above the knee while avoiding HVD…

“Horsefly” said:
I have read that LFP can actually take repeated charging to over 3.65V (even up to 4.0V) without any real impact on cycle life. I certainly wouldn't do it, as there has to be a reason that 3.65V is universally called the max by cell manufacturers.


And pretty much any BMS is going to trigger HVD at 3.65V or soon after (my HVD is hard-wired to 3.6V). Voltage changes so rapidly in that region that it’s very difficult to control.


I'm not as sure about discharging below 2.5V.
Me neither (which is why I asked).
I don't think there is universal acceptance of where 80% or 90% is in terms of voltage. I know @Off-Grid-Garage showed in one of his videos that you can still charge a cell to 100% by only going to 3.40V, but it will require a pretty long absorption period, so the current will stay high well after you get to 3.40V. If I recall correctly, he showed that at 3.45V there is very little absorption time, and at 3.5V there was essentially no absorption time.
From what I have understood, what matters most is the voltage the cells settle to, not the voltage they were charged to.

So a cell that settles to 3.4V may easily have been charged to a higher SOC than one charged at 0.1C to 3.65V.

The same is true on the low end with the ‘bounce-back’ after discharging to 2.5V, but because of the gentler slope, the impact in terms of delta-V is much less.

Regardless of how one characterizes 0%/10% or 100%/90% in terms of voltage levels, my main question is whether the degradation / damage mechanisms near 0% SOC / minimum voltage are better or worse than those near 100% SOC / maximum voltage…
 
I assume the real issue is how long at the maximum or minimum. If you float the cells at 3.65V all day or hold the cells at 2.5V all night, neither situation is great. Charging is easy to control to limit time near the top. Bouncing off the bottom seems worse, as a potential shutdown may cut power to the controls that facilitate charging. Not to mention, shutdowns in general are inconvenient. If the battery is truly being used to 90% each day or each cycle... consider a bigger battery.
 
Because the ‘slump’ under 3.0V is a much gentler slope than the ‘climb’ above 3.4V, my experience is that it is far easier to discharge a bottom-balanced battery to 0% (or whatever ‘empty’ limit you are aiming for) than it is to charge a top-balanced battery to 3.65V (or whatever ‘full’ limit you are aiming for).
I completely agree, and in fact this is also an indication that there is much more energy in the lower curve than in the upper curve. When I did my capacity test of my 230Ah cells with a constant 25A discharge, my consistent result was that there was <1% of the total above 3.3V but more like 8% below 3.15V, which was where the bottom knee started turning steeper. It also makes it tempting to try and eke out a little more on the bottom knee.
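
For anyone wanting to reproduce that kind of split from their own test, here is a minimal sketch of the calculation, assuming a hypothetical CSV log of (seconds, cell volts) from a constant-current discharge; the file format and function name are illustrative, not from any particular logger:

```python
import csv

def capacity_fractions(log_path: str, current_a: float):
    """Return (fraction of Ah above 3.30V, fraction below 3.15V)."""
    rows = []
    with open(log_path) as f:
        for t_s, v in csv.reader(f):          # columns: seconds, cell volts
            rows.append((float(t_s), float(v)))
    total_ah = current_a * rows[-1][0] / 3600.0
    ah_above = ah_below = 0.0
    for (t0, v0), (t1, _) in zip(rows, rows[1:]):
        dah = current_a * (t1 - t0) / 3600.0  # Ah in this time slice
        if v0 > 3.30:
            ah_above += dah
        elif v0 < 3.15:
            ah_below += dah
    return ah_above / total_ah, ah_below / total_ah

# e.g. capacity_fractions("discharge_log.csv", 25.0) on a 230Ah test
```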
From what I have understood, what matters most is the voltage the cells settle to, not the voltage they were charged to.
I guess I agree, but I think that if you charge until absorption finishes, the settled voltage will be higher than if you cut off charging before the cells have absorbed everything they can, no matter what voltage you were charging to (assuming it is at least 3.4V).
Regardless of how one characterizes 0%/10% or 100%/90% in terms of voltage levels, my main question is whether the degradation / damage mechanisms near 0% SOC / minimum voltage are better or worse than those near 100% SOC / maximum voltage…
I would have to assume there have been some scientific studies and there must be a peer-reviewed paper out there. Just need to find it.
I assume the real issue is how long at the maximum or minimum. If you float the cells at 3.65V all day or hold the cells at 2.5V all night, neither situation is great. Charging is easy to control to limit time near the top. Bouncing off the bottom seems worse, as a potential shutdown may cut power to the controls that facilitate charging. Not to mention, shutdowns in general are inconvenient. If the battery is truly being used to 90% each day or each cycle... consider a bigger battery.
I think we can agree with you there @time2roll, but that may be a separate question from what @fafrd has asked. That is, if you never leave the cells at 100% or 0%, but you repeatedly charge to 100%, does that degrade cycle life?
 
I assume the real issue is how long at the maximum or minimum.
If you’ve got any sources to back up that assumption, I’m interested (other than the plethora of conventional wisdom that cells should be stored at 30% to 50% and that it is bad / damaging to store cells near 100% SOC or near 0% SOC).
If you float the cells at 3.65V all day or hold the cells at 2.5V all night, neither situation is great.
To be precise, when you charge a cell up to Boost Voltage in CC and then hold that Boost Voltage for some period in CV (and likewise when you maintain CV at Float Voltage), you are truly ‘holding’ the cells at that voltage (Boost and/or Float).

There is no realistic way to ‘hold’ cells at 2.5V, and when you discharge to 2.5V or 2.8V and then stop discharging, the cells are not being ‘held’ - they relax to a higher voltage determined by their exact SOC (and also largely by the discharge current before hitting the low-voltage cutoff).

Cells can certainly (and easily) be left near 0% SOC overnight, so if you’ve got any evidence this is damaging, please share.

Charging is easy to control to limit time near the top. Bouncing off the bottom seems worse, as a potential shutdown may cut power to the controls that facilitate charging.
My SCC has a programmable relay that makes it easy to monitor battery voltage and shut off the inverter once a low-voltage threshold is reached (3.0V / 24.0V or 2.8V / 22.4V in my case). This makes it easy-peasy to stop discharging well before the battery reaches a voltage that would cause my SCC any problem starting to charge once the sun comes up…
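
For illustration, a minimal sketch of that relay logic; read_battery_voltage() and set_relay() are hypothetical stand-ins for whatever the SCC or monitoring hardware actually exposes, with thresholds taken from the post above:

```python
import time

CUTOFF_V = 24.0  # 3.0V/cell on an 8s pack, per the post above
REARM_V = 25.6   # assumed recovery threshold before re-enabling the inverter

def monitor(read_battery_voltage, set_relay, poll_s: float = 10.0):
    """Open the inverter relay at the low-voltage threshold, with hysteresis."""
    inverter_on = True
    while True:
        v = read_battery_voltage()
        if inverter_on and v <= CUTOFF_V:
            set_relay(False)   # stop discharging well above cell-level LVD
            inverter_on = False
        elif not inverter_on and v >= REARM_V:
            set_relay(True)    # next day's charge has recovered the pack
            inverter_on = True
        time.sleep(poll_s)
```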

Not to mention, shutdowns in general are inconvenient.
If relying on cell-level LVD controlled by the BMS, I certainly agree, but ending discharge by turning off the inverter once a properly bottom-balanced battery has reached its ‘empty’ / ‘stop discharge’ voltage is not inconvenient at all (it’s the desired behavior).

If the battery is truly being used to 90% each day or each cycle... consider a bigger battery.
Pretty strongly disagree with that statement. 3500 or even 2500 cycles is going to be more than I’ll ever need.

And those cycles are at a full 0% to 100% until capacity has degraded to 80% of initial rating.

So by definition, if you set your SOC limits to 80% of initial rating and simply start discharging to lower voltages and/or charging to higher voltages as needed to keep cycling a capacity equal to 80% of initial rating, you’re going to get even more than 2500 to 3500 cycles before you can no longer store 80% of initial rating between 2.5V and 3.65V (again, by definition).

So yes, using your battery at 90% of initial rating is not going to deliver as many cycles as using it at 80% of initial rating, but it should deliver more than the ~1750 cycles it would take for capacity to fade to 90% of initial rating when running the full 2.5V to 3.65V range.

So, worst case, if you run at 90% of initial rating, you may only get 2000 or 2500 cycles before delivering 90% requires the full 2.5V to 3.65V range, at which point capacity will slowly degrade to 80% of initial over the following ~2000 cycles.

So we’re talking 5-1/2 to almost 7 years before your LiFePO4 battery may not continue to deliver the full 90% of initial rating you need, at which point you can accept some energy loss or add a second battery.

Spending more money on a new battery right from the get-go when you have absolutely no idea how long you’ll be able to get a full 90% of rating from what you have would be a total waste (at least in my view).

90% is my worst-case use. Many days the battery is charging up to no more than 50% (which should be another factor increasing lifetime).

For me, it’s really a question of whether I can do the ‘easy’ thing of discharging to my ‘empty’ threshold every night and charging up to however high the next day’s production gets me, or whether I’d be degrading my capacity more quickly than if I discharge to ~9.5% / 3.0V for a few years, then drop to 2.9V for a few more years, then drop to 2.8V once that is needed to avoid charging some cells past 3.5V on high-production days.

I’m just not understanding the difference in capacity degradation mechanisms above the top knee versus below the bottom knee, and I’m seeking some guidance on which is worse / more damaging…
 
I’d go with low voltage over high voltage. The only issue is if you fall into the lower knee you need to charge slowly.

Don’t discount keeping the battery longer than 10 years. Mine is now 10 years old, and a decade ago I thought there would be a better battery by now - I’m glad I was conservative and hope to get another 10 years out of it.
 
I completely agree, and in fact this is also an indication that there is much more energy in the lower curve than in the upper curve. When I did my capacity test of my 230Ah cells with a constant 25A discharge, my consistent result was that there was <1% of the total above 3.3V but more like 8% below 3.15V, which was where the bottom knee started turning steeper. It also makes it tempting to try and eke out a little more on the bottom knee.
For sure.
I guess I agree, but I think that if you charge until absorption finishes, the settled voltage will be higher than if you cut off charging before the cells have absorbed everything they can, no matter what voltage you were charging to (assuming it is at least 3.4V).
Agree with this as well. Since EVE’s spec is to halt charging at 0.1C as soon as 3.65V is reached, it’s certain that charging at lower currents and/or absorbing in CV mode at 3.65V results in cells that have been charged above 100% SOC by EVE’s definition…

In the end, no matter how you charged (and to whatever voltages), the settled cell voltage after a suitable rest period is probably the best reflection of what percentage of maximum possible SOC a cell has been charged to (at least relative to itself).

I would have to assume there have been some scientific studies and there must be a peer-reviewed paper out there. Just need to find it.
That’s what I’m hoping (and why I started the thread).
I think we can agree with you there @time2roll, but that may be a separate question from what @fafrd has asked. That is, if you never leave the cells at 100% or 0%, but you repeatedly charge to 100%, does that degrade cycle life?
Until some new quantitative information shows up, I think I’ve clarified my strategy to manage my bottom-balanced battery:

Start with a relatively ‘safe’ discharge limit of ~10% / 3.0V.

Monitor peak charge voltage, and as long as the SCC never reaches its Boost Voltage of 3.375V (~99% SOC), leave everything alone.

Once I see first evidence that the lowest-capacity cell in my battery has charged all the way to 3.375V, I will slowly lower my discharge limit voltage below 3.0V / 24V as needed to avoid that cell reaching 3.375V, even on high-production days.
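
For illustration only, that three-step rule could be sketched as follows; peak_cell_v would come from BMS logging, and the 0.05V step size and 2.8V floor are my assumptions, not a tested recipe:

```python
BOOST_TRIGGER_V = 3.375  # lowest-capacity cell reaching ~99% SOC
FLOOR_V = 2.8            # assumed: never lower the discharge limit below this
STEP_V = 0.05            # assumed: lower the limit gradually, not all at once

def adjust_discharge_limit(current_limit_v: float, peak_cell_v: float) -> float:
    """Lower the per-cell discharge limit only once a cell has hit Boost."""
    if peak_cell_v >= BOOST_TRIGGER_V and current_limit_v > FLOOR_V:
        return max(FLOOR_V, current_limit_v - STEP_V)
    return current_limit_v  # otherwise, leave everything alone

# Starting from the 'safe' 3.0V limit:
limit = 3.0
limit = adjust_discharge_limit(limit, peak_cell_v=3.38)  # -> 2.95
```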
 
I’d go with low voltage over high voltage. The only issue is if you fall into the lower knee you need to charge slowly.
Since my charge begins with first light when PV output is low, that should never be a problem for me, but if you’ve got any references describing the problem of charging empty cells low in the knee at higher charge currents (faster), I’d appreciate learning more about the issue.
Don’t discount keeping the battery longer than 10 years. Mine is now 10 years old, and a decade ago I thought there would be a better battery by now - I’m glad I was conservative and hope to get another 10 years out of it.
Oh, I’m not discounting it at all - my point was that adding battery capacity because of a concern that cycling at 90% of rated capacity may not give you the lifetime you expect is misguided.

I’ve seen several references to the fact that LiFePO4 capacity degradation seems to be most related to the cumulative charge that has been cycled through the battery (assuming good treatment overall). Meaning that as long as you are not charging above limits or discharging below limits, it doesn’t matter how many cycles or to exactly what limit voltages as much as total electrons transferred into and out of the cell.
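
A minimal sketch of that throughput framing, under the (unverified) assumption that lifetime scales with total Ah cycled rather than with cycle count; all numbers are hypothetical:

```python
RATED_AH = 280                         # hypothetical cell rating
RATED_CYCLES = 2500                    # conservative end of the rating
LIFETIME_AH = RATED_AH * RATED_CYCLES  # ~700,000 Ah of rated throughput

def days_remaining(ah_logged: float, avg_daily_ah: float) -> float:
    """Days of use left under the cumulative-throughput hypothesis."""
    return (LIFETIME_AH - ah_logged) / avg_daily_ah

# Cycling only 50% of capacity most days stretches the same throughput
# budget over twice as many calendar days as full daily cycles would:
print(days_remaining(0, 0.5 * RATED_AH) / 365)  # ~13.7 years
```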

I think I also understood from someone who posted an excellent video in one thread or another, that when LiFePO4 cells are first manufactured, they are in an uncharged state.

Both of these are reasons I ‘feel’ like a daily discharge to 2.8V, resting overnight at whatever level the cells bounce back to after inverter shutoff, followed by a daily charge to however full the next day’s production can get the battery, is ‘gentle’ and should result in the expected cell lifetime.

But there are so many members here who know so much more about these cells and the mechanisms underlying their degradation that I felt starting this thread to buttress my ‘feeling’ with some science / facts would be a good idea.
 
Oh, I’m not discounting it at all - my point was that adding battery capacity because of a concern that cycling at 90% of rated capacity may not give you the lifetime you expect is misguided.
That is great if your usage is virtually the same every day or you have an autostart generator. If I am doing something even as mundane as typing right now and the inverter shuts me off, I am not happy. Battery expansion would happen quickly. This is for my convenience, not to get a few more cycles out of the battery. If it is fine for the inverter to shut down, then all is good.
 
That is great if your usage is virtually the same every day or you have an autostart generator. If I am doing something even as mundane as typing right now and the inverter shuts me off, I am not happy. Battery expansion would happen quickly. This is for my convenience, not to get a few more cycles out of the battery. If it is fine for the inverter to shut down, then all is good.
Makes sense. I’m just doing some offset of self-consumption and some time-shifting to cover peak TOU consumption.

So yeah, in my case the grid is always there to run everything and the only thing I can count on each day is that the inverter will shut down every night after it has finished draining whatever energy was stored in the battery that day to offset some consumption…
 
But there are so many members here who know so much more about these cells and the mechanisms underlying their degradation that I felt starting this thread to buttress my ‘feeling’ with some science / facts would be a good idea.

I see a lot of “facts” bandied around that are in direct contradiction of the information I get from two people I know involved in the design/manufacture of LiFePO4.

Still early days for comprehensive knowledge.
 
I see a lot of “facts” bandied around that are in direct contradiction of the information I get from two people I know involved in the design/manufacture of LiFePO4.

Still early days for comprehensive knowledge.
If you get a chance to bounce my question off of them (the relative risk of fully discharging nightly versus fully charging daily), I’d sure be interested in their response…
 
The science and facts are out there. For a few decades now. You need to look harder. :)

But will it affect YOUR environment in a significant way? Perhaps not. Only you can tell whether this is a technical obsession that may not matter, or whether simply following "best practices" will be enough.

In other words, this could be an endless thread that goes nowhere because everyone has a different situation - hence an overarching best practice may not suit you.

Search for things like lithium plating, electrolyte oxidation, SEI-layer formation, secondary reactions, heat aggravation, and the TIME factor that many overlook. Most importantly, see if it applies to *your* application, as blanket statements can be meaningless to a trivial application.
 
The science and facts are out there. For a few decades now. You need to look harder. :)

But will it affect YOUR environment in a significant way? Perhaps not. Only you can tell whether this is a technical obsession that may not matter, or whether simply following "best practices" will be enough.

In other words, this could be an endless thread that goes nowhere because everyone has a different situation - hence an overarching best practice may not suit you.

Search for things like lithium plating, electrolyte oxidation, SEI-layer formation, secondary reactions, heat aggravation, and the TIME factor that many overlook. Most importantly, see if it applies to *your* application, as blanket statements can be meaningless to a trivial application.
Appreciate the helpful advice. I’m not really interested in obsessing too much over this question - I was just sort of hoping some of our more knowledgeable members understood whether one extreme is generally more damaging than the other.

I was actually just doing a bit of searching before seeing your post, and much of what’s out there seems to be very focused on characterizing the lifetime impact of temperature extremes (which is really not my concern, since my battery will generally be at 65F / 18C, with common excursions of 1 or 2 degrees C outside of that and extremely rare worst-case excursions to 40F / 8C on the low end and 77F / 25C on the high end)…
 
The science and facts are out there. For a few decades now. You need to look harder. :)

My pack is part of a trial on SEI layer formation, started in 2011, still ongoing.

Some accelerated failure modes are known, the more subtle (such as spending the majority of cell life lower in the SOC range rather than higher) are not conclusively known.

I expect in another 20 years or so there will be a lot more comprehensive information available.

The most frustrating thing with LiFePO4 is that the chemical reaction at low charge rates is different from that at high charge rates, so it’s not possible to extrapolate data.
 
I think Will Prowse has it correct; these batteries last a long time, and if you shorten their life by discharging them 100%, there will probably be a better battery available in the next 10-15 years anyway.
 
That’s what people said to me ten years ago when I bought my LiFePO4 cells - where is the better alternative?
 
That’s what people said to me ten years ago when I bought my LiFePO4 cells - where is the better alternative?
You know, there was a proposal here in the U.S. back in 1889 to close the patent office, because everything had already been invented. That prediction didn't age well.

I don't think it is wise to say we are at the peak of any technology. Especially battery technology, since we are just emerging from lead-acid dominance from the 1800's.
 
That’s what people said to me ten years ago when I bought my LiFePO4 cells - where is the better alternative?
What shape are they? Sounds like that was long before the ‘pouch’ design was even conceived…
 