Ok, this has been grinding my gears for a while now, so I just wanted to write it down here.
I see statements like this all over the net when it comes to cell capacity testing:
- "you need to divide the Wh reading by the nominal voltage of 3.2V to get the real Ah rating".
- "AH without a nominal voltage means nothing, otherwise we have an infinite AH battery for you at 0 volts. "
Usually related to some capacity measurement not matching a label.
I'm just going to say this: taking the Wh reading of your measurement and dividing it by the nominal voltage of the cell or battery is just wrong.
During a discharge (pick e.g. any of the curves Andy recorded as an example) the voltage stays above 3.2V for the majority of the time (over 60% iirc). If the voltage were a constant 3.2V over the entire discharge you could do this - it would even work if the voltage merely averaged 3.2V - but because it sits above 3.2V for most of the discharge, the average ends up higher, and the "divide by 3.2V" trick skews the result in favor of a higher Ah number.
The reason they're skewed lies in the definition of nominal voltage: nominal voltage is the voltage at the midpoint between fully charged and fully discharged on a 0.2C discharge - so unless you discharge, say, a 304Ah cell at roughly 60A, you won't get a 3.2V average over the curve, and you can't just divide your results by the nominal voltage. For LiFePO4 the deviation is small, but it's large enough to make the Ah rating look better than it is.
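To make this concrete, here's a minimal sketch (the discharge curve below is a made-up LiFePO4-like profile, not a real log) that integrates a constant-current discharge and compares the true Ah with the Wh-divided-by-3.2 shortcut:

```python
# Minimal sketch: integrate a made-up LiFePO4-like discharge curve
# (hypothetical data, not a real cell log) and compare the true Ah
# reading with the "divide Wh by 3.2V" shortcut.

# (time in hours, cell voltage in V) during a constant 40A discharge
curve = [
    (0.0, 3.32), (1.0, 3.30), (2.0, 3.28), (3.0, 3.26),
    (4.0, 3.25), (5.0, 3.24), (6.0, 3.22), (7.0, 3.10), (7.5, 2.50),
]
current = 40.0  # A, constant

ah = 0.0   # integral of I dt
wh = 0.0   # integral of V * I dt
for (t0, v0), (t1, v1) in zip(curve, curve[1:]):
    dt = t1 - t0
    ah += current * dt                    # current is constant, so this is exact
    wh += current * (v0 + v1) / 2.0 * dt  # trapezoid rule on the voltage

print(f"true Ah     : {ah:.1f}")
print(f"Wh measured : {wh:.1f}")
print(f"Wh / 3.2    : {wh / 3.2:.1f}  <- inflated")
print(f"avg voltage : {wh / ah:.3f} V (above 3.2V)")
```

With this made-up curve the shortcut reports about 302Ah for a cell that actually delivered 300Ah - exactly the kind of skew I'm talking about.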
In fact, if you take the Wh reading from a test and divide it by the Ah measurement from the same test, you'll see an average voltage above 3.2V at the 40A-or-less discharge currents that the ZKE testers and similar devices support. Suppose we measure 1000Wh from a cell and divide this by 3.2: we get 312.5Ah. If the actual average voltage were just 0.02V higher, at 3.22V, we get 'only' about 310.6Ah. The whole reason for rating a battery in Ah is that it takes voltage and losses out of the equation: any bad connection, wire resistance, etc. will impact the Wh reading (especially on devices without separate voltage sense wires), but will still produce an accurate Ah reading (the current is the same everywhere in the circuit) - demonstrated here.
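To illustrate that last point, here's a quick sketch (the lead resistances and numbers are hypothetical): a tester without sense wires measures the cell voltage minus the drop across its own leads, so the Wh reading shrinks with every extra milliohm while the Ah reading stays put:

```python
# Sketch: why Ah is robust to wiring losses while Wh is not.
# Hypothetical numbers: 3.22V average cell voltage, constant 40A
# discharge for 7.5h, tester measuring at its own terminals.

v_cell_avg = 3.22   # V, average voltage at the cell terminals
current = 40.0      # A, constant discharge
hours = 7.5

for r_leads in (0.0, 0.002, 0.010):  # ohms of lead/connection resistance
    v_meter = v_cell_avg - current * r_leads  # voltage the tester sees
    ah = current * hours                      # unaffected by resistance
    wh = v_meter * current * hours            # shrinks with the lead drop
    print(f"R={r_leads*1000:4.0f} mOhm -> Ah={ah:.1f}, Wh={wh:.1f}, "
          f"Wh/3.2={wh/3.2:.1f}")
```

At 10mOhm of bad crimps the Wh reading drops by over 100Wh, and the divide-by-3.2 'rule' would now punish the cell for your wiring - while the Ah column never moves.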
Stating that Ah readings without a nominal voltage are meaningless is wrong. Stating that you have to divide by the nominal voltage to get the actual Ah rating of a cell is wrong. There are specific circumstances where the latter works, but it just can't be applied as a blanket statement. If you do a capacity test, measure the Ah directly, and believe the Ah reading the measurement gives you - not the label, and not the silly 'divide by 3.2V' rule.