DIY BMS design and reflection

It will measure individual cell IR (not with AC though, since that would be too complicated and less accurate than a true DC IR measurement anyway), but only on user demand since it involves switching the loads on/off multiple times.

Current draw to feed an inverter at high load will be rectified AC. (capacitors can't do much to filter low frequency like 60 Hz.)
So you've got your AC right there, without user request or permission.
 
Diode clamping, and maybe MOV?
MOVs don't have tight regulation; one has to be selected such that some over-voltage can still get through and be applied, but it should clip extreme levels.

I suppose for tests you have to do long/short battery cables, large loops vs. tightly coupled.

I added some TVSs, which should be a bit tighter than MOVs, but the MOSFETs are 80 V so that's not really enough margin. The MOSFETs are avalanche rated though, and I calculated the energy to be far below their max.

Yep, exactly, I'll test with multiple different configurations of long/short, thin/thick, large/small loop area, etc... ;)
 
Current draw to feed an inverter at high load will be rectified AC. (capacitors can't do much to filter low frequency like 60 Hz.)
So you've got your AC right there, without user request or permission.

That's not AC, that's rectified DC. And there are so many uncontrolled variables here that it would be pretty much impossible to use. And some people don't even have inverters ^^
 
That's not AC, that's rectified DC. And there are so many uncontrolled variables here that it would be pretty much impossible to use. And some people don't even have inverters ^^

OK, not AC unless a charge current is superimposed so it changes polarity.
But, with current varying periodically, if you measure voltage and current as multiple simultaneous samples over time, I think that lets you calculate resistance. Perhaps your design muxes cell voltages to an ADC and also reads a current shunt. Not all cells need to be read simultaneously, but one cell's voltage and the current need to be near simultaneous on the 120 Hz timescale (assuming an inverter load). And filtered, either analog or with many fast samples and digital filtering, to avoid noise.
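One way this might be done with a single muxed ADC (a sketch of mine under that assumption, with hypothetical hook names): bracket each cell-voltage conversion between two current conversions and interpolate the current to the voltage timestamp, so the V/I pair is effectively simultaneous on the 120 Hz timescale.

```c
#include <stdint.h>

/* Hypothetical HAL hooks - stand-ins for the real mux/ADC and timer calls. */
extern float    adc_read_current_amps(void);
extern float    adc_read_cell_volts(uint8_t cell);
extern uint32_t micros_now(void);

/* Produce a "near simultaneous" (I, V) pair for one cell from three
 * sequential conversions: current, cell voltage, current again, then
 * linearly interpolate the current to the instant the voltage was read. */
void sample_cell_iv(uint8_t cell, float *i_out, float *v_out)
{
    uint32_t t0 = micros_now();
    float    i0 = adc_read_current_amps();   /* current just before V */
    uint32_t tv = micros_now();
    float    v  = adc_read_cell_volts(cell); /* the cell voltage      */
    float    i1 = adc_read_current_amps();   /* current just after V  */
    uint32_t t1 = micros_now();

    float frac = (t1 != t0) ? (float)(tv - t0) / (float)(t1 - t0) : 0.5f;
    *i_out = i0 + (i1 - i0) * frac;           /* current at the time of V */
    *v_out = v;
}
```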

I think other BMS that report IR do so passively, using uncontrolled variations.

For non-inverter loads, you may have to be patient to see sufficient current variation. By then SoC could have changed. So qualifying readings to show accurate resistance is more difficult.
 
I only have a single-channel ADC so I can't do simultaneous readings, and its max conversion rate is 60 Sps, which is not nearly enough to avoid big errors with sequential readings on a 120 Hz signal.

I'll maybe consider doing something like that in a future design but for now I'll stick to the original plan.

Yep, I planned to do some less accurate but non-invasive IR measurement too (basically using current variations of the loads), but the true IR measurement feature will be user triggered and will have a lot more control over the external variables to be more accurate ;)
 
Years ago (mid '80s), I used a GHz bandwidth DSO at HP. Its actual sample rate was much lower, but by repeatedly triggering it could fill in the waveform over multiple passes.

You likely don't have a trigger circuit, but if you have a timebase you could follow a signal over time. Sampling at 59 Hz, for instance, you could eventually capture and graph the current being drawn by an inverter. Having determined the operating frequency and phase, you could sample current at some point on the waveform in one cycle, and cell voltage at the same point of another cycle. Repeated several times (or at different points in the cycle), the multiple point-slope calculations, each giving an IR value, could be averaged.
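To make that concrete, here's a rough numerical illustration (mine, not part of either design) of how sampling a repeating 60 Hz waveform at only 59 Sps walks the sample point through the whole cycle in about one second:

```c
/* Equivalent-time sampling sketch: a repeating 60 Hz inverter current
 * waveform sampled at only 59 Sps. Each new sample lands ~1/59 of a cycle
 * (~0.28 ms) further into the waveform than the previous one, so after
 * 59 samples the whole cycle is covered and the points can be sorted by
 * phase to rebuild the waveform shape. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double f_sig  = 60.0;               /* inverter line frequency, Hz */
    const double f_smp  = 59.0;               /* slow ADC sample rate, Sps   */
    const double two_pi = 2.0 * acos(-1.0);

    for (int n = 0; n < 59; n++) {
        double t     = n / f_smp;             /* absolute time of sample n   */
        double phase = fmod(t * f_sig, 1.0);  /* fraction of the 60 Hz cycle */
        /* stand-in for the measured current: a rectified 60 Hz sine */
        double amps  = fabs(sin(two_pi * f_sig * t));
        printf("%2d  phase=%.3f  I=%.3f A\n", n, phase, amps);
    }
    return 0;
}
```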

Big software effort, and it only works for repeating signals, but my point is that it is possible.
Software is much more expensive than hardware for one-off projects, but can be much cheaper at some production volume.

We usually think of using something like a Kill-a-Watt to measure various appliances individually. Commercial products out there instead connect to one point of the breaker panel and purport to provide the same monitoring function by analyzing the signature of the measurements, extracting the operation of each appliance from the cacophony of simultaneous loads. So this sort of thing is done, or at least attempted and claimed.
 
That's getting very very complicated for just an IR measurement... ?

Also, remember I'm quite limited in software space and CPU power, and with all of what I have planned software-wise I'm pretty sure I'll have very few resources left, if any, once it's done. And pretty much all the other features are more important and so will have priority for CPU time over secondary features like IR measurement.
 
Just running SPC on cell voltage measurements is probably all it takes to detect bad connections, which is the failure mechanism the user can do something about if warned in time. Randomly timed samples and tracking peaks/valleys (excluding spurious spikes) over windows of time might make that work without having to know where in the waveform each measurement was taken.
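As a rough illustration of that windowed peak/valley idea (a generic sketch, not anyone's actual firmware): keep a running high/low per cell over each window, then flag any cell whose voltage swing under load stands well apart from the pack average.

```c
#include <stdio.h>
#include <float.h>

#define NUM_CELLS 16

typedef struct { float vmin, vmax; } window_t;

static window_t win[NUM_CELLS];

/* Reset the min/max trackers at the start of each time window. */
void window_reset(void)
{
    for (int c = 0; c < NUM_CELLS; c++) {
        win[c].vmin =  FLT_MAX;
        win[c].vmax = -FLT_MAX;
    }
}

/* Feed one randomly timed cell-voltage sample into the window.
 * (Spurious spikes could be rejected here, e.g. by ignoring readings that
 *  jump implausibly far from the previous sample.) */
void window_update(int cell, float volts)
{
    if (volts < win[cell].vmin) win[cell].vmin = volts;
    if (volts > win[cell].vmax) win[cell].vmax = volts;
}

/* At the end of the window: a healthy pack sags roughly equally on every
 * cell, so a cell whose (vmax - vmin) swing stands well above the pack
 * average is a likely sign of a high-resistance cell or connection. */
void window_check(float ratio_limit)
{
    float avg = 0.0f;
    for (int c = 0; c < NUM_CELLS; c++)
        avg += win[c].vmax - win[c].vmin;
    avg /= NUM_CELLS;

    for (int c = 0; c < NUM_CELLS; c++) {
        float swing = win[c].vmax - win[c].vmin;
        if (avg > 0.0f && swing > ratio_limit * avg)
            printf("cell %d: swing %.0f mV vs pack avg %.0f mV - check connection\n",
                   c, swing * 1000.0f, avg * 1000.0f);
    }
}
```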

Personally, I've tried to do what I can or need to in hardware. I'm just noting what may be available with clever enough software; we see some crazy stuff these days.
 
Oh yeah, bad connections are easy to detect, no worries here ;)

Yep, I have the same approach: do as much as reasonably possible in hardware and use software for the rest.
 
It is degrees of bad - if contact resistance is high enough to run hot at full load but doesn't fail immediately, does that stand out?

When I started one project, there was zero support from software or firmware. I made everything either straight hardware or set by DAC and read by ADC, control loops in hardware. Settable tuning parameters would have been a nice addition.

Software/firmware, either need reliable time-sharing/RTOS, or a tight enough loop. This does get implemented in some products, but I had no faith in the team even once it was formed. Especially things like keeping a filament from burning out, I wanted fast hardware protection.

I'm thinking if you just rotate through measuring current and all cell voltages, over windows of time by logging high and low of each, you can detect outlier voltages and also calculate IR.

Mean of current gives power from battery, while RMS gives heating of wire/fuse/MOSFET.
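To spell out that distinction (a generic sketch of mine): the mean of the current samples times pack voltage approximates the power delivered, while the RMS is what sets the I²R heating in the wire, fuse, and MOSFETs.

```c
#include <math.h>
#include <stdio.h>

/* Mean and RMS of a block of current samples.
 * mean * V_pack  ~ average power drawn from the battery
 * rms^2 * R      ~ heating in any series resistance (wire, fuse, MOSFET) */
void current_stats(const float *samples, int n, float *mean, float *rms)
{
    double sum = 0.0, sum_sq = 0.0;
    for (int i = 0; i < n; i++) {
        sum    += samples[i];
        sum_sq += (double)samples[i] * samples[i];
    }
    *mean = (float)(sum / n);
    *rms  = (float)sqrt(sum_sq / n);
}

int main(void)
{
    /* Made-up samples roughly following a rectified sine of 20 A peak. */
    float i[8] = { 0.0f, 7.7f, 14.1f, 18.5f, 20.0f, 18.5f, 14.1f, 7.7f };
    float mean, rms;
    current_stats(i, 8, &mean, &rms);
    printf("mean = %.2f A, rms = %.2f A\n", mean, rms);
    return 0;
}
```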

The 4th scope image shows a current transformer around the battery cable, inverter 40% loaded with a resistance heater.
I think at 100% load the DC current (which can be inferred from the AC CT measurement & the DC reported by the inverter, or a clamp DC ammeter) would reach about 0 A for the lows. So there's a large signal in both your current and cell voltage readings, which you'd have to average out in some manner.

 
It is degrees of bad - if contact resistance is high enough to run hot at full load but doesn't fail immediately, does that stand out?

Rapid calculations suggest I can easily detect any bad resistance that might generate more than a few W of losses, and a few W is far under what is required to heat a connection dangerously, so it should be plenty fine ;)


When I started one project, there was zero support from software or firmware. I made everything either straight hardware or set by DAC and read by ADC, control loops in hardware. Settable tuning parameters would have been a nice addition.

Software/firmware, either need reliable time-sharing/RTOS, or a tight enough loop. This does get implemented in some products, but I had no faith in the team even once it was formed. Especially things like keeping a filament from burning out, I wanted fast hardware protection.

Yes, I have a pretty good idea of how I'll handle multitasking on the MCU (multiple non-blocking timers + state machines for the things where it's appropriate), and I may even use an interrupt triggered by a timer to have a real time slice for the critical stuff, but I'm not sure yet what I'll do or not do. Even if I've had ideas for the software since the start (which I wrote down so I don't forget them, don't worry), I want to validate the hardware first, then I'll be able to concentrate on the software.
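For illustration, here's a bare-bones sketch of that non-blocking-timer + state-machine pattern (completely generic, all names hypothetical, not the actual firmware):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hook: a free-running millisecond tick, typically incremented
 * from a hardware timer interrupt. */
extern volatile uint32_t g_ms;

/* Non-blocking software timer: returns true once 'period' ms have elapsed
 * since it last fired, without ever busy-waiting. */
bool timer_expired(uint32_t *last, uint32_t period)
{
    if ((uint32_t)(g_ms - *last) >= period) {
        *last += period;
        return true;
    }
    return false;
}

typedef enum { BAL_IDLE, BAL_MEASURE, BAL_BLEED } bal_state_t;

/* One cooperative task (balancing shown as an example); measurement, comms,
 * protection checks, etc. would each get their own small state machine
 * driven from the same main loop. */
void balancing_task(void)
{
    static bal_state_t state  = BAL_IDLE;
    static uint32_t    t_last = 0;

    switch (state) {
    case BAL_IDLE:
        if (timer_expired(&t_last, 1000))   /* re-evaluate once per second */
            state = BAL_MEASURE;
        break;
    case BAL_MEASURE:
        /* kick off / collect cell voltage readings here, then decide      */
        state = BAL_BLEED;                  /* placeholder decision        */
        break;
    case BAL_BLEED:
        if (timer_expired(&t_last, 100))    /* bleed in short bursts       */
            state = BAL_IDLE;
        break;
    }
}

/* Main loop: every task runs briefly and returns, so nothing blocks and the
 * truly time-critical work can live in a timer interrupt instead. */
void main_loop(void)
{
    for (;;) {
        balancing_task();
        /* other_tasks(); */
    }
}
```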

Yep, and that's why I also have the HW protection board. I'll make the most reliable software I can, backed by a proper HW watchdog, but even then I still want full HW redundancy for the critical things.


I'm thinking if you just rotate through measuring current and all cell voltages, over windows of time by logging high and low of each, you can detect outlier voltages and also calculate IR.

Not sure about that yet. My current plan is to measure the current far more often than the other parameters (it is more important than the rest for a lot of reasons, so I want the most accurate value possible, as often as possible): something like one in every 2 or 3 measurements will be a current one. Then I check if there was a big change since the last measurement, and if there was, I do a voltage measurement to calculate the IR with (Vnew - Vold) / (Inew - Iold). The real IR measurement would be roughly the same, but instead the loads would be switched on and off to generate the current change and have better control over the variables.
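In code that boils down to something like this small sketch (names are mine, not the actual firmware), with a minimum current step so the division doesn't just amplify measurement noise:

```c
#include <math.h>

/* Point-slope internal resistance estimate: IR = (Vnew - Vold) / (Inew - Iold).
 * Only worth computing when the current step is large compared with the
 * current-sensing noise; returns a negative value to signal "no estimate". */
float estimate_ir_ohm(float v_old, float i_old,
                      float v_new, float i_new,
                      float min_delta_i)
{
    float di = i_new - i_old;
    if (fabsf(di) < min_delta_i)
        return -1.0f;                         /* step too small */
    /* magnitude; the raw sign only reflects the current polarity convention */
    return fabsf((v_new - v_old) / di);
}
```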

The first method would only be used if the user hasn't done the second one for a long time (a time which is still to be defined), to compare with the previous IR and see if a cell is drifting.

Your method is very interesting since logging min/max is easier and less demanding. My only worry would be that it is too inaccurate to be useful (since the voltages would be measured relatively far from the current readings time-wise), but the good thing is that I can simply test both methods and compare them to see which one is best.
 
I only have a single-channel ADC so I can't do simultaneous readings, and its max conversion rate is 60 Sps, which is not nearly enough to avoid big errors with sequential readings on a 120 Hz signal.

I'll maybe consider doing something like that in a future design but for now I'll stick to the original plan.

Yep, I planned to do some less accurate but non-invasive IR measurement too (basically using current variations of the loads), but the true IR measurement feature will be user triggered and will have a lot more control over the external variables to be more accurate ;)
hi! love this project. great discussion about implementation details!

volunteering more info, unsolicited of course :)

the ADS1115 ADC with 16bit and 860sps is what i'm evaluating for some DIY cell voltage readings. https://www.adafruit.com/product/1085

sampling at 240 Hz when expecting inverter usage is the lower limit of what i am targeting, but that may well not be helpful, only experiment can tell! synchronous sampling is desired but yea, unsure how much impact staggered sampling below the Shannon-Nyquist rate has

anyways, it is interesting to sample the current very reliably, i am interested in the tradeoff of temporal resolution and value resolution between volts and amperes

*cheering*
 
Yep, a future design will definitely use a higher sample rate dual-channel ADC with one of the channels dedicated to the current measurement. That would solve in hardware problems that are hard to solve in software with the current design.


sampling at 240 Hz when expecting inverter usage is the lower limit of what i am targeting, but that may well not be helpful, only experiment can tell! synchronous sampling is desired but yea, unsure how much impact staggered sampling below the Shannon-Nyquist rate has

You don't need synchronous sampling, and the acceptable lower limit for sampling is usually 5x the highest frequency you want to sample (that's why pretty much all DSOs have a sampling rate 5x higher than their BW...). So for 60 Hz it would be 300 Sps; you're a bit under with 240, but with a bit of averaging over a few passes it'll be totally fine ;)


anyways, it is interesting to sample the current very reliably, i am interested in the tradeoff of temporal resolution and value resolution between volts and amperes

Don't forget the ADC accuracy highly depends on the voltage reference accuracy. It's a 16-bit ADC, but I can't see any external Vref on this board, so expect to have a 13- or 14-bit ADC at best, with the lower bits indicating more the temperature than your sampled signal ^^
 
acceptable lower limit for sampling is usually 5x the highest frequency you want to sample (that's why pretty much all DSOs have a sampling rate 5x higher than their BW...). So for 60 Hz it would be 300 Sps
thanks, will be sure to aim for 5x rate sampling vs AC line frequency! :)
Don't forget the ADC accuracy highly depends on the voltage reference accuracy. It's a 16-bit ADC, but I can't see any external Vref on this board, so expect to have a 13- or 14-bit ADC at best, with the lower bits indicating more the temperature than your sampled signal ^^
evaluating error magnitude is certainly not my specialty :D
the datasheet seems to indicate Offset drift of 0.005 LSB/°C and Offset error of ±3 LSB, may i ask, ought this to mean a 1°C change results in a 0.005/(2^16) error? and a constant 3/(2^16) error?

may i ask, would operating the ADC in differential mode, with one input to cell ground and one to cell positive, improve the sampling resolution?

sorry if this is too off topic!
 
You don't need synchronous sampling, and the acceptable lower limit for sampling is usually 5x the highest frequency you want to sample (that's why pretty much all DSOs have a sampling rate 5x higher than their BW...). So for 60 Hz it would be 300 Sps; you're a bit under with 240, but with a bit of averaging over a few passes it'll be totally fine ;)

thanks, will be sure to aim for 5x rate sampling vs AC line frequency! :)

You actually don't have to do the Nyquist thing. Rules were made to be broken. That one was all about reproducing the signal.
It said sampling had to be > 2x the highest frequency component in the input signal that has significant amplitude.

"Sub sampling" is a technique for demodulating a signal which allows sampling at a much lower rate. Other frequencies besides the one you're looking for can alias in if present; it has a bunch of pass bands.

If what you want to do is determine the average value of a signal, I think it can be sampled way below its frequency, even at random. The caveat is that if it is not repetitive, but rather has varying amplitude, that could mess up your calculation. A running average over a short enough window could work.

If you do want to do the Nyquist thing, refer back to my sample current waveforms. Lots of shapes other than sine waves, which means a far higher multiple of AC line frequency than "> 2x" or "5x".
 
thanks for the additional info!

i believe it may be possible to continuously monitor cell IR in realtime by comparing the lows and highs of the current draw (AC inverter ripple) with the corresponding cell voltages.

cheers, and always happy to see posts in this and the other thread :) the concepts of BMS design are very fun to consider!
 
evaluating error magnitude is certainly not my specialty :D
the datasheet seems to indicate Offset drift of 0.005 LSB/°C and Offset error of ±3 LSB, may i ask, ought this to mean a 1°C change results in a 0.005/(2^16) error? and a constant 3/(2^16) error?

It's not super complicated, but it involves adding quite a few error terms to get the final error value. In this datasheet they are nicely grouped in the table on page 7.

The error you quote is the offset drift which is basically an error (how much the offset changes with temp) on top of an error (the offset). But here it's very small (around a fraction of a µV per °C) so for normal operating temps you can ignore it.

NB: LSB = least significant bit = the FSR / 2^16 (or 2^15 depending on differential or single ended) which for a 2.048 V FSR would be 31.3 µV (FSR = full scale range).

The real error related to the temperature is the gain drift over temperature which is 5 ppm/°C typical and 40 ppm/°C worst case; that's actually quite nice for an integrated Vref. I always use worst case to design things because Murphy... so if we use the 2.048 V FSR example again then you would have 2.048 * 0.00004 = 82 µV/°C which is roughly 1 mV every 12 °C.

But as you can see 1 °C is around 2.6 LSB so going from let's say 20 °C to 30 °C would be 26 LSB and that's between 4 and 5 bits so you basically have 12 bits useful for the sampling and the lower 4 bits will be a thermometer. That's the worst case of course but even the typical case would be almost 2 bits so effectively 14 bits useful.

Then the other errors add up to around 4-5 LSB, which would eat another bit, so effectively you typically have a 13-bit ADC. That's pretty much always the case, and that's why you want to use a 16-bit ADC when you only need 12 bits in theory, for example, and not a 12-bit ADC. You usually lose around 10-20 % of the bits from the datasheet value.

NB: using bits to describe that can be a big trap if you're not careful, as it's not linear. For example, earlier we calculated a typical error of 2 bits for 10 °C, but that doesn't mean it's 4 bits for 20 °C; it would be only 3 bits because of the power-of-2 relationship. Another example: the other errors add up to around 2 bits, but when added to the 10 °C 2-bit error I said "another bit", not "two additional bits". If you want to avoid the trap, you can calculate everything in LSB and only do the conversion at the end (2 LSB = 1 bit, 4 LSB = 2 bits, 8 LSB = 3 bits, etc...)
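Following that LSB-first advice, the typical-case budget from this example fits in a few lines of arithmetic (a rough sketch; the 4-5 LSB lump for the remaining errors is just the figure quoted above, and the 2.048 V FSR is the same example):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double fsr       = 2.048;            /* full scale range, V          */
    const double lsb       = fsr / 65536.0;    /* one LSB ~ 31.25 uV           */
    const double gain_typ  = 5e-6;             /* typical gain drift per degC  */
    const double delta_t   = 10.0;             /* e.g. 20 degC -> 30 degC      */
    const double other_lsb = 4.5;              /* offset, gain, INL... in LSB  */

    double drift_v   = fsr * gain_typ * delta_t;  /* ~0.1 mV over 10 degC      */
    double drift_lsb = drift_v / lsb;             /* ~3.3 LSB                  */
    double total_lsb = drift_lsb + other_lsb;     /* keep everything in LSB    */
    double lost_bits = log2(total_lsb);           /* convert to bits at the end*/

    printf("1 LSB        = %.2f uV\n", lsb * 1e6);
    printf("temp drift   = %.0f uV (%.1f LSB)\n", drift_v * 1e6, drift_lsb);
    printf("total error  = %.1f LSB -> %.1f bits lost -> ~%.0f useful bits\n",
           total_lsb, lost_bits, 16.0 - lost_bits);
    return 0;
}
```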
 
You actually don't have to do the Nyquist thing. Rules were made to be broken. That one was all about reproducing the signal.
It said sampling had to be > 2x the highest frequency component in the input signal that has significant amplitude.

"Sub sampling" is a technique for demodulating a signal which allows sampling at a much lower rate. Other frequencies besides the one you're looking for can alias in if present; it has a bunch of pass bands.

If what you want to do is determine the average value of a signal, I think it can be sampled way below its frequency, even at random. The caveat is that if it is not repetitive, but rather has varying amplitude, that could mess up your calculation. A running average over a short enough window could work.

Yes, but 5x guarantees you don't have systematic errors, etc... Otherwise, if you sub-sample you need to be ultra careful because it's very hard not to alias (especially if the signal isn't a nice known sinewave) and so end up with a big error as a result.

And averaging will not solve that problem.

Speaking of aliasing, @curiouscarbon, you need to look up what an anti-aliasing filter is and have one to avoid unwanted errors (unless you do sub-sampling of course, but then you already know what you're doing anyway ^^).


If you do want to do the Nyquist thing, refer back to my sample current waveforms. Lots of shapes other than sine waves, which means a far higher multiple of AC line frequency than "> 2x" or "5x".

That's why I said highest frequency, not main frequency. 300 Sps would be a minimum for sampling 60 Hz inverter ripple; if there are higher frequencies than that in the signal, then you would need a higher sample rate.
 
And averaging will not solve that problem.

If what you are trying to do is determine the mean or RMS current, rather than recover the AC signal (e.g. to plot it), then I think one can get away with a far lower sample frequency and averaging many samples. The sample itself must be a narrow enough time slice. If the waveform is steady-state, e.g. a repeating rectified sine wave plus harmonics for a long time, I think this will work. I think so, but haven't tried numerically evaluating it.
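For what it's worth, that's easy to sanity-check numerically. Here's a toy sketch (mine, not tied to either project) that samples a steady rectified 60 Hz sine plus a smaller 180 Hz component at random instants, at an average rate of only ~25 Sps, and compares the estimated mean/RMS with the exact values; hitting something like 100 ppm would of course need far more samples, since the statistical error only shrinks as 1/sqrt(N).

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Steady-state test waveform: rectified 60 Hz sine plus a bit of harmonic. */
static double waveform(double t)
{
    double w = 2.0 * acos(-1.0) * 60.0;
    return fabs(sin(w * t)) + 0.2 * sin(3.0 * w * t);
}

int main(void)
{
    const int    n      = 50000;   /* randomly timed samples                 */
    const double window = 2000.0;  /* seconds of steady-state waveform, i.e. */
                                   /* an average rate of only ~25 Sps        */
    double sum = 0.0, sum_sq = 0.0;

    srand(1);
    for (int i = 0; i < n; i++) {
        double t = window * rand() / (double)RAND_MAX;  /* random instant */
        double x = waveform(t);
        sum    += x;
        sum_sq += x * x;
    }
    printf("estimated mean = %.4f   rms = %.4f\n", sum / n, sqrt(sum_sq / n));
    /* exact values: mean = 2/pi ~ 0.6366, rms = sqrt(0.5 + 0.02) ~ 0.7211  */
    /* (the cross term between |sin| and the 180 Hz term averages to zero)  */
    return 0;
}
```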

I have a potential application where I might want to sample near or even below a fundamental, but accurately determine the RMS of a signal with harmonics extending well above it. The goal is to achieve 100 ppm or even better accuracy. (Much higher frequency than the powerline, so a very high sample rate isn't available at the number of bits I want.) Oscilloscopes don't have the resolution. Sampling DMMs which do are around 100k samples/second. My application would have the ADC on a PCB as part of an instrument product.
 