Cal's DIY BMS

I am not talking about simulations of the schematic or just summing numbers in the datasheets. I am talking about doing real measurements with the system and analyzing them.
 
Oh OK, I see. I don't have a test system yet, so obviously that won't work; maybe Cal will be interested in your offer ;)
 
We have to do it because we design ADCs and ASICs (the chips themselves) and measurement instruments, which we need to characterize with real measurements in the laboratory in order to verify the simulations.
 
That's pretty cool! I kind of want to ask what kind of ASICs, etc., but I don't want to start an off-topic discussion here; could you start another thread if you're OK discussing it further?
 
I had a bit of a rough stretch. Operator errors overstressed the adc and voltage reference. While finishing those repairs, disaster hit when I connected the BMS to the battery with reversed polarity. Nope, the circuit didn't like that. Some parts glowed bright red. After unsuccessful attempts to get the circuit operational again, I decided to trash the board and start over. I considered ordering a printed wiring board, but the learning curve to get that going is more than I want to invest at this time.


BreadboardV3.jpg

Four weeks later I got a 3rd generation board operational.

There have been some modifications. @Pidjey suggested using a 5V linear regulator to power the adc. The 5V buck regulator (seen at the top of the photo) was providing 5V power to both the ESP32 and the logic circuits; now it only sources power to the ESP32, and an LM1117 powers the logic circuitry (including the adc). The multiplexer (DB408) in front of the adc is deleted; the adc now just measures the 4 cell voltages. Battery voltage and current are measured by an INA226 (not connected yet). The temperature sensors (not connected yet) have their own adc on-chip (DS18B20).

Interesting measurement data to follow.
 
If you want to characterize the accuracy, precision, noise, linearity, and ENOB of the ADC like they do in the datasheets, I can give you a hand. It is a nice way to check the specs of the fully assembled system. It is part of my routine work.

I was on a quest to characterize the adc noise (with help from Pidjey) when the previous BMS failed. The process is to take 5000 measurements of cell 3 in succession and record the raw adc output. Pidjey can then analyze the data for the characteristics he listed in the previous post. In the meantime I determined a simple way to check adc noise and linearity: just plot the data in a histogram.
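If anyone wants to reproduce that check, here's a minimal sketch of the histogram step (the filename and the numpy/matplotlib tooling are my assumptions, not necessarily what Cal used):

```python
import numpy as np
import matplotlib.pyplot as plt

# 5000 raw ADC codes captured back-to-back from one cell, one code per line
codes = np.loadtxt("cell3_raw_codes.txt", dtype=int)

# One histogram bin per code: a narrow spike means low noise
edges = np.arange(codes.min(), codes.max() + 2) - 0.5
plt.hist(codes, bins=edges, edgecolor="black")
plt.xlabel("raw adc code")
plt.ylabel("count")
plt.title(f"{len(codes)} samples, std = {codes.std():.2f} codes")
plt.show()
```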

His_BMS V2.jpg

The BMS V2 histogram is from the previous board. Of the 5000 measurements, 4450 fall between codes 59588 and 59591.

Baseline V3.jpg

The Baseline V3 histogram was taken with no bypass or filter caps. Of the 5000 measurements, just 1600 are at the average.

Accuracy drastically improved after adding bypass and filter caps. There's no point in providing a plot: there are just 3 output codes, with 3026 measurements at 60637.

Bin     Frequency
60635   4
60636   1970
60637   3026
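To put a number on the improvement, the RMS noise can be estimated straight from that table (a quick sketch; the 4.096 V / 2^16 scaling is the same one used in the analysis below):

```python
import numpy as np

bins   = np.array([60635, 60636, 60637])
counts = np.array([4, 1970, 3026])              # 5000 samples total

mean = np.average(bins, weights=counts)
rms_codes = np.sqrt(np.average((bins - mean) ** 2, weights=counts))

lsb = 4.096 / 2**16                             # 62.5 uV per code
print(f"{rms_codes:.2f} codes RMS = {rms_codes * lsb * 1e6:.0f} uV RMS")
```

That works out to about 0.49 codes (~31 uV) RMS, versus the ~6.57 codes RMS that Pidjey calculates below for the V2 data.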
 
In the meantime I determined a simple way to check adc noise and linearity
Oh hey @Cal, I had no idea you'd been noodling on BMSs over in this thread!

ADC noise and linearity almost always agree with the datasheet, particularly for ADI/Linear ADCs. But if you want to include the effects of your circuitry in front of the ADC, power supplies, etc., here's how it's measured in the biz:

- Linearity is quantified in a variety of metrics, the most common of which are DNL, INL, and SFDR. The first two are measured by applying a sine wave slightly larger than full scale, plotting a histogram of the distribution of codes, and fitting a certain curve to it which you can look up in IEEE publications. From the difference between the curve fit and the actual histogram bins, you can get DNL data for each code, and INL is just the integral of DNL.

- To measure SFDR you just look for the highest peak-to-biggest-spur ratio in the frequency domain by adjusting the input sine wave amplitude.

- To measure noise you can use a sine wave again, or a DC voltage, and just add up the variance of the samples, so it's basically just a histogram like you show. More commonly you take an FFT and add up the noise in the frequency domain. If using a sine wave, you have to omit the sine wave energy and its harmonics from the variance calculation, so this would always be done in the frequency domain.

- (Bonus bullet added later): the point of doing the noise measurement with a sine wave signal and analyzing in the frequency domain is that it shows both the signal power (the sine wave) and the noise power (everything that is not the sine wave or its harmonics), which gives you a measure of their ratio SNR, or SNDR if you include the harmonics as well as the noise in the denominator. At a more practical level, you don't need any absolute scale for the FFT in this case since you only care about the ratio; or technically each measurement would be in units of dBFS (dB "full scale") if you normalize to the full-scale sine wave amplitude. From this, you compute ENOB = "effective number of bits" by inverting the equation SN[D]R [dB] = 6.02*ENOB + 1.76. A good N-bit ADC will have N-0.5 or more effective bits, but not all of them do. If your algorithm does some averaging you might be OK with some "useless" or "too noisy" least-significant bits, as averaging would still get you to the nameplate resolution of N bits if the noise is uncorrelated with the signal. (A sketch of this measurement follows the list.)
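Here's a minimal sketch of that frequency-domain measurement (the function name, the Hann window standing in for true coherent sampling, and the +/-3-bin leakage skirt are my assumptions, not a standard recipe):

```python
import numpy as np

def sndr_enob(codes):
    """Estimate SNDR and ENOB from ADC codes that captured a sine wave."""
    x = np.asarray(codes, dtype=float)
    x -= x.mean()                            # remove the DC offset
    w = np.hanning(len(x))                   # window to contain spectral leakage
    p = np.abs(np.fft.rfft(x * w)) ** 2      # power spectrum (relative scale is fine)
    k = np.argmax(p[1:]) + 1                 # fundamental = largest non-DC bin
    sig = p[max(k - 3, 1):k + 4].sum()       # fundamental plus its leakage skirt
    noise = p[1:].sum() - sig                # everything else: noise + distortion
    sndr = 10 * np.log10(sig / noise)        # in dB; only the ratio matters
    enob = (sndr - 1.76) / 6.02              # invert SNDR = 6.02*ENOB + 1.76
    return sndr, enob
```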

For really high performance converters (ex: 16-bit ones), it's actually a surprising challenge to generate a sine wave with sufficient fidelity to know that what you're looking at is the ADC performance and not the performance of the signal generator. Because of this, high performance ADCs are tested with some expensive and custom LC ladder filters, which basically "purify" the AC signal going into the ADC so you can be sure that you're actually measuring the ADC and not something else.
 
Since there's interest, I'm posting @Pidjey's analysis of the data seen in the BMS V2 histogram shown in a previous post:

I will try to show how I calculate it because it may be useful for understanding the results.

First of all I have calculated the noise. Noise is normally quantified as an RMS value. I have calculated it:

The noise is 6.5686 codes RMS. (Note: the dot here is the decimal separator.)

If we translate this to voltage using the adc range of 2^16 codes and the reference voltage of 4.096 V, we get a noise of:

6.5686 * 4.096 / 2^16 = 410.53 uV RMS

In my opinion it is a bit high for a 16 bit adc but not terrible.


The next figure shows the power distribution of the noise:

1616621129052.png



As you can see, the maximum frequency is half of your sampling frequency. If you are sampling every 35 ms, the sampling frequency is 1/0.035 = 28.57 Hz, so the maximum frequency you can measure (the Nyquist frequency) is half of that, about 14.3 Hz.

Something curious: I detect something happening at 1 Hz. Do you have anything in the system happening at 1 Hz (once per second)? You can see the effect repeated at 2 Hz, 3, 4, 5..., but those are just harmonics.

Now it is time to calculate the SINAD or SNR (signal-to-noise ratio).

This is the formula I use (Signal-to-noise ratio - Wikipedia):

SNR = 20 * log10(A_signal,RMS / A_noise,RMS)



These amplitudes are in RMS. For the signal amplitude, the RMS value of a sine wave at the input is normally used. In our case we used a DC value, but the best we can do is pretend the input was a sine wave with amplitude equal to the voltage reference. In our case A_signal,RMS = 4.096 * 0.7071 = 2.896 V, so:

SNR = 20 * log10(4.096 V * 0.7071 / 410.53 uV) = 76.9698 dB

Now we calculate the ENOB (and remember, with a DC input SNR = SINAD):

Effective number of bits - Wikipedia:

ENOB = (SINAD - 1.76 dB) / 6.02



ENOB = 12.4933 bits
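For anyone who wants to re-run the chain above on their own data, a few lines reproduce Pidjey's numbers (keeping his convention of A_signal,RMS = Vref * 0.7071):

```python
import numpy as np

noise_codes = 6.5686                     # measured noise on the V2 data, RMS codes
vref, nbits = 4.096, 16

noise_v = noise_codes * vref / 2**nbits  # 410.53 uV RMS
a_sig = vref * 0.7071                    # assumed sine amplitude = Vref, in RMS
snr = 20 * np.log10(a_sig / noise_v)     # 76.97 dB
enob = (snr - 1.76) / 6.02               # 12.49 bits
print(f"SNR = {snr:.2f} dB, ENOB = {enob:.2f} bits")
```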

A simplified way is (in this case gain is 1):

https://e2e.ti.com/support/data-con...rformance-demonstration-kit-for-the-ads130e08

ENOB = log2( A_signal,RMS / (sqrt(1.5) * A_noise,RMS) )



But it leads to the same result.

Another way is using the FFT plot at the top (M is the number of FFT points; here 2^16):

https://www.analog.com/media/en/training-seminars/tutorials/MT-003.pdf
So the FFT noise floor is 77 dB (SNR) + 10*log10(2^16 / 2) = 45.15 dB, i.e., 122.15 dB below the signal (as you can see from the top graph).

(the noise floor in the next photo is the green line, i.e., the average of the noise)

1616622719526.png



And the result is the same.

Feel free to ask me any question.
 
Having a few issues. I've had a couple of adc (LTC2452) failures during power-up. I suspect the SPI connection is the problem. The adc operates at 5V and the ESP32 operates at 3.3V. There's a FET voltage-level translator (JY-MCU) between the two.


jHsyc.jpg



I'm using two lab power supplies to power the circuit. The advantage of using lab supplies (as opposed to just connecting to the battery) is that supply current is easily monitored and current limits can be applied; in other words, there's better visibility. One 13V lab supply powers (1) a resistor divider network that simulates cell voltages; (2) a 5V linear regulator (LM1117), which in turn powers the adc; and (3) the JY-MCU 5V input. The second lab supply provides 5V power to the ESP32 module, which has an onboard 3.3V linear regulator to power the ESP32 itself.

The power-up sequence is to first turn on the supply that powers the front end (analog and adc), then turn on the supply to the ESP32. A failure is evident when the second lab supply current-limits at 50 mA during power-up. What the heck? The adc gets overstressed and requires replacement. This failure occurs on occasion, not always. Is the JY-MCU at fault? I suspect the LTC2452 SDO signal is getting overstressed. I'm thinking of inserting a couple-hundred-ohm resistor between SDO and the JY-MCU.

bmsschematic.jpg


A second problem I'm seeing is errors in the analog section. The OPA991 op-amp output does not accurately reflect cell voltage. For cells 1 to 4 the op-amp is -10, 64, 64, 56 mV off. With the MUX509 EN signal at 0V, the voltage drop across the input 100k resistors is: -84, 54, 53, 55, 34 mV. The current from S1A is 84mV/100k = 840 nA. Another what the heck? Leakage current is supposed to be <10 nA. Is the MUX509 stressed?
 
I am happy to see that everything is moving along.

In my case, unluckily, I am dealing with personal-life issues combined with market problems. I am moving to the other side of the world, so visas, accommodation, contracts... so annoying.

The semiconductor shortage has also hit me. Luckily I got my last components (sadly the most important ones) this week, and I still need to collect them from the local warehouse. It is crazy how, in my company and every company related to electronics, the lack of chips is even stopping the production of cars.

In my experience components don't get stressed; they either work or they burn. The only things I think can be stressed are the protection diodes. It would be advantageous if the diode you have protecting the ADC were a Schottky, because its drop is ~0.2 V compared to the ~0.7 V of the ADC's internal protection diodes, so the external one will conduct before the internal one.


EDIT after writing the post: I am stupid; I had not read the first two lines you wrote about the level shifter, so what I wrote may be only partly valid, but I am not removing it as it might be interesting for someone. In any case I would use the same voltage for both devices. Level shifters on high-speed communication lines can be tricky.


I thought you were using I2C, but checking the datasheet, it is SPI. Something there is bad in terms of electronics. Everything I am going to say assumes your ESP32 does not have a level shifter on its input.

Your ADC works at 5V and your ESP32 at 3.3V. You probably have an evaluation board with an LDO on it converting 5V to 3.3V. The ESP32 working range is 2.3-3.6V, and I cannot find any statement that ESP32 inputs are 5V tolerant.

Inputs are always protected with diodes, like this:

1620324775547.png

These diodes normally have a 0.7V drop, so when the input (PIN in the figure) is higher than the 3.3V supply (VCC in the figure), the diode will start conducting at 3.3 + 0.7 = 4V. Your ADC is applying 5V, so the protection diode is conducting.
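To put rough numbers on the series-resistor idea from a few posts up (a back-of-envelope sketch; it assumes the ~0.7 V clamp described above and ignores the driver's output impedance):

```python
# Current pushed through the ESP32 input clamp when SDO drives 5 V
vcc, v_drive, v_clamp = 3.3, 5.0, 0.7
for r_series in (330, 1000):             # ohms of series resistance in the SDO line
    i_ma = (v_drive - (vcc + v_clamp)) / r_series * 1e3
    print(f"{r_series:>4} ohm -> ~{i_ma:.1f} mA into the clamp diode")
```

Even 1 kohm still leaves about 1 mA flowing through the clamp whenever SDO sits high, which is why running both chips at the same voltage is the cleaner fix.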

Normally there are two solutions: level shifting, or the same operating voltage for both devices. I would just run the ADC at 3.3V.
 
The OPA991 op-amp output does not accurately reflect cell voltage. For cells 1 to 4 the op-amp is -10, 64, 64, 56 mV off.
Could you post both measurements? I mean the cell voltage before the system and the cell voltage after the OPA. I just want to check if it is related to the MUX nonlinearity that we discussed loooong ago.
 
These diodes normally have a 0.7V drop, so when the input (PIN in the figure) is higher than the 3.3V supply (VCC in the figure), the diode will start conducting at 3.3 + 0.7 = 4V. Your ADC is applying 5V, so the protection diode is conducting.
Since the protection diode is conducting, the ADC is probably delivering too much current and its short-circuit protection is being activated. Maybe that is the reason your ADC is crying.
 
I assume you are also limiting the SPI clock to 2 MHz, as that is the maximum for the ADC.
 
Have you thought about using a pre-made cell monitoring IC like the Analog Devices ADBMS1818?

All the ADC stuff is taken care of, with cell voltage and current measurements, and it has the MOSFETs for passive balancing integrated into it, as well as inputs for thermal sensors.
 
I'm operating the adc SPI at 100 kHz.

After my first adc failure over a month ago, I increased the resistor between the op-amp output and adc input from 330 ohm to 1 kohm and added the diode to 5V_Logic for extra protection.

I've also thought of operating the adc at 3.3V to eliminate the level shifting. That would mean getting another voltage reference and changing the gain of the op-amp. Perhaps I can use a resistive voltage divider to lower Vref?

I have 4 330 ohm resistors connected in series to simulate the 4 cells.
The voltage across each resistor is: 3.294, 3.294, 3.292, 3.296V
The output of the adc (which is a good representation of actual input voltage) is: 3.284, 3.358, 3.356, 3.352V
 
I don't think you should have those diodes on the input and output of your linear regulator. It should be ferrite beads on the input, if anything, and nothing in series with the output.

Also, use a real level-shifter and not one made manually out of fets.
 
I have 4 330 ohm resistors to simulate the 4 cells.
The voltage across each resistor is: 3.294, 3.294, 3.292, 3.296V
The output of the adc (which is a good representation of actual input voltage) is: 3.284, 3.358, 3.356, 3.352V
Are the "simulated" cell voltages measured while the mux is selecting them? If your resistor stack used to simulate cells is not really low resistance, then when the mux selects it, you'll change the result by drawing more current through the input of the op-amp circuit.
 
Have you thought about using a pre-made cell monitoring IC like the Analog Devices ADBMS1818?

All the ADC stuff is taken care of, with cell voltage and current measurements, and it has the MOSFETs for passive balancing integrated into it, as well as inputs for thermal sensors.
No! There's no challenge there. Nothing to learn.
 
No! There's no challenge there. Nothing to learn.
I mean there's still quite a bit to learn lol, you still need to interface with an MCU and handle your charge and discharge MOSFETs somehow.
 
Voltage dividers are not a Swiss Army knife. I would not use a resistor divider to divide a voltage reference: you will change the output impedance, and that can be problematic for the ADC reference input. For testing I would tie the reference to the supply (if the part allows it); right now noise is not the most important thing.

What are the values of the resistors simulating the cells? Resistors can simulate cell voltages quite accurately, but the output impedance they present needs to be taken into account; a battery's output impedance is on the order of milliohms. How big are these resistors? What voltage supply feeds them? How many are in series?
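A quick illustration of the output-impedance point, with made-up divider values chosen to land near 4.096 V:

```python
# Thevenin equivalent of a resistor divider used as a "reference"
r_top, r_bot, v_in = 2.2e3, 10e3, 5.0
v_out = v_in * r_bot / (r_top + r_bot)   # about 4.10 V
r_out = r_top * r_bot / (r_top + r_bot)  # about 1.8 kohm source impedance
print(f"Vout = {v_out:.2f} V, Rout = {r_out / 1e3:.1f} kohm")
# versus milliohms for a battery: any current drawn by the REF pin moves Vout
```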
 