diy solar

BMS MOSFETs Explained

I'm not familiar with that particular BMS. There might be some other reason for the limitation, or the limitation might not exist at all; writing something like "Maximum continuous overcurrent" doesn't really make sense. I'll try to make time to dig into this one.
Appreciate it.

I’ve been told via email that the BMS can sustain 300A continuous (though then why the use of ‘over current’, right?) and can support short bursts of current up to 1200A.

The spec also ends with this:

‘Commonly used: high-power inverters around 7000W, solar energy storage, 24V car startup, etc.’

A 7000W inverter ought to translate to 280A @ 25V…
 
Under normal conditions, both charge and discharge MOSFETs are on, so it's effectively a closed circuit. When the load increases beyond what the charge controller delivers, it simply starts pulling the needed power from the battery. There is no delay at all, because there is no switching.
Switching only happens when you actively turn off charge and/or discharge - and in that case (say you manually disable discharge) your inverter will shut off, since the battery will no longer be part of the circuit.
 
Is there any particular reason why MOSFETS are used for controlling charge/discharge states over high current capable mechanical relays? Modern UPS devices (i.e. APC / CyberPower) still use relays today.
 
Is there any particular reason why MOSFETS are used for controlling charge/discharge states over high current capable mechanical relays? Modern UPS devices (i.e. APC / CyberPower) still use relays today.
MOSFETs are used because (AFAIK) they are much less costly than mechanical relays, and they also waste much less power at low throughput currents.
A MOSFET dissipates power proportional to the square of its throughput current, due to the drain-source resistance -- the RDS(on). A mechanical relay, by contrast, normally draws a constant coil current to stay activated.
(I'm simplifying a bit, as calculating the full power wasted by a MOSFET is more complex than this, and there are exceptions for relays with "latching" mechanisms or other quirks.)
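As a rough illustration of that trade-off, here is a short calculation. The RDS(on) and relay coil figures below are illustrative assumptions, not from any specific datasheet:

```python
# Simplified comparison (as the post above notes, real losses are more
# complex): MOSFET conduction loss scales with I^2 * Rds(on), while a
# non-latching relay coil draws a roughly constant holding power
# regardless of load current.

def mosfet_loss(current_a, rds_on_ohm=0.32e-3):
    """Conduction loss of a MOSFET bank with an assumed effective Rds(on)."""
    return current_a ** 2 * rds_on_ohm

def relay_coil_loss(coil_w=2.0):
    """Assumed constant coil power while the relay is energized."""
    return coil_w

for i in (1, 10, 100, 200):
    print(f"{i:>4} A: MOSFET {mosfet_loss(i):7.3f} W, relay coil {relay_coil_loss():.1f} W")
```

At a few amps the MOSFET bank wastes almost nothing while the relay coil burns its constant couple of watts; only at high continuous current does the MOSFET loss overtake the coil.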
 
@RobertGreen, regarding the throughput current, I've noticed near all BMS have a row of MOSFETs (sometimes 10+). Is the high MOSFETs count used to distribute the load for high current situations?
 
Modern UPS devices (i.e. APC / CyberPower) still use relays today.
And every EV uses contactors (high power relays). I don't like Mosfets because when they fail they often fail closed. I use them on 12 volt packs I built to back up some household loads to help relatives during power outages. For my 3S16P pack I would not want to risk losing 48 cells to a stuck Mosfet so I use a BMS that controls a contactor.
 
@RobertGreen, regarding the throughput current, I've noticed near all BMS have a row of MOSFETs (sometimes 10+). Is the high MOSFETs count used to distribute the load for high current situations?
Dear @sourceholder, I am not @RobertGreen. However, yes, multiple MOSFETs in parallel will share the current according to the resistance of their connections.

Some BMSs with MOSFETs have a busbar soldered onto the PCB to help distribute the current evenly.
 
Is there any particular reason why MOSFETS are used for controlling charge/discharge states over high current capable mechanical relays? Modern UPS devices (i.e. APC / CyberPower) still use relays today.
They last longer in terms of cycle life, are much smaller, and are cheaper (at least in high volume)…
 
OK, so this is somewhat tangential to the conversation, but it's something I have wondered about for months after becoming a little wary of the various failure modes of MOSFETs in a BMS. As @Ampster comments above, if the switches in one of these BMS fail 'closed', some very bad things can happen to your battery, potentially including a fire in certain failure states (such as a bugged charger that won't stop overcharging).
Can I use a low capacity MOSFET-based BMS to control the coil on a mechanical relay rather than having the BMS switching the entire load by itself? I've attached a very crappy diagram that I just made which can hopefully help explain my question. In this scenario, the BMS would not have to switch the large system loads directly and thus it would not be subject to the large currents and heating that normally could present an issue for the switches.

My questions are these:
-Is it possible?
-Would this actually be likely to offer a higher level of protection for the batteries or not? (in other words, are the failure modes inherent to a BMS really obviated by such an arrangement, or do BMS failures normally originate from high voltage transient spikes or other issues that would not be prevented by this unusual configuration)
-Even if this doesn't offer additional protection, would it not 'bypass' the current limit of the BMS via the use of a relay with a much higher current limit?

edit: I've omitted the BMS sense leads, fuses, and other parts in the diagram for the sake of clarity. BMS for low current applications are much more inexpensive than the 100-200A variety that we often see in peoples' builds. Also, I'm aware that you can already purchase relay-based BMS, even from JBD. The configuration that I describe would allow the use of a particularly selected relay rather than the built-in unit included with those relay based BMS. (Also, I question if the TE brand relays included on the cheap chinese relay based BMS are actually genuine or are instead cheap knockoffs)
 

Attachments

  • bms_switching_relay.jpg
-Even if this doesn't offer additional protection, would it not 'bypass' the current limit of the BMS via the use of a relay with a much higher current limit?
Since you are not sending the load through the BMS, it will carry no current, so it should bypass the current limit. There are KiloVac contactors with an economiser mode, such that once they are closed, it takes less coil current to keep them closed.
 
Also, it occurs to me that in this proposed configuration the over-current protection offered by the BMS would not work, as it wouldn't be sensing the full system currents. It would rely instead on the fuses for OCP, which I assume are much slower to open than a MOSFET based switch after detecting an over-current condition.
I don't know if this whole idea is senseless. Is a MOSFET failing closed even a meaningful concern in terms of BMS safety?
Aside from the protection scheme differences, this arrangement would seem to allow the use of very high current rated and high quality contactors with a very inexpensive (~$40 or less) small capacity BMS.
 
The question one should ask is: when does a MOSFET fail in a way that results in a short? Let's go over the known failure mechanisms first:


That article is mostly motor control specific, so we can actually eliminate some of them. The most important ones for our kind of applications are:
  • Excess power dissipation
  • Excess Current
  • Avalanche failure
All of these can be mitigated by a) making sure the BMS is sized appropriately and b) a fuse.
 
That article is mostly motor control specific, so we can actually eliminate some of them. The most important ones for our kind of applications are:
  • Excess power dissipation
  • Excess Current
  • Avalanche failure
All of these can be mitigated by a) making sure the BMS is sized appropriately and b) a fuse.
I don't believe that this is categorically true. Particularly when we're talking about the "excess current" issue.
The way these largish(100-200A) BMS are constructed, they have a bank of switches which are operated in parallel in order to achieve the necessary current capacity. "Hopefully" the current is shared equally among all the switches, but there are certain problems with this assumption. When the switches dissipate power, of course their temperature increases. This increases their RDS(on), which of course increases power dissipation and makes the chips even hotter. There will be variances, however slight, in the manufacturing of these chips which causes small differences in their exact properties (which is why for critical applications, the chips are tested and batched/matched more extensively than you expect from the original manufacturer)
If one MOSFET starts to get hotter than the others, its current should decrease relative to the other paralleled switches, but problems such as an uneven connection to the heat sink or heterogeneous thermal dissipation can cause a situation where one or more of the chips end up carrying far more current than they should. In other words, just one chip getting hot can lead to the rest of the bank getting hotter from shouldering the increased current, and then the bank of hot chips increases in resistance, leading to more heating. I have read that overheating events like this can cause a short-circuit failure as the composition of the junction is altered by the unusual heat.
Also, in a situation like this, switches can operate at higher than normal currents as the RDS(on) of their counterparts change.
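The baseline electrical behaviour mentioned above (a hotter FET's current decreasing relative to its paralleled neighbours) can be sketched with a toy model. The RDS(on) value and temperature coefficient below are illustrative assumptions, not taken from any datasheet:

```python
# Toy model of current sharing between parallel MOSFETs whose Rds(on)
# rises with temperature. Parallel branches split current in inverse
# proportion to resistance, so a hotter FET with higher Rds(on) sheds
# current to its neighbours. The runaway concern described above comes
# from thermal coupling problems (e.g. one FET poorly heat-sunk), not
# from the electrical sharing itself.

def share_currents(total_a, temps_c, rds25=3.2e-3, tc=0.004):
    """Per-FET currents, given temperatures and an assumed linear Rds tempco."""
    r = [rds25 * (1 + tc * (t - 25)) for t in temps_c]  # per-FET resistance
    g = [1 / ri for ri in r]                            # conductances
    gsum = sum(g)
    return [total_a * gi / gsum for gi in g]

# Four FETs sharing 200 A; one runs 40 C hotter than the others.
currents = share_currents(200, [25, 25, 25, 65])
print([round(i, 1) for i in currents])
```

In this sketch the hot FET carries less current than the cool ones, while the cool ones each pick up slightly more, which is exactly the "shouldering the increased current" effect described above.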

In this case, making sure that the BMS is sized appropriately can't really be done without knowledge of how the device is implemented or trusting that the specifications are sane and reasonable. What's often recommended on this forum is to include a safety margin by selecting a BMS rated at a higher capacity than the true intended max current of the system.

I also wonder if avalanche failure is something that should be considered when a BMS trips its protection and tries to open the circuit while a large inverter or charge controller is pumping a lot of juice.

All of these problems are of course avoided in the case that the BMS is properly designed (using snubbers to handle transient spikes, and a robust switch selection and implementation scheme) and well integrated and in communication with the inverter and charge controller, but I am a little less than entirely comfortable in assuming that this is true when it comes to the cheap Chinese BMS that I'm using. I realize that just switching to a $500 BMS could eliminate any of these concerns lol.

I really wish that there was a well designed and affordable/open source BMS available for 16s systems.
 
Also, just for the sake of clarity:
Completely aside from the question of safety in a MOSFET-based BMS vs. the use of a contactor, I'm curious about the proposed arrangement simply as a way to allow higher system currents when using an inexpensive BMS. Can anyone see a reason why this arrangement would be dangerous or otherwise unworkable?
 
In other words, just one of the chips getting hot can lead to a situation where the rest of the bank gets hotter due to shouldering the increased current, and then the bank of hot chips increases in resistance leading to more heating.

That's what the temperature sensor on the MOSFETs is for: the minute this happens the BMS shuts down before breakdown occurs.

which I assume are much slower to open than a MOSFET based switch after detecting an over-current condition.

Fast acting Class T fuse. I know from experience that this protects a BMS. I've had several (some on purpose) dead shorts to see what would happen. The fuse reacts before the BMS, and shuts everything off before bad things happen. The reason I did this is to find out what would happen if the inverter failed and its MOSFETs would go dead short.

I'm curious about the proposed arrangement just as a way to allow a higher system currents when using the inexpensive BMS.

I assume you mean using the BMS to control the contactors? Yes, can be done; I have done that in the past.
 
The way these largish(100-200A) BMS are constructed, they have a bank of switches which are operated in parallel in order to achieve the necessary current capacity.

Actually, this is not entirely true. For example, the JK BMS I took apart (a 100A version) uses G035N10N MOSFETs each capable of handling 180A at 25C with 220W power dissipation capability. The reason to put a bunch of those in parallel, is to lower RDS_on so that power losses get less (yes, distributing current between them). However, they can each handle the total current the BMS is supposed to support, they'll just get hot - and should that happen, the temp sensor that is mounted at the MOSFETs will allow the microcontroller to turn the BMS off.
 
Actually, this is not entirely true. For example, the JK BMS I took apart (a 100A version) uses G035N10N MOSFETs each capable of handling 180A at 25C with 220W power dissipation capability. The reason to put a bunch of those in parallel, is to lower RDS_on so that power losses get less (yes, distributing current between them). However, they can each handle the total current the BMS is supposed to support, they'll just get hot - and should that happen, the temp sensor that is mounted at the MOSFETs will allow the microcontroller to turn the BMS off.
Wow! I just took the case off of a 100A JBD 16s BMS and found a very similar arrangement. In this case, it's 40 (!) CRMICRO CRSS042N10N switches (datasheet: https://www.crmicro.com/ProductSolution/psglqj/mosfet/0v200v/202006/P020200619382105447440.pdf)
These guys are rated at 120A drain current. There's an STM microcontroller at the heart of it all, just like on your Heltec board.

I foolishly assumed that they were paralleling so many MOSFETs because they were using smaller, cheaper switches. This does make me feel better about their current capacity (this particular JBD unit is fanless, I have no idea if they all are)

Thank you for these insightful posts!
 
Yes - you could do a calculation as follows (for e.g. a 200A Heltec BMS):

Rds_on = 3.2mR typical at 25C. We have 10 pairs on the top and 10 pairs at the bottom for this 200A Heltec version. These are configured back to back, gives 6.4mR per pair at 25C. These 20 back to back pairs give a resistance of 6.4/20 = 0.32mR. Heating occurring at 200A continuous, P = R I^2 = 12.8W.

So, instead of having to dissipate a huge amount of (wasted energy) heat with a large heat sink (which they technically could do), we only have to dissipate 12.8W or so at full 200A rated load. This means you typically don't need a huge heat sink, nor active cooling. In addition, it becomes very obvious with a temperature sensor when something is wrong, before it's too late.
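The arithmetic above can be verified in a few lines (values taken directly from the post; Python used purely for illustration):

```python
# Recomputing the 200A Heltec example from the post above.
rds_on = 3.2e-3           # ohms, typical per MOSFET at 25 C
pairs = 20                # back-to-back pairs in parallel
current = 200.0           # amps, continuous rating

r_pair = 2 * rds_on       # two FETs in series per pair -> 6.4 mOhm
r_total = r_pair / pairs  # 20 pairs in parallel -> 0.32 mOhm
power = r_total * current ** 2  # P = R * I^2

print(f"R_total = {r_total * 1e3:.2f} mOhm, dissipation = {power:.1f} W")
# -> R_total = 0.32 mOhm, dissipation = 12.8 W
```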
 
I'm writing this post since it seems there is a lot of confusion (especially in the comment section of a certain youtuber in Australia) on why both MOSFETs in a typical single port BMS are controlled individually and why/how this works when they're in the same series circuit. It will hopefully also show why just controlling both MOSFETs at the same time to try and accomplish this is not a good idea.

Let's start with the simplified diagram of a single port, MOSFET based BMS:


View attachment 67269

Some basic points: the load is also where the charger would be connected. R Sense is for the BMS to know how much current is flowing and in which direction. MCU (Micro Controller Unit) is the controller, and the discharge (Q DSG) and charge (Q CHG) MOSFETs (connected back-to-back) are individually controlled by the MOSFET driver under instruction from the MCU. Please note that an active MOSFET conducts in both directions. The diodes (body diodes) of the MOSFETs are internal to these devices, and in many applications these are often unwanted, but they are useful in a BMS.

- Suppose we want to both allow charge and discharge. This is easy: enable both MOSFETS and current can flow to/from the battery as needed. Nothing special here.
- If we want to disable both charge and discharge, this is also easy: disable both MOSFETs. Please note that doing so will effectively turn off the battery as you would with a disconnect switch. In other words, any devices that need to know the battery voltage to operate (MPPT, Inverter) will turn off.

So, now we have the issue: what if we hit over voltage protection, or low temperature protection. In both of these instances, we still want to be able to discharge, but not charge. Turning both MOSFETs off won't do since we can't discharge now. The solution is to rely on the internal body diodes:

- To allow discharge, but not charge we turn off the charge MOSFET. In doing so, we still have a current path: through the body diode of the charge MOSFET:

View attachment 67272
You can see that current flow is one directional only - the diode in the charge MOSFET will be reverse biased for any charge current and thus there will be no current flowing in the charge direction. The inverter and charge controller will still see the battery voltage (minus the drop over the internal diode - our friendly youtuber measured this one at around 0.6V).

- To allow charge, but not discharge we do something similar: we turn on the charge MOSFET but turn off the discharge MOSFET:

View attachment 67273

You can see again that the current flow is one direction only: the diode in the discharge MOSFET will be reverse biased for any discharge current and thus there will be no current flowing in the discharge direction. Again, the inverter and charge controller will still see the battery voltage (again, minus a small drop).

Now, sending lots of current through this diode is not a good idea. These are far from ideal diodes and they will heat up at relatively low currents. That's where our sense resistor comes in. If the BMS has to only allow charge, as soon as the current builds up (our friendly youtuber measured this to be at 1.5A if I remember correctly) the BMS will automatically enable the discharge MOSFET. This will eliminate this diode voltage drop and charging will occur as normal. The same is done if the BMS only allows discharge. As soon as the discharge current rises, the charge MOSFET is enabled to eliminate the voltage drop over its diode.

Note that this process keeps the protections in place: as soon as the current drops again, the relevant MOSFET is turned off again automatically by the BMS.

In short: having the BMS control both charge and discharge MOSFETs independently prevents abrupt battery disconnection in case a problem arises that only affects one of the current directions (charge or discharge). You want to be able to discharge in over voltage, and want to be able to charge in under voltage situations. Abruptly disconnecting/connecting the battery in such situations can be detrimental for other devices attached to the battery and indeed the battery itself. Using the internal body diode offers a simple hardware feature to make sure these conditions do not arise.
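The body-diode bypass behaviour described above can be sketched as control logic. All function and variable names here are hypothetical, and the 1.5A threshold is the value the post attributes to a measurement; this is an illustrative sketch, not any vendor's firmware:

```python
# Sketch of the single-port BMS FET control described above.
# Sign convention (assumed): positive current = charging,
# negative current = discharging.
BYPASS_THRESHOLD_A = 1.5  # threshold quoted in the post above

def update_fets(allow_charge, allow_discharge, current_a):
    """Return (charge_fet_on, discharge_fet_on) for one control step."""
    chg = allow_charge
    dsg = allow_discharge
    # Discharging while charging is blocked: current passes through the
    # charge FET's body diode, so re-enable the charge FET to bypass it.
    if not allow_charge and allow_discharge and current_a < -BYPASS_THRESHOLD_A:
        chg = True
    # Charging while discharging is blocked: the symmetric case, current
    # passes through the discharge FET's body diode until it is re-enabled.
    if not allow_discharge and allow_charge and current_a > BYPASS_THRESHOLD_A:
        dsg = True
    return chg, dsg

print(update_fets(False, True, -10.0))  # over-voltage, discharging hard
print(update_fets(False, True, -0.5))   # over-voltage, near idle
```

In the first case the charge FET is re-enabled to spare its body diode; in the second the current is below the threshold, so the charge FET stays off and the protection (blocking charge current) remains in force, matching the description above.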
I have 6 server rack style batteries. Two of them have a weak cell, and that one cell trips the charge MOSFET every day before the battery reaches full. My charge MOSFET shuts down due to that high cell and then never turns back on, while all the other batteries fill and then work normally. I have to reset the BMS to get the charge MOSFET to function again. Any help?
 