diy solar

Anyone working with the Overkill Solar Arduino lib?

[UPDATE]
I received the logic level converters from Amazon and built up two boards for testing. Neither works properly. These are the converters I'm using. I've attached a photo of one of them fully wired up.


Looks like the BMS is responding now. However, the bms.cpp lib reports this back from a cell voltage request:

Code:
Query 0x04 Cell Voltages
7 bytes available to read
[WAIT_FOR_STOP_BYTE]: DD

Checksum did not calculate!
  Calculated: FFFD
  Received: FD77
[WAIT_FOR_START_BYTE]:  A5
[WAIT_FOR_START_BYTE]:  4
[WAIT_FOR_START_BYTE]:  0
[WAIT_FOR_START_BYTE]:  FF
[WAIT_FOR_START_BYTE]:  FC
[WAIT_FOR_START_BYTE]:  77
The rx_state is: 0...
Voltage: 0.000 volts

And this from the basic info request.

Code:
Query 0x03 Basic Info
7 bytes available to read
[WAIT_FOR_START_BYTE]:  DD
[WAIT_FOR_CMD_CODE]:    A5
[WAIT_FOR_STATUS_BYTE]: 3

RX error! Status byte should have been 0x00 or 0x80!
[WAIT_FOR_LENGTH]:      0
[WAIT_FOR_DATA]:        FF, rx_data_index=0
[WAIT_FOR_CHECKSUM_MSB]: FD
[WAIT_FOR_CHECKSUM_LSB]: 77
The rx_state is: 7...

Troubleshooting that I've done...

The converter requires both 5V and 3.3V input and ground on both sides, as far as I understand from the Amazon listing and questions from other users.
  1. Tested ground continuity
  2. Tested 5V output from the ESP32
  3. Tested 3.3V output from the ESP32
  4. Confirmed that TX and RX are connected to RX and TX respectively. I was fairly certain this wasn't the issue, since there appears to be actual data coming back from the BMS.
  5. Tried the other module
 

Attachments

  • IMG_20210925_210533.jpg (125.4 KB)
In the spirit of elimination and substitution, did you try serial communication with another device?
 
In the spirit of elimination and substitution, did you try serial communication with another device?
Yessir.

1. Communication succeeds between the two ESP32s on the same ports.
2. The BT module connected to the BMS works with the app.

...Might be these logic level converters. IDK
 
[UPDATE]

Can't seem to wrap my brain around what's going on with this. Here's what I've done.

  1. Completely removed the Overkill lib from the equation
  2. Wrote a sketch that requests basic bms info using the command code that the lib uses. I also read over the JBD protocol doc to understand what I'm actually sending.
The part I don't get is that the BMS "appears" to be returning the exact same bytes that the sketch sends to it. That's what the errors from my last update were indicating: the library was trying to parse the read request as if it were a response from the BMS, when it's exactly what the sketch sent to the BMS... which doesn't make sense to me at all.

So, anyhow, by stripping the code down to the bare bones it's a little easier to see what's happening, though I don't know why it's happening. BTW, if I unplug the ESP32 TX and RX from the logic converter and connect them together, I get the exact same behavior.

The protocol doc is attached.

Here's the sketch
Code:
uint32_t last_update;

HardwareSerial HWSerial(2); // Serial port instance on UART 2 for the BMS link

#define Txd_pin 17
#define Rxd_pin 16


void setup() {
    delay(500);

    Serial.begin(115200);
    HWSerial.begin(9600, SERIAL_8N1, Txd_pin, Rxd_pin);

    while (!Serial) {  // Wait for the debug serial port to initialize
    }
    while (!HWSerial) {  // Wait for the BMS serial port to initialize
    }

    last_update = millis();

}

void loop() {
  if (millis() - last_update >= 2500){
    Serial.println("----------------------");
    Serial.println("sending read request");
    uint16_t checksum = 0;
    HWSerial.write(0xDD);
    HWSerial.write(0xA5);
    HWSerial.write(0x03);
    checksum += 0x03;
    HWSerial.write(0);
    checksum = (uint16_t)((0x10000UL) - (uint32_t)checksum);

    uint8_t checksum_msb = (uint8_t)((checksum >> 8) & 0xFF);
    HWSerial.write(checksum_msb);

    uint8_t checksum_lsb = (uint8_t)(checksum & 0xFF);
    HWSerial.write(checksum_lsb);

    // Write the stop byte, 0x77
    HWSerial.write(0x77);


    Serial.println("delaying read....");
 
    delay(2000);

    int bytes_available = HWSerial.available();
    if (bytes_available > 0) {
      for (int i = 0; i < bytes_available; i++) {
        int c = HWSerial.read();
        Serial.print("read byte ");
        Serial.println(c, HEX);
      }
    }
    last_update = millis();
  }
}

[EDIT] Here is the serial output where you can see the read request being returned... or whatever is happening

Code:
----------------------
sending read request
delaying read....
read byte DD
read byte A5
read byte 3
read byte 0
read byte FF
read byte FD
read byte 77
 

Attachments

  • JBD Protocol English version.pdf (103.4 KB)
I don't get the masking you're doing on the checksum. If you shift an 8-bit number 8 bits to the right, you've shifted all of the bits off. It only works to get MSB if it fills on the left with a duplicate of the MSB on each shift. Likewise, masking the checksum with 0xFF just gives you the original checksum.
(uint8_t)((checksum >> 7) & 0xFF)
would give you the MSB in the LSB position.
(uint8_t)(checksum & 0x01)
would give you the LSB (only) in the LSB position
(Not sure what the rules are for implicit casting with constants.)
Also, why this?
checksum += 0x03
Since checksum is set to 0 a few statements earlier, this value will always be 3. So you could just initialize it to 3. I have not looked at the protocol used for this, but a checksum is typically the XOR of each of the bytes leading up to the checksum byte.
 
I don't get the masking you're doing on the checksum. If you shift an 8-bit number 8 bits to the right, you've shifted all of the bits off. It only works to get MSB if it fills on the left with a duplicate of the MSB on each shift. Likewise, masking the checksum with 0xFF just gives you the original checksum.
(uint8_t)((checksum >> 7) & 0xFF)
would give you the MSB in the LSB position.
(uint8_t)(checksum & 0x01)
would give you the LSB (only) in the LSB position
I don't have an answer. That part I just copied from the bms.cpp file. Based on the protocol file, what would you suggest I do?
 
I don't get the masking you're doing on the checksum. If you shift an 8-bit number 8 bits to the right, you've shifted all of the bits off. It only works to get MSB if it fills on the left with a duplicate of the MSB on each shift. Likewise, masking the checksum with 0xFF just gives you the original checksum.
(uint8_t)((checksum >> 7) & 0xFF)
would give you the MSB in the LSB position.
(uint8_t)(checksum & 0x01)
would give you the LSB (only) in the LSB position
(Not sure what the rules are for implicit casting with constants.)
Also, why this?
checksum += 0x03
Since checksum is set to 0 a few statements earlier, this value will always be 3. So you could just initialize it to 3. I have not looked at the protocol used for this, but a checksum is typically the XOR of each of the bytes leading up to the checksum byte.

It looks like "checksum" is a 16 bit (uint16_t) value in the code above and I believe the part of the code where it subtracts a value from "0x10000UL" presumably is how data gets into the upper (most significant) bits. Are you suggesting that only the lower (least significant) 8 bits are ever populated?

I don't have an answer. That part I just copied from the bms.cpp file. Based on the protocol file, what would you suggest I do?

I'm not sure I'm convinced that's the problem...

It seems like kind of an odd checksum since it doesn't take into account the data being sent, but I assume that's because the message has been hard-coded.
 
It looks like "checksum" is a 16 bit (uint16_t) value in the code above and I believe the part of the code where it subtracts a value from "0x10000UL" presumably is how data gets into the upper (most significant) bits. Are you suggesting that only the lower (least significant) 8 bits are ever populated?



I'm not sure I'm convinced that's the problem...

It seems like kind of an odd checksum since it doesn't take into account the data being sent, but I assume that's because the message has been hard-coded.
According to the protocol doc, there's no data in the data position of a read request. The instructions that the BMS should read and respond to come from the A5 (read) and the 03 (basic info). ...I think.

I noticed that my call to begin() had the pin params reversed.

I should have known better than to try debugging at 2 AM. Anyhow, I fixed that, rechecked all the cables, and retested. Now, after the read request is sent, there are no bytes available to read at all. That seems more logical...

Resetting all expectations, this leaves two possibilities:

1. the logic converters are garbage
2. the BMS is not responding because there's something wrong with the request
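For reference, on the ESP32 Arduino core, `HardwareSerial::begin` takes the RX pin before the TX pin. Assuming the pin defines from the sketch above, the corrected call would be:

```cpp
// ESP32 Arduino core signature: begin(baud, config, rxPin, txPin)
// With the sketch's defines (Rxd_pin = GPIO16, Txd_pin = GPIO17):
HWSerial.begin(9600, SERIAL_8N1, Rxd_pin, Txd_pin);
```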
 
[UPDATE]

....I think I got it.

After all the above, I had forgotten to reconnect the ground from the ESP32 to the ground on the BMS comm port. I'm now receiving response bytes that match what the protocol says for an error condition from the BMS. That is:

DD 03 80 00 FF 80 77

So, that's a step forward but still an error.
 
[UPDATE]

fixed the above and now I'm getting a basic info response!

When passing the data byte, it must be 0xFF per the docs. This is a C++ detail that I don't understand; the protocol doc says null.

Code:
----------------------
sending read request
delaying read....
read byte DD
read byte 3
read byte 0
read byte 1D
read byte 14
read byte D8
read byte 0
read byte 0
read byte 27
read byte F
read byte 27
read byte 10
read byte 0
read byte 0
read byte 2B
read byte 9
read byte 0
read byte 0
read byte 0
read byte 0
read byte 0
read byte 0
read byte 20
read byte 64
read byte 3
read byte 10
read byte 3
read byte B
read byte D8
read byte B
read byte B8
read byte B
read byte D3
read byte FB
read byte 38
read byte 77

Next step is to start using bms.cpp and actually process the response into something useful.
 
It looks like "checksum" is a 16 bit (uint16_t) value in the code above and I believe the part of the code where it subtracts a value from "0x10000UL" presumably is how data gets into the upper (most significant) bits. Are you suggesting that only the lower (least significant) 8 bits are ever populated?



I'm not sure I'm convinced that's the problem...

It seems like kind of an odd checksum since it doesn't take into account the data being sent, but I assume that's because the message has been hard-coded.
I wish I had a good answer, but without knowing more about the protocol, it's hard to figure out how to handle the checksum, etc. Is there a document somewhere that describes this? Though if it works as-is, then the issue is moot.
 
I wish I had a good answer, but without knowing more about the protocol, it's hard to figure out how to handle the checksum, etc. Is there a document somewhere that describes this? Though if it works as-is, then the issue is moot.
It's briefly covered in the JBD protocol doc (attached in a prior comment)
 
Looks like the checksum is not the XOR of the bytes. It's just the 2's complement of the arithmetic sum (16 bits) of the command (to the BMS) or the status (from the BMS), the length byte, and the data bytes. Doing a bitwise inversion and adding 1 is 2's complement.

A simplified code snippet is (match up the Serial.println statements to the original code):

Serial.println("----------------------");
Serial.println("Sending read request for basic info and status");
HWSerial.write(0xDD); // Start byte
HWSerial.write(0xA5); // Read command
HWSerial.write(0x03); // Basic info register(s)
HWSerial.write(0x00); // No data bytes
HWSerial.write(0xFF); // Checksum high byte
HWSerial.write(0xFD); // Checksum low byte
HWSerial.write(0x77); // Stop byte

Serial.println("delaying read....");

That doesn't explain why it's not working ... seems like yours is just echoing back what you sent, or something in the Arduino code is doing a loopback.
 