Solar Assistant - data logger

How did you move SA from the SD card to the eMMC without root? I had to get root in order to run nand-sata-install first.
That's easy. Put the image file you download onto the flash card. I installed the normal Orange Pi operating system onto the flash card, then copied the image file SA provides onto the card too.

Boot up on the flash drive. Now install the image onto the eMMC drive. Once that's done, edit the files on the eMMC drive that give root and SSH access, then shut down the Pi. Remove the flash drive and boot it up from the eMMC.
 
This is the command I used to install the image. Run this after booting up on the Orange Pi OS from the SD card.

sudo dd bs=4M if=2022-10-16-solar-assistant.opi3lts.img of=/dev/mmcblk2 conv=fsync
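Getting root and SSH access onto the freshly written eMMC is then a matter of mounting it from the SD-card OS and editing the relevant files. A rough sketch of what I mean, with the caveat that the partition number and the exact files are assumptions and depend on the SA image version (check with lsblk):

Code:
# Sketch only - verify the eMMC partition layout first with lsblk
mkdir -p /mnt/emmc
mount /dev/mmcblk2p1 /mnt/emmc          # assumption: SA root fs is the first eMMC partition
# Example edit: drop an SSH public key in for root (purely illustrative)
mkdir -p /mnt/emmc/root/.ssh
cat my_key.pub >> /mnt/emmc/root/.ssh/authorized_keys
chmod 700 /mnt/emmc/root/.ssh
chmod 600 /mnt/emmc/root/.ssh/authorized_keys
umount /mnt/emmc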
 
I have more Pi learning to do. All my experience since '93 has been Linux on PCs. This is my first Pi. :) Probably not my last at this point.
 
I was quite surprised how fast the little Pi machines are. The OS is just ARM Linux, but it's very robust. Cool little machines.
 
Kinda wondering if a new thread should be created to show how to root and gain SSH access for Solar Assistant. Might be easier for those searching for the information.

I have the Bareos client installed on my SA, but I'm not sure where SA stores its database so I can do a restore in case something happens. By default Bareos does not back up tmpfs file systems. /dev/shm is a tmpfs, so that can't be it. /var/lib/grafana/grafana.db is a symlink to /dev/shm/grafana.db. Anyone have any insight into how it's storing its data before I go down this rabbit hole?
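In case it helps anyone digging into the same question, this is roughly how I've been poking around so far; nothing here is SA-specific, just standard tools:

Code:
# List tmpfs mounts - anything living only under these won't be on disk
findmnt -t tmpfs
# Follow the grafana.db symlink to see where it really points
ls -l /var/lib/grafana/grafana.db
# See which database-ish services are actually running
systemctl list-units --type=service | grep -Ei 'influx|grafana|mosquitto'
# Size of the likely on-disk data directories
du -sh /var/lib/influxdb /var/lib/grafana 2>/dev/null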
 
Make a directory called /backups on the SA drive, then save this script out and run it. I cut it out of a backup script I wrote that automatically backs up my database on a schedule.

Code:
#!/bin/sh

PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Make sure the working and output directories exist and clear out the last run
mkdir -p /backups/backupfiles
rm -rf /backups/backuptmp

# Stop influxdb so the database files are consistent while they are copied
systemctl stop influxdb
sleep 5

echo "Copying influxdb directory to backuptmp..."
logger Copying influxdb directory to backuptmp
cp -R /var/lib/influxdb /backups/backuptmp
echo "Done."
sleep 5
logger Done

# Bring the database back up before compressing, to keep downtime short
echo "Starting influxdb."
logger Starting influxdb
systemctl start influxdb
echo "Done."
logger Done

# Compress the copy into a dated archive
echo "Compressing backup for download."
logger Compressing backup for download
tar cvpjf /backups/backupfiles/solarassistant_`date +%m-%d-%Y`.tar.bz2 /backups/backuptmp
echo "Done."
logger Done
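The scheduling part was cut out of the script above; for reference, a root crontab entry along these lines runs it nightly (the install path and the time are just placeholders):

Code:
# crontab -e as root - hypothetical location for the script above
0 2 * * * /usr/local/sbin/backup_influxdb.sh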
 
Just found that it was using influxdb. Thank you for posting the script. I see how that is being done. Looks good. I think I'll have it dump via autofs and NFS to my server.
 
So SA keeps all its config in influxdb? If that is the case, then Bareos backing up / along with the backup script should take care of everything in case the Orange Pi dies.
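On the Bareos side, something along the lines of the stock LinuxAll fileset should do it: back up / but only descend into real on-disk filesystem types, so tmpfs mounts like /dev/shm get skipped. The names below are placeholders, adjust them to your own director config:

Code:
FileSet {
  Name = "SolarAssistantAll"     # placeholder name
  Include {
    Options {
      Signature = MD5
      Compression = GZIP
      One FS = No                # cross filesystem boundaries...
      FS Type = ext4             # ...but only into these types, which skips tmpfs
      FS Type = vfat
    }
    File = /
  }
  Exclude {
    File = /proc
    File = /sys
    File = /dev
    File = /run
    File = /tmp
  }
}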

I used autofs to automount NFS from my server. The NFS location I drop the backups on uses ZFS compression, so my tar does not compress and sends the data over the wire uncompressed. My script:
/etc/cron.daily/backup_solar_assistant_influxdb
Code:
#!/bin/bash

PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

echo "Stopping influxdb..."
systemctl stop influxdb

echo "Taring /var/lib/influxdb to /mnt/nfs/solar_assistant_backup ..."
logger Taring /var/lib/influxdb to /mnt/nfs/solar_assistant_backup ...
cd /var/lib
tar -cvf /mnt/nfs/solar_assistant_backup/var_lib_influxdb_$(date +%Y%m%d).tar influxdb

echo "Done."
logger Done

echo "Starting influxdb."
logger Starting influxdb
systemctl start influxdb

echo "Done."
logger Done

echo "Removing backups older than 30 days..."
logger Removing backups older than 30 days...
find /mnt/nfs/solar_assistant_backup -type f -mtime +30 -exec rm {} \;

echo "Done."
logger Done
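For completeness, the autofs setup mentioned above is just two small map entries plus a restart; the server name and export path below are made up, substitute your own:

Code:
# /etc/auto.master - hand the /mnt/nfs mount point over to the auto.nfs map
/mnt/nfs  /etc/auto.nfs  --timeout=300

# /etc/auto.nfs - directory name under /mnt/nfs -> NFS export (hypothetical server and path)
solar_assistant_backup  -fstype=nfs4,rw  fileserver:/tank/backups/solar_assistant

# then reload the maps
systemctl restart autofs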
 
This might be of no use to you because you're looking for automation; however, there is a backup feature in SA which, if all you need is point-in-time backups that you run once in a while, might be sufficient.

[Screenshot: the backup page in the SA web interface]
 
Thanks. I did see that and I did use it when I moved the system over to the eMMC. But yeah, I wanted to automate this and have daily full system backups.
 
So, just using dd, the disk performance is not that great:

Code:
root@solar:/var/tmp# dd if=/dev/zero of=./largefile bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 21.8935 s, 49.0 MB/s
root@solar:/var/tmp# sync && echo 3 > /proc/sys/vm/drop_caches
root@solar:/var/tmp# dd if=./largefile of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.52819 s, 126 MB/s

I like the newer versions that actually have an M.2 slot.
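For a quick sanity check on the read side, hdparm gives a similar number without creating a test file (the device name is an assumption, check lsblk first):

Code:
# Buffered read speed straight off the eMMC (assumes the device is /dev/mmcblk2)
hdparm -t /dev/mmcblk2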
 
The eMMC rocks vs. the flash drive slot. 50 MB/s continuous is a good day for something in the flash port.

The M.2 is 350 MB/s average, so the 3B is the speed king.
I'm not sure if SA will run on the 3B though.
 
Hi folks,

I successfully logged in via SSH. After spending quite some time browsing through the file system of SA, I ask myself: where is the program code of the application itself?
I would like to add additional MQTT values that are shown in the GUI but not published over MQTT.
 
I dunno how large the DEV team is at SA, but I would think exposing all values that SA collects through MQTT should be something SA would want to put towards the top of their roadmap. Have you thought about bringing this to them for consideration?
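In the meantime, assuming the MQTT broker in SA is enabled, you can at least dump everything it currently publishes and see exactly what's missing. Host and credentials below are placeholders:

Code:
# Subscribe to every topic and print topic names alongside the values
mosquitto_sub -h 192.168.1.50 -t '#' -v
# If the broker is configured with a username and password
mosquitto_sub -h 192.168.1.50 -u sa_user -P 'secret' -t '#' -v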
 