Architecture: Proxmox with Home Assistant + Solar Assistant + InfluxDB + MQTT + Mosquitto + Arduino

solarsimon · Solar Enthusiast · Joined Oct 9, 2020 · Messages: 187
I'm looking to deploy Home Assistant, plus a bunch of other things to sit alongside it, to allow a cocktail of control & monitoring activities across a farm. There will be a mix of off-the-shelf devices plus Arduino sensors etc.

My goal is to start off with an architecture that I hopefully won't regret. I don't mind learning. If there's a script for deploying something vs a bunch of steps, I'll typically do the steps individually so that I get a better feel for what's under the skin and stand a better chance of fixing it when it goes wrong.

I think my minimum feature set includes: Home Assistant, Solar Assistant, Mosquitto, Node-RED, InfluxDB, Grafana. (Please tell me about other useful ones.)

My starting point is an i5 HP PC that I've got running Proxmox as a hypervisor, allowing me to create virtual machines in which I can sit Home Assistant and other services. I'm leaning towards running Docker in one of the VMs and stuffing most of the various elements into individual Docker containers, in the hope that it's simpler.
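To make the Docker-in-a-VM idea concrete, a first pass at a docker-compose file covering most of that list might look something like this (a rough sketch only; the image tags, ports and volume paths are assumptions to tweak, not a tested config):

    services:
      mosquitto:                # MQTT broker (drop a mosquitto.conf in ./mosquitto first)
        image: eclipse-mosquitto
        ports:
          - "1883:1883"
        volumes:
          - ./mosquitto:/mosquitto/config
        restart: unless-stopped
      influxdb:                 # time-series storage
        image: influxdb:2
        ports:
          - "8086:8086"
        volumes:
          - ./influxdb:/var/lib/influxdb2
        restart: unless-stopped
      grafana:                  # dashboards
        image: grafana/grafana
        ports:
          - "3000:3000"
        volumes:
          - ./grafana:/var/lib/grafana
        restart: unless-stopped
      nodered:                  # flow-based automation
        image: nodered/node-red
        ports:
          - "1880:1880"
        volumes:
          - ./nodered:/data
        restart: unless-stopped

One `docker compose up -d` then brings the whole supporting stack up, and each service's state lives in its own host folder for easy backup.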

I'd really appreciate any thoughts people have on this proposed architecture & what best practices to follow so I'm less likely to hit a dead end.
Many thanks.
 
Not much to add, other than that this architecture sounds like a good prototype. Hoping others chime in with more feedback. Looking forward to maybe seeing some in-progress stuff. I'm thinking of a somewhat similar architecture, but haven't implemented much yet.
 
https://tteck.github.io/Proxmox/ is great for scripts.
Other than what you already have, I use:
- Homarr, a nice dashboard to keep all my VMs/containers neatly accessible
- Turnkey Linux fileserver, a simple, very lightweight NAS
- AdGuard, for private DNS

The only thing I will say is that Solar Assistant, I believe, needs to run on ARM, so I don't think you're able to run it via a VM/LXC within Proxmox.
 
Solar Assistant will need to be on its own Raspberry Pi.

I'm not an IT guy, but from everything I've read from those who have helped me, the suggestion is to steer away from putting HA into Docker.

I have a mini PC with a Proxmox and HA installation. Solar Assistant connects via an MQTT bridge. My InfluxDB and Grafana are inside HA, but I do wonder if they would be better off running on their own virtual server to keep my HA system's resource demand down.
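For anyone curious, the bridge side is only a few lines in the receiving broker's mosquitto.conf; roughly like this (the address and topic pattern are placeholders, check the actual topic names Solar Assistant publishes):

    # bridge that pulls Solar Assistant's topics into the local broker
    connection solar-assistant
    address 192.168.1.50:1883
    topic solar_assistant/# in 0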

Proxmox is great for automated backups and for taking snapshots before any HA updates, so a quick rollback can be done if something doesn't work right.
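From the Proxmox shell that is one command each way (the VM ID and snapshot name here are placeholders):

    # snapshot before an HA update, roll back if the update misbehaves
    qm snapshot 110 pre-ha-update
    qm rollback 110 pre-ha-update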
 
For Home Assistant I use the ODROID from Ameridroid. My Solar Assistant is running on a Pi Zero and uses MQTT to connect to Home Assistant. Nabu Casa is how I connect from anywhere in the world.

My system is bulletproof and consumes very little power.
 
My server rack is full of old, cheap Dell PowerEdges that are power hungry, but they run on solar so I'm not worried about efficiency. A positive about using old commercial hardware is that mine has been running non-stop for almost a decade in a hot, high-humidity climate with no issues.
 
Hello.

- Consider running HAOS as a VM instead of in Docker. I use Docker for everything except HA; there it's a bit messy.
- Docker running on a VM is a good way to go.
- CCTV? Check out Frigate: AI for cameras, and it can integrate with HA.
- If you are doing Proxmox, one of the main advantages is orchestrating a multi-machine cluster, so if any of your hardware dies, the services automatically migrate to another machine. Not sure if you are a hardcore Linux guy, but if you are just starting on this, maybe you could also consider Unraid, which has great Docker management and is way easier than Proxmox. Also maybe bare Ubuntu Server.
- A second machine can help you do automatic backups. I'm guessing the services will be important for you, so consider how much data / downtime you would be risking. Over the years I have had servers fail because of HDDs and RAM.
- If you are storing important data, consider for the future (apart from the automatic backups) some sort of "RAID" so you are not down if an HDD fails: ZFS, MergerFS & SnapRAID, or the Unraid default array (see the ZFS sketch just below).
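A quick sketch of the ZFS option, since it is built into Proxmox (the pool name and disk paths are placeholders, and this command wipes both disks):

    # two-disk mirror: either drive can die without losing the pool
    zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc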

Basically:
If you have time and don't mind making the server your hobby, Proxmox is great; you will learn a lot and it's very powerful.
If you are doing this because you need the services and don't want to spend hours learning / testing, try Unraid: basically the same principles, but with a nice GUI and a lot of resources (check Spaceinvader One on YouTube, for example).
 
I am running HA as a VM, and after a few issues I decided to run most of the HA-adjacent components like Node-RED, Influx and Grafana outside the HA VM, as one bad update on HA killed the other services and restoration took a lot of time.

Think about HDD/SSD usage too, as HA is known as an SD-card and SSD killer (I hit that myself: after over a year of continuous HA use, my SSD died).

So at the moment HA runs as a KVM VM on a hard-to-kill 11-year-old Seagate HDD, and the other components I have in Podman as containers.
For monitoring I have two levels, Grafana + Node-RED, and in some cases this redundancy was crucial to getting a low-battery alert :D. Moreover, Node-RED + a Telegram bot gives me access to my automations from anywhere without spending a cent on third-party services.
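The core of that alert path is small enough to sketch in shell form. This is just the idea rather than my actual flow, and the broker address, topic, token and threshold are all placeholders (Node-RED also rate-limits so you don't get spammed while the battery stays low):

    # watch the state-of-charge topic and ping Telegram below 20%
    mosquitto_sub -h 192.168.1.10 -t 'solar_assistant/battery/soc/state' | \
    while read -r soc; do
      if [ "${soc%.*}" -lt 20 ]; then
        curl -s "https://api.telegram.org/bot<TOKEN>/sendMessage" \
          -d chat_id=<CHAT_ID> -d text="Low battery: ${soc}%"
      fi
    done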

 
You know, I've been "into" virtualisation since ESX 3.0.3 (by VMware) came out. This was around 2008, I think. I immediately convinced my boss at the time to steer the entire company (a Microsoft Gold Partner back then) towards virtualisation as a solution to all our problems with Microsoft's server systems. But I haven't seen a need to use virtualisation on Linux for a long time.

Then containers came out. IMO if all you run is Linux, containers fulfill 99% of what virtualisation is used for at lower overhead.

This is not to say there are no genuine cases to virtualise Linux. Wanting to play with VMs, not having to touch the host OS, having an unchanging "master template" for the host etc are just some of them.

But having said that, my first choice for home is containers and a simple setup too. I mostly run all my home services via docker-compose scripts. If I had more than a handful or I needed a setup I might leave unattended for a long time I'd probably go for one of the lightweight kubernetes options (k0s most likely) with a couple small servers.

I always do all my prototyping on a straight Debian install (with Docker). I just go to ChatGPT (4, which I have a subscription for; 3 is useless for this) and I start with something like "give me a docker compose v2 file to run Grafana in a container, assume configs are saved in /apos/grafana/conf on the host, map it to the same folder on the guest and configure the app to use it". Same for InfluxDB and the others. It takes a minute and I have the beginnings of a working setup I can tweak by hand or by AI.
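For that exact prompt, the kind of thing it hands back is roughly the following (the host path comes from the prompt; everything else is illustrative):

    services:
      grafana:
        image: grafana/grafana
        ports:
          - "3000:3000"
        volumes:
          - /apos/grafana/conf:/apos/grafana/conf
        environment:
          # point Grafana at the mapped config location
          - GF_PATHS_CONFIG=/apos/grafana/conf/grafana.ini
        restart: unless-stopped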
 
Pretty much agree with all of this, except one major point: Home Assistant can be a massive learning curve if you go the Docker route. It's far easier to set it up in a VM as HAOS, then do as Luk has suggested and run the rest in either Docker or LXC containers.
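For reference, spinning HAOS up as a Proxmox VM by hand is roughly the following (the VM ID, storage name and image version are placeholders; the tteck helper scripts linked earlier automate the same steps):

    # fetch and unpack the HAOS disk image (version is just an example)
    wget https://github.com/home-assistant/operating-system/releases/download/12.4/haos_ova-12.4.qcow2.xz
    xz -d haos_ova-12.4.qcow2.xz

    # create a UEFI VM and import the image as its disk
    qm create 110 --name haos --memory 4096 --cores 2 \
      --net0 virtio,bridge=vmbr0 --bios ovmf --machine q35 \
      --efidisk0 local-lvm:0,efitype=4m,pre-enrolled-keys=0
    qm importdisk 110 haos_ova-12.4.qcow2 local-lvm

    # attach the imported disk (check its name with 'qm config 110') and boot
    qm set 110 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-110-disk-1 --boot order=scsi0
    qm start 110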
 
The non-techie way to do it is to:
1. Buy the kit here
2. Sign up for Nabu Casa
3. Use Reolink cameras
4. Use Shelly switches
5. Use the Emporia Vue energy monitoring kit and their smart plugs to monitor loads

Once you climb the learning curve with this setup, you can decide whether to take it to the next level. I didn't go any deeper than this: I have a Flo water meter, Sonoff ceiling fan controllers, and Broadlink remotes to control my AC mini-splits. I recently added the integration to bring Solar Assistant into HA using a Pi Zero. All my devices are WiFi connected. Many people will try to tell you to avoid WiFi and use all-local devices, but it is impossible to completely remove yourself from the cloud. To make my cloud connection robust, I use Starlink for my internet.
 
I'm using a Dell mini PC with KVM as the virtual machine setup running Home Assistant. I have large Proxmox servers in the house, and I found the KVM setup using the official HAOS full image to be the fastest and of course the most full-featured way to go. It will always update to the latest version with no future issues using this method.

Proxmox is sluggish compared to the KVM method in my testing. Also, having the mini PC in the workshop itself makes all of the network traffic local to that location. I feed the workshop via WiFi since it's so far away from the main servers, so I pick up network reliability by having it in the workshop itself. I plan on running fiber to the workshop eventually, but for now this works best for me.

The HAOS image has all of the supporting things, from Grafana to MQTT, on it. Solar Assistant runs on Orange Pis for the different inverters.
I have two Cerbo GXs handling the monitoring of the Victron gear too.
 
Was this due to the hardware you were testing it on?
I can throw used Dell hardware at the problem all day long, anything from micro PCs to PowerEdge servers.
Ditto, never had a problem with Dell stuff and Proxmox. I had some issues with an HP server I tried once, but that was purely down to IOMMU problems; other than that it's always worked great.
 
My Proxmox servers are dual 12-core Xeon boxes, so 24 cores, with memory ranging from 64 to 96 GB. These are all rack-mount servers, not homebrew.

They run all of my backup servers / Linux test-bed servers etc. just fine, and even run Home Assistant just fine, but it is noticeably more sluggish on the Proxmox servers vs a standalone Dell micro i5 PC using KVM.

You have to remember that Proxmox is a wonderful platform for hosting things, but the KVM setup is pure bare bones. KVM is half the processes running vs a Proxmox setup.

This is the Dell I use for Home Assistant:

Dell OptiPlex 3060 Micro PC, single Intel Core i5-8500T @ 2.10GHz


 
I'm just using single-CPU, 4-core 1U PowerEdges (cheapo). Either HA plays better with them, or my HA is sluggish and I don't know the difference, but I've never noticed any slowness.
 
You won't, unless you have it running on something to compare it to. I didn't know it was sluggish until I put it on the micro PC :)

Like I said, it runs just fine under Proxmox. It's just snappier on the standalone using KVM.

Another thing you need to consider is that KVM and Proxmox are the same thing. They both use the same virtual engine core; Proxmox is KVM with fancy bells and whistles. So think of it as taking Windows and turning off and/or uninstalling all the unneeded things.
 
So out of curiosity, why KVM? Backups?
See, that's where I have a problem: I have no idea what I need and don't need... I'll be looking to add more servers once I have the whole house on solar, so I might look into it then.
 
If you're only going to have one machine doing HA, then KVM is ideal. It's not hard to set up; actually, it was easy. That said, tinkering with it after it is set up is a ROYAL pain if you're not familiar with it and/or Linux, I would imagine. A full install just requires a single command in the terminal.


One key thing: use the full HAOS image. Don't do any of the other ones like Core or Supervised or whatever; do the full one. It's very small in size, image-wise. Then in KVM you want to grow the drive size after it's installed and running. HA will see it and allow you to expand out to the full size of the drive increase. This gives you tons of room for expansion later on as needed, without having to do anything.
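The drive-grow step is a single command with the VM shut down, and HAOS expands into the extra space on the next boot (the image filename and size increase here are placeholders):

    # add 64 GB to the virtual disk; HAOS grows into it on boot
    qemu-img resize haos_ova-12.4.qcow2 +64G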
 
I just don't get why you'd install in KVM. If you're using a dedicated machine, why not just install bare metal, no hypervisor?
 
Because the image they have requires (let's not say requires, since they do have an x86 package) a Pi to run on. I wanted it running on a PC for more horsepower.

By doing it this way I have a full Ubuntu install on the other side of HA, so if I need to run something there I can, but normally I don't have much running there.
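For anyone wanting to replicate that layout, importing the HAOS qcow2 as a VM under plain KVM/libvirt on Ubuntu looks roughly like this (the image name, sizes and bridge name are placeholders, not a tested recipe):

    # boot the HAOS image as a UEFI guest under libvirt
    virt-install --name haos --memory 4096 --vcpus 2 \
      --disk haos_ova-12.4.qcow2,bus=virtio --import \
      --os-variant generic --network bridge=br0 \
      --boot uefi --noautoconsole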
 
