Raspberry Pi Pico (microcontroller)

That's fine, they're allocated on the stack ;)
Not if you're doing it right.

I had one product where the customer had their own rate calculating code that was totally secret squirrel stuff (it involved money).

We were supposed to make function calls into their engine for "EVERYTHING". Want the bitmap for a character, make a function call. Want to know how to map the keys, make a function call. Want to look up a user interface prompt (in the local language), make a function call. And we were supposed to walk the same chain of calls every time we needed to do anything.

First implementation was so slow (this was running on a Philips XA51) that it was unusable. I built up a table of function pointers in RAM at startup and jumped around their judiciously slow rate engine. Sped up the user interface by a 5X factor.

I built up the function pointer tables on the fly every time we powered up (new rate data could be loaded into the device without updating our firmware), so the only real risk was that they might get tricky and conditionally remap internal functions based on context. So I created an automated test bench to exhaustively determine if this ever happened and ran it each time we got a new version of the rate engine. It never changed. They honestly weren't very good coders. The customer's internal development group was also slow to respond, and I was routinely asked to fix bugs in their rate engine from my code. No problem, they were buying $25M a year of our product (in the '90s) and we were making a 60% margin on that, so we were glad to cater to their every whim. That $15M of margin a year will buy a lot of my time.
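The shape of it, as a rough sketch (the types and the resolver names here are invented for illustration, not the actual rate engine API): walk the slow vendor lookups once at power-up, then every hot path is a single indirect call.

Code:
#include <stdint.h>

/* Hypothetical types for the vendor entry points (not the real API). */
typedef const uint8_t *(*glyph_fn)(char c);        /* returns the bitmap for a character */
typedef const char    *(*prompt_fn)(uint16_t id);  /* returns a localized UI prompt      */

/* Hypothetical slow vendor lookups we only want to walk once, at power-up. */
extern glyph_fn  engine_resolve_glyph_handler(char c);
extern prompt_fn engine_resolve_prompt_handler(uint16_t id);

static glyph_fn  glyph_table[128];   /* built in RAM at startup */
static prompt_fn prompt_table[64];

void build_dispatch_tables(void)     /* called once after every reset */
{
    for (int c = 0; c < 128; c++)
        glyph_table[c] = engine_resolve_glyph_handler((char)c);

    for (uint16_t id = 0; id < 64; id++)
        prompt_table[id] = engine_resolve_prompt_handler(id);
}

/* Hot path: one indirect call instead of re-walking the vendor call chain. */
const uint8_t *get_glyph(char c)
{
    return glyph_table[(unsigned char)c & 0x7F](c);
}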
 
I use a C++ compiler (because every embedded C tool chain these days is really a C++ tool chain), but my embedded code is pure C and assembler. And yes, I use the dreaded pointers. Even worse, tables of function pointers! Gasp.

I avoided pointers as long as I could. Once I learned them, I felt like a coding superhero. Super clean and efficient. They can feel convoluted and deliver some really hard-to-track-down bugs, but when I only had an 8-bit CPU and tiny RAM - this is the way. Every bit and clock cycle is justified.

When I use a 32-bit MCU with some huge amount of flash and RAM to do a relatively tiny task - it is hard to dig in that deep anymore. I don't need most of my projects to run for a decade continuously or to run from some ridiculously limiting power source.
Some see it as a cheat to oversize the MCU to allow higher-level (bloated) code - but if the project is OK with the ramifications, the time saved is delicious.
 
Some see it as a cheat to oversize the MCU to allow higher-level (bloated) code - but if the project is OK with the ramifications, the time saved is delicious.
I may have started writing device drivers in Microsoft Macro Assembler, but all the PC code I write today is in C#. Use the appropriate tool for the job.

I do stuff that is used in nuclear power plants, by the military, NASA, etc. It has to work and I have to be able to prove it works (which is actually a lot harder than writing it in the first place). That stuff is going to be absolutely deterministic. I am going to use an RTOS, there will be automatic regression testing hooks built into it, and the only dynamic memory I use is automatic function variables, which always get initialized before I use them. And I use hardware watchdog timers, and maintain internal logs (in EEPROM) so we can figure out what happened during a post-mortem.
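The skeleton of that, as a sketch only (WDT_KICK(), read_sensor() and the EEPROM logging call are placeholders for whatever the particular part and product provide, not code from any of those systems):

Code:
#include <stdint.h>

#define WDT_KICK()  do { /* write the part-specific watchdog refresh register */ } while (0)

extern int  read_sensor(void);                              /* stub for illustration */
extern void log_event_to_eeprom(uint16_t code, int value);  /* stub for illustration */

void control_pass(void)
{
    int raw   = 0;      /* automatic variables, always initialized before use */
    int fault = 0;

    raw = read_sensor();
    if (raw < 0) {
        fault = 1;
        log_event_to_eeprom(0x0101u, raw);   /* breadcrumb for the post-mortem log */
    }

    if (!fault)
        WDT_KICK();     /* only refresh the watchdog when the pass completed cleanly */
}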

I am also a huge proponent of ring buffers for communication drivers. The idea that a buffer overflow could be a vector for a malicious code attack still baffles me. I have never even heard of a ring buffer crashing out of the sandbox yet. I may overflow the ring buffer and drop data on the floor, but as soon as I catch up we are back in business. And the protocols I specify notice things like this and fix them automatically (or tell you it happened, if that is the right thing to do).
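For reference, the kind of ring buffer I mean, stripped to the bone (a sketch with a power-of-two size so the index wrap is just a mask; a real driver adds the interrupt-safety details for the specific part):

Code:
#include <stdint.h>
#include <stdbool.h>

#define RB_SIZE 256u            /* must be a power of two */
#define RB_MASK (RB_SIZE - 1u)

typedef struct {
    volatile uint16_t head;     /* written by the producer (RX interrupt) */
    volatile uint16_t tail;     /* written by the consumer (main loop)    */
    uint8_t data[RB_SIZE];
} ringbuf_t;

/* Producer side. If the buffer is full we drop the byte on the floor;
 * the protocol notices and recovers, the buffer itself never breaks. */
static bool rb_put(ringbuf_t *rb, uint8_t byte)
{
    uint16_t next = (rb->head + 1u) & RB_MASK;
    if (next == rb->tail)
        return false;           /* overflow: data dropped, buffer intact */
    rb->data[rb->head] = byte;
    rb->head = next;
    return true;
}

/* Consumer side. */
static bool rb_get(ringbuf_t *rb, uint8_t *byte)
{
    if (rb->tail == rb->head)
        return false;           /* empty */
    *byte = rb->data[rb->tail];
    rb->tail = (rb->tail + 1u) & RB_MASK;
    return true;
}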

The idea is to engineer a robust, efficient product that does what it is supposed to do, when it is supposed to do it. Nothing more, nothing less.
 
@HaldorEE I could learn so much from you.
Are you able to send me an archive of your brain, including all the 'experience' files?

:ROFLMAO::LOL::ROFLMAO::LOL::ROFLMAO:

My coding skills have always been a lower priority than my ME and EE skills. To make that even worse, I owned my own businesses for 25 years, which ensured that I was distracted 24/7.

As I get older, all I want to do is software development and coding.
 
Not if you're doing it right.

??? without doing heap alloc they will be on the stack, there's no other way... ^^ (unless you have a very weird architecture...)


I had one product where the customer had their own rate calculating code that was totally secret squirrel stuff (it involved money).

?


They can feel convoluted and deliver some really hard-to-track-down bugs

If you follow a few rules you suddenly have far fewer problems (always initialize them, either to what you want in them or to NULL; 1 malloc = 1 free; never, ever return a pointer to a local variable; etc...).
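Those rules in miniature (nothing clever, just the habits):

Code:
#include <stdlib.h>
#include <string.h>

char *make_label(const char *name)
{
    char *label = NULL;                  /* always initialize pointers (NULL if nothing better) */

    if (name == NULL)
        return NULL;

    label = malloc(strlen(name) + 1);    /* a heap pointer is fine to return to the caller; */
    if (label != NULL)                   /* a pointer to a local array never would be       */
        strcpy(label, name);
    return label;
}

void use_label(void)
{
    char *l = make_label("battery");     /* 1 malloc ...                */
    if (l != NULL) {
        /* ... use l ... */
        free(l);                         /* ... = exactly 1 free        */
        l = NULL;                        /* and don't leave it dangling */
    }
}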
 
@HaldorEE I could learn so much from you.
Are you able to send me an archive of your brain, including all the 'experience' files?
It would hurt. I am just blessed that I have been able to do this pretty much my entire working life and I am getting close to retirement now.

The first embedded product I worked on used an Intel 8051 processor with 256 bytes of RAM - that's right, bytes. The firmware was written in assembler and the development environment used one of the old blue Intel ICE boxes. That was not an inexpensive development system.

(photo: Intel ICE-51 in-circuit emulator)

I implemented a 20-bit load cell signal conditioner with an adaptive FIR filter on that processor, and it also serviced a simple UI and a serial port for a PC connection. This turned into the most successful product my company ever made. The number made is in the millions. You have seen them: they are on the counter of every FedEx or UPS store - the box-weighing scale on the counter. It is a Mettler-Toledo PS60 bench scale. The latest version of it is based on an ARM Cortex-M3 processor and it has a USB port with a tiny graphical display (got to be able to handle international character sets now).

The first version of this was the first product I worked on, and the 4th generation redesign (now called the GPS-60) was the last product I worked on during my 32 years at that company.
 
As I get older, all I want to do is software development and coding.
I have successfully fought off every attempt by my employer to turn me into a manager. I would have sucked so hard at that. It's not that good management is not important, it is that I would be terrible at it.

The most important book I ever read was The Peter Principle. I never took that last fatal promotion to my level of incompetence.
 
Here are some hard-won bits of wisdom.

Every single processor port pin that is connected to a logic device input gets a pull-up resistor. Reason? Because processor GPIO levels are not defined while the processor is in reset and may be floating. The first time I ran into this was with a couple of memory-mapped I/O registers on an 8051 (I learned this one a long time ago). This was on a USB-powered device back in the day when 4 V at 100 mA was the design target. I was really paying attention to the startup inrush current draw from the USB port (the spec says to limit this to no more than the effect of a 10 uF cap). We were getting triple that. I finally tracked it down to the 74LS244 latches drawing 20 mA each because the processor's bus pins were floating while the processor was in reset. Turns out this was happening on every single product we designed with external memory buses or memory-mapped I/O (about 100% of them). We added pull-up resistors to all of our bus pins after that. Intel had nothing about this in their design guidelines either.

Here is another one. Every single signal connected to a processor's GPIO gets a 33 ohm series termination resistor. From a logic level and timing standpoint, that is small enough to be completely transparent. But that little bit of termination can make a difference when you are in the RF test lab trying to pass FCC or CE requirements. And when you find the specific traces that are the real problem, all you have to do is increase the resistor value to something between 60 and 100 ohms to seriously knock down the ringing (and radiating). Beats having to change the PCB layout to add the resistor that should already be there.

And take a good look at your SPI bus sometime. What you see there may horrify you. Add a 100 ohm series resistor and sometimes a 0.1 nF to 1 nF cap to ground at the receiver end of the signal, and that ringing will be gone.


It is things like this that make adding series resistors to everything just not worth talking about - just do it. Series resistance also helps with ESD immunity. It is like eating your vegetables, there really is no downside and it can save you from some very unpleasant situations.

Here is another one that not many people know:

Where you locate the connectors on your PCB has more influence on how many hours you spend in the Lab fixing RFI and EMI problems than anything else you can do.

Ever seen a PCB with a digital cable coming out one end of the PCB and an analog cable coming out the other end? Ever wonder why they always seem to have huge ferrites on the cables? Those two issues are most definitely related.

The reason why is current flow in the PCB ground plane. Any time you have a signal flowing through a trace, the return current flows back to the source of the signal through one of the power planes (most often the ground plane). If you have current flow in a conductor then you have a voltage potential across that conductor. If that signal is toggling at say a couple of MHz then you have a voltage toggling at a couple of MHz across that ground plane.

Now the bright-eyed young engineer goes to place his connectors. He wants to make sure the "dirty" digital stuff doesn't pollute his pristine analog stuff so he puts the digital cable on one side of the PCB and his analog cable on the other. That will certainly keep them separate. Problem is those cables have shields and those cable shields are connected to the ground plane.

What is the phrase used to describe a circuit consisting of a MHz signal generator connected to a pair of wires extending out each side? I would call that a radio transmitter with a dipole antenna. Better stock up on ferrite cores.

What a lot of naive engineers don't realize is that the primary way that RF leaks out of a PCB is on the shields of the cables connected to it. And managing how and where those cables connect to the PCB is vitally important to the future of your hairline.

The best solution is to put all of your connectors along one edge of the PCB and don't run circuit traces anywhere near that edge of the PCB. They can come directly towards it (at right angles to the PCB edge, just not alongside it). If you do this there will not be current flowing along the edge of the ground plane and you won't need to add ferrites to your cables. I have gone so far as to add a moat in the ground plane so that there is a strip of ground plane running along the edge of the PCB for all the cable shields to connect to, and that strip is only connected to the main ground plane at a single location.

Sad thing is, people like lots of cables, and there are enclosures already designed by people who don't understand RFI and EMI, so you get backed into a corner. The next best solution is to put all the connectors in one corner of the PCB and don't put anything switching anywhere near that part of the PCB. The engineers at IBM who designed the ISA bus knew what they were doing. If you are in a truly hopeless situation, consider using galvanic isolation ICs and partitioning the ground plane into isolated domains. Expensive, but it works.
 
I am going to be doing an Arduino Hat PCB design for load cell inputs. I am going to use it for research, and it might turn into a commercial product. I will have to maintain confidentiality for IP-related stuff, but it means I suddenly have a real need to learn about how Arduino and Raspberry Pi boards are used with expansion PCBs.

I just got an Arduino UNO sitting here on my desk. Talk about archaic. That thing is based on an 8-bit AVR. I will look at the Arduino programming language, but I think MicroPython makes so much more sense.

I know, this would be perfect for the Victron.Connect relay output box. Basically just implement the Victron.Connect serial protocol and implement a bit of state logic to manage the relay outputs.

I want that box. Guess I need to build one, because except for the Cerbo or ColorGX, you can't just buy it.

I know about Victron Venus on a Raspberry Pi. That would certainly be the way to go if you want all the other stuff.


Still, a relay output box is a pretty trivial exercise. And I have an Arduino Uno I am not going to use for anything else...
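Something like this for the relay side, as a very rough sketch (parse_frame(), the thresholds and battery_mv are all invented for illustration; the actual Victron.Connect protocol handling is the real work):

Code:
#include <stdint.h>
#include <stdbool.h>

#define RELAY_ON_MV   13200   /* hypothetical pick-up threshold  */
#define RELAY_OFF_MV  12100   /* hypothetical drop-out threshold */

extern bool parse_frame(int32_t *battery_mv);   /* stub: would parse the serial protocol   */
extern void set_relay(bool on);                 /* stub: would drive the relay output pin  */

void relay_task(void)
{
    static bool relay_on = false;   /* the one piece of state */
    int32_t battery_mv = 0;

    if (!parse_frame(&battery_mv))
        return;                     /* no complete frame yet, try again next pass */

    if (!relay_on && battery_mv >= RELAY_ON_MV)
        relay_on = true;            /* battery recovered: close the relay */
    else if (relay_on && battery_mv <= RELAY_OFF_MV)
        relay_on = false;           /* battery low: open the relay        */

    set_relay(relay_on);
}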
 
Here is another one. Every single signal connected to a processor's GPIO gets a 33 ohm series termination resistor. From a logic level and timing standpoint, that is small enough to be completely transparent.

Yes, I planned to use 1.8 k but since 33 Ohms is good enough I guess I'll go down to 330 Ohms (values are because of BoM reuse) to gain back some drive capability.


It is things like this that make adding series resistors to everything just not worth talking about - just do it. Series resistance also helps with ESD immunity. It is like eating your vegetables, there really is no downside and it can save you from some very unpleasant situations.

+1


I will look at the Arduino programming language, but I think MicroPython makes so much more sense.

It's C++ with a few things not available, but most people only use the C part of it.
 
Yes, I planned to use 1.8 k but since 33 Ohms is good enough I guess I'll go down to 330 Ohms (values are because of BoM reuse) to gain back some drive capability.
What signal were you thinking about using a 1.8K resistor on? The series resistor and the trace capacitance form a low-pass filter, which helps reduce noise in general on the PCB. You have to be careful with faster clock rate signals that you don't overdo the resistor value or else you will lose high-frequency drive capability. I am a major fan of using LVDS buffers for really high-speed signals, especially if they have to travel any distance or pass through a harness. Every signal is differential, LVDS just forces you to acknowledge this truth in your PCB design.
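To put rough numbers on that low-pass corner (f = 1/(2*pi*R*C), with a guessed-at 10 pF of trace capacitance): 33 ohm into 10 pF gives a corner around 480 MHz, which is transparent to normal logic edges; 100 ohm into a 1 nF cap at the receiver gives about 1.6 MHz, far above a 100 kHz clock but low enough to knock the ringing right down.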

Most of us configure our PCB layout tools to automatically design 50 ohm microstrip traces. Termination resistance works best when we match the characteristic impedance of the trace, cables and connectors. 47 ohm is a very good match for that, better than 33 ohms. USB Low Speed and Full Speed used 33 ohm resistors (which is where I stole the 33 ohm value from). I2C uses 60 ohm. LVDS, CAN bus and RS485 use 120 ohms.

I fell into the habit of using 33 ohm almost as a placeholder value because it is essentially transparent while still providing some EMI benefit, and when I see this value, I know I haven't paid any significant attention to the signal. I have just about convinced myself to switch the default value to 47 ohm in the future though. It is not like doing this costs any more.

Another benefit of the series resistors, for me, is that the resistor pads give me convenient test points to probe the PCB (nothing like BGA parts to make probing difficult), and if you ever need to rewire a prototype PCB, you can depopulate the resistor and now you have an easy place to extract or inject signals. Beats having to cut traces or solder wires directly to components.
 
I have TTL clock and data signals going to some shift registers and a few simple TTL signals. A 100 kHz clock would be plenty for what I'll do, so speed shouldn't be too much of a concern, and the shift registers have Schmitt trigger inputs to avoid noise-related problems. And the simple TTL signals go into Schmitt trigger buffers to avoid any noise problems too.

I'm more worried about having a fan-out of 4 with the relatively high 1.8 k resistors, but since your experience shows that far lower values are good enough, I can lower them.

I plan to support up to around 1 m of unshielded flat flex cable, so that's why I thought a lot about suppressing probable noise and ringing problems before I even have a chance to encounter them. I also have the shift register signal wires in between grounded wires on the flat cable (even-numbered wires are signal, odd-numbered wires are ground) for the same reasons, and to reduce radiated noise too. It's actually a method that was already used a while back for floppy drive signals.
 
I'm more worried about having a fan-out of 4 with the relatively high 1.8 k resistors, but since your experience shows that far lower values are good enough, I can lower them.

I plan to support up to around 1 m of unshielded flat flex cable, so that's why I thought a lot about suppressing probable noise and ringing problems before I even have a chance to encounter them.
1.8K is too high. I suggest you use 100 ohm resistors and then see if you still have a ringing problem. If you do, then add a 1 nF cap to ground on the receive input. If you want to be pre-emptively safe, just add the 1 nF caps to the PCB layout and you can decide later whether you need to populate them or not.

This application note from NXP discusses this specific situation with an SPI bus running at 100 kHz. I would put the 100 ohm resistors close to the transmitter and then fan out from there. Try using point-to-point (daisy-chain) wiring, as opposed to a star trace wiring scheme. Then put a single 1 nF cap to ground at the most distant receiver.


My latest run-in with ringing (literally last week) was with an SPI bus clocking at 8 MHz. 100 ohm resistors alone did the job for me. When I migrate this design to a multiple-drop SPI network spread across multiple PCBs, I am switching over to using LVDS drivers/receivers. Only the SCK, MISO and MOSI signals get differential drivers. The SSEL signals will get by with just series resistors since they are not toggling except to select which ADC I am talking to.

-Edit-

I just noticed you are going to be driving a 1 m cable. Save yourself a lot of heartburn and use LVDS drivers.

The SCK and MOSI signals can use standard LVDS drivers since that is a one-to-many connection. The MISO signal will need to use M-LVDS drivers to permit a many-to-one connection.

I suggest looking at these ICs for the M-LVDS drivers.

 
Sending data from one transmitter to multiple receivers is simpler: just use normal LVDS drivers and receivers. Where it gets a bit more complex is if you need to clock data in from multiple sources to a single receiver. That is when you need to use M-LVDS drivers.

Are these devices all going to be on the same PCB or are you going to be driving receivers on more than one PCB? If they are all on a single PCB, then just use a single LVDS receiver on the cable input and normal fanout with a single series resistor (100 ohm) from there.

If the cable is going to daisy-chain from PCB to PCB, then you might want to consider doing what I am going to do. Most LVDS receiver ICs build the 120 ohm termination resistor into the IC, and you only want a single termination resistor at the end of the chain. To get around having to manually put the termination resistor onto the last PCB in the chain, I am buffering the input on each PCB and resending it out to the next PCB through another LVDS driver. The data coming back to the master uses M-LVDS drivers so these signals just pass through from PCB to PCB in a daisy-chain fashion.

At the clock speed you are operating at, you could drive a bunch of PCBs and handle some significant cable length this way:


Code:
Source              First PCB           Next PCB            Next PCB            Last PCB
 Tx> --- cable ---> Rx--Tx> -- cable --> Rx--Tx> -- cable --> Rx--Tx> -- cable --> Rx--Tx>


-Edit-

Why are fixed space fonts broken now? I made the above diagram using Courier font and it still messed the spacing up.

 
Are these devices all going to be on the same PCB or are you going to be driving receivers on more than one PCB?

They are on the same PCB ;)


If the cable is going to daisy-chain from PCB to PCB, then you might want to consider doing what I am going to do. Most LVDS receiver ICs build the 120 ohm termination resistor into the IC, and you only want a single termination resistor at the end of the chain. To get around having to manually put the termination resistor onto the last PCB in the chain, I am buffering the input on each PCB and resending it out to the next PCB through another LVDS driver. The data coming back to the master uses M-LVDS drivers so these signals just pass through from PCB to PCB in a daisy-chain fashion.

The signals can also pass through a third optional PCB, and I planned for it to buffer the signal for the other PCB (I don't really have a choice anyway, since the data is serial, not parallel, so each shift register buffers the data for the next).


Why are fixed space fonts broken now? I made the above diagram using Courier font and it still messed the spacing up.

Because most devs don't care too much anymore... Try to use a code block to put ASCII art stuff in, it should work ;)



Just noticed you edited the previous post;

I just noticed you are going to be driving a 1 m cable. Save yourself a lot of heartburn and use LVDS drivers.

I've seen classic old TTL ICs drive things across longer cables than that, and at far higher speeds. So I don't think I'll need LVDS drivers; I guess we will know the answer when I test it ;)

Note I can reduce the clock speed quite a bit (100 kHz would be the maximum I would ever want). I have only 48 bits maximum per frame to send (and I'll send around 3 frames per second), so even at 10 kHz a frame would take less than 5 ms, which is plenty fast enough to not keep the MCU busy for too long.


The SCK and MOSI signals can use standard LVDS drivers since that is a one-to-many connection. The MISO signal will need to use M-LVDS drivers to permit a many-to-one connection.

It's not SPI, it's only your classic clock + data for 595 serial registers, and it's unidirectional so no "many to one" problems to handle.
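Something like this is all the driving side really needs (SET_DATA/SET_CLK/SET_LATCH and short_delay are placeholders for whatever GPIO access and timing the MCU provides, just to show the clock + data idea):

Code:
#include <stdint.h>

/* Placeholders for whatever GPIO access the MCU provides. */
extern void SET_DATA(int level);
extern void SET_CLK(int level);
extern void SET_LATCH(int level);
extern void short_delay(void);          /* sets the effective bit clock rate */

/* Shift a frame out to a chain of 74HC595s, MSB first, then latch it. */
void shift_out_frame(const uint8_t *bytes, int count)
{
    for (int i = 0; i < count; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            SET_DATA((bytes[i] >> bit) & 1);
            SET_CLK(1);
            short_delay();
            SET_CLK(0);
            short_delay();
        }
    }
    SET_LATCH(1);       /* copy the shift register contents to the output latch */
    SET_LATCH(0);
}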



Thanks, I'll keep that link, it might come in handy ;)
 
TTL is fine then. I would split the resistance in half and put 60 ohms on the sender and 60 ohms followed by a 1 nF cap on the receiver PCB.

That way both PCBs have better ESD protection.
 