diy solar

Does anyone know of a ChatGPT/LLM integration with forums?

Think back to the dot-com bubble: the future potential of the Internet was being sold on a gigabit-capable network, but 56k was the consumer connection. AI has similar potential, but it's running at 56k currently.
 
Sic it on this thread and ask how to properly wire a Growatt single-phase inverter for split phase...
 
If you have a high-end gaming rig with an Nvidia RTX card, there is a local version of AI you can install, and all data is kept local. I haven't tried it yet but heard it works pretty well; I saw where you could point it at a site with content and it will summarize it.
 
Think back to the dot-com bubble: the future potential of the Internet was being sold on a gigabit-capable network, but 56k was the consumer connection. AI has similar potential, but it's running at 56k currently.
Have you tried using the tools? There are some applications that people are pretty happy with like foreign language learning.

Which makes sense, because those would be assumed to be the main use cases before people realized that even the general models are useful for applications beyond just language processing.
 
All right, it sounds like the better place for me to have this discussion is a machine learning forum, where there are more Kool-Aid drinkers and people who have actually tried building the pipeline.

:tips_hat:
 
ChatGPT is an LLM - a large language model. Essentially, it knows how language works and what typically follows what. You can view it as a fuzzy, compressed, blurred view of the written internet. So it's fine at answering common questions but poor at technical/legal/detail things. For many of the general threads here it'd be fine, but for technical details it's as likely to make up the details as know them.

It's not 'intelligent' and doesn't have any special insights into stuff. View it like a well-spoken nice-but-dim friend - talks well, but knows little, and when pressed for details is likely to stumble.....

Speaking as an AI researcher....
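The "what typically follows what" idea can be sketched with a toy word-frequency model. This is only an illustration of the next-token intuition, not how ChatGPT actually works (real models use neural networks over subword tokens); the corpus is made up.

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny made-up corpus,
# then "predict" the most common successor.
corpus = "the panel feeds the inverter and the inverter feeds the battery"

followers = defaultdict(Counter)
words = corpus.split()
for a, b in zip(words, words[1:]):
    followers[a][b] += 1

def next_word(word):
    # Most frequent word seen after `word`; None if never seen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "inverter" follows "the" most often here
```

Scale that intuition up by many orders of magnitude and you get the "fuzzy compressed view of the written internet" described above - fluent at common patterns, unreliable on rare specifics.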
 
ChatGPT is an LLM - a large language model. Essentially, it knows how language works and what typically follows what. You can view it as a fuzzy, compressed, blurred view of the written internet. So it's fine at answering common questions but poor at technical/legal/detail things. For many of the general threads here it'd be fine, but for technical details it's as likely to make up the details as know them.

It's not 'intelligent' and doesn't have any special insights into stuff. View it like a well-spoken nice-but-dim friend - talks well, but knows little, and when pressed for details is likely to stumble.....

Speaking as an AI researcher....

I find it works pretty well if you treat anything it says as about as trustworthy as a message passed through three fifteen-year-old boys with a hot teacher in the room.

For writing code it does pretty well once you learn to talk to it and if you guide it through the logic you want.

If you ask it for anything detailed involving calculation, have it show the work and check it. It doesn't do unit conversions well, and it slips decimal places.
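One way to check the work is to redo the arithmetic yourself in a few lines of code rather than trusting the model's numbers. A minimal sketch with made-up example figures (a 100 Ah battery at a nominal 12.8 V feeding a 300 W load):

```python
# Sanity-check an LLM's battery-runtime arithmetic instead of trusting it.
capacity_ah = 100   # battery capacity, amp-hours (hypothetical)
voltage = 12.8      # nominal pack voltage, volts (hypothetical)
load_w = 300        # load, watts (hypothetical)

energy_wh = capacity_ah * voltage   # Ah * V = Wh -> 1280 Wh
runtime_h = energy_wh / load_w      # Wh / W = h  -> about 4.27 h

print(f"{energy_wh} Wh, {runtime_h:.2f} h")
```

If the model's answer is off by a factor of ten, this is exactly the kind of slipped decimal place it tends to produce.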

Bing Chat is pretty useless for most things, including code writing. If it spits something out and you tell it that's wrong and you want something else, it tends to flip back and forth between one bit of nonsense and another. But after a Teams meeting you can ask for a transcription and summary, and it generally gets the high points, though it also throws out garbage from time to time.


Either one, if you ask it a technical question, tends to condense the first 20 answers a Google search would give you and treat those as gospel. And we all know how many links it takes to sift the crap from the good, and how often the person who is SURE they have the right answer is totally wrong.

If I post an answer from ChatGPT here, I always preface it with: warning, ChatGPT answer follows.
 
For writing code it does pretty well once you learn to talk to it and if you guide it through the logic you want.
This is the part that scares me. If you are a better programmer than it is, you'll catch the errors (but you don't need it). If you are not as good a programmer, there will be no errors or bugs (that you know of).
 
This is the part that scares me. If you are a better programmer than it is, you'll catch the errors (but you don't need it). If you are not as good a programmer, there will be no errors or bugs (that you know of).


That is why when you ask a coder or technical person about it you get a very qualified answer. But if you ask a lay person you will usually get a "it is the best ever" sort of answer.

It has a loooong way to go before it can replace good coders successfully. But it can shorten the time it takes me to write code by at least a factor of 10, including the debugging time. And the resulting code is clearer and more readable, with more error checking.

One place ChatGPT struggles is with comparison operators. If the variable is a string versus an integer or boolean, the same operators can do different things in subtle ways. Code may appear to work fine, but can crap out on unexpected input.
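A short example of the kind of subtlety meant here, in Python (the variable names are made up for illustration): the same `<` operator compares numbers numerically but strings character by character, so code works until the input arrives as text.

```python
# The same "<" means numeric or lexicographic comparison depending on type.
print(9 < 10)      # True  - numeric comparison
print("9" < "10")  # False - strings compare character by character

def low_battery(voltage):
    # Works fine for floats; blows up (or silently misbehaves in other
    # languages) if the voltage arrives as a string from a config file.
    return voltage < 11.5

print(low_battery(12.8))  # False, as expected
# low_battery("12.8") raises TypeError in Python 3: str vs float
```

This is exactly the failure mode that "appears to work fine" in testing and craps out on unexpected input in the field.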
 
It has a loooong way to go before it can replace good coders successfully. But it can shorten the time it takes me to write code by at least a factor of 10, including the debugging time. And the resulting code is clearer and more readable, with more error checking.
I really need to train myself to use those auto coding platforms as a coding speed up.

And this is also why I want to learn how to use the equivalent tools for writing chores in engineering.

I think we should give the LLM a pat on the head for being so good at pretending to do math.
 
It's not 'intelligent' and doesn't have any special insights into stuff. View it like a well-spoken nice-but-dim friend - talks well, but knows little, and when pressed for details is likely to stumble.....

Speaking as an AI researcher....
I’m aware of that, I’m supporting some teams scaling their text context fetching stage of the pipeline.

The comparison I have here is against not being able to do some writing and summary tasks.

Scenario 1: I don’t put any effort into summarizing 50 pages of a very interesting thread I read last week because I don’t have the patience. Output: nothing

Scenario 2: The LLM does the tortured-intern-level work of finding interesting excerpts from posts by English pattern matching. I go back and look at those posts, correct its English, and apply my engineering domain knowledge. Output: something

And replace "LLM" with "summarizer model" if that is more believable.
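The "intern" step in Scenario 2 doesn't even need a model to sketch: a crude keyword filter already shows the shape of the pipeline - machine narrows 50 pages down, human reviews what's left. The posts and keywords below are invented for illustration:

```python
# Crude stand-in for the excerpt-finding stage: keep only posts that
# mention domain keywords, so a human reviews a handful instead of 50 pages.
posts = [
    "My Growatt inverter trips on overload every morning.",
    "Nice weather this weekend, going fishing.",
    "Anyone tried parallel battery banks with a Victron charger?",
]
keywords = {"inverter", "battery", "charger", "growatt"}

def interesting(post):
    # Case-insensitive substring match against any keyword.
    return any(k in post.lower() for k in keywords)

excerpts = [p for p in posts if interesting(p)]
print(len(excerpts))  # 2 of the 3 posts survive the filter
```

An actual summarizer model replaces `interesting()` with something smarter, but the human-in-the-loop review step stays the same.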
 
I really need to train myself to use those auto coding platforms as a coding speed up.

And this is also why I want to learn how to use the equivalent tools for writing chores in engineering.

I think we should give the LLM a pat on the head for being so good at pretending to do math.

I give it a lollipop every time it spits out correct code.
 
It does pretty well if I ask it to write an "it was great working with you" note, or a resume, or things like that.

I've gotten good results telling it to generate transfer paperwork for a deed and for mineral rights on property, for me to use as executor of my mother's estate. I told it to look up and use examples from Texas (where the property is). Both items were accepted by the clerk and recorder, along with the gas company doing extraction.

I also had it generate a will that was at least as good as what a legal assistant would do.

So, for super basic legal matters it is fine. I wouldn't want it committing murder and blaming me for it though.
 