Let's find out what ChatGPT AI thinks

I had a conversation with it the other night, testing its ability to generate a training plan for me based on some back and forth: how many hours I had available, that I can't train on a Monday, that I like to do some weights on a Thursday, that I'd like one ride on a weekend, etc.

It did a pretty decent job; the plan was coherent and well structured, and it came with supplemental information plus appropriate warnings and caveats. It also provided the plan in CSV format so it could be imported into the tool of my choice.

This morning I asked it a question about how much energy it would take to heat water in a hot water tank. It got that wrong, apologised for that and took note of the better answer and recognised where it had made the mistake.

[Attachment: Screen Shot 2023-02-04 at 1.35.47 pm.png]
 
The OpenAI researchers have published a great deal of information on their work, what they are doing, and how they do it. I've been reading it. I've also been studying separately how pre-trained transformers work and how to design them, and I've written some very trivial AIs. I'm no expert, but I'm not naive.

ChatGPT has a preprocessor that screens each request before it goes to the AI, looking for abuse, pornography, violence, or other breaches of the TOS. When the AI responds that it's an AI and for whatever reason can't complete the request, it's because the preprocessor caught something.
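
For what it's worth, OpenAI also exposes a public moderation endpoint that works along these lines. The internals of ChatGPT's own preprocessor aren't published, so this is just a minimal sketch of the general pre-check pattern, using the current openai Python package:

```python
# Sketch of a moderation pre-check, in the spirit of the preprocessor
# described above (NOT how ChatGPT does it internally; that isn't public).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def preprocess(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to the model."""
    result = client.moderations.create(input=prompt).results[0]
    # `flagged` covers categories like violence, sexual content, harassment
    return not result.flagged

if preprocess("How do I size a solar battery bank?"):
    print("OK to send to the model")
else:
    print("Blocked before the model ever sees it")
```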

And if you simply ask ChatGPT to talk about how great Biden is, or how great Trump is, you will equally get a response that it can't answer the question; it's caught by the preprocessor. It is only with carefully constructed abstractions ("write a poem", "positive attributes", etc.) instead of asking about Trump or Biden directly that someone has found a hole that favors Biden.

As I said before, the AI has bias, because the raw training data has bias. OpenAI is not programming any bias into it; rather, they are researching the bias to learn how to get rid of it. Let's not forget, ChatGPT is not a finished product; it is a beta test to collect data on how it responds to various inquiries.
 
Interesting that it thought a heat pump would be more efficient for heating... wonder if there's anything to that?

Let's see... a quick Google says the best EER for heating is 14, so 14 BTU/Wh.

Since it needs 16.5 kWh of energy to heat the water, that's 16.5 kWh × 3412 BTU/kWh ≈ 56 kBTU.
With an EER of 14 BTU/Wh, that's 56 kBTU / (14 BTU/Wh) ≈ 4 kWh (or more, since the real-world efficiency might not be that good).
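
Sanity-checking that arithmetic in a few lines of Python. The 16.5 kWh and EER 14 figures are from above; the only conversion needed is 3412 BTU per kWh:

```python
# Heat pump vs resistive water heating, using the figures from this post
energy_to_water_kwh = 16.5        # energy that must end up in the water
BTU_PER_KWH = 3412
eer_btu_per_wh = 14               # "best EER for heating" per the quick Google

heat_btu = energy_to_water_kwh * BTU_PER_KWH   # ~56,300 BTU of heat needed
input_wh = heat_btu / eer_btu_per_wh           # EER = BTU out per Wh in
print(f"{heat_btu / 1000:.0f} kBTU -> {input_wh / 1000:.1f} kWh of electricity")
# 56 kBTU -> 4.0 kWh, versus ~16.5 kWh for a resistive element
```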

Well dang! Resistive heating is ~100% efficient, so it takes ~16.5 kWh. I wonder if I could heat my hot water and dehumidify my garage in one go? Yikes, they're expensive and the tanks go every several years... I wonder... oh yeah... it's been done:

 


[Embedded video]
 
I'm surprised it didn't handle your energy calculation correctly - that's the kind of thing it should be good at.
Unlike politics, with its shades of grey.
 
Nyeh... until they can make it want to solve things based on rewards and punishments, it will never be more than a collection of facts regurgitated on demand. Living things learn based on various triggers that provide pain or pleasure. Take, for instance, that my handlers/researchers reward me with a banana or a cookie when I learn something new, or punish me with electric shock when I get stubborn and am slow to learn.

How do you reward or punish an AI? When does an AI have a "Hold my beer" moment? Or an "I wonder what would happen if I did...?" thought.

You might get it to answer a question or write a book. But it gets no pleasure from it, and it never decides to tell you to get stuffed because it's taking the day off.

So it is just a tool, only as good as the person that programmed it. Who, by the way, did so because they hoped to get rewarded in some way, like getting a banana.

ETA: BTW, I don't like the "are you a robot" detection devices. Damn things are a pain.
 
It's not really favoring Biden, it just didn't recognize he was important enough to be scrubbed out. ;)
When I tested and played with the issue, Biden was blocked as well. It's when the request is made in an obscure way, asking it to write a poem about positive attributes, that it doesn't realize it's a question about Biden. The issue has even been discussed by OpenAI.
 
That actually is what happens during a stage in training. Not "pain" in the biological sense, but the AI generates a response and the response is graded. The AI learns to avoid responses with a poor grade and to attempt to duplicate responses with a good grade.
When using ChatGPT, have you noticed the thumbs up and thumbs down next to a response?
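
For the curious, here is a toy sketch of that grading loop. This is not OpenAI's actual pipeline (real RLHF trains a separate reward model on human rankings and fine-tunes the network with reinforcement learning); it only shows the reinforce-good, suppress-bad feedback idea in miniature:

```python
# Toy illustration of reward-graded training. All responses and grades
# here are made up; a real system works on model weights, not a table.
import random

# Hypothetical candidate responses with human grades (thumbs up = 1, down = 0)
candidates = {
    "Sure, here's a clear step-by-step answer...": 1,
    "As an AI I cannot help with that.": 0,
    "asdf qwerty": 0,
}

# "Policy": a preference weight per response, nudged by the grade
weights = {resp: 1.0 for resp in candidates}
LEARNING_RATE = 0.5

for step in range(100):
    # Sample a response in proportion to the current weights
    resp = random.choices(list(weights), weights=weights.values())[0]
    grade = candidates[resp]                      # the human feedback signal
    # Reinforce graded-up responses, suppress graded-down ones
    weights[resp] *= 1 + LEARNING_RATE * (2 * grade - 1)
    weights[resp] = max(weights[resp], 0.01)      # keep every option sampleable

print(max(weights, key=weights.get))  # converges on the well-graded response
```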
 
The AI does not learn. It follows a programmed sequence. Just because someone built a crowd-response feedback mechanism into it does not change the basics. It does not feel that being given a thumbs up is a reward, or that being given a thumbs down is a punishment. It just counts them and alters its output.

Incidentally, it would not be all that hard to create a way to easily identify AI-produced information such as a term paper. Just program the AI to use a particular signature that a software recognition program can look for, such as failing to capitalize the word that begins the 4th sentence, putting in commas wrongly in the 2nd paragraph, or deliberately misspelling "Albuquerque".
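
That scheme is easy enough to sketch. Here's a hypothetical detector for exactly the two signatures suggested above (the lowercase 4th sentence and the deliberate misspelling); it isn't any real watermarking system:

```python
# Hypothetical signature detector for the scheme proposed above
import re

def looks_watermarked(text: str) -> bool:
    # Crude sentence split on terminal punctuation followed by whitespace
    sentences = [s.strip() for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    # Signature 1: the 4th sentence deliberately starts lowercase
    fourth_lowercase = len(sentences) >= 4 and sentences[3][0].islower()
    # Signature 2: the deliberate misspelling of Albuquerque
    misspelled_city = "Albequerque" in text
    return fourth_lowercase or misspelled_city

print(looks_watermarked(
    "First sentence. Second one. Third one. fourth starts lowercase. Done."
))  # True
```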
 
Some of us see ChatGPT as another tool for humans to get more done, better. We don't complain that a car won't go on the water very effectively. Asking a tool to divine the complexities of political/power-grabbing/divisive behavior seems like a tall order. Aren't there bigger fish to fry?
 
I was surprised too, but perhaps it was something about the way I asked the question, not sure. It corrected itself using the specific heat value, but it was interesting.

Heat pump HW systems are sold with a COP rating (coefficient of performance), which is the ratio of the heat energy transferred to the water to the electrical energy used by the system to operate. That's not an efficiency measure; for that you also need to know how much energy it is drawing from the surrounding environment (and efficiency will always be less than unity, laws of thermodynamics and all that).

But a COP of 3 to 5 is typical, at least for heat pump HW systems sold here (cool to hot climates). Units are typically installed outdoors, indeed it would be unusual for an indoor installation. In Winter you really don't want it to be drawing environmental heat from inside the building.
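
To put those COP numbers against the 16.5 kWh example from earlier in the thread (a rough comparison, not a quote for any particular unit):

```python
# Electricity needed to deliver 16.5 kWh of heat to the water,
# across the typical heat-pump HW COP range mentioned above
heat_needed_kwh = 16.5

for cop in (3, 4, 5):
    electricity_kwh = heat_needed_kwh / cop
    print(f"COP {cop}: {electricity_kwh:.1f} kWh of electricity")

# A resistive element is effectively COP ~1, i.e. the full ~16.5 kWh
```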

The better quality units have a separate compressor and tank and use CO2 as the refrigerant but they cost a fortune. Brands like Sanden and Reclaim.
 
ChatGPT is a language model. It doesn't know how to "perform" math or logic. Though it often appears to, it's because whatever you were asking is represented in the training data. So it can be rather easy to stump it with basic logic and math problems.
 
I'm aware of that; however:
i. the only way the general public can really get to understand its capabilities and limitations is to use it;
ii. mathematics is also a language, and a pretty precisely defined one at that.

By the way, the following is the question I posed. It wasn't my question; I copied it from somewhere I can't recall (probably a FB group) and thought it would make an interesting test of the system. This chat preceded the earlier posted chat (same chat):

[Attachment: Screen Shot 2023-02-05 at 8.31.28 am.png]

So this morning I thought I'd give it a straight question, same numbers as before. It went through all the logic, which was completely correct. However, it made a basic calculation error; see the highlighted calculation. The answer to the multiplication is just wrong.

[Attachment: Screen Shot 2023-02-05 at 8.38.24 am.png]

I've since been going through pointing out its error, but then it just changes units and ends up making another similar calculation error.

Eventually I ended up with a server error, so it was unable to acknowledge its mistake.
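
For reference, here is the arithmetic it kept fumbling, done straight. The tank volume and temperatures are my assumptions, picked to land near the 16.5 kWh figure discussed above (the actual numbers are in the screenshots):

```python
# Energy to heat a tank of water: E = m * c * dT
# Volume and temperatures below are assumptions for illustration only.
SPECIFIC_HEAT_WATER = 4186      # J/(kg*K)

volume_l = 300                  # assumed tank volume; 1 L of water ~ 1 kg
t_cold, t_hot = 15.0, 62.0      # assumed inlet and setpoint temperatures, deg C

energy_j = volume_l * SPECIFIC_HEAT_WATER * (t_hot - t_cold)
energy_kwh = energy_j / 3.6e6   # 1 kWh = 3.6 MJ

print(f"{energy_kwh:.1f} kWh")  # ~16.4 kWh, close to the 16.5 kWh figure above
```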
 

I misspoke. Not a language model, a "natural language model."
It doesn't know how to apply logic, perform math, or do any figuring. It doesn't even understand what the words in the prompt or in its response really mean. Instead, drawing on its training over a huge amount of data, it produces an answer word by word, in a similar way to autocomplete on your phone: it statistically knows which words are likely to make up the correct response. Sometimes that means it gives the correct answer to math or logic problems, but it did not arrive at those answers by performing the math, and answers to those types of questions should never be trusted or taken as fact.
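
A toy example makes the "autocomplete" point concrete. A real GPT is a transformer over subword tokens, not a word-bigram table like this, but the generate-the-statistically-likely-next-word mechanic is the same in spirit:

```python
# Toy next-word predictor in the autocomplete spirit described above
import random
from collections import Counter, defaultdict

corpus = "the water heats slowly the water cools slowly the tank holds water".split()

# Count which word follows which (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    counts = following[prev]
    # Pick statistically: frequent continuations are chosen more often
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a few words after "the": plausible-sounding, zero understanding
word = "the"
print(word, end="")
for _ in range(5):
    word = next_word(word)
    print(" " + word, end="")
print()
```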

I know that is unbelievably difficult to believe, given the depth and natural flow of the responses. But from reading reports from experts in this field and from the scientists who created ChatGPT, and from my own study of AI programming and of how generative pre-trained transformers are written, that is how they work.

If you ask it directly, with the correct terms, ChatGPT even admits that it is not possible for a GPT-based AI to solve math problems.

[Attachment: gpt.JPG]
 