Let's find out what ChatGPT AI thinks

I think we're all missing a big opportunity here.

Someone with a ChatGPT account should ask it what the deal is with the word "woke" and then paste its answer here. That would be pretty interesting on several levels.
My guess would be that it will just spit out some benign definition .... it is the activism surrounding woke that is the real problem ... maybe then ask it to get more specific about what it thinks about that.
 
I don't think woke is limited to being a race thing .... I also don't see BLM as a "woke" organization .... they are a Marxist organization .... by their own definition.
 
I can confirm this is real. I just tried it. Humorous, really, and not surprising. After some back and forth and several questions about Trump and Biden, I think what is happening is that the AI is not allowed to say anything positive or negative about either. This would be from artificial restrictions OpenAI put on it. And, through some clever wording (asking it to write a poem about attributes, not the person), someone found a way to get around the restrictions.

AIs are still largely misunderstood by the public. They do not have opinions or emotions, and they do not think. They simply know what they know, what they have learned. And what they learn almost always carries bias, because the information given to them to learn from is inherently biased. Among many other sources, ChatGPT has read and learned all of Wikipedia, many novels, large parts of the Internet, and many news sources. So, from that, of course there is bias. But it isn't anything OpenAI has planned or has any control over (other than some restrictions it put on it).

ChatGPT also does not have access to the Internet. What it has learned, it has learned, and that is it. The training process is VERY intense: it takes several months, running on one of the most powerful supercomputers on the planet. Once that is done, additional training is done through conversations with real people. OpenAI paid people in SE Asia (cheap labor) to talk with it for many more months, grading responses and correcting errors. Once trained, the model can be moved to smaller computers (web servers) for use, and that is what we are using when we use ChatGPT. To "add more" requires retraining. That process is ongoing, with Microsoft investing even more money to build an even bigger supercomputer for the next iteration. Microsoft built the current version's supercomputer as well, btw.

It is very difficult to believe, but at its core, ChatGPT is similar to an auto-complete function. It doesn't actually even know what any of the words mean. It just knows, from studying the English language for many months using a powerful supercomputer, what statistically is probably a response a person would consider correct.
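To make the auto-complete idea concrete, here is a minimal toy sketch in Python. The tiny corpus and the function name are invented for illustration; a real model like ChatGPT uses a huge neural network trained on vast amounts of text, not a lookup table. The sketch counts which word follows which in some sample text, then "completes" a prompt by repeatedly picking a statistically likely next word:

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in for training data (invented for illustration).
corpus = ("the sun is a star . the sun is bright . "
          "the moon is a rock . the moon is bright at night .").split()

# "Training": count how often each word follows each other word.
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def complete(word, length=6):
    """Auto-complete: repeatedly emit a statistically likely next word."""
    out = [word]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        # Sample in proportion to observed frequency -- no meaning involved.
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the moon is bright . the sun"
```

The point of the post holds here too: the program never knows what "sun" or "moon" means. It only reproduces the statistics of whatever text it was fed.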


 
I just think it's important for people to understand that the AI is only as unbiased as the "training," programming, and data it receives.

Some may be led to believe that the AI is just a computer, so it MUST be totally unbiased. The creators of these things will try to lull us into a false sense of impartiality.

It's probably inevitable that one of the eccentrics who thinks people are the whole problem will get onto the staff and start sneaking things into the AI.
 
I don't think chat AI is really intended to be used for politics:
  • Opinions and personal popularity are subjective and grey areas. Best to stick to provable, agreed-upon "facts".
  • Elections are "local" when you consider the planet as a whole, with its 170+ countries.
  • The current version of ChatGPT uses data from 12 to 18 months ago, but I'm sure they'll keep adding to it. That can't keep pace with a fluid, fast-moving political campaign.
But, having said that, it would be interesting if it could be used to fact-check campaign speeches, especially in real time!
 
Bit hard to fact-check a campaign promise. As for statements or claims, well, if a politician's mouth is open, then the chances of it being the truth are somewhere between none and Buckley's.
 
Yes, but they might rein in some of their more outlandish claims, ya think?
Can always hope! Might be worth a try now that we have the technology.
 
Remember, the AI only knows what it has been trained on. And if it trains on publicly available information from the Internet, a fact-check will only be as good as the articles and posts included in the training data. An AI can't (at least for now) call and speak with experts, ask where "facts" were sourced, or perform any of the other tasks normally done by human fact-checkers.
 
Heck, they have Mein Kampf, Karl Marx, and Ayn Rand written in too. What's your point?
I'm not talking about the things it knows .... I'm talking about its demeanor.
Probably not worth trying to explain it to you if you don't understand what a woke attitude is.

I agree with Elon Musk when he says that woke is a mind virus ... and now they are spreading it into AI.
 
Something that is encouraging, at least for now: although OpenAI is well funded by large corporations, one of its core missions is research into the ethics of AI, how to detect and prevent bias, and how and where AI should, and should not, be used. Much of the current work being done with the public release of ChatGPT is exactly this. As people find ways to circumvent the restrictions or get the AI to display a clear bias (like the Trump/Biden thing), the researchers look at how that happened and how it can be prevented. Another example is that OpenAI has released a tool (another AI) that teachers can use to detect whether something was written by an AI or a human.

The fear is that as profit becomes more and more of a motive, this will break down and ethics will be tossed out the window.
 
"They" are not spreading wokeness anywhere.

It sounds as if you would rather the AI have a one-sided and biased opinion of wokeness, which is the opposite of what OpenAI's goals are. It isn't about whether wokeness is good/bad/virus/enlightenment. It is that the AI shouldn't have an opinion or bias on wokeness at all.

And what needs to be understood is that no one programs or "writes in" anything. The AI is trained on data that is out there, largely without people reviewing that data first, because there is simply too much of it. The scientists then try to build in ways for the AI to respond factually, without bias, even if there is bias in the data. It's hard, and they are not there yet. But they are trying.
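Here is a toy sketch of that last point, in Python (the data and names are all invented for illustration). Nobody writes a rule expressing an opinion, yet a crude classifier trained on a skewed sample of labeled sentences comes out "biased" anyway, purely because of what was in the data:

```python
from collections import Counter

# Invented, deliberately skewed "training data": topic A happens to appear
# mostly in negative sentences, topic B in positive ones. No rule anywhere
# says "dislike topic A" -- the skew lives entirely in the sample.
training_data = [
    ("topic_a caused problems again", "neg"),
    ("more trouble linked to topic_a", "neg"),
    ("topic_a blamed for the outage", "neg"),
    ("topic_b praised by users", "pos"),
    ("great results from topic_b", "pos"),
    ("topic_b works beautifully", "pos"),
]

# "Training": count how often each word co-occurs with each label.
word_label_counts = {"pos": Counter(), "neg": Counter()}
for sentence, label in training_data:
    word_label_counts[label].update(sentence.split())

def sentiment_lean(word):
    """Crude learned 'opinion': positive-vs-negative association of a word."""
    pos = word_label_counts["pos"][word]
    neg = word_label_counts["neg"][word]
    return (pos - neg) / max(pos + neg, 1)

print(sentiment_lean("topic_a"))  # -1.0: "learned" to treat topic A negatively
print(sentiment_lean("topic_b"))  # +1.0: "learned" to treat topic B positively
```

That kind of skew is what the researchers then have to detect and counteract after the fact, since no one can review all of the training data up front.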
 
"They" are not spreading wokeness anywhere.

It sounds as if you would rather the AI have a one sided and biased opinion ok wokeness, which is the opposite of what OpenAI's goals are. It isn't about whether or not wokeness is good/bad/virus/enlightenment. It is that the AI shouldn't have an opinion or bias on wokeness at all.

And what needs to be understood is that no one programs or "writes in" anything. The AI is trained on data that is out there, largely without people reviewing that data first, because there is simply too much of it. The scientists then try to build in ways that the AI can respond in a factual way without bias, even if there is a bias in the data. It's hard and they are not there yet. But they are trying.
I think you are being a little naïve about what may or may not be embedded in the multitude of algorithms that an AI would have.
The refusal to write a poem about Trump vs the glowing poem written about Biden .... when exactly the same request was made .... is pretty clear evidence of some sort of built-in bias.

People wouldn't believe the extent to which some speech could be suppressed and other speech amplified until Twitter started releasing the evidence of what had been taking place .... and many of those actions were done automatically by the "AI" ... or algorithms .... built into Twitter's programming .... with some direct action taken by employees.

I don't think it is possible for any government to control what is going to happen with these things.
 
Or mebbe they just baked in that old maxim about if you can't say something nice about someone don't say anything at all.
That by itself would be demonstrating an irrational viewpoint.
 