
Using AI in posts

didapper, how many genders are possible, feel free to use an AI :LOL:

nature seems to have the same issue at times also
I don't engage in conversation with my voltmeters and I verify with a different one if something is off. They don't give me suggestions and compiled, possibly incorrect, data. They give me known values.

If I had a hammer I'd smash an AI, I'd smash it in the morning, smash it in the evening and smash it at supper time.
 
On the humor thread @Pinvazzy's post made me wonder if there were AI psychics. I found three right away...

Tarotoo blends traditional tarot card symbolism with AI to offer free readings and spiritual guidance.
YesChat’s Psychic AI is designed to interpret dreams, emotions, and subconscious thoughts in a way that mimics a psychic reading.
Master Psychic Rachel AI, which offers advice on love, career, and life purpose with a clairvoyant twist.

Starting with Tarotoo's daily reading... ohhh ... that's both accurate and scary... think I'll stop here... the future is too dangerous to know. ; -)

So, if we already have AI psychics... does that mean we'll have AI everything? What are your prognostications? Specialized AI, or generalized AI, or generalized AI that reaches out to multiple specialist AI for a synthesis?
 
trusting AI junk is one thing, reposting it in forums is like throwing nuclear waste in the local pond. The junk will inevitably be used to train future AI, and that compounds the effects of AI hallucinations.
Interesting thought, but as AIs get smarter and can pick apart misinformation and propaganda wouldn't they also be able to discard hallucinations?

Honestly, anyone who just posts huge sections of AI output in text form should be ashamed of themselves.
I post huge sections of AI all the time, no shame here. It's not pollution if it's something interesting or factually relevant.
I wonder if you're thinking of something like ChatGPT in story mode vs. having an AI do some research on a topic trying to separate out facts from fiction. People will tell you that AIs hallucinate, but as you probably saw from the recap, you can get around that pretty easily.

... it's incredibly lazy. It says "I can't think of anything to say
I have lots to say, but I do like to research so what I'm saying isn't complete BS and a waste of other people's time. AI is a tool that can help with that.

Even if someone is lazy and using AI in that fashion, is it bad in a "chat" forum? What's wrong with it if it sparks a discussion among humans?

As a human I occasionally post in the chat thread something of interest to me, and it flops with only a couple of reads and no responses.

I submit that non-lazy humans can be as bad as, if not worse than, a good AI with an interesting topic. Although that isn't something I've tried; I primarily use it to fact-check and research topics, so I could be wrong. ; -)

many people are happy not thinking for themselves, and then being the megaphone-holding parrot for others' thoughts... so it goes
It's not about not thinking. If they weren't thinking, how did they even end up asking an AI a question? As @Bongbong's post eloquently states, it's a tool, and better tools make better products. Why wouldn't we want to use them? Of course, there is a learning curve to avoid the pitfalls, same as any other tool. That's what this thread is about.

Or, as 420 stated:
...It's how the man uses the tools, handles his thoughts
To recap, tools are tools, how we use them to make the world a better place is up to us.

IF I had a hammer I would wake me a neighbor...
Elon is making them ... but you can buy their dog today. ; -)


What a strange metaphor...
I thought so too. When my VOM gives wrong readings I just replace the battery. If it still gives wrong readings I take the battery out and throw it away. Same for AI (throw it away if it's not working; otherwise use the proper techniques to get the proper result). Although, my conversations with AIs are usually a lot politer than conversations with my hammer; especially after it's found a finger it had a grudge against.
 
crap! I now see the dangers of AI :oops: it just gave me cognitive dissonance



I was thinking that because you can be born male, female, both, or none, the answer to the above question was greater than two, then I asked an AI 😕


now I need to rethink :confused: but, but.. we live in a relativistic universe 🤪 social and cultural constructs are real things

but, but.. I'm on topic :LOL:
 
"In as few words as possible"... great prompt qualifier!
 
crap! I now see the dangers of AI :oops: it just gave me cognitive dissonance
I was thinking it might make a difference if you're XX or XY for dosages of specific medicines, but it would probably be better for your doctor to have your DNA profile and an AI that could use it. Turns out there are companies working on AIs that can use your DNA information and other physical information (e.g., weight) to calculate the best medicines and doses.

We should probably change the laws and sports restrictions from such a useless label to genotype (i.e., XX or XY). The Y chromosome is in decline; could it be an endangered species that will require new laws? ; -)
 
svetz, I have to wonder... if AI was around in my breeding days and could detect chromosomal anomalies in our unborn, would that have changed my family tree? (self reflection or future drama :unsure: )
 
...could detect chromosomal anomalies in our unborn...
Interesting thought on a lot of levels. While we don't have an AI today for DNA analysis, I'd bet money it's coming.

But I don't think an AI can ever tell you who would be the best candidate to have children with (except for limited specific criteria, e.g., how to have blue eyes or red hair).

The problem is, even if you ignore the sex of the child, that leaves 22 autosomal pairs per parent, making a little over 17.6 trillion possible genetic combinations from any pair of parents. So most likely every two people have random combinations which stretch to both ends of the probability curve. There have been some advances; for example, I've heard single-gene disorders (e.g., Huntington’s disease, sickle cell anemia) have been identified and successfully modified... but it's just lab tests AFAIK.
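The arithmetic behind that 17.6 trillion figure can be checked in a couple of lines (a minimal sketch, assuming independent assortment of the 22 autosomal pairs and ignoring crossover):

```python
# Each parent passes one chromosome from each of their 22 autosomal pairs,
# so each gamete is one of 2**22 combinations (ignoring crossover).
# A child draws one such gamete from each parent: 2**22 * 2**22 = 2**44.
combos_per_gamete = 2 ** 22          # ~4.2 million per parent
combos_per_child = combos_per_gamete ** 2
print(f"{combos_per_child:,}")       # 17,592,186,044,416 -> ~17.6 trillion
```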
 
who are these characters
So, I duplicated the query with Grok and got a different response:

Based on the silhouettes, it’s tricky to identify the characters with certainty, but I can make some educated guesses. The second figure from the left, with long twin tails, resembles Hatsune Miku, a popular Vocaloid character. The other silhouettes don’t provide enough distinct features to pinpoint exact characters, but they might be from various anime or game series. For a more accurate identification, additional details would help. Would you like me to search for more information to assist?

Hatsune Miku

I had to look up Hatsune Miku, but that looks pretty accurate to me. So I suggest you're looking at old data or BenTheKiller is spoofing you.

But really, what sort of question is that for an AI anyway? As a human I couldn't identify any of them.
 

i think you missed the part about the one on the right being Saddam Hussein (the joke)

on a more serious note, you do realize that every AI interaction has "static" injected into it, in the sense that it uses a "random seed" to determine the response. You could say the responses _literally_ depend on the weather
 
i think you missed the part about the one on the right being Saddam Hussein (the joke)
I did miss the joke, there was no context from your post and I was dubious the AI just wouldn't say "I don't know".

on a more serious note, you do realize that every AI interaction has "static" injected into it, in the sense that it uses a "random seed" to determine the response. You could say the responses _literally_ depend on the weather
You could ask Grok the same question in a new conversation a dozen times and yes, the text would always vary slightly, but the message would always be the same: Hatsune, I'm not sure, do you want more help with that? It's not going to randomly throw unknown characters in there. You can also ask for a confidence level to get an even better feel for its accuracy.

I'm not saying you can't spoof an AI, but that typically requires a conversation and specific keywords in a prompt trying to spoof it. Not saying they won't hallucinate either, that's what the confidence level is for. It's also why the guidelines suggest including the prompt, if others can validate they can be pretty sure you didn't fudge it.
 
if you can convince an AI to release secrets from another user's conversation, you can make it do anything. I think you're reading into the joke too much, but there is truth to the fact that you cannot truly rely on the information any AI shares. I often entertain myself asking AI about public projects I've made. Oftentimes it hallucinates more than half of the details, and at times I've seen it focus on extremely minor details, like things from bug reports that are trivial and describe behavior which existed by accident and for short periods, and act like this is defining behavior.
 
humanity's greatest talent is also its Achilles heel (pattern recognition). While it gives us things that never appeared in nature (microscopes to ping pong balls and MRI machines), it also kept us shackled to thousands of years of old data and prejudices as a coping mechanism. If AI models can forgo these limitations, they have the possibility of pointing us in new directions despite our shortcomings, if we don't let our fears and arrogance stand in our own way.
 
From a recent Ars Technica article Link.

It’s common to talk about LLMs predicting the next token. But under the hood, what the model actually does is generate a probability distribution over all possibilities for the next token. For example, if you prompt an LLM with the phrase “Peanut butter and,” it will respond with a probability distribution that might look like this made-up example:

P(“jelly”) = 70 percent
P(“sugar”) = 9 percent
P(“peanut”) = 6 percent
P(“chocolate”) = 4 percent
P(“cream”) = 3 percent

And so forth.

After the model generates a list of probabilities like this, the system will select one of these options at random, weighted by their probabilities. So 70 percent of the time the system will generate “Peanut butter and jelly.” Nine percent of the time, we’ll get “Peanut butter and sugar.” Six percent of the time, it will be “Peanut butter and peanut.” You get the idea.
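The weighted selection the article describes can be sketched in a few lines of Python. The distribution below is the article's made-up example; fixing a seed makes the "static" mentioned earlier reproducible:

```python
import random
from collections import Counter

# The article's made-up next-token distribution for "Peanut butter and"
# (the listed options only sum to 0.92; random.choices normalizes weights).
next_token_probs = {
    "jelly": 0.70, "sugar": 0.09, "peanut": 0.06,
    "chocolate": 0.04, "cream": 0.03,
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)  # the "random seed"; fix it and the draws repeat
counts = Counter(sample_next_token(next_token_probs, rng)
                 for _ in range(10_000))
print(counts.most_common(3))  # "jelly" dominates, at roughly 70/92 of draws
```

Run it twice with the same seed and you get identical output; change the seed and the individual picks change while "jelly" still dominates, which is why repeated queries vary in wording but rarely in substance.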

Some interesting discussions there.
 
... Often times it hallucinates more than half of the details, ...
Understanding the tools will solve most problems. For example, prompt qualifiers that ask for confidence levels, tell it not to speculate, and tell it to use only verified claims can mostly eliminate it (it's still up to you to proofread).
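As a sketch, those qualifiers can simply be prepended to whatever you ask. The wording here is illustrative, not a tested recipe for any particular model:

```python
# Hypothetical prompt wrapper adding the qualifiers mentioned above.
QUALIFIERS = (
    "Use only verified claims and do not speculate. "
    "State a confidence level (high/medium/low) for each claim. "
    "Answer in as few words as possible."
)

def qualified_prompt(question: str) -> str:
    """Prepend the standard qualifiers to a question."""
    return f"{QUALIFIERS}\n\nQuestion: {question}"

print(qualified_prompt("How many autosomal pairs do humans have?"))
```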

... if we don't let our fears .... stand in our own way.
Fear is not all bad. Evolution has already removed nearly all those devoid of fear. For example, we don't have anywhere near enough fear for things beyond tomorrow (e.g., climate change, microplastics, PFAS).

There should be a "moderate" fear of societal collapse during the singularity (the period when AI has surpassed human intelligence and robots take over the workforce, leading to a post-scarcity society), as many will have no jobs or income during the transition. To successfully navigate those waters we need to understand the pitfalls and instigate change. But one side seems entirely too optimistic, trusting we'll find a way with our new tools. Can we really depend on our governments to see us safely through the transition when they deny simple things like global warming? Can we trust corporate entities, who repeatedly market lies so they can keep selling us products, to do the right things? I think it depends on how much they appreciate how frame-breaking it is to be in a position where history doesn't repeat itself.

In the short term, those in our society who use and adapt to the technology will rapidly outperform those that do not, similar to other technological innovations before it. It's not good or bad; it's just that those afraid of it or without access will probably be the first to be unemployed. The survivors of a possible collapse might well be those that reject the technology. For example, the Amish would describe themselves as content rather than happy; it's just a different way. But a collapse of external society shouldn't affect the Amish (other than the impacts of the collapse and having to pay homage to their new government).

I suspect a lot of countries will weather the change, as perilous times also create great leaders (or at least history books that paint them that way ; -).

Speaking of which, the Stanford 2025 AI Index Report is out. There are a number of these reports, and they're more about potential success from having high-speed networks than about surviving collapse or navigating change. I suspect countries that have experimented successfully with things like UBI will be much farther ahead in that department. The Oxford report might have the most about government adaptability, but largely societal collapse is seen as too speculative and so ignored.
 
but, but tommy you dumbass, you're not seeing the harm they can do


when does "truth is stranger than fiction" become a joke on us :unsure:


if we can keep human greed from AI models, "the possible could be sustainable"
 
