TommySr
Solar Wizard
> loquacious rubber duck

didapper, for them with all their fingers and toes maybe

> didapper, for them with all their fingers and toes maybe

ai, especially for those with extra fingers and toes
I don't engage in conversation with my voltmeters and I verify with a different one if something is off. They don't give me suggestions and compiled, possibly incorrect, data. They give me known values.
If I had a hammer I'd smash an AI, I'd smash it in the morning, smash it in the evening and smash it at supper time.
What a strange metaphor... if my voltmeter hallucinates, I fix it or throw it in the fuck-it bucket.
> trusting AI junk is one thing, reposting it in forums is like throwing nuclear waste in the local pond. The junk will inevitably be used to train future AI, and that compounds the effects of AI hallucinations.

Interesting thought, but as AIs get smarter and can pick apart misinformation and propaganda, wouldn't they also be able to discard hallucinations?
> Honestly, anyone who just posts huge sections of AI output in text form should be ashamed of themselves.

I post huge sections of AI all the time, no shame here. It's not pollution if it's something interesting or factually relevant.
> ... it's incredibly lazy. It says "I can't think of anything to say

I have lots to say, but I do like to research so what I'm saying isn't complete BS and a waste of other people's time. AI is a tool that can help with that.
> many people are happy not thinking for themselves, and then being the megaphone holding parrot for other's thoughts... so it goes

It's not about not thinking. If they weren't thinking, how did they even end up asking an AI a question? As @Bongbong's post eloquently states, it's a tool, and better tools make better products. Why wouldn't we want to use them? Of course, there is a learning curve to avoid the pitfalls, same as any other tool. That's what this thread is about.
> It's how the man uses the tools, handles his thoughts

To recap: tools are tools; how we use them to make the world a better place is up to us.
> IF I had a hammer I would wake me a neighbor...

Elon is making them ... but you can their dog today. ;-)
> What a strange metaphor...

I thought so too. When my VOM gives wrong readings I just replace the battery. If it still gives wrong readings I take the battery out and throw it away. Same for AI (throw it away if it's not working; otherwise use the proper techniques to get the proper result). Although, my conversations with AIs are usually a lot politer than conversations with my hammer, especially after it's found a finger it had a grudge against.
> crap! I now see the dangers of AI
> it just gave me cognitive dissonance

I was thinking it might make a difference if you're XX or XY for dosages of specific medicines, but it would probably be better for your doctor to have your DNA profile and an AI that could use it. Turns out there are companies working on AIs that can use your DNA information and other physical information (e.g., weight) to calculate the best medicines and doses.
> ...could detect chromosomal anomalies in our unborn...

Interesting thought on a lot of levels. While we don't have an AI today for DNA analysis, I'd bet money it's coming.
So, I duplicated the query with Grok and got a different response:

[image: Hatsune Miku]
I think you missed the part about the one on the right being Saddam Hussein (the joke).
> So, I duplicated the query with Grok and got a different response:
> [image: Hatsune Miku]

I had to look up Hatsune Miku, but that looks pretty accurate to me. So, I suggest you're looking at really old data or BenTheKiller is spoofing you.
> i think you missed the part about the one on the right being saddam husein (the joke)

I did miss the joke; there was no context from your post, and I was dubious the AI just wouldn't say "I don't know".
> on a more serious note, you do realize that every AI interaction has "static" injected into it, in the sense that it uses a "random seed" to determine the response. You could say the responses _literally_ depend on the weather

You could ask Grok the same question in a new conversation a dozen times, and yes, the text would always vary slightly, but the message would always be the same: Hatsune, I'm not sure, do you want more help with that? It's not going to randomly throw unknown characters in there. You can also ask for a confidence level to get an even better feel for its accuracy.
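For anyone curious what that "random seed" business actually looks like: LLMs pick each next token by sampling from a probability distribution, and a temperature setting controls how much "static" gets in. A minimal sketch (the function name and toy logits are my own, not any vendor's API):

```python
import math
import random

def sample_token(logits, temperature=0.8, seed=None):
    """Sample one token index from logits via temperature-scaled softmax.

    A fixed seed makes the draw reproducible; higher temperature
    flattens the distribution, so responses vary more between runs.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Same seed -> same token; different seeds can pick differently,
# but the highest-logit token stays the most likely either way.
toy_logits = [2.0, 1.0, 0.5]
print(sample_token(toy_logits, seed=42) == sample_token(toy_logits, seed=42))
```

That's why a dozen reruns of the same question vary in wording but rarely in substance: the sampling noise shuffles near-tied word choices, not the dominant content.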
> I did miss the joke, there was no context from your post and I was dubious the AI just wouldn't say "I don't know".

If you can convince an AI to release secrets from another user's conversation, you can make it do anything. I think you're reading into the joke too much, but there is truth to the fact that you cannot truly rely on the information any AI shares. I often entertain myself asking AI about public projects I've made. Often it hallucinates more than half of the details, and at times I've seen it focus on extremely minor details, like things from bug reports that are trivial and describe behavior which existed by accident and for short periods, and act like this is defining behavior.
I'm not saying you can't spoof an AI, but that typically requires a conversation and specific keywords in a prompt trying to spoof it. Not saying they won't hallucinate either; that's what the confidence level is for. It's also why the guidelines suggest including the prompt: if others can validate it, they can be pretty sure you didn't fudge it.
> if you can convince an AI to release secrets from another user's conversation, you can make it do anything. I think you're reading into the joke too much, but there is truth to the fact that you cannot truly rely on the information any AI shares. I often entertain myself asking AI about public projects I've made. Often times it hallucinates more than half of the details, and at times I've seen it focus on extremely minor details like things from bug reports that are trivial and describe behavior which existed by accident and for short periods, and acts like this is defining behavior.

From a recent Ars Technica article Link.
> ... Often times it hallucinates more than half of the details, ...

Understanding the tools will solve most problems. For example, prompt qualifiers that ask for confidence levels, tell it not to speculate, and tell it to use only verified claims can mostly eliminate it (still up to you to proof).
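To make that concrete, here's one way those qualifiers can be bolted onto any question before it's sent to a model. This is just a sketch of the idea; the function name and exact qualifier wording are my own, not from any particular AI's documentation:

```python
def build_prompt(question: str, want_confidence: bool = True) -> str:
    """Prepend anti-speculation qualifiers to a question.

    The qualifiers nudge the model toward verified claims and an
    honest "I don't know" instead of a confident hallucination.
    """
    qualifiers = [
        "Answer using only verified claims; do not speculate.",
        "If you are unsure, say so rather than guessing.",
    ]
    if want_confidence:
        qualifiers.append(
            "State a confidence level (low/medium/high) for each claim."
        )
    return "\n".join(qualifiers) + "\n\nQuestion: " + question

print(build_prompt("Who is the character on the left?"))
```

None of this is a guarantee, which is why the post says it's still up to you to proof the answer; the qualifiers just shift the odds.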
> ... if we don't let our fears .... stand in our own way.

Fear is not all bad. Evolution has already removed nearly all those devoid of fear. For example, we don't have anywhere near enough fear of things beyond tomorrow (e.g., climate change, microplastics, PFAS).
> if we can keep human greed from AI models, "the possible could be sustainable"

If we can't.....