That may have been the case 4 months ago but this thing is evolving very fast.
I suggest you look at some recent videos of written conversations and see what it can do now.
It can already pass the Turing test with 60% of human examiners. I expect that by the end of this year it will be able to fool 90% of people and leave the other 10% suspecting it's an AI, but not confidently enough to confront it.
I use ChatGPT every single day for my work. I used GPT-4 and now use GPT-4o. Funny thing: if you insult the AI you often get better-quality answers. When the singularity happens I'll be one of the first to go, for all the grief I regularly cause ChatGPT.
But seriously: it was already said in this thread that misleading info given by humans is no different from misleading info given by AI.
The current AIs are amazing knowledge-retrieval machines (also bullshitting machines, but more on that later). Their biggest danger is not the enshittification of the internet; that was well underway before them. The biggest threat is that we come to rely on them too much (I can literally do my IT job 3x faster; I use it a bit like I used Google before) and then they get taken away, put behind huge paywalls, etc. This is why it is so important to keep developing local models. I already have a strong suspicion GPT-4o is a cost-cutting upgrade, not a user-experience upgrade. People who pay $20 per month for GPT don't realise how much power and hardware it takes to run. The two endgame business scenarios that make sense for OpenAI are:
1 - hardware suddenly gets 10x more efficient. The service becomes profitable at current prices, and they already have the users.
2 - (much more likely) no one can do their job without ChatGPT, so they introduce "an upgrade" that cripples it. A month later there is a "pro" version that costs $2k per month. And many people will pay.
Coming back to how I use the AIs: the best way to use them is when you know how to do something but have forgotten the exact details. For example, the syntax of a command. Or when you can program in some language you haven't touched in five years and need to write a quick script.
Also for things you know exactly how to do, but that are time-consuming. For example, I was recently pulling data from my inverter into MQTT (a kind of software message queue). I wanted to put dozens of sensors into Home Assistant, but it needs a few lines of text written per sensor to get the data and aggregate it, plus a few more lines in a different file to display it.
At the same time these lines have to include things like the queue name, sensor name, device, and so on. Days and days of work by hand.
I told ChatGPT to write me a Python script to connect to the MQTT broker, sit for 2 minutes, and record all the topic names of the messages that arrive (there is no way to just list topics under a parent). It produced a list of topics, whose names described what each one was. So I gave ChatGPT that list plus a fragment of Home Assistant + Lovelace UI config written by hand, and it generated 500+ lines of config. Did it mess them up? It sure did. Was it still a lot easier to fix them with some search & replace than to write them from scratch? Yes it was.
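For the curious, the topic-discovery script looks roughly like this. This is a reconstruction from memory, not the exact code ChatGPT produced; the broker address, the paho-mqtt 2.x library, and the two-minute window are all assumptions/placeholders you'd adjust:

```python
import time

BROKER_HOST = "192.168.1.10"  # placeholder: your MQTT broker's address
LISTEN_SECONDS = 120          # sit for two minutes and record what arrives


def unique_topics(seen):
    """Deduplicate and sort the topic names collected off the wire."""
    return sorted(set(seen))


def record_topics(host=BROKER_HOST, seconds=LISTEN_SECONDS):
    # Third-party dependency (pip install paho-mqtt); imported here so the
    # pure helper above can be used without the library installed.
    import paho.mqtt.client as mqtt

    seen = []
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = lambda c, userdata, msg: seen.append(msg.topic)
    client.connect(host)
    client.subscribe("#")  # '#' wildcard: subscribe to everything
    client.loop_start()
    time.sleep(seconds)    # MQTT has no "list topics" call, so we just listen
    client.loop_stop()
    client.disconnect()
    return unique_topics(seen)


if __name__ == "__main__":
    for topic in record_topics():
        print(topic)
```

The trick is the `#` wildcard subscription: since a broker keeps no browsable directory of topics, the only way to enumerate them is to listen to everything for a while and dedupe whatever shows up.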
An approach that could have worked even better: tell it to make the original Python script spit out the config in a loop. That way the script could be fixed by hand once to get the right config everywhere.
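The loop idea is a sketch like the following. The Home Assistant MQTT sensor keys used (`name`, `state_topic`, `unique_id`) are a minimal assumed subset; a real config would carry more fields per sensor, but the point is that fixing one template function fixes all 500+ lines at once:

```python
def sensor_entry(topic):
    """One Home Assistant MQTT sensor entry derived from a topic name."""
    name = topic.rsplit("/", 1)[-1]  # e.g. "inverter/pv1_power" -> "pv1_power"
    return (
        f'- name: "{name}"\n'
        f'  state_topic: "{topic}"\n'
        f'  unique_id: "{name}"\n'
    )


def generate_config(topics):
    """Loop over the recorded topics and emit one YAML block per sensor."""
    return "\n".join(sensor_entry(t) for t in topics)


if __name__ == "__main__":
    print(generate_config(["inverter/pv1_power", "inverter/battery_soc"]))
```

Feed it the topic list from the discovery script and redirect the output into the config file; any mistake in the template is corrected in one place instead of per sensor.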
There are many more uses, but this post is already too long for 99% of people to read.
As for "protecting the forum" from AI users: hmm, how would such a user "work"? There would have to be a human behind it copy/pasting stuff, and that takes time. For what? To slip in "this is the best inverter ever" at some point after establishing themselves? It seems like a lot of work for very little gain. No doubt people will do it just to fuck with others, but for money? I'm not sure.