The problem isn’t that it didn’t. The problem is that anyone thought that it should have.
But considering the obvious lack of knowledge around AIs, it should have.
It’s not AI, it’s an LLM. It doesn’t know what misinformation is because it doesn’t *know* anything.
Wow that’s crazy who could have seen that coming
Most things I ask it, it gives me back a fever dream. You’re overthinking the current state of the tech. Give it another election cycle.
I just ask it for boilerplate code and it’s OK. I don’t like having to write the same shit a million times.
I’m surprised the other ones did better
It’s always refreshing to read reasonable comments to a nonsensical headline, but I do wonder why it even shows up in my feed when it has so many downvotes.
It depends on which sort algorithm you’re using.
Lol, GPT and Copilot were in stark contrast…
I think the journalists should just try to stick to things they understand. They probably ran a single query, it failed, and they kept going in the same conversation.
Sometimes the difference between a good answer and a bad answer is two or three attempts.
It’s not like LLMs are particularly good at sussing out lies anyway. The approach would be something like: summarize the claims in the article, then do web searches on each one trying to find an answer. It’s a fairly expensive query that they’re honestly going to try to avoid if they can.
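To make the cost point concrete, here’s a minimal sketch in Python of that summarize-then-search pipeline. The three helpers (`llm_extract_claims`, `web_search`, `llm_judge`) are hypothetical stand-ins for whatever model and search API you’d actually wire in; they return canned data so the sketch runs as-is:

```python
from dataclasses import dataclass

# Hypothetical stand-ins: swap in a real LLM client and search API.
# They return canned data here so the sketch is runnable on its own.

def llm_extract_claims(article: str) -> list[str]:
    """Ask a model to pull out the checkable factual claims."""
    return ["claim one from the article", "claim two from the article"]

def web_search(query: str, k: int = 3) -> list[str]:
    """Fetch top-k result snippets for a query."""
    return [f"snippet {i} for: {query}" for i in range(k)]

def llm_judge(claim: str, evidence: list[str]) -> str:
    """Ask a model whether the evidence supports the claim."""
    return "unverified"  # a real call would return supported/contradicted/unverified

@dataclass
class Verdict:
    claim: str
    label: str

def fact_check(article: str) -> list[Verdict]:
    claims = llm_extract_claims(article)   # 1 model call up front
    verdicts = []
    for claim in claims:                   # fan-out: one search + one model call per claim
        evidence = web_search(claim)
        verdicts.append(Verdict(claim, llm_judge(claim, evidence)))
    return verdicts

if __name__ == "__main__":
    for v in fact_check("...article text..."):
        print(f"[{v.label}] {v.claim}")
```

The fan-out is the expensive part: with N claims you pay one extraction call plus N searches and N judging calls, per user query. That’s exactly the cost providers will try to avoid if they can.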
And the first thing Orange Kim will do is take the leash off AI companies.