“Most consumers want fast food companies to label when sawdust has been added to food - but trust restaurants less when they do.”
perfect
/thread
Brilliant.
The title is pretty self explanatory. Yes, I want to know if it’s AI generated because I don’t trust it.
I agree with the conclusion that it’s important to disclose how the AI was used. AI can be great for reducing the time needed for boilerplate work, so the authors can focus on what’s important, like reviewing and verifying the accuracy of the information.
reduce the time needed for boilerplate work
Or… and this is just an idea… don’t add “boilerplate” to articles.
If the content of an article can be summarized in a single table, I don’t want to read 10 paragraphs explaining the contents of the table row by row. The main reason to do that is to pad the article and let the publisher put more ad sections between paragraphs, while making it harder to find the data I’m interested in.
Still, I foresee a future where humans will fill out the table, shove it at an AI to do the “boilerplate work”, and then… users shoving the whole article into an AI to strip the boilerplate and summarize it.
A great scenario for AI vendors, not so great for anyone else.
Yep, my trust would go:
- Site that states they don’t use AI to generate articles
- Site that labels when they use AI generated articles
- Sites that don’t say anything and write in a weird way
- Sites that get caught using AI without disclosing it.
So ideally don’t use AI, but if you do, make it clear when and how. If a site gets CAUGHT using AI, then I’m probably going to avoid it altogether.
-
AI “content” is trivial to make and will soon be everywhere.
-
Nobody wants to read, watch or listen to AI generated “content”
Infinite supply, zero demand. Sounds pretty devoid of value to me.
AI “content” is trivial to make and will soon be everywhere.
It’s been everywhere for many years already.
Plenty of content mills have been using “templates” and stupid AI models to churn out articles for like a decade, there are whole YouTube channels made of videos that are just an AI generated script read by an AI with random barely related visuals in the background.
The only difference is that simple templates were easy to spot, so search engines like Google would penalize them down to the 10th page of results, while modern AI output is at a level indistinguishable from stuff written by a human.
-
That’s… why we want the labels?
I’m confused by the word “but” in that headline. Seems like they are trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.
Yeah, this is perfectly consistent with the idea that people don’t want to read AI generated news at all.
The title of the paper they are referencing is “Or they could just not use it?: The paradox of AI disclosure for audience trust in news”. So the source material definitely acknowledges that. And that is a great title, haha.
This makes perfect sense. We want AI content labelled because it’s unreliable.
Furthermore, I want AI content that I specifically asked for, not AI content that someone thought would get them page views.