“it is possible to train 8 days a week.”
– that one ai bot google made
Probably trained on this argument.
Plenty of fun to be had with LLMs.
That’s one example where LLMs won’t work without some tuning. What it’s probably doing is looking up information about how many Rs there are, instead of actually analyzing the word.
I doubt it’s looking anything up. It’s probably just grabbing the previous messages, reading the word “wrong” and increasing the number. Before these messages I got ChatGPT to count all the way up to ten r’s.
It cannot “analyze” it. That’s fundamentally not how LLMs work. The LLM has a finite set of “tokens”: words and word-pieces like “dog” and “house”, but also pieces like “berry”, “straw”, or “rasp”. When it reads the input it splits the words into the recognized tokens, like a lookup table. The input becomes “token15, token20043, token1923, token984, token1234, …” and so on. The LLM “thinks” of these tokens as coordinates in a very high-dimensional space, but it cannot go back and examine the actual contents (letters) of each token. It has to get the information about the number of “r”s from somewhere else. So it has likely ingested some texts where the number of “r”s in strawberry is discussed. But it can never actually “test” it.
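A rough way to see this for yourself (just a sketch, assuming you have OpenAI’s tiktoken library installed; the exact splits depend on which tokenizer a given model uses):

import tiktoken  # assumption: OpenAI's open-source tokenizer library

# Tokenize "strawberry" with one of OpenAI's public tokenizers.
# The word comes back as a few multi-letter chunks (something like
# "str" / "aw" / "berry", depending on the tokenizer), not as ten letters,
# so the model never directly "sees" the three r's.
enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in ids]
print(ids)
print(pieces)

All the model ever gets is that list of IDs, which is why the letter count has to come from memorized text rather than from inspecting the word.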
A completely new architecture or paradigm would be needed to make these LLMs capable of reading letter by letter and keeping some kind of count in memory.
the sheer audacity to call this shit intelligence is making me angrier every day
That’s because you don’t have a basic understanding of language. If you had been exposed to the word “intelligence” in scientific literature such as biology textbooks, you’d more easily understand what’s being said.
‘Rich in nutrients?! How can a banana be rich when it doesn’t have a job or generational wealth? Makes me so fucking mad when these scientists lie to us!!!’
The comment looks dumb to you because you understand the word ‘rich’ doesn’t only mean having lots of money; you’re used to it in other contexts. Likewise, if you’d read about animal intelligence and similar subjects, then ‘how can you call it intelligence when it doesn’t know basic math’ or ‘how is it intelligent when it doesn’t do this thing literally only humans can do’ would sound silly too.
this is not language mate, it’s pr. if you don’t understand the difference between rich being used to mean plentiful and intelligence being used to mean glorified autocorrect that doesn’t even know what it’s saying that’s a problem with your understanding of language.
also my problem isn’t about doing math. doing math is a skill, it’s not intelligence. if you don’t teach someone about math they’re most likely not going to invent the whole concept from scratch no matter how intelligent they may be. my problem is that it can’t analyze and solve problems. this is not a skill, it’s basic intelligence you find in most animals.
also it doesn’t even deal with meaning, and doesn’t even know what it says means, and doesn’t even know whether it knows something or not, and it’s called a “language model”. the whole thing is a joke.
Again you’re confused. It’s the same difficulty people have with the word ‘fruit’, because in botany we use it very specifically but colloquially it means a sweet-tasting edible bit of a plant regardless of what role it plays in reproduction. Colloquially you’d be correct to say that corn grain is not fruit, but scientifically you’d be very wrong. Ever eaten an almond and said ‘what a tasty fruit’? Probably not, unless you’re a droll biology teacher making a point to your class.
Likewise in biology no one expects slugs or worms or similar to analyze and solve problems, but if you look up scientific papers about slug intelligence you’ll find plenty, though a lot will also be about simulating their intelligence using various coding methods because that’s been a popular PhD thesis topic recently - computer science and biology merge in such interesting ways.
The term AI is a scientific term used in computer science and derives its terminology from definitions used in the science of biology.
What you’re thinking of is when your mate down the pub says ‘yeah he’s really intelligent, went to Yale and stuff’
They are different languages; the words mean different things, and yes, that’s confusing when terms normally only used in textbooks and academic papers get used by your mates in the pub. But you can probably understand that almonds are fruit and peanuts are legumes, yet both will likely be found in a bag of mixed nuts - and there probably won’t be a strawberry in with them unless it was mixed by the pedantic biology teacher we met before…
Language is complex, AI is a scientific term not your friend at the bar telling you about his kid that’s getting good grades.
are you AI? you don’t seem to follow the conversation at all
Exactly my point. But thanks for explaining it further.
So ChatGPT has ADHD
ADHD contains twelve “r’s”
Copilot seemed to be a bit better tuned, but I’ve now confused it by misspelling strawberry. Such fun.
I’ve one upped you!
And this
First mentioned by linus techtip.
i had fun arguing with chatgpt about this
The T in “ninja” is silent. Silent and invisible.
5% of the times it works every time.
You can come up with statistics to prove anything, Kent. 45% of all people know that.
I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in “strawberries”, thinking it would either see a new word and give the same incorrect answer, or, since it was still in context, it would say something about it also being 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.
LLMs are very useful as long as you know how to maximize their power and don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I’ve found some of the simplest errors in the middle of a lot of helpful things. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.
Isn’t “Sphinx of black quartz, judge my vow.” more relevant? What’s all the extra bit anyway, even before the “z” debacle?
maybe it’s using the british pronunciation of “strawbry”
The people here don’t get LLMs and it shows. This is neither surprising nor a bad thing imo.
In what way is presenting factually incorrect information as if it’s true not a bad thing?
LLMs operate using tokens, not letters. This is expected behavior. A hammer sucks at controlling a computer and that’s okay. The issue is the people telling you to use a hammer to operate a computer, not the hammer’s inability to do so.
Claude 3 nailed it on the first try
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to code a script to count it. Like, it’s actually possible for an AI to get this answer consistently correct these days.
Maybe in a “it is not going to steal our job… yet” way.
True but if we suddenly invent an AI that can replace most jobs I think the rich have more to worry about than we do.
Maybe, but I am in my 40s and my back aches, I am not in a shape for revolution :D
Lenin was 47 in 1917
“Create a python script to count the number of ‘r’ characters are present in the string ‘strawberry’.”

The number of 'r' characters in 'strawberry' is: 2
You need to tell it to run the script
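For comparison, actually running a trivial counting script (just a sketch; the one-liners below do the same thing) gives 3, not 2:

word = "strawberry"
count = word.count("r")  # counts the actual characters in the string
print(f"The number of 'r' characters in '{word}' is: {count}")  # prints 3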
"strawberry".split('').filter(c => c === 'r').length
Are you implying that it’s running a function on the word and then giving a length value for a zero indexed array?
A zero indexed array doesn’t have a different length ;)
'strawberry'.match(/r/ig).length
len([c for c in "strawberry" if c == "r"])
import re; r = [c for c in re.findall("r", "strawberry")]; r = "".join(r); len(r)
((frequencies "strawberry") \r)
There’s a simple explanation: LLMs are “R” agnostic because they were specifically trained to not sail the high seas
I hate AI, but here it’s a bit understandable why Copilot says that. If you asked the same thing of someone else they might well answer 2, since they may assume you’re trying to spell the word and are struggling with whether it’s one or two Rs in the last part.
I know it’s a common thing to ask in French when we struggle to spell our overly complicated language, so it doesn’t shock me.
Nah it’s because AI works at the token level which is usually words. They don’t even “see” the letters in the words
Thank you. For as much as this post comes up, I hope people are at least getting an education.
Ladies and gentlemen: The Future.
Copilot may be a stupid LLM but the human in the screenshot used an apostrophe to pluralize which, in my opinion, is an even more egregious offense.
It’s incorrect to pluralize letters, numbers, acronyms, or decades with apostrophes in English. I will now pass the pedant stick to the next person in line.
I salute your pedantry.
Prescriptivist much?
Thank you. Now, insofar as it concerns apostrophes (he said pedantically), couldn’t it be argued that the tools we have at our immediate disposal for making ourselves understood through text are simply inadequate to express the depth of a thought? And wouldn’t it therefore be more appropriate to condemn the lack of tools rather than the person using them creatively, despite their simplicity? At what point do we cast off the blinders and leave the guardrails behind? Or shall we always bow our heads to the wicked chroniclers who have made unwitting fools of us all; and for what? Evolving our language? Our birthright?
No, I say! We have surged free of the feeble chains of the Oxfords and Websters of the world, and no guardrail can contain us! Let go your clutching minds of the anchors of tradition and spread your wings! Fly, I say! Fly and conformn’t!
…
I relinquish the pedant stick.
Oooh, pedant stick, pedant stick! Give it to me!!
English is a filthy gutter language and deserves to be wielded as such. It does some of its best work in the mud and dirt behind seedy boozestablishments.
That’s half-right. Upper-case letters aren’t pluralised with apostrophes but lower-case letters are. (So the plural of ‘R’ is ‘Rs’ but the plural of ‘r’ is ‘r’s’.) With numbers (written as ‘123’) it’s optional - IIRC, it’s more popular in Britain to pluralise with apostrophes and more popular in America to pluralise without. (And of course numbers written as words are never pluralised with apostrophes.) Acronyms are indeed not pluralised with apostrophes if they’re written in all caps. I’m not sure what you mean by decades.
By decades they meant “the 1970s” or “the 60s”
I don’t know if we can rely on British popularity, given y’all’s prevalence of the “greengrocer’s apostrophe.”
Oh right - that would be the same category as numbers then. (Looked it up out of curiosity: using apostrophes isn’t incorrect, but it seems to be an older/less formal way of pluralising them.)
Now, plurals aside, which is better,
The 60s
Or
The '60s
?
Never heard of the greengrocer’s apostrophe so I looked it up. https://www.thoughtco.com/what-is-a-greengrocers-apostrophe-1690826
I absolutely love that there’s a group called the Apostrophe Protection Society. Is there something like that for the Oxford Comma? I’d gladly join them!
I will die on both of those hills alongside you.
Hah, I do not like the greengrocer’s apostrophe. It is just wrong no matter how you look at it. The Oxford comma is a little different - it’s not technically wrong, but it should only be used to avoid confusion.
I use it for fun, frivolity, and beauty.
Why use them for lowercase?
Because otherwise, if you have too many small letters in a row, it stops looking like a plural and more like a misspelled word. Because of capitalization differences you can make more sense of “As” but not so much of “as”.
As
That looks like an oddly capitalised “as”
That really gives the reason it’s acceptable to use apostrophes when pluralising that sort of case
Because English is stupid
It’s not stupid. It’s just the bastard child of German, Dutch, French, Celtic and Scandinavian and tries to pretend this mix of influences is cool and normal.
Victim blaming and ableism!
The French and Scandinavian bits were NOT consensual.
(Don’t forget Latin btw)