- cross-posted to:
- technology@lemmy.ml
- technology@beehaw.org
2024 might be the breakout year for efficient ARM chips in desktop and laptop PCs.
“The most exciting tech isn’t the thing that currently exists and is being improved and integrated daily, it’s this other thing we don’t even know for sure will maybe happen.”
Right, it’s less exciting now because it’s already here. I’m not expecting radically improved GPT models or whatever in 2024, probably just more iteration. The most exciting stuff there might be local AI tech becoming more usable, like we’ve seen with Stable Diffusion.
I’m just expecting performance optimisations, especially for local LLMs. Right now there are models claimed to be as good as GPT-4 (Goliath 120B), but they require two RTX 4090s to run.
The models that require less powerful equipment are not as good, of course.
But hopefully, given enough time, good enough models will be able to run on mid-range hardware.
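To put rough numbers on why a 120B model needs dual high-end GPUs, here is a back-of-the-envelope sketch. `vram_gb` is a hypothetical helper, not from any library; it assumes model weights dominate memory use (real inference also needs room for the KV cache and activations, folded here into a flat overhead factor).

```python
def vram_gb(params_billions: float, bits_per_weight: float,
            overhead: float = 1.2) -> float:
    """Rough GPU memory (GB) needed to hold the model weights,
    with a flat 20% overhead for cache/activations (an assumption)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Goliath 120B at full 16-bit weights: way beyond consumer GPUs.
print(round(vram_gb(120, 16)))  # -> 288 (GB)

# Even at 4-bit quantization it's still tens of GB, which is why
# two 24 GB cards plus aggressive quantization come into play.
print(round(vram_gb(120, 4)))   # -> 72 (GB)

# A 13B model at 4-bit fits comfortably on one mid-range GPU.
print(round(vram_gb(13, 4)))    # -> 8 (GB)
```

The takeaway: quantization buys roughly a 4x reduction versus fp16, but parameter count still dominates, so smaller models are what make mid-range hardware viable.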
“Forget about the possibility that we may finally have developed machines that think, that comprehend the world in a way similar to how humans do and can communicate with us on our level. This new chip design might end up with comparable capabilities to the existing chip design!”
Yeah, there was no need to try to hype this up as the biggest thing ever.
That isn’t what’s happening with AI right now.
Which is why I said “possibility”; I knew picky people would jump on the comment like this.
You clearly don’t work in a field where it’s cutting swaths through workflows and taking up serious slack.
You can describe your problem to it in plain English, so it does communicate on our level. It assimilates its training data much the way a human assimilates lived experience. It’s not truly “reasoning”, but it’s leagues ahead of anything we had even four years ago, and it’s only going to grow from here.
Commercial ventures are finding new use cases every day, and to people in IT it’s hilarious in the same way that people who thought the Internet was a fad were hilarious.
I object to your characterization that current AI “thinks”. It does nothing of the sort.
I literally said “it’s not truly reasoning” to clarify that while it’s drawing on its training data in the same way you draw on your experiences when making new decisions, it can’t really create original thought.
Once again lemmy proves reading comprehension is too damn hard.
I thought you were the one saying AI thinks, but it was someone else. Apologies for that.
OTOH you can take your sarcasm and insert it rectally.