

Company is run by individuals.
I have no idea why you need to start insulting me now.


What you get from pirating is not having to pay. Purchasing a movie still gives you the same entertainment and cultural capital.
Nobody is obligated to hand out for free something they put time, effort, and capital into creating and running. If someone can justify to themselves that it’s okay to download it for free, then criticizing AI companies for doing the exact same thing is hypocritical on their part.
Making money and saving money achieve exactly the same thing: you end up with more to spend.


For the past ten years or so I’ve pretty much lived under the assumption that at some point someone will build a system that digs through the entire internet and links everything anyone has ever posted back to them.
At the same time, it’s both great and absolutely horrifying.
What’s horrifying is that everything you’ve ever posted gets linked back to you.
What’s great is that none of it can really be used against you anymore - because we now know that absolutely everyone is a massive hypocrite and nobody is without sin.


Sure, there’s no ghost in the machine - but that’s true of your neurons too.
Touché.
Intelligence doesn’t require a “self,” and we’re living proof of that. The ways LLMs and humans operate have far more in common than people like to admit. We’re just holding AI to higher standards.


Have you tried running your own local LLM?
Nah, I’ve only messed around with ChatGPT and Grok. My interest in AI originates from the philosophical side of it - mainly the dangers and implications of creating AGI. I’m not tech-savvy enough for anything deeper - I even needed ChatGPT to walk me through installing Linux.


But it seems more conscious than a cup of sand or a box of crayons.
That would mean it feels like something to be an LLM, and I don’t see any reason to think that. I’m not going to claim it absolutely isn’t, because I couldn’t possibly know, but I’m about as sure of that as I am that it’s like something to be my pet gerbil.


Even if it was conscious there would be no way to know. Consciousness is entirely subjective experience - it cannot be measured.


It was just an educated guess.


I have wonderful dreams of walking through AI data centers destroying everything.
No you don’t.
Does this work both ways? If Cuba can do it, then AI companies can do it too? Or are we just gonna keep running with the double standards?
I frankly never quite understood why so many piracy advocates lose their minds when AI companies do the exact same thing they do. I don’t buy the “but it’s for profit” argument either - downloading movies, games, or apps does the exact same thing. There’s no practical difference between saving money and earning money.


“Shook the US markets” is that last dip in the chart. Nothing happened, and we’re losing our minds over it. There have been five bigger drops in the last three months than what happened on Monday.



I think the “fancy auto complete” meme is a disingenuous thought stopper, so I speak against it when I see it.
I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.
These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.
I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.


The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it’s saying, it’s working - even if the content of what it says is factually inaccurate.
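The point about coherence versus factuality can be shown with a toy sketch. This is not how real LLMs work internally (they use neural networks over subword tokens, not word bigrams), but the objective has the same shape: pick a statistically plausible next token, with no notion of truth. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# Toy bigram "language model": the next word is chosen purely by how
# often it followed the current word in the training text. Nothing in
# this objective rewards factual accuracy - only plausible continuations.
corpus = ("the moon is made of rock . the moon is made of cheese . "
          "the sun is made of gas .").split()

bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, max_words, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words):
        nxt = rng.choice(bigrams[out[-1]])  # sample by observed frequency
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

print(generate("the", 10))
```

Depending on the seed, this can just as easily emit “the moon is made of cheese .” as “the moon is made of rock .” Both are equally “correct” to the model, because the training signal only measures whether the continuation looks like language.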


No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.
I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.
The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.
The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.


The vast majority of people aren’t educated on the correct terminology here. They don’t know the difference between AI, LLM, AGI, ASI, etc. That makes it near impossible to have real discussions about AI - everyone’s constantly talking past each other and using the same words to mean completely different things.
My original comment wasn’t even challenging their claim that “AI doesn’t work.” I was just pointing out that AI and LLM aren’t synonymous. It’s my one-man fight against sloppy, imprecise use of language. I’d rather engage with what people are actually saying, not with what I assume they’re saying.
When it comes to LLMs, it’s not just a “word generator.” It’s a system that generates natural-sounding language based on statistical probabilities and patterns. In other words: it talks. That’s all. Saying an LLM “doesn’t work” because it spits out inaccurate info is like saying a chess bot doesn’t work because it can’t play poker. No - that’s user error. They’re trying to use the tool for something it was never designed to do.


Even if someone’s inaccurately using “AI” as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.
One spitting out false information isn’t a sign they’re not working. That’s not what LLMs are designed for. They’re chatbots - not generally intelligent systems. They don’t think - they talk.


I’m not really interested in engaging in discussions about what you or anyone else thinks my underlying motives are. You’re free to point out any factual inaccuracies in my responses, but there’s no need to make it personal and start accusing me of being dishonest.


AI is a broad category of systems, not any one thing. “AI doesn’t work” is like saying “plants taste bad.”


Is the stock market crash in the room with us?
I’ve honestly never considered before whether it could be like something to be a character in my dream - if it’s part of the same consciousness. Doesn’t seem obvious that it couldn’t be.
And my personal view is that the answer is definitely no. There’s no dreamer. The dream is appearing in the consciousness of a biological being with my genes, history, and memories that’s currently in a state of sleep.
This comes with other ramifications too. There’s no decision-maker either.