• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • My issue is more with the math of it. Since it requires holding your frames until you’ve got one in reserve (you can’t generate an in-between until you know what’s next), it fundamentally makes the game less responsive (rough math below).

    That said, if you understand that, and like the visual smoothness of motion with more frames, then it’s super cool tech. Not every game has to be treated like it’s competitive Counter Strike, and I think it’s really cool if you like it, but it frustrates me how poorly marketed and understood the actual technology and its compromises are.
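
    To put rough numbers on that responsiveness cost, here’s a minimal sketch, assuming a 60 fps base framerate and an interpolator that holds one real frame in reserve (the figures are illustrative, not measurements of any specific implementation):

    ```python
    # Rough sketch of why frame interpolation adds input latency.
    # Assumes a 60 fps base framerate and that one real frame must be
    # held back so an in-between frame can be generated; numbers are
    # illustrative only.

    base_fps = 60
    frame_time_ms = 1000 / base_fps   # ~16.7 ms per real frame

    # Without frame generation: a rendered frame is displayed as soon as
    # it's finished, so your latest input reaches the screen right away.

    # With interpolation: the newest real frame is held while the
    # in-between frame is synthesized and shown first, so what you see
    # lags your input by roughly one extra real frame (plus whatever the
    # generation itself costs).
    held_frames = 1
    added_delay_ms = held_frames * frame_time_ms

    print(f"Frame time at {base_fps} fps: {frame_time_ms:.1f} ms")
    print(f"Added latency from holding {held_frames} frame: ~{added_delay_ms:.1f} ms")
    # The display may report 120 "fps", but inputs still land ~16.7 ms
    # later than they would without frame generation.
    ```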


  • Eh, FSR3 upscaling and FSR3 frame generation are different things. I’m personally a fan of upscaling: it’s great for a sharper picture on my large 4K TV without spending a fortune on a massive GPU (I use a living room gaming PC). I’m not at all a fan of frame generation, though, as it introduces more input lag for the illusion of more frames. That’s not a tradeoff I’m ever willing to make, especially when VRR already does an incredible job of creating the illusion (and a degree of reality) of good performance when my framerate drops.



  • Eh, there’s not much nefarious you can do just by pushing data around. Taking a lot of CPU/GPU usage? Certainly, you can do a lot of evil with distributed computing. But bandwidth?

    It costs a lot to host all that data and to stream it to that many people, all for them to just… throw it out? Users certainly don’t have enough storage to hold a constant 100 Mb/s of sneaky evil data (rough numbers below), let alone do any compute with it, since the game’s CPU/GPU usage isn’t particularly out of the ordinary.

    So there’s not much you could do here. Ockham’s razor just says: planes are fast, MSFS is a high-fidelity game, so they’ve got to load a lot of high-accuracy data very quickly and probably can’t spare the CPU for terribly complicated decompression.
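
    For scale, here’s the arithmetic on that 100 Mb/s figure (taking the number at face value; it’s an illustration, not a measurement of what MSFS actually streams):

    ```python
    # What a constant 100 Mb/s stream would amount to if a user actually
    # had to store it. Purely illustrative arithmetic.

    mbit_per_s = 100
    mbyte_per_s = mbit_per_s / 8                  # 12.5 MB/s
    gbyte_per_hour = mbyte_per_s * 3600 / 1000    # ~45 GB/hour
    tbyte_per_day = gbyte_per_hour * 24 / 1000    # ~1.08 TB/day

    print(f"{mbyte_per_s:.1f} MB/s -> {gbyte_per_hour:.0f} GB/hour -> {tbyte_per_day:.2f} TB/day")
    # Nobody's disk is quietly absorbing ~1 TB of "sneaky" data per day,
    # which is the point: the bandwidth gets consumed and discarded, not hoarded.
    ```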


  • I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.

    On top of that, Google and Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, and the OpenAI API is being shoved everywhere under the sun for all kinds of tasks you can imagine, yet nobody is attempting to clarify its weaknesses in their marketing.

    As it becomes more and more common, we’ll see more and more users who don’t understand that it’s fundamentally incapable of reliably doing these things.




  • Yeah, this is the problem with frankensteining two systems together. Giving an LLM a prompt, and bolting on a separate module that interprets images for it, leads to exactly this.

    The image parser goes “a crossword, with the following hints”, when what the AI needs to do the job is an actual understanding of the grid. If one singular system understood both images and text, it could hypothetically understand the task well enough to fetch the information it needed from the image. But LLMs aren’t really an approach to any true “intelligence”, so they’ll forever be unable to do that as one piece.
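
    A toy sketch of that failure mode (every name here is hypothetical, and this isn’t how any particular product is wired internally; it just illustrates where the grid information gets dropped):

    ```python
    # Hypothetical illustration of the "frankenstein" pipeline described above.
    # The image module flattens the crossword into a text caption, so the
    # language model never receives the grid structure it would need.

    def describe_image(image) -> str:
        # A captioning module typically returns prose, not structured data.
        # (Hypothetical output; real captioners vary.)
        return "A crossword puzzle. Hints: 1 Across: 'Large feline', 2 Down: ..."

    def llm_answer(prompt: str) -> str:
        # Stand-in for a text-only LLM call; it only ever sees the caption.
        return f"[model completes text based on: {prompt!r}]"

    def solve_crossword(image):
        caption = describe_image(image)   # grid layout is lost at this step
        prompt = f"Solve this crossword: {caption}"
        return llm_answer(prompt)         # answers can't respect cell counts
                                          # or intersections the model never saw

    print(solve_crossword(image=None))
    ```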


  • Eh, this is a thing; large companies often have internal rules and caps on how much they can pay any given job title. For example, on our team, everyone we hire is given the title “senior full stack developer”. It’s not because they’re particularly senior (in some cases we’re literally hiring straight out of college), but because it lets us pay them better within the company’s internal politics.



  • They did overhaul the controller mapping in this update, along with just about everything else, so it would be worth checking out. I really can’t emphasize enough how massive this update is; it’s like the emulator leaped from 2010 to 2024. They’ve been exceptionally active over the past four years.

    “Aren’t there emulators for newer platforms out there now?”

    And of course there are. I assume you’re referring to RPCS3 for PS3. PS4 is also in the early stages of being emulated, with simple games already playable.



  • Storytime! Earlier this year, I had an Amazon package stolen. We had reason to be suspicious, so we immediately contacted the landlord and within six hours we had video footage of a woman biking up to the building, taking our packages, and hurriedly leaving.

    So of course, I go to Amazon and try to report my package as stolen… which traps me for a whole hour in a loop with Amazon’s “chat support” AI, repeatedly insisting that I wait 48 hours “in case my package shows up”. I cannot explain to this thing clearly enough that, no, it’s not showing up, I literally have video evidence of it being stolen that I’m willing to send you. It literally cuts off the conversation once it gives its final “solution” and I have to restart the convo over and over.

    Takes me hours to wrench a damn phone number out of the thing, and a human being actually understands me and sends me a refund within 5 minutes.


  • I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

    Because fundamentally, it doesn’t understand the words it’s writing. The more I play with it and learn about it, the more it feels like a glorified autocomplete/autocorrect (toy sketch below). I suspect hallucinations, “Waluigis”, and “jailbreaks” are fundamental problems for a language model trying to complete a story, as opposed to an actual intelligence acting with a purpose.
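
    To make the “glorified autocomplete” framing concrete, here’s a toy bigram autocompleter. It’s vastly cruder than a real LLM, but the basic shape is the same: predict the next token from what came before, with nothing in the loop checking truth, only likelihood. (Illustrative only; it assumes nothing about how any specific model is built.)

    ```python
    # Toy bigram "autocomplete": pick the next word based purely on what
    # tends to follow the previous word in the training text.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept on the couch".split()

    # Count which words follow which.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def complete(word: str, length: int = 6) -> str:
        out = [word]
        for _ in range(length):
            options = following.get(out[-1])
            if not options:                      # nothing ever followed this word
                break
            out.append(random.choice(options))   # sample what's likely, don't "reason"
        return " ".join(out)

    print(complete("the"))
    # e.g. "the cat sat on the mat and" - fluent-looking, but nothing in the
    # process checks whether the continuation is true, only whether it's likely.
    ```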