• 1 Post
  • 192 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Thanks for the clarification; the pipes look like copper but might be cast iron.

    Still doesn’t fit with the explanation: aluminum has more resistance than copper, but not that much more, and the resistivity of cast iron is an order of magnitude higher than aluminum’s. Even so, the pipe would still be the lowest resistance in the circuit and thus the coolest part.

    And cast iron is pretty good at conducting heat. Not as good as copper or aluminum, but still pretty good. We’ve been using the material to make pans and pots for cooking because of its thermal properties. So the heat wouldn’t just stop at the fitting, but would carry on at least some way past it.

    Moreover, it’s physically impossible to get aluminum hot enough to glow like this and still keep its shape. It melts at about 660 degrees C, below the point where metal glows clearly red, let alone yellow like this. If the aluminum were this hot, it would be a puddle and at risk of burning.
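    A minimal sketch of the I²R argument in Python (resistivity values are rough room-temperature figures, and cast iron varies a lot by grade): with the same current through every section of a series circuit, dissipated power scales with resistance, so the lowest-resistance section runs coolest.

    ```python
    # Rough comparison of resistive heating for conductor sections in series.
    # Resistivities in ohm·m; approximate room-temperature values.
    RESISTIVITY = {
        "copper": 1.7e-8,
        "aluminum": 2.7e-8,
        "cast iron": 3.0e-7,  # varies a lot by grade; roughly 10x aluminum
    }

    def power_per_metre(current_a: float, resistivity: float,
                        cross_section_m2: float) -> float:
        """P = I^2 * R, with R = rho * L / A for a uniform conductor (L = 1 m)."""
        resistance = resistivity / cross_section_m2
        return current_a ** 2 * resistance

    # Same current through every section (series circuit), same cross-section.
    for material, rho in RESISTIVITY.items():
        watts = power_per_metre(current_a=100.0, resistivity=rho,
                                cross_section_m2=1e-4)
        print(f"{material:>9}: {watts:6.1f} W per metre")
    # Copper dissipates the least, so in a series circuit it runs coolest.
    ```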


  • Thorry84@feddit.nl to Lemmy Shitpost@lemmy.world · Installation

    This makes no sense at all.

    Why would only these two specific pipes get hot, so hot they glow, but not the other lines connected to them? And not the fittings around them? It’s all copper, so even if the current itself doesn’t heat them up, why wouldn’t being connected to an extremely hot pipe heat them up? It’s copper, you know, and being good at transferring heat is what it’s known for.

    And why would the lower-resistance part be the part that gets hottest? The same current flows through every part of the circuit, so dissipation is I²R: low resistance means less loss, and those parts would in fact be the coldest of all.

    Plus, thin-walled copper pipes can’t get so hot they glow without melting, or at the very least losing all structural integrity and breaking.

    And a downed power line with a short to ground would be cut off almost immediately. It’s when there isn’t a direct path to ground that those things are dangerous. As soon as it shorts, it gets switched off at the source to prevent further damage, fire, and issues upstream.

    Either it’s Photoshop or someone has wrapped LED lighting around some pipes. Also, those aren’t gas pipes.



  • No, you misunderstand. You get seconds assigned to your token. It doesn’t matter where in the video you use those seconds.

    So if you watch an ad you get, say, 60 secs of video until you need to watch an ad again. You could watch 30 secs, then skip 2 minutes ahead and watch another 30 secs, and then you’d get an ad. In reality the times would be larger; these numbers just illustrate the point (sketched in code below).

    In the current setup YT uses, if you watch an ad, watch 2 secs of video, then skip ahead past the next ad break, you get more ads.

    And yes, as stated, a separate client can get around this. But as also stated, there will always be ways around it; it’s just a matter of making it harder. If it’s beyond what a simple browser plugin can do, it’s good enough. And YT has been banning 3rd party clients anyway, which makes it even harder.
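    A minimal sketch of that credit idea, with hypothetical names (the 60-second figure is just the example above, not anything YouTube actually uses):

    ```python
    # Hypothetical sketch of a "seconds credited per ad" token. The names and
    # the 60-second figure are illustrative, not YouTube's actual scheme.
    class AdCreditToken:
        SECONDS_PER_AD = 60

        def __init__(self) -> None:
            self.credit = 0  # playable seconds remaining on this token

        def watch_ad(self) -> None:
            self.credit += self.SECONDS_PER_AD

        def play(self, seconds: int) -> bool:
            """Consume credit; where in the video you are is irrelevant."""
            if seconds > self.credit:
                return False  # out of credit: the server demands another ad
            self.credit -= seconds
            return True

    token = AdCreditToken()
    token.watch_ad()          # one ad -> 60 s of credit
    assert token.play(30)     # watch 30 s
    # ...skip 2 minutes ahead: seeking consumes no credit...
    assert token.play(30)     # watch another 30 s
    assert not token.play(1)  # credit exhausted -> time for the next ad
    ```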



  • Yeah I’m thinking of a system like this:

    A user opens a session to watch a video and is assigned a token for the requested video. When the user isn’t a premium subscriber and the video is monetized, the token is used to enforce ads. To get video data from the server, the user needs to supply the token. That token contains a “credit” of how many seconds (or whatever unit they use internally) the user can watch of that video. To get seconds credited to the token, the user needs to stream ad content to their player. New ad content only becomes available to stream once the seconds they were credited have elapsed. (A rough sketch of this flow follows below.)

    One way to get around this is to have something in the background “watch” the video for you, invisibly, ads included, and record the video data so it’s available to watch without ads. But it would be easy to rate-limit the number of tokens a user can have. There are ways around that as well, but this seems to me well beyond what a simple browser plugin can do; it would require a dedicated client.

    The idea is to make it harder for users to get around the ads, so they’ll watch them instead of looking for a way to block them. In the end nothing fully works; users can always get around ads. Big streaming services use DRM and everything, and their content still gets ripped and shared. With YouTube it would be easy for someone with a Premium account to rip the vids and share them. But by putting up a barrier, people watch the ads. YouTube doesn’t care if a percentage of users doesn’t watch the ads, as long as most of them do.

    My point was: there are ways to implement the ads without sending metadata about the ads to the client.
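    A rough server-side sketch of that flow, again with hypothetical names (none of this is YouTube’s real API):

    ```python
    # Hypothetical server-side token flow: video chunks are only served while
    # the token has credit, and credit is only granted by streaming ads.
    import secrets
    import time

    class VideoSession:
        def __init__(self, video_id: str, monetized: bool) -> None:
            self.token = secrets.token_hex(16)  # opaque handle for the client
            self.video_id = video_id
            self.monetized = monetized
            self.credit = 0            # seconds the client may still stream
            self.ad_unlock_time = 0.0  # earliest moment the next ad may stream

        def stream_ad(self, credited_seconds: int) -> bytes:
            """Client streams an ad; the server credits playable seconds."""
            if time.time() < self.ad_unlock_time:
                raise PermissionError("credited seconds have not elapsed yet")
            self.credit += credited_seconds
            # New ad content only becomes available once the credited seconds
            # have elapsed (approximated here with wall-clock time).
            self.ad_unlock_time = time.time() + credited_seconds
            return b"...ad data..."

        def serve_video(self, chunk_seconds: int) -> bytes:
            """Hand out video data only while the token has credit left."""
            if self.monetized and chunk_seconds > self.credit:
                raise PermissionError("out of credit: stream an ad first")
            self.credit -= chunk_seconds
            return b"...video data..."
    ```

    Rate-limiting how many sessions a user can hold would then also bound the background-“watcher” workaround mentioned above.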


  • I’m not talking about the player or the controls being server-side. I’m talking about the player being locked into a streaming mode where it does nothing but stream the ads. After the ads are streamed, the player returns to normal video mode and the server sends the actual video data.

    This means no metadata about the ads is required on the player side.

    Sure, you can hack the player into not being locked during the streaming of the ads. But that won’t get you very far, since it’s a live stream: you can’t skip forward, because the data hasn’t been sent yet. You can skip backwards if you’d like, within the current buffer, but why would you want to? You can have the player not display the ads, but that means staring at a blank screen till the ads are over, and that’s already possible; one can simply walk away during the ads.

    Technically I can think of several ways to implement this without the client having metadata about the ads, and with few to no ways of getting around them. Once the video starts it’s business as usual, so it doesn’t impact regular viewing.
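    A minimal sketch of that locked phase, assuming the server simply paces the ad chunks in real time (hypothetical names, not any real streaming API):

    ```python
    # The server paces ad chunks in real time, so even a hacked client cannot
    # fetch ahead: the next chunk simply does not exist yet.
    import time
    from typing import Iterator

    def stream_session(ad_chunks: list[bytes], video_chunks: list[bytes],
                       seconds_per_chunk: float = 2.0) -> Iterator[bytes]:
        # Phase 1: ads as a paced live stream; skipping forward is impossible.
        for chunk in ad_chunks:
            yield chunk
            time.sleep(seconds_per_chunk)  # server-side pacing, not client trust
        # Phase 2: back to normal on-demand delivery of the actual video.
        yield from video_chunks
    ```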




  • Oncoming drivers? I’m getting blasted by “cars” behind me. Fucking trucks, or even lifted trucks, with their headlights at my eye level. And it seems like lights are getting brighter as well, or people drive with their high beams on. My rearview mirror is auto-dimming, which helps a lot. But since I drive the speed limit, these trucks are swerving back and forth behind me, blinding me via the side mirrors.

    Man we really really need restrictions on size and weight of cars. It’s getting ridiculous out there.



  • Yeah this game is really annoying to play, which is a shame because it is cute as hell. It continually prompts you to do the thing. It’s like playing Mario and having someone tell you to walk right and jump all through the game. What makes it much worse is that the game fully comes to a stop to do so. Everything just pauses and the game explains what to do. Even when there is a puzzle, the game basically gives you the answer.

    The approach Astro Bot uses is much better. It lets you struggle for a bit and then shows an animation with the move you need and which button it is. That’s really handy, because even if you know what move you want, it’s easy to forget the right button combination for it. It’s very non-intrusive, and if you know the move the animation won’t even pop up. An experienced player won’t notice the mechanic at all. If you come back after not playing for a bit, the reminder about the buttons is useful. For kids who genuinely get stuck, the help keeps them from giving up.

    The new God of War games were infuriating with these kinds of mechanics. At every fucking puzzle, when you take 10 secs just to get oriented and look at what you need to do, some NPC (usually Boi of War) just tells you the answer. There is no way to turn this off, and it made me turn off the game multiple times. If you want to put puzzles in your game, put puzzles in your game and let me figure them out. If you’re going to give the answer, why are there puzzles to begin with? It doesn’t help that Atreus is one of the worst characters ever written, especially in the last game.





  • Disagree; if you need a big custom machine made, there are some really good people on AliExpress.

    I have had a couple of custom CNC machines built for a fraction of the price it would have cost to have it done over here. Even with the shipping costs it came out cheaper. And the people were really helpful; they use machines like mine all the time, so they know their stuff. They offered really good advice and were excited to work with me. It’s a bit butt-clenching to fork over a lot of cash and hope a big-ass pallet shows up 4 months later, but they’ve come through for me every time. YMMV.

    Temu on the other hand can fuck off. It’s just a scam site like Wish.



  • Rendering a 3D scene is much more intensive and complicated than running a simple scaler. The scaler isn’t advanced at all; it’s actually very simple. And it can’t be compared with running a large model locally: these are expert systems, not large models. They are very good at one thing and can do only that thing.

    Like I said, the cost is fixed: if the scaler can handle upscaling 1080p at 120 fps to 2K, then it can always handle that. It doesn’t matter how complex or simple the image is; it will always use the same amount of power. It reads the image, does the calculation, and outputs the resulting image.

    Rendering a 3D scene is much, much more complex and power intensive. The amount of power highly depends on the complexity of the scene, and there is a lot more involved: it needs the GPU, CPU, memory, and sometimes even storage, plus all the bandwidth and latency in between.

    Upscaling isn’t like that; it’s a lot simpler. So if the hardware is there, like the AI cores on a GPU or a dedicated upscaler chip, it will always work. And since that hardware is normally not heavily used, the rest of the components are still available for the game. A dedicated scaler is the most efficient, but the cores on the GPU aren’t bad either. That’s why something like DLSS doesn’t just work on any hardware; it needs specialized components. And different generations and parts have different limitations.

    Say your system can render a game at 1080p at a good solid 120 fps, but you have a 2K monitor, so you want the game to run at 2K. That requires a lot more from the system, so the computer struggles to hold 60 fps and has annoying dips in demanding parts. With upscaling, you run the game at 1080p at 120 fps and the upscaler takes that image stream and converts it into 2K at a smooth 120 fps (rough numbers in the sketch below). Now, the scaler may not get all the details right compared to running native 2K, and it may make some small mistakes. But our eyes are pretty bad, and when we’re playing games our brains aren’t looking for those details; they’re focused on gameplay. So the output is probably pretty good, and unless you compared it with native 2K side by side, you probably wouldn’t even notice the difference. So it’s a way of getting that excellent performance without shelling out a thousand bucks for better hardware.

    There are limitations of course. Not all games conform to what the scaler is good at: it usually does well with realistic scenes, but can struggle with more abstract stuff, producing annoying halos and weird artifacts. There are also limits to the bandwidth it can push, so for example not all GPUs can upscale to 4K at a high framerate. If the game uses the AI cores for other stuff as well, that can become an issue. If the difference in resolution is too large, it becomes very noticeable and unplayable. Often there’s also the option to use previous frames to generate intermediate frames, boosting the framerate at little cost. In my experience this doesn’t work well and just makes the game feel like it’s ghosting and smearing.

    But when used properly, it can give a nice boost basically for free. I have even seen cases where the game could run at native resolution and a high framerate at a lower quality setting, but looked better rendered at a lower resolution with a higher quality setting and then upscaled. The extra effects outweighed the small loss of fidelity.
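    Back-of-the-envelope numbers for that 1080p-to-2K example, assuming “2K” means 2560×1440 (QHD):

    ```python
    # Pixel budget comparison: what the GPU shades at 1080p vs native 2K.
    # Assumes "2K" means 2560x1440 (QHD).
    RES = {"1080p": (1920, 1080), "2K": (2560, 1440)}

    def megapixels_per_second(name: str, fps: int) -> float:
        width, height = RES[name]
        return width * height * fps / 1e6

    rendered = megapixels_per_second("1080p", 120)  # what the GPU shades
    native = megapixels_per_second("2K", 120)       # what native 2K would need

    print(f"1080p @ 120 fps: {rendered:.0f} Mpx/s shaded")  # ~249 Mpx/s
    print(f"2K    @ 120 fps: {native:.0f} Mpx/s shaded")    # ~442 Mpx/s
    print(f"native 2K needs {native / rendered:.2f}x the shading work")  # ~1.78x
    # The upscaler fills that gap at a fixed per-frame cost, regardless of
    # how complex the scene is.
    ```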


  • The game is rendered at a lower resolution, which saves a lot of resources. This isn’t a linear thing: lowering the resolution reduces the performance needed by a lot more than you would think, not just in processing power but also in bandwidth and memory requirements. Then dedicated AI cores, or even special AI scaler chips, are used to upscale the image back to the requested resolution. This is a fixed cost and can be done with little power, since the components are designed for exactly this task.

    My TV, for example, has an AI scaler chip which is pretty nice (especially after tuning) for showing old content on a large high-res screen. For games, applying AI upscaling to old textures also does wonders.

    Now, even though this gets the AI label slapped on it, it is nothing like the LLMs such as ChatGPT. These are expert systems, trained and designed to do exactly one thing. This is the good kind of AI that’s actually useful, as opposed to the BS AI like LLMs. These systems have their limitations, but for games the trade-off between detail and framerate can be worth it, especially when our bad eyes and mediocre screens wouldn’t really show the difference anyway.
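    For contrast, here is the dumb baseline a learned scaler replaces: a plain bilinear resize (a sketch using Pillow; the filenames are made up). An “AI” scaler sits in the same slot, low-res frame in, high-res frame out, but is trained to reconstruct detail that bilinear filtering just smears:

    ```python
    # Toy baseline: plain bilinear upscale with Pillow. A learned scaler does
    # the same job (low-res frame in, high-res frame out) but reconstructs
    # detail instead of just interpolating. Filenames are hypothetical.
    from PIL import Image

    frame = Image.open("frame_1080p.png")            # rendered at 1920x1080
    upscaled = frame.resize((2560, 1440), Image.BILINEAR)
    upscaled.save("frame_1440p.png")
    ```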