I think this is the natural conclusion of modern social media. Constantly confronting people with a billion different worldviews and farming them for engagement by showing them things they disagree with is just going to breed extreme echo chambers.
This type of behaviour is neither new nor actively harmful. There’s really nothing you, I, or anyone else can do to stop it, so the only remaining choice is to ignore it and not post screenshots in different communities where people agree with you.
C has not aged well, despite its popularity in many applications. I’m grateful for the incredible body of work that kernel developers have assembled over the decades, but there are some very useful aspects of Rust that might help alleviate some of the hurdles that aspiring contributors face. This was not a push by Rust evangelists, but an attempt to enable modernization efforts, at least for new driver development. If it doesn’t work out, that’s fair enough, but I’m grateful for the willingness - especially of Linus - to try something new.
If I may ask: how practical is monitoring / administering rootless quadlets? I’m running rootless podman containers via systemd for home use, but splitting the single rootless user into multiple has proven to be quite the pain.
That’s not entirely accurate. Google’s influence on the web has grown even beyond the web browser engine majority share (which is bad enough in itself). They offer one of the most popular web frameworks and run several of the most popular websites. There is almost no way to compete when the market leader is simultaneously the developer and the major user of new features. Of course everyone else is going to switch to using your browser engine. What else are they gonna do? There are even websites now that just check the user agent string and refuse service if you don’t use a chromium based browser. Shit’s fucked.
It would certainly help if the GitHub code search wasn’t utter garbage.
Maybe a fixed line-height?
Going by your initial comment, the whole premise of this discussion was technological progress and growth. That means refining existing models and training new ones, which is going to cost a lot of energy. The way this industry is going, even privacy-conscious usage of open source models will contribute to the insane energy usage by creating demand and popularizing the technology.
Do we really need to grow our energy consumption as a society by such a disproportionate amount?
With Blu-ray rips, I don’t really see any way to avoid that unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will (in most cases) have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone uploaded hashes for Blu-ray rips?
Grouping seasons into a series folder doesn’t work well in some cases, because that’s not the way they are released in Japan. A new season is (most of the time) effectively an entirely new show entry. Show seasons are mostly a North American thing. No matter which software you use, there are always going to be some minor issues if you group seasons into one entry.
Shoko compares a file’s ED2K hash against the AniDB database. The filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link them in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
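If you want to double-check what Shoko is matching on, the ED2K hash is simple enough to compute yourself. A rough Python sketch (the 9,728,000-byte chunk size is the standard ED2K value; MD4 availability depends on your OpenSSL build, and implementations disagree on files that are an exact multiple of the chunk size):

```python
import hashlib

CHUNK = 9_728_000  # standard ED2K chunk size in bytes

def ed2k_hash(path: str) -> str:
    """Sketch of the ED2K hash: MD4 per chunk, then MD4 over the
    concatenated chunk digests for multi-chunk files.
    Requires MD4 support in your local OpenSSL build."""
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            digests.append(hashlib.new("md4", chunk).digest())
    if len(digests) == 1:
        # files smaller than one chunk: the chunk digest is the hash
        return digests[0].hex()
    return hashlib.new("md4", b"".join(digests)).hexdigest()
```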
Shoko also has rate limits. The problem is that AniDB does rate limiting in an extremely stupid way for a UDP API and doesn’t even have the decency to define clear time limits.
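If you ever roll your own client, the safest bet is to throttle conservatively on your side. A minimal sketch of that idea; the 4-second interval is my own cautious guess, not anything AniDB documents clearly:

```python
import time

class UdpThrottle:
    """Client-side throttle: allow at most one request every
    min_interval seconds. Interval is a placeholder, not an
    official AniDB figure."""

    def __init__(self, min_interval: float = 4.0):
        self.min_interval = min_interval
        self._last_sent = 0.0

    def wait(self) -> None:
        # sleep until enough time has passed since the last request
        elapsed = time.monotonic() - self._last_sent
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_sent = time.monotonic()
```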
The only thing that’s slow is dnf’s repository check and some migration scripts in certain fedora packages. If that’s the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.
I tried using ConnMan to set up a WireGuard connection once. It was not a good experience and ultimately led nowhere due to missing feature support.
If anything, he gets most of his inspiration from macOS.
It’s always been a “whole ass computer”, not some kind of simple storage device.
The joke in the OP stops at the beginning of the joke explanation. If you just share your honest opinion like that in a shitposting community, you can’t expect everyone to “play along” with your “joke”.
Pretty sure that the registry path for official images is “library” (at least it used to be). So it should be “docker.io/library/debian”, though I can’t double check at the moment.
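If you want to sanity-check it, something along these lines with the Docker Python SDK should work (image name and tag here are just examples):

```python
import docker  # Docker SDK for Python, assuming it's installed

client = docker.from_env()
# Official images live under the "library" namespace on Docker Hub,
# so the fully qualified reference is docker.io/library/<name>.
image = client.images.pull("docker.io/library/debian", tag="bookworm")
print(image.tags)
```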
XML aims to be both human-readable and machine-readable, but manages neither. It’s only really worth it if you actually need the complexity or extensibility; otherwise it’s just a major pain to map XML structures to any sensible type representation. I’ve been forced to work with some of the protocols that people like to present as examples of good XML usage, and I hate every single one of them.
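As a toy illustration of the mapping overhead (made-up schema, not from any real protocol): even a trivial document needs manual, per-field plumbing to land in a typed structure.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

doc = "<user id='42'><name>alice</name><email>alice@example.com</email></user>"

@dataclass
class User:
    id: int
    name: str
    email: str

root = ET.fromstring(doc)
user = User(
    id=int(root.attrib["id"]),          # attributes are always strings
    name=root.findtext("name") or "",   # findtext returns Optional[str]
    email=root.findtext("email") or "",
)
print(user)
```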
Fuck YAML though. That spec is longer and more complex than any other markup language I know of, and it doesn’t have a single fully compliant implementation.
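The classic example, assuming PyYAML (which implements most of YAML 1.1):

```python
import yaml  # PyYAML

# Bare "no" parses as a boolean, not the string "no" -- the "Norway problem".
print(yaml.safe_load("country: no"))    # {'country': False}

# Unquoted version numbers become floats and lose the trailing zero.
print(yaml.safe_load("version: 3.10"))  # {'version': 3.1}
```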