You still had a 4GB memory limit per process, as well as a total memory limit of 64GB (the PAE ceiling). The first one in particular was a problem for Java apps before AMD introduced its 64-bit extensions, and a reason to use Sun servers for that.
I was referring to work setups with the overengineering remark - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into Kubernetes or Terraform or anything like that.
Even then, knowing when not to use k8s or similar tools is often more valuable than deep knowledge of them - a lot of the places where I see k8s used don’t have uptime requirements that warrant the complexity. If something only needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between isn’t the best idea.
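A minimal sketch of that monitoring-plus-redeploy approach - the health URL, inventory path, playbook name and host pattern here are all made-up placeholders, not anything from my actual setup:

    #!/bin/sh
    # Probe the service; curl -f makes HTTP errors exit non-zero.
    if ! curl -sf --max-time 10 "https://app.example.com/health" >/dev/null; then
        # Service is down: re-deploy it from the ansible description.
        ansible-playbook -i inventory/production site.yml --limit app-host
    fi

Run that from cron every few minutes and the working-hours case is covered without any extra orchestration layer in between.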
Everything is deployed via ansible - including nameservices. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
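To illustrate the comment-label idea - the ‘# admin-url:’ convention here is made up for the example, not anything ansible itself knows about:

    # Somewhere in a role or inventory file:
    #   # admin-url: https://grafana.internal.example/ (monitoring)
    # Pull all such labels out of the tree into a quick reference list:
    grep -rh '# admin-url:' roles/ inventory/ | sed 's/.*# admin-url: *//'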
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.
Accessing PowerShell is not the issue - the issue is that Windows is broken, with a sprinkle of bad permission management by the corporations using it. And the bad permission practices are a direct result of how broken Windows is - a while ago I tried using it with a fully unprivileged user, just like I’ve done for decades on UNIX and now Linux. It’s pretty much impossible without privilege elevation prompts every few minutes.
In a proper environment a user should be able to destroy data they’re working with - but not have the ability to alter the operating system.
As a non-Windows user I see that as a good thing. LLMs are not going away - but that kind of nonsense will at least make sure all PCs eventually have cheap and reasonably fast AI acceleration. Which is required for killing off centrally hosted LLMs (plus Nvidia’s cash grabbing).
Intel is well known for requiring a new board for each new CPU generation, even when it’s the same socket. AMD, on the other hand, is known to push a socket to its physical limits before breaking compatibility.
Had to look that lawyer bit up as it just sounded too much like Gravenreuth - and indeed it was.
I nowadays manage my private stuff with the ansible scripts I develop for work - so my own setup is mostly a development environment for work, and therefore doesn’t need to be maintained on private time.
Generally yes, but you still need hardware support (mostly kernel and Mesa). They do upstream their work - but for now you generally want packages built from their git for that.
Also, the installer is very Mac-hardware specific.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it takes registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered), while Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
Not just that - Intel did dual-core CPUs as a response to AMD doing just that, by gluing two cores together. Which is pretty funny when you look at Intel’s 2017 campaign of discrediting Ryzen by calling it a glued-together CPU.
AMD’s Opteron was wiping the floor with Intel’s stuff for years - but not every vendor offered systems with it, as they were getting paid off by Intel. I remember helping a friend build a kernel for one of the first available Opteron setups - that thing was impressive.
And then there’s the whole 64-bit thing, which Intel eventually had to license from AMD.
Most of the big CPU innovations (at least in x86 space) of the last decade came from AMD - and the chiplet design of Ryzen is just another one.
That’s already the friendly variant. Traditional find has a mandatory path as its first argument, so to search the current directory you have to spell out find . yourself.
It also doesn’t know whether the argument really is a path - it just reports that as the likely error. You might simply have messed up quoting an argument.
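To make both points concrete (GNU find versus the traditional behaviour, and how a quoting slip turns into a confusing ‘path’ complaint):

    # GNU find defaults to the current directory when no path is given:
    find -name '*.log'

    # Traditional/POSIX find wants the starting path spelled out:
    find . -name '*.log'

    # Quoting slip: the shell may expand *.log to matching file names
    # before find runs, and find then complains about the extra
    # arguments as if they were misplaced paths.
    find . -name *.log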
Is it a ‘death by quantity’ thing?
Pretty much - those companies rely on open projects to sort it out for them, so they’re essentially scraping open databases and selling the good data they pull from there. That’s why they were complaining about the kernel situation - the required info was already there, you just had to put effort in, so they kept asking for CVEs. Now they’ve got their CVEs - but to profit from them they’d still need to put in the same effort as they would have without CVEs in place.
Short version: a bunch of shitty companies have as their business model reselling open vulnerability databases to companies that want to track security issues - at pretty much zero effort to themselves. So they’ve been bugging the kernel folks to start issuing CVEs and doing impact analysis so they’d have more to sell - and the kernel folks just went “it is the kernel, everything is critical”.
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
Making an exception for one organisation, under pressure from politicians, would be harmful. The BBC has the following policy on neutral reporting:
We don’t use loaded words like “evil” or “cowardly”. We don’t talk about “terrorists”. And we’re not the only ones to follow this line. Some of the world’s most respected news organisations have exactly the same policy.
One fascinating example is an owner who replaced the DC barrel jack with a USB-C port, so they could use USB-PD for external power.
Oddly enough that’s also an example of bad design in that notebook: the barrel jack is soldered in. As a module plugged into the board it would be significantly easier to replace - and would also provide strain relief against power jack abuse. All my old ThinkPads were trivial to move to USB-C PD because they use a separate power jack with an attached cable.
The transparent bottom also isn’t very functional - it’s pretty annoying to remove and put back due to the large number of screws required. For a notebook designed for tinkering I’d have wanted some kind of quick release there. Also annoying is the lack of USB ports on the board - there’s enough space to add a USB hub, but integrating one on the board and providing extra ports would’ve been way more sensible.
The CPU module is also a bit of a mixed bag - the system is pretty much designed around the first module they developed, and later modules don’t fully support the existing ports. I was expecting that, though - many projects trying to offer that kind of modular upgrade path run into it sooner or later, and for a small project with all its teething problems ‘sooner’ was to be expected. It’s still very interesting for some prototyping needs - but that’s mostly companies or very dedicated hackers, not the average Linux user.
Funny timing - I’m currently going through a stack of Sun hardware in my garage, deciding what to keep and what I’ll try to find a good home for (or eventually dispose of).
Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out disks like to have power connected, and no amount of debugging at the software level will help you with that.