Hi, I think EOS uses systemd-boot as its default bootloader, so GRUB may not matter. I don't have much experience with systemd-boot, but for now you could just try to boot from the primary Windows drive by pressing the BIOS key and changing the boot order (or using your motherboard's external drive boot button). This key differs between motherboards, so check Google for yours. As for configuring systemd-boot itself, I'd either Google it (specifying the two drives) or ask GPT. Good luck!
I use GNOME as my primary; it feels really polished and doesn't break or crash. Very modern, but if you want a super-customized experience, you're gonna have a bad time. Extensions break every update and so do themes, so you either wait for the dev to port them or do it yourself. Annoying, so I only use vanilla for now.
Maybe I'll try Plasma, looks cool.
Thanks!
Bit unrelated, but who drew that? The elephant looks sooo cute!!
You said it yourself, advertisers must be kept happy.
A game called Celeste. It's a 2D puzzle platformer.
It gets very fucking hard
Wow, the CEO of an AI company claims it can replace everyone!! It's almost like it's in his best interest to promise the moon to the shareholders to get as much money as possible!
"Your reactor has been temporarily disabled due to license payment issues. Please consult support@mcaffee.com"
a lot
It's really sooo much better. But it lacks in one area: PCVR. SteamVR for Linux feels a bit more janky, but that's not really the main issue.
The issue is that, to stream from a PC to the Quest line of devices, you need Oculus's software, which only runs on Windows.
ALVR exists, but its compression and latency are considerably worse in my experience.
So I have a small separate SSD for Windows :(
bruh I said I was wrong - the hivemind can be so pathetic
I don't know if I get the joke? Could you please explain it, because I don't think(?) you're trying to be racist.
You are correct. Hollywood will simply change up a couple things and then use the assets.
However, I'm still undecided about whether generating AI art should count as human-generated or not. On one hand, people can spend hours, if not days or weeks, perfecting a prompt with different tools like ControlNet, different prompt styles, etc. On the other hand, somebody comes to Midjourney, asks for a picture of a dragon wearing a T-shirt, and immediately gets an image that looks pretty decent. It's probably not exactly what they wanted, but close enough, right? AI gets you 90% of the way to what you want, and the other 10% is the super-hard part that takes forever. Anyway, sorry for dumping my thought process from this comment chain on here xD
Sure! You'll probably want to look at train-text-from-scratch in the llama.cpp project; it runs on pure CPU. The (admittedly sparse) docs should help, and otherwise ChatGPT is a good help if you show it the code. NanoGPT is fine too.
For a dataset, maybe you could train on French Wikipedia, or scrape a French story site or fan fiction or whatever. Wikipedia is probably easiest, since they provide downloadable offline versions that are only a couple of gigs.
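If you go the Wikipedia route, you'll need to turn the XML dump into plain text before training. Here's a rough sketch of how you might stream article text out of a MediaWiki dump with just the Python standard library (the file path is hypothetical, and the wikitext cleanup here is deliberately crude; real projects usually use a dedicated extractor like WikiExtractor):

```python
import re
import xml.etree.ElementTree as ET

def extract_plain_text(dump_path):
    """Yield roughly cleaned article text from a MediaWiki XML dump.

    dump_path is a hypothetical local path to an uncompressed dump,
    e.g. 'frwiki-latest-pages-articles.xml'. Streaming with iterparse
    avoids loading the multi-gig file into memory at once.
    """
    for _event, elem in ET.iterparse(dump_path, events=("end",)):
        # Tags in real dumps carry the export namespace, e.g.
        # '{http://www.mediawiki.org/xml/export-0.10/}text',
        # so match on the tag suffix.
        if elem.tag.endswith("}text") or elem.tag == "text":
            if elem.text:
                # Crude cleanup: replace [[target|label]] links with the label.
                text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", elem.text)
                yield text
            elem.clear()  # free the element so memory stays flat
```

You could then concatenate the yielded strings into one big .txt file and point the trainer at that.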