ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it announces aloud that recording has started). Of course, it won’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
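Something like this (just a sketch - binary and model paths are placeholders and depend on your build; newer whisper.cpp builds name the binary whisper-cli instead of main):

```shell
# rough sketch -- adjust paths to wherever you built whisper.cpp
transcribe_call() {
  # whisper.cpp expects 16 kHz mono WAV, so convert the recording first
  ffmpeg -y -i "$1" -ar 16000 -ac 1 call-16k.wav
  # -otxt writes plain text, -of sets the output name (transcript.txt)
  ./whisper.cpp/main -m ./whisper.cpp/models/ggml-base.en.bin \
    -f call-16k.wav -otxt -of transcript
}
```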
The underlying Whisper models are MIT.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
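Roughly like this (again a sketch - the model path is a placeholder, and older llama.cpp builds call the binary main instead of llama-cli):

```shell
# rough sketch -- point -m at whatever GGUF model you actually use
summarise_transcript() {
  ./llama.cpp/llama-cli -m ./models/some-model.gguf \
    -p "Summarise the following phone call transcript:
$(cat transcript.txt)"
}
```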
You can also write a small bash/python script to make the process a bit more automatic.
It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don’t have to trust any specific third party in this case.
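For reference, the server side is just a couple of lines in torrc (the directory and ports here are examples - e.g. exposing a local SSH server):

```
# /etc/tor/torrc
HiddenServiceDir /var/lib/tor/ssh_service/
HiddenServicePort 22 127.0.0.1:22
```

After restarting tor, the .onion address appears in the hostname file inside HiddenServiceDir. Bridges are a client-side setting (UseBridges 1 plus Bridge lines).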
If by config prompt you mean the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) found that system/user roles don’t matter much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
I don’t know much about the stochastic parrot debate. Is my position a common one?
In my understanding, current language models don’t have any understanding or reflection themselves, but the probabilistic distributions of the languages they learn do - at least to some extent. In this sense, there’s some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could before, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.
xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.
CVEs are constantly found in complex software - that’s why security updates are important. If not these vulnerabilities, it would have been others a couple of weeks or months later. And government users can’t exactly opt out of security updates, even if they come with feature regressions.
You also shouldn’t keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
Very cool and impressive, but I’d rather be able to share arbitrary files.
And it looks like you can only send images in DMs, but not in groups/forums.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
8x7B is Mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
Have been using llama.cpp, whisper.cpp, and Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and an SSH server.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (via ROCm) and some DDR5 RAM. ROCm can be tricky to set up with various libraries and inference engines, but after that it just works. I don’t rent hardware - I don’t want any data to leave my machine.
My use isn’t intensive enough to warrant measuring energy costs.
After shopping for solutions online, I cleared CMOS via the button on the mobo. I hoped it would either help GRUB recognise the keyboard, or at least disable fast boot. But after powering the PC on again, my screen stays blank and the DRAM and BOOT indicator LEDs are lit.
I’ve had to boot from a USB stick and regenerate UEFI entries after things like that. Though in my case it explicitly said it couldn’t boot.
What does your motherboard’s manual say about this pattern of LEDs?
Try booting a live OS and running memtest? (disconnect all bootable drives first)
Can you double-check your keyboard works with other devices?
The article isn’t about automatic proofs, but it’d be interesting to see an LLM that can write formal proofs in Coq/Lean/whatever and call external computer algebra systems like SageMath or Mathematica.
Maybe not as important, but I still like having a fancy futuristic animation when a device is locked and idle.
I hope Wayland gets screensaver support at some point.
I see, thanks. Will check. I just thought perhaps you figured out something other than those from your experience.
Well, that’s exactly what I did. My point was rather that there’s no single consistent way to do this across different DEs with different Wayland implementations - and that’s apparently considered a feature of Wayland’s design.
LLaMA can’t. Chameleon and similar ones can: