• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • I grew up with CRTs and VCRs, hard pass. There’s a certain nostalgia to it all: the bum-DOOON sound as the electron gun warmed up, the smell of ozone and the tingly sensation that got exponentially stronger the closer you were, crusty visuals… But they were objectively orders of magnitude worse than what we have now, if for no other reason than that modern sets don’t weigh 150 pounds or make you wonder whether watching Rugrats in Paris for the 30th time on that monster is giving you cancer. Maybe it’s because I’m a techie, but I’ve never really had much issue with “smart” TVs. Sure, apps will slow down or crash because of memory leaks and it’s not as customizable as I’d like, but I might be satisfied just knowing that if push comes to shove I can plug in a spare computer and use the TV as a monitor for a media system.

    I’m rooting it if it starts serving me out-of-band ads, though.


  • This is less an issue of “smartness” and more that analog signals degrade gracefully whereas digital signals are all-or-nothing unless specific mitigations are put in place. HDMI hits kind of a weird spot because it’s a digital protocol built around analog-style scanlines; if the signal gets disrupted for 0.02 ms, it might only affect the upper half of the frame and maybe shift the bits for the lower half. Digital is more contextual and resynchronizes at least every frame, so this kind of degradation is also unstable.
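
    A toy sketch of the difference (this is not a model of HDMI/TMDS specifically, just the graceful-vs-brittle point, with made-up noise numbers):

    ```python
    # Same luminance value sent as an analog level vs. an 8-bit digital word,
    # with comparable channel noise. Analog drifts a little; a flipped high bit
    # in the digital word swings the value wildly, and without error correction
    # there's no in-between.
    import random

    def analog_degrade(value: float, noise: float) -> float:
        """Analog: noise nudges the value, so the picture gets grainy but stays watchable."""
        return min(1.0, max(0.0, value + random.gauss(0.0, noise)))

    def digital_degrade(value: float, bit_error_rate: float) -> float:
        """Digital: each bit either arrives intact or flips."""
        word = int(value * 255)
        for bit in range(8):
            if random.random() < bit_error_rate:
                word ^= 1 << bit
        return word / 255

    random.seed(0)
    v = 0.5
    print("analog :", [round(analog_degrade(v, 0.02), 3) for _ in range(5)])
    print("digital:", [round(digital_degrade(v, 0.02), 3) for _ in range(5)])
    ```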



  • There are a lot of papers which propose adding new tokens to elicit some behavior or other, though I haven’t seen them catch on for some reason. A new token means adding a new trainable static vector which would initially be nonsensical, and you would want to retrain it on a comparably sized corpus. This is a bit speculative, but I think introducing a token totally orthogonal to the original domain (something like e.g. smell, which has no textual analog) would require compressing some of the dimensions to make room for that subspace; otherwise the model would have a form of synesthesia, relating that token to the original neighboring subspaces. If it’s just a new token still within the original domain, though, you can get a good enough initial approximation from a linear combination of existing token embeddings - e.g. a monkey-with-a-hat emoji comes out, you initialize it as monkey emoji + hat emoji, then finetune it.

    The most extreme option: you could increase the embedding dimensionality so the original subspaces are unaffected and the new tokens can take up the new dimensions. This is extreme because it means resizing every matrix in the model, which even for smaller models adds many thousands of parameters, and performance would tank until it got a lot more retraining.
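
    For the linear-combination initialization above, a hedged sketch with Hugging Face transformers might look like this (the model name and the <monkey_with_hat> token string are just placeholders):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM works the same way
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # 1. Register the new token and grow the embedding matrix by one row.
    new_token = "<monkey_with_hat>"
    tokenizer.add_tokens([new_token])
    model.resize_token_embeddings(len(tokenizer))

    # 2. Initialize the new row as a linear combination (here just the mean) of
    #    related existing token embeddings instead of leaving it random.
    embeddings = model.get_input_embeddings().weight
    related_ids = tokenizer("monkey hat", add_special_tokens=False)["input_ids"]
    new_id = tokenizer.convert_tokens_to_ids(new_token)
    with torch.no_grad():
        embeddings[new_id] = embeddings[related_ids].mean(dim=0)

    # 3. From here you'd finetune on text that actually uses the new token.
    ```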

    (deleted original because I got token embeddings and the embedding dimensions mixed up, essentially assuming a new token would use the “extreme option”).



  • LLMs are not expert systems, unless you characterize them as expert systems in language, which is fair enough. My point is that they’re applicable to a wide variety of tasks, which makes them general intelligences, as opposed to an expert system, which by definition can only do a handful of tasks.

    If you wanted to use an LLM as an expert system (I guess in the sense of an “expert” in that task, rather than a system which literally can’t do anything else), I would say they currently struggle with that. Bare foundation models don’t seem to have the sort of self-awareness or metacognitive capabilities that would be required to constrain them to their given task, and arguably never will, because they necessarily can only “think” on one “level”, which is the predicted text. To get that sort of ability you need cognitive architectures, of which chatbot implementations like ChatGPT are a very simple version. If you want to learn more about what I mean, the most promising idea I’ve seen is the ACE framework. Frameworks like this can let the system automatically look up an obscure disease based on the embedding distance to a particular query, so even if you give it a disease which only appears in the literature after its training cut-off date, it knows the disease exists (and is a likely candidate) by virtue of it appearing in its prompt. Something like “You are an expert in diseases yadda yadda. The symptoms of the patient are x y z. This reminds you of these diseases: X (symptoms 1), Y (symptoms 2), etc. What is your diagnosis?” Then you could feed that answer to a critique prompt, and repeat until it reports no issues with the diagnosis. You can even make it “learn” by using LoRA, or by keeping notes it writes to itself.
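
    A rough sketch of that retrieve-then-critique loop (embed and complete are stand-ins for whatever embedding model and LLM you’re using; the disease “database” is just a dict of symptom descriptions):

    ```python
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: call your embedding model here."""
        raise NotImplementedError

    def complete(prompt: str) -> str:
        """Placeholder: call your LLM here."""
        raise NotImplementedError

    def retrieve(symptoms: str, diseases: dict[str, str], k: int = 3) -> list[str]:
        """Return the k diseases whose symptom descriptions are closest to the query."""
        q = embed(symptoms)
        q = q / np.linalg.norm(q)
        sims = {}
        for name, description in diseases.items():
            d = embed(description)
            sims[name] = float(q @ (d / np.linalg.norm(d)))
        return sorted(sims, key=sims.get, reverse=True)[:k]

    def diagnose(symptoms: str, diseases: dict[str, str], max_rounds: int = 3) -> str:
        """Retrieve candidate diseases, draft a diagnosis, then critique until it passes."""
        candidates = retrieve(symptoms, diseases)
        context = "\n".join(f"- {name}: {diseases[name]}" for name in candidates)
        answer = complete(
            f"You are an expert in diseases. The patient's symptoms are: {symptoms}\n"
            f"This reminds you of these diseases:\n{context}\nWhat is your diagnosis?"
        )
        for _ in range(max_rounds):
            critique = complete(f"Find problems with this diagnosis, or reply OK:\n{answer}")
            if critique.strip().upper() == "OK":
                break
            answer = complete(f"Revise the diagnosis given this critique:\n{critique}\n\nDiagnosis:\n{answer}")
        return answer
    ```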

    As for poorer data distributions, the magic of large language models (before which we just had “language models”) is that we’ve found that the larger we make them, and the more (high quality) data we feed them, the more intelligent and general they become. For instance, training them on languages other than English somehow lets them make more robust generalizations even just within English. There are a few papers I can recall which talk about a “phase transition” that happens during training: beforehand, the model seems to be literally memorizing its corpus, and afterwards (to anthropomorphize a bit) it suddenly “gets” it, and that memorization is compressed into generalized understanding. This is why LLMs are applicable to more than just what they’ve been taught - you can e.g. give them rules to follow within the conversation which they’ve never seen before, and they’re able to maintain that higher-order abstraction because of that rich generalization. This is also a major reason open source models, particularly quantizations and distillations, are so successful: the models they’re based on did the hard work of extracting higher-order semantic/geometric relations, so making the model smaller has minimal impact on performance.


  • LLMs are not chatbots, they’re models. ChatGPT/Claude/Bard are chatbots which use LLMs as part of their implementation. I would argue in favor of the article because, while they aren’t particularly intelligent, they are general-purpose and exhibit some level of intelligence, and thus qualify as “general intelligence”. Compare this to the opposite, an expert system like a chess computer. You can’t even begin to ask a chess computer to explain what a SQL statement does; the question doesn’t even make sense. But LLMs are capable of being applied to virtually any task which can be transcribed. Even if they aren’t particularly good, compared to GPT-2, which read more like a Markov chain, they at least attempt to complete the task and are often correct.



  • The fact that they can perform at all on essentially any task means they’re general intelligences. For comparison, the opposite of a general intelligence is an expert system, like a chess computer. You can’t even begin to ask a chess computer to classify the valence of a tweet; the question doesn’t make sense.

    I think people (including myself until reading the article) have taken AGI to mean “as smart as humans” or even “artificial person”, neither of which actually follows when you break down the term. What is general intelligence if not applicability to a broad range of tasks?



  • Actually a really interesting article which makes me rethink my position somewhat. I guess I’ve unintentionally been promoting LLMs as AGI since GPT-3.5 - the problem is just with our definitions and how loose they are. People hear “AGI” and assume it would look and act like an AI in a movie, but if we break down the phrase, what is general intelligence if not applicability to most domains?

    This very moment I’m working on a library for creating “semantic functions”, which lets you easily use an LLM almost like a semantic processor. You say await infer(f"List the names in this text: {text}") and it just does it. What most of the hype has ignored with LLMs is that they are not chatbots. They are causal autoregressive models of the joint probabilities of how language evolves over time, which is to say they can be used to build chatbots, but that’s the first and least interesting application.
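
    For a sense of what I mean, here’s a minimal sketch of the idea (not my actual library; call_llm stands in for whatever completion API you use, with a canned reply so the sketch runs):

    ```python
    import asyncio

    async def call_llm(prompt: str) -> str:
        """Stand-in for a real completion API; returns a fixed reply so the sketch runs."""
        return "Alice, Bob, Carol"

    async def infer(prompt: str) -> str:
        """Treat the LLM as a semantic processor: one prompt in, one answer out."""
        return (await call_llm(prompt)).strip()

    async def main() -> None:
        text = "Alice met Bob and Carol at the station."
        names = await infer(f"List the names in this text, comma-separated: {text}")
        print(names)  # e.g. "Alice, Bob, Carol"

    asyncio.run(main())
    ```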

    So yeah, I guess it’s been AGI this whole time and I just didn’t realize it because they aren’t people, and I had assumed AGI implied personhood (which it doesn’t).



  • I’m an AI nerd, and yes, nowhere close. AI can write code snippets pretty well, and that’ll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a 1-man billion dollar company asks his super-AI to “build the next Twitter”, that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.

    Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together, which got pretty good results, but I wouldn’t trust that without a lot more research. I’ve found that individual LLMs given a task tend to get tunnel vision, so they could easily get stuck in a loop trying the same wrong code or design repeatedly.

    (I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)