I think AI is neat.

  • pachrist@lemmy.world · 8 months ago

    If an LLM is just regurgitating information in a learned pattern and therefore it isn’t real intelligence, I have really bad news for ~80% of people.

  • AwkwardLookMonkeyPuppet@lemmy.world · 8 months ago

    I think AI is the single most powerful tool we’ve ever invented, and it is already changing the world and will keep doing so. But you’ll get nothing but hate and “iTs Not aCtuaLly AI” replies here on Lemmy.

    • naevaTheRat@lemmy.dbzer0.com · 8 months ago

      Umm penicillin? anaesthetic? the Haber process? the transistor? the microscope? steel?

      I get it, the models are new and a bit exciting, but GPT won’t make it so you can survive surgery, or make rocks take the jobs of computers.

      • GeneralVincent@lemmy.world · 8 months ago

        Very true and valid. Though, devil’s advocate for a moment: AI is great at discovering new ways to survive surgery and other cool stuff. Of course it uses the existing scientific discoveries to do that, but still. It could be the tool that finds the next big thing to join the penicillin, anaesthesia, Haber process, transistor, microscope, steel list, which is pretty cool.

        • naevaTheRat@lemmy.dbzer0.com · 8 months ago

          Is it? This seems like a big citation needed moment.

          Have LLMs been used to make big strides? I know some trials are going on aiding doctors in diagnosis and stuff, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis also do that really), and they come with their own risks, and we haven’t seen widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful at the moment, or is it all still hype?

          We’ve had algorithms help find new drugs and stuff, or plot out synthetic routes for novel compounds, and we can run DFT simulations to help determine whether we should try to make a material. These things have been helpful but not revolutionary, and I’m not sure why LLMs would be. I actually worry they’ll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem) or by extremely convincingly lying about or omitting something when used to help with a literature review.

          Why do you think LLMs will revolutionise science?

          • GeneralVincent@lemmy.world · 8 months ago

            why do you think LLMs will revolutionise science

            Idk it probably won’t. That wasn’t exactly what I was saying, but I’m also not an expert in any scientific field so that’s my bad for unintentionally contributing to the hype by implying AI is more capable than it currently is or has the potential to be

            • naevaTheRat@lemmy.dbzer0.com · 8 months ago

              Fair enough. I used to be a scientist (a very bad one that never amounted to anything) and my perspective has been that the major barriers to progress are:

              • We’ve already picked all the low-hanging fruit
              • Science education isn’t available to many people, so perspectives are quite limited
              • Power structures are exploitative and ossified, driving away many people
              • Industry has too much influence; there isn’t much appetite to fund blue-sky projects without obvious short-term money-earning applications
              • Patents slow progress
              • Publish-or-perish incentivises excessive volumes of publication, fraud, and splitting discoveries into multiple papers, which increases the burden on researchers to stay current
              • Nobody wants to pay scientists, so bright people end up elsewhere
            • naevaTheRat@lemmy.dbzer0.com · 8 months ago

              This seems like splitting hairs. AGI doesn’t exist, so that can’t be what they mean. “AI” applies to everything from pathing algorithms for library robots to computer vision, and none of those seem to apply.

              The context of this post is LLMs and their applications

              • A_Very_Big_Fan@lemmy.world · 8 months ago

                The comment you replied to first said “AI”, not “LLMs”. And he even told you himself that he didn’t mean LLMs.

                I’m not saying he’s right, though, because afaik AI hasn’t made any noteworthy progress in medical science (although a quick skim through Google suggests there may have been some). I’m just saying that’s clearly not what he said.

  • Adalast@lemmy.world · 8 months ago

    Ok, but so do most humans? So few people actually have true understanding in topics. They parrot the parroting that they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary “correct” way to answer that question and it is still an incorrect parroting of something they have been told.

    Ask yourself: what do you actually understand? How many topics could you be asked “why?” about repeatedly and actually be able to answer more than 4 or 5 times? I know I have a few. I also know what I am not able to do that with.

    • Daft_ish@lemmy.world · edited · 8 months ago

      I don’t think actual parroting is the problem. The problem is they don’t understand a word outside of how it is organized. They can’t be told to do simple logic because they don’t have a simple understanding of each word in their vocabulary. They can only reorganize things to varying degrees.

      • fidodo@lemmy.world · 8 months ago

        It doesn’t need to understand the words to perform logic, because the logic was already performed by humans who encoded their knowledge into words. It’s not reasoning, but the reasoning was already done by humans. It’s not perfect, of course, since it’s still based on probability, but the fact that it can pull the correct sequence of words to exhibit logic is incredibly powerful. The main hard part of working with LLMs is that they break randomly, so harnessing their power will be a matter of programming in multiple levels of safeguards.
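
A minimal sketch of that “multiple levels of safeguards” idea, with a stubbed-out function standing in for a real LLM call (the function names and the simulated failure behaviour here are invented for illustration):

```python
import json

def flaky_llm(prompt):
    """Stand-in for a real LLM call: returns malformed JSON on the first
    attempt and valid JSON afterwards, simulating random breakage."""
    flaky_llm.calls += 1
    if flaky_llm.calls == 1:
        return "Sure! Here is the JSON you asked for: {oops"
    return '{"answer": 42}'
flaky_llm.calls = 0

def guarded_query(prompt, validate, retries=3):
    """One safeguard layer: validate the model's output and retry on failure."""
    for _ in range(retries):
        raw = flaky_llm(prompt)
        try:
            return validate(raw)
        except ValueError:  # json.JSONDecodeError is a subclass of ValueError
            continue
    raise RuntimeError("model kept returning invalid output")

result = guarded_query('Reply with JSON of the form {"answer": <int>}', json.loads)
print(result)  # {'answer': 42}
```

A real harness would stack further checks (schema validation, content filters, human review) on top of the retry loop.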

        • Daft_ish@lemmy.world · 8 months ago

          I guess, I’m just looking at it from an end-user vantage point. I’m not saying the model can’t understand the words it’s using. I just don’t think it currently understands that specific words refer to real-life objects, and that there are laws of physics that apply to those specific objects and how they interact with each other.

          Like, saying a guy exists and is a historical figure means that information is independently verified by physical objects that exist in the world.

          • Adalast@lemmy.world · 8 months ago

            In some ways, you are correct. It is coming, though. The psychological/neurological word you are searching for is “conceptualization”. The AI models lack the ability to abstract the text they know into the abstract ideas of the objects, at least in the same way humans do. Technically, the ability to say “show me a chair” and have it return images of a chair, then follow up with “show me things related to the last thing you showed me” and have it show couches, butts, tables, etc. is a conceptual abstraction of a sort. The issue comes when you ask “why are those things related to the first thing?” It is coming, but it will be a little while before it is able to describe the abstraction it just performed, even though it is capable of the first stage at least.

    • Blackmist@feddit.uk · 8 months ago

      I feel that knowing what you don’t know is the key here.

      An LLM doesn’t know what it doesn’t know, and that’s where what it spouts can be dangerous.

      Of course there are a lot of actual people that applies to as well. And sadly they’re often in positions of power.

      • KeenFlame@feddit.nu · 8 months ago

        There are more than a couple research agents in development

        We need something that can fact-check in real time without error; that would fuck Twitter up lol

    • bruhduh@lemmy.world · 8 months ago

      Few people truly understand what understanding even means. I had a teacher in college who seriously thought that you should not understand the content of the lessons but simply remember it to the letter.

      • Adalast@lemmy.world · 8 months ago

        I am so glad I had one that was the opposite. I discussed practical applications of the subject material with him after class, and at the end of the semester he gave me a B+ even though I had only earned a C by score, because I actually grasped the material better than anyone else in the class, even if I was not able to demonstrate it as well on the tests.

    • Ramblingman@lemmy.world · 8 months ago

      This is only one type of intelligence, and LLMs are already better than humans at regurgitating facts. But I think people really underestimate how smart the average human is. We are incredible problem solvers, and AI can’t even match us in something as simple as driving a car.

      • Adalast@lemmy.world · 8 months ago

        Lol @ driving a car being simple. That is one of the more complex sensory-somatic tasks that humans do. You have to track the speed of all vehicles in front of you, assess collision probabilities, monitor for non-vehicle obstructions (like people, animals, etc.), adjust the accelerator to maintain your own velocity while terrain changes, be alert to any functional changes in your vehicle and be ready to adapt to them, and maintain a running inventory of the laws which apply to you at a given time and be sure to follow them. Hell, that is not even an exhaustive list for a sunny day under the best conditions. Driving is fucking complicated. We have all just formed strong and deeply connected pathways in our somatosensory and motor cortices to automate most of the tasks. You might say it is a very well-trained neural network with hundreds to thousands of hours spent refining and perfecting the responses.

        The issue that AI has right now is that we are only running 1 to 3 sub-AIs to optimize and calculate results. Once that number goes up, they will be capable of a lot more. For instance: one AI for finding similarities, one for categorizing them, one for mapping them into a use-case hierarchy to determine when certain use cases apply, one to analyze structure, one to apply human kineodynamics to the structure, and a final one to analyze the effectiveness of the kineodynamic use cases when done by a human. This would be a structure that could be presented with an object, told that humans use it, and the AI brain would piece together possible uses for the tool and describe them back to the presenter with instructions on how to do so.
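
The chained sub-AI idea reads like a staged pipeline, where each stage consumes the previous stage's output. A toy sketch, with every stage a hand-written stub standing in for a separate model (all names, categories, and outputs here are invented):

```python
# Toy sketch of chaining specialised sub-models: each stage here is a
# hard-coded stub standing in for a separate model in the chain.

def find_similar(obj):
    """Stage 1: find objects similar to the presented one (stubbed)."""
    return {"object": obj, "similar_to": ["hammer"]}

def categorise(state):
    """Stage 2: categorise based on the similarities (stubbed)."""
    return {**state, "category": "hand tool"}

def propose_uses(state):
    """Stage 3: propose how a human might use the object (stubbed)."""
    return {**state, "uses": ["striking", "prying"]}

PIPELINE = [find_similar, categorise, propose_uses]

def analyse(obj):
    """Run the object through every stage, each building on the last."""
    state = obj
    for stage in PIPELINE:
        state = stage(state)
    return state

print(analyse("mallet"))
```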

  • KeenFlame@feddit.nu · 8 months ago

    I’ve been destroyed for this opinion here. Not many practitioners here, just laymen, and mostly techbros in this field… But maybe I haven’t found the right node?

    I’m into local diffusion models and open source llms only, not into the megacorp stuff

    • webghost0101@sopuli.xyz · edited · 8 months ago

      If anything, people really need to start experimenting beyond talking to it like it’s human, or in a few years we will end up with a huge AI-illiterate population.

      I’ve had someone stubbornly fight me, talking about local LLMs as “an overhyped downloadable chatbot app” and saying the people on fossai are just a bunch of AI-worshipping fools.

      I was like: tell me you know absolutely nothing about what you’re talking about while pretending to know everything.

      • KeenFlame@feddit.nu · 8 months ago

        But the thing is, it’s really fun and exciting to work with, and the open source community is extremely nice and helpful, one of the least toxic fields I have dabbled in! It’s very fun to test parameters and tools and write code chains to try different stuff, and it’s come a long way. It’s rewarding too, because you get really fun responses.

        • Fudoshin ️🏳️‍🌈@feddit.uk · 8 months ago

          Aren’t the open source LLMs still censored though? I read someone make an off-hand comment that one of the big ones (OLLAMA or something?) was censored past version 1 so you couldn’t ask it to tell you how to make meth?

          I don’t wanna make meth but if OSS LLMs are being censored already it makes having a local one pretty fucking pointless, no? You may as well just use ChatGPT. Pray tell me your thoughts?

          • webghost0101@sopuli.xyz · 8 months ago

            Depends who made the model and how. Llama is a Meta product and it’s genuinely really powerful (I wonder where Zuckerberg gets all the data for it).

            Because it’s powerful, you see many people use it as a starting point to develop their own AI ideas and systems. But it’s not the only decent open source model, and the innovations that work for one model often work for all the others, so it doesn’t matter in the end.

            Every single model used now will be completely outdated and forgotten in a year or two. Even GPT-4 and Gemini.

          • Kittenstix@lemmy.world · 8 months ago

            Could be legal issues: if an LLM tells you how to make meth but gets a step or two wrong and it results in your death, there might be a case for the family to sue.

            But I also don’t know what all you mean when you say censorship.

            • Fudoshin ️🏳️‍🌈@feddit.uk · 8 months ago

              But i also don’t know what all you mean when you say censorship.

              It was literally just that. The commenter I saw said something like “it’s censored after ver 1 so don’t expect it to tell you how to cook meth.”

              But when I hear the word “censored” I think of all the stuff ChatGPT refuses to talk about. It won’t write jokes about protected groups and VAST swathes of stuff around it. Like asking it to define “fag-got” can make it cough and refuse even though it’s a British food-stuff.

              Blocking anything sexual - so no romantic/erotica novel writing.

              The latest complaint about ChatGPT is its laziness, which I can’t help feeling is due to over-zealous censorship. Censorship doesn’t just block the specific things but entirely innocent things too (see fag-got above).

              Want help writing a book about Hitler being seduced by a Jewish woman, with BDSM scenes? No chance. No talking about Hitler, sex, Jewish people, or BDSM. That’s censorship.

              I’m using these as examples - I’ve no real interest in these but I am affected by annoyances and having to reword requests because they’ve been mis-interpreted as touching on censored subjects.

              Just take a look at r/ChatGPT and you’ll see endless posts by people complaining they triggered its censorship with asinine prompts.

              • Kittenstix@lemmy.world · 8 months ago

                Oh ok, then yea that’s a problem, any censorship that’s not directly related to liability issues should be nipped in the bud.

    • Redacted@lemmy.world · 8 months ago

      Have you ever considered you might be, you know, wrong?

      No sorry you’re definitely 100% correct. You hold a well-reasoned, evidenced scientific opinion, you just haven’t found the right node yet.

      Perhaps a mental gymnastics node would suit sir better? One without all us laymen and tech bros clogging up the place.

      Or you could create your own instance populated by AIs where you can debate them about the origins of consciousness until androids dream of electric sheep?

      • KeenFlame@feddit.nu · 7 months ago

        Do you even understand my viewpoint?

        Why only personal attacks and nothing else?

        You obviously have hate issues, which is exactly why I have a problem with techbros explaining why llms suck.

        They haven’t researched them or understood how they work.

        It’s a fucking incredibly fast developing new science.

        Nobody understands how it works.

        It’s so silly to pretend to know how bad it works when people working with them daily discover new ways the technology surprises us. Idiotic to be pessimistic about such a field.

        • Redacted@lemmy.world · edited · 7 months ago

          You obviously have hate issues

          Says the person who starts chucking out insults the second they get downvoted.

          From what I gather, anyone that disagrees with you is a tech bro with issues, which is quite pathetic to the point that it barely warrants a response but here goes…

          I think I understand your viewpoint. You like playing around with AI models and have bought into the hype so much that you’ve completely failed to consider their limitations.

          People do understand how they work; it’s clever mathematics. The tech is amazing and will no doubt bring numerous positive applications for humanity, but there’s no need to go around making outlandish claims like they understand or reason in the same way living beings do.

          You consider intelligence to be nothing more than parroting which is, quite frankly, dangerous thinking and says a lot about your reductionist worldview.

          You may redefine the word “understanding” and attribute it to an algorithm if you wish, but myself and others are allowed to disagree. No rigorous evidence currently exists that we can replicate any aspect of consciousness using a neural network alone.

          You say pessimistic, I say realistic.

          • KeenFlame@feddit.nu · 7 months ago

            Haha it’s pure nonsense. Just do a little digging instead of doing the exact guesstimation I am talking about. You obviously don’t understand the field

            • Redacted@lemmy.world · 7 months ago

              Once again not offering any sort of valid retort, just claiming anyone that disagrees with you doesn’t understand the field.

              I suggest you take a cursory look at how to argue in good faith, learn some maths and maybe look into how neural networks are developed. Then study some neuroscience and how much we comprehend the brain and maybe then we can resume the discussion.

              • KeenFlame@feddit.nu · 7 months ago

                You attack my viewpoint, but misunderstood it. I corrected you. Now you tell me I am wrong with my viewpoint (I am not btw) and start going down the idiotic path of bad faith conversation, while strawman arguing your own bad faith accusation, only because you are butthurt that you didn’t understand. Childish approach.

                You don’t understand, because no expert currently understands these things completely. It’s pure nonsense defecation coming out of your mouth

                • Redacted@lemmy.world · 7 months ago

                  You don’t really have one lol. You’ve read too many pop-sci articles from AI proponents and haven’t understood any of the underlying tech.

                  All your retorts boil down to copying my arguments because you seem to be incapable of original thought. Therefore it’s not surprising you believe neural networks are approaching sentience and consider imitation to be the same as intelligence.

                  You seem to think there’s something mystical about neural networks but there is not, just layers of complexity that are difficult for humans to unpick.

                  You argue like a religious nutjob or Trump supporter. At this point it seems you don’t understand basic logic or how the scientific method works.

    • Gabu@lemmy.ml · 8 months ago

      That was never the goal… You might as well say that a bowling ball will never be effectively used to play golf.

      • Jack@slrpnk.net · 8 months ago

        I agree, but it’s so annoying when you work in IT and your non-IT boss thinks AI is the solution to every problem.

        At my previous work I had to explain to my boss at least once a month why we can’t have AI diagnosing patients (at a dental clinic) or reading scans or proposing dental plans… It was maddening.

        • Daefsdeda@sh.itjust.works · 8 months ago

          I find that these LLMs are great tools for a professional. So no, you still need the professional, but it is handy if an AI would say “please check these places”. A tool, not a replacement.

    • Kedly@lemm.ee · 8 months ago

      Next you’ll tell me that the enemies that I face in video games arent real AI either!

    • Klear@lemmy.world · 8 months ago

      P-Zombies, all of them. I happen to be the only one to actually exist. What are the odds, right? But it’s true.

  • inb4_FoundTheVegan@lemmy.world · 8 months ago

    As someone who loves Asimov and has read nearly all of his work:

    I absolutely bloody hate calling LLMs AI. Without a doubt they are neat, but they are nothing in the ballpark of AI, and that’s okay! They weren’t trying to make a synthetic brain; it’s just the cultural narrative I am most annoyed at.

    • Dagwood222@lemm.ee · 8 months ago

      I look at all these kids glued to their phones and I ask “Where’s the Frankenstein Complex now that we really need it?”

  • poke@sh.itjust.works · 8 months ago

    Knowing that LLMs are just “parroting” is one of the first steps to implementing them in safe, effective ways where they can actually provide value.

    • KᑌᔕᕼIᗩ@lemmy.ml · 8 months ago

      LLMs definitely provide value; it’s just debatable whether they’re real AI or not. I believe they’re going to be shoved into a round hole regardless.

    • fidodo@lemmy.world · 8 months ago

      I think a better way to view it is as a search engine that works at the word level of granularity. When library indexing systems were invented they allowed us to look up knowledge at the book level. Search engines allowed lookups at the document level. LLMs allow lookups at the word level, meaning all previously transcribed human knowledge can be synthesized into a response. That’s huge, and where it becomes extra huge is that it can also pull on programming knowledge, allowing it to metaprogram and perform complex tasks accurately. You can also hook them up with external APIs so they can do more tasks. What we have is basically a program that can write itself based on the entire corpus of human knowledge, and that will have a tremendous impact.
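
The “hook them up with external APIs” part can be sketched as a tool-calling loop: the model emits text naming a tool, the harness executes it and feeds the result back. Everything here is a stub invented for illustration, including the model function and the `CALL` protocol:

```python
# The only "tool" available to the toy model below.
TOOLS = {
    "add": lambda a, b: a + b,
}

def fake_llm(prompt):
    """Stand-in model: 'decides' to call the add tool for arithmetic,
    then answers in prose once it sees the tool's result."""
    if "TOOL_RESULT" in prompt:
        return "The answer is 5."   # final reply, grounded in the tool output
    return "CALL add 2 3"           # tool request, emitted as plain text

def run_with_tools(user_prompt):
    """Harness: execute any tool the model requests, loop until prose."""
    reply = fake_llm(user_prompt)
    while reply.startswith("CALL "):
        _, name, *args = reply.split()
        result = TOOLS[name](*map(int, args))
        reply = fake_llm(f"{user_prompt}\nTOOL_RESULT: {result}")
    return reply

print(run_with_tools("What is 2 + 3?"))  # The answer is 5.
```

Real systems use structured function-calling formats rather than plain text, but the loop shape is the same.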

    • KeenFlame@feddit.nu · edited · 8 months ago

      The next step is to understand much more and not get stuck on the most popular semantic trap

      Then you can begin your journey man

      There are so, so many LLM chains that do way more than parrot. It’s just the latest popular talking point.

  • WallEx@feddit.de · 8 months ago

    They’re predicting the next word without any concept of right or wrong; there is no intelligence there. And it shows the second they start hallucinating.
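
The “predicting the next word” mechanism can be shown with a toy bigram model. Real LLMs use neural networks over subword tokens rather than word counts, but the principle of continuing text by observed frequency, with no notion of truth, is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- no meaning, just co-occurrence.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def next_word(word):
    """Pick the most frequent continuation. The model has no idea whether
    the resulting sentence is true, only that the pattern is common."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' ('cat' followed 'the' twice; 'mat'/'fish' once)
```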

    • A_Very_Big_Fan@lemmy.world · 8 months ago

      …yeah dude. Hence ARTIFICIAL intelligence.

      There aren’t any cherries in artificial cherry flavoring either 🤷‍♀️ and nobody is claiming there is

    • LarmyOfLone@lemm.ee · 8 months ago

      They are a bit like taking just the creative-writing center of a human brain. So they are like one part of a human mind, without sentience or understanding or long-term memory. Just the creative part, even though they are mediocre at being creative atm. But it’s shocking, because we kind of expected that to be the last part of human minds to be replicated.

      Put enough of these “parts” of a human mind together and you might get a proper sentient mind sooner than later.

      • Redacted@lemmy.world · 8 months ago

        …or you might not.

        It’s fun to think about, but we don’t understand the brain well enough to extrapolate AIs in their current form to sentience. Even the “parts” of the mind you mention are not clearly defined.

        There are so many potential hidden variables. Sometimes I think people need reminding that the brain is the most complex thing in the universe: we don’t fully understand it yet, and neural networks are just loosely based on the structure of neurons, not an exact replica.

        • LarmyOfLone@lemm.ee · 8 months ago

          True, it’s speculation. But before GPT-3 I never imagined AI achieving creativity. I had no idea how you would do it; I would have said it’s a hard problem, like magic, and poof, now it’s a reality. A huge leap in quality driven just by quantity of data and computing, which was shocking because it turned out to be “so simple”, at least in this case.

          So that should tell us something. We don’t understand the brain, but maybe there isn’t much to understand. It’s relatively clear how the biocomputing hardware works, and it’s all made out of the same stuff. So it stands to reason that the other parts or functions of a brain might also be replicated in similar ways.

          Or maybe not. Or we might need a completely different way to organize and train other functions of a mind. Or it might take a much larger increase in speed and memory.

          • Redacted@lemmy.world · 8 months ago

            You say maybe there’s not much to understand about the brain, but I entirely disagree: it’s the most complex object in the known universe and we haven’t discovered all of its secrets yet.

            Generating pictures from a vast database of training material is nowhere near comparable.

            • LarmyOfLone@lemm.ee · 8 months ago

              Ok, again I’m just speculating so I’m not trying to argue. But it’s possible that there are no “mysteries of the brain”, that it’s just irreducible complexity. That it’s just due to the functionality of the synapses and the organization of the number of connections and weights in the brain? Then the brain is like a computer you put a program in. The magic happens with how it’s organized.

              And yeah we don’t know how that exactly works for the human brain, but maybe it’s fundamentally unknowable. Maybe there is never going to be a language to describe human consciousness because it’s entirely born out of the complexity of a shit ton of simple things and there is no “rhyme or reason” if you try to understand it. Maybe the closest we get are the models psychology creates.

              Then there is fundamentally no difference between painting based on a “vast database of training material” in a human mind and a computer AI. Currently AI generated images is a bit limited in creativity and it’s mediocre but it’s there.

              Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory. Without ever knowing the secrets of the human brain. I’d expect the truth somewhere in the middle of those two perspectives.

              Another argument in favor of this would be that the human brain evolved through evolution, through random change that was filtered (at least if you do not believe in intelligent design). That means there is no clever organizational structure or something underlying the brain. Just change, test, filter, reproduce. The worst, most complex spaghetti code in the universe. Code written by a moron that can’t be understood. But that means it should also be reproducible by similar means.

              • Redacted@lemmy.world · edited · 8 months ago

                Possible, yes. It’s also entirely possible there’s interactions we are yet to discover.

                I wouldn’t claim it’s unknowable. Just that there’s little evidence so far to suggest any form of sentience could arise from current machine learning models.

                That hypothesis is not verifiable at present as we don’t know the ins and outs of how consciousness arises.

                Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory, without ever learning the secrets of the human brain. I’d expect the truth to be somewhere in the middle of those two perspectives.

                Lots of things are possible, we use the scientific method to test them not speculative logical arguments.

                Functions of the brain

                These would need to be defined.

                But that means it should also be reproducible by similar means.

                Can’t be sure of this… For example, what if quantum interactions are involved in brain activity? How does the grey matter in the brain affect the functioning of neurons? How do the heart/gut affect things? Do cells which aren’t neurons provide any input? Does some aspect of consciousness arise from the very material the brain is made of?

                As far as I know all the above are open questions and I’m sure there are many more. But the point is we can’t suggest there is actually rudimentary consciousness in neural networks until we have pinned it down in living things first.

      • WallEx@feddit.de
        link
        fedilink
        arrow-up
        0
        ·
        8 months ago

        Exactly. I’m not saying it’s not impressive or even not useful, but one should understand the limitations. For example, you can’t reason with an LLM in the sense that you could convince it of your reasoning. It will only respond the way most people in the training dataset would have responded (obviously simplified).

        • webghost0101@sopuli.xyz
          link
          fedilink
          arrow-up
          0
          ·
          edit-2
          8 months ago

          You repeat your point, but there was already agreement that this is how AI works now.

          I fear you may have glanced over the second part, where he states that once we simulate other parts of the brain, things start to look very different very quickly.

          There do seem to be two kinds of opinions on AI.

          • those who judge AI as it exists in the present, compared to a present-day human. This seems to be the majority of people overall.

          • those who look at AI like a trend line: where it was in the past, what improved it, and a reasonable projection of where it will be soon. This is the majority of people who work in the AI industry.

          For me, present-day AI is simply practice for what is yet to come. Because if we don’t nuke ourselves back to the stone age, something, currently undefinable, is coming.

          • What I fear is AI being used with malicious intent. Corporations using it to collect data, for example. Or governments jailing everyone an AI tells them to.

            • LarmyOfLone@lemm.ee
              link
              fedilink
              arrow-up
              0
              ·
              8 months ago

              I’d expect governments to use it to craft public-relations strategies, an extension of what they do now by hiring the smartest sociopaths on the planet. Not sure this would work, but I think so. Basically, you train an AI on previous messaging and the results from polls or votes, then train it to suggest strategies that maximize support for X. A kind of dumbification of the masses. Of course it’s only going to get shittier from there on out.

          • WallEx@feddit.de
            link
            fedilink
            arrow-up
            0
            ·
            8 months ago

            I didn’t; I just focused on how it is today. I think it can become very big and threatening, but also helpful. That’s just pure speculation at this point, though :)

    • frezik@midwest.social
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      8 months ago

      I have a silly little model I made for creating Vogon poetry. One of the models is fed Shakespeare. The system works by predicting the next letter rather than the next word (whitespace is just another letter as far as it’s concerned). Here’s one from the Shakespeare generation:


      KING RICHARD II:​

      Exetery in thine eyes spoke of aid.​

      Burkey, good my lord, good morrow now: my mother’s said


      This is silly nonsense, of course, and for its purpose, that’s fine. That said, as far as I can tell, “Exetery” is not an English word, not even one of those made-up English words that Shakespeare coined all the time, and it’s certainly not in the training dataset. However, it does sound like something Shakespeare might have pulled out of his ass and expected his audience to understand through context, and that’s interesting.
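      A letter-by-letter predictor like that can be sketched as a simple order-n Markov chain. This is a simplification of whatever the real model does, and the corpus string here is just a stand-in for the actual training text:

```python
import random
from collections import Counter, defaultdict

# Minimal character-level text generator: predict the next letter from
# the previous `order` letters; whitespace is just another character.
def train(text, order=3):
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=80, order=3):
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:
            break  # context never seen in training
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "to be or not to be that is the question "  # stand-in corpus
model = train(corpus)
print(generate(model, "to "))
```

      Made-up words like “Exetery” fall out naturally: the chain stitches together letter sequences that are each locally plausible, without any notion of a word list.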

      • WallEx@feddit.de
        link
        fedilink
        arrow-up
        0
        ·
        8 months ago

        Wow, sounds amazing, big props to you! Are you planning on releasing the model? I’d be interested tbh :D

  • antidote101@lemmy.world
    cake
    link
    fedilink
    arrow-up
    0
    ·
    8 months ago

    I think LLMs are neat, and Teslas are neat, and HHO generators are neat, and aliens are neat…

    …but none of them live up to all of the claims made about them.

    • ALostInquirer@lemm.ee
      link
      fedilink
      arrow-up
      0
      ·
      8 months ago

      HHO generators

      …What are these? Something to do with hydrogen? It wouldn’t make sense for you to have written it that way if you meant H2O, but I really enjoy the silly idea of a water generator (as in, making water, not running off water).

      • antidote101@lemmy.world
        cake
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        8 months ago

        HHO generators were a car mod some backyard scientists got into, but they didn’t actually work. They involve cracking hydrogen from water to produce an explosive gas that some claimed could make your car run faster. There are lots of YouTube videos of people playing around with them. They seem kind of dangerous… still neat.

  • BaumGeist@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    8 months ago

    I’ve known people in my life who put less mental effort into living their lives than LLMs put into crafting the most convincing lies you’ve ever read.

        • Obi@sopuli.xyz
          link
          fedilink
          arrow-up
          0
          ·
          8 months ago

          Dude you gave me a heart attack, I was like NO WAY that came out in 2004. It didn’t, it was 2014, which is still like probably twice as old as I would’ve guessed but not as bad.

          And yes it is a fantastic movie, go watch it if you haven’t seen it.

  • Saledovil@sh.itjust.works
    link
    fedilink
    arrow-up
    0
    ·
    8 months ago

    I once ran an LLM locally using Kobold AI. It has an option to show the alternative tokens for each token it outputs, along with the probability each one had of being chosen. Seeing this shattered the illusion for me that these things are really intelligent. There’s at least one more thing we need to figure out before we can build an AI that is actually intelligent.

    It’s cool what statistics can do, though.
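    For anyone who hasn’t seen that view: under the hood the model emits a score (logit) per candidate token, and a softmax turns those scores into the probabilities shown next to each alternative. A toy illustration with invented tokens and logits (not Kobold’s actual internals):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and scores for the next token.
candidates = [" cat", " dog", " car", " tree"]
logits = [3.2, 2.9, 1.1, 0.4]

# Print the alternatives ranked by probability, like the Kobold view.
for token, p in sorted(zip(candidates, softmax(logits)), key=lambda t: -t[1]):
    print(f"{token!r}: {p:.1%}")
```

    The output token is just a weighted draw from a list like this, which is exactly why the view is so illusion-shattering.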

    • AlolanYoda@mander.xyz
      link
      fedilink
      arrow-up
      0
      ·
      8 months ago

      That’s actually pretty neat. I tried Kobold AI a few months ago, but the novelty wore off quickly. You made me curious; I’m going to check out that option once I get home. Is it just a toggleable option, or do you have to mess with some hidden settings?