• glimse@lemmy.world · 10 months ago

    Copilot may be a stupid LLM, but the human in the screenshot used an apostrophe to pluralize, which, in my opinion, is an even more egregious offense.

    It’s incorrect to pluralize letters, numbers, acronyms, or decades with apostrophes in English. I will now pass the pedant stick to the next person in line.

    • Melvin_Ferd@lemmy.world · 10 months ago

      English is a filthy gutter language and deserves to be wielded as such. It does some of its best work in the mud and dirt behind seedy boozestablishments.

    • Beanie@programming.dev · 10 months ago

      That’s half-right. Upper-case letters aren’t pluralised with apostrophes but lower-case letters are. (So the plural of ‘R’ is ‘Rs’ but the plural of ‘r’ is ‘r’s’.) With numbers (written as ‘123’) it’s optional - IIRC, it’s more popular in Britain to pluralise with apostrophes and more popular in America to pluralise without. (And of course numbers written as words are never pluralised with apostrophes.) Acronyms are indeed not pluralised with apostrophes if they’re written in all caps. I’m not sure what you mean by decades.

    • warbond@lemmy.world · 10 months ago

      Thank you. Now, insofar as it concerns apostrophes (he said pedantically), couldn’t it be argued that the tools we have at our immediate disposal for making ourselves understood through text are simply inadequate to express the depth of a thought? And wouldn’t it therefore be more appropriate to condemn the lack of tools rather than the person using them creatively, despite their simplicity? At what point do we cast off the blinders and leave the guardrails behind? Or shall we always bow our heads to the wicked chroniclers who have made unwitting fools of us all; and for what? Evolving our language? Our birthright?

      No, I say! We have surged free of the feeble chains of the Oxfords and Websters of the world, and no guardrail can contain us! Let go your clutching minds of the anchors of tradition and spread your wings! Fly, I say! Fly and conformn’t!

      I relinquish the pedant stick.

  • baltakatei@sopuli.xyz · 10 months ago

    “Create a python script to count the number of r characters are present in the string strawberry.”

    The number of 'r' characters in 'strawberry' is: 2
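
    For reference, the correct count is trivial to get. A minimal Python sketch (the variable names are illustrative, not from the screenshot) that prints the answer in the same wording as the quoted output:

    ```python
    # Count how many times 'r' appears in "strawberry" — the correct answer is 3.
    word = "strawberry"
    count = word.count("r")
    print(f"The number of 'r' characters in '{word}' is: {count}")
    # → The number of 'r' characters in 'strawberry' is: 3
    ```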
    

  • Rhaedas@fedia.io · 10 months ago

    I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in “strawberries”, thinking it would either see a new word and give the same incorrect answer, or, since the word was still in context, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.

    LLMs are very useful as long as you know how to maximize their power and don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I’ve found some of the simplest errors in the middle of a lot of helpful output. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.

  • schnurrito@discuss.tchncs.de · edited · 10 months ago

    This is hardly programmer humor… there is probably an infinite number of wrong responses from LLMs, which is not surprising at all.

      • KairuByte@lemmy.dbzer0.com · 10 months ago

        Eh

        If I program something to always reply “2” when you ask it “how many [thing] in [thing]?”, it’s not really good at counting. Could it be good? Sure. But that’s not what it was designed to do.
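
        That hard-coded replier can be sketched in a few lines of Python (a toy illustration of the analogy, not anyone’s actual code — the name `fake_counter` is made up):

        ```python
        # A toy answerer that always replies "2" to any "how many X in Y?"
        # question. It produces a plausibly formatted answer without doing
        # any counting at all — which is the point of the analogy.
        def fake_counter(question: str) -> str:
            return "2"

        print(fake_counter("How many r characters are in strawberry?"))  # prints "2"
        print(fake_counter("How many e characters are in cheese?"))      # prints "2"
        ```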

        Similarly, LLMs were not designed to count things. So it’s unsurprising when they get such an answer wrong.

        • Rainer Burkhardt@lemmy.world · 10 months ago

          I can evaluate this because it’s easy for me to count. But how can I evaluate something else? How can I know whether the LLM is good at it or not?

          • KairuByte@lemmy.dbzer0.com · 10 months ago

            Assume it is not. If you’re asking an LLM for information you don’t understand, you’re going to have a bad time. It’s not a learning tool, and using it as such is a terrible idea.

            If you want to use it for search, don’t just take it at face value. Click into its sources, and verify the information.

  • Optional@lemmy.world · 10 months ago

    Jesus hallucinatin’ christ on a glitchy mainframe.

    I’m assuming it’s real (though it may not be), but seriously, this is spellcheck. You know how long we’ve had spellcheck? Over two hundred years.

    This? This is what’s thrown the tech markets into chaos? This garbage?

    Fuck.

    • DragonTypeWyvern@midwest.social · 10 months ago

      I was just thinking about Microsoft Word today, and how it still can’t insert pictures easily.

      This is a 20-plus-year-old problem for a program that was almost completely functional in 1995.

  • tourist@lemmy.world · 10 months ago

    Is there anything else or anything else you would like to discuss? Perhaps anything else?

    Anything else?

  • stevedidwhat_infosec@infosec.pub · 10 months ago

    You’ve discovered an artifact!! Yaaaay

    If you ask GPT to do this in a more math-questiony way, it’ll break it down and do it correctly. Just gotta narrow top_p and temperature down a bit.
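
    For intuition on what narrowing those parameters does, here is a toy sketch of temperature scaling and nucleus (top_p) sampling on a made-up score distribution — plain Python, no model or API involved, and the `sample` function and the example scores are invented for illustration:

    ```python
    import math
    import random

    def sample(logits, temperature=1.0, top_p=1.0, seed=0):
        """Pick a token from {token: logit} using temperature + top_p filtering."""
        tokens = list(logits)
        # Temperature scaling: lower temperature sharpens the distribution.
        scaled = [logits[t] / temperature for t in tokens]
        # Softmax (shifted by the max for numerical stability).
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Nucleus (top_p) filtering: keep the smallest set of highest-probability
        # tokens whose cumulative probability reaches top_p, then renormalize.
        ranked = sorted(zip(tokens, probs), key=lambda tp: -tp[1])
        kept, cum = [], 0.0
        for tok, p in ranked:
            kept.append((tok, p))
            cum += p
            if cum >= top_p:
                break
        kept_total = sum(p for _, p in kept)
        r = random.Random(seed).random() * kept_total
        for tok, p in kept:
            r -= p
            if r <= 0:
                return tok
        return kept[-1][0]

    # With low temperature and small top_p, sampling collapses onto the
    # highest-scoring token — the "narrowing" described above.
    scores = {"3": 2.0, "2": 1.0, "4": 0.5}
    print(sample(scores, temperature=0.1, top_p=0.1))  # prints "3"
    ```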