• Onno (VK6FLAB)@lemmy.radio

    Are the people who work at OpenAI smoking crack?

    “Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing.

    Here’s a clue: look around you.

    ChatGPT isn’t the only fish in the sea, and state actors using a public service like it deserve to be caught. Running your own system privately, without scrutiny, censorship, or constraints, is so trivial that teenagers are doing it on their laptops; you can docker pull your way into any number of LLM images, as sketched below.
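
    For example, a minimal sketch using the Ollama project’s published Docker image (just one of many such images; the model name here is purely illustrative):

        # pull the Ollama server image from Docker Hub
        docker pull ollama/ollama
        # start the server; downloaded models persist in the "ollama" volume
        docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
        # download an example model and chat with it inside the container
        docker exec -it ollama ollama run llama3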

    Seriously, this is so many levels of absurd that it’s beyond comprehension…

    • webghost0101@sopuli.xyz

      Having tried many different models on my machine, and as a long-time GPT-4 user, I can say the self-hosted models are far more impressive in sheer power for their size. However, the good ones still require a GPU that most people, teenagers included, can’t afford.

      Nonetheless, GPT-4 remains the most powerful and useful model, and it’s not even close. Even Google’s Gemini doesn’t compare, in my experience.

      The potential for misuse increases alongside usefulness and power. I wouldn’t use the models I run through Ollama, or GPT-3.5, for my professional work because they’re just not reliable enough. However, GPT-4, despite also having its useless moments, is almost essential.

      The same holds true for scammers and malicious actors. GPT-4’s voice mode will make live, fluent phone conversations with a dynamic, synthesized voice technically possible. That’s the holy grail for scam callers. OpenAI is right to want to eliminate as much abuse of their system as possible before releasing such a thing.

      There is an argument to be made for not releasing such dangerous tools, but the counter is that someone malicious will inevitably release one someday. It’s better to be prepared and to understand these systems before that happens. At least I think that’s what OpenAI believes; I’m not sure what to think myself. How could I know they aren’t malicious?