I once asked ChatGPT to generate some random numerical passwords as I was curious about its capabilities to generate random data. It told me that it couldn’t. I asked why it couldn’t (I knew why it was resisting but I wanted to see its response) and it promptly gave me a bunch of random numerical passwords.
Wait, can someone explain why it didn’t want to generate random numbers?
Its training and fine-tuning include a lot of specific instructions about what it can and can’t do, and if a request sounds like something it shouldn’t attempt, it will refuse. Spitting out unbiased random numbers is something it’s inherently bad at by virtue of being a neural network. Not sure if OpenAI has specifically included an instruction about it being bad at randomness though.
While randomness is injected when the model generates a response, it doesn’t have raw access to those random numbers and can’t pass them through to its output. Instead, the randomness only influences which token gets sampled, which at best skews the output toward numbers it sees less often in its training data.
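A minimal sketch of that point, assuming a toy, made-up next-token distribution (the digit scores and function names here are hypothetical, not anything from OpenAI): the RNG only picks among tokens the model already prefers, so the output inherits the model’s biases.

```python
# Toy illustration: randomness enters only at the sampling step;
# the model never "sees" the random numbers themselves.
import math, random

# Hypothetical learned scores for the digit tokens 0-9 (digit frequencies
# in real text are not uniform, so neither are a model's scores).
logits = {str(d): s for d, s in enumerate(
    [2.1, 2.6, 1.9, 1.7, 1.6, 1.8, 1.5, 1.9, 1.6, 1.4])}

def sample_digit(temperature: float = 1.0) -> str:
    # Softmax with temperature turns the scores into probabilities...
    scaled = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(scaled.values())
    probs = {t: v / total for t, v in scaled.items()}
    # ...and the randomness is only used here, to pick one token.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# A "random" 8-digit string that is really just biased sampling
# from the model's learned distribution.
print("".join(sample_digit() for _ in range(8)))
```

Cranking the temperature flattens the distribution a little, but it never turns the output into output from a proper RNG.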
It won’t generate random numbers. It’ll generate numbers that look like the random numbers in its training data.
If it’s asked to generate passwords I wouldn’t be surprised if it generated lists of leaked passwords available online.
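If you do let a model suggest a password, one thing you can at least check is whether it already appears in known breach dumps. A rough sketch using the public Have I Been Pwned range API (k-anonymity: only the first five hex characters of the SHA-1 hash leave your machine; network access assumed):

```python
# Check a candidate password against the Have I Been Pwned range API.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; a match means the
    # password has appeared in known breaches that many times.
    for line in body.splitlines():
        hash_suffix, _, count = line.partition(":")
        if hash_suffix.strip() == suffix:
            return int(count)
    return 0

print(times_pwned("123456"))  # a very common leaked password; huge count
```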
These models are created from masses of data scraped from the internet, most of which is unreviewed and unverified. They really don’t want to review and verify it because it’s expensive and much of their data is illegal.
It’s not illegal. They don’t want to review it because “it” is the entire fucking internet…do you know what that would cost?
Once again, for the morons: it is not illegal to have an AI scan all content on the internet. If it were, Google wouldn’t exist.
Stop making shit up just because you want it to be true.
The crawling isn’t illegal; what you do with the data might be.
Also, researchers asking ChatGPT for long lists of random numbers were able to extract its training data from the output (which OpenAI promptly blocked).
Or maybe that’s what you meant?