Interesting! I also noticed that search engines give proper results because they are trained differently, using user searches and clicks.
I think these popular models could give a proper answer, but their safety threshold is so tight that if the AI considers the input even slightly harmful, it refuses to answer.
Given some of the results of prior AI systems unleashed on the public once the more ‘eccentric’ parts of society got ahold of them, that’s no surprise. Not only do they have to worry about the AI picking up bad behaviors, but they’re probably also looking out for ‘well, this bot told me that it’s a relatively simple surgery, so…’ style liabilities.
I tried it with phind (a programming model) out of curiosity, and it answered perfectly: https://www.phind.com/search?cache=f8lbjt4x6jwct9mfsw6n3j9v