shish_mish@lemmy.world to Technology@lemmy.world · English · 8 months ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
www.tomshardware.com
Mastengwe@lemm.ee · English · 8 months ago
Safe AI cannot exist in the same world as hackers.