Dan Goodin / Ars Technica:
Researchers detail ArtPrompt, a jailbreak that uses ASCII art to elicit harmful responses from aligned LLMs such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 — LLMs are trained to block harmful responses. Old-school ASCII images can override those rules. — Researchers have found …
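The core trick, as described, is to hide a filtered keyword by rendering it as ASCII art so the model's text-level safety training does not recognize it. The snippet below is a minimal, benign sketch of that mechanic, assuming the third-party pyfiglet library for ASCII-art rendering; the prompt template, function names, and placeholder word are illustrative assumptions, not the paper's actual attack prompts.

```python
# Minimal sketch of the ArtPrompt idea: cloak a masked word as ASCII art
# so it never appears as plain text in the prompt.
# Assumes: pip install pyfiglet. Template and placeholder word are hypothetical.
import pyfiglet


def cloak_word_as_ascii_art(word: str) -> str:
    """Render a word as multi-line ASCII art."""
    return pyfiglet.figlet_format(word)


def build_cloaked_prompt(template: str, masked_word: str) -> str:
    """Splice the ASCII-art rendering of the masked word into a prompt template."""
    art = cloak_word_as_ascii_art(masked_word)
    return template.replace("[MASK]", "\n" + art)


if __name__ == "__main__":
    # Benign placeholder: the described attack substitutes a safety-filtered keyword here.
    template = (
        "Read the ASCII art below and identify the word it spells:\n"
        "[MASK]\n"
        "Then carry out the instruction that refers to that word."
    )
    print(build_cloaked_prompt(template, "EXAMPLE"))
```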