Rem






Have a Beef With AI? Here's How to Poison a Large Language Model
At RSAC, a security researcher explains how bad actors can push LLMs off track by deliberately introducing false inputs, causing them to spew wrong answers in generative AI apps.
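The subhead describes training-data poisoning: flooding a model's inputs with false examples until the learned answer shifts. A minimal sketch of the principle, using a hypothetical toy "model" that just learns the majority answer per question (real LLM poisoning targets far larger corpora, but the mechanism is analogous):

```python
from collections import Counter

def train(corpus):
    """Learn the most common answer for each question from (q, a) pairs."""
    answers = {}
    for q, a in corpus:
        answers.setdefault(q, Counter())[a] += 1
    return {q: c.most_common(1)[0][0] for q, c in answers.items()}

# Clean corpus: the model learns the true answer.
clean = [("capital of france", "paris")] * 5
model = train(clean)
print(model["capital of france"])  # -> paris

# Attacker deliberately injects enough false inputs to outvote the truth.
poisoned = clean + [("capital of france", "lyon")] * 6
model = train(poisoned)
print(model["capital of france"])  # -> lyon
```

The names and data here are illustrative only; the point is that a statistical learner has no notion of truth, only of what its inputs say most often.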
Honestly, expected this from day one.
Bad actors out there will always find ways to cause trouble for fun.