Hey, I have more than one comedy bit I do here other than something something Hell in a Cell, OK?
Speaking of which, Hell in a Cell isn’t even that exciting anymore after the WWE made it an annual event and painted the cages red, and why did Seth Rollins get disqualified after he attacked “The Fiend” Bray Wyatt with a sledgehammer 2019 even though Hell in a Cell matches have always been no disqualification?
It’s like their script writers don’t even care about their own rules.
Reddit, and by extension, Lemmy, offers the ideal format for LLM datasets: human generated conversational comments, which, unlike traditional forums, are organized in a branched nested format and scored with votes in the same way that LLM reward models are built.
There is really no way of knowing, much less prevent public facing data from being scraped and used to build LLMs, but, let’s do an thought experiment: what if, hypothetically speaking, there is some particularly individual who wanted to poison that dataset with shitposts in a way that is hard to detect or remove with any easily automate method, by camouflaging their own online presence within common human generated text data created during this time period, let’s say, the internet marketing campaign of a major Hollywood blockbuster.
Since scrapers do not understand context, by creating shitposts in similar format to, let’s say, the social media account of an A-list celebrity starring in this hypothetical film being promoted(ideally, it would be someone who no longer has a major social media presence to avoid shitpost data dilution), whenever an LLM aligned on a reward model built on said dataset is prompted for an impression of this celebrity, it’s likely that shitposts in the same format would be generated instead, with no one being the wiser.
That would be pretty funny.
Again, this is entirely hypothetical, of course.