Fun Fact: It is very difficult to get any of the image generators to make a pentagon.
Hop on Adobe stock right now and search for something. Half of the results will be AI-generated. There’s a search filter that can exclude them.
I think it was Perplexity. I moved to using Flux after that cuddly monstrosity.
Is it Elmo?
Calling what attention transformers do memorization is wildly inaccurate.
It honestly blows my mind that people look at a neural network that’s even capable of recreating short works it was trained on without having access to that text during generation… and choose to focus on IP law.
The issue is that, alongside the transformed output, the untransformed input is being used in a commercial product.
Are you only talking about the word repetition glitch?
How do you imagine those works are used?
It’s called learning, and I wish people did more of it.
This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way by just adding filetype:pdf to their common web search.
You’ve made a lot of confident assertions without supporting them. Just like an LLM! :)
Just taking GPT-3 as an example, its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB. GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.
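For the curious, that ~700 GB figure is roughly just parameter count times bytes per weight. A quick back-of-the-envelope sketch (assuming fp32 storage for GPT-3's 175B parameters):

    # rough sanity check on GPT-3's on-disk size, assuming 4-byte (fp32) weights
    params = 175e9            # GPT-3 parameter count
    bytes_per_weight = 4      # fp32
    size_gb = params * bytes_per_weight / 1e9
    print(f"{size_gb:.0f} GB")  # ~700 GB, vs. ~570 GB of filtered training text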
Equating LLMs with compression doesn’t make sense. At least in GPT-3’s case, the model is larger than its filtered training set. If it requires “hacking” to extract text of sufficient length to infringe copyright, and the platform is doing everything it can to prevent it, that just makes them like every other platform. I can download © material from YouTube (or wherever) all day long.
Aye, flux [pro] via glif.app, though it’s funny, sometimes I get better results from the smaller [schnell] model, depending on the use case.
I don’t really know, but I think it’s mostly to do with pentagons being under-represented in the world in general. That and the specific way that a pentagon breaks symmetry. But it’s not completely impossible to get em to make one. After a lot of futzing around, o1 wrote this prompt, which seems to work 50% of the time with FLUX [pro]: