(nice ad hominem) Christ. When you reduce a high-dimensional object into an embedding space, yes, you keep only the first N components, but those N components are the ones that capture the most variance, and the loadings they carry can be used to map back to a (very good) approximation of the source images. It’s akin to reverse-engineering a very lossy compression into something that (very strongly) resembles the source image (otherwise feature extraction wouldn’t be useful), and it’s entirely doable.
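Here’s roughly what I mean, as a minimal sketch using scikit-learn’s PCA with the Olivetti faces as a stand-in dataset (my choice for illustration, not the actual model or data being argued about):

```python
# Project images onto the top-N principal components, then map back to pixel space.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces().data          # (400, 4096): flattened 64x64 grayscale images
pca = PCA(n_components=50)                   # keep only the 50 most-variable directions
codes = pca.fit_transform(faces)             # low-dimensional "embedding" of each image
approx = pca.inverse_transform(codes)        # map the loadings back to pixel space

# approx[i] is a lossy but clearly recognizable reconstruction of faces[i],
# even though only 50 of 4096 dimensions were kept.
print(np.mean((faces - approx) ** 2))        # small mean reconstruction error
```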
So you can’t pull an image out as it went in? Because it’s not stored there? Yeah that’s what I FUCKING SAID! Stop spreading bullshit. Just stop it.
(Ah, the joyful tantrum). Educate yourself on how a simple JPEG works and exactly how few coefficients are needed to produce an image that is almost indistinguishable from the source.
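The JPEG point in a few lines, as a toy sketch with scipy’s DCT (no quantization tables or entropy coding, just the “few coefficients per block” idea):

```python
# Keep only a handful of DCT coefficients per 8x8 block and reconstruct.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block: np.ndarray, keep: int = 10) -> np.ndarray:
    """Zero out all but the `keep` largest-magnitude DCT coefficients of an 8x8 block."""
    coeffs = dctn(block, norm="ortho")
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]   # magnitude of the keep-th largest
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho")

# Toy 8x8 gradient block: 10 of 64 coefficients already give a very close reconstruction.
block = np.outer(np.arange(8), np.ones(8)) * 16.0
approx = compress_block(block, keep=10)
print(np.max(np.abs(block - approx)))                    # small reconstruction error
```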