The phrase “synthesised expert knowledge” is the problem here, because apparently you don’t understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity to its source material.
You’re not exposing people to expert knowledge; you’re exposing them to expert-sounding words that cannot be made accurate. Sometimes they’re right by accident, but that is not the same thing as accuracy.
You mistook what the LLM is doing for synthesis, which is a mistake loads of people will make, and that will just lend more undue credibility to its bullshit.
Why do they have to “WANT” that? Setting aside that they literally said they were happy it was changed back, why does that matter to the criticism? If it’s true, it’s true, and the fact that corporations are the ones in a position to habitually make terrible decisions about FOSS is a big problem. It’s valid to point out that it would be good to find a better way.
If anything, it sounds like you “WANT” to ignore it.