• my_hat_stinks@programming.dev
    15 days ago

    I’m not sure where you’re getting the idea that language models are effective lie detectors; it’s widely known that LLMs have no concept of truth and hallucinate constantly.

    And that’s before we even get into the inherent biases and moral judgements involved in any form of truth detection.