• 0 Posts
  • 3 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • Stronger guardrails can help, sure. But gathering new input and building a new model is, by the old analogy, the equivalent of replacing a failing vending machine with a different model from the same company.

    The problem is that if you do the same thing with an LLM used for hiring or job systems, the failure mode is bias: the model itself is bigoted, which, while illegal, is hidden inside a model that has essentially been trained to be a more effective bigot.

    You can’t hide your race from an LLM that was accidentally trained to recognize which job histories are traditionally Black, or anything else about you.