• elgordino@fedia.io
    4 months ago

    “We envision other types of more complex guardrails should exist in the future, especially for agentic use cases, e.g., the modern Internet is loaded with safeguards that range from web browsers that detect unsafe websites to ML-based spam classifiers for phishing attempts,” the research paper says.

    The thing is, folks know how the safeguards for the ‘modern internet’ actually work, and they’re generally straightforward code. Whereas LLMs are kind of the opposite: some mathematical model that spews out answers. Product managers thinking it can be corralled into behaving in a specific, incorruptible way will, I suspect, be disappointed.
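    For contrast, this is roughly what a “straightforward code” safeguard looks like: a deterministic blocklist check. It’s a minimal sketch (the hostnames are made up for illustration), but the point is that the same input always produces the same answer, and you can read off exactly why.

```python
from urllib.parse import urlparse

# Hypothetical blocklist, for illustration only.
BLOCKLIST = {"evil.example.com", "phish.example.net"}

def is_unsafe(url: str) -> bool:
    """Deterministic check: flag a URL if its host is on the blocklist."""
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST
```

    Every flagged URL can be traced back to an exact blocklist entry; there’s no probabilistic model to second-guess.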

    • jacksilver@lemmy.world
      4 months ago

      Yeah, this is definitely part of the issue when commercializing LLMs. When someone has to provide an SLA, or asks how frequently this will fail, it’s not great when the best answer is “who knows”.