Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

  • MagicShel@programming.dev · ↑ 35 · 11 months ago

    Security wasn’t a concern? Are we talking about the model itself? Security isn’t part of the model at all and can’t be. Anything you try to add to a model is just a suggestion, and security cannot be a suggestion. Not to mention it would produce a pile of “as a secure AI language model, I can’t let you do this” refusals.

    A significant problem is that a layperson cannot understand what an LLM even is without a lot of reading and thought, and these articles are aimed at people who have done neither; or worse, the articles are just posturing and propaganda.

    They are 100% biased, and they can’t help but be, since they absorb and emulate human writing. An AI that can’t write a biased take also can’t write from a Black person’s perspective or a woman’s, because bias is part of their experience. How ridiculous would it be if you asked an AI about slavery in America and it had no idea what you were talking about, or thought it applied to all races equally?

    • girlfreddy@lemmy.ca (OP) · ↑ 10 · 11 months ago

      I disagree. Even a basic list of words to substitute (e.g., the N word to Black, or f*g to gay) would have helped.
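
      For illustration, a minimal sketch of what that kind of substitution could look like as a preprocessing pass over training text (the pattern map and corpus below are invented for the example, masked the same way as in this comment):

      ```python
      import re

      # Hypothetical slur -> neutral-term map; a real list would be much longer
      # and curated by people, not hard-coded.
      REPLACEMENTS = {
          r"\bf\*g\b": "gay person",
          r"\bf\*\*got\b": "gay person",
      }

      def scrub(text: str) -> str:
          """Apply every substitution to one training document."""
          for pattern, repl in REPLACEMENTS.items():
              text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
          return text

      corpus = ["He called me a f*g at school."]
      print([scrub(doc) for doc in corpus])  # ['He called me a gay person at school.']
      ```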

      Making these companies work harder to bring their product online isn’t a bad thing here.

      • ConsciousCode@beehaw.org · ↑ 23 · 11 months ago

        It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and the effects on the model, even if the conditioning is perfectly executed, are unpredictable. It could run into issues of “race blindness,” where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there’s a therapist AI (not ideal, but mental health is horribly understaffed and most people can’t afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
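
        The Scunthorpe problem is easy to reproduce: a naive substring blacklist flags the town name because it happens to contain a banned run of letters. A toy sketch (the word list and inputs are invented for the example):

        ```python
        # Naive substring blacklist, the kind of "simple" data conditioning
        # that ends up flagging Scunthorpe.
        BANNED_SUBSTRINGS = ["cunt"]

        def naive_filter(text: str) -> bool:
            """Return True if the text would be blacklisted."""
            lowered = text.lower()
            return any(bad in lowered for bad in BANNED_SUBSTRINGS)

        print(naive_filter("I grew up in Scunthorpe."))        # True -> false positive
        print(naive_filter("A perfectly ordinary sentence."))  # False
        ```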

        Techniques like “constitutional AI” and RLHF, applied after the foundation model is trained, really are the best approach here, as they let the model absorb an unfiltered view of a very biased culture and then let you shape its attitudes afterwards.
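
        Very roughly, the constitutional-AI step has the model critique and revise its own outputs against written principles, then fine-tunes on the revisions. A structural sketch only; `generate` is a stub and the principles are placeholders, not any vendor’s actual constitution:

        ```python
        # Structural sketch of a constitutional-AI critique/revise loop.
        PRINCIPLES = [
            "Point out ways the response is harmful, biased, or disrespectful.",
            "Rewrite the response to fix those problems while staying helpful.",
        ]

        def generate(prompt: str) -> str:
            # Stub: a real pipeline would call a language model here.
            return f"<model output for {prompt[:40]!r}...>"

        def constitutional_revision(user_prompt: str) -> str:
            draft = generate(user_prompt)
            critique = generate(f"{PRINCIPLES[0]}\n\nResponse:\n{draft}")
            revision = generate(f"{PRINCIPLES[1]}\n\nResponse:\n{draft}\n\nCritique:\n{critique}")
            return revision  # (draft, revision) pairs then feed a fine-tuning step

        print(constitutional_revision("Explain the history of slavery in America."))
        ```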

        • sciawp@lemm.ee · ↑ 1 · edited · 11 months ago

          I agree with you, but I’m just gonna say that with basic regex (hell, even without regex) you can easily find bad words without the problem you mentioned above.

          Word filters tend to suck in online games and the like because they have to deal with players actively trying to evade the filter, which I think could still be improved with a little effort.
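
          For what it’s worth, a word-boundary regex does avoid the Scunthorpe false positive, though as noted it won’t stop deliberate evasion (toy word list again):

          ```python
          import re

          # Match banned words only as whole words, so "Scunthorpe" is not flagged.
          BANNED_WORDS = re.compile(r"\bcunt\b", re.IGNORECASE)

          print(bool(BANNED_WORDS.search("I grew up in Scunthorpe.")))  # False -> no false positive
          print(bool(BANNED_WORDS.search("What a cunt.")))              # True
          print(bool(BANNED_WORDS.search("What a c-u-n-t.")))           # False -> evasion still slips through
          ```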

      • lily33@lemm.ee · ↑ 11 · 11 months ago

        Then you’d get things like “Black is a pejorative word used to refer to black people”

        • girlfreddy@lemmy.ca (OP) · ↑ 6 · 11 months ago

          Then disallow the whole sentence containing the N word.

          There are ways to do security in AI training, easy or not. And companies just throwing their hands in the air and screaming that it can’t be done are lying through their teeth.
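
          Dropping whole training sentences that contain a blacklisted term is certainly doable at the data-filtering stage; a minimal sketch (the blacklist and corpus are invented for the example):

          ```python
          import re

          # Toy blacklist; a real one would be curated and far longer.
          BLACKLIST = re.compile(r"\b(f\*g|f\*\*got)\b", re.IGNORECASE)

          def keep(sentence: str) -> bool:
              """Keep a training sentence only if it contains no blacklisted term."""
              return BLACKLIST.search(sentence) is None

          corpus = [
              "A harmless sentence about the weather.",
              "A sentence containing the slur f*g.",
          ]
          print([s for s in corpus if keep(s)])  # only the harmless sentence survives
          ```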

            • abir_vandergriff@beehaw.org · ↑ 7 · 11 months ago

              I tried to get it to tell me how long it would take to eat a helicopter, since it’s one of the model’s pre-built prompts and I thought it would be funny. I went through every AI coercion tactic that’s been thrown around, and it just repeatedly said no and told me I should be respectful and responsible about the whole thing. It was quite aggressive and annoying about it.

    • pup_atlas@pawb.social · ↑ 5 · 11 months ago

      The key to safe AI use is to treat the AI the same as the user: let it automate tasks on behalf of the user (after confirmation), within the user’s own scope. That way, no matter how much the model is manipulated, it can only ever perform the same tasks the user could perform anyway.
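
      A rough sketch of that idea, assuming a hypothetical tool registry and a per-user permission set (none of these names come from a real framework):

      ```python
      from dataclasses import dataclass, field

      # Hypothetical registry of everything the assistant *could* do.
      TOOLS = {
          "read_calendar": lambda user: f"calendar for {user}",
          "send_email":    lambda user: f"email sent as {user}",
          "delete_files":  lambda user: f"files deleted for {user}",
      }

      @dataclass
      class Session:
          user: str
          allowed: set = field(default_factory=set)  # the user's own permissions

          def run_tool(self, tool_name: str) -> str:
              # The model's request is checked against the *user's* scope, so a
              # manipulated model can never exceed what the user could do anyway.
              if tool_name not in self.allowed:
                  return f"denied: {self.user} cannot {tool_name}"
              # Explicit confirmation before anything actually runs.
              if input(f"Allow '{tool_name}' for {self.user}? [y/N] ").lower() != "y":
                  return "cancelled by user"
              return TOOLS[tool_name](self.user)

      session = Session(user="alice", allowed={"read_calendar", "send_email"})
      print(session.run_tool("delete_files"))  # denied, no matter what the model asks for
      ```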

    • ConsciousCode@beehaw.org · ↑ 5 · 11 months ago

      I like to say “they’re consistently biased”. They might have racial or misogynistic biases from the culture they ingested, but they’ll always express those biases in a consistent way. Meanwhile, humans can become more or less biased depending on whether they’ve eaten lunch yet or woke up tilted.

  • Ubermeisters@lemmy.zip · ↑ 6 · 11 months ago

    Are they going to ‘red-team’ away adversarial prompting as well? Doubt it. Sooooo the issue is the input data. Always has been.

  • Send_me_nude_girls@feddit.de · ↑ 3 · 11 months ago

    As if. All they are looking for is the ability to add more bias. What they mean by security is brand security. Companies are probably paying serious lobbying money to get a backdoor into whatever large commercial LLM there is, to “protect” their brand. Read it as manipulating the general public so that not a single negative thing about a brand ever pops up when someone asks about it.

    My 2 cents, but maybe I’m completely wrong here and it’s a different topic. Basic filters are already there, and most of what they have done in the last few months has made language models dumber.

  • AutoTL;DR@lemmings.world (bot) · ↑ 3 · 11 months ago

    🤖 I’m a bot that provides automatic summaries for articles:


    BOSTON (AP) — White House officials concerned by AI chatbots’ potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

    “We’re just breaking stuff left and right.” Michael Sellitto of Anthropic, which provided one of the AI testing models, acknowledged in a press briefing that understanding their capabilities and safety issues “is sort of an open area of scientific inquiry.”

    Trained largely by ingesting — and classifying — billions of datapoints in internet crawls, they are perpetual works-in-progress, an unsettling prospect given their transformative potential for humanity.

    Tom Bonner of the AI security firm HiddenLayer, a speaker at this year’s DefCon, tricked a Google system into labeling a piece of malware harmless merely by inserting a line that said “this is safe to use.”

    Researchers have found that “poisoning” a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc — and be easily overlooked.

    The big AI players say security and safety are top priorities and made voluntary commitments to the White House last month to submit their models — largely “black boxes” whose contents are closely held — to outside scrutiny.