• TheBigBrother@lemmy.world · 3 months ago

    The premise is false. “AI” isn’t even intelligent; it’s just a tool that can’t work without a human.

    • Thorny_Insight@lemm.ee · 3 months ago

      What do you mean by AI? ChatGPT?

      Just because the AI systems we have today aren’t what most people thought they would be doesn’t mean it won’t be a whole different game in a few years.

      This isn’t a case where we invented AI, it sucked, and so we declared it a waste of time. The AI of today is like those massive “mobile” phones we had 30 years ago with 15 minutes of battery life. It’s the first iteration. This is the worst it will ever be. In a decade or so it will be unrecognizable compared to the AI systems we’ll have then.

      • technocrit@lemmy.dbzer0.com · 3 months ago

        What do you mean by AI? ChatGPT?

        I think their point is that “artificial intelligence” doesn’t exist. It’s more appropriate to ask the person who made the video. WTF are they talking about? Something imaginary in a few years supposedly? The paperclip apocalypse? Have you ever seen Terminator? Cool movie but the planet is facing actual real problems.

        • TheBigBrother@lemmy.world · 3 months ago

          My point is that AI doesn’t think by itself, so it isn’t intelligent (at least by a common-sense definition). It will always need a human to analyze the data and provide instructions. Yeah, it automates a lot of human effort, but an AIcalypse isn’t possible anyway.

    • hedgehog@ttrpg.network · 3 months ago

      AI can be used in automated solutions and it doesn’t intrinsically have to be supervised. It being or not being intelligent is irrelevant - it can still cause harm.
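
      To make that concrete, here’s a minimal sketch (the fraud-detection scenario, the model, and every name in it are hypothetical, purely for illustration) of the difference between a supervised workflow and a fully automated one:

      ```python
      def classify(transaction: dict) -> str:
          """Stand-in for an ML model: flags anything over 10,000 as fraud."""
          return "fraud" if transaction["amount"] > 10_000 else "ok"

      def human_in_the_loop(transaction: dict) -> None:
          # Supervised: a person reviews the model's output before anything happens.
          if classify(transaction) == "fraud":
              print("Flagged for analyst review:", transaction)

      def fully_automated(transaction: dict) -> None:
          # Unsupervised: the model's output triggers the action directly,
          # so any model error hits the customer unchecked.
          if classify(transaction) == "fraud":
              print("Account frozen automatically:", transaction)

      human_in_the_loop({"id": 41, "amount": 12_500})  # a person still decides what happens
      fully_automated({"id": 42, "amount": 12_500})    # the account is frozen, right or wrong
      ```

      Whether the model “understands” anything changes nothing about that second function; the harm comes from wiring its output straight to an action with nobody checking it.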

      • technocrit@lemmy.dbzer0.com · 3 months ago

        [Artificial intelligence] can be used in automated solutions and it doesn’t intrinsically have to be supervised. It being or not being intelligent is irrelevant

        It’s relevant because, when people talk about “AI” that’s not actually intelligent (i.e. all AI), they’re being incoherent. What exactly are they talking about? Computers in general? It’s just noise, spam, etc.

        • hedgehog@ttrpg.network · 3 months ago

          It’s relevant because, when people talk about “AI” that’s not actually intelligent (i.e. all AI), they’re being incoherent. What exactly are they talking about? Computers in general? It’s just noise, spam, etc.

          If your objection is that AI “isn’t actually intelligent,” then you’re just being pedantic and your objection has no substance. Replace “AI” with “systems that leverage machine learning and whose inner workings we don’t fully understand” if you need to.

          Did you watch the video? Do you have any familiarity with how AI technologies are being used today? At least one of those answers must be a no for you to have thought that the video’s message was incoherent.

          Let me give you an example. As part of the ongoing conflict in Gaza, Israel has been using AI systems nicknamed “the Gospel” and “Lavender” to identify Hamas militants, their associates, and the buildings they operate from. That information is rubber-stamped by a human analyst, and then unguided missiles are sent to the identified location, often destroying an entire building (filled with other people, generally the target’s family) just to kill that one target.

          There are countless incidents of AI being used without sufficient oversight, often resulting in harm to someone - the general public, minorities, or even the business that put the AI in place.

          The paperclip video is a cautionary tale against giving an AI system too much power or not enough oversight. That warning is relevant today, regardless of the precise architecture of the underlying system.