• AdrianTheFrog@lemmy.world
    16 days ago

    They can’t. AI has hallucinations. Google has shown that AI can’t even rely on external sources, either.

    • FiniteBanjo@lemmy.today
      16 days ago

      At least LLMs always will. The only real fix we’ve seen is running the output through additional specialized LLMs to try to massage out the errors, but that just increases cost and scale for marginal results.

  • nieceandtows@programming.dev
    16 days ago

    If Apple could stop AI hallucination, any other AI company could also stop AI hallucination, which is something they would already have done instead of making AI seem like a joke on purpose. AI hallucinations are a phenomenon that nobody has control over. Why would Tim Cook have unique control over it?

    • cmbabul@lemmy.world
      16 days ago

      If Apple became the first to figure out how, they’d suddenly have a huge leg up on the rest. Which is kinda how Apple has been making their bread for most of their successes in my lifetime.

      • 555@lemmy.world
        16 days ago

        Yeah. When Apple says it’s coming into a market, they mean they have already perfected it.

        • Zorsith@lemmy.blahaj.zone
          16 days ago

          (Or let other companies polish up a feature/concept for a few years, slap a coat of Space Gray on it, and release it as a revolutionary “new” feature for apple)

      • nieceandtows@programming.dev
        16 days ago

        Eh. I don’t think Apple’s gonna be a pioneer in AI. If anybody can do it, it would be OpenAI figuring it out first. Happy to be proven wrong tho.

        • cmbabul@lemmy.world
          16 days ago

          Oh, I’m not suggesting they will or are able to; I’m coming at it from a strategic standpoint.

  • crystalmerchant@lemmy.world
    15 days ago

    Of course they can’t. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and this creates hallucinations.

  • StaySquared@lemmy.world
    15 days ago

    I don’t know why they’re trying to shove AI down our throats. They need to take their time, allow it to evolve.

    • Snowclone@lemmy.world
      15 days ago

      Because it’s all corporate, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you do that? Well, AI would turn the world upside down like the dot-com boom did. So they dump tons of money into AI. So… is the AI done? Oh no no no, we’re at machine learning, and real AI is pretty far down the road actually. What, we’re firing the AI department heads and releasing this machine learning software as 100%, all-the-way-done AI?

      It’s for all the same reasons section 8 housing and low-cost housing don’t work under corporate capitalism. It’s profitable to take government money, and it’s profitable to have low-rent apartments. That’s not the problem; the problem is THEY NEED THE GROWTH NOW NOW NOW!!! If instead you own a condo with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% increase over 5 years if they can invest in 12% over 4 years. So no one ever invests in low-rent or section 8 housing.

  • Deconceptualist@lemm.ee
    16 days ago

    As others are saying it’s 100% not possible because LLMs are (as Google optimistically describes) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

    “Hallucinations” is a total misnomer because the text generation isn’t tied to reality in the first place, it’s just mathematically “what next word is most likely”.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
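    A toy sketch makes the “what next word is most likely” point concrete. This is a deliberately simplified illustration (real LLMs are neural networks over subword tokens, not bigram count tables), but the core mechanism of sampling from a probability distribution, with no notion of truth anywhere, is the same:

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by sampling continuations in proportion to frequency.
corpus = "the dog chased the cat the cat chased the mouse the mouse ran".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Probability-weighted choice: nothing here checks whether the output
    # is true, only whether it was statistically common in the corpus.
    options = counts.get(word)
    if not options:
        return None  # dead end: this word was never seen with a continuation
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(5):
    word = next_word(text[-1])
    if word is None:
        break
    text.append(word)
print(" ".join(text))
```

    Scale the table up to billions of parameters and the output becomes fluent, but the relationship to reality stays exactly this indirect.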

    • Tobberone@lemm.ee
      16 days ago

      An LLM once explained to me that it didn’t know, it simulated an answer. I found that descriptive.

    • _number8_@lemmy.world
      16 days ago

      all we know about ourselves is what’s in our memories. the way normal writing or talking works is just picking what words sound best in order

      • Deconceptualist@lemm.ee
        16 days ago

        That’s not the whole story. “The dog swam across the ocean.” is a grammatically valid sentence with correct word order. But you probably wouldn’t write it, because you have a concept of what a dog actually is and know that its physiological limitations make the sentence ridiculous.

        The LLMs don’t have that kind of smarts. They just blindly mirror what we do. Since humans generally don’t put those specific words together, the LLMs avoid them too, based solely on probability. If lots of people started making bold claims about ocean-faring canids (e.g. as a joke), the LLMs would absolutely jump on board with no critical thinking of their own.

      • Deconceptualist@lemm.ee
        16 days ago

        Ok, maybe there’s a possibility someday with that approach. But that doesn’t reflect my understanding or (limited) experience with the major LLMs (ChatGPT, Gemini) out in the wild today. Right now they confidently advise ingesting poison because it’s grammatically sound and they found it on some BS Facebook post.

        If ML engineers can design an internal concept of what constitutes valid information (a hard problem for humans, let alone machines) maybe there’s hope.

      • Natanael@slrpnk.net
        16 days ago

        The problem is they hold many different internal concepts with conflicting information, no mechanism for determining truthfulness or accuracy, and no mechanism for pruning bad information, and they sample from all of it more or less randomly when answering.

    • Captain Aggravated@sh.itjust.works
      16 days ago

      Remember the game people used to play that went something like “type ‘my girlfriend is’ and then let your phone keyboard’s auto-suggestion take it from there”? LLMs are that.

    • neo@lemy.lol
      16 days ago

      I was wondering, are people working on networks that train to create a modular model of the world, in order to understand it / predict events in the world?

      I imagine that that is basically what our brains do.

      • eestileib@sh.itjust.works
        16 days ago

        Many attempts, some well-funded.

        They have been successful in very limited domains. For example, the F-35 integrated sensor suite.

      • Natanael@slrpnk.net
        16 days ago

        Not really anything properly universal, but a lot of task-specific models exist, with integration with logic engines and similar stuff. Performance varies a lot.

        You might want to take a look at Wolfram Alpha’s plugin for ChatGPT for something that’s public.
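        The idea behind that plugin pattern, routing questions the model is bad at to a deterministic engine instead of letting the text generator guess, can be sketched in a few lines. Everything below (the regex trigger, the AST-based calculator) is an illustrative assumption, not the actual plugin protocol:

```python
import ast
import operator
import re

# Toy "tool routing": arithmetic goes to an exact evaluator instead of
# being left to a probabilistic text generator that might hallucinate.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    # Evaluate a purely arithmetic expression by walking its syntax tree.
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question):
    # If the question ends in an arithmetic expression, use the exact tool;
    # otherwise hand off to the (hypothetical) language model.
    m = re.search(r"[\d+\-*/(). ]+$", question.rstrip("?"))
    if m and any(ch.isdigit() for ch in m.group()):
        return str(calc(m.group().strip()))
    return "(hand off to the language model)"

print(answer("What is 17 * 23?"))  # exact tool path: 391, every time
```

        The point is that the logic engine’s answer is exact and reproducible; the hard open problem is deciding reliably *when* to route, which is itself left to the model in real systems.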

  • Brickardo@feddit.nl
    16 days ago

    That’s what comes of not really understanding what you’re doing. Most of the AI models I work with are state of the art just because they happen to work.

    In my case, when I solve a PDE using finite difference schemes, there are precise mathematical conditions that tell you whether the method is going to be stable or not. When I do the same using AI, I can’t tell if my method is going to work unless I run it. Moreover, I’ve had it sometimes fail and sometimes succeed.

    It’s just the way it is for now.
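    For the curious, that kind of a-priori guarantee is easy to see concretely. A minimal sketch in plain Python, using the forward-time centred-space scheme for the 1D heat equation, where the classic stability result is that the mesh ratio r = dt/dx² must not exceed 1/2:

```python
def ftcs_max(r, steps=200, n=51):
    # Forward-time centred-space (FTCS) scheme for u_t = u_xx with
    # mesh ratio r = dt/dx**2; von Neumann analysis says stable iff r <= 0.5.
    u = [0.0] * n
    u[n // 2] = 1.0  # initial spike, fixed zero boundaries
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, n - 1):
            nxt[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = nxt
    return max(abs(x) for x in u)

print(ftcs_max(0.4))  # r below 1/2: the spike diffuses and stays bounded
print(ftcs_max(0.6))  # r above 1/2: oscillations grow without bound
```

    That up-front guarantee is exactly what a learned surrogate model doesn’t come with: for the network, you only find out by running it.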

    • DudeDudenson@lemmings.world
      16 days ago

      I mean, companies worldwide just jumped on the AI bandwagon like a lot of people did with the NFT one, mostly because AI actually has solid use cases and can make a big difference in broad situations.

      But since people are slapping AI on everything, it’s gonna end up being another fad to raise stock prices, like firing people was last year.

      Let’s just hope that when all of the hype blows over, and the general public thinks of AI as the marketing buzzword that never works quite right, we’ll keep AI in the things it’s actually useful for.

      • Brickardo@feddit.nl
        16 days ago

        AI interest has come and gone. Some decades ago, people would slap the AI label on expert systems. If we go further back, “AI” meant solving problems in the blocks world. It’s eventually going to fade away, just like all the previous waves did.

  • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net
    15 days ago

    Here’s how you stop AI from hallucinating:

    Turn it off.

    Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn’t mean jack shit. Even a broken clock is right twice a day.

    “Only feed it accurate information.”

    Even that doesn’t work, because it just mixes and matches every element of its input to generate new, novel output, which will inevitably sometimes be wrong.

    • john_lemmy@slrpnk.net
      15 days ago

      Yeah, just pull the plug. The amount of time we waste talking about this shit for these assholes to play another round of monopoly is unbelievable

  • NutWrench@lemmy.world
    16 days ago

    If you want to have good AI, you need to spend money and send your AI to college. Have real humans interact with it, correct its logic, make sure it understands sarcasm and logical fallacies.

    Or, you can go the cheap route: train it on 10 years of Reddit sh*tposts and hope for the best.

  • Blackmist@feddit.uk
    15 days ago

    Seeing these systems just making shit up when they’re not sure of the answer is probably the closest they’ll ever come to human behaviour.

    We’ve invented the virtual politician.

  • flop_leash_973@lemmy.world
    16 days ago

    Well yeah, it’s using the same dataset as MS Copilot.

    Spitting out inaccurate answers (I wish the media would stop calling it something that sounds less bad, like “hallucinations”) is not something that will go away until the LLM gains the ability to discern context.

  • AutoTL;DR@lemmings.world (bot)
    16 days ago

    This is the best summary I could come up with:


    Even Apple CEO Tim Cook isn’t sure the company can fully stop AI hallucinations.

    In an interview with The Washington Post, Cook said he would “never claim” that its new Apple Intelligence system won’t generate false or misleading information with 100 percent confidence.

    These features will let you generate email responses, create custom emoji, summarize text, and more.

    Recent examples of how AI can get things wrong include last month’s incident with Google’s Gemini-powered AI overviews telling us to use glue to put cheese on pizza or a recent ChatGPT bug that caused it to spit out nonsensical answers.

    The voice assistant will turn to ChatGPT when it receives a question better suited for the chatbot, but it will ask for your permission before doing so.

    In the demo of the feature shown during WWDC, you can see a disclaimer at the bottom of the answer that reads, “Check important info for mistakes.”


    The original article contains 334 words, the summary contains 153 words. Saved 54%. I’m a bot and I’m open source!

  • Buffalox@lemmy.world
    16 days ago

    It’s kind of funny how AI has the exact same problems some humans have.
    I always thought AI wouldn’t have those kinds of problems, because it would be carefully fed accurate information.
    Instead they are taught from things like Facebook and the thing formerly known as Twitter.
    What an idiotic timeline we are in. LOL

    • FaceDeer@fedia.io
      16 days ago

      The problem with AI hallucinations is not that the AI was fed inaccurate information, it’s that it’s coming up with information that it wasn’t fed in the first place.

      As you say, this is a problem that humans have. But I’m not terribly surprised these AIs have it, because they’re built in mimicry of how aspects of the human mind work. And in some cases it’s desirable behaviour, for example when you’re using an AI as a creative assistant. You want it to come up with new stuff in those situations.

      It’s just something you need to keep in mind when coming up with applications.

        • FaceDeer@fedia.io
          16 days ago

          Exactly, which is why I’ve objected in the past to calling Google Overview’s mistakes “hallucinations.” The AI itself is performing correctly, it’s giving an accurate overview of the search result it’s being told to create an overview for. It’s just being fed incorrect information.

    • NeoNachtwaechter@lemmy.world
      16 days ago

      Instead they are taught from things like Facebook and the thing formerly known as Twitter.

      Imagine if our schools taught that, to inform yourself about all the important things, you should read as many toilet walls as newspapers…

    • MentalEdge@sopuli.xyz
      16 days ago

      There’s also the fact that they can’t tell reality apart from fiction in general, because they don’t understand anything in the first place.

      LLMs have no way of differentiating fantasy RPG elements from IRL things. So they can lose the plot on what is being discussed suddenly, and for seemingly no reason.

      LLMs don’t just “learn” facts from their training data. They learn how to pretend to be thinking; they can mimic but not really comprehend. If there were facts in the training data, they can regurgitate them, but they don’t actually know which facts apply to which subjects, or when not to make some up.

      • Buffalox@lemmy.world
        16 days ago

        They learn how to pretend

        True, and they are so darn good at it, that it can be somewhat confusing at times.
        But the current AIs are not the ones we read about in SciFi.

    • foggy@lemmy.world
      16 days ago

      What weirds me out is that the things it has issues with when generating images/video are basically a list of things lucid dreamers check on to see if they’re awake or dreaming.

      1. Hands. Are your hands… hands? Do they make sense?

      2. Written language. Does it look like normal written language?

      (3. Turn the lights off / 4. Pinch your nose and breathe through it: these two not so much.)

      5. How did I get here? Where was I before this? Does the transition make sense?

      6. Mirrors. Are they accurate?

      7. Displays on digital devices. Do they look normal?

      8. Clocks. Digital and analog… Do they look like they’re telling time? Even if they do, look away and check again.

      (9. Physics: try to do something physically impossible, like poking your finger through your palm / 10. Do you recognize people, and do they recognize you: two more that aren’t relevant.)

      But still… It’s kinda remarkable.

      Also, Nvidia launched their Earth-2 earth simulator recently. So, simulation theory confirmed, I guess.

      • catloaf@lemm.ee
        16 days ago

        Also, check your cell phone. Despite how ubiquitous they are in our daily lives, I don’t think I’ve seen a single cell phone in my dreams. Or any other phone, for that matter.

        And now that I think about it, I’ve definitely had a dream of being in my living room where there’s a TV, but I don’t remember the TV actually being in the dream.

        Weird.

    • scarabic@lemmy.world
      15 days ago

      Right? In all science fiction, artificial intelligence starts out better than us, and the only question is whether it can capture some idiosyncratic element of “being human.” Instead, AI has started out dumber than us, and we’re all standing around saying “uh what is this good for?”

    • treefrog@lemm.ee
      16 days ago

      I thought the main issue was that AIs don’t really know how to say “I don’t know” or second-guess themselves, as that would take a much more robust architecture with multiple feedback loops. Like a brain.

      Anyway, LLMs aren’t the only AI that do this, so them being trained on Facebook data certainly isn’t the whole issue.

      • dan1101@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        16 days ago

        Yeah, it’s the old garbage in, garbage out problem; the AI algorithms don’t really understand what they are outputting.

        I think at this point voice recognition and text generation AI would be more useful as something like a phone assistant. You could tell it complex things like “Mute my phone for the next 2 hours” or “Notify me if I receive an email from John Smith.” Those sorts of things could be easily done by AI algorithms that A) understand your voice and B) are programmed to know all the features of the OS. Hopefully with a known dataset like a phone OS there shouldn’t be hallucination problems; the AI could just act as an OS concierge.
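        The “OS concierge” idea amounts to intent matching against a closed set of capabilities: the assistant either maps speech onto a real OS action or refuses, so there’s nothing to hallucinate. A minimal sketch (the command patterns below are made-up examples, not any real phone API):

```python
import re
from datetime import datetime, timedelta

# Toy OS concierge: a closed set of known actions. A request either
# matches a real capability or is refused; nothing gets invented.
def handle(utterance):
    m = re.match(r"mute my phone for the next (\d+) hours?$", utterance, re.I)
    if m:
        until = datetime.now() + timedelta(hours=int(m.group(1)))
        return f"muted until {until:%H:%M}"
    m = re.match(r"notify me if i receive an email from (.+)$", utterance, re.I)
    if m:
        return f"watching inbox for {m.group(1)}"
    return "sorry, I can't do that"  # refuse rather than invent an action

print(handle("Mute my phone for the next 2 hours"))
print(handle("Order me a pizza"))
```

        Real assistants would put a speech-to-intent model in front of this, but the key design choice stands: the action space is enumerated up front, so unrecognized requests fail safely instead of producing a confident guess.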

        • Rhaedas@fedia.io
          16 days ago

          The narrow purpose models seem to be the most successful, so this would support the idea that a general AI isn’t going to happen from LLMs alone. It’s interesting that hallucinations are seen as a problem yet are probably part of why LLMs can be creative (much like humans). We shouldn’t want to stop them, but just control when they happen and be aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and probably has already been tried, it’s hard to keep up).

        • jaybone@lemmy.world
          16 days ago

          Seems Siri and Alexa could already do things like that without needing LLMs trained on Facebook shit.

    • technocrit@lemmy.dbzer0.com
      16 days ago

      It’s not the exact same problems humans have. It’s completely different. Marketers and hucksters just use anthropomorphic terminology to hype their dysfunctional programs.

    • dch82@lemmy.zip
      16 days ago

      What you can do is try to filter out the garbage, but it’s basically trying to find gold in food waste.