• Patch@feddit.uk · 28 days ago

      This feels like something you should go tell Google about rather than the rest of us. They’re the ones who have embedded LLM-generated answers to random search queries.

    • mint_tamas@lemmy.world · 28 days ago

      Theoretically, what would be the utility of AI summaries in Google Search, if not getting exact information?

  • Tekkip20@lemmy.world · 28 days ago

    I don’t bother using things like Copilot or other AI tools like ChatGPT. I mean, they’re pretty cool for what they CAN give you correctly, and the new demo floored me.

    But I prefer just using image generators like DALL-E and Stable Diffusion to make funny images or a new profile picture for Steam.

    But this example here? Good god I hope this doesn’t become the norm…

    • velvetThunder@lemmy.zip · 28 days ago

      These text-generation LLMs are good at generating text. I use them to write better emails or listings and such.

      • valkyre09@lemmy.world · 28 days ago

        I had to do a presentation for work a few weeks ago. I asked Copilot to generate an outline for a presentation on the topic.

        It spat out a heading and a few sections with details on each. It was generic enough, but it gave me the structure I needed to get started.

        I didn’t dare ask it for anything factual.

        Worked a treat.

  • dkc@lemmy.world · 28 days ago

    I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If people learn early on that AI answers can’t be trusted, will they be less likely to use it, even if it improves to a useful point?

    • xanu@lemmy.world · 28 days ago

      I’m no defender of AI, and it blatantly making up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don’t see this period of low-to-no trust lasting.

      Remember how bad autocorrect was when it first rolled out? People were always complaining about it and cracking jokes about how dumb it was. Then it slowly got better and better, and now, for the most part, everyone just trusts their phones to fix any spelling mistakes they make, as long as it’s close enough.

    • RGB3x3@lemmy.world · 28 days ago

      Personally, that’s exactly what’s happening to me. I’ve seen enough to know that AI can’t be trusted to give a correct answer, so I don’t use it for anything important. It’s a novelty, like Siri and Google Assistant were when they first came out (and honestly still are), where the best use is getting them to tell a joke or pull up very narrow trivia.

      There must be a lot of people thinking the same. AI currently feels unhelpful and wrong; we’ll see if it just becomes another passing fad.

    • Psythik@lemmy.world · edited · 28 days ago

      To be fair, you should fact check everything you read on the internet, no matter the source (though I admit that’s getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquiring tool if you take everything it tells you with a grain of salt, just like with everything else.

      This is one of the reasons why I only use AI implementations that cite their sources (edit: not Google’s), because you can check the sources it used and see for yourself how much is accurate and how much is hallucinated bullshit. Hell, I’ve had AI cite an AI-generated webpage as its source on far too many occasions.

      Going back to what I said at the start, have you ever read an article or watched a video on a subject you’re knowledgeable about, just for fun to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.

  • Dultas@lemmy.world · 29 days ago

    Could this be grounds for CVS to sue Google? It seems like this could harm their business if people come to think CVS products are less trustworthy. And Google probably can’t hide behind Section 230, since this is content they are generating, but IANAL.

    • CosmicTurtle0@lemmy.dbzer0.com · 29 days ago

      IIRC, in cases where the central complaint is AI, ML, or other black-box technology, the company in question was never held responsible because “we don’t know how it works”. The AI surge we’re seeing now is likely a consequence of those decisions and the crypto crash.

      I’d love to see CVS try to push a lawsuit, though.

      • Natanael@slrpnk.net · 29 days ago

        In Canada there was a company using an LLM chatbot that had to uphold a claim the bot had made to one of its customers. So there’s precedent for forcing companies to take responsibility for what their LLMs say (at least if they’re presenting them as trustworthy and representative).

        • LordPassionFruit@lemm.ee · 29 days ago

          This was with regard to Air Canada and its LLM, which hallucinated a refund policy that the company argued it did not have to honour because it wasn’t their actual policy and the bot had invented it out of nothing.

          An important side note is that one of the cited reasons the Court ruled in favour of the customer is that the company did not disclose that the LLM wasn’t the final say on its policy, nor that a customer should confirm with a representative before acting on the information. This means the legal argument wasn’t “the LLM is responsible” but rather “the customer should be informed that the information may not be accurate”.

          I point this out because I’m not so sure CVS would have a clear cut case based on the Air Canada ruling, because I’d be surprised if Google didn’t have some legalese somewhere stating that they aren’t liable for what the LLM says.

          • shinratdr@lemmy.ca · 29 days ago

            But those end up being the same in practice. If you have to put up a disclaimer that the info might be wrong, then who would use it? I can get a wrong answer or unverified hearsay anywhere. The whole point of contacting the company is to get the right answer, or at least one the company is forced to stick to.

            This isn’t just minor AI growing pains; this is a fundamental problem with the technology that makes it essentially useless for the use case of “answering questions”.

            They can slap as many disclaimers as they want on this shit, but if it just hallucinates policies and incorrect answers, it will just end up being one more thing people hammer 0 to skip past, or scroll past, to talk to a human or find the right answer.

      • chiliedogg@lemmy.world · 29 days ago

        “We don’t know how it works but released it anyway” is a perfectly good reason to be sued when you release a product that causes harm.

  • suction@lemmy.world · 28 days ago

    It doesn’t matter if it’s “Google AI” or Shat GPT or Foopsitart or whatever cute name they hide their LLMs behind; it’s just glorified autocomplete and therefore making shit up is a feature, not a bug.

    • Johanno@feddit.de · 28 days ago

      ChatGPT was much higher quality a year ago than it is now.

      It could be very accurate. Now it’s hallucinating all the time.

      • Lad@reddthat.com · 28 days ago

        I was thinking the same thing. LLMs have suddenly got much worse. They’ve lost the plot lmao

          • Echo Dot@feddit.uk · 28 days ago

            The only people poisoning the data set are the makers who insist on using Reddit content

          • Ben Hur Horse Race@lemm.ee · 28 days ago

            I’m not sure that’s definitely true… my sense is that the AI money/arms race has made them push out new models as fast as possible so they can be the first and get literally billions in investment capital

            • Cringe2793@lemmy.world · 28 days ago

              Maybe. I’m sure there’s more than one reason. But the negativity people have for AI is really toxic.

              • Ben Hur Horse Race@lemm.ee · 27 days ago

                is it?

                Nearly everyone I speak to about it (other than one friend I have who’s pretty far on the spectrum) concurs that no one asked for this. Few people want any of it: it’s consuming vast amounts of energy, is being shoehorned into programs like Skype and Adobe Reader where no one wants it, is very, very soon to become mandatory in OSes like Windows, iOS and Android, already threatens election integrity (most notably in India), and is being used to harass individuals with deepfake porn, etc.

                The ethics board at OpenAI essentially got dissolved and replaced by people interested only in the fastest possible expansion and rollout, to beat the competition and maximize their capital gains…

                …also AI “art”, which essentially takes everything a human has ever made, shreds it into confetti and reconstructs it into the shape of something resembling the prompt, is starting to flood image search with grotesque human-mimicking outputs: things with melting, split pupils and 7 fingers…

                you’re saying people should be positive about all this?

                • Cringe2793@lemmy.world · 27 days ago

                  You’re cherry-picking the negative points only, just to lure me into an argument. Like all tech, there’s definitely good and bad. Also, the fact that you’re implying you need to be “pretty far on the spectrum” to think this is good is kinda troubling.

                • Cringe2793@lemmy.world · 28 days ago

                  People aren’t being critical. At least, most aren’t; they’re just being haters tbh. But we can argue this till the cows come home, and it’s not gonna change either of our minds, so let’s just not.

    • interdimensionalmeme@lemmy.ml · 28 days ago

      Making shit up IS a feature of LLMs. It’s crazy to use one as a search engine. Now they’ll try to stop it from hallucinating to make it a better search engine, and kill the one thing it’s actually good at…

  • Sam_Bass@lemmy.world · 29 days ago

    I stopped using Google Search a couple of weeks before they dropped the AI turd. Glad I did.

    • Kiernian@lemmy.world · 29 days ago

      What do you use now?

      I work in IT, and between the advent of “agile” methodologies (meaning lots of documentation is out of date as soon as it’s approved for release) and AI results being more likely invented than regurgitated from forum posts, it’s getting progressively harder to find relevant answers to weird one-off questions. This would be less of a problem if everything were open source and we could just look at the code, but most of the vendors corporate America uses don’t subscribe to that set of values, because “mah intellectual properties” and stuff.

      Couple that with tech sector cuts and outsourcing of vendor support and things are getting hairy in ways AI can’t do anything about.

      • capital@lemmy.world · 29 days ago

        Not who you asked but I also work IT support and Kagi has been great for me.

        I started with their free trial set of searches and that solidified it.

  • Phegan@lemmy.world · 29 days ago

    It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AI is to make things up based on its training data. It doesn’t learn, it generates. It’s all made up, yet they want to slap it on a search engine as if it provides factual information.

    • hellofriend@lemmy.world · 28 days ago

      It’s like the difference between being given a grocery list from your mum and trying to remember what your mum usually sends you to the store for.

      • deadbeef79000@lemmy.nz · edited · 28 days ago

        … Or calling your aunt and having her yell things at you that she thinks might be on your Mum’s shopping list.

        • Malfeasant@lemmy.world · 28 days ago

          That could at least be somewhat useful… It’s more like grabbing some random stranger and asking what their aunt thinks might be on your mum’s shopping list.

    • ricecake@sh.itjust.works · 28 days ago

      I mean, it does learn; it just lacks reasoning, common sense and rationality.
      What it learns is which words should come next, with a very complex and nuanced way of deciding that can very plausibly mimic the things it lacks, since the best sequence of next words is very often coincidentally reasoned, rational, or demonstrating common sense. Sometimes it’s just lies that fit the form of a good answer, though.
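
      A minimal sketch of that mechanism, assuming a toy Python model with made-up probabilities (a real LLM learns these distributions from mountains of text): it just keeps sampling a plausible next word, with no truth check anywhere.

        import random

        # Hypothetical toy distribution; a real model learns billions of these.
        next_token_probs = {
            ("capital", "of"): {"France": 0.6, "Spain": 0.3, "Mars": 0.1},
        }

        def sample_next(context):
            # Choose the next word by learned plausibility, not by truth.
            dist = next_token_probs[context]
            words = list(dist)
            return random.choices(words, weights=[dist[w] for w in words])[0]

        # Usually "France", occasionally "Mars": same mechanism either way.
        print("the capital of", sample_next(("capital", "of")))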

      I’ve seen some people work on using it the right way, and it actually makes sense. It’s good at understanding what people are saying, and what type of response would fit best. So you let it decide that, and give it the ability to direct people to the information they’re looking for, without actually trying to reason about anything. It doesn’t know what your monthly sales average is, but it does know that a chart of data from the sales system filtered to your user, specific product and time range is a good response in this situation.
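
      A rough sketch of that division of labour (every name below is hypothetical; the stub stands in for the LLM call): the model only picks the response type and fills in parameters, while the actual numbers come from a trusted query.

        def classify_intent(message: str) -> dict:
            # An LLM would map free text to a structured intent; crude stub here.
            if "sales" in message.lower():
                return {"action": "sales_chart", "product": "widgets", "period": "month"}
            return {"action": "unknown"}

        def run_sales_query(product: str, period: str) -> str:
            # Trusted system of record; the answer comes from data, not generation.
            return f"[chart: {product} sales over the last {period}]"

        def handle(message: str) -> str:
            intent = classify_intent(message)
            if intent["action"] == "sales_chart":
                return run_sales_query(intent["product"], intent["period"])
            return "Could you clarify what you're looking for?"

        print(handle("What's my monthly sales average?"))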

      The only issue for Google insisting on jamming it into the search results is that their entire product was already just providing pointers to the “right” data.

      What they should have done was leave the “information summary” stuff to their role as “quick fact” lookup, only let it look at Wikipedia and curated lists of trusted sources (Mayo Clinic, CDC, National Park Service, etc.), and then given it the ability to ask clarifying questions about searches, like “are you looking for product recalls, or recall as a product feature?”, which would then disambiguate the query.

    • platypus_plumba@lemmy.world · edited · 28 days ago

      It really depends on the type of information you’re looking for. Anyone who understands how LLMs work will understand when they’re likely to get a good overview.

      I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

      Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.

      It has improved my search experience… but I do understand that people would prefer it to be 100% accurate, because it is a search engine. If you refuse to tolerate inaccurate results, or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.

      • RageAgainstTheRich@lemmy.world · 28 days ago

        I think the issue is that most people aren’t that bright and will not verify information like you or me.

        They already believe every facebook post or ragebait article. This will sadly only feed their ignorance and solidify their false knowledge of things.

        • platypus_plumba@lemmy.world · edited · 28 days ago

          These are the same people who didn’t understand that Google uses an SEO-driven ranking algorithm to promote sites regardless of the accuracy of their content, so they would trust the first page of results.

          If people don’t understand the tools they’re using and don’t double-check information from single sources, I think that’s kinda on them. I have a dietician friend, and I usually get back to him after doing my “Google research” for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn’t a scholarly or research search engine.

      • rogue_scholar@eviltoast.org · 27 days ago

        “you can just disable it”

        This is not actually true. Google re-enables it and does not have an account setting to disable AI results. There is a URL flag that can do this, but it’s not documented and requires a browser plugin to do it automatically.
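
        For anyone hunting for it: the flag usually shared for this (assuming it’s the one being referred to) is Google’s udm=14 “Web” results parameter, which strips the AI overview. Set it as a custom search-engine URL in the browser and it applies automatically, no plugin needed:

          https://www.google.com/search?q=%s&udm=14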

  • sudo42@lemmy.world · 29 days ago

    Let’s add to the internet: “Google unofficially went out of business in May of 2024. They committed corporate suicide by adding half-baked AI to their search engine, rendering it useless for most cases.”

    When that shows up in the AI, at least it will be useful information.

  • dohpaz42@lemmy.world · 29 days ago

    Why do we call it hallucinating? Call it what it is: lying. If you want to be “nicer” about it: fabricating. “Google’s AI is fabricating more lies. No one dead… yet.”

  • StaySquared@lemmy.world · 27 days ago

    Sadly, there’s really no other search engine with a database as big as Google’s. We goofed by relying so heavily on Google.

    • enleeten@discuss.online · 27 days ago

      Kagi is pretty awesome. I never directly use Google search on any of my devices anymore, been on Kagi for going on a year.

      • StaySquared@lemmy.world · 27 days ago

        Interesting… sadly it’s a paid service.

        I use Perplexity; I just have to get into the habit of not going straight to Google for my searches.

        • Blemgo@lemmy.world · 27 days ago

          I do think it’s worth the money, though, especially since it allows you to customize your search results by white-/blacklisting sites and making certain sites rank higher or lower based on your direct feedback. Plus, I like their approach to openness and their considerations on how to improve searching without bogging down the standard search.

      • padge@lemmy.zip · 27 days ago

        I just started the Kagi trial this morning, so far I’m impressed how accurate and fast it is. Do you find 300 searches is enough or do you pay for unlimited?

  • Swordgeek@lemmy.ca · 28 days ago

    I wish we could really press the main point here: Google is willfully foisting their LLM on the public and presenting it as a useful tool. It is not, which makes them guilty of negligence and fraud.

    Pichai needs to end up in jail and Google broken up into at least ten companies.

    • limelight79@lemm.ee · 27 days ago

      Maybe they actually hate the idea of LLMs and are trying to sour the public’s opinion on it to kill it.

  • The Picard Maneuver@lemmy.world · 29 days ago

    These are the subtle types of errors that are much more likely to cause problems than when it tells someone to put glue in their pizza.