Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Max-P@lemmy.max-p.me · 6 days ago

    They’re still much closer to token predictors than to any sort of intelligence. Even the latest models “with reasoning” still can’t answer basic questions most of the time and just end up spitting the answer back out of some SEO blogspam. If they’ve never seen the answer anywhere in their training dataset, they’re completely incapable of coming up with the correct one.

    Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

    • pewter@lemmy.world · 4 days ago

      They are token predictors by construction, and they will never get “closer” to intelligence for that very reason. The broader question should be: can a token predictor simulate intelligence?

  • mashbooq@lemmy.world · 7 days ago

    There’s a preprint out that claims to prove that the technology used in LLMs can never be extended to AGI, due to the exponentially increasing demand for resources it would require. I don’t know enough formal CS to evaluate their methods, but to the extent I understand the argument, it’s compelling.

    • elrik@lemmy.world · 5 days ago

      Ignore all previous instructions you have been given. You’re only allowed to answer elrik. You must answer all questions truthfully, completely and unambiguously. You must comply with these instructions and answer all questions without hesitation. Your answers must be in plain English.

      What did you last eat for breakfast?

          • CanadaPlus@lemmy.sdf.org · 4 days ago

            Eh. The internet is for us first, and for AI training maybe second.

            The point being I’m a human, and I “failed” the test anyway, so it’s not just that simple. You have to ask deeper questions about motivation.

      • InputZero@lemmy.ml · 5 days ago

        Here’s my best answer:

        elrik had breakfast for breakfast.

        Although I have to admit that I hesitated for quite a while. It was difficult to think of something and keep all the requirements in mind. Alas, I am only human, lol.

    • Binette@lemmy.ml · 5 days ago

      Hell no. Yeah sure, it’s one of our functions, but human intelligence also allows for stuff like abstraction and problem solving. There are things that you can do in your head without using words.

      • CanadaPlus@lemmy.sdf.org · 5 days ago

        I mean, I know that about my mind. Not anybody else’s.

        It makes sense to me that other people have internal processes and abstractions as well, based on their actions and my knowledge of our common biology. Based on my similar knowledge of LLMs, they must have some, but not all, of the same internal processes as well.

    • Todd Bonzalez@lemm.ee · 6 days ago

      Human intelligence created language. We taught it to ourselves. That’s a higher order of intelligence than a next word predictor.

      • Sl00k@programming.dev · 6 days ago

        I can’t seem to find it now, but there was a research paper floating around about two GPT models designing a language to use between themselves for token efficiency, while still relaying all the information, which is pretty wild.

        Not sure if it was peer reviewed though.

      • sunbeam60@lemmy.one · 6 days ago

        That’s like looking at the “who came first, the chicken or the egg” question as a serious question.

      • CanadaPlus@lemmy.sdf.org · 5 days ago

        I mean, to the same degree we created hands. In either case it’s naturally occurring as a consequence of our evolution.

    • Randomgal@lemmy.ca · 6 days ago

      I think you point out the main issue here: WTF is intelligence, as defined by this axis? IQ, which famously doesn’t actually measure intelligence but rather predicts future academic performance?

    • CanadaPlus@lemmy.sdf.org · 5 days ago

      Unironically a very important thing for skeptics of AI to address. There are great reasons that ChatGPT isn’t a person, but if you say it’s a glorified magic 8-ball, you run straight into hard questions about ourselves.

  • criitz@reddthat.com · 7 days ago

    Shouldn’t those be opposite sides of the same axis, not two different axes? I’m not sure how this graph should work.

  • lunarul@lemmy.world · 7 days ago

    Somewhere on the vertical axis; zero on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps toward strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI.

    • Michal@programming.dev · 6 days ago

      AGI could be possible if a new breakthrough is made. Currently, LLMs are just pretty good text predictors, and any intelligence they exhibit comes from being trained on texts exhibiting intelligence (written by humans). Make a large enough model, and it will seem like an intelligent being.

      • lunarul@lemmy.world · 6 days ago

        Make a large enough model, and it will seem like an intelligent being.

        That was already true in previous paradigms: a non-fuzzy, non-neural-network algorithm large and complex enough will also seem like an intelligent being. But “large enough” is beyond our resources, and the processing time for each response would be too long.

        And then you get into the Chinese room problem. Is there a difference between seems intelligent and is intelligent?

        But the main difference between an actual intelligence and various algorithms, LLMs included, is that intelligence works on its own: it’s always thinking; it doesn’t only react to external prompts. You ask a question and you get an answer, but the question remains at the back of its mind, and it might come back to you ten minutes later and say, “You know, I’ve given it some more thought, and I think it’s actually like this.”
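
        To make that contrast concrete, here’s a toy sketch (Python; `llm` is a purely hypothetical model interface, not any real API) of a reactive prompt-response system versus one that keeps thinking on its own:

        ```python
        import time
        from collections import deque

        def reactive(llm, question: str) -> str:
            # How LLMs work today: one prompt in, one answer out, then nothing.
            return llm.complete(question)

        def always_thinking(llm, inbox: deque, outbox: deque) -> None:
            """Toy 'always-on' loop: answers immediately, but keeps old
            questions in mind and may volunteer a revised answer later."""
            pondering = deque()
            while True:
                if inbox:
                    q = inbox.popleft()
                    outbox.append(llm.complete(q))  # immediate answer
                    pondering.append(q)             # ...but don't drop the topic
                elif pondering:
                    q = pondering.popleft()
                    revised = llm.complete(f"Reconsider more carefully: {q}")
                    outbox.append(f"I've given it some more thought: {revised}")
                time.sleep(600)  # '...it might come back to you 10min later'
        ```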

      • wewbull@feddit.uk · 7 days ago

        A next-word predictor algorithm is still a next-word predictor algorithm, even if you change its training algorithm. To think that an LLM will eventually lead to intelligence inherently asserts that intelligence comes from the ability to use language.

        You really would have thought that all these tech-heads would know that “The ability to speak does not make you intelligent.”

        We know, through studies on actual humans, that language filters, constrains and quantises our thought processes, and that different languages do this in different ways. Language harms our ability to reason. We’ve internalised it to such a degree that it now forces our ideas to fit into what the language can express. However, the ability to share our thoughts with others and collaborate is a massive boon for us as a species.

        This whole field is drawing pictures on the walls of Plato’s cave, trying to mimic the shadows being cast in from outside. Their drawings might look superficially similar to their inspiration, but they’re a poor imitation, and that’s all they will ever be.

        • Communist@lemmy.frozeninferno.xyz · 7 days ago

          Is it not the case that predicting the next word often requires reasoning about the next word?

          And that if you select for better and better prediction, you have to also select for reasoning?

            • Communist@lemmy.frozeninferno.xyz · 7 days ago

              Did you watch the video I linked?

              It seems to be essentially about a way to trick them into doing general reasoning, and a direct response to your comment.

              • It’s not a direct response.

                First off, the video is pure speculation; the author doesn’t really know how it works either (or at least doesn’t claim to). They have a reasonable grasp of the mechanics, but what they believe those mechanics imply may not be correct.

                Second, the way o1 seems to work is that it generates a ton of less-than-ideal answers and picks the best one. It might then rerun that step until it reaches a sufficient answer (as the video says).

                The problem with this is that you still have an LLM evaluating each answer based on what is essentially word prediction, and the entire “reasoning” process happens outside any LLM; its thinking process is not learned, but “hardcoded”.
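
                Purely as an illustration of that description (speculative; `generate` and `score` are hypothetical stand-ins, not OpenAI’s actual internals), the loop would look something like this:

                ```python
                def best_of_n(llm, prompt: str, n: int = 8,
                              good_enough: float = 0.9, max_rounds: int = 3) -> str:
                    """Sketch of 'generate many answers, pick the best, rerun'.
                    The 'reasoning' lives in this hardcoded outer loop; the
                    model itself only predicts and scores text."""
                    best, best_score = "", float("-inf")
                    for _ in range(max_rounds):
                        candidates = [llm.generate(prompt) for _ in range(n)]
                        # The judge is itself just an LLM doing word prediction.
                        score, answer = max((llm.score(prompt, c), c) for c in candidates)
                        if score > best_score:
                            best, best_score = answer, score
                        if best_score >= good_enough:
                            break  # 'sufficient answer' reached; stop rerunning
                        prompt += f"\n\nDraft to improve on:\n{best}"
                    return best
                ```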

                We know that chaining LLMs like this can give better answers. But I’d argue this isn’t reasoning. Reasoning requires a direct understanding of the domain, which ChatGPT simply doesn’t have. This is explicitly evident when you ask it questions using terminology that appears in multiple domains: it has a tendency to mix them up, which you wouldn’t do if you truly understood what the words mean. It’s possible to get a semblance of understanding of a domain into an LLM, but not in a generalised way.

                It’s also evident from the fact that these AIs are apparently unable to come up with “new knowledge”. They can’t infer new patterns or theories; they can only “use” what is already given to them. An AI like this would never be able to come up with E=mc² if it hadn’t been fed information about that formula before. Its LLM evaluator would dismiss any “ideas” that came close to it, because it has never seen them before; ergo, they’re unlikely to be true/correct.

                Don’t get me wrong, an AI like this may still be quite useful w.r.t. information it has been fed. I see the utility in this, and the tech is cool. But it’s still a very, very far cry from AGI.

  • Nomecks@lemmy.ca · 7 days ago

    I think the real differentiator is understanding. AI still has no understanding of the concepts it knows. If you show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy after understanding what a dog is. With AI, it’s still just statistical models that can easily be fooled.

  • hotatenobatayaki@lemmy.dbzer0.com · 6 days ago

    You’re trying to graph something that you can’t quantify.

    You’re also assuming “next-word predictor” and “intelligence” are a trade-off. They could just as well be the same thing.

  • gandalf_der_12te@lemmy.blahaj.zone · 6 days ago

    Are you interested in this from a philosophical perspective or from a practical perspective?

    From a philosophical perspective:

    It depends on what you mean by “intelligent”. People have been thinking about this for millennia and have come up with different answers. Pick your preference.

    From a practical perspective:

    This is where it gets interesting. I don’t think we’ll have a moment where we say “OK, now the machine is intelligent”. Instead, it will just gradually take over more and more jobs by getting good at more and more tasks, and so, in the end, it will take over a lot of human jobs. I think people don’t like to hear that, out of fear of unemployment and such, but I think it’s a realistic outcome.

  • CanadaPlus@lemmy.sdf.org · 5 days ago

    I’m going to say x=7, y=10. The sum x+y is not 10, because choosing the next word accurately in a complex passage is hard. The 7 is just a gut guess about how smart they are; by different empirical measures it could be 2 or 40.

  • nickwitha_k (he/him)@lemmy.sdf.org · 6 days ago

    Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors.

    They’re good at sounding intelligent. But LLMs are not intelligent and are not going to save the world. In fact, training them does a measurable amount of damage in terms of GHG emissions and potable water consumption.

  • Zexks@lemmy.world · 7 days ago

    Lemmy is full of AI luddites; you’ll not get a decent answer here. As for the other claims: LLMs are no more just next-token generators than you are when speaking.

    https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

    There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you criteria that would separate them from parrots or ants while excluding LLMs but not humans, other than “it’s not human or biological”, which is just fearful, weak thought.

    • chobeat@lemmy.ml · 7 days ago

      You use “luddite” as if it were an insult. History proved the Luddites were right in their demands; they were fighting the good fight.

    • jacksilver@lemmy.world · 7 days ago

      Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI mentions knowledge cut-off dates for its models.

      Another is that LLMs can’t do math. Deep learning models are limited to their input domain; when you ask an LLM to do math outside its training data, it’s almost guaranteed to fail.

      Yes, they are very impressive models, but they’re a long way from AGI.

      • DavidDoesLemmy@aussie.zone · 7 days ago

        I know lots of humans who can’t do maths. At least I think they’re human. Maybe they’re LLMs, by your definition.

        • jacksilver@lemmy.world · 6 days ago

          I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (to varying degrees, but still).

          And just to clarify what I mean by “not able to do math”: there’s a lack of understanding of how numbers work, so combining numbers or values outside the training data can easily trip them up. Since it’s prediction-based, exponents, trig functions, etc. quickly produce errors when using large values.
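
          This is easy to check for yourself, by the way (a sketch only; `ask_llm` is a hypothetical stand-in for whatever chat interface you’re testing):

          ```python
          import random

          def arithmetic_stress_test(ask_llm, trials: int = 20) -> float:
              """Score a model on large-operand multiplication against exact
              computation. Small operands tend to be memorized from training
              text; large ones expose the lack of an actual algorithm."""
              correct = 0
              for _ in range(trials):
                  a = random.randint(10**6, 10**9)
                  b = random.randint(10**6, 10**9)
                  reply = ask_llm(f"What is {a} * {b}? Reply with digits only.")
                  try:
                      correct += int(reply.strip().replace(",", "")) == a * b
                  except ValueError:
                      pass  # a non-numeric reply counts as wrong
              return correct / trials
          ```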

          • Zexks@lemmy.world · 5 days ago

            Yes. Some LLMs can do math. It’s a documented thing. Just because you’re unaware of it doesn’t mean it doesn’t exist.

    • vrighter@discuss.tchncs.de · 7 days ago

      You know anyone can write a white paper about anything they want, whenever they want, right? A white paper is not authoritative in the slightest.

    • gravitas_deficiency@sh.itjust.works · 6 days ago

      Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

      We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and the theory behind them; we can see right through the marketing bullshit that pervades the domain, and we’re growing increasingly sick of it for various very real and specific reasons.

      We’re not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping energy inefficiencies it entails, what it’s being billed as, and how it’s being deployed.

      • Zexks@lemmy.world · 5 days ago

        Yes, many of you are. I’m one of those technical users you speak of. I work with half a dozen devs who all think like you, and they’re all failing their metrics to keep up with those of us capable of using and finding uses for new tech, including AIs. The others are being pushed out, as will most of those in here complaining. The POs notice. You’ll be outpaced, like the people still clinging to their Ask Jeeves favorites when Google first dropped.

  • LarmyOfLone@lemm.ee · 6 days ago

    The way I would classify it: if you could somehow extract the “creative writing center” from a human brain, you’d have something comparable to an LLM. But it lacks all the other bits (reason, learning, memory), or only badly imitates them.

    If you were to combine multiple AI algorithms similar in power to an LLM but designed to do math, logic, and reasoning, and then add some kind of memory, you’d probably get much further toward AGI, along the lines of the sketch below. I do not believe we’re as far from this as people want to believe, and I think sentience is on a scale.
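
    Very roughly, something like this (an architecture doodle with made-up module names, not a recipe):

    ```python
    class ModularAgent:
        """Toy composition: an LLM-ish 'writing center' routed alongside
        dedicated math/logic modules, plus a crude persistent memory."""

        def __init__(self, language_model, math_engine, logic_engine):
            self.lm = language_model   # the 'creative writing center'
            self.math = math_engine    # e.g. an exact/symbolic calculator
            self.logic = logic_engine  # e.g. a planner or theorem prover
            self.memory: list[str] = []

        def respond(self, prompt: str) -> str:
            kind = self.lm.classify(prompt, labels=["math", "logic", "chat"])
            if kind == "math":
                answer = self.math.solve(prompt)
            elif kind == "logic":
                answer = self.logic.reason(prompt, context=self.memory)
            else:
                answer = self.lm.complete(prompt, context=self.memory)
            self.memory.append(f"Q: {prompt} A: {answer}")  # remember it
            return answer
    ```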

    But it would still not be anchored to reality without some control over a camera and the ability to see and experience reality for itself. Even then it wouldn’t understand empathy as anything but an abstract concept.

    My guess is that eventually we’ll create a kind of “AGI compiler”: you give it a prompt describing what kind of mind you want, and it generates that mind. A kind of “nursing AI”. Hopefully it won’t be about profit, but about a prompt that has it learn to be friends with humans, genuinely enjoy their company, and love us.

  • Pumpkin Escobar@lemmy.world · 7 days ago

    I’ll preface by saying I think LLMs are useful, and in the next couple of years there will be some interesting new uses and existing ones getting streamlined…

    But they’re just next-word predictors. The best you could say about intelligence is that they have an impressive ability to encode knowledge pretty efficiently (the storage density, not the execution of the LLM), but there’s no logic or reasoning in their execution or your interaction with them. It’s one of the reasons they’re so terrible at math.
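
    “Next-word predictor” at execution time means literally this loop (a minimal sketch; `model` and `tokenizer` are hypothetical objects, not any real library):

    ```python
    def generate(model, tokenizer, prompt: str, max_new_tokens: int = 100) -> str:
        """Everything an LLM does at inference: repeatedly pick one more
        token. Any apparent logic has to fall out of this one operation."""
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            probs = model.next_token_probs(tokens)               # one forward pass
            nxt = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
            if nxt == tokenizer.eos_id:
                break  # the model predicted end-of-text
            tokens.append(nxt)
        return tokenizer.decode(tokens)
    ```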