• fubarx@lemmy.ml · 2 months ago

    I actually like it when these code helpers guess from one line what the rest should be and suggest it. It’s even more fun when they keep guessing and the suggestions get progressively wackier, until they just start making completely unrelated shit up.

    Once you say no, they go back to the beginning and meekly repeat the very first suggestion, like a scolded puppy.

  • Sam_Bass@lemmy.ml · 2 months ago

    If the amount of money spent on this stuff equalled the amount of utility it actually delivers, it would be more popular than it is.

  • merthyr1831@lemmy.ml · 2 months ago

    Please bro please let me generate a few paragraphs of garbled sentences for you please bro I fucking love to say stuff like “delve” please

  • riodoro1@lemmy.world · 2 months ago

    My company has now made Copilot trainings mandatory. Nobody wants to use it, but a guy in a suit made them spend hundreds of thousands on it, and now it’s our problem.

    • batmaniam@lemmy.world · 2 months ago

      Dude, they flubbed this so damn hard by overreaching. A few years ago, when they mentioned there would be a button in Word that you could use to make a slide deck out of your Word doc, I was so excited. The Teams feature that summarizes meetings is honestly fantastic for Robert’s Rules of Order type stuff. My response was “I hate what this means in terms of privacy, but goddamn that sounds useful.”

      In turning it into an all-or-nothing everything-product, they massively screwed up. I have a self-hosted instance of llama-gpt that I use to solve the “blank page” problem, which is something AI was actually great at.

      I have a lot of issues with AI on principle, like a lot of folks. But it blows my mind how hard they screwed up the delivery (and I don’t just mean the startups; that’s to be expected). There’s plenty to be said about Uber on a principled level, but it’s still bloody convenient. The entire rollout of the AI ecosystem reeks of this meme: “but we made plans!”

    • peto (he/him)@lemm.ee · 2 months ago

      Isn’t the entire purpose of Copilot that it shouldn’t need much in the way of training? I think the extent of it at my employer is “this is the one you use.”

      I’ve tried it a few times; the only thing it seems remotely good for is when your recollection of a source is too fuzzy to form a traditional search query around. “What’s that book series I read in the early 2000s about kids who traveled to another world, where the things they brought back just looked like junk?” kind of questions.

      • ggppjj@lemmy.world · 2 months ago

        I’m a self-taught C# dev, and I’ve found tremendous success specifically from just describing what I want to do in dumb language that I’d feel stupid asking people about IRL, questions that aren’t googleable without already knowing what terms like “null-coalescing” and “non-merchandise supergroup” are describing.

        There are a lot of patterns that don’t have obvious names and that aren’t easily described without describing a specific scenario in a way that might only make sense institutionally, or with additional context that your average person might not have. ChatGPT is fairly good at being the “buddy that you have a bunch of in-jokes with who can remember things better than you”. I can skip a lot of explaining why I need to do a thing a certain way, like I can with my coworkers (none of whom are programmers), and I can get helpful answers to programming questions that my coworkers don’t know the answers to.

        It’s frustrating to see this incredibly advanced, context-aware autocorrect on steroids get used in ways that ignore the inherent strengths of what LLMs are actually great at doing. It’s infuriating to have that potential be actively misused, packaged as a service, and have that mediocre service sold back to you every month as a necessity by idiots in suits watching a line on a chart.

      • Amanduh@lemm.ee · 2 months ago

        That’s my favorite use of AI: remembering old-ass movies I have fragments of memories about from my childhood.

      • Sc00ter@lemm.ee · 2 months ago

        This was our company too. They struck some sort of deal with ChatGPT where we use their base code but aren’t connected to their machine learning. Feels like a pretty reasonable approach, in my opinion.

        So our training was: “Use ours. Don’t use anyone else’s, because we don’t want our proprietary information out there, never able to be scrubbed from the internet.”

      • Tar_Alcaran@sh.itjust.works · 2 months ago

        It’s pretty decent at unimportant optimisation tasks with limited options, like “I’m driving from X to Y; my friend travels by train from Z; what are good places to pick them up?”

    • mipadaitu@lemmy.world · 2 months ago

      I get daily emails reminding me that the company paid for copilot and we should be using it.

    • MattMatt@lemmy.world · 2 months ago

      My company is all in on GitHub Copilot. They have very unrealistic expectations for how much it will increase productivity. I suspect they were sold on data from junior developers, whom I think it helps the most. Anyway, now they are measuring how much engineers use it, so there is some pressure to use it more often.

      The training was a little worrisome and disingenuous. The internal team advocating for it aren’t strong coders, and they kept showing examples of it automating antipatterns: writing useless tests that duplicate an if statement in the tested function, writing very verbose but vague (meaningless) comments, or taking an example function and making a new one in a boilerplate way that copy-pastes common code rather than extracting it into a shared function.
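
      To picture the first one (a made-up example, not from our codebase): the generated “test” just re-implements the branch it claims to verify, so it can never catch a regression.

          def apply_discount(price: float, is_member: bool) -> float:
              # Function under test: members get 10% off.
              if is_member:
                  return price * 0.9
              return price

          def test_apply_discount():
              # Useless generated test: it duplicates the if statement from the
              # function instead of asserting hard-coded expected values, so it
              # still passes even if the discount logic itself is wrong.
              price, is_member = 100.0, True
              if is_member:
                  expected = price * 0.9
              else:
                  expected = price
              assert apply_discount(price, is_member) == expected

      A useful test would just assert apply_discount(100.0, True) == 90.0 directly.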

      Really, I think it’s helpful – sometimes. Especially for new engineers or when dealing with an unfamiliar library. But I do worry it will lower the bar, and I feel overusing it can be a waste of time.

    • mcforest@feddit.org · 2 months ago

      Are you talking about GitHub Copilot or Microsoft Copilot? Because I really think the first one is pretty useful, although I don’t think it needs any training. The second one, on the other hand, is complete bullshit.

  • Saki@lemmy.blahaj.zone · 2 months ago

    “Why would you need AI in a toaster’s firmware? Uhhhh, don’t think about it! Yeah, just use the damn thing! It will make your toast so much better, trust me!”

  • Destide@feddit.uk · 2 months ago

    I hope this comment finds you well,

    This meme perfectly captures the desperate plea of tech companies trying to get users to embrace their AI features. It’s like they’re saying, “We promise it’s worth it—just look at that gradient!” 😅

    I am an person

    • Rolando@lemmy.world · 2 months ago

      I’m sorry, but I don’t feel comfortable writing a reply to this comment because the only possible intelligent replies involve profanity or hate speech. Would you prefer a nice cookie recipe instead?

  • Got_Bent@lemmy.world · 2 months ago

    Fucking Adobe PDF is becoming damn near unusable because of this. Frustrating because I absolutely have to use it all day every day.

      • ThePJN@sopuli.xyz · 2 months ago

        The ability to filter comments actively as you mark them off as completed is magnificent.

        You mark a comment, it hides itself. Neat and tidy, fantastic.

        Why doesn’t Adobe do this, you ask? Who the fuck knows. Especially since you used to be able to do it in Acrobat.

        Why? Were people complaining it was too helpful?

        • Track_Shovel@slrpnk.net · 2 months ago

          I’m still learning it. It has a ton of capabilities, but I haven’t got to them yet. Its OCR is kind of meh, even at the highest setting.

    • dexa_scantron@lemmy.world · 2 months ago

      I thought it meant that all the icons/interfaces for AI seem to have a graphical gradient between colors, usually cool colors like blue/purple/pink. (Like the face in the meme)

        • watersnipje@lemmy.blahaj.zone · 2 months ago

          No. Nobody uses gradient descent anymore; it’s just the technique you learn about in beginner-level machine learning courses. It’s about the color gradient in all the AI logos.

      • monsterpiece42@reddthat.com · 2 months ago

        Yes, this is the correct answer. The words in the meme are addressed to a hypothetical end user; they would not reference technology the way the other person said.

    • IninewCrow@lemmy.ca · 2 months ago

      What are you talking about, asking questions? It’s AI … that’s all we need to know.

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

        “Gradient descent” ≈ on a “hilly” (mathematical) surface, try to find the lowest point by repeatedly moving downhill from an initial guess. Hopefully, the lowest point near your initial guess is low enough to pass as a solution to your problem.

        “Gradient” is basically the steepness, or the rate at which the thing you’re trying to optimize changes as you move through “space”. Mathematically, the gradient points uphill, so to reach the bottom you step in the opposite direction. “Descent” means “try to find the minimum”.

        I’m glossing over a lot of details, particularly what a “surface” actually means in the high-dimensional spaces that AI uses, but a lot of problems in mathematical optimization are solved like this. And one of the steps in training an AI agent is to do an optimization, which often does use a gradient descent algorithm. That being said, not every process that uses gradient descent is necessarily AI or even machine learning. I’m actually taking a course this semester where a bunch of my professor’s research is in optimization algorithms that don’t use gradient descent!
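
        If you want the idea in code, here’s a toy sketch (just the bare algorithm in plain Python, nothing to do with how any real AI framework is implemented):

            # Toy gradient descent: find the minimum of f(x) = (x - 3)**2.
            # In one dimension the "gradient" is just the derivative, 2 * (x - 3).
            def grad(x: float) -> float:
                return 2 * (x - 3)

            x = 10.0             # initial guess on the "surface"
            learning_rate = 0.1  # how big each downhill step is

            for _ in range(100):
                x -= learning_rate * grad(x)  # step opposite the gradient, i.e. downhill

            print(x)  # ~3.0, the lowest point near the initial guess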

        • mbtrhcs@feddit.org · 2 months ago

          This is a decent explanation of gradient descent, but I’m pretty sure the meme is referencing the color gradients often used to highlight when something is AI-generated haha

    • maniclucky@lemmy.world · 2 months ago

      Gradient descent is a common algorithm in machine learning (AI* is a subset of machine learning algorithms). It refers to using math to determine how wrong an answer is in a particular direction, and adjusting the algorithm to be less wrong using that information.

      • xthexder@l.sw0.com · 2 months ago

        The way you phrased that perfectly illustrates the current problem AI has: in a problem space as large as natural language, there are nearly an infinite number of ways it can be wrong. So no matter how much data we feed it, there will always be some “brand new sentence” someone asks that breaks it and causes a wrong answer.

        • maniclucky@lemmy.world · 2 months ago

          Absolutely. It’s why asking it for facts is inherently bad. It can’t retain information; it’s trained to give output shaped like an answer. It’s pretty good at things that don’t have a specific answer (I’ll never write another cover letter, thank blob).

          Now, if someone were to have the good sense to add some kind of lookup that injects correct information between the prompt and the output, we’d be cooking with gas. But that’s really human-labor intensive, and all the tech bros are trying to avoid that.
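
          Something like this hand-wavy sketch, where the facts table and the llm_complete stub are made up for illustration:

              # Look up trusted facts and inject them between the user's
              # question and the model, instead of trusting its "memory".
              FACTS = {
                  "boiling point of water": "Water boils at 100 °C at sea-level pressure.",
              }

              def llm_complete(prompt: str) -> str:
                  # Stand-in for a real model call, so the sketch runs anywhere.
                  return "(the model answers using)\n" + prompt

              def answer(question: str) -> str:
                  relevant = [fact for key, fact in FACTS.items() if key in question.lower()]
                  prompt = "Use only these facts:\n" + "\n".join(relevant) + "\nQuestion: " + question
                  return llm_complete(prompt)

              print(answer("What is the boiling point of water?"))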

  • NigelFrobisher@aussie.zone · 2 months ago

    I don’t even use LLMs to generate code, because all we ever do anymore is migrate the horde of one-or-two-endpoint microservices that was going to fix software development forever three years ago onto the latest hype hosting and DevOps platform that will somehow eliminate the maintenance cost of having all those services, for real this time.

  • Roopappy@lemmy.ml · 2 months ago

    I like how on Amazon, the “Rufus” thing always pops up over the stuff I’m trying to read.

    Searching “how can I turn off Rufus” didn’t come up with anything except how to turn it off in the app, not on the website.

    I had to use uBlock Origin to select and block it.
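
    For anyone else doing the same, the filter ends up being one line of this shape (the selector below is a guess; the element picker grabs the real one for you):

        www.amazon.com##div[id*="rufus"]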