• PenisDuckCuck9001@lemmynsfw.com

    I just want computer parts to stop being so expensive. Remember when gaming was cheap? Pepperidge Farm remembers. You used to be able to build a relatively high-end PC for less than the price of the average dogshit Walmart laptop.

    • filister@lemmy.world

      To be honest, right now is a relatively good time to build a PC, except for the GPU, which is heavily overpriced. If you’re content with last-gen AMD, even that can be brought down to somewhat acceptable levels.

    • Blackmist@feddit.uk

      As in as soon as companies realise they won’t be able to lay off everybody except executives and personal masseuses, nVidia will go back to having a normal stock price.

      Rich people will become slightly less grotesquely wealthy, and everything must be done to prevent this.

    • Bilb!@lem.monster

      The term “AI bubble” refers to the idea that the excitement, investment, and hype surrounding artificial intelligence (AI) may be growing at an unsustainable rate, much like historical financial or technological bubbles (e.g., the dot-com bubble of the late 1990s). Here are some key aspects of this concept:

      1. Overvaluation and Speculation: Investors and companies are pouring significant amounts of money into AI technologies, sometimes without fully understanding the technology or its realistic potential. This could lead to overvaluation of AI companies and startups.

      2. Hype vs. Reality: There is often a mismatch between what people believe AI can achieve in the short term and what it is currently capable of. Some claims about AI may be exaggerated, leading to inflated expectations that cannot be met.

      3. Risk of Market Crash: Like previous bubbles in history, if AI does not deliver on its overhyped promises, there could be a significant drop in AI investments, stock prices, and general interest. This could result in a burst of the “AI bubble,” causing financial losses and slowing down real progress.

      4. Comparison to Previous Bubbles: The “AI bubble” is compared to the dot-com bubble or the housing bubble, where early optimism led to massive growth and investment, followed by a sudden collapse when the reality didn’t meet expectations.

      Not everyone believes an AI bubble is forming, but the term is often used as a cautionary reference, urging people to balance enthusiasm with realistic expectations about the technology’s development and adoption.

    • SturgiesYrFase@lemmy.ml

      Also bubbles don’t “leak”.

      I mean, sometimes they kinda do? They either pop or slowly deflate, I’d say slow deflation could be argued to be caused by a leak.

        • sugar_in_your_tea@sh.itjust.works

          You can do it easily with a balloon (add some tape then poke a hole). An economic bubble can work that way as well, basically demand slowly evaporates and the relevant companies steadily drop in value as they pivot to something else. I expect the housing bubble to work this way because new construction will eventually catch up, but building new buildings takes time.

          The question is, how much money (tape) are the big tech companies willing to throw at it? There are a lot of ways AI could be pivoted into niche markets even if mass adoption doesn’t materialize.

            • sugar_in_your_tea@sh.itjust.works

              You do realize an economic bubble is a metaphor, right? My point is that a bubble can either deflate rapidly (severe market correction, or a “burst”), or it can deflate slowly (a bear market in a certain sector). I’m guessing the industry will do what it can to have AI be the latter instead of the former.

                • sugar_in_your_tea@sh.itjust.works

                  One good example of a bubble that usually deflates slowly is the housing market. The housing market goes through cycles, and those bubbles very rarely pop. It popped in 2008 because banks had simultaneously been caught with their hands in the cookie jar, lying about the risk levels of loans, so when foreclosures started, it caused a domino effect. In most cases the Fed just raises rates and housing prices naturally fall as demand falls, but in 2008 part of the problem was that banks kept selling bad loans despite high mortgage rates and high housing prices, all because they knew they could sell those loans off to another bank and make some quick profit (like a game of hot potato).

                  In the case of AI, I don’t think it’ll be the Fed raising rates that cools the market (that market isn’t impacted as much by rates), but the industry investing more to try to revive it. So Nvidia is unlikely to totally crash, because it’ll be propped up by Microsoft, Amazon, and Google, and Microsoft, Apple, and Google will keep pitching different use cases to slow the losses as businesses pull away from AI. That’s quite similar to how the Fed cuts rates to spur economic investment (i.e. borrowing) to soften the impact of a bubble bursting, just driven by mega tech companies instead of a government.

                  At least that’s my take.

      • stephen01king@lemmy.zip

        We talking about bubbles or are we talking about balloons? Maybe we should switch to using the word balloon instead, since these economic ‘bubbles’ can also deflate slowly.

        • SturgiesYrFase@lemmy.ml

          Good point, not sure that economists are human enough to take sense into account, but I think we should try and make it a thing.

    • iopq@lemmy.world

      The broader market did the same thing

      https://finance.yahoo.com/quote/SPY/

      $560 to $510 to $560 to $540

      So why did $NVDA have larger swings? It has to do with a concept called beta. High-beta stocks go up faster when the market is up and fall further when the market is down. Basically, high-variance, risky investments.

      Why did the market have these swings? Because of uncertainty about future interest rates. Interest rates not only matter for business loans but also set the risk-free rate for investors.

      When investors put money into the stock market, they want to get back the risk-free rate (how much they’d get from treasuries) + the risk premium (how much stocks outperform bonds long term).

      If the risks of the stock market stay the same but the payoff of treasuries changes, then you need a higher return from stocks. To get a higher return, you can only accept a lower price.

      This is why stocks are down; NVDA is still making plenty of money in AI.
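
      To make the beta and risk-free-rate points concrete, here’s a minimal sketch with made-up numbers (not real NVDA/SPY figures), just to show the mechanics:

      ```python
      # Beta: how much a stock amplifies market moves (illustrative numbers only).
      beta = 1.7                  # assumed high-beta stock
      market_move = -0.09         # market drops ~9% (e.g. $560 -> ~$510)
      print(f"expected stock move: {beta * market_move:.1%}")   # ~ -15.3%

      # CAPM-style required return: risk-free rate + beta * equity risk premium.
      def required_return(risk_free, risk_premium, beta):
          return risk_free + beta * risk_premium

      # Same expected payoff next year, higher treasury yield -> lower price today.
      payoff_next_year = 110.0
      for rf in (0.03, 0.05):     # risk-free (treasury) rate rises from 3% to 5%
          r = required_return(rf, 0.05, beta)
          print(f"risk-free {rf:.0%}: required {r:.1%}, fair price {payoff_next_year / (1 + r):.2f}")
      ```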

      • sugar_in_your_tea@sh.itjust.works

        There’s more to it as well, such as:

        • investors coming back from vacation and selling off losses and whatnot
        • investors expecting reduced spending between summer and holidays; we’re past the “back to school” retail bump and into a slower retail economy
        • upcoming election, with polls shifting between Trump and Harris

        September is pretty consistently more volatile than other months, and has net negative returns long-term. So it’s not just the Fed discussing rate cuts (that news has been reported over the last couple of months, so it should already be priced in), it’s also just normal sideways trading in September.

        • iopq@lemmy.world

          We already knew about back-to-school sales; they happen every year and they are priced in. If there were a real stock-market dump every September, everyone would short ahead of it, causing the drop in August and covering in September, which would make September a positive month again.

          • sugar_in_your_tea@sh.itjust.works

            It’s not every year, but it is more than half the time. Source:

            History suggests September is the worst month of the year in terms of stock-market performance. The S&P 500 SPX has generated an average monthly decline of 1.2% and finished higher only 44.3% of the time dating back to 1928, according to Dow Jones Market Data.
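
            For anyone who wants to sanity-check that seasonality themselves, a rough sketch (assuming the third-party yfinance package; exact numbers will differ by data source and date range):

            ```python
            # Average S&P 500 return and hit rate by calendar month (rough check).
            # Assumes the third-party "yfinance" package: pip install yfinance
            import yfinance as yf

            prices = yf.download("^GSPC", start="1950-01-01", interval="1mo")["Close"]
            returns = prices.pct_change().dropna()

            by_month = returns.groupby(returns.index.month)
            print(by_month.mean())                              # average return per month
            print(by_month.apply(lambda r: (r > 0).mean()))     # share of positive months
            ```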

        • Regrettable_incident@lemmy.world

          I could be misremembering but I seem to recall the digits on the front of my 486 case changing from 25 to 33 when I pressed the button. That was the only difference I noticed though. Was the beige bastard lying to me?

          • frezik@midwest.social

            Lying through its teeth.

            There was a bunch of DOS software that ran too fast to be usable on later processors. Like a Rogue-like game where you fly across the map too fast to control. The Turbo button would bring the clock down to 8086 speeds so that stuff stayed usable.

            • Regrettable_incident@lemmy.world

              Damn. Lol I kept that turbo button down all the time, thinking turbo = faster. TBF to myself, it’s a reasonable mistake! Mind you, I think a lot of what slowed that machine down was the hard drive. Faster than loading stuff from a cassette tape, but only barely. You could switch the computer on and go make a sandwich while Windows 3.1 loaded.

          • macrocephalic@lemmy.world

            Back in those early days many applications didn’t have proper timing; they basically just ran as fast as they could. That was fine on an 8 MHz CPU, since you probably just wanted stuff to run as fast as it could (we weren’t listening to music or watching videos back then). When CPUs got faster (or started running at a multiple of the base clock speed), stuff was suddenly happening TOO fast. The turbo button was a way to slow the clock down by some amount so legacy applications ran the way they were supposed to.
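
            A toy sketch of the difference (not DOS-era code, just the idea): the first loop runs as fast as the CPU allows, so a faster chip means a faster game; the second targets a fixed update rate regardless of the hardware.

            ```python
            import time

            # "Run as fast as you can": game speed scales directly with CPU speed.
            def naive_loop(updates):
                state = 0
                for _ in range(updates):
                    state += 1            # one game-state update per iteration, no delay

            # Fixed timestep: update 60 times per second no matter how fast the CPU is.
            def fixed_timestep_loop(seconds, rate_hz=60):
                step = 1.0 / rate_hz
                next_tick = time.perf_counter()
                end = next_tick + seconds
                while next_tick < end:
                    # ... update game state here, exactly rate_hz times per second ...
                    next_tick += step
                    delay = next_tick - time.perf_counter()
                    if delay > 0:
                        time.sleep(delay)
            ```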

            • barsoap@lemm.ee

              Most turbo buttons never worked for that purpose, though; they were still way too fast. Like, even ignoring other advances such as better IPC (or rather CPI back in those days), you don’t get to an 8MHz 8086 by halving the clock speed of a 50MHz 486. You get to 25MHz. And practically all games past that 8086 stuff were written with proper timing code, because devs knew perfectly well that they were writing for more than one CPU. Also there’s software to do the same job but more precisely and flexibly.

              It probably worked fine for the original PC-AT or something when running PC-XT programs (how would I know, our first family box was a 386), but after that it was pointless. Then it hung on for years, then it vanished.

  • ÞlubbaÐubba@lemm.ee

    I’m just praying people will fucking quit it with the worries that we’re about to get SKYNET or HAL when binary computing would inherently be incapable of recreating the fast pattern recognition required to replicate or outpace human intelligence.

    Moore’s law is about raw computing power, which is a measure of hardware performance, not of the software you can run on it.

    • utopiah@lemmy.world

      Unfortunately it’s part of the marketing. Thanks, OpenAI, for that “Oh no… we can’t share GPT-2, too dangerous” and then… here it is. Definitely interesting then, but not world-shattering. Same for GPT-3, except through an exclusive partnership with Microsoft, all closed; rinse and repeat for GPT-4. It’s a scare tactic to lock down what was initially open, both directly and by closing the door behind them through regulation, or at least trying to.

  • CosmoNova@lemmy.world

    Welp, it was ‘fun’ while it lasted. Time for everyone to adjust their expectations to much more humble levels than what was promised and move on to the next scheme. After the Metaverse, NFTs and ‘Don’t become a programmer, AI will steal your job literally next week!11’, I’m eager to see what they come up with next. And by eager I mean I’m tired. I’m really tired and hope the economy just takes a damn break from breaking things.

    • utopiah@lemmy.world

      move on to the next […] eager to see what they come up with next.

      That’s a point I’m making in a lot of conversations lately: IMHO the bubble didn’t pop BECAUSE capital doesn’t know where to go next. Despite reports from big banks that there is a LOT of investment for not a lot of actual returns, people are still waiting to see where to put that money next. Until there is such a place, they believe it’s still more beneficial to keep the bet going.

    • Fetus@lemmy.world

      I just hope I can buy a graphics card without having to sell organs some time in the next two years.

      • catloaf@lemm.ee

        My RX 580 has been working just fine since I bought it used. I’ve not been able to justify buying a new (used) one. If you have one that works, why not just stick with it until the market gets flooded with used ones?

      • macrocephalic@lemmy.world

        Don’t count on it. It turns out that the sort of stuff graphics cards do is good for lots of things: first it was crypto, then AI, and I’m sure whatever the next fad is will require a GPU to run huge calculations.

        • Grandwolf319@sh.itjust.works

          AI is shit, but imo we have been making amazing progress in computing power; it’s just that we can’t really innovate atm, just more race to the bottom.

          ——

          I thought capitalism bred innovation, did tech bros lie?

          /s

        • utopiah@lemmy.world

          I’m sure whatever the next fad is will require a GPU to run huge calculations.

          I also bet it will, cf. my earlier comment on render farms and looking for what “recycles” old GPUs (https://lemmy.world/comment/12221218), namely that it makes sense to prepare for it now and look for what comes next BASED on the current most popular architecture. It might not be the most efficient, but it will probably be the most economical.

      • Zorsith@lemmy.blahaj.zone

        I’d love an upgrade for my 2080 TI, really wish Nvidia didn’t piss off EVGA into leaving the GPU business…

      • sheogorath@lemmy.world

        If there is even a GPU being sold. It’s much more profitable for Nvidia to just make compute-focused chips than to upgrade their gaming lineup. GeForce will just get the compute-chip rejects and laptop GPUs for the lower-end parts. After the AI bubble bursts, maybe they’ll get back to their gaming roots.

  • masterspace@lemmy.ca

    Thank fucking god.

    I got sick of the overhyped tech bros pumping AI into everything with no understanding of it…

    But then I got way more sick of everyone else thinking they’re clowning on AI when in reality they’re just demonstrating an equal sized misunderstanding of the technology in a snarky pessimistic format.

      • technocrit@lemmy.dbzer0.com

        The tech bros had to find an excuse to use all the GPUs they got for crypto after they ~~bled that dry~~ upgraded to proof-of-stake.

        I don’t see a similar upgrade for “AI”.

        And I’m not a fan of BTC, but $50,000+ doesn’t seem very dry to me.

    • Sentient Loom@sh.itjust.works

      As I job-hunt, every job listed over the past year has been “AI-driven [something]” and I’m really hoping that trend subsides.

      • AdamEatsAss@lemmy.world

        “This is a mid-level position requiring at least 7 years of experience developing LLMs.” -Every software engineer job out there.

        • EldritchFeminity@lemmy.blahaj.zone

          Reminds me of when I read about a programmer getting turned down for a job because they didn’t have 5 years of experience with a language that they themselves had created 1 to 2 years prior.

        • macrocephalic@lemmy.world

          Yeah, I’m a data engineer and I get that there’s a lot of potential in analytics with AI, but you don’t need to hire a data engineer with LLM experience for aggregating payroll data.

          • utopiah@lemmy.world

            there’s a lot of potential in analytics with AI

            I’d argue there is a lot of potential in any domain with basic numeracy. In pretty much any business or institution somebody with a spreadsheet might help a lot. That doesn’t necessarily require any Big Data or AI though.

    • Jesus@lemmy.world

      I’m more annoyed that Nvidia is looked at like some sort of brilliant strategist. It’s a GPU company that was lucky enough to be around when two new massive industries found an alternative use for graphics hardware.

      They happened to be making pick axes in California right before some prospectors found gold.

      And they don’t even really make pick axes, TSMC does. They just design them.

      • Grandwolf319@sh.itjust.works

        Imo we should give credit where credit is due, and I agree, not a genius. Still, my pick is a 4080 for a new gaming computer.

      • Zarxrax@lemmy.world

        They didn’t just “happen to be around”. They created the entire ecosystem around machine learning while AMD just twiddled their thumbs. There is a reason why no one is buying AMD cards to run AI workloads.

        • sanpo@sopuli.xyz

          One of the reasons being Nvidia forcing unethical vendor lock in through their licensing.

        • towerful@programming.dev

          I feel like for a long time, CUDA was a laser looking for a problem.
          It’s just that the current (AI) problem might solve expensive employment issues.
          It’s just that C-Suite/managers are pointing that laser at the creatives instead of the jobs whose task it is to accumulate easily digestible facts and produce a set of instructions. You know, like C-Suites and middle/upper managers do.
          And NVidia have pushed CUDA so hard.

          AMD has ROCm, an open-source CUDA equivalent for AMD cards.
          But it’s kinda like Linux vs Windows: Nvidia’s CUDA is just so damn prevalent.
          I guess it was first. CUDA also has wider compatibility with Nvidia cards than ROCm has with AMD cards.
          The only way AMD can win is to show a performance boost for a power reduction and cheaper hardware. So many people are entrenched in Nvidia that the cost of switching to ROCm/AMD is a huge gamble.
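
          As a side note on the lock-in point: some higher-level frameworks do paper over the CUDA/ROCm split. A minimal sketch assuming PyTorch, whose ROCm builds expose AMD GPUs through the same torch.cuda interface (that’s an assumption about the framework, not something from this thread):

          ```python
          # Vendor-agnostic GPU use via PyTorch; the same code path is intended to
          # work on CUDA (Nvidia) and ROCm/HIP (AMD) builds of the library.
          import torch

          device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
          is_rocm = getattr(torch.version, "hip", None) is not None
          print(f"device: {device}, backend: {'ROCm/HIP' if is_rocm else 'CUDA/CPU'}")

          x = torch.randn(1024, 1024, device=device)
          y = x @ x     # same matmul regardless of the GPU vendor underneath
          ```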

        • mycodesucks@lemmy.world

          Go ahead and design a better pickaxe than them, we’ll wait…

          Same argument:

          “He didn’t earn his wealth. He just won the lottery.”

          “If it’s so easy, YOU go ahead and win the lottery then.”

          • masterspace@lemmy.ca

            My fucking god.

            “Buying a lottery ticket, and designing the best GPUs, totally the same thing, amiriteguys?”

            • mycodesucks@lemmy.world

              In the sense that it’s a matter of being in the right place at the right time, yes. Exactly the same thing. Opportunities aren’t equal - they disproportionately favor those who happen to be positioned to take advantage of them. If I’m giving away a free car right now to whoever comes by, and you’re not nearby, you’re shit out of luck. If AI didn’t HAPPEN to use massively multi-threaded computing, Nvidia would still be artificial scarcity-ing themselves to price gouge CoD players.

              The fact you don’t see it for whatever reason doesn’t make it wrong. NOBODY at Nvidia was there 5 years ago saying “Man, when this new technology hits we’re going to be rolling in it.” They stumbled into it by luck. They don’t get credit for foreseeing some future use case. They got lucky. That luck got them first-mover advantage. Intel had that too. Look how well it’s doing for them.

              Nvidia’s position over AMD in this space can be due to any number of factors… production capacity, driver flexibility, faster functioning on a particular vector operation, power efficiency… hell, even the relationship between the CEO of THEIR company and OpenAI. Maybe they just had their salespeople call first. Their market dominance likely has absolutely NOTHING to do with their GPUs having better graphics performance, and to the extent they are better, it’s by chance - they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

              • masterspace@lemmy.ca

                they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

                This is the part that’s flawed. They have actively targeted neural network applications with hardware and driver support since 2012.

                Yes, they got lucky in that generative AI turned out to be massively popular, and required massively parallel computing capabilities, but luck is one part opportunity and one part preparedness. The reason they were able to capitalize is because they had the best graphics cards on the market and then specifically targeted AI applications.

      • utopiah@lemmy.world

        They just design them.

        It’s not trivial though. They also managed to lock devs in with CUDA.

        That being said, I don’t think they were “just” lucky; I think they built their luck through practices the DoJ is currently investigating for potential abuse of monopoly.

        • nilloc@discuss.tchncs.de

          Yeah, CUDA made a lot of this possible.

          Once crypto mining got too hard to profit from, Nvidia needed a market beyond image modeling and college machine-learning experiments.

    • linearchaos@lemmy.world

      That would be absolutely amazing. How can we work out a community effort that’s designed to teach? With some crowdsourced tests, maybe we can bring education to the masses for free…

      • vithigar@lemmy.ca

        The pictures aren’t very good, I’ll grant you that, but they definitely don’t require even one kWh per image, and besides that, basically everything made with a computer costs power. We waste power on nonsense just fine without the help of LLMs or diffusion models.
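
        Back-of-envelope on the energy claim, using assumed (not measured) figures of roughly a 300 W GPU and about 5 seconds per locally generated image:

        ```python
        # Rough per-image energy estimate; both inputs are assumptions, not measurements.
        gpu_power_watts = 300          # assumed draw of one consumer GPU under load
        seconds_per_image = 5          # assumed local diffusion-model generation time

        kwh_per_image = gpu_power_watts * seconds_per_image / 3600 / 1000
        print(f"{kwh_per_image:.5f} kWh per image")   # ~0.00042 kWh, far below 1 kWh
        ```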

      • CafecitoHippo@lemm.ee

        I mean, machine learning and AI do have benefits, especially in research in the medical field. The consumer AI products are just stupid though.

        • SynopsisTantilize@lemm.ee

          It’s helped me learn coding and Spanish, and helped me build scripts that I would never have been able to write by myself or with technical works alone.

          If we’re talking specifically about the value I get out of what GPT is right now, it’s priceless to me. Like a second, albeit braindead, systems administrator on my shoulder for when I need something I don’t want to type out myself. And whatever mistakes it makes are within my abilities to repair on my own without fighting for it.

          • VerbFlow@lemmy.world

            Like my second, albeit braindead, systems administrator on my shoulder

            I think the more important part is that your systems administrator is braindead. I know it’s hyperbolic, but you can certainly learn coding (Link 2) and Spanish (Link 2) yourself.

            • SynopsisTantilize@lemm.ee

              I know coding now to a degree. But when I mess something up, I’m not going to post to a random forum somewhere to see if someone feels like looking at my problem, and then, when they do look at it, have someone feel the need to include their non-objective solution or answer. I don’t want conversation, or to be told “see you CoULD do this yourself If you did this”.

              Like yea, that’s cool… but my building just disconnected from the outside world and 1200 people are now expressing their concern, and me being told to just Google it when my Juniper flipped a bit isn’t going to cut it. And Andy in Montana just locked my post because “this question has been answered before”, with no elaboration. And Spiceworks has my exact issue, but it’s closed because: problem solved. But they didn’t show their work.

              How about this: when books first became widely adopted, people bitched that the youth would get lazy. Then it was radio. Then it was television. Then it was the Internet. Then it was social media. Now it’s AI.

              The race is always going. But you can stop whenever you feel uncomfortable. The rest of the pack is going to keep moving toward a finish line that never shows up. And newcomers can join at any time.

              ——

              For Spanish learning, I can now have endless conversation with something that never gets tired. It never stops being objective. Since the task is so simple, it never fucks up or hallucinates. It never tells me it has other things to do. It never discourages or demeans when I get something wrong. In fact it even plays along with whatever speed or level of language I need, such as kindergarten level or elementary level. And all of this is supplemental to actually learning through other means. Try to get that consistency on reddit. Whether that be speed, integrity or volition.

              Your suggestion would have worked in 2015, when Cleverbot was around or Siri was a creature comfort, but it’s 10 years later.

              Oh, and all of what I mentioned is free - to me.

          • Sp00kyB00k@lemmy.world

            AI didn’t do that. It stole all the information, for free, off the internet from people who tried to help others and make money off it.

            • SynopsisTantilize@lemm.ee

              I can very much so assure you that ChatGPT did all of those things for me.

              “PIXAR DIDN’T MAKE TOY STORY!! THE CGI ARTISTS DID!!!”

            • Grimy@lemmy.world

              Have you ever used Google Translate or apps that identify bugs/plants/songs? AI is used in products you most likely use every week.

              You are also arguing for a walled-garden system where companies like Reddit and Getty get to dictate who can make models and at what price.

              Individuals are never getting a dime out of this. In a perfect world, governments would be fighting for copyleft licenses for anything using big data, but every law being proposed is meant to create a soft monopoly owned by Microsoft and Google and kill open source.

      • ReCursing@lemmings.world

        Oh, you’re a luddite, you’re also a hater, and about as intractable and stupid as a Trump supporter. You can be many crappy things at once!

  • TropicalDingdong@lemmy.world

    It’s like the least popular opinion I have here on Lemmy, but I assure you, this is the beginning.

    Yes, we’ll see a dotcom-style bust. But it’s not like the world today wasn’t literally invented in that time. Do you remember where image generation was 3 years ago? It was a complete joke compared to a year ago, and today, fuck, no one here would know.

    When code generation goes through that same cycle, you can put out an idea in plain language, and get back code that just “does” it.

    I have no idea what that means for the future of my humanity.

    • Grandwolf319@sh.itjust.works

      I agree with you but not for the reason you think.

      I think the golden age of ML is right around the corner, but it won’t be AGI.

      It would be image recognition and video upscaling, you know, the boring stuff that is not game changing but possibly useful.

      • zbyte64@awful.systems

        I feel the same about the code generation stuff. What I really want is a tool that suggests better variable names.

    • rottingleaf@lemmy.world

      you can put out an idea in plain language, and get back code that just “does” it

      No you can’t. Simplifying it grossly:

      They can’t do the most low-level, dumbest detail, splitting hairs, “there’s no spoon”, “this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it” kind of solutions.

      And that happens to be main requirement that makes a task worth software developer’s time.

      We need software developers to write computer programs, because “a general idea” even in a formalized language is not sufficient, you need to address details of actual reality. That is the bottleneck.

      That technology widens the passage in the places which were not the bottleneck in the first place.

      • TropicalDingdong@lemmy.world

        I think you live in a nonsense world. I literally use it every day, and yes, sometimes it’s shit and it’s bad at anything that requires even a modicum of creativity. But 90% of shit doesn’t require a modicum of creativity. And my point isn’t about where we’re at, it’s about how far the same tech has progressed on another, domain-adjacent, task in three years.

        Lemmy has a “dismiss AI” fetish and does so at its own peril.

        • rottingleaf@lemmy.world

          Are you a software developer? Or a hardware engineer? EDIT: Or anyone credible in evaluating my nonsense world against yours?

            • rottingleaf@lemmy.world

              So close, but not there.

              OK, you’ll know that I’m right when you somewhat expand your expertise to neighboring areas. Should happen naturally.

            • hark@lemmy.world

              That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that is quite effective to get at least a crappy programmer to cobble together something that barely works, it is a far cry from having just anyone put out an idea in plain language and getting back code that just does it. A programmer is still needed in the loop.

              I’m sure I don’t have to explain to you that AI development over the decades has often reached plateaus where the approach needed to be significantly changed in order for progress to be made, but it could certainly be the case where LLMs (at least as they are developed now) aren’t enough to accomplish what you describe.

              • rottingleaf@lemmy.world

                It’s not about stages. It’s about the Achilles and tortoise problem.

                There’s extrapolation inside the same level of abstraction as the data given and there’s extrapolation of new levels of abstraction.

                But frankly far smarter people than me are working on all that. Maybe they’ll deliver.

        • Jesus_666@lemmy.world

          And I wouldn’t know where to start using it. My problems are often of the “integrate two badly documented company-internal APIs” variety. LLMs can’t do shit about that; they weren’t trained for it.

          They’re nice for basic rote work but that’s often not what you deal with in a mature codebase.

          • TropicalDingdong@lemmy.world

            Again, dismiss at your own peril.

            Because “Integrate two badly documented APIs” is precisely the kind of tasks that even the current batch of LLMs actually crush.

            And I’m not worried about being replaced by the current crop. I’m worried about future frameworks on technology like greyskull running 30, or 300, or 3000 uniquely trained LLMs and other transformers at once.

            • EatATaco@lemm.ee

              I’m with you. I’m a senior software engineer, and Copilot/ChatGPT have all but completely replaced me googling stuff, and replaced 90% of the time I used to spend writing the code for simple tasks I want to automate. I’m regularly shocked at how often Copilot will accurately autocomplete whole methods for me. I’ve even had it generate a whole child class near perfectly, although this is likely primarily due to my being very consistent with my naming.

              At the very least it’s an extremely valuable tool that every programmer should get comfortable with. And the tech is just in its baby form. I’m glad I’m learning how to use it now instead of pooh-poohing it.

              • TropicalDingdong@lemmy.world

                Ikr? It really seems like the dismissiveness is coming from people either not experienced with it, or just politically angry at its existence.

          • rottingleaf@lemmy.world

            I’ve written something vague in another place in this thread which seemed a good enough argument. But I didn’t expect that someone is going to link a literal scientific publication in the same very direction. Thank you, sometimes arguing in the Web is not a waste of time.

            EDIT: Have finished reading it. Started thinking it was the same argument, in the middle got confused, in the end realized that yes, it’s the same argument, but explained well by a smarter person. A very cool article, and fully understandable for a random Lemming at that.

          • TropicalDingdong@lemmy.world

            Dismiss at your own peril is my mantra on this. I work primarily in machine vision and the things that people were writing on as impossible or “unique to humans” in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces are now safely stored in the round bin.

            The same was true of agents for games like go and chess and dota. And now the same has been demonstrated to be coming true for languages.

            And maybe that paper built in the right caveats about “human intelligence”. But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

            The real issue is that previously there wasn’t a use case with enough viability to warrant the explosion of interest we’ve seen like with transformers.

            But transformers are, like, legit wild. It’s bigger than U-Nets. It’s way bigger than LSTMs.

            So dismiss at your own peril.

            • barsoap@lemm.ee

              But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

              Tell me you haven’t read the paper without telling me you haven’t read the paper. The paper is about T2 vs. T3 systems, humans are just an example.

              • TropicalDingdong@lemmy.world

                Yeah, I skimmed a bit. I’m on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn’t diminish my primary point: dismiss at your own peril.

                • barsoap@lemm.ee

                  dismiss at your own peril.

                  Oooo I’m scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I’m both too old and too much of an engineer for that BS especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is “Dude, but consider FOMO”.

                  That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you’re in that career path because AI BS and not a love for the maths… let’s just say that vacation doesn’t help against burnout. Switch tracks, instead, don’t do what you want but what you can.

                  Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We’re not talking about changes in architecture, we’re about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, maybe in 50 years there’s going to be another sigmoid and you’ll have written one of the papers leading up to it because you actually addressed the fucking core problem.

      • tetris11@lemmy.ml

        They’re pretty good, and the faults they have are improving steadily. I don’t think we’re hitting a ceiling yet, and I shudder to think where they’ll be in 5 years.

      • Grandwolf319@sh.itjust.works

        this is just wrong no matter how much you blabber to support it" kind of solutions.

        When you put it like that, I might be a perfect fit in today’s loudest-voice-wins landscape.

        • rottingleaf@lemmy.world

          I regularly think and post conspiracy-theory thoughts about why “AI” is such a hype. And in line with them, a certain kind of people seem to think that reality doesn’t matter, because those who control the present control the past and the future. That is, they think that controlling the discourse can replace controlling the reality. The issue with that is that whether a bomb is set, whether a boat is seaworthy, whether a bridge will fall is not defined by discourse.

  • Teal@lemm.ee

    Too much optimism and hype may lead to the premature use of technologies that are not ready for prime time.

    — Daron Acemoglu, MIT

    Preach!