• Noble Shift@lemmy.world · 2 months ago

    And this is why I never purchase a product with a revision code of *.0, and almost always purchase used.

  • w2tpmf@lemmy.world · 2 months ago

    In any real-world comparison (gaming frame rates, video encoding…), the 13700 beats the 7900X while being more energy efficient and costing less.

    That’s even giving AMD a handicap in the comparison, since the 7700X is supposed to be the direct competitor to the 13700.

    I say all this as a longggg time AMD CPU customer. I had planned on buying their CPU before multiple different sources of comparison steered me away this time.

    • M0oP0o@mander.xyz · 2 months ago

      Ok, so maybe you are missing the part where the 13th and 14th gens are destroying themselves. No one really cares whether you use AMD or not; this little issue is Intel’s, and it makes any performance, power use, or cost argument moot, since the CPU’s ability to not hurt itself in its confusion will now always be in question.

      Also, I don’t think CPU speeds have been a large bottleneck in the last few years, so the way both AMD and Intel keep pushing is just silly.

      • w2tpmf@lemmy.world · 2 months ago

        Yeah, that does suck. But I was replying specifically to the person saying Intel hasn’t been relevant for years because of a supposed performance dominance from AMD. That part just isn’t true.

        • M0oP0o@mander.xyz · 2 months ago

          Your comment doesn’t reply to anyone, though; it’s just floating out there on its own.

          And even taken as a reply, it still doesn’t make sense, since as of this “issue” any 13th- or 14th-gen Intel above a 600 is out of the running: they can’t be trusted not to kill themselves.

          • w2tpmf@lemmy.world · 2 months ago

            Yeah not really sure how my comment ended up where it is. Connect stacks comments in a weird way and I must have clicked reply in the wrong place.

            I was replying to this …

            Is there really still such a market for Intel CPUs? I do not understand that AMDs Zen is so much better and is the superior technology since almost a decade now.

            …Which, up until this issue, was NOT true. The entire Zen 2 line was a step behind the Intel chips that released at the same time as it.

            I’ve been running a 3600X for years now and love it… But an i5-10600K that came out at the same time absolutely smashes it in performance.

            • M0oP0o@mander.xyz · 2 months ago

              Those came out a year apart, and neither one “smashes” the other in performance. I doubt you could even notice the difference between the two, and that is the issue with CPUs today: they are not the bottleneck in most systems. I have used both of these (I like the 10600K as well), but they offer almost exactly the same “performance,” and I would not turn up my nose at either.

              The issue is that (especially for personal use cases) there is no justification for the newer systems. DDR4 still runs literally everything, and both of these 4+ year old CPUs (now a few gens old) will also run anything well outside of exotic cases. You are more likely to see slowdowns from a lack of RAM (since most programs today seem to think the stuff is unlimited), GPU bottlenecks, or just badly optimized software.

    • residentmarchant@lemmy.world · 2 months ago

      As compared to a recall and re-fitting a fab, a class action is probably the cheaper way out.

      I wish companies cared about what they sold instead of picking the cheapest way out, but welcome to the world we live in.

      • tal@lemmy.today · 2 months ago

        I mean, I’m sure Intel cares.

        My problem is really in how they handled the situation once they knew that there was a problem, not even the initial manufacturing defect.

        Yes, okay. They didn’t know exactly the problem, didn’t know exactly the scope, and didn’t have a fix. Fine. I get that that is a really hard problem to solve.

        But they knew that there was a problem.

        Putting out a list of known-affected processors and a list of possibly-affected processors at the earliest date would have at least let their customers do what they could to mitigate the situation. And I personally think that they shouldn’t have been selling more of the potentially-affected processors until they’d figured out the problem well enough to ensure that people who bought new ones wouldn’t be affected.

        And I think that, at first opportunity, they should have advised customers as to what Intel planned to do, at least within the limits of certainty (e.g. if Intel can confirm that the problem is due to an Intel manufacturing or design problem, then Intel will issue a replacement to consumers who can send in affected CPUs) and what customers should do (save purchase documentation or physical CPUs).

        Those are things that Intel could certainly have done but didn’t. This is the first statement they’ve made with some of that kind of information.

        It might have meant that an Intel customer holds off on an upgrade to a potentially-problematic processor. Maybe those customers would have been fine taking the risk or just waiting for Intel to figure out the issue, issue an update, and make sure that they used updated systems with the affected processors. But they would have at least been going into this with their eyes open, and been able to mitigate some of the impact.

        Like, I think that in general, the expectation should be that a manufacturer who has sold a defective product puts out what information it can to help customers mitigate the impact, even if that information is incomplete, at the soonest opportunity. And I generally don’t think that a manufacturer should sell a product with known severe defects (of the “it might well destroy itself in a couple of months” variety).

        I think that one should be able to expect a manufacturer to do so even today. If there are reasons that they are not willing to (e.g. concerns about any statement affecting their position in potential class-action suits), I’d like regulators to restructure the rules to eliminate that misincentive. Maybe it could be a stick, like “if you don’t issue information dealing with known product defects of severity X within N days, you are exposed to strict liability.” Or a carrot, like “any information in public statements provided to consumers with the intent of mitigating harm caused by a defective product may not be introduced as evidence in class-action lawsuits over the issue.”

        But I want manufacturers of defective products to act, not to just sit there clammed up, even if they haven’t figured out the full extent of the problem. They are almost certainly in a better position to figure out the problem and issue mitigating information than their customers individually are. In this case, Intel just silently sat there for a very long time while a lot of their customers tried to figure out the scope of what was going wrong, and often spent a lot of money trying to address the problem themselves, when more information from Intel probably would have spared them some of those costs.

        • tal@lemmy.today · 2 months ago

          To put this another way, Intel had at least three serious failures that let the problem reach this level:

          • A manufacturing defect that led to the flawed CPUs being produced in the first place.

          • A QA failure to detect the flawed CPUs initially (or to quickly narrow down the likely and certain scope of the problem once the issue arose). Not to mention letting a second generation of chips with the defect go out the door, I can only assume (and hope) without QA having identified that they were also affected.

          • A customer care failure, in that Intel did not promptly and publicly provide customers with information that Intel either had or should have had about the likely scope of the problem and mitigation, and, at least within some bounds of uncertainty (“if it can be proven that the problem is due to an Intel manufacturing defect on a given processor, for some definition of proven, Intel will provide a replacement processor”), what Intel would do for affected customers. A lot of customers spent a lot of time duplicating effort trying to diagnose and address the problem at their level, as well as continuing to buy and use the defective CPUs. It is almost certain that some of that was not necessary.

          The manufacturing failure sucks, fine. But it happens. Intel’s pushing physical limits. I accept that this kind of thing is just one thing that occasionally happens when you do that. Obviously not great, but it happens. This was an especially bad defect, but it’s within the realm of what I can understand and accept. AMD just recalled an initial batch of new CPUs (albeit way, way earlier in the generation than Intel)…they dicked something up too.

          I still don’t understand how the QA failure happened to the degree that it did. Yes, it was a hard problem to identify, since it was progressive degradation that took some time to arise, and there were a lot of reasons for other components to potentially be at fault. And CPUs are a fast-moving market; maybe you can’t try running a new gen of CPU for weeks or months prior to shipping. But for Intel not to have identified, at least within certain parameters, that they had a problem with the 13th gen after release, and then not to have held up the 14th gen until it was definitely addressed, seems unfathomable to me. Like, does Intel not have a number of CPUs that they just keep hot and running to see if there are aging problems? Surely that has to be part of their QA process, right? I used to work for another PC component manufacturer, and while I wasn’t involved in it, I know that they definitely did that as part of their QA process.

          But as much as I think that that QA failure should not have happened, it pales in comparison to the customer care failure.

          Like, there were Intel customers who kept building systems with components that Intel knew or should have known were defective. For a long time, Intel did not promptly issue a public warning saying “we know that there is a problem with this product.” They did not pull known-defective components from the market, which means that customers kept sinking money into them (and resources into trying to diagnose and otherwise resolve the issues). Intel did not issue a public statement about the likely-affected components, even though they were probably in the best position to know. Again, they let customers keep building them into systems.

          They did not issue a statement as to what Intel would do (and I’m not saying that Intel has to conclusively determine that this is an Intel problem, but at least say “if this is shown to be an Intel defect, then we will provide a replacement for parts proven to be defective due to this cause”). They did not issue a statement telling Intel customers what to do to qualify for any such program. Those are all things that I am confident Intel could have done much earlier, and which would have substantially reduced how bad this incident was for their customers.

          Instead, their customers were left in isolation to try to figure out the problems individually and come up with mitigations themselves. In many cases, manufacturers of other parts were blamed, and money was spent buying components unnecessarily, or trying to run important services on components that Intel knew or should have known were potentially defective. Like, I expect Intel, whatever failures happen at the manufacturing or QA stages, to get the customer care part done correctly. I expect that even if Intel does not yet completely understand the scope of the problem or how it could be addressed. And they really did not.

          • toddestan@lemm.ee · 2 months ago

            I’d argue there was a fourth serious failure, and that was Intel allowing the motherboard manufacturers to go nuts and run these chips way out of spec by default. Granted, ultimately it was the motherboard manufacturers that did it, but there’s really no excuse for what these motherboards were doing by default. Yes, I get the “K” chips are unlocked, but it should be up to the user to choose to overclock their CPU and how they want to go about it. To make matters worse, a lot of these motherboards didn’t even have an easy way to put things back into spec - it was up to you to go through all the settings one by one and set them correctly.
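
            For anyone who wants to see what limits their own board is actually applying, here is a minimal sketch (my own illustration, not anything from the article): on Linux, the intel_rapl powercap driver exposes the package power limits (PL1/PL2) that the firmware/OS has programmed, which you can compare against Intel’s published numbers for your SKU. The sysfs paths and permissions can vary by kernel and platform.

                # Rough sketch (Linux-only, assumes the intel_rapl powercap driver is loaded):
                # read the package power limits currently programmed and print them so they
                # can be compared against Intel's spec sheet for the CPU.
                from pathlib import Path

                PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

                def read(path: Path) -> str:
                    return path.read_text().strip()

                if not PKG.exists():
                    raise SystemExit("intel_rapl powercap interface not found on this system")

                print("domain:", read(PKG / "name"))
                # constraint 0 is typically the long-term limit (PL1), constraint 1 the short-term (PL2)
                for n in (0, 1):
                    name = read(PKG / f"constraint_{n}_name")
                    watts = int(read(PKG / f"constraint_{n}_power_limit_uw")) / 1_000_000
                    window = int(read(PKG / f"constraint_{n}_time_window_us")) / 1_000_000
                    print(f"constraint {n} ({name}): {watts:.0f} W over a {window:.3f} s window")

            Note this only shows what is programmed into the RAPL registers; boards can still push current and voltage elsewhere, so treat it as a hint rather than proof that a system is running in spec.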

  • demesisx@infosec.pub · 2 months ago

    The other day, when this news first hit, I bought two ITM put options on INTC. Then I waited three days and sold them for a 200% profit. Then I used the profit to invest in the SOXX ETF. Feels good to finally get some profit from INTC’s incompetence.

  • gamermanh@lemmy.dbzer0.com · 2 months ago

    After literally 14 years of avoiding AMD after getting burned twice, I finally went back to team red just a week ago for a new CPU

    so glad I picked them now lol

    • Zetta@mander.xyz · 2 months ago

      I switched to AMD with the Ryzen 3000 series and can’t see myself going to Intel for at least 2 or 3 more upgrades (like 10 years for me), and that’s only if they are competitive again in that amount of time.

      • gamermanh@lemmy.dbzer0.com · 2 months ago

        One DOA CPU, which the physical store I bought it at didn’t have any more of, so I got a cheaper Intel CPU they DID have. Tbh that might have been the store dropping it or storing it improperly; they weren’t a very competent electronics store.

        And a Sapphire GPU that only worked with one very specific driver version, which for some reason wasn’t even on their website anymore when I tried to install it. I eventually got it working after hours of hunting and fiddling, which I had to repeat when I gave the PC away to a friend’s little brother and they wiped it without checking the driver versions I left behind like I told them to.

        Recently built my wife a new AMD-based system because grudges have to end eventually, and I think I couldn’t have picked a better time tbh

        • communism@lemmy.ml · 2 months ago

          Damn, yeah, I can definitely understand that grudge, but modern AMD products are a lot better. I recently upgraded my AM4 CPU and also moved to a new Radeon GPU, and I think they both work really well, after previously having some issues with earlier AMD products. Especially for Linux gaming, AMD is the way to go.

    • demizerone@lemmy.world · 2 months ago

      In my case I upgraded from a Threadripper 1950X to a 14900K, and the machine died after four months. Went back to Threadripper (a 7960X) like I should have. My 14th-gen CPU still posts, but I haven’t thrown any load at it yet. I’m hoping it can still be a streaming box…

  • CaptainBasculin@lemmy.ml · 2 months ago

    Considering AMD has also paused the release of its 9th-gen Ryzen chips just before their release date, I wonder if this issue is caused by TSMC.

    • nek0d3r@lemmy.world · 2 months ago

      I genuinely think that was the best Intel generation. Things really started going downhill in my eyes after Skylake.

      • gamermanh@lemmy.dbzer0.com · 2 months ago

        Could also be the fucking GPU if it’s doing that, apparently

        Had some sag on my GPU after years and didn’t really notice. Tried troubleshooting and was about to go mad till someone on Reddit, in a comment from a year ago, said to try reseating the GPU and then adding a support bracket

        Sure as shit it worked

  • wirehead@lemmy.world · 2 months ago

    A few years ago now I was thinking that it was about time for me to upgrade my desktop (with a case that dates back to 2000 or so, I guess they call them “sleepers” these days?) because some of my usual computer things were taking too long.

    And I realized that Intel was selling the 12th generation of the Core at that point, which meant the next one was a 13th generation, and I dunno, I’m not superstitious, but I figured if anything went wrong I’d feel pretty darn silly. So I pulled the trigger and got a 12th-gen Core processor and motherboard and a few other bits.

    This is quite amusing in retrospect.

    • JPAKx4@lemmy.blahaj.zone · 2 months ago

      I recently built myself a computer and went with AMD’s 3D V-Cache chips, and boy am I glad. I think I went 12th gen for my brother’s computer, but it was mid-range, which hasn’t had these issues to my knowledge.

      Also yes, sleeper is the right term.

      • tal@lemmy.today · 2 months ago

        I think I went 12th gen for my brother’s computer

        12th gen isn’t affected. The problem affects only the 13th and 14th gen Intel chips. If your brother has 12th gen – and you might want to confirm that – he’s okay.

        For the high-end thing, initially it was speculated that it was just the high-end chips in these generations, but it’s definitely the case that chips other than just the high-end ones have been recorded failing. It may be that the problem is worse with the high-end CPUs, but it’s known to not be restricted to them at this point.

        The bar they list in the article here is 13th and 14th gen Intel desktop CPUs over 65W TDP.
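
        If you want a quick way to sanity-check a machine against that description, a rough heuristic like the following works (my own sketch, Linux-only; the part-number pattern is an assumption, not an official affected-CPU list, and you still need to check the TDP of the exact SKU):

            # Rough heuristic, not an official checker: flag CPUs whose model name looks
            # like a 13th/14th-gen Core i5/i7/i9 part (e.g. i7-13700K, i9-14900KF), based
            # on the "model name" field Linux reports in /proc/cpuinfo.
            import re

            PATTERN = re.compile(r"i[5-9]-1[34]\d{3}[A-Z]*")

            with open("/proc/cpuinfo") as f:
                models = {line.split(":", 1)[1].strip()
                          for line in f if line.startswith("model name")}

            for model in sorted(models):
                match = PATTERN.search(model)
                if match:
                    print(f"{model}\n  -> {match.group(0)} looks like a 13th/14th-gen part; "
                          "check whether it is a desktop SKU over 65 W TDP")
                else:
                    print(f"{model}\n  -> does not match the 13th/14th-gen pattern")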

    • mox@lemmy.sdf.org (OP) · 2 months ago

      I don’t think we’ve been given any reason to believe this was caused by Intel Management Engine.

  • gearheart@lemm.ee · 2 months ago

    This would be funny if it happened to Nvidia.

    Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

    No one wants that.

    • mlg@lemmy.world · 2 months ago

      This would be funny if it happened to Nvidia.

      Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

      Lol there was a reason Xbox 360s had a whopping 54% failure rate and every OEM was getting sued in the late 2000s for chip defects.

          • icedterminal@lemmy.world · 2 months ago

            Tagging on here: both the first-model PS3 and the Xbox 360 were hot boxes with insufficient cooling. Both got too hot too fast for their cooling solutions to keep up, resulting in hardware stress that weakened the chips’ solder joints until they eventually cracked.

            • john89@lemmy.ca · 2 months ago

              Owner of an original 60GB PS3 here.

              It got very hot and eventually stopped working. It was under warranty, and I got an 80GB replacement for $200 cheaper, but lost out on backwards compatibility, which really sucked because I had sold my PS2 to get a PS3.

              • lennivelkant@discuss.tchncs.de · 2 months ago

                Why would you want backwards compatibility? To play games you already own and like instead of buying new ones? Now now, don’t be ridiculous.

                Sarcasm aside, I do wonder how technically challenging it is to keep a system backwards-compatible. I understand console games are written for specific hardware specs, but I’d assume newer hardware still understands the old instructions. It could be an OS question, but again, I’d assume they would develop the newer version on top of the old one, so I don’t know why it wouldn’t support the old features anymore.

                I don’t want to cynically claim that it’s only done for profit reasons, and I’m certainly out of my depth on the topic of developing an entire console system, so I want to assume there’s something I just don’t know about, but I’m curious what that might be.

                • john89@lemmy.ca · 2 months ago

                  It’s my understanding that backwards-compatible PS3s actually had PS2 hardware in them.

                  We can play PS2 and PS1 games if they are downloaded from the store, so emulation isn’t an issue. I think Sony looked at the data and saw they would make more money removing backwards compatibility, so that’s what they did.

                  Thankfully the PS3 was my last console before standards got even lower and they started charging an additional fee to use my internet.

        • hardcoreufo@lemmy.world · 2 months ago

          I think the 360 failed for the same reason lots of early/mid-2000s PCs failed: they had issues with chips lifting due to the move away from leaded solder. Over time the solder formulations improved, and we don’t see that as much anymore. At least that’s the way I recall it.

    • brucethemoose@lemmy.world · 2 months ago

      This would be funny if it happened to Nvidia.

      It kinda has, with Fermi, lol. The GTX 480 was… something.

      Same reason too. They pushed the voltage too hard, to the point of stupidity.

      Nvidia does not compete in this market though, as much as they’d like to. They do not make x86 CPUs, and frankly Intel is hard to displace since they have their own fab capacity. AMD can’t take the market themselves because there simply isn’t enough TSMC/Samsung capacity to go around.

      • Kyrgizion@lemmy.world · 2 months ago

        There’s also Intel holding the x86 patent and AMD holding the x64 patent. Those two aren’t going anywhere yet.

        • wax@feddit.nu · 2 months ago

          Actually, it looks like the base patents have expired. All the extensions (SSE, AVX, etc.) are still in effect, though.
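
          Patent status aside, it is easy to see which of those extensions a given CPU actually advertises. A minimal sketch (my own, Linux-only, just reading the flags line the kernel exposes in /proc/cpuinfo):

              # Minimal sketch: list which common x86 extensions the local CPU advertises,
              # based on the "flags" line in /proc/cpuinfo (Linux-only).
              EXTENSIONS = ["sse", "sse2", "sse4_1", "sse4_2", "avx", "avx2", "avx512f"]

              flags = set()
              with open("/proc/cpuinfo") as f:
                  for line in f:
                      if line.startswith("flags"):
                          flags.update(line.split(":", 1)[1].split())
                          break  # every core reports the same flag set

              for ext in EXTENSIONS:
                  print(f"{ext:8} {'yes' if ext in flags else 'no'}")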

  • Justin@lemmy.jlh.name · 2 months ago

    Intel is about to have a lot of lawsuits on their hands if this deny-delay-deflect strategy doesn’t work out for them. This problem has been going on for over a year, and the details Intel lets slip just keep getting worse and worse. The more customers realize they’ve got defective CPUs, the more outcry there’ll be for a recall. Intel is going to be in a lot of trouble if they wait until regulators force them to do a recall.

    The big moment of truth is next month, when they have earnings and we see what the performance impact from dropping voltages will be. Hopefully it’ll just be 5% and no more CPUs die. I can’t imagine investors will be happy about the cost, though.

    • Archer@lemmy.world · 2 months ago

      I want to say “gamers rise up,” but honestly, gamers calling their members of Congress every day and asking what they’re doing about this fraud would be way more effective. Congress is in a Big Tech-regulating mood right now.