• rtxn@lemmy.world · 3 months ago

    Cybercriminals are creaming their jorts at the potential exploits this might open up.

  • Coasting0942@reddthat.com · 3 months ago

    Others have already laughed at this idea, but on a similar topic:

    I know we’ve basically disabled a lot of features that sped up the CPU but introduced security flaws. Is there a way to turn those features back on for an airgapped computer intentionally?
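    (For what it’s worth, this does exist on Linux: the kernel has a single switch that turns off the optional CPU vulnerability mitigations, intended for exactly this kind of trusted/air-gapped machine. A minimal sketch, assuming a Debian-style system booting via GRUB:)

```shell
# mitigations=off disables the optional Spectre/Meltdown-class mitigations
# on recent Linux kernels. Only sane on a machine that never runs
# untrusted code. Add it to the kernel command line in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Then regenerate the GRUB config and reboot:
sudo update-grub    # Debian/Ubuntu; use grub2-mkconfig on other distros

# After reboot, the kernel reports what is (not) mitigated under:
#   /sys/devices/system/cpu/vulnerabilities/
```

    Whether it buys back meaningful performance depends heavily on the CPU generation and workload (syscall-heavy work tends to gain the most).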

  • 9point6@lemmy.world · 3 months ago (edited)

    Haha okay

    Edit: after a skim and a quick Google, this basically looks like a packaging up of existing modern processor features (sorta AVX/SVE with a load of speculative execution thrown on top)

  • xantoxis@lemmy.world · 3 months ago

    “This change is likened to expanding a CPU from a one-lane road to a multi-lane highway.”

    This analogy just pegged the bullshit meter so hard I almost died of eyeroll.

    • AnarchistArtificer@slrpnk.net · 3 months ago

      You’ve got to be careful with rolling your eyes, because the parallelism of the two eyes means that the eye roll can be twice as powerful.^1

      ^1 If measured against the silly baseline of a single eyeroll.

    • rottingleaf@lemmy.zip · 3 months ago

      Apparently the percentage of people on the management side of the industry who actually understand what they’re doing is now too low to filter out even bullshit like this.

  • Buffalox@lemmy.world · 3 months ago (edited)

    Why is this bullshit upvoted?
    In the very first sentence they shift from the headline’s “without recoding” to “with further optimization”.
    Then comes the explanation: “a companion chip that optimizes processing tasks in real-time”.
    That has already been done at the compiler level, and internally in any modern CPU, for more than a decade.

    It might be possible to some degree for some specific kinds of code, maybe JIT-compiled languages like Java. But for the CPU in general this is bullshit, and the headline is decidedly dishonest.

  • blahsay@lemmy.world · 3 months ago

    10 tricks to speed up your CPU and trim belly fat. Electrical engineers hate them! Invest now! The startup is called ‘DefinitelyNotAScam’.

  • tombruzzo@lemm.ee · 3 months ago

    I don’t care. Intel promised 5 nm, 10 GHz single-core processors by this point and I still want it out of principle.

  • Amanda@aggregatet.org · 3 months ago

    Has anyone been able to find an actual description of what this does? I clicked two layers deep and neither link explains the details. It sounds like they’re doing CPU scheduling in hardware, which is cool and makes some sense, but the descriptions are too vague to explain what the hell this is beyond “more parallelism goes brrrr”, and it’s not clear to me why current GPUs don’t already count as that.