• superkret@feddit.org · 2 months ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

    • FierySpectre@lemmy.world · 2 months ago

      Using AI for anomaly detection is nothing new, though. I haven't read any article about this specific 'discovery', but this kind of work usually relies on a completely different technique from the AI that comes to mind when people think of AI these days.

      • Johanno@feddit.org · 2 months ago

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

          > Say it’s a predictive LLM

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
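          In case it helps picture it, here is a minimal sketch of that kind of setup in PyTorch: torchvision's ResNet18 with the first conv swapped for single-channel (grayscale) input and the classifier replaced by a 5-output risk head. The specifics (input size, head, names) are my assumptions for illustration, not the paper's actual code.

          ```python
          import torch
          import torch.nn as nn
          from torchvision import models

          # Rough sketch only: a ResNet18 backbone repurposed for mammograms.
          # The grayscale stem and the 5-output head are illustrative assumptions.
          backbone = models.resnet18(weights=None)

          # Mammograms are grayscale, so swap the 3-channel stem for a 1-channel one.
          backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

          # Replace the ImageNet classifier with one logit per future year of risk.
          backbone.fc = nn.Linear(backbone.fc.in_features, 5)

          dummy_scan = torch.randn(1, 1, 224, 224)            # one fake single-channel image
          yearly_risk = torch.sigmoid(backbone(dummy_scan))   # shape: (1, 5)
          print(yearly_risk)
          ```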

          > or a pattern recognition model.

          Much better term, IMO, especially since it uses a convolutional network. But the article is a news publication, not a serious academic paper; the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is), and if they’d used a more precise term we wouldn’t be here talking about it.

        • 0laura@lemmy.world · 2 months ago

          It’s a good term; it refers to lots of things. There are many terms like that.

  • ALoafOfBread@lemmy.ml · 2 months ago

    Now make mammograms not cost $500, not have a 6-month waiting time, and be available to women under 40. Then this’ll be a useful breakthrough.

      • ALoafOfBread@lemmy.ml · 2 months ago

        Oh, for sure. I only meant in the US, where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries.

        • Instigate@aussie.zone · 2 months ago

          For reference, here in Australia my wife (in her 30s) has been asking to get mammograms for years now, and she keeps getting told she’s too young because she doesn’t have a family history. That issue is a bit pervasive in countries other than the US.

  • earmuff@lemmy.dbzer0.com · 2 months ago

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that could spot certain types of ships from satellite imagery alone; they weren’t easily detectable by eye, and in any case no human can scan 15k images in an hour. It’s a similar use case with medical imagery: seeing the things that aren’t yet detectable by human eyes.

  • bluefishcanteen@sh.itjust.works · 2 months ago

    This is a great use of tech. With that said, I find that the lines are blurred between “AI” and machine learning.

    Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • AdrianTheFrog@lemmy.world · 2 months ago

      I’ve been looking at the paper; some things about it (a rough sketch follows the list):

      • the paper and article are from 2021
      • the model needs to be able to use optional data from age, family history, etc, but not be reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year in the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods
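      A toy sketch of how those requirements might fit together (the names, sizes, and fusion method are my assumptions, not the paper's architecture):

      ```python
      import torch
      import torch.nn as nn

      # Toy model: pool per-view image features, optionally fuse clinical data
      # (age, family history, ...), and emit one risk logit per year for 5 years.
      class ToyRiskModel(nn.Module):
          def __init__(self, img_feat_dim=512, clinical_dim=8, years=5):
              super().__init__()
              self.clinical_encoder = nn.Linear(clinical_dim, img_feat_dim)
              self.head = nn.Linear(img_feat_dim, years)

          def forward(self, view_features, clinical=None):
              # view_features: (batch, n_views, img_feat_dim) from any image backbone
              fused = view_features.mean(dim=1)        # combine multiple views
              if clinical is not None:                 # clinical data is optional
                  fused = fused + self.clinical_encoder(clinical)
              return self.head(fused)                  # (batch, 5) yearly risk logits

      model = ToyRiskModel()
      feats = torch.randn(2, 4, 512)                   # 2 patients, 4 views each
      print(model(feats).shape)                        # works without clinical data
      print(model(feats, clinical=torch.randn(2, 8)).shape)
      ```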
    • Lets_Eat_Grandma@lemm.ee · 2 months ago

      Everything machine learning will be called “AI” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

    • pete_the_cat@lemmy.world · 2 months ago

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot sexier and more futuristic.

    • Flyberius [comrade/them]@hexbear.net · 2 months ago

      Honestly, this is a pretty good use case for machine learning, and I’ve seen similar models used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.

    • ilinamorato@lemmy.world · 2 months ago

      It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.

    • blackbirdbiryani@lemmy.world · 2 months ago

      Honestly, they should go back to calling useful applications ML (which is what this is), since AI is getting such a bad rap.

      • 0laura@lemmy.dbzer0.com · 1 day ago

        Machine learning is a type of AI. Sci-fi movies just misused the term, and now the startups are riding the hype train. AGI ≠ AI. There’s lots of stuff to complain about with AI these days, like Stable Diffusion image generation and LLMs, but the fact that they are AI is simply true.

        • blackbirdbiryani@lemmy.world · 14 hours ago

          I mean, it’s an entirely arbitrary distinction. For a very long time before ChatGPT, AI meant something like AGI. We didn’t call classification models “intelligent” because they didn’t have any human-like characteristics. It’s as silly as saying a regression model is AI. They aren’t intelligent things.

  • cecinestpasunbot@lemmy.ml · 2 months ago

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    • Vigge93@lemmy.world · 2 months ago

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • Maven (famous)@lemmy.zip · 2 months ago

      Another big thing to note: we recently had a different but VERY similar headline about a model that found typhoid early and flagged it more accurately than doctors could.

      But when they examined the AI to see what it was doing, it turned out it was weighing the specs of the machine used to do the scan: an older machine meant the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.

    • ColeSloth@discuss.tchncs.de · 2 months ago

      Not at all, in this case.

      A false positive of even 50% can mean telling the patient that they’re at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years.

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive on a 5-year prediction would still only be telling something like 15% of women to be screened more often.
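      A back-of-the-envelope version of that arithmetic (assuming “50% false positive” means one false flag for every two correct flags, which appears to be the reading behind the ~15% figure):

      ```python
      # Rough numbers from the comment above; the interpretation of
      # "50% false positive" is an assumption, not a precise definition.
      yearly_risk = 0.02                      # ~2% chance per year in the highest-risk years
      horizon = 5                             # the model predicts over a 5-year window
      true_positives = yearly_risk * horizon  # ~10% of screened women
      false_positives = 0.5 * true_positives  # half as many again flagged wrongly
      flagged = true_positives + false_positives
      print(f"{flagged:.0%} of women told to screen more often")  # ~15%
      ```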

    • TonyOstrich@lemmy.world · 2 months ago

      This seems exactly like what I would have referred to as AI before the pandemic: specifically, deep-learning image processing. In terms of something you can buy off the shelf, this is theoretically something the Cognex Vidi Red Tool could be used for. My experience with it is in packaging, but the base concept is the same.

      Training a model requires loading images into the software and having a human mark them up before a very powerful CUDA GPU processes all of that. Once the model has been trained, it can usually be run on a comparatively modest PC.
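      A generic illustration of that workflow (not the Cognex tool, just the usual shape of it): human-labeled images go through training on a GPU if one is available, and the finished model then runs inference on an ordinary CPU-only machine.

      ```python
      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader, TensorDataset

      device = "cuda" if torch.cuda.is_available() else "cpu"

      # Stand-in for human-labeled images: 64 fake 1x64x64 scans with binary labels.
      images = torch.randn(64, 1, 64, 64)
      labels = torch.randint(0, 2, (64,))
      loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

      # A deliberately tiny classifier so inference stays cheap after training.
      model = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          nn.Linear(8, 2),
      ).to(device)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(3):                  # training: the heavy part, GPU if present
          for x, y in loader:
              optimizer.zero_grad()
              loss = loss_fn(model(x.to(device)), y.to(device))
              loss.backward()
              optimizer.step()

      model.to("cpu").eval()                  # inference: fine on a modest PC
      with torch.no_grad():
          print(model(torch.randn(1, 1, 64, 64)).softmax(dim=1))
      ```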

    • Captain Aggravated@sh.itjust.works · 2 months ago

      It’s probably more “AI” than the LLMs we’ve been plagued with. This sounds more like an application of machine learning, which is a hell of a lot more promising.

      • reddithalation@sopuli.xyz · 2 months ago

        AI and machine learning are very similar (if not identical) things; it’s just that one has been turned into a marketing hype word a whole lot more than the other.

        • Captain Aggravated@sh.itjust.works · 2 months ago

          Machine learning is one of the many things that is referred to by “AI”, yes.

          My thought is the term “AI” has been overused to uselessness, from the nested if statements that decide how video game enemies move to various kinds of machine learning to large language models.

          So I’m personally going to avoid the term.

          • 0laura@lemmy.dbzer0.com · 1 day ago

            AI == a computer thingy that looks kinda “smart” to people who don’t understand it. It’s like rectangles and squares: you should use the more precise word (CNN, LLM, Stable Diffusion) when applicable, just like with rectangles and squares.

    • 0laura@lemmy.dbzer0.com · 1 day ago

      It’s AI, it all is. The code that controls where the creepers in Minecraft go? AI. The tiny little neural network that can detect numbers? Also AI! Is it AGI? No, but it’s still AI. It’s not that modern tech is stealing the term AI; sci-fi movies are the ones that started misusing it, and cash-grab startups are riding the hype train.

      • stormeuh@lemmy.world · 2 months ago

        And long before that, it was rule-based systems, which were basically databases plus fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer-science thing that looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.

  • yesman@lemmy.world · 2 months ago

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how it works. That way we might discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstances should we accept a “black box” explanation.
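    To make that concrete, one of the simplest “look inside the box” techniques is a gradient saliency map: which input pixels most influenced the output? The sketch below uses a stand-in ResNet18 and a fake image purely for illustration; it isn’t what the paper’s authors did.

    ```python
    import torch
    from torchvision import models

    # Stand-in model and fake input, for illustration only.
    model = models.resnet18(weights=None).eval()
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    score = model(image)[0].max()      # score of the top-scoring class
    score.backward()                   # gradient of that score w.r.t. the pixels

    # Large gradient magnitude = pixels that most influenced the decision.
    saliency = image.grad.abs().max(dim=1).values   # shape: (1, 224, 224)
    print(saliency.shape, saliency.mean())
    ```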

    • MystikIncarnate@lemmy.ca · 2 months ago

      IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts needed to even start explaining how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

  • humbletightband@lemmy.dbzer0.com · 2 months ago

    Haha, I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to a gazillion bytes per nanosecond, and it turned out to be fake.

    Now this thing is all over the internet and everyone believes it.

    • Redex@lemmy.world · 2 months ago

      Well, one reason is that this is basically exactly the thing current AI is perfect for: detecting patterns.