DuckDuckGo, Bing, Mojeek, and other search engines are not returning full Reddit results any more.

  • reddig33@lemmy.world · 3 months ago

    I’m not understanding what stops a search engine from scraping a publicly accessible website?

    • Eril@feddit.org · 3 months ago

      robots.txt, I guess? Yes, you can just ignore it, but you shouldn’t if you’re developing a responsible web scraper.

      • hotpot8toe@lemmy.world (OP) · 3 months ago

        Also, rate limiting. A publicly accessible website doesn’t mean it will allow scrapers to read millions of pages each week. Sites can easily identify and block scrapers by their activity patterns. I don’t know whether Reddit has rate limiting, but I wouldn’t be surprised if it does.
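(The kind of rate limiting described above is often implemented as a token bucket. This is a minimal illustrative sketch, not Reddit’s actual mechanism; the class and parameter names are made up for the example.)

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows roughly `rate`
    requests per second, with bursts up to `capacity`.
    Illustrative sketch only, not any site's real implementation."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a server would typically answer HTTP 429 here
```

A server keeping one bucket per client IP (or per API key) can let normal readers through while a scraper hammering millions of pages quickly exhausts its bucket and gets refused.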

      • reddig33@lemmy.world · 3 months ago

        It doesn’t seem legal that a robots.txt could pick and choose who scrapes. Legally, it seems like it would have to be all or nothing. Here’s hoping one of the search engines ignores it and turns it into a legal case.
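(For context: whatever the legal question, the robots.txt format itself does support per-crawler rules via `User-agent` groups. A hypothetical file admitting only Google’s crawler while refusing everyone else would look like this; note that `Allow` is a widely supported extension honored by major crawlers rather than part of the original 1994 convention.)

```
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
```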

        • capital@lemmy.world · 3 months ago

          You’d probably feel differently if it were your service. Should you be able to control who scrapes your sites or should that be all or nothing?

          For the record, I fucking hate what the internet is becoming. I naively believed that even if shit got cordoned off into the walled gardens that are mobile phone apps, the web would remain as open as it was. This is a terrible sign of things to come.

          • reddig33@lemmy.world · 3 months ago (edited)

            No, I wouldn’t feel differently. In fact, letting search engines scrape and point to your content is what leads people to your site. It’s free advertising. If you’re going to let one search engine in, you should let them all in. If you want to be public, be public. Otherwise, put up a login firewall and go private.

            • capital@lemmy.world · 3 months ago

              It’s not just search engines. Lots of people on Mastodon were using robots.txt to block ChatGPT (and any other LLM company they knew of) from scraping their sites/blogs.

              I disagree, to a point. I want to be able to control my services to the greatest extent possible, including picking who scrapes me.

              On the other hand, orgs as large as Google doing this poses a real threat to how the internet works right now, which I hate.

        • Eril@feddit.org · 3 months ago

          Actually, Reddit’s robots.txt currently contains this:

          User-agent: *
          Disallow: /
          

          Well, that actually is a blanket ban for everyone, so something else must be at play here.
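(The effect of those two quoted lines can be checked with Python’s standard-library robots.txt parser; the URLs below are just examples.)

```python
from urllib.robotparser import RobotFileParser

# Parse the rules quoted above directly, instead of fetching them live.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A blanket "Disallow: /" under "User-agent: *" denies every
# crawler access to every path on the site.
print(rp.can_fetch("DuckDuckBot", "https://www.reddit.com/r/programming/"))  # False
print(rp.can_fetch("Bingbot", "https://www.reddit.com/"))                    # False
```

So per the file, no crawler is permitted anywhere, which supports the point that Reddit’s search-engine deals must be enforced by something other than robots.txt alone (e.g. agreements or server-side allowlisting).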