• 2 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 18th, 2023



  • I don’t know how the karma thresholds work behind the scenes, but might I suggest the bot use a “top” sort with a time window instead? That way it would only repost the top content from the past 6 hours. This would also help surface more quality content and avoid reposting low-effort/low-quality posts.

    This is effectively already how it works. For each subreddit, the bot periodically (anywhere from every 30 minutes to every 12 hours, depending on subscriber count and posts per day) requests the “hot” content feed. It then checks whether each post has at least 20 upvotes and an 80% upvote-to-downvote ratio. Those numbers are configurable, but that’s what they’re currently set to. I believe they strike a good balance between filtering out the complete garbage and making sure it doesn’t miss good content.
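    Roughly, the filter boils down to something like this (a minimal sketch in Python; the function and field names are illustrative, not the bot’s actual code):

    ```python
    # Quality filter described above. MIN_UPVOTES and MIN_UPVOTE_RATIO are the
    # configurable thresholds mentioned (currently 20 and 80%); the field names
    # ("ups", "upvote_ratio") are assumptions for illustration.
    MIN_UPVOTES = 20
    MIN_UPVOTE_RATIO = 0.80

    def should_mirror(post: dict) -> bool:
        """Return True if a post from the 'hot' feed passes the quality filter."""
        return (
            post.get("ups", 0) >= MIN_UPVOTES
            and post.get("upvote_ratio", 0.0) >= MIN_UPVOTE_RATIO
        )
    ```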






  • Interesting idea! I have some thoughts if you’re open to feedback:

    Always!

    Have you considered moderation? These mirrored communities on lemmit.online will still be getting comments from all over the federated network, and if you’re the only user and sole moderator of every community, then it might get quite overwhelming!

    I have, and I hope it won’t be a problem ;) I’m a software engineer, as mentioned above, and have little interest in managing people outside of work :P If anyone wants to become a moderator, they’re free to request it.

    A small VPS might not be able to handle that

    We’ll see how well it does. I don’t mind spending a little money on this (a few dozen €/$ per month) if it takes off. In the end, though, it’s meant more as a kickstart for Lemmy content than anything else.

    How are you planning to deal with API limits from Reddit?

    HA! By not using the API. For starters, because someone-who-isn’t-me would like to browse NSFW content. I do a bit of client-side throttling between requests, which I hope will keep me under the radar, but it’s mostly RSS for the subreddit overview and scraping for the individual posts.
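    For the curious, that approach looks roughly like this (a minimal sketch in Python; the URL pattern, user agent, and delay are assumptions, not the bot’s actual configuration):

    ```python
    # Sketch of the RSS-overview + throttled-scraping approach described above.
    import time

    import feedparser  # parses the subreddit RSS feed
    import requests

    USER_AGENT = "lemmit-mirror-sketch/0.1"  # illustrative, not the real user agent
    REQUEST_DELAY_S = 5  # client-side throttling between requests

    def fetch_subreddit_overview(subreddit: str):
        """Pull the subreddit's RSS feed to get the list of recent posts."""
        url = f"https://www.reddit.com/r/{subreddit}/.rss"
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
        resp.raise_for_status()
        return feedparser.parse(resp.text).entries

    def fetch_post_page(post_url: str) -> str:
        """Scrape an individual post page, waiting between requests."""
        time.sleep(REQUEST_DELAY_S)
        resp = requests.get(post_url, headers={"User-Agent": USER_AGENT}, timeout=30)
        resp.raise_for_status()
        return resp.text
    ```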

    In the end… we’ll just have to see how it goes.