Hey all,

I’m the author of lemmyverse.net and I’ve recently been working on a new moderation tool called Lemmy Modder. https://modder.lemmyverse.net/

Currently, it supports user registration/approvals and content report management. I offer it either as a hosted app (currently only compatible with Lemmy 0.18 instances) or as a package that you can run alongside your Lemmy instance (using Docker Compose).

Feel free to give it a go and send any feedback my way :) https://github.com/tgxn/lemmy-modder

Edit for a note: This tool does not save, proxy or store any of your user credentials or data with me; everything is only ever stored locally in your browser. I also don’t use any website tracking tools. 👍

  • Skull giver@popplesburger.hilciferous.nl · 1 year ago

    Looks good! Suggestion, though: maybe don’t show media embeds by default, considering some trolls have uploaded CSAM to Lemmy in the past? You never know what content you end up moderating.

    • Very true, though as a moderator, I expect you’d need to validate that the report isn’t flagging something that’s totally fine.

      I’m not sure how to deal with this. Perhaps I could read your blur_nsfw user setting and apply that as the default? Or perhaps a toggle setting that turns media embeds on/off completely, defaulting based on the same user setting?

      • Skull giver@popplesburger.hilciferous.nl · 1 year ago

        My approach would be to check the report message for certain keywords (“porn”, “CSAM”, “child” in various languages, or make it a textbox where mods can enter words themselves) and hide the image by default when the message contains any of those, with a click-to-expand feature to actually verify reports.
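        The keyword check described above could look something like this hypothetical TypeScript sketch (the function name and default word list are assumptions for illustration, not Lemmy Modder’s actual code):

        ```typescript
        // Hide a media preview when the report message contains a flagged word.
        // DEFAULT_FLAGGED_WORDS is a made-up starting list; mods could extend it
        // via the suggested textbox setting.
        const DEFAULT_FLAGGED_WORDS = ["porn", "csam", "child"];

        function shouldHidePreview(
          reportMessage: string,
          flaggedWords: string[] = DEFAULT_FLAGGED_WORDS,
        ): boolean {
          // Case-insensitive substring match against every flagged word.
          const message = reportMessage.toLowerCase();
          return flaggedWords.some((word) => message.includes(word.toLowerCase()));
        }
        ```

        A simple substring match like this would also catch the keywords embedded in longer words, which is probably the safer direction to err in for this use case.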

        I don’t think relying on the NSFW flag is very useful against abuse, though it could be a good feature for moderators of instances that have NSFW communities.

        You could also control the behaviour with a setting, just in case.

        • Yeah, I’ve experienced this. It’d be pretty easy to add a report-contains-word filter for the images, but content whose report doesn’t match the filter would still be an issue.

          “or make it a textbox where mods can enter words themselves”

          I quite like this approach, maybe:

          • Setting “Display content preview for NSFW posts”
          • Setting “Hide content preview with reports containing …”
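          A rough sketch of how those two settings could combine when deciding whether to show a preview (all names here are hypothetical, based only on this thread, not the actual implementation):

          ```typescript
          // Hypothetical settings shape mirroring the two proposed options.
          interface PreviewSettings {
            showNsfwPreviews: boolean; // "Display content preview for NSFW posts"
            hideOnKeywords: string[];  // "Hide content preview with reports containing ..."
          }

          function previewVisibleByDefault(
            post: { nsfw: boolean },
            reportMessage: string,
            settings: PreviewSettings,
          ): boolean {
            // NSFW posts stay hidden unless the moderator opted in.
            if (post.nsfw && !settings.showNsfwPreviews) return false;
            // Reports matching any configured keyword are hidden regardless.
            const message = reportMessage.toLowerCase();
            return !settings.hideOnKeywords.some((w) => message.includes(w.toLowerCase()));
          }
          ```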

          I’ve created an issue for this, https://github.com/tgxn/lemmy-modder/issues/63, in case you want to add any additional information or track my progress. :D

          I also want to add an in-app popup (with the image/website) when you click the content directly, instead of navigating to the actual content in a new tab, so this could be combined with the above to only show content on click (as a configuration option, defaulting to show only on “expand”).

      • wiki_me@lemmy.ml · 1 year ago

        “I’m not sure how to deal with this, perhaps I could read your user settings for blur_nsfw and apply that as the default? Or, perhaps a toggle setting that just turns media embeds on/off completely that defaults based on the same user-setting?”

        Yeah, default off seems like the best option. If somebody reports something, I would just hide it, since it’s not worth the risk, especially if there are multiple reports. I don’t know if Lemmy has an “appeal” system, but maybe that could help too.

        Showing a user’s total karma and the number of downvotes they’ve received could also be a decent indicator.