I am an anarcho-communist, cat-loving dude with a very eclectic range of interests and passions. Currently, I’m into networks of all kinds and open source software.

  • 1 Post
  • 56 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • It’s a Proxmox server that’s well under-subscribed and under-utilized. I currently use it as a remote backup target for my brother’s business computer and the family’s various machines; it has one Arch Linux VM for that purpose. Another Arch Linux VM runs two Docker containers for Mastodon and Lemmy.

    I want to do more with it but right now it’s time for me to buckle down and actually take some steps toward bettering my career because I am sick of being a senior Windows desktop support technician. I really want to do Linux/BSD systems administration or DevOps stuff. I hate feeling like I have to learn under the gun but, at this point, thinking about work on Monday is making me physically ill. The only relief will be knowing that there is an end to this tunnel.

  • You do need to piece those few together yourself to end up with one cohesive, working instance. I can share the docker-compose.yml file that worked for me, if that will help:

    version: '3'
    services:
      db:
        restart: always
        image: postgres:14-alpine
        shm_size: 256mb
        networks:
          - internal_network
        healthcheck:
          test: ['CMD', 'pg_isready', '-U', 'postgres']
        volumes:
          - ./postgres14:/var/lib/postgresql/data
        environment:
          - 'POSTGRES_HOST_AUTH_METHOD=trust'
    
      redis:
        restart: always
        image: redis:7-alpine
        networks:
          - internal_network
        healthcheck:
          test: ['CMD', 'redis-cli', 'ping']
        volumes:
          - ./redis:/data
    
      # es:
      #   restart: always
      #   image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
      #   environment:
      #     - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
      #     - "xpack.license.self_generated.type=basic"
      #     - "xpack.security.enabled=false"
      #     - "xpack.watcher.enabled=false"
      #     - "xpack.graph.enabled=false"
      #     - "xpack.ml.enabled=false"
      #     - "bootstrap.memory_lock=true"
      #     - "cluster.name=es-mastodon"
      #     - "discovery.type=single-node"
      #     - "thread_pool.write.queue_size=1000"
      #   networks:
      #      - external_network
      #      - internal_network
      #   healthcheck:
      #      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      #   volumes:
      #      - ./elasticsearch:/usr/share/elasticsearch/data
      #   ulimits:
      #     memlock:
      #       soft: -1
      #       hard: -1
      #     nofile:
      #       soft: 65536
      #       hard: 65536
      #   ports:
      #     - '127.0.0.1:9200:9200'
    
      web:
        #build: .
        #image: ghcr.io/mastodon/mastodon
        image: tootsuite/mastodon:latest
        restart: always
        env_file: .env.production
        command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
        networks:
          - external_network
          - internal_network
        healthcheck:
          # prettier-ignore
          test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
        ports:
          - '127.0.0.1:3000:3000'
        depends_on:
          - db
          - redis
          # - es
        volumes:
          - ./public/system:/mastodon/public/system
    
      streaming:
        #build: .
        #image: ghcr.io/mastodon/mastodon
        image: tootsuite/mastodon:latest
        restart: always
        env_file: .env.production
        command: node ./streaming
        networks:
          - external_network
          - internal_network
        healthcheck:
          # prettier-ignore
          test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
        ports:
          - '127.0.0.1:4000:4000'
        depends_on:
          - db
          - redis
    
      sidekiq:
        #build: .
        #image: ghcr.io/mastodon/mastodon
        image: tootsuite/mastodon:latest
        restart: always
        env_file: .env.production
        command: bundle exec sidekiq
        depends_on:
          - db
          - redis
        networks:
          - external_network
          - internal_network
        volumes:
          - ./public/system:/mastodon/public/system
        healthcheck:
          test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]
    
      ## Uncomment to enable federation with tor instances along with adding the following ENV variables
      ## http_proxy=http://privoxy:8118
      ## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
      # tor:
      #   image: sirboops/tor
      #   networks:
      #      - external_network
      #      - internal_network
      #
      # privoxy:
      #   image: sirboops/privoxy
      #   volumes:
      #     - ./priv-config:/opt/config
      #   networks:
      #     - external_network
      #     - internal_network
    
    networks:
      external_network:
      internal_network:
        internal: true
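
    Drop that in a directory alongside your .env.production and, assuming a reasonably recent Docker with the compose plugin (older installs use the separate docker-compose binary), bringing the stack up and watching the web container start is just:

    docker compose up -d
    docker compose logs -f web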
    

  • NGINX Proxy Manager makes things even easier! All you have to do is make sure websockets are enabled on the proxy host that points at your Mastodon instance, and don’t forward via SSL, because NPM is your SSL termination point. In your Mastodon instance’s NGINX configuration, change the server block to listen on port 80, comment out all of the SSL-related options, and in the @proxy section change proxy_set_header X-Forwarded-Proto $scheme; to proxy_set_header X-Forwarded-Proto https;. This is just telling Mastodon a small lie so it thinks the traffic is encrypted, which is necessary to prevent a redirect loop that would break things. The edits look roughly like the sketch below.
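
    This isn’t a drop-in file, just a sketch of the changes against the stock Mastodon NGINX config; the domain and certificate paths are placeholders for your own:

    server {
        listen 80;
        listen [::]:80;
        server_name mastodon.example.com;              # your instance's domain

        # listen 443 ssl http2;                        # TLS now terminates at NPM, so the SSL options get commented out
        # ssl_certificate     /etc/letsencrypt/live/mastodon.example.com/fullchain.pem;
        # ssl_certificate_key /etc/letsencrypt/live/mastodon.example.com/privkey.pem;

        location @proxy {
            # ...the rest of the stock @proxy block stays as-is...
            # proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Proto https;  # the small lie that avoids the redirect loop
        }
    }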

  • Sure! Let me know how it goes. If you need to do something more complex for internal DNS records than just A records, look at the unbound.conf man page for stub zones. If you need something even more flexible than stub zones, you can use Unbound as a full authoritative DNS server with auth-zones. As far as I know, auth-zones can even do AXFR-style zone transfers, which is cool!
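
    For reference, a stub zone is only a few lines. This is just a sketch; internal.example.com and 192.168.1.53 are stand-ins for your zone name and your internal authoritative server:

    server:
            # if DNSSEC validation is enabled, don't expect signatures from the private zone
            domain-insecure: "internal.example.com."

    stub-zone:
            name: "internal.example.com."
            stub-addr: 192.168.1.53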



  • Here is a sample configuration that should work for you:

    server:
            interface: 127.0.0.1
            interface: 192.168.1.1
            do-udp: yes
            do-tcp: yes
            do-not-query-localhost: no
            verbosity: 1
            log-queries: yes
    
            # refuse the world by default, then explicitly allow the ranges that get the 'example' view
            access-control: 0.0.0.0/0 refuse
            access-control: 127.0.0.0/8 allow
            access-control: 192.168.1.0/24 allow
            access-control-view: 127.0.0.0/8 example
            access-control-view: 192.168.1.0/24 example
    
            hide-identity: yes
            hide-version: yes
            tcp-upstream: yes
    
    remote-control:
            control-enable: yes
            control-interface: /var/run/unbound.sock
    
    view:
            # answers served to clients that match the access-control-view ranges above
            name: "example"
            local-zone: "example.com." inform
            local-data: "example.com. IN A 192.168.1.2"
            local-data: "www.example.com. IN CNAME example.com."
            local-data: "another.example.com. IN A 192.168.1.3"
    
    forward-zone:
            name: "."
            forward-addr: 8.8.8.8
            forward-addr: 8.8.4.4
    

    What makes the split-brain DNS work is that, when the resolution request comes from localhost or from inside your network, Unbound first checks the view section for any pertinent local data. So if you query, say, example.com from your home network, it returns your internal IP address, which in this case is 192.168.1.2.
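
    You can check it with dig (or drill) pointed at the Unbound box, assuming it is listening on 192.168.1.1 as in the sample above:

    dig @192.168.1.1 example.com +short
    192.168.1.2

    Anything without a local-data entry just falls through to the forward-zone and resolves normally.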