I am an anarcho-communist, cat-loving dude with a very eclectic range of interests and passions. Currently, I’m into networks of all kinds and open source software.
Yes, I did get the processor off of eBay for 75 bucks and the motherboard for 150. That I figured was a score and it shipped from Taiwan so it didn’t quite take forever to get here. The drives, case, RAM, and low-end GPU I got from the flea markets.
It’s a Proxmox server that’s well undersubscribed and underutilized. I currently use it as a remote backup for my brother’s business computer and the family’s various machines. It has one Arch Linux VM for that purpose. Another Arch Linux VM runs two Docker containers for Mastodon and Lemmy.
I want to do more with it, but right now it’s time for me to buckle down and actually take some steps toward bettering my career, because I am sick of being a senior Windows desktop support technician. I really want to do Linux/BSD systems administration or DevOps work. I hate feeling like I have to learn under the gun, but at this point, thinking about work on Monday is making me physically ill. The only relief will be knowing that there is an end to this tunnel.
A lot of the components I actually bought at computer swap meets/flea markets. Some vendors had corporate cast-off hard drives and all kinds of good deals.
Nice! I built my server with a brand new tower case and a bunch of second-hand components. It has a 2016-era Xeon E5-2690 with 128GB of ECC RAM and 12TB of storage in a RAIDZ1 config. I built it for less than 500 bucks.
Hey that’s a pretty good setup! It’s a lot better than mine! LOL.
You’ve got Pixelfed, which is similar to Instagram.
Nextcloud works well for general files. Immich is the way to go specifically for photo storage; it’s got a whole lot of added features.
If your requirements don’t include federation, then look into Nextcloud.
Maybe tell me where you’re stuck and I can help?
You need to piece those few together into one cohesive, working instance. I can share the docker-compose.yml file that worked for me, if that will help.
version: '3'
services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

  # es:
  #   restart: always
  #   image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
  #   environment:
  #     - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
  #     - "xpack.license.self_generated.type=basic"
  #     - "xpack.security.enabled=false"
  #     - "xpack.watcher.enabled=false"
  #     - "xpack.graph.enabled=false"
  #     - "xpack.ml.enabled=false"
  #     - "bootstrap.memory_lock=true"
  #     - "cluster.name=es-mastodon"
  #     - "discovery.type=single-node"
  #     - "thread_pool.write.queue_size=1000"
  #   networks:
  #     - external_network
  #     - internal_network
  #   healthcheck:
  #     test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
  #   volumes:
  #     - ./elasticsearch:/usr/share/elasticsearch/data
  #   ulimits:
  #     memlock:
  #       soft: -1
  #       hard: -1
  #     nofile:
  #       soft: 65536
  #       hard: 65536
  #   ports:
  #     - '127.0.0.1:9200:9200'

  web:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    ports:
      - '127.0.0.1:3000:3000'
    depends_on:
      - db
      - redis
      # - es
    volumes:
      - ./public/system:/mastodon/public/system

  streaming:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    ports:
      - '127.0.0.1:4000:4000'
    depends_on:
      - db
      - redis

  sidekiq:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./public/system:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

  ## Uncomment to enable federation with tor instances along with adding the following ENV variables
  ## http_proxy=http://privoxy:8118
  ## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
  # tor:
  #   image: sirboops/tor
  #   networks:
  #     - external_network
  #     - internal_network
  #
  # privoxy:
  #   image: sirboops/privoxy
  #   volumes:
  #     - ./priv-config:/opt/config
  #   networks:
  #     - external_network
  #     - internal_network

networks:
  external_network:
  internal_network:
    internal: true
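With that file in place (and a filled-out .env.production alongside it), first-time bring-up is roughly the following. This is a sketch from memory, so double-check against the current Mastodon docs for your version:

docker-compose run --rm web bundle exec rake mastodon:setup
docker-compose up -d
docker-compose ps

The mastodon:setup task walks you through generating secrets and initializing the database, and docker-compose ps is just to confirm the health checks come up.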
NGINX Proxy Manager makes things even easier! All you have to do is make certain that websockets are enabled in the proxy settings pointing at your Mastodon instance, and don’t forward via SSL, because NPM is your SSL termination point. In your Mastodon instance’s NGINX configuration, change the server to listen on port 80, comment out all of the SSL-related options, and in the @proxy section change proxy_set_header X-Forwarded-Proto $scheme; to proxy_set_header X-Forwarded-Proto https;
This is just telling Mastodon a small lie so it thinks the traffic is encrypted. This is necessary to prevent a redirection loop, which will break things.
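For reference, here is roughly what the relevant pieces of the Mastodon nginx config look like after those edits. The server_name is a placeholder and your real file has a lot more in it; this is just a sketch of the changes:

server {
    listen 80;
    listen [::]:80;
    server_name mastodon.example.com;

    # All ssl_* options and the 443 listen directives commented out;
    # NPM terminates TLS in front of this server
    # listen 443 ssl http2;

    location @proxy {
        proxy_set_header Host $host;
        # Hard-coded to https so Mastodon believes the client connection
        # is encrypted and doesn't trigger a redirect loop
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:3000;
    }
}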
It’s actually not hard to get Mastodon running behind an existing reverse proxy. It’s also not hard to run it in a Docker container. I run mine in a Docker container with no issues. When version 4.1.4 was released, I just ran a docker-compose pull, and voila, my instance was upgraded. I can share my configs with you if you want. What is your existing reverse proxy server?
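For what it’s worth, the whole upgrade amounted to something like this. The db:migrate step is only needed when a release ships database migrations, so check the release notes first:

docker-compose pull
docker-compose run --rm web bundle exec rails db:migrate
docker-compose up -d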
I’m glad you pointed that out. I frequently have to catch myself accidentally exchanging the word quick for easy. They’re not interchangeable, and in the wrong context, the mix-up can be insulting.
That’s pretty awesome that you want to go down this route and you’ll certainly benefit from the experience. Are you actually building out your lab as training for your career?
Sure! Let me know how it goes. If you need to do something more complex for internal DNS records than just A records, then look at the unbound.conf man page for stub zones. If you need something even more flexible than stub zones, you can use Unbound as a full authoritative DNS server with auth-zones. As far as I know, auth-zones can even do AXFR-style zone transfers, which is cool!
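As a rough sketch (the zone name, address, and file path here are made up for illustration): a stub zone delegates an internal zone to another DNS server, while an auth-zone has Unbound serve the zone itself from a zone file:

stub-zone:
    name: "home.lan."
    stub-addr: 192.168.1.53

auth-zone:
    name: "home.lan."
    zonefile: "/etc/unbound/home.lan.zone"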
You’ve got the right community IMHO. This is something that I’ve never tackled, but I could imagine that it would work. Just make certain you have solid backups in case the worst should happen.
Here is a sample configuration that should work for you:
server:
    interface: 127.0.0.1
    interface: 192.168.1.1
    do-udp: yes
    do-tcp: yes
    do-not-query-localhost: no
    verbosity: 1
    log-queries: yes
    access-control: 0.0.0.0/0 refuse
    access-control-view: 127.0.0.0/8 example
    access-control-view: 192.168.1.0/24 example
    hide-identity: yes
    hide-version: yes
    tcp-upstream: yes

remote-control:
    control-enable: yes
    control-interface: /var/run/unbound.sock

view:
    name: "example"
    local-zone: "example.com." inform
    local-data: "example.com. IN A 192.168.1.2"
    local-data: "www.example.com. IN CNAME example.com."
    local-data: "another.example.com. IN A 192.168.1.3"

forward-zone:
    name: "."
    forward-addr: 8.8.8.8
    forward-addr: 8.8.4.4
What makes the split-brain DNS work is that if the request for resolution comes from localhost or from inside your network, it first goes to the view section to see if there is any pertinent local data. So if you do a query from your home network on, say, example.com, it will return your internal IP address, which in this case is 192.168.1.2
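You can sanity-check it from a LAN client with something like this (assuming the resolver is at 192.168.1.1, as in the sample config):

dig @192.168.1.1 example.com +short

That should come back with 192.168.1.2, while the same query from an address outside the allowed ranges gets refused by the access-control rule.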
Instead of pfSense, I would really recommend OPNsense; it was originally a fork of pfSense but now stands on its own. I like the fact that OPNsense tracks closer to the current FreeBSD release than pfSense does.
I did this myself for all of 150 dollars. I bought an OptiPlex 7050 off of Amazon and added a dual-port Intel network card. From there, I installed OPNsense. I have a DMZ, a WAN, and a LAN interface.
Thanks, mate! I’ll take all I can get.