I have many services running on my server and about half of them use postgres. When I installed them manually I would always create a new database and reuse the same postgres instance for each service, which seems quite logical to me. The least amount of overhead, fast boot, etc.
But since I started to use docker, most of the docker-compose files come with their own instance of postgres. Until now I just let them do it and was running a couple of instances of postgres. But it's getting kind of ridiculous how many postgres instances I run on one server.
Do you guys run several dockerized instances of postgres, or do you rewrite the docker-compose files to point them at your one central postgres instance? And are there usually any problems with that, like version incompatibilities, etc.?
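For what it's worth, rewriting a compose file for the shared-instance setup is usually just deleting the bundled `db:` service and pointing the app at the host. A rough sketch, where the service name, env var names, and credentials are all made-up placeholders (every app expects different variables):

```yaml
# Hypothetical compose file for a service ("someapp") rewritten to use the
# host's shared Postgres instead of its own bundled postgres container.
services:
  someapp:
    image: someapp:latest
    environment:
      DB_HOST: host.docker.internal   # the shared Postgres on the host
      DB_PORT: "5432"
      DB_NAME: someapp                # one database per service, as before
      DB_USER: someapp
      DB_PASSWORD: changeme
    extra_hosts:
      - "host.docker.internal:host-gateway"  # needed on Linux to reach the host
  # the original "db:" postgres service block is simply deleted
```

You'd also need `listen_addresses` and `pg_hba.conf` on the host to accept connections from the Docker bridge network.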
If you don't care about downtime then feel free to consolidate all your apps onto one PostgreSQL instance. But if you want to do maintenance on your DB, it may impact all your other applications.
My database instances' only downtime is when the server itself is rebooting. I've never had a single outage in 20+ years besides that.
You’ve never had to run migrations that lock tables or rebuild an index in two decades?
Why would that have blocked all my databases at once? It would only affect the database I was migrating, not the others.
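That isolation is the whole point of the one-instance, many-databases layout. A sketch of the setup, with made-up names, showing why a blocking migration stays contained:

```sql
-- One shared instance, one database and owner role per service
-- (names and passwords here are placeholders).
CREATE ROLE someapp LOGIN PASSWORD 'changeme';
CREATE DATABASE someapp OWNER someapp;

CREATE ROLE otherapp LOGIN PASSWORD 'changeme';
CREATE DATABASE otherapp OWNER otherapp;

-- A blocking migration in "someapp", e.g. a plain REINDEX, takes locks only
-- on objects inside that database; "otherapp" keeps serving queries:
--   REINDEX INDEX someapp_big_idx;
-- And Postgres can avoid even that with CONCURRENTLY variants:
--   CREATE INDEX CONCURRENTLY idx_name ON some_table (some_column);
```

Instance-wide maintenance (major version upgrades, config changes needing a restart) is the case where everything goes down together.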
Yes, it would cause downtime for the one being migrated - right? Or does that not count as downtime?
It does count, indeed… But in that case the service itself is down while it's being migrated, so does the database also being down really count on top of that?
I mean, it’s a self hosted home service, not your bank ATM network…