A bit more context, since you might wonder why customers can cause Sev1s.

Well, I work for a database technology company, and we provide a managed service offering. This offering has SLAs that essentially enforce a five-minute response time for any “urgent” issue.

Well, a common urgent issue is that a customer suddenly wants to load in a bunch of new data without informing us, which causes the cluster to stop accepting writes.

It’s to the point where most, if not all, urgent pages result in some form of scaling of the cluster.

Since this is customer-driven behavior, there is no real ability to plan for it, and since these particular customers have special requirements (and thus less ability to automate scaling operations), I’m unsure whether there is any recourse here.

It’s to the point that it doesn’t even feel like an SRE team anymore; we should just be called “on-demand scaling agents,” since we’re constantly trying to scale ahead of our customers.

All in all, I’m starting to feel like this is a management/sales-level issue that I cannot possibly address. If we’re selling this managed service offering as essentially “magic” that can be scaled whenever customers need, then it seems like we’re being set up for failure at the organizational level. Not to mention that we aren’t being smart about the costs behind scaling and factoring them into these contracts.

So, fellow SREs, have you had to have this conversation with a larger org? What works for something like this? What doesn’t? Should I just seek greener pastures at this point?

P.S. - Posted c/Programming due to lack of a c/SRE

  • Balinares@pawb.social

    It’s difficult to answer without a better understanding of your customers’ workloads and how those trigger your outages. There’s a bunch of valid angles from which to look at this.

    If your product consistently buckles under customer workloads that they paid to be able to run, it sounds like you have either an underprovisioning or an overcommitment problem.

    If you accept customer workload spikes that you don’t have the resources to serve but would be able to process if they were more spread over time, it sounds like you have an admission control problem.

    If it’s a matter of adding resources to respond to customer activity spikes and you just have to do it manually, it sounds like you have an automation problem.
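
    For illustration, a minimal sketch of what that kind of automation could look like, assuming you can read a disk-usage metric and file a scale-out request programmatically (both hooks below are hypothetical stand-ins, not any real API):

    ```python
    # Hypothetical threshold-triggered scale-out check; the metric lookup and
    # provisioning call are placeholders for whatever your stack exposes.
    SCALE_OUT_THRESHOLD = 0.80  # act well before the stop-writes watermark

    def get_disk_used_fraction(cluster_id: str) -> float:
        """Placeholder: query your monitoring system for disk usage (0.0-1.0)."""
        raise NotImplementedError

    def request_scale_out(cluster_id: str, extra_nodes: int) -> None:
        """Placeholder: open a scale-out request via your provisioning API."""
        raise NotImplementedError

    def check_and_scale(cluster_id: str) -> None:
        if get_disk_used_fraction(cluster_id) >= SCALE_OUT_THRESHOLD:
            # Add capacity before the cluster hits its stop-writes protection.
            request_scale_out(cluster_id, extra_nodes=1)
    ```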

    If your pager load is becoming such that you can’t do project work to address whichever ones of the above are relevant to you, it’s time to hand the pager back to devs. If you don’t have the institutional authority to hand back the pager to devs, it sounds like you have a management problem.

    • th3raid0r@tucson.socialOP

      It is definitely an underprovisioning problem, but that underprovisioning is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn’t buckling. It is doing exactly the thing it was designed to do, which is to stop writes to the DB since there is no disk space left. And before that point, it’s constantly throwing warnings to the end user. Usually these customers ignore those warnings until they reach this stop-writes state.

      In fact, we just had to give an RCA to the c-suite detailing why we had not scaled a customer when we should have, but we have a paper trail of them refusing the pricing and refusing to engage.

      We get the same errors, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More frequently, though, they are adding data at such a fast clip that not responding for 2 hours would lead them directly into the stop-writes status.

      This has led us to guessing where our customers are going to end up, oftentimes being completely wrong and having to scale multiple times.
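
      For what it’s worth, the projection itself is simple trend math; a rough sketch (the samples and capacity figure below are illustrative, not real customer data):

      ```python
      # Extrapolate time-to-full from recent (timestamp, used_gb) samples
      # instead of guessing; all numbers here are made up for illustration.
      from datetime import datetime

      def hours_until_full(samples, capacity_gb):
          (t0, used0), (t1, used1) = samples[0], samples[-1]
          elapsed_h = (t1 - t0).total_seconds() / 3600
          growth_gb_per_h = (used1 - used0) / elapsed_h
          if growth_gb_per_h <= 0:
              return float("inf")  # flat or shrinking usage: no projected fill date
          return (capacity_gb - used1) / growth_gb_per_h

      samples = [
          (datetime(2023, 11, 1), 6200.0),  # used GB a day ago
          (datetime(2023, 11, 2), 6950.0),  # used GB now
      ]
      print(f"~{hours_until_full(samples, capacity_gb=8000):.0f}h until stop-writes")
      ```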

      Workload spikes are the entire reason our database technology exists. That’s the main thing we market ourselves as being able to handle (provided you gave the DB enough disk and the workload isn’t sustained long enough to fill the disks).

      There is definitely an automation problem. Unfortunately, this particular line of our managed services cannot be automated. We work with special customers with special requirements, usually Fortune 100 companies that have extensive change-control processes, custom security implementations, and sometimes even no access to their environment unless they flip a switch.

      To me it just seems to all go back to management/c-suite trying to sell a fantasy version of our product and setting us up for failure.

  • gravitas_deficiency@sh.itjust.works

    Definitely sounds like mostly a marketing/support/customer success engagement failure. Also sounds like someone in marketing wrote your SLAs, and that they need to be rewritten to something that’s based on actual data (specifically, the scale-up time of your cluster’s cloud provider + spin-up time for whatever you’re running + some margin).
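
    As a rough illustration of that arithmetic (every figure below is invented, not measured):

    ```python
    # Back-of-envelope for a data-driven response SLA: provider scale-up time
    # plus spin-up time for the workload plus some margin. Figures are made up.
    provider_scale_up_min = 20  # e.g. add nodes / resize volumes at the cloud provider
    workload_spin_up_min = 10   # e.g. DB process starts and rejoins the cluster
    margin_min = 15             # human response time and safety buffer

    realistic_sla_min = provider_scale_up_min + workload_spin_up_min + margin_min
    print(f"Realistic urgent-response SLA: ~{realistic_sla_min} minutes")  # ~45, not 5
    ```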

  • Snowplow8861@lemmus.org

    If it’s possible to do, and it causes a user-experience issue, especially one as jarring as “stop accepting writes,” you should start adding rate limits and validating inputs, with the limits surfaced to the user before they hit the error state.

    To me, you should already be sanitising input anyway, and this would just be part of that logic. If a user is trying to upload more than x, it warns (with a link to documentation of the limit). If the user has gone past the rate limit, then it errors.
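
    A minimal sketch of that warn-then-error logic (the thresholds and function below are illustrative, not any particular product’s API):

    ```python
    # Soft quota warns the user (with a pointer to the documented limit);
    # hard quota rejects the write before the cluster itself has to stop writes.
    SOFT_LIMIT_FRACTION = 0.85
    HARD_LIMIT_FRACTION = 0.98

    def admit_write(used_bytes: int, quota_bytes: int, write_bytes: int) -> str:
        projected = used_bytes + write_bytes
        if projected >= quota_bytes * HARD_LIMIT_FRACTION:
            return "reject"  # clear quota error instead of a cluster-wide stall
        if projected >= quota_bytes * SOFT_LIMIT_FRACTION:
            return "warn"    # accept, but surface the approaching limit to the user
        return "accept"

    print(admit_write(used_bytes=850, quota_bytes=1000, write_bytes=50))  # warn
    ```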

    I’m not an SRE or dev, just a sysadmin, though. Users expect guard rails. If it’s possible, it’s permitted.

    • th3raid0r@tucson.socialOP

      Probably not feasible in our case. We sell our DB tech based on the sheer IOPS it’s capable of. It already alerts the user if the write-cache is full or the replication cache is backing up too.

      The problem is, at full tilt, a 9-node cluster can take on over 1 GB/s in new data. This is fine if the customer is writing over old records and doesn’t require any new space. It’s just more common that Mr. Customer added a new microservice and didn’t think through how much data it requires, causing a rapid increase in DB disk usage or IOPS that the cluster wasn’t sized for.
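
      For a sense of scale, at that ingest rate the remaining headroom disappears fast (the free-space figures below are illustrative):

      ```python
      # How long until stop-writes at ~1 GB/s of sustained net-new data.
      ingest_gb_per_s = 1.0  # ~1 GB/s on a 9-node cluster at full tilt, per above

      for free_tb in (1, 5, 10):
          hours = (free_tb * 1024) / ingest_gb_per_s / 3600
          print(f"{free_tb} TB free -> stop-writes in ~{hours:.1f} h")
      ```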

      We do have another product line in the works (we call it DBaaS) and that can autoscale because it’s based on clearly defined service levels and cluster specifications. I don’t think that product will have this problem.

      It’s just that these super mega special (read: big, important, Fortune 100) companies have requirements that mean they need something more hand-crafted. Otherwise we’d have automated the toil by now.

      • snowe@programming.devM

        As soon as you go down the path of customization for “special clients,” you’ve already lost the battle. The business needs to agree not to sell something like that. I’m not being helpful here, but as soon as you’ve started customizing like that to get massive clients, it will never end, and it will just slowly suffocate your company.