Often, MSPs reach out and tell us their techs don’t trust the alerts that come in.
In many ways, it’s like the boy who cried wolf. If something fails you enough, you stop believing in it.
In this same vein, when the RMM alerts start producing false or redundant information, it becomes difficult to trust them.
And lack of trust is a big issue when it comes to automating RMM tools.
What Happens When Techs Stop Trusting Their Alerting?
This is a perception issue in many ways.
When techs no longer trust their alerting, it’s generally because the entire process starts to feel like a waste of time. Technicians might even be spending more time trying to figure out why an alert came through than they are working to address it.
Wasteful, right? For an alert to be worth a technician's time, it has to seem worth solving.
For instance, say an alert says: Group policy failed to load.
If this happens 20 times per day, your techs know it, and the problem always fixes itself, then why even bother paying attention?
Those self-fixing alerts? They may seem helpful, but they aren’t actually worth technicians’ time.
Move Toward Actionable RMM Alerts
Purely informational alerts are useless.
From a business owner's standpoint, inaccurate or redundant alerts are like taking money and burning it.
Your team won’t be efficient without accurate, actionable alerts.
And just as no MSP should be using an improperly configured RMM tool, no RMM platform should be sending out alerts that don't benefit the client.
So what’s the answer?
Steps an MSP Can Take to Regain Trust in Their RMM Alerts
MSPs can be proactive in rebuilding trust.
Of course, one option is to hire a consultant. Or, you might hire someone in-house to take on your alerts management.
Then, there are the following useful steps:
Take a look at the service agreements you sell to your clients. You can then start to evaluate what to include based on the service level.
Does the tier include monitoring workstations and servers?
Does it include patching and automatic maintenance?
Consider each aspect of the service level, and then adjust your alerts accordingly.
Configure each client based on what they're paying for. Matching the monitoring in your RMM to each service agreement can go a long way.
This practice can help you avoid giving away services your clients aren’t paying for.
It will also allow you to monitor the patching and really all aspects of your RMM so your system reflects exactly what the client is investing in.
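As a rough sketch of this idea, you can think of each service tier as a whitelist of the monitoring a client pays for. The tier names, monitor names, and functions below are hypothetical, not taken from any particular RMM platform:

```python
# Hypothetical sketch: map each service tier to the monitors a client
# is paying for, and suppress alerts for anything outside that set.
# Tier and monitor names are illustrative assumptions.

SERVICE_TIERS = {
    "basic": {"server_monitoring"},
    "standard": {"server_monitoring", "workstation_monitoring"},
    "premium": {"server_monitoring", "workstation_monitoring",
                "patching", "automatic_maintenance"},
}


def monitors_for(tier: str) -> set:
    """Return the monitoring included in a given agreement tier."""
    try:
        return SERVICE_TIERS[tier]
    except KeyError:
        raise ValueError("Unknown service tier: " + repr(tier))


def should_alert(tier: str, monitor: str) -> bool:
    """Only raise alerts for services the client actually pays for."""
    return monitor in monitors_for(tier)
```

Configured this way, a patching alert for a client on the basic tier is dropped rather than landing in a technician's queue, so you never give away (or get noise from) services the client isn't paying for.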
Tailor thresholds to their use cases.
Have you looked at all the monitoring that is out-of-the-box and already enabled?
From there, make adjustments as needed.
Take a very basic memory monitor, for example: If the available RAM is under 10%, maybe an alert should be triggered. It shouldn’t be triggered on all systems, though.
To this end, you’ll want to consider each use case in your decision-making.
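To make the memory example concrete, per-use-case thresholds might look like the following sketch. The role names and threshold values here are assumptions for illustration, not recommendations from any vendor:

```python
# Hypothetical sketch: per-role RAM thresholds instead of one global value.
# Role names and threshold percentages are illustrative assumptions.

RAM_THRESHOLDS = {
    "database_server": 0.05,   # DB servers intentionally cache heavily in RAM
    "terminal_server": 0.10,
    "workstation": 0.10,
}
DEFAULT_THRESHOLD = 0.10


def ram_alert(role: str, available_bytes: int, total_bytes: int) -> bool:
    """Trigger only when available RAM falls below this role's threshold."""
    threshold = RAM_THRESHOLDS.get(role, DEFAULT_THRESHOLD)
    return (available_bytes / total_bytes) < threshold
```

With this shape, a database server that routinely sits at 6% available RAM stays quiet, while the same reading on a workstation raises an alert.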
Eliminate redundancies and turn off alerts if needed.
Many clients make changes globally, applying monitoring to all of their systems or to none of them.
But this isn’t ideal. Alerts should be configured on a case-by-case basis.
What this means is that changes should be made on the agreement level, and not universally.
Otherwise, you aren’t considering what’s most productive.
Setting the RAM monitor to alert only when 1% remains, for example, means a process that's consuming RAM it shouldn't may never trigger an alert at all. Remember to keep those alerts actionable!
Make sure the alerting is set up properly.
Are the alerts going to the correct boards, or are they coming in as email alerts?
Think about how you want your team to handle these alerts.
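One simple way to frame that decision is routing by severity, so actionable alerts reach the techs' board and informational noise stays out of the queue. The severity levels and destinations below are assumptions, not the behavior of any specific RMM or PSA:

```python
# Hypothetical sketch: route alerts by severity. Severity names and
# destinations are illustrative assumptions, not a real RMM's API.

ROUTES = {
    "critical": "service_board",    # actionable: goes straight to the techs
    "warning": "service_board",
    "informational": "log_only",    # not actionable: keep it out of the queue
}


def route_alert(severity: str) -> str:
    """Decide where an alert lands so the right people actually see it."""
    # Unknown severities default to the board rather than being silently lost.
    return ROUTES.get(severity, "service_board")
```

The design choice worth noting: anything unrecognized falls through to the board rather than disappearing, so a misconfigured monitor fails loudly instead of silently.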
Accordingly, all MSPs should take a step back and look at what they're selling. This means you'll need to communicate with your RMM provider to determine how the RMM can facilitate this service.
By doing so, you can make sure the alerting is set up correctly.
The results, and the corresponding trust in your RMM alerts, will likely follow.