HowTo: Send your Alerts to Microsoft Teams

Published by Florian Stoeber

Gathering metrics and logs from your applications is usually not enough. You want to receive alerts if something is suspicious and then investigate in a dashboard solution like Grafana or Kibana. At Liquid Reply, we had to find a solution to send our alerts from Alertmanager to Microsoft Teams. Our initial setup consisted of Prometheus, which gathers all the metrics, and an Alertmanager attached to it. This Alertmanager was sending notifications to a Slack channel. Because the customer was migrating from Slack to Microsoft Teams, we had to find a way to send these alerts to Microsoft Teams instead.

We ended up using an open-source project called prometheus-msteams. It provides an adapter that is compatible with both Alertmanager and Microsoft Teams. You do not need much to set it up, so it is quite easy to integrate into an existing setup. In this post, I want to explain how you can set up this solution yourself.

Architecture

The architecture of the solution is quite simple. We have our existing stack, which consists of Prometheus, Alertmanager and Grafana. While Alertmanager is capable of sending alerts directly to Slack, we have to integrate the prometheus-msteams adapter, which will take care of sending alerts to Microsoft Teams. In the end (after the new alert destination is established) we can remove the Slack integration.

Here is an overview of our proposed architecture:

Installation and configuration

Microsoft Teams

If we want to send our alerts to Microsoft Teams, we need to set up a Team or a channel for them. In addition, we have to create a webhook, which will later be used by the prometheus-msteams Pod. For this blog post, I have set up a new Team. If you want to do the same, follow this tutorial by Microsoft.

After that, we create an Incoming Webhook connector to which we can send the alerts. To do this, click on the channel you want to send the alerts to and select “Connectors”.

In the connectors overview we choose “Incoming Webhook” and add it to the channel. In this example we will name it “Alertmanager”. In the last step of the wizard, we are presented with our new webhook URL. In this example the URL of the webhook is:

https://xxxxx.webhook.office.com/webhookb2/96f843e5-16d9-44fb-8a11-d3c7da4880c4@b00367e2-193a-4f48-94de-7245d45c0947/IncomingWebhook/439f6e0a74824e6c9cff3758adc68170/1d39c3f8-0a70-4b38-9133-351cf46c3570
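
Before wiring anything into the cluster, it is worth checking that the webhook itself works. Teams incoming webhooks accept a simple JSON payload with a text field, so a quick test could look like this (using the placeholder URL from above):

curl -X POST 'https://xxxxx.webhook.office.com/webhookb2/96f843e5-16d9-44fb-8a11-d3c7da4880c4@b00367e2-193a-4f48-94de-7245d45c0947/IncomingWebhook/439f6e0a74824e6c9cff3758adc68170/1d39c3f8-0a70-4b38-9133-351cf46c3570' \
  -H 'Content-Type: application/json' \
  -d '{"text": "Test message from the Alertmanager setup"}'

If the call succeeds and the message appears in the channel, the webhook is ready to be used.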

Prometheus-msteams

After preparing our Microsoft Teams webhook we are ready to go. To test this setup, I have created a Kubernetes cluster with kind (Kubernetes in Docker):

kind create cluster
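
If you are following along, you can quickly verify that the new cluster is reachable (kind registers a kubectl context named kind-kind for the default cluster name):

kubectl cluster-info --context kind-kind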

We will install prometheus-msteams via Helm. To do this, we add the repository to our Helm CLI:

helm repo add prometheus-msteams https://prometheus-msteams.github.io/prometheus-msteams/
helm repo update

We define a values file (prometheus-msteams-values.yaml), which we will use to pass the necessary configuration to the Helm chart:

replicaCount: 3

connectors:
# in alertmanager, this will be used as http://prometheus-msteams:2000/kubernetes-monitoring
- kubernetes-monitoring: https://xxxxx.webhook.office.com/webhookb2/96f843e5-16d9-44fb-8a11-d3c7da4880c4@b00367e2-193a-4f48-94de-7245d45c0947/IncomingWebhook/439f6e0a74824e6c9cff3758adc68170/1d39c3f8-0a70-4b38-9133-351cf46c3570

metrics:
  serviceMonitor:
    enabled: true
    scrapeInterval: 30s

Next, we install prometheus-msteams via Helm:

helm install prometheus-msteams prometheus-msteams/prometheus-msteams -f prometheus-msteams-values.yaml

Now, if we look at our Pods, we see that three prometheus-msteams Pods have been created:

kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
prometheus-msteams-686c7fc45f-6vmkh                1/1     Running   0          24s
prometheus-msteams-686c7fc45f-g6wdv                1/1     Running   0          24s
prometheus-msteams-686c7fc45f-lbxqv                1/1     Running   0          24s
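
At this point we can already test the adapter on its own, without Alertmanager. prometheus-msteams expects the usual Alertmanager webhook payload on the connector path, so a manual test can port-forward the Service and POST a small payload (the JSON below is a reduced sketch of the Alertmanager webhook format, not a full message):

kubectl port-forward svc/prometheus-msteams 2000:2000 &

curl -X POST http://localhost:2000/kubernetes-monitoring \
  -H 'Content-Type: application/json' \
  -d '{
        "version": "4",
        "status": "firing",
        "receiver": "prometheus-msteams",
        "groupLabels": {},
        "commonLabels": {},
        "commonAnnotations": {},
        "externalURL": "http://alertmanager.example",
        "alerts": [
          {
            "status": "firing",
            "labels": {"alertname": "TestAlert", "severity": "critical"},
            "annotations": {"summary": "Manual test alert"},
            "startsAt": "2021-01-01T00:00:00Z",
            "endsAt": "0001-01-01T00:00:00Z"
          }
        ]
      }'

If a card shows up in the Teams channel, the path from the adapter to Microsoft Teams is working.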

To send alerts to these Pods, we are going to install the kube-prometheus-stack. This stack provides all the components we use in production clusters (e.g. Prometheus, Grafana, Alertmanager).
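
If the prometheus-community repository has not been added to your Helm CLI yet, add it first:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Then we can install the stack with our custom values: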

helm install prom prometheus-community/kube-prometheus-stack -f kube-prometheus-stack-values.yaml

We deploy the stack with a few custom values (kube-prometheus-stack-values.yaml) and define prometheus-msteams as a receiver for alerts. The URL we have to provide consists of the Kubernetes Service name, the namespace, the port and the route. In our case, the route “kubernetes-monitoring” was defined as a connector in the prometheus-msteams values:

alertmanager:
  enabled: true
  config:
    global:
      resolve_timeout: 5m
    receivers:
    - name: prometheus-msteams
      webhook_configs:
      - url: "http://prometheus-msteams.default:2000/kubernetes-monitoring"
        send_resolved: true
    route:
      # the root route needs a default receiver that exists in the receivers list
      receiver: prometheus-msteams
      repeat_interval: 12h
      routes:
      - match:
          severity: critical
        receiver: prometheus-msteams
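
As a rule of thumb, the webhook URL follows the pattern http://<service>.<namespace>:<port>/<connector>. With the values used above, the Service is called prometheus-msteams, runs in the default namespace and listens on port 2000, which you can double-check before deploying:

kubectl get service prometheus-msteams -n default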

Let’s check our Pods again:

kubectl get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prom-kube-prometheus-stack-alertmanager-0   2/2     Running   0          6m7s
prom-grafana-6585f698c7-n7pm7                            2/2     Running   0          6m9s
prom-kube-prometheus-stack-operator-559cd97f65-h9lwl     1/1     Running   0          6m9s
prom-kube-state-metrics-5f44797794-v65wm                 1/1     Running   0          6m9s
prom-prometheus-node-exporter-gl7nq                      1/1     Running   0          6m9s
prometheus-msteams-686c7fc45f-6vmkh                      1/1     Running   0          34m
prometheus-msteams-686c7fc45f-g6wdv                      1/1     Running   0          34m
prometheus-msteams-686c7fc45f-lbxqv                      1/1     Running   0          34m
prometheus-prom-kube-prometheus-stack-prometheus-0       2/2     Running   1          6m7s

Now that the setup is up and running, we can test it. I randomly killed a few Pods in the kube-system namespace. Because the kube-prometheus-stack ships with a nice set of preconfigured alerting rules, it will quickly send an alert to our new alerting destination, and the alert shows up in our Microsoft Teams channel:
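
If you do not want to break anything in the cluster, you can also fire a synthetic alert straight at the Alertmanager API. The Service name below assumes the chart was installed with the release name prom as shown above; adjust it if yours differs:

kubectl port-forward svc/prom-kube-prometheus-stack-alertmanager 9093:9093 &

curl -X POST http://localhost:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[
        {
          "labels": {"alertname": "ManualTestAlert", "severity": "critical"},
          "annotations": {"summary": "Synthetic alert to test the Teams integration"}
        }
      ]'

Because the alert carries the label severity: critical, it matches our route and is forwarded to prometheus-msteams and from there to Microsoft Teams.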

Conclusion

All in all, we were able to send our alerts to Microsoft Teams instead of Slack. We only have a few more Pods running in our cluster, and the whole setup took just a couple of minutes. With the help of the provided Helm chart, it is also possible to customize the solution, e.g. with your own message template for Microsoft Teams.

As always, we provide the files and a command summary for this blog post in our GitHub repository at https://github.com/Liquid-Reply/blogpost-resources. If you want to read more on topics related to monitoring, feel free to browse the blog and explore the #observability tag.

References

https://github.com/prometheus-msteams/prometheus-msteams

https://github.com/prometheus-operator/kube-prometheus

https://github.com/prometheus-community/helm-charts

https://github.com/Liquid-Reply/blogpost-resources

Slack-Logo: https://slack.com/intl/de-de/media-kit

Microsoft Teams Logo: fasttrack.microsoft.com

MICROSOFT and Microsoft Teams are trademarks of the Microsoft group of companies.


Florian Stoeber

Florian works as a Kubernetes Engineer for Liquid Reply. After his apprenticeship and his studies, he specialized in Kubernetes technologies and has worked on several projects building monitoring and logging solutions.