A BitLocker recovery key is a unique 48-digit numerical password that can be used to unlock your system if BitLocker is otherwise unable to confirm that the attempt to access the system drive is authorized.

The following SQL query retrieves the recovery key:

SELECT a.Id, a.Name, b.VolumeId, c.RecoveryKeyId, c.RecoveryKey, c.LastUpdateTime
FROM dbo.RecoveryAndHardwareCore_Machines a
INNER JOIN dbo.RecoveryAndHardwareCore_Machines_Volumes b ON a.Id = b.MachineId
INNER JOIN dbo.RecoveryAndHardwareCore_Keys c ON b.VolumeId = c.VolumeId

But this query returns an encrypted value.

Luckily, it’s pretty easy to decrypt it:

All we need to do is locate the SQL stored procedure [RecoveryAndHardwareRead].[GetRecoveryKey].

Stored procedures are located under Programmability > Stored Procedures.

Right-click the procedure > Script Stored Procedure as > CREATE To > New Query Editor Window.

A quick look into this stored procedure reveals the line which decrypts the recovery key:

RecoveryAndHardwareCore.DecryptString(RecoveryAndHardwareCore_Keys.RecoveryKey, DEFAULT) AS RecoveryKey,

DecryptString is a scalar-valued CLR function which takes the encrypted column and a certificate name as parameters and decrypts the value:

CREATE FUNCTION [RecoveryAndHardwareCore].[DecryptString](@ciphertext [varbinary](8000), @certificateName [nvarchar](48) = N'CERT_NAME')
EXTERNAL NAME [CryptoUtility].[Microsoft.SystemsManagementServer.SQLCLR.CryptoServiceProvider].[DecryptString]

The EXTERNAL NAME clause specifies that the function [RecoveryAndHardwareCore].[DecryptString] will be created from a SQL Server assembly. EXTERNAL NAME uses the following syntax to identify the correct class and method in the assembly: AssemblyName.ClassName.MethodName.

In the previous example, the registered assembly is named [CryptoUtility], the class within the assembly is [Microsoft.SystemsManagementServer.SQLCLR.CryptoServiceProvider], and the method within that class that will be executed is [DecryptString].

An assembly is a file that is automatically generated by the compiler upon successful compilation of a .NET application. It can be either a dynamic link library (DLL) or an executable. The CryptoUtility assembly is located at <ConfigMgr_Install>\bin\x64\CryptoUtility.dll.

SQLCLR (SQL Common Language Runtime) is a technology for hosting the Microsoft .NET common language runtime engine within SQL Server. SQLCLR allows managed code to be hosted by, and run in, the Microsoft SQL Server environment.

This technology, introduced in Microsoft SQL Server 2005, allows users to create managed code objects (such as stored procedures, functions, and triggers) in SQL Server in .NET languages such as C# or VB.NET.

The SQL CLR relies on the creation, deployment, and registration of CLI assemblies, which are physically stored in managed code dynamic load libraries (DLLs). These assemblies may contain CLI namespaces, classes, functions and properties.

CryptoServiceProvider provides methods and properties for accessing or examining Cryptographic Service Providers (CSPs) registered in the system.

Using this finding, we can create SQL report to get BitLocker status, like this one:

SELECT cm.Name,
s.User_Name0 as 'User name',
s.Last_Logon_Timestamp0 as 'Last Logon Time',
csys.Manufacturer0 as 'Manufacturer',
csys.Model0 as 'Model',
CASE EV.ProtectionStatus0
WHEN '0' THEN 'No' 
WHEN '1' THEN 'Yes' 
WHEN '2' THEN 'Unknown' 
END AS 'Bitlocker Enabled',
CASE WHEN (TPM.IsActivated_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Activated],  
CASE WHEN (TPM.IsEnabled_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Enabled],  
CASE WHEN (TPM.IsOwned_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Owned], 
EV.ProtectionStatus0 AS 'Bitlocker Indicator',

RecoveryAndHardwareCore.DecryptString(ck.RecoveryKey, DEFAULT) AS RecoveryKey
--,RecoveryAndHardwareCore.DecryptBinary(ck.RecoveryKeyPackage, DEFAULT) AS BitLockerRecoveryKeyPackage

FROM   RecoveryAndHardwareCore_Keys ck
INNER JOIN RecoveryAndHardwareCore_Machines cm ON cm.Id = ck.Id
LEFT  JOIN v_R_System s on s.Name0=cm.Name
left join v_GS_COMPUTER_SYSTEM csys on csys.ResourceID = s.ResourceID
left join  v_GS_BITLOCKER_DETAILS  bl on bl.Resourceid=s.ResourceID 
left join v_GS_MBAM_POLICY mbam on mbam.ResourceID=s.ResourceID
left join v_GS_ENCRYPTABLE_VOLUME EV on EV.resourceid=s.resourceid

In the previous post we installed the Prometheus Operator using Helm on a Kubernetes cluster; in this one we'll configure Prometheus to send alerts, and we'll also create one custom rule.

Email configuration

First, list all Prometheus Operator secrets; we need to edit the alertmanager-prometheus-prometheus-oper-alertmanager secret:

kubectl get secrets -n monitoring
NAME                                                          TYPE                                  DATA   AGE
alertmanager-prometheus-prometheus-oper-alertmanager          Opaque                                1      4h36m
default-token-5csvg                                           kubernetes.io/service-account-token   3      5d18h
prometheus-grafana                                            Opaque                                3      4h36m
prometheus-grafana-test-token-crz5r                           kubernetes.io/service-account-token   3      4h36m
prometheus-grafana-token-gxrc2                                kubernetes.io/service-account-token   3      4h36m
prometheus-kube-state-metrics-token-dz4gg                     kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-node-exporter-token-ct65h               kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-admission                          Opaque                                3      5d18h
prometheus-prometheus-oper-alertmanager-token-c4wwv           kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-operator-token-kd7fg               kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-prometheus-token-rfbk2             kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-prometheus-oper-prometheus              Opaque                                1      4h35m
prometheus-prometheus-prometheus-oper-prometheus-tls-assets   Opaque                                0      4h35m
sh.helm.release.v1.prometheus.v1                              helm.sh/release.v1                    1      4h36m
sh.helm.release.v1.prometheus.v2                              helm.sh/release.v1                    1      4h22m

Create file alertmanager.yaml:

global:
  resolve_timeout: 5m

route:
  receiver: 'email-alert'
  group_by: ['job']

  routes:
  - receiver: 'email-alert'
    # When a new group of alerts is created by an incoming alert, wait at
    # least 'group_wait' to send the initial notification.
    # This ensures that multiple alerts for the same group that start
    # firing shortly after one another are batched together on the first
    # notification.
    group_wait: 50s
    # When the first notification was sent, wait 'group_interval' to send a
    # batch of new alerts that started firing for that group.
    group_interval: 5m
    # If an alert has successfully been sent, wait 'repeat_interval' to
    # resend it.
    repeat_interval: 12h

receivers:
- name: email-alert
  email_configs:
  - to:
    # Your smtp server address
    smarthost:
    auth_password: pass

Encode the content of the alertmanager.yaml file:

cat alertmanager.yaml | base64 -w0
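As a sanity check, you can round-trip the encoding before pasting it into the secret; a minimal sketch (the sample string stands in for the file content):

```shell
# Encode a snippet the same way as above (-w0 disables line wrapping,
# so the value stays on a single line as required by the Secret)
encoded=$(printf 'resolve_timeout: 5m' | base64 -w0)

# Decode it back to confirm nothing was mangled
printf '%s' "$encoded" | base64 -d
```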
---output omitted----

Replace the existing encoded value in the alertmanager-prometheus-prometheus-oper-alertmanager secret with the one generated in the previous step:

kubectl edit secret -n monitoring alertmanager-prometheus-prometheus-oper-alertmanager
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
data:
  alertmanager.yaml: Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0Kcm91dGU6CiAgcmVjZWl2ZXI6ICdlbWFpbC1hbGVydCcKICBncm91cF9ieTogWydqb2InXQogIAogIAogIHJvdXRlczoKICAtIHJlY2VpdmVyOiAnZW1haWwtYWxlcnQnCiAgICBtYXRjaDoKICAgICAgYWxlcnRuYW1lOiBleGFtcGxlCiAgICBncm91cF93YWl0OiA1MHMKICAgIGdyb3VwX2ludGVydmFsOiA1bQogICAgcmVwZWF0X2ludGVydmFsOiAxMmggIAoKcmVjZWl2ZXJzOgotIG5hbWU6IGVtYWlsLWFsZXJ0CiAgZW1haWxfY29uZmlnczoKICAtIHRvOiBkcmFnYW4udnVjYW5vdmljQGRldnRlY2hncm91cC5jb20KICAgIGZyb206IGRyYWdhbi52dWNhbm92aWNAZGV2dGVjaGdyb3VwLmNvbQogICAgIyBZb3VyIHNtdHAgc2VydmVyIGFkZHJlc3MKICAgIHNtYXJ0aG9zdDogc210cC5vZmZpY2UzNjUuY29tOjU4NwogICAgYXV0aF91c2VybmFtZTogZHJhZ2FuLnZ1Y2Fub3ZpY0BkZXZ0ZWNoZ3JvdXAuY29tCiAgICBhdXRoX2lkZW50aXR5OiBkcmFnYW4udnVjYW5vdmljQGRldnRlY2hncm91cC5jb20KICAgIGF1dGhfcGFzc3dvcmQ6IFplbXVuMjAxNAo=
kind: Secret
metadata:
  creationTimestamp: "2020-03-24T08:51:59Z"
  name: alertmanager-prometheus-prometheus-oper-alertmanager
  namespace: monitoring
  resourceVersion: "1894375"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-prometheus-prometheus-oper-alertmanager
  uid: 604cced2-679c-4dbc-a842-588d14ce70d5
type: Opaque

Once you’re done, save the changes. To check that the changes were applied, go to the Alertmanager web GUI.

Note: your port number may be different.

http://<Kubernetes IP>:30700/#/status

You should receive a couple of emails from the Prometheus Alertmanager soon; if not, check the Alertmanager logs.

kubectl get pods -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          25h
prometheus-grafana-6f98769f57-dhqtt                      3/3     Running   3          29h
prometheus-kube-state-metrics-d5dc994cc-psqw6            1/1     Running   1          29h
prometheus-prometheus-node-exporter-cjhvv                1/1     Running   1          29h
prometheus-prometheus-oper-operator-7d4497dd9-dp768      2/2     Running   2          29h
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   3          29h

kubectl logs alertmanager-prometheus-prometheus-oper-alertmanager-0 -n monitoring -c alertmanager

level=info ts=2020-03-24T14:47:43.997Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2020-03-24T14:47:43.997Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml

Configuring custom alerts

By default, the following Prometheus Operator rules exist:

kubectl -n monitoring get prometheusrules
NAME                                                              AGE
prometheus-prometheus-oper-alertmanager.rules                     29h
prometheus-prometheus-oper-etcd                                   29h
prometheus-prometheus-oper-general.rules                          29h
prometheus-prometheus-oper-k8s.rules                              29h
prometheus-prometheus-oper-kube-apiserver-error                   29h
prometheus-prometheus-oper-kube-apiserver.rules                   29h
prometheus-prometheus-oper-kube-prometheus-node-recording.rules   29h
prometheus-prometheus-oper-kube-scheduler.rules                   29h
prometheus-prometheus-oper-kubernetes-absent                      29h
prometheus-prometheus-oper-kubernetes-apps                        29h
prometheus-prometheus-oper-kubernetes-resources                   29h
prometheus-prometheus-oper-kubernetes-storage                     29h
prometheus-prometheus-oper-kubernetes-system                      29h
prometheus-prometheus-oper-kubernetes-system-apiserver            29h
prometheus-prometheus-oper-kubernetes-system-controller-manager   29h
prometheus-prometheus-oper-kubernetes-system-kubelet              29h
prometheus-prometheus-oper-kubernetes-system-scheduler            29h
prometheus-prometheus-oper-node-exporter                          29h
prometheus-prometheus-oper-node-exporter.rules                    29h
prometheus-prometheus-oper-node-network                           29h
prometheus-prometheus-oper-node-time                              29h
prometheus-prometheus-oper-node.rules                             29h
prometheus-prometheus-oper-prometheus                             29h
prometheus-prometheus-oper-prometheus-operator                    29h

All rules can be deleted except prometheus-prometheus-oper-general.rules. This rule is called Watchdog; the alert is always firing and its purpose is to ensure that the entire alerting pipeline is functional. Before creating the rule expression, we first need to evaluate it; navigate to the Prometheus web GUI:

http://<Kubernetes IP>:30900/graph (your port may be different),

click Console and enter the expression; in this example we'll monitor the number of restarts for a specific Kubernetes container: kube_pod_container_status_restarts_total{container="coredns"}.

Note the output: we'll specify the container, namespace, and pod name in the alert rule expression. We could also get a total restart count across containers, but in that case we wouldn't get any output in the Element section, so we hard-code the container, namespace, and pod names.
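For comparison, an aggregated variant of the same metric (a sketch; whether it returns data depends on your cluster's labels) sums restarts per pod instead of listing every series:

```
sum by (namespace, pod) (kube_pod_container_status_restarts_total{container="coredns"})
```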

Choose which rule you want to edit (in this example, prometheus-prometheus-oper-node-exporter is edited):

kubectl edit prometheusrule prometheus-prometheus-oper-node-exporter -n monitoring

Add the rule at the end of the file and, once done, save it; the alert will be triggered when the number of container restarts is greater than 0.

    - alert: RestartAlerts
      annotations:
        description: '{{ $labels.container }} restarted (current value: {{ $value}}s)
          times in pod {{ $labels.namespace }}/{{ $labels.pod }}'
        summary: More than 5 restarts in pod {{ $labels.container }}
      expr: kube_pod_container_status_restarts_total{container="coredns"} > 0
      for: 1s
      labels:
        severity: warning

If all is OK, the new alert should appear in Prometheus.

We can add as many rules as we want.

A new email should end up in your inbox.

Grafana – creating Dashboard

In the Grafana dashboard, click the + sign > Dashboard

Click Add Query

In the metric field, type the Prometheus query for which you want to create the dashboard; if more queries are needed, just click Add Query

For example, the average memory usage for a particular container (in MB) was added


The time range can also be adjusted

Click the “diskette” (save) icon to save the dashboard

StatefulSets in Kubernetes are used for applications where data consistency and replication are required (e.g. relational databases).

In a StatefulSet, each pod is assigned a unique ordinal number in the range [0, N), and pods are shut down in reverse order to ensure a reliable and repeatable deployment and runtime. The StatefulSet will not even scale until all the required pods are running, so if one dies, it recreates the pod before attempting to add additional instances to meet the scaling criteria. This ID sticks to the pod even when it is rescheduled to another worker node, so the pod retains the connection to the volume that holds the state of the database. In this way each replica pod in a StatefulSet has its own state and data.

In this example, a service named mysql will be exposed to client applications, and the service will pass read/write requests to one of the nodes. Each node stores data in a persistent volume (the /mnt/mysql[01-02] folders on the Kubernetes host). Each volume is replicated to a Kubernetes node, ensuring data availability.

A persistent volume claim is created for each pod.


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
provisioner: kubernetes.io/no-provisioner  # assumed; this required field was missing
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-01
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql01"
    type: DirectoryOrCreate
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-02
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql02"
    type: DirectoryOrCreate

MySQL credentials are stored in env.example,

and a secret is created from this file and referenced in workload.yaml:

kubectl create secret generic prod-secrets --from-env-file=env.example
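env.example might look like the following sketch (placeholder values; MYSQL_ROOT_PASSWORD and friends are the variables the official mysql image reads, a custom mysql:dev image may expect different names):

```
# env.example -- placeholder credentials, replace before use
MYSQL_ROOT_PASSWORD=changeme
MYSQL_DATABASE=appdb
MYSQL_USER=appuser
MYSQL_PASSWORD=changeme2
```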


apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # Open port 3306 only to pods in the cluster
  type: ClusterIP
  selector:
    app: mysql-container
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: 3306

volumeClaimTemplates will automatically create a persistent volume claim for each pod; workload.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-container
spec:
  serviceName: mysql
  replicas: 2
  selector:
    matchLabels:
      app: mysql-container
  template:
    metadata:
      labels:
        app: mysql-container
    spec:
      containers:
      - name: mysql-container
        image: mysql:dev
        imagePullPolicy: "IfNotPresent"
        envFrom:
          - secretRef:
              name: prod-secrets
        ports:
        - containerPort: 3306
        volumeMounts:
          # container (pod) path
          - name: mysql-persistent-storage
            mountPath: /var/lib/mysql
        resources:
          requests:
            memory: 300Mi
            cpu: 400m
          limits:
            memory: 400Mi
            cpu: 500m
      restartPolicy: Always

  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        storageClassName: localstorage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
        selector:
          matchLabels:
            type: local

After all of the above YAML files are applied, check the status:

kubectl get pvc
NAME                                         STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-persistent-storage-mysql-container-0   Bound    mysql-02   5Gi        RWO            localstorage   3h40m
mysql-persistent-storage-mysql-container-1   Bound    mysql-01   5Gi        RWO            localstorage   3h39m

kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                       STORAGECLASS   REASON   AGE
mysql-01       5Gi        RWO            Retain           Bound    default/mysql-persistent-storage-mysql-container-1                                                                          localstorage            3h42m
mysql-02       5Gi        RWO            Retain           Bound    default/mysql-persistent-storage-mysql-container-0  

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mysql-container-0                   1/1     Running   0          8m17s
mysql-container-1                   1/1     Running   0          8m27s

kubectl get services
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
adminer-container   NodePort     <none>        8080:30050/TCP   2d5h
kubernetes          ClusterIP        <none>        443/TCP          3d2h
mysql               ClusterIP   <none>        3306/TCP         33m

 kubectl describe statefulset.apps/mysql-container
Name:               mysql-container
Namespace:          default
CreationTimestamp:  Fri, 20 Mar 2020 11:49:31 -0400
Selector:           app=mysql-container
Labels:             <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        824640705452
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mysql-container
    Image:      mysql:dev
    Port:       3306/TCP
    Host Port:  0/TCP
      cpu:     500m
      memory:  400Mi
      cpu:     400m
      memory:  300Mi
    Environment Variables from:
      prod-secrets  Secret  Optional: false
    Environment:    <none>
      /var/lib/mysql from mysql-persistent-storage (rw)
  Volumes:  <none>
Volume Claims:
  Name:          mysql-persistent-storage
  StorageClass:  localstorage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      5Gi
  Access Modes:  [ReadWriteOnce]
  Type    Reason            Age                  From                    Message
  ----    ------            ----                 ----                    -------
  Normal  SuccessfulCreate  38m                  statefulset-controller  create Claim mysql-persistent-storage-mysql-container-2 Pod mysql-container-2 in StatefulSet mysql-container success
  Normal  SuccessfulCreate  38m                  statefulset-controller  create Pod mysql-container-2 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  37m                  statefulset-controller  delete Pod mysql-container-2 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  21m                  statefulset-controller  delete Pod mysql-container-1 in StatefulSet mysql-container successful
  Normal  SuccessfulCreate  21m (x7 over 3h54m)  statefulset-controller  create Pod mysql-container-1 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  21m                  statefulset-controller  delete Pod mysql-container-0 in StatefulSet mysql-container successful
  Normal  SuccessfulCreate  21m (x7 over 3h55m)  statefulset-controller  create Pod mysql-container-0 in StatefulSet mysql-container successful

The Prometheus Operator serves to make running Prometheus on top of Kubernetes as easy as possible, while preserving Kubernetes-native configuration options. The Operator ensures at all times that for each Prometheus resource in the cluster a set of Prometheus servers with the desired configuration is running. This entails aspects like the data retention time, persistent volume claims, number of replicas, the Prometheus version, and Alertmanager instances to send alerts to. Each Prometheus instance is paired with a configuration that specifies which monitoring targets to scrape for metrics and with which parameters.

The Prometheus Operator chart installs Prometheus, Alertmanager, Grafana, node-exporter, and kube-state-metrics.

Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture:

  • Visualize: Fast and flexible client-side graphs with a multitude of options. Panel plugins for many different ways to visualize metrics and logs.
  • Dynamic Dashboards: Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.
  • Explore Metrics: Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.
  • Explore Logs: Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.
  • Alerting: Visually define alert rules for your most important metrics. Grafana will continuously evaluate them and send notifications to systems like Slack, PagerDuty, VictorOps, OpsGenie.
  • Mixed Data Sources: Mix different data sources in the same graph! You can specify a data source on a per-query basis. This works even for custom data sources.


Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

Install Helm:

Helm is a tool that streamlines the installation and management of Kubernetes applications.

# Download the release tarball from https://get.helm.sh first
tar xvf helm-v3.0.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/

Create a storage class and Persistent Volumes (PV):


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus
provisioner: kubernetes.io/no-provisioner  # assumed; this required field was missing
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/prometheus/"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/grafana/"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/alertmanager/"
    type: DirectoryOrCreate

Create the monitoring namespace and apply this file:

kubectl create namespace monitoring
kubectl apply -f storage.yaml

Installing Prometheus operator

Create a file custom-values.yaml to customize the Prometheus Operator deployment:

# Define persistent storage for Prometheus (PVC)
prometheus:
  prometheusSpec:
    securityContext:
      fsGroup: 0
      runAsUser: 0
      runAsNonRoot: false
    storageSpec:
      volumeClaimTemplate:
        spec:
          volumeName: template
          accessModes: ["ReadWriteOnce"]
          storageClassName: prometheus
          resources:
            requests:
              storage: 10Gi

# Define persistent storage for Grafana (PVC)
grafana:
  # Set password for Grafana admin user
  adminPassword: pass
  persistence:
    enabled: true
    volumeName: grafana
    storageClassName: prometheus
    accessModes: ["ReadWriteOnce"]
    size: 5Gi

# Define persistent storage for Alertmanager (PVC)
alertmanager:
  alertmanagerSpec:
    securityContext:
      fsGroup: 0
      runAsUser: 0
      runAsNonRoot: false
    storage:
      volumeClaimTemplate:
        spec:
          volumeName: alertmanager
          accessModes: ["ReadWriteOnce"]
          storageClassName: prometheus
          resources:
            requests:
              storage: 5Gi

# Change default node-exporter port
prometheus-node-exporter:
  service:
    port: 30206
    targetPort: 30206

# Enable etcd metrics
kubeEtcd:
  enabled: true

# Enable controller-manager metrics
kubeControllerManager:
  enabled: true

# Enable scheduler metrics
kubeScheduler:
  enabled: true

Grafana will be exposed on port 30800, Prometheus on port 30900, and Alertmanager on port 30700.

helm install prometheus stable/prometheus-operator --namespace monitoring -f custom-values.yaml --set prometheus.service.nodePort=30900 --set prometheus.service.type=NodePort --set grafana.service.nodePort=30800 --set grafana.service.type=NodePort --set alertmanager.service.nodePort=30700 --set alertmanager.service.type=NodePort

List the services

kubectl get svc -n monitoring
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   8m13s
prometheus-grafana                        NodePort    <none>        80:30800/TCP                 8m19s
prometheus-kube-state-metrics             ClusterIP      <none>        8080/TCP                     8m19s
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     8m2s
prometheus-prometheus-node-exporter       ClusterIP   <none>        30206/TCP                    8m19s
prometheus-prometheus-oper-alertmanager   NodePort   <none>        9093:30700/TCP               8m19s
prometheus-prometheus-oper-operator       ClusterIP   <none>        8080/TCP,443/TCP             8m19s
prometheus-prometheus-oper-prometheus     NodePort       <none>        9090:30900/TCP               8m19s

And the pods

kubectl get pods -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          8m35s
prometheus-grafana-7d64744489-qp7xb                      3/3     Running   0          8m41s
prometheus-kube-state-metrics-d5dc994cc-9fsrr            1/1     Running   0          8m41s
prometheus-prometheus-node-exporter-2wvs8                1/1     Running   0          8m41s
prometheus-prometheus-oper-operator-7d4497dd9-4xmjb      2/2     Running   0          8m41s
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   0          8m24s

Prometheus, Alertmanager, and Grafana should now be accessible from outside the cluster.


Log in to Prometheus (http://IP:30900) and click Status > Targets.

In case you see errors, stop the firewall and restart all the Prometheus pods, or reboot the machine.

To access Grafana go to http://IP:30800

Installing NRPE

Nagios Remote Plugin Executor (NRPE) is used to remotely execute Nagios plugins on Linux/Unix machines. This makes it easy to monitor remote machine metrics such as disk usage, CPU load, number of running processes, logged in users etc.

On the machine whose disk needs to be monitored, install NRPE:

yum install epel-release -y   # NRPE is shipped in the EPEL repository
yum install nrpe -y

Installing disk plugin

List all available plugins

yum list nagios-plugins*

Install disk plugin:

yum install nagios-plugins-disk.x86_64

Uncomment and edit the following lines in /etc/nagios/nrpe.cfg:

server_address=<local IP>
allowed_hosts=127.0.0.1,::1,<nagios_server_ip>
# allow arguments
command[check_disk]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /opt/vbox

In the above example, the /opt/vbox partition is monitored; a warning is raised when free space falls below 20% and a critical alert is created when free space falls below 10%.
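The threshold logic boils down to simple arithmetic; a sketch of how such a check could be approximated in plain shell (an assumed simplification, not the actual plugin source, and the sizes are hypothetical):

```shell
# Warn when free% < 20, critical when free% < 10 (mirrors -w 20% -c 10%)
free_kb=2129920      # hypothetical free space on the partition
total_kb=361103360   # hypothetical partition size
free_pct=$(( free_kb * 100 / total_kb ))   # integer percentage
if   [ "$free_pct" -lt 10 ]; then echo "DISK CRITICAL"
elif [ "$free_pct" -lt 20 ]; then echo "DISK WARNING"
else echo "DISK OK"
fi
```

With these numbers free space is under 1%, so the sketch prints DISK CRITICAL, matching the plugin output shown further below.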

When done, start and enable the nrpe service (or restart it if it was already running):

systemctl enable nrpe
systemctl start nrpe

Steps on the Nagios server

Test if the plugin works from the Nagios server:

/usr/lib64/nagios/plugins/check_nrpe -H <monitored_host_ip> -c check_disk
DISK CRITICAL - free space: /opt/vbox 2080 MiB (0.61% inode=100%);| /opt/vbox=333875MiB;282112;317376;0;352640

On the Nagios server, add a command for the disk plugin.

Edit the /usr/local/nagios/etc/objects/commands.cfg file and add the following lines:

define command {
    command_name check_partition
    command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c check_disk
}

Add a reference to this command in the monitored host's file located in the /usr/local/nagios/etc/objects/conf.d/ folder:

define service{
        use                             generic-service         ; Name of service template to use
        host_name                       vagrant.test.local
        service_description             check vagrant partition
        check_command                   check_partition
}

Restart the nagios service:

systemctl restart nagios

A couple of Terraform modules for creating resources in Azure.

Requirements:

  • Terraform 0.12
  • Terraform configured for Azure
  • the modules downloaded

In terraform\modules\vm\example there is a file that calls all the modules.

Edit this file to suit your needs:

#Resource group

module "rg" {

source                                   = "../../../modules/rg"
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
}

module "azure_vnet" {
source                                   = "../../../modules/vnet"
environment_name                         = var.environment_name
cidr_block                               = ""
dns_servers                              = []   # add DNS server IPs if needed
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
enable_wan_subnet                        = true
enable_dmz_subnet                        = true
enable_vdi_subnet                        = false
enable_infrastructure_services_subnet    = false
enable_infrastructure_db_services_subnet = false
enable_production_app_services_subnet    = false
enable_production_db_services_subnet     = false
enable_acceptance_app_services_subnet    = false
enable_acceptance_db_services_subnet     = false
enable_test_app_services_subnet          = false
enable_test_db_services_subnet           = false
enable_development_app_services_subnet   = false
enable_development_db_services_subnet    = false
}

module "application_gateway" {

source                                   = "../../../modules/app_gateway"
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
sku_name                                 = "WAF_Medium"
tier                                     = "WAF"
capacity                                 = 1
subnet_id                                = module.azure_vnet.subnets_id_dmz
targets                                  = module.azure_vm.azure_vm_nic_id
ip_configuration                         = module.azure_vm.azure_nic_ip_configuration
# https settings
https                                    = false
}

#module "azure_key_vault" {

#source                                   = "../../../modules/vault"
#environment_name                         = var.environment_name
#resource_group_name                      = module.rg.resource_group_name
#resource_group_location                  = module.rg.resource_group_location
#azure_object_id                          = var.azure_object_id
#azure_tenant_id                          = var.azure_tenant_id
#key_vault_name                           = var.key_vault_name
#network_acl                              = [""]
#}

module "azure_vm" {

source = "../../../modules/vm"
#key_vault_url = module.azure_key_vault.key_vault_url
#key_vault_resource_id = module.azure_key_vault.key_vault_resource_id
#key_encryption_key_name = module.azure_key_vault.key_encryption_key_name
#key_encryption_key_version = module.azure_key_vault.key_encryption_key_version
#key_vault_secret_id = module.azure_key_vault.key_vault_secret_id
subnet_id = module.azure_vnet.subnets_id_wan
resource_group_name = module.rg.resource_group_name
os = "windows"
vm_size = "Standard_B2ms"
vm_image_publisher = "MicrosoftWindowsServer"
vm_image_offer = "WindowsServer"
vm_image_sku = "2016-Datacenter"
vm_name = "myvm2"
vm_admin = "ja"
vm_password = "Passw0rd01234!"
number_of_machines = 1
disk_size = 2
number_of_managed_disks = 0
encryption = false
public_ip  = false
}

A readme file is in terraform\modules\app_gateway\.
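The var.* references above assume matching variable declarations somewhere in the example, along these lines (hypothetical; the names mirror the calls above, the defaults are placeholders):

```
variable "resource_group_name" {
  type    = string
  default = "my-rg"
}

variable "resource_group_location" {
  type    = string
  default = "westeurope"
}

variable "environment_name" {
  type    = string
  default = "dev"
}
```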

Initialize the modules and deploy the resources:

terraform init
terraform apply

Powershell – Hyper-V get VM IP config

Posted: February 28, 2020 in Scripts


Prerequisite: Integration Services must be installed on the VMs.

This script gets the IPv4 address, VM name, and MAC address of each running VM.

Colons (:) will be added to the MAC address. In case a VM has more than one IP/MAC, those values will be placed in the same column, separated by spaces.

# Get all running VMs

$vms = Get-VM | Where-Object { $_.State -eq 'Running' } | Select-Object -ExpandProperty Name

# Declare "global" array variable (make it available outside loop)

$results = @()
 foreach($vm in $vms) {

    # Get network interface details
    $out = Get-VMNetworkAdapter -vmname $vm | select VMName, MacAddress, IPAddresses

    # Remove duplicate VM names
    $vm_name = $out.VMName | Get-Unique

    # In case of more than 1 IP, put them in the same row, separated by spaces
    $ip = ($out.IPAddresses | ForEach-Object {
    $_ | ? {$_ -notmatch ':'}   
    }) -join " "

    # If more than 1 MAC, put them in the same row separated by spaces (00:15:5D:58:12:5E 00:15:5D:58:12:5F)
    $mac = ($out.MacAddress | ForEach-Object {
        $_ -replace '(..)(..)(..)(..)(..)', '$1:$2:$3:$4:$5:'
    }) -join ' '
    # Add headers
    $comp = Get-WmiObject Win32_ComputerSystem | Select-Object -ExpandProperty name

    $obj = New-Object -TypeName psobject
    $obj | Add-Member -MemberType NoteProperty -Name "VM NAME" -Value $vm_name
    $obj | Add-Member -MemberType NoteProperty -Name "IP ADDRESS" -Value $ip
    $obj | Add-Member -MemberType NoteProperty -Name "MAC ADDRESS" -Value $mac
    $obj | Add-Member -MemberType NoteProperty -Name "HYPER-V HOST" -Value $comp

    # Append object to outside "global" variable
    $results += $obj
}

#$obj| Export-Csv -Path "c:\1.csv" -NoTypeInformation -append 

# write results to CSV file
$results| Export-Csv -Path "c:\1.csv" -NoTypeInformation

Option 2: multi-line (every IP/MAC gets its own line)

$results = Get-VM | Where-Object State -eq 'Running' | Get-VMNetworkAdapter | ForEach-Object {
    [pscustomobject]@{
        'VM NAME'      = $_.VMName
        'IP ADDRESS'   = ($_.IPAddresses -notmatch ':') -join ' '
        'MAC ADDRESS'  = ($_.MacAddress -replace '(..)(..)(..)(..)(..)','$1:$2:$3:$4:$5:') -join ' '
    }
}
$results | Export-Csv -Path "c:\1.csv" -NoTypeInformation

# Put multiple IP/MAC addresses for one VM into a single row:

$csv = $results | Group-Object 'VM NAME' | ForEach-Object {
    [pscustomobject]@{
        'VM NAME'     = $_.Name
        'IP ADDRESS'  = $_.Group.'IP ADDRESS' -join ' '
        'MAC ADDRESS' = $_.Group.'MAC ADDRESS' -join ' '
    }
}
$csv | Export-Csv -Path "c:\1.csv" -NoTypeInformation