A BitLocker recovery key is a unique 48-digit numerical password that can be used to unlock your system if BitLocker is otherwise unable to confirm for certain that the attempt to access the system drive is authorized.

The following SQL query returns the recovery key:

select a.Id, a.Name, b.VolumeId, c.RecoveryKeyId, c.RecoveryKey, c.LastUpdateTime from dbo.RecoveryAndHardwareCore_Machines a
inner join dbo.RecoveryAndHardwareCore_Machines_Volumes b ON a.Id = b.MachineId
inner join dbo.RecoveryAndHardwareCore_Keys c ON b.VolumeId = c.VolumeId

But in this case we get an encrypted value.

Luckily, it’s pretty easy to decrypt it:

All we need to do is locate the SQL stored procedure [RecoveryAndHardwareRead].[GetRecoveryKey].

Stored procedures are located under Programmability > Stored Procedures.

Right-click it and choose Script Stored Procedure as > CREATE To > New Query Editor Window.

A quick look into this stored procedure reveals the line which decrypts the recovery key:

RecoveryAndHardwareCore.DecryptString(RecoveryAndHardwareCore_Keys.RecoveryKey, DEFAULT) AS RecoveryKey,

DecryptString is a scalar-valued function which takes the encrypted column and a certificate name as parameters and decrypts the value:


CREATE FUNCTION [RecoveryAndHardwareCore].[DecryptString](@ciphertext [varbinary](8000), @certificateName [nvarchar](48) = N'CERT_NAME')
RETURNS [nvarchar](max) WITH EXECUTE AS CALLER
AS 
EXTERNAL NAME [CryptoUtility].[Microsoft.SystemsManagementServer.SQLCLR.CryptoServiceProvider].[DecryptString]
GO
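For example, plugging the function into the query from the beginning of the post returns the key in clear text (a minimal sketch; same tables and joins as above):

select a.Id, a.Name, b.VolumeId, c.RecoveryKeyId,
       RecoveryAndHardwareCore.DecryptString(c.RecoveryKey, DEFAULT) as RecoveryKey,
       c.LastUpdateTime
from dbo.RecoveryAndHardwareCore_Machines a
inner join dbo.RecoveryAndHardwareCore_Machines_Volumes b ON a.Id = b.MachineId
inner join dbo.RecoveryAndHardwareCore_Keys c ON b.VolumeId = c.VolumeId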

The EXTERNAL NAME clause specifies that the function [RecoveryAndHardwareCore].[DecryptString] will be created using a SQL Server assembly. The EXTERNAL NAME clause uses the following syntax to identify the correct class and method to use from the assembly: AssemblyName.ClassName.MethodName.

In the previous example, the registered assembly is named [CryptoUtility], the class within the assembly is [Microsoft.SystemsManagementServer.SQLCLR.CryptoServiceProvider], and the method within that class that will be executed is [DecryptString].

An assembly is a file that is automatically generated by the compiler upon successful compilation of a .NET application. It can be either a dynamic-link library or an executable file. The CryptoUtility assembly is located in <ConfigMgr_Install>\bin\x64\CryptoUtility.dll.

SQLCLR (SQL Common Language Runtime) is a technology for hosting the Microsoft .NET common language runtime engine within SQL Server. SQLCLR allows managed code to be hosted by, and run in, the Microsoft SQL Server environment.

This technology, introduced in Microsoft SQL Server 2005, allows users to create managed code objects in SQL Server (such as stored procedures, functions, triggers, user-defined types and aggregates) in .NET languages such as C# or VB.NET.

The SQLCLR relies on the creation, deployment, and registration of CLI assemblies, which are physically stored in managed code dynamic-link libraries (DLLs). These assemblies may contain CLI namespaces, classes, functions and properties.

CryptoServiceProvider provides methods and properties for accessing or examining Cryptographic Service Providers (CSPs) registered in the system.

Using this finding, we can create a SQL report to get BitLocker status, like this one:

SELECT cm.Name,
s.User_Name0 as 'User name',
s.Last_Logon_Timestamp0 as 'Last Logon Time',
csys.Manufacturer0 as 'Manufacturer',
csys.Model0 as 'Model',
bl.DriveLetter0,
bl.IsAutoUnlockEnabled0,
bl.ProtectionStatus0,
mbam.MBAMPolicyEnforced0,
mbam.OsDriveEncryption0,
CASE EV.ProtectionStatus0
WHEN '0' THEN 'No' 
WHEN '1' THEN 'Yes' 
WHEN '2' THEN 'Unknown' 
END AS 'Bitlocker Enabled',
CASE WHEN (TPM.IsActivated_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Activated],  
CASE WHEN (TPM.IsEnabled_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Enabled],  
CASE WHEN (TPM.IsOwned_InitialValue0 = 1) then 'Yes' else 'No' END [TPM Owned], 
EV.ProtectionStatus0 AS 'Bitlocker Indicator',

RecoveryAndHardwareCore.DecryptString(ck.RecoveryKey, DEFAULT) AS RecoveryKey,
--RecoveryAndHardwareCore.DecryptBinary(ck.RecoveryKeyPackage, DEFAULT) AS BitLockerRecoveryKeyPackage,
ck.LastUpdateTime


FROM RecoveryAndHardwareCore_Keys ck
INNER JOIN RecoveryAndHardwareCore_Machines cm ON cm.Id = ck.Id
LEFT JOIN v_R_System s ON s.Name0 = cm.Name
LEFT JOIN v_GS_COMPUTER_SYSTEM csys ON csys.ResourceID = s.ResourceID
LEFT JOIN v_GS_BITLOCKER_DETAILS bl ON bl.ResourceID = s.ResourceID
LEFT JOIN v_GS_MBAM_POLICY mbam ON mbam.ResourceID = s.ResourceID
LEFT JOIN v_GS_ENCRYPTABLE_VOLUME EV ON EV.ResourceID = s.ResourceID
LEFT JOIN v_GS_TPM TPM ON EV.ResourceID = TPM.ResourceID

In the previous post we installed the Prometheus Operator using Helm on a Kubernetes cluster; in this one we'll configure Prometheus to send alerts, and we'll also create one custom rule.

Email configuration

First, list all Prometheus Operator secrets; we need to edit the alertmanager-prometheus-prometheus-oper-alertmanager secret:

kubectl get secrets -n monitoring
NAME                                                          TYPE                                  DATA   AGE
alertmanager-prometheus-prometheus-oper-alertmanager          Opaque                                1      4h36m
default-token-5csvg                                           kubernetes.io/service-account-token   3      5d18h
prometheus-grafana                                            Opaque                                3      4h36m
prometheus-grafana-test-token-crz5r                           kubernetes.io/service-account-token   3      4h36m
prometheus-grafana-token-gxrc2                                kubernetes.io/service-account-token   3      4h36m
prometheus-kube-state-metrics-token-dz4gg                     kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-node-exporter-token-ct65h               kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-admission                          Opaque                                3      5d18h
prometheus-prometheus-oper-alertmanager-token-c4wwv           kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-operator-token-kd7fg               kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-oper-prometheus-token-rfbk2             kubernetes.io/service-account-token   3      4h36m
prometheus-prometheus-prometheus-oper-prometheus              Opaque                                1      4h35m
prometheus-prometheus-prometheus-oper-prometheus-tls-assets   Opaque                                0      4h35m
sh.helm.release.v1.prometheus.v1                              helm.sh/release.v1                    1      4h36m
sh.helm.release.v1.prometheus.v2                              helm.sh/release.v1                    1      4h22m
sh.helm.release.v1.prometheus.v3 

Create a file alertmanager.yaml:

global:
  resolve_timeout: 5m
route:
  receiver: 'email-alert'
  group_by: ['job']


  routes:
  - receiver: 'email-alert'
    # When a new group of alerts is created by an incoming alert, wait at
    # least 'group_wait' to send the initial notification.
    # This ensures that multiple alerts for the same group that start firing
    # shortly after one another are batched together in the first
    # notification.
    group_wait: 50s
    # When the first notification was sent, wait 'group_interval' to send a 
    # batch of new alerts that started firing for that group.  
    group_interval: 5m
    # If an alert has successfully been sent, wait 'repeat_interval' to
    # resend them.
    repeat_interval: 12h

receivers:
- name: email-alert
  email_configs:
  - to: receiver@example.com
    from: sender@example.com
    # Your smtp server address
    smarthost: smtp.office365.com:587
    auth_username: sender@example.com
    auth_identity: sender@example.com
    auth_password: pass

Encode the content of the alertmanager.yaml file:

cat alertmanager.yaml | base64 -w0
Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0Kcm91dGU6CiAgcmVjZWl2ZXI6ICdlbWFpbC1hbGVydCc
---output omitted----

Replace the existing encoded value in the alertmanager-prometheus-prometheus-oper-alertmanager secret with the one generated in the previous step:

kubectl edit secret -n monitoring alertmanager-prometheus-prometheus-oper-alertmanager
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  alertmanager.yaml: Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0Kcm91dGU6CiAgcmVjZWl2ZXI6ICdlbWFpbC1hbGVydCcKICBncm91cF9ieTogWydqb2InXQogIAogIAogIHJvdXRlczoKICAtIHJlY2VpdmVyOiAnZW1haWwtYWxlcnQnCiAgICBtYXRjaDoKICAgICAgYWxlcnRuYW1lOiBleGFtcGxlCiAgICBncm91cF93YWl0OiA1MHMKICAgIGdyb3VwX2ludGVydmFsOiA1bQogICAgcmVwZWF0X2ludGVydmFsOiAxMmggIAoKcmVjZWl2ZXJzOgotIG5hbWU6IGVtYWlsLWFsZXJ0CiAgZW1haWxfY29uZmlnczoKICAtIHRvOiBkcmFnYW4udnVjYW5vdmljQGRldnRlY2hncm91cC5jb20KICAgIGZyb206IGRyYWdhbi52dWNhbm92aWNAZGV2dGVjaGdyb3VwLmNvbQogICAgIyBZb3VyIHNtdHAgc2VydmVyIGFkZHJlc3MKICAgIHNtYXJ0aG9zdDogc210cC5vZmZpY2UzNjUuY29tOjU4NwogICAgYXV0aF91c2VybmFtZTogZHJhZ2FuLnZ1Y2Fub3ZpY0BkZXZ0ZWNoZ3JvdXAuY29tCiAgICBhdXRoX2lkZW50aXR5OiBkcmFnYW4udnVjYW5vdmljQGRldnRlY2hncm91cC5jb20KICAgIGF1dGhfcGFzc3dvcmQ6IFplbXVuMjAxNAo=
kind: Secret
metadata:
  creationTimestamp: "2020-03-24T08:51:59Z"
  name: alertmanager-prometheus-prometheus-oper-alertmanager
  namespace: monitoring
  resourceVersion: "1894375"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-prometheus-prometheus-oper-alertmanager
  uid: 604cced2-679c-4dbc-a842-588d14ce70d5
type: Opaque
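Alternatively, instead of pasting the base64 string into the editor by hand, the secret can be regenerated straight from the file (a sketch; the key name alertmanager.yaml matches the one shown above, and older kubectl versions use --dry-run instead of --dry-run=client):

kubectl -n monitoring create secret generic alertmanager-prometheus-prometheus-oper-alertmanager \
  --from-file=alertmanager.yaml --dry-run=client -o yaml | kubectl apply -f -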

Once the edit is saved, check whether the changes were applied in the Alertmanager web GUI.

Note: your port number may be different.

http://Kubernetes IP:30700/#/status

You should receive a couple of emails from the Prometheus Alertmanager soon; if not, check the Alertmanager logs.

kubectl get pods -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          25h
prometheus-grafana-6f98769f57-dhqtt                      3/3     Running   3          29h
prometheus-kube-state-metrics-d5dc994cc-psqw6            1/1     Running   1          29h
prometheus-prometheus-node-exporter-cjhvv                1/1     Running   1          29h
prometheus-prometheus-oper-operator-7d4497dd9-dp768      2/2     Running   2          29h
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   3          29h


kubectl logs alertmanager-prometheus-prometheus-oper-alertmanager-0 -n monitoring -c alertmanager


level=info ts=2020-03-24T14:47:43.997Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2020-03-24T14:47:43.997Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml
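You can also dump the configuration Alertmanager is actually running with straight from the pod (pod, container and file path are the same ones that appear in the log output above):

kubectl -n monitoring exec alertmanager-prometheus-prometheus-oper-alertmanager-0 -c alertmanager -- \
  cat /etc/alertmanager/config/alertmanager.yaml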

Configuring custom alerts

By default, the following Prometheus Operator rules exist:

kubectl -n monitoring get prometheusrules
NAME                                                              AGE
prometheus-prometheus-oper-alertmanager.rules                     29h
prometheus-prometheus-oper-etcd                                   29h
prometheus-prometheus-oper-general.rules                          29h
prometheus-prometheus-oper-k8s.rules                              29h
prometheus-prometheus-oper-kube-apiserver-error                   29h
prometheus-prometheus-oper-kube-apiserver.rules                   29h
prometheus-prometheus-oper-kube-prometheus-node-recording.rules   29h
prometheus-prometheus-oper-kube-scheduler.rules                   29h
prometheus-prometheus-oper-kubernetes-absent                      29h
prometheus-prometheus-oper-kubernetes-apps                        29h
prometheus-prometheus-oper-kubernetes-resources                   29h
prometheus-prometheus-oper-kubernetes-storage                     29h
prometheus-prometheus-oper-kubernetes-system                      29h
prometheus-prometheus-oper-kubernetes-system-apiserver            29h
prometheus-prometheus-oper-kubernetes-system-controller-manager   29h
prometheus-prometheus-oper-kubernetes-system-kubelet              29h
prometheus-prometheus-oper-kubernetes-system-scheduler            29h
prometheus-prometheus-oper-node-exporter                          29h
prometheus-prometheus-oper-node-exporter.rules                    29h
prometheus-prometheus-oper-node-network                           29h
prometheus-prometheus-oper-node-time                              29h
prometheus-prometheus-oper-node.rules                             29h
prometheus-prometheus-oper-prometheus                             29h
prometheus-prometheus-oper-prometheus-operator                    29h

All rules can be deleted except prometheus-prometheus-oper-general.rules. This rule contains the Watchdog alert, which is always firing; its purpose is to ensure that the entire alerting pipeline is functional. Before creating a rule expression, we first need to evaluate it, so navigate to the Prometheus web GUI:

http://Kubernetes IP:30900/graph (your port may be different)

Click on Console and enter the expression. In this example we'll monitor the number of restarts for a specific Kubernetes container: kube_pod_container_status_restarts_total{container="coredns"}.

Note the output: these are the container, namespace and pod labels we'll reference in the alert rule. We could also query the total restart count across both containers, but in that case there would be no output in the Element section, so instead we hard-code the container, namespace and pod names.
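The same expression can also be evaluated from the command line against the Prometheus HTTP API, which is handy for scripting (a sketch; replace <kubernetes-ip>, and the port is the Prometheus NodePort used above):

curl -s 'http://<kubernetes-ip>:30900/api/v1/query' \
  --data-urlencode 'query=kube_pod_container_status_restarts_total{container="coredns"}'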

Choose which rule you want to edit (in this example prometheus-prometheus-oper-node-exporter is edited)

kubectl edit prometheusrule prometheus-prometheus-oper-node-exporter -n monitoring

Add the rule at the end of the file and, once done, save it. The alert will be triggered when the number of container restarts is larger than 0.

    - alert: RestartAlerts
      annotations:
        description: '{{ $labels.container }} restarted {{ $value }} times in pod
          {{ $labels.namespace }}/{{ $labels.pod }}'
        summary: Restarts detected in pod {{ $labels.namespace }}/{{ $labels.pod }}
      expr: kube_pod_container_status_restarts_total{container="coredns"} > 0
      for: 1s
      labels:
        severity: warning

If all is OK, the new alert should appear in the Prometheus web GUI.

We can add as many rules as we want.
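Instead of appending to one of the chart's rule files, the same alert can also be shipped as a separate PrometheusRule object. The sketch below assumes the operator's ruleSelector matches the app and release labels of the Helm release (the chart default); compare the labels against an existing rule (kubectl -n monitoring get prometheusrules -o yaml) before applying:

cat <<'EOF' | kubectl apply -n monitoring -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: restart-alerts
  labels:
    app: prometheus-operator
    release: prometheus
spec:
  groups:
  - name: restart.rules
    rules:
    - alert: RestartAlerts
      expr: kube_pod_container_status_restarts_total{container="coredns"} > 0
      for: 1s
      labels:
        severity: warning
      annotations:
        description: '{{ $labels.container }} restarted {{ $value }} times in pod {{ $labels.namespace }}/{{ $labels.pod }}'
EOF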

A new email should end up in your inbox.

Grafana – creating Dashboard

In the Grafana dashboard, click the + sign and choose Dashboard.

Click Add Query

In the metric field, type the Prometheus query for which you want to create the dashboard; if more queries are needed, just click Add Query.

For example, the average memory usage for a particular container (in MB) was added:

avg(container_memory_usage_bytes{container="mysql-container"})/1024/1024

The time range can also be adjusted.

Click the "diskette" icon to save the dashboard.

StatefulSets in Kubernetes are used for applications where data consistency and replication are required (e.g. relational databases).

In a StatefulSet each pod is assigned a unique ordinal number in the range [0, N), and pods are shut down in reverse order to ensure a reliable and repeatable deployment and runtime. The StatefulSet will not even scale until all the required pods are running; if one dies, it recreates the pod before attempting to add additional instances to meet the scaling criteria. This ordinal sticks to the pod even when it is rescheduled to another worker node, so the pod retains its connection to the volume that holds the state of the database. In this way each replica pod in a StatefulSet has its own state and data.

In this example, a service named mysql will be exposed to client applications and will pass read/write requests to one of the pods. Each pod will store its data in a persistent volume (the /mnt/mysql[01-02] folders on the Kubernetes host). Each volume is replicated to a Kubernetes node, ensuring data availability.

A persistent volume claim is created for each pod.

storage.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:

  name: localstorage

provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: True

---

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-01
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql01"
    type: DirectoryOrCreate
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-02
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql02"
    type: DirectoryOrCreate

MySQL credentials are stored in env.example:

MYSQL_PASSWORD=password
MYSQL_DATABASE=db
MYSQL_ROOT_PASSWORD=password
MYSQL_USER=user

A secret is created from this file and referenced in workload.yaml:

kubectl create secret generic prod-secrets --from-env-file=env.example

services.yaml:

apiVersion: v1
kind: Service
metadata:
  name: mysql

spec:
  # Open port 3306 only to pods in cluster
  selector:
    app: mysql-container

  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: 3306
  type: ClusterIP

volumeClaimTemplates will automatically create a persistent volume claim for each pod. workload.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-container
spec:
  serviceName: mysql
  replicas: 2
  selector:
    matchLabels:
      app: mysql-container
  template:
    metadata:
      labels:
        app: mysql-container
    spec:
      containers:
      - name: mysql-container
        image: mysql:dev
        imagePullPolicy: "IfNotPresent"
        envFrom:
          - secretRef:
             name: prod-secrets
        ports:
        - containerPort: 3306
        # container (pod) path
        volumeMounts:
          - name: mysql-persistent-storage
            mountPath: /var/lib/mysql

        resources:
          requests:
            memory: 300Mi
            cpu: 400m
          limits:
            memory: 400Mi
            cpu: 500m
      restartPolicy: Always

  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        storageClassName: localstorage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
        selector:
          matchLabels:
            type: local
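Apply the three manifests (file names as used above):

kubectl apply -f storage.yaml
kubectl apply -f services.yaml
kubectl apply -f workload.yaml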

After all of the above YAML files are applied, check the status:

kubectl get pvc
NAME                                         STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-persistent-storage-mysql-container-0   Bound    mysql-02   5Gi        RWO            localstorage   3h40m
mysql-persistent-storage-mysql-container-1   Bound    mysql-01   5Gi        RWO            localstorage   3h39m


kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS   REASON   AGE
mysql-01   5Gi        RWO            Retain           Bound    default/mysql-persistent-storage-mysql-container-1   localstorage            3h42m
mysql-02   5Gi        RWO            Retain           Bound    default/mysql-persistent-storage-mysql-container-0

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mysql-container-0                   1/1     Running   0          8m17s
mysql-container-1                   1/1     Running   0          8m27s



kubectl get services
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
adminer-container   NodePort    10.109.247.8     <none>        8080:30050/TCP   2d5h
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP          3d2h
mysql               ClusterIP   10.111.175.116   <none>        3306/TCP         33m


 kubectl describe statefulset.apps/mysql-container
Name:               mysql-container
Namespace:          default
CreationTimestamp:  Fri, 20 Mar 2020 11:49:31 -0400
Selector:           app=mysql-container
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"mysql-container","namespace":"default"},"spec":{"replica...
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        824640705452
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mysql-container
  Containers:
   mysql-container:
    Image:      mysql:dev
    Port:       3306/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     500m
      memory:  400Mi
    Requests:
      cpu:     400m
      memory:  300Mi
    Environment Variables from:
      prod-secrets  Secret  Optional: false
    Environment:    <none>
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
  Volumes:  <none>
Volume Claims:
  Name:          mysql-persistent-storage
  StorageClass:  localstorage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      5Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age                  From                    Message
  ----    ------            ----                 ----                    -------
  Normal  SuccessfulCreate  38m                  statefulset-controller  create Claim mysql-persistent-storage-mysql-container-2 Pod mysql-container-2 in StatefulSet mysql-container success
  Normal  SuccessfulCreate  38m                  statefulset-controller  create Pod mysql-container-2 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  37m                  statefulset-controller  delete Pod mysql-container-2 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  21m                  statefulset-controller  delete Pod mysql-container-1 in StatefulSet mysql-container successful
  Normal  SuccessfulCreate  21m (x7 over 3h54m)  statefulset-controller  create Pod mysql-container-1 in StatefulSet mysql-container successful
  Normal  SuccessfulDelete  21m                  statefulset-controller  delete Pod mysql-container-0 in StatefulSet mysql-container successful
  Normal  SuccessfulCreate  21m (x7 over 3h55m)  statefulset-controller  create Pod mysql-container-0 in StatefulSet mysql-container successful
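To confirm the database is actually reachable through the mysql service, a throwaway client pod can be used (a sketch; it assumes the stock mysql:5.7 image is available and uses the root password from env.example):

kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.7 -- \
  mysql -h mysql -uroot -ppassword -e "SHOW DATABASES;"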

The Prometheus Operator serves to make running Prometheus on top of Kubernetes as easy as possible, while preserving Kubernetes-native configuration options. The Operator ensures at all times that for each Prometheus resource in the cluster a set of Prometheus servers with the desired configuration are running. This entails aspects like the data retention time, persistent volume claims, number of replicas, the Prometheus version, and Alertmanager instances to send alerts to. Each Prometheus instance is paired with a respective configuration that specifies which monitoring targets to scrape for metrics and with which parameters.

The Prometheus Operator stack installs the following components:

Grafana:

Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture:

  • Visualize: Fast and flexible client side graphs with a multitude of options. Panel plugins offer many different ways to visualize metrics and logs.
  • Dynamic Dashboards: Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.
  • Explore Metrics: Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.
  • Explore Logs: Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.
  • Alerting: Visually define alert rules for your most important metrics. Grafana will continuously evaluate and send notifications to systems like Slack, PagerDuty, VictorOps, OpsGenie.
  • Mixed Data Sources: Mix different data sources in the same graph! You can specify a data source on a per-query basis. This works even for custom data sources.

Alertmanager:

Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

Install helm:

Helm is a tool that streamlines the installation and management of Kubernetes applications.

wget https://get.helm.sh/helm-v3.0.2-linux-amd64.tar.gz
tar xvf helm-v3.0.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
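Helm 3 ships without any chart repositories configured, so the stable repository has to be added before the prometheus-operator chart can be installed (a sketch; the repository URL may differ depending on when you install, since the stable charts have been relocated):

helm repo add stable https://charts.helm.sh/stable
helm repo update
helm version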

Create a storage class and persistent volumes (PV)

storage.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:

  name: prometheus
  namespace: monitoring
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: True


---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/prometheus/"
    type: DirectoryOrCreate



---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/grafana/"
    type: DirectoryOrCreate


---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  storageClassName: prometheus
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/alertmanager/"
    type: DirectoryOrCreate

Create the monitoring namespace and apply this file:

kubectl create namespace monitoring
kubectl apply -f storage.yaml
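Verify that the storage class and the three persistent volumes were created (names as defined in storage.yaml):

kubectl get storageclass prometheus
kubectl get pv operator grafana alertmanager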

Installing Prometheus operator

Create a file custom-values.yaml to customize the Prometheus Operator installation:

# Define persistent storage for Prometheus (PVC)
prometheus:
  prometheusSpec:
    securityContext:
      fsGroup: 0
      runAsUser: 0
      runAsNonRoot: false
    storageSpec:
      volumeClaimTemplate:
        spec:
          volumeName: template
          accessModes: ["ReadWriteOnce"]
          storageClassName: prometheus
          resources:
            requests:
              storage: 10Gi


# Define persistent storage for Grafana (PVC)

grafana:
  # Set password for Grafana admin user
  adminPassword: pass
  persistence:
    enabled: true
    volumeName: grafana
    storageClassName: prometheus
    accessModes: ["ReadWriteOnce"]
    size: 5Gi
# Define persistent storage for Alertmanager (PVC)
alertmanager:
  alertmanagerSpec:
    securityContext:
      fsGroup: 0
      runAsUser: 0
      runAsNonRoot: false

    storage:
      volumeClaimTemplate:
        spec:
          volumeName: alertmanager
          accessModes: ["ReadWriteOnce"]
          storageClassName: prometheus
          resources:
            requests:
              storage: 5Gi


# Change default node-exporter port
prometheus-node-exporter:
  service:
    port: 30206
    targetPort: 30206

# enable Etcd metrics
kubeEtcd:
  enabled: true

# enable Controller metrics
kubeControllerManager:
  enabled: true

# enable Scheduler metrics
kubeScheduler:
  enabled: true

Grafana will be exposed to the outside on port 30800, Prometheus will be available on port 30900, and Alertmanager is exposed on port 30700.

helm install prometheus stable/prometheus-operator --namespace monitoring -f custom-values.yaml --set prometheus.service.nodePort=30900 --set prometheus.service.type=NodePort --set grafana.service.nodePort=30800 --set grafana.service.type=NodePort --set alertmanager.service.nodePort=30700 --set alertmanager.service.type=NodePort

List the services:

kubectl get svc -n monitoring
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   8m13s
prometheus-grafana                        NodePort    10.105.191.43    <none>        80:30800/TCP                 8m19s
prometheus-kube-state-metrics             ClusterIP   10.100.5.77      <none>        8080/TCP                     8m19s
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     8m2s
prometheus-prometheus-node-exporter       ClusterIP   10.106.188.132   <none>        30206/TCP                    8m19s
prometheus-prometheus-oper-alertmanager   NodePort    10.110.190.161   <none>        9093:30700/TCP               8m19s
prometheus-prometheus-oper-operator       ClusterIP   10.107.145.130   <none>        8080/TCP,443/TCP             8m19s
prometheus-prometheus-oper-prometheus     NodePort    10.98.4.23       <none>        9090:30900/TCP               8m19s

And the pods:

kubectl get pods -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          8m35s
prometheus-grafana-7d64744489-qp7xb                      3/3     Running   0          8m41s
prometheus-kube-state-metrics-d5dc994cc-9fsrr            1/1     Running   0          8m41s
prometheus-prometheus-node-exporter-2wvs8                1/1     Running   0          8m41s
prometheus-prometheus-oper-operator-7d4497dd9-4xmjb      2/2     Running   0          8m41s
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   0          8m24s

Prometheus, Alertmanager and Grafana should be accessible from outside the cluster:

http://kubernetes_ip:port_number

Log in to Prometheus (http://IP:30900) and click Status > Targets.

In case you see errors, stop the firewall and restart all the Prometheus pods, or reboot your machine.

To access Grafana, go to http://IP:30800 and log in with the admin user and the adminPassword set in custom-values.yaml.

Installing NRPE

Nagios Remote Plugin Executor (NRPE) is used to remotely execute Nagios plugins on Linux/Unix machines. This makes it easy to monitor remote machine metrics such as disk usage, CPU load, number of running processes, logged in users etc.

On the machine whose disk needs to be monitored, install NRPE:

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install nrpe -y

Installing disk plugin

List all available plugins:

yum list nagios-plugins*

Install the disk plugin:

yum install nagios-plugins-disk.x86_64

Uncomment and edit the following lines in /etc/nagios/nrpe.cfg:

server_address=local IP
allowed_hosts=127.0.0.1,::1, nagios_server_ip
# allow arguments
dont_blame_nrpe=1
command[check_disk]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /opt/vbox

In the above example the /opt/vbox partition is monitored: a warning is raised when free space falls below 20% and a critical alert is created when free space falls below 10%.

When done, start and enable the nrpe service (or restart it if it's already running):

systemctl enable nrpe
systemctl start nrpe
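Before testing from the Nagios server, you can verify the plugin and the listener locally on the monitored host (a sketch; 5666 is the default NRPE port):

/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /opt/vbox
ss -tlnp | grep 5666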

Steps on the Nagios server

Test whether the plugin works from the Nagios server:

/usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.2 -c check_disk
DISK CRITICAL - free space: /opt/vbox 2080 MiB (0.61% inode=100%);| /opt/vbox=333875MiB;282112;317376;0;352640

On the Nagios server, add a command for the disk plugin: edit the /usr/local/nagios/etc/objects/commands.cfg file and add the following lines:

define command {
command_name check_partition
command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c check_disk
}

Add a reference to this command in the monitored host's configuration file located in the /usr/local/nagios/etc/objects/conf.d/ folder:

define service{
        use                             generic-service         ; Name of service template to use
        host_name                       vagrant.test.local
        service_description             check vagrant partition
        check_command                   check_partition
        }
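Before restarting, it's worth validating the Nagios configuration (a sketch; the paths match the source-installed layout used above):

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg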

Restart the nagios service:

systemctl restart nagios

A couple of Terraform modules for creating resources in Azure.

Prerequisites:

  • Terraform 0.12
  • Configure terraform for Azure
  • Add ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID as environment variables (see the example after this list)
  • Download modules
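
For example, the credentials can be exported like this (a sketch; replace the placeholders with your service principal values):

export ARM_CLIENT_ID="<client-id>"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_TENANT_ID="<tenant-id>"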

In terraform\modules\vm\example there is a main.tf file for calling all the modules.

Edit this file to satisfy your needs:

#Resource group


module "rg" {

source                                   = "../../../modules/rg"
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
}

module "azure_vnet" {
source                                   = "../../../modules/vnet"
environment_name                         = var.environment_name
cidr_block                               = "10.11.96.0/19"
dns_servers                              = [
    "8.8.8.8",
    "18.8.4.4",
   
  ]
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
enable_wan_subnet                        = true
enable_dmz_subnet                        = true
enable_vdi_subnet                        = false
enable_infrastructure_services_subnet    = false
enable_infrastructure_db_services_subnet = false
enable_production_app_services_subnet    = false
enable_production_db_services_subnet     = false
enable_acceptance_app_services_subnet    = false
enable_acceptance_db_services_subnet     = false
enable_test_app_services_subnet          = false
enable_test_db_services_subnet           = false
enable_development_app_services_subnet   = false
enable_development_db_services_subnet    = false
}

module "application_gateway" {

source                                   = "../../../modules/app_gateway"
resource_group_name                      = var.resource_group_name
resource_group_location                  = var.resource_group_location
sku_name                                 = "WAF_Medium"
tier                                     = "WAF"
capacity                                 = 1
subnet_id                                = module.azure_vnet.subnets_id_dmz
targets                                  = module.azure_vm.azure_vm_nic_id
ip_configuration                         = module.azure_vm.azure_nic_ip_configuration
# https settings
https                                    = false
}


#module "azure_key_vault" {

#source                                   = "../../../modules/vault"
#environment_name                         = var.environment_name
#resource_group_name                      = module.rg.resource_group_name
#resource_group_location                  = module.rg.resource_group_location
#azure_object_id                          = var.azure_object_id
#azure_tenant_id                          = var.azure_tenant_id
#key_vault_name                           = var.key_vault_name
#network_acl                              = ["1.2.3.4/32"]
#}



module "azure_vm" {

source = "../../../modules/vm"
environment_name=var.environment_name
#key_vault_url = module.azure_key_vault.key_vault_url
#key_vault_resource_id = module.azure_key_vault.key_vault_resource_id
#key_encryption_key_name = module.azure_key_vault.key_encryption_key_name
#key_encryption_key_version = module.azure_key_vault.key_encryption_key_version
#key_vault_secret_id = module.azure_key_vault.key_vault_secret_id
subnet_id = module.azure_vnet.subnets_id_wan
resource_group_name = module.rg.resource_group_name
resource_group_location=module.rg.resource_group_location
os = "windows"
vm_size = "Standard_B2ms"
vm_image_publisher = "MicrosoftWindowsServer"
vm_image_offer = "WindowsServer"
vm_image_sku = "2016-Datacenter"
vm_name = "myvm2"
vm_admin = "ja"
vm_password = "Passw0rd01234!"
number_of_machines = 1
disk_size = 2
number_of_managed_disks = 0
encryption = false
public_ip  = false
}

A README file is in terraform\modules\app_gateway\README.md.

Initialize the modules and deploy the resources:

terraform init
terraform apply
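Optionally, run terraform plan between init and apply to review the changes before they are made:

terraform plan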

PowerShell – Hyper-V: get VM IP config


Prerequisite:

Install Integration Services on the VM.

This script will get the IPv4 address only, along with the VM name and MAC address.

Colons (:) will be added to the MAC address. In case a VM has more than one IP/MAC address, those values will be put in the same column, separated by a space.

# Get all running VMs

$vms = Get-VM | Where-Object { $_.State -eq 'Running' } | Select-Object -ExpandProperty Name

# Declare "global" array variable (make it available outside loop)

$results = @()
 foreach($vm in $vms) {

    # Get network interface details
    $out = Get-VMNetworkAdapter -vmname $vm | select VMName, MacAddress, IPAddresses

    # Remove duplicate VM names
    $vm_name = $out.VMName | Get-Unique

    # In case more than 1 IP, put it in same row separated by space (192.168.1.1, 192.168.1.2)
    
    $ip = ($out.IPAddresses | ForEach-Object {
    $_ | ? {$_ -notmatch ':'}   
    }) -join " "

    # If more than 1 MAC , put it in same row separated by space (00:15:5D:58:12:5E 00:15:5D:58:12:5F )
    
    $mac = ($out.MacAddress | ForEach-Object {
    $_.Insert(2,":").Insert(5,":").Insert(8,":").Insert(11,":").Insert(14,":")
}) -join ' '
     
  # Build the output object with the collected values
   
$comp = Get-WmiObject Win32_ComputerSystem | Select-Object -ExpandProperty name

$obj = New-Object -TypeName psobject
$obj | Add-Member -MemberType NoteProperty -Name "VM NAME" -Value $vm_name
$obj | Add-Member -MemberType NoteProperty -Name "IP ADDRESS" -Value $ip
$obj | Add-Member -MemberType NoteProperty -Name "MAC ADDRESS" -Value $mac
$obj | Add-Member -MemberType NoteProperty -Name "HYPER-V HOST" -Value $comp

# Append object to outside "global" variable

$results += $obj

#$obj| Export-Csv -Path "c:\1.csv" -NoTypeInformation -append 
}

# write results to CSV file
                                
$results| Export-Csv -Path "c:\1.csv" -NoTypeInformation
 

Option 2: multi-line (every IP/MAC gets its own line)

$results = Get-VM | Where-Object State -eq Running | Get-VMNetworkAdapter | ForEach-Object {
    [pscustomobject]@{
        'VM NAME'      = $_.VMName
        'IP ADDRESS'   = ($_.IPAddresses -notmatch ':') -join ' '
        'MAC ADDRESS'  = ($_.MacAddress -replace '(..)(..)(..)(..)(..)','$1:$2:$3:$4:$5:') -join ' '
        'HYPER-V HOST' = $env:COMPUTERNAME
    }
}
$results | Export-Csv -Path "c:\1.csv" -NoTypeInformation

# Put multiple IP/MAC addresses for one VM into a single row:

$csv = $results | Group-Object 'VM NAME'  | ForEach-Object {
 [PsCustomObject]@{
 'VM NAME' = $_.Name
 'IP ADDRESS' = $_.Group.'IP ADDRESS' -join ' '
 'MAC ADDRESS' = $_.Group.'MAC ADDRESS' -join ' '
 }
}
$csv | Export-Csv -Path "c:\1.csv" -NoTypeInformation