Archive for November, 2018

This Python boto3 Lambda function lists unattached ("available") EBS volumes and their Name tags:

import boto3

ec2 = boto3.resource('ec2', region_name='eu-west-1')

def lambda_handler(event, context):
    for vol in ec2.volumes.all():
        if vol.state == 'available':
            # vol.tags is None when the volume has no tags at all
            if vol.tags is None:
                print("VolumeID: " + vol.id + " AWS Region: Ireland")
                continue
            for tag in vol.tags:
                if tag['Key'] == 'Name':
                    print("VolumeID: " + vol.id + ", Volume Name: " + tag['Value'] + " AWS Region: Ireland")
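The script's tag handling has to cope with vol.tags being None when a volume carries no tags. A standalone sketch of that guard on sample data (the helper name and tag values are made up for illustration):

```python
def volume_name(tags):
    # boto3 returns None, not an empty list, for an untagged volume,
    # so guard before iterating
    if not tags:
        return None
    for tag in tags:
        if tag['Key'] == 'Name':
            return tag['Value']
    return None

print(volume_name([{'Key': 'Name', 'Value': 'data-disk'}]))  # data-disk
print(volume_name(None))                                     # None
```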




This Python boto3 script sends an e-mail notification if instances have been stopped for more than 30 days:

import boto3
import smtplib
from datetime import datetime

output = []

def send_mail(email_from, email_to, subject, body):
    smtp_address = ''
    provider_username = ''
    provider_password = 'Pass'
    smtpserver = smtplib.SMTP(smtp_address, 587)
    smtpserver.ehlo()
    smtpserver.login(provider_username, provider_password)
    header = 'To: ' + email_to + '\n' + 'From: ' + email_from + '\n' + 'Subject: ' + subject + '\n'
    msg = header + '\n' + body + '\n\n'
    smtpserver.sendmail(provider_username, email_to, msg)

def lambda_handler(event, context):
    client = boto3.client('ec2', region_name='eu-west-2')
    reservations = client.describe_instances().get('Reservations', [])
    for reservation in reservations:
        for instance in reservation['Instances']:
            name = ''
            for tag in instance.get('Tags', []):
                if tag['Key'] == 'Name':
                    name = tag['Value']
            if instance['State']['Name'] == 'stopped':
                # StateTransitionReason looks like
                # "User initiated (2018-11-20 12:34:56 GMT)";
                # characters 16:35 hold the bare timestamp
                transition_timestamp = datetime.strptime(instance['StateTransitionReason'][16:35], '%Y-%m-%d %H:%M:%S')
                days = (datetime.now() - transition_timestamp).days
                if days > 30:
                    output.append("InstanceID: " + instance['InstanceId'] + ", Instance Name: " + name + ", Shutdown Time: " + str(transition_timestamp))
    body = "\n\n".join(output)
    if body:
        emailbody = "The following instances are stopped for more than 30 days in London region\n\n\n\n" + body + "\n\n\nIf instances are not needed anymore please terminate them, or start them otherwise"
        send_mail('', '', 'Notification of stopped instances', emailbody)
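The shutdown time above is recovered by slicing the StateTransitionReason string, which for stopped instances has the form "User initiated (YYYY-MM-DD HH:MM:SS GMT)". A standalone sketch of that parsing (the timestamp and the reference date are made up):

```python
from datetime import datetime

# Example StateTransitionReason value for a stopped instance
reason = "User initiated (2018-11-20 12:34:56 GMT)"

# Characters 16:35 hold the bare timestamp "2018-11-20 12:34:56"
ts = datetime.strptime(reason[16:35], '%Y-%m-%d %H:%M:%S')

# Days stopped relative to an assumed "now"
days_stopped = (datetime(2018, 12, 25) - ts).days
print(ts)            # 2018-11-20 12:34:56
print(days_stopped)  # 34
```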

It is presumed the Chocolatey server will be installed on the D drive, and that the Chocolatey server has no internet access.

The Chocolatey installation files are obtained in the following way, on any Windows machine with internet access:

  • install Chocolatey
  • after Chocolatey has been installed, execute the following command:
    choco install chocolatey.server
  • all required files are downloaded to the C:\tools folder


Copy the tools folder somewhere on the Ansible server; the Ansible playbook will copy it to the D drive on the Windows server.

Folder structure:

winplaybook/
├── choco.yml
├── chocoserver/tools/
└── group_vars/
    ├── features/features.yml
    └── windows/
        ├── vars_win.yml
        └── vault_win.yml

features.yml contains the list of IIS features and IIS users (referenced in the playbook as '{{ features }}' and '{{ users }}'):

features:
  - Web-Server
  - Web-Asp-Net45
  - Web-AppInit

users:
  - IIS APPPOOL\ChocolateyServer

chocoserver/tools contains the Chocolatey server installation (copied from the Windows machine with internet access).

The Chocolatey API key lives in vars_win.yml (unencrypted, pointing to vault_win.yml) and vault_win.yml (encrypted with Ansible Vault).

vars_win.yml:

api_key: '{{ vault_api_key }}'

vault_win.yml:

vault_api_key: myapi

The playbook copies the Chocolatey server files to the D drive, installs the IIS server and features, removes the default IIS web site, creates the Chocolatey application pool, sets ACL permissions on D:\tools\chocolatey.server and D:\tools\chocolatey.server\App_Data, creates the Chocolatey IIS site, and changes the default API key.

- name: install choco server
  hosts: dc2
  gather_facts: yes
  vars_files:
    - group_vars/windows/vars_win.yml
    - group_vars/features/features.yml
  tasks:
    - name: Copy Chocolatey server to D drive
      win_copy:
        src: /root/win_playbooks/choco_server/
        dest: D:\
    - name: Ensure IIS is installed
      win_feature:
        name: Web-Server
        state: present
        include_management_tools: True
    - name: Ensure IIS Web-Server and ASP.NET are installed
      win_feature:
        name: '{{ item }}'
        state: present
      with_items: '{{ features }}'
    - name: Ensure Default Web Site is not present
      win_iis_website:
        name: "Default Web Site"
        state: absent
    #- name: Chocolatey.server package is installed
    #  win_chocolatey:
    #    name: "chocolatey.server"
    #    state: present
    - name: Configure AppPool for Chocolatey.server
      win_iis_webapppool:
        name: ChocolateyServer
        state: started
        attributes:
          enable32BitAppOnWin64: true
          managedRuntimeVersion: v4.0
          managedPipelineMode: Integrated
          startMode: AlwaysRunning
          autoStart: true
    - name: Grant read permissions to D:\tools\chocolatey.server
      win_acl:
        user: '{{ item }}'
        path: D:\tools\chocolatey.server
        rights: Read
        state: present
        type: allow
        inherit: ContainerInherit, ObjectInherit
        propagation: InheritOnly
      with_items: '{{ users }}'
    - name: Grant IIS APPPOOL\ modify permissions to D:\tools\chocolatey.server\App_Data
      win_acl:
        user: '{{ item }}'
        path: D:\tools\chocolatey.server\App_Data
        rights: Modify
        state: present
        type: allow
        inherit: ContainerInherit, ObjectInherit
        propagation: InheritOnly
      with_items: '{{ users }}'
    - name: Create Chocolatey IIS site
      win_iis_website:
        name: "chocolatey"
        state: started
        port: 80
        application_pool: "ChocolateyServer"
        physical_path: D:\tools\chocolatey.server
      register: website
    - name: Change default API key
      win_lineinfile:
        path: D:\tools\chocolatey.server\web.config
        regexp: '<add key="apiKey" value="chocolateyrocks" />'
        line: '        <add key="apiKey" value="{{ api_key }}" />'
        state: present

PuppetDB is an open source storage service for data generated by Puppet nodes.


In this example PostgreSQL 11 is installed:

rpm -Uvh
yum install postgresql11-server postgresql11-contrib

Initialize postgresql

/usr/pgsql-11/bin/postgresql-11-setup initdb

Start the PostgreSQL service

systemctl enable postgresql-11.service
systemctl start postgresql-11.service

Switch to the postgres user, create the puppetdb user and the puppetdb database

sudo -iu postgres
createuser -DRSP puppetdb
createdb -E UTF8 -O puppetdb puppetdb
psql puppetdb -c 'create extension pg_trgm'

Edit  /var/lib/pgsql/11/data/pg_hba.conf
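A typical pg_hba.conf entry permitting the puppetdb user to connect locally with password authentication (the address and method here are assumptions; adjust to your layout):

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
host    puppetdb  puppetdb  127.0.0.1/32  md5
```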


Install puppetdb

rpm -Uvh
yum install puppetdb

Edit /etc/puppetlabs/puppetdb/conf.d/database.ini and specify the puppetdb username/password:

[database]
classname = org.postgresql.Driver
subprotocol = postgresql

# The database address, i.e. //HOST:PORT/DATABASE_NAME
subname = //localhost:5432/puppetdb

# Connect as a specific user
username = puppetdb

# Use a specific password
password = puppetdb

Edit /etc/puppetlabs/puppetdb/conf.d/jetty.ini

Uncomment the host = line.
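After uncommenting, the [jetty] section will look something like the following (the listen address is an assumption; use the interface PuppetDB should answer on):

```
[jetty]
# Interface for the cleartext HTTP listener
host = 0.0.0.0
port = 8080
```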

Edit /etc/sysconfig/puppetdb and adjust the memory allotted to PuppetDB.
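PuppetDB's memory is set through the JVM arguments in that file; a sketch (the 2 GB heap is an assumption, size it to your node count):

```
# Maximum JVM heap for the PuppetDB service
JAVA_ARGS="-Xmx2g"
```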


Start puppetdb

systemctl start puppetdb && systemctl enable puppetdb

Setting up the Puppet server

Make sure the puppetdb DNS name is resolvable (/etc/hosts).

Edit /etc/puppetlabs/puppet/puppet.conf and add the following lines:

storeconfigs = true
storeconfigs_backend = puppetdb

Create /etc/puppetlabs/puppet/puppetdb.conf

server_urls =

Create /etc/puppetlabs/puppet/routes.yaml

master:
  facts:
    terminus: puppetdb
    cache: yaml

Install puppetdb-termini and restart the Puppet server

yum install puppetdb-termini
systemctl restart puppetserver

On a Puppet node run puppet agent -t

Log in to PuppetDB and verify the node's data has been transferred to PuppetDB

psql -h localhost puppetdb puppetdb
puppetdb=> select * from catalogs;


Unlike other tasks, this one requires runas (become) permissions. So we need to specify a become statement in the playbook, and add the following directives in the group_vars folder (see this guide on how to create it).

Add the 4 "ansible_become" lines as per the example:

ansible_user: Administrator
ansible_password: Pass
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore
ansible_become: yes
ansible_become_user: Administrator
ansible_become_pass: Passw
ansible_become_method: runas
Both are the same account: it's the local admin account promoted to Domain Administrator after creating the AD domain. The reason we need to add those 4 lines is that renaming AD-joined machines requires Active Directory credentials; the 4 "ansible_become" lines instruct Ansible to use the domain administrator credentials instead of the local administrator's.
- name: Change computer name
  hosts: dc2
  tasks:
    - name: Change host name
      become: yes
      win_hostname:
        name: server2
      register: name_changed
    - name: Reboot server after hostname change
      win_reboot:
        msg: "Computer name changed, rebooting..."
        pre_reboot_delay: 15
      when: name_changed.changed


In this example the page file is moved to the D drive. In order for Ansible to "track changes", the file C:\Pagefile.log is created after the page file is moved, and the server is restarted afterwards.

In this example the page file is set to automatic (InitialSize = 0; MaximumSize = 0); a custom Initial/Maximum size (in MB) can be set instead.

- name: Moving page file to another drive
  hosts: winserver
  gather_facts: yes
  tasks:
    - name: "Move Page File to D Drive"
      win_shell: |
        $a = Get-WmiObject -Query "select * from Win32_PageFileSetting"
        if ($a.Name -like 'C:\pagefile.sys') {
          $CurrentPageFile = Get-WmiObject -Query "select * from Win32_PageFileSetting where name='c:\\pagefile.sys'"
          # remove the old page file entry before creating the new one
          $CurrentPageFile.Delete()
          Set-WMIInstance -Class Win32_PageFileSetting -Arguments @{name="d:\pagefile.sys";InitialSize = 0; MaximumSize = 0}
        }
        write-output "Done" | out-file C:\Pagefile.log -Append
      args:
        creates: C:\Pagefile.log
      register: page
    - name: Reboot server
      win_reboot:
        msg: "Page file moved, rebooting..."
        pre_reboot_delay: 15
      when: page.changed

Puppet Load Balancing

Posted: November 22, 2018 in Linux, puppet


Puppetmaster is the load balancer and SSL termination happens there; the Puppet client communicates only with puppetmaster, and puppetmaster forwards requests to puppetservers 1/2.

Settings for puppetmaster and puppetservers 1 and 2

rpm -Uvh
yum -y install puppetserver

export PATH=/opt/puppetlabs/bin:$PATH

source ~/.bash_profile

Edit /etc/puppetlabs/puppetserver/conf.d/webserver.conf. This configures the puppetservers to listen on port 8141 for TLS-encrypted traffic and port 18140 for unencrypted traffic:

webserver: {
    access-log-config: /etc/puppetlabs/puppetserver/request-logging.xml
    client-auth: want
    ssl-port: 8141
    port: 18140
}

On all machines run netstat -nltp to make sure ports 18140/8141 are open.


Edit /etc/puppetlabs/puppet/puppet.conf

dns_alt_names = puppet,puppetmaster,puppetserver

Settings on puppetserver 1 and 2

When a Puppet agent connects to a Puppet master, the communication is authenticated with SSL certificates. On these backend servers we need to configure access to the certificate information passed in SSL headers. Setting allow-header-cert-info to 'true' puts Puppet Server in a vulnerable state, so ensure puppetservers 1/2 are not reachable from an untrusted network. With allow-header-cert-info set to 'true', authorization code will use only the client HTTP header values, not an SSL-layer client certificate, to determine the client subject name, authentication status, and trusted facts.

On puppetserver 1/2 edit /etc/puppetlabs/puppetserver/conf.d/auth.conf and add the following line at the beginning of the authorization block: allow-header-cert-info: true

authorization: {
    version: 1
    allow-header-cert-info: true
    rules: [



Create SSL Certificate on puppetmaster (Load Balancer)

Initialize CA certificate

puppet cert list -a

Create certificate request
puppet certificate generate --dns-alt-names puppet,puppetmaster,puppetserver --ca-location local

Issue certificate

puppet cert sign --allow-dns-alt-names
puppet certificate find --ca-location local

On puppetserver 1/2 remove the contents of the /etc/puppetlabs/puppet/ssl folder.

Now, from puppetmaster, copy the contents of /etc/puppetlabs/puppet/ssl/ to the same location on puppetserver 1/2:

cd /etc/puppetlabs/puppet/ssl
scp -r * root@
scp -r * root@

Once the certificates are copied, execute puppet cert list -a on puppetserver 1/2; all copied certificates should be recognized by both backend servers.

On puppetmaster (Load Balancer) install apache and mod_ssl

yum install httpd mod_ssl

Create /etc/httpd/conf.d/puppetlb.conf

Listen 8140

<VirtualHost *:8140>
    SSLEngine on
    SSLProtocol -ALL +TLSv1 +TLSv1.1 +TLSv1.2

    SSLCertificateFile /etc/puppetlabs/puppet/ssl/certs/
    SSLCertificateKeyFile /etc/puppetlabs/puppet/ssl/private_keys/
    SSLCertificateChainFile /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem
    SSLCACertificateFile /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem

    SSLCARevocationFile /etc/puppetlabs/puppet/ssl/ca/ca_crl.pem
    SSLVerifyClient optional
    SSLVerifyDepth 1
    SSLOptions +StdEnvVars +ExportCertData

    RequestHeader unset X-Forwarded-For
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    # Backend pools (hostnames assumed; point these at puppetserver 1/2)
    <Proxy balancer://puppetworker>
        BalancerMember http://puppetserver1:18140
        BalancerMember http://puppetserver2:18140
    </Proxy>
    <Proxy balancer://puppetca>
        BalancerMember http://puppetserver1:18140
    </Proxy>

    ProxyPassMatch ^/(puppet-ca/v[123]/.*)$ balancer://puppetca/$1
    ProxyPass / balancer://puppetworker/
    ProxyPassReverse / balancer://puppetworker
</VirtualHost>

This configuration creates an Apache VirtualHost that listens for connections on port 8140 and redirects traffic to one of the backend puppetserver instances. Communication between the puppetmaster machine and puppetserver 1/2 is unencrypted.

To redirect all certificate related traffic to a specific machine, the following ProxyPassMatch directive can be used:
ProxyPassMatch ^/([^/]+/certificate.*)$ balancer://puppetca/$1

On puppetmaster start httpd and puppetserver

systemctl start httpd && systemctl enable httpd && systemctl start puppetserver && systemctl enable puppetserver

On backend puppetserver 1/2 start the puppetserver service

systemctl start puppetserver && systemctl enable puppetserver

Now on a Puppet node run puppet agent -t, sign the certificate on the master, and the Puppet agent should work fine.