Archive for January, 2019

A virtual private gateway is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection.

A customer gateway is the physical device or software application on your side of the Site-to-Site VPN connection; in this guide it is a StrongSwan server.

From AWS console click VPC-Virtual Private Gateways-Create Virtual Private Gateway

1.png

2.PNG

Now Create Customer Gateway: Customer Gateway-Create Customer Gateway

3.png

Set Routing to Static and enter the public IP of the StrongSwan server

4.PNG

Now click Site-to-Site-VPN Connection-Create VPN Connection

5.png

Now select the Virtual Private Gateway and Customer Gateway we created previously, set Routing Options to Static, specify the remote network (the local subnet behind the StrongSwan server) and click Create VPN Connection

6.PNG

Go back to Virtual Private Gateways-Actions-Attach to VPC, select the VPC and click Yes, Attach

7.PNG

Allow inbound traffic from StrongSwan server

From Services-VPC-Security Groups-Select Security Group-Inbound Rules-Edit Rule

8.PNG

Add Rule-Type:All traffic-Source StrongSwan IP address

11.PNG

Installing StrongSwan on CentOS 7

If StrongSwan is installed on an AWS EC2 instance, disable the Source/Destination check on that instance (EC2 console: Actions-Networking-Change Source/Dest. Check)

Ensure that /etc/sysctl.conf contains the following lines and then force them to be loaded by running sysctl -p /etc/sysctl.conf or by rebooting:

net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.tcp_max_syn_backlog = 1280
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_mtu_probing = 1

yum install epel-release
yum repolist
yum update
yum install strongswan
systemctl enable strongswan
yum install ntp
systemctl enable ntpd

Replace the server configuration entries in /etc/ntp.conf so the AWS recommended NTP server pool is used:

server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst

Switch back to the AWS console-Site-To-Site VPN Connections-select the VPN connection-click Download Configuration

12.PNG

13.PNG

For tunnel 1, the downloaded configuration looks like this:

– IKE version : IKEv1
– Authentication Method : Pre-Shared Key
– Pre-Shared Key : aqke
– Authentication Algorithm : sha1
– Encryption Algorithm : aes-128-cbc
– Lifetime : 28800 seconds
– Phase 1 Negotiation Mode : main
– Diffie-Hellman : Group 2

#2: IPSec Configuration

Configure the IPSec SA as follows:
Category “VPN” connections in the GovCloud region have a minimum requirement of AES128, SHA2, and DH Group 14.
Please note, you may use these additionally supported IPSec parameters for encryption like AES256 and other DH groups like 2, 5, 14-18, 22, 23, and 24.
Higher parameters are only available for VPNs of category “VPN,” and not for “VPN-Classic”.
– Protocol : esp
– Authentication Algorithm : hmac-sha1-96
– Encryption Algorithm : aes-128-cbc
– Lifetime : 3600 seconds
– Mode : tunnel
– Perfect Forward Secrecy : Diffie-Hellman Group 2

/etc/strongswan/ipsec.conf:

conn %default
        mobike=no
        compress=no
        authby=psk
        keyexchange=ikev1
        ike=aes128-sha1-modp1024!
        ikelifetime=28800s
        esp=aes128-sha1-modp1024!
        lifetime=3600s
        rekeymargin=3m
        keyingtries=3
        installpolicy=yes
        dpdaction=restart
        type=tunnel

conn dc-aws1
        leftsubnet=172.16.40.0/24   # local subnet
        right=1.2.3.4               # AWS gateway public IP (from the downloaded configuration)
        rightsubnet=10.34.0.0/16    # remote subnet (VPC CIDR)
        auto=start

Store the pre-shared key in /etc/strongswan/ipsec.secrets:

1.2.3.4 : PSK "aqke"

Restart the strongswan service and check the logs:

tail -f /var/log/messages | grep charon

If all went well, the tunnel should be UP; running strongswan status should list the dc-aws1 connection as established.


Let’s say we have this AWS CloudWatch Events rule

2.PNG

And we have tagged EC2 instances with an AutoStopSchedule tag, with values between 1 and 5

1.PNG

The following Lambda function will get the cron expression from the CloudWatch rule and dynamically filter which instances should be stopped

import boto3
import logging

# CloudWatch Events rule whose schedule expression drives the filtering
rule_name = "stop_ec2"

# set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
ec2 = boto3.resource('ec2')

# connect to CloudWatch Events
client = boto3.client('events')

def lambda_handler(event, context):
    # get the cron expression for the specific CloudWatch rule
    response = client.describe_rule(Name=rule_name)
    expression = response['ScheduleExpression']

    # based on the current expression, create a filter value in the range 1-5
    if "cron(20 * * * ? *)" in expression:
        schedule = "1"
    elif "cron(0 */1 * * ? *)" in expression:
        schedule = "2"
    elif "cron(0 */6 * * ? *)" in expression:
        schedule = "3"
    elif "cron(0 */12 * * ? *)" in expression:
        schedule = "4"
    elif "cron(0 10 * * ? *)" in expression:
        schedule = "5"
    else:
        schedule = "0"

    # use the filter() method of the instances collection to retrieve
    # all running EC2 instances carrying the matching tag value
    filters = [
        {
            'Name': 'tag:AutoStopSchedule',
            'Values': [schedule]
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]

    # filter the instances
    instances = ec2.instances.filter(Filters=filters)

    # locate all running instances
    running_instances = [instance.id for instance in instances]

    # print the instances for logging purposes
    print(running_instances)

    # make sure there are actually instances to shut down
    if running_instances:
        # perform the shutdown
        ec2.instances.filter(InstanceIds=running_instances).stop()
    else:
        print("Nothing to see here")
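The if/elif chain that maps a schedule expression to a tag value can be condensed to a dict lookup. A minimal sketch using the same expressions as above:

```python
# Maps CloudWatch schedule expressions to AutoStopSchedule tag values;
# anything not listed falls back to "0".
SCHEDULES = {
    "cron(20 * * * ? *)": "1",
    "cron(0 */1 * * ? *)": "2",
    "cron(0 */6 * * ? *)": "3",
    "cron(0 */12 * * ? *)": "4",
    "cron(0 10 * * ? *)": "5",
}

def schedule_for(expression):
    # return the tag value for the first matching expression, else "0"
    return next((v for k, v in SCHEDULES.items() if k in expression), "0")

print(schedule_for("cron(0 */6 * * ? *)"))  # → 3
```

This keeps the mapping in one place, so adding a sixth schedule is a one-line change instead of another elif branch.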

Make sure the Lambda execution role's IAM policy contains the following statement:

 
{
    "Action": [
        "events:DescribeRule"
    ],
    "Effect": "Allow",
    "Resource": "*"
}
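The function also calls ec2.instances.filter and .stop(), so the role additionally needs EC2 describe and stop permissions. A sketch of a fuller statement list (resource scoping and the exact action set are up to you):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["events:DescribeRule"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances", "ec2:StopInstances"],
            "Resource": "*"
        }
    ]
}
```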

Rundeck – Run Docker container

Posted: January 4, 2019 in docker, RunDeck

In the previous article we configured email monitoring with Zabbix; in this one we'll schedule the Python script that sends data to Zabbix to run from a Docker container.
The Docker image is created from the following Dockerfile: the script is copied into the image and the Zabbix agent is installed

FROM python:3.7.2-stretch

RUN wget -O /tmp/zabbix-release.deb http://repo.zabbix.com/zabbix/3.4/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.4-1%2Bbionic_all.deb && dpkg -i /tmp/zabbix-release.deb && apt-get update -y && apt-get install zabbix-agent -y && mkdir /email_parsing

WORKDIR /email_parsing

COPY start.py .

ENTRYPOINT ["python", "./start.py"]

Build the image:

docker build . -t zabbix/parse_email:1.0.0

Install Docker on the Rundeck server, and add the rundeck user to the docker group:

usermod -a -G docker rundeck
systemctl restart docker
systemctl restart rundeckd

Create Rundeck job-Local Command

2.PNG

Create a password vault for the mailbox (see this post for reference) and specify it as a parameter for the script. Because of --rm, the container will be deleted after every run:

docker run --rm zabbix/parse_email:1.0.0 "-password" ${option.mailboxpassword}

Dockerizing Zabbix trapper

We can create a Docker image for the zabbix_sender (Zabbix trapper) commands

Dockerfile:

FROM ubuntu:latest

RUN apt-get update -y && apt-get install wget -y && wget -O /tmp/zabbix-release.deb http://repo.zabbix.com/zabbix/4.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.0-1%2Bbionic_all.deb && dpkg -i /tmp/zabbix-release.deb && apt-get update -y && apt-get install zabbix-agent -y && mkdir /zabbix_sender

WORKDIR /zabbix_sender

COPY . .

ENTRYPOINT ["./start.sh"]

start.sh:

#!/bin/bash


while test -n "$1"; do
    case "$1" in
      -j|-job)
          job_name=$2
          shift 2
          ;;
      *)
          # skip unknown arguments so the loop always terminates
          shift
          ;;
    esac
done

#echo $job_name

if [ "$job_name" == "some_job" ]; then
   zabbix_sender -z zabbix_host -s rundeck -k job_status[job_name] -o "job $job_name failed" -vv
fi

I built the Docker image with the tag “zabbix_sender”; it takes the job name as a parameter.

If using a remote registry, add a step to log in to it:

echo password | docker login --username username --password-stdin registry_name

Now we can add an error handler in Rundeck, so that if the job fails a Zabbix alert is triggered. Under the command, click the “cog” icon and select “Add error handler”

1.PNG

Click Command or Local Command and add the following line:

docker run --rm zabbix_sender -job ${option.job_name}

The job name is declared as a Rundeck option.

Monitoring email content using Zabbix

Posted: January 1, 2019 in Linux

In this example we'll use a Python script to extract the job name from the email subject, move parsed emails to a “Processed” folder, create a Zabbix item for each job, and create an LLD discovery rule.

Put this script under the /usr/lib/zabbix/externalscripts folder:

#!/usr/bin/python

import email, imaplib, re, sys, json, base64

# read the previously encrypted password from a file
with open('/opt/an_sys/output.txt', 'r') as myfile:
    data = myfile.read()

# connect to the mailbox, switch to the "ZABBIX" folder and search for
# emails sent from a specific address

user = 'monitoring@email.com'
pwd = base64.b64decode(data)

conn = imaplib.IMAP4_SSL("outlook.office365.com")
conn.login(user, pwd)
conn.select("ZABBIX")

#resp, items = conn.uid("search",None, 'All')
resp, items = conn.uid("search", None, '(FROM "email@domain.com")')

tdata = []

items = items[0].split()
for emailid in items:
    resp, data = conn.uid("fetch", emailid, "(RFC822)")
    if resp == 'OK':
        email_body = data[0][1].decode('utf-8')
        mail = email.message_from_string(email_body)
        # only process emails with "failed" in the subject
        if mail["Subject"].find("failed") > 0:
            regex1 = r'Snap:\s*(.+?)\s+failed'
            a = re.findall(regex1, mail["Subject"], re.DOTALL)

            # format the string by removing "'", "\r\n", " ", "|", ".", "-",
            # "__" and "Processor_"
            if a:
                a = [item.replace("'", "") for item in a]
                a = [item.replace("\r\n", "") for item in a]
                a = [item.replace(" ", "_") for item in a]
                a = [item.replace("|", "_") for item in a]
                a = [item.replace(".", "_") for item in a]
                a = [item.replace("-", "") for item in a]
                a = [item.replace("__", "_") for item in a]
                a = [item.replace("Processor_", "") for item in a]
                seen = set()
                result = []

                for item in a:
                    # remove "_for_" and everything after it
                    c = item.split("_for_")[0]
                    # remove digits
                    c = ''.join([i for i in c if not i.isdigit()])
                    # replace "__" with nothing
                    s = c.replace("__", "")
                    # limit strings to 36 characters (in order to create zabbix items)
                    s = s[:36]
                    # if the string ends with "_", remove it
                    s = re.sub("_$", "", s)
                    if s not in seen:
                        seen.add(s)
                        result.append(s)
                        output = " ".join(result)

                        # create LLD JSON output
                        tdata.append({'{#JOB}': output, '{#NAME}': item})

print json.dumps({"data": tdata}, indent=4)
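The subject-normalisation steps above can be tested in isolation from the mailbox. A minimal Python 3 sketch of the same cleanup chain (the sample subject is a made-up example):

```python
import re

def normalize(job):
    """Apply the same cleanup chain the script uses on a captured job name."""
    for old, new in [("'", ""), ("\r\n", ""), (" ", "_"), ("|", "_"),
                     (".", "_"), ("-", ""), ("__", "_"), ("Processor_", "")]:
        job = job.replace(old, new)
    job = job.split("_for_")[0]                          # drop "_for_" and after
    job = ''.join(ch for ch in job if not ch.isdigit())  # strip digits
    job = job.replace("__", "")[:36]                     # collapse "__", cap at 36 chars
    return re.sub("_$", "", job)                         # no trailing "_"

print(normalize("Processor_Daily Load-01_for_clientX"))  # → Daily_Load
```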

Discovery rule

If it takes some time for items to be created in Zabbix, try reducing the update interval; if that doesn't help, try decreasing the configuration cache period. The configuration cache contains information on hosts and items to be monitored and is re-created by default every 60 seconds. This period can be customised with the CacheUpdateFrequency parameter (/etc/zabbix/zabbix_server.conf); try a value between 20 and 60 seconds. If using a Zabbix proxy, edit ConfigFrequency (/etc/zabbix/zabbix_proxy.conf) instead. Restart the Zabbix service afterwards.
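For example, in /etc/zabbix/zabbix_server.conf (30 seconds here is just an illustrative value):

```
CacheUpdateFrequency=30
```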

Capture.PNG

Item property

1.PNG

Items will be sent by the Zabbix trapper. If an item doesn't yet exist on the Zabbix server, the email is left in the “ZABBIX” folder until the key for that job is created. This script will be run every 2 minutes:

#!/usr/local/bin/python3
from subprocess import run, PIPE
import email
import imaplib
import re
import sys
import logging
import base64
import argparse
import os

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# function to send items to the Zabbix server using the trapper

def zabbix_sender(key, output):
    """
    Sends a message to the Zabbix monitoring server to update the given key
    with the given output. This is designed to be called only when
    the service encounters an error.
    Zabbix should be configured with a Zabbix Trapper item for any key
    passed in, and a trigger for any instance where the output value of the
    message has a string length greater than 0. Since this method should
    only be called when something goes wrong, the Zabbix setup for
    listening for this key should be "any news is bad news".
    @param key
    The item key to use when communicating with Zabbix. This should match a
    key configured on the Zabbix monitoring server.
    @param output
    The data to display in Zabbix notifying TechOps of a problem.
    """
    # The server and hostname are better kept in an external configuration
    # file than hard-coded in the script.
    server = "192.168.10.19"
    hostname = "zabbix_host"
    cmd = ["zabbix_sender", "-z", server, "-s", hostname,
           "-k", key, "-o", output]
    result = run(cmd, stdout=PIPE, stderr=PIPE, universal_newlines=True, check=True)
    return result.stdout, result.stderr

# read the password passed as an argument and log in to the mailbox

parser = argparse.ArgumentParser()
parser.add_argument('-p', '-password', dest='pwd', help='The password for authentication.')
args = parser.parse_args()

user = 'monitoring@domain.com'
pwd = args.pwd

conn = imaplib.IMAP4_SSL("outlook.office365.com")
conn.login(user, pwd)

conn.select("ZABBIX")

# resp, items = conn.uid("search",None, 'All')
resp, items = conn.uid("search", None, '(FROM "email1@domain.com")')
items = items[0].split()
for emailid in items:
    resp, data = conn.uid("fetch", emailid, "(RFC822)")
    if resp == 'OK':
        email_body = data[0][1].decode('utf-8')
        mail = email.message_from_string(email_body)
        # search for emails with the word "failed" in the subject
        if mail["Subject"].find("failed") > 0:
            # get the job name from the subject; that string will be used as the Zabbix item
            regex1 = r'Snap:\s*(.+?)\s+failed'
            a = re.findall(regex1, mail["Subject"], re.DOTALL)
            # format the job name (remove "'", "\r\n", "|", ".", "-", "__" and "Processor_")
            if a:
                a = [item.replace("'", "") for item in a]
                a = [item.replace("\r\n", "") for item in a]
                a = [item.replace(" ", "_") for item in a]
                a = [item.replace("|", "_") for item in a]
                a = [item.replace(".", "_") for item in a]
                a = [item.replace("-", "") for item in a]
                a = [item.replace("__", "_") for item in a]
                a = [item.replace("Processor_", "") for item in a]
                seen = set()
                result = []

                for item in a:
                    # remove everything after "_for_" (including "_for_")
                    c = item.split("_for_")[0]
                    # remove digits from the string
                    c = ''.join([i for i in c if not i.isdigit()])
                    # collapse "__" and limit to 36 characters (a Zabbix item key can't be too long)
                    s = c.replace("__", "")[:36]
                    # if the string ends with "_", remove it (a key can't end with special characters)
                    s = re.sub("_$", "", s)
                    if s not in seen:
                        seen.add(s)
                        result.append(s)
                        out = " ".join(result)
                        # create the Zabbix key from the strings (email subjects)
                        key = "an.snap[" + out + ",an]"
                        # send the value "failed" to Zabbix
                        try:
                            r = zabbix_sender(key, "failed")
                            k = "".join(r)
                            if k.find("failed: 0") > 0:
                                # copy the email from the "ZABBIX" folder to the "Processed" folder
                                copy_result = conn.uid('COPY', emailid, "Processed")
                                if copy_result[0] == 'OK':
                                    # mark the original as deleted to clear the "ZABBIX" folder
                                    conn.uid('STORE', emailid, '+FLAGS', '(\\Deleted)')
                                    # conn.expunge()
                        except Exception:
                            continue
        else:
            # if the subject doesn't contain the word "failed", move the email to the "Ignored" folder
            copy_result = conn.uid('COPY', emailid, "Ignored")
            if copy_result[0] == 'OK':
                # mark the original as deleted to clean up the "ZABBIX" folder
                conn.uid('STORE', emailid, '+FLAGS', '(\\Deleted)')
                # conn.expunge()

# disconnect from the mailbox
conn.close()
conn.logout()
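The key construction and the “failed: 0” success check can be exercised without a mailbox or a Zabbix server. A small sketch with made-up values:

```python
def make_key(out):
    # same key format the script builds: an.snap[<jobs>,an]
    return "an.snap[" + out + ",an]"

def sender_succeeded(sender_output):
    # zabbix_sender reports e.g. "processed: 1; failed: 0; total: 1";
    # the script treats "failed: 0" in that output as success
    return sender_output.find("failed: 0") > 0

print(make_key("Daily_Load"))                                 # → an.snap[Daily_Load,an]
print(sender_succeeded("processed: 1; failed: 0; total: 1"))  # → True
```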