Archive for February, 2018

Create JIRA SubTask using REST API

Posted: February 23, 2018 in Linux, Scripts

Get the parent issue key and project key based on the ticket subject, then export the result to a JSON file:

curl -s -u user:password -X GET -H "Content-Type: application/json" "https://jira.company.com/rest/api/2/search?jql=project=Tech+AND+summary~'Build%20the%20Portal'" | python -m json.tool > 1.json

In newer JIRA versions the syntax changed a bit:

curl -s -u user:pass -X GET -H "Content-Type: application/json" "https://jira.corp.company.com/rest/api/2/search?jql=project=TECH+AND+summary~'Build%20Fortimanager%20Portal'"
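
When the JQL contains spaces or ampersands it's often easier to let a script do the URL encoding. A minimal sketch using Python's standard library (the query string is just the example from above):

# URL-encode a JQL query so spaces and ampersands survive the shell and HTTP (Python 3 stdlib)
from urllib.parse import quote

jql = 'project=TECH AND summary~"Build Fortimanager Portal"'
url = "https://jira.corp.company.com/rest/api/2/search?jql=" + quote(jql)
print(url)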

# Search the JSON file and extract the parent key, project key and customfield_10107. This field is not documented anywhere on the internet and it's unclear why it's required; I lost half a day on this.

{
    "expand": "names,schema",
    "issues": [
        {
            "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
            "fields": {
                "aggregateprogress": {
                    "percent": 100,
                    "progress": 86400,
                    "total": 86400
                },
                "aggregatetimeestimate": 0,
                "aggregatetimeoriginalestimate": 57600,
                "aggregatetimespent": 86400,
                "assignee": {
                    "active": true,
                    "avatarUrls": {
                        "16x16": "https://secure.gravatar.com/avatar/5d92f3ce51d4a090cdcb9b77ee890989?d=mm&s=16",
                        "24x24": "https://secure.gravatar.com/avatar/5d92f3ce51d4a090cdcb9b77ee890989?d=mm&s=24",
                        "32x32": "https://secure.gravatar.com/avatar/5d92f3ce51d4a090cdcb9b77ee890989?d=mm&s=32",
                        "48x48": "https://secure.gravatar.com/avatar/5d92f3ce51d4a090cdcb9b77ee890989?d=mm&s=48"
                    },
                    "displayName": "user1",
                    "emailAddress": "user1@gmail.com",
                    "key": "user1",
                    "name": "user1",
                    "self": "https://mycompany/rest/api/2/user?username=user1",
                    "timeZone": "Europe/Belgrade"
                },
                "components": [],
                "created": "2018-01-10T10:23:01.000+0000",
                "creator": {
                    "active": true,
                    "avatarUrls": {
                        "16x16": "https://mycompany/secure/useravatar?size=xsmall&avatarId=10349",
                        "24x24": "https://mycompany/secure/useravatar?size=small&avatarId=10349",
                        "32x32": "https://mycompany/secure/useravatar?size=medium&avatarId=10349",
                        "48x48": "https://mycompany/secure/useravatar?avatarId=10349"
                    },
                    "displayName": "user2",
                    "emailAddress": "user2@gmail.com",
                    "key": "user2",
                    "name": "user2",
                    "self": "https://mycompany/rest/api/2/user?username=user2",
                    "timeZone": "Zulu"
                },
                "customfield_10000": null,
                "customfield_10001": null,
                "customfield_10002": null,
                "customfield_10004": "0|i00n0f:",
                "customfield_10005": null,
                "customfield_10006": null,
                "customfield_10100": null,
                "customfield_10101": [],
                "customfield_10102": null,
                "customfield_10103": null,
                "customfield_10107": {
                    "id": "10400",
                    "self": "https://mycompany/rest/api/2/customFieldOption/10400",
                    "value": "user Internal"
                },
                "customfield_10108": null,
                "customfield_10200": null,
                "customfield_10201": "2018-01-12",
                "customfield_10202": "2018-01-12",
                "customfield_10203": null,
                "customfield_10204": null,
                "customfield_10205": null,
                "customfield_10206": null,
                "customfield_10300": "com.atlassian.servicedesk.plugins.approvals.internal.customfield.ApprovalsCFValue@40efdb56",
                "customfield_10301": null,
                "customfield_10302": null,
                "customfield_10600": null,
                "customfield_10700": null,
                "customfield_11000": null,
                "customfield_11001": null,
                "customfield_11002": null,
                "customfield_11003": null,
                "customfield_11004": null,
                "customfield_11005": null,
                "customfield_11006": null,
                "customfield_11007": null,
                "customfield_11008": null,
                "customfield_11009": null,
                "customfield_11010": null,
                "customfield_11011": null,
                "customfield_11012": null,
                "customfield_11013": null,
                "customfield_11014": null,
                "customfield_11015": null,
                "customfield_11016": null,
                "customfield_11017": null,
                "customfield_11018": null,
                "customfield_11019": null,
                "customfield_11100": null,
                "customfield_11101": null,
                "customfield_11102": null,
                "description": ".",
                "duedate": null,
                "environment": null,
                "fixVersions": [],
                "issuelinks": [],
                "issuetype": {
                    "avatarId": 10318,
                    "description": "A task that needs to be done.",
                    "iconUrl": "https://mycompany/secure/viewavatar?size=xsmall&avatarId=10318&avatarType=issuetype",
                    "id": "10100",
                    "name": "Task",
                    "self": "https://mycompany/rest/api/2/issuetype/10100",
                    "subtask": false
                },
                "labels": [],
                "lastViewed": "2018-02-22T15:47:47.792+0000",
                "priority": {
                    "iconUrl": "https://mycompany/images/icons/priorities/medium.svg",
                    "id": "3",
                    "name": "Medium",
                    "self": "https://mycompany/rest/api/2/priority/3"
                },
                "progress": {
                    "percent": 100,
                    "progress": 86400,
                    "total": 86400
                },
                "project": {
                    "avatarUrls": {
                        "16x16": "https://mycompany/secure/projectavatar?size=xsmall&pid=10001&avatarId=10201",
                        "24x24": "https://mycompany/secure/projectavatar?size=small&pid=10001&avatarId=10201",
                        "32x32": "https://mycompany/secure/projectavatar?size=medium&pid=10001&avatarId=10201",
                        "48x48": "https://mycompany/secure/projectavatar?pid=10001&avatarId=10201"
                    },
                    "id": "10001",
                    "key": "TECH",
                    "name": "Technology",
                    "self": "https://mycompany/rest/api/2/project/10001"
                },
                "reporter": {
                    "active": true,
                    "avatarUrls": {
                        "16x16": "https://mycompany/secure/useravatar?size=xsmall&avatarId=10349",
                        "24x24": "https://mycompany/secure/useravatar?size=small&avatarId=10349",
                        "32x32": "https://mycompany/secure/useravatar?size=medium&avatarId=10349",
                        "48x48": "https://mycompany/secure/useravatar?avatarId=10349"
                    },
                    "displayName": "user2",
                    "emailAddress": "user2@gmail.com",
                    "key": "user2",
                    "name": "user2",
                    "self": "https://mycompany/rest/api/2/user?username=user2",
                    "timeZone": "Zulu"
                },
                "resolution": null,
                "resolutiondate": null,
                "status": {
                    "description": "",
                    "iconUrl": "https://mycompany/images/icons/statuses/inprogress.png",
                    "id": "3",
                    "name": "In-Progress",
                    "self": "https://mycompany/rest/api/2/status/3",
                    "statusCategory": {
                        "colorName": "yellow",
                        "id": 4,
                        "key": "indeterminate",
                        "name": "In Progress",
                        "self": "https://mycompany/rest/api/2/statuscategory/4"
                    }
                },
                "subtasks": [],
                "summary": "Build the Fortimanage Portal",
                "timeestimate": 0,
                "timeoriginalestimate": 57600,
                "timespent": 86400,
                "updated": "2018-01-31T15:22:06.000+0000",
                "versions": [],
                "votes": {
                    "hasVoted": false,
                    "self": "https://mycompany/rest/api/2/issue/TECH-456/votes",
                    "votes": 0
                },
                "watches": {
                    "isWatching": false,
                    "self": "https://mycompany/rest/api/2/issue/TECH-456/watchers",
                    "watchCount": 1
                },
                "workratio": 150
            },
            "id": "15377",
            "key": "TECH-456",
            "self": "https://mycompany/rest/api/2/issue/15377"
        }
    ],
    "maxResults": 50,
    "startAt": 0,
    "total": 1
}

Using the commands below, we'll extract the info that we'll pass to the curl command later.

parent=`jq -r '.issues[0] | .key' 1.json`
project=`jq -r '.issues[0] | .fields.project.key' 1.json`
custom_field=`jq -r '.issues[0] | .fields.customfield_10107.id' 1.json`

# Create the sub-task

curl -D- -u user:pass -X POST --data "{\"fields\":{\"project\":{\"key\":\"$project\"},\"parent\":{\"key\":\"$parent\"},\"summary\":\"Test ChargenNr\",\"description\":\"some description\",\"issuetype\":{\"name\":\"Sub-task\"},\"customfield_10107\":{\"id\":\"$custom_field\"}}}" -H "Content-Type:application/json" https://jira.company.com/rest/api/latest/issue/
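
For reference, the whole flow (search, extract, create) can also be done in one Python script. A minimal sketch, assuming the requests library and the same endpoints and credentials as above:

import requests

base = "https://jira.company.com/rest/api/2"
auth = ("user", "password")

# search for the parent ticket by project and summary
search = requests.get(base + "/search",
                      params={"jql": 'project=TECH AND summary~"Build the Portal"'},
                      auth=auth).json()

# extract the parent key, project key and the undocumented customfield_10107 id
issue = search["issues"][0]
parent = issue["key"]
project = issue["fields"]["project"]["key"]
custom_field = issue["fields"]["customfield_10107"]["id"]

# create the sub-task under the parent issue
payload = {"fields": {
    "project": {"key": project},
    "parent": {"key": parent},
    "summary": "Test ChargenNr",
    "description": "some description",
    "issuetype": {"name": "Sub-task"},
    "customfield_10107": {"id": custom_field},
}}
resp = requests.post(base + "/issue/", json=payload, auth=auth)
print(resp.status_code, resp.text)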

In the previous post we deployed a single machine from a Python script using the python-terraform plugin.

In this one we'll go through a JSON file, extract the username and instance count for each user, and create that many instances per user.

Based on the file below, one instance will be created for DJukes, two for JWilson, and three for eflame.

JSON file:

{
  "squadName": "Super hero squad",
  "homeTown": "Metro City",
  "formed": 2016,
  "secretBase": "Super tower",
  "active": true,
  "customers": [
    {
      "name": "Molecule Man",
      "age": 29,
      "email": "DJukes@gmail.com",
      "instances": 1,
      "powers": [
        "Radiation resistance",
        "Turning tiny",
        "Radiation blast"
      ]
    },
    {
      "name": "Madame Uppercut",
      "age": 39,
      "email": "JWilson@gmail.com",
      "instances": 2,
      "powers": [
        "Million tonne punch",
        "Damage resistance",
        "Superhuman reflexes"
      ]
    },
    {
      "name": "Eternal Flame",
      "age": 1000000,
      "email": "eflame@gmail.com",
      "instances": 3,
      "powers": [
        "Immortality",
        "Heat Immunity",
        "Inferno",
        "Teleportation",
        "Interdimensional travel"
      ]
    }
  ]
}
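
A quick way to check what will be extracted from this file (the username is the local part of the email address):

import json

with open('my.json') as f:
    data = json.load(f)

for customer in data['customers']:
    print(customer['email'].split('@')[0], customer['instances'])
# DJukes 1
# JWilson 2
# eflame 3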

Terraform files:

Get public IP:

output.tf

output "id" {
description = "List of IDs of instances"
value = ["${aws_instance.win-example.*.public_ip}"]
}
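
After an apply, the collected addresses can be printed again at any time with terraform output id.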

sg.tf: for each user, "allow-all" will be replaced with that particular username, so a new security group is created for every user; the name of the security group will be the username.

resource "aws_security_group" "allow-all" {
name="allow-all"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 6556
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
Name = "allow-RDP"
}
}
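
The per-user rename is a plain string replacement over this file. A minimal sketch of what the main script below does for sg.tf and windows.tf ("JWilson" stands in for a username taken from the JSON file):

# replace the placeholder security-group name with the username
with open('sg.tf') as f:
    body = f.read()
with open('sg.tf', 'w') as f:
    f.write(body.replace('allow-all', 'JWilson'))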

vars.tf

variable "AWS_REGION" {
  default = "eu-west-1"
}
variable "WIN_AMIS" {
  type = "map"
  default = {
    us-east-1 = "ami-30540427"
    us-west-2 = "ami-9f5efbff"
    eu-west-1 = "ami-cc821eb5"
  }
}


variable "count" {

default="1"

}



variable "PATH_TO_PRIVATE_KEY" {
  default = "mykey"
}
variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}
variable "INSTANCE_USERNAME" {
#  default = "Terraform"
}
variable "INSTANCE_PASSWORD" {
 default="Passw0rd012345"

}

windows.tf

resource "aws_instance" "win-example" {
  ami = "${lookup(var.WIN_AMIS, var.AWS_REGION)}"
  instance_type = "t2.medium"
  count="${var.count}"
  lifecycle {
ignore_changes="ami"

}

   
  vpc_security_group_ids=["${aws_security_group.allow-all.id}"]
#key_name = "${aws_key_pair.mykey.key_name}"
  user_data = <
net user ${var.INSTANCE_USERNAME} '${var.INSTANCE_PASSWORD}' /add /y
net localgroup administrators ${var.INSTANCE_USERNAME} /add

winrm quickconfig -q
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}'
winrm set winrm/config '@{MaxTimeoutms="1800000"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'

netsh advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow
netsh advfirewall firewall add rule name="WinRM 5986" protocol=TCP dir=in localport=5986 action=allow

net stop winrm
sc.exe config winrm start=auto
net start winrm

EOF

  provisioner "file" {
    source = "test.txt"
    destination = "C:/test.txt"
  }
  connection {
    type = "winrm"
    timeout = "10m"
    user = "${var.INSTANCE_USERNAME}"
     password = "${var.INSTANCE_PASSWORD}"
      
}

tags {
Name="${format("${var.INSTANCE_USERNAME}-%01d",count.index+1)}"

}
}
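
The user_data section runs at first boot: it creates the login user, enables WinRM with unencrypted basic authentication, and opens ports 5985/5986 in the firewall, which is what lets the file provisioner connect using the winrm connection block above.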

For every user a new folder will be created and all the above Terraform files copied into it; the copied sg.tf and windows.tf files are then searched for "allow-all", which is replaced with the username. This is needed because a new security group has to be created for each user.

#!/bin/python
import sys
import json
import os.path
import shutil
from python_terraform import *

# open the JSON file
json_data = open('./my.json')
data = json.load(json_data)
json_data.close()

# function which creates the instances; the username and instance count
# fetched from the JSON file are passed to Terraform as variables
def myfunc():
    tf = Terraform(working_dir=final_path, variables={'count': count, 'INSTANCE_USERNAME': user})
    tf.plan(no_color=IsFlagged, refresh=True, capture_output=False)
    approve = {"auto-approve": True}
    print(tf.init(reconfigure=True))
    print(tf.plan())
    print(tf.apply(**approve))
    return

# sweep through the JSON file and store the username and number of instances
# in the user and count variables
for i in range(0, len(data['customers'])):
    k = data['customers'][i]['email']
    user = k.split('@')[0]
    count = data['customers'][i]['instances']

    # define the "root" directory
    start_path = "/home/ja/terraform-course/demo-2b/"

    # in order to avoid instance recreation, a folder for each user needs to be
    # created: define the subdirectory named after the user and create it if it
    # doesn't exist
    final_path = os.path.join(start_path, user)
    if not os.path.exists(final_path):
        os.makedirs(final_path)

    # copy the Terraform files to each newly created user folder
    shutil.copy2('./vars.tf', final_path)
    shutil.copy2('./sg.tf', final_path)
    shutil.copy2('./windows.tf', final_path)
    shutil.copy2('./provider.tf', final_path)
    shutil.copy2('./test.txt', final_path)
    shutil.copy2('./output.tf', final_path)

    # for each user a new security group needs to be created;
    # the name of the SG will be the username
    final = os.path.join(final_path, 'sg.tf')
    final1 = os.path.join(final_path, 'windows.tf')

    # replace the current name (allow-all) with the username in sg.tf and windows.tf
    with open(final, 'r') as file:
        filedata = file.read()
    filedata = filedata.replace('allow-all', user)
    with open(final, 'w') as file:
        file.write(filedata)
    with open(final1, 'r') as file:
        filedata = file.read()
    filedata = filedata.replace('allow-all', user)
    with open(final1, 'w') as file:
        file.write(filedata)

    # call the function which runs Terraform
    myfunc()

    # in each user folder open the terraform.tfstate file and extract the public IP
    final2 = os.path.join(final_path, 'terraform.tfstate')
    json_data = open(final2)
    data1 = json.load(json_data)
    json_data.close()

    # write the public IP, username and password to /home/ja/terraform-course/demo-2b/<user>.txt
    filename = "/home/ja/terraform-course/demo-2b/" + user + ".txt"
    print(filename)
    for j in range(0, len(data1['modules'])):
        ip = ','.join(data1['modules'][j]['outputs']['id']['value'])
        sys.stdout = open(filename, 'wt')
        print("Username is: " + user + ". Password is Passw0rd012345. IP address is: " + ip)

 

python-terraform is a Python module which provides a wrapper around the terraform command-line tool. More details here.

Installation is simple:

pip install python-terraform

Now we can use a Python script to interact with Terraform. In this example we'll pass the number of instances as a variable to the Python script, and that many new instances will be created.

Python script

 

#!/bin/python
from python_terraform import *

enter = int(input('Enter number of instances: '))

tf = Terraform(working_dir='/home/ja/terraform/demo-3', variables={'count': enter})
tf.plan(no_color=IsFlagged, refresh=False, capture_output=True)
approve = {"auto-approve": True}
print(tf.plan())
print(tf.apply(**approve))

 

variables={'count': enter}

count is the variable name specified in the vars.tf file; enter is the variable in the Python script which receives the number of instances interactively.

Because enter is a Python variable it is passed without quotes; a literal value would need to be quoted.
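
A minimal illustration of both forms (the working directory is the one used in this post):

from python_terraform import Terraform

enter = int(input('Enter number of instances: '))
tf = Terraform(working_dir='/home/ja/terraform/demo-3', variables={'count': enter})  # value from a Python variable, no quotes
tf2 = Terraform(working_dir='/home/ja/terraform/demo-3', variables={'count': '2'})   # literal value, quoted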

 

Running the script above will spin up as many instances as we specified at the prompt:


 

Files in the /home/ja/terraform/demo-3 folder:

instances.tf

resource "aws_instance" "example" {
  ami = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"

count="${var.count}"

tags {
Name="${format("test-%01d",count.index+1)}"
}

 output "ime" {
   value = ["${aws_instance.example.*.tags.Name}","${aws_instance.example.*.public_ip}"]
}

 

vars.tf (variable file)

 

variable "AWS_ACCESS_KEY" {
}

variable "count" {
default=2
}

variable "AWS_SECRET_KEY" {
}
variable "AWS_REGION" {
  default = "eu-west-1"
}
variable "AMIS" {
  type = "map"
  default = {
    us-east-1 = "ami-13be557e"
    us-west-2 = "ami-06b94666"
    eu-west-1 = "ami-844e0bf7"
  }
}

 

provider.tf

 

provider "aws" {
    access_key = "${var.AWS_ACCESS_KEY}"
    secret_key = "${var.AWS_SECRET_KEY}"
    region = "${var.AWS_REGION}"
}

 

vars.tf

In this file the region, the path to the private/public key, the AMIs, and the RDS password are specified:

variable "AWS_REGION" {
  default = "eu-west-1"
}
variable "PATH_TO_PRIVATE_KEY" {
  default = "mykey"
}
variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}
variable "AMIS" {
  type = "map"
  default = {
    us-east-1 = "ami-13be557e"
    us-west-2 = "ami-06b94666"
    eu-west-1 = "ami-844e0bf7"
  }
}
variable "RDS_PASSWORD" {
default="MyRDSsimplePassword"
}

instance.tf

The instance type, VPC subnet, security group and public key for the instance are specified here; the AMI and region come from vars.tf:

resource "aws_instance" "example" {
ami = "${lookup(var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"

# the VPC subnet
subnet_id = "${aws_subnet.main-public-1.id}"

# the security group
vpc_security_group_ids = ["${aws_security_group.example-instance.id}"]

# the public SSH key
key_name = "${aws_key_pair.mykeypair.key_name}"

}

sg.tf

In this file two security groups are specified: example-instance allows incoming traffic on port 22 from anywhere (0.0.0.0/0), and allow-mariadb allows access to port 3306 only from the example-instance security group (defined in the same file).

resource "aws_security_group" "example-instance" {
vpc_id = "${aws_vpc.main.id}"
name = "allow-ssh"
description = "security group that allows ssh and all egress traffic"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
Name = "example-instance"
}
}

resource "aws_security_group" "allow-mariadb" {
vpc_id = "${aws_vpc.main.id}"
name = "allow-mariadb"
description = "allow-mariadb"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
security_groups = ["${aws_security_group.example-instance.id}"] # allowing access from our example instance
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
self = true
}
tags {
Name = "allow-mariadb"
}
}

vpc.tf

In this file the Virtual Private Cloud (VPC) is specified: 3 public and 3 private subnets, an internet gateway and a route table; the public subnets are associated with the route table (so they are reachable from the internet).

# Internet VPC
resource "aws_vpc" "main" {
    cidr_block = "10.0.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "true"
    enable_classiclink = "false"
    tags {
        Name = "main"
    }
}


# Subnets
resource "aws_subnet" "main-public-1" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.1.0/24"
    map_public_ip_on_launch = "true"
    availability_zone = "eu-west-1a"

    tags {
        Name = "main-public-1"
    }
}
resource "aws_subnet" "main-public-2" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.2.0/24"
    map_public_ip_on_launch = "true"
    availability_zone = "eu-west-1b"

    tags {
        Name = "main-public-2"
    }
}
resource "aws_subnet" "main-public-3" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.3.0/24"
    map_public_ip_on_launch = "true"
    availability_zone = "eu-west-1c"

    tags {
        Name = "main-public-3"
    }
}
resource "aws_subnet" "main-private-1" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.4.0/24"
    map_public_ip_on_launch = "false"
    availability_zone = "eu-west-1a"

    tags {
        Name = "main-private-1"
    }
}
resource "aws_subnet" "main-private-2" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.5.0/24"
    map_public_ip_on_launch = "false"
    availability_zone = "eu-west-1b"

    tags {
        Name = "main-private-2"
    }
}
resource "aws_subnet" "main-private-3" {
    vpc_id = "${aws_vpc.main.id}"
    cidr_block = "10.0.6.0/24"
    map_public_ip_on_launch = "false"
    availability_zone = "eu-west-1c"

    tags {
        Name = "main-private-3"
    }
}

# Internet GW
resource "aws_internet_gateway" "main-gw" {
    vpc_id = "${aws_vpc.main.id}"

    tags {
        Name = "main"
    }
}

# route tables
resource "aws_route_table" "main-public" {
    vpc_id = "${aws_vpc.main.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.main-gw.id}"
    }

    tags {
        Name = "main-public-1"
    }
}

# route associations public
resource "aws_route_table_association" "main-public-1-a" {
    subnet_id = "${aws_subnet.main-public-1.id}"
    route_table_id = "${aws_route_table.main-public.id}"
}
resource "aws_route_table_association" "main-public-2-a" {
    subnet_id = "${aws_subnet.main-public-2.id}"
    route_table_id = "${aws_route_table.main-public.id}"
}
resource "aws_route_table_association" "main-public-3-a" {
    subnet_id = "${aws_subnet.main-public-3.id}"
    route_table_id = "${aws_route_table.main-public.id}"
}

provider.tf

Gets region from vars.tf

provider "aws" {
    region = "${var.AWS_REGION}"
}

rds.tf

This file specifies the subnet group (which subnets the database will live in; the group consists of the 2 private subnets), the parameter group (parameters which change settings in the database), the DB instance type, credentials, availability zone, subnet and security group:

resource "aws_db_subnet_group" "mariadb-subnet" {
name = "mariadb-subnet"
description = "RDS subnet group"
subnet_ids = ["${aws_subnet.main-private-1.id}","${aws_subnet.main-private-2.id}"]
}

resource "aws_db_parameter_group" "mariadb-parameters" {
name = "mariadb-parameters"
family = "mariadb10.1"
description = "MariaDB parameter group"

parameter {
name = "max_allowed_packet"
value = "16777216"
}
}
resource "aws_db_instance" "mariadb" {
allocated_storage = 100 # 100 GB of storage, gives us more IOPS than a lower number
engine = "mariadb"
engine_version = "10.1.14"
instance_class = "db.t2.small" # use micro if you want to use the free tier
identifier = "mariadb"
name = "mariadb"
username = "root" # username
password = "${var.RDS_PASSWORD}" # password
db_subnet_group_name = "${aws_db_subnet_group.mariadb-subnet.name}"
parameter_group_name = "${aws_db_parameter_group.mariadb-parameters.name}"
multi_az = "false" # set to true to have high availability: 2 instances synchronized with each other
vpc_security_group_ids = ["${aws_security_group.allow-mariadb.id}"]
storage_type = "gp2"
backup_retention_period = 30 # how long you’re going to keep your backups
availability_zone = "${aws_subnet.main-private-1.availability_zone}" # prefered AZ

tags {
Name = "mariadb-instance"
}
}

 

Create the key pair and spin up the instance:

 

 ssh-keygen -f mykey && echo "yes" | terraform apply

 


As we can see, the RDS instance really is located on the private network (as specified in the DB subnet group):

host mariadb.c3wxcgbi9ky2.eu-west-1.rds.amazonaws.com
mariadb.c3wxcgbi9ky2.eu-west-1.rds.amazonaws.com has address 10.0.4.159

Creating Rundeck ACL policies

Posted: February 9, 2018 in Linux, RunDeck

Creating role

vi /var/lib/rundeck/exp/webapp/WEB-INF/web.xml

Search for the security-role section and add the new role name there.


Creating a user

The format is

username:password,rolename

vi /etc/rundeck/realm.properties
demo:demo,user,demo

We created the user demo with password demo and put it in the demo role.

Creating policy

In this example, we'll create a policy allowing the demo role to see only the aws project.

-c context: either 'project' or 'application'

-c application: access to projects, users, storage, system info, execution management

-c project: access to jobs, nodes, events within a project

-a allow: one or more of the following actions:

  • Reading: read
  • Deleting: delete
  • Configuring: configure
  • Importing archives: import
  • Exporting archives: export
  • Deleting executions: delete_execution
  • Exporting a project to another Rundeck instance: promote
  • Full access: admin

-g group

-p project

-j job (read, update, delete, run, runAs, kill, killAs, create)

 

Access to projects (read-only)

rd-acl is a tool for generating policy code which we can append to a policy file (usually /etc/rundeck/admin.aclpolicy):

rd-acl create -c application -g demo -p aws -a read,delete,import>>/etc/rundeck/admin.aclpolicy

Command output:

---
by:
  group: demo
context:
  application: rundeck
for:
  project:
  - allow:
    - read
    - import
    - delete
    equals:
      name: aws
description: generated

Members of the demo role will be able to see only the aws project.


If the role needs access to multiple projects, we just add another block like the following to the /etc/rundeck/admin.aclpolicy file:

---
by:
  group: demo
context:
  application: rundeck
for:
  project:
  - allow:
    - read
    - import
    - delete
    equals:
      name: demo
description: generated

Access to jobs

If we want to allow some jobs, we type the following:

rd-acl create -c project -p aws -g demo -j job2 -a read,run,kill>> /etc/rundeck/admin.aclpolicy

Code added to policy file:

---
by:
  group: demo
context:
  project: aws
for:
  job:
  - allow:
    - read
    - run
    - kill
    equals:
      name: 'job2'

Access to Activity tab

-G generic resource kind: node, event, job
-G event (read, create)

rd-acl create -c project -p aws -g demo -G event -a read >> /etc/rundeck/admin.aclpolicy

Code in policy:

---
by:
  group: demo
context:
  project: aws
for:
  resource:
  - allow: read
    equals:
      kind: event
description: generated


Access to nodes

-G node (read,create,update,refresh)

rd-acl create -c project -p aws -g demo -G node -a read>> /etc/rundeck/admin.aclpolicy

Policy code:

---
by:
  group: demo
context:
  project: aws
for:
  resource:
  - allow: read
    equals:
      kind: node
description: generated

Node access can also be allowed based on a node tag with -t (read, create, update, refresh):

rd-acl create -c project -p aws -g demo -G node -t prod -a read,refresh
---
by:
  group: demo
context:
  project: aws
for:
  node:
  - allow:
    - read
    - refresh
    contains:
      tags:
      - prod
description: generated

Now users who belong to the demo role can see only nodes with the tag prod.

Example of admin ACL

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
by:
  group: [Rundeck_Admin]

description: Full access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: [Rundeck_Admin]

---

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
  project_acl:
    - allow: '*' # allow admin of all project-level ACL policies
  storage:
    - allow: '*' # allow read/create/update/delete for all /keys/* storage content


by:
  group: [Rundeck_Admin]

Read-Only ACL:

description: "Ops Engineers can launch jobs but not edit them"
context:

project: '.*' # all projects

for:

resource:

- equals:

kind: job

allow: [read,run] # allow create jobs

- equals:

kind: node

allow: [read,update,refresh] # allow refresh node sources

- equals:

kind: event

allow: [read] # allow read/create events

adhoc:

- allow: [read,run] # allow running/killing adhoc jobs

job:

- allow: [read,run] # allow create/read/write/delete/run/kill of all jobs

node:

- allow: [read,run] # allow read/run for nodes

by:

group: [Rundeck_Jobs_RunOnly]

In the last post we added a node to Rundeck; now we'll add an EC2 instance as a node.

First, we need to add the AWS EC2 plugin:

cd /var/lib/rundeck/libext/
wget https://github.com/rundeck-plugins/rundeck-ec2-nodes-plugin/releases/download/v1.5.1/rundeck-ec2-nodes-plugin-1.5.1.jar
systemctl restart rundeckd

Now create a new project and add a source: AWS EC2 Resources.


Specify the Access Key, Secret Key and Endpoint (for the list of endpoints refer to https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region).

In the Mapping Params field specify:

name.selector=tags/Name;
hostname.selector=publicDnsName;
description.default=Ec2 node instance;
osArch.selector=architecture;
osFamily.selector=platform;
osFamily.default=unix;
osName.selector=platform;
osName.default=Linux;
username.selector=tags/Rundeck-User;
username.default=root;
ssh-keypath.default=/var/lib/rundeck/.ssh/id_rsa;
editUrl.default=https://console.aws.amazon.com/ec2/home#c=EC2&s=Instances;
attribute.publicIpAddress.selector=publicIpAddress;
attribute.publicDnsName.selector=publicDnsName;
tags.selector=tags/Rundeck-Tags

Click Save; the EC2 node(s) should now be visible in Rundeck.

The corresponding line in the project properties:

resources.source.2.config.mappingParams=name.selector\=tags/Name;hostname.selector\=publicDnsName;description.default\=Ec2 node instance;osArch.selector\=architecture;osFamily.selector\=platform;osFamily.default\=unix;osName.selector\=platform;osName.default\=Linux;username.selector\=tags/Rundeck-User;username.default\=root;ssh-keypath.default\=/var/lib/rundeck/.ssh/id_rsa;editUrl.default\=https\://console.aws.amazon.com/ec2/home\#c\=EC2&s\=Instances;attribute.publicIpAddress.selector\=publicIpAddress;attribute.publicDnsName.selector\=publicDnsName;tags.selector\=tags/Rundeck-Tags


On the Rundeck server, create a key pair if not already done:

ssh-keygen -t rsa
cp /root/.ssh/id_rsa /var/lib/rundeck/.ssh/id_rsa
cp /root/.ssh/id_rsa.pub /var/lib/rundeck/.ssh/id_rsa.pub

Now copy the content of id_rsa.pub to /root/.ssh/authorized_keys on the EC2 instance.

In the Rundeck GUI, click on the project, then Nodes; the EC2 instance should be visible.

 


 

Commands should also execute successfully against the node.

 


 

Running AWS CLI from Rundeck server:

Install the AWS CLI on the Rundeck server.

On Rundeck, go to the Commands tab, specify the local server as the node, and enter the following commands in the interface:

 

aws configure set aws_access_key_id your_access_key
aws configure set aws_secret_access_key your_secret_key
aws configure set default.region us-west-2

 

Rundeck is open-source software that helps automate routine operational procedures in data center or cloud environments.

Installation:

Rundeck can be configured to use an RDB instead of the default file-based data storage; an RDB is recommended in large environments. In this post we'll use file-based storage.

Rundeck requires Java:

# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

Create a java.sh file in /etc/profile.d and add the content below:

#!/bin/bash
JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
PATH=$JAVA_HOME/bin:$PATH
export PATH JAVA_HOME
export CLASSPATH=.

Then make file executable

chmod +x /etc/profile.d/java.sh
source /etc/profile.d/java.sh

Rundeck is available on port 4440, so that port needs to be open. Add the line below to /etc/sysconfig/iptables and restart iptables:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 4440 -j ACCEPT
/etc/init.d/iptables restart

Installing Rundeck:

rpm -Uvh http://repo.rundeck.org/latest.rpm 
yum install rundeck
/etc/init.d/rundeckd start

To make sure the service is running:

/etc/init.d/rundeckd status
netstat -anp | grep '4440\|4443'

The default username and password is admin:admin; if a password change for admin is required, edit the file /etc/rundeck/realm.properties.

Change the following line in the file /etc/rundeck/rundeck-config.properties:

# From:
grails.serverURL=http://localhost:4440

# To:
grails.serverURL=http://ip address:4440

Modify the below lines in file: /etc/rundeck/framework.properties

framework.server.name = localhost
framework.server.hostname = localhost
framework.server.port = 4440
framework.server.url = http://localhost:4440

to

framework.server.name = ip address
framework.server.hostname = ip address
framework.server.port = 4440
framework.server.url = http://ip address:4440

Now, restart the service and try to login: http://ipaddress:4440

Adding nodes

At the moment there is no feature which would allow adding nodes using the GUI:
https://github.com/rundeck/rundeck/issues/1584

Create New project


Clear SSH key path


And click Create


Go to /var/rundeck/projects/<project name>/etc and edit the resources.xml file.

Add a line like the following for every new node (a server which needs to be managed):

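A node entry in resources.xml typically looks like this (the node name and hostname here are placeholders; the key storage path matches the one referenced below):

<node name="server1" description="Linux node" tags="" hostname="192.168.1.20" osFamily="unix" username="root" ssh-key-storage-path="keys/Linuxtopic/server.key"/>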

The new node appears in the web interface.


To add another node, just copy the node line and change the name and node IP address.

Creating keypair on Rundeck server

ssh-keygen

Copy the private key to the clipboard:

cat /root/.ssh/id_rsa


Now, in the Rundeck interface, click Settings (cog icon) - Key Storage.


Click Add or Upload a Key


Make sure Private Key is selected from the drop-down list, paste the content of ~/.ssh/id_rsa and give the key a name. Note: the storage path and key name must match the names in the /var/rundeck/projects/<project name>/etc/resources.xml file
(ssh-key-storage-path="keys/Linuxtopic/server.key").

Instead of private/public keys, a password can be used as the authentication method.


On the client (node) create an authorized_keys file under /root/.ssh, and copy the content of the id_rsa.pub file (the public key) from the Rundeck server into it.
Repeat this step for every new node (copy the public key from the Rundeck server to /root/.ssh/authorized_keys on every node).

Running command

Now that we've added the node, we can run a command on it: from the Rundeck server go to Commands and type the command. Under Nodes, type the node name and click "Run on node".


Key storage

The private keys uploaded to Rundeck in the previous steps are stored locally on the Rundeck server, in the /var/lib/rundeck/var/storage/content/keys/ folder.


Passing Rundeck password storage to a script

Create password storage:


Create a job, add an option, mark it as Secure, and select the password storage created in the previous step.


In the script options, specify the arguments.


In the script body, use the argument:

jira_password=$1
curl -u "user:$jira_password"
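
The same pattern in Python, as a minimal sketch (assuming the job passes the secure option value as the first script argument):

#!/usr/bin/python
# read the Rundeck secure option passed as the first argument
import sys

jira_password = sys.argv[1]
# use jira_password in the JIRA call instead of a hard-coded credential
print("received password of length " + str(len(jira_password)))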

Allowing null/empty values as a parameter

If you have a script which accepts optional parameters, set the Default value in Rundeck to " " (this only works if the step is a Local Command).


Scheduling jobs

Rundeck uses the Quartz cron syntax for scheduling jobs. The fields are, in order: seconds, minutes, hours, day of month, month, day of week, year.

Run on the first day of every month at 09:00 AM:

0 00 09 1 * ? *

Run every hour:

0 0 0/1 1/1 * ? *

Run every 55 minutes:

0 0/55 * 1/1 * ? *

Run every 2nd Friday:

0 15 10 ? * 6#2 *

6 – day of the week (Friday)

2 – week number

Run on the last Friday of the month:

0 15 10 ? * 6L *

6 – day of the week (Friday)
L – last occurrence in the month

This one runs quarterly (March, June, September, December) on the 4th Sunday of the month at 10:14 AM:

0 14 10 ? MAR,JUN,SEP,DEC 1#4 *
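
1 – day of the week (Sunday)

4 – week number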

Changing the "from" Rundeck email address

Edit /etc/rundeck/rundeck-config.properties and add:

grails.mail.default.from=some@mail.com

Script to test whether the Rundeck service is running:

#!/usr/bin/python
import sys
import os
import commands

# redirect all output to a log file
sys.stdout = open('log.txt', 'wt')

output = commands.getoutput('ps -A')
if 'runuser' in output:
    print("Rundeck is up and running!")
else:
    os.system("systemctl start rundeckd")
    print("Rundeck service started")

We can execute this script via cron:

*/5 * * * * /usr/bin/python /root/scripts/service.py