SubInACL is a Windows command-line utility used to manage share and NTFS permissions.

I used the ADMT tool to migrate users, groups, shares, and computer objects from one domain to another. During computer migration, I chose to retain the old permissions and add new ones, so that users from both domains could access the shares during the migration process.

In my case the source domain is source.local and the destination domain is destination.local, and I had share and NTFS permissions from both domains.

 

We first need to export the share and NTFS permissions to files:

cd "C:\Program Files (x86)\Windows Resource Kits\Tools"
Subinacl /noverbose /output=C:\NTFSPermissions.txt /subdirectories "C:\SHARE\*.*"

Subinacl /noverbose /output=C:\SharedPermissions.txt /share "\\FS\SHARE"

# or Subinacl /outputlog="c:\outputlog.txt"  /subdirectories "C:\SHARE\*.*"  /changedomain=source=destination

Then replay the exported files, replacing the old domain (source) with the new one (destination):

subinacl /playfile C:\NTFSPermissions.txt /replacestringonoutput=source=destination

subinacl /playfile C:\SharedPermissions.txt /replacestringonoutput=source=destination

Adding new-domain ACE entries to share and NTFS permissions

In case we didn’t use ADMT to migrate the file server computer, i.e. we just copied the shared folder together with its permissions:

ROBOCOPY "\\source\sharelocation" "\\destination\sharelocation" /MIR /SEC /LOG:path\to\logfile.log

Only ACE entries from the source (old) domain are present.

In that case, we can use the SubInACL /migratetodomain switch to add ACE entries for the new domain to the share and NTFS permissions.

# add ACE entries to root folder only
Subinacl /outputlog="c:\outputlog.txt"  /subdirectories "C:\SHARE"  /migratetodomain=source=destination

# add ACE entries to subfolder/files

Subinacl /outputlog="c:\outputlog.txt"  /subdirectories "C:\SHARE\*.*"  /migratetodomain=source=destination

Azure – Deploy VM using ARM templates

Posted: August 20, 2020 in Azure

Azure Resource Manager templates are JavaScript Object Notation (JSON) files that define the infrastructure and configuration for your resources.

This template creates one Windows Server VM with one data disk and one NSG associated with the VNet subnet, allowing RDP traffic from anywhere.

It then creates a storage account for storing boot diagnostics data and installs the AzurePerformanceDiagnostics extension.

Parameters file (VM credentials and DNS label prefix)

param.json:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "adminUsername": {
        "value": "GEN-UNIQUE"
      },
      "adminPassword": {
        "value": "Password1234"
      },
      "dnsLabelPrefix": {
        "value": "gen-unique124"
      }
    }
  }

template.json

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "performanceScenario": {
        "type": "string",
        "defaultValue": "basic"
      },
      "srNumber": {
        "type": "string",
        "defaultValue": ""
      },
      "traceDurationInSeconds": {
        "type": "int",
        "defaultValue": 300
      },
      "perfCounterTrace": {
        "type": "string",
        "defaultValue": "p"
      },
      "networkTrace": {
        "type": "string",
        "defaultValue": ""
      },
      "xperfTrace": {
        "type": "string",
        "defaultValue": ""
      },
      "storPortTrace": {
        "type": "string",
        "defaultValue": ""
      },
      "requestTimeUtc": {
        "type": "string",
        "defaultValue": "10/2/2017 11:06:00 PM"
      },
      "adminUsername": {
        "type": "string",
        "metadata": {
          "description": "Username for the Virtual Machine."
        }
      },
      "adminPassword": {
        "type": "securestring",
        "metadata": {
          "description": "Password for the Virtual Machine."
        }
      },
      "dnsLabelPrefix": {
        "type": "string",
        "metadata": {
          "description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
        }
      },
      "windowsOSVersion": {
        "type": "string",
        "defaultValue": "2016-Datacenter",
        "allowedValues": [
          "2008-R2-SP1",
          "2012-Datacenter",
          "2012-R2-Datacenter",
          "2016-Nano-Server",
          "2016-Datacenter-with-Containers",
          "2016-Datacenter",
          "2019-Datacenter"
        ],
        "metadata": {
          "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version."
        }
      },
      "vmSize": {
        "type": "string",
        "defaultValue": "Standard_B2ms",
        "metadata": {
          "description": "Size of the virtual machine."
        }
      },
      "location": {
        "type": "string",
        "defaultValue": "[resourceGroup().location]",
        "metadata": {
          "description": "Location for all resources."
        }
      }
    },
    "variables": {
      "storageAccountName": "[concat(uniquestring(resourceGroup().id), 'sawinvm')]",
      "storageAccountId": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
      "nicName": "myVMNic",
      "addressPrefix": "10.0.0.0/16",
      "subnetName": "Subnet",
      "subnetPrefix": "10.0.0.0/24",
      "publicIPAddressName": "myPublicIP",
      "vmName": "SimpleWinVM",
      "virtualNetworkName": "MyVNET",
      "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
      "networkSecurityGroupName": "default-NSG"
    },
    "resources": [
      {
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2018-11-01",
        "name": "[variables('storageAccountName')]",
        "location": "[parameters('location')]",
        "sku": {
          "name": "Standard_LRS"
        },
        "kind": "Storage",
        "properties": {}
      },
      {
        "type": "Microsoft.Network/publicIPAddresses",
        "apiVersion": "2018-11-01",
        "name": "[variables('publicIPAddressName')]",
        "location": "[parameters('location')]",
        "properties": {
          "publicIPAllocationMethod": "Dynamic",
          "dnsSettings": {
            "domainNameLabel": "[parameters('dnsLabelPrefix')]"
          }
        }
      },
      {
        "comments":  "Default Network Security Group for template",
        "type":  "Microsoft.Network/networkSecurityGroups",
        "apiVersion":  "2019-08-01",
        "name":  "[variables('networkSecurityGroupName')]",
        "location":  "[parameters('location')]",
        "properties": {
          "securityRules": [
            {
              "name":  "default-allow-3389",
              "properties": {
                "priority":  1000,
                "access":  "Allow",
                "direction":  "Inbound",
                "destinationPortRange":  "3389",
                "protocol":  "Tcp",
                "sourcePortRange":  "*",
                "sourceAddressPrefix":  "*",
                "destinationAddressPrefix":  "*"
              }
            }
          ]
        }
      },
      {
        "type": "Microsoft.Network/virtualNetworks",
        "apiVersion": "2018-11-01",
        "name": "[variables('virtualNetworkName')]",
        "location": "[parameters('location')]",
        "dependsOn": [
          "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
        ],
        "properties": {
          "addressSpace": {
            "addressPrefixes": [
              "[variables('addressPrefix')]"
            ]
          },
          "subnets": [
            {
              "name": "[variables('subnetName')]",
              "properties": {
                "addressPrefix": "[variables('subnetPrefix')]",
                "networkSecurityGroup": {
                  "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
                }
              }
            }
          ]
        }
      },
      {
        "type": "Microsoft.Network/networkInterfaces",
        "apiVersion": "2018-11-01",
        "name": "[variables('nicName')]",
        "location": "[parameters('location')]",
        "dependsOn": [
          "[resourceId('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]",
          "[resourceId('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
        ],
        "properties": {
          "ipConfigurations": [
            {
              "name": "ipconfig1",
              "properties": {
                "privateIPAllocationMethod": "Dynamic",
                "publicIPAddress": {
                  "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
                },
                "subnet": {
                  "id": "[variables('subnetRef')]"
                }
              }
            }
          ]
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines",
        "apiVersion": "2018-10-01",
        "name": "[variables('vmName')]",
        "location": "[parameters('location')]",
        "dependsOn": [
          "[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
          "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
        ],
        "properties": {
          "hardwareProfile": {
            "vmSize": "[parameters('vmSize')]"
          },
          "osProfile": {
            "computerName": "[variables('vmName')]",
            "adminUsername": "[parameters('adminUsername')]",
            "adminPassword": "[parameters('adminPassword')]"
          },
          "storageProfile": {
            "imageReference": {
              "publisher": "MicrosoftWindowsServer",
              "offer": "WindowsServer",
              "sku": "[parameters('windowsOSVersion')]",
              "version": "latest"
            },
            "osDisk": {
              "createOption": "FromImage"
            },
            "dataDisks": [
              {
                "diskSizeGB": 1023,
                "lun": 0,
                "createOption": "Empty"
              }
            ]
          },
          "networkProfile": {
            "networkInterfaces": [
              {
                "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
              }
            ]
          },
          "diagnosticsProfile": {
            "bootDiagnostics": {
              "enabled": true,
              "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob]"
            }
          }
        }
      },
      {
        "name": "[concat(variables('vmName'),'/AzurePerformanceDiagnostics')]",
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "location":  "[resourceGroup().location]",
        "apiVersion": "2015-06-15",
        "properties": {
          "publisher": "Microsoft.Azure.Performance.Diagnostics",
          "type": "AzurePerformanceDiagnostics",
          "typeHandlerVersion": "1.0",
          "autoUpgradeMinorVersion": true,
          "settings": {
              "storageAccountName": "[variables('storageAccountName')]",
              "performanceScenario": "[parameters('performanceScenario')]",
              "traceDurationInSeconds": "[parameters('traceDurationInSeconds')]",
              "perfCounterTrace": "[parameters('perfCounterTrace')]",
              "networkTrace": "[parameters('networkTrace')]",
              "xperfTrace": "[parameters('xperfTrace')]",
              "storPortTrace": "[parameters('storPortTrace')]",
              "srNumber": "[parameters('srNumber')]",
              "requestTimeUtc":  "[parameters('requestTimeUtc')]",
              "resourceId": "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
          },
          "protectedSettings": {
              "storageAccountKey": "[listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value]"
          }
        },
        "dependsOn": [
           "[resourceId('Microsoft.Compute/virtualMachines/', variables('vmName'))]",
           "[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]"
        ]
    }
    ],
    "outputs": {
      "hostname": {
        "type": "string",
        "value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]"
      }
    }
}

Log in to Azure:

Connect-AzureRmAccount

Create a resource group for the resources

New-AzureRmResourceGroup -Name rg -Location 'west europe'

Deploy the resources using the ARM template files (these examples use the AzureRM PowerShell module; with the newer Az module the equivalents are Connect-AzAccount, New-AzResourceGroup, and New-AzResourceGroupDeployment)

New-AzureRmResourceGroupDeployment -ResourceGroupName 'rg' -TemplateFile 'template.json' -TemplateParameterFile 'param.json'

Running a VM post-provisioning script

If we need to perform some configuration after the VM is deployed, we can use the Custom Script extension.

First we need to upload the script to a storage account. This is a simple script for installing IIS (iis.ps1):

Install-Windowsfeature -name Web-Server -IncludeManagementTools
Set-Location -Path c:\inetpub\wwwroot
Add-Content iisstart.htm `
  "<H1><center>Welcome to my Web Server $env:COMPUTERNAME, Azure Rocks!</center></H1>"

Then we need to add the extension block to the JSON file

   {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "apiVersion": "2019-07-01",
      "name": "[concat(variables('vmName'), '/CustomScriptExtension')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
          "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
      ],
      "properties": {
          "autoUpgradeMinorVersion": true,
          "publisher": "Microsoft.Compute",
          "type": "CustomScriptExtension",
          "typeHandlerVersion": "1.9",
          "settings": {
              "fileUris": [
                  "https://p27zkcgb2etnksawinvm.blob.core.windows.net/ddd/iis.ps1?sp=r&se=2020-08-21T15:09:38Z&sv=2019-12-12&sr=b&sig=oP4U77z4i9UdHUoL1yPSnKSzO%2BuVAQugzbqjp4j%2Frpw%3D"
              ],
              "commandToExecute": "[concat('PowerShell -ExecutionPolicy Unrestricted -File \"', 'iis.ps1', '\"')]"
          },
          "protectedSettings": {}
      }
  }

In order to issue an SSL certificate on behalf of another user, we first need an Enrollment Agent certificate. To create it, we need to duplicate the Enrollment Agent certificate template.

Creating and issuing the Enrollment Agent certificate

In the Certification Authority console, right click on Certificate Templates-Manage

Right click “Enrollment Agent”-Duplicate template

Under the Cryptography tab make sure Microsoft Enhanced Cryptographic Provider v1.0 is selected, minimum key size: 2048

Under Request handling, check “Allow private key to be exported”

Under the Security tab make sure Authenticated Users have the “Read” and “Enroll” permissions.

Right click Certificate Templates-New-Certificate Template to Issue

Select template and click OK.

Configuring certificate template for enrollment

On the certificate template which should be enrolled to users, right click-Properties-Issuance Requirements tab

This number of authorized signatures: 1

Policy type required in signature: Application policy

Application policy: Certificate Request Agent

Enrolling the Enrollment Agent certificate

On your workstation machine open the Certificates console, expand Personal, right click on Certificates-All Tasks-Request New Certificate

Click Next twice, select the certificate we just issued, and click Enroll

Issuing a certificate for another user

On your workstation machine open the Certificates console, expand Personal, right click on Certificates-All Tasks-Advanced Operations-Enroll On Behalf Of

On the Signing Certificate page click Browse-the certificate should pop up automatically

Select the certificate template you want to issue to the user.

When asked for the user click Browse, in Location specify the domain, and type the user for whom you need to issue the certificate. You can issue a certificate for only one user at a time.

Enrollment agents

Enrollment agents are users who can issue certificates on behalf of other users.

On the CA machine, open the Certification Authority console, right click on the machine name-Properties-Enrollment Agents.

Here you can specify who can issue specific templates.

PowerShell script for issuing multiple SSL certificates

This script issues certificates for multiple enabled AD users (the search filter can be modified)

function GenerateSSLCert {
    [CmdletBinding()]
    param (
     
    [Parameter(Mandatory=$true)][string]$userName
    )
      
    $PKCS10 = New-Object -ComObject X509Enrollment.CX509CertificateRequestPkcs10
    # Certificate template name for issuing to users
    $PKCS10.InitializeFromTemplateName(0x1,"GP") 
    $PKCS10.Encode()
    $pkcs7 = New-Object -ComObject X509enrollment.CX509CertificateRequestPkcs7
    $pkcs7.InitializeFromInnerRequest($pkcs10)
    $pkcs7.RequesterName = "test\$userName"
    $signer = New-Object -ComObject X509Enrollment.CSignerCertificate
    # Thumbprint of the Enrollment Agent certificate
    $signer.Initialize(0,0,0xc,"xxxxxxxxxxxxxxxxxxxxxxxx")
    $pkcs7.SignerCertificate = $signer
    $Request = New-Object -ComObject X509Enrollment.CX509Enrollment
    $Request.InitializeFromRequest($pkcs7)
    $Request.Enroll()
}
   

Get-ADUser -SearchBase "OU=Users,DC=test,DC=local" -filter * | Where { $_.DistinguishedName -notmatch  "OU=DisabledUsers|OU=OpenVPNUsers|OU=ServiceUsers" -and $_.Enabled -eq $True } | ForEach-Object {
 
   Try{
      $user = $_.SamAccountName
      GenerateSSLCert $user 
   }
   Catch{
      Write-Host $_.Exception.Message
      Write-Host "SSL certificate not created for user: $user"
   }
 }

In order to test MariaDB migration, I had to create some databases, tables, and users. If you have similar needs you can try this script. It will:

  • Install MariaDB
  • Perform initial configuration (set the root password and remove the test database)
  • Create 2 sample databases with one table each, and populate the tables with dummy data
  • Create 2 users and assign permissions on the appropriate database

This was tested on CentOS 8.

#!/bin/bash

echo "Installing and configuring mariadb..."

sudo dnf module install mariadb -y
sudo systemctl enable mariadb
sudo systemctl start mariadb

root_password=mypass

# Make sure that NOBODY can access the server without a password
sudo mysql -e "UPDATE mysql.user SET Password = PASSWORD('$root_password') WHERE User = 'root'"

# Kill the anonymous users
sudo mysql -e "DROP USER IF EXISTS ''@'localhost'"
# Because our hostname varies we'll use some Bash magic here.
sudo mysql -e "DROP USER IF EXISTS ''@'$(hostname)'"
# Kill off the demo database
sudo mysql -e "DROP DATABASE IF EXISTS test"


echo "Creating staging database..."

sudo mysql -e "CREATE DATABASE IF NOT EXISTS staging"

echo "Creating production database..."

sudo mysql -e "CREATE DATABASE IF NOT EXISTS production"

echo "Creating table tasks in staging database..."

sudo mysql -e "use staging;CREATE TABLE IF NOT EXISTS tasks ( \
    task_id INT AUTO_INCREMENT PRIMARY KEY, \
    title VARCHAR(255) NOT NULL, \
    start_date DATE, \
    due_date DATE, \
    status TINYINT NOT NULL, \
    priority TINYINT NOT NULL, \
    description TEXT \
    ) ENGINE=INNODB;"

echo "Table tasks created."


echo "Inserting data into tasks table..."


query1="use staging; INSERT INTO tasks (title, start_date, due_date, status, priority, description) \
        VALUES('task1', '2020-07-01', '2020-07-31', 1, 1, 'this is the first task')"


query2="use staging; INSERT INTO tasks (title, start_date, due_date, status, priority, description) \
        VALUES('task2', '2020-08-01', '2020-08-31', 2, 2, 'this is the second task')"


query3="use staging; INSERT INTO tasks (title, start_date, due_date, status, priority, description) \
        VALUES('task3', '2020-09-01', '2020-09-30', 1, 1, 'this is the third task')"


query4="use staging; INSERT INTO tasks (title, start_date, due_date, status, priority, description) \
        VALUES('task4', '2020-10-01', '2020-10-31', 1, 1, 'this is fourth task')"





sudo mysql -e "$query1"
sudo mysql -e "$query2"
sudo mysql -e "$query3"
sudo mysql -e "$query4"


echo "Inserting dummy data into tasks table finished"


echo "Creating table named 'completed' in the production database..."


sudo mysql -e "use production; CREATE TABLE IF NOT EXISTS completed ( \
    task_id INT AUTO_INCREMENT PRIMARY KEY, \
    task_name VARCHAR(255) NOT NULL, \
    finished_date DATE, \
    status TEXT, \
    description TEXT \
    ) ENGINE=INNODB;"

echo "Populating completed table with some dummy data..."

query_1="use production; INSERT INTO completed (task_name, finished_date, status, description) \
        VALUES('task1', '2020-07-31','done', 'task one finished')"


query_2="use production; INSERT INTO completed (task_name, finished_date, status, description) \
        VALUES('task2', '2020-08-31','completed', 'task two finished')"

query_3="use production; INSERT INTO completed (task_name, finished_date, status, description) \
        VALUES('task3', '2020-09-30','done', 'task three finished')"

query_4="use production; INSERT INTO completed (task_name, finished_date, status, description) \
        VALUES('task4', '2020-10-31','done', 'task four finished')"




sudo mysql -e "$query_1"
sudo mysql -e "$query_2"
sudo mysql -e "$query_3"
sudo mysql -e "$query_4"

echo "Table 'completed' populated with dummy data."

echo "Creating staging_user and granting all privileges on the staging database..."

sudo mysql -e "CREATE USER IF NOT EXISTS 'staging_user'@'localhost' IDENTIFIED BY 'password1'"

sudo mysql -e "GRANT ALL PRIVILEGES ON staging.* TO 'staging_user'@'localhost'"


echo "Creating production_user and granting all privileges on the production database..."

sudo mysql -e "CREATE USER IF NOT EXISTS 'production_user'@'localhost' IDENTIFIED BY 'password2'"

sudo mysql -e "GRANT ALL PRIVILEGES ON production.* TO 'production_user'@'localhost'"

# Make our changes take effect
sudo mysql -e "FLUSH PRIVILEGES"

Script for backing up MariaDB databases

This script backs up all non-system databases, each database to a separate file.

#!/bin/bash

# mariadb credentials
db_user="root"
db_password="mypass"

# get all databases

databases=$(sudo mysql -u$db_user -p$db_password -sse "show databases")

# Create an array and remove system databases

declare -a dbs=($(echo $databases | sed -e 's/information_schema//g;s/mysql//g;s/performance_schema//g'))

# Loop through an array and backup databases to separate file


# repair

mysqlcheck -u$db_user -p$db_password --auto-repair --check --all-databases

# export databases

for db in "${dbs[@]}"; do

   mysqldump -u$db_user -p$db_password --databases $db > $db.sql

done

# export users and privileges

mysqldump -u$db_user -p$db_password mysql user > users.sql
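One caveat: the sed filter above removes the system database names as substrings, so a user database whose name merely contains e.g. "mysql" would be mangled too. A sketch of a safer whole-name filter with grep (the database list here is hard-coded for illustration, standing in for the output of `show databases`):

```shell
# Hypothetical database list, standing in for: sudo mysql -sse "show databases"
databases="information_schema
mysql
performance_schema
staging
production"

# grep -x matches the whole line only, so a database named e.g. "mysql_app"
# would survive the filter instead of being mangled.
user_dbs=$(echo "$databases" | grep -vxE 'information_schema|mysql|performance_schema')

echo "$user_dbs"
```

In the backup script, `$user_dbs` could then feed the same `declare -a dbs=(...)` array as before.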

Script for restoring MariaDB databases

#!/bin/bash

# folder where dump files are copied
directory="/tmp"
# mariadb credentials
db_user="root"
db_password="mypass"

# list all sql files in $directory
files=$(find $directory -type f -name "*.sql")

# put all sql files into $sql_dumps array

declare -a sql_dumps=($files)

# Users are dumped to users.sql file.

for sql_dump in "${sql_dumps[@]}"; do
    if [[ $sql_dump == *"users.sql"* ]]; then
       # import users and privileges
       sudo mysql -u$db_user -p$db_password mysql < $directory/users.sql
    else
        # import databases
        sudo mysql -u$db_user -p$db_password < $sql_dump
    fi
done

# Apply changes

sudo mysql -u$db_user -p$db_password -e "FLUSH PRIVILEGES"
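Note that the restore script word-splits the output of find, which breaks on dump file names containing spaces. A sketch of a more robust loop that quotes a glob expansion instead (it uses a throwaway directory with made-up file names, not the real dump location, and only echoes what it would import):

```shell
# Iterate over .sql dumps in a flat directory, safely handling
# names with spaces by quoting the glob expansion.
directory=$(mktemp -d)
touch "$directory/users.sql" "$directory/my db.sql"

count=0
for sql_dump in "$directory"/*.sql; do
    case "$sql_dump" in
        *users.sql) echo "would import users from: $sql_dump" ;;
        *)          echo "would import database dump: $sql_dump" ;;
    esac
    count=$((count + 1))
done

echo "processed $count dumps"
rm -rf "$directory"
```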

In order for a user to be able to establish a VPN connection with GlobalProtect, an SSL certificate needs to be installed on the machine; also, the username specified in the certificate subject must exist on the on-premises domain controller.

In this example certificates are issued from internal Certificate authority.

Generating a certificate for the Palo Alto firewall

We need to issue an SSL certificate for the public DNS name of the firewall.

Device-certificates-device certificates-generate

Click the checkbox next to the certificate name, or any whitespace on that line, to select it, then click Export Certificate

A file with a .csr extension will be created. Copy that file to the certification authority machine, open cmd and run certreq -submit -attrib "CertificateTemplate:template_name", where “template_name” is the name of the certificate template.

You’ll be prompted for csr file

Then select certification authority

The certificate will be created with a .cer extension.

Copy the certificate to the machine on which you created the certificate request. Now click Device-Certificates-Device Certificates in the Palo Alto web interface and click Import in the bottom panel.


In the Import Certificate dialog, type the name of the pending certificate. It must exactly match the name we used when creating the certificate request.

Importing CA root certificate

On the certificate authority open MMC-Local Computer-Trusted Root Certification Authorities-Certificates, right click on the CA certificate-All Tasks-Export and select Base-64 encoded

Save the file with a .cer extension

On PaloAlto Device-certificates-device certificates-import

Creating LDAP profile

Specify Domain controllers for user authentication

Device-Server Profile-Add-type: active-directory; specify the Domain Controllers, the base DN for user search, and a service account and password for searching AD.

Creating Authentication profile

Device-Authentication profile-Add

Specify the Login Active Directory attribute, select LDAP as the type, and select the LDAP profile created in the previous step

Creating Certificate profile

Device-certificate manager-Certificate profile-Add

In the Username field select Subject Alt and choose Email or User Principal Name; this means that Palo Alto gets the username from the SSL certificate, and it will be automatically populated in the GlobalProtect username field.

In the Advanced tab select the security group for VPN users. Users who need VPN access must be members of this group.

Creating SSL/TLS profiles

This profile defines which SSL certificate and which SSL/TLS versions will be used

Device-Certificate management-SSL/TLS Service profile-Add

Specify the Palo Alto SSL certificate we generated and uploaded in one of the previous steps, and the minimum TLS version.

GlobalProtect portal configuration

Network-GlobalProtect-Portals-Add

On the General page select the publicly accessible interface and IP address

In the Authentication tab specify the SSL/TLS Service profile we created in the previous step and the Certificate profile, then click Add

Specify authentication profile

In the Agent tab, under Agent, click Add

In Authentication tab,For Client Certificate select Local and select PaloAlto FW certificate

In External tab specify DNS name of PaloAlto firewall

Click OK. Now under the Agent tab, under Trusted Root CA, click Add and add the CA root certificate we imported earlier.

Configuring GlobalProtect Gateway

Network-GlobalProtect-Gateways-Add

Specify PaloAlto firewall publicly accessible interface and IP address

Under Authentication tab select SSL/TLS Service profile and certificate profile and click Add

Select Authentication profile, set name and click OK

Under Agent tab specify tunnel settings

Under Client setting specify IP pools and Access routes

Under Network Services specify primary and secondary DNS servers

Commit changes

Generating user certificates

Now we need to create and issue SSL certificates for the users who will connect to the VPN using GlobalProtect

On Issuing certificate authority, in Certification authority console, right click on Certificate templates-Manage

The Certificate Templates console will launch; right click on the User template-Duplicate Template.

In the Subject Name tab make sure the settings are as in the picture below

In the General tab select the validity period; once done, click OK.

In security tab, make sure Domain users have rights to Enroll and Read.

Once all is set click OK, right click Certificate Templates-New-Certificate Template to Issue

Select template and click OK.

Installing SSL certificate on user computers

To use the VPN, install the GlobalProtect software.

Open the local computer certificate store (Start-Run-certlm.msc). Expand Personal, right click on Certificates-All Tasks-Request New Certificate, select the certificate template, and install the certificate

Now you should be able to initiate VPN connection

I faced issues when trying to synchronize existing Office 365 users with on-premises Active Directory:

  • Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [UserPrincipalName john@contoso.com;]. Correct or remove the duplicate values in your local directory.
  • Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses SMTP:john@contoso.com;]. Correct or remove the duplicate values in your local directory.

While googling I stumbled onto this post; I had to hard-link the on-premises and Office 365 accounts with the ImmutableID.

I modified the script in the above post to filter the OUs from which to list users, and added a condition to set the ImmutableID only for those Office 365 users who have a corresponding account in on-premises AD and whose ImmutableID does not match.

I didn’t have to stop AD synchronization in my case.

# Connect to Office 365
Connect-MsolService

# Base OU to search from
$ou = "OU=Users,OU=Dev,DC=test,DC=local"

# Get AD users, filtering out specific OUs

$ADGuidUsers = Get-ADUser -SearchBase $ou -Filter * | Where { $_.DistinguishedName -notmatch  "OU=DisabledUsers|OU=OpenVPNUsers|OU=ServiceUsers|OU=AzureAD" -and $_.Enabled -eq $True}  | Select Userprincipalname,ObjectGUID | Sort-Object Userprincipalname


# Get Office365 users

$OnlineUsers = Get-MsolUser | Select UserPrincipalName,DisplayName,ProxyAddresses,ImmutableID | Sort-Object UserPrincipalName


foreach ($OnlineUser in $OnlineUsers) {
  
  foreach ($ADGuidUser in $ADGuidUsers){

  # If user in AD equals to Office365 user, Convert the GUID to the Immutable ID format

   if ($OnlineUser.UserPrincipalName -eq $ADGuidUser.Userprincipalname) {
   
      $UserimmutableID = [System.Convert]::ToBase64String($ADGuidUser.ObjectGUID.tobytearray())

      # If Office365 and Local AD ImmutableID don't match, make them equal 

      if($UserimmutableID -ne $OnlineUser.ImmutableID){
    
      # Print the Immutable ID and UPN pair for verification
      $UserimmutableID,$OnlineUser.UserPrincipalName -join ','

      # Sets the office 365 user with the OnPrem AD ImmutableID

      Set-MSOLuser -UserPrincipalName $OnlineUser.UserPrincipalName -ImmutableID $UserimmutableID
   }
  }
 }
}
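The ImmutableID set above is just the Base64 encoding of the AD objectGUID's byte representation, where .NET's Guid.ToByteArray() stores the first three GUID fields little-endian. A standalone shell sketch of that conversion (the sample GUID is made up):

```shell
# Convert an AD objectGUID (string form) to the Base64 ImmutableID,
# reproducing .NET's Guid.ToByteArray() byte order.
guid_to_immutableid() {
  local g=${1//-/}   # strip dashes -> 32 hex characters
  local bytes=""
  # The first three GUID fields are stored little-endian (byte-reversed)
  bytes+="\x${g:6:2}\x${g:4:2}\x${g:2:2}\x${g:0:2}"
  bytes+="\x${g:10:2}\x${g:8:2}"
  bytes+="\x${g:14:2}\x${g:12:2}"
  # The remaining 8 bytes keep their order
  local i
  for ((i=16; i<32; i+=2)); do bytes+="\x${g:$i:2}"; done
  printf '%b' "$bytes" | base64
}

# Sample (made-up) GUID; prints the corresponding ImmutableID
guid_to_immutableid "6ba7b810-9dad-11d1-80b4-00c04fd430c8"
```

Comparing this output against `Get-MsolUser ... | Select ImmutableID` is a quick way to check whether an account is already hard-linked.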

Rsync on Windows

Posted: June 22, 2020 in Windows Server

Rsync, which stands for “remote sync”, is a remote and local file synchronization tool. It uses an algorithm that minimizes the amount of data copied by only moving the portions of files that have changed. It’s a native Linux command, but thanks to Cygwin we can use it on Windows too.

Installing Cygwin on Windows

Cygwin is a Unix-like environment and command-line interface for Microsoft Windows. It’s a repository of open source software compiled against the Cygwin DLL (cygwin1.dll). In other words, it’s a package manager of Linux command-line tools which can be run on Windows.

Download Cygwin and run the installer. In View select Full, in the search box type rsync, and select rsync under the Net category; check Src and choose a version under New

Do the same for openssh (Windows 10 and Server 2019 have this package shipped)

Click next to install both packages.

By default, Cygwin is installed in the C:\cygwin64 folder; the tools are in the bin folder

Creating SSH keys (keypair) on Windows

In order to copy files from Windows to Linux we need to authenticate on the Linux box, either with credentials or with SSH keys. I find SSH keys the more convenient authentication method, so we need to create a private and public key on Windows and copy the public key to the Linux machine.

Creating ssh keys

Open Cygwin

Create keys

ssh-keygen

Copy the content of C:\cygwin64\home\Administrator\.ssh\id_rsa.pub to the Linux box (append it to the /root/.ssh/authorized_keys file). Alternatively, in the Cygwin terminal type

ssh-copy-id root@192.168.1.11

Test access from Windows to Linux. In this example 192.168.1.11 is the IP of the Linux machine and the username is root

ssh root@192.168.1.11

Copy files from Windows to Linux

In this example we’ll copy the content of the C:\inetpub folder to the /data folder on the Linux box

Again, from cygwin terminal type

rsync -avm //localhost/c$/inetpub/ root@192.168.1.11:/data

A simple copy is usually not enough; we also need to set ownership and permissions on the destination.

There are three basic file system permissions, or modes, to files and directories:

  • read (r)
  • write (w)
  • execute (x)

Each mode can be applied to these classes:

  • user (u)
  • group (g)
  • other (o)

The user is the account that owns the file. The group that owns the file may have other accounts on the system as members. The remaining class, other (sometimes referred to as world), means all other accounts on the system.
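The symbolic modes used in the rsync --chmod examples below map directly to octal permission bits; here is a quick Python illustration using the standard stat module (the octal values in the comments are the resulting modes):

```python
import stat

# u=r    (rsync --chmod=u=r)    -> owner read only, 0o400
u_r = stat.S_IRUSR

# ug=rx  (rsync --chmod=ug=rx)  -> owner and group read + execute, 0o550
ug_rx = stat.S_IRUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IXGRP

# ugo=rw (rsync --chmod=ugo=rw) -> everyone read + write, 0o666
ugo_rw = (stat.S_IRUSR | stat.S_IWUSR |
          stat.S_IRGRP | stat.S_IWGRP |
          stat.S_IROTH | stat.S_IWOTH)

print(oct(u_r), oct(ug_rx), oct(ugo_rw))  # 0o400 0o550 0o666
```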

To see all permissions on files and folders, use the following command: ls -l /folder

The following command sets the user (u) permission to read (r) only and sets ownership to user root and group root (with -a, the group and other bits are carried over from the source).

rsync -avm --chmod=u=r --chown=root:root //localhost/c$/inetpub/ root@192.168.1.11:/data

ls -l /data/
total 0
dr--rwx--- 3 root root 19 Jun 22 10:22 custerr
dr-------- 4 root root 64 Jun 22 10:24 history
dr--rwx--- 3 root root 22 Jun 22 10:22 temp
dr--rwx--- 2 root root 44 Jun 22 10:22 wwwroot

The following command gives user (u) and group (g) read (r) and execute (x) permissions on the destination folder, and sets ownership to user root and group root.

rsync -avm --chmod=ug=rx --chown=root:root //localhost/c$/inetpub/ root@192.168.1.11:/data

ls -l /data/
total 0
dr-xr-x--- 3 root root 19 Jun 22 10:22 custerr
dr-xr-x--- 4 root root 64 Jun 22 10:24 history
dr-xr-x--- 3 root root 22 Jun 22 10:22 temp
dr-xr-x--- 2 root root 44 Jun 22 10:22 wwwroot

The following command sets read (r) and write (w) permissions for user (u), group (g) and other (o), and ownership to user root and group root

rsync -avm --chmod=ugo=rw --chown=root:root //localhost/c$/inetpub/ root@192.168.1.11:/data

ls -l /data/
total 0
drw-rw-rw- 3 root root 19 Jun 22 10:22 custerr
drw-rw-rw- 4 root root 64 Jun 22 10:24 history
drw-rw-rw- 3 root root 22 Jun 22 10:22 temp
drw-rw-rw- 2 root root 44 Jun 22 10:22 wwwroot


To avoid copying permissions to the destination, it’s enough to omit the -a switch in the rsync command. To copy empty directories as well, use the -rWH switches and omit the -m switch (-m prunes empty directories).

In order to execute the above commands from Command Prompt/PowerShell, add C:\cygwin64\bin to the system PATH environment variable

CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform.

CloudStack currently supports the following hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform (XCP), Oracle VM Server and Microsoft Hyper-V.

In order to run API calls we need to create an API key and a secret key.

From the management console click Accounts-select the admin account-View Users-click the user-Generate Keys

API calls need to be signed; the process is described on this page.
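The signing recipe (lower-case the keys and values, sort by parameter name, URL-encode, HMAC-SHA1 with the secret key, Base64, URL-encode the result) can be isolated into a small helper. This is a sketch; the keys here are dummy values:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign(params, secret):
    """CloudStack-style request signature:
    lower-case keys and values, sort by key, URL-encode values
    (spaces as %20), join with '&', HMAC-SHA1, Base64, URL-encode."""
    to_sign = '&'.join(
        f"{k.lower()}={urllib.parse.quote_plus(params[k].lower()).replace('+', '%20')}"
        for k in sorted(params)
    )
    digest = hmac.new(secret.encode(), to_sign.encode(), hashlib.sha1).digest()
    return urllib.parse.quote_plus(base64.b64encode(digest).decode())

# Dummy values, for illustration only
params = {'command': 'listUsers', 'response': 'json', 'apikey': 'dummy-api-key'}
print(sign(params, 'dummy-secret-key'))
```

The signature is deterministic for a given parameter set and secret, so two calls with the same inputs always produce the same value.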

The code below lists all CloudStack users.

Python code:


#!/usr/bin/env python3
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

baseurl = 'http://localhost:8080/client/api?'
request = {}

request['command'] = 'listUsers'
request['response'] = 'json'
request['apikey'] = "Z-dr9dp6gZbKmrl9stAm6uGBSWqSonMhc2i-nqVlSG6MlpvqWFWW1uVJEZnrrwq_drQXDWRFTGwZ1p_qarLzwQ"
secretkey = "p_iiuI3oDxBCxmUgceAHYf-f9uotX9B-uK2qxmVAT_bbYfPdhiePnlPRjbL6CvtPH8gDbjIh8uGPmP1KjN6HBQ"

# Query string as sent, and the lower-cased, sorted string that gets signed
request_str = '&'.join('='.join([k, urllib.parse.quote_plus(request[k])]) for k in request)
sig_str = '&'.join(
    '='.join([k.lower(), urllib.parse.quote_plus(request[k].lower()).replace('+', '%20')])
    for k in sorted(request)
)
sig = urllib.parse.quote_plus(
    base64.b64encode(hmac.new(secretkey.encode(), sig_str.encode(), hashlib.sha1).digest()).decode()
)
req = baseurl + request_str + '&signature=' + sig
res = urllib.request.urlopen(req)
print(res.read().decode())

PHP code:

<?php

$baseurl = "http://localhost:8080/client/api?";

$response = "response=json";

$command = "command=listUsers";
$apikey = "apikey=Z-dr9dp6gZbKmrl9stAm6uGBSWqSonMhc2i-nqVlSG6MlpvqWFWW1uVJEZnrrwq_drQXDWRFTGwZ1p_qarLzwQ";
$secretkey = "p_iiuI3oDxBCxmUgceAHYf-f9uotX9B-uK2qxmVAT_bbYfPdhiePnlPRjbL6CvtPH8gDbjIh8uGPmP1KjN6HBQ";

$hash = hash_hmac("sha1",strtolower($apikey . "&" . $command . "&" . $response),$secretkey, true);
$base64encoded = base64_encode($hash);
$signature = "signature=" . urlencode($base64encoded);

$link = $baseurl  . $apikey . "&" . $command . "&" . $response . "&" . $signature;
$responsecontents = file_get_contents($link);
var_dump($responsecontents);
?>

Output example

{"listusersresponse":{"count":1,"user":[{"id":"cb1b5f77-ab6f-11ea-b298-00155d349505","username":"admin","firstname":"admin","lastname":"cloud","created":"2020-06-11T03:12:02+0200","state":"enabled","account":"admin","accounttype":1,"usersource":"native","roleid":"b68b55ed-ab6f-11ea-b298-00155d349505","roletype":"Admin","rolename":"Root Admin","domainid":"13f272b0-ab6f-11ea-b298-00155d349505","domain":"ROOT","apikey":"Z-dr9dp6gZbKmrl9stAm6uGBSWqSonMhc2i-nqVlSG6MlpvqWFWW1uVJEZnrrwq_drQXDWRFTGwZ1p_qarLzwQ","accountid":"cb1b4e2a-ab6f-11ea-b298-00155d349505","iscallerchilddomain":false,"isdefault":true}]}}
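Since the response is JSON, individual fields can be extracted directly; for example (response trimmed to the fields used):

```python
import json

# Trimmed version of the listUsers response shown above
response = '{"listusersresponse":{"count":1,"user":[{"username":"admin","account":"admin","state":"enabled"}]}}'

data = json.loads(response)
users = [u["username"] for u in data["listusersresponse"]["user"]]
print(users)  # ['admin']
```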

A Conditional Access policy is used to grant access to company resources based on conditions (if-then-else, with exceptions)

In this example Conditional Access policy checks following:

  • If the user connects from the company network (i.e. from the office), the policy won’t be applied and the user has unrestricted access
  • If the user connects from outside the company network, and the device they are using is compliant (enrolled in Intune), and they use a managed browser (or Edge on Windows 10) or an Office 365 application, they can access company resources
  • If the user connects from outside the company network and the device they are using is NOT compliant (not enrolled in Intune), access is blocked

Enabling modern authentication

Enable Office 365 modern authentication to solve the issue where the desktop Outlook app keeps prompting for a password

# Connect to Exchange Online
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking

# check if modern authentication is enabled

Get-OrganizationConfig | Format-Table Name,OAuth* -Auto

# if output is false, modern authentication is not enabled

# enable modern authentication

Set-OrganizationConfig -OAuth2ClientProfileEnabled $true

Creating named location(s)

Define (trusted) locations as IP ranges. If the user’s IP address is within this scope, the policy won’t be applied.

In Azure portal click Azure Active Directory-Security-Conditional access-Named Locations-New Location

Creating Conditional access policy

Azure Active Directory-Security-Conditional Access-New policy

In Users and groups, specify the user group to which this policy will apply

Select the application for which you need to configure access

For locations to include, select Any

In Exclude, select the location we created earlier

Select the client apps this policy will apply to

Device state: Yes

In Exclude, select both options; the policy won’t apply if the device is joined to both on-premises AD and Azure AD (Hybrid Azure AD join), or if the device is enrolled in Intune and complies with all policies (compliant)

Grant access and specify the conditions which need to be fulfilled

Evaluating conditional access policy

After the policy is created, click What If

Select the user, the application to which access is evaluated, optionally an IP address, the device type, the application from which the resource is accessed, and the device state

After clicking “What If”, you’ll see whether the policy will be applied for the specific user. This is the best place to start troubleshooting if the policy behaves in an unexpected way.

If you try to access from outside the company office with an “unmanaged” browser (Opera in my case), you’ll get the message below

If accessing from Edge, you should be able to sign in to Office 365. If the policy works in an unwanted way, check whether the device you’re accessing from is compliant:

Microsoft Endpoint Manager Admin Center-Devices-All devices

Check the sign-in logs:

Azure active directory-Sign-Ins

Select the user, click Conditional Access, and click the “three dots” to see details

Kubernetes – Helm 3 Charts

Posted: May 22, 2020 in kubernetes

Helm is the Kubernetes package manager. Helm charts are used to deploy an application, or one component of a larger application. Charts contain multiple YAML files used to define services, deployments, configmaps, volume definitions and so on.

Installing Helm 3

Installation instructions can be found here

Creating your first chart

On command prompt type:

helm create chartname

I named the chart mysql, so a folder with the same name is created. The generated chart contains definitions for an nginx deployment; I wanted to deploy MySQL, and this “default” chart is just a skeleton or “blueprint”.

values.yaml is the most important file in this folder.

It contains the values which populate the files in the templates folder. In values.yaml we define the settings for services, deployments and volumes; consider it a list of parameters. Instead of changing individual values in the files in the templates folder, it’s enough to specify them in values.yaml.

# Default values for mysql.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.



namespace: helm-namespace

configmap: example-configmap 


image:
  name: mysql
  repository: mysql
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: latest
  replicaCount: 1
  

credentials:
  root_pwd: pass
  db_user: my
  db_user_pwd: some_pass
  db_name: mydb



 
# deployment strategy:

# OnDelete - the StatefulSet controller will not automatically update the Pods in a StatefulSet.
# Users must manually delete Pods to cause the controller to create new Pods that reflect
# modifications made to a StatefulSet's template.
# RollingUpdate - replaces the pods of the previous version of your application with pods of the
# new version one by one, without any cluster downtime.


strategy:
  type: RollingUpdate
  
  
# image pull policy

# Always - always pull image

# IfNotPresent - skip pulling an image if it already exists.

imagePullPolicy: IfNotPresent

resources:
  requests:
    memory: 300Mi
    cpu: 400m
  limits:
    memory: 600Mi
    cpu: 700m  

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5

readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 2
  timeoutSeconds: 1

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}

storage:
  name: localstorage
  capacity: 5Gi
  accessModes: "ReadWriteOnce"

service:
  type: ClusterIP
  port: 3306

 
nodeSelector: {}

tolerations: []

affinity: {}

The values in the file above are “mapped” into the files in the templates folder using variables. These variables are referenced in the YAML files in the templates folder, which are in fact ordinary Kubernetes definition files. Instead of applying them one by one with kubectl, the helm command applies all files in the templates folder at once.

Content of templates folder:

“Mapping” the values in values.yaml simply means referencing them from the YAML files in the templates folder

Here is an example of the file templates/deployment.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.image.name }}
  namespace: {{ .Values.namespace }}
spec:
  serviceName: {{ .Values.image.name }}
  replicas: {{ .Values.image.replicaCount }}
  updateStrategy:
    type: {{ .Values.strategy.type }}
    
  selector:
    matchLabels:
      app: {{ .Values.image.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.image.name }}

{{ .Values.image.name }} refers to

image:
  name: mysql

in values.yaml

{{ .Values.namespace }} refers to namespace: helm-namespace, and so on.
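To make the substitution concrete, here is a toy Python renderer for this kind of placeholder. Helm actually uses Go’s text/template engine, so this is only an illustration of the lookup, not Helm code:

```python
import re

def render(text, values):
    """Resolve {{ .Values.a.b }} placeholders against a nested dict.
    This mimics what Helm's template engine does for simple value lookups."""
    def lookup(match):
        node = values
        for part in match.group(1).split('.'):
            node = node[part]  # walk the dotted path into the values tree
        return str(node)
    return re.sub(r'\{\{\s*\.Values\.([\w.]+)\s*\}\}', lookup, text)

values = {"image": {"name": "mysql"}, "namespace": "helm-namespace"}
print(render("name: {{ .Values.image.name }}", values))      # name: mysql
print(render("namespace: {{ .Values.namespace }}", values))  # namespace: helm-namespace
```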

These values can be “overridden” during chart installation, as discussed a bit later.

Conditional resource creation

Using if/else we can skip creating resources that already exist. In the example below, if the PersistentVolume mysql-01 already exists, it won’t be created

{{ if not (lookup "v1" "PersistentVolume" .Values.namespace "mysql-01") }}


apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: {{ .Values.namespace }}
  name: mysql-01
  labels:
    type: local
spec:
  storageClassName: {{ .Values.storage.name }}
  capacity:
    storage: {{ .Values.storage.capacity }}
  accessModes:
    -  {{ .Values.storage.accessModes }}
  hostPath:
    path: "/mnt/mysql-01"

{{ end }}

Testing whether files are populated correctly

To check that the rendered files look correct, we can test the “template replacement”.

If we’re inside the chart folder, we can run the helm template command (notice the . at the end of the command)

[root@kubernetes-master mysql]# pwd
/root/mysql
[root@kubernetes-master mysql]# ls
charts Chart.yaml templates values.yaml
helm template .

Installing a Helm chart

A Helm chart is installed with helm install release_name chart_folder

In this example we’re inside the chart folder (hence the dot .), and the following command installs a release named mysql

helm install mysql .

If we want to “override” values in values.yaml, we can pass them on the command line during installation. For example, these values in values.yaml

credentials:
  root_pwd: pass
  db_user: my
  db_user_pwd: some_pass
  db_name: mydb

are replaced during installation:

helm install mysql . --set credentials.root_pwd=test,credentials.db_user=moj_user,credentials.db_user_pwd=mojpass,credentials.db_name=debe
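Conceptually, each --set key=value walks a dotted path into the values tree before the templates are rendered. A rough Python sketch of that behaviour (not Helm’s actual implementation, which also handles lists, escapes and type coercion):

```python
def apply_set(values, dotted_key, value):
    """Merge a single --set style override (e.g. credentials.root_pwd=test)
    into a nested values dictionary, creating intermediate maps as needed."""
    node = values
    keys = dotted_key.split('.')
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return values

values = {"credentials": {"root_pwd": "pass", "db_user": "my"}}
apply_set(values, "credentials.root_pwd", "test")   # override an existing value
apply_set(values, "credentials.db_name", "debe")    # add a new value
print(values)
# {'credentials': {'root_pwd': 'test', 'db_user': 'my', 'db_name': 'debe'}}
```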

Packaging Helm chart

The chart can be packed into a .tgz file

helm package chart_folder

[root@kubernetes-master ~]# pwd
/root
[root@kubernetes-master ~]# ls
1.yaml  anaconda-ks.cfg  mysql
helm package mysql

Example code