Archive for October, 2016

Creating the System Management container in AD

SCCM publishes the objects that need to be in Active Directory to a dedicated container, so we first create that container.

I used a PowerShell script to create the container:

# Get the distinguished name of the Active Directory domain
$DomainDn = ([adsi]"").distinguishedName
# Build distinguished name path of the System container
$SystemDn = "CN=System," + $DomainDn
# Retrieve a reference to the System container using the path we just built
$SysContainer = [adsi]"LDAP://$SystemDn"
# Create a new object inside the System container called System Management, of type "container"
$SysMgmtContainer = $SysContainer.Create("Container", "CN=System Management")
# Commit the new object to the Active Directory database
$SysMgmtContainer.SetInfo()
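To confirm the container was created, a quick ADSI check (reusing the $DomainDn variable from the script above) should return the new object's distinguished name:

# Should print CN=System Management,CN=System,<domain DN>
([adsi]"LDAP://CN=System Management,CN=System,$DomainDn").distinguishedName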

Setting permissions on the System Management container

Setting permissions allows SCCM site servers to publish site information to the container

Open Active Directory Users and Computers (Start > Run > dsa.msc) and enable View > Advanced Features.


Expand the System folder, right-click System Management and click Delegate Control.


Click Add; in the Select Users, Computers, or Groups window click Object Types and tick Computers. Click OK, type the name of the SCCM server computer account and click OK.


Add SCCM computer account


Select Create a custom task to delegate and click Next.


Make sure This folder, existing objects in this folder, and creation of new objects in this folder is selected and click Next.


Choose General, Property-specific and Creation/deletion of specific child objects. For the permissions, tick Full Control.
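If you prefer to script this step instead of clicking through the wizard, dsacls can grant the same rights; a sketch, assuming the domain is contoso.com and the site server's computer account is SCCMSERVER:

REM Grant the site server Full Control (GA) over the container and all child objects (/I:T)
dsacls "CN=System Management,CN=System,DC=contoso,DC=com" /G "CONTOSO\SCCMSERVER$:GA" /I:T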

Extending AD schema

SCCM uses AD to publish information about its sites and services, making it easily accessible to Active Directory clients. To leverage AD, we must extend the schema to create classes of objects specific to SCCM.

Navigate to the \SMSSETUP\Bin\X64 folder on the SCCM installation media and run extadsch.exe as administrator.


Check the ExtADSch.log file (located in the root of the system drive).
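To confirm the extension succeeded without opening the file, search the log for the success message (assuming the default location in the root of C:):

Select-String -Path C:\ExtADSch.log -Pattern "Successfully extended"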


Installing Windows Features

For SCCM to work we need to install IIS, .NET Framework 3.5, Background Intelligent Transfer Service (BITS), Windows Server Update Services, and the following role services: Common HTTP Features (Default Document, Static Content); Application Development (ASP.NET 3.5, .NET Extensibility 3.5, ASP.NET 4.5, .NET Extensibility 4.5, ISAPI Extensions); Security (Windows Authentication); IIS 6 Management Compatibility (IIS Management Console, IIS 6 Metabase Compatibility, IIS 6 WMI Compatibility, IIS Management Scripts and Tools):

Install-WindowsFeature Web-Server, NET-Framework-Features, BITS, RDC, Web-Net-Ext, Web-Net-Ext45, Web-WMI, Web-Scripting-Tools, Web-Windows-Auth, UpdateServices, NET-WCF-Services45
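We can then verify that the key features are present (a minimal spot check; any feature name from the command above can be queried the same way):

Get-WindowsFeature Web-Server, BITS, NET-Framework-Features | Format-Table Name, InstallState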

Then install the Windows Assessment and Deployment Kit, choosing the Deployment Tools, Windows Preinstallation Environment (Windows PE) and User State Migration Tool (USMT) components.


Installing SQL Server 2014

For the SQL service accounts (SQL Server Agent, SQL Server Database Engine, SQL Server Reporting Services), best practice is to use domain accounts created only for this purpose.

Here is a sample script:

import-module activedirectory
New-ADOrganizationalUnit -NAME "SYSTEM ACCOUNTS"
New-ADUser -name sql_sa -displayname sql_sa -samaccountname sql_sa -AccountPassword (ConvertTo-SecureString "Password01" -asplaintext -force) -Enabled $true -PasswordNeverExpires $true -Path "OU=SYSTEM ACCOUNTS,DC=contoso,DC=com" -userprincipalname sql_sa@contoso.com
New-ADUser -name sql_db -displayname sql_db -samaccountname sql_db -AccountPassword (ConvertTo-SecureString "Password01" -asplaintext -force) -Enabled $true -PasswordNeverExpires $true -Path "OU=SYSTEM ACCOUNTS,DC=contoso,DC=com" -userprincipalname sql_db@contoso.com
New-ADUser -name sql_srs -displayname sql_srs -samaccountname sql_srs -AccountPassword (ConvertTo-SecureString "Password01" -asplaintext -force) -Enabled $true -PasswordNeverExpires $true -Path "OU=SYSTEM ACCOUNTS,DC=contoso,DC=com" -userprincipalname sql_srs@contoso.com


Select Database Engine Services, Reporting Services and the Management Tools.


Optionally, we can create a dedicated instance.


Specify the service accounts we created earlier and the collation.


Install and configure Reporting Services.


SQL server configuration:

We need to open the ports used by SQL Server: 1433 (instance connections) and 4022 (Service Broker).

New-NetFirewallRule -Displayname "Allow port 1433" -direction inbound -LocalPort 1433 -Protocol tcp -Action allow
New-NetFirewallRule -Displayname "Allow port 4022" -direction inbound -LocalPort 4022 -Protocol tcp -Action allow
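To verify that the rules were created (display names as used above):

Get-NetFirewallRule -DisplayName "Allow port 1433", "Allow port 4022" | Format-Table DisplayName, Enabled, Direction, Action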

Prior to installation, SCCM checks whether SQL Server's memory is limited; if it isn't, it throws a warning. To suppress it, set memory boundaries for SQL Server in SQL Server Management Studio.


Right-click the SQL Server name and choose Properties.


Set min/max memory:

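The same boundaries can also be set without the GUI; a minimal sqlcmd sketch (the 4096/8192 MB values are examples only, size them for your host; -S . targets the default instance, adjust for a named one):

sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'min server memory (MB)', 4096; EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;"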

Configure a static TCP port (in SQL Server Configuration Manager, under SQL Server Network Configuration > Protocols > TCP/IP > IP Addresses, clear TCP Dynamic Ports and set TCP Port).


Add the SCCM computer account to the local Administrators group on the SQL server:

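From an elevated prompt this is a one-liner (a sketch, assuming the domain is CONTOSO and the site server's computer account is SCCMSERVER):

net localgroup Administrators CONTOSO\SCCMSERVER$ /add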

Installing SCCM


Choose the download path for the prerequisite files needed by SCCM setup.


Specify the site code and site name.


Specify the SQL server and instance.


Configure the client communication method.


Install the Management Point and Distribution Point roles.


Choose whether you want to update SCCM.


And we are done !!!


Creating a private Docker registry

The Registry is a server-side application that stores and lets you distribute Docker images.

In my previous post, when I created a Docker web server (docker run --name web --hostname web -m 2g -p 80:80 -P -i -t ubuntu /bin/bash), the ubuntu image was pulled from the online repository. That's perfectly OK for testing, but it's not appropriate when dealing with Docker containers in a working environment: not only would bandwidth be an issue, but also security. In this post we'll create our own private Docker registry and configure authentication.

Because we'll use Nginx (an HTTP and reverse proxy server) to handle security, we need to install the apache2-utils package, which generates the passwords for Nginx. We also need the docker-compose (a tool for defining and running multi-container Docker applications) and curl (a tool to transfer data from or to a server using any of a long list of supported protocols) packages:

apt-get install -y docker-compose apache2-utils curl

We need a folder for our containers and their volumes:

mkdir /docker-registry
mkdir /docker-registry/data
mkdir /docker-registry/nginx
chown root:root /docker-registry
cd /docker-registry

Create a docker-compose.yml file (defining the containers' properties):

vi docker-compose.yml:
nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    - /docker-registry/nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - /docker-registry/data:/data

First the registry container will be created; it'll listen on port 5000. The REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY variable instructs the registry image (registry:2) to store data in the /data volume (mapped from /docker-registry/data).

Then the nginx container will be created; it will know how to reach the registry container thanks to the links directive (the registry container's IP will be mapped in the nginx container's /etc/hosts).

Now start containers:

docker-compose up

If everything went fine, both containers should start and log their startup output to the console.


Stop the registry (CTRL+C)

Now would be a good time to turn docker-compose into a service.

Create docker-registry.service file in /etc/systemd/system folder:

nano /etc/systemd/system/docker-registry.service

[Unit]
Description=Starting docker registry

[Service]
Environment=COMPOSE_FILE=/docker-registry/docker-compose.yml
WorkingDirectory=/docker-registry
ExecStart=/usr/bin/docker-compose up
Restart=always

[Install]
WantedBy=multi-user.target

Test if it works:

service docker-registry start
root@ubuntu:~/docker-registry# docker ps


From now on, instead of running docker-compose up and terminating the process, we'll use the service docker-registry start/stop/restart commands.
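To have the registry come up automatically at boot, enable the unit as well:

systemctl enable docker-registry.service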

Now we need to configure the Nginx server.

Open new terminal:

vi  /docker-registry/nginx/registry.conf

upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name myregistrydomain.com;

  # SSL
  # ssl on;
  # ssl_certificate /etc/nginx/conf.d/domain.crt;
  # ssl_certificate_key /etc/nginx/conf.d/domain.key;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    # auth_basic "registry.localhost";
    # auth_basic_user_file /etc/nginx/conf.d/registry.password;
    # add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass                          http://docker-registry;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}

Test if we can access the Docker registry:

service docker-registry restart
curl http://localhost:5000/v2/

We should get output below:

{}root@ubuntu:/docker-registry#

 

Now we need to set up authentication. Create a user:

cd /docker-registry/nginx
htpasswd -c registry.password mydocker
New password:
Re-type new password:
Adding password for user mydocker
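To add more users later, run htpasswd against the same file without the -c flag (-c creates/overwrites the file); "seconduser" here is just an example name:

htpasswd registry.password seconduser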

Open registry.conf again:

vi /docker-registry/nginx/registry.conf

Uncomment the following lines to configure Nginx for basic HTTP authentication:

auth_basic "registry.localhost";
auth_basic_user_file /etc/nginx/conf.d/registry.password;
add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

Test again:

service docker-registry restart
curl http://localhost:443/v2/
Without credentials, Nginx now rejects the request:
401 Authorization Required
nginx/1.9.15

Test with the username/password; we should get the same output as earlier:

curl http://mydocker:123456@localhost:443/v2/
{}root@ubuntu:~/docker-registry/nginx#

Setting up SSL:

Open registry.conf again:

vi  /docker-registry/nginx/registry.conf

Uncomment the lines below and set the server name:

upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name docker-server.com;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

Creating our own Certification Authority

cd /docker-registry/nginx

Generate a new root key:

openssl genrsa -out dockerCA.key 2048

Generate a root certificate (enter docker-server.com for the Common Name, and whatever you want for the other fields):

openssl req -x509 -new -nodes -key dockerCA.key -days 10000 -out dockerCA.crt

Generate server key (this is the file referenced by ssl_certificate_key in Nginx):

openssl genrsa -out domain.key 2048

Request a new certificate (again, enter docker-server.com for the Common Name; don't enter a password):

openssl req -new -key domain.key -out docker-registry.com.csr

Sign a certificate request:

openssl x509 -req -in docker-registry.com.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out domain.crt -days 10000
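Before wiring the certificate into Nginx, we can check that it verifies against our CA:

openssl verify -CAfile dockerCA.crt domain.crt
# expected output: domain.crt: OK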

Because we created our own CA, it isn't trusted by default by any client, so we need to make the computers that will connect to our private Docker registry trust it.

Do this on our Docker Registry Server (for testing purposes):

cd /docker-registry/nginx
cp dockerCA.crt /usr/local/share/ca-certificates/

By copying the root certificate to the /usr/local/share/ca-certificates folder we tell the host to trust our Certificate Authority.

update-ca-certificates && service docker restart && service docker-registry restart
curl https://mydocker:123456@docker-server.com/v2/
#output should be
{}root@ubuntu:/docker-registry/nginx#

On the client machine, create the /usr/local/share/ca-certificates/ folder if it doesn't exist and, of course, install Docker if it isn't installed already.

Then copy dockerCA.crt to the client machine:

scp dockerCA.crt ja@192.168.0.59:/usr/local/share/ca-certificates
ja@192.168.0.59's password:
dockerCA.crt 100% 1302 1.3KB/s 00:00

On the client, update the CA certificates:

update-ca-certificates && service docker restart
#test login to fresh created repository:
docker login https://docker-server.com
Username: mydocker
Password:
Login Succeeded

On the client, create a test container:

docker run -it ubuntu
#re-tag images DOMAIN-NAME/NEW-TAG
docker tag ubuntu docker-server.com/test-image
#push image to repository:
docker push docker-server.com/test-image


Now remove the image from the host and pull it from the repository:

docker rmi -f docker-server.com/test-image
docker pull docker-server.com/test-image


That's it!!! We now have an operational repository. In case of any errors, refer to the Docker logs:

journalctl -u docker        # docker logs in the systemd journal
journalctl | grep docker    # system logs that mention docker

Sharing data between Docker host and docker containers

We can map volumes between the Docker host and docker containers during container creation.

Let's map the create.sh script from the Docker host's /root folder into the container we are about to create:

 docker run --rm --hostname dockerA --name dockerA -it -v "$(pwd)"/create.sh:/create.sh ubuntu bash

--rm: deletes the container after exiting

--hostname: the container's hostname

--name: friendly docker name

-it: interactive run (-i) and attach a terminal (-t)

-v: map a Docker host volume into the docker container

In this example, I mapped the ~/create.sh script into the docker container; "$(pwd)" expands to the current host directory (-v "$(pwd)"/create.sh:/create.sh).

root@ubuntu:~# docker run --rm --hostname dockerA --name dockerA -it -v "$(pwd)"/create.sh:/create.sh ubuntu bash
root@dockerA:/# hostname
dockerA
root@dockerA:/# ls
bin boot create.sh dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@dockerA:/#

As we can see, the create.sh script is mapped into the container's root directory (/).

What does the create.sh script do?

mkdir -p /docker

if [ "$HOSTNAME" = ubuntu ]; then

for f in {1..3}
do
echo ubuntu > "/docker/docker-$f.txt"
done

elif [ "$HOSTNAME" = dockerA ]; then
mkdir -p /folder_from_ubuntu
for f in {1..3}
do
echo dockerA > "/folder_from_ubuntu/docker-$f.txt"
done
else
mkdir -p /folder_from_ubuntu
for f in {1..3}
do
echo dockerB > "/folder_from_ubuntu/docker-$f.txt"
done
fi

The script first creates the /docker folder if it doesn't exist, then creates three files named docker-[1-3].txt and, based on the $HOSTNAME variable, writes the matching hostname into them; so if the script runs on the Docker host, "ubuntu" (the Docker host's hostname) is written to the files, and so on.

When we run the ~/create.sh script on the Docker host, the /docker folder is created with 3 files (docker-[1-3].txt):

root@ubuntu:~# ./create.sh
root@ubuntu:~# ls /docker/
docker-1.txt docker-2.txt docker-3.txt
root@ubuntu:~# cat /docker/docker-1.txt
ubuntu

Now map both the create.sh script and the /docker folder into the docker container (remember, the docker run --rm switch automatically deletes the container upon exit):

docker run --rm --hostname dockerA --name dockerA -it -v "$(pwd)"/create.sh:/create.sh -v /docker:/folder_from_ubuntu ubuntu bash

Now not only is the create.sh script from the host mapped into the container, but also the /docker folder (-v /docker:/folder_from_ubuntu); instead of "folder_from_ubuntu" we can use any name we want.

root@ubuntu:~# docker run --rm --hostname dockerA --name dockerA -it -v "$(pwd)"/create.sh:/create.sh -v /docker:/folder_from_ubuntu ubuntu bash
root@dockerA:/# hostname
dockerA
root@dockerA:/# ls
bin create.sh etc home lib64 mnt proc run srv tmp var
boot dev folder_from_ubuntu lib media opt root sbin sys usr
root@dockerA:/# ls /folder_from_ubuntu/
docker-1.txt docker-2.txt docker-3.txt
root@dockerA:/# cat /folder_from_ubuntu/docker-1.txt
ubuntu

As we can see, the /docker folder content from the Docker host (ubuntu) is replicated to the docker container (dockerA). But what happens when we run the create.sh script from the container?

root@dockerA:/# ./create.sh
root@dockerA:/# cat /folder_from_ubuntu/docker-1.txt
dockerA

Because the script ran in the dockerA container, the content now shows dockerA. Files modified in the docker container are reflected on the Docker host:

root@ubuntu:~# cat /docker/docker-1.txt
dockerA

Also, a file deleted in the container is deleted from the Docker host.

Leave docker container running (CTRL+P & CTRL+Q)

Create a new file in the /docker folder on the Docker host:

echo "created on docker host" > /docker/docker_host.txt

Get back to container (dockerA)

root@ubuntu:/docker# docker exec -it dockerA bash
root@dockerA:/# cat /folder_from_ubuntu/docker_host.txt
created on docker host

Sharing data between containers

Create a new docker container (dockerB) which will have all volumes from dockerA mounted (--volumes-from dockerA):

docker run --rm --hostname dockerB --name dockerB -it --volumes-from dockerA ubuntu bash
root@dockerB:/# hostname
dockerB
root@dockerB:/# ls
bin create.sh etc home lib64 mnt proc run srv tmp var
boot dev folder_from_ubuntu lib media opt root sbin sys usr
root@dockerB:/# ls /folder_from_ubuntu/
docker-1.txt docker-2.txt docker-3.txt docker_host.txt

As we can see, all volumes are replicated from dockerA.

root@dockerB:/# ./create.sh
root@dockerB:/# cat /folder_from_ubuntu/docker-1.txt
dockerB

I ran the script on dockerB and, again, the changes are propagated to the Docker host (ubuntu) and dockerA; likewise, files created on dockerB are visible on dockerA and the Docker host (ubuntu):

root@dockerB:/# echo "created on dockerB" > /folder_from_ubuntu/dockerB_host.txt
root@ubuntu:~# docker exec -it dockerA bash
root@dockerA:/# cat /folder_from_ubuntu/dockerB_host.txt
created on dockerB

With Docker we can create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. For a detailed explanation, check Linux containers and Docker explained.

Installing docker

Update the system and install CA certificates:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates

Add the GPG key for the official Docker repository:

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Add the Docker repository to APT sources:

Create /etc/apt/sources.list.d/docker.list file and add entry: deb https://apt.dockerproject.org/repo ubuntu-xenial main

sudo touch /etc/apt/sources.list.d/docker.list
sudo vi /etc/apt/sources.list.d/docker.list
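If you prefer a one-liner over the editor, the same entry can be written with tee:

echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list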


Update the package database with the Docker packages from the repository we’ve just added:

sudo apt-get update

Purge the old repo if it exists:

sudo apt-get purge lxc-docker

Check that the new repository was successfully added:

apt-cache policy docker-engine


Install docker:

sudo apt-get install -y docker-engine

Check if docker daemon is running:

sudo systemctl status docker


Test if docker runs as expected:

sudo docker run hello-world


Start the docker daemon at boot:

update-rc.d docker enable

Creating docker web server

Now that we've installed Docker, we can build a docker web server. In Docker terminology, there are images and there are containers. The two are closely related, but distinct.

An image is really a template that can be turned into a container; essentially, it's a snapshot of a container. A container is an instance of the image. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited, to use Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.

So to build a container, we need to specify the image from which the container will be derived:

In this example I used the ubuntu image, which is like a clean Ubuntu installation. We can search the available images:

sudo docker search ubuntu --no-trunc

I searched all ubuntu images and will use the official ubuntu image for the docker web server creation:


docker run --name web --hostname web -m 2g -p 80:80 -P -i -t ubuntu /bin/bash

--name gives the container a descriptive name

--hostname gives the container a hostname (by default the container hostname is something like a23dvef)

-m 2g allocates 2GB of RAM to this container

-p publishes container port(s) to the docker host

-P without this switch, the docker web site would be accessible only from the docker host, not from outside computers

The -t and -i flags allocate a pseudo-tty and keep stdin open even if not attached. This will allow you to use the container like a traditional VM as long as the bash prompt is running; in this example we want to interact with the container, so we attached /bin/bash.

ubuntu (after the flags) is the image from which we want to create the container.

After creating the container, you should be dropped into the container's bash prompt:

root@web:/# hostname
web

Update container:

apt-get update -y

Install Apache, net-tools and the vim editor:

apt-get install apache2 vim net-tools -y

Just for fun, I edited index.html:

vi /var/www/html/index.html


Restart apache service:

service apache2 restart

Check current ip address:

root@web:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link

Test web access from the docker host (a VM with bridged networking) with IP 192.168.0.10.


Now test access from the Windows 10 laptop (on which the Ubuntu Server 16.04 docker host runs as a VM). During the web server container creation, I bound port 80 of the docker host (192.168.0.10) to the container's port 80, which is why I target the 192.168.0.10 address from Windows 10.


To leave docker container running press CTRL-P and CTRL-Q

To get back into an already running container:

docker exec -i -t web /bin/bash

Check if docker container is running:

docker inspect --format '{{.State.Running}}' web
true

Or:

docker ps


Passwordless SSH login from Windows to Ubuntu

Passwordless connections are commonly used for automatic backups with scripts, synchronizing files with scp, and remote command execution. In this example we'll configure SSH login without a password between a Windows 10 client and Ubuntu Server (192.168.0.10) for the user ja.

Windows side setup

Download and install PuTTY; after installation, run PuTTYgen.


Make sure SSH-2 RSA is selected and click generate


During key creation, move the mouse within the blank area of the window.


You'll get a key pair; click Save private key (save it to a safe location) and DON'T CLOSE THIS WINDOW YET!! (we'll need the public key text shown at the top). Optionally, you can add a comment.


Ubuntu config:

If you haven't already, install the OpenSSH server:

sudo apt-get install openssh-server

In the user's home directory create a .ssh folder, and within it an authorized_keys file (in my case, the username is "ja"):

sudo mkdir /home/ja/.ssh
sudo touch /home/ja/.ssh/authorized_keys

Set the folder and file permissions (note: on the web there are suggestions to set chmod -R 700 .ssh/, but in my case that didn't work and I got a permission denied error):

chmod 755 /home/ja/.ssh
chmod 644 /home/ja/.ssh/authorized_keys

Edit the /etc/ssh/sshd_config file:

LogLevel DEBUG3 #verbose log in case of problems
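While in the file, make sure public key authentication is enabled (these are the Ubuntu defaults, shown here as a sketch, so usually no change is needed):

PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys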

Now copy the public key text from the Windows machine into the /home/ja/.ssh/authorized_keys file.

Restart the sshd service:

sudo service sshd restart

Windows config-part II:

In PuTTY, under Connection > Data, enter the Linux user name.


Under Connection > SSH > Auth, browse to the private key file (.ppk) saved during key generation.


Under Session, enter the IP address or DNS name and save the configuration.


In case of issues, watch the auth log:

tail -f /var/log/auth.log