Archive for the ‘Linux’ Category

Recently I got a long list of Linux machines and had to check which of them support password authentication.

I found a tool called Hydra; for every machine which does not support password authentication, it prints an error in its output:

Hydra v8.2-dev (c) 2016 by van Hauser/THC - Please do not use in military
or secret service organizations, or for illegal purposes.

Hydra (starting at 2019-11-25 14:49:59
[DATA] max 4 tasks per 4 servers, overall 64 tasks, 5 login tries (l:1/p:5), ~0 tries per task
[DATA] attacking service ssh on port 22
[ERROR] target ssh:// does not support password authentication.
[ERROR] target ssh:// does not support password authentication.
[ERROR] target ssh:// does not support password authentication.
[ERROR] target ssh:// does not support password authentication.
4 of 4 targets completed, 0 valid passwords found
Hydra (finished at 2019-11-25 14:50:01

So I created a simple bash script which captures Hydra's output into the $command variable, then extracts the string between "[ERROR] target ssh://" and ":22/ does not support" into the $out variable.

Then it strips everything but the IP addresses of the machines into the $filtered variable, prints every IP on a new line and writes the result to the output.txt file.

Installing hydra (CentOS 7)

rpm -Uvh
yum install hydra

Put all your passwords into the file pws.txt and the machines' IPs into targets.txt: put every password/IP on its own line.
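For example, both files can be created with printf, one entry per line; the passwords and the 203.0.113.x addresses below are placeholders:

```shell
# One password / one IP per line; all values here are placeholders.
printf '%s\n' 'password1' 'password2' 'password3' > pws.txt
printf '%s\n' '203.0.113.1' '203.0.113.2' > targets.txt
```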

command=$((hydra -l root -P pws.txt -M targets.txt ssh -t 4) 2>&1)
echo "$command"
out=$(echo "$command" | grep -oP '(?<=ERROR\] target ssh://).*(?=:22/ does not support)')
filtered=$(echo "$out" | sed 's|does not support password authentication.||g ; s|ERROR||g ; s|target ssh||g ; s|:22||g ; s/[][]//g ; s|/||g ; s|:||g')
echo "$filtered" | xargs -n1 > output.txt

output.txt will contain the IPs of machines which don't support password authentication.
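The grep extraction above can be sanity-checked against a captured sample line of Hydra's output; the 192.0.2.10 address below is a made-up documentation example, not a real target:

```shell
# Demo: extract the IP from a sample Hydra error line.
# 192.0.2.10 is a placeholder address, not a real target.
sample='[ERROR] target ssh://192.0.2.10:22/ does not support password authentication.'
out=$(echo "$sample" | grep -oP '(?<=ERROR\] target ssh://).*(?=:22/ does not support)')
echo "$out"   # 192.0.2.10
```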

Keepalived is used for HA. It is a service that can monitor servers or processes in order to implement high availability on your infrastructure.

In this example Active-Passive HA is implemented. The Nagios secondary will monitor connectivity to the primary; when a disruption is detected, the secondary will start the nagios and postfix services and serve requests until the Nagios master is available again. When the connection to the master is restored, the secondary will stop the nagios and postfix services. Both servers are reachable via the keepalived virtual IP:

Install rsync on nagios secondary:

dnf install rsync
systemctl start rsyncd && systemctl enable rsyncd

On both servers install keepalived

dnf install keepalived
systemctl start keepalived && systemctl enable keepalived

The retention.dat file holds information about downtimes, acknowledgements and comments. This file is read by the CGIs and shown in the dashboard. It (along with the cfg files) will be regularly copied from the master to the slave Nagios.

On the slave edit /usr/local/nagios/etc/nagios.cfg and set retention_update_interval=1. It determines how often (in minutes) Nagios will automatically save retention data during normal operation (the default is 60 minutes).
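The edit can be scripted with sed; a minimal sketch, demonstrated here on a temporary stand-in for nagios.cfg so it is safe to run anywhere:

```shell
# Set retention_update_interval=1 (demonstrated on a temp stand-in for nagios.cfg).
cfg=$(mktemp)
echo 'retention_update_interval=60' > "$cfg"
sed -i 's/^retention_update_interval=.*/retention_update_interval=1/' "$cfg"
result=$(grep '^retention_update_interval=' "$cfg")
echo "$result"   # retention_update_interval=1
rm -f "$cfg"
```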

In /etc/keepalived create the file exclude-list.txt to specify folders/files which don't need to be synchronized to the Nagios slave.
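A possible exclude-list.txt; the entries below are assumptions (runtime files each node regenerates locally), adjust them to your setup. The snippet writes to a temporary file so it is safe to try; on the real server the target is /etc/keepalived/exclude-list.txt:

```shell
# Example exclude-list.txt contents; the entries are assumptions, adjust as needed.
exclude_list=$(mktemp)
cat > "$exclude_list" <<'EOF'
var/nagios.log
var/nagios.lock
var/status.dat
EOF
cat "$exclude_list"
```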



Keepalived config on Nagios master

Set role to BACKUP, priority to 9, and the virtual IP in /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
    enable_script_security 1
    script_user root
}

vrrp_instance VI_1 {
    debug 4
    interface eth0
    state BACKUP
    virtual_router_id 51
    advert_int 1
    priority 9
    virtual_ipaddress {
        dev eth0    # the virtual IP
    }
    unicast_src_ip # Local IP
    unicast_peer { # Peer IP
    }
    authentication {
        auth_type PASS
        auth_pass XXXX
    }
}

Keepalived config on nagios_secondary

Set role to MASTER and priority to 10; detect failure (fall) and OK (rise) state after 2 attempts; define the check script via track_script (a bash script which copies files from the Nagios master and reports state: 0 if all is good, 1 on failure); reduce priority by 2 on check script failure (weight); when nagios_secondary becomes MASTER, start the nagios and postfix services (notify_master /etc/keepalived/), and when it becomes BACKUP (notify_backup /etc/keepalived/), stop the nagios and postfix services.


! Configuration File for keepalived

global_defs {
   enable_script_security 1
   script_user root
}

vrrp_script chk_service_health {
    script /etc/keepalived/
    interval 15
    fall 2
    rise 2
    weight -2
}

vrrp_instance VI_1 {
    debug 4
    interface eth0
    state MASTER
    virtual_router_id 51
    advert_int 1
    priority 10
    virtual_ipaddress {
        dev eth0    # the virtual IP
    }
    unicast_src_ip # Local IP
    unicast_peer { # Peer IP
    }
    authentication {
        auth_type PASS
        auth_pass XXXX
    }
    track_script {
        chk_service_health
    }
    notify_master /etc/keepalived/
    notify_backup /etc/keepalived/
}




#!/bin/bash
# Check script: copy Nagios files from the master; the exit code tells keepalived the state.
rsync -armzv --timeout=5 --delete /usr/local/nagios --exclude-from /etc/keepalived/exclude-list.txt

if [ "$?" -eq 0 ]; then
   exit 0 # All good. Nagios master reachable
else
   exit 1 # Failover trigger
fi



#!/bin/bash
exec >> "$logfile"
exec 2>&1

# Define an array of processes to be checked.
# If properly quoted, these may contain spaces

check_process=("nagios" "postfix")

for p in "${check_process[@]}"; do
   if systemctl -q is-active "$p"; then
      echo "$p is running, stopping it"
      systemctl stop "$p"
   fi
done
exit 0



#!/bin/bash
exec >> "$logfile"
exec 2>&1

# Define an array of processes to be checked.
# If properly quoted, these may contain spaces

check_process=("nagios" "postfix")

for p in "${check_process[@]}"; do
   if systemctl -q is-active "$p"; then
      echo "$p is running"
   else
      echo "Starting $p..."
      systemctl start "$p"
   fi
done
exit 0

Install Nagios Core on CentOS 8

Posted: November 4, 2019 in Linux

I disabled SELinux

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0


Install python and other prerequisites


dnf install -y compat-openssl10 python3 perl gcc glibc glibc-common wget unzip httpd php gd gd-devel perl postfix 
alternatives --set python /usr/bin/python3 

Add nagios user and group

useradd nagios
groupadd nagcmd 

Add both the nagios user and the apache user to the nagcmd group 

usermod -aG nagcmd nagios
usermod -aG nagcmd apache

Download nagios setup

mkdir setup 
cd setup 
tar xvf nagios-4.4.5.tar.gz

Install nagios

cd nagios-4.4.5 
./configure --with-command-group=nagcmd 
make all 
make install 
make install-init 
make install-commandmode 
make install-config 
make install-webconf 
# set nagiosadmin password 
htpasswd -s -c /usr/local/nagios/etc/htpasswd.users nagiosadmin 

Setup EventHandlers

cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/ 
chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers



Download and install nagios plugins

yum install -y gcc glibc glibc-common make gettext automake autoconf wget openssl-devel net-snmp net-snmp-utils
cd /tmp
wget --no-check-certificate -O nagios-plugins.tar.gz
tar zxf nagios-plugins.tar.gz
cd /tmp/nagios-plugins-release-2.2.1/
./configure --with-openssl --with-nagios-user=nagios --with-nagios-group=nagios
make install


Install NRPE

NRPE allows you to remotely execute Nagios plugins on other Linux/Unix machines. This allows you to monitor remote machine metrics (disk usage, CPU load, etc.). NRPE can also communicate with some of the Windows agent addons, so you can execute scripts and check metrics on remote Windows machines as well.

# install nrpe 
dnf install openssl-devel 
tar -xvf nrpe-3.2.1.tar.gz 
cd nrpe-3.2.1 
./configure --disable-ssl --with-nrpe-user=nagios --with-nrpe-group=nagios --with-nagios-user=nagios --with-nagios-group=nagios --libexecdir=/usr/local/nagios/libexec/ --bindir=/usr/local/nagios/bin/ --prefix=/usr/local/nagios 
make install 
cp src/check_nrpe /usr/local/nagios/libexec/

Install NCPA 

NCPA is a cross-platform monitoring agent that runs on Windows, Linux/Unix, and Mac OS/X machines. Its features include both active and passive checks, remote management, and a local monitoring interface.


tar -zxf check_ncpa.tar.gz 
mv /usr/local/nagios/libexec/ 
chown nagios:nagios /usr/local/nagios/libexec/ 
chmod 775 /usr/local/nagios/libexec/ 

Next add your command (check_ncpa) into commands.cfg

vim /usr/local/nagios/etc/objects/commands.cfg 

define command { 
    command_name    check_ncpa 
    command_line    $USER1$/ -H $HOSTADDRESS$ $ARG1$ 
}

Adding contact

edit contacts.cfg and change email address

define contact{
        contact_name            nagiosadmin             ; Short name of user
        use                     generic-contact         ; Inherit default values from generic-contact template (defined above)
        alias                   Nagios Admin            ; Full name of user
        email                ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
        }

Enable and start nagios and httpd

systemctl start nagios
systemctl enable nagios
systemctl enable httpd
systemctl start httpd

Recently I created a new CentOS 7 Hyper-V VM. I set the disk type to Dynamic with 127 GB and chose automatic partitioning during installation, but soon realized I only had 50 GB in the root partition; after copying a bunch of files to the /opt directory I ran out of disk space.

First, on the Hyper-V console, turn off the VM and expand the disk.

Turn the VM back on and partition the unallocated disk space. Check the name(s) of your SCSI devices:

ls /sys/class/scsi_device/
0:0:0:0  2:0:0:0

Then rescan the scsi bus. Replace the ‘0\:0\:0\:0’ with the actual SCSI bus name found with the previous command.

echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan

Create new partition

fdisk /dev/sda

Enter n to create a new partition, then p to create a new primary partition, and choose the partition number. I already have /dev/sda1 and /dev/sda2, so the number is 3. Confirm the first and last sector, then type t to change the partition type; when prompted, enter the number of the partition you've just created. When asked for the "Hex code", enter 8e (Linux LVM), then type w to write the partition table to disk.

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3): 3
First sector (266338304-838860799, default 266338304):
Using default value 266338304
Last sector, +sectors or +size{K,M,G} (266338304-838860799, default 838860799):
Using default value 838860799
Partition 3 of type Linux and of size 273 GiB is set

Command (m for help): t
Partition number (1-3, default 3): 3
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

Scan new partition

partprobe -s

And confirm partition is created

fdisk -l

Create the physical volume (replace /dev/sda3 with the newly created partition):

pvcreate /dev/sda3

Extend the logical volume with the new partition; first find out the volume group:

vgs
Extend volume group by adding new partition

vgextend centos /dev/sda3

If getting “Device /dev/sda3 not found.” reboot VM

See the newly added physical volume:

pvs
Extend logical volume

lvextend /dev/mapper/centos-root /dev/sda3
Size of logical volume centos/root changed from 50.00 GiB (12800 extents) to <323.00 GiB (82687 extents).
Logical volume centos/root successfully resized.

Resize the file system to fill the logical volume:

xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 84671488

Root partition is resized

phpIPAM is an open-source web IP address management application (IPAM). Its goal is to provide light, modern and useful IP address management. It is a PHP-based application with a MySQL database backend, using jQuery libraries, AJAX and HTML5/CSS3 features.


• IPv4/IPv6 IP address management

• Section / Subnet management

• Automatic free space display for subnets

• Visual subnet display

• Automatic subnet scanning / IP status checks

• PowerDNS integration

• NAT support

• RACK management

• Domain authentication (AD, LDAP, Radius)

• Per-group section/subnet permissions

• Device / device types management

• RIPE subnets import

• XLS / CSV subnets import

• IP request module

• Locations module

• VLAN management

• VRF management – Virtual routing and forwarding (VRF) is a technology included in IP network routers that allows multiple instances of a routing table to exist in a router and work simultaneously. This increases functionality by allowing network paths to be segmented without using multiple devices. Because traffic is automatically segregated, VRF also increases network security and can eliminate the need for encryption and authentication. Internet service providers (ISPs) often use VRF to create separate virtual private networks (VPNs) for customers; thus the technology is also referred to as VPN routing and forwarding. (I'm not sure how important this is for us.)

• IPv4 / IPv6 calculator

• IP database search

• E-mail notifications


It's presumed SELinux and the firewall are disabled. Set locales:

more /etc/environment

Install all required packages

yum install httpd mariadb-server php php-cli php-gd php-common php-ldap php-pdo php-pear php-snmp php-xml php-mysql php-mbstring git
yum install epel-release
yum install php-mcrypt

Configuring Apache

Edit /etc/httpd/conf/httpd.conf

DocumentRoot "/var/www/html"

# Relax access to content within /var/www.
<Directory "/var/www">
    AllowOverride None
    # Allow open access:
    Require all granted
</Directory>

# Further relax access to the default document root:
<Directory "/var/www/html">
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    # The Options directive is both complicated and important.  Please see
    # for more information.
    Options FollowSymLinks

    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    AllowOverride all
    Order allow,deny
    Allow from all

    # Controls who can get stuff from this server.
    #Require all granted
</Directory>

Set correct timezone to php.ini to avoid php warnings:

grep timezone /etc/php.ini
; Defines the default timezone used by the date functions
date.timezone = Europe/Belgrade
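Setting the timezone can also be scripted; a sketch against a temporary stand-in for /etc/php.ini (the commented `;date.timezone =` default line is an assumption about the stock file):

```shell
# Uncomment and set date.timezone (demonstrated on a temp stand-in for php.ini).
ini=$(mktemp)
echo ';date.timezone =' > "$ini"
sed -i 's|^;date.timezone =.*|date.timezone = Europe/Belgrade|' "$ini"
result=$(grep '^date.timezone' "$ini")
echo "$result"   # date.timezone = Europe/Belgrade
rm -f "$ini"
```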

Start Apache and MariaDB

systemctl start httpd
systemctl enable httpd
systemctl start mariadb
systemctl enable mariadb

Harden the MariaDB server:

mysql_secure_installation
Download PHP installation files and set correct permissions

cd /var/www/html/
git clone .
git checkout 1.4
chown apache:apache -R /var/www/html/
find . -type f -exec chmod 0644 {} \;
find . -type d -exec chmod 0755 {} \;
cp config.dist.php config.php

Installing phpIPAM

Open a browser and go to http://IP address – click "Automatic database installation".

Then type the MariaDB username/password (set when hardening MariaDB).

On the next screen set the admin password; phpIPAM should now be installed, log in to access the GUI.

Configuring logs

edit /etc/rsyslog.conf

auth.alert;auth.warning;auth.debug              /var/log/auth.log
if $programname == 'phpipam' then /var/log/phpipam.log
if $programname == 'phpipam-changelog' then /var/log/phpipam-changelog.log

On the IPAM console: Administration – phpIPAM settings – Syslog – set to "Syslog and local database"

Email settings

Install SMTP PHP module

cd /var/www/html/
git submodule update --init --recursive

Set admin name and email address: Administration – phpIPAM settings

Administration – Mail Settings

Server type: SMTP

Server address:

Port: 587

Set username/password and admin email (set in previous step)

Creating a Section

In a Section you can organize your subnets in different folders.

A folder is a block or a group of subnets, like a folder on disk.
To create a Section click Administration – Sections,

type a name and set the other options.

Creating VLANs

To create VLANs, an L2 domain needs to be created first (this is not necessary when creating VLANs via API calls)

Administration – VLAN

Add L2 Domain

Type name and select section

Type VLAN ID and description

Creating subnets

Administration – Subnets

Select section created in previous step

Create folder by clicking on folder icon

Type CIDR, select VLAN,nameserver…..

Set Check host status and Discover new hosts to Yes

Scanning subnet and discovering new hosts

Manually scan subnet:

/bin/php /var/www/html/functions/scripts/pingCheck.php
/bin/php /var/www/html/functions/scripts/discoveryCheck.php

Automatically scan subnets every 15 minutes- /etc/crontab

*/15 * * * * root /bin/php /var/www/html/functions/scripts/pingCheck.php
*/15 * * * * root /bin/php /var/www/html/functions/scripts/discoveryCheck.php


Enable API (Administration – phpIPAM settings)

In this example HTTP access is used, so we must enable http support in the /var/www/html/config.php file

$api_allow_unsafe = true;
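The flip can be done with sed; a sketch on a temporary stand-in for config.php (assuming the shipped default line is `$api_allow_unsafe = false;`):

```shell
# Enable unsafe (HTTP) API access (demonstrated on a temp stand-in for config.php).
cfg=$(mktemp)
echo '$api_allow_unsafe = false;' > "$cfg"
sed -i 's/^\$api_allow_unsafe = false;/$api_allow_unsafe = true;/' "$cfg"
result=$(grep -F '$api_allow_unsafe' "$cfg")
echo "$result"   # $api_allow_unsafe = true;
rm -f "$cfg"
```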

Create API ID

Type:User token

Set App Permission

Get token

yum install jq -y
curl -X POST --user admin:pass http://localhost/api/myapi/user/ | jq "."
{
  "code": 200,
  "success": true,
  "data": {
    "token": "token",
    "expires": "2019-09-16 15:15:55"
  },
  "time": 0.015
}

Now we can use the token to search for data or to create new objects.
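If jq is not available, the token can also be pulled out of the captured response with grep; a sketch using the token value shown later in this post as sample data:

```shell
# Extract the "token" field from a captured phpIPAM API response without jq.
response='{"code":200,"success":true,"data":{"token":"MOOG3gikXMPF9htSjY56S-1i","expires":"2019-09-16 15:15:55"},"time":0.015}'
token=$(echo "$response" | grep -oP '(?<="token":")[^"]*')
echo "$token"   # MOOG3gikXMPF9htSjY56S-1i
```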

The following methods are available:

  • GET – Reads object(s) details and returns it in requested format
  • POST – Creates new object
  • PUT – Changes object values
  • PATCH – Alias to PUT method
  • DELETE – Deletes an object

The following objects (controllers) are available:

  • Sections
  • Subnets
  • Folders
  • VLANs
  • Addresses
  • L2 domains
  • VRFs
  • Devices
  • Tools
  • Prefix

Request structure:


Searching objects

Searching subnets

curl -X GET --header "token: token" | jq '.'

Searching VLANs

curl -X GET --header "token: token" | jq '.'

Search for specific section

curl -X GET  "" --header "token: token" | jq '.'

The default REST API output is in JSON format. If the output is too lengthy, it can be tedious to scroll through it in a PuTTY window, so a better approach may be using Postman. Make sure, in the Header section, to create a key named token and paste the token into Value.

Creating VLAN

Creating VLAN 88

curl -X POST --header 'token: token' --header "Content-Type: application/json" http://localhost/api/myapi/vlans/ --data '{"number": "88","name": "vlan_88","description": "VLAN 88"}' | jq "." 

To execute the same API call in Postman: Import – Paste Raw Text – paste the same command as in the previous example.

Create a new subnet:

In this example the subnet is created in the Devtech section (sectionId: 4) and assigned to VLAN 87 (vlanId: 4).
To get the section and VLAN IDs, first run GET requests against the subnets and vlans controllers.

curl -X POST --header 'token: MOOG3gikXMPF9htSjY56S-1i' --header "Content-Type:application/json" http://localhost/api/myapi/subnets/ --data '{"subnet": "","sectionId": 1,"description": "Test","masterSubnetId": 0,"mask": 24,"vlanId":"4"}' | jq "."

After packaging a VirtualBox VM

vagrant package --base ""
vagrant box add mypackagedbox

and provisioning the exported/packaged machine,
I started getting errors: "Warning: Authentication failure. Retrying…"

The solution (at least for me) was specifying config.ssh.insert_key = false in both Vagrantfiles (when provisioning the "original" and the "packaged" VM).

1. Log in to the original box and fill the empty disk space with zeroes

yum -y clean all
rm -rf /var/cache/yum
dd if=/dev/zero of=/EMPTY bs=1M
rm -f /EMPTY

2. Shut down the VM

shutdown -h 0

3. Delete the file insecure_private_key from the Vagrant directory


4. Export the box

vagrant package --base     vm_id_as_it_is_in_virtualbox --output box_file_name

Visual Studio Code (aka VS Code) is "a lightweight but powerful source code editor which runs on your desktop and is available for Windows, macOS and Linux". It is half-way between a text editor and an IDE. Main reasons for using Visual Studio Code:

  • It comes with built-in support for JavaScript, TypeScript and Node.js (auto-completion, syntax check, debug, …), and according to Slant – 12 Best IDEs for TypeScript development as of 2019 it has the best TypeScript support
  • It has a great ecosystem of plugins supporting other languages (C, C++, C#, Python, …); you can even install keymaps from text editors like Sublime Text, Atom or Vim
  • It is cross-platform: Windows, Mac or Linux

In this post we'll install Visual Studio Code on Windows 10, then open and execute a Python script on a remote Linux box.

Creating SSH connection between Windows 10 and CentOS 7

Visual Studio Code uses an SSH key pair to connect to the Linux box.

So we'll create a key pair on Windows 10 and copy the Windows 10 public key to the ~/.ssh/authorized_keys file.

Open a Command prompt on Windows 10 and create the keys:

ssh-keygen

On CentOS 7 create the ~/.ssh/authorized_keys file, set appropriate permissions and copy the content of the Windows 10 public key file to ~/.ssh/authorized_keys

mkdir /root/.ssh
chmod -R 700 /root/.ssh/
vi /root/.ssh/authorized_keys
# copy content of your public key file to authorized_keys file
chmod 600 /root/.ssh/authorized_keys
systemctl restart sshd

Test ssh connection from Windows 10 to Linux

Open CMD and type

ssh -i c:\Users\user\.ssh\id_rsa root@

Install Visual Studio Code on Windows 10

Once installed, click on “Cog” button – extensions


Type Remote – SSH to install this extension – click on Install


Now, click again on “Cog” – Command Pallete


Type Remote – SSH: Open Configuration File


Select configuration file located in your User Profile


Change the alias to something more descriptive, set the IP address as hostname, the user and the path to the private key, then save the file


Now click green button (far bottom left) – select alias we set in configuration file


A connection to Linux should be established (Connected to); click Open folder, select the desired folder and click OK


Now, open an existing .py file (File – Open) or create a new one (File – New File, Save as .py)

Click debug – Add Configuration


Python extension will be offered for installation – Install Python extension


Select the python interpreter (2 or 3 – it depends which one is installed on the Linux box)


Click again on Debug icon – Add Configuration – Select Python File


Select interpreter