Installing and Configuring Elasticsearch, Logstash and Kibana (ELK Stack) on CentOS 7

Posted: March 30, 2019 in Linux

The Elastic Stack (a collection of three open-source projects: Elasticsearch, Logstash and Kibana) is a complete end-to-end log analysis solution which helps in deep searching, analyzing and visualizing logs generated by different machines.

In this post we’ll install Elasticsearch, Logstash and Kibana on VM1.test.com, and Elasticsearch and Logstash on VM2.test.com. Then we’ll search data on the VM2 Elasticsearch instance from VM1, which is why we need to connect the Elasticsearch clusters on VM1 and VM2. These two clusters are independent, and the connection is one-way (ES on VM1 will connect to and search data located on VM2).

Also, on VM1 we’ll install Filebeat (an agent for collecting data from VM1), which will send data to Logstash and then on to Elasticsearch.

We’ll also install Winlogbeat (an agent for Windows machines), which will send data to the VM2.test.com ES cluster.

VM1.TEST.COM: 192.168.74.37

VM2.TEST.COM: 192.168.74.45


Actions on VM1 and VM2:

Create ELK repository

cat >>/etc/yum.repos.d/elk.repo<<EOF
[ELK-6.x]
name=ELK repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
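
To confirm the repository was created correctly, you can list it before installing anything (the repo id ELK-6.x comes from the file above):

yum repolist enabled | grep ELK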

Install Elasticsearch

Elasticsearch is the database where logs are stored. We’ll use the Search Guard plugin for ELK security; it’s a commercial solution but offers a free plugin for SSL security. At the time of this writing (30.03.2019), the latest Search Guard plugin supports ES version 6.6.2, so that’s the version I’ll install in this example.

yum install epel-release
yum install elasticsearch-6.6.2

Install java:

yum install java-1.8.0-openjdk

On both machines edit /etc/elasticsearch/elasticsearch.yml

cluster.name: set a different name on each machine (arbitrary)

network.host: set the machine name (vm1.test.com on VM1, vm2.test.com on VM2)

and uncomment

http.port: 9200
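
For example, on VM1 the relevant lines might look like this (the cluster name is arbitrary, it just has to differ between the two VMs):

cluster.name: cluster-vm1
network.host: vm1.test.com
http.port: 9200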

Enable and start Elasticsearch, then check that the cluster is accessible

systemctl enable elasticsearch
systemctl start elasticsearch

On VM1

curl -X GET "vm1.test.com:9200"

on VM2

curl -X GET "vm2.test.com:9200"

Output should be like this:

{
  "name" : "4P1fXFO",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "NcmuS7CyTHyUIQMcNcT3PA",
  "version" : {
    "number" : "6.6.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "3bd3e59",
    "build_date" : "2019-03-06T15:16:26.864148Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Actions on VM1

Install Kibana, which will be used for data visualization, and Nginx

yum install kibana-6.6.2
yum install nginx

Because Kibana by default listens only on localhost, we’ll use Nginx as a reverse proxy to access the Kibana GUI from anywhere

Create file /etc/nginx/conf.d/kibana.conf

server {
    listen 80;

    server_name example.com www.example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
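
The config above references /etc/nginx/htpasswd.users for basic authentication; if you don’t have that file yet, you can create it with htpasswd (the kibanaadmin username here is just an example, pick your own):

yum install httpd-tools
htpasswd -c /etc/nginx/htpasswd.users kibanaadmin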

Enable and start Kibana and Nginx, then test web access

systemctl enable kibana
systemctl start kibana
systemctl enable nginx
systemctl start nginx


Create SSL certificates

I used this great guide to create SSL certificates

Download and extract the Search Guard TLS tool

wget -O search-guard-tlstool-1.6.tar.gz 'https://search.maven.org/remotecontent?filepath=com/floragunn/search-guard-tlstool/1.6/search-guard-tlstool-1.6.tar.gz'
tar xvzf search-guard-tlstool-1.6.tar.gz

Create a config file for the TLS tool (test_cluster.yml)

###
### Self-generated certificate authority
###
#
# If you want to create a new certificate authority, you must specify its parameters here.
# You can skip this section if you only want to create CSRs
#
ca:
   root:
      # The distinguished name of this CA. You must specify a distinguished name.
      dn: CN=root.ca.test.com,OU=CA,O=BugBear.BG\, Ltd.,DC=BugBear,DC=com

      # The size of the generated key in bits
      keysize: 2048

      # The validity of the generated certificate in days from now
      validityDays: 3650

      # Password for private key
      #   Possible values:
      #   - auto: automatically generated password, returned in config output;
      #   - none: unencrypted private key;
      #   - other values: other values are used directly as password
      pkPassword: none

      # The name of the generated files can be changed here
      file: root-ca.pem

   # If you want to use an intermediate certificate as signing certificate,
   # please specify its parameters here. This is optional. If you remove this section,
   # the root certificate will be used for signing.
   intermediate:
      # The distinguished name of this CA. You must specify a distinguished name.
      dn: CN=signing.ca.test.com,OU=CA,O=BugBear.BG\, Ltd.,DC=BugBear,DC=com

      # The size of the generated key in bits
      keysize: 2048

      # The validity of the generated certificate in days from now
      validityDays: 3650

      pkPassword: none

      # If you have a certificate revocation list, you can specify its distribution points here
      crlDistributionPoints: URI:https://raw.githubusercontent.com/floragunncom/unittest-assets/master/revoked.crl

###
### Default values and global settings
###
defaults:

      # The validity of the generated certificate in days from now
      validityDays: 3650

      # Password for private key
      #   Possible values:
      #   - auto: automatically generated password, returned in config output;
      #   - none: unencrypted private key;
      #   - other values: other values are used directly as password
      pkPassword: none

      # Specifies to recognize legitimate nodes by the distinguished names
      # of the certificates. This can be a list of DNs, which can contain wildcards.
      # Furthermore, it is possible to specify regular expressions by
      # enclosing the DN in //.
      # Specification of this is optional. The tool will always include
      # the DNs of the nodes specified in the nodes section.
      #nodesDn:
      #- "CN=*.example.com,OU=Ops,O=Example Com\\, Inc.,DC=example,DC=com"
      # - 'CN=node.other.com,OU=SSL,O=Test,L=Test,C=DE'
      # - 'CN=*.example.com,OU=SSL,O=Test,L=Test,C=DE'
      # - 'CN=elk-devcluster*'
      # - '/CN=.*regex/'

      # If you want to use OIDs to mark legitimate node certificates,
      # the OID can be included in the certificates by specifying the following
      # attribute

      # nodeOid: "1.2.3.4.5.5"

      # The length of auto generated passwords
      generatedPasswordLength: 12

      # Set this to true in order to generate config and certificates for
      # the HTTP interface of nodes
      httpsEnabled: true

      # Set this to true in order to re-use the node transport certificates
      # for the HTTP interfaces. Only recognized if httpsEnabled is true

      # reuseTransportCertificatesForHttp: false

      # Set this to true to enable hostname verification
      #verifyHostnames: false

      # Set this to true to resolve hostnames
      #resolveHostnames: false

###
### Nodes
###
#
# Specify the nodes of your ES cluster here
#
nodes:
  - name: node1
    dn: CN=vm1.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
    dns:
      - vm1.test.com
    ip:
      - 192.168.74.37

  - name: node2
    dn: CN=vm2.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
    dns:
      - vm2.test.com
    ip:
      - 192.168.74.45

###
### Clients
###
#
# Specify the clients that shall access your ES cluster with certificate authentication here
#
# At least one client must be an admin user (i.e., a super-user). Admin users can
# be specified with the attribute admin: true
#
clients:
  - name: admin
    dn: CN=admin.test.com,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
    admin: true

Create certificates on VM1:

cd tools/
 
# Generate new signing authority
./sgtlstool.sh -c ../config/test_cluster.yml -v -ca
 
# Generate CSRs for node + admin certs
./sgtlstool.sh -c ../config/test_cluster.yml -v -csr
 
# Generate cert/keys
./sgtlstool.sh -f -o -c ../config/test_cluster.yml -v -crt
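
If all three commands succeed, the out/ directory should contain, among others, the files referenced in the next steps: root-ca.pem, node1.pem/node1.key, node1_http.pem/node1_http.key, node2.pem/node2.key, node2_http.pem/node2_http.key and admin.pem/admin.key.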

On both machines:

yum install logstash
mkdir /etc/elasticsearch/ssl
mkdir /etc/logstash/ssl

On VM1:

cd out
yum install filebeat
cp node1.pem node1.key /etc/nginx/
cp node1.pem node1.key root-ca.pem /etc/logstash/ssl
cp root-ca.pem /etc/pki/tls/certs
cp root-ca.pem node1.key node1.pem node1_http.pem node1_http.key admin.key admin.pem /etc/elasticsearch/ssl
chown -R elasticsearch:elasticsearch /etc/elasticsearch/ssl
chown -R logstash:logstash /etc/logstash/ssl
# copy files to vm2.test.com
scp node2.key node2.pem root-ca.pem root@vm2.test.com:/etc/logstash/ssl
scp node2.pem node2.key root-ca.pem node2_http.key node2_http.pem admin.key admin.pem root@vm2.test.com:/etc/elasticsearch/ssl/

Disable cluster shard allocation

curl -Ss -XPUT 'http://vm1.test.com:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.routing.allocation.enable": "none" }}'
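
Optionally, you can also trigger a synced flush before stopping the node, so shard recovery is faster after the restart (supported in Elasticsearch 6.x):

curl -Ss -XPOST 'http://vm1.test.com:9200/_flush/synced?pretty'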

Check which Search Guard plugin version you need to install: it must match your Elasticsearch version (for ES 6.6.2 that is search-guard-6 version 6.6.2-24.2, used below).

Stop the Elasticsearch cluster and install the Search Guard plugin

systemctl stop elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-plugin install -b com.floragunn:search-guard-6:6.6.2-24.2

Add the following lines to /etc/elasticsearch/elasticsearch.yml

xpack.security.enabled: false
searchguard.enterprise_modules_enabled: false
searchguard.ssl.transport.pemcert_filepath: ssl/node1.pem
searchguard.ssl.transport.pemkey_filepath: ssl/node1.key
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: ssl/node1_http.pem
searchguard.ssl.http.pemkey_filepath: ssl/node1_http.key
searchguard.ssl.http.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.nodes_dn:
- CN=vm1.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=vm2.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=admin.test.com,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com

Allow the logstash role to create indices

vi /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles.yml

edit as below:

sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
    - indices:admin/template/get
    - indices:admin/template/put
  indices:
    '*':
      '*':
        - CRUD
        - CREATE_INDEX
    '*beat*':
      '*':
        - CRUD
        - CREATE_INDEX

Start the Elasticsearch cluster and re-enable shard allocation

cd /usr/share/elasticsearch/plugins/search-guard-6/tools/

systemctl start elasticsearch
# Re-enable cluster shard allocation
bash sgadmin.sh --enable-shard-allocation -key /etc/elasticsearch/ssl/admin.key -cert /etc/elasticsearch/ssl/admin.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -icl -nhnv -h vm1.test.com
systemctl restart elasticsearch

The default Search Guard username/password is admin:admin. To change it, run the following and enter a new password when prompted:

bash /usr/share/elasticsearch/plugins/search-guard-6/tools/hash.sh

Copy the hash and replace the old one in /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml
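
The relevant entry in sg_internal_users.yml should then look roughly like this (the hash value below is a placeholder, use the one produced by hash.sh):

admin:
  readonly: true
  hash: $2y$12$REPLACE_WITH_GENERATED_HASH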

For these changes to take effect (the updated password and the logstash role changes), run the sgadmin tool:

cd /usr/share/elasticsearch/plugins/search-guard-6/tools
bash sgadmin.sh -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -icl -key /etc/elasticsearch/ssl/admin.key -cert /etc/elasticsearch/ssl/admin.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv -h vm1.test.com

Check access:

yum install jq
curl -Ss -k https://admin:admin@vm1.test.com:9200/_cluster/health | jq

In case of any errors, check /var/log/elasticsearch/<cluster_name>.log


Now that our cluster is secured, we need to configure Kibana and Nginx to use SSL.

cat /etc/nginx/conf.d/kibana.conf

server {
    listen 80;
    server_name vm1.test.com; # Replace it with your subdomain
    return 301 https://$host$request_uri;
}

server {
    listen *:443 ssl;
    server_name vm1.test.com;
    access_log /var/log/nginx/ekl.access.log;
    ssl_certificate /etc/nginx/node1.pem;
    ssl_certificate_key /etc/nginx/node1.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        #auth_basic "Restricted Content";
        #auth_basic_user_file /etc/nginx/conf.d/htpasswd.users;
        proxy_pass http://localhost:5601;
        proxy_redirect http://localhost:5601 https://vm1.test.com;
    }
}
cat /etc/kibana/kibana.yml
elasticsearch.hosts: ["https://vm1.test.com:9200"]
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "admin"
elasticsearch.password: "admin"

Restart Nginx and Kibana, then browse to http://vm1.test.com; you should be redirected to HTTPS.


Enter the username/password provided by Search Guard (or the new password if you changed it). You should then see the Kibana web page.

Configuring Logstash

Create a simple config file for pushing data to Elasticsearch

cat /etc/logstash/conf.d/example.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/node1.pem"
    ssl_key => "/etc/logstash/ssl/node1.key"
  }
}

#filter {
#    if [type] == "syslog" {
#        grok {
#            match => {
#                "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
#            }
#            add_field => [ "received_at", "%{@timestamp}" ]
#            add_field => [ "received_from", "%{host}" ]
#        }
#        syslog_pri { }
#        date {
#            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
#        }
#    }
#}

output {
    elasticsearch {
        hosts => "vm1.test.com:9200"
        index => "vm1-%{+YYYY.MM.dd}"
        user => logstash
        password => "logstash"
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/ssl/root-ca.pem"

    }
}
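
Before starting Logstash, it’s worth validating the pipeline syntax with Logstash’s built-in config test (the path assumes the default RPM layout):

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/example.conf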

Configure filebeat on VM1

Filebeat is agent software that collects data from a client machine and can send it to Logstash or Elasticsearch; in this example it is sent to Logstash. Filebeat was installed in one of the previous steps.

Edit /etc/filebeat/filebeat.yml (in this example only /var/log/audit/audit.log is collected) and comment out the elasticsearch output section:

filebeat.inputs:
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/audit/audit.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["vm1.test.com:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/root-ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Enable and start Filebeat and Logstash:

systemctl enable filebeat
systemctl start filebeat
systemctl enable logstash
systemctl start logstash

Check Filebeat for errors:

filebeat -e -c /etc/filebeat/filebeat.yml
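
Filebeat can also test its configured output, including the TLS connection to Logstash; this is a quick way to catch certificate problems:

filebeat test output -c /etc/filebeat/filebeat.yml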

In case of errors check /var/log/logstash/logstash-plain.log

If all is fine, you should see data in Kibana under Management → Index Management.


In order to search it, go to Management → Kibana → Index Patterns.


Create index pattern


Go to Discover and select a filter.


Select a time range and view the report.


Now that we have secured VM1 and copied the certificates to VM2, we need to configure VM2 too.

Configuring VM2.test.com

Set file permissions, disable shard allocation, stop the cluster and install Search Guard

chown -R elasticsearch:elasticsearch /etc/elasticsearch/ssl
chown -R logstash:logstash /etc/logstash/ssl
curl -Ss -XPUT 'http://vm2.test.com:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.routing.allocation.enable": "none" }}'
systemctl stop elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-plugin install -b com.floragunn:search-guard-6:6.6.2-24.2

edit /etc/elasticsearch/elasticsearch.yml

xpack.security.enabled: false
searchguard.enterprise_modules_enabled: false
searchguard.ssl.transport.pemcert_filepath: ssl/node2.pem
searchguard.ssl.transport.pemkey_filepath: ssl/node2.key
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: ssl/node2_http.pem
searchguard.ssl.http.pemkey_filepath: ssl/node2_http.key
searchguard.ssl.http.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.nodes_dn:
- CN=vm1.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=vm2.test.com,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=admin.test.com,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com

Allow the logstash role to create indices

vi /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles.yml

edit as below:

sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
    - indices:admin/template/get
    - indices:admin/template/put
  indices:
    '*':
      '*':
        - CRUD
        - CREATE_INDEX
    '*beat*':
      '*':
        - CRUD
        - CREATE_INDEX

Start the Elasticsearch cluster and re-enable shard allocation

cd /usr/share/elasticsearch/plugins/search-guard-6/tools/

systemctl start elasticsearch
# Re-enable cluster shard allocation
bash sgadmin.sh --enable-shard-allocation -key /etc/elasticsearch/ssl/admin.key -cert /etc/elasticsearch/ssl/admin.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -icl -nhnv -h vm2.test.com
systemctl restart elasticsearch

The default Search Guard username/password is admin:admin. To change it, run the following and enter a new password when prompted:

bash /usr/share/elasticsearch/plugins/search-guard-6/tools/hash.sh

Copy the hash and replace the old one in /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml

For these changes to take effect (the updated password and the logstash role changes), run the sgadmin tool:

cd /usr/share/elasticsearch/plugins/search-guard-6/tools
bash sgadmin.sh -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -icl -key /etc/elasticsearch/ssl/admin.key -cert /etc/elasticsearch/ssl/admin.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv -h vm2.test.com
systemctl restart elasticsearch

Check access:

yum install jq
curl -Ss -k https://admin:admin@vm2.test.com:9200/_cluster/health | jq

Configuring Logstash

The Logstash input is the Windows event log and the output is Elasticsearch.

cat /etc/logstash/conf.d/events.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/node2.pem"
    ssl_key => "/etc/logstash/ssl/node2.key"
  }
}

output {
    elasticsearch {
        hosts => ["https://vm2.test.com:9200"]
        index => "client02-eventviewer-%{+YYYY.MM.dd}"
        user => logstash
        password => "logstash"
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/ssl/root-ca.pem"

    }
}

Start logstash

systemctl enable logstash
systemctl start logstash

Installing Winlogbeat (ELK agent) on Windows 10

Winlogbeat collects event log data from the Windows machine.

Unzip the file, move it to C:\Program Files, copy the content of root-ca.pem to this folder, then install the agent and configure it:

set-executionpolicy -unrestricted
.\install-service-winlogbeat.ps1

Edit C:\Program Files\winlogbeat\winlogbeat.yml:

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System
winlogbeat.registry_file: C:/ProgramData/winlogbeat/.winlogbeat.yml

output.logstash:
  # The Logstash hosts
  hosts: ["vm2.test.com:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["C:/Program Files/winlogbeat/root-ca.pem"]

logging.to_files: true
logging.files:
  path: "C:/Program Files/winlogbeat/Logs/"
logging.level: error

If SSL is enabled, copy the content of root-ca.pem to the C:\Program Files\winlogbeat\root-ca.pem file.


Check config for errors:

.\winlogbeat.exe test config -c .\winlogbeat.yml -e

Start winlogbeat.
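
Assuming the service was registered by install-service-winlogbeat.ps1 in the step above, it can be started from an elevated PowerShell prompt:

Start-Service winlogbeat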


Check logs in C:\Program Files\winlogbeat\Logs

Creating a remote cluster and cross-cluster search

The remote clusters module enables you to establish unidirectional connections to a remote cluster. It allows any node to act as a federated client across multiple clusters, and the remote cluster can only be searched, using a feature called cross-cluster search. Windows 10 event logs are passed to the VM2 cluster, and we’ll query that data from the VM1 cluster.

On VM2, add the following lines to /etc/elasticsearch/elasticsearch.yml

http.cors.enabled: true
http.cors.allow-origin: "*"

Make sure that you can reach port 9300 on VM2 from VM1.
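
For example (any TCP client will do; telnet shown here):

telnet vm2.test.com 9300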

On VM1, in Kibana click Management → Remote Clusters


Specify vm2.test.com:9300 as the seed node.
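
The Kibana screen is just a front end for the cluster settings API; an equivalent call from the shell would look something like this (the alias vm2cluster is an arbitrary name for the remote cluster):

curl -Ss -k -u admin:admin -XPUT 'https://vm1.test.com:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.remote.vm2cluster.seeds":["vm2.test.com:9300"]}}'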


Searching remote cluster

Go to Management → Index Patterns → Create index pattern and type clustername:indexname in order to search the remote cluster.
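
The same clustername:indexname syntax works against the REST API; for example, to search the Winlogbeat index on VM2 from VM1 (assuming the remote cluster alias vm2cluster from the previous step):

curl -Ss -k -u admin:admin 'https://vm1.test.com:9200/vm2cluster:client02-eventviewer-*/_search?pretty'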


Adding a node to the cluster

In this example I’ll add one node to an existing cluster.

master: ekl.test.com (make sure 127.0.0.1 is not bound to ekl.test.com)

node: ekl1.test1.com (make sure 127.0.0.1 is not bound to ekl1.test1.com)

elasticsearch.yml on the master:

cluster.name: client1
searchguard.enterprise_modules_enabled: false
node.name: ekl.test.com
node.master: true
node.data: true
node.ingest: true

elasticsearch.yml on the new node:

cluster.name: client1
searchguard.enterprise_modules_enabled: false
node.name: ekl1.test1.com
node.master: true
node.data: true
node.ingest: true
discovery.zen.ping.unicast.hosts: ["ekl.test.com:9300", "ekl1.test1.com:9300"]
discovery.zen.minimum_master_nodes: 1
transport.tcp.port: 9300
transport.host: ekl1.test1.com
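
After restarting Elasticsearch on both machines, you can verify that the new node joined the cluster (adjust the scheme and credentials to your setup):

curl -Ss -k -u admin:admin 'https://ekl.test.com:9200/_cat/nodes?v'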

Getting logs from syslog devices

Fortigate configuration

Log in to the Fortigate, open the CLI and type:

config log syslogd setting
    set status enable
    set server "logstash server IP"
    set port 5044
end

cat /etc/logstash/conf.d/fortigate.conf

input {
  udp {
    port => 5044
    type => firewall
  }
}

filter {

if [type] == "firewall" {
        mutate {
                add_tag => ["fortigate"]
                        }
        grok {
            break_on_match => false
                match => ["message", "%{SYSLOG5424PRI:syslog_index}%{GREEDYDATA:message}"]
                overwrite => [ "message" ]
                tag_on_failure => [ "failure_grok_fortigate" ]
        }
                kv { }
        if [msg] {
                mutate {
                        replace => [ "message", "%{msg}" ]
                }
        }
        mutate {
                convert => { "duration" => "integer" }
                convert => { "rcvdbyte" => "integer" }
                convert => { "rcvdpkt" => "integer" }
                convert => { "sentbyte" => "integer" }
                convert => { "sentpkt" => "integer" }
                convert => { "cpu" => "integer" }
                convert => { "disk" => "integer" }
                convert => { "disklograte" => "integer" }
                convert => { "fazlograte" => "integer" }
                convert => { "mem" => "integer" }
                convert => { "totalsession" => "integer" }
        }
        mutate {
                add_field => ["logTimestamp", "%{date} %{time}"]
                add_field => ["loglevel", "%{level}"]
                replace => [ "fortigate_type", "%{type}"]
                replace => [ "fortigate_subtype", "%{subtype}"]
                remove_field => [ "msg", "type", "level", "date", "time" ]
        }
        date {
                locale => "en"
                match => ["logTimestamp", "YYYY-MM-dd HH:mm:ss"]
                remove_field => ["logTimestamp", "year", "month", "day", "time", "date"]
                add_field => ["type", "syslog"]
        }
        if [status] == "clash" {

                grok {
                        match => { "new_status" => "state=%{GREEDYDATA:new_status_state1} tuple-num=%{GREEDYDATA:new_status_tuple-num1} policyid=%{GREEDYDATA:new_status_policyid1} identidx=%{GREEDYDATA:new_status_identidx1} dir=%{GREEDYDATA:new_status_dir1} act=%{GREEDYDATA:new_status_act1} hook=%{GREEDYDATA:new_status_hook1} dir=%{GREEDYDATA:new_status_dir2} act=%{GREEDYDATA:new_status_act2} hook=%{GREEDYDATA:new_status_hook2} dir=%{GREEDYDATA:new_status_dir3} act=%{GREEDYDATA:new_status_act3} hook=%{GREEDYDATA:new_status_hook3}" }
                }
                grok {
                        match => { "old_status" => "state=%{GREEDYDATA:old_status_state1} tuple-num=%{GREEDYDATA:old_status_tuple-num1} policyid=%{GREEDYDATA:old_status_policyid1} identidx=%{GREEDYDATA:old_status_identidx1} dir=%{GREEDYDATA:old_status_dir1} act=%{GREEDYDATA:old_status_act1} hook=%{GREEDYDATA:old_status_hook1} dir=%{GREEDYDATA:old_status_dir2} act=%{GREEDYDATA:old_status_act2} hook=%{GREEDYDATA:old_status_hook2} dir=%{GREEDYDATA:old_status_dir3} act=%{GREEDYDATA:old_status_act3} hook=%{GREEDYDATA:old_status_hook3}" }
                }
        }
}

}

output {
    elasticsearch {
        hosts => ["ekl1.test1.com:9200"]
        index => "client02-fortigate-%{+YYYY.MM.dd}"
        user => logstash
        password => "logstash"
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/ssl/root-ca.pem"

    }
}

Active Directory (LDAP) authentication

For this, a Search Guard Enterprise license is required. In /etc/elasticsearch/elasticsearch.yml, change

searchguard.enterprise_modules_enabled: false

to

searchguard.enterprise_modules_enabled: true


In this example, the service account used for searching Active Directory (service) is located in the service_accounts OU.

Users who need to access Elasticsearch are located in the UA OU, and the AD group test (role) is used to grant access. So all users who need to access Elasticsearch/Kibana need to be put in the test AD group.

cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/

Edit sg_config.yml:

searchguard:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    kibana:
      # Kibana multitenancy - NOT FREE FOR COMMERCIAL USE
      # see https://github.com/floragunncom/search-guard-docs/blob/master/multitenancy.md
      # To make this work you need to install https://github.com/floragunncom/search-guard-module-kibana-multitenancy/wiki
      #multitenancy_enabled: true
      #server_username: kibanaserver
      #index: '.kibana'
      #do_not_fail_on_forbidden: false
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        remoteIpHeader:  'x-forwarded-for'
        proxiesHeader:   'x-forwarded-by'
        #trustedProxies: '.*' # trust all external proxies, regex pattern
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          type: kerberos # NOT FREE FOR COMMERCIAL USE
          challenge: true
          config:
            # If true a lot of kerberos/security related debugging output will be logged to standard out
            krb_debug: false
            # If true then the realm will be stripped from the user name
            strip_realm_from_principal: true
        authentication_backend:
          type: noop
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 4
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
      proxy_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          type: proxy
          challenge: false
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: noop
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          type: jwt
          challenge: false
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
            jwt_url_parameter: null
            roles_key: null
            subject_key: null
        authentication_backend:
          type: noop
      clientcert_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
      ldap:
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          # LDAP authentication backend (authenticate users against a LDAP or Active Directory)
          type: ldap # NOT FREE FOR COMMERCIAL USE
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
              - dc.test.com:389
            bind_dn: "CN=service,OU=service_accounts,DC=test,DC=com"
            password: "Pass"
            userbase: "OU=UA,DC=test,DC=com"
            # Filter to search for users (currently in the whole subtree beneath userbase)
            # {0} is substituted with the username
            usersearch: "(cn={0})"
            # Use this attribute from the user as username (if not set then DN is used)
            username_attribute: "cn"
    authz:
      roles_from_myldap:
        http_enabled: true
        transport_enabled: true
        authorization_backend:
          # LDAP authorization backend (gather roles from a LDAP or Active Directory, you have to configure the above LDAP authentication backend settings too)
          type: ldap # NOT FREE FOR COMMERCIAL USE
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
              - "dc.test.com:389"
            bind_dn: "CN=service,OU=service_accounts,DC=test,DC=com"
            password: "Pass"
            #rolebase: "OU=UA,DC=test,DC=com"
            rolebase: "OU=groups,DC=test,DC=com"
            # Filter to search for roles (currently in the whole subtree beneath rolebase)
            # {0} is substituted with the DN of the user
            # {1} is substituted with the username
            # {2} is substituted with an attribute value from user's directory entry, of the authenticated user. Use userroleattribute to specify the name of the attribute
            rolesearch: "(uniqueMember={0})"
            #rolesearch: "(member={2})"
            # Specify the name of the attribute which value should be substituted with {2} above
            userroleattribute: null
            # Roles as an attribute of the user entry
            #userrolename: disabled
            userrolename: "memberOf"
            # The attribute in a role entry containing the name of that role, Default is "name".
            # Can also be "dn" to use the full DN as rolename.
            rolename: "CN"
            # Resolve nested roles transitive (roles which are members of other roles and so on ...)
            resolve_nested_roles: "true"
            userbase: 'OU=groups,DC=test,DC=com'
            #userbase: "OU=UA,DC=test,DC=com"
            # Filter to search for users (currently in the whole subtree beneath userbase)
            # {0} is substituted with the username
            #usersearch: "(cn={0})"
            usersearch: "(uid={0})"
            # Skip users matching a user name, a wildcard or a regex pattern
            #skip_users:
            #  - 'cn=Michael Jackson,ou*people,o=TEST'
            #  - '/\S*/'
      roles_from_another_ldap:
        enabled: false
        authorization_backend:
          type: ldap # NOT FREE FOR COMMERCIAL USE
          #config goes here ...

Edit sg_roles_mapping.yml

sg_ad_admin:
  readonly: true
  backendroles:
    - test

Edit sg_roles.yml

sg_ad_admin:
  readonly: true
  cluster:
    - UNLIMITED
  indices:
    '*':
      '*':
        - UNLIMITED
  tenants:
    admin_tenant: RW

Restart the Elasticsearch service and run the sgadmin tool:

systemctl restart elasticsearch
bash /usr/share/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -icl -key /etc/elasticsearch/ssl/admin.key -cert /etc/elasticsearch/ssl/admin.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv -h vm1.test.com

You should now be able to log in to Kibana using Active Directory credentials.
