Building ELK on CentOS 7
-
Okay, after much work, we finally have a working ELK install process for CentOS 7. It took some effort thanks to all of the configuration files that need to be created or modified. This is a long one; hopefully it will be useful.
Here is a basic VM being created on a Scale HC3. You are going to want to start with at least two vCPU and at least four GB of RAM; I'd recommend at least six, and eight is a good starting point if you have the resources and will use this for more than a lab. Half a terabyte is a good starting point for disk space. I heavily recommend using XFS.
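If you want a quick sanity check that a VM actually meets those minimums before starting, something like this works (a sketch; the thresholds are just the numbers from this post):

```shell
#!/bin/bash
# Pre-flight check against the sizing above: 2 vCPU / 4 GB RAM minimum.
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))
echo "vCPUs: ${cpus} (want at least 2)"
echo "RAM:   ${mem_gb} GB (want at least 4; 6-8 recommended)"
```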
#!/bin/bash
cd /tmp
yum -y install wget firewalld epel-release
yum -y install nginx httpd-tools unzip
systemctl start firewalld
systemctl enable firewalld
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
yum -y install jdk-8u65-linux-x64.rpm
rm jdk-8u65-linux-x64.rpm
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
yum -y install elasticsearch
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.old
echo 'network.host: localhost' > /etc/elasticsearch/elasticsearch.yml
systemctl start elasticsearch
systemctl enable elasticsearch
cat > /etc/yum.repos.d/kibana.repo <<EOF
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
yum -y install kibana
mv /opt/kibana/config/kibana.yml /opt/kibana/config/kibana.yml.old
echo 'server.host: "localhost"' > /opt/kibana/config/kibana.yml
systemctl start kibana
systemctl enable kibana.service
htpasswd -c /etc/nginx/htpasswd.users kibanauser
setsebool -P httpd_can_network_connect 1
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.old
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                    '\$status \$body_bytes_sent "\$http_referer" '
                    '"\$http_user_agent" "\$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
}
EOF
cat > /etc/nginx/conf.d/kibana.conf <<EOF
server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
EOF
systemctl start nginx
systemctl enable nginx
systemctl start kibana
systemctl restart nginx
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
cat > /etc/yum.repos.d/logstash.repo <<EOF
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF
yum -y install logstash
# See below for certificate file generation
# cd /etc/pki/tls/
# openssl req -subj '/CN=elk.lab.ntg.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
cat > /etc/logstash/conf.d/02-beats-input.conf <<EOF
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF
cat > /etc/logstash/conf.d/10-syslog-filter.conf <<EOF
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF
cat > /etc/logstash/conf.d/30-elasticsearch-output.conf <<EOF
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
EOF
service logstash configtest
systemctl restart logstash
systemctl enable logstash
cd /tmp
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
unzip beats-dashboards-*.zip
cd beats-dashboards-1.1.0
./load.sh
cd /tmp
curl -O https://raw.githubusercontent.com/elastic/filebeat/master/etc/filebeat.template.json
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat.template.json
firewall-cmd --zone=public --add-port=5044/tcp --permanent
firewall-cmd --reload
systemctl restart logstash
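Once the run finishes, a quick smoke test helps confirm each layer came up. This is my own sketch, not part of the original script; it assumes the default ports configured above (Elasticsearch 9200, Kibana 5601). It only writes the checks to a file and syntax-checks them here, since the curls only mean anything on the ELK host itself:

```shell
#!/bin/bash
# Post-install smoke test (run the generated file on the ELK host).
cat > /tmp/elk-smoke.sh <<'EOS'
# Elasticsearch answers on 9200 with cluster info:
curl -s http://localhost:9200/ | grep -q cluster_name && echo "elasticsearch: up"
# Kibana 4.x exposes a status page on 5601:
echo "kibana: HTTP $(curl -s -o /dev/null -w '%{http_code}' http://localhost:5601/status)"
# All four services should report active:
systemctl is-active elasticsearch kibana nginx logstash
EOS
bash -n /tmp/elk-smoke.sh && echo "smoke test script parses"
```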
You will likely want to generate a server-side certificate for use with Logstash. This is not necessary depending on how you intend to use ELK, but for most common usages today, you will want to include this step:
cd /etc/pki/tls/
openssl req -subj '/CN=your.elk.fqdn.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
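If you want to confirm what that command produced, the pair can be inspected with openssl. The sketch below is safe to run anywhere, since it generates a throwaway copy in a temp directory rather than touching /etc/pki:

```shell
#!/bin/bash
# Generate a throwaway self-signed pair like the one above, then inspect it.
workdir=$(mktemp -d)
openssl req -subj '/CN=your.elk.fqdn.com/' -x509 -days 3650 -batch -nodes \
    -newkey rsa:2048 -keyout "$workdir/logstash-forwarder.key" \
    -out "$workdir/logstash-forwarder.crt" 2>/dev/null
# Show the subject CN and validity window of the certificate:
openssl x509 -in "$workdir/logstash-forwarder.crt" -noout -subject -dates
rm -rf "$workdir"
```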
This will generate the logstash-forwarder.crt file that we will see in another post.
-
I have D&D tonight, but will be on this tomorrow.
-
@JaredBusch said:
I have D&D tonight, but will be on this tomorrow.
Awesome. Let me know if you run into any problems. I tried hard to make this as scriptable as possible. I "think" that you can pop this all into a script and just run it, but there are so many moving parts that I'm wary to present it that way. This was stepped through on a vanilla build. One of the lines is just to verify configuration of the Logstash files and doesn't actually do anything. I also tried to compress the package installs into two lines at the beginning as much as possible.
SELinux is addressed in there; no need to disable SELinux or any crap like that. This actually configures it properly (I hope.)
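For anyone who wants to verify the SELinux side rather than take it on faith, the boolean the script flips can be checked like this (a sketch; the guards just let it degrade gracefully anywhere other than the CentOS host):

```shell
#!/bin/bash
# Check the SELinux boolean that lets nginx proxy to Kibana on 5601.
if command -v getsebool >/dev/null 2>&1; then
    getsebool httpd_can_network_connect || echo "SELinux may be disabled here"
else
    echo "getsebool not found; run this on the CentOS host"
fi
```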
-
@scottalanmiller said:
@JaredBusch said:
I have D&D tonight, but will be on this tomorrow.
Awesome. Let me know if you run into any problems. I tried hard to make this as scriptable as possible. I "think" that you can pop this all into a script and just run it, but there are so many moving parts that I'm wary to present it that way. This was stepped through on a vanilla build. One of the lines is just to verify configuration of the Logstash files and doesn't actually do anything. I also tried to compress the package installs into two lines at the beginning as much as possible.
SELinux is addressed in there, no need to disable SELinux or any crap like that This actually configures that properly (I hope.)
This is definitely one of the more time consuming installs I've done. I need to work on an Ansible playbook for it. I install it so infrequently that it might not be worth it.
-
Eesh, I'm in over my head with this one. Might give it a crack at home but my goodness...
-
Nice! I'll try to give this a whirl at some point in the next couple of days.
Thanks!
-
@scottalanmiller said:
Half a terabyte is a good starting point for disk space.
So much for me trying it - I might be lucky if I have 100 GB available for this.
-
@Dashrender said:
@scottalanmiller said:
Half a terabyte is a good starting point for disk space.
So much for me trying it - I might be lucky if I have 100 GB available for this.
You can do that to see what it looks like. 20GB will work for a very tiny test workload. But very tiny.
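For rough sizing, the math is simple enough to sketch out; every number below is hypothetical, just to show the shape of the estimate:

```shell
#!/bin/sh
# Back-of-the-envelope retention estimate (illustrative numbers only).
disk_gb=100        # space you can give Elasticsearch
daily_gb=2         # raw log volume per day
overhead=2         # rough multiplier for index overhead and replicas
days=$(( disk_gb / (daily_gb * overhead) ))
echo "roughly ${days} days of retention"   # -> roughly 25 days of retention
```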
-
Just tested on a fresh build and it works BEAUTIFULLY. I put it into a script and ran it instead of going line by line, and it worked on the first try, no problems. It stops in the middle and asks for a password; that could be moved to the end or something, but it works just fine and isn't so slow that you'd want to walk away. So I added a BASH script header. If you want, just copy/paste into a text file and run it. Boom, done. Working ELK in a minute.
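One small refinement if you do run it as a single script (my suggestion, not part of the original): a stricter header makes bash stop at the first failed command instead of plowing through the remaining hundred lines:

```shell
#!/bin/bash
# Fail-fast header: abort on any error, unset variable, or failed pipe stage.
set -euo pipefail
echo "strict mode enabled"
```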
-
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
On a minimal install left to automatic partitioning, if you use a larger drive, it will create a separate partition for all the space after 50GB.
This is highly annoying because I created a 127GB drive (the default in Hyper-V) and now 50GB is separate from all the rest.
-
like this
[root@elk ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos_elk-root   50G  855M   50G   2% /
devtmpfs                     906M     0  906M   0% /dev
tmpfs                        916M     0  916M   0% /dev/shm
tmpfs                        916M  8.3M  907M   1% /run
tmpfs                        916M     0  916M   0% /sys/fs/cgroup
/dev/sda2                    494M   98M  396M  20% /boot
/dev/sda1                    200M  9.5M  191M   5% /boot/efi
/dev/mapper/centos_elk-home   75G   33M   75G   1% /home
tmpfs                        184M     0  184M   0% /run/user/0
[root@elk ~]#
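If the VM is fresh and /home is empty, that space can be folded back into root. This is a destructive sketch based on the VG/LV names in the df output above (centos_elk); it only writes the script to a file and syntax-checks it here, so review it carefully before actually running it as root on the host:

```shell
#!/bin/bash
# DESTRUCTIVE sketch for a fresh VM only: remove the auto-created /home LV
# and grow the root LV/filesystem into the freed space.
cat > /tmp/reclaim-home.sh <<'EOS'
umount /home
lvremove -y /dev/centos_elk/home
sed -i '\,/home,d' /etc/fstab
lvextend -l +100%FREE /dev/centos_elk/root
xfs_growfs /
EOS
bash -n /tmp/reclaim-home.sh && echo "reclaim script parses"
```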
-
You should at least tell the user that you are asking for the kibana password.
htpasswd -c /etc/nginx/htpasswd.users kibanauser
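One way to keep the run fully non-interactive (my workaround, not part of the original script) is to generate the htpasswd entry yourself; the password below is a placeholder:

```shell
#!/bin/bash
# Build the basic-auth file without a prompt. In the real script this would
# target /etc/nginx/htpasswd.users; /tmp is used here so it runs anywhere.
kibana_pass='ChangeMe123'   # placeholder; pick a real password
printf 'kibanauser:%s\n' "$(openssl passwd -apr1 "$kibana_pass")" > /tmp/htpasswd.users
grep -q '^kibanauser:\$apr1\$' /tmp/htpasswd.users && echo "entry written"
```

Alternatively, htpasswd itself can take the password on the command line with -b, e.g. `htpasswd -b -c /etc/nginx/htpasswd.users kibanauser "$kibana_pass"`.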
-
I had this error.
-
Looks like maybe you forgot to start firewalld?
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   814  100   814    0     0   1370      0 --:--:-- --:--:-- --:--:--  1372
{
  "acknowledged" : true
}
FirewallD is not running
FirewallD is not running
[root@elk ~]# yum install firewalld
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.oss.ou.edu
 * epel: fedora-epel.mirror.lstn.net
 * extras: centos.mirrors.wvstateu.edu
 * updates: centos.mirrors.wvstateu.edu
Package firewalld-0.3.9-14.el7.noarch already installed and latest version
Nothing to do
[root@elk ~]# systemctl start firewalld
[root@elk ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-02-23 23:55:11 CST; 14s ago
 Main PID: 11482 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─11482 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Feb 23 23:55:09 elk systemd[1]: Starting firewalld - dynamic firewall daemon...
Feb 23 23:55:11 elk systemd[1]: Started firewalld - dynamic firewall daemon.
[root@elk ~]#
-
Yeah, you set it to install, but you never start or enable it.
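A guard like this (a sketch; it degrades to a message anywhere systemd isn't available) makes the script safe to re-run regardless of firewalld's state:

```shell
#!/bin/bash
# Report firewalld state; print the fix rather than applying it blindly.
if systemctl is-active --quiet firewalld 2>/dev/null; then
    echo "firewalld is running"
else
    echo "firewalld not running; fix: systemctl enable firewalld && systemctl start firewalld"
fi
```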
-
Line 109 needs to be commented out.
add this right after the yum install to fix the firewall.
yum -y install wget firewalld epel-release
systemctl enable firewalld
systemctl start firewalld
yum -y install nginx httpd-tools unzip
I would just remove line 109; it serves no purpose.
Edit: Some dumbass forgot to snapshot the image so he could repeat the install...
-
Why lock out with .htaccess? There is no hint what is needed to log in here.
I hate this level of authentication.
Using kibanauser and the password I chose brings up the Kibana setup.
-
@JaredBusch said:
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary mount, mount it to data, and make a symlink for the Elasticsearch database directory into there.
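That layout can be sketched in a few commands. ASSUMPTIONS that are mine, not from the post: the second disk is /dev/sdb, the VG is named vg_data, and the mountpoint is /data. The block only writes the script to a file and syntax-checks it; review and adjust it before running as root:

```shell
#!/bin/bash
# Sketch: LVM + XFS data disk with a symlinked Elasticsearch data directory.
cat > /tmp/elk-data-disk.sh <<'EOS'
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -l 100%FREE -n elk vg_data
mkfs.xfs /dev/vg_data/elk
mkdir -p /data
echo '/dev/vg_data/elk /data xfs defaults 0 0' >> /etc/fstab
mount /data
systemctl stop elasticsearch
mv /var/lib/elasticsearch /data/elasticsearch
ln -s /data/elasticsearch /var/lib/elasticsearch
chown -h elasticsearch:elasticsearch /var/lib/elasticsearch
systemctl start elasticsearch
EOS
bash -n /tmp/elk-data-disk.sh && echo "data-disk script parses"
```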
-
@JaredBusch said:
Why lock out with .htaccess? There is no hint what is needed to log in here.
It's how Digital Ocean does it as well. Kibana doesn't have a built-in authentication scheme that I know of. HTAccess is a very simple way for someone to just get started.
-
And simple to remove when you want to move to something else.