
ELK is the acronym for Elasticsearch, Logstash and Kibana: a powerful stack that lets you collect, analyze and visualize your logs.
Let's start!
First, install Java:
sudo yum install -y java-1.8.0-openjdk-devel
After that, check the installed version with java -version.
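You should see output along these lines (the exact version and build numbers will differ depending on the OpenJDK build you get):
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)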
Go to the official Elasticsearch website to get the correct, up-to-date repository details. Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a file in /etc/yum.repos.d/ (for example elasticsearch.repo) and paste this content:
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
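If you prefer to create the file in one shot from the shell, a tee heredoc works (elasticsearch.repo is just a conventional name; any filename ending in .repo is fine):
sudo tee /etc/yum.repos.d/elasticsearch.repo > /dev/null <<'EOF'
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF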
Start the installation with:
yum install -y elasticsearch
Start, enable and verify the Elasticsearch service:
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl status elasticsearch
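Elasticsearch can take a few seconds to come up. A quick sanity check is to query the root endpoint, which returns a small JSON document with the node name, cluster name and version:
curl -X GET "localhost:9200"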
Check the cluster status:
# curl -X GET "localhost:9200/_cat/health?v"
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1557000685 20:11:25 elasticsearch green 1 1 0 0 0 0 0 0 - 100.0%
Check the node status:
# curl -X GET "localhost:9200/_cat/nodes?v"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 30 33 2 0.10 0.13 0.13 mdi * elk
Install Kibana
# yum install -y kibana
Edit the configuration file:
vim /etc/kibana/kibana.yml
Uncomment the following line:
server.port: 5601
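For this setup the defaults are what we want; the relevant lines in kibana.yml end up looking roughly like this (server.host stays on localhost because nginx will proxy to Kibana locally):
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]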
Start and enable Kibana:
systemctl start kibana
systemctl enable kibana
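Kibana can take a minute or so to come up; you can confirm it is listening on port 5601 with something like:
ss -ltnp | grep 5601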
Install Nginx
We'll use nginx as a reverse proxy in front of Kibana. Install the EPEL release, httpd-tools (which provides the htpasswd command we'll need later) and nginx:
yum install epel-release
yum install httpd-tools
yum install nginx
Create a dedicated nginx configuration for the ELK stack:
vim /etc/nginx/conf.d/elkstack.conf
Paste the following into the file:
server {
    listen 80;
    server_name elkstack;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Before starting the service we need to remove the default server block that ships with nginx. Edit /etc/nginx/nginx.conf, locate the server section and delete it entirely.
This is the portion of the file to delete:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Now it’s time to set the username and password to access our ELK stack server.
sudo htpasswd -c /etc/nginx/htpasswd.kibana admin
Enter the password you'll use to access the system.
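If you later need more users, run htpasswd again without -c (the -c flag creates the file and would overwrite the existing one), for example with a placeholder username:
sudo htpasswd /etc/nginx/htpasswd.kibana anotheruser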
If you have SELinux in enforcing mode, you need to allow httpd to make network connections:
sudo setsebool -P httpd_can_network_connect 1
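You can verify the current SELinux mode and that the boolean took effect with:
getenforce
getsebool httpd_can_network_connect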
You should do the same with your firewall (firewalld on CentOS) if you can't reach the server: open port 80.
# firewall-cmd --zone=public --add-port=80/tcp --permanent
success
# firewall-cmd --reload
success
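One step worth spelling out before the browser test: check the nginx configuration, then start and enable the service.
nginx -t
systemctl start nginx
systemctl enable nginx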
Visit your IP address with the browser and… it works!

You won't see anything right now because we don't have any data yet: we still have to complete the stack with Logstash.
Logstash is a data collector that parses incoming data and inserts it into Elasticsearch, where it can be visualized in Kibana. Install it:
yum install -y logstash
Start and enable Logstash:
systemctl start logstash
systemctl enable logstash
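Just to give an idea of the shape of a pipeline before the next episode: a Logstash configuration is a file under /etc/logstash/conf.d/ with an input, an optional filter and an output section. A minimal sketch, assuming logs shipped by a Beats agent on port 5044 (the filename and port here are just placeholders):
# /etc/logstash/conf.d/example.conf (hypothetical name)
input {
  beats {
    port => 5044            # listen for Filebeat/Beats agents
  }
}
filter {
  # grok, date, mutate... parsing goes here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # send parsed events to Elasticsearch
  }
}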
That's all for this "episode". In the next one we'll configure Logstash to collect and parse our data.