rsyslog
This guide will walk you through setting up your Linux remote logging server to receive logs from a ctrlX OS device and save them to a file in your home folder using rsyslog.
Setup ctrlX Core
Use the Host address where the syslog server will be reachable.
Setup remote logging server with rsyslog
Host System Prerequisites
For this How To I used the package manager "apt". It is available on Debian derivatives like Ubuntu, Linux Mint and elementary OS. If you prefer another Linux distribution, use the appropriate package manager.
Step 1: Install rsyslog
First, ensure that rsyslog is installed on your notebook. Run the following commands to update your package list and install rsyslog:
sudo apt-get update
sudo apt-get install rsyslog
Step 2: Setup rsyslog
Open the rsyslog configuration file for editing:
sudo nano /etc/rsyslog.conf
To receive logs via TCP, add the following lines to the rsyslog configuration file. This enables TCP reception on port 514 (the default syslog port):
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
Add a rule to write the incoming logs from a specific ctrlX OS device (with IP 192.168.88.250) to a file in your home folder. Append this line to the end of the configuration file:
:fromhost-ip, isequal, "192.168.88.250" /home/yourusername/device_logs.log
Replace yourusername with your actual Linux username.
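If you prefer rsyslog's newer RainerScript syntax, an equivalent rule would look roughly like this (same placeholder IP and path as above):
if $fromhost-ip == '192.168.88.250' then {
    action(type="omfile" file="/home/yourusername/device_logs.log")
    stop
}
The stop statement is optional; it prevents the same messages from additionally being written to the default system log files.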
After adding the configuration, save the file and exit the editor.
Restart the rsyslog service to apply the changes:
sudo systemctl restart rsyslog
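You can verify that the service came back up without errors:
sudo systemctl status rsyslog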
Step 3: Allow Incoming Traffic on Port 514
Ensure your firewall allows traffic on port 514. Run the following command:
sudo ufw allow 514
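To confirm the rule was added (or to see whether the firewall is active at all), check the firewall status:
sudo ufw status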
Step 4: Monitor Logs in Real-Time
You can monitor the incoming logs in real-time by running the following command:
tail -f /home/yourusername/device_logs.log
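As a quick plausibility check, you can also send a test message to the TCP input from the server itself using the logger tool from util-linux. Note that such a local message will land in the default system log (e.g. /var/log/syslog) rather than in device_logs.log, because its source IP is 127.0.0.1 and not the device address:
logger --server 127.0.0.1 --port 514 --tcp "test message from logging server"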
Notes:
- This setup supports both RFC3164 and RFC5424 log formats. If you need to handle these differently or separate them, you might need to add more complex rules in the rsyslog configuration (see the sketch below).
- Replace any placeholder values (yourusername, 192.168.88.250, 192.168.88.252) with your actual device and user information.
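As a rough sketch of such a rule, the following untested RainerScript snippet would split the device logs into one file per format. It assumes that rsyslog's protocol-version property (0 for RFC3164, 1 for RFC5424) behaves as documented; IP and paths are placeholders as above:
if $fromhost-ip == '192.168.88.250' then {
    if $protocol-version == '1' then {
        # RFC5424 messages
        action(type="omfile" file="/home/yourusername/device_logs_rfc5424.log")
    } else {
        # RFC3164 (legacy) messages
        action(type="omfile" file="/home/yourusername/device_logs_rfc3164.log")
    }
    stop
}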
Elastic Stack
This guide will walk you through setting up your Linux remote logging server with Elastic Stack to receive logs from a ctrlX OS device and make them searchable in a user-friendly way.
Setup ctrlX CORE
Same as in the rsyslog setup, but this time use port 5000.
Step 1: Setup Elastic Stack via Docker compose
Install Docker and Docker Compose if you haven't already:
sudo apt-get update
sudo apt-get install docker.io docker-compose
Create a new directory for your ELK stack:
mkdir elk-stack
cd elk-stack
Copy the docker-compose.yml and logstash.conf files into the newly created folder.
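In case you do not have these files at hand, here is a minimal sketch of what they could look like, derived from the service description further below (single-node Elasticsearch with an esdata volume, Logstash listening on TCP port 5000, Kibana on port 5601). The image versions and the index name are assumptions; adjust them to your environment.
docker-compose.yml (sketch):
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9   # version is an assumption
    environment:
      - discovery.type=single-node              # single-node mode for simplicity
    volumes:
      - esdata:/usr/share/elasticsearch/data    # data persistence
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.9
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf   # custom pipeline
    ports:
      - "5000:5000"                             # syslog input from the ctrlX OS device
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.9
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"                             # web UI
    depends_on:
      - elasticsearch
volumes:
  esdata:
logstash.conf (sketch):
input {
  tcp {
    port => 5000                          # must match the port configured on the ctrlX CORE
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"    # matches the logstash-* index pattern used in Kibana
  }
}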
Start the Elastic Stack:
sudo docker-compose up -d
Wait a few minutes for all services to start up.
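You can check whether all three containers are running:
sudo docker-compose ps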
Step 2: View the logs
Access Kibana by opening a web browser and navigating to http://localhost:5601.
In Kibana, go to "Management" > "Stack Management" > "Index Patterns" > "Create index pattern".
Enter logstash-* as the index pattern and click "Next step".
Select @timestamp as the Time field and click "Create index pattern".
Go to "Discover" in the main menu to start exploring your logs.
Example: Search for scheduler
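For instance, typing a query such as message: scheduler into the Discover search bar narrows the view down to log lines containing "scheduler" (assuming the raw log text ends up in the default message field, which depends on your logstash.conf).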
To troubleshoot:
- Check if Logstash is receiving logs:
sudo docker-compose logs logstash
- Ensure Elasticsearch is running:
curl http://localhost:9200
- If you don't see data in Kibana, check Elasticsearch indices:
curl http://localhost:9200/_cat/indices
Remember to secure your ELK stack before exposing it to a network, as this setup doesn't include authentication.
Explanation of the services
Let's break down the role of each service in our ELK (Elasticsearch, Logstash, Kibana) stack for this log visualization use case:
- Elasticsearch:
- Purpose: Acts as the search and analytics engine.
- In this use case:
- Stores all the log data in an indexed format for quick searching.
- Provides a RESTful API for other components to store and retrieve data.
- Configuration notes:
- Running in single-node mode for simplicity.
- Has a volume attached (esdata) for data persistence.
- Logstash:
- Purpose: Data collection pipeline tool that ingests data from multiple sources, transforms it, and sends it to a "stash" like Elasticsearch.
- In this use case:
- Reads log data from one source: TCP port 5000 for receiving logs directly from remote devices.
- Processes and structures the log data (as defined in logstash.conf).
- Forwards the processed data to Elasticsearch.
- Configuration notes:
- Uses a custom configuration file (logstash.conf) to define its behavior.
- Has access to the host's log file through a volume mount.
- Kibana:
- Purpose: Provides a web interface for visualizing and exploring the data stored in Elasticsearch.
- In this use case:
- Offers a user-friendly interface to search through logs.
- Allows creation of custom dashboards and visualizations of log data.
- Provides tools for real-time log monitoring and analysis.
- Configuration notes:
- Exposes port 5601 for web access.
The flow of data in this setup is:
- Logs are sent directly to Logstash via TCP.
- Logstash picks up these logs and processes them according to the rules in logstash.conf.
- Processed logs are sent to Elasticsearch for indexing and storage.
- Kibana connects to Elasticsearch to retrieve and display this data.
This architecture allows for:
- Centralized log collection from multiple sources.
- Efficient searching and analysis of large volumes of log data.
- Real-time visualization and monitoring of log events.
- Creation of custom dashboards for specific monitoring needs.