This article is simultaneously published on Sugar Candy·Tribe
As more servers come online, the mix of containers running on each one grows, and collecting their logs becomes a chore. I manage everything with Docker, and since I don't know Kubernetes well I never built a cluster, which makes log collection a bit awkward. Using the log commands built into Docker or Docker Compose means connecting to the original server, entering the service directory, and typing out a long command; after it runs, I still have to hunt for the relevant lines in a flood of output, and one moment of inattention lets new log lines scroll away the part I just found. Not elegant at all.

So I went looking for a way to collect runtime logs quickly and conveniently, and to query and organize them on demand. I had recently been studying Grafana and Loki and found the official Docker log plugin, so I decided to write another article documenting the configuration files and commands I used, in the hope of helping anyone facing a similar problem.
Please adjust the relevant configuration files to your own environment so that the sample values do not cause interference.
## Preparing Grafana + Loki

### Preparing the Joint Startup File
Loki can be understood as the server that actually collects the logs, and Grafana as the frontend that queries it. The two are independent but interconnected, and can be deployed within the same local network or linked separately over the public Internet. Since this is a dedicated, log-collection-only scenario, I start them together from a single `docker-compose` file.
Moreover, since this server is dedicated solely for log collection and does not need to share port 443 with other services, I also wrote the Caddy configuration together.
The `docker-compose.yml` file is as follows:
```yaml
version: '3.4'
services:
  loki:
    image: grafana/loki
    container_name: grafana-loki
    command: -config.file=/etc/loki/local-config.yml
    networks:
      - internal_network
    volumes:
      - ./config/loki.yml:/etc/loki/local-config.yml
      - ./data/loki:/tmp/loki
  grafana:
    depends_on:
      - grafana-db
      - loki
    image: grafana/grafana
    container_name: grafana
    networks:
      - internal_network
      - external_network
    volumes:
      - ./config/grafana.ini:/etc/grafana/grafana.ini
    restart: unless-stopped
  grafana-db:
    image: postgres:15-alpine
    restart: always
    container_name: grafana-db
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: grafana
      POSTGRES_PASSWORD: password
      POSTGRES_DB: grafana
      POSTGRES_INITDB_ARGS: "--encoding='UTF8' --lc-collate='C' --lc-ctype='C'"
    networks:
      - internal_network
  caddy:
    image: caddy
    restart: always
    ports:
      - "443:443"
    networks:
      - internal_network
      - external_network
    volumes:
      - ./config/Caddyfile:/etc/caddy/Caddyfile
      - ./ssl:/ssl:ro
      - ./data/caddy/data:/data
      - ./data/caddy/config:/config

networks:
  internal_network:
    internal: true
  external_network:
```
The only exposed port is Caddy's port 443, and forced HTTPS is enabled at the CDN to ensure that no requests are sent to port 80.
### Preparing Configuration for Each Service
All configuration files are placed in the `config` directory for easier management.
#### Grafana
Grafana's configuration file `grafana.ini` is as follows:
```ini
[database]
type = postgres
host = grafana-db:5432
name = grafana
user = grafana
password = password
```
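Incidentally, if Grafana is reached through a reverse proxy under a public domain, it can help to also declare that address in `grafana.ini` so redirects and generated links point at the right place. The `[server]` keys below exist in Grafana's configuration reference; the domain is illustrative and was not part of my original configuration:

```ini
; Optional sketch: tell Grafana its public address when it sits behind Caddy.
; Replace example.com with your own domain.
[server]
domain = example.com
root_url = https://example.com/
```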
#### Loki
Loki's configuration file `loki.yml` is as follows:
```yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9095

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

analytics:
  reporting_enabled: false

# See https://grafana.com/docs/loki/latest/configuration/
# S3 storage can be enabled, but it's not necessary for now
```
Since I don't know what `alertmanager_url` should point to, and a pure log collection service has no need for alerting for now, I kept the sample value unchanged.
It's important to note the `auth_enabled` option. It enables multi-tenancy: multiple organizations can share the same Loki service, distinguished by a request header. When it is on, every HTTP call must carry an `X-Scope-OrgID` header identifying the requesting organization, both when pushing and when pulling data. Since this deployment is for my own use only and no data is shared, the option is disabled.
Loki itself performs no HTTP authorization checks, so it is advisable to put the authorization check in the reverse proxy in front of it. Here it lives in the Caddy configuration, so only requests from public traffic have to present credentials, while Grafana connects over the internal network without authentication.
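To illustrate both points, here is a sketch of what a raw push request would look like if `auth_enabled` were turned on. The tenant name `my-org`, the `user:password` credentials, and the domain are all placeholders, not values from this deployment:

```shell
# Build a push payload; Loki's push API expects nanosecond timestamps as strings.
TS="$(date +%s%N)"
PAYLOAD='{"streams":[{"stream":{"job":"demo"},"values":[["'"$TS"'","hello loki"]]}]}'
echo "$PAYLOAD"
# The actual request would carry BasicAuth for the reverse proxy plus the
# tenant header for Loki (the header is only required when auth_enabled is true):
#   curl -u user:password \
#        -H "Content-Type: application/json" \
#        -H "X-Scope-OrgID: my-org" \
#        -X POST https://loki.example.com/loki/api/v1/push \
#        -d "$PAYLOAD"
```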
#### Caddy
Here I use the origin certificate issued by the CDN provider, so Caddy does not need to request one itself. The configuration file is as follows:
```caddyfile
example.com {
    tls /ssl/example.com/cert.pem /ssl/example.com/key.pem
    reverse_proxy grafana:3000
}
loki.example.com {
    tls /ssl/example.com/cert.pem /ssl/example.com/key.pem
    reverse_proxy loki:3100
    basicauth * {
        user $2a$14$QZIF1JLM9lFPjAl6PQzBDeANmB4Ssufc0qUnP9nI7QYnv.QevEzyi
    }
}
loki-grpc.example.com {
    tls /ssl/example.com/cert.pem /ssl/example.com/key.pem
    reverse_proxy loki:9095
    basicauth * {
        user $2a$14$QZIF1JLM9lFPjAl6PQzBDeANmB4Ssufc0qUnP9nI7QYnv.QevEzyi
    }
}
```
The authorization part uses BasicAuth, i.e. username + password; the password is not stored in plain text but is pre-hashed with Caddy's `hash-password` command to keep it as safe as possible:
```shell
/srv # caddy hash-password
Enter password:
Confirm password:
$2a$14$QZIF1JLM9lFPjAl6PQzBDeANmB4Ssufc0qUnP9nI7QYnv.QevEzyi
/srv #
```
After completing the configuration, you also need to place the corresponding SSL certificate in the specified location to avoid errors caused by the absence of certificate files.
Once all configurations are complete, you can bring everything up with `docker-compose up -d`.
### Managing Permissions for the Loki Data Directory
Loki does not run as root inside its container, so the root-owned directory that Docker creates will be inaccessible to it. Manually hand the data directory over to Loki's user and group (UID/GID 10001):
```shell
chown -R 10001:10001 ./data/loki/
```
After modifying, restart the Loki container.
### Associating Loki as a Data Source for Grafana
Open the deployed Grafana, log in with the default username and password `admin`, and after changing the password you can start adding data sources.
If you followed the `docker-compose` configuration above, you only need to select Loki, enter `http://loki:3100` in the HTTP URL field, then save and test. At this point Loki will report an error saying no labels could be found, because there are no logs yet; don't worry, it resolves itself once logs start arriving.
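As an alternative to clicking through the UI, Grafana can also pick the data source up from a provisioning file. The sketch below uses Grafana's datasource provisioning format and assumes the file is mounted into the container at `/etc/grafana/provisioning/datasources/loki.yml`:

```yaml
# Provisioning sketch; the name is arbitrary, and the URL matches the
# internal address used above.
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```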
## Preparing the Servers
Since a Docker environment is already in place, I won't elaborate on those setup commands and will start directly with installing the Loki log plugin.
This section follows the Docker Driver Client tutorial from the Loki documentation; please note that its content may have been updated since.
### Installing the loki-docker-driver Plugin
To send data to Loki, a plugin is needed that converts Docker logs into entries with detailed label information and ships them to the deployed server through a Loki-compatible interface. Install it with the following command:
```shell
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
```
After installation, you can list the installed Docker plugins with `docker plugin ls`.
If the plugin needs to be upgraded later, you can follow the commands provided in the original link:
```shell
docker plugin disable loki --force
docker plugin upgrade loki grafana/loki-docker-driver:latest --grant-all-permissions
docker plugin enable loki
systemctl restart docker
```
Then the plugin itself can be configured. Since I want to collect all Docker logs on the host and don't need to distinguish between organizations, I configure the `/etc/docker/daemon.json` file with the following content:
```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://user:[email protected]/loki/api/v1/push"
  }
}
```
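The driver accepts further `log-opts` beyond `loki-url`. The option names below are among those documented for the loki-docker-driver plugin (retries, batching, and extra static labels); the values are illustrative, so verify them against the current plugin documentation before relying on them:

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://user:[email protected]/loki/api/v1/push",
    "loki-retries": "5",
    "loki-batch-size": "400",
    "loki-external-labels": "job=docker,environment=production"
  }
}
```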
After configuration, you need to restart the Docker process for the changes to take effect.
It is important to note that this does not take effect in real time for all containers: the logging driver of an already-created container is fixed, so the setting only applies to newly created containers. If you urgently need log collection, you can try `docker-compose up -d --force-recreate` to recreate all containers.
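If recreating everything is too disruptive, the driver can also be enabled selectively per service inside a project's `docker-compose.yml` instead of globally in `daemon.json`, using Compose's standard `logging` keys. The service name and image below are illustrative:

```yaml
services:
  app:
    image: nginx:alpine
    logging:
      driver: loki
      options:
        loki-url: "https://user:[email protected]/loki/api/v1/push"
```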
## Querying in Grafana
Once the configuration is complete and the newly created containers start working and producing logs, you should be able to query them in Grafana. Since this is for personal use without detailed visualization needs, I didn't configure a Dashboard but went straight to the Explore feature for interactive queries.
When logs appear, you can filter by labels to identify their source. The plugin provides six labels: `host`, `compose_project`, `compose_service`, `container_name`, `source`, and `filename`, among which:

- `host` is the hostname of the machine that generated the logs;
- `compose_project`, `compose_service`, and `container_name` can be used to locate a specific container;
- `source` indicates whether a line came from standard output (`stdout`) or standard error (`stderr`);
- `filename` is the original log file on disk (usually not important).
Using these labels together with Grafana's time range control, you can quickly pull up the running logs of a specific container. No more worrying about scattered and numerous logs!
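As a concrete illustration, queries in Explore are written in LogQL. The two sketches below use illustrative label values: the first simply filters by the labels above, and the second computes an error rate per compose service over five-minute windows:

```logql
{container_name="grafana", source="stderr"}

sum by (compose_service) (rate({compose_project="myblog"} |= "error" [5m]))
```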