ELK Docker deploy

Background

It's ops time again. Lately I've had to set up quite a few things: today's protagonist ELK, plus some other infrastructure like K8s, SkyWalking, and Prometheus.

Every time I install these components the versions are much newer than before and the installation procedure has changed quite a bit, so there are always new pitfalls to step into. Let's get started.


Here ELK is deployed with a mix of Docker and a binary install, because the Logstash Docker image seems to have some issues and Logstash itself has plenty of rough edges; later on I'll look into a more elegant way of shipping logs.

  • ElasticSearch stores the logs
  • Logstash ships the logs to ElasticSearch
  • Kibana is used to view and analyze the logs

ElasticSearch

As of the current 8.1 release, ES uses HTTPS by default for connections, which affects how the other components connect to it later.


# Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.1.2

# Create the elastic network
docker network create --subnet 172.50.0.0/24 --gateway 172.50.0.1 elastic

# Raise vm.max_map_count if it is below 262144 (otherwise ES may fail its bootstrap checks)
sysctl -w vm.max_map_count=262144

# Start the ES container
docker run --name es01 --net elastic --ip 172.50.0.10 -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.1.2
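
The vm.max_map_count change above only lasts until reboot; a minimal sketch for persisting it, assuming a distro that reads /etc/sysctl.d at boot (file name is arbitrary):

# Persist vm.max_map_count across reboots
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
# Reload all sysctl configuration files
sudo sysctl --system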

The username and password, the CA fingerprint, and the Kibana enrollment info are all printed in the startup log; make sure to save them.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.

->  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  emqktAmZ*JLAfXOWsdx8

->  HTTP CA certificate SHA-256 fingerprint:
  632c098b94668439a5f0ddb6*****************************************a7b353e

->  Configure Kibana to use this cluster:
* Run Kibana and click the configuration link in the terminal when Kibana starts.
* Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEuMiIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiNjMyYzA5OGI*****************************************UzZSIsImtleSI6InFqcy02WDhCQTZWWjlsWkZjMVpxOkNORGJnemg3U2RLNk9oN095SGEtVHcifQ==

-> Configure other nodes to join this cluster:
* Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEuMiIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiNjMyY*****************************************GJhMmQ1MGFjOGZhNzE3YzI2MGNjYTdiMzUzZSIsImtleSI6InFEcy02WDhCQTZWWjlsWkZjMVpHOnN2NjltTEJaUzh5dC01dWc2cDdIdkEifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.1.2`
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
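
If the output above was lost before you saved it, the credentials can be regenerated from inside the container; a sketch, assuming the container is still named es01:

# Reset the password of the elastic user
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Issue a fresh Kibana enrollment token (the one in the log expires after 30 minutes)
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana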

Once it has started, exit the container (with Ctrl+C, for example), then start it again with docker start es01.

Next, grab the CA certificate to prepare for the connections that follow.


# Locate the CA certificate inside the container
docker exec -it es01 /bin/bash -c "find /usr/share/elasticsearch -name http_ca.crt"

# Copy the CA out of the container
docker cp es01:/usr/share/elasticsearch/config/tls_auto_config_<timestamp>/http_ca.crt .

# curl test
curl --cacert http_ca.crt -u elastic https://localhost:9200

Kibana

Kibana's setup is fairly simple; everything is automated. One thing to note is the account and password: they are shared with ES, so keep the password from the previous step.

docker pull docker.elastic.co/kibana/kibana:8.1.2

docker run --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.1.2
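
Kibana prints a configuration link of the form http://0.0.0.0:5601/?code=xxxxxx in its own startup log; open it, paste the enrollment token from the ES startup log, then log in as elastic with the password saved earlier. If the six-digit verification code has scrolled away, it can be regenerated; a sketch, assuming the container is named kibana:

# Re-print Kibana's verification code
docker exec -it kibana /usr/share/kibana/bin/kibana-verification-code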

Logstash

Logstash needs an application feeding it; here is an example of a Spring Boot application sending logs to Logstash via logback.

First create a config and put it in /opt/logstash/pipeline

input {
    # Listen for JSON lines from the application over TCP on port 9000
    tcp {
        port => 9000
        mode => "server"
        codec => json_lines
    }
}
output {
  elasticsearch {
    # ES 8 serves HTTPS by default, so use https and trust the copied CA
    hosts => ["https://es01:9200"]
    index => "app-dev-%{+YYYY.MM.dd}"
    user => "elastic"
    # The elastic user's password from the ES startup log (placeholder value here)
    password => "elastic"
    cacert => "/usr/share/logstash/pipeline/http_ca.crt"
  }
}

Then start the container with Docker; note that the location and the permissions of the CA certificate need to be adjusted (a sketch for the permissions follows the run command below).
(In theory this step should work, but in my case the container simply would not start.)

docker pull logstash:8.1.2

docker run --rm -it \
--name logstash \
--net elastic \
-p 9000:9000 \
-v /opt/logstash/pipeline/:/usr/share/logstash/pipeline/ \
-v /opt/logstash/http_ca.crt:/usr/share/logstash/pipeline/http_ca.crt \
logstash:8.1.2
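
The permission tweak mentioned above boils down to making the mounted CA readable by the user Logstash runs as inside the container; a sketch, assuming the official image runs Logstash as uid/gid 1000:

# Let the in-container logstash user read the CA copied to the host
sudo chown 1000:1000 /opt/logstash/http_ca.crt
sudo chmod 644 /opt/logstash/http_ca.crt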

Second approach: download the binary package and run Logstash outside a container. Note that when Logstash runs directly on the host, the es01 hostname and the in-container cacert path in the pipeline config above need adjusting (for example to https://localhost:9200 and /opt/logstash/http_ca.crt).

wget https://artifacts.elastic.co/downloads/logstash/logstash-8.1.2-linux-x86_64.tar.gz

tar xf logstash-8.1.2-linux-x86_64.tar.gz

cd logstash-8.1.2

vim config/pipelines.yml

Edit the config file

- pipeline.id: node1-piplines
  # queue.type: persisted
  path.config: "/opt/logstash/pipeline/*.conf"

Start Logstash

bin/logstash
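
To verify the setup, the pipeline file can be syntax-checked first and a test event pushed through the TCP input afterwards; a sketch, assuming the pipeline lives at /opt/logstash/pipeline/app.conf (hypothetical file name) and Logstash is listening on port 9000 as configured above:

# Syntax-check a single pipeline file and exit
bin/logstash -f /opt/logstash/pipeline/app.conf --config.test_and_exit

# With Logstash running, send one JSON line into the tcp/json_lines input
echo '{"message":"hello from nc","application":"app","env":"dev"}' | nc localhost 9000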

Spring Boot logback config (logback.xml)
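
The LogstashTcpSocketAppender and LogstashEncoder used below come from the logstash-logback-encoder library, so the project needs that dependency; Maven coordinates as a sketch, with the version being an assumption (check the latest release):

<!-- logstash-logback-encoder provides LogstashTcpSocketAppender and LogstashEncoder -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- assumption: use a current release -->
    <version>7.0.1</version>
</dependency>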

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">

    <!-- Pull configuration values from Spring -->
    <springProperty scope="context" name="app_name" source="spring.application.name" defaultValue="icc-default"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <!-- Console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>your_ip_and_port</destination>
        <!-- An encoder must be configured; several implementations are available -->
        <encoder charset="UTF-8"
                 class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"application":"${app_name}", "env": "dev"}</customFields>
        </encoder>
    </appender>


    <!-- Log output level -->
    <root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="LOGSTASH" />
    </root>

</configuration>

GUI

[Screenshots missing: Cluster overview, Discover, Dashboard]
