Microservices Logging using EFK

A microservices architecture brings its own set of challenges for cross-cutting requirements like logging. A monolith is deployed as a single application, so implementing logging is straightforward. With microservices, there can be any number of communicating pieces, built on different technology stacks, serving different functions and running on different hosting platforms.

We need a unified log format so that logs can be collected into central storage such as Elasticsearch and queried in a structured manner. The logging platform should also scale in storage, be cost efficient and perform well, considering the volume of logs a highly scaled application environment can generate.

When using the Elastic stack as the log storage and aggregation platform, there are two options for the log collector: Logstash and Fluentd.

For a detailed comparison of the two and the use cases where each fits, check the article below:

https://www.techmanyu.com/logstash-fluentd-which-one-is-better/

For our microservices setup, we are going to use Fluentd as the log collector, Elasticsearch as storage and Kibana as the visualizer.

Architecture

[EFK architecture diagram: services → Fluentd → Elasticsearch → Kibana]

Download or clone the two Git projects below.

https://github.com/Abmun/techmanyu-logging-service – contains a docker-compose file to build a test service that logs in JSON format. This service will push its logs to the EFK stack.

https://github.com/Abmun/microservice-EFK-logging – contains a docker-compose file to build the Fluentd, Elasticsearch and Kibana stack.

  • Fluentd Configuration

The fluentd.conf file in the project contains the configuration required to take the log stream input from the service, filter the logs by a key name, parse the JSON-structured logs, set the index pattern and output the logs to Elasticsearch.

# Accept log events forwarded by the Docker fluentd logging driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Parse the "log" field of each event as JSON; reserve_data keeps the original fields
<filter *.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>

# Copy every event to Elasticsearch (indices named applogs-YYYYMMDD) and to stdout
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix applogs
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name app_log
    tag_key @log_name
    flush_interval 1s
    user elastic
    password changeme
  </store>
  <store>
    @type stdout
  </store>
</match>

For more details on supported filters and configuration options, see the Fluentd documentation at https://docs.fluentd.org/.
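To make the filter concrete: Docker's fluentd logging driver wraps each stdout line in a record whose log field holds the raw line, and the parser filter above expands that JSON string into top-level fields. A rough before/after of a single event (the values are made up for illustration):

# Event as received from the Docker fluentd logging driver
{"container_id":"...","container_name":"/logging-service","source":"stdout","log":"{\"level\":\"INFO\",\"message\":\"hello\"}"}

# Event after the parser filter (reserve_data true keeps the original fields)
{"container_id":"...","container_name":"/logging-service","source":"stdout","log":"{\"level\":\"INFO\",\"message\":\"hello\"}","level":"INFO","message":"hello"}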

  • Build EFK stack on Docker

Run the docker-compose file from the microservice-EFK-logging project. Three containers will spin up, running Elasticsearch (v6.7.0), Fluentd (v1.4.2-2.0) and Kibana (v6.7.0).

docker-compose up -d --build
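For reference, a compose file for such a stack typically looks like the sketch below. The actual file in the microservice-EFK-logging repo may differ, so treat the service names, build context and volume paths as assumptions:

version: "3"
services:
  fluentd:
    build: ./fluentd                 # custom image with fluent-plugin-elasticsearch installed
    volumes:
      - ./fluentd/conf:/fluentd/etc  # mounts fluentd.conf into the container
    ports:
      - "24224:24224"                # forward input used by the Docker logging driver
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.0
    ports:
      - "5601:5601"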

All three services should now be up and running.

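A quick way to verify from the command line:

docker-compose ps   # all three services should show State "Up"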
  • Build logging service

Run the docker-compose file from the techmanyu-logging-service project. This spins up a container with the logging service running on port 8080. Using Docker's built-in fluentd logging driver, the logs are pushed directly from stdout to Fluentd, which is listening on port 24224.

mvn clean package   # compile and build the service

docker-compose up -d --build

Check the following section in the docker-compose file for the logging driver configuration:

    logging:
      driver: "fluentd"                    # use Docker's built-in fluentd logging driver
      options:
        fluentd-address: localhost:24224   # where Fluentd listens (forward input)
        tag: testlogging.log               # tag attached to every log event

Logging service – http://localhost:8080/Logging/test
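You can also hit the endpoint from the command line to generate a log entry:

curl http://localhost:8080/Logging/test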

  • Validate logs in Kibana

Open Kibana in a browser – http://localhost:5601 – and go to the Management tab. Add the index pattern applogs-* matching the indices we created in Elasticsearch (named applogs-[%Y%m%d]).
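If the index pattern does not show up, you can confirm directly against Elasticsearch that Fluentd has created the daily indices (credentials are the ones set in fluentd.conf):

curl -u elastic:changeme 'http://localhost:9200/_cat/indices/applogs-*?v'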


Hit the logging service in a separate tab using the URL provided earlier, then come back to Kibana and open the Discover tab. Validate that logs are flowing in and are parsed into the fields present in the JSON logs.

Sample log:

{"timeMillis":1569550010612,"thread":"http-apr-8080-exec-1","level":"INFO","loggerName":"CONSOLE_JSON_APPENDER","message":"This is a logging statement from techmanyu-logging-service","endOfBatch":false,"loggerFqcn":"org.apache.log4j.Category","threadId":15,"threadPriority":5}

Each field of the JSON log is indexed as a field in Elasticsearch, which makes it very easy to query in Kibana.
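For example, in the Discover search bar you could filter on the parsed fields with a Lucene query like this (values taken from the sample log above):

level:INFO AND thread:"http-apr-8080-exec-1"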


That’s it from this post. Let us know your views and ideas in the comments.
