Trading system 001 --- run Kubernetes on Ubuntu 20.04


Introduction

This is the first article in a series about building a private trading platform. We will build a two-tier system with a microservice architecture, using Kubernetes and AWS. The first step of the project is to create a development environment. In this article I will show you how to:

  • set up k8s on Ubuntu 20.04
  • send monitoring metrics to Elastic Cloud
  • ship logs from all pods / containers inside the k8s cluster to Elastic Cloud
  • ship logs from outside the k8s cluster to Elastic Cloud

[Figure: system observability overview]

Environment preparation

Tools used in this setup

  • minikube — v1.28.0
  • kubectl — client v1.26.0, server v1.25.3
  • helm — v3.10.3
  • kvm — v4.2.1
  • Elastic Cloud — hosted Elasticsearch / Kibana deployment

Tools installation

Tools description

  • kvm — hypervisor that provides the VM minikube runs the k8s cluster in
  • kubectl — CLI for controlling the k8s cluster
  • minikube — runs a local k8s cluster on Ubuntu for development
  • helm — k8s package manager
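
For reference, here is roughly how these tools can be installed on Ubuntu 20.04. This is a sketch pinned to the versions listed above; check each project's official install guide for your platform:

# kubectl client v1.26.0
$ curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl
$ sudo install kubectl /usr/local/bin/kubectl && rm kubectl

# helm v3.10.3 via the official installer script
$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -s -- --version v3.10.3

# kvm / libvirt packages required by the minikube kvm2 driver
$ sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
$ sudo adduser $USER libvirt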

Start k8s stack

After you install all the tools successfully, it is time to install your stack.

Start minikube

minikube runs a VM that simulates a k8s cluster on your PC. Install and start it as follows:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

$ minikube start --memory 16384 --cpus 8 --disk-size='40000mb' --driver=kvm2
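
Once minikube is up, run a quick sanity check; the single node should report Ready:

$ minikube status
$ kubectl get nodes -o wide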

Prepare a policy on Elastic Cloud

A policy on Elastic Cloud defines what data should be collected. You can attach all the integrations you need to the policy. After that, you can add an Elastic Agent to the policy. The Elastic Agent will run on all your k8s nodes as a DaemonSet.

Add agent

Finally, copy the YAML manifest generated by Elastic Cloud and install it through kubectl. Once the agent is running, you will receive data and can see the details on the dashboard. Rename the file to elastic-agent-managed-kubernetes.yaml; we will use it later.
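
After you apply the manifest (the exact command appears in the next section), you can confirm the agent is rolled out on every node. A minimal check, assuming the default manifest, which installs a DaemonSet named elastic-agent into the kube-system namespace:

$ kubectl -n kube-system rollout status daemonset/elastic-agent
$ kubectl -n kube-system get pods -l app=elastic-agent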

Install all charts (a chart is a package in k8s)

First, prepare a custom values file for the Logstash chart: it disables persistence, sends the chart's own monitoring to your Elastic Cloud deployment, and defines a pipeline with an exec input (runs uptime every 30 seconds) and a tcp input on port 5960, both shipped to an Elasticsearch index:

# ./logstash/custom_config/dev/values.customer.yaml
persistence:
  enabled: false

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.cloud_id: "Your id"
    xpack.monitoring.elasticsearch.cloud_auth: "elastic:ES_PASSWORD"


logstashPipeline:
  uptime.conf: |
    input {
      exec {
        command => "uptime"
        interval => 30
      }
      tcp {
        port => 5960
        codec => json
      }
    }
    output {
      elasticsearch {
        index => "logstash-dev-%{+YYYY-MM-dd}"
        cloud_id => "Your id"
        cloud_auth => "elastic:ES_PASSWORD"
      }
    }

service:
  # annotations: {}
  type: LoadBalancer
  # loadBalancerIP: ""
  ports:
    - name: tcp-tcp
      # must match the tcp input port in the pipeline above
      port: 5960
      protocol: TCP
      targetPort: 5960

extraPorts:
  - name: tcp-tcp
    containerPort: 5960
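
Note that minikube has no cloud load balancer, so a Service of type LoadBalancer keeps a pending external IP and is reached through its NodePort instead (as in the debug section below). If you want a real external IP, you can run minikube tunnel in a separate terminal:

$ minikube tunnel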
    


# Add all needed repo
$ helm repo add elastic https://helm.elastic.co
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo list # checking

# download all needed charts
$ helm pull bitnami/kube-state-metrics --version 3.2.7 --untar
$ helm pull elastic/logstash --version 8.5.1 --untar

# start elastic agent --- you can get this file from elastic cloud
$ kubectl apply -f ./elastic_agent/elastic-agent-managed-kubernetes.yaml

# start logstash
# you will need your own elastic cloud password; you get it when you deploy your ELK stack on the cloud
$ sed "s/ES_PASSWORD/$ES_PASSWORD/" ./logstash/custom_config/dev/values.customer.yaml > ./logstash/custom_config/dev/values.customer.secret.yaml
$ helm upgrade --wait --timeout=1200s --install --values ./logstash/custom_config/dev/values.customer.secret.yaml logstash-elastic ./logstash

# start kube state metrics
$ helm upgrade --wait --timeout=1200s --install kube-state-metrics ./kube-state-metrics --namespace kube-system
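
To confirm everything deployed, check the Helm releases and the pods (exact pod names and labels can vary slightly between chart versions):

$ helm list --all-namespaces
$ kubectl get pods                     # logstash-elastic-logstash-0 should be Running
$ kubectl get pods -n kube-system | grep kube-state-metrics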

That is it; you should now be able to see the data on Elastic Cloud.

Debug logstash

Here is a quick-and-dirty way to send logs to Logstash to test connectivity. With this method, you can send a log to Logstash from outside the k8s cluster.

# check for the endpoints
$ minikube service list

|-------------|------------------------------------|--------------------|-----------------------------|
|  NAMESPACE  |                NAME                |    TARGET PORT     |             URL             |
|-------------|------------------------------------|--------------------|-----------------------------|
| default     | kubernetes                         | No node port       |
| default     | logstash-elastic-logstash          | tcp-tcp/5960       | http://192.168.39.187:30331 |
| default     | logstash-elastic-logstash-headless | No node port       |
|-------------|------------------------------------|--------------------|-----------------------------|

$ echo "Test message" | nc -q0 192.168.39.187 30331

You should be able to see the logs on Elastic Cloud.

Set up Kibana data view

There is an index created by the Elastic Agent, logs-kubernetes.container*. You can create a data view to include it.

You can also include the index created by Logstash, logstash-dev-*.
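
If you prefer to script this step, data views can also be created through the Kibana API. A sketch assuming Kibana 8.x, where KIBANA_URL is a placeholder for your Elastic Cloud Kibana endpoint:

$ curl -X POST "$KIBANA_URL/api/data_views/data_view" \
    -H "kbn-xsrf: true" -H "Content-Type: application/json" \
    -u "elastic:$ES_PASSWORD" \
    -d '{"data_view": {"title": "logs-kubernetes.container*,logstash-dev-*", "name": "k8s logs"}}'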

Now you are able to monitor your k8s cluster and collect its logs on Elastic Cloud. The next step is pushing this setup to AWS.
