Follow this step-by-step guide to ship logs from your system to Logit.io:
Step 1 - Copy Manifest File
Copy and use the Kubernetes Filebeat manifest below.
If you aren't logged in, you may need to replace the your-logstash-host and your-logstash-port placeholders in the environment variables with the values for your stack.
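As an illustrative sketch (the exact field names depend on the manifest version you copy, so treat this as an assumption and check your own file), the placeholders typically sit in the DaemonSet container's env section:

```yaml
# Hypothetical fragment of filebeat-kubernetes.yaml -- structure assumed
# from a standard Filebeat DaemonSet manifest; your copy may differ.
env:
  - name: LOGSTASH_HOST
    value: "your-logstash-host"   # replace with your Logit.io Logstash endpoint
  - name: LOGSTASH_PORT
    value: "your-logstash-port"   # replace with your stack's Beats port
```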
Step 2 - Deploy Pod
Now that your deployment manifest is updated, you can deploy it using:
kubectl apply -f filebeat-kubernetes.yaml
Step 3 - Confirm Completed Deployment
kubectl --namespace=kube-system get ds/filebeat
kubectl --namespace=kube-system get pods
You should see a pod for each Kubernetes node with a name similar to filebeat-abcde listed. The pods should progress from Pending to Running within a couple of minutes as the containers are downloaded and started.
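If you'd rather watch the rollout than poll, something like the following works (the k8s-app=filebeat label is an assumption based on the standard Filebeat manifest; check the labels in your copy):

```shell
# Watch the Filebeat pods until they reach Running; Ctrl+C to stop.
kubectl --namespace=kube-system get pods -l k8s-app=filebeat -w

# Or ask kubectl to block until the DaemonSet rollout completes.
kubectl --namespace=kube-system rollout status ds/filebeat
```

Both commands need access to your cluster, so run them from the same environment you used for kubectl apply.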
Step 4 - Check Logit.io for your logs
Now you should be able to view your data.
If you don't see logs, take a look at the How to diagnose no data in Stack section below for help diagnosing common issues.
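When no data appears, a Filebeat pod's own logs often reveal the problem, such as a connection error to your Logstash endpoint. A minimal sketch (filebeat-abcde is a placeholder pod name; substitute a real one from the listing, and note the k8s-app=filebeat label is assumed from the standard manifest):

```shell
# List the Filebeat pods to get a real pod name.
kubectl --namespace=kube-system get pods -l k8s-app=filebeat

# Tail one pod's recent output and look for output/connection errors.
kubectl --namespace=kube-system logs filebeat-abcde --tail=50
```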
Step 5 - Kubernetes Logging to OpenSearch Overview
Kubernetes was open-sourced by Google in 2014 and has quickly become one of the most popular container management tools on the market, as it helps to significantly lower the cost of cloud computing and provides a resilient framework for deploying applications.
A common challenge for effective Kubernetes log aggregation is that, during spikes, data can easily be lost and go unaccounted for without a scalable logging solution such as Logit.io. Our platform also provides log tailing for real-time monitoring of your Kubernetes logs and metrics.
Monitor across 1,000s of containers, layers, log levels, and data types in one centralised logging platform, and save hours of monthly maintenance otherwise needed to support the ELK Stack.
If you need any further help with migrating your Kubernetes log files using Filebeat, we're here to help. Feel free to get in contact with our support team via live chat and we'll be happy to assist.