
AWS Elastic Kubernetes Service Logs

Ship AWS EKS Logs to Logstash

Filebeat is an open source shipping agent that lets you ship AWS Elastic Kubernetes Service (EKS) container Logs to one or more destinations, including Logstash.


Follow this step-by-step guide to ship logs from your system to Logit.io:

Step 1 - Prerequisites

First, make sure you have the AWS CLI, eksctl, and kubectl installed on your local machine.

You also need to configure the AWS CLI with your AWS credentials.

To do this, run the following command in your terminal.

aws configure

When you type this command, the AWS CLI prompts you for four pieces of information: access key, secret access key, AWS Region, and output format. This information is stored in a profile named default. This profile is used when you run commands, unless you specify another one.
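Once configured, the profile is stored as plain text in your home directory. Assuming a default setup, `~/.aws/credentials` and `~/.aws/config` will contain entries similar to the following (the key values shown are AWS's documented example placeholders, not real credentials):

```
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = eu-west-2
output = json
```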

Step 2 - Connecting to the cluster

Update your kubeconfig by running the following command. Replace <enter_region> and <enter_name> with your AWS cluster's region and name.

aws eks --region <enter_region> update-kubeconfig --name <enter_name>

Check you can connect to your cluster by running the following command:

kubectl get svc
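On a freshly connected cluster you should see at least the built-in kubernetes service; the cluster IP and age will differ, but output along these lines confirms the connection is working:

```
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1h
```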

Step 3 - Deploy Filebeat

You're going to need the filebeat deployment manifest.

curl -L -O https://cdn.logit.io/filebeat-kubernetes.yaml

Now that you have the manifest, you need to add your Stack's Logstash endpoint details.

Open the file in a text editor; around line 58 you'll see the environment variables that need changing.

env:
- name: LOGSTASH_HOST
  value: "guid-ls.logit.io"
- name: BEATS_PORT
  value: "00000"

After updating, the section should look like the below, with your Stack's Logstash host and Beats SSL port in place of the placeholders.

env:
- name: LOGSTASH_HOST
  value: "your-logstash-host"
- name: BEATS_PORT
  value: "your-ssl-port"

Exit and save the file.
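If you prefer not to edit the file by hand, the same change can be scripted with sed. The snippet below is a sketch that demonstrates the substitution on a minimal copy of the env section; the host "example-ls.logit.io" and port "12345" are illustrative values only, and in practice you would run the sed command directly against filebeat-kubernetes.yaml:

```shell
# Create a minimal copy of the env section to demonstrate the substitution
# (in practice, run the sed command below against filebeat-kubernetes.yaml).
cat > /tmp/filebeat-env-snippet.yaml <<'EOF'
env:
- name: LOGSTASH_HOST
  value: "guid-ls.logit.io"
- name: BEATS_PORT
  value: "00000"
EOF

# Replace the placeholder host and port with your Stack's details
# ("example-ls.logit.io" and "12345" are illustrative values only).
sed -i 's/guid-ls.logit.io/example-ls.logit.io/; s/00000/12345/' /tmp/filebeat-env-snippet.yaml

# Show the updated section.
cat /tmp/filebeat-env-snippet.yaml
```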

Step 4 - Apply your updates

Now we're going to apply the file to the cluster.

kubectl apply -f filebeat-kubernetes.yaml

If you need to make further updates after running the apply command, you may need to remove the existing deployment (kubectl delete -f filebeat-kubernetes.yaml), make your changes, and then apply again.

Step 5 - Confirm Deployment

Confirm your pods have deployed by running either of the following commands:

kubectl get po -A

or

kubectl logs <pod-name> --namespace=kube-system
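The Filebeat pods run in the kube-system namespace, so in the output of kubectl get po -A you should see entries along these lines once they are up (pod names, counts, and ages will differ on your cluster):

```
NAMESPACE     NAME             READY   STATUS    RESTARTS   AGE
kube-system   filebeat-8xjw2   1/1     Running   0          2m
kube-system   filebeat-k9d4q   1/1     Running   0          2m
```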

Browse to your Kibana instance and you should see Logs arriving in your Stack.

Step 6 - Check Logit.io for your logs

Data should now have been sent to your Stack.


If you don't see logs, take a look at the How to diagnose no data in Stack section below for help diagnosing common issues.

Step 7 - How to diagnose no data in Stack

If you don't see data appearing in your Stack after following the steps above, visit the Help Centre guide for steps to diagnose no data appearing in your Stack, or contact our support team.

Step 8 - AWS EKS Logs Overview

Sending data to Logit.io from AWS EKS streamlines log management for Kubernetes container orchestration. With effortless integration, users can easily ship logs from their AWS EKS clusters to Logit.io's log management and analysis platform, ensuring real-time visibility into their environments. This enables rapid issue resolution, performance optimization, and comprehensive log analysis to uncover trends and anomalies. Centralizing logs in Logit.io also supports security monitoring, threat detection, and compliance adherence by providing a complete audit trail. These integrations work side by side within Logit.io's AWS logging solution.


© 2024 Logit.io Ltd, All rights reserved.