Follow this step-by-step guide to ship logs from your system to Logit.io:
Step 1 - Prerequisites
First, make sure you have the AWS CLI, eksctl and kubectl installed on your local machine using the following guide.
Also make sure you have configured the AWS CLI with your AWS credentials.
To do this, run the following command in your terminal:
aws configure
When you run this command, the AWS CLI prompts you for four pieces of information: your access key ID, secret access key, AWS Region, and output format. These values are stored in a profile named default, which is used whenever you run a command without specifying another profile.
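For reference, the AWS CLI stores these values in two plain-text files under ~/.aws. A sketch of what they look like afterwards, using AWS's documentation example keys and a placeholder region:

```ini
# ~/.aws/credentials  (example values only - never commit real keys)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config  (region shown is an example)
[default]
region = us-east-1
output = json
```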
Step 2 - Connecting to the cluster
Update your kubeconfig by running the following command. Replace
<enter_region> and <enter_name> with your AWS region and cluster name.
aws eks --region <enter_region> update-kubeconfig --name <enter_name>
Check you can connect to your cluster by running the following command:
kubectl get svc
Step 3 - Deploy Filebeat
You're going to need the Filebeat deployment manifest. Download it with:
curl -L -O https://cdn.logit.io/filebeat-kubernetes.yaml
Now that you have the manifest, you need to add your Stack's Logstash endpoint details.
Open the file in a text editor and around line 58 you'll see the environment variables that need changing.
env:
- name: LOGSTASH_HOST
  value: "guid-ls.logit.io"
- name: BEATS_PORT
  value: "00000"
After updating, the section should look like the example below.
env:
- name: LOGSTASH_HOST
  value: "your-logstash-host"
- name: BEATS_PORT
  value: "your-ssl-port"
Save the file and exit.
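If you prefer not to open an editor, the same substitution can be sketched with sed. Here "your-logstash-host" and "your-ssl-port" are placeholders for your Stack's actual Logstash endpoint values:

```shell
# Replace the placeholder endpoint values in the downloaded manifest.
# "your-logstash-host" and "your-ssl-port" stand in for your Stack's
# Logstash host and SSL port.
sed -i \
  -e 's/guid-ls\.logit\.io/your-logstash-host/' \
  -e 's/"00000"/"your-ssl-port"/' \
  filebeat-kubernetes.yaml
```

Note that on macOS/BSD sed the in-place flag needs an explicit backup suffix, e.g. `sed -i ''`.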
Step 4 - Apply your updates
Now we're going to apply the file to the cluster.
kubectl apply -f filebeat-kubernetes.yaml
Step 5 - Confirm Deployment
Confirm your pods have deployed; you should see output similar to the below.
kubectl get po -A
kubectl logs <podname> --namespace=kube-system
Browse to your Kibana instance and you should see logs arriving in your Stack.
Step 6 - Check Logit.io for your logs
You should now be able to view your data:
If you don't see logs, take a look at How to diagnose no data in Stack below for how to diagnose common issues.