
By Lee Smith

Centralised Logging


We believe that centralising your log data is essential, but don’t just take our word for it.

In our latest blog post, we gathered insights from CTOs, senior engineers and InfoSec influencers by posing our experts the question: why is logging all of your data in one place so important?

Here is their feedback, drawn from first-hand experience of working with data on a daily basis.

Common Downfalls Associated With Distributed Systems

When asked to describe some of the common difficulties of dealing with non-centralised data, Akram Tariq Khan, Chief Technology Officer at YourLibaas, responded:


“A breach is detected and relayed to a remote server faster in a centralised system. A potential hacker could disrupt the system by manipulating the logs which might go undetected in a distributed setup.”

Taking a macro-level view, he continued:

“A distributed system has multiple parts that work in tandem. If one of them fails, the system breaks down and troubleshooting is a time-consuming task because errors might not be apparent easily.”

“A centralised system makes it easier for the technology team to have a bird's eye view of the infrastructure and ensure uptime.”


“The costs of backups are reduced as redundancy is minimised. Moreover, if a system fails, we can have a remote server which receives the latest log files.”


“Complying with privacy laws, including GDPR and CCPA, requires hiring a chief data officer who ensures data is handled maintaining integrity, privacy and security.”

“A distributed system makes it expensive and tedious to have monitoring systems and processes to accomplish it.”

Being able to troubleshoot faster thanks to aggregating log data in one location was also cited as a benefit by Pieter VanIperen, Veteran Software Architect and Founder of PWV Consultants, in his response:

“Centralized logging is important because it gives IT departments one place to look when an incident happens.”

PWV Consultants is a boutique group consisting of some of the leading influencers working in technology & security and includes members active in many Fortune 500 companies and NGOs.

Pieter continues:

“If logging is done on each individual system with no way to determine which system has a problem, then it will require InfoSec professionals to go through each system individually until the problem is found.”

“When there’s an incident, especially if it’s a breach of some kind, time is imperative. When your incident response team can go into one system and find the issue, they can then proceed to isolating the issue and tracing it.”

“They can cut that system off from the rest of the business to ensure no further access is granted.”

“It all boils down to a single place to log information. The information isn’t stored all in one place, but the logging system should be set up and designed to tell you where and when information and data is logged.”

“It provides a single access point for troubleshooting, compliance, error resolution and incident response and, basically, it saves your IT department a ton of time, especially if you’re a larger corporation with multiple data store types and numerous databases.”

“Rather than take an entire day (or week) to locate an incident or error, you can query one system and gain access to the necessary insights fast.”
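To make Pieter’s point concrete, here is a minimal sketch (in Python, with an invented service name; this is not his setup or ours) of structured logging that tags every entry with its host and service, so a central collector can always tell where and when each record originated:

```python
import json
import logging
import socket
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line, tagged with host and service,
    so the central collector knows where and when it was logged."""

    def __init__(self, service):
        super().__init__()
        self.service = service
        self.host = socket.gethostname()

    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "host": self.host,
            "service": self.service,
            "level": record.levelname,
            "message": record.getMessage(),
        })

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()  # in production, ship this to the central endpoint
handler.setFormatter(JsonFormatter(service="checkout-api"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("payment gateway timeout")
```

Shipping lines like these from every service to one endpoint is what gives the incident response team a single system to query, rather than a server-by-server hunt.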

The Hidden Costs Of Uncentralised Log Data

“The difference between centralised and distributed data logging is night and day,” says Jack Zmudzinski, Senior Associate at Future Processing.

“Keeping all your data in one place allows you to keep disk partitions static. And what this does is it reduces the possibility of a mistake, as you don’t need your employees to archive the files manually.”

“It also cuts costs, as you can limit the disk space according to your needs.”

“Distributed systems are harder to troubleshoot, and it is easier to make a mistake when using them.”

“It is a nightmare to search for something that has been processed on any one of many servers.”

“Your staff would have to log into each one of them and search manually, which is a waste of time and resources. It is pretty clear that it’s much easier to find what you are looking for when you store it in one place.”

The benefit of lowering costs thanks to centralised logging was seconded by Eric McGee, Senior Network Engineer at TRG Data Centers, in his response:

“One vast advantage that we experience in-house due to centralized logging is that we can share our dashboards and log information in a streamlined manner.”

“There are lower costs involved and we keep training our staff in using consolidated data locations and logs to improve our performance.”

End Notes

If you’ve not yet found a suitable log aggregation tool, then you might want to consider our platform, which enables you to store all your data in a single location. Our hosted ELK solution can house data sets from widespread systems that differ dramatically from one another.

What’s more, our log management tool can easily normalise different schemas to ensure that data from different systems can be compared on a like-for-like basis. We also provide alerting, notifications, data visualisations and detailed dashboards, making log management and analysis not only easy but also affordable, especially when compared to the costs of dealing with distributed systems or hosting ELK in-house.
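As a rough illustration of what schema normalisation involves (the source labels and field names below are invented for this example, not our actual mappings), the core idea is to map each source’s field names onto one common schema so that entries become comparable like-for-like:

```python
# Per-source mappings from raw field names onto one common schema.
# "nginx" and "app-api" are hypothetical sources for illustration.
FIELD_MAPS = {
    "nginx":   {"ts": "timestamp", "msg": "message", "sev": "level"},
    "app-api": {"time": "timestamp", "message": "message", "loglevel": "level"},
}

def normalise(source, entry):
    """Rewrite a raw log entry into the common schema for its source."""
    mapping = FIELD_MAPS[source]
    return {common: entry[raw] for raw, common in mapping.items()}

a = normalise("nginx",   {"ts": "2022-01-01T00:00:00Z", "msg": "GET /", "sev": "info"})
b = normalise("app-api", {"time": "2022-01-01T00:00:01Z", "message": "order placed", "loglevel": "info"})
assert set(a) == set(b)  # both entries now share one set of field names
```

Once every source lands in the same shape, querying, alerting and dashboarding can treat all of the data uniformly.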

If you enjoyed this article on centralised logging, why not check out our post on GitLab vs GitHub or our guide to incident management tools.



© 2022 Ltd, All rights reserved.