After explaining the monitoring of NAS servers in the previous part, I would now like to talk about the monitoring of file servers. Many organizations ask themselves whether a NAS or a file server is the better option for them. Both server types are used to share data with clients. There is no clear answer to this question, but in either case you need to monitor the servers properly. In this blog post, I'll show you what to look for in file server monitoring.
What are file servers?
File servers are a popular way of making files available to other systems on a network or over the Internet. Users are given access to the data stored on the server after an administrator has granted them access rights. Access control is handled either directly on the server or via the file system's permission mechanism.
There are several types of file servers, which differ mainly in the transfer protocol used. Some organizations still set up their file servers as FTP servers, for example: although FTP is quite old, it is also quite easy to use. Its use can lead to problems, however. Many FTP servers are inadequately secured or even publicly accessible. In addition to plain FTP, companies therefore also rely on its more secure successors, such as FTP over TLS (FTPS) or the SSH File Transfer Protocol (SFTP). In closed networks, Windows systems typically use file servers based on CIFS/SMB (Server Message Block), while Linux systems usually rely on the Network File System (NFS).
In addition, the HTTP protocol is often used, too. Just like FTP, it is easy to use, but offers supplementary security functions. It is therefore a popular protocol for file servers, especially for download servers on the Internet.
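Each of these protocols listens on a well-known TCP port, which is also the simplest thing you can probe from the outside. As a minimal sketch (not a Checkmk feature, just an illustration of the idea), the following Python snippet checks which of the typical file-server ports respond on a host; the port selection is an assumption for illustration:

```python
import socket

# Typical default TCP ports of common file-server protocols
# (illustrative selection, adjust to your environment)
PORTS = {
    "FTP": 21,
    "SFTP/SSH": 22,
    "HTTP": 80,
    "HTTPS": 443,
    "CIFS/SMB": 445,
    "FTPS": 990,
    "NFS": 2049,
}

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_file_server(host: str) -> dict:
    """Report which common file-sharing ports respond on the host."""
    return {name: port_reachable(host, port) for name, port in PORTS.items()}
```

A reachable port only tells you that something is listening, not that the protocol actually works; that is what the active checks discussed below are for.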
Organizations usually set up file servers on dedicated hardware or as virtual servers. In theory, they can also host several file servers on the same hardware server and install other software servers under the same operating system. This carries the risk, however, that a single failure affects several application servers at the same time, so this practice is not advisable.
Although you can easily run file servers under Windows, Linux and also macOS, you should consider the long-term effort involved in operating them. Compared to NAS servers, file servers are somewhat more complex to manage. Depending on the size of your company, the number of users and the volume of data, the effort required to maintain the folder and directory structure increases. The number of user shares can also quickly become hard to keep track of.
This belongs in file server monitoring
Since you generally set up file servers yourself, you will also have administrator rights on the operating systems. This allows you to use monitoring agents. Compared to NAS servers, this is an advantage that you should use. With the right agents, you gain much deeper insights, can detect possible threats earlier and need fewer hardware resources for server monitoring.
Most monitoring tools can query data on server hardware via SNMP, IPMI or manufacturer-specific interfaces, but remain blind at the operating system level and also cannot actively check transfer protocols.
With Checkmk you can set up all data sources with just a few clicks and then assign them all to a host in the monitoring. This means that with Checkmk you can easily and effectively combine agent-based monitoring, active checks, and hardware monitoring via SNMP or IPMI. Through the assignments, you can maintain an overview of your hosts being monitored and always have all context information in one view.
Suitable monitoring integrations are available in Checkmk for all common server hardware manufacturers such as Dell, IBM, HPE, Cisco or Huawei. Checkmk also includes monitoring plug-ins for management boards of enterprise server solutions such as HPE iLO boards. So, with just a few clicks you can have all the necessary hardware information in a single monitoring view.
For example, the Checkmk agents automatically capture the details of a file server's file systems and alert you when storage space is running low. As for all data sources, Checkmk provides default thresholds for alerts, which you can of course customize to your needs. This simplifies the setup and gets you through the process more quickly.
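The underlying logic of such a file-system check is simple: compare the usage percentage against WARN and CRIT thresholds. The following Python sketch illustrates this principle; the default thresholds of 80/90 percent are an assumption chosen for illustration, not Checkmk's actual defaults:

```python
import shutil

def check_filesystem(path: str, warn: float = 80.0, crit: float = 90.0):
    """Classify a file system's usage against WARN/CRIT thresholds.

    The 80/90 percent defaults are illustrative; a monitoring tool
    lets you tune these per host or per file system.
    """
    usage = shutil.disk_usage(path)
    percent = usage.used / usage.total * 100
    if percent >= crit:
        state = "CRIT"
    elif percent >= warn:
        state = "WARN"
    else:
        state = "OK"
    return state, f"{state} - {path} is {percent:.1f}% full"
```

In practice the agent collects the raw usage data and the monitoring server applies the thresholds, so you can change them centrally without touching the monitored host.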
Checkmk additionally comes with active checks for all common file server protocols such as FTP, SFTP, CIFS/SMB or HTTP, which you can easily set up via Checkmk's graphical user interface. In this way, Checkmk actively verifies whether the file server's transfer protocol is working. In the monitoring, these active checks are then available as services of the host and thus complement your file server monitoring.
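To make the idea of an active check concrete, here is a minimal sketch of what such a check does under the hood, using FTP as an example. This is not Checkmk's implementation, just an illustration of the principle: connect as a client, and translate success or failure into a monitoring state:

```python
import ftplib

# Nagios-style monitoring states, as used by many monitoring tools
OK, WARN, CRIT = 0, 1, 2

def check_ftp(host: str, port: int = 21, timeout: float = 5.0):
    """Actively verify that an FTP server accepts connections.

    Returns a (state, message) pair similar to what an active
    protocol check in a monitoring tool would produce.
    """
    try:
        with ftplib.FTP() as ftp:
            banner = ftp.connect(host, port, timeout=timeout)
            return OK, f"OK - FTP responded: {banner}"
    except ftplib.all_errors as exc:
        return CRIT, f"CRIT - FTP connection failed: {exc}"
```

A real active check would typically go further, for example logging in with test credentials or measuring the response time, and mapping slow responses to WARN.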
Thanks to Checkmk's rule-based approach, setup is easy even if you have to monitor many servers. With just a few clicks you can include a large number of file servers in the monitoring and monitor them very precisely.
You can also choose between various alerting mechanisms: for example, a phone call escalation can be initiated when a critical memory load is reached. The responsible on-call person is alerted immediately. This ensures that you can react quickly to unexpected performance peaks.
With Checkmk, you can also record such performance peaks and be automatically informed about unusual anomalies. For example, if data movements are unusually high at times, Checkmk can trigger an alert on request. You can also investigate irregularities in detail thanks to the extensive graphing functions in Checkmk.
By extending the Checkmk monitoring agents with the file check, you can monitor individual files in the server's file system with Checkmk. This way you can ensure, for example, that a file is always available on a download server and has not been accidentally moved or deleted.
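The principle behind such a file check can be sketched in a few lines of Python. This is an illustration of the concept, not Checkmk's file check itself: verify that the file exists and, optionally, that it is not older than expected:

```python
import os
import time

def check_file(path: str, max_age_seconds=None):
    """Verify that a file exists and, optionally, is fresh enough.

    Mimics the idea of a monitoring file check: alert when a file
    is missing or older than expected (e.g. on a download server).
    """
    if not os.path.isfile(path):
        return "CRIT", f"CRIT - {path} is missing"
    age = time.time() - os.path.getmtime(path)
    if max_age_seconds is not None and age > max_age_seconds:
        return "WARN", f"WARN - {path} is {age:.0f}s old"
    size = os.path.getsize(path)
    return "OK", f"OK - {path} present ({size} bytes)"
```

A full-featured file check would also cover file size limits and glob patterns for matching several files at once.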
Extending agents and combining multiple data sources is not only suitable for monitoring file servers. In the next part of the blog series, I would like to show you what you should consider when monitoring SQL servers.