No Data Corruption & Data Integrity in Shared Hosting
The integrity of the data you upload to your new shared hosting account is ensured by the ZFS file system that we use on our cloud platform. Like most hosting providers, we store content on multiple hard drives, and because the drives operate in a RAID, the same information is synchronized between them at all times. However, if a file on one drive becomes corrupted for whatever reason, the corruption is likely to be replicated to the other drives, since conventional file systems perform no special integrity checks. ZFS, in contrast, assigns a digital fingerprint, or checksum, to every file. If a file is damaged, its checksum will no longer match the one ZFS has on record, so the damaged copy is replaced with a healthy one from another drive. Because this happens in real time, there is no risk of any of your files ever becoming corrupted.
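The detect-and-replace step described above can be sketched in a few lines of simplified Python. This is only an illustration of the idea, not real ZFS code: ZFS works on blocks inside the kernel and supports checksum algorithms such as fletcher4 and sha256; here sha256 stands in for them, and the "drives" are just in-memory copies.

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for a ZFS checksum (fletcher4, sha256, ...).
    return hashlib.sha256(data).hexdigest()

# A simplified mirror: the same data stored on two "drives",
# plus the checksum recorded when the data was written.
block = b"customer website data"
recorded_sum = checksum(block)
drive_a = bytearray(block)
drive_b = bytearray(block)

# Simulate silent corruption (a flipped bit) on drive A.
drive_a[0] ^= 0x01

def read_self_healing(copies, recorded_sum):
    """Return a verified copy, repairing any corrupted copy in place."""
    good = next(bytes(c) for c in copies if checksum(bytes(c)) == recorded_sum)
    for c in copies:
        if checksum(bytes(c)) != recorded_sum:
            c[:] = good  # overwrite the bad copy with the good one
    return good

data = read_self_healing([drive_a, drive_b], recorded_sum)
```

After the read, `data` matches the original content and the corrupted copy on drive A has been overwritten with the healthy one, which is the essence of the real-time self-healing the paragraph describes.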
No Data Corruption & Data Integrity in Semi-dedicated Hosting
We have eliminated the possibility of files being silently damaged, because the servers where your semi-dedicated hosting account will be created use a powerful file system called ZFS. Its main advantage over other file systems is that it assigns a unique checksum to every single file - a digital fingerprint that is verified in real time. Since we store all content on multiple NVMe drives, ZFS checks whether the fingerprint of a file on one drive matches the fingerprints on the other drives as well as the one it has on record. If there is a mismatch, the bad copy is replaced with a healthy one from one of the other drives, and because this happens in real time, a damaged copy can neither remain on our web servers nor be duplicated to the other drives in the RAID. No other file system performs checks of this kind; moreover, even during a file system check after an unexpected power loss, none of them will detect silently corrupted files. ZFS, by contrast, remains consistent after a power loss, and its constant checksum monitoring makes a lengthy file system check unnecessary.
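The whole-pool verification mentioned above (what ZFS calls a "scrub") can also be sketched conceptually: walk every stored block, compare it against its recorded checksum, and heal any mismatch from a good mirror copy. The names and data below are illustrative, and sha256 again stands in for the checksum algorithms ZFS actually uses.

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for a ZFS checksum algorithm.
    return hashlib.sha256(data).hexdigest()

# Blocks mirrored on two "drives", keyed by block number,
# plus the checksums recorded at write time.
blocks = {0: b"index.html", 1: b"style.css", 2: b"app.js"}
recorded = {n: checksum(d) for n, d in blocks.items()}
drives = [dict(blocks), dict(blocks)]

# Simulate silent corruption on one drive, e.g. after a power loss.
drives[0][1] = b"style.cs?"

def scrub(drives, recorded):
    """Verify every block against its checksum; heal mismatches from a good copy."""
    repaired = 0
    for n, want in recorded.items():
        good = next(d[n] for d in drives if checksum(d[n]) == want)
        for d in drives:
            if checksum(d[n]) != want:
                d[n] = good
                repaired += 1
    return repaired

repaired_count = scrub(drives, recorded)
```

Because every block already carries its own checksum, this walk only has to compare fingerprints, which is why ZFS needs no separate, lengthy consistency check of the kind older file systems run after a crash.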
