Efficiently synchronize copies of a large sparse file locally. I deal with a large number of large sparse files because of virtualization and other technologies. Because the files are sparse, only a small proportion of their blocks contain data, and of those, only a small number change and need to be backed up. Using a log-based (snapshotting) file system on USB 2 as a backup device, I only want to write blocks when absolutely necessary.
So what's the solution? Some simple custom code (sketched below) that
- checks that both file sizes are identical;
- checks whether any metadata has changed (e.g. time stamp, permissions, or owner/group);
- reads both files block by block;
- writes only changed blocks to the destination file; and
- updates any changed metadata.
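As a rough illustration of those steps, here is a minimal Python sketch. The `sync_blocks` name, the 64 KiB block size, and the exact metadata comparison are my own assumptions for the example, not a finished tool:

```python
import os

BLOCK_SIZE = 64 * 1024  # assumed block size; tune to the filesystem


def sync_blocks(src_path, dst_path, block_size=BLOCK_SIZE):
    """Copy only the blocks of src that differ from dst, then sync metadata."""
    src_stat = os.stat(src_path)
    dst_stat = os.stat(dst_path)

    # Step 1: sizes must match, or an in-place block sync is unsafe.
    if src_stat.st_size != dst_stat.st_size:
        raise ValueError("file sizes differ; a full copy is required")

    # Step 2: if no metadata has changed, assume the content is unchanged too.
    if (src_stat.st_mtime, src_stat.st_mode, src_stat.st_uid, src_stat.st_gid) == \
       (dst_stat.st_mtime, dst_stat.st_mode, dst_stat.st_uid, dst_stat.st_gid):
        return 0

    changed = 0
    # Steps 3-4: read both files block by block, write only differing blocks.
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        while True:
            s_block = src.read(block_size)
            d_block = dst.read(block_size)
            if not s_block:
                break
            if s_block != d_block:
                # Seek back over the block just read, then overwrite it.
                dst.seek(-len(d_block), os.SEEK_CUR)
                dst.write(s_block)
                changed += 1

    # Step 5: carry over the changed metadata.
    os.chmod(dst_path, src_stat.st_mode)
    os.utime(dst_path, (src_stat.st_atime, src_stat.st_mtime))
    # os.chown requires privileges; enable it when running as root.
    # os.chown(dst_path, src_stat.st_uid, src_stat.st_gid)
    return changed
```

On a snapshotting target this matters because untouched blocks are never rewritten, so only the blocks that actually changed consume new space in the log.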
It seems like ages ago now that I found my customer had a process that connected to hundreds of Oracle databases to run predefined SQL for health checks. These databases were hosted all over the world, and the SQL could take up to fifteen minutes to complete for a single database, with frequent TNS timeouts along the way. The end result was a CSV file that was ultimately formatted into a spreadsheet to provide management information. Obtaining that final result took about a day.
I thought there was a better way.
According to Slashdot, you may soon be able to develop software for your Ford or Vauxhall/Opel car. This could be exciting news, but it is not yet known what constraints will be in place or whether you will be permitted to develop software independently for your own vehicle.
The SANS Institute has released a list of what it considers the twenty-five most dangerous programming errors. Update your code now!