
For various reasons, my server machines have ended up with some very large files (e.g., 180 GB). How can I delete these files quickly?

  • is 'rm filename' not working? – switch87 Nov 25 '15 at 12:46
  • 2
    Which of these is giving you the problem: finding the files, actually deleting them, or getting the space back after rm? – Ulrich Schwarz Nov 25 '15 at 13:06
  • Deleting a huge file should be virtually as quick as deleting a very small file. – Petr Skocik Nov 25 '15 at 23:10
  • It basically only removes references to chunks of data. When there are no remaining references, the associated chunks of data become available for reuse. The time it takes to delete a file doesn't grow linearly with the file's size. – Petr Skocik Nov 25 '15 at 23:12
  • It's not inconceivable that some rare filesystems would actually purge the associated data blocks, but standard Unix filesystems should work as I've described. – Petr Skocik Nov 25 '15 at 23:14
  • “1.21 gigawatts!” … wait, wrong units. Never mind. – Tom Zych Nov 26 '15 at 00:19
  • What filesystem are you using? ext3 has a known issue where it takes a long time to delete large files. This is resolved in ext4. – psusi Nov 26 '15 at 01:12

2 Answers

4

rm deletes files regardless of their size.

Very large files can take a little time to delete, because the filesystem needs to mark all the blocks that the file used as available. That cost has to be paid at one time or another: if you don't pay it at deletion time, you pay it when files are created. ZFS offers a way to defer the cost of deleting a directory tree, but most filesystems don't have the requisite complex features.
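As a rough illustration (a sketch, not a benchmark: the path, size, and use of dd here are my own choices), you can create a large file and time its deletion; on common filesystems the rm returns far faster than the file took to write:

    # create a 1 GiB file with allocated blocks (hypothetical path)
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024
    # deleting it only updates metadata, so this returns almost immediately
    time rm /tmp/bigfile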

Deleting even large files doesn't take much time. If that's still too long for you, and you want to start typing another command immediately, you can run rm in the background (rm /path/to/file &). If you want to be able to create a new file by the same name, you don't need to run rm at all; you can just overwrite the file. If you need to delete the file, for example to then delete the directory that contains it, you can first move the file to another directory on the same filesystem (that's instant: moving a file within a filesystem only updates its directory entry), then remove it in its new location.
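A minimal sketch of those two options (the paths, including the trash directory, are hypothetical, and everything is assumed to live on the same filesystem):

    # run the deletion in the background and get the prompt back immediately
    rm /path/to/file &

    # or move the file aside first (instant within one filesystem),
    # so the containing directory can be removed right away
    mkdir -p /srv/.trash               # hypothetical directory on the same filesystem
    mv /path/to/file /srv/.trash/
    rm /srv/.trash/file &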

If what you wanted was to make the space available instantly, you can't, as I explained above. If you want to reclaim some of the space faster than deleting the whole file would, you can truncate it to a shorter length, e.g. truncate -s -1G /path/to/file to chop the last GiB off the file, then remove the file. The truncate command is from GNU coreutils; if your machine isn't running Linux, you probably don't have it, but you can use dd instead, e.g. dd if=/dev/null of=/path/to/file bs=1024k seek=180000 to truncate the file to 180000 MiB.
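If you want to free the space gradually, you can wrap truncate in a loop. The sketch below assumes GNU stat and truncate (i.e. Linux) and a hypothetical path:

    # shrink the file by 1 GiB at a time until less than 1 GiB remains
    while [ "$(stat -c %s /path/to/file)" -gt $((1024 * 1024 * 1024)) ]; do
        truncate -s -1G /path/to/file
    done
    rm /path/to/file                   # remove the final remnant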

0

Deleting a file in turn translates to a vfs_unlink() call and then the filesystem-specific unlink operation. These calls remove the file's entry from its parent directory, causing that file's dentry to become negative, and release the inode once nothing references it any more. The data on disk (whatever its size) remains as it is, with only one change: the blocks it occupies are moved to the pool of free blocks. This means that, from now on, the data of the file you've just deleted can be overwritten by the kernel at any time.
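You can watch this happen: on Linux, tracing rm shows the single unlink-family system call it makes (a sketch; it assumes strace is installed, and note that modern GNU rm uses unlinkat rather than unlink):

    # trace the unlink-family syscalls issued by rm (hypothetical path)
    strace -e trace=unlink,unlinkat rm /path/to/file
    # typical output: unlinkat(AT_FDCWD, "/path/to/file", 0) = 0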

So, in a nutshell, deleting a file of any size on Linux (and other Unix-derived systems) boils down to removing its inode reference from the filesystem, and it takes about the same time regardless of the file's size.