Deletion of many files takes a long time


Currently I'm deleting 5 TB of files with "qq fs_delete_tree --path /xyz".

It takes too long to delete the fs tree: over 3 hours.

The result is the same if I try to delete on the mounted filesystem.

Replications are offline and no snapshots are running.

What is the system doing in the background that makes it take so long?

The next thing I have observed is the CPU utilization.

It is about 20 to 30% on all cores. Is that normal?


/opt/qumulo/qfsd --build-id /opt/qumulo/build.json -C /config/cluster_v1.json -c /etc/keys/localhost.pem -A -N /run/qfsd/cpu_map.json

is running several times.


Thanks & Regards







  • Kosta,


    Currently Tree Delete is a single-threaded operation. This keeps its impact minimal, but it is undesirable if you would like to delete data as quickly as possible. I will put in a feature request for a multi-threaded tree-delete job with a tunable "impact" setting (limit parallelism based on cluster resources). In the meantime, if you are able to split up the directory in question, you could run multiple jobs (from multiple nodes) to maximize performance. For example, if you can break up directory A into A/a, A/b, and A/c, you can then run a tree delete on each of the three directories from separate nodes at the same time to maximize performance.
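
    The splitting approach above can be sketched as a script. Since `qq fs_delete_tree` needs a live cluster, this sketch uses a scratch directory and plain `rm -rf` fanned out with `xargs -P` to illustrate the parallel structure; the `MOUNT` variable and the file layout are illustrative assumptions, not part of the product CLI.

    ```shell
    # Stand-in for the mounted cluster path (hypothetical scratch dir).
    MOUNT=$(mktemp -d)

    # Build three subtrees A/a, A/b, A/c with a few files each.
    for sub in a b c; do
        mkdir -p "$MOUNT/A/$sub"
        for i in 1 2 3; do : > "$MOUNT/A/$sub/file$i"; done
    done

    # Delete the three subtrees in parallel: -P 3 runs three rm jobs at once.
    # Against the cluster itself you would instead launch, from separate nodes:
    #   qq fs_delete_tree --path /A/a    # and likewise for /A/b and /A/c
    printf '%s\n' "$MOUNT/A/a" "$MOUNT/A/b" "$MOUNT/A/c" | xargs -n 1 -P 3 rm -rf

    ls "$MOUNT/A"    # prints nothing: all three subtrees are gone
    ```

    The parallelism win comes from each job walking an independent subtree, so no two jobs contend on the same directory entries.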

  • Kevin,

    It would be nice to have an option, such as a -f (force) parameter, to run such calls at full performance, with the understanding that the performance of the system underneath will be significantly lower while it runs.

    That would be really good. Such operations are not the rule, but good to have when needed.


    Thanks & Regards


