shrink_dcache_sb speedup
author Denis V. Lunev <den@openvz.org>
Wed, 17 Oct 2007 06:29:53 +0000 (23:29 -0700)
committer Linus Torvalds <torvalds@woody.linux-foundation.org>
Wed, 17 Oct 2007 15:42:57 +0000 (08:42 -0700)
This patch makes shrink_dcache_sb() consistent with the dentry pruning policy.

On the first pass we iterate over the dentry unused list and prepare some
dentries for removal.

However, since the existing code moves the dentries selected for eviction to
the beginning of the LRU, fresh dentries from other superblocks can end up
inserted *before* our dentries.

This can result in a significant slowdown of shrink_dcache_sb().  Moreover,
for virtual filesystems like unionfs, which can call dput() while dentries
are being killed, the existing code results in O(n^2) complexity.

We observed shrink_dcache_sb() taking 2 minutes with only 35000 dentries.

To avoid these effects we propose to isolate the sb's dentries at the end
of the LRU list.
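
For illustration, a minimal sketch of the two-pass scheme with this change
applied.  The pass-two body (dentry_stat, prune_one_dentry(),
cond_resched_lock()) is not visible in the hunks below and is filled in here
from the surrounding fs/dcache.c of that era, so treat it as an outline
rather than the exact committed code:

void shrink_dcache_sb(struct super_block *sb)
{
	struct list_head *tmp, *next;
	struct dentry *dentry;

	/*
	 * Pass one ... walk the unused list backwards and park this
	 * superblock's dentries at the tail, so fresh dentries from
	 * other superblocks (added at the head) cannot land in front
	 * of them.
	 */
	spin_lock(&dcache_lock);
	list_for_each_prev_safe(tmp, next, &dentry_unused) {
		dentry = list_entry(tmp, struct dentry, d_lru);
		if (dentry->d_sb != sb)
			continue;
		list_move_tail(tmp, &dentry_unused);
	}

	/*
	 * Pass two ... free them from the tail; because they are now
	 * grouped there, each restart of the backward scan reaches
	 * this superblock's dentries immediately.
	 */
repeat:
	list_for_each_prev_safe(tmp, next, &dentry_unused) {
		dentry = list_entry(tmp, struct dentry, d_lru);
		if (dentry->d_sb != sb)
			continue;
		dentry_stat.nr_unused--;
		list_del_init(tmp);
		spin_lock(&dentry->d_lock);
		if (atomic_read(&dentry->d_count)) {
			spin_unlock(&dentry->d_lock);
			continue;
		}
		prune_one_dentry(dentry);	/* may drop dcache_lock */
		cond_resched_lock(&dcache_lock);
		goto repeat;
	}
	spin_unlock(&dcache_lock);
}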

Signed-off-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Kirill Korotaev <dev@openvz.org>
Signed-off-by: Andrey Mirkin <amirkin@openvz.org>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
fs/dcache.c
include/linux/list.h

index 5cdd14e958589649f2de51028d8bfbc038f15107..42d290be0ac155c675bae69cbfdb0d50bcd12211 100644 (file)
@@ -553,18 +553,18 @@ void shrink_dcache_sb(struct super_block * sb)
         * superblock to the most recent end of the unused list.
         */
        spin_lock(&dcache_lock);
-       list_for_each_safe(tmp, next, &dentry_unused) {
+       list_for_each_prev_safe(tmp, next, &dentry_unused) {
                dentry = list_entry(tmp, struct dentry, d_lru);
                if (dentry->d_sb != sb)
                        continue;
-               list_move(tmp, &dentry_unused);
+               list_move_tail(tmp, &dentry_unused);
        }
 
        /*
         * Pass two ... free the dentries for this superblock.
         */
 repeat:
-       list_for_each_safe(tmp, next, &dentry_unused) {
+       list_for_each_prev_safe(tmp, next, &dentry_unused) {
                dentry = list_entry(tmp, struct dentry, d_lru);
                if (dentry->d_sb != sb)
                        continue;
index ad9dcb9e337523337d5d2bacaa7c2e9f679f786a..b0cf0135fe3edec0f7054e6602ca7f54572d592f 100644 (file)
@@ -477,6 +477,18 @@ static inline void list_splice_init_rcu(struct list_head *list,
        for (pos = (head)->next, n = pos->next; pos != (head); \
                pos = n, n = pos->next)
 
+/**
+ * list_for_each_prev_safe - iterate over a list backwards safe against removal
+                       of list entry
+ * @pos:       the &struct list_head to use as a loop cursor.
+ * @n:         another &struct list_head to use as temporary storage
+ * @head:      the head for your list.
+ */
+#define list_for_each_prev_safe(pos, n, head) \
+       for (pos = (head)->prev, n = pos->prev; \
+            prefetch(pos->prev), pos != (head); \
+            pos = n, n = pos->prev)
+
 /**
  * list_for_each_entry -       iterate over list of given type
  * @pos:       the type * to use as a loop cursor.
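
As a usage note for the new iterator, here is a small hypothetical example
(struct item and drop_matching() are made up for illustration and are not
part of this patch): walking from the tail with a lookahead cursor lets the
loop body delete the current entry safely.

#include <linux/list.h>
#include <linux/slab.h>

struct item {
	int val;
	struct list_head node;
};

/*
 * Delete all entries with a matching value, scanning from the tail;
 * the _safe variant caches the previous node, so list_del() on the
 * current entry does not break the walk.
 */
static void drop_matching(struct list_head *head, int val)
{
	struct list_head *pos, *n;

	list_for_each_prev_safe(pos, n, head) {
		struct item *it = list_entry(pos, struct item, node);

		if (it->val != val)
			continue;
		list_del(pos);
		kfree(it);
	}
}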