From: Christoph Lameter <clameter@sgi.com>

Remove the atomic counter for slab_reclaim_pages and replace both that
counter and NR_SLAB with two ZVC counters that account for unreclaimable and
reclaimable slab pages: NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE.

Change the check in vmscan.c to refer to NR_SLAB_RECLAIMABLE.  The
intent seems to be to check for slab pages that could be freed.
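As a rough illustration (not taken from the actual vmscan.c hunk in this
series), the distinction the split is meant to capture looks like this:
code that wants the old combined slab total sums the two new counters,
while a check for freeable slab memory reads only the reclaimable one.
Both global_page_state() and node_page_state() are part of the existing
ZVC interface used elsewhere in this series.

	/* total slab pages, equivalent to the old NR_SLAB count */
	unsigned long total_slab = global_page_state(NR_SLAB_RECLAIMABLE) +
				   global_page_state(NR_SLAB_UNRECLAIMABLE);

	/* slab pages that shrink_slab() could potentially free */
	unsigned long freeable_slab = global_page_state(NR_SLAB_RECLAIMABLE);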

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/swap_prefetch.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff -puN mm/swap_prefetch.c~zvc-support-nr_slab_reclaimable--nr_slab_unreclaimable-swap_prefetch mm/swap_prefetch.c
--- a/mm/swap_prefetch.c~zvc-support-nr_slab_reclaimable--nr_slab_unreclaimable-swap_prefetch
+++ a/mm/swap_prefetch.c
@@ -393,7 +393,8 @@ static int prefetch_suitable(void)
 		 * would be expensive to fix and not of great significance.
 		 */
 		limit = node_page_state(node, NR_FILE_PAGES);
-		limit += node_page_state(node, NR_SLAB);
+		limit += node_page_state(node, NR_SLAB_UNRECLAIMABLE);
+		limit += node_page_state(node, NR_SLAB_RECLAIMABLE);
 		limit += node_page_state(node, NR_FILE_DIRTY);
 		limit += node_page_state(node, NR_UNSTABLE_NFS);
 		limit += total_swapcache_pages;
_
