MOTIVATION

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

FAQs are included below.

IMPLEMENTATION OVERVIEW

A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with its function pointers
set appropriately.  Note that cleancache_register_ops returns the previous
settings so that chaining can be performed if desired.  The functions
provided must conform to the following semantics:

Most important, cleancache is "ephemeral".  Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

When a cleancache-enabled filesystem is mounted, it should call "init_fs"
to obtain a pool id which, if positive, must be saved in the filesystem's
superblock; a negative return value indicates failure.  A "put_page" will
copy a (presumably about-to-be-evicted) page into cleancache and associate
it with the pool id, a file key, and a page index into the file.  (The
combination of a pool id, a file key, and an index is sometimes called a
"handle".)  A "get_page" will copy the page, if found, from cleancache
into kernel memory.  A "flush_page" will ensure the page is no longer
present in cleancache; a "flush_inode" will flush all pages associated
with the specified file; and, when a filesystem is unmounted, a "flush_fs"
will flush all pages in all files specified by the given pool id and also
surrender the pool id.
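
Taken together, these operations suggest an ops structure along the
lines of the sketch below.  The exact types (the file key structure in
particular) are illustrative and may differ across kernel versions:

    /* illustrative sketch only; consult the kernel headers for
     * the authoritative definition */
    struct cleancache_ops {
            int (*init_fs)(size_t pagesize);
            int (*init_shared_fs)(char *uuid, size_t pagesize);
            int (*get_page)(int pool_id, struct cleancache_filekey key,
                            pgoff_t index, struct page *page);
            void (*put_page)(int pool_id, struct cleancache_filekey key,
                             pgoff_t index, struct page *page);
            void (*flush_page)(int pool_id, struct cleancache_filekey key,
                               pgoff_t index);
            void (*flush_inode)(int pool_id, struct cleancache_filekey key);
            void (*flush_fs)(int pool_id);
    };

    /* a backend registers its ops; the previous ops are returned
     * so that chaining can be performed if desired */
    old_ops = cleancache_register_ops(&my_backend_ops);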

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared, using a 128-bit UUID as a key.  On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify
the same UUID will receive the same pool id, thus allowing the pages to
be shared.  Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache).  Or a
cleancache implementation can simply disable init_shared_fs by always
returning a negative value.

If a get_page is successful on a non-shared pool, the page is flushed
(thus making cleancache an "exclusive" cache).  On a shared pool, the
page is NOT flushed on a successful get_page so that it remains
accessible to other sharers.  The kernel is responsible for ensuring
coherency between cleancache (shared or not), the page cache, and the
filesystem, using cleancache flush operations as required.
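
For instance, on truncation or removal, any copy held by cleancache must
be flushed before the pagecache page is reused; the call below uses the
frontend hook name but is otherwise an illustrative sketch:

    /* drop any tmem copy so a later get cannot return stale data */
    cleancache_flush_page(page->mapping, page);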

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get
coherency, if a get for a given handle fails, subsequent gets for that
handle will never succeed unless preceded by a successful put with that
handle.
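
In sequence form (pseudo-calls, handle arguments abbreviated):

    put(h, AAA);
    put(h, BBB);
    get(h);          /* must return BBB or fail; never AAA */

    get(h);          /* fails */
    get(h);          /* must also fail, unless a successful
                      * put(h, ...) occurred in between */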

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and flushing a page
with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.
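
In practice the caller brackets the cleancache call with the page lock,
as in this sketch (frontend hook name; error handling elided):

    /* the page lock supplies the serialization that
     * cleancache itself does not guarantee */
    lock_page(page);
    cleancache_put_page(page);
    unlock_page(page);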

CLEANCACHE PERFORMANCE METRICS

Cleancache monitoring is done by sysfs files in the
/sys/kernel/mm/cleancache directory.  The effectiveness of cleancache
can be measured (across all filesystems) with:

succ_gets   - number of gets that were successful
failed_gets - number of gets that failed
puts        - number of puts attempted (all "succeed")
flushes     - number of flushes attempted

A backend implementation may provide additional metrics.
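
As a rough figure of merit (computed by the reader, not by the kernel),
the hit ratio succ_gets / (succ_gets + failed_gets) estimates how often
cleancache turned a would-be disk read into a memory copy.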

FAQ

1) Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable by the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for
write-balancing for some RAM-like devices).  Evicted page-cache pages
(and swap pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk transcendent memory, and the
cleancache (and frontswap) "page-object-oriented" specification provides
a nice way to read and write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to
do it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between
multiple virtual machines, but the pages can be compressed and
deduplicated to optimize RAM utilization.  And when guest OSes are
induced to surrender underutilized RAM (e.g. with "self-ballooning"),
page cache pages are the first to go, and cleancache allows those pages
to be saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.  And
the proposed "RAMster" driver shares RAM across multiple physical
systems.

2) Why does cleancache have its sticky fingers so deep inside the
filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_flush operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
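
The per-hook pattern looks roughly like this sketch (a paraphrase of
the frontend header, not a verbatim copy; cleancache_fs_enabled stands
in for the compare-struct-element-to-negative check of the superblock's
pool id):

    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

            /* the whole body compiles away if CONFIG_CLEANCACHE
             * is off; otherwise these are two cheap tests */
            if (cleancache_enabled && cleancache_fs_enabled(page))
                    ret = __cleancache_get_page(page);
            return ret;
    }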

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so they do not require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive.  So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem.  Some filesystems are unsupported by cleancache
only because they haven't been tested.  The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).

3) Why not make cleancache asynchronous and batched so it can
more easily interface with real devices with DMA instead
of copying each individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplify the implementation
on both the frontend and backend and also allow the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
many race conditions and potential coherency issues
are avoided.  While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

4) Why is non-shared cleancache "exclusive"?  And where is the
page "flushed" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_flush calls.  If you want inclusive,
the page can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_flush" call.
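
For example, a caller wanting inclusive behavior today could do the
following (an illustrative sketch using the frontend hook names):

    /* emulate an "inclusive" cleancache by re-putting the
     * page right after a successful (and exclusive) get */
    if (cleancache_get_page(page) == 0)
            cleancache_put_page(page);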

The flush is done by the cleancache backend implementation.

5) What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads.  Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

6) How do I add cleancache support for filesystem X? (Boaz Harrosh)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time.  Unusual, misbehaving, or
poorly layered filesystems must either add additional hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache.
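
For a well-behaved filesystem the opt-in is a single call from its
mount path, along these lines (a sketch; the hook signature follows
the frontend as of this writing, and examplefs is hypothetical):

    static int examplefs_fill_super(struct super_block *sb,
                                    void *data, int silent)
    {
            /* ... usual superblock setup ... */

            /* opt in: records a pool id (or a negative value,
             * meaning "disabled") in the superblock */
            cleancache_init_fs(sb);
            return 0;
    }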

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "flush" operations
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGE_SIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache
  hook to get best performance for some backends.

7) Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache used the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only flushes the data page if the file
gets removed/truncated.  So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file.  Alternately, if cleancache
flushed the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.

8) Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.
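
In code form, the tradeoff looks like this (paraphrased, not verbatim):

    extern int cleancache_enabled;  /* set once a backend registers */

    static inline void cleancache_put_page(struct page *page)
    {
            /* one global load and test; no function call is ever
             * made on systems where no backend registered */
            if (cleancache_enabled)
                    __cleancache_put_page(page);
    }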

9) Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

10) Does cleancache work in userspace?  It sounds useful for
memory-hungry caches like web browsers. (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011