			Cache and TLB Flushing
			     Under Linux

		David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and describes the side effects
expected after the interface is invoked.
The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension: just extend the definition so that
the side effect for a particular interface occurs on all processors
in the system.  Don't let this scare you into thinking SMP cache/tlb
flushing must be inefficient; this is in fact an area where many
optimizations are possible.  For example, if it can be proven that a
user address space has never executed on a cpu (see
mm->cpu_vm_mask), one need not perform a flush for this address
space on that cpu.
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:
1) void flush_tlb_all(void)
The most severe flush of all. After this interface runs,
any previous page table modification whatsoever will be
visible to the cpu.
This is usually invoked when the kernel page tables are
changed, since such translations are "global" in nature.
2) void flush_tlb_mm(struct mm_struct *mm)
This interface flushes an entire user address space from
the TLB. After running, this interface must make sure that
any previous page table modifications for the address space
'mm' will be visible to the cpu. That is, after running,
there will be no entries in the TLB for 'mm'.
This interface is used to handle whole address space
page table operations such as what happens during
fork, and exec.
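
	As an illustration only: on a cpu which tags TLB entries
	with an address space identifier (ASID), this flush can
	often be implemented without touching the TLB at all, by
	handing 'mm' a fresh context number so that entries tagged
	with the old one can never match again.  The
	get_new_mmu_context() helper below is hypothetical, not an
	existing kernel interface:

		void flush_tlb_mm(struct mm_struct *mm)
		{
			/* Hypothetical: allocate a fresh ASID for
			 * 'mm'.  TLB entries tagged with the old ASID
			 * can never match again, so they need not be
			 * removed one by one.
			 */
			get_new_mmu_context(mm);
		}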
3) void flush_tlb_range(struct mm_struct *mm,
unsigned long start, unsigned long end)
Here we are flushing a specific range of (user) virtual
address translations from the TLB. After running, this
interface must make sure that any previous page table
modifications for the address space 'mm' in the range 'start'
to 'end' will be visible to the cpu. That is, after running,
there will be no entries in the TLB for 'mm' for virtual
addresses in the range 'start' to 'end'.
Primarily, this is used for munmap() type operations.
The interface is provided in hopes that the port can find
a suitably efficient method for removing multiple page
sized translations from the TLB, instead of having the kernel
call flush_tlb_page (see below) for each entry which may be
modified.
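
	As a hedged sketch: a port with no ranged TLB-invalidate
	instruction might fall back to a per-page loop.  The
	local_flush_tlb_one() primitive here is hypothetical:

		void flush_tlb_range(struct mm_struct *mm,
				     unsigned long start, unsigned long end)
		{
			unsigned long addr;

			/* Invalidate each page-sized translation in turn. */
			for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
				local_flush_tlb_one(mm, addr);	/* hypothetical */
		}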
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
This time we need to remove the PAGE_SIZE sized translation
from the TLB. The 'vma' is the backing structure used by
Linux to keep track of mmap'd regions for a process, the
address space is available via vma->vm_mm. Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction TLB' in
split-tlb type setups).
After running, this interface must make sure that any previous
page table modification for address space 'vma->vm_mm' for
user virtual address 'page' will be visible to the cpu. That
is, after running, there will be no entries in the TLB for
'vma->vm_mm' for virtual address 'page'.
This is used primarily during fault processing.
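
	For illustration, a port with split instruction/data TLBs
	might implement this by invalidating the D-TLB entry and,
	only for executable regions, the I-TLB entry as well.  The
	two *_invalidate() primitives are hypothetical:

		void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
		{
			struct mm_struct *mm = vma->vm_mm;

			dtlb_invalidate(mm, page & PAGE_MASK);	/* hypothetical */
			if (vma->vm_flags & VM_EXEC)
				itlb_invalidate(mm, page & PAGE_MASK);
		}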
5) void flush_tlb_pgtables(struct mm_struct *mm,
unsigned long start, unsigned long end)
The software page tables for address space 'mm' for virtual
addresses in the range 'start' to 'end' are being torn down.
Some platforms cache the lowest level of the software page tables
in a linear virtually mapped array, to make TLB miss processing
more efficient. On such platforms, since the TLB is caching the
software page table structure, it needs to be flushed when parts
of the software page table tree are unlinked/freed.
Sparc64 is one example of a platform which does this.
Usually, when munmap()'ing an area of user virtual address
space, the kernel leaves the page table parts around and just
marks the individual pte's as invalid. However, if very large
portions of the address space are unmapped, the kernel frees up
those portions of the software page tables to prevent potential
excessive kernel memory usage caused by erratic mmap/munmap
sequences. It is at these times that flush_tlb_pgtables will
be invoked.
6) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte)
At the end of every page fault, this routine is invoked to
tell the architecture specific code that a translation
described by "pte" now exists at virtual address "address"
for address space "vma->vm_mm", in the software page tables.
A port may use this information in any way it so chooses.
For example, it could use this event to pre-load TLB
translations for software managed TLB configurations.
The sparc64 port currently does this.
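
	As one hedged example of what a port might do here: a
	software managed TLB could pre-load the new translation so
	that the return from the fault does not immediately take a
	TLB miss on the same address.  The tlb_preload() primitive
	is hypothetical:

		void update_mmu_cache(struct vm_area_struct *vma,
				      unsigned long address, pte_t pte)
		{
			/* Only bother if the faulting address space is
			 * the one currently active on this cpu.
			 */
			if (vma->vm_mm == current->active_mm)
				tlb_preload(address & PAGE_MASK, pte);	/* hypothetical */
		}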
Next, we have the cache flushing interfaces. In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:
1) flush_cache_mm(mm);
change_all_page_tables_of(mm);
flush_tlb_mm(mm);
2) flush_cache_range(mm, start, end);
change_range_of_page_tables(mm, start, end);
flush_tlb_range(mm, start, end);
3) flush_cache_page(vma, page);
set_pte(pte_pointer, new_pte_val);
flush_tlb_page(vma, page);
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu.
The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu. Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed. So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.
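
On such a cpu these routines can simply be defined away; the IA32
port, for example, makes them all empty macros along these lines:

	#define flush_cache_all()			do { } while (0)
	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_range(mm, start, end)	do { } while (0)
	#define flush_cache_page(vma, page)		do { } while (0)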
Here are the routines, one by one:
1) void flush_cache_all(void)
The most severe flush of all. After this interface runs,
the entire cpu cache is flushed.
This is usually invoked when the kernel page tables are
changed, since such translations are "global" in nature.
2) void flush_cache_mm(struct mm_struct *mm)
This interface flushes an entire user address space from
the caches. That is, after running, there will be no cache
	lines associated with 'mm'.
This interface is used to handle whole address space
page table operations such as what happens during
fork, exit, and exec.
3) void flush_cache_range(struct mm_struct *mm,
unsigned long start, unsigned long end)
Here we are flushing a specific range of (user) virtual
addresses from the cache. After running, there will be no
entries in the cache for 'mm' for virtual addresses in the
range 'start' to 'end'.
Primarily, this is used for munmap() type operations.
The interface is provided in hopes that the port can find
a suitably efficient method for removing multiple page
sized regions from the cache, instead of having the kernel
call flush_cache_page (see below) for each entry which may be
modified.
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)
This time we need to remove a PAGE_SIZE sized range
from the cache. The 'vma' is the backing structure used by
Linux to keep track of mmap'd regions for a process, the
address space is available via vma->vm_mm. Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction cache' in
"Harvard" type cache layouts).
After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'page'.
This is used primarily during fault processing.
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.
Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.
If your D-cache has this problem, first define SHMLBA in
asm/shmparam.h properly; it should essentially be the size of your
virtually indexed D-cache (or, if the size is variable, the largest
possible size).  This setting will force the SYSv IPC layer to only
allow user processes to mmap shared memory at addresses which are a
multiple of this value.
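
For instance, a port with a 16KB virtually indexed D-cache and 4KB
pages might say (the values here are purely illustrative):

	/* asm/shmparam.h */
	#define SHMLBA	0x4000	/* span of the virtually indexed D-cache */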
Next, you have two methods to solve the D-cache aliasing issue for
all other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one
more mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So as soon as the first user maps a given physical page
into its address space, by implication the D-cache aliasing problem
has the potential to exist, since the kernel already maps this page
at its virtual address.
First, I describe the old method of dealing with this problem.  It
is documented here for historical purposes only and is deprecated;
all new ports should use the newer method described afterward, and
all existing ports should migrate to it as well.
flush_page_to_ram(struct page *page)

	The physical page 'page' is about to be placed into the
	user address space of a process.  If it is possible for
	stores done recently by the kernel into this physical
	page to not be visible to an arbitrary mapping in userspace,
	you must flush this page from the D-cache.

	If the D-cache is writeback in nature, the dirty data (if
	any) for this physical page must be written back to main
	memory before the cache lines are invalidated.
Admittedly, the author did not think very much when designing this
interface.  It does not give the architecture enough information
about what exactly is going on, and there is no context on which to
base any judgment about whether an alias is possible at all.  The
new interfaces for dealing with D-cache aliasing are meant to
address this by telling the architecture specific code exactly what
is going on at the proper points in time.
Here is the new interface:
void copy_user_page(void *from, void *to, unsigned long address)
void clear_user_page(void *to, unsigned long address)
	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache
	alias issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual
	addresses for these two pages are chosen in such a way that
	the kernel load/store instructions happen at virtual
	addresses which are of the same "color" as the user mapping
	of the page.  Sparc64, for example, uses this technique.

	The "address" parameter gives the virtual address at which
	the user will ultimately have this page mapped.
If D-cache aliasing is not an issue, these two routines may
simply call memcpy/memset directly and do nothing more.
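
	On a port where D-cache aliasing is not an issue, a minimal
	implementation is just the direct copy/clear described
	above:

		void copy_user_page(void *from, void *to, unsigned long address)
		{
			/* 'address' matters only to ports which must
			 * pick an alias-matching kernel mapping;
			 * unused here.
			 */
			memcpy(to, from, PAGE_SIZE);
		}

		void clear_user_page(void *to, unsigned long address)
		{
			memset(to, 0, PAGE_SIZE);
		}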
void flush_dcache_page(struct page *page)
Any time the kernel writes to a page cache page, _OR_
the kernel is about to read from a page cache page and
user space shared/writable mappings of this page potentially
exist, this routine is called.
NOTE: This routine need only be called for page cache pages
which can potentially ever be mapped into the address
space of a user process. So for example, VFS layer code
handling vfs symlinks in the page cache need not call
this interface at all.
The phrase "kernel writes to a page cache page" means,
specifically, that the kernel executes store instructions
that dirty data in that page at the page->virtual mapping
of that page. It is important to flush here to handle
D-cache aliasing, to make sure these kernel stores are
visible to user space mappings of that page.
The corollary case is just as important, if there are users
which have shared+writable mappings of this file, we must make
sure that kernel reads of these pages will see the most recent
stores done by the user.
If D-cache aliasing is not an issue, this routine may
simply be defined as a nop on that architecture.
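
	For instance (illustrative only), such a port might use:

		#define flush_dcache_page(page)	do { } while (0)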
TODO: If we set aside a few bits in page->flags as
"architecture private", these interfaces could
be implemented much more efficiently. This would
allow one to "defer" (perhaps indefinitely) the
actual flush if there are currently no user processes
mapping this page.
The idea is, first at flush_dcache_page() time, if
page->mapping->i_mmap is an empty list, just mark
one of the architecture private page flag bits.
Later, in update_mmu_cache(), a check could be made
of this flag bit, and if set the flush is done
and the flag bit is cleared.
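
	A hedged sketch of that idea follows.  PG_arch_flush is a
	hypothetical architecture-private bit in page->flags,
	__flush_dcache_page_hw() is a hypothetical low-level flush
	primitive, and the emptiness test is schematic:

		void flush_dcache_page(struct page *page)
		{
			/* No user mappings yet: defer the flush. */
			if (page->mapping && !page->mapping->i_mmap) {
				set_bit(PG_arch_flush, &page->flags);
				return;
			}
			__flush_dcache_page_hw(page);
		}

		/* ...and later, when a user translation is instantiated: */
		void update_mmu_cache(struct vm_area_struct *vma,
				      unsigned long address, pte_t pte)
		{
			struct page *page = pte_page(pte);

			if (test_and_clear_bit(PG_arch_flush, &page->flags))
				__flush_dcache_page_hw(page);	/* deferred */
		}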
XXX Not documented: flush_icache_page().  Need to talk to Paul
    Mackerras, David Mosberger-Tang, et al. to see what the expected
    semantics of this interface are. -DaveM