Merge tag 'pull-work.dcache' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull vfs dcache updates from Al Viro:

 "The main part here is making parallel lookups safe for RT - making sure
  preemption is disabled in start_dir_add()/end_dir_add() sections (on
  non-RT it's automatic, on RT it needs to be done explicitly) and moving
  wakeups from __d_lookup_done() inside of such to the end of those
  sections. Wakeups can be safely delayed for as long as ->d_lock on the
  in-lookup dentry is held; proving that has caught a bug in d_add_ci()
  that allows memory corruption when a sufficiently bogus ntfs (or
  case-insensitive xfs) image is mounted. Easily fixed, fortunately"

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* tag 'pull-work.dcache' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  fs/dcache: Move wakeup out of i_seq_dir write held region.
  fs/dcache: Move the wakeup from __d_lookup_done() to the caller.
  fs/dcache: Disable preemption on i_dir_seq write side on PREEMPT_RT
  d_add_ci(): make sure we don't miss d_lookup_done()
This commit is contained in: commit 200e340f21

2 changed files with 46 additions and 17 deletions
```diff
@@ -349,7 +349,7 @@ static inline void dont_mount(struct dentry *dentry)
 	spin_unlock(&dentry->d_lock);
 }
 
-extern void __d_lookup_done(struct dentry *);
+extern void __d_lookup_unhash_wake(struct dentry *dentry);
 
 static inline int d_in_lookup(const struct dentry *dentry)
 {
@@ -358,11 +358,8 @@ static inline int d_in_lookup(const struct dentry *dentry)
 
 static inline void d_lookup_done(struct dentry *dentry)
 {
-	if (unlikely(d_in_lookup(dentry))) {
-		spin_lock(&dentry->d_lock);
-		__d_lookup_done(dentry);
-		spin_unlock(&dentry->d_lock);
-	}
+	if (unlikely(d_in_lookup(dentry)))
+		__d_lookup_unhash_wake(dentry);
 }
 
 extern void dput(struct dentry *);
```