xen/x86: streamline set_pte_mfn()
In preparation for restoring xen_set_pte_init()'s original behavior of avoiding hypercalls, make set_pte_mfn() no longer use the standard set_pte() code path. That path is more complicated than the alternative of simply using an available hypercall directly. This way we can avoid introducing a fair number (2k on my test system) of cases where the hypervisor would trap-and-emulate page table updates.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/b39c08e8-4a53-8bca-e6e7-3684a6cab8d0@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Parent: bfc484fe6a
Commit: dc4bd2a2dd
1 changed file with 4 additions and 2 deletions
@@ -241,9 +241,11 @@ static void xen_set_pmd(pmd_t *ptr, pmd_t val)
  * Associate a virtual page frame with a given physical page frame
  * and protection flags for that frame.
  */
-void set_pte_mfn(unsigned long vaddr, unsigned long mfn, pgprot_t flags)
+void __init set_pte_mfn(unsigned long vaddr, unsigned long mfn, pgprot_t flags)
 {
-	set_pte_vaddr(vaddr, mfn_pte(mfn, flags));
+	if (HYPERVISOR_update_va_mapping(vaddr, mfn_pte(mfn, flags),
+					 UVMF_INVLPG))
+		BUG();
 }
 
 static bool xen_batched_set_pte(pte_t *ptep, pte_t pteval)
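For reference, the effect of the patch can be read as the following annotated sketch (not compilable on its own; HYPERVISOR_update_va_mapping and UVMF_INVLPG come from the Xen hypercall interface, and the explanatory comments are mine, not part of the commit):

```c
/* Before: set_pte_vaddr() walks the kernel page tables and writes the
 * PTE through the generic set_pte() path.  Under Xen PV such direct
 * page-table writes are trapped and emulated by the hypervisor, which
 * is what the commit message counts ~2k instances of.
 */
set_pte_vaddr(vaddr, mfn_pte(mfn, flags));

/* After: ask the hypervisor directly to map the machine frame at
 * vaddr, flushing the TLB entry for that address (UVMF_INVLPG).  One
 * explicit hypercall replaces the trap-and-emulate round trip.  A
 * non-zero return means the mapping was rejected, which is treated as
 * fatal; with __init the function is only usable during early boot.
 */
if (HYPERVISOR_update_va_mapping(vaddr, mfn_pte(mfn, flags), UVMF_INVLPG))
	BUG();
```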