Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull more kvm updates from Paolo Bonzini:
"The main batch of ARM + RISC-V changes, and a few fixes and cleanups
for x86 (PMU virtualization and selftests).
ARM:
- Fixes for single-stepping in the presence of an async exception as
well as the preservation of PSTATE.SS
- Better handling of AArch32 ID registers on AArch64-only systems
- Fixes for the dirty-ring API, allowing it to work on architectures
with relaxed memory ordering
- Advertise the new kvmarm mailing list
- Various minor cleanups and spelling fixes
RISC-V:
- Improved instruction encoding infrastructure for instructions not
yet supported by binutils
- Svinval support for both KVM Host and KVM Guest
- Zihintpause support for KVM Guest
- Zicbom support for KVM Guest
- Record number of signal exits as a VCPU stat
- Use generic guest entry infrastructure
x86:
- Misc PMU fixes and cleanups.
- selftests: fixes for Hyper-V hypercall
- selftests: fix nx_huge_pages_test on TDP-disabled hosts
- selftests: cleanups for fix_hypercall_test"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (57 commits)
riscv: select HAVE_POSIX_CPU_TIMERS_TASK_WORK
RISC-V: KVM: Use generic guest entry infrastructure
RISC-V: KVM: Record number of signal exits as a vCPU stat
RISC-V: KVM: add __init annotation to riscv_kvm_init()
RISC-V: KVM: Expose Zicbom to the guest
RISC-V: KVM: Provide UAPI for Zicbom block size
RISC-V: KVM: Make ISA ext mappings explicit
RISC-V: KVM: Allow Guest use Zihintpause extension
RISC-V: KVM: Allow Guest use Svinval extension
RISC-V: KVM: Use Svinval for local TLB maintenance when available
RISC-V: Probe Svinval extension form ISA string
RISC-V: KVM: Change the SBI specification version to v1.0
riscv: KVM: Apply insn-def to hlv encodings
riscv: KVM: Apply insn-def to hfence encodings
riscv: Introduce support for defining instructions
riscv: Add X register names to gpr-nums
KVM: arm64: Advertise new kvmarm mailing list
kvm: vmx: keep constant definition format consistent
kvm: mmu: fix typos in struct kvm_arch
KVM: selftests: Fix nx_huge_pages_test on TDP-disabled hosts
...
commit f311d498be
51 changed files with 994 additions and 514 deletions
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7918,8 +7918,8 @@ guest according to the bits in the KVM_CPUID_FEATURES CPUID leaf
 (0x40000001). Otherwise, a guest may use the paravirtual features
 regardless of what has actually been exposed through the CPUID leaf.
 
-8.29 KVM_CAP_DIRTY_LOG_RING
----------------------------
+8.29 KVM_CAP_DIRTY_LOG_RING/KVM_CAP_DIRTY_LOG_RING_ACQ_REL
+----------------------------------------------------------
 
 :Architectures: x86
 :Parameters: args[0] - size of the dirty log ring
@@ -7977,6 +7977,11 @@ on to the next GFN. The userspace should continue to do this until the
 flags of a GFN have the DIRTY bit cleared, meaning that it has harvested
 all the dirty GFNs that were available.
 
+Note that on weakly ordered architectures, userspace accesses to the
+ring buffer (and more specifically the 'flags' field) must be ordered,
+using load-acquire/store-release accessors when available, or any
+other memory barrier that will ensure this ordering.
+
 It's not necessary for userspace to harvest the all dirty GFNs at once.
 However it must collect the dirty GFNs in sequence, i.e., the userspace
 program cannot skip one dirty GFN to collect the one next to it.
@@ -8005,6 +8010,14 @@ KVM_CAP_DIRTY_LOG_RING with an acceptable dirty ring size, the virtual
 machine will switch to ring-buffer dirty page tracking and further
 KVM_GET_DIRTY_LOG or KVM_CLEAR_DIRTY_LOG ioctls will fail.
 
+NOTE: KVM_CAP_DIRTY_LOG_RING_ACQ_REL is the only capability that
+should be exposed by weakly ordered architecture, in order to indicate
+the additional memory ordering requirements imposed on userspace when
+reading the state of an entry and mutating it from DIRTY to HARVESTED.
+Architecture with TSO-like ordering (such as x86) are allowed to
+expose both KVM_CAP_DIRTY_LOG_RING and KVM_CAP_DIRTY_LOG_RING_ACQ_REL
+to userspace.
+
 8.30 KVM_CAP_XEN_HVM
 --------------------
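The ordering requirement described in the documentation hunk above can be illustrated with a small userspace-style sketch using C11 atomics. This is a minimal model, not KVM's actual ring code: the struct is simplified, and the flag values mirror the KVM_DIRTY_GFN_F_DIRTY/KVM_DIRTY_GFN_F_RESET UAPI bits.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Simplified model of a dirty-ring entry; the flag bits mirror the
 * KVM_DIRTY_GFN_F_DIRTY / KVM_DIRTY_GFN_F_RESET UAPI values. */
#define DIRTY_GFN_F_DIRTY (1u << 0)
#define DIRTY_GFN_F_RESET (1u << 1)

struct dirty_gfn {
        uint64_t offset;            /* payload published by KVM */
        _Atomic uint32_t flags;
};

/* Producer (KVM) side: write the payload first, then publish it with a
 * store-release so the payload is visible before the DIRTY flag is. */
static void publish_dirty(struct dirty_gfn *e, uint64_t gfn)
{
        e->offset = gfn;
        atomic_store_explicit(&e->flags, DIRTY_GFN_F_DIRTY,
                              memory_order_release);
}

/* Consumer (userspace) side: load-acquire the flags before trusting the
 * payload, then mark the entry harvested with a store-release. */
static int harvest_dirty(struct dirty_gfn *e, uint64_t *gfn)
{
        uint32_t f = atomic_load_explicit(&e->flags, memory_order_acquire);

        if (!(f & DIRTY_GFN_F_DIRTY))
                return 0;
        *gfn = e->offset;
        atomic_store_explicit(&e->flags, DIRTY_GFN_F_RESET,
                              memory_order_release);
        return 1;
}
```

On x86's TSO-like model the plain accesses happen to be ordered anyway, which is why both capabilities may be exposed there; on weakly ordered architectures the acquire/release pairing is what makes the `offset` read safe.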
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11204,7 +11204,8 @@ R: Alexandru Elisei <alexandru.elisei@arm.com>
 R: Suzuki K Poulose <suzuki.poulose@arm.com>
 R: Oliver Upton <oliver.upton@linux.dev>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: kvmarm@lists.cs.columbia.edu (moderated for non-subscribers)
+L: kvmarm@lists.linux.dev
+L: kvmarm@lists.cs.columbia.edu (deprecated, moderated for non-subscribers)
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git
 F: arch/arm64/include/asm/kvm*
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -393,6 +393,7 @@ struct kvm_vcpu_arch {
         */
        struct {
                u32     mdscr_el1;
+               bool    pstate_ss;
        } guest_debug_preserved;
 
        /* vcpu power state */
@@ -535,6 +536,9 @@ struct kvm_vcpu_arch {
 #define IN_WFIT                __vcpu_single_flag(sflags, BIT(3))
 /* vcpu system registers loaded on physical CPU */
 #define SYSREGS_ON_CPU         __vcpu_single_flag(sflags, BIT(4))
+/* Software step state is Active-pending */
+#define DBG_SS_ACTIVE_PENDING  __vcpu_single_flag(sflags, BIT(5))
+
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2269,6 +2269,16 @@ static int __init early_kvm_mode_cfg(char *arg)
        if (!arg)
                return -EINVAL;
 
+       if (strcmp(arg, "none") == 0) {
+               kvm_mode = KVM_MODE_NONE;
+               return 0;
+       }
+
+       if (!is_hyp_mode_available()) {
+               pr_warn_once("KVM is not available. Ignoring kvm-arm.mode\n");
+               return 0;
+       }
+
        if (strcmp(arg, "protected") == 0) {
                if (!is_kernel_in_hyp_mode())
                        kvm_mode = KVM_MODE_PROTECTED;
@@ -2283,11 +2293,6 @@ static int __init early_kvm_mode_cfg(char *arg)
                return 0;
        }
 
-       if (strcmp(arg, "none") == 0) {
-               kvm_mode = KVM_MODE_NONE;
-               return 0;
-       }
-
        return -EINVAL;
 }
 early_param("kvm-arm.mode", early_kvm_mode_cfg);
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -32,6 +32,10 @@ static DEFINE_PER_CPU(u64, mdcr_el2);
  *
  * Guest access to MDSCR_EL1 is trapped by the hypervisor and handled
  * after we have restored the preserved value to the main context.
+ *
+ * When single-step is enabled by userspace, we tweak PSTATE.SS on every
+ * guest entry. Preserve PSTATE.SS so we can restore the original value
+ * for the vcpu after the single-step is disabled.
  */
 static void save_guest_debug_regs(struct kvm_vcpu *vcpu)
 {
@@ -41,6 +45,9 @@ static void save_guest_debug_regs(struct kvm_vcpu *vcpu)
 
        trace_kvm_arm_set_dreg32("Saved MDSCR_EL1",
                                vcpu->arch.guest_debug_preserved.mdscr_el1);
+
+       vcpu->arch.guest_debug_preserved.pstate_ss =
+               (*vcpu_cpsr(vcpu) & DBG_SPSR_SS);
 }
 
 static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
@@ -51,6 +58,11 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
 
        trace_kvm_arm_set_dreg32("Restored MDSCR_EL1",
                                vcpu_read_sys_reg(vcpu, MDSCR_EL1));
+
+       if (vcpu->arch.guest_debug_preserved.pstate_ss)
+               *vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+       else
+               *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
 }
 
 /**
@@ -188,7 +200,18 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
         * debugging the system.
         */
        if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
-               *vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+               /*
+                * If the software step state at the last guest exit
+                * was Active-pending, we don't set DBG_SPSR_SS so
+                * that the state is maintained (to not run another
+                * single-step until the pending Software Step
+                * exception is taken).
+                */
+               if (!vcpu_get_flag(vcpu, DBG_SS_ACTIVE_PENDING))
+                       *vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+               else
+                       *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
+
                mdscr = vcpu_read_sys_reg(vcpu, MDSCR_EL1);
                mdscr |= DBG_MDSCR_SS;
                vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
@@ -262,6 +285,15 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
         * Restore the guest's debug registers if we were using them.
         */
        if (vcpu->guest_debug || kvm_vcpu_os_lock_enabled(vcpu)) {
+               if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+                       if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS))
+                               /*
+                                * Mark the vcpu as ACTIVE_PENDING
+                                * until Software Step exception is taken.
+                                */
+                               vcpu_set_flag(vcpu, DBG_SS_ACTIVE_PENDING);
+               }
+
                restore_guest_debug_regs(vcpu);
 
                /*
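The Active-pending handling in the hunks above amounts to a small state machine: arm PSTATE.SS on entry unless a Software Step exception is already pending, latch the pending state on exit when hardware has cleared SS, and drop it when the exception is finally delivered. A hypothetical standalone model of that logic (the PSTATE.SS bit position and the flag's meaning follow the kernel code; the struct and function names are simplified stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

#define DBG_SPSR_SS (1u << 21)   /* PSTATE.SS lives in SPSR bit 21 */

struct vcpu_model {
        unsigned int cpsr;
        bool ss_active_pending;   /* models DBG_SS_ACTIVE_PENDING */
};

/* Guest entry: arm PSTATE.SS only if no step exception is pending. */
static void setup_debug(struct vcpu_model *v)
{
        if (!v->ss_active_pending)
                v->cpsr |= DBG_SPSR_SS;
        else
                v->cpsr &= ~DBG_SPSR_SS;
}

/* Guest exit: if hardware cleared PSTATE.SS, one instruction ran and
 * the Software Step exception is now pending. */
static void clear_debug(struct vcpu_model *v)
{
        if (!(v->cpsr & DBG_SPSR_SS))
                v->ss_active_pending = true;
}

/* The Software Step exception is delivered (ESR_ELx_EC_SOFTSTP_LOW). */
static void handle_softstep(struct vcpu_model *v)
{
        v->ss_active_pending = false;
}
```

The point of the fix is visible in the model: while `ss_active_pending` is set, re-entering the guest does not re-arm SS, so an async exception taken between the step and its exception cannot cause a second instruction to run.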
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -937,6 +937,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
        } else {
                /* If not enabled clear all flags */
                vcpu->guest_debug = 0;
+               vcpu_clear_flag(vcpu, DBG_SS_ACTIVE_PENDING);
        }
 
 out:
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -152,8 +152,14 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
        run->debug.arch.hsr_high = upper_32_bits(esr);
        run->flags = KVM_DEBUG_ARCH_HSR_HIGH_VALID;
 
-       if (ESR_ELx_EC(esr) == ESR_ELx_EC_WATCHPT_LOW)
+       switch (ESR_ELx_EC(esr)) {
+       case ESR_ELx_EC_WATCHPT_LOW:
                run->debug.arch.far = vcpu->arch.fault.far_el2;
+               break;
+       case ESR_ELx_EC_SOFTSTP_LOW:
+               vcpu_clear_flag(vcpu, DBG_SS_ACTIVE_PENDING);
+               break;
+       }
 
        return 0;
 }
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -143,7 +143,7 @@ static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
        }
 }
 
-/* Restore VGICv3 state on non_VEH systems */
+/* Restore VGICv3 state on non-VHE systems */
 static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 {
        if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1063,13 +1063,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu,
-               struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
 {
        u32 id = reg_to_encoding(r);
        u64 val;
 
-       if (raz)
+       if (sysreg_visible_as_raz(vcpu, r))
                return 0;
 
        val = read_sanitised_ftr_reg(id);
@@ -1145,34 +1144,37 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
        return 0;
 }
 
-/* cpufeature ID register access trap handlers */
-
-static bool __access_id_reg(struct kvm_vcpu *vcpu,
-                           struct sys_reg_params *p,
-                           const struct sys_reg_desc *r,
-                           bool raz)
+static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu,
+                                      const struct sys_reg_desc *r)
 {
-       if (p->is_write)
-               return write_to_read_only(vcpu, p, r);
+       /*
+        * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any
+        * EL. Promote to RAZ/WI in order to guarantee consistency between
+        * systems.
+        */
+       if (!kvm_supports_32bit_el0())
+               return REG_RAZ | REG_USER_WI;
 
-       p->regval = read_id_reg(vcpu, r, raz);
-       return true;
+       return id_visibility(vcpu, r);
 }
 
+static unsigned int raz_visibility(const struct kvm_vcpu *vcpu,
+                                  const struct sys_reg_desc *r)
+{
+       return REG_RAZ;
+}
+
+/* cpufeature ID register access trap handlers */
+
 static bool access_id_reg(struct kvm_vcpu *vcpu,
                          struct sys_reg_params *p,
                          const struct sys_reg_desc *r)
 {
-       bool raz = sysreg_visible_as_raz(vcpu, r);
-
-       return __access_id_reg(vcpu, p, r, raz);
-}
-
-static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
-                             struct sys_reg_params *p,
-                             const struct sys_reg_desc *r)
-{
-       return __access_id_reg(vcpu, p, r, true);
+       if (p->is_write)
+               return write_to_read_only(vcpu, p, r);
+
+       p->regval = read_id_reg(vcpu, r);
+       return true;
 }
 
 /* Visibility overrides for SVE-specific control registers */
@@ -1208,9 +1210,9 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
                return -EINVAL;
 
        /* We can only differ with CSV[23], and anything else is an error */
-       val ^= read_id_reg(vcpu, rd, false);
-       val &= ~((0xFUL << ID_AA64PFR0_EL1_CSV2_SHIFT) |
-                (0xFUL << ID_AA64PFR0_EL1_CSV3_SHIFT));
+       val ^= read_id_reg(vcpu, rd);
+       val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
+                ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
        if (val)
                return -EINVAL;
 
@@ -1227,45 +1229,21 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct kvm_vcpu *vcpu,
-                       const struct sys_reg_desc *rd, u64 *val,
-                       bool raz)
-{
-       *val = read_id_reg(vcpu, rd, raz);
-       return 0;
-}
-
-static int __set_id_reg(const struct kvm_vcpu *vcpu,
-                       const struct sys_reg_desc *rd, u64 val,
-                       bool raz)
-{
-       /* This is what we mean by invariant: you can't change it. */
-       if (val != read_id_reg(vcpu, rd, raz))
-               return -EINVAL;
-
-       return 0;
-}
-
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
                      u64 *val)
 {
-       bool raz = sysreg_visible_as_raz(vcpu, rd);
-
-       return __get_id_reg(vcpu, rd, val, raz);
+       *val = read_id_reg(vcpu, rd);
+       return 0;
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
                      u64 val)
 {
-       bool raz = sysreg_visible_as_raz(vcpu, rd);
+       /* This is what we mean by invariant: you can't change it. */
+       if (val != read_id_reg(vcpu, rd))
+               return -EINVAL;
 
-       return __set_id_reg(vcpu, rd, val, raz);
-}
-
-static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
-                         u64 val)
-{
-       return __set_id_reg(vcpu, rd, val, true);
+       return 0;
 }
 
 static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1367,6 +1345,15 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
        .visibility = id_visibility,            \
 }
 
+/* sys_reg_desc initialiser for known cpufeature ID registers */
+#define AA32_ID_SANITISED(name) {              \
+       SYS_DESC(SYS_##name),                   \
+       .access = access_id_reg,                \
+       .get_user = get_id_reg,                 \
+       .set_user = set_id_reg,                 \
+       .visibility = aa32_id_visibility,       \
+}
+
 /*
  * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
  * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
@@ -1374,9 +1361,10 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
  */
 #define ID_UNALLOCATED(crm, op2) {             \
        Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2),     \
-       .access = access_raz_id_reg,            \
-       .get_user = get_raz_reg,                \
-       .set_user = set_raz_id_reg,             \
+       .access = access_id_reg,                \
+       .get_user = get_id_reg,                 \
+       .set_user = set_id_reg,                 \
+       .visibility = raz_visibility            \
 }
 
 /*
@@ -1386,9 +1374,10 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
  */
 #define ID_HIDDEN(name) {                      \
        SYS_DESC(SYS_##name),                   \
-       .access = access_raz_id_reg,            \
-       .get_user = get_raz_reg,                \
-       .set_user = set_raz_id_reg,             \
+       .access = access_id_reg,                \
+       .get_user = get_id_reg,                 \
+       .set_user = set_id_reg,                 \
+       .visibility = raz_visibility,           \
 }
 
 /*
@@ -1452,33 +1441,33 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
        /* AArch64 mappings of the AArch32 ID registers */
        /* CRm=1 */
-       ID_SANITISED(ID_PFR0_EL1),
-       ID_SANITISED(ID_PFR1_EL1),
-       ID_SANITISED(ID_DFR0_EL1),
+       AA32_ID_SANITISED(ID_PFR0_EL1),
+       AA32_ID_SANITISED(ID_PFR1_EL1),
+       AA32_ID_SANITISED(ID_DFR0_EL1),
        ID_HIDDEN(ID_AFR0_EL1),
-       ID_SANITISED(ID_MMFR0_EL1),
-       ID_SANITISED(ID_MMFR1_EL1),
-       ID_SANITISED(ID_MMFR2_EL1),
-       ID_SANITISED(ID_MMFR3_EL1),
+       AA32_ID_SANITISED(ID_MMFR0_EL1),
+       AA32_ID_SANITISED(ID_MMFR1_EL1),
+       AA32_ID_SANITISED(ID_MMFR2_EL1),
+       AA32_ID_SANITISED(ID_MMFR3_EL1),
 
        /* CRm=2 */
-       ID_SANITISED(ID_ISAR0_EL1),
-       ID_SANITISED(ID_ISAR1_EL1),
-       ID_SANITISED(ID_ISAR2_EL1),
-       ID_SANITISED(ID_ISAR3_EL1),
-       ID_SANITISED(ID_ISAR4_EL1),
-       ID_SANITISED(ID_ISAR5_EL1),
-       ID_SANITISED(ID_MMFR4_EL1),
-       ID_SANITISED(ID_ISAR6_EL1),
+       AA32_ID_SANITISED(ID_ISAR0_EL1),
+       AA32_ID_SANITISED(ID_ISAR1_EL1),
+       AA32_ID_SANITISED(ID_ISAR2_EL1),
+       AA32_ID_SANITISED(ID_ISAR3_EL1),
+       AA32_ID_SANITISED(ID_ISAR4_EL1),
+       AA32_ID_SANITISED(ID_ISAR5_EL1),
+       AA32_ID_SANITISED(ID_MMFR4_EL1),
+       AA32_ID_SANITISED(ID_ISAR6_EL1),
 
        /* CRm=3 */
-       ID_SANITISED(MVFR0_EL1),
-       ID_SANITISED(MVFR1_EL1),
-       ID_SANITISED(MVFR2_EL1),
+       AA32_ID_SANITISED(MVFR0_EL1),
+       AA32_ID_SANITISED(MVFR1_EL1),
+       AA32_ID_SANITISED(MVFR2_EL1),
        ID_UNALLOCATED(3,3),
-       ID_SANITISED(ID_PFR2_EL1),
+       AA32_ID_SANITISED(ID_PFR2_EL1),
        ID_HIDDEN(ID_DFR1_EL1),
-       ID_SANITISED(ID_MMFR5_EL1),
+       AA32_ID_SANITISED(ID_MMFR5_EL1),
        ID_UNALLOCATED(3,7),
 
        /* AArch64 ID registers */
@@ -2809,6 +2798,9 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
        if (!r)
                return -ENOENT;
 
+       if (sysreg_user_write_ignore(vcpu, r))
+               return 0;
+
        if (r->set_user) {
                ret = (r->set_user)(vcpu, r, val);
        } else {
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -86,6 +86,7 @@ struct sys_reg_desc {
 
 #define REG_HIDDEN     (1 << 0) /* hidden from userspace and guest */
 #define REG_RAZ        (1 << 1) /* RAZ from userspace and guest */
+#define REG_USER_WI    (1 << 2) /* WI from userspace only */
 
 static __printf(2, 3)
 inline void print_sys_reg_msg(const struct sys_reg_params *p,
@@ -136,22 +137,31 @@ static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r
        __vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
+static inline unsigned int sysreg_visibility(const struct kvm_vcpu *vcpu,
+                                            const struct sys_reg_desc *r)
+{
+       if (likely(!r->visibility))
+               return 0;
+
+       return r->visibility(vcpu, r);
+}
+
 static inline bool sysreg_hidden(const struct kvm_vcpu *vcpu,
                                 const struct sys_reg_desc *r)
 {
-       if (likely(!r->visibility))
-               return false;
-
-       return r->visibility(vcpu, r) & REG_HIDDEN;
+       return sysreg_visibility(vcpu, r) & REG_HIDDEN;
 }
 
 static inline bool sysreg_visible_as_raz(const struct kvm_vcpu *vcpu,
                                         const struct sys_reg_desc *r)
 {
-       if (likely(!r->visibility))
-               return false;
-
-       return r->visibility(vcpu, r) & REG_RAZ;
+       return sysreg_visibility(vcpu, r) & REG_RAZ;
+}
+
+static inline bool sysreg_user_write_ignore(const struct kvm_vcpu *vcpu,
+                                           const struct sys_reg_desc *r)
+{
+       return sysreg_visibility(vcpu, r) & REG_USER_WI;
 }
 
 static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
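The refactoring above funnels every check through a single sysreg_visibility() helper so that independent flags (hidden, RAZ, write-ignore) compose from one callback. A stripped-down sketch of that pattern, with the kernel-specific types removed (the flag values match the header above; the callback shown models an AArch32 ID register on an AArch64-only system):

```c
#include <assert.h>
#include <stddef.h>

#define REG_HIDDEN  (1 << 0) /* hidden from userspace and guest */
#define REG_RAZ     (1 << 1) /* RAZ from userspace and guest */
#define REG_USER_WI (1 << 2) /* WI from userspace only */

struct reg_desc {
        unsigned int (*visibility)(const struct reg_desc *r);
};

/* One central helper: no callback means fully visible (no flags set). */
static unsigned int visibility(const struct reg_desc *r)
{
        if (!r->visibility)
                return 0;
        return r->visibility(r);
}

/* The per-property predicates become one-liners over the same flags. */
static int hidden(const struct reg_desc *r)
{
        return visibility(r) & REG_HIDDEN;
}

static int visible_as_raz(const struct reg_desc *r)
{
        return visibility(r) & REG_RAZ;
}

static int user_write_ignore(const struct reg_desc *r)
{
        return visibility(r) & REG_USER_WI;
}

/* e.g. an AArch32 ID register when AArch32 is not implemented */
static unsigned int aa32_raz_wi(const struct reg_desc *r)
{
        return REG_RAZ | REG_USER_WI;
}
```

Combining REG_RAZ with the new REG_USER_WI is what lets an AArch64-only host present AArch32 ID registers as RAZ to the guest while silently accepting (and ignoring) userspace writes, instead of rejecting them.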
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -406,7 +406,7 @@ static void update_affinity_collection(struct kvm *kvm, struct vgic_its *its,
        struct its_ite *ite;
 
        for_each_lpi_its(device, ite, its) {
-               if (!ite->collection || coll != ite->collection)
+               if (ite->collection != coll)
                        continue;
 
                update_affinity_ite(kvm, ite);
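The simplified condition above is equivalent to the old one whenever `coll` is non-NULL, which holds at this call site: a NULL `ite->collection` can then never equal `coll`, so the explicit NULL check was redundant. A quick standalone check of the equivalence (function names are illustrative, not from the kernel):

```c
#include <assert.h>
#include <stddef.h>

/* Old and new skip conditions from update_affinity_collection(). */
static int old_skip(const void *ite_coll, const void *coll)
{
        return !ite_coll || coll != ite_coll;
}

static int new_skip(const void *ite_coll, const void *coll)
{
        return ite_coll != coll;
}
```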
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -104,6 +104,7 @@ config RISCV
        select HAVE_PERF_EVENTS
        select HAVE_PERF_REGS
        select HAVE_PERF_USER_STACK_DUMP
+       select HAVE_POSIX_CPU_TIMERS_TASK_WORK
        select HAVE_REGS_AND_STACK_ACCESS_API
        select HAVE_FUNCTION_ARG_ACCESS_API
        select HAVE_STACKPROTECTOR
@@ -228,6 +229,9 @@ config RISCV_DMA_NONCOHERENT
        select ARCH_HAS_SETUP_DMA_OPS
        select DMA_DIRECT_REMAP
 
+config AS_HAS_INSN
+       def_bool $(as-instr,.insn r 51$(comma) 0$(comma) 0$(comma) t0$(comma) t0$(comma) zero)
+
 source "arch/riscv/Kconfig.socs"
 source "arch/riscv/Kconfig.erratas"
 
--- a/arch/riscv/include/asm/gpr-num.h
+++ b/arch/riscv/include/asm/gpr-num.h
@@ -3,6 +3,11 @@
 #define __ASM_GPR_NUM_H
 
 #ifdef __ASSEMBLY__
+
+       .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+       .equ    .L__gpr_num_x\num, \num
+       .endr
+
        .equ    .L__gpr_num_zero,       0
        .equ    .L__gpr_num_ra,         1
        .equ    .L__gpr_num_sp,         2
@@ -39,6 +44,9 @@
 #else /* __ASSEMBLY__ */
 
 #define __DEFINE_ASM_GPR_NUMS \
+"      .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31\n" \
+"      .equ    .L__gpr_num_x\\num, \\num\n" \
+"      .endr\n" \
 "      .equ    .L__gpr_num_zero,       0\n" \
 "      .equ    .L__gpr_num_ra,         1\n" \
 "      .equ    .L__gpr_num_sp,         2\n" \
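These X-register names feed the `.4byte` fallback used when the assembler lacks the `.insn` directive probed by AS_HAS_INSN above: the instruction word is built by hand from shifted fields. The packing can be checked in plain C against the Kconfig probe's own operands, `.insn r 51, 0, 0, t0, t0, zero` (i.e. `add t0, t0, zero`); the shift values follow the R-type layout of the RISC-V base instruction formats:

```c
#include <assert.h>
#include <stdint.h>

/* R-type field positions, as in the RISC-V base instruction format */
#define INSN_R_FUNC7_SHIFT  25
#define INSN_R_RS2_SHIFT    20
#define INSN_R_RS1_SHIFT    15
#define INSN_R_FUNC3_SHIFT  12
#define INSN_R_RD_SHIFT      7
#define INSN_R_OPCODE_SHIFT  0

/* Pack an R-type instruction word from its fields, as the assembly
 * macro's .4byte expression does. */
static uint32_t insn_r(uint32_t opcode, uint32_t func3, uint32_t func7,
                       uint32_t rd, uint32_t rs1, uint32_t rs2)
{
        return (opcode << INSN_R_OPCODE_SHIFT) |
               (func3  << INSN_R_FUNC3_SHIFT)  |
               (func7  << INSN_R_FUNC7_SHIFT)  |
               (rd     << INSN_R_RD_SHIFT)     |
               (rs1    << INSN_R_RS1_SHIFT)    |
               (rs2    << INSN_R_RS2_SHIFT);
}
```

With `t0` being x5 and `zero` x0, the probe instruction encodes to 0x000282b3, the canonical encoding of `add t0, t0, zero`, matching what a `.insn`-capable assembler would emit.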
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -58,6 +58,7 @@ enum riscv_isa_ext_id {
        RISCV_ISA_EXT_ZICBOM,
        RISCV_ISA_EXT_ZIHINTPAUSE,
        RISCV_ISA_EXT_SSTC,
+       RISCV_ISA_EXT_SVINVAL,
        RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
 };
 
@@ -69,6 +70,7 @@ enum riscv_isa_ext_id {
 enum riscv_isa_ext_key {
        RISCV_ISA_EXT_KEY_FPU,          /* For 'F' and 'D' */
        RISCV_ISA_EXT_KEY_ZIHINTPAUSE,
+       RISCV_ISA_EXT_KEY_SVINVAL,
        RISCV_ISA_EXT_KEY_MAX,
 };
 
@@ -90,6 +92,8 @@ static __always_inline int riscv_isa_ext2key(int num)
                return RISCV_ISA_EXT_KEY_FPU;
        case RISCV_ISA_EXT_ZIHINTPAUSE:
                return RISCV_ISA_EXT_KEY_ZIHINTPAUSE;
+       case RISCV_ISA_EXT_SVINVAL:
+               return RISCV_ISA_EXT_KEY_SVINVAL;
        default:
                return -EINVAL;
        }
arch/riscv/include/asm/insn-def.h (new file, 137 lines)
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_INSN_DEF_H
+#define __ASM_INSN_DEF_H
+
+#include <asm/asm.h>
+
+#define INSN_R_FUNC7_SHIFT	25
+#define INSN_R_RS2_SHIFT	20
+#define INSN_R_RS1_SHIFT	15
+#define INSN_R_FUNC3_SHIFT	12
+#define INSN_R_RD_SHIFT		7
+#define INSN_R_OPCODE_SHIFT	0
+
+#ifdef __ASSEMBLY__
+
+#ifdef CONFIG_AS_HAS_INSN
+
+	.macro insn_r, opcode, func3, func7, rd, rs1, rs2
+	.insn r \opcode, \func3, \func7, \rd, \rs1, \rs2
+	.endm
+
+#else
+
+#include <asm/gpr-num.h>
+
+	.macro insn_r, opcode, func3, func7, rd, rs1, rs2
+	.4byte ((\opcode << INSN_R_OPCODE_SHIFT) |		\
+		(\func3 << INSN_R_FUNC3_SHIFT) |		\
+		(\func7 << INSN_R_FUNC7_SHIFT) |		\
+		(.L__gpr_num_\rd << INSN_R_RD_SHIFT) |		\
+		(.L__gpr_num_\rs1 << INSN_R_RS1_SHIFT) |	\
+		(.L__gpr_num_\rs2 << INSN_R_RS2_SHIFT))
+	.endm
+
+#endif
+
+#define __INSN_R(...)	insn_r __VA_ARGS__
+
+#else /* ! __ASSEMBLY__ */
+
+#ifdef CONFIG_AS_HAS_INSN
+
+#define __INSN_R(opcode, func3, func7, rd, rs1, rs2)	\
+	".insn r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n"
+
+#else
+
+#include <linux/stringify.h>
+#include <asm/gpr-num.h>
+
+#define DEFINE_INSN_R							\
+	__DEFINE_ASM_GPR_NUMS						\
+"	.macro insn_r, opcode, func3, func7, rd, rs1, rs2\n"		\
+"	.4byte ((\\opcode << " __stringify(INSN_R_OPCODE_SHIFT) ") |"	\
+"		(\\func3 << " __stringify(INSN_R_FUNC3_SHIFT) ") |"	\
+"		(\\func7 << " __stringify(INSN_R_FUNC7_SHIFT) ") |"	\
+"		(.L__gpr_num_\\rd << " __stringify(INSN_R_RD_SHIFT) ") |"	\
+"		(.L__gpr_num_\\rs1 << " __stringify(INSN_R_RS1_SHIFT) ") |"	\
+"		(.L__gpr_num_\\rs2 << " __stringify(INSN_R_RS2_SHIFT) "))\n"	\
+"	.endm\n"
+
+#define UNDEFINE_INSN_R							\
+"	.purgem insn_r\n"
+
+#define __INSN_R(opcode, func3, func7, rd, rs1, rs2)			\
+	DEFINE_INSN_R							\
+	"insn_r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n"	\
+	UNDEFINE_INSN_R
+
+#endif
+
+#endif /* ! __ASSEMBLY__ */
+
+#define INSN_R(opcode, func3, func7, rd, rs1, rs2)	\
+	__INSN_R(RV_##opcode, RV_##func3, RV_##func7,	\
+		 RV_##rd, RV_##rs1, RV_##rs2)
+
+#define RV_OPCODE(v)		__ASM_STR(v)
+#define RV_FUNC3(v)		__ASM_STR(v)
+#define RV_FUNC7(v)		__ASM_STR(v)
+#define RV_RD(v)		__ASM_STR(v)
+#define RV_RS1(v)		__ASM_STR(v)
+#define RV_RS2(v)		__ASM_STR(v)
+#define __RV_REG(v)		__ASM_STR(x ## v)
+#define RV___RD(v)		__RV_REG(v)
+#define RV___RS1(v)		__RV_REG(v)
+#define RV___RS2(v)		__RV_REG(v)
+
+#define RV_OPCODE_SYSTEM	RV_OPCODE(115)
+
+#define HFENCE_VVMA(vaddr, asid)			\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(17),	\
+	       __RD(0), RS1(vaddr), RS2(asid))
+
+#define HFENCE_GVMA(gaddr, vmid)			\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(49),	\
+	       __RD(0), RS1(gaddr), RS2(vmid))
+
+#define HLVX_HU(dest, addr)				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(50),	\
+	       RD(dest), RS1(addr), __RS2(3))
+
+#define HLV_W(dest, addr)				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(52),	\
+	       RD(dest), RS1(addr), __RS2(0))
+
+#ifdef CONFIG_64BIT
+#define HLV_D(dest, addr)				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(4), FUNC7(54),	\
+	       RD(dest), RS1(addr), __RS2(0))
+#else
+#define HLV_D(dest, addr)				\
+	__ASM_STR(.error "hlv.d requires 64-bit support")
+#endif
+
+#define SINVAL_VMA(vaddr, asid)				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(11),	\
+	       __RD(0), RS1(vaddr), RS2(asid))
+
+#define SFENCE_W_INVAL()				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(12),	\
+	       __RD(0), __RS1(0), __RS2(0))
+
+#define SFENCE_INVAL_IR()				\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(12),	\
+	       __RD(0), __RS1(0), __RS2(1))
+
+#define HINVAL_VVMA(vaddr, asid)			\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(19),	\
+	       __RD(0), RS1(vaddr), RS2(asid))
+
+#define HINVAL_GVMA(gaddr, vmid)			\
+	INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(51),	\
+	       __RD(0), RS1(gaddr), RS2(vmid))
+
+#endif /* __ASM_INSN_DEF_H */
@@ -67,6 +67,7 @@ struct kvm_vcpu_stat {
 	u64 mmio_exit_kernel;
 	u64 csr_exit_user;
 	u64 csr_exit_kernel;
+	u64 signal_exits;
 	u64 exits;
 };
@@ -11,8 +11,8 @@
 
 #define KVM_SBI_IMPID 3
 
-#define KVM_SBI_VERSION_MAJOR 0
-#define KVM_SBI_VERSION_MINOR 3
+#define KVM_SBI_VERSION_MAJOR 1
+#define KVM_SBI_VERSION_MINOR 0
 
 struct kvm_vcpu_sbi_extension {
 	unsigned long extid_start;
@@ -48,6 +48,7 @@ struct kvm_sregs {
 /* CONFIG registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_config {
 	unsigned long isa;
+	unsigned long zicbom_block_size;
 };
 
 /* CORE registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */

@@ -98,6 +99,9 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_M,
 	KVM_RISCV_ISA_EXT_SVPBMT,
 	KVM_RISCV_ISA_EXT_SSTC,
+	KVM_RISCV_ISA_EXT_SVINVAL,
+	KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
+	KVM_RISCV_ISA_EXT_ZICBOM,
 	KVM_RISCV_ISA_EXT_MAX,
 };
@@ -93,6 +93,7 @@ int riscv_of_parent_hartid(struct device_node *node, unsigned long *hartid)
 static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
+	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
 	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
 	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
@@ -204,6 +204,7 @@ void __init riscv_fill_hwcap(void)
 			SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
 			SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
 			SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
+			SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
 		}
 #undef SET_ISA_EXT_MAP
 	}
@@ -24,6 +24,7 @@ config KVM
 	select PREEMPT_NOTIFIERS
 	select KVM_MMIO
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	select KVM_XFER_TO_GUEST_WORK
 	select HAVE_KVM_VCPU_ASYNC_IOCTL
 	select HAVE_KVM_EVENTFD
 	select SRCU
@@ -122,7 +122,7 @@ void kvm_arch_exit(void)
 {
 }
 
-static int riscv_kvm_init(void)
+static int __init riscv_kvm_init(void)
 {
 	return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 }
@@ -12,22 +12,11 @@
 #include <linux/kvm_host.h>
 #include <asm/cacheflush.h>
 #include <asm/csr.h>
+#include <asm/hwcap.h>
+#include <asm/insn-def.h>
 
-/*
- * Instruction encoding of hfence.gvma is:
- * HFENCE.GVMA rs1, rs2
- * HFENCE.GVMA zero, rs2
- * HFENCE.GVMA rs1
- * HFENCE.GVMA
- *
- * rs1!=zero and rs2!=zero ==> HFENCE.GVMA rs1, rs2
- * rs1==zero and rs2!=zero ==> HFENCE.GVMA zero, rs2
- * rs1!=zero and rs2==zero ==> HFENCE.GVMA rs1
- * rs1==zero and rs2==zero ==> HFENCE.GVMA
- *
- * Instruction encoding of HFENCE.GVMA is:
- * 0110001 rs2(5) rs1(5) 000 00000 1110011
- */
+#define has_svinval()	\
+	static_branch_unlikely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_SVINVAL])
 
 void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
 					  gpa_t gpa, gpa_t gpsz,

@@ -40,32 +29,22 @@ void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
 		return;
 	}
 
-	for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
-		/*
-		 * rs1 = a0 (GPA >> 2)
-		 * rs2 = a1 (VMID)
-		 * HFENCE.GVMA a0, a1
-		 * 0110001 01011 01010 000 00000 1110011
-		 */
-		asm volatile ("srli a0, %0, 2\n"
-			      "add a1, %1, zero\n"
-			      ".word 0x62b50073\n"
-			      :: "r" (pos), "r" (vmid)
-			      : "a0", "a1", "memory");
-	}
+	if (has_svinval()) {
+		asm volatile (SFENCE_W_INVAL() ::: "memory");
+		for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
+			asm volatile (HINVAL_GVMA(%0, %1)
+			: : "r" (pos >> 2), "r" (vmid) : "memory");
+		asm volatile (SFENCE_INVAL_IR() ::: "memory");
+	} else {
+		for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
+			asm volatile (HFENCE_GVMA(%0, %1)
+			: : "r" (pos >> 2), "r" (vmid) : "memory");
+	}
 }
 
 void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid)
 {
-	/*
-	 * rs1 = zero
-	 * rs2 = a0 (VMID)
-	 * HFENCE.GVMA zero, a0
-	 * 0110001 01010 00000 000 00000 1110011
-	 */
-	asm volatile ("add a0, %0, zero\n"
-		      ".word 0x62a00073\n"
-		      :: "r" (vmid) : "a0", "memory");
+	asm volatile(HFENCE_GVMA(zero, %0) : : "r" (vmid) : "memory");
 }
 
 void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,

@@ -78,46 +57,24 @@ void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
 		return;
 	}
 
-	for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order)) {
-		/*
-		 * rs1 = a0 (GPA >> 2)
-		 * rs2 = zero
-		 * HFENCE.GVMA a0
-		 * 0110001 00000 01010 000 00000 1110011
-		 */
-		asm volatile ("srli a0, %0, 2\n"
-			      ".word 0x62050073\n"
-			      :: "r" (pos) : "a0", "memory");
-	}
+	if (has_svinval()) {
+		asm volatile (SFENCE_W_INVAL() ::: "memory");
+		for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
+			asm volatile(HINVAL_GVMA(%0, zero)
+			: : "r" (pos >> 2) : "memory");
+		asm volatile (SFENCE_INVAL_IR() ::: "memory");
+	} else {
+		for (pos = gpa; pos < (gpa + gpsz); pos += BIT(order))
+			asm volatile(HFENCE_GVMA(%0, zero)
+			: : "r" (pos >> 2) : "memory");
+	}
 }
 
 void kvm_riscv_local_hfence_gvma_all(void)
 {
-	/*
-	 * rs1 = zero
-	 * rs2 = zero
-	 * HFENCE.GVMA
-	 * 0110001 00000 00000 000 00000 1110011
-	 */
-	asm volatile (".word 0x62000073" ::: "memory");
+	asm volatile(HFENCE_GVMA(zero, zero) : : : "memory");
 }
 
-/*
- * Instruction encoding of hfence.vvma is:
- * HFENCE.VVMA rs1, rs2
- * HFENCE.VVMA zero, rs2
- * HFENCE.VVMA rs1
- * HFENCE.VVMA
- *
- * rs1!=zero and rs2!=zero ==> HFENCE.VVMA rs1, rs2
- * rs1==zero and rs2!=zero ==> HFENCE.VVMA zero, rs2
- * rs1!=zero and rs2==zero ==> HFENCE.VVMA rs1
- * rs1==zero and rs2==zero ==> HFENCE.VVMA
- *
- * Instruction encoding of HFENCE.VVMA is:
- * 0010001 rs2(5) rs1(5) 000 00000 1110011
- */
-
 void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
 					  unsigned long asid,
 					  unsigned long gva,

@@ -133,18 +90,16 @@ void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
-	for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
-		/*
-		 * rs1 = a0 (GVA)
-		 * rs2 = a1 (ASID)
-		 * HFENCE.VVMA a0, a1
-		 * 0010001 01011 01010 000 00000 1110011
-		 */
-		asm volatile ("add a0, %0, zero\n"
-			      "add a1, %1, zero\n"
-			      ".word 0x22b50073\n"
-			      :: "r" (pos), "r" (asid)
-			      : "a0", "a1", "memory");
-	}
+	if (has_svinval()) {
+		asm volatile (SFENCE_W_INVAL() ::: "memory");
+		for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
+			asm volatile(HINVAL_VVMA(%0, %1)
+			: : "r" (pos), "r" (asid) : "memory");
+		asm volatile (SFENCE_INVAL_IR() ::: "memory");
+	} else {
+		for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
+			asm volatile(HFENCE_VVMA(%0, %1)
+			: : "r" (pos), "r" (asid) : "memory");
+	}
 
 	csr_write(CSR_HGATP, hgatp);

@@ -157,15 +112,7 @@ void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
-	/*
-	 * rs1 = zero
-	 * rs2 = a0 (ASID)
-	 * HFENCE.VVMA zero, a0
-	 * 0010001 01010 00000 000 00000 1110011
-	 */
-	asm volatile ("add a0, %0, zero\n"
-		      ".word 0x22a00073\n"
-		      :: "r" (asid) : "a0", "memory");
+	asm volatile(HFENCE_VVMA(zero, %0) : : "r" (asid) : "memory");
 
 	csr_write(CSR_HGATP, hgatp);
 }

@@ -183,16 +130,16 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
-	for (pos = gva; pos < (gva + gvsz); pos += BIT(order)) {
-		/*
-		 * rs1 = a0 (GVA)
-		 * rs2 = zero
-		 * HFENCE.VVMA a0
-		 * 0010001 00000 01010 000 00000 1110011
-		 */
-		asm volatile ("add a0, %0, zero\n"
-			      ".word 0x22050073\n"
-			      :: "r" (pos) : "a0", "memory");
-	}
+	if (has_svinval()) {
+		asm volatile (SFENCE_W_INVAL() ::: "memory");
+		for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
+			asm volatile(HINVAL_VVMA(%0, zero)
+			: : "r" (pos) : "memory");
+		asm volatile (SFENCE_INVAL_IR() ::: "memory");
+	} else {
+		for (pos = gva; pos < (gva + gvsz); pos += BIT(order))
+			asm volatile(HFENCE_VVMA(%0, zero)
+			: : "r" (pos) : "memory");
+	}
 
 	csr_write(CSR_HGATP, hgatp);

@@ -204,13 +151,7 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
-	/*
-	 * rs1 = zero
-	 * rs2 = zero
-	 * HFENCE.VVMA
-	 * 0010001 00000 00000 000 00000 1110011
-	 */
-	asm volatile (".word 0x22000073" ::: "memory");
+	asm volatile(HFENCE_VVMA(zero, zero) : : : "memory");
 
 	csr_write(CSR_HGATP, hgatp);
 }
@@ -7,6 +7,7 @@
  */
 
 #include <linux/bitops.h>
+#include <linux/entry-kvm.h>
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kdebug.h>

@@ -18,6 +19,7 @@
 #include <linux/fs.h>
 #include <linux/kvm_host.h>
 #include <asm/csr.h>
+#include <asm/cacheflush.h>
 #include <asm/hwcap.h>
 
 const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {

@@ -28,6 +30,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
 	STATS_DESC_COUNTER(VCPU, mmio_exit_kernel),
 	STATS_DESC_COUNTER(VCPU, csr_exit_user),
 	STATS_DESC_COUNTER(VCPU, csr_exit_kernel),
+	STATS_DESC_COUNTER(VCPU, signal_exits),
 	STATS_DESC_COUNTER(VCPU, exits)
 };
@@ -42,17 +45,23 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
 
 #define KVM_RISCV_BASE_ISA_MASK		GENMASK(25, 0)
 
+#define KVM_ISA_EXT_ARR(ext)		[KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
+
 /* Mapping between KVM ISA Extension ID & Host ISA extension ID */
 static const unsigned long kvm_isa_ext_arr[] = {
-	RISCV_ISA_EXT_a,
-	RISCV_ISA_EXT_c,
-	RISCV_ISA_EXT_d,
-	RISCV_ISA_EXT_f,
-	RISCV_ISA_EXT_h,
-	RISCV_ISA_EXT_i,
-	RISCV_ISA_EXT_m,
-	RISCV_ISA_EXT_SVPBMT,
-	RISCV_ISA_EXT_SSTC,
+	[KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
+	[KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
+	[KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
+	[KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
+	[KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
+	[KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
+	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
+
+	KVM_ISA_EXT_ARR(SSTC),
+	KVM_ISA_EXT_ARR(SVINVAL),
+	KVM_ISA_EXT_ARR(SVPBMT),
+	KVM_ISA_EXT_ARR(ZIHINTPAUSE),
+	KVM_ISA_EXT_ARR(ZICBOM),
 };
 
 static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
@@ -87,6 +96,8 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
 	case KVM_RISCV_ISA_EXT_I:
 	case KVM_RISCV_ISA_EXT_M:
 	case KVM_RISCV_ISA_EXT_SSTC:
+	case KVM_RISCV_ISA_EXT_SVINVAL:
+	case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
 		return false;
 	default:
 		break;
@@ -254,6 +265,11 @@ static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu,
 	case KVM_REG_RISCV_CONFIG_REG(isa):
 		reg_val = vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK;
 		break;
+	case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
+		if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM))
+			return -EINVAL;
+		reg_val = riscv_cbom_block_size;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -311,6 +327,8 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
 			return -EOPNOTSUPP;
 		}
 		break;
+	case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
+		return -EOPNOTSUPP;
 	default:
 		return -EINVAL;
 	}
@@ -784,11 +802,15 @@ static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
 {
 	u64 henvcfg = 0;
 
-	if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
+	if (riscv_isa_extension_available(isa, SVPBMT))
 		henvcfg |= ENVCFG_PBMTE;
 
-	if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SSTC))
+	if (riscv_isa_extension_available(isa, SSTC))
 		henvcfg |= ENVCFG_STCE;
 
+	if (riscv_isa_extension_available(isa, ZICBOM))
+		henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE);
+
 	csr_write(CSR_HENVCFG, henvcfg);
 #ifdef CONFIG_32BIT
 	csr_write(CSR_HENVCFGH, henvcfg >> 32);
@@ -958,7 +980,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	while (ret > 0) {
 		/* Check conditions before entering the guest */
-		cond_resched();
+		ret = xfer_to_guest_mode_handle_work(vcpu);
+		if (!ret)
+			ret = 1;
 
 		kvm_riscv_gstage_vmid_update(vcpu);
@@ -966,15 +990,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		local_irq_disable();
 
-		/*
-		 * Exit if we have a signal pending so that we can deliver
-		 * the signal to user space.
-		 */
-		if (signal_pending(current)) {
-			ret = -EINTR;
-			run->exit_reason = KVM_EXIT_INTR;
-		}
-
 		/*
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
 		 * interrupts and before the final VCPU requests check.
@@ -997,7 +1012,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		if (ret <= 0 ||
 		    kvm_riscv_gstage_vmid_ver_changed(&vcpu->kvm->arch.vmid) ||
-		    kvm_request_pending(vcpu)) {
+		    kvm_request_pending(vcpu) ||
+		    xfer_to_guest_mode_work_pending()) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			local_irq_enable();
 			kvm_vcpu_srcu_read_lock(vcpu);
@@ -8,6 +8,7 @@
 
 #include <linux/kvm_host.h>
 #include <asm/csr.h>
+#include <asm/insn-def.h>
 
 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			     struct kvm_cpu_trap *trap)
@@ -62,11 +63,7 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
 {
 	register unsigned long taddr asm("a0") = (unsigned long)trap;
 	register unsigned long ttmp asm("a1");
-	register unsigned long val asm("t0");
-	register unsigned long tmp asm("t1");
-	register unsigned long addr asm("t2") = guest_addr;
-	unsigned long flags;
-	unsigned long old_stvec, old_hstatus;
+	unsigned long flags, val, tmp, old_stvec, old_hstatus;
 
 	local_irq_save(flags);
@@ -82,29 +79,19 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
 			".option push\n"
 			".option norvc\n"
 			"add %[ttmp], %[taddr], 0\n"
-			/*
-			 * HLVX.HU %[val], (%[addr])
-			 * HLVX.HU t0, (t2)
-			 * 0110010 00011 00111 100 00101 1110011
-			 */
-			".word 0x6433c2f3\n"
+			HLVX_HU(%[val], %[addr])
 			"andi %[tmp], %[val], 3\n"
 			"addi %[tmp], %[tmp], -3\n"
 			"bne %[tmp], zero, 2f\n"
 			"addi %[addr], %[addr], 2\n"
-			/*
-			 * HLVX.HU %[tmp], (%[addr])
-			 * HLVX.HU t1, (t2)
-			 * 0110010 00011 00111 100 00110 1110011
-			 */
-			".word 0x6433c373\n"
+			HLVX_HU(%[tmp], %[addr])
 			"sll %[tmp], %[tmp], 16\n"
 			"add %[val], %[val], %[tmp]\n"
 			"2:\n"
 			".option pop"
 			: [val] "=&r" (val), [tmp] "=&r" (tmp),
 			  [taddr] "+&r" (taddr), [ttmp] "+&r" (ttmp),
-			  [addr] "+&r" (addr) : : "memory");
+			  [addr] "+&r" (guest_addr) : : "memory");
 
 		if (trap->scause == EXC_LOAD_PAGE_FAULT)
 			trap->scause = EXC_INST_PAGE_FAULT;
@@ -121,24 +108,14 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
 			".option norvc\n"
 			"add %[ttmp], %[taddr], 0\n"
 #ifdef CONFIG_64BIT
-			/*
-			 * HLV.D %[val], (%[addr])
-			 * HLV.D t0, (t2)
-			 * 0110110 00000 00111 100 00101 1110011
-			 */
-			".word 0x6c03c2f3\n"
+			HLV_D(%[val], %[addr])
 #else
-			/*
-			 * HLV.W %[val], (%[addr])
-			 * HLV.W t0, (t2)
-			 * 0110100 00000 00111 100 00101 1110011
-			 */
-			".word 0x6803c2f3\n"
+			HLV_W(%[val], %[addr])
 #endif
 			".option pop"
 			: [val] "=&r" (val),
 			  [taddr] "+&r" (taddr), [ttmp] "+&r" (ttmp)
-			: [addr] "r" (addr) : "memory");
+			: [addr] "r" (guest_addr) : "memory");
 	}
 
 	csr_write(CSR_STVEC, old_stvec);
@@ -13,6 +13,8 @@
 #include <asm/cacheflush.h>
 
 unsigned int riscv_cbom_block_size;
+EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
+
 static bool noncoherent_supported;
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@@ -1280,8 +1280,8 @@ struct kvm_arch {
 	bool tdp_mmu_enabled;
 
 	/*
-	 * List of struct kvm_mmu_pages being used as roots.
-	 * All struct kvm_mmu_pages in the list should have
+	 * List of kvm_mmu_page structs being used as roots.
+	 * All kvm_mmu_page structs in the list should have
 	 * tdp_mmu_page set.
 	 *
 	 * For reads, this list is protected by:

@@ -1300,8 +1300,8 @@ struct kvm_arch {
 	struct list_head tdp_mmu_roots;
 
 	/*
-	 * List of struct kvmp_mmu_pages not being used as roots.
-	 * All struct kvm_mmu_pages in the list should have
+	 * List of kvm_mmu_page structs not being used as roots.
+	 * All kvm_mmu_page structs in the list should have
 	 * tdp_mmu_page set and a tdp_mmu_root_count of 0.
 	 */
 	struct list_head tdp_mmu_pages;

@@ -1311,9 +1311,9 @@ struct kvm_arch {
 	 * is held in read mode:
 	 *  - tdp_mmu_roots (above)
 	 *  - tdp_mmu_pages (above)
-	 *  - the link field of struct kvm_mmu_pages used by the TDP MMU
+	 *  - the link field of kvm_mmu_page structs used by the TDP MMU
 	 *  - lpage_disallowed_mmu_pages
-	 *  - the lpage_disallowed_link field of struct kvm_mmu_pages used
+	 *  - the lpage_disallowed_link field of kvm_mmu_page structs used
 	 *    by the TDP MMU
 	 * It is acceptable, but not necessary, to acquire this lock when
 	 * the thread holds the MMU lock in write mode.
@@ -309,7 +309,7 @@ enum vmcs_field {
 	GUEST_LDTR_AR_BYTES             = 0x00004820,
 	GUEST_TR_AR_BYTES               = 0x00004822,
 	GUEST_INTERRUPTIBILITY_INFO     = 0x00004824,
-	GUEST_ACTIVITY_STATE            = 0X00004826,
+	GUEST_ACTIVITY_STATE            = 0x00004826,
 	GUEST_SYSENTER_CS               = 0x0000482A,
 	VMX_PREEMPTION_TIMER_VALUE      = 0x0000482E,
 	HOST_IA32_SYSENTER_CS           = 0x00004c00,
@@ -28,7 +28,8 @@ config KVM
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_PFNCACHE
 	select HAVE_KVM_IRQFD
-	select HAVE_KVM_DIRTY_RING
+	select HAVE_KVM_DIRTY_RING_TSO
+	select HAVE_KVM_DIRTY_RING_ACQ_REL
 	select IRQ_BYPASS_MANAGER
 	select HAVE_KVM_IRQ_BYPASS
 	select HAVE_KVM_IRQ_ROUTING
@@ -106,9 +106,19 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 		return;
 
 	if (pmc->perf_event && pmc->perf_event->attr.precise_ip) {
-		/* Indicate PEBS overflow PMI to guest. */
-		skip_pmi = __test_and_set_bit(GLOBAL_STATUS_BUFFER_OVF_BIT,
-					      (unsigned long *)&pmu->global_status);
+		if (!in_pmi) {
+			/*
+			 * TODO: KVM is currently _choosing_ to not generate records
+			 * for emulated instructions, avoiding BUFFER_OVF PMI when
+			 * there are no records. Strictly speaking, it should be done
+			 * as well in the right context to improve sampling accuracy.
+			 */
+			skip_pmi = true;
+		} else {
+			/* Indicate PEBS overflow PMI to guest. */
+			skip_pmi = __test_and_set_bit(GLOBAL_STATUS_BUFFER_OVF_BIT,
+						      (unsigned long *)&pmu->global_status);
+		}
 	} else {
 		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
 	}
@@ -227,8 +237,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 			  get_sample_period(pmc, pmc->counter)))
 		return false;
 
-	if (!test_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->pebs_enable) &&
-	    pmc->perf_event->attr.precise_ip)
+	if (test_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->pebs_enable) !=
+	    (!!pmc->perf_event->attr.precise_ip))
 		return false;
 
 	/* reuse perf_event to serve as pmc_reprogram_counter() does*/
@@ -23,107 +23,52 @@ enum pmu_type {
 	PMU_TYPE_EVNTSEL,
 };
 
-enum index {
-	INDEX_ZERO = 0,
-	INDEX_ONE,
-	INDEX_TWO,
-	INDEX_THREE,
-	INDEX_FOUR,
-	INDEX_FIVE,
-	INDEX_ERROR,
-};
-
-static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	unsigned int num_counters = pmu->nr_arch_gp_counters;
 
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
-		if (type == PMU_TYPE_COUNTER)
-			return MSR_F15H_PERF_CTR;
-		else
-			return MSR_F15H_PERF_CTL;
-	} else {
-		if (type == PMU_TYPE_COUNTER)
-			return MSR_K7_PERFCTR0;
-		else
-			return MSR_K7_EVNTSEL0;
-	}
-}
+	if (pmc_idx >= num_counters)
+		return NULL;
 
-static enum index msr_to_index(u32 msr)
-{
-	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTR0:
-	case MSR_K7_EVNTSEL0:
-	case MSR_K7_PERFCTR0:
-		return INDEX_ZERO;
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_K7_EVNTSEL1:
-	case MSR_K7_PERFCTR1:
-		return INDEX_ONE;
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_K7_EVNTSEL2:
-	case MSR_K7_PERFCTR2:
-		return INDEX_TWO;
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_K7_EVNTSEL3:
-	case MSR_K7_PERFCTR3:
-		return INDEX_THREE;
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTR4:
-		return INDEX_FOUR;
-	case MSR_F15H_PERF_CTL5:
-	case MSR_F15H_PERF_CTR5:
-		return INDEX_FIVE;
-	default:
-		return INDEX_ERROR;
-	}
+	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
 }
 
 static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 					     enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	unsigned int idx;
 
 	if (!vcpu->kvm->arch.enable_pmu)
 		return NULL;
 
 	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTL5:
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
 			return NULL;
-		fallthrough;
+		/*
+		 * Each PMU counter has a pair of CTL and CTR MSRs. CTLn
+		 * MSRs (accessed via EVNTSEL) are even, CTRn MSRs are odd.
+		 */
+		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);
+		if (!(msr & 0x1) != (type == PMU_TYPE_EVNTSEL))
+			return NULL;
+		break;
 	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
 		if (type != PMU_TYPE_EVNTSEL)
 			return NULL;
+		idx = msr - MSR_K7_EVNTSEL0;
 		break;
-	case MSR_F15H_PERF_CTR0:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_F15H_PERF_CTR4:
-	case MSR_F15H_PERF_CTR5:
-		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
-			return NULL;
-		fallthrough;
 	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
 		if (type != PMU_TYPE_COUNTER)
 			return NULL;
+		idx = msr - MSR_K7_PERFCTR0;
 		break;
 	default:
 		return NULL;
 	}
 
-	return &pmu->gp_counters[msr_to_index(msr)];
+	return amd_pmc_idx_to_pmc(pmu, idx);
 }
 
 static bool amd_hw_event_available(struct kvm_pmc *pmc)
@@ -139,22 +84,6 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 	return true;
 }
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
-
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
-		/*
-		 * The idx is contiguous. The MSRs are not. The counter MSRs
-		 * are interleaved with the event select MSRs.
-		 */
-		pmc_idx *= 2;
-	}
-
-	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -168,15 +97,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 					    unsigned int idx, u64 *mask)
 {
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	struct kvm_pmc *counters;
-
-	idx &= ~(3u << 30);
-	if (idx >= pmu->nr_arch_gp_counters)
-		return NULL;
-	counters = pmu->gp_counters;
-
-	return &counters[idx];
+	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
 }
 
 static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
@@ -68,15 +68,11 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }
 
-/* function is called when global control register has been updated. */
-static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
+static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
 {
 	int bit;
-	u64 diff = pmu->global_ctrl ^ data;
 	struct kvm_pmc *pmc;
 
-	pmu->global_ctrl = data;
-
 	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 		if (pmc)
@@ -397,7 +393,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	struct kvm_pmc *pmc;
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
-	u64 reserved_bits;
+	u64 reserved_bits, diff;
 
 	switch (msr) {
 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
@@ -418,7 +414,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->global_ctrl == data)
 			return 0;
 		if (kvm_valid_perf_global_ctrl(pmu, data)) {
-			global_ctrl_changed(pmu, data);
+			diff = pmu->global_ctrl ^ data;
+			pmu->global_ctrl = data;
+			reprogram_counters(pmu, diff);
 			return 0;
 		}
 		break;
@@ -433,7 +431,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->pebs_enable == data)
 			return 0;
 		if (!(data & pmu->pebs_enable_mask)) {
+			diff = pmu->pebs_enable ^ data;
 			pmu->pebs_enable = data;
+			reprogram_counters(pmu, diff);
 			return 0;
 		}
 		break;
@@ -776,20 +776,23 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 {
 	struct kvm_pmc *pmc = NULL;
-	int bit;
+	int bit, hw_idx;
 
 	for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
 			 X86_PMC_IDX_MAX) {
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 
 		if (!pmc || !pmc_speculative_in_use(pmc) ||
-		    !intel_pmc_is_enabled(pmc))
+		    !intel_pmc_is_enabled(pmc) || !pmc->perf_event)
 			continue;
 
-		if (pmc->perf_event && pmc->idx != pmc->perf_event->hw.idx) {
-			pmu->host_cross_mapped_mask |=
-				BIT_ULL(pmc->perf_event->hw.idx);
-		}
+		/*
+		 * A negative index indicates the event isn't mapped to a
+		 * physical counter in the host, e.g. due to contention.
+		 */
+		hw_idx = pmc->perf_event->hw.idx;
+		if (hw_idx != pmc->idx && hw_idx > -1)
+			pmu->host_cross_mapped_mask |= BIT_ULL(hw_idx);
 	}
 }
@@ -1177,6 +1177,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
+#define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
 
 #ifdef KVM_CAP_IRQ_ROUTING
tools/testing/selftests/kvm/.gitignore
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
+/aarch64/aarch32_id_regs
 /aarch64/arch_timer
 /aarch64/debug-exceptions
 /aarch64/get-reg-list
@@ -147,6 +147,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test
 # Compiled outputs used by test targets
 TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
 
+TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c (new file, 169 lines)
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * aarch32_id_regs - Test for ID register behavior on AArch64-only systems
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * Test that KVM handles the AArch64 views of the AArch32 ID registers as RAZ
+ * and WI from userspace.
+ */
+
+#include <stdint.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+#define BAD_ID_REG_VAL 0x1badc0deul
+
+#define GUEST_ASSERT_REG_RAZ(reg) GUEST_ASSERT_EQ(read_sysreg_s(reg), 0)
+
+static void guest_main(void)
+{
+	GUEST_ASSERT_REG_RAZ(SYS_ID_PFR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_PFR1_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_DFR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_AFR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR1_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR2_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR3_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR1_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR2_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR3_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR4_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR5_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR4_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_ISAR6_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_MVFR0_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_MVFR1_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_MVFR2_EL1);
+	GUEST_ASSERT_REG_RAZ(sys_reg(3, 0, 0, 3, 3));
+	GUEST_ASSERT_REG_RAZ(SYS_ID_PFR2_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_DFR1_EL1);
+	GUEST_ASSERT_REG_RAZ(SYS_ID_MMFR5_EL1);
+	GUEST_ASSERT_REG_RAZ(sys_reg(3, 0, 0, 3, 7));
+
+	GUEST_DONE();
+}
+
+static void test_guest_raz(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+
+	vcpu_run(vcpu);
+
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT(uc);
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+	}
+}
+
+static uint64_t raz_wi_reg_ids[] = {
+	KVM_ARM64_SYS_REG(SYS_ID_PFR0_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_PFR1_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_DFR0_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR0_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR1_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR2_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR3_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR0_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR1_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR2_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR3_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR4_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR5_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR4_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_ISAR6_EL1),
+	KVM_ARM64_SYS_REG(SYS_MVFR0_EL1),
+	KVM_ARM64_SYS_REG(SYS_MVFR1_EL1),
+	KVM_ARM64_SYS_REG(SYS_MVFR2_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_PFR2_EL1),
+	KVM_ARM64_SYS_REG(SYS_ID_MMFR5_EL1),
+};
+
+static void test_user_raz_wi(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(raz_wi_reg_ids); i++) {
+		uint64_t reg_id = raz_wi_reg_ids[i];
+		uint64_t val;
+
+		vcpu_get_reg(vcpu, reg_id, &val);
+		ASSERT_EQ(val, 0);
+
+		/*
+		 * Expect the ioctl to succeed with no effect on the register
+		 * value.
+		 */
+		vcpu_set_reg(vcpu, reg_id, BAD_ID_REG_VAL);
+
+		vcpu_get_reg(vcpu, reg_id, &val);
+		ASSERT_EQ(val, 0);
+	}
+}
+
+static uint64_t raz_invariant_reg_ids[] = {
+	KVM_ARM64_SYS_REG(SYS_ID_AFR0_EL1),
+	KVM_ARM64_SYS_REG(sys_reg(3, 0, 0, 3, 3)),
+	KVM_ARM64_SYS_REG(SYS_ID_DFR1_EL1),
+	KVM_ARM64_SYS_REG(sys_reg(3, 0, 0, 3, 7)),
+};
+
+static void test_user_raz_invariant(struct kvm_vcpu *vcpu)
+{
+	int i, r;
+
+	for (i = 0; i < ARRAY_SIZE(raz_invariant_reg_ids); i++) {
+		uint64_t reg_id = raz_invariant_reg_ids[i];
+		uint64_t val;
+
+		vcpu_get_reg(vcpu, reg_id, &val);
+		ASSERT_EQ(val, 0);
+
+		r = __vcpu_set_reg(vcpu, reg_id, BAD_ID_REG_VAL);
+		TEST_ASSERT(r < 0 && errno == EINVAL,
+			    "unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
+
+		vcpu_get_reg(vcpu, reg_id, &val);
+		ASSERT_EQ(val, 0);
+	}
+}
+
+static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
+{
+	uint64_t val, el0;
+
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
+
+	el0 = (val & ARM64_FEATURE_MASK(ID_AA64PFR0_EL0)) >> ID_AA64PFR0_EL0_SHIFT;
+	return el0 == ID_AA64PFR0_ELx_64BIT_ONLY;
+}
+
+int main(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_main);
+
+	TEST_REQUIRE(vcpu_aarch64_only(vcpu));
+
+	ucall_init(vm, NULL);
+
+	test_user_raz_wi(vcpu);
+	test_user_raz_invariant(vcpu);
+	test_guest_raz(vcpu);
+
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
@@ -22,6 +22,7 @@
 #define SPSR_SS (1 << 21)
 
 extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start;
+extern unsigned char iter_ss_begin, iter_ss_end;
 static volatile uint64_t sw_bp_addr, hw_bp_addr;
 static volatile uint64_t wp_addr, wp_data_addr;
 static volatile uint64_t svc_addr;
@@ -238,6 +239,46 @@ static void guest_svc_handler(struct ex_regs *regs)
 	svc_addr = regs->pc;
 }
 
+enum single_step_op {
+	SINGLE_STEP_ENABLE = 0,
+	SINGLE_STEP_DISABLE = 1,
+};
+
+static void guest_code_ss(int test_cnt)
+{
+	uint64_t i;
+	uint64_t bvr, wvr, w_bvr, w_wvr;
+
+	for (i = 0; i < test_cnt; i++) {
+		/* Bits [1:0] of dbg{b,w}vr are RES0 */
+		w_bvr = i << 2;
+		w_wvr = i << 2;
+
+		/* Enable Single Step execution */
+		GUEST_SYNC(SINGLE_STEP_ENABLE);
+
+		/*
+		 * The userspace will verify that the pc is as expected during
+		 * single step execution between iter_ss_begin and iter_ss_end.
+		 */
+		asm volatile("iter_ss_begin:nop\n");
+
+		write_sysreg(w_bvr, dbgbvr0_el1);
+		write_sysreg(w_wvr, dbgwvr0_el1);
+		bvr = read_sysreg(dbgbvr0_el1);
+		wvr = read_sysreg(dbgwvr0_el1);
+
+		asm volatile("iter_ss_end:\n");
+
+		/* Disable Single Step execution */
+		GUEST_SYNC(SINGLE_STEP_DISABLE);
+
+		GUEST_ASSERT(bvr == w_bvr);
+		GUEST_ASSERT(wvr == w_wvr);
+	}
+	GUEST_DONE();
+}
+
 static int debug_version(struct kvm_vcpu *vcpu)
 {
 	uint64_t id_aa64dfr0;
@@ -246,7 +287,7 @@ static int debug_version(struct kvm_vcpu *vcpu)
 	return id_aa64dfr0 & 0xf;
 }
 
-int main(int argc, char *argv[])
+static void test_guest_debug_exceptions(void)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
@@ -259,9 +300,6 @@ int main(int argc, char *argv[])
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vcpu);
 
-	__TEST_REQUIRE(debug_version(vcpu) >= 6,
-		       "Armv8 debug architecture not supported.");
-
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_BRK_INS, guest_sw_bp_handler);
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
@@ -294,5 +332,108 @@ int main(int argc, char *argv[])
 
 done:
 	kvm_vm_free(vm);
+}
+
+void test_single_step_from_userspace(int test_cnt)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+	struct kvm_run *run;
+	uint64_t pc, cmd;
+	uint64_t test_pc = 0;
+	bool ss_enable = false;
+	struct kvm_guest_debug debug = {};
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code_ss);
+	ucall_init(vm, NULL);
+	run = vcpu->run;
+	vcpu_args_set(vcpu, 1, test_cnt);
+
+	while (1) {
+		vcpu_run(vcpu);
+		if (run->exit_reason != KVM_EXIT_DEBUG) {
+			cmd = get_ucall(vcpu, &uc);
+			if (cmd == UCALL_ABORT) {
+				REPORT_GUEST_ASSERT(uc);
+				/* NOT REACHED */
+			} else if (cmd == UCALL_DONE) {
+				break;
+			}
+
+			TEST_ASSERT(cmd == UCALL_SYNC,
+				    "Unexpected ucall cmd 0x%lx", cmd);
+
+			if (uc.args[1] == SINGLE_STEP_ENABLE) {
+				debug.control = KVM_GUESTDBG_ENABLE |
+						KVM_GUESTDBG_SINGLESTEP;
+				ss_enable = true;
+			} else {
+				debug.control = SINGLE_STEP_DISABLE;
+				ss_enable = false;
+			}
+
+			vcpu_guest_debug_set(vcpu, &debug);
+			continue;
+		}
+
+		TEST_ASSERT(ss_enable, "Unexpected KVM_EXIT_DEBUG");
+
+		/* Check if the current pc is expected. */
+		vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc), &pc);
+		TEST_ASSERT(!test_pc || pc == test_pc,
+			    "Unexpected pc 0x%lx (expected 0x%lx)",
+			    pc, test_pc);
+
+		/*
+		 * If the current pc is between iter_ss_begin and
+		 * iter_ss_end, the pc for the next KVM_EXIT_DEBUG should
+		 * be the current pc + 4.
+		 */
+		if ((pc >= (uint64_t)&iter_ss_begin) &&
+		    (pc < (uint64_t)&iter_ss_end))
+			test_pc = pc + 4;
+		else
+			test_pc = 0;
+	}
+
+	kvm_vm_free(vm);
+}
+
+static void help(char *name)
+{
+	puts("");
+	printf("Usage: %s [-h] [-i iterations of the single step test]\n", name);
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	int opt;
+	int ss_iteration = 10000;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	__TEST_REQUIRE(debug_version(vcpu) >= 6,
+		       "Armv8 debug architecture not supported.");
+	kvm_vm_free(vm);
+
+	while ((opt = getopt(argc, argv, "i:")) != -1) {
+		switch (opt) {
+		case 'i':
+			ss_iteration = atoi(optarg);
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	test_guest_debug_exceptions();
+	test_single_step_from_userspace(ss_iteration);
+
 	return 0;
 }
@@ -1,12 +1,14 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * psci_cpu_on_test - Test that the observable state of a vCPU targeted by the
- * CPU_ON PSCI call matches what the caller requested.
+ * psci_test - Tests relating to KVM's PSCI implementation.
  *
  * Copyright (c) 2021 Google LLC.
  *
- * This is a regression test for a race between KVM servicing the PSCI call and
- * userspace reading the vCPUs registers.
+ * This test includes:
+ * - A regression test for a race between KVM servicing the PSCI CPU_ON call
+ *   and userspace reading the targeted vCPU's registers.
+ * - A test for KVM's handling of PSCI SYSTEM_SUSPEND and the associated
+ *   KVM_SYSTEM_EVENT_SUSPEND UAPI.
  */
 
 #define _GNU_SOURCE
@@ -17,6 +17,7 @@
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <linux/atomic.h>
+#include <asm/barrier.h>
 
 #include "kvm_util.h"
 #include "test_util.h"
@@ -264,7 +265,8 @@ static void default_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err)
 
 static bool dirty_ring_supported(void)
 {
-	return kvm_has_cap(KVM_CAP_DIRTY_LOG_RING);
+	return (kvm_has_cap(KVM_CAP_DIRTY_LOG_RING) ||
+		kvm_has_cap(KVM_CAP_DIRTY_LOG_RING_ACQ_REL));
 }
 
 static void dirty_ring_create_vm_done(struct kvm_vm *vm)
@@ -279,12 +281,12 @@ static void dirty_ring_create_vm_done(struct kvm_vm *vm)
 
 static inline bool dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn)
 {
-	return gfn->flags == KVM_DIRTY_GFN_F_DIRTY;
+	return smp_load_acquire(&gfn->flags) == KVM_DIRTY_GFN_F_DIRTY;
}
 
 static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
 {
-	gfn->flags = KVM_DIRTY_GFN_F_RESET;
+	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);
 }
 
 static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
@@ -175,6 +175,10 @@ extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
 int open_kvm_dev_path_or_exit(void);
+
+bool get_kvm_intel_param_bool(const char *param);
+bool get_kvm_amd_param_bool(const char *param);
+
 unsigned int kvm_check_cap(long cap);
 
 static inline bool kvm_has_cap(long cap)
@@ -63,8 +63,10 @@ void test_assert(bool exp, const char *exp_str,
 		    #a, #b, #a, (unsigned long) __a, #b, (unsigned long) __b); \
 } while (0)
 
-#define TEST_FAIL(fmt, ...) \
-	TEST_ASSERT(false, fmt, ##__VA_ARGS__)
+#define TEST_FAIL(fmt, ...) do { \
+	TEST_ASSERT(false, fmt, ##__VA_ARGS__); \
+	__builtin_unreachable(); \
+} while (0)
 
 size_t parse_size(const char *size);
@@ -825,6 +825,8 @@ static inline uint8_t wrmsr_safe(uint32_t msr, uint64_t val)
 	return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
 }
 
+bool kvm_is_tdp_enabled(void);
+
 uint64_t vm_get_page_table_entry(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
 				 uint64_t vaddr);
 void vm_set_page_table_entry(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
@@ -855,6 +857,8 @@ enum pg_level {
 #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)
 
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
+void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		    uint64_t nr_bytes, int level);
 
 /*
  * Basic CPU control in CR0
@@ -50,6 +50,45 @@ int open_kvm_dev_path_or_exit(void)
 	return _open_kvm_dev_path_or_exit(O_RDONLY);
 }
 
+static bool get_module_param_bool(const char *module_name, const char *param)
+{
+	const int path_size = 128;
+	char path[path_size];
+	char value;
+	ssize_t r;
+	int fd;
+
+	r = snprintf(path, path_size, "/sys/module/%s/parameters/%s",
+		     module_name, param);
+	TEST_ASSERT(r < path_size,
+		    "Failed to construct sysfs path in %d bytes.", path_size);
+
+	fd = open_path_or_exit(path, O_RDONLY);
+
+	r = read(fd, &value, 1);
+	TEST_ASSERT(r == 1, "read(%s) failed", path);
+
+	r = close(fd);
+	TEST_ASSERT(!r, "close(%s) failed", path);
+
+	if (value == 'Y')
+		return true;
+	else if (value == 'N')
+		return false;
+
+	TEST_FAIL("Unrecognized value '%c' for boolean module param", value);
+}
+
+bool get_kvm_intel_param_bool(const char *param)
+{
+	return get_module_param_bool("kvm_intel", param);
+}
+
+bool get_kvm_amd_param_bool(const char *param)
+{
+	return get_module_param_bool("kvm_amd", param);
+}
+
 /*
  * Capability
  *
@@ -82,7 +121,10 @@ unsigned int kvm_check_cap(long cap)
 
 void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size)
 {
-	vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING, ring_size);
+	if (vm_check_cap(vm, KVM_CAP_DIRTY_LOG_RING_ACQ_REL))
+		vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING_ACQ_REL, ring_size);
+	else
+		vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING, ring_size);
 	vm->dirty_ring_size = ring_size;
 }
@@ -111,6 +111,14 @@ static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent)
 	}
 }
 
+bool kvm_is_tdp_enabled(void)
+{
+	if (is_intel_cpu())
+		return get_kvm_intel_param_bool("ept");
+	else
+		return get_kvm_amd_param_bool("npt");
+}
+
 void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
@@ -214,6 +222,25 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		    uint64_t nr_bytes, int level)
+{
+	uint64_t pg_size = PG_LEVEL_SIZE(level);
+	uint64_t nr_pages = nr_bytes / pg_size;
+	int i;
+
+	TEST_ASSERT(nr_bytes % pg_size == 0,
+		    "Region size not aligned: nr_bytes: 0x%lx, page size: 0x%lx",
+		    nr_bytes, pg_size);
+
+	for (i = 0; i < nr_pages; i++) {
+		__virt_pg_map(vm, vaddr, paddr, level);
+
+		vaddr += pg_size;
+		paddr += pg_size;
+	}
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
 					  struct kvm_vcpu *vcpu,
 					  uint64_t vaddr)
@@ -1294,20 +1321,9 @@ done:
 /* Returns true if kvm_intel was loaded with unrestricted_guest=1. */
 bool vm_is_unrestricted_guest(struct kvm_vm *vm)
 {
-	char val = 'N';
-	size_t count;
-	FILE *f;
-
 	/* Ensure that a KVM vendor-specific module is loaded. */
 	if (vm == NULL)
 		close(open_kvm_dev_path_or_exit());
 
-	f = fopen("/sys/module/kvm_intel/parameters/unrestricted_guest", "r");
-	if (f) {
-		count = fread(&val, sizeof(char), 1, f);
-		TEST_ASSERT(count == 1, "Unable to read from param file.");
-		fclose(f);
-	}
-
-	return val == 'Y';
+	return get_kvm_intel_param_bool("unrestricted_guest");
 }
@@ -60,18 +60,6 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
 	seg->base = base;
 }
 
-/*
- * Avoid using memset to clear the vmcb, since libc may not be
- * available in L1 (and, even if it is, features that libc memset may
- * want to use, like AVX, may not be enabled).
- */
-static void clear_vmcb(struct vmcb *vmcb)
-{
-	int n = sizeof(*vmcb) / sizeof(u32);
-
-	asm volatile ("rep stosl" : "+c"(n), "+D"(vmcb) : "a"(0) : "memory");
-}
-
 void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp)
 {
 	struct vmcb *vmcb = svm->vmcb;
@@ -88,7 +76,7 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
 	wrmsr(MSR_EFER, efer | EFER_SVME);
 	wrmsr(MSR_VM_HSAVE_PA, svm->save_area_gpa);
 
-	clear_vmcb(vmcb);
+	memset(vmcb, 0, sizeof(*vmcb));
 	asm volatile ("vmsave %0\n\t" : : "a" (vmcb_gpa) : "memory");
 	vmcb_set_seg(&save->es, get_es(), 0, -1U, data_seg_attr);
 	vmcb_set_seg(&save->cs, get_cs(), 0, -1U, code_seg_attr);
@@ -17,84 +17,70 @@
 /* VMCALL and VMMCALL are both 3-byte opcodes. */
 #define HYPERCALL_INSN_SIZE 3
 
-static bool ud_expected;
+static bool quirk_disabled;
 
 static void guest_ud_handler(struct ex_regs *regs)
 {
-	GUEST_ASSERT(ud_expected);
-	GUEST_DONE();
+	regs->rax = -EFAULT;
+	regs->rip += HYPERCALL_INSN_SIZE;
 }
 
-extern uint8_t svm_hypercall_insn[HYPERCALL_INSN_SIZE];
-static uint64_t svm_do_sched_yield(uint8_t apic_id)
-{
-	uint64_t ret;
-
-	asm volatile("mov %1, %%rax\n\t"
-		     "mov %2, %%rbx\n\t"
-		     "svm_hypercall_insn:\n\t"
-		     "vmmcall\n\t"
-		     "mov %%rax, %0\n\t"
-		     : "=r"(ret)
-		     : "r"((uint64_t)KVM_HC_SCHED_YIELD), "r"((uint64_t)apic_id)
-		     : "rax", "rbx", "memory");
-
-	return ret;
-}
-
-extern uint8_t vmx_hypercall_insn[HYPERCALL_INSN_SIZE];
-static uint64_t vmx_do_sched_yield(uint8_t apic_id)
-{
-	uint64_t ret;
-
-	asm volatile("mov %1, %%rax\n\t"
-		     "mov %2, %%rbx\n\t"
-		     "vmx_hypercall_insn:\n\t"
-		     "vmcall\n\t"
-		     "mov %%rax, %0\n\t"
-		     : "=r"(ret)
-		     : "r"((uint64_t)KVM_HC_SCHED_YIELD), "r"((uint64_t)apic_id)
-		     : "rax", "rbx", "memory");
+static const uint8_t vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
+static const uint8_t svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
+
+extern uint8_t hypercall_insn[HYPERCALL_INSN_SIZE];
+static uint64_t do_sched_yield(uint8_t apic_id)
+{
+	uint64_t ret;
+
+	asm volatile("hypercall_insn:\n\t"
+		     ".byte 0xcc,0xcc,0xcc\n\t"
+		     : "=a"(ret)
+		     : "a"((uint64_t)KVM_HC_SCHED_YIELD), "b"((uint64_t)apic_id)
+		     : "memory");
 
 	return ret;
 }
 
 static void guest_main(void)
 {
-	uint8_t *native_hypercall_insn, *hypercall_insn;
-	uint8_t apic_id;
-
-	apic_id = GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID));
+	const uint8_t *native_hypercall_insn;
+	const uint8_t *other_hypercall_insn;
+	uint64_t ret;
 
 	if (is_intel_cpu()) {
-		native_hypercall_insn = vmx_hypercall_insn;
-		hypercall_insn = svm_hypercall_insn;
-		svm_do_sched_yield(apic_id);
+		native_hypercall_insn = vmx_vmcall;
+		other_hypercall_insn = svm_vmmcall;
 	} else if (is_amd_cpu()) {
-		native_hypercall_insn = svm_hypercall_insn;
-		hypercall_insn = vmx_hypercall_insn;
-		vmx_do_sched_yield(apic_id);
+		native_hypercall_insn = svm_vmmcall;
+		other_hypercall_insn = vmx_vmcall;
 	} else {
 		GUEST_ASSERT(0);
 		/* unreachable */
 		return;
 	}
 
-	/*
-	 * The hypercall didn't #UD (guest_ud_handler() signals "done" if a #UD
-	 * occurs).  Verify that a #UD is NOT expected and that KVM patched in
-	 * the native hypercall.
-	 */
-	GUEST_ASSERT(!ud_expected);
-	GUEST_ASSERT(!memcmp(native_hypercall_insn, hypercall_insn, HYPERCALL_INSN_SIZE));
-	GUEST_DONE();
-}
-
-static void setup_ud_vector(struct kvm_vcpu *vcpu)
-{
-	vm_init_descriptor_tables(vcpu->vm);
-	vcpu_init_descriptor_tables(vcpu);
-	vm_install_exception_handler(vcpu->vm, UD_VECTOR, guest_ud_handler);
+	memcpy(hypercall_insn, other_hypercall_insn, HYPERCALL_INSN_SIZE);
+
+	ret = do_sched_yield(GET_APIC_ID_FIELD(xapic_read_reg(APIC_ID)));
+
+	/*
+	 * If the quirk is disabled, verify that guest_ud_handler() "returned"
+	 * -EFAULT and that KVM did NOT patch the hypercall.  If the quirk is
+	 * enabled, verify that the hypercall succeeded and that KVM patched in
+	 * the "right" hypercall.
+	 */
+	if (quirk_disabled) {
+		GUEST_ASSERT(ret == (uint64_t)-EFAULT);
+		GUEST_ASSERT(!memcmp(other_hypercall_insn, hypercall_insn,
+			     HYPERCALL_INSN_SIZE));
+	} else {
+		GUEST_ASSERT(!ret);
+		GUEST_ASSERT(!memcmp(native_hypercall_insn, hypercall_insn,
+			     HYPERCALL_INSN_SIZE));
+	}
+
+	GUEST_DONE();
 }
 
 static void enter_guest(struct kvm_vcpu *vcpu)
@@ -117,35 +103,23 @@ static void enter_guest(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void test_fix_hypercall(void)
+static void test_fix_hypercall(bool disable_quirk)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 
 	vm = vm_create_with_one_vcpu(&vcpu, guest_main);
-	setup_ud_vector(vcpu);
 
-	ud_expected = false;
-	sync_global_to_guest(vm, ud_expected);
+	vm_init_descriptor_tables(vcpu->vm);
+	vcpu_init_descriptor_tables(vcpu);
+	vm_install_exception_handler(vcpu->vm, UD_VECTOR, guest_ud_handler);
 
-	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
+	if (disable_quirk)
+		vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2,
+			      KVM_X86_QUIRK_FIX_HYPERCALL_INSN);
 
-	enter_guest(vcpu);
-}
-
-static void test_fix_hypercall_disabled(void)
-{
-	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm;
-
-	vm = vm_create_with_one_vcpu(&vcpu, guest_main);
-	setup_ud_vector(vcpu);
-
-	vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2,
-		      KVM_X86_QUIRK_FIX_HYPERCALL_INSN);
-
-	ud_expected = true;
-	sync_global_to_guest(vm, ud_expected);
+	quirk_disabled = disable_quirk;
+	sync_global_to_guest(vm, quirk_disabled);
 
 	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
@@ -156,6 +130,6 @@ int main(void)
 {
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_DISABLE_QUIRKS2) & KVM_X86_QUIRK_FIX_HYPERCALL_INSN);
 
-	test_fix_hypercall();
-	test_fix_hypercall_disabled();
+	test_fix_hypercall(false);
+	test_fix_hypercall(true);
 }
@@ -26,7 +26,8 @@ static inline uint8_t hypercall(u64 control, vm_vaddr_t input_address,
 		     : "=a" (*hv_status),
 		       "+c" (control), "+d" (input_address),
 		       KVM_ASM_SAFE_OUTPUTS(vector)
-		     : [output_address] "r"(output_address)
+		     : [output_address] "r"(output_address),
+		       "a" (-EFAULT)
 		     : "cc", "memory", "r8", KVM_ASM_SAFE_CLOBBERS);
 	return vector;
 }
@@ -81,13 +82,13 @@ static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
 	}
 
 	vector = hypercall(hcall->control, input, output, &res);
-	if (hcall->ud_expected)
+	if (hcall->ud_expected) {
 		GUEST_ASSERT_2(vector == UD_VECTOR, hcall->control, vector);
-	else
+	} else {
 		GUEST_ASSERT_2(!vector, hcall->control, vector);
+		GUEST_ASSERT_2(res == hcall->expect, hcall->expect, res);
+	}
 
-	GUEST_ASSERT_2(!hcall->ud_expected || res == hcall->expect,
-		       hcall->expect, res);
 	GUEST_DONE();
 }
 
@@ -507,7 +508,7 @@ static void guest_test_hcalls_access(void)
 	switch (stage) {
 	case 0:
 		feat->eax |= HV_MSR_HYPERCALL_AVAILABLE;
-		hcall->control = 0xdeadbeef;
+		hcall->control = 0xbeef;
 		hcall->expect = HV_STATUS_INVALID_HYPERCALL_CODE;
 		break;
@@ -112,6 +112,7 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
+	uint64_t nr_bytes;
 	void *hva;
 	int r;
@@ -134,10 +135,24 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 				    HPAGE_GPA, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
 
-	virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
+	nr_bytes = HPAGE_SLOT_NPAGES * vm->page_size;
+
+	/*
+	 * Ensure that KVM can map HPAGE_SLOT with huge pages by mapping the
+	 * region into the guest with 2MiB pages whenever TDP is disabled (i.e.
+	 * whenever KVM is shadowing the guest page tables).
+	 *
+	 * When TDP is enabled, KVM should be able to map HPAGE_SLOT with huge
+	 * pages irrespective of the guest page size, so map with 4KiB pages
+	 * to test that that is the case.
+	 */
+	if (kvm_is_tdp_enabled())
+		virt_map_level(vm, HPAGE_GVA, HPAGE_GPA, nr_bytes, PG_LEVEL_4K);
+	else
+		virt_map_level(vm, HPAGE_GVA, HPAGE_GPA, nr_bytes, PG_LEVEL_2M);
 
 	hva = addr_gpa2hva(vm, HPAGE_GPA);
-	memset(hva, RETURN_OPCODE, HPAGE_SLOT_NPAGES * PAGE_SIZE);
+	memset(hva, RETURN_OPCODE, nr_bytes);
 
 	check_2m_page_count(vm, 0);
 	check_split_count(vm, 0);
@@ -19,6 +19,20 @@ config HAVE_KVM_IRQ_ROUTING
 config HAVE_KVM_DIRTY_RING
        bool
 
+# Only strongly ordered architectures can select this, as it doesn't
+# put any explicit constraint on userspace ordering. They can also
+# select the _ACQ_REL version.
+config HAVE_KVM_DIRTY_RING_TSO
+       bool
+       select HAVE_KVM_DIRTY_RING
+       depends on X86
+
+# Weakly ordered architectures can only select this, advertising
+# to userspace the additional ordering requirements.
+config HAVE_KVM_DIRTY_RING_ACQ_REL
+       bool
+       select HAVE_KVM_DIRTY_RING
+
 config HAVE_KVM_EVENTFD
        bool
        select EVENTFD
@@ -74,7 +74,7 @@ int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size)
 
 static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
 {
-	gfn->flags = 0;
+	smp_store_release(&gfn->flags, 0);
 }
 
 static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
@@ -84,7 +84,7 @@ static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
 
 static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
 {
-	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
+	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
 }
 
 int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
@@ -4473,7 +4473,13 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_NR_MEMSLOTS:
 		return KVM_USER_MEM_SLOTS;
 	case KVM_CAP_DIRTY_LOG_RING:
-#ifdef CONFIG_HAVE_KVM_DIRTY_RING
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING_TSO
+		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
+#else
+		return 0;
+#endif
+	case KVM_CAP_DIRTY_LOG_RING_ACQ_REL:
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
 		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
 #else
 		return 0;
@@ -4578,6 +4584,7 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
 		return 0;
 	}
 	case KVM_CAP_DIRTY_LOG_RING:
+	case KVM_CAP_DIRTY_LOG_RING_ACQ_REL:
 		return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
 	default:
 		return kvm_vm_ioctl_enable_cap(kvm, cap);