Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "ARM:

   - Nested virtualization support for VGICv3, giving the nested
     hypervisor control of the VGIC hardware when running an L2 VM

   - Removal of 'late' nested virtualization feature register masking,
     making the supported feature set directly visible to userspace

   - Support for emulating FEAT_PMUv3 on Apple silicon, taking advantage
     of an IMPLEMENTATION DEFINED trap that covers all PMUv3 registers

   - Paravirtual interface for discovering the set of CPU
     implementations where a VM may run, addressing a longstanding issue
     of guest CPU errata awareness in big-little systems and
     cross-implementation VM migration

   - Userspace control of the registers responsible for identifying a
     particular CPU implementation (MIDR_EL1, REVIDR_EL1, AIDR_EL1),
     allowing VMs to be migrated cross-implementation (a hedged userspace
     sketch follows this list)

   - pKVM updates, including support for tracking stage-2 page table
     allocations in the protected hypervisor in the 'SecPageTable' stat

   - Fixes to vPMU, ensuring that userspace updates to the vPMU after
     KVM_RUN are reflected into the backing perf events
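
   As a hedged illustration of the "userspace control of implementation
   ID registers" item above: enable KVM_CAP_ARM_WRITABLE_IMP_ID_REGS on
   the VM and write MIDR_EL1 before the first KVM_RUN. The helper name
   and the omitted KVM_ARM_VCPU_INIT step are assumptions for
   illustration, not code from this pull:

     #include <linux/kvm.h>   /* ARM64_SYS_REG() comes via the arm64 uapi headers */
     #include <sys/ioctl.h>
     #include <stdint.h>

     /* Returns the new vCPU fd, or -1 on error (error handling kept minimal). */
     static int create_vcpu_with_midr(int vm_fd, uint64_t midr)
     {
             struct kvm_enable_cap cap = {
                     .cap = KVM_CAP_ARM_WRITABLE_IMP_ID_REGS,
             };
             struct kvm_one_reg reg;
             int vcpu_fd;

             /* The capability must be enabled before any vCPU exists in the VM. */
             if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
                     return -1;

             vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
             if (vcpu_fd < 0)
                     return -1;

             /* KVM_ARM_VCPU_INIT (omitted here) must be issued before register access. */

             /* MIDR_EL1 is op0=3, op1=0, CRn=0, CRm=0, op2=0; the value is VM-scoped. */
             reg.id   = ARM64_SYS_REG(3, 0, 0, 0, 0);
             reg.addr = (uint64_t)(uintptr_t)&midr;
             if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
                     return -1;

             return vcpu_fd;
     }

   Because the registers are VM-scoped, the same values are then
   presented on every vCPU in the VM.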

  LoongArch:

   - Remove unnecessary header include path

   - Assume constant PGD during VM context switch

   - Add perf events support for guest VM

  RISC-V:

   - Disable the kernel perf counter during configure

   - KVM selftests improvements for PMU

   - Fix warning at the time of KVM module removal

  x86:

   - Add support for aging of SPTEs without holding mmu_lock.

     Not taking mmu_lock allows multiple aging actions to run in
     parallel, and more importantly avoids stalling vCPUs. This includes
     an implementation of per-rmap-entry locking (sketched after this
     x86 list); aging the gfn is done with only a per-rmap single-bit
     spinlock taken, whereas locking an rmap for write requires taking
     both the per-rmap spinlock and the mmu_lock.

     Note that this decreases slightly the accuracy of accessed-page
     information, because changes to the SPTE outside aging might not
     use atomic operations even if they could race against a clear of
     the Accessed bit.

     This is deliberate because KVM and mm/ tolerate false
     positives/negatives for accessed information, and testing has shown
     that reducing the latency of aging is far more beneficial to
     overall system performance than providing "perfect" young/old
     information.

   - Defer runtime CPUID updates until KVM emulates a CPUID instruction,
     to coalesce updates when multiple pieces of vCPU state are
     changing, e.g. as part of a nested transition

   - Fix a variety of nested emulation bugs, and add VMX support for
     synthesizing nested VM-Exit on interception (instead of injecting
     #UD into L2)

   - Drop "support" for async page faults for protected guests that do
     not set SEND_ALWAYS (i.e. that only want async page faults at CPL3)

   - Bring a bit of sanity to x86's VM teardown code, which has
     accumulated a lot of cruft over the years. Particularly, destroy
     vCPUs before the MMU, despite the latter being a VM-wide operation

   - Add common secure TSC infrastructure for use within SNP and in the
     future TDX

   - Block KVM_CAP_SYNC_REGS if guest state is protected. It does not
     make sense to use the capability if the relevant registers are not
     available for reading or writing

   - Don't take kvm->lock when iterating over vCPUs in the suspend
     notifier to fix a largely theoretical deadlock

   - Use the vCPU's actual Xen PV clock information when starting the
     Xen timer, as the cached state in arch.hv_clock can be stale/bogus

   - Fix a bug where KVM could bleed PVCLOCK_GUEST_STOPPED across
     different PV clocks; restrict PVCLOCK_GUEST_STOPPED to kvmclock, as
     KVM's suspend notifier only accounts for kvmclock, and there's no
     evidence that the flag is actually supported by Xen guests

   - Clean up the per-vCPU "cache" of its reference pvclock, and instead
     only track the vCPU's TSC scaling (multiplier+shift) metadata (which
     is moderately expensive to compute, and rarely changes for modern
     setups)

   - Don't write to the Xen hypercall page on MSR writes that are
     initiated by the host (userspace or KVM) to fix a class of bugs
     where KVM can write to guest memory at unexpected times, e.g.
     during vCPU creation if userspace has set the Xen hypercall MSR
     index to collide with an MSR that KVM emulates

   - Restrict the Xen hypercall MSR index to the unofficial synthetic
     range to reduce the set of possible collisions with MSRs that are
     emulated by KVM (collisions can still happen as KVM emulates
     Hyper-V MSRs, which also reside in the synthetic range)

   - Clean up and optimize KVM's handling of Xen MSR writes and
     xen_hvm_config

   - Update Xen TSC leaves during CPUID emulation instead of modifying
     the CPUID entries when updating PV clocks; there is no guarantee PV
     clocks will be updated between TSC frequency changes and CPUID
     emulation, and guest reads of the TSC leaves should be rare, i.e.
     are not a hot path
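
   A minimal sketch of the per-rmap "lock in a single bit" idea from the
   SPTE-aging item above, written with C11 atomics purely for
   illustration; the names and KVM's exact locking protocol differ:

     #include <stdatomic.h>
     #include <stdint.h>

     #define RMAP_LOCKED ((uintptr_t)1)  /* low bit of the rmap head doubles as the lock */

     /* Acquire the per-rmap lock; returns the head value with the lock bit clear. */
     static uintptr_t rmap_lock(_Atomic uintptr_t *rmap_head)
     {
             uintptr_t old = atomic_load_explicit(rmap_head, memory_order_relaxed);

             for (;;) {
                     old &= ~RMAP_LOCKED;   /* only try to lock an unlocked value */
                     if (atomic_compare_exchange_weak_explicit(rmap_head, &old,
                                                               old | RMAP_LOCKED,
                                                               memory_order_acquire,
                                                               memory_order_relaxed))
                             return old;
             }
     }

     /* Release the lock, publishing a possibly updated head value. */
     static void rmap_unlock(_Atomic uintptr_t *rmap_head, uintptr_t new_head)
     {
             atomic_store_explicit(rmap_head, new_head & ~RMAP_LOCKED,
                                   memory_order_release);
     }

   Aging takes only this per-rmap lock; writers take it together with
   mmu_lock, as described in the item above.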

  x86 (Intel):

   - Fix a bug where KVM unnecessarily reads XFD_ERR from hardware and
     thus modifies the vCPU's XFD_ERR on a #NM due to CR0.TS=1

   - Pass XFD_ERR as the payload when injecting #NM, as a preparatory
     step for upcoming FRED virtualization support

   - Decouple the EPT entry RWX protection bit macros from the EPT
     Violation bits, both as a general cleanup and in anticipation of
     adding support for emulating Mode-Based Execution Control (MBEC)

   - Reject KVM_RUN if userspace manages to gain control and stuff
     invalid guest state while KVM is in the middle of emulating nested
     VM-Enter

   - Add a macro to handle KVM's sanity checks on entry/exit VMCS
     control pairs in anticipation of adding sanity checks for secondary
     exit controls (the primary field is out of bits)

  x86 (AMD):

   - Ensure the PSP driver is initialized when both the PSP and KVM
     modules are built-in (the initcall framework doesn't handle
     dependencies)

   - Use long-term pins when registering encrypted memory regions, so
     that the pages are migrated out of MIGRATE_CMA/ZONE_MOVABLE and
     don't lead to excessive fragmentation

   - Add macros and helpers for setting GHCB return/error codes

   - Add support for Idle HLT interception, which elides interception if
     the vCPU has a pending, unmasked virtual IRQ when HLT is executed

   - Fix a bug in INVPCID emulation where KVM fails to check for a
     non-canonical address (see the sketch after this list)

   - Don't attempt VMRUN for SEV-ES+ guests if the vCPU's VMSA is
     invalid, e.g. because the vCPU was "destroyed" via SNP's AP
     Creation hypercall

   - Reject SNP AP Creation if the requested SEV features for the vCPU
     don't match the VM's configured set of features
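
   For the INVPCID item above, "canonical" means that the upper address
   bits must be a sign-extension of the top implemented bit. A hedged
   sketch for a 48-bit virtual address width (KVM's real check also
   accounts for LA57 and the vCPU's configured address width):

     #include <stdbool.h>
     #include <stdint.h>

     static bool is_canonical_48(uint64_t addr)
     {
             /*
              * Sign-extend from bit 47 and compare against the original;
              * relies on arithmetic right shift of signed values, as gcc
              * and clang implement it.
              */
             return (uint64_t)((int64_t)(addr << 16) >> 16) == addr;
     }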

  Selftests:

   - Fix the Intel PMU counters test yet again; add a data load and do
     CLFLUSH{OPT} on the data instead of executing code (a minimal sketch
     follows this quoted summary). The theory is that modern Intel CPUs
     have learned new code prefetching tricks that bypass the PMU counters

   - Fix a flaw in the Intel PMU counters test where it asserts that an
     event is counting correctly without actually knowing what the event
     counts on the underlying hardware

   - Fix a variety of flaws, bugs, and false failures/passes in
     dirty_log_test, and improve its coverage by collecting all dirty
     entries on each iteration

   - Fix a few minor bugs related to handling of stats FDs

   - Add infrastructure to make vCPU and VM stats FDs available to tests
     by default (open the FDs during VM/vCPU creation)

   - Relax an assertion on the number of HLT exits in the xAPIC IPI test
     when running on a CPU that supports AMD's Idle HLT (which elides
     interception of HLT if a virtual IRQ is pending and unmasked)"
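
A minimal sketch of the data-load approach from the Intel PMU counters
selftest item above: flush a data line, then reload it so that a
cache-miss-flavoured event has something deterministic to count. The function
name and the x86 inline assembly are illustrative, not the selftest's actual
code:

  #include <stdint.h>

  static inline void flush_then_load(volatile uint64_t *p)
  {
          /* Evict the line, order the flush, then touch the data again. */
          asm volatile("clflushopt %0" : "+m" (*p));
          asm volatile("mfence" ::: "memory");
          (void)*p;
  }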

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (216 commits)
  RISC-V: KVM: Optimize comments in kvm_riscv_vcpu_isa_disable_allowed
  RISC-V: KVM: Teardown riscv specific bits after kvm_exit
  LoongArch: KVM: Register perf callbacks for guest
  LoongArch: KVM: Implement arch-specific functions for guest perf
  LoongArch: KVM: Add stub for kvm_arch_vcpu_preempted_in_kernel()
  LoongArch: KVM: Remove PGD saving during VM context switch
  LoongArch: KVM: Remove unnecessary header include path
  KVM: arm64: Tear down vGIC on failed vCPU creation
  KVM: arm64: PMU: Reload when resetting
  KVM: arm64: PMU: Reload when user modifies registers
  KVM: arm64: PMU: Fix SET_ONE_REG for vPMC regs
  KVM: arm64: PMU: Assume PMU presence in pmu-emul.c
  KVM: arm64: PMU: Set raw values from user to PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}
  KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
  KVM: arm64: Factor out pKVM hyp vcpu creation to separate function
  KVM: arm64: Initialize HCRX_EL2 traps in pKVM
  KVM: arm64: Factor out setting HCRX_EL2 traps into separate function
  KVM: x86: block KVM_CAP_SYNC_REGS if guest state is protected
  KVM: x86: Add infrastructure for secure TSC
  KVM: x86: Push down setting vcpu.arch.user_set_tsc
  ...
Linus Torvalds 2025-03-25 14:22:07 -07:00
commit edb0e8f6e2
142 changed files with 4118 additions and 1986 deletions


@ -1000,6 +1000,10 @@ blobs in userspace. When the guest writes the MSR, kvm copies one
page of a blob (32- or 64-bit, depending on the vcpu mode) to guest
memory.
The MSR index must be in the range [0x40000000, 0x4fffffff], i.e. must reside
in the range that is unofficially reserved for use by hypervisors. The min/max
values are enumerated via KVM_XEN_MSR_MIN_INDEX and KVM_XEN_MSR_MAX_INDEX.
::
struct kvm_xen_hvm_config {
@ -8258,6 +8262,24 @@ KVM exits with the register state of either the L1 or L2 guest
depending on which executed at the time of an exit. Userspace must
take care to differentiate between these cases.
7.37 KVM_CAP_ARM_WRITABLE_IMP_ID_REGS
-------------------------------------
:Architectures: arm64
:Target: VM
:Parameters: None
:Returns: 0 on success, -EINVAL if vCPUs have been created before enabling this
capability.
This capability changes the behavior of the registers that identify a PE
implementation of the Arm architecture: MIDR_EL1, REVIDR_EL1, and AIDR_EL1.
By default, these registers are visible to userspace but treated as invariant.
When this capability is enabled, KVM allows userspace to change the
aforementioned registers before the first KVM_RUN. These registers are VM
scoped, meaning that the same set of values are presented on all vCPUs in a
given VM.
8. Other capabilities.
======================


@ -116,7 +116,7 @@ The pseudo-firmware bitmap register are as follows:
ARM DEN0057A.
* KVM_REG_ARM_VENDOR_HYP_BMAP:
Controls the bitmap of the Vendor specific Hypervisor Service Calls.
Controls the bitmap of the Vendor specific Hypervisor Service Calls[0-63].
The following bits are accepted:
@ -127,6 +127,19 @@ The pseudo-firmware bitmap register are as follows:
Bit-1: KVM_REG_ARM_VENDOR_HYP_BIT_PTP:
The bit represents the Precision Time Protocol KVM service.
* KVM_REG_ARM_VENDOR_HYP_BMAP_2:
Controls the bitmap of the Vendor specific Hypervisor Service Calls[64-127].
The following bits are accepted:
Bit-0: KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_VER
This represents the ARM_SMCCC_VENDOR_HYP_KVM_DISCOVER_IMPL_VER_FUNC_ID
function-id. This is reset to 0.
Bit-1: KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_CPUS
This represents the ARM_SMCCC_VENDOR_HYP_KVM_DISCOVER_IMPL_CPUS_FUNC_ID
function-id. This is reset to 0.
Errors:
======= =============================================================


@ -142,3 +142,62 @@ region is equal to the memory protection granule advertised by
| | | +---------------------------------------------+
| | | | ``INVALID_PARAMETER (-3)`` |
+---------------------+----------+----+---------------------------------------------+
``ARM_SMCCC_VENDOR_HYP_KVM_DISCOVER_IMPL_VER_FUNC_ID``
-------------------------------------------------------
Request the target CPU implementation version information and the number of target
implementations for the Guest VM.
+---------------------+-------------------------------------------------------------+
| Presence: | Optional; KVM/ARM64 Guests only |
+---------------------+-------------------------------------------------------------+
| Calling convention: | HVC64 |
+---------------------+----------+--------------------------------------------------+
| Function ID: | (uint32) | 0xC6000040 |
+---------------------+----------+--------------------------------------------------+
| Arguments: | None |
+---------------------+----------+----+---------------------------------------------+
| Return Values: | (int64) | R0 | ``SUCCESS (0)`` |
| | | +---------------------------------------------+
| | | | ``NOT_SUPPORTED (-1)`` |
| +----------+----+---------------------------------------------+
| | (uint64) | R1 | Bits [63:32] Reserved/Must be zero |
| | | +---------------------------------------------+
| | | | Bits [31:16] Major version |
| | | +---------------------------------------------+
| | | | Bits [15:0] Minor version |
| +----------+----+---------------------------------------------+
| | (uint64) | R2 | Number of target implementations |
| +----------+----+---------------------------------------------+
| | (uint64) | R3 | Reserved / Must be zero |
+---------------------+----------+----+---------------------------------------------+
``ARM_SMCCC_VENDOR_HYP_KVM_DISCOVER_IMPL_CPUS_FUNC_ID``
-------------------------------------------------------
Request the target CPU implementation information for the Guest VM. The Guest kernel
will use this information to enable the associated errata.
+---------------------+-------------------------------------------------------------+
| Presence: | Optional; KVM/ARM64 Guests only |
+---------------------+-------------------------------------------------------------+
| Calling convention: | HVC64 |
+---------------------+----------+--------------------------------------------------+
| Function ID: | (uint32) | 0xC6000041 |
+---------------------+----------+----+---------------------------------------------+
| Arguments: | (uint64) | R1 | selected implementation index |
| +----------+----+---------------------------------------------+
| | (uint64) | R2 | Reserved / Must be zero |
| +----------+----+---------------------------------------------+
| | (uint64) | R3 | Reserved / Must be zero |
+---------------------+----------+----+---------------------------------------------+
| Return Values: | (int64) | R0 | ``SUCCESS (0)`` |
| | | +---------------------------------------------+
| | | | ``INVALID_PARAMETER (-3)`` |
| +----------+----+---------------------------------------------+
| | (uint64) | R1 | MIDR_EL1 of the selected implementation |
| +----------+----+---------------------------------------------+
| | (uint64) | R2 | REVIDR_EL1 of the selected implementation |
| +----------+----+---------------------------------------------+
| | (uint64) | R3 | AIDR_EL1 of the selected implementation |
+---------------------+----------+----+---------------------------------------------+
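
A hedged guest-side sketch of the discovery call documented in the table above,
using the kernel's SMCCC 1.1 helper. The raw function ID 0xC6000041 comes from
the table; the function name and error handling are illustrative:

  #include <linux/arm-smccc.h>
  #include <linux/errno.h>

  static int query_target_impl(u64 idx, u64 *midr, u64 *revidr, u64 *aidr)
  {
          struct arm_smccc_res res;

          /* X1 carries the selected implementation index, per the table. */
          arm_smccc_1_1_invoke(0xC6000041, idx, 0, 0, &res);
          if ((long)res.a0 != SMCCC_RET_SUCCESS)
                  return -EINVAL;

          *midr   = res.a1;   /* MIDR_EL1 of the selected implementation */
          *revidr = res.a2;   /* REVIDR_EL1 */
          *aidr   = res.a3;   /* AIDR_EL1 */
          return 0;
  }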


@ -126,7 +126,8 @@ KVM_DEV_ARM_VGIC_GRP_ITS_REGS
ITS Restore Sequence:
---------------------
The following ordering must be followed when restoring the GIC and the ITS:
The following ordering must be followed when restoring the GIC, ITS, and
KVM_IRQFD assignments:
a) restore all guest memory and create vcpus
b) restore all redistributors
@ -139,6 +140,8 @@ d) restore the ITS in the following order:
3. Load the ITS table data (KVM_DEV_ARM_ITS_RESTORE_TABLES)
4. Restore GITS_CTLR
e) restore KVM_IRQFD assignments for MSIs
Then vcpus can be started.
ITS Table ABI REV0:


@ -291,8 +291,18 @@ Groups:
| Aff3 | Aff2 | Aff1 | Aff0 |
Errors:
======= =============================================
-EINVAL vINTID is not multiple of 32 or info field is
not VGIC_LEVEL_INFO_LINE_LEVEL
======= =============================================
KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ
Attributes:
The attr field of kvm_device_attr encodes the following values:
bits: | 31 .... 5 | 4 .... 0 |
values: | RES0 | vINTID |
The vINTID specifies which interrupt is generated when the vGIC
must generate a maintenance interrupt. This must be a PPI.
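
A hedged userspace sketch of programming the attribute documented above on the
vGIC device fd; the helper name is illustrative and error handling is minimal:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int set_vgic_maint_irq(int vgic_dev_fd, unsigned int vintid)
  {
          struct kvm_device_attr attr = {
                  .group = KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ,
                  .attr  = vintid,   /* bits [4:0]; must name a PPI */
          };

          return ioctl(vgic_dev_fd, KVM_SET_DEVICE_ATTR, &attr);
  }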


@ -196,7 +196,7 @@ writable between reading spte and updating spte. Like below case:
The Dirty bit is lost in this case.
In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; it means
if it can be updated out of mmu-lock [see spte_needs_atomic_update()]; it means
the spte is always atomically updated in this case.
3) flush tlbs due to spte updated
@ -212,7 +212,7 @@ function to update spte (present -> present).
Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte and the race caused by fast page fault can be avoided.
See the comments in spte_has_volatile_bits() and mmu_spte_update().
See the comments in spte_needs_atomic_update() and mmu_spte_update().
Lockless Access Tracking:


@ -71,6 +71,8 @@ cpucap_is_possible(const unsigned int cap)
* KVM MPAM support doesn't rely on the host kernel supporting MPAM.
*/
return true;
case ARM64_HAS_PMUV3:
return IS_ENABLED(CONFIG_HW_PERF_EVENTS);
}
return true;


@ -525,29 +525,6 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
return cpuid_feature_extract_unsigned_field_width(features, field, 4);
}
/*
* Fields that identify the version of the Performance Monitors Extension do
* not follow the standard ID scheme. See ARM DDI 0487E.a page D13-2825,
* "Alternative ID scheme used for the Performance Monitors Extension version".
*/
static inline u64 __attribute_const__
cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
{
u64 val = cpuid_feature_extract_unsigned_field(features, field);
u64 mask = GENMASK_ULL(field + 3, field);
/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
if (val == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
val = 0;
if (val > cap) {
features &= ~mask;
features |= (cap << field) & mask;
}
return features;
}
static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
{
return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
@ -866,6 +843,11 @@ static __always_inline bool system_supports_mpam_hcr(void)
return alternative_has_cap_unlikely(ARM64_MPAM_HCR);
}
static inline bool system_supports_pmuv3(void)
{
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
bool try_emulate_mrs(struct pt_regs *regs, u32 isn);


@ -245,6 +245,16 @@
#define read_cpuid(reg) read_sysreg_s(SYS_ ## reg)
/*
* The CPU ID never changes at run time, so we might as well tell the
* compiler that it's constant. Use this function to read the CPU ID
* rather than directly reading processor_id or read_cpuid() directly.
*/
static inline u32 __attribute_const__ read_cpuid_id(void)
{
return read_cpuid(MIDR_EL1);
}
/*
* Represent a range of MIDR values for a given CPU model and a
* range of variant/revision values.
@ -280,30 +290,14 @@ static inline bool midr_is_cpu_model_range(u32 midr, u32 model, u32 rv_min,
return _model == model && rv >= rv_min && rv <= rv_max;
}
static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
{
return midr_is_cpu_model_range(midr, range->model,
range->rv_min, range->rv_max);
}
struct target_impl_cpu {
u64 midr;
u64 revidr;
u64 aidr;
};
static inline bool
is_midr_in_range_list(u32 midr, struct midr_range const *ranges)
{
while (ranges->model)
if (is_midr_in_range(midr, ranges++))
return true;
return false;
}
/*
* The CPU ID never changes at run time, so we might as well tell the
* compiler that it's constant. Use this function to read the CPU ID
* rather than directly reading processor_id or read_cpuid() directly.
*/
static inline u32 __attribute_const__ read_cpuid_id(void)
{
return read_cpuid(MIDR_EL1);
}
bool cpu_errata_set_target_impl(u64 num, void *impl_cpus);
bool is_midr_in_range_list(struct midr_range const *ranges);
static inline u64 __attribute_const__ read_cpuid_mpidr(void)
{


@ -6,6 +6,7 @@
void kvm_init_hyp_services(void);
bool kvm_arm_hyp_service_available(u32 func_id);
void kvm_arm_target_impl_cpu_init(void);
#ifdef CONFIG_ARM_PKVM_GUEST
void pkvm_init_hyp_services(void);


@ -92,12 +92,12 @@
* SWIO: Turn set/way invalidates into set/way clean+invalidate
* PTW: Take a stage2 fault if a stage1 walk steps in device memory
* TID3: Trap EL1 reads of group 3 ID registers
* TID2: Trap CTR_EL0, CCSIDR2_EL1, CLIDR_EL1, and CSSELR_EL1
* TID1: Trap REVIDR_EL1, AIDR_EL1, and SMIDR_EL1
*/
#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
HCR_BSU_IS | HCR_FB | HCR_TACR | \
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3)
HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3 | HCR_TID1)
#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA)
#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)


@ -275,6 +275,19 @@ static __always_inline u64 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
return vcpu->arch.fault.esr_el2;
}
static inline bool guest_hyp_wfx_traps_enabled(const struct kvm_vcpu *vcpu)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
bool is_wfe = !!(esr & ESR_ELx_WFx_ISS_WFE);
u64 hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
return false;
return ((is_wfe && (hcr_el2 & HCR_TWE)) ||
(!is_wfe && (hcr_el2 & HCR_TWI)));
}
static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
@ -649,4 +662,28 @@ static inline bool guest_hyp_sve_traps_enabled(const struct kvm_vcpu *vcpu)
{
return __guest_hyp_cptr_xen_trap_enabled(vcpu, ZEN);
}
static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
if (cpus_have_final_cap(ARM64_HAS_HCX)) {
/*
* In general, all HCRX_EL2 bits are gated by a feature.
* The only reason we can set SMPME without checking any
* feature is that its effects are not directly observable
* from the guest.
*/
vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
if (kvm_has_tcr2(kvm))
vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En;
if (kvm_has_fpmr(kvm))
vcpu->arch.hcrx_el2 |= HCRX_EL2_EnFPM;
}
}
#endif /* __ARM64_KVM_EMULATE_H__ */


@ -44,14 +44,15 @@
#define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
#define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
#define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
#define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
#define KVM_REQ_NESTED_S2_UNMAP KVM_ARCH_REQ(8)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
#define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
#define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
#define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
#define KVM_REQ_NESTED_S2_UNMAP KVM_ARCH_REQ(8)
#define KVM_REQ_GUEST_HYP_IRQ_PENDING KVM_ARCH_REQ(9)
#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
KVM_DIRTY_LOG_INITIALLY_SET)
@ -86,6 +87,9 @@ struct kvm_hyp_memcache {
phys_addr_t head;
unsigned long nr_pages;
struct pkvm_mapping *mapping; /* only used from EL1 */
#define HYP_MEMCACHE_ACCOUNT_STAGE2 BIT(1)
unsigned long flags;
};
static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
@ -237,7 +241,8 @@ struct kvm_arch_memory_slot {
struct kvm_smccc_features {
unsigned long std_bmap;
unsigned long std_hyp_bmap;
unsigned long vendor_hyp_bmap;
unsigned long vendor_hyp_bmap; /* Function numbers 0-63 */
unsigned long vendor_hyp_bmap_2; /* Function numbers 64-127 */
};
typedef unsigned int pkvm_handle_t;
@ -245,6 +250,7 @@ typedef unsigned int pkvm_handle_t;
struct kvm_protected_vm {
pkvm_handle_t handle;
struct kvm_hyp_memcache teardown_mc;
struct kvm_hyp_memcache stage2_teardown_mc;
bool enabled;
};
@ -334,6 +340,8 @@ struct kvm_arch {
#define KVM_ARCH_FLAG_FGU_INITIALIZED 8
/* SVE exposed to guest */
#define KVM_ARCH_FLAG_GUEST_HAS_SVE 9
/* MIDR_EL1, REVIDR_EL1, and AIDR_EL1 are writable from userspace */
#define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS 10
unsigned long flags;
/* VM-wide vCPU feature set */
@ -373,6 +381,9 @@ struct kvm_arch {
#define KVM_ARM_ID_REG_NUM (IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
u64 id_regs[KVM_ARM_ID_REG_NUM];
u64 midr_el1;
u64 revidr_el1;
u64 aidr_el1;
u64 ctr_el0;
/* Masks for VNCR-backed and general EL2 sysregs */
@ -557,7 +568,33 @@ enum vcpu_sysreg {
VNCR(CNTP_CVAL_EL0),
VNCR(CNTP_CTL_EL0),
VNCR(ICH_LR0_EL2),
VNCR(ICH_LR1_EL2),
VNCR(ICH_LR2_EL2),
VNCR(ICH_LR3_EL2),
VNCR(ICH_LR4_EL2),
VNCR(ICH_LR5_EL2),
VNCR(ICH_LR6_EL2),
VNCR(ICH_LR7_EL2),
VNCR(ICH_LR8_EL2),
VNCR(ICH_LR9_EL2),
VNCR(ICH_LR10_EL2),
VNCR(ICH_LR11_EL2),
VNCR(ICH_LR12_EL2),
VNCR(ICH_LR13_EL2),
VNCR(ICH_LR14_EL2),
VNCR(ICH_LR15_EL2),
VNCR(ICH_AP0R0_EL2),
VNCR(ICH_AP0R1_EL2),
VNCR(ICH_AP0R2_EL2),
VNCR(ICH_AP0R3_EL2),
VNCR(ICH_AP1R0_EL2),
VNCR(ICH_AP1R1_EL2),
VNCR(ICH_AP1R2_EL2),
VNCR(ICH_AP1R3_EL2),
VNCR(ICH_HCR_EL2),
VNCR(ICH_VMCR_EL2),
NR_SYS_REGS /* Nothing after this line! */
};
@ -869,6 +906,8 @@ struct kvm_vcpu_arch {
#define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0))
/* SVE config completed */
#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1))
/* pKVM VCPU setup completed */
#define VCPU_PKVM_FINALIZED __vcpu_single_flag(cflags, BIT(2))
/* Exception pending */
#define PENDING_EXCEPTION __vcpu_single_flag(iflags, BIT(0))
@ -919,6 +958,8 @@ struct kvm_vcpu_arch {
#define PMUSERENR_ON_CPU __vcpu_single_flag(sflags, BIT(5))
/* WFI instruction trapped */
#define IN_WFI __vcpu_single_flag(sflags, BIT(6))
/* KVM is currently emulating a nested ERET */
#define IN_NESTED_ERET __vcpu_single_flag(sflags, BIT(7))
/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@ -1334,8 +1375,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
return cpus_have_final_cap(ARM64_SPECTRE_V3A);
}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
void kvm_init_host_debug_data(void);
void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_debug(struct kvm_vcpu *vcpu);
@ -1459,6 +1498,12 @@ static inline u64 *__vm_id_reg(struct kvm_arch *ka, u32 reg)
return &ka->id_regs[IDREG_IDX(reg)];
case SYS_CTR_EL0:
return &ka->ctr_el0;
case SYS_MIDR_EL1:
return &ka->midr_el1;
case SYS_REVIDR_EL1:
return &ka->revidr_el1;
case SYS_AIDR_EL1:
return &ka->aidr_el1;
default:
WARN_ON_ONCE(1);
return NULL;


@ -76,6 +76,8 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
u64 __gic_v3_get_lr(unsigned int lr);
void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);


@ -188,6 +188,7 @@ static inline bool kvm_supported_tlbi_s1e2_op(struct kvm_vcpu *vpcu, u32 instr)
}
int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu);
u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val);
#ifdef CONFIG_ARM64_PTR_AUTH
bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr);


@ -19,6 +19,7 @@
int pkvm_init_host_vm(struct kvm *kvm);
int pkvm_create_hyp_vm(struct kvm *kvm);
void pkvm_destroy_hyp_vm(struct kvm *kvm);
int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu);
/*
* This functions as an allow-list of protected VM capabilities.


@ -101,8 +101,7 @@ static inline bool kaslr_requires_kpti(void)
if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
extern const struct midr_range cavium_erratum_27456_cpus[];
if (is_midr_in_range_list(read_cpuid_id(),
cavium_erratum_27456_cpus))
if (is_midr_in_range_list(cavium_erratum_27456_cpus))
return false;
}


@ -562,9 +562,6 @@
#define SYS_ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4)
#define SYS_ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5)
#define SYS_ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0)
#define SYS_ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1)
#define SYS_ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2)
#define SYS_ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3)
#define SYS_ICH_ELRSR_EL2 sys_reg(3, 4, 12, 11, 5)
#define SYS_ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7)
@ -985,10 +982,6 @@
#define SYS_MPIDR_SAFE_VAL (BIT(31))
/* GIC Hypervisor interface registers */
/* ICH_MISR_EL2 bit definitions */
#define ICH_MISR_EOI (1 << 0)
#define ICH_MISR_U (1 << 1)
/* ICH_LR*_EL2 bit definitions */
#define ICH_LR_VIRTUAL_ID_MASK ((1ULL << 32) - 1)
@ -1003,17 +996,6 @@
#define ICH_LR_PRIORITY_SHIFT 48
#define ICH_LR_PRIORITY_MASK (0xffULL << ICH_LR_PRIORITY_SHIFT)
/* ICH_HCR_EL2 bit definitions */
#define ICH_HCR_EN (1 << 0)
#define ICH_HCR_UIE (1 << 1)
#define ICH_HCR_NPIE (1 << 3)
#define ICH_HCR_TC (1 << 10)
#define ICH_HCR_TALL0 (1 << 11)
#define ICH_HCR_TALL1 (1 << 12)
#define ICH_HCR_TDIR (1 << 14)
#define ICH_HCR_EOIcount_SHIFT 27
#define ICH_HCR_EOIcount_MASK (0x1f << ICH_HCR_EOIcount_SHIFT)
/* ICH_VMCR_EL2 bit definitions */
#define ICH_VMCR_ACK_CTL_SHIFT 2
#define ICH_VMCR_ACK_CTL_MASK (1 << ICH_VMCR_ACK_CTL_SHIFT)
@ -1034,18 +1016,6 @@
#define ICH_VMCR_ENG1_SHIFT 1
#define ICH_VMCR_ENG1_MASK (1 << ICH_VMCR_ENG1_SHIFT)
/* ICH_VTR_EL2 bit definitions */
#define ICH_VTR_PRI_BITS_SHIFT 29
#define ICH_VTR_PRI_BITS_MASK (7 << ICH_VTR_PRI_BITS_SHIFT)
#define ICH_VTR_ID_BITS_SHIFT 23
#define ICH_VTR_ID_BITS_MASK (7 << ICH_VTR_ID_BITS_SHIFT)
#define ICH_VTR_SEIS_SHIFT 22
#define ICH_VTR_SEIS_MASK (1 << ICH_VTR_SEIS_SHIFT)
#define ICH_VTR_A3V_SHIFT 21
#define ICH_VTR_A3V_MASK (1 << ICH_VTR_A3V_SHIFT)
#define ICH_VTR_TDS_SHIFT 19
#define ICH_VTR_TDS_MASK (1 << ICH_VTR_TDS_SHIFT)
/*
* Permission Indirection Extension (PIE) permission encodings.
* Encodings with the _O suffix, have overlays applied (Permission Overlay Extension).


@ -105,6 +105,7 @@ struct kvm_regs {
#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
#define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */
struct kvm_vcpu_init {
__u32 target;
@ -371,6 +372,7 @@ enum {
#endif
};
/* Vendor hyper call function numbers 0-63 */
#define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(2)
enum {
@ -381,6 +383,17 @@ enum {
#endif
};
/* Vendor hyper call function numbers 64-127 */
#define KVM_REG_ARM_VENDOR_HYP_BMAP_2 KVM_REG_ARM_FW_FEAT_BMAP_REG(3)
enum {
KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_VER = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_CPUS = 1,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_2_BIT_COUNT,
#endif
};
/* Device Control API on vm fd */
#define KVM_ARM_VM_SMCCC_CTRL 0
#define KVM_ARM_VM_SMCCC_FILTER 0
@ -403,6 +416,7 @@ enum {
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
#define KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ 9
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)


@ -14,31 +14,85 @@
#include <asm/kvm_asm.h>
#include <asm/smp_plat.h>
static u64 target_impl_cpu_num;
static struct target_impl_cpu *target_impl_cpus;
bool cpu_errata_set_target_impl(u64 num, void *impl_cpus)
{
if (target_impl_cpu_num || !num || !impl_cpus)
return false;
target_impl_cpu_num = num;
target_impl_cpus = impl_cpus;
return true;
}
static inline bool is_midr_in_range(struct midr_range const *range)
{
int i;
if (!target_impl_cpu_num)
return midr_is_cpu_model_range(read_cpuid_id(), range->model,
range->rv_min, range->rv_max);
for (i = 0; i < target_impl_cpu_num; i++) {
if (midr_is_cpu_model_range(target_impl_cpus[i].midr,
range->model,
range->rv_min, range->rv_max))
return true;
}
return false;
}
bool is_midr_in_range_list(struct midr_range const *ranges)
{
while (ranges->model)
if (is_midr_in_range(ranges++))
return true;
return false;
}
EXPORT_SYMBOL_GPL(is_midr_in_range_list);
static bool __maybe_unused
is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
__is_affected_midr_range(const struct arm64_cpu_capabilities *entry,
u32 midr, u32 revidr)
{
const struct arm64_midr_revidr *fix;
u32 midr = read_cpuid_id(), revidr;
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
if (!is_midr_in_range(midr, &entry->midr_range))
if (!is_midr_in_range(&entry->midr_range))
return false;
midr &= MIDR_REVISION_MASK | MIDR_VARIANT_MASK;
revidr = read_cpuid(REVIDR_EL1);
for (fix = entry->fixed_revs; fix && fix->revidr_mask; fix++)
if (midr == fix->midr_rv && (revidr & fix->revidr_mask))
return false;
return true;
}
static bool __maybe_unused
is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
{
int i;
if (!target_impl_cpu_num) {
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
return __is_affected_midr_range(entry, read_cpuid_id(),
read_cpuid(REVIDR_EL1));
}
for (i = 0; i < target_impl_cpu_num; i++) {
if (__is_affected_midr_range(entry, target_impl_cpus[i].midr,
target_impl_cpus[i].midr))
return true;
}
return false;
}
static bool __maybe_unused
is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
int scope)
{
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
return is_midr_in_range_list(entry->midr_range_list);
}
static bool __maybe_unused
@ -186,12 +240,48 @@ static bool __maybe_unused
has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
int scope)
{
u32 midr = read_cpuid_id();
bool has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
const struct midr_range range = MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1);
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
return is_midr_in_range(midr, &range) && has_dic;
return is_midr_in_range(&range) && has_dic;
}
static const struct midr_range impdef_pmuv3_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_ICESTORM),
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM),
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_ICESTORM_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_ICESTORM_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_MAX),
{},
};
static bool has_impdef_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
{
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
unsigned int pmuver;
if (!is_kernel_in_hyp_mode())
return false;
pmuver = cpuid_feature_extract_unsigned_field(dfr0,
ID_AA64DFR0_EL1_PMUVer_SHIFT);
if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
return false;
return is_midr_in_range_list(impdef_pmuv3_cpus);
}
static void cpu_enable_impdef_pmuv3_traps(const struct arm64_cpu_capabilities *__unused)
{
sysreg_clear_set_s(SYS_HACR_EL2, 0, BIT(56));
}
#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
@ -794,6 +884,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
{}
})),
},
{
.desc = "Apple IMPDEF PMUv3 Traps",
.capability = ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = has_impdef_pmuv3,
.cpu_enable = cpu_enable_impdef_pmuv3_traps,
},
{
}
};


@ -86,6 +86,7 @@
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
#include <asm/smp.h>
#include <asm/sysreg.h>
@ -497,6 +498,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr3[] = {
static const struct arm64_ftr_bits ftr_id_aa64mmfr4[] = {
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR4_EL1_E2H0_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR4_EL1_NV_frac_SHIFT, 4, 0),
ARM64_FTR_END,
};
@ -1792,7 +1794,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
char const *str = "kpti command line option";
bool meltdown_safe;
meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
meltdown_safe = is_midr_in_range_list(kpti_safe_list);
/* Defer to CPU feature registers */
if (has_cpuid_feature(entry, scope))
@ -1862,7 +1864,7 @@ static bool has_nv1(const struct arm64_cpu_capabilities *entry, int scope)
return (__system_matches_cap(ARM64_HAS_NESTED_VIRT) &&
!(has_cpuid_feature(entry, scope) ||
is_midr_in_range_list(read_cpuid_id(), nv1_ni_list)));
is_midr_in_range_list(nv1_ni_list)));
}
#if defined(ID_AA64MMFR0_EL1_TGRAN_LPA2) && defined(ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2)
@ -1898,6 +1900,28 @@ static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
}
#endif
#ifdef CONFIG_HW_PERF_EVENTS
static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
{
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
unsigned int pmuver;
/*
* PMUVer follows the standard ID scheme for an unsigned field with the
* exception of 0xF (IMP_DEF) which is treated specially and implies
* FEAT_PMUv3 is not implemented.
*
* See DDI0487L.a D24.1.3.2 for more details.
*/
pmuver = cpuid_feature_extract_unsigned_field(dfr0,
ID_AA64DFR0_EL1_PMUVer_SHIFT);
if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
return false;
return pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP;
}
#endif
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
#define KPTI_NG_TEMP_VA (-(1UL << PMD_SHIFT))
@ -2045,7 +2069,7 @@ static bool cpu_has_broken_dbm(void)
{},
};
return is_midr_in_range_list(read_cpuid_id(), cpus);
return is_midr_in_range_list(cpus);
}
static bool cpu_can_use_dbm(const struct arm64_cpu_capabilities *cap)
@ -2162,7 +2186,7 @@ static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
if (kvm_get_mode() != KVM_MODE_NV)
return false;
if (!has_cpuid_feature(cap, scope)) {
if (!cpucap_multi_entry_cap_matches(cap, scope)) {
pr_warn("unavailable: %s\n", cap->desc);
return false;
}
@ -2519,7 +2543,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_NESTED_VIRT,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_nested_virt_support,
ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
.match_list = (const struct arm64_cpu_capabilities []){
{
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
},
{
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY)
},
{ /* Sentinel */ }
},
},
{
.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
@ -2998,6 +3032,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, GCS, IMP)
},
#endif
#ifdef CONFIG_HW_PERF_EVENTS
{
.desc = "PMUv3",
.capability = ARM64_HAS_PMUV3,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_pmuv3,
},
#endif
{},
};
@ -3680,6 +3722,7 @@ unsigned long cpu_get_elf_hwcap3(void)
static void __init setup_boot_cpu_capabilities(void)
{
kvm_arm_target_impl_cpu_init();
/*
* The boot CPU's feature register values have been recorded. Detect
* boot cpucaps and local cpucaps for the boot CPU, then enable and


@ -49,6 +49,7 @@ PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override);
PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings);
#ifdef CONFIG_CAVIUM_ERRATUM_27456
PROVIDE(__pi_cavium_erratum_27456_cpus = cavium_erratum_27456_cpus);
PROVIDE(__pi_is_midr_in_range_list = is_midr_in_range_list);
#endif
PROVIDE(__pi__ctype = _ctype);
PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed);
@ -112,11 +113,6 @@ KVM_NVHE_ALIAS(broken_cntvoff_key);
KVM_NVHE_ALIAS(__start___kvm_ex_table);
KVM_NVHE_ALIAS(__stop___kvm_ex_table);
/* PMU available static key */
#ifdef CONFIG_HW_PERF_EVENTS
KVM_NVHE_ALIAS(kvm_arm_pmu_available);
#endif
/* Position-independent library routines */
KVM_NVHE_ALIAS_HYP(clear_page, __pi_clear_page);
KVM_NVHE_ALIAS_HYP(copy_page, __pi_copy_page);


@ -172,7 +172,7 @@ static enum mitigation_state spectre_v2_get_cpu_hw_mitigation_state(void)
return SPECTRE_UNAFFECTED;
/* Alternatively, we have a list of unaffected CPUs */
if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
if (is_midr_in_range_list(spectre_v2_safe_list))
return SPECTRE_UNAFFECTED;
return SPECTRE_VULNERABLE;
@ -331,7 +331,7 @@ bool has_spectre_v3a(const struct arm64_cpu_capabilities *entry, int scope)
};
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
return is_midr_in_range_list(read_cpuid_id(), spectre_v3a_unsafe_list);
return is_midr_in_range_list(spectre_v3a_unsafe_list);
}
void spectre_v3a_enable_mitigation(const struct arm64_cpu_capabilities *__unused)
@ -475,7 +475,7 @@ static enum mitigation_state spectre_v4_get_cpu_hw_mitigation_state(void)
{ /* sentinel */ },
};
if (is_midr_in_range_list(read_cpuid_id(), spectre_v4_safe_list))
if (is_midr_in_range_list(spectre_v4_safe_list))
return SPECTRE_UNAFFECTED;
/* CPU features are detected first */
@ -864,7 +864,7 @@ static bool is_spectre_bhb_safe(int scope)
if (scope != SCOPE_LOCAL_CPU)
return all_safe;
if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_safe_list))
if (is_midr_in_range_list(spectre_bhb_safe_list))
return true;
all_safe = false;
@ -913,17 +913,17 @@ static u8 spectre_bhb_loop_affected(void)
{},
};
if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k132_list))
if (is_midr_in_range_list(spectre_bhb_k132_list))
k = 132;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k38_list))
else if (is_midr_in_range_list(spectre_bhb_k38_list))
k = 38;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
else if (is_midr_in_range_list(spectre_bhb_k32_list))
k = 32;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
else if (is_midr_in_range_list(spectre_bhb_k24_list))
k = 24;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
else if (is_midr_in_range_list(spectre_bhb_k11_list))
k = 11;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
else if (is_midr_in_range_list(spectre_bhb_k8_list))
k = 8;
return k;


@ -23,7 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
vgic/vgic-its.o vgic/vgic-debug.o
vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o


@ -125,6 +125,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
}
mutex_unlock(&kvm->slots_lock);
break;
case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
mutex_lock(&kvm->lock);
if (!kvm->created_vcpus) {
r = 0;
set_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags);
}
mutex_unlock(&kvm->lock);
break;
default:
break;
}
@ -313,6 +321,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_SYSTEM_SUSPEND:
case KVM_CAP_IRQFD_RESAMPLE:
case KVM_CAP_COUNTER_OFFSET:
case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
@ -366,7 +375,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = get_num_wrps();
break;
case KVM_CAP_ARM_PMU_V3:
r = kvm_arm_support_pmu_v3();
r = kvm_supports_guest_pmuv3();
break;
case KVM_CAP_ARM_INJECT_SERROR_ESR:
r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
@ -466,7 +475,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
if (err)
return err;
return kvm_share_hyp(vcpu, vcpu + 1);
err = kvm_share_hyp(vcpu, vcpu + 1);
if (err)
kvm_vgic_vcpu_destroy(vcpu);
return err;
}
void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
@ -586,8 +599,12 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
nommu:
vcpu->cpu = cpu;
kvm_vgic_load(vcpu);
/*
* The timer must be loaded before the vgic to correctly set up physical
* interrupt deactivation in nested state (e.g. timer interrupt).
*/
kvm_timer_vcpu_load(vcpu);
kvm_vgic_load(vcpu);
kvm_vcpu_load_debug(vcpu);
if (has_vhe())
kvm_vcpu_load_vhe(vcpu);
@ -825,6 +842,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (ret)
return ret;
if (vcpu_has_nv(vcpu)) {
ret = kvm_vgic_vcpu_nv_init(vcpu);
if (ret)
return ret;
}
/*
* This needs to happen after any restriction has been applied
* to the feature set.
@ -835,14 +858,20 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (ret)
return ret;
ret = kvm_arm_pmu_v3_enable(vcpu);
if (ret)
return ret;
if (kvm_vcpu_has_pmu(vcpu)) {
ret = kvm_arm_pmu_v3_enable(vcpu);
if (ret)
return ret;
}
if (is_protected_kvm_enabled()) {
ret = pkvm_create_hyp_vm(kvm);
if (ret)
return ret;
ret = pkvm_create_hyp_vcpu(vcpu);
if (ret)
return ret;
}
mutex_lock(&kvm->arch.config_lock);
@ -1148,7 +1177,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
*/
preempt_disable();
kvm_pmu_flush_hwstate(vcpu);
if (kvm_vcpu_has_pmu(vcpu))
kvm_pmu_flush_hwstate(vcpu);
local_irq_disable();
@ -1167,7 +1197,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
if (ret <= 0 || kvm_vcpu_exit_request(vcpu, &ret)) {
vcpu->mode = OUTSIDE_GUEST_MODE;
isb(); /* Ensure work in x_flush_hwstate is committed */
kvm_pmu_sync_hwstate(vcpu);
if (kvm_vcpu_has_pmu(vcpu))
kvm_pmu_sync_hwstate(vcpu);
if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
kvm_timer_sync_user(vcpu);
kvm_vgic_sync_hwstate(vcpu);
@ -1197,7 +1228,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
* that the vgic can properly sample the updated state of the
* interrupt line.
*/
kvm_pmu_sync_hwstate(vcpu);
if (kvm_vcpu_has_pmu(vcpu))
kvm_pmu_sync_hwstate(vcpu);
/*
* Sync the vgic state before syncing the timer state because
@ -1386,7 +1418,7 @@ static unsigned long system_supported_vcpu_features(void)
if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1))
clear_bit(KVM_ARM_VCPU_EL1_32BIT, &features);
if (!kvm_arm_support_pmu_v3())
if (!kvm_supports_guest_pmuv3())
clear_bit(KVM_ARM_VCPU_PMU_V3, &features);
if (!system_supports_sve())
@ -2307,6 +2339,13 @@ static int __init init_subsystems(void)
goto out;
}
if (kvm_mode == KVM_MODE_NV &&
!(vgic_present && kvm_vgic_global_state.type == VGIC_V3)) {
kvm_err("NV support requires GICv3, giving up\n");
err = -EINVAL;
goto out;
}
/*
* Init HYP architected timer support
*/
@ -2714,6 +2753,14 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
{
struct kvm_kernel_irqfd *irqfd =
container_of(cons, struct kvm_kernel_irqfd, consumer);
struct kvm_kernel_irq_routing_entry *irq_entry = &irqfd->irq_entry;
/*
* The only thing we have a chance of directly-injecting is LPIs. Maybe
* one day...
*/
if (irq_entry->type != KVM_IRQ_ROUTING_MSI)
return 0;
return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
&irqfd->irq_entry);
@ -2723,6 +2770,10 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
{
struct kvm_kernel_irqfd *irqfd =
container_of(cons, struct kvm_kernel_irqfd, consumer);
struct kvm_kernel_irq_routing_entry *irq_entry = &irqfd->irq_entry;
if (irq_entry->type != KVM_IRQ_ROUTING_MSI)
return;
kvm_vgic_v4_unset_forwarding(irqfd->kvm, prod->irq,
&irqfd->irq_entry);
@ -2803,11 +2854,12 @@ static __init int kvm_arm_init(void)
if (err)
goto out_hyp;
kvm_info("%s%sVHE mode initialized successfully\n",
kvm_info("%s%sVHE%s mode initialized successfully\n",
in_hyp_mode ? "" : (is_protected_kvm_enabled() ?
"Protected " : "Hyp "),
in_hyp_mode ? "" : (cpus_have_final_cap(ARM64_KVM_HVHE) ?
"h" : "n"));
"h" : "n"),
cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) ? "+NV2": "");
/*
* FIXME: Do something reasonable if kvm_init() fails after pKVM


@ -412,26 +412,26 @@ static const struct trap_bits coarse_trap_bits[] = {
},
[CGT_ICH_HCR_TC] = {
.index = ICH_HCR_EL2,
.value = ICH_HCR_TC,
.mask = ICH_HCR_TC,
.value = ICH_HCR_EL2_TC,
.mask = ICH_HCR_EL2_TC,
.behaviour = BEHAVE_FORWARD_RW,
},
[CGT_ICH_HCR_TALL0] = {
.index = ICH_HCR_EL2,
.value = ICH_HCR_TALL0,
.mask = ICH_HCR_TALL0,
.value = ICH_HCR_EL2_TALL0,
.mask = ICH_HCR_EL2_TALL0,
.behaviour = BEHAVE_FORWARD_RW,
},
[CGT_ICH_HCR_TALL1] = {
.index = ICH_HCR_EL2,
.value = ICH_HCR_TALL1,
.mask = ICH_HCR_TALL1,
.value = ICH_HCR_EL2_TALL1,
.mask = ICH_HCR_EL2_TALL1,
.behaviour = BEHAVE_FORWARD_RW,
},
[CGT_ICH_HCR_TDIR] = {
.index = ICH_HCR_EL2,
.value = ICH_HCR_TDIR,
.mask = ICH_HCR_TDIR,
.value = ICH_HCR_EL2_TDIR,
.mask = ICH_HCR_EL2_TDIR,
.behaviour = BEHAVE_FORWARD_RW,
},
};
@ -2503,6 +2503,7 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
}
preempt_disable();
vcpu_set_flag(vcpu, IN_NESTED_ERET);
kvm_arch_vcpu_put(vcpu);
if (!esr_iss_is_eretax(esr))
@ -2514,9 +2515,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
*vcpu_cpsr(vcpu) = spsr;
kvm_arch_vcpu_load(vcpu, smp_processor_id());
vcpu_clear_flag(vcpu, IN_NESTED_ERET);
preempt_enable();
kvm_pmu_nested_transition(vcpu);
if (kvm_vcpu_has_pmu(vcpu))
kvm_pmu_nested_transition(vcpu);
}
static void kvm_inject_el2_exception(struct kvm_vcpu *vcpu, u64 esr_el2,
@ -2599,7 +2602,8 @@ static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
kvm_arch_vcpu_load(vcpu, smp_processor_id());
preempt_enable();
kvm_pmu_nested_transition(vcpu);
if (kvm_vcpu_has_pmu(vcpu))
kvm_pmu_nested_transition(vcpu);
return 1;
}


@ -129,8 +129,12 @@ static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
bool is_wfe = !!(esr & ESR_ELx_WFx_ISS_WFE);
if (esr & ESR_ELx_WFx_ISS_WFE) {
if (guest_hyp_wfx_traps_enabled(vcpu))
return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
if (is_wfe) {
trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
vcpu->stat.wfe_exit_stat++;
} else {


@ -244,7 +244,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
* counter, which could make a PMXEVCNTR_EL0 access UNDEF at
* EL1 instead of being trapped to EL2.
*/
if (kvm_arm_support_pmu_v3()) {
if (system_supports_pmuv3()) {
struct kvm_cpu_context *hctxt;
write_sysreg(0, pmselr_el0);
@ -281,7 +281,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
write_sysreg(*host_data_ptr(host_debug_state.mdcr_el2), mdcr_el2);
write_sysreg(0, hstr_el2);
if (kvm_arm_support_pmu_v3()) {
if (system_supports_pmuv3()) {
struct kvm_cpu_context *hctxt;
hctxt = host_data_ptr(host_ctxt);


@ -43,6 +43,17 @@ static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt)
return &ctxt_sys_reg(ctxt, MDSCR_EL1);
}
static inline u64 ctxt_midr_el1(struct kvm_cpu_context *ctxt)
{
struct kvm *kvm = kern_hyp_va(ctxt_to_vcpu(ctxt)->kvm);
if (!(ctxt_is_guest(ctxt) &&
test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags)))
return read_cpuid_id();
return kvm_read_vm_id_reg(kvm, SYS_MIDR_EL1);
}
static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
{
*ctxt_mdscr_el1(ctxt) = read_sysreg(mdscr_el1);
@ -168,8 +179,9 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
}
static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt,
u64 mpidr)
u64 midr, u64 mpidr)
{
write_sysreg(midr, vpidr_el2);
write_sysreg(mpidr, vmpidr_el2);
if (has_vhe() ||


@ -56,7 +56,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
int hyp_pin_shared_mem(void *from, void *to);
void hyp_unpin_shared_mem(void *from, void *to);
void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
void reclaim_pgtable_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc);


@ -43,12 +43,6 @@ struct pkvm_hyp_vm {
struct hyp_pool pool;
hyp_spinlock_t lock;
/*
* The number of vcpus initialized and ready to run.
* Modifying this is protected by 'vm_table_lock'.
*/
unsigned int nr_vcpus;
/* Array of the hyp vCPU structures for this VM. */
struct pkvm_hyp_vcpu *vcpus[];
};


@ -266,7 +266,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
return 0;
}
void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
void reclaim_pgtable_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
{
struct hyp_page *page;
void *addr;


@ -46,7 +46,8 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 |= HCR_FWB;
if (cpus_have_final_cap(ARM64_HAS_EVT) &&
!cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
!cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE) &&
kvm_read_vm_id_reg(vcpu->kvm, SYS_CTR_EL0) == read_cpuid(CTR_EL0))
vcpu->arch.hcr_el2 |= HCR_TID4;
else
vcpu->arch.hcr_el2 |= HCR_TID2;
@ -166,8 +167,13 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
pkvm_vcpu_reset_hcr(vcpu);
if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu))) {
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
/* Trust the host for non-protected vcpu features. */
vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
return 0;
}
ret = pkvm_check_pvm_cpu_features(vcpu);
if (ret)
@ -175,6 +181,7 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
pvm_init_traps_hcr(vcpu);
pvm_init_traps_mdcr(vcpu);
vcpu_set_hcrx(vcpu);
return 0;
}
@ -239,10 +246,12 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
hyp_spin_lock(&vm_table_lock);
hyp_vm = get_vm_by_handle(handle);
if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
if (!hyp_vm || hyp_vm->kvm.created_vcpus <= vcpu_idx)
goto unlock;
hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
if (!hyp_vcpu)
goto unlock;
/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
@ -315,6 +324,9 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
unsigned long host_arch_flags = READ_ONCE(host_kvm->arch.flags);
DECLARE_BITMAP(allowed_features, KVM_VCPU_MAX_FEATURES);
/* CTR_EL0 is always under host control, even for protected VMs. */
hyp_vm->kvm.arch.ctr_el0 = host_kvm->arch.ctr_el0;
if (test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &host_kvm->arch.flags))
set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
@ -325,6 +337,10 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
bitmap_copy(kvm->arch.vcpu_features,
host_kvm->arch.vcpu_features,
KVM_VCPU_MAX_FEATURES);
if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &host_arch_flags))
hyp_vm->kvm.arch.midr_el1 = host_kvm->arch.midr_el1;
return;
}
@ -361,8 +377,14 @@ static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[],
{
int i;
for (i = 0; i < nr_vcpus; i++)
unpin_host_vcpu(hyp_vcpus[i]->host_vcpu);
for (i = 0; i < nr_vcpus; i++) {
struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vcpus[i];
if (!hyp_vcpu)
continue;
unpin_host_vcpu(hyp_vcpu->host_vcpu);
}
}
static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
@ -386,24 +408,18 @@ static void pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *
static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
struct pkvm_hyp_vm *hyp_vm,
struct kvm_vcpu *host_vcpu,
unsigned int vcpu_idx)
struct kvm_vcpu *host_vcpu)
{
int ret = 0;
if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
return -EBUSY;
if (host_vcpu->vcpu_idx != vcpu_idx) {
ret = -EINVAL;
goto done;
}
hyp_vcpu->host_vcpu = host_vcpu;
hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
@ -641,27 +657,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
goto unlock;
}
idx = hyp_vm->nr_vcpus;
ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
if (ret)
goto unlock;
idx = hyp_vcpu->vcpu.vcpu_idx;
if (idx >= hyp_vm->kvm.created_vcpus) {
ret = -EINVAL;
goto unlock;
}
ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
if (ret)
if (hyp_vm->vcpus[idx]) {
ret = -EINVAL;
goto unlock;
}
hyp_vm->vcpus[idx] = hyp_vcpu;
hyp_vm->nr_vcpus++;
unlock:
hyp_spin_unlock(&vm_table_lock);
if (ret) {
if (ret)
unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
return ret;
}
return 0;
return ret;
}
static void
@ -678,7 +695,7 @@ teardown_donated_memory(struct kvm_hyp_memcache *mc, void *addr, size_t size)
int __pkvm_teardown_vm(pkvm_handle_t handle)
{
struct kvm_hyp_memcache *mc;
struct kvm_hyp_memcache *mc, *stage2_mc;
struct pkvm_hyp_vm *hyp_vm;
struct kvm *host_kvm;
unsigned int idx;
@ -706,18 +723,24 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
/* Reclaim guest pages (including page-table pages) */
mc = &host_kvm->arch.pkvm.teardown_mc;
reclaim_guest_pages(hyp_vm, mc);
unpin_host_vcpus(hyp_vm->vcpus, hyp_vm->nr_vcpus);
stage2_mc = &host_kvm->arch.pkvm.stage2_teardown_mc;
reclaim_pgtable_pages(hyp_vm, stage2_mc);
unpin_host_vcpus(hyp_vm->vcpus, hyp_vm->kvm.created_vcpus);
/* Push the metadata pages to the teardown memcache */
for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
for (idx = 0; idx < hyp_vm->kvm.created_vcpus; ++idx) {
struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
struct kvm_hyp_memcache *vcpu_mc;
if (!hyp_vcpu)
continue;
vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
while (vcpu_mc->nr_pages) {
void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
push_hyp_memcache(mc, addr, hyp_virt_to_phys);
push_hyp_memcache(stage2_mc, addr, hyp_virt_to_phys);
unmap_donated_memory_noclear(addr, PAGE_SIZE);
}


@ -28,7 +28,9 @@ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt)
void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
{
__sysreg_restore_el1_state(ctxt, ctxt_sys_reg(ctxt, MPIDR_EL1));
u64 midr = ctxt_midr_el1(ctxt);
__sysreg_restore_el1_state(ctxt, midr, ctxt_sys_reg(ctxt, MPIDR_EL1));
__sysreg_restore_common_state(ctxt);
__sysreg_restore_user_state(ctxt);
__sysreg_restore_el2_return_state(ctxt);


@ -18,7 +18,7 @@
#define vtr_to_nr_pre_bits(v) ((((u32)(v) >> 26) & 7) + 1)
#define vtr_to_nr_apr_regs(v) (1 << (vtr_to_nr_pre_bits(v) - 5))
static u64 __gic_v3_get_lr(unsigned int lr)
u64 __gic_v3_get_lr(unsigned int lr)
{
switch (lr & 0xf) {
case 0:
@ -218,7 +218,7 @@ void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if)
elrsr = read_gicreg(ICH_ELRSR_EL2);
write_gicreg(cpu_if->vgic_hcr & ~ICH_HCR_EN, ICH_HCR_EL2);
write_gicreg(cpu_if->vgic_hcr & ~ICH_HCR_EL2_En, ICH_HCR_EL2);
for (i = 0; i < used_lrs; i++) {
if (elrsr & (1 << i))
@ -274,7 +274,7 @@ void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if)
* system registers to trap to EL1 (duh), force ICC_SRE_EL1.SRE to 1
* so that the trap bits can take effect. Yes, we *loves* the GIC.
*/
if (!(cpu_if->vgic_hcr & ICH_HCR_EN)) {
if (!(cpu_if->vgic_hcr & ICH_HCR_EL2_En)) {
write_gicreg(ICC_SRE_EL1_SRE, ICC_SRE_EL1);
isb();
} else if (!cpu_if->vgic_sre) {
@ -752,7 +752,7 @@ static void __vgic_v3_bump_eoicount(void)
u32 hcr;
hcr = read_gicreg(ICH_HCR_EL2);
hcr += 1 << ICH_HCR_EOIcount_SHIFT;
hcr += 1 << ICH_HCR_EL2_EOIcount_SHIFT;
write_gicreg(hcr, ICH_HCR_EL2);
}
@ -1069,7 +1069,7 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
case SYS_ICC_EOIR0_EL1:
case SYS_ICC_HPPIR0_EL1:
case SYS_ICC_IAR0_EL1:
return ich_hcr & ICH_HCR_TALL0;
return ich_hcr & ICH_HCR_EL2_TALL0;
case SYS_ICC_IGRPEN1_EL1:
if (is_read &&
@ -1090,10 +1090,10 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
case SYS_ICC_EOIR1_EL1:
case SYS_ICC_HPPIR1_EL1:
case SYS_ICC_IAR1_EL1:
return ich_hcr & ICH_HCR_TALL1;
return ich_hcr & ICH_HCR_EL2_TALL1;
case SYS_ICC_DIR_EL1:
if (ich_hcr & ICH_HCR_TDIR)
if (ich_hcr & ICH_HCR_EL2_TDIR)
return true;
fallthrough;
@ -1101,7 +1101,7 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
case SYS_ICC_RPR_EL1:
case SYS_ICC_CTLR_EL1:
case SYS_ICC_PMR_EL1:
return ich_hcr & ICH_HCR_TC;
return ich_hcr & ICH_HCR_EL2_TC;
default:
return false;


@ -527,6 +527,25 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
return kvm_hyp_handle_sysreg(vcpu, exit_code);
}
static bool kvm_hyp_handle_impdef(struct kvm_vcpu *vcpu, u64 *exit_code)
{
u64 iss;
if (!cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
return false;
/*
* Compute a synthetic ESR for a sysreg trap. Conveniently, AFSR1_EL2
* is populated with a correct ISS for a sysreg trap. These fruity
* parts are 64bit only, so unconditionally set IL.
*/
iss = ESR_ELx_ISS(read_sysreg_s(SYS_AFSR1_EL2));
vcpu->arch.fault.esr_el2 = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SYS64) |
FIELD_PREP(ESR_ELx_ISS_MASK, iss) |
ESR_ELx_IL;
return false;
}
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
@ -538,6 +557,9 @@ static const exit_handler_fn hyp_exit_handlers[] = {
[ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low,
[ESR_ELx_EC_ERET] = kvm_hyp_handle_eret,
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
/* Apple shenanigans */
[0x3F] = kvm_hyp_handle_impdef,
};
static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)


@ -87,11 +87,12 @@ static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu)
write_sysreg(__vcpu_sys_reg(vcpu, PAR_EL1), par_el1);
write_sysreg(__vcpu_sys_reg(vcpu, TPIDR_EL1), tpidr_el1);
write_sysreg(__vcpu_sys_reg(vcpu, MPIDR_EL1), vmpidr_el2);
write_sysreg_el1(__vcpu_sys_reg(vcpu, MAIR_EL2), SYS_MAIR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, VBAR_EL2), SYS_VBAR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, CONTEXTIDR_EL2), SYS_CONTEXTIDR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, AMAIR_EL2), SYS_AMAIR);
write_sysreg(ctxt_midr_el1(&vcpu->arch.ctxt), vpidr_el2);
write_sysreg(__vcpu_sys_reg(vcpu, MPIDR_EL1), vmpidr_el2);
write_sysreg_el1(__vcpu_sys_reg(vcpu, MAIR_EL2), SYS_MAIR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, VBAR_EL2), SYS_VBAR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, CONTEXTIDR_EL2), SYS_CONTEXTIDR);
write_sysreg_el1(__vcpu_sys_reg(vcpu, AMAIR_EL2), SYS_AMAIR);
if (vcpu_el2_e2h_is_set(vcpu)) {
/*
@ -191,7 +192,7 @@ void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
struct kvm_cpu_context *host_ctxt;
u64 mpidr;
u64 midr, mpidr;
host_ctxt = host_data_ptr(host_ctxt);
__sysreg_save_user_state(host_ctxt);
@ -220,23 +221,18 @@ void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
__sysreg_restore_vel2_state(vcpu);
} else {
if (vcpu_has_nv(vcpu)) {
/*
* Use the guest hypervisor's VPIDR_EL2 when in a
* nested state. The hardware value of MIDR_EL1 gets
* restored on put.
*/
write_sysreg(ctxt_sys_reg(guest_ctxt, VPIDR_EL2), vpidr_el2);
/*
* As we're restoring a nested guest, set the value
* provided by the guest hypervisor.
*/
midr = ctxt_sys_reg(guest_ctxt, VPIDR_EL2);
mpidr = ctxt_sys_reg(guest_ctxt, VMPIDR_EL2);
} else {
midr = ctxt_midr_el1(guest_ctxt);
mpidr = ctxt_sys_reg(guest_ctxt, MPIDR_EL1);
}
__sysreg_restore_el1_state(guest_ctxt, mpidr);
__sysreg_restore_el1_state(guest_ctxt, midr, mpidr);
}
vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
@ -271,9 +267,5 @@ void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu)
/* Restore host user state */
__sysreg_restore_user_state(host_ctxt);
/* If leaving a nesting guest, restore MIDR_EL1 default view */
if (vcpu_has_nv(vcpu))
write_sysreg(read_cpuid_id(), vpidr_el2);
vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);
}


@ -15,6 +15,8 @@
GENMASK(KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT - 1, 0)
#define KVM_ARM_SMCCC_VENDOR_HYP_FEATURES \
GENMASK(KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT - 1, 0)
#define KVM_ARM_SMCCC_VENDOR_HYP_FEATURES_2 \
GENMASK(KVM_REG_ARM_VENDOR_HYP_BMAP_2_BIT_COUNT - 1, 0)
static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
{
@ -360,6 +362,8 @@ int kvm_smccc_call_handler(struct kvm_vcpu *vcpu)
break;
case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
val[0] = smccc_feat->vendor_hyp_bmap;
/* Function numbers 2-63 are reserved for pKVM for now */
val[2] = smccc_feat->vendor_hyp_bmap_2;
break;
case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
kvm_ptp_get_time(vcpu, val);
@ -387,6 +391,7 @@ static const u64 kvm_arm_fw_reg_ids[] = {
KVM_REG_ARM_STD_BMAP,
KVM_REG_ARM_STD_HYP_BMAP,
KVM_REG_ARM_VENDOR_HYP_BMAP,
KVM_REG_ARM_VENDOR_HYP_BMAP_2,
};
void kvm_arm_init_hypercalls(struct kvm *kvm)
@ -497,6 +502,9 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
case KVM_REG_ARM_VENDOR_HYP_BMAP:
val = READ_ONCE(smccc_feat->vendor_hyp_bmap);
break;
case KVM_REG_ARM_VENDOR_HYP_BMAP_2:
val = READ_ONCE(smccc_feat->vendor_hyp_bmap_2);
break;
default:
return -ENOENT;
}
@ -527,6 +535,10 @@ static int kvm_arm_set_fw_reg_bmap(struct kvm_vcpu *vcpu, u64 reg_id, u64 val)
fw_reg_bmap = &smccc_feat->vendor_hyp_bmap;
fw_reg_features = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES;
break;
case KVM_REG_ARM_VENDOR_HYP_BMAP_2:
fw_reg_bmap = &smccc_feat->vendor_hyp_bmap_2;
fw_reg_features = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES_2;
break;
default:
return -ENOENT;
}
@ -633,6 +645,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
case KVM_REG_ARM_STD_BMAP:
case KVM_REG_ARM_STD_HYP_BMAP:
case KVM_REG_ARM_VENDOR_HYP_BMAP:
case KVM_REG_ARM_VENDOR_HYP_BMAP_2:
return kvm_arm_set_fw_reg_bmap(vcpu, reg->id, val);
default:
return -ENOENT;


@ -1086,14 +1086,26 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
}
}
static void hyp_mc_free_fn(void *addr, void *unused)
static void hyp_mc_free_fn(void *addr, void *mc)
{
struct kvm_hyp_memcache *memcache = mc;
if (memcache->flags & HYP_MEMCACHE_ACCOUNT_STAGE2)
kvm_account_pgtable_pages(addr, -1);
free_page((unsigned long)addr);
}
static void *hyp_mc_alloc_fn(void *unused)
static void *hyp_mc_alloc_fn(void *mc)
{
return (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
struct kvm_hyp_memcache *memcache = mc;
void *addr;
addr = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
if (addr && memcache->flags & HYP_MEMCACHE_ACCOUNT_STAGE2)
kvm_account_pgtable_pages(addr, 1);
return addr;
}
void free_hyp_memcache(struct kvm_hyp_memcache *mc)
@ -1102,7 +1114,7 @@ void free_hyp_memcache(struct kvm_hyp_memcache *mc)
return;
kfree(mc->mapping);
__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, mc);
}
int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@ -1117,7 +1129,7 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
}
return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
kvm_host_pa, NULL);
kvm_host_pa, mc);
}
/**


@ -16,9 +16,6 @@
#include "sys_regs.h"
/* Protection against the sysreg repainting madness... */
#define NV_FTR(r, f) ID_AA64##r##_EL1_##f
/*
* Ratio of live shadow S2 MMU per vcpu. This is a trade-off between
* memory usage and potential number of different sets of S2 PTs in
@ -54,6 +51,10 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
struct kvm_s2_mmu *tmp;
int num_mmus, ret = 0;
if (test_bit(KVM_ARM_VCPU_HAS_EL2_E2H0, kvm->arch.vcpu_features) &&
!cpus_have_final_cap(ARM64_HAS_HCR_NV1))
return -EINVAL;
/*
* Let's treat memory allocation failures as benign: If we fail to
* allocate anything, return an error and keep the allocated array
@ -807,134 +808,151 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
* This list should get updated as new features get added to the NV
* support, and new extension to the architecture.
*/
static void limit_nv_id_regs(struct kvm *kvm)
u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
{
u64 val, tmp;
switch (reg) {
case SYS_ID_AA64ISAR0_EL1:
/* Support everything but TME */
val &= ~ID_AA64ISAR0_EL1_TME;
break;
/* Support everything but TME */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64ISAR0_EL1);
val &= ~NV_FTR(ISAR0, TME);
kvm_set_vm_id_reg(kvm, SYS_ID_AA64ISAR0_EL1, val);
case SYS_ID_AA64ISAR1_EL1:
/* Support everything but LS64 and Spec Invalidation */
val &= ~(ID_AA64ISAR1_EL1_LS64 |
ID_AA64ISAR1_EL1_SPECRES);
break;
/* Support everything but Spec Invalidation and LS64 */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64ISAR1_EL1);
val &= ~(NV_FTR(ISAR1, LS64) |
NV_FTR(ISAR1, SPECRES));
kvm_set_vm_id_reg(kvm, SYS_ID_AA64ISAR1_EL1, val);
case SYS_ID_AA64PFR0_EL1:
/* No RME, AMU, MPAM, S-EL2, or RAS */
val &= ~(ID_AA64PFR0_EL1_RME |
ID_AA64PFR0_EL1_AMU |
ID_AA64PFR0_EL1_MPAM |
ID_AA64PFR0_EL1_SEL2 |
ID_AA64PFR0_EL1_RAS |
ID_AA64PFR0_EL1_EL3 |
ID_AA64PFR0_EL1_EL2 |
ID_AA64PFR0_EL1_EL1 |
ID_AA64PFR0_EL1_EL0);
/* 64bit only at any EL */
val |= SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP);
val |= SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP);
val |= SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP);
val |= SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL3, IMP);
break;
/* No AMU, MPAM, S-EL2, or RAS */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1);
val &= ~(GENMASK_ULL(55, 52) |
NV_FTR(PFR0, AMU) |
NV_FTR(PFR0, MPAM) |
NV_FTR(PFR0, SEL2) |
NV_FTR(PFR0, RAS) |
NV_FTR(PFR0, EL3) |
NV_FTR(PFR0, EL2) |
NV_FTR(PFR0, EL1) |
NV_FTR(PFR0, EL0));
/* 64bit only at any EL */
val |= FIELD_PREP(NV_FTR(PFR0, EL0), 0b0001);
val |= FIELD_PREP(NV_FTR(PFR0, EL1), 0b0001);
val |= FIELD_PREP(NV_FTR(PFR0, EL2), 0b0001);
val |= FIELD_PREP(NV_FTR(PFR0, EL3), 0b0001);
kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1, val);
case SYS_ID_AA64PFR1_EL1:
/* Only support BTI, SSBS, CSV2_frac */
val &= (ID_AA64PFR1_EL1_BT |
ID_AA64PFR1_EL1_SSBS |
ID_AA64PFR1_EL1_CSV2_frac);
break;
/* Only support BTI, SSBS, CSV2_frac */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR1_EL1);
val &= (NV_FTR(PFR1, BT) |
NV_FTR(PFR1, SSBS) |
NV_FTR(PFR1, CSV2_frac));
kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR1_EL1, val);
case SYS_ID_AA64MMFR0_EL1:
/* Hide ExS, Secure Memory */
val &= ~(ID_AA64MMFR0_EL1_EXS |
ID_AA64MMFR0_EL1_TGRAN4_2 |
ID_AA64MMFR0_EL1_TGRAN16_2 |
ID_AA64MMFR0_EL1_TGRAN64_2 |
ID_AA64MMFR0_EL1_SNSMEM);
/* Hide ECV, ExS, Secure Memory */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64MMFR0_EL1);
val &= ~(NV_FTR(MMFR0, ECV) |
NV_FTR(MMFR0, EXS) |
NV_FTR(MMFR0, TGRAN4_2) |
NV_FTR(MMFR0, TGRAN16_2) |
NV_FTR(MMFR0, TGRAN64_2) |
NV_FTR(MMFR0, SNSMEM));
/* Hide CNTPOFF if present */
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64MMFR0_EL1, ECV, IMP);
/* Disallow unsupported S2 page sizes */
switch (PAGE_SIZE) {
case SZ_64K:
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN16_2), 0b0001);
fallthrough;
case SZ_16K:
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN4_2), 0b0001);
fallthrough;
case SZ_4K:
/* Support everything */
/* Disallow unsupported S2 page sizes */
switch (PAGE_SIZE) {
case SZ_64K:
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, NI);
fallthrough;
case SZ_16K:
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, NI);
fallthrough;
case SZ_4K:
/* Support everything */
break;
}
/*
* Since we can't support a guest S2 page size smaller
* than the host's own page size (due to KVM only
* populating its own S2 using the kernel's page
* size), advertise the limitation using FEAT_GTG.
*/
switch (PAGE_SIZE) {
case SZ_4K:
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
fallthrough;
case SZ_16K:
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP);
fallthrough;
case SZ_64K:
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP);
break;
}
/* Cap PARange to 48bits */
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64MMFR0_EL1, PARANGE, 48);
break;
case SYS_ID_AA64MMFR1_EL1:
val &= (ID_AA64MMFR1_EL1_HCX |
ID_AA64MMFR1_EL1_PAN |
ID_AA64MMFR1_EL1_LO |
ID_AA64MMFR1_EL1_HPDS |
ID_AA64MMFR1_EL1_VH |
ID_AA64MMFR1_EL1_VMIDBits);
/* FEAT_E2H0 implies no VHE */
if (test_bit(KVM_ARM_VCPU_HAS_EL2_E2H0, kvm->arch.vcpu_features))
val &= ~ID_AA64MMFR1_EL1_VH;
break;
case SYS_ID_AA64MMFR2_EL1:
val &= ~(ID_AA64MMFR2_EL1_BBM |
ID_AA64MMFR2_EL1_TTL |
GENMASK_ULL(47, 44) |
ID_AA64MMFR2_EL1_ST |
ID_AA64MMFR2_EL1_CCIDX |
ID_AA64MMFR2_EL1_VARange);
/* Force TTL support */
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR2_EL1, TTL, IMP);
break;
case SYS_ID_AA64MMFR4_EL1:
/*
* You get EITHER
*
* - FEAT_VHE without FEAT_E2H0
* - FEAT_NV limited to FEAT_NV2
* - HCR_EL2.NV1 being RES0
*
* OR
*
* - FEAT_E2H0 without FEAT_VHE nor FEAT_NV
*
* Life is too short for anything else.
*/
if (test_bit(KVM_ARM_VCPU_HAS_EL2_E2H0, kvm->arch.vcpu_features)) {
val = 0;
} else {
val = SYS_FIELD_PREP_ENUM(ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY);
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR4_EL1, E2H0, NI_NV1);
}
break;
case SYS_ID_AA64DFR0_EL1:
/* Only limited support for PMU, Debug, BPs, WPs, and HPMN0 */
val &= (ID_AA64DFR0_EL1_PMUVer |
ID_AA64DFR0_EL1_WRPs |
ID_AA64DFR0_EL1_BRPs |
ID_AA64DFR0_EL1_DebugVer|
ID_AA64DFR0_EL1_HPMN0);
/* Cap Debug to ARMv8.1 */
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, VHE);
break;
}
/*
* Since we can't support a guest S2 page size smaller than
* the host's own page size (due to KVM only populating its
* own S2 using the kernel's page size), advertise the
* limitation using FEAT_GTG.
*/
switch (PAGE_SIZE) {
case SZ_4K:
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN4_2), 0b0010);
fallthrough;
case SZ_16K:
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN16_2), 0b0010);
fallthrough;
case SZ_64K:
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN64_2), 0b0010);
break;
}
/* Cap PARange to 48bits */
tmp = FIELD_GET(NV_FTR(MMFR0, PARANGE), val);
if (tmp > 0b0101) {
val &= ~NV_FTR(MMFR0, PARANGE);
val |= FIELD_PREP(NV_FTR(MMFR0, PARANGE), 0b0101);
}
kvm_set_vm_id_reg(kvm, SYS_ID_AA64MMFR0_EL1, val);
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64MMFR1_EL1);
val &= (NV_FTR(MMFR1, HCX) |
NV_FTR(MMFR1, PAN) |
NV_FTR(MMFR1, LO) |
NV_FTR(MMFR1, HPDS) |
NV_FTR(MMFR1, VH) |
NV_FTR(MMFR1, VMIDBits));
kvm_set_vm_id_reg(kvm, SYS_ID_AA64MMFR1_EL1, val);
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64MMFR2_EL1);
val &= ~(NV_FTR(MMFR2, BBM) |
NV_FTR(MMFR2, TTL) |
GENMASK_ULL(47, 44) |
NV_FTR(MMFR2, ST) |
NV_FTR(MMFR2, CCIDX) |
NV_FTR(MMFR2, VARange));
/* Force TTL support */
val |= FIELD_PREP(NV_FTR(MMFR2, TTL), 0b0001);
kvm_set_vm_id_reg(kvm, SYS_ID_AA64MMFR2_EL1, val);
val = 0;
if (!cpus_have_final_cap(ARM64_HAS_HCR_NV1))
val |= FIELD_PREP(NV_FTR(MMFR4, E2H0),
ID_AA64MMFR4_EL1_E2H0_NI_NV1);
kvm_set_vm_id_reg(kvm, SYS_ID_AA64MMFR4_EL1, val);
/* Only limited support for PMU, Debug, BPs, WPs, and HPMN0 */
val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
val &= (NV_FTR(DFR0, PMUVer) |
NV_FTR(DFR0, WRPs) |
NV_FTR(DFR0, BRPs) |
NV_FTR(DFR0, DebugVer) |
NV_FTR(DFR0, HPMN0));
/* Cap Debug to ARMv8.1 */
tmp = FIELD_GET(NV_FTR(DFR0, DebugVer), val);
if (tmp > 0b0111) {
val &= ~NV_FTR(DFR0, DebugVer);
val |= FIELD_PREP(NV_FTR(DFR0, DebugVer), 0b0111);
}
kvm_set_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1, val);
return val;
}
u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *vcpu,
@ -981,8 +999,6 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
if (!kvm->arch.sysreg_masks)
return -ENOMEM;
limit_nv_id_regs(kvm);
/* VTTBR_EL2 */
res0 = res1 = 0;
if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
@ -1021,10 +1037,11 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 |= HCR_FIEN;
if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
res0 |= HCR_FWB;
if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
res0 |= HCR_NV2;
if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
/* Implementation choice: NV2 is the only supported config */
if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY))
res0 |= (HCR_NV2 | HCR_NV | HCR_AT);
if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, NI))
res0 |= HCR_NV1;
if (!(kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
res0 |= (HCR_API | HCR_APK);
@ -1034,6 +1051,8 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 |= (HCR_TEA | HCR_TERR);
if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
res0 |= HCR_TLOR;
if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, VH, IMP))
res0 |= HCR_E2H;
if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
res1 |= HCR_E2H;
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
@ -1290,6 +1309,15 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 |= GENMASK(11, 8);
set_sysreg_masks(kvm, CNTHCTL_EL2, res0, res1);
/* ICH_HCR_EL2 */
res0 = ICH_HCR_EL2_RES0;
res1 = ICH_HCR_EL2_RES1;
if (!(kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_EL2_TDS))
res0 |= ICH_HCR_EL2_TDIR;
/* No GICv4 is presented to the guest */
res0 |= ICH_HCR_EL2_DVIM | ICH_HCR_EL2_vSGIEOICount;
set_sysreg_masks(kvm, ICH_HCR_EL2, res0, res1);
out:
for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++)
(void)__vcpu_sys_reg(vcpu, sr);
@ -1309,4 +1337,8 @@ void check_nested_vcpu_requests(struct kvm_vcpu *vcpu)
}
write_unlock(&vcpu->kvm->mmu_lock);
}
/* Must be last, as may switch context! */
if (kvm_check_request(KVM_REQ_GUEST_HYP_IRQ_PENDING, vcpu))
kvm_inject_nested_irq(vcpu);
}


@ -111,6 +111,29 @@ static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
host_kvm->arch.pkvm.handle = 0;
free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc);
free_hyp_memcache(&host_kvm->arch.pkvm.stage2_teardown_mc);
}
static int __pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
{
size_t hyp_vcpu_sz = PAGE_ALIGN(PKVM_HYP_VCPU_SIZE);
pkvm_handle_t handle = vcpu->kvm->arch.pkvm.handle;
void *hyp_vcpu;
int ret;
vcpu->arch.pkvm_memcache.flags |= HYP_MEMCACHE_ACCOUNT_STAGE2;
hyp_vcpu = alloc_pages_exact(hyp_vcpu_sz, GFP_KERNEL_ACCOUNT);
if (!hyp_vcpu)
return -ENOMEM;
ret = kvm_call_hyp_nvhe(__pkvm_init_vcpu, handle, vcpu, hyp_vcpu);
if (!ret)
vcpu_set_flag(vcpu, VCPU_PKVM_FINALIZED);
else
free_pages_exact(hyp_vcpu, hyp_vcpu_sz);
return ret;
}
/*
@ -125,11 +148,8 @@ static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
*/
static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
{
size_t pgd_sz, hyp_vm_sz, hyp_vcpu_sz;
struct kvm_vcpu *host_vcpu;
pkvm_handle_t handle;
size_t pgd_sz, hyp_vm_sz;
void *pgd, *hyp_vm;
unsigned long idx;
int ret;
if (host_kvm->created_vcpus < 1)
@ -161,40 +181,11 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
if (ret < 0)
goto free_vm;
handle = ret;
host_kvm->arch.pkvm.handle = handle;
/* Donate memory for the vcpus at hyp and initialize it. */
hyp_vcpu_sz = PAGE_ALIGN(PKVM_HYP_VCPU_SIZE);
kvm_for_each_vcpu(idx, host_vcpu, host_kvm) {
void *hyp_vcpu;
/* Indexing of the vcpus to be sequential starting at 0. */
if (WARN_ON(host_vcpu->vcpu_idx != idx)) {
ret = -EINVAL;
goto destroy_vm;
}
hyp_vcpu = alloc_pages_exact(hyp_vcpu_sz, GFP_KERNEL_ACCOUNT);
if (!hyp_vcpu) {
ret = -ENOMEM;
goto destroy_vm;
}
ret = kvm_call_hyp_nvhe(__pkvm_init_vcpu, handle, host_vcpu,
hyp_vcpu);
if (ret) {
free_pages_exact(hyp_vcpu, hyp_vcpu_sz);
goto destroy_vm;
}
}
host_kvm->arch.pkvm.handle = ret;
host_kvm->arch.pkvm.stage2_teardown_mc.flags |= HYP_MEMCACHE_ACCOUNT_STAGE2;
kvm_account_pgtable_pages(pgd, pgd_sz / PAGE_SIZE);
return 0;
destroy_vm:
__pkvm_destroy_hyp_vm(host_kvm);
return ret;
free_vm:
free_pages_exact(hyp_vm, hyp_vm_sz);
free_pgd:
@ -214,6 +205,18 @@ int pkvm_create_hyp_vm(struct kvm *host_kvm)
return ret;
}
int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
{
int ret = 0;
mutex_lock(&vcpu->kvm->arch.config_lock);
if (!vcpu_get_flag(vcpu, VCPU_PKVM_FINALIZED))
ret = __pkvm_create_hyp_vcpu(vcpu);
mutex_unlock(&vcpu->kvm->arch.config_lock);
return ret;
}
void pkvm_destroy_hyp_vm(struct kvm *host_kvm)
{
mutex_lock(&host_kvm->arch.config_lock);


@ -17,8 +17,6 @@
#define PERF_ATTR_CFG1_COUNTER_64BIT BIT(0)
DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);
static LIST_HEAD(arm_pmus);
static DEFINE_MUTEX(arm_pmus_lock);
@ -26,6 +24,12 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
bool kvm_supports_guest_pmuv3(void)
{
guard(mutex)(&arm_pmus_lock);
return !list_empty(&arm_pmus);
}
static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
{
return container_of(pmc, struct kvm_vcpu, arch.pmu.pmc[pmc->idx]);
@ -150,9 +154,6 @@ static u64 kvm_pmu_get_pmc_value(struct kvm_pmc *pmc)
*/
u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
{
if (!kvm_vcpu_has_pmu(vcpu))
return 0;
return kvm_pmu_get_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, select_idx));
}
@ -191,12 +192,22 @@ static void kvm_pmu_set_pmc_value(struct kvm_pmc *pmc, u64 val, bool force)
*/
void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
{
if (!kvm_vcpu_has_pmu(vcpu))
return;
kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, select_idx), val, false);
}
/**
* kvm_pmu_set_counter_value_user - set PMU counter value from user
* @vcpu: The vcpu pointer
* @select_idx: The counter index
* @val: The counter value
*/
void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
{
kvm_pmu_release_perf_event(kvm_vcpu_idx_to_pmc(vcpu, select_idx));
__vcpu_sys_reg(vcpu, counter_index_to_reg(select_idx)) = val;
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
}
/**
* kvm_pmu_release_perf_event - remove the perf event
* @pmc: The PMU counter pointer
@ -247,20 +258,6 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
pmu->pmc[i].idx = i;
}
/**
* kvm_pmu_vcpu_reset - reset pmu state for cpu
* @vcpu: The vcpu pointer
*
*/
void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
{
unsigned long mask = kvm_pmu_implemented_counter_mask(vcpu);
int i;
for_each_set_bit(i, &mask, 32)
kvm_pmu_stop_counter(kvm_vcpu_idx_to_pmc(vcpu, i));
}
/**
* kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
* @vcpu: The vcpu pointer
@ -350,7 +347,7 @@ void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
{
int i;
if (!kvm_vcpu_has_pmu(vcpu) || !val)
if (!val)
return;
for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) {
@ -401,9 +398,6 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
struct kvm_pmu *pmu = &vcpu->arch.pmu;
bool overflow;
if (!kvm_vcpu_has_pmu(vcpu))
return;
overflow = kvm_pmu_overflow_status(vcpu);
if (pmu->irq_level == overflow)
return;
@ -599,9 +593,6 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
{
int i;
if (!kvm_vcpu_has_pmu(vcpu))
return;
/* Fixup PMCR_EL0 to reconcile the PMU version and the LP bit */
if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5))
val &= ~ARMV8_PMU_PMCR_LP;
@ -673,6 +664,20 @@ static bool kvm_pmc_counts_at_el2(struct kvm_pmc *pmc)
return kvm_pmc_read_evtreg(pmc) & ARMV8_PMU_INCLUDE_EL2;
}
static int kvm_map_pmu_event(struct kvm *kvm, unsigned int eventsel)
{
struct arm_pmu *pmu = kvm->arch.arm_pmu;
/*
* The CPU PMU likely isn't PMUv3; let the driver provide a mapping
* for the guest's PMUv3 event ID.
*/
if (unlikely(pmu->map_pmuv3_event))
return pmu->map_pmuv3_event(eventsel);
return eventsel;
}
/**
* kvm_pmu_create_perf_event - create a perf event for a counter
* @pmc: Counter context
@ -683,7 +688,8 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
struct arm_pmu *arm_pmu = vcpu->kvm->arch.arm_pmu;
struct perf_event *event;
struct perf_event_attr attr;
u64 eventsel, evtreg;
int eventsel;
u64 evtreg;
evtreg = kvm_pmc_read_evtreg(pmc);
@ -709,6 +715,14 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
!test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
return;
/*
* Don't create an event if we're running on hardware that requires
* PMUv3 event translation and we couldn't find a valid mapping.
*/
eventsel = kvm_map_pmu_event(vcpu->kvm, eventsel);
if (eventsel < 0)
return;
memset(&attr, 0, sizeof(struct perf_event_attr));
attr.type = arm_pmu->pmu.type;
attr.size = sizeof(attr);
@ -766,9 +780,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, select_idx);
u64 reg;
if (!kvm_vcpu_has_pmu(vcpu))
return;
reg = counter_index_to_evtreg(pmc->idx);
__vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm);
@ -786,29 +797,23 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
if (!pmuv3_implemented(kvm_arm_pmu_get_pmuver_limit()))
return;
mutex_lock(&arm_pmus_lock);
guard(mutex)(&arm_pmus_lock);
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
goto out_unlock;
return;
entry->arm_pmu = pmu;
list_add_tail(&entry->entry, &arm_pmus);
if (list_is_singular(&arm_pmus))
static_branch_enable(&kvm_arm_pmu_available);
out_unlock:
mutex_unlock(&arm_pmus_lock);
}
static struct arm_pmu *kvm_pmu_probe_armpmu(void)
{
struct arm_pmu *tmp, *pmu = NULL;
struct arm_pmu_entry *entry;
struct arm_pmu *pmu;
int cpu;
mutex_lock(&arm_pmus_lock);
guard(mutex)(&arm_pmus_lock);
/*
* It is safe to use a stale cpu to iterate the list of PMUs so long as
@ -829,42 +834,62 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
*/
cpu = raw_smp_processor_id();
list_for_each_entry(entry, &arm_pmus, entry) {
tmp = entry->arm_pmu;
pmu = entry->arm_pmu;
if (cpumask_test_cpu(cpu, &tmp->supported_cpus)) {
pmu = tmp;
break;
}
if (cpumask_test_cpu(cpu, &pmu->supported_cpus))
return pmu;
}
mutex_unlock(&arm_pmus_lock);
return NULL;
}
return pmu;
static u64 __compute_pmceid(struct arm_pmu *pmu, bool pmceid1)
{
u32 hi[2], lo[2];
bitmap_to_arr32(lo, pmu->pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
bitmap_to_arr32(hi, pmu->pmceid_ext_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
return ((u64)hi[pmceid1] << 32) | lo[pmceid1];
}
static u64 compute_pmceid0(struct arm_pmu *pmu)
{
u64 val = __compute_pmceid(pmu, 0);
/* always support SW_INCR */
val |= BIT(ARMV8_PMUV3_PERFCTR_SW_INCR);
/* always support CHAIN */
val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
return val;
}
static u64 compute_pmceid1(struct arm_pmu *pmu)
{
u64 val = __compute_pmceid(pmu, 1);
/*
* Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
* as RAZ
*/
val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
return val;
}
u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
{
struct arm_pmu *cpu_pmu = vcpu->kvm->arch.arm_pmu;
unsigned long *bmap = vcpu->kvm->arch.pmu_filter;
u64 val, mask = 0;
int base, i, nr_events;
if (!kvm_vcpu_has_pmu(vcpu))
return 0;
if (!pmceid1) {
val = read_sysreg(pmceid0_el0);
/* always support CHAIN */
val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
val = compute_pmceid0(cpu_pmu);
base = 0;
} else {
val = read_sysreg(pmceid1_el0);
/*
* Don't advertise STALL_SLOT*, as PMMIR_EL0 is handled
* as RAZ
*/
val &= ~(BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32) |
BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND - 32) |
BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND - 32));
val = compute_pmceid1(cpu_pmu);
base = 32;
}
@ -900,9 +925,6 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
{
if (!kvm_vcpu_has_pmu(vcpu))
return 0;
if (!vcpu->arch.pmu.created)
return -EINVAL;
@ -925,9 +947,6 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
return -EINVAL;
}
/* One-off reload of the PMU on first run */
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return 0;
}
@ -994,6 +1013,13 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
/*
* PMUv3 requires that all event counters are capable of counting any
* event, though the same may not be true of non-PMUv3 hardware.
*/
if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
return 1;
/*
* The arm_pmu->cntr_mask considers the fixed counter(s) as well.
* Ignore those and return only the general-purpose counters.
@ -1205,13 +1231,26 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
u8 kvm_arm_pmu_get_pmuver_limit(void)
{
u64 tmp;
unsigned int pmuver;
tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
tmp = cpuid_feature_cap_perfmon_field(tmp,
ID_AA64DFR0_EL1_PMUVer_SHIFT,
ID_AA64DFR0_EL1_PMUVer_V3P5);
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer,
read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1));
/*
* Spoof a barebones PMUv3 implementation if the system supports IMPDEF
* traps of the PMUv3 sysregs
*/
if (cpus_have_final_cap(ARM64_WORKAROUND_PMUV3_IMPDEF_TRAPS))
return ID_AA64DFR0_EL1_PMUVer_IMP;
/*
* Otherwise, treat IMPLEMENTATION DEFINED functionality as
* unimplemented
*/
if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
return 0;
return min(pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
}
/**
@ -1231,9 +1270,6 @@ void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu)
unsigned long mask;
int i;
if (!kvm_vcpu_has_pmu(vcpu))
return;
mask = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
for_each_set_bit(i, &mask, 32) {
struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, i);


@ -41,7 +41,7 @@ void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr)
{
struct kvm_pmu_events *pmu = kvm_get_pmu_events();
if (!kvm_arm_support_pmu_v3() || !kvm_pmu_switch_needed(attr))
if (!system_supports_pmuv3() || !kvm_pmu_switch_needed(attr))
return;
if (!attr->exclude_host)
@ -57,7 +57,7 @@ void kvm_clr_pmu_events(u64 clr)
{
struct kvm_pmu_events *pmu = kvm_get_pmu_events();
if (!kvm_arm_support_pmu_v3())
if (!system_supports_pmuv3())
return;
pmu->events_host &= ~clr;
@ -133,7 +133,7 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
struct kvm_pmu_events *pmu;
u64 events_guest, events_host;
if (!kvm_arm_support_pmu_v3() || !has_vhe())
if (!system_supports_pmuv3() || !has_vhe())
return;
preempt_disable();
@ -154,7 +154,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
struct kvm_pmu_events *pmu;
u64 events_guest, events_host;
if (!kvm_arm_support_pmu_v3() || !has_vhe())
if (!system_supports_pmuv3() || !has_vhe())
return;
pmu = kvm_get_pmu_events();
@ -180,7 +180,7 @@ bool kvm_set_pmuserenr(u64 val)
struct kvm_cpu_context *hctxt;
struct kvm_vcpu *vcpu;
if (!kvm_arm_support_pmu_v3() || !has_vhe())
if (!system_supports_pmuv3() || !has_vhe())
return false;
vcpu = kvm_get_running_vcpu();


@ -196,9 +196,6 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
vcpu->arch.reset_state.reset = false;
spin_unlock(&vcpu->arch.mp_state_lock);
/* Reset PMU outside of the non-preemptible section */
kvm_pmu_vcpu_reset(vcpu);
preempt_disable();
loaded = (vcpu->cpu != -1);
if (loaded)


@ -17,6 +17,7 @@
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/uaccess.h>
#include <linux/irqchip/arm-gic-v3.h>
#include <asm/arm_pmuv3.h>
#include <asm/cacheflush.h>
@ -531,7 +532,13 @@ static bool access_gic_sre(struct kvm_vcpu *vcpu,
if (p->is_write)
return ignore_write(vcpu, p);
p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
if (p->Op1 == 4) { /* ICC_SRE_EL2 */
p->regval = (ICC_SRE_EL2_ENABLE | ICC_SRE_EL2_SRE |
ICC_SRE_EL1_DIB | ICC_SRE_EL1_DFB);
} else { /* ICC_SRE_EL1 */
p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
}
return true;
}
@ -960,6 +967,22 @@ static int get_pmu_evcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
return 0;
}
static int set_pmu_evcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
u64 val)
{
u64 idx;
if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 0)
/* PMCCNTR_EL0 */
idx = ARMV8_PMU_CYCLE_IDX;
else
/* PMEVCNTRn_EL0 */
idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
kvm_pmu_set_counter_value_user(vcpu, idx, val);
return 0;
}
static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@ -1051,25 +1074,10 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val)
{
bool set;
u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
val &= kvm_pmu_accessible_counter_mask(vcpu);
switch (r->reg) {
case PMOVSSET_EL0:
/* CRm[1] being set indicates a SET register, and CLR otherwise */
set = r->CRm & 2;
break;
default:
/* Op2[0] being set indicates a SET register, and CLR otherwise */
set = r->Op2 & 1;
break;
}
if (set)
__vcpu_sys_reg(vcpu, r->reg) |= val;
else
__vcpu_sys_reg(vcpu, r->reg) &= ~val;
__vcpu_sys_reg(vcpu, r->reg) = val & mask;
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return 0;
}
@ -1229,6 +1237,8 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
val |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, r->reg) = val;
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return 0;
}
@ -1255,6 +1265,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
#define PMU_PMEVCNTR_EL0(n) \
{ PMU_SYS_REG(PMEVCNTRn_EL0(n)), \
.reset = reset_pmevcntr, .get_user = get_pmu_evcntr, \
.set_user = set_pmu_evcntr, \
.access = access_pmu_evcntr, .reg = (PMEVCNTR0_EL0 + n), }
/* Macro to expand the PMEVTYPERn_EL0 register */
@ -1627,6 +1638,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
break;
case SYS_ID_AA64MMFR2_EL1:
val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
val &= ~ID_AA64MMFR2_EL1_NV;
break;
case SYS_ID_AA64MMFR3_EL1:
val &= ID_AA64MMFR3_EL1_TCRX | ID_AA64MMFR3_EL1_S1POE |
@ -1637,6 +1649,9 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
break;
}
if (vcpu_has_nv(vcpu))
val = limit_nv_id_reg(vcpu->kvm, id, val);
return val;
}
@ -1663,15 +1678,24 @@ static bool is_feature_id_reg(u32 encoding)
* Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
* (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8, which is the range of ID
* registers KVM maintains on a per-VM basis.
*
* Additionally, the implementation ID registers and CTR_EL0 are handled as
* per-VM registers.
*/
static inline bool is_vm_ftr_id_reg(u32 id)
{
if (id == SYS_CTR_EL0)
switch (id) {
case SYS_CTR_EL0:
case SYS_MIDR_EL1:
case SYS_REVIDR_EL1:
case SYS_AIDR_EL1:
return true;
default:
return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
sys_reg_CRm(id) < 8);
return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
sys_reg_CRm(id) < 8);
}
}
static inline bool is_vcpu_ftr_id_reg(u32 id)
@ -1802,16 +1826,6 @@ static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
return val;
}
#define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \
({ \
u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \
(val) &= ~reg##_##field##_MASK; \
(val) |= FIELD_PREP(reg##_##field##_MASK, \
min(__f_val, \
(u64)SYS_FIELD_VALUE(reg, field, limit))); \
(val); \
})
static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
{
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8);
@ -1870,12 +1884,14 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
u8 perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
u8 perfmon;
u64 val = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
val &= ~ID_DFR0_EL1_PerfMon_MASK;
if (kvm_vcpu_has_pmu(vcpu))
if (kvm_vcpu_has_pmu(vcpu)) {
perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
val |= SYS_FIELD_PREP(ID_DFR0_EL1, PerfMon, perfmon);
}
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_DFR0_EL1, CopDbg, Debugv8p8);
@ -1945,6 +1961,37 @@ static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
return set_id_reg(vcpu, rd, user_val);
}
static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd, u64 user_val)
{
u64 sanitized_val = kvm_read_sanitised_id_reg(vcpu, rd);
u64 tgran2_mask = ID_AA64MMFR0_EL1_TGRAN4_2_MASK |
ID_AA64MMFR0_EL1_TGRAN16_2_MASK |
ID_AA64MMFR0_EL1_TGRAN64_2_MASK;
if (vcpu_has_nv(vcpu) &&
((sanitized_val & tgran2_mask) != (user_val & tgran2_mask)))
return -EINVAL;
return set_id_reg(vcpu, rd, user_val);
}
static int set_id_aa64mmfr2_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd, u64 user_val)
{
u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
u64 nv_mask = ID_AA64MMFR2_EL1_NV_MASK;
/*
* We made the mistake to expose the now deprecated NV field,
* so allow userspace to write it, but silently ignore it.
*/
if ((hw_val & nv_mask) == (user_val & nv_mask))
user_val &= ~nv_mask;
return set_id_reg(vcpu, rd, user_val);
}
static int set_ctr_el0(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd, u64 user_val)
{
@ -2266,35 +2313,33 @@ static bool bad_redir_trap(struct kvm_vcpu *vcpu,
* from userspace.
*/
#define ID_DESC_DEFAULT_CALLBACKS \
.access = access_id_reg, \
.get_user = get_id_reg, \
.set_user = set_id_reg, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg
#define ID_DESC(name) \
SYS_DESC(SYS_##name), \
.access = access_id_reg, \
.get_user = get_id_reg \
ID_DESC_DEFAULT_CALLBACKS
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define ID_SANITISED(name) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = 0, \
}
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define AA32_ID_SANITISED(name) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = aa32_id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = 0, \
}
/* sys_reg_desc initialiser for writable ID registers */
#define ID_WRITABLE(name, mask) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = mask, \
}
@ -2302,8 +2347,6 @@ static bool bad_redir_trap(struct kvm_vcpu *vcpu,
#define ID_FILTERED(sysreg, name, mask) { \
ID_DESC(sysreg), \
.set_user = set_##name, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = (mask), \
}
@ -2313,12 +2356,10 @@ static bool bad_redir_trap(struct kvm_vcpu *vcpu,
* (1 <= crm < 8, 0 <= Op2 < 8).
*/
#define ID_UNALLOCATED(crm, op2) { \
.name = "S3_0_0_" #crm "_" #op2, \
Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \
.access = access_id_reg, \
.get_user = get_id_reg, \
.set_user = set_id_reg, \
ID_DESC_DEFAULT_CALLBACKS, \
.visibility = raz_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = 0, \
}
@ -2329,9 +2370,7 @@ static bool bad_redir_trap(struct kvm_vcpu *vcpu,
*/
#define ID_HIDDEN(name) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = raz_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = 0, \
}
@ -2426,6 +2465,59 @@ static bool access_zcr_el2(struct kvm_vcpu *vcpu,
vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1;
vq = min(vq, vcpu_sve_max_vq(vcpu));
vcpu_write_sys_reg(vcpu, vq - 1, ZCR_EL2);
return true;
}
static bool access_gic_vtr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p, r);
p->regval = kvm_vgic_global_state.ich_vtr_el2;
p->regval &= ~(ICH_VTR_EL2_DVIM |
ICH_VTR_EL2_A3V |
ICH_VTR_EL2_IDbits);
p->regval |= ICH_VTR_EL2_nV4;
return true;
}
static bool access_gic_misr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p, r);
p->regval = vgic_v3_get_misr(vcpu);
return true;
}
static bool access_gic_eisr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p, r);
p->regval = vgic_v3_get_eisr(vcpu);
return true;
}
static bool access_gic_elrsr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p, r);
p->regval = vgic_v3_get_elrsr(vcpu);
return true;
}
@ -2493,6 +2585,120 @@ static bool access_mdcr(struct kvm_vcpu *vcpu,
return true;
}
/*
* For historical (ahem ABI) reasons, KVM treated MIDR_EL1, REVIDR_EL1, and
* AIDR_EL1 as "invariant" registers, meaning userspace cannot change them.
* The values made visible to userspace were the register values of the boot
* CPU.
*
* At the same time, reads from these registers at EL1 previously were not
* trapped, allowing the guest to read the actual hardware value. On big-little
* machines, this means the VM can see different values depending on where a
* given vCPU got scheduled.
*
* These registers are now trapped as collateral damage from SME, and what
* follows attempts to give a user / guest view consistent with the existing
* ABI.
*/
static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p, r);
/*
* Return the VM-scoped implementation ID register values if userspace
* has made them writable.
*/
if (test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &vcpu->kvm->arch.flags))
return access_id_reg(vcpu, p, r);
/*
* Otherwise, fall back to the old behavior of returning the value of
* the current CPU.
*/
switch (reg_to_encoding(r)) {
case SYS_REVIDR_EL1:
p->regval = read_sysreg(revidr_el1);
break;
case SYS_AIDR_EL1:
p->regval = read_sysreg(aidr_el1);
break;
default:
WARN_ON_ONCE(1);
}
return true;
}
static u64 __ro_after_init boot_cpu_midr_val;
static u64 __ro_after_init boot_cpu_revidr_val;
static u64 __ro_after_init boot_cpu_aidr_val;
static void init_imp_id_regs(void)
{
boot_cpu_midr_val = read_sysreg(midr_el1);
boot_cpu_revidr_val = read_sysreg(revidr_el1);
boot_cpu_aidr_val = read_sysreg(aidr_el1);
}
static u64 reset_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
switch (reg_to_encoding(r)) {
case SYS_MIDR_EL1:
return boot_cpu_midr_val;
case SYS_REVIDR_EL1:
return boot_cpu_revidr_val;
case SYS_AIDR_EL1:
return boot_cpu_aidr_val;
default:
KVM_BUG_ON(1, vcpu->kvm);
return 0;
}
}
static int set_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
u64 val)
{
struct kvm *kvm = vcpu->kvm;
u64 expected;
guard(mutex)(&kvm->arch.config_lock);
expected = read_id_reg(vcpu, r);
if (expected == val)
return 0;
if (!test_bit(KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS, &kvm->arch.flags))
return -EINVAL;
/*
* Once the VM has started the ID registers are immutable. Reject the
* write if userspace tries to change it.
*/
if (kvm_vm_has_ran_once(kvm))
return -EBUSY;
/*
* Any value is allowed for the implementation ID registers so long as
* it is within the writable mask.
*/
if ((val & r->val) != val)
return -EINVAL;
kvm_set_vm_id_reg(kvm, reg_to_encoding(r), val);
return 0;
}
#define IMPLEMENTATION_ID(reg, mask) { \
SYS_DESC(SYS_##reg), \
.access = access_imp_id_reg, \
.get_user = get_id_reg, \
.set_user = set_imp_id_reg, \
.reset = reset_imp_id_reg, \
.val = mask, \
}
/*
* Architected system registers.
@ -2542,7 +2748,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_DBGVCR32_EL2), undef_access, reset_val, DBGVCR32_EL2, 0 },
IMPLEMENTATION_ID(MIDR_EL1, GENMASK_ULL(31, 0)),
{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
IMPLEMENTATION_ID(REVIDR_EL1, GENMASK_ULL(63, 0)),
/*
* ID regs: all ID_SANITISED() entries here must have corresponding
@ -2660,10 +2868,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_UNALLOCATED(6,7),
/* CRm=7 */
ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 |
ID_AA64MMFR0_EL1_TGRAN4_2 |
ID_AA64MMFR0_EL1_TGRAN64_2 |
ID_AA64MMFR0_EL1_TGRAN16_2 |
ID_FILTERED(ID_AA64MMFR0_EL1, id_aa64mmfr0_el1,
~(ID_AA64MMFR0_EL1_RES0 |
ID_AA64MMFR0_EL1_ASIDBITS)),
ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 |
ID_AA64MMFR1_EL1_HCX |
@ -2671,7 +2877,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_AA64MMFR1_EL1_XNX |
ID_AA64MMFR1_EL1_VH |
ID_AA64MMFR1_EL1_VMIDBits)),
ID_WRITABLE(ID_AA64MMFR2_EL1, ~(ID_AA64MMFR2_EL1_RES0 |
ID_FILTERED(ID_AA64MMFR2_EL1,
id_aa64mmfr2_el1, ~(ID_AA64MMFR2_EL1_RES0 |
ID_AA64MMFR2_EL1_EVT |
ID_AA64MMFR2_EL1_FWB |
ID_AA64MMFR2_EL1_IDS |
@ -2680,7 +2887,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_WRITABLE(ID_AA64MMFR3_EL1, (ID_AA64MMFR3_EL1_TCRX |
ID_AA64MMFR3_EL1_S1PIE |
ID_AA64MMFR3_EL1_S1POE)),
ID_SANITISED(ID_AA64MMFR4_EL1),
ID_WRITABLE(ID_AA64MMFR4_EL1, ID_AA64MMFR4_EL1_NV_frac),
ID_UNALLOCATED(7,5),
ID_UNALLOCATED(7,6),
ID_UNALLOCATED(7,7),
@ -2814,6 +3021,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
{ SYS_DESC(SYS_CCSIDR2_EL1), undef_access },
{ SYS_DESC(SYS_SMIDR_EL1), undef_access },
IMPLEMENTATION_ID(AIDR_EL1, GENMASK_ULL(63, 0)),
{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
ID_FILTERED(CTR_EL0, ctr_el0,
CTR_EL0_DIC_MASK |
@ -2850,7 +3058,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.access = access_pmceid, .reset = NULL },
{ PMU_SYS_REG(PMCCNTR_EL0),
.access = access_pmu_evcntr, .reset = reset_unknown,
.reg = PMCCNTR_EL0, .get_user = get_pmu_evcntr},
.reg = PMCCNTR_EL0, .get_user = get_pmu_evcntr,
.set_user = set_pmu_evcntr },
{ PMU_SYS_REG(PMXEVTYPER_EL0),
.access = access_pmu_evtyper, .reset = NULL },
{ PMU_SYS_REG(PMXEVCNTR_EL0),
@ -3102,7 +3311,40 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(RVBAR_EL2, access_rw, reset_val, 0),
{ SYS_DESC(SYS_RMR_EL2), undef_access },
EL2_REG_VNCR(ICH_AP0R0_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP0R1_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP0R2_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP0R3_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP1R0_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP1R1_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP1R2_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_AP1R3_EL2, reset_val, 0),
{ SYS_DESC(SYS_ICC_SRE_EL2), access_gic_sre },
EL2_REG_VNCR(ICH_HCR_EL2, reset_val, 0),
{ SYS_DESC(SYS_ICH_VTR_EL2), access_gic_vtr },
{ SYS_DESC(SYS_ICH_MISR_EL2), access_gic_misr },
{ SYS_DESC(SYS_ICH_EISR_EL2), access_gic_eisr },
{ SYS_DESC(SYS_ICH_ELRSR_EL2), access_gic_elrsr },
EL2_REG_VNCR(ICH_VMCR_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR0_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR1_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR2_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR3_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR4_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR5_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR6_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR7_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR8_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR9_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR10_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR11_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR12_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR13_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR14_EL2, reset_val, 0),
EL2_REG_VNCR(ICH_LR15_EL2, reset_val, 0),
EL2_REG(CONTEXTIDR_EL2, access_rw, reset_val, 0),
EL2_REG(TPIDR_EL2, access_rw, reset_val, 0),
@ -4272,9 +4514,13 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
* Certain AArch32 ID registers are handled by rerouting to the AArch64
* system register table. Registers in the ID range where CRm=0 are
* excluded from this scheme as they do not trivially map into AArch64
* system register encodings.
* system register encodings, except for AIDR/REVIDR.
*/
if (params.Op1 == 0 && params.CRn == 0 && params.CRm)
if (params.Op1 == 0 && params.CRn == 0 &&
(params.CRm || params.Op2 == 6 /* REVIDR */))
return kvm_emulate_cp15_id_reg(vcpu, &params);
if (params.Op1 == 1 && params.CRn == 0 &&
params.CRm == 0 && params.Op2 == 7 /* AIDR */)
return kvm_emulate_cp15_id_reg(vcpu, &params);
return kvm_handle_cp_32(vcpu, &params, cp15_regs, ARRAY_SIZE(cp15_regs));
@ -4473,6 +4719,9 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
}
set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
if (kvm_vcpu_has_pmu(vcpu))
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
}
/**
@ -4578,65 +4827,6 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
return r;
}
/*
* These are the invariant sys_reg registers: we let the guest see the
* host versions of these, so they're part of the guest state.
*
* A future CPU may provide a mechanism to present different values to
* the guest, or a future kvm may trap them.
*/
#define FUNCTION_INVARIANT(reg) \
static u64 reset_##reg(struct kvm_vcpu *v, \
const struct sys_reg_desc *r) \
{ \
((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
return ((struct sys_reg_desc *)r)->val; \
}
FUNCTION_INVARIANT(midr_el1)
FUNCTION_INVARIANT(revidr_el1)
FUNCTION_INVARIANT(aidr_el1)
/* ->val is filled in by kvm_sys_reg_table_init() */
static struct sys_reg_desc invariant_sys_regs[] __ro_after_init = {
{ SYS_DESC(SYS_MIDR_EL1), NULL, reset_midr_el1 },
{ SYS_DESC(SYS_REVIDR_EL1), NULL, reset_revidr_el1 },
{ SYS_DESC(SYS_AIDR_EL1), NULL, reset_aidr_el1 },
};
static int get_invariant_sys_reg(u64 id, u64 __user *uaddr)
{
const struct sys_reg_desc *r;
r = get_reg_by_id(id, invariant_sys_regs,
ARRAY_SIZE(invariant_sys_regs));
if (!r)
return -ENOENT;
return put_user(r->val, uaddr);
}
static int set_invariant_sys_reg(u64 id, u64 __user *uaddr)
{
const struct sys_reg_desc *r;
u64 val;
r = get_reg_by_id(id, invariant_sys_regs,
ARRAY_SIZE(invariant_sys_regs));
if (!r)
return -ENOENT;
if (get_user(val, uaddr))
return -EFAULT;
/* This is what we mean by invariant: you can't change it. */
if (r->val != val)
return -EINVAL;
return 0;
}
static int demux_c15_get(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
{
u32 val;
@ -4718,15 +4908,10 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
void __user *uaddr = (void __user *)(unsigned long)reg->addr;
int err;
if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
return demux_c15_get(vcpu, reg->id, uaddr);
err = get_invariant_sys_reg(reg->id, uaddr);
if (err != -ENOENT)
return err;
return kvm_sys_reg_get_user(vcpu, reg,
sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
}
@ -4762,15 +4947,10 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
void __user *uaddr = (void __user *)(unsigned long)reg->addr;
int err;
if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
return demux_c15_set(vcpu, reg->id, uaddr);
err = set_invariant_sys_reg(reg->id, uaddr);
if (err != -ENOENT)
return err;
return kvm_sys_reg_set_user(vcpu, reg,
sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
}
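
(Illustrative, not part of the diff.) With the invariant-register plumbing removed above, MIDR_EL1, REVIDR_EL1 and AIDR_EL1 are served by the regular sys_reg descriptors and can be overridden from userspace. A minimal VMM-side sketch of such a write, assuming an already-created vCPU fd; the encoding matches MIDR_EL1 (op0=3, op1=0, CRn=0, CRm=0, op2=0) and the value written is purely an example.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define SYS_REG_MIDR_EL1        ARM64_SYS_REG(3, 0, 0, 0, 0)

static int set_vcpu_midr(int vcpu_fd, uint64_t midr)
{
        struct kvm_one_reg reg = {
                .id   = SYS_REG_MIDR_EL1,
                .addr = (uint64_t)(uintptr_t)&midr,
        };

        /* 0 on success, -1/errno if the write is rejected */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}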
@ -4859,23 +5039,14 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
{
return ARRAY_SIZE(invariant_sys_regs)
+ num_demux_regs()
return num_demux_regs()
+ walk_sys_regs(vcpu, (u64 __user *)NULL);
}
int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
unsigned int i;
int err;
/* Then give them all the invariant registers' indices. */
for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) {
if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
return -EFAULT;
uindices++;
}
err = walk_sys_regs(vcpu, uindices);
if (err < 0)
return err;
@ -4971,25 +5142,7 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
mutex_lock(&kvm->arch.config_lock);
vcpu_set_hcr(vcpu);
vcpu_set_ich_hcr(vcpu);
if (cpus_have_final_cap(ARM64_HAS_HCX)) {
/*
* In general, all HCRX_EL2 bits are gated by a feature.
* The only reason we can set SMPME without checking any
* feature is that its effects are not directly observable
* from the guest.
*/
vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
if (kvm_has_tcr2(kvm))
vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En;
if (kvm_has_fpmr(kvm))
vcpu->arch.hcrx_el2 |= HCRX_EL2_EnFPM;
}
vcpu_set_hcrx(vcpu);
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
goto out;
@ -5101,15 +5254,12 @@ int __init kvm_sys_reg_table_init(void)
valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true);
valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
if (!valid)
return -EINVAL;
/* We abuse the reset function to overwrite the table itself. */
for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
init_imp_id_regs();
ret = populate_nv_trap_config();


@ -247,4 +247,14 @@ int kvm_finalize_sys_regs(struct kvm_vcpu *vcpu);
CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)), \
Op2(sys_reg_Op2(reg))
#define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \
({ \
u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \
(val) &= ~reg##_##field##_MASK; \
(val) |= FIELD_PREP(reg##_##field##_MASK, \
min(__f_val, \
(u64)SYS_FIELD_VALUE(reg, field, limit))); \
(val); \
})
#endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
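
(Illustrative, not part of the diff.) The macro above clamps a single ID-register field to an upper limit while leaving the rest of the register untouched. A standalone model of the same operation, using an invented 4-bit field at bits [19:16] and plain shifts in place of the kernel's FIELD_GET()/FIELD_PREP() and generated SYS_FIELD_VALUE() constants.

#include <stdint.h>
#include <stdio.h>

#define DEMO_FIELD_SHIFT        16
#define DEMO_FIELD_MASK         (0xfULL << DEMO_FIELD_SHIFT)

static uint64_t limit_field(uint64_t reg, uint64_t limit)
{
        uint64_t field = (reg & DEMO_FIELD_MASK) >> DEMO_FIELD_SHIFT;

        if (field > limit)
                field = limit;

        reg &= ~DEMO_FIELD_MASK;
        reg |= field << DEMO_FIELD_SHIFT;
        return reg;
}

int main(void)
{
        /* field value 0x7 is clamped down to the limit 0x2: prints 0x20abc */
        printf("%#llx\n", (unsigned long long)limit_field(0x70abcULL, 0x2));
        return 0;
}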


@ -35,12 +35,12 @@ static int set_gic_ctlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
vgic_v3_cpu->num_id_bits = host_id_bits;
host_seis = FIELD_GET(ICH_VTR_SEIS_MASK, kvm_vgic_global_state.ich_vtr_el2);
host_seis = FIELD_GET(ICH_VTR_EL2_SEIS, kvm_vgic_global_state.ich_vtr_el2);
seis = FIELD_GET(ICC_CTLR_EL1_SEIS_MASK, val);
if (host_seis != seis)
return -EINVAL;
host_a3v = FIELD_GET(ICH_VTR_A3V_MASK, kvm_vgic_global_state.ich_vtr_el2);
host_a3v = FIELD_GET(ICH_VTR_EL2_A3V, kvm_vgic_global_state.ich_vtr_el2);
a3v = FIELD_GET(ICC_CTLR_EL1_A3V_MASK, val);
if (host_a3v != a3v)
return -EINVAL;
@ -68,10 +68,10 @@ static int get_gic_ctlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
val |= FIELD_PREP(ICC_CTLR_EL1_PRI_BITS_MASK, vgic_v3_cpu->num_pri_bits - 1);
val |= FIELD_PREP(ICC_CTLR_EL1_ID_BITS_MASK, vgic_v3_cpu->num_id_bits);
val |= FIELD_PREP(ICC_CTLR_EL1_SEIS_MASK,
FIELD_GET(ICH_VTR_SEIS_MASK,
FIELD_GET(ICH_VTR_EL2_SEIS,
kvm_vgic_global_state.ich_vtr_el2));
val |= FIELD_PREP(ICC_CTLR_EL1_A3V_MASK,
FIELD_GET(ICH_VTR_A3V_MASK, kvm_vgic_global_state.ich_vtr_el2));
FIELD_GET(ICH_VTR_EL2_A3V, kvm_vgic_global_state.ich_vtr_el2));
/*
* The VMCR.CTLR value is in ICC_CTLR_EL1 layout.
* Extract it directly using ICC_CTLR_EL1 reg definitions.


@ -198,6 +198,27 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
return 0;
}
/* Default GICv3 Maintenance Interrupt INTID, as per SBSA */
#define DEFAULT_MI_INTID 25
int kvm_vgic_vcpu_nv_init(struct kvm_vcpu *vcpu)
{
int ret;
guard(mutex)(&vcpu->kvm->arch.config_lock);
/*
* Matching the tradition established with the timers, provide
* a default PPI for the maintenance interrupt. It makes
* things easier to reason about.
*/
if (vcpu->kvm->arch.vgic.mi_intid == 0)
vcpu->kvm->arch.vgic.mi_intid = DEFAULT_MI_INTID;
ret = kvm_vgic_set_owner(vcpu, vcpu->kvm->arch.vgic.mi_intid, vcpu);
return ret;
}
static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu, u32 type)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
@ -588,12 +609,20 @@ void kvm_vgic_cpu_down(void)
static irqreturn_t vgic_maintenance_handler(int irq, void *data)
{
struct kvm_vcpu *vcpu = *(struct kvm_vcpu **)data;
/*
* We cannot rely on the vgic maintenance interrupt to be
* delivered synchronously. This means we can only use it to
* exit the VM, and we perform the handling of EOIed
* interrupts on the exit path (see vgic_fold_lr_state).
*
* Of course, NV throws a wrench in this plan, and needs
* something special.
*/
if (vcpu && vgic_state_is_nested(vcpu))
vgic_v3_handle_nested_maint_irq(vcpu);
return IRQ_HANDLED;
}


@ -303,6 +303,12 @@ static int vgic_get_common_attr(struct kvm_device *dev,
VGIC_NR_PRIVATE_IRQS, uaddr);
break;
}
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ: {
u32 __user *uaddr = (u32 __user *)(long)attr->addr;
r = put_user(dev->kvm->arch.vgic.mi_intid, uaddr);
break;
}
}
return r;
@ -517,7 +523,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
struct vgic_reg_attr reg_attr;
gpa_t addr;
struct kvm_vcpu *vcpu;
bool uaccess;
bool uaccess, post_init = true;
u32 val;
int ret;
@ -533,6 +539,9 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
/* Sysregs uaccess is performed by the sysreg handling code */
uaccess = false;
break;
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
post_init = false;
fallthrough;
default:
uaccess = true;
}
@ -552,7 +561,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
mutex_lock(&dev->kvm->arch.config_lock);
if (unlikely(!vgic_initialized(dev->kvm))) {
if (post_init != vgic_initialized(dev->kvm)) {
ret = -EBUSY;
goto out;
}
@ -582,6 +591,19 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
}
break;
}
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
if (!is_write) {
val = dev->kvm->arch.vgic.mi_intid;
ret = 0;
break;
}
ret = -EINVAL;
if ((val < VGIC_NR_PRIVATE_IRQS) && (val >= VGIC_NR_SGIS)) {
dev->kvm->arch.vgic.mi_intid = val;
ret = 0;
}
break;
default:
ret = -EINVAL;
break;
@ -608,6 +630,7 @@ static int vgic_v3_set_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_REDIST_REGS:
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO:
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return vgic_v3_attr_regs_access(dev, attr, true);
default:
return vgic_set_common_attr(dev, attr);
@ -622,6 +645,7 @@ static int vgic_v3_get_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_REDIST_REGS:
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO:
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return vgic_v3_attr_regs_access(dev, attr, false);
default:
return vgic_get_common_attr(dev, attr);
@ -645,6 +669,7 @@ static int vgic_v3_has_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
return vgic_v3_has_attr_regs(dev, attr);
case KVM_DEV_ARM_VGIC_GRP_NR_IRQS:
case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return 0;
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO: {
if (((attr->attr & KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK) >>
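
(Illustrative, not part of the diff.) The new KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ group lets a VMM pick the maintenance-interrupt PPI before the vgic is initialized. A hedged sketch of the userspace side follows; it assumes the scalar encoding used by comparable vgic attributes (attr set to 0, addr pointing at a __u32 INTID), so treat those details as an assumption rather than a statement of the ABI.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_vgic_maint_irq(int vgic_fd, uint32_t intid)
{
        struct kvm_device_attr attr = {
                .group = KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ,
                .attr  = 0,
                .addr  = (uint64_t)(uintptr_t)&intid,
        };

        /* Must be a PPI: the range check in the hunk above rejects anything else. */
        return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
}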


@ -0,0 +1,409 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/cpu.h>
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/uaccess.h>
#include <kvm/arm_vgic.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_nested.h>
#include "vgic.h"
#define ICH_LRN(n) (ICH_LR0_EL2 + (n))
#define ICH_AP0RN(n) (ICH_AP0R0_EL2 + (n))
#define ICH_AP1RN(n) (ICH_AP1R0_EL2 + (n))
struct mi_state {
u16 eisr;
u16 elrsr;
bool pend;
};
/*
* The shadow registers loaded to the hardware when running a L2 guest
* with the virtual IMO/FMO bits set.
*/
struct shadow_if {
struct vgic_v3_cpu_if cpuif;
unsigned long lr_map;
};
static DEFINE_PER_CPU(struct shadow_if, shadow_if);
/*
* Nesting GICv3 support
*
* On a non-nesting VM (only running at EL0/EL1), the host hypervisor
* completely controls the interrupts injected via the list registers.
* Consequently, most of the state that is modified by the guest (by ACK-ing
* and EOI-ing interrupts) is synced by KVM on each entry/exit, so that we
* keep a semi-consistent view of the interrupts.
*
* This still applies for a NV guest, but only while "InHost" (either
* running at EL2, or at EL0 with HCR_EL2.{E2H,TGE}=={1,1}).
*
* When running a L2 guest ("not InHost"), things are radically different,
* as the L1 guest is in charge of provisioning the interrupts via its own
* view of the ICH_LR*_EL2 registers, which conveniently live in the VNCR
* page. This means that the flow described above does work (there is no
* state to rebuild in the L0 hypervisor), and that most things happen on L2
* load/put:
*
* - on L2 load: move the in-memory L1 vGIC configuration into a shadow,
* per-CPU data structure that is used to populate the actual LRs. This is
* an extra copy that we could avoid, but life is short. In the process,
* we remap any interrupt that has the HW bit set to the mapped interrupt
* on the host, should the host consider it a HW one. This allows the HW
* deactivation to take its course, such as for the timer.
*
* - on L2 put: perform the inverse transformation, so that the result of L2
* running becomes visible to L1 in the VNCR-accessible registers.
*
* - there is nothing to do on L2 entry, as everything will have happened
* on load. However, this is the point where we detect an interrupt
* targeting L1 and prepare the grand switcheroo.
*
* - on L2 exit: emulate the HW bit, and deactivate the corresponding L1
* interrupt. The L0 active state will be cleared by the HW if the L1
* interrupt was itself backed by a HW interrupt.
*
* Maintenance Interrupt (MI) management:
*
* Since the L2 guest runs the vgic in its full glory, MIs get delivered and
* used as a handover point between L2 and L1.
*
* - on delivery of a MI to L0 while L2 is running: make the L1 MI pending,
* and let it rip. This will initiate a vcpu_put() on L2, and allow L1 to
* run and process the MI.
*
* - L1 MI is a fully virtual interrupt, not linked to the host's MI. Its
* state must be computed at each entry/exit of the guest, much like we do
* it for the PMU interrupt.
*
* - because most of the ICH_*_EL2 registers live in the VNCR page, the
* quality of emulation is poor: L1 can set up the vgic so that an MI would
* immediately fire, and not observe anything until the next exit. Trying
* to read ICH_MISR_EL2 would do the trick, for example.
*
* System register emulation:
*
* We get two classes of registers:
*
* - those backed by memory (LRs, APRs, HCR, VMCR): L1 can freely access
* them, and L0 doesn't see a thing.
*
* - those that always trap (ELRSR, EISR, MISR): these are status registers
* that are built on the fly based on the in-memory state.
*
* Only L1 can access the ICH_*_EL2 registers. A non-NV L2 obviously cannot,
* and a NV L2 would either access the VNCR page provided by L1 (memory
* based registers), or see the access redirected to L1 (registers that
* trap) thanks to NV being set by L1.
*/
bool vgic_state_is_nested(struct kvm_vcpu *vcpu)
{
u64 xmo;
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
xmo = __vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_IMO | HCR_FMO);
WARN_ONCE(xmo && xmo != (HCR_IMO | HCR_FMO),
"Separate virtual IRQ/FIQ settings not supported\n");
return !!xmo;
}
return false;
}
static struct shadow_if *get_shadow_if(void)
{
return this_cpu_ptr(&shadow_if);
}
static bool lr_triggers_eoi(u64 lr)
{
return !(lr & (ICH_LR_STATE | ICH_LR_HW)) && (lr & ICH_LR_EOI);
}
static void vgic_compute_mi_state(struct kvm_vcpu *vcpu, struct mi_state *mi_state)
{
u16 eisr = 0, elrsr = 0;
bool pend = false;
for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
if (lr_triggers_eoi(lr))
eisr |= BIT(i);
if (!(lr & ICH_LR_STATE))
elrsr |= BIT(i);
pend |= (lr & ICH_LR_PENDING_BIT);
}
mi_state->eisr = eisr;
mi_state->elrsr = elrsr;
mi_state->pend = pend;
}
u16 vgic_v3_get_eisr(struct kvm_vcpu *vcpu)
{
struct mi_state mi_state;
vgic_compute_mi_state(vcpu, &mi_state);
return mi_state.eisr;
}
u16 vgic_v3_get_elrsr(struct kvm_vcpu *vcpu)
{
struct mi_state mi_state;
vgic_compute_mi_state(vcpu, &mi_state);
return mi_state.elrsr;
}
u64 vgic_v3_get_misr(struct kvm_vcpu *vcpu)
{
struct mi_state mi_state;
u64 reg = 0, hcr, vmcr;
hcr = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
vmcr = __vcpu_sys_reg(vcpu, ICH_VMCR_EL2);
vgic_compute_mi_state(vcpu, &mi_state);
if (mi_state.eisr)
reg |= ICH_MISR_EL2_EOI;
if (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_UIE) {
int used_lrs = kvm_vgic_global_state.nr_lr;
used_lrs -= hweight16(mi_state.elrsr);
reg |= (used_lrs <= 1) ? ICH_MISR_EL2_U : 0;
}
if ((hcr & ICH_HCR_EL2_LRENPIE) && FIELD_GET(ICH_HCR_EL2_EOIcount_MASK, hcr))
reg |= ICH_MISR_EL2_LRENP;
if ((hcr & ICH_HCR_EL2_NPIE) && !mi_state.pend)
reg |= ICH_MISR_EL2_NP;
if ((hcr & ICH_HCR_EL2_VGrp0EIE) && (vmcr & ICH_VMCR_ENG0_MASK))
reg |= ICH_MISR_EL2_VGrp0E;
if ((hcr & ICH_HCR_EL2_VGrp0DIE) && !(vmcr & ICH_VMCR_ENG0_MASK))
reg |= ICH_MISR_EL2_VGrp0D;
if ((hcr & ICH_HCR_EL2_VGrp1EIE) && (vmcr & ICH_VMCR_ENG1_MASK))
reg |= ICH_MISR_EL2_VGrp1E;
if ((hcr & ICH_HCR_EL2_VGrp1DIE) && !(vmcr & ICH_VMCR_ENG1_MASK))
reg |= ICH_MISR_EL2_VGrp1D;
return reg;
}
/*
* For LRs which have HW bit set such as timer interrupts, we modify them to
* have the host hardware interrupt number instead of the virtual one programmed
* by the guest hypervisor.
*/
static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu,
struct vgic_v3_cpu_if *s_cpu_if)
{
unsigned long lr_map = 0;
int index = 0;
for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
struct vgic_irq *irq;
if (!(lr & ICH_LR_STATE))
lr = 0;
if (!(lr & ICH_LR_HW))
goto next;
/* We have the HW bit set, check for validity of pINTID */
irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI) {
/* There was no real mapping, so nuke the HW bit */
lr &= ~ICH_LR_HW;
if (irq)
vgic_put_irq(vcpu->kvm, irq);
goto next;
}
/* It is illegal to have the EOI bit set with HW */
lr &= ~ICH_LR_EOI;
/* Translate the virtual mapping to the real one */
lr &= ~ICH_LR_PHYS_ID_MASK;
lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid);
vgic_put_irq(vcpu->kvm, irq);
next:
s_cpu_if->vgic_lr[index] = lr;
if (lr) {
lr_map |= BIT(i);
index++;
}
}
container_of(s_cpu_if, struct shadow_if, cpuif)->lr_map = lr_map;
s_cpu_if->used_lrs = index;
}
void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
{
struct shadow_if *shadow_if = get_shadow_if();
int i, index = 0;
for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
struct vgic_irq *irq;
if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE))
goto next;
/*
* If we had a HW lr programmed by the guest hypervisor, we
* need to emulate the HW effect between the guest hypervisor
* and the nested guest.
*/
irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */
goto next;
lr = __gic_v3_get_lr(index);
if (!(lr & ICH_LR_STATE))
irq->active = false;
vgic_put_irq(vcpu->kvm, irq);
next:
index++;
}
}
static void vgic_v3_create_shadow_state(struct kvm_vcpu *vcpu,
struct vgic_v3_cpu_if *s_cpu_if)
{
struct vgic_v3_cpu_if *host_if = &vcpu->arch.vgic_cpu.vgic_v3;
u64 val = 0;
int i;
/*
* If we're on a system with a broken vgic that requires
* trapping, propagate the trapping requirements.
*
* Ah, the smell of rotten fruits...
*/
if (static_branch_unlikely(&vgic_v3_cpuif_trap))
val = host_if->vgic_hcr & (ICH_HCR_EL2_TALL0 | ICH_HCR_EL2_TALL1 |
ICH_HCR_EL2_TC | ICH_HCR_EL2_TDIR);
s_cpu_if->vgic_hcr = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) | val;
s_cpu_if->vgic_vmcr = __vcpu_sys_reg(vcpu, ICH_VMCR_EL2);
s_cpu_if->vgic_sre = host_if->vgic_sre;
for (i = 0; i < 4; i++) {
s_cpu_if->vgic_ap0r[i] = __vcpu_sys_reg(vcpu, ICH_AP0RN(i));
s_cpu_if->vgic_ap1r[i] = __vcpu_sys_reg(vcpu, ICH_AP1RN(i));
}
vgic_v3_create_shadow_lr(vcpu, s_cpu_if);
}
void vgic_v3_load_nested(struct kvm_vcpu *vcpu)
{
struct shadow_if *shadow_if = get_shadow_if();
struct vgic_v3_cpu_if *cpu_if = &shadow_if->cpuif;
BUG_ON(!vgic_state_is_nested(vcpu));
vgic_v3_create_shadow_state(vcpu, cpu_if);
__vgic_v3_restore_vmcr_aprs(cpu_if);
__vgic_v3_activate_traps(cpu_if);
__vgic_v3_restore_state(cpu_if);
/*
* Propagate the number of used LRs for the benefit of the HYP
* GICv3 emulation code. Yes, this is a pretty sorry hack.
*/
vcpu->arch.vgic_cpu.vgic_v3.used_lrs = cpu_if->used_lrs;
}
void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
{
struct shadow_if *shadow_if = get_shadow_if();
struct vgic_v3_cpu_if *s_cpu_if = &shadow_if->cpuif;
u64 val;
int i;
__vgic_v3_save_vmcr_aprs(s_cpu_if);
__vgic_v3_deactivate_traps(s_cpu_if);
__vgic_v3_save_state(s_cpu_if);
/*
* Translate the shadow state HW fields back to the virtual ones
* before copying the shadow struct back to the nested one.
*/
val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
val &= ~ICH_HCR_EL2_EOIcount_MASK;
val |= (s_cpu_if->vgic_hcr & ICH_HCR_EL2_EOIcount_MASK);
__vcpu_sys_reg(vcpu, ICH_HCR_EL2) = val;
__vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr;
for (i = 0; i < 4; i++) {
__vcpu_sys_reg(vcpu, ICH_AP0RN(i)) = s_cpu_if->vgic_ap0r[i];
__vcpu_sys_reg(vcpu, ICH_AP1RN(i)) = s_cpu_if->vgic_ap1r[i];
}
for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
val &= ~ICH_LR_STATE;
val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
__vcpu_sys_reg(vcpu, ICH_LRN(i)) = val;
s_cpu_if->vgic_lr[i] = 0;
}
shadow_if->lr_map = 0;
vcpu->arch.vgic_cpu.vgic_v3.used_lrs = 0;
}
/*
* If we exit a L2 VM with a pending maintenance interrupt from the GIC,
* then we need to forward this to L1 so that it can re-sync the appropriate
* LRs and sample level triggered interrupts again.
*/
void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu *vcpu)
{
bool state = read_sysreg_s(SYS_ICH_MISR_EL2);
/* This will force a switch back to L1 if the level is high */
kvm_vgic_inject_irq(vcpu->kvm, vcpu,
vcpu->kvm->arch.vgic.mi_intid, state, vcpu);
sysreg_clear_set_s(SYS_ICH_HCR_EL2, ICH_HCR_EL2_En, 0);
}
void vgic_v3_nested_update_mi(struct kvm_vcpu *vcpu)
{
bool level;
level = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En;
if (level)
level &= vgic_v3_get_misr(vcpu);
kvm_vgic_inject_irq(vcpu->kvm, vcpu,
vcpu->kvm->arch.vgic.mi_intid, level, vcpu);
}


@ -24,7 +24,7 @@ void vgic_v3_set_underflow(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
cpuif->vgic_hcr |= ICH_HCR_UIE;
cpuif->vgic_hcr |= ICH_HCR_EL2_UIE;
}
static bool lr_signals_eoi_mi(u64 lr_val)
@ -42,7 +42,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
cpuif->vgic_hcr &= ~ICH_HCR_UIE;
cpuif->vgic_hcr &= ~ICH_HCR_EL2_UIE;
for (lr = 0; lr < cpuif->used_lrs; lr++) {
u64 val = cpuif->vgic_lr[lr];
@ -284,15 +284,13 @@ void vgic_v3_enable(struct kvm_vcpu *vcpu)
vgic_v3->vgic_sre = 0;
}
vcpu->arch.vgic_cpu.num_id_bits = (kvm_vgic_global_state.ich_vtr_el2 &
ICH_VTR_ID_BITS_MASK) >>
ICH_VTR_ID_BITS_SHIFT;
vcpu->arch.vgic_cpu.num_pri_bits = ((kvm_vgic_global_state.ich_vtr_el2 &
ICH_VTR_PRI_BITS_MASK) >>
ICH_VTR_PRI_BITS_SHIFT) + 1;
vcpu->arch.vgic_cpu.num_id_bits = FIELD_GET(ICH_VTR_EL2_IDbits,
kvm_vgic_global_state.ich_vtr_el2);
vcpu->arch.vgic_cpu.num_pri_bits = FIELD_GET(ICH_VTR_EL2_PRIbits,
kvm_vgic_global_state.ich_vtr_el2) + 1;
/* Get the show on the road... */
vgic_v3->vgic_hcr = ICH_HCR_EN;
vgic_v3->vgic_hcr = ICH_HCR_EL2_En;
}
void vcpu_set_ich_hcr(struct kvm_vcpu *vcpu)
@ -301,18 +299,19 @@ void vcpu_set_ich_hcr(struct kvm_vcpu *vcpu)
/* Hide GICv3 sysreg if necessary */
if (!kvm_has_gicv3(vcpu->kvm)) {
vgic_v3->vgic_hcr |= ICH_HCR_TALL0 | ICH_HCR_TALL1 | ICH_HCR_TC;
vgic_v3->vgic_hcr |= (ICH_HCR_EL2_TALL0 | ICH_HCR_EL2_TALL1 |
ICH_HCR_EL2_TC);
return;
}
if (group0_trap)
vgic_v3->vgic_hcr |= ICH_HCR_TALL0;
vgic_v3->vgic_hcr |= ICH_HCR_EL2_TALL0;
if (group1_trap)
vgic_v3->vgic_hcr |= ICH_HCR_TALL1;
vgic_v3->vgic_hcr |= ICH_HCR_EL2_TALL1;
if (common_trap)
vgic_v3->vgic_hcr |= ICH_HCR_TC;
vgic_v3->vgic_hcr |= ICH_HCR_EL2_TC;
if (dir_trap)
vgic_v3->vgic_hcr |= ICH_HCR_TDIR;
vgic_v3->vgic_hcr |= ICH_HCR_EL2_TDIR;
}
int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
@ -632,8 +631,8 @@ static const struct midr_range broken_seis[] = {
static bool vgic_v3_broken_seis(void)
{
return ((kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_SEIS_MASK) &&
is_midr_in_range_list(read_cpuid_id(), broken_seis));
return ((kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_EL2_SEIS) &&
is_midr_in_range_list(broken_seis));
}
/**
@ -706,10 +705,10 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
if (vgic_v3_broken_seis()) {
kvm_info("GICv3 with broken locally generated SEI\n");
kvm_vgic_global_state.ich_vtr_el2 &= ~ICH_VTR_SEIS_MASK;
kvm_vgic_global_state.ich_vtr_el2 &= ~ICH_VTR_EL2_SEIS;
group0_trap = true;
group1_trap = true;
if (ich_vtr_el2 & ICH_VTR_TDS_MASK)
if (ich_vtr_el2 & ICH_VTR_EL2_TDS)
dir_trap = true;
else
common_trap = true;
@ -735,6 +734,12 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
/* If the vgic is nested, perform the full state loading */
if (vgic_state_is_nested(vcpu)) {
vgic_v3_load_nested(vcpu);
return;
}
if (likely(!is_protected_kvm_enabled()))
kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
@ -748,6 +753,11 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
if (vgic_state_is_nested(vcpu)) {
vgic_v3_put_nested(vcpu);
return;
}
if (likely(!is_protected_kvm_enabled()))
kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
WARN_ON(vgic_v4_put(vcpu));


@ -336,6 +336,22 @@ void vgic_v4_teardown(struct kvm *kvm)
its_vm->vpes = NULL;
}
static inline bool vgic_v4_want_doorbell(struct kvm_vcpu *vcpu)
{
if (vcpu_get_flag(vcpu, IN_WFI))
return true;
if (likely(!vcpu_has_nv(vcpu)))
return false;
/*
* GICv4 hardware is only ever used for the L1. Mark the vPE (i.e. the
* L1 context) nonresident and request a doorbell to kick us out of the
* L2 when an IRQ becomes pending.
*/
return vcpu_get_flag(vcpu, IN_NESTED_ERET);
}
int vgic_v4_put(struct kvm_vcpu *vcpu)
{
struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
@ -343,7 +359,7 @@ int vgic_v4_put(struct kvm_vcpu *vcpu)
if (!vgic_supports_direct_msis(vcpu->kvm) || !vpe->resident)
return 0;
return its_make_vpe_non_resident(vpe, !!vcpu_get_flag(vcpu, IN_WFI));
return its_make_vpe_non_resident(vpe, vgic_v4_want_doorbell(vcpu));
}
int vgic_v4_load(struct kvm_vcpu *vcpu)
@ -415,7 +431,7 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
struct vgic_irq *irq;
struct its_vlpi_map map;
unsigned long flags;
int ret;
int ret = 0;
if (!vgic_supports_direct_msis(kvm))
return 0;
@ -430,10 +446,15 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
mutex_lock(&its->its_lock);
/* Perform the actual DevID/EventID -> LPI translation. */
ret = vgic_its_resolve_lpi(kvm, its, irq_entry->msi.devid,
irq_entry->msi.data, &irq);
if (ret)
/*
* Perform the actual DevID/EventID -> LPI translation.
*
* Silently exit if translation fails as the guest (or userspace!) has
* managed to do something stupid. Emulated LPI injection will still
* work if the guest figures itself out at a later time.
*/
if (vgic_its_resolve_lpi(kvm, its, irq_entry->msi.devid,
irq_entry->msi.data, &irq))
goto out;
/* Silently exit if the vLPI is already mapped */
@ -512,7 +533,7 @@ int kvm_vgic_v4_unset_forwarding(struct kvm *kvm, int virq,
if (ret)
goto out;
WARN_ON(!(irq->hw && irq->host_irq == virq));
WARN_ON(irq->hw && irq->host_irq != virq);
if (irq->hw) {
atomic_dec(&irq->target_vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count);
irq->hw = false;


@ -872,6 +872,15 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
int used_lrs;
/* If nesting, emulate the HW effect from L0 to L1 */
if (vgic_state_is_nested(vcpu)) {
vgic_v3_sync_nested(vcpu);
return;
}
if (vcpu_has_nv(vcpu))
vgic_v3_nested_update_mi(vcpu);
/* An empty ap_list_head implies used_lrs == 0 */
if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
return;
@ -900,6 +909,35 @@ static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
/* Flush our emulation state into the GIC hardware before entering the guest. */
void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
{
/*
* If in a nested state, we must return early. Two possibilities:
*
* - If we have any pending IRQ for the guest and the guest
* expects IRQs to be handled in its virtual EL2 mode (the
* virtual IMO bit is set) and it is not already running in
* virtual EL2 mode, then we have to emulate an IRQ
* exception to virtual EL2.
*
* We do that by placing a request to ourselves which will
* abort the entry procedure and inject the exception at the
* beginning of the run loop.
*
* - Otherwise, do exactly *NOTHING*. The guest state is
* already loaded, and we can carry on with running it.
*
* If we have NV, but are not in a nested state, compute the
* maintenance interrupt state, as it may fire.
*/
if (vgic_state_is_nested(vcpu)) {
if (kvm_vgic_vcpu_pending_irq(vcpu))
kvm_make_request(KVM_REQ_GUEST_HYP_IRQ_PENDING, vcpu);
return;
}
if (vcpu_has_nv(vcpu))
vgic_v3_nested_update_mi(vcpu);
/*
* If there are no virtual interrupts active or pending for this
* VCPU, then there is no work to do and we can bail out without


@ -353,4 +353,10 @@ static inline bool kvm_has_gicv3(struct kvm *kvm)
return kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP);
}
void vgic_v3_sync_nested(struct kvm_vcpu *vcpu);
void vgic_v3_load_nested(struct kvm_vcpu *vcpu);
void vgic_v3_put_nested(struct kvm_vcpu *vcpu);
void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu *vcpu);
void vgic_v3_nested_update_mi(struct kvm_vcpu *vcpu);
#endif


@ -45,6 +45,7 @@ HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
HAS_PAN
HAS_PMUV3
HAS_S1PIE
HAS_S1POE
HAS_RAS_EXTN
@ -104,6 +105,7 @@ WORKAROUND_CAVIUM_TX2_219_TVM
WORKAROUND_CLEAN_CACHE
WORKAROUND_DEVICE_LOAD_ACQUIRE
WORKAROUND_NVIDIA_CARMEL_CNP
WORKAROUND_PMUV3_IMPDEF_TRAPS
WORKAROUND_QCOM_FALKOR_E1003
WORKAROUND_QCOM_ORYON_CNTVOFF
WORKAROUND_REPEAT_TLBI


@ -3138,6 +3138,54 @@ Field 31:16 PhyPARTID29
Field 15:0 PhyPARTID28
EndSysreg
Sysreg ICH_HCR_EL2 3 4 12 11 0
Res0 63:32
Field 31:27 EOIcount
Res0 26:16
Field 15 DVIM
Field 14 TDIR
Field 13 TSEI
Field 12 TALL1
Field 11 TALL0
Field 10 TC
Res0 9
Field 8 vSGIEOICount
Field 7 VGrp1DIE
Field 6 VGrp1EIE
Field 5 VGrp0DIE
Field 4 VGrp0EIE
Field 3 NPIE
Field 2 LRENPIE
Field 1 UIE
Field 0 En
EndSysreg
Sysreg ICH_VTR_EL2 3 4 12 11 1
Res0 63:32
Field 31:29 PRIbits
Field 28:26 PREbits
Field 25:23 IDbits
Field 22 SEIS
Field 21 A3V
Field 20 nV4
Field 19 TDS
Field 18 DVIM
Res0 17:5
Field 4:0 ListRegs
EndSysreg
Sysreg ICH_MISR_EL2 3 4 12 11 2
Res0 63:8
Field 7 VGrp1D
Field 6 VGrp1E
Field 5 VGrp0D
Field 4 VGrp0E
Field 3 NP
Field 2 LRENP
Field 1 U
Field 0 EOI
EndSysreg
Sysreg CONTEXTIDR_EL2 3 4 13 0 1
Fields CONTEXTIDR_ELx
EndSysreg


@ -12,6 +12,7 @@
#include <linux/kvm.h>
#include <linux/kvm_types.h>
#include <linux/mutex.h>
#include <linux/perf_event.h>
#include <linux/spinlock.h>
#include <linux/threads.h>
#include <linux/types.h>
@ -176,6 +177,9 @@ struct kvm_vcpu_arch {
/* Pointers stored here for easy accessing from assembly code */
int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
/* GPA (=HVA) of PGD for secondary mmu */
unsigned long kvm_pgd;
/* Host registers preserved across guest mode execution */
unsigned long host_sp;
unsigned long host_tp;
@ -289,6 +293,8 @@ static inline int kvm_get_pmu_num(struct kvm_vcpu_arch *arch)
return (arch->cpucfg[6] & CPUCFG6_PMNUM) >> CPUCFG6_PMNUM_SHIFT;
}
bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu);
/* Debug: dump vcpu state */
int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
@ -320,7 +326,6 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
/* Misc */
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}


@ -296,6 +296,7 @@ static void __used output_kvm_defines(void)
OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp);
OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp);
OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
OFFSET(KVM_ARCH_KVMPGD, kvm_vcpu_arch, kvm_pgd);
OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);


@ -33,6 +33,7 @@ config KVM
select KVM_MMIO
select KVM_XFER_TO_GUEST_WORK
select SCHED_INFO
select GUEST_PERF_EVENTS if PERF_EVENTS
help
Support hosting virtualized guest machines using
hardware virtualization extensions. You will need


@ -3,8 +3,6 @@
# Makefile for LoongArch KVM support
#
ccflags-y += -I $(src)
include $(srctree)/virt/kvm/Makefile.kvm
obj-$(CONFIG_KVM) += kvm.o


@ -394,6 +394,7 @@ static int kvm_loongarch_env_init(void)
}
kvm_init_gcsr_flag();
kvm_register_perf_callbacks(NULL);
/* Register LoongArch IPI interrupt controller interface. */
ret = kvm_loongarch_register_ipi_device();
@ -425,6 +426,8 @@ static void kvm_loongarch_env_exit(void)
}
kfree(kvm_loongarch_ops);
}
kvm_unregister_perf_callbacks();
}
static int kvm_loongarch_init(void)


@ -60,16 +60,8 @@
ld.d t0, a2, KVM_ARCH_GPC
csrwr t0, LOONGARCH_CSR_ERA
/* Save host PGDL */
csrrd t0, LOONGARCH_CSR_PGDL
st.d t0, a2, KVM_ARCH_HPGD
/* Switch to kvm */
ld.d t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH
/* Load guest PGDL */
li.w t0, KVM_GPGD
ldx.d t0, t1, t0
/* Load PGD for KVM hypervisor */
ld.d t0, a2, KVM_ARCH_KVMPGD
csrwr t0, LOONGARCH_CSR_PGDL
/* Mix GID and RID */


@ -360,6 +360,34 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
}
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
unsigned long val;
preempt_disable();
val = gcsr_read(LOONGARCH_CSR_CRMD);
preempt_enable();
return (val & CSR_PRMD_PPLV) == PLV_KERN;
}
#ifdef CONFIG_GUEST_PERF_EVENTS
unsigned long kvm_arch_vcpu_get_ip(struct kvm_vcpu *vcpu)
{
return vcpu->arch.pc;
}
/*
* Returns true if a Performance Monitoring Interrupt (PMI), a.k.a. perf event,
* arrived in guest context. For LoongArch64, if the PMU is not passed through to the VM,
* any event that arrives while a vCPU is loaded is considered to be "in guest".
*/
bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
{
return (vcpu && !(vcpu->arch.aux_inuse & KVM_LARCH_PMU));
}
#endif
bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
{
return false;
}
@ -1462,6 +1490,15 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
hrtimer_setup(&vcpu->arch.swtimer, kvm_swtimer_wakeup, CLOCK_MONOTONIC,
HRTIMER_MODE_ABS_PINNED_HARD);
/* Get GPA (=HVA) of PGD for kvm hypervisor */
vcpu->arch.kvm_pgd = __pa(vcpu->kvm->arch.pgd);
/*
* Get PGD for the primary mmu. The virtual address is used since there are
* memory accesses after loading from CSR_PGD in the TLB exception fast path.
*/
vcpu->arch.host_pgd = (unsigned long)vcpu->kvm->mm->pgd;
vcpu->arch.handle_exit = kvm_handle_exit;
vcpu->arch.guest_eentry = (unsigned long)kvm_loongarch_ops->exc_entry;
vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL);


@ -886,7 +886,6 @@ extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
extern int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
struct kvm_mips_interrupt *irq);
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}


@ -902,7 +902,6 @@ struct kvm_vcpu_arch {
#define __KVM_HAVE_ARCH_WQP
#define __KVM_HAVE_CREATE_DEVICE
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}


@ -301,8 +301,6 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu;
}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12
void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,


@ -172,8 +172,8 @@ module_init(riscv_kvm_init);
static void __exit riscv_kvm_exit(void)
{
kvm_riscv_teardown();
kvm_exit();
kvm_riscv_teardown();
}
module_exit(riscv_kvm_exit);


@ -203,7 +203,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_SVADE:
/*
* The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
* Svade is not allowed to disable when the platform use Svade.
* Svade can't be disabled unless we support Svadu.
*/
return arch_has_hw_pte_young();
default:


@ -666,6 +666,7 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
.type = etype,
.size = sizeof(struct perf_event_attr),
.pinned = true,
.disabled = true,
/*
* It should never reach here if the platform doesn't support the sscofpmf
* extension as mode filtering won't work without it.


@ -1056,7 +1056,6 @@ bool kvm_s390_pv_cpu_is_protected(struct kvm_vcpu *vcpu);
extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}


@ -378,6 +378,7 @@
#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
#define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */
#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* SVME addr check */
#define X86_FEATURE_IDLE_HLT (15*32+30) /* IDLE HLT intercept */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
#define X86_FEATURE_AVX512VBMI (16*32+ 1) /* "avx512vbmi" AVX512 Vector Bit Manipulation instructions*/


@ -27,6 +27,7 @@
#include <linux/kfifo.h>
#include <linux/sched/vhost_task.h>
#include <linux/call_once.h>
#include <linux/atomic.h>
#include <asm/apic.h>
#include <asm/pvclock-abi.h>
@ -405,7 +406,7 @@ union kvm_cpu_role {
};
struct kvm_rmap_head {
unsigned long val;
atomic_long_t val;
};
struct kvm_pio_request {
@ -880,6 +881,7 @@ struct kvm_vcpu_arch {
int cpuid_nent;
struct kvm_cpuid_entry2 *cpuid_entries;
bool cpuid_dynamic_bits_dirty;
bool is_amd_compatible;
/*
@ -909,7 +911,8 @@ struct kvm_vcpu_arch {
int (*complete_userspace_io)(struct kvm_vcpu *vcpu);
gpa_t time;
struct pvclock_vcpu_time_info hv_clock;
s8 pvclock_tsc_shift;
u32 pvclock_tsc_mul;
unsigned int hw_tsc_khz;
struct gfn_to_pfn_cache pv_time;
/* set guest stopped flag in pvclock flags field */
@ -997,8 +1000,8 @@ struct kvm_vcpu_arch {
u64 msr_int_val; /* MSR_KVM_ASYNC_PF_INT */
u16 vec;
u32 id;
bool send_user_only;
u32 host_apf_flags;
bool send_always;
bool delivery_as_pf_vmexit;
bool pageready_pending;
} apf;
@ -1053,6 +1056,7 @@ struct kvm_vcpu_arch {
/* Protected Guests */
bool guest_state_protected;
bool guest_tsc_protected;
/*
* Set when PDPTS were loaded directly by the userspace without
@ -1189,6 +1193,8 @@ struct kvm_xen {
struct gfn_to_pfn_cache shinfo_cache;
struct idr evtchn_ports;
unsigned long poll_mask[BITS_TO_LONGS(KVM_MAX_VCPUS)];
struct kvm_xen_hvm_config hvm_config;
};
#endif
@ -1354,8 +1360,6 @@ struct kvm_arch {
u64 shadow_mmio_value;
struct iommu_domain *iommu_domain;
bool iommu_noncoherent;
#define __KVM_HAVE_ARCH_NONCOHERENT_DMA
atomic_t noncoherent_dma_count;
#define __KVM_HAVE_ARCH_ASSIGNED_DEVICE
@ -1411,8 +1415,6 @@ struct kvm_arch {
struct delayed_work kvmclock_update_work;
struct delayed_work kvmclock_sync_work;
struct kvm_xen_hvm_config xen_hvm_config;
/* reads protected by irq_srcu, writes by irq_lock */
struct hlist_head mask_notifier_list;
@ -1479,6 +1481,7 @@ struct kvm_arch {
* tdp_mmu_page set.
*
* For reads, this list is protected by:
* RCU alone or
* the MMU lock in read mode + RCU or
* the MMU lock in write mode
*
@ -2164,8 +2167,8 @@ int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu);
void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr);
void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, unsigned long payload);
void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr);
void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
bool has_error_code, u32 error_code);
void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault);
void kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
struct x86_exception *fault);


@ -212,8 +212,16 @@ struct snp_psc_desc {
#define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)
/*
* Error codes related to GHCB input that can be communicated back to the guest
* by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
* GHCB-defined return codes that are communicated back to the guest via
* SW_EXITINFO1.
*/
#define GHCB_HV_RESP_NO_ACTION 0
#define GHCB_HV_RESP_ISSUE_EXCEPTION 1
#define GHCB_HV_RESP_MALFORMED_INPUT 2
/*
* GHCB-defined sub-error codes for malformed input (see above) that are
* communicated back to the guest via SW_EXITINFO2[31:0].
*/
#define GHCB_ERR_NOT_REGISTERED 1
#define GHCB_ERR_INVALID_USAGE 2


@ -116,6 +116,7 @@ enum {
INTERCEPT_INVPCID,
INTERCEPT_MCOMMIT,
INTERCEPT_TLBSYNC,
INTERCEPT_IDLE_HLT = 166,
};
@ -290,10 +291,6 @@ static_assert((X2AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AVIC_
#define SVM_SEV_FEAT_ALTERNATE_INJECTION BIT(4)
#define SVM_SEV_FEAT_DEBUG_SWAP BIT(5)
#define SVM_SEV_FEAT_INT_INJ_MODES \
(SVM_SEV_FEAT_RESTRICTED_INJECTION | \
SVM_SEV_FEAT_ALTERNATE_INJECTION)
struct vmcb_seg {
u16 selector;
u16 attrib;


@ -580,18 +580,22 @@ enum vm_entry_failure_code {
/*
* Exit Qualifications for EPT Violations
*/
#define EPT_VIOLATION_ACC_READ_BIT 0
#define EPT_VIOLATION_ACC_WRITE_BIT 1
#define EPT_VIOLATION_ACC_INSTR_BIT 2
#define EPT_VIOLATION_RWX_SHIFT 3
#define EPT_VIOLATION_GVA_IS_VALID_BIT 7
#define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
#define EPT_VIOLATION_ACC_READ (1 << EPT_VIOLATION_ACC_READ_BIT)
#define EPT_VIOLATION_ACC_WRITE (1 << EPT_VIOLATION_ACC_WRITE_BIT)
#define EPT_VIOLATION_ACC_INSTR (1 << EPT_VIOLATION_ACC_INSTR_BIT)
#define EPT_VIOLATION_RWX_MASK (VMX_EPT_RWX_MASK << EPT_VIOLATION_RWX_SHIFT)
#define EPT_VIOLATION_GVA_IS_VALID (1 << EPT_VIOLATION_GVA_IS_VALID_BIT)
#define EPT_VIOLATION_GVA_TRANSLATED (1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
#define EPT_VIOLATION_ACC_READ BIT(0)
#define EPT_VIOLATION_ACC_WRITE BIT(1)
#define EPT_VIOLATION_ACC_INSTR BIT(2)
#define EPT_VIOLATION_PROT_READ BIT(3)
#define EPT_VIOLATION_PROT_WRITE BIT(4)
#define EPT_VIOLATION_PROT_EXEC BIT(5)
#define EPT_VIOLATION_PROT_MASK (EPT_VIOLATION_PROT_READ | \
EPT_VIOLATION_PROT_WRITE | \
EPT_VIOLATION_PROT_EXEC)
#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
static_assert(EPT_VIOLATION_RWX_TO_PROT(VMX_EPT_RWX_MASK) ==
(EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_WRITE | EPT_VIOLATION_PROT_EXEC));
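
(Illustrative, not part of the diff.) A concrete instance of the conversion above: an EPTE that allows read and write (RWX = 0b011) lands in exit-qualification bits 3 and 4, i.e. 0x18. A tiny standalone check, with the macro re-derived locally so it compiles on its own.

#include <assert.h>

#define DEMO_RWX_TO_PROT(rwx)   (((rwx) & 0x7u) << 3)

int main(void)
{
        /* READ|WRITE (0x3) -> PROT_READ|PROT_WRITE (0x18) */
        assert(DEMO_RWX_TO_PROT(0x3u) == 0x18u);
        return 0;
}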
/*
* Exit Qualifications for NOTIFY VM EXIT


@ -559,6 +559,9 @@ struct kvm_x86_mce {
#define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE (1 << 7)
#define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA (1 << 8)
#define KVM_XEN_MSR_MIN_INDEX 0x40000000u
#define KVM_XEN_MSR_MAX_INDEX 0x4fffffffu
struct kvm_xen_hvm_config {
__u32 flags;
__u32 msr;


@ -95,6 +95,7 @@
#define SVM_EXIT_CR14_WRITE_TRAP 0x09e
#define SVM_EXIT_CR15_WRITE_TRAP 0x09f
#define SVM_EXIT_INVPCID 0x0a2
#define SVM_EXIT_IDLE_HLT 0x0a6
#define SVM_EXIT_NPF 0x400
#define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
#define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
@ -224,6 +225,7 @@
{ SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \
{ SVM_EXIT_CR8_WRITE_TRAP, "write_cr8_trap" }, \
{ SVM_EXIT_INVPCID, "invpcid" }, \
{ SVM_EXIT_IDLE_HLT, "idle-halt" }, \
{ SVM_EXIT_NPF, "npf" }, \
{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \


@ -22,6 +22,7 @@ config KVM_X86
select KVM_COMMON
select KVM_GENERIC_MMU_NOTIFIER
select KVM_ELIDE_TLB_FLUSH_IF_YOUNG
select KVM_MMU_LOCKLESS_AGING
select HAVE_KVM_IRQCHIP
select HAVE_KVM_PFNCACHE
select HAVE_KVM_DIRTY_RING_TSO


@ -58,25 +58,24 @@ void __init kvm_init_xstate_sizes(void)
u32 xstate_required_size(u64 xstate_bv, bool compacted)
{
int feature_bit = 0;
u32 ret = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
int i;
xstate_bv &= XFEATURE_MASK_EXTEND;
while (xstate_bv) {
if (xstate_bv & 0x1) {
struct cpuid_xstate_sizes *xs = &xstate_sizes[feature_bit];
u32 offset;
for (i = XFEATURE_YMM; i < ARRAY_SIZE(xstate_sizes) && xstate_bv; i++) {
struct cpuid_xstate_sizes *xs = &xstate_sizes[i];
u32 offset;
/* ECX[1]: 64B alignment in compacted form */
if (compacted)
offset = (xs->ecx & 0x2) ? ALIGN(ret, 64) : ret;
else
offset = xs->ebx;
ret = max(ret, offset + xs->eax);
}
if (!(xstate_bv & BIT_ULL(i)))
continue;
xstate_bv >>= 1;
feature_bit++;
/* ECX[1]: 64B alignment in compacted form */
if (compacted)
offset = (xs->ecx & 0x2) ? ALIGN(ret, 64) : ret;
else
offset = xs->ebx;
ret = max(ret, offset + xs->eax);
xstate_bv &= ~BIT_ULL(i);
}
return ret;
@ -196,6 +195,7 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
}
static u32 kvm_apply_cpuid_pv_features_quirk(struct kvm_vcpu *vcpu);
static void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu);
/* Check whether the supplied CPUID data is equal to what is already set for the vCPU. */
static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
@ -300,10 +300,12 @@ static __always_inline void kvm_update_feature_runtime(struct kvm_vcpu *vcpu,
guest_cpu_cap_change(vcpu, x86_feature, has_feature);
}
void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
static void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *best;
vcpu->arch.cpuid_dynamic_bits_dirty = false;
best = kvm_find_cpuid_entry(vcpu, 1);
if (best) {
kvm_update_feature_runtime(vcpu, best, X86_FEATURE_OSXSAVE,
@ -333,7 +335,6 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
}
EXPORT_SYMBOL_GPL(kvm_update_cpuid_runtime);
static bool kvm_cpuid_has_hyperv(struct kvm_vcpu *vcpu)
{
@ -646,6 +647,9 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
if (cpuid->nent < vcpu->arch.cpuid_nent)
return -E2BIG;
if (vcpu->arch.cpuid_dynamic_bits_dirty)
kvm_update_cpuid_runtime(vcpu);
if (copy_to_user(entries, vcpu->arch.cpuid_entries,
vcpu->arch.cpuid_nent * sizeof(struct kvm_cpuid_entry2)))
return -EFAULT;
@ -1704,7 +1708,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
phys_as = entry->eax & 0xff;
g_phys_as = phys_as;
if (kvm_mmu_get_max_tdp_level() < 5)
g_phys_as = min(g_phys_as, 48);
g_phys_as = min(g_phys_as, 48U);
}
entry->eax = phys_as | (virt_as << 8) | (g_phys_as << 16);
@ -1769,13 +1773,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
cpuid_entry_override(entry, CPUID_8000_0022_EAX);
if (kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2))
ebx.split.num_core_pmc = kvm_pmu_cap.num_counters_gp;
else if (kvm_cpu_cap_has(X86_FEATURE_PERFCTR_CORE))
ebx.split.num_core_pmc = AMD64_NUM_COUNTERS_CORE;
else
ebx.split.num_core_pmc = AMD64_NUM_COUNTERS;
ebx.split.num_core_pmc = kvm_pmu_cap.num_counters_gp;
entry->ebx = ebx.full;
break;
}
@ -1985,6 +1983,9 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
struct kvm_cpuid_entry2 *entry;
bool exact, used_max_basic = false;
if (vcpu->arch.cpuid_dynamic_bits_dirty)
kvm_update_cpuid_runtime(vcpu);
entry = kvm_find_cpuid_entry_index(vcpu, function, index);
exact = !!entry;
@ -2000,12 +2001,29 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
*edx = entry->edx;
if (function == 7 && index == 0) {
u64 data;
if (!__kvm_get_msr(vcpu, MSR_IA32_TSX_CTRL, &data, true) &&
if ((*ebx & (feature_bit(RTM) | feature_bit(HLE))) &&
!__kvm_get_msr(vcpu, MSR_IA32_TSX_CTRL, &data, true) &&
(data & TSX_CTRL_CPUID_CLEAR))
*ebx &= ~(feature_bit(RTM) | feature_bit(HLE));
} else if (function == 0x80000007) {
if (kvm_hv_invtsc_suppressed(vcpu))
*edx &= ~feature_bit(CONSTANT_TSC);
} else if (IS_ENABLED(CONFIG_KVM_XEN) &&
kvm_xen_is_tsc_leaf(vcpu, function)) {
/*
* Update guest TSC frequency information if necessary.
* Ignore failures, there is no sane value that can be
* provided if KVM can't get the TSC frequency.
*/
if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu))
kvm_guest_time_update(vcpu);
if (index == 1) {
*ecx = vcpu->arch.pvclock_tsc_mul;
*edx = vcpu->arch.pvclock_tsc_shift;
} else if (index == 2) {
*eax = vcpu->arch.hw_tsc_khz;
}
}
} else {
*eax = *ebx = *ecx = *edx = 0;
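
(Illustrative, not part of the diff.) The mul/shift pair surfaced by the leaf above is meant to be consumed with the usual pvclock scaling: pre-shift the TSC delta, then keep the upper 32 bits of a 64x32 multiply. A standalone sketch of that arithmetic, assuming compiler support for unsigned __int128; the function name is invented for the sketch.

#include <stdint.h>

static uint64_t demo_pvclock_scale(uint64_t delta, uint32_t mul_frac, int8_t shift)
{
        /* apply the binary shift first, then the fixed-point multiplier */
        if (shift < 0)
                delta >>= -shift;
        else
                delta <<= shift;

        return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}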


@ -11,7 +11,6 @@ extern u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
void kvm_set_cpu_caps(void);
void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu);
void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu);
struct kvm_cpuid_entry2 *kvm_find_cpuid_entry_index(struct kvm_vcpu *vcpu,
u32 function, u32 index);
struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
@ -232,6 +231,14 @@ static __always_inline bool guest_cpu_cap_has(struct kvm_vcpu *vcpu,
{
unsigned int x86_leaf = __feature_leaf(x86_feature);
/*
* Except for MWAIT, querying dynamic feature bits is disallowed, so
* that KVM can defer runtime updates until the next CPUID emulation.
*/
BUILD_BUG_ON(x86_feature == X86_FEATURE_APIC ||
x86_feature == X86_FEATURE_OSXSAVE ||
x86_feature == X86_FEATURE_OSPKE);
return vcpu->arch.cpu_caps[x86_leaf] & __feature_bit(x86_feature);
}


@ -477,8 +477,11 @@ static int emulator_check_intercept(struct x86_emulate_ctxt *ctxt,
.dst_val = ctxt->dst.val64,
.src_bytes = ctxt->src.bytes,
.dst_bytes = ctxt->dst.bytes,
.src_type = ctxt->src.type,
.dst_type = ctxt->dst.type,
.ad_bytes = ctxt->ad_bytes,
.next_rip = ctxt->eip,
.rip = ctxt->eip,
.next_rip = ctxt->_eip,
};
return ctxt->ops->intercept(ctxt, &info, stage);


@ -567,7 +567,7 @@ static void pic_irq_request(struct kvm *kvm, int level)
{
struct kvm_pic *s = kvm->arch.vpic;
if (!s->output)
if (!s->output && level)
s->wakeup_needed = true;
s->output = level;
}


@ -44,7 +44,10 @@ struct x86_instruction_info {
u64 dst_val; /* value of destination operand */
u8 src_bytes; /* size of source operand */
u8 dst_bytes; /* size of destination operand */
u8 src_type; /* type of source operand */
u8 dst_type; /* type of destination operand */
u8 ad_bytes; /* size of src/dst address */
u64 rip; /* rip of the instruction */
u64 next_rip; /* rip following the instruction */
};
@ -272,8 +275,10 @@ struct operand {
};
};
#define X86_MAX_INSTRUCTION_LENGTH 15
struct fetch_cache {
u8 data[15];
u8 data[X86_MAX_INSTRUCTION_LENGTH];
u8 *ptr;
u8 *end;
};


@ -221,13 +221,6 @@ static inline bool kvm_apic_map_get_logical_dest(struct kvm_apic_map *map,
}
}
static void kvm_apic_map_free(struct rcu_head *rcu)
{
struct kvm_apic_map *map = container_of(rcu, struct kvm_apic_map, rcu);
kvfree(map);
}
static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
struct kvm_vcpu *vcpu,
bool *xapic_id_mismatch)
@ -489,7 +482,7 @@ out:
mutex_unlock(&kvm->arch.apic_map_lock);
if (old)
call_rcu(&old->rcu, kvm_apic_map_free);
kvfree_rcu(old, rcu);
kvm_make_scan_ioapic_request(kvm);
}
@ -2593,7 +2586,7 @@ static void __kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value)
vcpu->arch.apic_base = value;
if ((old_value ^ value) & MSR_IA32_APICBASE_ENABLE)
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
if (!apic)
return;
@ -3396,9 +3389,9 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
if (test_and_clear_bit(KVM_APIC_INIT, &apic->pending_events)) {
kvm_vcpu_reset(vcpu, true);
if (kvm_vcpu_is_bsp(apic->vcpu))
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
else
vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
kvm_set_mp_state(vcpu, KVM_MP_STATE_INIT_RECEIVED);
}
if (test_and_clear_bit(KVM_APIC_SIPI, &apic->pending_events)) {
if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
@ -3407,7 +3400,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
sipi_vector = apic->sipi_vector;
kvm_x86_call(vcpu_deliver_sipi_vector)(vcpu,
sipi_vector);
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
}
}
return 0;


@ -501,7 +501,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
return false;
}
if (!spte_has_volatile_bits(old_spte))
if (!spte_needs_atomic_update(old_spte))
__update_clear_spte_fast(sptep, new_spte);
else
old_spte = __update_clear_spte_slow(sptep, new_spte);
@ -524,7 +524,7 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
int level = sptep_to_sp(sptep)->role.level;
if (!is_shadow_present_pte(old_spte) ||
!spte_has_volatile_bits(old_spte))
!spte_needs_atomic_update(old_spte))
__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
else
old_spte = __update_clear_spte_slow(sptep, SHADOW_NONPRESENT_VALUE);
@ -853,32 +853,173 @@ static struct kvm_memory_slot *gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu
* About rmap_head encoding:
*
* If the bit zero of rmap_head->val is clear, then it points to the only spte
* in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
* in this rmap chain. Otherwise, (rmap_head->val & ~3) points to a struct
* pte_list_desc containing more mappings.
*/
#define KVM_RMAP_MANY BIT(0)
/*
* rmaps and PTE lists are mostly protected by mmu_lock (the shadow MMU always
* operates with mmu_lock held for write), but rmaps can be walked without
* holding mmu_lock so long as the caller can tolerate SPTEs in the rmap chain
* being zapped/dropped _while the rmap is locked_.
*
* Other than the KVM_RMAP_LOCKED flag, modifications to rmap entries must be
* done while holding mmu_lock for write. This allows a task walking rmaps
* without holding mmu_lock to concurrently walk the same entries as a task
* that is holding mmu_lock but _not_ the rmap lock. Neither task will modify
* the rmaps, thus the walks are stable.
*
* As alluded to above, SPTEs in rmaps are _not_ protected by KVM_RMAP_LOCKED,
* only the rmap chains themselves are protected. E.g. holding an rmap's lock
* ensures all "struct pte_list_desc" fields are stable.
*/
#define KVM_RMAP_LOCKED BIT(1)
static unsigned long __kvm_rmap_lock(struct kvm_rmap_head *rmap_head)
{
unsigned long old_val, new_val;
lockdep_assert_preemption_disabled();
/*
* Elide the lock if the rmap is empty, as lockless walkers (read-only
* mode) don't need to (and can't) walk an empty rmap, nor can they add
* entries to the rmap. I.e. the only paths that process empty rmaps
* do so while holding mmu_lock for write, and are mutually exclusive.
*/
old_val = atomic_long_read(&rmap_head->val);
if (!old_val)
return 0;
do {
/*
* If the rmap is locked, wait for it to be unlocked before
* trying to acquire the lock, e.g. to avoid bouncing the cache
* line.
*/
while (old_val & KVM_RMAP_LOCKED) {
cpu_relax();
old_val = atomic_long_read(&rmap_head->val);
}
/*
* Recheck for an empty rmap, it may have been purged by the
* task that held the lock.
*/
if (!old_val)
return 0;
new_val = old_val | KVM_RMAP_LOCKED;
/*
* Use try_cmpxchg_acquire() to prevent reads and writes to the rmap
* from being reordered outside of the critical section created by
* __kvm_rmap_lock().
*
* Pairs with the atomic_long_set_release() in kvm_rmap_unlock().
*
* For the !old_val case, no ordering is needed, as there is no rmap
* to walk.
*/
} while (!atomic_long_try_cmpxchg_acquire(&rmap_head->val, &old_val, new_val));
/*
* Return the old value, i.e. _without_ the LOCKED bit set. It's
* impossible for the return value to be 0 (see above), i.e. the read-
* only unlock flow can't get a false positive and fail to unlock.
*/
return old_val;
}
static unsigned long kvm_rmap_lock(struct kvm *kvm,
struct kvm_rmap_head *rmap_head)
{
lockdep_assert_held_write(&kvm->mmu_lock);
return __kvm_rmap_lock(rmap_head);
}
static void __kvm_rmap_unlock(struct kvm_rmap_head *rmap_head,
unsigned long val)
{
KVM_MMU_WARN_ON(val & KVM_RMAP_LOCKED);
/*
* Ensure that all accesses to the rmap have completed before unlocking
* the rmap.
*
* Pairs with the atomic_long_try_cmpxchg_acquire() in __kvm_rmap_lock().
*/
atomic_long_set_release(&rmap_head->val, val);
}
static void kvm_rmap_unlock(struct kvm *kvm,
struct kvm_rmap_head *rmap_head,
unsigned long new_val)
{
lockdep_assert_held_write(&kvm->mmu_lock);
__kvm_rmap_unlock(rmap_head, new_val);
}
static unsigned long kvm_rmap_get(struct kvm_rmap_head *rmap_head)
{
return atomic_long_read(&rmap_head->val) & ~KVM_RMAP_LOCKED;
}
/*
* If mmu_lock isn't held, rmaps can only be locked in read-only mode. The
* actual locking is the same, but the caller is disallowed from modifying the
* rmap, and so the unlock flow is a nop if the rmap is/was empty.
*/
static unsigned long kvm_rmap_lock_readonly(struct kvm_rmap_head *rmap_head)
{
unsigned long rmap_val;
preempt_disable();
rmap_val = __kvm_rmap_lock(rmap_head);
if (!rmap_val)
preempt_enable();
return rmap_val;
}
static void kvm_rmap_unlock_readonly(struct kvm_rmap_head *rmap_head,
unsigned long old_val)
{
if (!old_val)
return;
KVM_MMU_WARN_ON(old_val != kvm_rmap_get(rmap_head));
__kvm_rmap_unlock(rmap_head, old_val);
preempt_enable();
}
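
Taken together, the KVM_RMAP_MANY/KVM_RMAP_LOCKED scheme is a per-entry bit spinlock kept in the low bits of the rmap word itself: bit 0 tags a multi-entry chain, bit 1 is the lock, and an empty (zero) rmap is never locked at all. A minimal sketch of the same pattern in portable C11 atomics; the names are invented for illustration and this is not the kernel code:

#include <stdatomic.h>

#define EXAMPLE_LOCKED 2UL      /* analogue of KVM_RMAP_LOCKED (bit 1) */

static unsigned long example_lock(atomic_ulong *head)
{
        unsigned long old = atomic_load_explicit(head, memory_order_relaxed);

        if (!old)               /* empty chain: nothing to walk, elide the lock */
                return 0;

        do {
                while (old & EXAMPLE_LOCKED)    /* spin until the holder releases */
                        old = atomic_load_explicit(head, memory_order_relaxed);
                if (!old)                       /* purged while we waited */
                        return 0;
        } while (!atomic_compare_exchange_weak_explicit(head, &old,
                                                        old | EXAMPLE_LOCKED,
                                                        memory_order_acquire,
                                                        memory_order_relaxed));

        return old;             /* returned value never has the lock bit set */
}

static void example_unlock(atomic_ulong *head, unsigned long new_val)
{
        /* Release: all accesses to the chain complete before the lock bit clears. */
        atomic_store_explicit(head, new_val, memory_order_release);
}

As in __kvm_rmap_lock(), a zero value means there is nothing to walk, so the lock bit is never set on an empty entry and the read-only unlock path can treat zero as "nothing to undo".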
/*
* Returns the number of pointers in the rmap chain, not counting the new one.
*/
static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
struct kvm_rmap_head *rmap_head)
static int pte_list_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
u64 *spte, struct kvm_rmap_head *rmap_head)
{
unsigned long old_val, new_val;
struct pte_list_desc *desc;
int count = 0;
if (!rmap_head->val) {
rmap_head->val = (unsigned long)spte;
} else if (!(rmap_head->val & KVM_RMAP_MANY)) {
old_val = kvm_rmap_lock(kvm, rmap_head);
if (!old_val) {
new_val = (unsigned long)spte;
} else if (!(old_val & KVM_RMAP_MANY)) {
desc = kvm_mmu_memory_cache_alloc(cache);
desc->sptes[0] = (u64 *)rmap_head->val;
desc->sptes[0] = (u64 *)old_val;
desc->sptes[1] = spte;
desc->spte_count = 2;
desc->tail_count = 0;
rmap_head->val = (unsigned long)desc | KVM_RMAP_MANY;
new_val = (unsigned long)desc | KVM_RMAP_MANY;
++count;
} else {
desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc = (struct pte_list_desc *)(old_val & ~KVM_RMAP_MANY);
count = desc->tail_count + desc->spte_count;
/*
@ -887,21 +1028,25 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
*/
if (desc->spte_count == PTE_LIST_EXT) {
desc = kvm_mmu_memory_cache_alloc(cache);
desc->more = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc->more = (struct pte_list_desc *)(old_val & ~KVM_RMAP_MANY);
desc->spte_count = 0;
desc->tail_count = count;
rmap_head->val = (unsigned long)desc | KVM_RMAP_MANY;
new_val = (unsigned long)desc | KVM_RMAP_MANY;
} else {
new_val = old_val;
}
desc->sptes[desc->spte_count++] = spte;
}
kvm_rmap_unlock(kvm, rmap_head, new_val);
return count;
}
static void pte_list_desc_remove_entry(struct kvm *kvm,
struct kvm_rmap_head *rmap_head,
static void pte_list_desc_remove_entry(struct kvm *kvm, unsigned long *rmap_val,
struct pte_list_desc *desc, int i)
{
struct pte_list_desc *head_desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
struct pte_list_desc *head_desc = (struct pte_list_desc *)(*rmap_val & ~KVM_RMAP_MANY);
int j = head_desc->spte_count - 1;
/*
@ -928,9 +1073,9 @@ static void pte_list_desc_remove_entry(struct kvm *kvm,
* head at the next descriptor, i.e. the new head.
*/
if (!head_desc->more)
rmap_head->val = 0;
*rmap_val = 0;
else
rmap_head->val = (unsigned long)head_desc->more | KVM_RMAP_MANY;
*rmap_val = (unsigned long)head_desc->more | KVM_RMAP_MANY;
mmu_free_pte_list_desc(head_desc);
}
@ -938,24 +1083,26 @@ static void pte_list_remove(struct kvm *kvm, u64 *spte,
struct kvm_rmap_head *rmap_head)
{
struct pte_list_desc *desc;
unsigned long rmap_val;
int i;
if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_head->val, kvm))
return;
rmap_val = kvm_rmap_lock(kvm, rmap_head);
if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_val, kvm))
goto out;
if (!(rmap_head->val & KVM_RMAP_MANY)) {
if (KVM_BUG_ON_DATA_CORRUPTION((u64 *)rmap_head->val != spte, kvm))
return;
if (!(rmap_val & KVM_RMAP_MANY)) {
if (KVM_BUG_ON_DATA_CORRUPTION((u64 *)rmap_val != spte, kvm))
goto out;
rmap_head->val = 0;
rmap_val = 0;
} else {
desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc = (struct pte_list_desc *)(rmap_val & ~KVM_RMAP_MANY);
while (desc) {
for (i = 0; i < desc->spte_count; ++i) {
if (desc->sptes[i] == spte) {
pte_list_desc_remove_entry(kvm, rmap_head,
pte_list_desc_remove_entry(kvm, &rmap_val,
desc, i);
return;
goto out;
}
}
desc = desc->more;
@ -963,6 +1110,9 @@ static void pte_list_remove(struct kvm *kvm, u64 *spte,
KVM_BUG_ON_DATA_CORRUPTION(true, kvm);
}
out:
kvm_rmap_unlock(kvm, rmap_head, rmap_val);
}
static void kvm_zap_one_rmap_spte(struct kvm *kvm,
@ -977,17 +1127,19 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
struct kvm_rmap_head *rmap_head)
{
struct pte_list_desc *desc, *next;
unsigned long rmap_val;
int i;
if (!rmap_head->val)
rmap_val = kvm_rmap_lock(kvm, rmap_head);
if (!rmap_val)
return false;
if (!(rmap_head->val & KVM_RMAP_MANY)) {
mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
if (!(rmap_val & KVM_RMAP_MANY)) {
mmu_spte_clear_track_bits(kvm, (u64 *)rmap_val);
goto out;
}
desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc = (struct pte_list_desc *)(rmap_val & ~KVM_RMAP_MANY);
for (; desc; desc = next) {
for (i = 0; i < desc->spte_count; i++)
@ -997,20 +1149,21 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
}
out:
/* rmap_head is meaningless now, remember to reset it */
rmap_head->val = 0;
kvm_rmap_unlock(kvm, rmap_head, 0);
return true;
}
unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
{
unsigned long rmap_val = kvm_rmap_get(rmap_head);
struct pte_list_desc *desc;
if (!rmap_head->val)
if (!rmap_val)
return 0;
else if (!(rmap_head->val & KVM_RMAP_MANY))
else if (!(rmap_val & KVM_RMAP_MANY))
return 1;
desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc = (struct pte_list_desc *)(rmap_val & ~KVM_RMAP_MANY);
return desc->tail_count + desc->spte_count;
}
@ -1053,6 +1206,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
*/
struct rmap_iterator {
/* private fields */
struct rmap_head *head;
struct pte_list_desc *desc; /* holds the sptep if not NULL */
int pos; /* index of the sptep */
};
@ -1067,23 +1221,19 @@ struct rmap_iterator {
static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
struct rmap_iterator *iter)
{
u64 *sptep;
unsigned long rmap_val = kvm_rmap_get(rmap_head);
if (!rmap_head->val)
if (!rmap_val)
return NULL;
if (!(rmap_head->val & KVM_RMAP_MANY)) {
if (!(rmap_val & KVM_RMAP_MANY)) {
iter->desc = NULL;
sptep = (u64 *)rmap_head->val;
goto out;
return (u64 *)rmap_val;
}
iter->desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
iter->desc = (struct pte_list_desc *)(rmap_val & ~KVM_RMAP_MANY);
iter->pos = 0;
sptep = iter->desc->sptes[iter->pos];
out:
BUG_ON(!is_shadow_present_pte(*sptep));
return sptep;
return iter->desc->sptes[iter->pos];
}
/*
@ -1093,14 +1243,11 @@ out:
*/
static u64 *rmap_get_next(struct rmap_iterator *iter)
{
u64 *sptep;
if (iter->desc) {
if (iter->pos < PTE_LIST_EXT - 1) {
++iter->pos;
sptep = iter->desc->sptes[iter->pos];
if (sptep)
goto out;
if (iter->desc->sptes[iter->pos])
return iter->desc->sptes[iter->pos];
}
iter->desc = iter->desc->more;
@ -1108,20 +1255,24 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
if (iter->desc) {
iter->pos = 0;
/* desc->sptes[0] cannot be NULL */
sptep = iter->desc->sptes[iter->pos];
goto out;
return iter->desc->sptes[iter->pos];
}
}
return NULL;
out:
BUG_ON(!is_shadow_present_pte(*sptep));
return sptep;
}
#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_) \
for (_spte_ = rmap_get_first(_rmap_head_, _iter_); \
_spte_; _spte_ = rmap_get_next(_iter_))
#define __for_each_rmap_spte(_rmap_head_, _iter_, _sptep_) \
for (_sptep_ = rmap_get_first(_rmap_head_, _iter_); \
_sptep_; _sptep_ = rmap_get_next(_iter_))
#define for_each_rmap_spte(_rmap_head_, _iter_, _sptep_) \
__for_each_rmap_spte(_rmap_head_, _iter_, _sptep_) \
if (!WARN_ON_ONCE(!is_shadow_present_pte(*(_sptep_)))) \
#define for_each_rmap_spte_lockless(_rmap_head_, _iter_, _sptep_, _spte_) \
__for_each_rmap_spte(_rmap_head_, _iter_, _sptep_) \
if (is_shadow_present_pte(_spte_ = mmu_spte_get_lockless(_sptep_)))
static void drop_spte(struct kvm *kvm, u64 *sptep)
{
@ -1207,12 +1358,13 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
struct rmap_iterator iter;
bool flush = false;
for_each_rmap_spte(rmap_head, &iter, sptep)
for_each_rmap_spte(rmap_head, &iter, sptep) {
if (spte_ad_need_write_protect(*sptep))
flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
(unsigned long *)sptep);
else
flush |= spte_clear_dirty(sptep);
}
return flush;
}
@ -1401,7 +1553,7 @@ static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
while (++iterator->rmap <= iterator->end_rmap) {
iterator->gfn += KVM_PAGES_PER_HPAGE(iterator->level);
if (iterator->rmap->val)
if (atomic_long_read(&iterator->rmap->val))
return;
}
@ -1533,7 +1685,7 @@ static void __rmap_add(struct kvm *kvm,
kvm_update_page_stats(kvm, sp->role.level, 1);
rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
rmap_count = pte_list_add(cache, spte, rmap_head);
rmap_count = pte_list_add(kvm, cache, spte, rmap_head);
if (rmap_count > kvm->stat.max_mmu_rmap_size)
kvm->stat.max_mmu_rmap_size = rmap_count;
@ -1552,51 +1704,67 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
}
static bool kvm_rmap_age_gfn_range(struct kvm *kvm,
struct kvm_gfn_range *range, bool test_only)
struct kvm_gfn_range *range,
bool test_only)
{
struct slot_rmap_walk_iterator iterator;
struct kvm_rmap_head *rmap_head;
struct rmap_iterator iter;
unsigned long rmap_val;
bool young = false;
u64 *sptep;
gfn_t gfn;
int level;
u64 spte;
for_each_slot_rmap_range(range->slot, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
range->start, range->end - 1, &iterator) {
for_each_rmap_spte(iterator.rmap, &iter, sptep) {
u64 spte = *sptep;
for (level = PG_LEVEL_4K; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
for (gfn = range->start; gfn < range->end;
gfn += KVM_PAGES_PER_HPAGE(level)) {
rmap_head = gfn_to_rmap(gfn, level, range->slot);
rmap_val = kvm_rmap_lock_readonly(rmap_head);
if (!is_accessed_spte(spte))
continue;
for_each_rmap_spte_lockless(rmap_head, &iter, sptep, spte) {
if (!is_accessed_spte(spte))
continue;
if (test_only)
return true;
if (test_only) {
kvm_rmap_unlock_readonly(rmap_head, rmap_val);
return true;
}
if (spte_ad_enabled(spte)) {
clear_bit((ffs(shadow_accessed_mask) - 1),
(unsigned long *)sptep);
} else {
/*
* WARN if mmu_spte_update() signals the need
* for a TLB flush, as Access tracking a SPTE
* should never trigger an _immediate_ flush.
*/
spte = mark_spte_for_access_track(spte);
WARN_ON_ONCE(mmu_spte_update(sptep, spte));
if (spte_ad_enabled(spte))
clear_bit((ffs(shadow_accessed_mask) - 1),
(unsigned long *)sptep);
else
/*
* If the following cmpxchg fails, the
* spte is being concurrently modified
* and should most likely stay young.
*/
cmpxchg64(sptep, spte,
mark_spte_for_access_track(spte));
young = true;
}
young = true;
kvm_rmap_unlock_readonly(rmap_head, rmap_val);
}
}
return young;
}
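
The comment above the cmpxchg captures the whole policy: aging is best effort, and a lost race just leaves the page looking young. The same idiom in isolation, with hypothetical names and C11 atomics standing in for the kernel primitives:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static bool example_age_pte(_Atomic uint64_t *ptep, uint64_t accessed_bit)
{
        uint64_t old = atomic_load(ptep);

        if (!(old & accessed_bit))
                return false;           /* already old */

        /* Failure is fine: someone else is updating the PTE right now. */
        atomic_compare_exchange_strong(ptep, &old, old & ~accessed_bit);

        return true;                    /* report "young" either way */
}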
static bool kvm_may_have_shadow_mmu_sptes(struct kvm *kvm)
{
return !tdp_mmu_enabled || READ_ONCE(kvm->arch.indirect_shadow_pages);
}
bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
{
bool young = false;
if (kvm_memslots_have_rmaps(kvm))
young = kvm_rmap_age_gfn_range(kvm, range, false);
if (tdp_mmu_enabled)
young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
young = kvm_tdp_mmu_age_gfn_range(kvm, range);
if (kvm_may_have_shadow_mmu_sptes(kvm))
young |= kvm_rmap_age_gfn_range(kvm, range, false);
return young;
}
@ -1605,11 +1773,14 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
{
bool young = false;
if (kvm_memslots_have_rmaps(kvm))
young = kvm_rmap_age_gfn_range(kvm, range, true);
if (tdp_mmu_enabled)
young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
young = kvm_tdp_mmu_test_age_gfn(kvm, range);
if (young)
return young;
if (kvm_may_have_shadow_mmu_sptes(kvm))
young |= kvm_rmap_age_gfn_range(kvm, range, true);
return young;
}
@ -1656,13 +1827,14 @@ static unsigned kvm_page_table_hashfn(gfn_t gfn)
return hash_64(gfn, KVM_MMU_HASH_SHIFT);
}
static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
static void mmu_page_add_parent_pte(struct kvm *kvm,
struct kvm_mmu_memory_cache *cache,
struct kvm_mmu_page *sp, u64 *parent_pte)
{
if (!parent_pte)
return;
pte_list_add(cache, parent_pte, &sp->parent_ptes);
pte_list_add(kvm, cache, parent_pte, &sp->parent_ptes);
}
static void mmu_page_remove_parent_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
@ -2352,7 +2524,7 @@ static void __link_shadow_page(struct kvm *kvm,
mmu_spte_set(sptep, spte);
mmu_page_add_parent_pte(cache, sp, sptep);
mmu_page_add_parent_pte(kvm, cache, sp, sptep);
/*
* The non-direct sub-pagetable must be updated before linking. For
@ -2416,7 +2588,8 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
* avoids retaining a large number of stale nested SPs.
*/
if (tdp_enabled && invalid_list &&
child->role.guest_mode && !child->parent_ptes.val)
child->role.guest_mode &&
!atomic_long_read(&child->parent_ptes.val))
return kvm_mmu_prepare_zap_page(kvm, child,
invalid_list);
}

@ -510,8 +510,7 @@ error:
* Note, pte_access holds the raw RWX bits from the EPTE, not
* ACC_*_MASK flags!
*/
walker->fault.exit_qualification |= (pte_access & VMX_EPT_RWX_MASK) <<
EPT_VIOLATION_RWX_SHIFT;
walker->fault.exit_qualification |= EPT_VIOLATION_RWX_TO_PROT(pte_access);
}
#endif
walker->fault.address = addr;

@ -129,25 +129,32 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
}
/*
* Returns true if the SPTE has bits that may be set without holding mmu_lock.
* The caller is responsible for checking if the SPTE is shadow-present, and
* for determining whether or not the caller cares about non-leaf SPTEs.
* Returns true if the SPTE needs to be updated atomically due to having bits
* that may be changed without holding mmu_lock, and for which KVM must not
* lose information. E.g. KVM must not drop Dirty bit information. The caller
* is responsible for checking if the SPTE is shadow-present, and for
* determining whether or not the caller cares about non-leaf SPTEs.
*/
bool spte_has_volatile_bits(u64 spte)
bool spte_needs_atomic_update(u64 spte)
{
/* SPTEs can be made Writable bit by KVM's fast page fault handler. */
if (!is_writable_pte(spte) && is_mmu_writable_spte(spte))
return true;
if (is_access_track_spte(spte))
/*
* A/D-disabled SPTEs can be access-tracked by aging, and access-tracked
* SPTEs can be restored by KVM's fast page fault handler.
*/
if (!spte_ad_enabled(spte))
return true;
if (spte_ad_enabled(spte)) {
if (!(spte & shadow_accessed_mask) ||
(is_writable_pte(spte) && !(spte & shadow_dirty_mask)))
return true;
}
return false;
/*
* Dirty and Accessed bits can be set by the CPU. Ignore the Accessed
* bit, as KVM tolerates false negatives/positives, e.g. KVM doesn't
* invalidate TLBs when aging SPTEs, and so it's safe to clobber the
* Accessed bit (and rare in practice).
*/
return is_writable_pte(spte) && !(spte & shadow_dirty_mask);
}
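
The Dirty-bit rule above is the crux of the rename: with a plain read-modify-write, a Dirty bit the CPU sets between the load and the store is silently lost, whereas an exchange hands the racing bits back to the updater so dirty logging stays correct. A sketch of the difference, with invented names and C11 atomics in place of the kernel's xchg():

#include <stdatomic.h>
#include <stdint.h>

static uint64_t update_spte_nonatomic(_Atomic uint64_t *sptep, uint64_t new_spte)
{
        uint64_t old = atomic_load(sptep);
        /* A hardware Dirty-bit update landing here would be lost... */
        atomic_store(sptep, new_spte);
        return old;
}

static uint64_t update_spte_atomic(_Atomic uint64_t *sptep, uint64_t new_spte)
{
        /* ...whereas an xchg-style update returns whatever the CPU set. */
        return atomic_exchange(sptep, new_spte);
}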
bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,

@ -519,7 +519,7 @@ static inline u64 get_mmio_spte_generation(u64 spte)
return gen;
}
bool spte_has_volatile_bits(u64 spte);
bool spte_needs_atomic_update(u64 spte);
bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
const struct kvm_memory_slot *slot,

@ -25,6 +25,13 @@ static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep, u64 new_spte)
return xchg(rcu_dereference(sptep), new_spte);
}
static inline u64 tdp_mmu_clear_spte_bits_atomic(tdp_ptep_t sptep, u64 mask)
{
atomic64_t *sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
return (u64)atomic64_fetch_and(~mask, sptep_atomic);
}
static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
{
KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte));
@ -32,28 +39,21 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
}
/*
* SPTEs must be modified atomically if they are shadow-present, leaf
* SPTEs, and have volatile bits, i.e. has bits that can be set outside
* of mmu_lock. The Writable bit can be set by KVM's fast page fault
* handler, and Accessed and Dirty bits can be set by the CPU.
*
* Note, non-leaf SPTEs do have Accessed bits and those bits are
* technically volatile, but KVM doesn't consume the Accessed bit of
* non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit. This
* logic needs to be reassessed if KVM were to use non-leaf Accessed
* bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
* SPTEs must be modified atomically if they are shadow-present, leaf SPTEs,
* and have volatile bits (bits that can be set outside of mmu_lock) that
* must not be clobbered.
*/
static inline bool kvm_tdp_mmu_spte_need_atomic_write(u64 old_spte, int level)
static inline bool kvm_tdp_mmu_spte_need_atomic_update(u64 old_spte, int level)
{
return is_shadow_present_pte(old_spte) &&
is_last_spte(old_spte, level) &&
spte_has_volatile_bits(old_spte);
spte_needs_atomic_update(old_spte);
}
static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
u64 new_spte, int level)
{
if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level))
if (kvm_tdp_mmu_spte_need_atomic_update(old_spte, level))
return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
__kvm_tdp_mmu_write_spte(sptep, new_spte);
@ -63,12 +63,8 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
static inline u64 tdp_mmu_clear_spte_bits(tdp_ptep_t sptep, u64 old_spte,
u64 mask, int level)
{
atomic64_t *sptep_atomic;
if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level)) {
sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
return (u64)atomic64_fetch_and(~mask, sptep_atomic);
}
if (kvm_tdp_mmu_spte_need_atomic_update(old_spte, level))
return tdp_mmu_clear_spte_bits_atomic(sptep, mask);
__kvm_tdp_mmu_write_spte(sptep, old_spte & ~mask);
return old_spte;

@ -193,6 +193,19 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
!tdp_mmu_root_match((_root), (_types)))) { \
} else
/*
* Iterate over all TDP MMU roots in an RCU read-side critical section.
* It is safe to iterate over the SPTEs under the root, but their values will
* be unstable, so all writes must be atomic. As this routine is meant to be
* used without holding the mmu_lock at all, any bits that are flipped must
* be reflected in kvm_tdp_mmu_spte_need_atomic_write().
*/
#define for_each_tdp_mmu_root_rcu(_kvm, _root, _as_id, _types) \
list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link) \
if ((_as_id >= 0 && kvm_mmu_page_as_id(_root) != _as_id) || \
!tdp_mmu_root_match((_root), (_types))) { \
} else
#define for_each_valid_tdp_mmu_root(_kvm, _root, _as_id) \
__for_each_tdp_mmu_root(_kvm, _root, _as_id, KVM_VALID_ROOTS)
@ -774,9 +787,6 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
continue; \
else
#define tdp_mmu_for_each_pte(_iter, _kvm, _root, _start, _end) \
for_each_tdp_pte(_iter, _kvm, _root, _start, _end)
static inline bool __must_check tdp_mmu_iter_need_resched(struct kvm *kvm,
struct tdp_iter *iter)
{
@ -1235,7 +1245,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
rcu_read_lock();
tdp_mmu_for_each_pte(iter, kvm, root, fault->gfn, fault->gfn + 1) {
for_each_tdp_pte(iter, kvm, root, fault->gfn, fault->gfn + 1) {
int r;
if (fault->nx_huge_page_workaround_enabled)
@ -1332,21 +1342,22 @@ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
* from the clear_young() or clear_flush_young() notifier, which uses the
* return value to determine if the page has been accessed.
*/
static void kvm_tdp_mmu_age_spte(struct tdp_iter *iter)
static void kvm_tdp_mmu_age_spte(struct kvm *kvm, struct tdp_iter *iter)
{
u64 new_spte;
if (spte_ad_enabled(iter->old_spte)) {
iter->old_spte = tdp_mmu_clear_spte_bits(iter->sptep,
iter->old_spte,
shadow_accessed_mask,
iter->level);
iter->old_spte = tdp_mmu_clear_spte_bits_atomic(iter->sptep,
shadow_accessed_mask);
new_spte = iter->old_spte & ~shadow_accessed_mask;
} else {
new_spte = mark_spte_for_access_track(iter->old_spte);
iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
iter->old_spte, new_spte,
iter->level);
/*
* It is safe for the following cmpxchg to fail. Leave the
* Accessed bit set, as the spte is most likely young anyway.
*/
if (__tdp_mmu_set_spte_atomic(kvm, iter, new_spte))
return;
}
trace_kvm_tdp_mmu_spte_changed(iter->as_id, iter->gfn, iter->level,
@ -1371,9 +1382,9 @@ static bool __kvm_tdp_mmu_age_gfn_range(struct kvm *kvm,
* valid roots!
*/
WARN_ON(types & ~KVM_VALID_ROOTS);
__for_each_tdp_mmu_root(kvm, root, range->slot->as_id, types) {
guard(rcu)();
guard(rcu)();
for_each_tdp_mmu_root_rcu(kvm, root, range->slot->as_id, types) {
tdp_root_for_each_leaf_pte(iter, kvm, root, range->start, range->end) {
if (!is_accessed_spte(iter.old_spte))
continue;
@ -1382,7 +1393,7 @@ static bool __kvm_tdp_mmu_age_gfn_range(struct kvm *kvm,
return true;
ret = true;
kvm_tdp_mmu_age_spte(&iter);
kvm_tdp_mmu_age_spte(kvm, &iter);
}
}
@ -1904,7 +1915,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
*root_level = vcpu->arch.mmu->root_role.level;
tdp_mmu_for_each_pte(iter, vcpu->kvm, root, gfn, gfn + 1) {
for_each_tdp_pte(iter, vcpu->kvm, root, gfn, gfn + 1) {
leaf = iter.level;
sptes[leaf] = iter.old_spte;
}
@ -1931,7 +1942,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gfn_t gfn,
struct tdp_iter iter;
tdp_ptep_t sptep = NULL;
tdp_mmu_for_each_pte(iter, vcpu->kvm, root, gfn, gfn + 1) {
for_each_tdp_pte(iter, vcpu->kvm, root, gfn, gfn + 1) {
*spte = iter.old_spte;
sptep = iter.sptep;
}

@ -358,7 +358,7 @@ void enter_smm(struct kvm_vcpu *vcpu)
goto error;
#endif
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
kvm_mmu_reset_context(vcpu);
return;
error:

@ -994,7 +994,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
/* in case we halted in L2 */
svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
/* Give the current vmcb to the guest */

@ -140,7 +140,7 @@ static inline bool is_mirroring_enc_context(struct kvm *kvm)
static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
return sev->vmsa_features & SVM_SEV_FEAT_DEBUG_SWAP;
}
@ -226,9 +226,7 @@ e_uncharge:
static unsigned int sev_get_asid(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
return sev->asid;
return to_kvm_sev_info(kvm)->asid;
}
static void sev_asid_free(struct kvm_sev_info *sev)
@ -403,7 +401,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
struct kvm_sev_init *data,
unsigned long vm_type)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_platform_init_args init_args = {0};
bool es_active = vm_type != KVM_X86_SEV_VM;
u64 valid_vmsa_features = es_active ? sev_supported_vmsa_features : 0;
@ -500,10 +498,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int sev_guest_init2(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_init data;
if (!sev->need_init)
if (!to_kvm_sev_info(kvm)->need_init)
return -EINVAL;
if (kvm->arch.vm_type != KVM_X86_SEV_VM &&
@ -543,14 +540,14 @@ static int __sev_issue_cmd(int fd, int id, void *data, int *error)
static int sev_issue_cmd(struct kvm *kvm, int id, void *data, int *error)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return __sev_issue_cmd(sev->fd, id, data, error);
}
static int sev_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_launch_start start;
struct kvm_sev_launch_start params;
void *dh_blob, *session_blob;
@ -622,9 +619,9 @@ e_free_dh:
static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
unsigned long ulen, unsigned long *n,
int write)
unsigned int flags)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
unsigned long npages, size;
int npinned;
unsigned long locked, lock_limit;
@ -663,7 +660,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
return ERR_PTR(-ENOMEM);
/* Pin the user virtual address. */
npinned = pin_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
npinned = pin_user_pages_fast(uaddr, npages, flags, pages);
if (npinned != npages) {
pr_err("SEV: Failure locking %lu pages.\n", npages);
ret = -ENOMEM;
@ -686,11 +683,9 @@ err:
static void sev_unpin_memory(struct kvm *kvm, struct page **pages,
unsigned long npages)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
unpin_user_pages(pages, npages);
kvfree(pages);
sev->pages_locked -= npages;
to_kvm_sev_info(kvm)->pages_locked -= npages;
}
static void sev_clflush_pages(struct page *pages[], unsigned long npages)
@ -734,7 +729,6 @@ static unsigned long get_num_contig_pages(unsigned long idx,
static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_launch_update_data params;
struct sev_data_launch_update_data data;
struct page **inpages;
@ -751,7 +745,7 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
vaddr_end = vaddr + size;
/* Lock the user memory. */
inpages = sev_pin_memory(kvm, vaddr, size, &npages, 1);
inpages = sev_pin_memory(kvm, vaddr, size, &npages, FOLL_WRITE);
if (IS_ERR(inpages))
return PTR_ERR(inpages);
@ -762,7 +756,7 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev_clflush_pages(inpages, npages);
data.reserved = 0;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
for (i = 0; vaddr < vaddr_end; vaddr = next_vaddr, i += pages) {
int offset, len;
@ -802,7 +796,7 @@ e_unpin:
static int sev_es_sync_vmsa(struct vcpu_svm *svm)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
struct sev_es_save_area *save = svm->sev_es.vmsa;
struct xregs_state *xsave;
const u8 *s;
@ -972,7 +966,6 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
void __user *measure = u64_to_user_ptr(argp->data);
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_launch_measure data;
struct kvm_sev_launch_measure params;
void __user *p = NULL;
@ -1005,7 +998,7 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
}
cmd:
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_MEASURE, &data, &argp->error);
/*
@ -1033,19 +1026,17 @@ e_free_blob:
static int sev_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_launch_finish data;
if (!sev_guest(kvm))
return -ENOTTY;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_FINISH, &data, &argp->error);
}
static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_guest_status params;
struct sev_data_guest_status data;
int ret;
@ -1055,7 +1046,7 @@ static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
memset(&data, 0, sizeof(data));
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_GUEST_STATUS, &data, &argp->error);
if (ret)
return ret;
@ -1074,11 +1065,10 @@ static int __sev_issue_dbg_cmd(struct kvm *kvm, unsigned long src,
unsigned long dst, int size,
int *error, bool enc)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_dbg data;
data.reserved = 0;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
data.dst_addr = dst;
data.src_addr = src;
data.len = size;
@ -1250,7 +1240,7 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
if (IS_ERR(src_p))
return PTR_ERR(src_p);
dst_p = sev_pin_memory(kvm, dst_vaddr & PAGE_MASK, PAGE_SIZE, &n, 1);
dst_p = sev_pin_memory(kvm, dst_vaddr & PAGE_MASK, PAGE_SIZE, &n, FOLL_WRITE);
if (IS_ERR(dst_p)) {
sev_unpin_memory(kvm, src_p, n);
return PTR_ERR(dst_p);
@ -1302,7 +1292,6 @@ err:
static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_launch_secret data;
struct kvm_sev_launch_secret params;
struct page **pages;
@ -1316,7 +1305,7 @@ static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
return -EFAULT;
pages = sev_pin_memory(kvm, params.guest_uaddr, params.guest_len, &n, 1);
pages = sev_pin_memory(kvm, params.guest_uaddr, params.guest_len, &n, FOLL_WRITE);
if (IS_ERR(pages))
return PTR_ERR(pages);
@ -1358,7 +1347,7 @@ static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
data.hdr_address = __psp_pa(hdr);
data.hdr_len = params.hdr_len;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_SECRET, &data, &argp->error);
kfree(hdr);
@ -1378,7 +1367,6 @@ e_unpin_memory:
static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
void __user *report = u64_to_user_ptr(argp->data);
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_attestation_report data;
struct kvm_sev_attestation_report params;
void __user *p;
@ -1411,7 +1399,7 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
memcpy(data.mnonce, params.mnonce, sizeof(params.mnonce));
}
cmd:
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_ATTESTATION_REPORT, &data, &argp->error);
/*
* If we query the session length, FW responded with expected data.
@ -1441,12 +1429,11 @@ static int
__sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
struct kvm_sev_send_start *params)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_start data;
int ret;
memset(&data, 0, sizeof(data));
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, &data, &argp->error);
params->session_len = data.session_len;
@ -1459,7 +1446,6 @@ __sev_send_start_query_session_length(struct kvm *kvm, struct kvm_sev_cmd *argp,
static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_start data;
struct kvm_sev_send_start params;
void *amd_certs, *session_data;
@ -1520,7 +1506,7 @@ static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
data.amd_certs_len = params.amd_certs_len;
data.session_address = __psp_pa(session_data);
data.session_len = params.session_len;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, &data, &argp->error);
@ -1552,12 +1538,11 @@ static int
__sev_send_update_data_query_lengths(struct kvm *kvm, struct kvm_sev_cmd *argp,
struct kvm_sev_send_update_data *params)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_update_data data;
int ret;
memset(&data, 0, sizeof(data));
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, &data, &argp->error);
params->hdr_len = data.hdr_len;
@ -1572,7 +1557,6 @@ __sev_send_update_data_query_lengths(struct kvm *kvm, struct kvm_sev_cmd *argp,
static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_update_data data;
struct kvm_sev_send_update_data params;
void *hdr, *trans_data;
@ -1626,7 +1610,7 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
data.guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) + offset;
data.guest_address |= sev_me_mask;
data.guest_len = params.guest_len;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, &data, &argp->error);
@ -1657,31 +1641,29 @@ e_unpin:
static int sev_send_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_finish data;
if (!sev_guest(kvm))
return -ENOTTY;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
return sev_issue_cmd(kvm, SEV_CMD_SEND_FINISH, &data, &argp->error);
}
static int sev_send_cancel(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_send_cancel data;
if (!sev_guest(kvm))
return -ENOTTY;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
return sev_issue_cmd(kvm, SEV_CMD_SEND_CANCEL, &data, &argp->error);
}
static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_receive_start start;
struct kvm_sev_receive_start params;
int *error = &argp->error;
@ -1755,7 +1737,6 @@ e_free_pdh:
static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_receive_update_data params;
struct sev_data_receive_update_data data;
void *hdr = NULL, *trans = NULL;
@ -1798,7 +1779,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
/* Pin guest memory */
guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
PAGE_SIZE, &n, 1);
PAGE_SIZE, &n, FOLL_WRITE);
if (IS_ERR(guest_page)) {
ret = PTR_ERR(guest_page);
goto e_free_trans;
@ -1815,7 +1796,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
data.guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) + offset;
data.guest_address |= sev_me_mask;
data.guest_len = params.guest_len;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_UPDATE_DATA, &data,
&argp->error);
@ -1832,13 +1813,12 @@ e_free_hdr:
static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_receive_finish data;
if (!sev_guest(kvm))
return -ENOTTY;
data.handle = sev->handle;
data.handle = to_kvm_sev_info(kvm)->handle;
return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
}
@ -1858,8 +1838,8 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id)
static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
{
struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;
struct kvm_sev_info *dst_sev = to_kvm_sev_info(dst_kvm);
struct kvm_sev_info *src_sev = to_kvm_sev_info(src_kvm);
int r = -EBUSY;
if (dst_kvm == src_kvm)
@ -1893,8 +1873,8 @@ release_dst:
static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
{
struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;
struct kvm_sev_info *dst_sev = to_kvm_sev_info(dst_kvm);
struct kvm_sev_info *src_sev = to_kvm_sev_info(src_kvm);
mutex_unlock(&dst_kvm->lock);
mutex_unlock(&src_kvm->lock);
@ -1968,8 +1948,8 @@ static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
{
struct kvm_sev_info *dst = &to_kvm_svm(dst_kvm)->sev_info;
struct kvm_sev_info *src = &to_kvm_svm(src_kvm)->sev_info;
struct kvm_sev_info *dst = to_kvm_sev_info(dst_kvm);
struct kvm_sev_info *src = to_kvm_sev_info(src_kvm);
struct kvm_vcpu *dst_vcpu, *src_vcpu;
struct vcpu_svm *dst_svm, *src_svm;
struct kvm_sev_info *mirror;
@ -2009,8 +1989,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
* and add the new mirror to the list.
*/
if (is_mirroring_enc_context(dst_kvm)) {
struct kvm_sev_info *owner_sev_info =
&to_kvm_svm(dst->enc_context_owner)->sev_info;
struct kvm_sev_info *owner_sev_info = to_kvm_sev_info(dst->enc_context_owner);
list_del(&src->mirror_entry);
list_add_tail(&dst->mirror_entry, &owner_sev_info->mirror_vms);
@ -2069,7 +2048,7 @@ static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
{
struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *dst_sev = to_kvm_sev_info(kvm);
struct kvm_sev_info *src_sev, *cg_cleanup_sev;
CLASS(fd, f)(source_fd);
struct kvm *source_kvm;
@ -2093,7 +2072,7 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
goto out_unlock;
}
src_sev = &to_kvm_svm(source_kvm)->sev_info;
src_sev = to_kvm_sev_info(source_kvm);
dst_sev->misc_cg = get_current_misc_cg();
cg_cleanup_sev = dst_sev;
@ -2181,7 +2160,7 @@ static void *snp_context_create(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int snp_bind_asid(struct kvm *kvm, int *error)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_snp_activate data = {0};
data.gctx_paddr = __psp_pa(sev->snp_context);
@ -2191,7 +2170,7 @@ static int snp_bind_asid(struct kvm *kvm, int *error)
static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_snp_launch_start start = {0};
struct kvm_sev_snp_launch_start params;
int rc;
@ -2260,7 +2239,7 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
void __user *src, int order, void *opaque)
{
struct sev_gmem_populate_args *sev_populate_args = opaque;
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
int n_private = 0, ret, i;
int npages = (1 << order);
gfn_t gfn;
@ -2350,7 +2329,7 @@ err:
static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_gmem_populate_args sev_populate_args = {0};
struct kvm_sev_snp_launch_update params;
struct kvm_memory_slot *memslot;
@ -2434,7 +2413,7 @@ out:
static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_snp_launch_update data = {};
struct kvm_vcpu *vcpu;
unsigned long i;
@ -2482,7 +2461,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int snp_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct kvm_sev_snp_launch_finish params;
struct sev_data_snp_launch_finish *data;
void *id_block = NULL, *id_auth = NULL;
@ -2677,7 +2656,7 @@ out:
int sev_mem_enc_register_region(struct kvm *kvm,
struct kvm_enc_region *range)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct enc_region *region;
int ret = 0;
@ -2696,7 +2675,8 @@ int sev_mem_enc_register_region(struct kvm *kvm,
return -ENOMEM;
mutex_lock(&kvm->lock);
region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages,
FOLL_WRITE | FOLL_LONGTERM);
if (IS_ERR(region->pages)) {
ret = PTR_ERR(region->pages);
mutex_unlock(&kvm->lock);
@ -2729,7 +2709,7 @@ e_free:
static struct enc_region *
find_enc_region(struct kvm *kvm, struct kvm_enc_region *range)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct list_head *head = &sev->regions_list;
struct enc_region *i;
@ -2824,9 +2804,9 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
* The mirror kvm holds an enc_context_owner ref so its asid can't
* disappear until we're done with it
*/
source_sev = &to_kvm_svm(source_kvm)->sev_info;
source_sev = to_kvm_sev_info(source_kvm);
kvm_get_kvm(source_kvm);
mirror_sev = &to_kvm_svm(kvm)->sev_info;
mirror_sev = to_kvm_sev_info(kvm);
list_add_tail(&mirror_sev->mirror_entry, &source_sev->mirror_vms);
/* Set enc_context_owner and copy its encryption context over */
@ -2854,7 +2834,7 @@ e_unlock:
static int snp_decommission_context(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_data_snp_addr data = {};
int ret;
@ -2879,7 +2859,7 @@ static int snp_decommission_context(struct kvm *kvm)
void sev_vm_destroy(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct list_head *head = &sev->regions_list;
struct list_head *pos, *q;
@ -3271,7 +3251,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
if (kvm_ghcb_xcr0_is_valid(svm)) {
vcpu->arch.xcr0 = ghcb_get_xcr0(ghcb);
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
}
/* Copy the GHCB exit information into the VMCB fields */
@ -3430,8 +3410,7 @@ vmgexit_err:
dump_ghcb(svm);
}
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, reason);
svm_vmgexit_bad_input(svm, reason);
/* Resume the guest to "return" the error code. */
return 1;
@ -3472,10 +3451,19 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
svm->sev_es.ghcb = NULL;
}
void pre_sev_run(struct vcpu_svm *svm, int cpu)
int pre_sev_run(struct vcpu_svm *svm, int cpu)
{
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
unsigned int asid = sev_get_asid(svm->vcpu.kvm);
struct kvm *kvm = svm->vcpu.kvm;
unsigned int asid = sev_get_asid(kvm);
/*
* Reject KVM_RUN if userspace attempts to run the vCPU with an invalid
* VMSA, e.g. if userspace forces the vCPU to be RUNNABLE after an SNP
* AP Destroy event.
*/
if (sev_es_guest(kvm) && !VALID_PAGE(svm->vmcb->control.vmsa_pa))
return -EINVAL;
/* Assign the asid allocated with this SEV guest */
svm->asid = asid;
@ -3488,11 +3476,12 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
*/
if (sd->sev_vmcbs[asid] == svm->vmcb &&
svm->vcpu.arch.last_vmentry_cpu == cpu)
return;
return 0;
sd->sev_vmcbs[asid] = svm->vmcb;
svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ASID;
vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
return 0;
}
#define GHCB_SCRATCH_AREA_LIMIT (16ULL * PAGE_SIZE)
@ -3574,8 +3563,7 @@ static int setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
return 0;
e_scratch:
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
svm_vmgexit_bad_input(svm, GHCB_ERR_INVALID_SCRATCH_AREA);
return 1;
}
@ -3675,7 +3663,14 @@ static void snp_complete_psc(struct vcpu_svm *svm, u64 psc_ret)
svm->sev_es.psc_inflight = 0;
svm->sev_es.psc_idx = 0;
svm->sev_es.psc_2m = false;
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, psc_ret);
/*
* PSC requests always get a "no action" response in SW_EXITINFO1, with
* a PSC-specific return code in SW_EXITINFO2 that provides the "real"
* return code. E.g. if the PSC request was interrupted, the need to
* retry is communicated via SW_EXITINFO2, not SW_EXITINFO1.
*/
svm_vmgexit_no_action(svm, psc_ret);
}
static void __snp_complete_one_psc(struct vcpu_svm *svm)
@ -3847,110 +3842,90 @@ next_range:
BUG();
}
static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
/*
* Invoked as part of svm_vcpu_reset() processing of an init event.
*/
void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_memory_slot *slot;
struct page *page;
kvm_pfn_t pfn;
gfn_t gfn;
WARN_ON(!mutex_is_locked(&svm->sev_es.snp_vmsa_mutex));
if (!sev_snp_guest(vcpu->kvm))
return;
guard(mutex)(&svm->sev_es.snp_vmsa_mutex);
if (!svm->sev_es.snp_ap_waiting_for_reset)
return;
svm->sev_es.snp_ap_waiting_for_reset = false;
/* Mark the vCPU as offline and not runnable */
vcpu->arch.pv.pv_unhalted = false;
vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
kvm_set_mp_state(vcpu, KVM_MP_STATE_HALTED);
/* Clear use of the VMSA */
svm->vmcb->control.vmsa_pa = INVALID_PAGE;
if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) {
gfn_t gfn = gpa_to_gfn(svm->sev_es.snp_vmsa_gpa);
struct kvm_memory_slot *slot;
struct page *page;
kvm_pfn_t pfn;
slot = gfn_to_memslot(vcpu->kvm, gfn);
if (!slot)
return -EINVAL;
/*
* The new VMSA will be private memory guest memory, so
* retrieve the PFN from the gmem backend.
*/
if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, &page, NULL))
return -EINVAL;
/*
* From this point forward, the VMSA will always be a
* guest-mapped page rather than the initial one allocated
* by KVM in svm->sev_es.vmsa. In theory, svm->sev_es.vmsa
* could be free'd and cleaned up here, but that involves
* cleanups like wbinvd_on_all_cpus() which would ideally
* be handled during teardown rather than guest boot.
* Deferring that also allows the existing logic for SEV-ES
* VMSAs to be re-used with minimal SNP-specific changes.
*/
svm->sev_es.snp_has_guest_vmsa = true;
/* Use the new VMSA */
svm->vmcb->control.vmsa_pa = pfn_to_hpa(pfn);
/* Mark the vCPU as runnable */
vcpu->arch.pv.pv_unhalted = false;
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
/*
* gmem pages aren't currently migratable, but if this ever
* changes then care should be taken to ensure
* svm->sev_es.vmsa is pinned through some other means.
*/
kvm_release_page_clean(page);
}
/*
* When replacing the VMSA during SEV-SNP AP creation,
* mark the VMCB dirty so that full state is always reloaded.
*/
vmcb_mark_all_dirty(svm->vmcb);
return 0;
}
/*
* Invoked as part of svm_vcpu_reset() processing of an init event.
*/
void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
int ret;
if (!sev_snp_guest(vcpu->kvm))
if (!VALID_PAGE(svm->sev_es.snp_vmsa_gpa))
return;
mutex_lock(&svm->sev_es.snp_vmsa_mutex);
gfn = gpa_to_gfn(svm->sev_es.snp_vmsa_gpa);
svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
if (!svm->sev_es.snp_ap_waiting_for_reset)
goto unlock;
slot = gfn_to_memslot(vcpu->kvm, gfn);
if (!slot)
return;
svm->sev_es.snp_ap_waiting_for_reset = false;
/*
* The new VMSA will be private memory guest memory, so retrieve the
* PFN from the gmem backend.
*/
if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, &page, NULL))
return;
ret = __sev_snp_update_protected_guest_state(vcpu);
if (ret)
vcpu_unimpl(vcpu, "snp: AP state update on init failed\n");
/*
* From this point forward, the VMSA will always be a guest-mapped page
* rather than the initial one allocated by KVM in svm->sev_es.vmsa. In
* theory, svm->sev_es.vmsa could be free'd and cleaned up here, but
* that involves cleanups like wbinvd_on_all_cpus() which would ideally
* be handled during teardown rather than guest boot. Deferring that
* also allows the existing logic for SEV-ES VMSAs to be re-used with
* minimal SNP-specific changes.
*/
svm->sev_es.snp_has_guest_vmsa = true;
unlock:
mutex_unlock(&svm->sev_es.snp_vmsa_mutex);
/* Use the new VMSA */
svm->vmcb->control.vmsa_pa = pfn_to_hpa(pfn);
/* Mark the vCPU as runnable */
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
/*
* gmem pages aren't currently migratable, but if this ever changes
* then care should be taken to ensure svm->sev_es.vmsa is pinned
* through some other means.
*/
kvm_release_page_clean(page);
}
static int sev_snp_ap_creation(struct vcpu_svm *svm)
{
struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(svm->vcpu.kvm);
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_vcpu *target_vcpu;
struct vcpu_svm *target_svm;
unsigned int request;
unsigned int apic_id;
bool kick;
int ret;
request = lower_32_bits(svm->vmcb->control.exit_info_1);
apic_id = upper_32_bits(svm->vmcb->control.exit_info_1);
@ -3963,47 +3938,23 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
return -EINVAL;
}
ret = 0;
target_svm = to_svm(target_vcpu);
/*
* The target vCPU is valid, so the vCPU will be kicked unless the
* request is for CREATE_ON_INIT. For any errors at this stage, the
* kick will place the vCPU in an non-runnable state.
*/
kick = true;
mutex_lock(&target_svm->sev_es.snp_vmsa_mutex);
target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
target_svm->sev_es.snp_ap_waiting_for_reset = true;
/* Interrupt injection mode shouldn't change for AP creation */
if (request < SVM_VMGEXIT_AP_DESTROY) {
u64 sev_features;
sev_features = vcpu->arch.regs[VCPU_REGS_RAX];
sev_features ^= sev->vmsa_features;
if (sev_features & SVM_SEV_FEAT_INT_INJ_MODES) {
vcpu_unimpl(vcpu, "vmgexit: invalid AP injection mode [%#lx] from guest\n",
vcpu->arch.regs[VCPU_REGS_RAX]);
ret = -EINVAL;
goto out;
}
}
guard(mutex)(&target_svm->sev_es.snp_vmsa_mutex);
switch (request) {
case SVM_VMGEXIT_AP_CREATE_ON_INIT:
kick = false;
fallthrough;
case SVM_VMGEXIT_AP_CREATE:
if (vcpu->arch.regs[VCPU_REGS_RAX] != sev->vmsa_features) {
vcpu_unimpl(vcpu, "vmgexit: mismatched AP sev_features [%#lx] != [%#llx] from guest\n",
vcpu->arch.regs[VCPU_REGS_RAX], sev->vmsa_features);
return -EINVAL;
}
if (!page_address_valid(vcpu, svm->vmcb->control.exit_info_2)) {
vcpu_unimpl(vcpu, "vmgexit: invalid AP VMSA address [%#llx] from guest\n",
svm->vmcb->control.exit_info_2);
ret = -EINVAL;
goto out;
return -EINVAL;
}
/*
@ -4017,30 +3968,32 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
vcpu_unimpl(vcpu,
"vmgexit: AP VMSA address [%llx] from guest is unsafe as it is 2M aligned\n",
svm->vmcb->control.exit_info_2);
ret = -EINVAL;
goto out;
return -EINVAL;
}
target_svm->sev_es.snp_vmsa_gpa = svm->vmcb->control.exit_info_2;
break;
case SVM_VMGEXIT_AP_DESTROY:
target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
break;
default:
vcpu_unimpl(vcpu, "vmgexit: invalid AP creation request [%#x] from guest\n",
request);
ret = -EINVAL;
break;
return -EINVAL;
}
out:
if (kick) {
target_svm->sev_es.snp_ap_waiting_for_reset = true;
/*
* Unless Creation is deferred until INIT, signal the vCPU to update
* its state.
*/
if (request != SVM_VMGEXIT_AP_CREATE_ON_INIT) {
kvm_make_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, target_vcpu);
kvm_vcpu_kick(target_vcpu);
}
mutex_unlock(&target_svm->sev_es.snp_vmsa_mutex);
return ret;
return 0;
}
static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_gpa)
@ -4079,7 +4032,8 @@ static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_
goto out_unlock;
}
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(0, fw_err));
/* No action is requested *from KVM* if there was a firmware error. */
svm_vmgexit_no_action(svm, SNP_GUEST_ERR(0, fw_err));
ret = 1; /* resume guest */
@ -4135,8 +4089,7 @@ static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t r
return snp_handle_guest_req(svm, req_gpa, resp_gpa);
request_invalid:
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
svm_vmgexit_bad_input(svm, GHCB_ERR_INVALID_INPUT);
return 1; /* resume guest */
}
@ -4144,7 +4097,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
{
struct vmcb_control_area *control = &svm->vmcb->control;
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
u64 ghcb_info;
int ret = 1;
@ -4328,8 +4281,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
if (ret)
return ret;
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 0);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 0);
svm_vmgexit_success(svm, 0);
exit_code = kvm_ghcb_get_sw_exit_code(control);
switch (exit_code) {
@ -4364,7 +4316,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = kvm_emulate_ap_reset_hold(vcpu);
break;
case SVM_VMGEXIT_AP_JUMP_TABLE: {
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
switch (control->exit_info_1) {
case 0:
@ -4373,21 +4325,19 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
break;
case 1:
/* Get AP jump table address */
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, sev->ap_jump_table);
svm_vmgexit_success(svm, sev->ap_jump_table);
break;
default:
pr_err("svm: vmgexit: unsupported AP jump table request - exit_info_1=%#llx\n",
control->exit_info_1);
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
svm_vmgexit_bad_input(svm, GHCB_ERR_INVALID_INPUT);
}
ret = 1;
break;
}
case SVM_VMGEXIT_HV_FEATURES:
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_HV_FT_SUPPORTED);
svm_vmgexit_success(svm, GHCB_HV_FT_SUPPORTED);
ret = 1;
break;
case SVM_VMGEXIT_TERM_REQUEST:
@ -4408,8 +4358,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
case SVM_VMGEXIT_AP_CREATION:
ret = sev_snp_ap_creation(svm);
if (ret) {
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
svm_vmgexit_bad_input(svm, GHCB_ERR_INVALID_INPUT);
}
ret = 1;
@ -4575,7 +4524,7 @@ void sev_init_vmcb(struct vcpu_svm *svm)
void sev_es_vcpu_reset(struct vcpu_svm *svm)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
/*
* Set the GHCB MSR value as per the GHCB specification when emulating
@ -4655,7 +4604,7 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
* Return from an AP Reset Hold VMGEXIT, where the guest will
* set the CS and RIP. Set SW_EXIT_INFO_2 to a non-zero value.
*/
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
svm_vmgexit_success(svm, 1);
break;
case AP_RESET_HOLD_MSR_PROTO:
/*
@ -4853,7 +4802,7 @@ static bool is_large_rmp_possible(struct kvm *kvm, kvm_pfn_t pfn, int order)
int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
kvm_pfn_t pfn_aligned;
gfn_t gfn_aligned;
int level, rc;

@ -1303,8 +1303,12 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
svm_set_intercept(svm, INTERCEPT_MWAIT);
}
if (!kvm_hlt_in_guest(vcpu->kvm))
svm_set_intercept(svm, INTERCEPT_HLT);
if (!kvm_hlt_in_guest(vcpu->kvm)) {
if (cpu_feature_enabled(X86_FEATURE_IDLE_HLT))
svm_set_intercept(svm, INTERCEPT_IDLE_HLT);
else
svm_set_intercept(svm, INTERCEPT_HLT);
}
control->iopm_base_pa = iopm_base;
control->msrpm_base_pa = __sme_set(__pa(svm->msrpm));
@ -1939,7 +1943,7 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
}
static void svm_set_segment(struct kvm_vcpu *vcpu,
@ -2980,11 +2984,7 @@ static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->sev_es.ghcb))
return kvm_complete_insn_gp(vcpu, err);
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 1);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb,
X86_TRAP_GP |
SVM_EVTINJ_TYPE_EXEPT |
SVM_EVTINJ_VALID);
svm_vmgexit_inject_exception(svm, X86_TRAP_GP);
return 1;
}
@ -3300,6 +3300,17 @@ static int invpcid_interception(struct kvm_vcpu *vcpu)
type = svm->vmcb->control.exit_info_2;
gva = svm->vmcb->control.exit_info_1;
/*
* FIXME: Perform segment checks for 32-bit mode, and inject #SS if the
* stack segment is used. The intercept takes priority over all
* #GP checks except CPL>0, but somehow still generates a linear
* address? The APM is sorely lacking.
*/
if (is_noncanonical_address(gva, vcpu, 0)) {
kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
return 1;
}
return kvm_handle_invpcid(vcpu, type, gva);
}
@ -3370,6 +3381,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
[SVM_EXIT_CR4_WRITE_TRAP] = cr_trap,
[SVM_EXIT_CR8_WRITE_TRAP] = cr_trap,
[SVM_EXIT_INVPCID] = invpcid_interception,
[SVM_EXIT_IDLE_HLT] = kvm_emulate_halt,
[SVM_EXIT_NPF] = npf_interception,
[SVM_EXIT_RSM] = rsm_interception,
[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,
@ -3532,7 +3544,7 @@ int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code)
return interrupt_window_interception(vcpu);
else if (exit_code == SVM_EXIT_INTR)
return intr_interception(vcpu);
else if (exit_code == SVM_EXIT_HLT)
else if (exit_code == SVM_EXIT_HLT || exit_code == SVM_EXIT_IDLE_HLT)
return kvm_emulate_halt(vcpu);
else if (exit_code == SVM_EXIT_NPF)
return npf_interception(vcpu);
@ -3615,7 +3627,7 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
return svm_invoke_exit_handler(vcpu, exit_code);
}
static void pre_svm_run(struct kvm_vcpu *vcpu)
static int pre_svm_run(struct kvm_vcpu *vcpu)
{
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
struct vcpu_svm *svm = to_svm(vcpu);
@ -3637,6 +3649,8 @@ static void pre_svm_run(struct kvm_vcpu *vcpu)
/* FIXME: handle wraparound of asid_generation */
if (svm->current_vmcb->asid_generation != sd->asid_generation)
new_asid(svm, sd);
return 0;
}
static void svm_inject_nmi(struct kvm_vcpu *vcpu)
@ -4144,20 +4158,23 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
vcpu->arch.nmi_injected = true;
svm->nmi_l1_to_l2 = nmi_l1_to_l2;
break;
case SVM_EXITINTINFO_TYPE_EXEPT:
case SVM_EXITINTINFO_TYPE_EXEPT: {
u32 error_code = 0;
/*
* Never re-inject a #VC exception.
*/
if (vector == X86_TRAP_VC)
break;
if (exitintinfo & SVM_EXITINTINFO_VALID_ERR) {
u32 err = svm->vmcb->control.exit_int_info_err;
kvm_requeue_exception_e(vcpu, vector, err);
if (exitintinfo & SVM_EXITINTINFO_VALID_ERR)
error_code = svm->vmcb->control.exit_int_info_err;
} else
kvm_requeue_exception(vcpu, vector);
kvm_requeue_exception(vcpu, vector,
exitintinfo & SVM_EXITINTINFO_VALID_ERR,
error_code);
break;
}
case SVM_EXITINTINFO_TYPE_INTR:
kvm_queue_interrupt(vcpu, vector, false);
break;
@ -4273,7 +4290,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
if (force_immediate_exit)
smp_send_reschedule(vcpu->cpu);
pre_svm_run(vcpu);
if (pre_svm_run(vcpu)) {
vcpu->run->exit_reason = KVM_EXIT_FAIL_ENTRY;
vcpu->run->fail_entry.hardware_entry_failure_reason = SVM_EXIT_ERR;
vcpu->run->fail_entry.cpu = vcpu->cpu;
return EXIT_FASTPATH_EXIT_USERSPACE;
}
sync_lapic_to_cr8(vcpu);


@ -361,20 +361,18 @@ static __always_inline struct kvm_sev_info *to_kvm_sev_info(struct kvm *kvm)
#ifdef CONFIG_KVM_AMD_SEV
static __always_inline bool sev_guest(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
return sev->active;
return to_kvm_sev_info(kvm)->active;
}
static __always_inline bool sev_es_guest(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return sev->es_active && !WARN_ON_ONCE(!sev->active);
}
static __always_inline bool sev_snp_guest(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return (sev->vmsa_features & SVM_SEV_FEAT_SNP_ACTIVE) &&
!WARN_ON_ONCE(!sev_es_guest(kvm));
@ -581,6 +579,35 @@ static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
return false;
}
static inline void svm_vmgexit_set_return_code(struct vcpu_svm *svm,
u64 response, u64 data)
{
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, response);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, data);
}
static inline void svm_vmgexit_inject_exception(struct vcpu_svm *svm, u8 vector)
{
u64 data = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_EXEPT | vector;
svm_vmgexit_set_return_code(svm, GHCB_HV_RESP_ISSUE_EXCEPTION, data);
}
static inline void svm_vmgexit_bad_input(struct vcpu_svm *svm, u64 suberror)
{
svm_vmgexit_set_return_code(svm, GHCB_HV_RESP_MALFORMED_INPUT, suberror);
}
static inline void svm_vmgexit_success(struct vcpu_svm *svm, u64 data)
{
svm_vmgexit_set_return_code(svm, GHCB_HV_RESP_NO_ACTION, data);
}
static inline void svm_vmgexit_no_action(struct vcpu_svm *svm, u64 data)
{
svm_vmgexit_set_return_code(svm, GHCB_HV_RESP_NO_ACTION, data);
}
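The helpers above centralize how a VMGEXIT result is reported back to the guest through the GHCB's SW_EXIT_INFO_1/2 fields. As a rough standalone sketch (not kernel code), the encoding looks like the following; the response-code values are inferred from the call sites replaced in this series (0 = no action, 1 = issue exception, 2 = malformed input) and the macro names here are placeholders, not the real KVM definitions:

#include <stdint.h>
#include <stdio.h>

#define HV_RESP_NO_ACTION        0ull
#define HV_RESP_ISSUE_EXCEPTION  1ull
#define HV_RESP_MALFORMED_INPUT  2ull

#define EVTINJ_VALID       (1ull << 31)
#define EVTINJ_TYPE_EXEPT  (3ull << 8)

struct ghcb_result {
	uint64_t sw_exit_info_1;	/* response code */
	uint64_t sw_exit_info_2;	/* response-specific data */
};

static void vmgexit_success(struct ghcb_result *r, uint64_t data)
{
	r->sw_exit_info_1 = HV_RESP_NO_ACTION;
	r->sw_exit_info_2 = data;
}

static void vmgexit_bad_input(struct ghcb_result *r, uint64_t suberror)
{
	r->sw_exit_info_1 = HV_RESP_MALFORMED_INPUT;
	r->sw_exit_info_2 = suberror;
}

static void vmgexit_inject_exception(struct ghcb_result *r, uint8_t vector)
{
	r->sw_exit_info_1 = HV_RESP_ISSUE_EXCEPTION;
	r->sw_exit_info_2 = EVTINJ_VALID | EVTINJ_TYPE_EXEPT | vector;
}

int main(void)
{
	struct ghcb_result r;

	vmgexit_inject_exception(&r, 13);	/* report a #GP to the guest */
	printf("info1=%llu info2=%#llx\n",
	       (unsigned long long)r.sw_exit_info_1,
	       (unsigned long long)r.sw_exit_info_2);

	vmgexit_bad_input(&r, 2);		/* e.g. an invalid-input suberror */
	printf("info1=%llu info2=%#llx\n",
	       (unsigned long long)r.sw_exit_info_1,
	       (unsigned long long)r.sw_exit_info_2);

	vmgexit_success(&r, 0);			/* no action required, data = 0 */
	printf("info1=%llu info2=%#llx\n",
	       (unsigned long long)r.sw_exit_info_1,
	       (unsigned long long)r.sw_exit_info_2);
	return 0;
}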
/* svm.c */
#define MSR_INVALID 0xffffffffU
@ -715,7 +742,7 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
/* sev.c */
void pre_sev_run(struct vcpu_svm *svm, int cpu);
int pre_sev_run(struct vcpu_svm *svm, int cpu);
void sev_init_vmcb(struct vcpu_svm *svm);
void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);


@ -830,12 +830,12 @@ TRACE_EVENT(kvm_emulate_insn,
TP_ARGS(vcpu, failed),
TP_STRUCT__entry(
__field( __u64, rip )
__field( __u32, csbase )
__field( __u8, len )
__array( __u8, insn, 15 )
__field( __u8, flags )
__field( __u8, failed )
__field( __u64, rip )
__field( __u32, csbase )
__field( __u8, len )
__array( __u8, insn, X86_MAX_INSTRUCTION_LENGTH )
__field( __u8, flags )
__field( __u8, failed )
),
TP_fast_assign(
@ -846,7 +846,7 @@ TRACE_EVENT(kvm_emulate_insn,
__entry->rip = vcpu->arch.emulate_ctxt->_eip - __entry->len;
memcpy(__entry->insn,
vcpu->arch.emulate_ctxt->fetch.data,
15);
X86_MAX_INSTRUCTION_LENGTH);
__entry->flags = kei_decode_mode(vcpu->arch.emulate_ctxt->mode);
__entry->failed = failed;
),


@ -2970,7 +2970,7 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
case INTR_TYPE_SOFT_EXCEPTION:
case INTR_TYPE_SOFT_INTR:
case INTR_TYPE_PRIV_SW_EXCEPTION:
if (CC(vmcs12->vm_entry_instruction_len > 15) ||
if (CC(vmcs12->vm_entry_instruction_len > X86_MAX_INSTRUCTION_LENGTH) ||
CC(vmcs12->vm_entry_instruction_len == 0 &&
CC(!nested_cpu_has_zero_length_injection(vcpu))))
return -EINVAL;
@ -3771,7 +3771,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
break;
case GUEST_ACTIVITY_WAIT_SIPI:
vmx->nested.nested_run_pending = 0;
vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
kvm_set_mp_state(vcpu, KVM_MP_STATE_INIT_RECEIVED);
break;
default:
break;
@ -4618,7 +4618,7 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
*/
static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
u32 vm_exit_reason, u32 exit_intr_info,
unsigned long exit_qualification)
unsigned long exit_qualification, u32 exit_insn_len)
{
/* update exit information fields: */
vmcs12->vm_exit_reason = vm_exit_reason;
@ -4646,7 +4646,7 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
vm_exit_reason, exit_intr_info);
vmcs12->vm_exit_intr_info = exit_intr_info;
vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
vmcs12->vm_exit_instruction_len = exit_insn_len;
vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
/*
@ -4930,8 +4930,9 @@ vmabort:
* and modify vmcs12 to make it see what it would expect to see there if
* L2 was its real guest. Must only be called when in L2 (is_guest_mode())
*/
void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
u32 exit_intr_info, unsigned long exit_qualification)
void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
u32 exit_intr_info, unsigned long exit_qualification,
u32 exit_insn_len)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
@ -4981,7 +4982,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
if (vm_exit_reason != -1)
prepare_vmcs12(vcpu, vmcs12, vm_exit_reason,
exit_intr_info, exit_qualification);
exit_intr_info, exit_qualification,
exit_insn_len);
/*
* Must happen outside of sync_vmcs02_to_vmcs12() as it will
@ -5071,7 +5073,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
vmx->nested.need_vmcs12_to_shadow_sync = true;
/* in case we halted in L2 */
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
if (likely(!vmx->fail)) {
if (vm_exit_reason != -1)


@ -26,8 +26,26 @@ void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu);
enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
bool from_vmentry);
bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu);
void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
u32 exit_intr_info, unsigned long exit_qualification);
void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
u32 exit_intr_info, unsigned long exit_qualification,
u32 exit_insn_len);
static inline void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
u32 exit_intr_info,
unsigned long exit_qualification)
{
u32 exit_insn_len;
if (to_vmx(vcpu)->fail || vm_exit_reason == -1 ||
(vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
exit_insn_len = 0;
else
exit_insn_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
__nested_vmx_vmexit(vcpu, vm_exit_reason, exit_intr_info,
exit_qualification, exit_insn_len);
}
void nested_sync_vmcs12_to_shadow(struct kvm_vcpu *vcpu);
int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data);
int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata);


@ -2579,6 +2579,34 @@ static u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
return ctl_opt & allowed;
}
#define vmx_check_entry_exit_pairs(pairs, entry_controls, exit_controls) \
({ \
int i, r = 0; \
\
BUILD_BUG_ON(sizeof(pairs[0].entry_control) != sizeof(entry_controls)); \
BUILD_BUG_ON(sizeof(pairs[0].exit_control) != sizeof(exit_controls)); \
\
for (i = 0; i < ARRAY_SIZE(pairs); i++) { \
typeof(entry_controls) n_ctrl = pairs[i].entry_control; \
typeof(exit_controls) x_ctrl = pairs[i].exit_control; \
\
if (!(entry_controls & n_ctrl) == !(exit_controls & x_ctrl)) \
continue; \
\
pr_warn_once("Inconsistent VM-Entry/VM-Exit pair, " \
"entry = %llx (%llx), exit = %llx (%llx)\n", \
(u64)(entry_controls & n_ctrl), (u64)n_ctrl, \
(u64)(exit_controls & x_ctrl), (u64)x_ctrl); \
\
if (error_on_inconsistent_vmcs_config) \
r = -EIO; \
\
entry_controls &= ~n_ctrl; \
exit_controls &= ~x_ctrl; \
} \
r; \
})
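The macro above enforces a simple invariant: for each VM-Entry/VM-Exit control pair, either both bits are set or both are clear; a lone bit is warned about and stripped (or treated as a setup error when error_on_inconsistent_vmcs_config is set). A minimal standalone model of that rule (not kernel code; the bit positions and the pair table are invented purely for illustration, and the error path is omitted):

#include <stdint.h>
#include <stdio.h>

struct ctrl_pair {
	uint32_t entry_control;
	uint32_t exit_control;
};

static const struct ctrl_pair pairs[] = {
	{ .entry_control = 1u << 2,  .exit_control = 1u << 18 },
	{ .entry_control = 1u << 13, .exit_control = 1u << 12 },
};

static int check_pairs(uint32_t *entry_ctls, uint32_t *exit_ctls)
{
	int mismatched = 0;

	for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
		uint32_t n = pairs[i].entry_control;
		uint32_t x = pairs[i].exit_control;

		/* Consistent: both bits set, or both bits clear. */
		if (!(*entry_ctls & n) == !(*exit_ctls & x))
			continue;

		printf("inconsistent pair: entry=%#x exit=%#x\n",
		       (unsigned)(*entry_ctls & n), (unsigned)(*exit_ctls & x));
		*entry_ctls &= ~n;
		*exit_ctls &= ~x;
		mismatched++;
	}
	return mismatched;
}

int main(void)
{
	uint32_t entry = 1u << 2;	/* entry side of pair 0 is set ... */
	uint32_t exit_c = 0;		/* ... but the exit side is clear  */

	printf("%d mismatched pair(s)\n", check_pairs(&entry, &exit_c));
	printf("entry=%#x exit=%#x after stripping\n",
	       (unsigned)entry, (unsigned)exit_c);
	return 0;
}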
static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
struct vmx_capability *vmx_cap)
{
@ -2590,7 +2618,6 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
u32 _vmentry_control = 0;
u64 basic_msr;
u64 misc_msr;
int i;
/*
* LOAD/SAVE_DEBUG_CONTROLS are absent because both are mandatory.
@ -2694,22 +2721,9 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
&_vmentry_control))
return -EIO;
for (i = 0; i < ARRAY_SIZE(vmcs_entry_exit_pairs); i++) {
u32 n_ctrl = vmcs_entry_exit_pairs[i].entry_control;
u32 x_ctrl = vmcs_entry_exit_pairs[i].exit_control;
if (!(_vmentry_control & n_ctrl) == !(_vmexit_control & x_ctrl))
continue;
pr_warn_once("Inconsistent VM-Entry/VM-Exit pair, entry = %x, exit = %x\n",
_vmentry_control & n_ctrl, _vmexit_control & x_ctrl);
if (error_on_inconsistent_vmcs_config)
return -EIO;
_vmentry_control &= ~n_ctrl;
_vmexit_control &= ~x_ctrl;
}
if (vmx_check_entry_exit_pairs(vmcs_entry_exit_pairs,
_vmentry_control, _vmexit_control))
return -EIO;
/*
* Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they
@ -3520,7 +3534,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
vmcs_writel(GUEST_CR4, hw_cr4);
if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
}
void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
@ -5212,6 +5226,12 @@ bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu)
(kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
}
static bool is_xfd_nm_fault(struct kvm_vcpu *vcpu)
{
return vcpu->arch.guest_fpu.fpstate->xfd &&
!kvm_is_cr0_bit_set(vcpu, X86_CR0_TS);
}
static int handle_exception_nmi(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@ -5238,7 +5258,8 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
* point.
*/
if (is_nm_fault(intr_info)) {
kvm_queue_exception(vcpu, NM_VECTOR);
kvm_queue_exception_p(vcpu, NM_VECTOR,
is_xfd_nm_fault(vcpu) ? vcpu->arch.guest_fpu.xfd_err : 0);
return 1;
}
@ -5818,7 +5839,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
/* ept page table entry is present? */
error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
error_code |= (exit_qualification & EPT_VIOLATION_PROT_MASK)
? PFERR_PRESENT_MASK : 0;
if (error_code & EPT_VIOLATION_GVA_IS_VALID)
@ -5872,11 +5893,35 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu)
return 1;
}
static bool vmx_emulation_required_with_pending_exception(struct kvm_vcpu *vcpu)
/*
* Returns true if emulation is required (due to the vCPU having invalid state
* with unrestricted guest mode disabled) and KVM can't faithfully emulate the
* current vCPU state.
*/
static bool vmx_unhandleable_emulation_required(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
return vmx->emulation_required && !vmx->rmode.vm86_active &&
if (!vmx->emulation_required)
return false;
/*
* It is architecturally impossible for emulation to be required when a
* nested VM-Enter is pending completion, as VM-Enter will VM-Fail if
* guest state is invalid and unrestricted guest is disabled, i.e. KVM
* should synthesize VM-Fail instead of emulating L2 code. This path is
* only reachable if userspace modifies L2 guest state after KVM has
* performed the nested VM-Enter consistency checks.
*/
if (vmx->nested.nested_run_pending)
return true;
/*
* KVM only supports emulating exceptions if the vCPU is in Real Mode.
* If emulation is required, KVM can't perform a successful VM-Enter to
* inject the exception.
*/
return !vmx->rmode.vm86_active &&
(kvm_is_exception_pending(vcpu) || vcpu->arch.exception.injected);
}
@ -5899,7 +5944,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
if (!kvm_emulate_instruction(vcpu, 0))
return 0;
if (vmx_emulation_required_with_pending_exception(vcpu)) {
if (vmx_unhandleable_emulation_required(vcpu)) {
kvm_prepare_emulation_failure_exit(vcpu);
return 0;
}
@ -5923,7 +5968,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
{
if (vmx_emulation_required_with_pending_exception(vcpu)) {
if (vmx_unhandleable_emulation_required(vcpu)) {
kvm_prepare_emulation_failure_exit(vcpu);
return 0;
}
@ -6998,16 +7043,15 @@ static void handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
* MSR value is not clobbered by the host activity before the guest
* has chance to consume it.
*
* Do not blindly read xfd_err here, since this exception might
* be caused by L1 interception on a platform which doesn't
* support xfd at all.
* Update the guest's XFD_ERR if and only if XFD is enabled, as the #NM
* interception may have been caused by L1 interception. Per the SDM,
* XFD_ERR is not modified for non-XFD #NM, i.e. if CR0.TS=1.
*
* Do it conditionally upon guest_fpu::xfd. xfd_err matters
* only when xfd contains a non-zero value.
*
* Queuing exception is done in vmx_handle_exit. See comment there.
* Note, XFD_ERR is updated _before_ the #NM interception check, i.e.
* unlike CR2 and DR6, the value is not a payload that is attached to
* the #NM exception.
*/
if (vcpu->arch.guest_fpu.fpstate->xfd)
if (is_xfd_nm_fault(vcpu))
rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
}
@ -7158,13 +7202,17 @@ static void __vmx_complete_interrupts(struct kvm_vcpu *vcpu,
case INTR_TYPE_SOFT_EXCEPTION:
vcpu->arch.event_exit_inst_len = vmcs_read32(instr_len_field);
fallthrough;
case INTR_TYPE_HARD_EXCEPTION:
if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK) {
u32 err = vmcs_read32(error_code_field);
kvm_requeue_exception_e(vcpu, vector, err);
} else
kvm_requeue_exception(vcpu, vector);
case INTR_TYPE_HARD_EXCEPTION: {
u32 error_code = 0;
if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK)
error_code = vmcs_read32(error_code_field);
kvm_requeue_exception(vcpu, vector,
idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK,
error_code);
break;
}
case INTR_TYPE_SOFT_INTR:
vcpu->arch.event_exit_inst_len = vmcs_read32(instr_len_field);
fallthrough;
@ -8006,22 +8054,14 @@ static __init void vmx_set_cpu_caps(void)
kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
}
static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
struct x86_instruction_info *info)
static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,
struct x86_instruction_info *info,
unsigned long *exit_qualification)
{
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
unsigned short port;
bool intercept;
int size;
if (info->intercept == x86_intercept_in ||
info->intercept == x86_intercept_ins) {
port = info->src_val;
size = info->dst_bytes;
} else {
port = info->dst_val;
size = info->src_bytes;
}
bool imm;
/*
* If the 'use IO bitmaps' VM-execution control is 0, IO instruction
@ -8031,13 +8071,33 @@ static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
* Otherwise, IO instruction VM-exits are controlled by the IO bitmaps.
*/
if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
intercept = nested_cpu_has(vmcs12,
CPU_BASED_UNCOND_IO_EXITING);
else
intercept = nested_vmx_check_io_bitmaps(vcpu, port, size);
return nested_cpu_has(vmcs12, CPU_BASED_UNCOND_IO_EXITING);
/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED. */
return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
if (info->intercept == x86_intercept_in ||
info->intercept == x86_intercept_ins) {
port = info->src_val;
size = info->dst_bytes;
imm = info->src_type == OP_IMM;
} else {
port = info->dst_val;
size = info->src_bytes;
imm = info->dst_type == OP_IMM;
}
*exit_qualification = ((unsigned long)port << 16) | (size - 1);
if (info->intercept == x86_intercept_ins ||
info->intercept == x86_intercept_outs)
*exit_qualification |= BIT(4);
if (info->rep_prefix)
*exit_qualification |= BIT(5);
if (imm)
*exit_qualification |= BIT(6);
return nested_vmx_check_io_bitmaps(vcpu, port, size);
}
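The new helper builds the exit qualification for a synthesized I/O VM-exit by hand: the access size minus one goes in the low bits, bit 4 marks a string instruction, bit 5 a REP prefix, bit 6 an immediate port operand, and the port number lives in bits 31:16. A standalone sketch of just that encoding (not kernel code; only the fields set in this hunk are modelled, other exit-qualification bits are ignored):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IO_EQ_STRING	(1ul << 4)	/* INS/OUTS (string instruction) */
#define IO_EQ_REP	(1ul << 5)	/* REP prefix present */
#define IO_EQ_IMM	(1ul << 6)	/* port given as an immediate operand */

static unsigned long io_exit_qualification(uint16_t port, int size,
					   bool string, bool rep, bool imm)
{
	unsigned long eq = ((unsigned long)port << 16) | (size - 1);

	if (string)
		eq |= IO_EQ_STRING;
	if (rep)
		eq |= IO_EQ_REP;
	if (imm)
		eq |= IO_EQ_IMM;
	return eq;
}

int main(void)
{
	/* e.g. "rep outsb" to port 0x3f8: 1-byte, string, REP, port in DX */
	printf("%#lx\n", io_exit_qualification(0x3f8, 1, true, true, false));
	return 0;
}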
int vmx_check_intercept(struct kvm_vcpu *vcpu,
@ -8046,26 +8106,34 @@ int vmx_check_intercept(struct kvm_vcpu *vcpu,
struct x86_exception *exception)
{
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
unsigned long exit_qualification = 0;
u32 vm_exit_reason;
u64 exit_insn_len;
switch (info->intercept) {
/*
* RDPID causes #UD if disabled through secondary execution controls.
* Because it is marked as EmulateOnUD, we need to intercept it here.
* Note, RDPID is hidden behind ENABLE_RDTSCP.
*/
case x86_intercept_rdpid:
/*
* RDPID causes #UD if not enabled through secondary execution
* controls (ENABLE_RDTSCP). Note, the implicit MSR access to
* TSC_AUX is NOT subject to interception, i.e. checking only
* the dedicated execution control is architecturally correct.
*/
if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) {
exception->vector = UD_VECTOR;
exception->error_code_valid = false;
return X86EMUL_PROPAGATE_FAULT;
}
break;
return X86EMUL_CONTINUE;
case x86_intercept_in:
case x86_intercept_ins:
case x86_intercept_out:
case x86_intercept_outs:
return vmx_check_intercept_io(vcpu, info);
if (!vmx_is_io_intercepted(vcpu, info, &exit_qualification))
return X86EMUL_CONTINUE;
vm_exit_reason = EXIT_REASON_IO_INSTRUCTION;
break;
case x86_intercept_lgdt:
case x86_intercept_lidt:
@ -8078,7 +8146,24 @@ int vmx_check_intercept(struct kvm_vcpu *vcpu,
if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_DESC))
return X86EMUL_CONTINUE;
/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED. */
if (info->intercept == x86_intercept_lldt ||
info->intercept == x86_intercept_ltr ||
info->intercept == x86_intercept_sldt ||
info->intercept == x86_intercept_str)
vm_exit_reason = EXIT_REASON_LDTR_TR;
else
vm_exit_reason = EXIT_REASON_GDTR_IDTR;
/*
* FIXME: Decode the ModR/M to generate the correct exit
* qualification for memory operands.
*/
break;
case x86_intercept_hlt:
if (!nested_cpu_has(vmcs12, CPU_BASED_HLT_EXITING))
return X86EMUL_CONTINUE;
vm_exit_reason = EXIT_REASON_HLT;
break;
case x86_intercept_pause:
@ -8091,17 +8176,24 @@ int vmx_check_intercept(struct kvm_vcpu *vcpu,
* the PAUSE.
*/
if ((info->rep_prefix != REPE_PREFIX) ||
!nested_cpu_has2(vmcs12, CPU_BASED_PAUSE_EXITING))
!nested_cpu_has(vmcs12, CPU_BASED_PAUSE_EXITING))
return X86EMUL_CONTINUE;
vm_exit_reason = EXIT_REASON_PAUSE_INSTRUCTION;
break;
/* TODO: check more intercepts... */
default:
break;
return X86EMUL_UNHANDLEABLE;
}
return X86EMUL_UNHANDLEABLE;
exit_insn_len = abs_diff((s64)info->next_rip, (s64)info->rip);
if (!exit_insn_len || exit_insn_len > X86_MAX_INSTRUCTION_LENGTH)
return X86EMUL_UNHANDLEABLE;
__nested_vmx_vmexit(vcpu, vm_exit_reason, 0, exit_qualification,
exit_insn_len);
return X86EMUL_INTERCEPTED;
}
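Before synthesizing the nested VM-exit, the instruction length is derived from the emulator's RIP and next RIP; a length of zero or more than 15 bytes (X86_MAX_INSTRUCTION_LENGTH) means the emulator state can't be trusted, so the intercept is punted as unhandleable. A tiny standalone sketch of that check (not kernel code):

#include <stdint.h>
#include <stdio.h>

#define X86_MAX_INSN_LEN 15

static int exit_insn_len(uint64_t rip, uint64_t next_rip)
{
	/* abs_diff() equivalent: distance between the two RIPs. */
	uint64_t len = next_rip > rip ? next_rip - rip : rip - next_rip;

	if (!len || len > X86_MAX_INSN_LEN)
		return -1;	/* unhandleable, punt to the caller */
	return (int)len;
}

int main(void)
{
	printf("%d\n", exit_insn_len(0x1000, 0x1002));	/* 2-byte instruction */
	printf("%d\n", exit_insn_len(0x1000, 0x1000));	/* invalid: zero length */
	return 0;
}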
#ifdef CONFIG_X86_64


@ -800,9 +800,9 @@ static void kvm_queue_exception_vmexit(struct kvm_vcpu *vcpu, unsigned int vecto
ex->payload = payload;
}
static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
unsigned nr, bool has_error, u32 error_code,
bool has_payload, unsigned long payload, bool reinject)
static void kvm_multiple_exception(struct kvm_vcpu *vcpu, unsigned int nr,
bool has_error, u32 error_code,
bool has_payload, unsigned long payload)
{
u32 prev_nr;
int class1, class2;
@ -810,13 +810,10 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
kvm_make_request(KVM_REQ_EVENT, vcpu);
/*
* If the exception is destined for L2 and isn't being reinjected,
* morph it to a VM-Exit if L1 wants to intercept the exception. A
* previously injected exception is not checked because it was checked
* when it was original queued, and re-checking is incorrect if _L1_
* injected the exception, in which case it's exempt from interception.
* If the exception is destined for L2, morph it to a VM-Exit if L1
* wants to intercept the exception.
*/
if (!reinject && is_guest_mode(vcpu) &&
if (is_guest_mode(vcpu) &&
kvm_x86_ops.nested_ops->is_exception_vmexit(vcpu, nr, error_code)) {
kvm_queue_exception_vmexit(vcpu, nr, has_error, error_code,
has_payload, payload);
@ -825,28 +822,9 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
if (!vcpu->arch.exception.pending && !vcpu->arch.exception.injected) {
queue:
if (reinject) {
/*
* On VM-Entry, an exception can be pending if and only
* if event injection was blocked by nested_run_pending.
* In that case, however, vcpu_enter_guest() requests an
* immediate exit, and the guest shouldn't proceed far
* enough to need reinjection.
*/
WARN_ON_ONCE(kvm_is_exception_pending(vcpu));
vcpu->arch.exception.injected = true;
if (WARN_ON_ONCE(has_payload)) {
/*
* A reinjected event has already
* delivered its payload.
*/
has_payload = false;
payload = 0;
}
} else {
vcpu->arch.exception.pending = true;
vcpu->arch.exception.injected = false;
}
vcpu->arch.exception.pending = true;
vcpu->arch.exception.injected = false;
vcpu->arch.exception.has_error_code = has_error;
vcpu->arch.exception.vector = nr;
vcpu->arch.exception.error_code = error_code;
@ -887,30 +865,53 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr)
{
kvm_multiple_exception(vcpu, nr, false, 0, false, 0, false);
kvm_multiple_exception(vcpu, nr, false, 0, false, 0);
}
EXPORT_SYMBOL_GPL(kvm_queue_exception);
void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr)
{
kvm_multiple_exception(vcpu, nr, false, 0, false, 0, true);
}
EXPORT_SYMBOL_GPL(kvm_requeue_exception);
void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
unsigned long payload)
{
kvm_multiple_exception(vcpu, nr, false, 0, true, payload, false);
kvm_multiple_exception(vcpu, nr, false, 0, true, payload);
}
EXPORT_SYMBOL_GPL(kvm_queue_exception_p);
static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
u32 error_code, unsigned long payload)
{
kvm_multiple_exception(vcpu, nr, true, error_code,
true, payload, false);
kvm_multiple_exception(vcpu, nr, true, error_code, true, payload);
}
void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned int nr,
bool has_error_code, u32 error_code)
{
/*
* On VM-Entry, an exception can be pending if and only if event
* injection was blocked by nested_run_pending. In that case, however,
* vcpu_enter_guest() requests an immediate exit, and the guest
* shouldn't proceed far enough to need reinjection.
*/
WARN_ON_ONCE(kvm_is_exception_pending(vcpu));
/*
* Do not check for interception when injecting an event for L2, as the
* exception was checked for intercept when it was originally queued, and
* re-checking is incorrect if _L1_ injected the exception, in which
* case it's exempt from interception.
*/
kvm_make_request(KVM_REQ_EVENT, vcpu);
vcpu->arch.exception.injected = true;
vcpu->arch.exception.has_error_code = has_error_code;
vcpu->arch.exception.vector = nr;
vcpu->arch.exception.error_code = error_code;
vcpu->arch.exception.has_payload = false;
vcpu->arch.exception.payload = 0;
}
EXPORT_SYMBOL_GPL(kvm_requeue_exception);
int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)
{
if (err)
@ -980,16 +981,10 @@ void kvm_inject_nmi(struct kvm_vcpu *vcpu)
void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
{
kvm_multiple_exception(vcpu, nr, true, error_code, false, 0, false);
kvm_multiple_exception(vcpu, nr, true, error_code, false, 0);
}
EXPORT_SYMBOL_GPL(kvm_queue_exception_e);
void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
{
kvm_multiple_exception(vcpu, nr, true, error_code, false, 0, true);
}
EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
/*
* Checks if cpl <= required_cpl; if true, return true. Otherwise queue
* a #GP and return false.
@ -1264,7 +1259,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
vcpu->arch.xcr0 = xcr0;
if ((xcr0 ^ old_xcr0) & XFEATURE_MASK_EXTEND)
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
return 0;
}
@ -2080,10 +2075,20 @@ EXPORT_SYMBOL_GPL(kvm_handle_invalid_op);
static int kvm_emulate_monitor_mwait(struct kvm_vcpu *vcpu, const char *insn)
{
if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS) &&
!guest_cpu_cap_has(vcpu, X86_FEATURE_MWAIT))
bool enabled;
if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS))
goto emulate_as_nop;
if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT))
enabled = guest_cpu_cap_has(vcpu, X86_FEATURE_MWAIT);
else
enabled = vcpu->arch.ia32_misc_enable_msr & MSR_IA32_MISC_ENABLE_MWAIT;
if (!enabled)
return kvm_handle_invalid_op(vcpu);
emulate_as_nop:
pr_warn_once("%s instruction emulated as NOP!\n", insn);
return kvm_emulate_as_nop(vcpu);
}
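The rewritten MONITOR/MWAIT handler now picks between #UD and NOP emulation based on two quirks: with KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS active the instruction is always treated as a NOP, otherwise "MWAIT enabled" comes from guest CPUID when KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is active, and from the MISC_ENABLE MSR bit when it is not. A standalone sketch of that decision tree (not kernel code; the parameter names are stand-ins for the KVM quirk and capability checks):

#include <stdbool.h>
#include <stdio.h>

enum mwait_action { ACTION_UD, ACTION_NOP };

static enum mwait_action monitor_mwait_action(bool quirk_never_ud_faults,
					      bool quirk_misc_enable_no_mwait,
					      bool cpuid_has_mwait,
					      bool misc_enable_mwait_bit)
{
	bool enabled;

	/* With the "never #UD" quirk active, always emulate as a NOP. */
	if (quirk_never_ud_faults)
		return ACTION_NOP;

	/*
	 * Otherwise MWAIT is considered enabled based on guest CPUID when
	 * the MISC_ENABLE quirk is active, else based on the MSR bit itself.
	 */
	if (quirk_misc_enable_no_mwait)
		enabled = cpuid_has_mwait;
	else
		enabled = misc_enable_mwait_bit;

	return enabled ? ACTION_NOP : ACTION_UD;
}

int main(void)
{
	/* CPUID says no MWAIT and the MISC_ENABLE quirk is active: #UD. */
	printf("%s\n", monitor_mwait_action(false, true, false, true) ==
		       ACTION_UD ? "#UD" : "NOP");
	return 0;
}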
@ -2569,6 +2574,9 @@ EXPORT_SYMBOL_GPL(kvm_calc_nested_tsc_multiplier);
static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
{
if (vcpu->arch.guest_tsc_protected)
return;
trace_kvm_write_tsc_offset(vcpu->vcpu_id,
vcpu->arch.l1_tsc_offset,
l1_offset);
@ -2626,12 +2634,18 @@ static inline bool kvm_check_tsc_unstable(void)
* participates in.
*/
static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
u64 ns, bool matched)
u64 ns, bool matched, bool user_set_tsc)
{
struct kvm *kvm = vcpu->kvm;
lockdep_assert_held(&kvm->arch.tsc_write_lock);
if (vcpu->arch.guest_tsc_protected)
return;
if (user_set_tsc)
vcpu->kvm->arch.user_set_tsc = true;
/*
* We also track the most recent recorded KHZ, write and time to
* allow the matching interval to be extended at each write.
@ -2717,8 +2731,6 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 *user_value)
}
}
if (user_value)
kvm->arch.user_set_tsc = true;
/*
* For a reliable TSC, we can match TSC offsets, and for an unstable
@ -2738,7 +2750,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 *user_value)
matched = true;
}
__kvm_synchronize_tsc(vcpu, offset, data, ns, matched);
__kvm_synchronize_tsc(vcpu, offset, data, ns, matched, !!user_value);
raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
}
@ -3116,15 +3128,17 @@ u64 get_kvmclock_ns(struct kvm *kvm)
return data.clock;
}
static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
static void kvm_setup_guest_pvclock(struct pvclock_vcpu_time_info *ref_hv_clock,
struct kvm_vcpu *vcpu,
struct gfn_to_pfn_cache *gpc,
unsigned int offset,
bool force_tsc_unstable)
unsigned int offset)
{
struct kvm_vcpu_arch *vcpu = &v->arch;
struct pvclock_vcpu_time_info *guest_hv_clock;
struct pvclock_vcpu_time_info hv_clock;
unsigned long flags;
memcpy(&hv_clock, ref_hv_clock, sizeof(hv_clock));
read_lock_irqsave(&gpc->lock, flags);
while (!kvm_gpc_check(gpc, offset + sizeof(*guest_hv_clock))) {
read_unlock_irqrestore(&gpc->lock, flags);
@ -3144,52 +3158,34 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
* it is consistent.
*/
guest_hv_clock->version = vcpu->hv_clock.version = (guest_hv_clock->version + 1) | 1;
guest_hv_clock->version = hv_clock.version = (guest_hv_clock->version + 1) | 1;
smp_wmb();
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
vcpu->hv_clock.flags |= (guest_hv_clock->flags & PVCLOCK_GUEST_STOPPED);
hv_clock.flags |= (guest_hv_clock->flags & PVCLOCK_GUEST_STOPPED);
if (vcpu->pvclock_set_guest_stopped_request) {
vcpu->hv_clock.flags |= PVCLOCK_GUEST_STOPPED;
vcpu->pvclock_set_guest_stopped_request = false;
}
memcpy(guest_hv_clock, &vcpu->hv_clock, sizeof(*guest_hv_clock));
if (force_tsc_unstable)
guest_hv_clock->flags &= ~PVCLOCK_TSC_STABLE_BIT;
memcpy(guest_hv_clock, &hv_clock, sizeof(*guest_hv_clock));
smp_wmb();
guest_hv_clock->version = ++vcpu->hv_clock.version;
guest_hv_clock->version = ++hv_clock.version;
kvm_gpc_mark_dirty_in_slot(gpc);
read_unlock_irqrestore(&gpc->lock, flags);
trace_kvm_pvclock_update(v->vcpu_id, &vcpu->hv_clock);
trace_kvm_pvclock_update(vcpu->vcpu_id, &hv_clock);
}
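kvm_setup_guest_pvclock() follows the pvclock update protocol: bump the guest-visible version to an odd value so readers know an update is in flight, copy the new time info into the shared page, then bump the version to the next even value. A reader retries whenever it sees an odd version or a version change across its copy. A standalone, single-threaded sketch of that handshake (not kernel code; the payload is trimmed to two fields and the real smp_wmb() barriers are only marked as comments):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct time_info {
	uint32_t version;
	uint64_t tsc_timestamp;
	uint64_t system_time;
};

static void publish(struct time_info *guest, const struct time_info *src)
{
	struct time_info update = *src;

	/* Odd version: an update is in flight, readers must retry. */
	guest->version = update.version = (guest->version + 1) | 1;
	/* smp_wmb() in the real code */
	memcpy(guest, &update, sizeof(*guest));
	/* smp_wmb() in the real code */
	guest->version = ++update.version;	/* back to even: update done */
}

static struct time_info snapshot(const struct time_info *guest)
{
	struct time_info copy;
	uint32_t v;

	do {
		v = guest->version;
		copy = *guest;
	} while ((v & 1) || v != guest->version);

	return copy;
}

int main(void)
{
	struct time_info guest = { 0 };
	struct time_info next = { .tsc_timestamp = 7, .system_time = 42 };
	struct time_info copy;

	publish(&guest, &next);
	copy = snapshot(&guest);
	printf("version=%u system_time=%llu\n", (unsigned)copy.version,
	       (unsigned long long)copy.system_time);
	return 0;
}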
static int kvm_guest_time_update(struct kvm_vcpu *v)
int kvm_guest_time_update(struct kvm_vcpu *v)
{
struct pvclock_vcpu_time_info hv_clock = {};
unsigned long flags, tgt_tsc_khz;
unsigned seq;
struct kvm_vcpu_arch *vcpu = &v->arch;
struct kvm_arch *ka = &v->kvm->arch;
s64 kernel_ns;
u64 tsc_timestamp, host_tsc;
u8 pvclock_flags;
bool use_master_clock;
#ifdef CONFIG_KVM_XEN
/*
* For Xen guests we may need to override PVCLOCK_TSC_STABLE_BIT as unless
* explicitly told to use TSC as its clocksource Xen will not set this bit.
* This default behaviour led to bugs in some guest kernels which cause
* problems if they observe PVCLOCK_TSC_STABLE_BIT in the pvclock flags.
*/
bool xen_pvclock_tsc_unstable =
ka->xen_hvm_config.flags & KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE;
#endif
kernel_ns = 0;
host_tsc = 0;
@ -3250,35 +3246,57 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) {
kvm_get_time_scale(NSEC_PER_SEC, tgt_tsc_khz * 1000LL,
&vcpu->hv_clock.tsc_shift,
&vcpu->hv_clock.tsc_to_system_mul);
&vcpu->pvclock_tsc_shift,
&vcpu->pvclock_tsc_mul);
vcpu->hw_tsc_khz = tgt_tsc_khz;
kvm_xen_update_tsc_info(v);
}
vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
vcpu->hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset;
hv_clock.tsc_shift = vcpu->pvclock_tsc_shift;
hv_clock.tsc_to_system_mul = vcpu->pvclock_tsc_mul;
hv_clock.tsc_timestamp = tsc_timestamp;
hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset;
vcpu->last_guest_tsc = tsc_timestamp;
/* If the host uses TSC clocksource, then it is stable */
pvclock_flags = 0;
hv_clock.flags = 0;
if (use_master_clock)
pvclock_flags |= PVCLOCK_TSC_STABLE_BIT;
hv_clock.flags |= PVCLOCK_TSC_STABLE_BIT;
vcpu->hv_clock.flags = pvclock_flags;
if (vcpu->pv_time.active) {
/*
* GUEST_STOPPED is only supported by kvmclock, and KVM's
* historic behavior is to only process the request if kvmclock
* is active/enabled.
*/
if (vcpu->pvclock_set_guest_stopped_request) {
hv_clock.flags |= PVCLOCK_GUEST_STOPPED;
vcpu->pvclock_set_guest_stopped_request = false;
}
kvm_setup_guest_pvclock(&hv_clock, v, &vcpu->pv_time, 0);
hv_clock.flags &= ~PVCLOCK_GUEST_STOPPED;
}
kvm_hv_setup_tsc_page(v->kvm, &hv_clock);
if (vcpu->pv_time.active)
kvm_setup_guest_pvclock(v, &vcpu->pv_time, 0, false);
#ifdef CONFIG_KVM_XEN
/*
* For Xen guests we may need to override PVCLOCK_TSC_STABLE_BIT as unless
* explicitly told to use TSC as its clocksource Xen will not set this bit.
* This default behaviour led to bugs in some guest kernels which cause
* problems if they observe PVCLOCK_TSC_STABLE_BIT in the pvclock flags.
*
* Note! Clear TSC_STABLE only for Xen clocks, i.e. the order matters!
*/
if (ka->xen.hvm_config.flags & KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE)
hv_clock.flags &= ~PVCLOCK_TSC_STABLE_BIT;
if (vcpu->xen.vcpu_info_cache.active)
kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_info_cache,
offsetof(struct compat_vcpu_info, time),
xen_pvclock_tsc_unstable);
kvm_setup_guest_pvclock(&hv_clock, v, &vcpu->xen.vcpu_info_cache,
offsetof(struct compat_vcpu_info, time));
if (vcpu->xen.vcpu_time_info_cache.active)
kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_time_info_cache, 0,
xen_pvclock_tsc_unstable);
kvm_setup_guest_pvclock(&hv_clock, v, &vcpu->xen.vcpu_time_info_cache, 0);
#endif
kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
return 0;
}
@ -3544,7 +3562,7 @@ static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
sizeof(u64)))
return 1;
vcpu->arch.apf.send_user_only = !(data & KVM_ASYNC_PF_SEND_ALWAYS);
vcpu->arch.apf.send_always = (data & KVM_ASYNC_PF_SEND_ALWAYS);
vcpu->arch.apf.delivery_as_pf_vmexit = data & KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT;
kvm_async_pf_wakeup_all(vcpu);
@ -3733,7 +3751,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
u32 msr = msr_info->index;
u64 data = msr_info->data;
if (msr && msr == vcpu->kvm->arch.xen_hvm_config.msr)
/*
* Do not allow host-initiated writes to trigger the Xen hypercall
* page setup; it could incur locking paths which are not expected
* if userspace sets the MSR in an unusual location.
*/
if (kvm_xen_is_hypercall_page_msr(vcpu->kvm, msr) &&
!msr_info->host_initiated)
return kvm_xen_write_hypercall_page(vcpu, data);
switch (msr) {
@ -3889,7 +3913,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (!guest_cpu_cap_has(vcpu, X86_FEATURE_XMM3))
return 1;
vcpu->arch.ia32_misc_enable_msr = data;
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
} else {
vcpu->arch.ia32_misc_enable_msr = data;
}
@ -3906,7 +3930,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
case MSR_IA32_TSC:
if (msr_info->host_initiated) {
kvm_synchronize_tsc(vcpu, &data);
} else {
} else if (!vcpu->arch.guest_tsc_protected) {
u64 adj = kvm_compute_l1_tsc_offset(vcpu, data) - vcpu->arch.l1_tsc_offset;
adjust_tsc_offset_guest(vcpu, adj);
vcpu->arch.ia32_tsc_adjust_msr += adj;
@ -3924,7 +3948,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (data & ~kvm_caps.supported_xss)
return 1;
vcpu->arch.ia32_xss = data;
kvm_update_cpuid_runtime(vcpu);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
break;
case MSR_SMI_COUNT:
if (!msr_info->host_initiated)
@ -4573,6 +4597,11 @@ static bool kvm_is_vm_type_supported(unsigned long type)
return type < 32 && (kvm_caps.supported_vm_types & BIT(type));
}
static inline u32 kvm_sync_valid_fields(struct kvm *kvm)
{
return kvm && kvm->arch.has_protected_state ? 0 : KVM_SYNC_X86_VALID_FIELDS;
}
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
int r = 0;
@ -4681,7 +4710,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
break;
#endif
case KVM_CAP_SYNC_REGS:
r = KVM_SYNC_X86_VALID_FIELDS;
r = kvm_sync_valid_fields(kvm);
break;
case KVM_CAP_ADJUST_CLOCK:
r = KVM_CLOCK_VALID_FLAGS;
@ -4986,7 +5015,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
u64 offset = kvm_compute_l1_tsc_offset(vcpu,
vcpu->arch.last_guest_tsc);
kvm_vcpu_write_tsc_offset(vcpu, offset);
vcpu->arch.tsc_catchup = 1;
if (!vcpu->arch.guest_tsc_protected)
vcpu->arch.tsc_catchup = 1;
}
if (kvm_lapic_hv_timer_in_use(vcpu))
@ -5725,8 +5755,7 @@ static int kvm_arch_tsc_set_attr(struct kvm_vcpu *vcpu,
tsc = kvm_scale_tsc(rdtsc(), vcpu->arch.l1_tsc_scaling_ratio) + offset;
ns = get_kvmclock_base_ns();
kvm->arch.user_set_tsc = true;
__kvm_synchronize_tsc(vcpu, offset, tsc, ns, matched);
__kvm_synchronize_tsc(vcpu, offset, tsc, ns, matched, true);
raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
r = 0;
@ -6905,23 +6934,15 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
unsigned long i;
int ret = 0;
mutex_lock(&kvm->lock);
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!vcpu->arch.pv_time.active)
continue;
/*
* Ignore the return, marking the guest paused only "fails" if the vCPU
* isn't using kvmclock; continuing on is correct and desirable.
*/
kvm_for_each_vcpu(i, vcpu, kvm)
(void)kvm_set_guest_paused(vcpu);
ret = kvm_set_guest_paused(vcpu);
if (ret) {
kvm_err("Failed to pause guest VCPU%d: %d\n",
vcpu->vcpu_id, ret);
break;
}
}
mutex_unlock(&kvm->lock);
return ret ? NOTIFY_BAD : NOTIFY_DONE;
return NOTIFY_DONE;
}
int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state)
@ -11220,9 +11241,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
switch(vcpu->arch.mp_state) {
case KVM_MP_STATE_HALTED:
case KVM_MP_STATE_AP_RESET_HOLD:
vcpu->arch.pv.pv_unhalted = false;
vcpu->arch.mp_state =
KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
fallthrough;
case KVM_MP_STATE_RUNNABLE:
vcpu->arch.apf.halted = false;
@ -11299,9 +11318,8 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
++vcpu->stat.halt_exits;
if (lapic_in_kernel(vcpu)) {
if (kvm_vcpu_has_events(vcpu))
vcpu->arch.pv.pv_unhalted = false;
else
vcpu->arch.mp_state = state;
state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, state);
return 1;
} else {
vcpu->run->exit_reason = reason;
@ -11474,6 +11492,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
{
struct kvm_queued_exception *ex = &vcpu->arch.exception;
struct kvm_run *kvm_run = vcpu->run;
u32 sync_valid_fields;
int r;
r = kvm_mmu_post_init_vm(vcpu->kvm);
@ -11519,8 +11538,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
goto out;
}
if ((kvm_run->kvm_valid_regs & ~KVM_SYNC_X86_VALID_FIELDS) ||
(kvm_run->kvm_dirty_regs & ~KVM_SYNC_X86_VALID_FIELDS)) {
sync_valid_fields = kvm_sync_valid_fields(vcpu->kvm);
if ((kvm_run->kvm_valid_regs & ~sync_valid_fields) ||
(kvm_run->kvm_dirty_regs & ~sync_valid_fields)) {
r = -EINVAL;
goto out;
}
@ -11578,7 +11598,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
out:
kvm_put_guest_fpu(vcpu);
if (kvm_run->kvm_valid_regs)
if (kvm_run->kvm_valid_regs && likely(!vcpu->arch.guest_state_protected))
store_regs(vcpu);
post_kvm_run_save(vcpu);
kvm_vcpu_srcu_read_unlock(vcpu);
@ -11821,10 +11841,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
goto out;
if (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED) {
vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
kvm_set_mp_state(vcpu, KVM_MP_STATE_INIT_RECEIVED);
set_bit(KVM_APIC_SIPI, &vcpu->arch.apic->pending_events);
} else
vcpu->arch.mp_state = mp_state->mp_state;
kvm_set_mp_state(vcpu, mp_state->mp_state);
kvm_make_request(KVM_REQ_EVENT, vcpu);
ret = 0;
@ -11951,7 +11971,7 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
if (kvm_vcpu_is_bsp(vcpu) && kvm_rip_read(vcpu) == 0xfff0 &&
sregs->cs.selector == 0xf000 && sregs->cs.base == 0xffff0000 &&
!is_protmode(vcpu))
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
return 0;
}
@ -12254,9 +12274,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm);
if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
else
vcpu->arch.mp_state = KVM_MP_STATE_UNINITIALIZED;
kvm_set_mp_state(vcpu, KVM_MP_STATE_UNINITIALIZED);
r = kvm_mmu_create(vcpu);
if (r < 0)
@ -12363,6 +12383,9 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
int idx;
kvm_clear_async_pf_completion_queue(vcpu);
kvm_mmu_unload(vcpu);
kvmclock_reset(vcpu);
kvm_x86_call(vcpu_free)(vcpu);
@ -12756,31 +12779,6 @@ out:
return ret;
}
static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
{
vcpu_load(vcpu);
kvm_mmu_unload(vcpu);
vcpu_put(vcpu);
}
static void kvm_unload_vcpu_mmus(struct kvm *kvm)
{
unsigned long i;
struct kvm_vcpu *vcpu;
kvm_for_each_vcpu(i, vcpu, kvm) {
kvm_clear_async_pf_completion_queue(vcpu);
kvm_unload_vcpu_mmu(vcpu);
}
}
void kvm_arch_sync_events(struct kvm *kvm)
{
cancel_delayed_work_sync(&kvm->arch.kvmclock_sync_work);
cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
kvm_free_pit(kvm);
}
/**
* __x86_set_memory_region: Setup KVM internal memory slot
*
@ -12859,6 +12857,17 @@ EXPORT_SYMBOL_GPL(__x86_set_memory_region);
void kvm_arch_pre_destroy_vm(struct kvm *kvm)
{
/*
* Stop all background workers and kthreads before destroying vCPUs, as
* iterating over vCPUs in a different task while vCPUs are being freed
* is unsafe, i.e. will lead to use-after-free. The PIT also needs to
* be stopped before IRQ routing is freed.
*/
cancel_delayed_work_sync(&kvm->arch.kvmclock_sync_work);
cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
kvm_free_pit(kvm);
kvm_mmu_pre_destroy_vm(kvm);
}
@ -12878,9 +12887,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
mutex_unlock(&kvm->slots_lock);
}
kvm_unload_vcpu_mmus(kvm);
kvm_destroy_vcpus(kvm);
kvm_x86_call(vm_destroy)(kvm);
kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
kvm_pic_destroy(kvm);
kvm_ioapic_destroy(kvm);
@ -12890,6 +12897,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_page_track_cleanup(kvm);
kvm_xen_destroy_vm(kvm);
kvm_hv_destroy_vm(kvm);
kvm_x86_call(vm_destroy)(kvm);
}
static void memslot_rmap_free(struct kvm_memory_slot *slot)
@ -13383,8 +13391,8 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
if (!kvm_pv_async_pf_enabled(vcpu))
return false;
if (vcpu->arch.apf.send_user_only &&
kvm_x86_call(get_cpl)(vcpu) == 0)
if (!vcpu->arch.apf.send_always &&
(vcpu->arch.guest_state_protected || !kvm_x86_call(get_cpl)(vcpu)))
return false;
if (is_guest_mode(vcpu)) {
@ -13474,7 +13482,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
}
vcpu->arch.apf.halted = false;
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
}
void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
