github.com/torvalds/linux.git
author    Sean Christopherson <seanjc@google.com>    2021-02-25 23:47:29 +0300
committer Paolo Bonzini <pbonzini@redhat.com>         2021-03-15 11:43:36 +0300
commit    44aaa0150bfd576dc5043094fd1a23699cf280e8 (patch)
tree      42eb2da38e7b76cc1c78310dff96976aa78ed9e8 /arch/x86/kvm/mmu/spte.c
parent    ec89e643867148ab4a2a856a38717d2e89692be7 (diff)
KVM: x86/mmu: Disable MMIO caching if MMIO value collides with L1TF
Disable MMIO caching if the MMIO value collides with the L1TF mitigation
that usurps high PFN bits. In practice this should never happen as only
CPUs with SME support can generate such a collision (because the MMIO
value can theoretically get adjusted into legal memory), and no CPUs
exist that support SME and are susceptible to L1TF. But, closing the
hole is trivial.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu/spte.c')
-rw-r--r--  arch/x86/kvm/mmu/spte.c  13
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index ef55f0bc4ccf..9ea097bcb491 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -245,8 +245,19 @@ u64 mark_spte_for_access_track(u64 spte)
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask)
{
BUG_ON((u64)(unsigned)access_mask != access_mask);
- WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << SHADOW_NONPRESENT_OR_RSVD_MASK_LEN));
WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
+
+ /*
+ * Disable MMIO caching if the MMIO value collides with the bits that
+ * are used to hold the relocated GFN when the L1TF mitigation is
+ * enabled. This should never fire as there is no known hardware that
+ * can trigger this condition, e.g. SME/SEV CPUs that require a custom
+ * MMIO value are not susceptible to L1TF.
+ */
+ if (WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask <<
+ SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)))
+ mmio_value = 0;
+
shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
shadow_mmio_access_mask = access_mask;
}
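For readers who want to poke at the mask arithmetic outside the kernel, below is a minimal, self-contained sketch of the collision check added above. It is not kernel code: the 46-bit cacheable-address assumption, the resulting mask layout, and the example MMIO value are all illustrative choices, not real hardware values. In the kernel, shadow_nonpresent_or_rsvd_mask is derived from boot_cpu_data when the L1TF mitigation is active, and the MMIO value is supplied by vendor code (per the commit message, SME-capable hosts are the only ones that need a custom value).

```c
/* Standalone sketch of the collision check; values are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define SHADOW_NONPRESENT_OR_RSVD_MASK_LEN 5

int main(void)
{
	/*
	 * Pretend the host caches 46 physical address bits, so the L1TF
	 * mitigation repurposes the top 5 of them (bits 41..45) and parks
	 * the displaced GFN bits just above the cacheable range, i.e. in
	 * bits 46..50 (the mask shifted left by the mask length).
	 */
	uint64_t shadow_nonpresent_or_rsvd_mask = 0x1fULL << 41;

	/* Hypothetical MMIO value that happens to land in that window. */
	uint64_t mmio_value = 1ULL << 47;

	if (mmio_value & (shadow_nonpresent_or_rsvd_mask <<
			  SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)) {
		/*
		 * Collision: an MMIO SPTE could be mistaken for a
		 * non-present SPTE carrying relocated GFN bits, so the
		 * patch gives up on MMIO caching by zeroing the value.
		 */
		mmio_value = 0;
	}

	printf("mmio_value after check: 0x%llx\n",
	       (unsigned long long)mmio_value);
	return 0;
}
```

As the commit message notes, the check is expected never to fire on real hardware, so a WARN plus quietly dropping the MMIO-caching optimization is a cheap guard against a correctness bug rather than a path that needs graceful handling.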