author		Alexey Kardashevskiy <aik@ozlabs.ru>		2018-09-10 10:29:08 +0200
committer	Michael Ellerman <mpe@ellerman.id.au>		2018-10-02 15:09:26 +0200
commit		e199ad2bf5cf7a6d15747a97da4ec1084bd6aae1
tree		63f4857c8f033bbe7e76576a3561c1f7962c6cdc
parent		Merge branch 'kvm-ppc-fixes' of paulus/powerpc into topic/ppc-kvm
KVM: PPC: Validate all tces before updating tables
The KVM TCE handlers are written in such a way that they fail either when
something goes horribly wrong or when userspace makes an obvious mistake,
such as passing a misaligned address.
We are going to enhance the TCE checker to fail on attempts to map a bigger
IOMMU page than the underlying pinned memory, so let's validate the TCEs
beforehand.
This should cause no behavioral change.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-rw-r--r--	arch/powerpc/kvm/book3s_64_vio.c	| 18
-rw-r--r--	arch/powerpc/kvm/book3s_64_vio_hv.c	|  4
2 files changed, 22 insertions, 0 deletions
diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 9a3f2646ecc7..3c17977ffea6 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -599,6 +599,24 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
 		ret = kvmppc_tce_validate(stt, tce);
 		if (ret != H_SUCCESS)
 			goto unlock_exit;
+	}
+
+	for (i = 0; i < npages; ++i) {
+		/*
+		 * This looks unsafe, because we validate, then regrab
+		 * the TCE from userspace which could have been changed by
+		 * another thread.
+		 *
+		 * But it actually is safe, because the relevant checks will be
+		 * re-executed in the following code. If userspace tries to
+		 * change this dodgily it will result in a messier failure mode
+		 * but won't threaten the host.
+		 */
+		if (get_user(tce, tces + i)) {
+			ret = H_TOO_HARD;
+			goto unlock_exit;
+		}
+		tce = be64_to_cpu(tce);
 
 		if (kvmppc_gpa_to_ua(vcpu->kvm,
 				tce & ~(TCE_PCI_READ | TCE_PCI_WRITE),
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6821ead4b4eb..c2848e0b1b71 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -524,6 +524,10 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
 		ret = kvmppc_tce_validate(stt, tce);
 		if (ret != H_SUCCESS)
 			goto unlock_exit;
+	}
+
+	for (i = 0; i < npages; ++i) {
+		unsigned long tce = be64_to_cpu(((u64 *)tces)[i]);
 
 		ua = 0;
 		if (kvmppc_gpa_to_ua(vcpu->kvm,
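
Editorial note: the change boils down to a two-pass update of the TCE list passed to H_PUT_TCE_INDIRECT: validate every entry first, and only then walk the list again to modify the table, so a bad entry can no longer leave the table partially updated. The following is a minimal, self-contained C sketch of that pattern, not the kernel code itself; all names here (fake_tce_validate, fake_tce_apply, TCE_PAGE_MASK, and so on) are hypothetical stand-ins, and the alignment check merely stands in for the real kvmppc_tce_validate().

/*
 * Hypothetical sketch of the "validate all, then update" pattern used by
 * this commit.  Not the kernel API: every identifier here is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TCE_READ	0x1ULL		/* illustrative permission bits */
#define TCE_WRITE	0x2ULL
#define TCE_PAGE_MASK	0xFFFULL	/* assume a 4K IOMMU page for the sketch */

/* Pass 1: reject malformed entries before the table is touched at all. */
static bool fake_tce_validate(uint64_t tce)
{
	uint64_t addr = tce & ~(TCE_READ | TCE_WRITE);

	return (addr & TCE_PAGE_MASK) == 0;	/* misaligned address => fail */
}

/* Pass 2: update the stand-in TCE table. */
static void fake_tce_apply(uint64_t *table, unsigned long entry, uint64_t tce)
{
	table[entry] = tce;
}

/*
 * Two-pass update: validate every entry first so the table is either fully
 * updated or left untouched.  The second pass reads the list again; in the
 * real handlers that list lives in userspace memory, which is why they
 * re-check each value after re-reading it.
 */
static int fake_put_tce_indirect(uint64_t *table, unsigned long entry,
				 const uint64_t *tces, unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; ++i)
		if (!fake_tce_validate(tces[i]))
			return -1;		/* nothing has been modified yet */

	for (i = 0; i < npages; ++i)
		fake_tce_apply(table, entry + i, tces[i]);

	return 0;
}

int main(void)
{
	uint64_t table[4] = { 0 };
	uint64_t good[2] = { 0x1000 | TCE_READ, 0x2000 | TCE_WRITE };
	uint64_t bad[2]  = { 0x1000 | TCE_READ, 0x2345 };	/* second entry misaligned */

	printf("good list: %d\n", fake_put_tce_indirect(table, 0, good, 2));	/* 0 */
	printf("bad list:  %d\n", fake_put_tce_indirect(table, 0, bad, 2));	/* -1, table untouched */
	return 0;
}

The in-diff comment covers the remaining subtlety: re-reading the TCE from userspace after validation looks like a time-of-check/time-of-use hole, but the later code re-executes the relevant checks, so a userspace that races with itself only gets a messier failure mode, not a way to harm the host.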