[alpine-aports] [PATCH edge] main/xen: security fixes for (XSA-248, XSA-249, XSA-250, XSA-251)

From: Daniel Sabogal <dsabogalcc@gmail.com>
Date: Sat, 20 Jan 2018 18:58:42 -0500

---
Rebased for edge. These fixes have already been applied to 3.7-stable.
---
 main/xen/APKBUILD     |  15 ++++-
 main/xen/xsa248.patch | 164 ++++++++++++++++++++++++++++++++++++++++++++++++++
 main/xen/xsa249.patch |  42 +++++++++++++
 main/xen/xsa250.patch |  67 +++++++++++++++++++++
 main/xen/xsa251.patch |  21 +++++++
 5 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 main/xen/xsa248.patch
 create mode 100644 main/xen/xsa249.patch
 create mode 100644 main/xen/xsa250.patch
 create mode 100644 main/xen/xsa251.patch
diff --git a/main/xen/APKBUILD b/main/xen/APKBUILD
index 64289fb261..b05d25d32d 100644
--- a/main/xen/APKBUILD
+++ b/main/xen/APKBUILD
@@ -3,7 +3,7 @@
 # Maintainer: William Pitcock <nenolod@dereferenced.org>
 pkgname=xen
 pkgver=4.9.1
-pkgrel=2
+pkgrel=3
 pkgdesc="Xen hypervisor"
 url="http://www.xen.org/"
 arch="x86_64 armhf aarch64"
@@ -103,6 +103,11 @@ options="!strip"
 #     - XSA-247
 #   4.9.1-r2:
 #     - XSA-254 XPTI
+#   4.9.1-r3:
+#     - XSA-248
+#     - XSA-249
+#     - XSA-250
+#     - XSA-251
 
 case "$CARCH" in
 x86*)
@@ -153,6 +158,10 @@ source="https://downloads.xenproject.org/release/$pkgname/$pkgver/$pkgname-$pkgv
 	xsa246-4.9.patch
 	xsa247-4.9-1.patch
 	xsa247-4.9-2.patch
+	xsa248.patch
+	xsa249.patch
+	xsa250.patch
+	xsa251.patch
 
 	0001-x86-entry-Remove-support-for-partial-cpu_user_regs-f.patch
 	0002-x86-mm-Always-set-_PAGE_ACCESSED-on-L4e-updates.patch
@@ -420,6 +429,10 @@ c2bc9ffc8583aeae71cee9ddcc4418969768d4e3764d47307da54f93981c0109fb07d84b061b3a36
 b00f42d2069f273e204698177d2c36950cee759a92dfe7833c812ddff4dedde2c4a842980927ec4fc46d1f54b49879bf3a3681c6faf30b72fb3ad6a7eba060b2  xsa246-4.9.patch
 c5e064543048751fda86ce64587493518da87d219ff077abb83ac13d8381ceb29f1b6479fc0b761b8f7a04c8c70203791ac4a8cc79bbc6f4dcfa6661c4790c5e  xsa247-4.9-1.patch
 71aefbe27cbd1d1d363b7d5826c69a238e4aad2958a1c6da330ae5daee791f54ce1d01fb79db84ed4248ab8b1593c9c28c3de5108f4d0953b04f7819af23a1d1  xsa247-4.9-2.patch
+6415689190b8f4ead7a3482a2285485af4acd4f3565521736f8fe975c74c7c70b27608e0142a7165b4f735b547b688db99a6027697e77b3e1d15c09e14b4f0a6  xsa248.patch
+05a2e954bab1877500eb5ed3a8c49edb27411ed3ec9dbfb2115b7804a3b03c6d45c9f08a7ed96ff2b586346f321142065a8c5a5d996468496b373637b6ee31b9  xsa249.patch
+b3030f09ddb4f9e4a356519c7b74d393e8db085278a1e616788c81d19988699a6efdd8568277c25514f3298ca92e5a09e3cd08b0a308a4d2ddb55374a8445657  xsa250.patch
+928153b48af2bd6b334058c5919880cfc7d665c63e0232932866941cbea6deb8d0d83f70dff0974d3df27fc84096beca51139a0b1c0585978f298256b3fd82eb  xsa251.patch
 cda45e5a564e429a1299f07ea496b0e0614f6b2d71a5dcd24f5efdb571cc54d74d04c8e0766279fe2acb7d9bb9cf8505281d6c7ba2d6334009e14a10f83096ee  0001-x86-entry-Remove-support-for-partial-cpu_user_regs-f.patch
 bce07e4094ae3036dafdf9fe3aeb1f566281484e1398184d774af9ad371066c0e8af232b8d1ab5d450923fb482e6dea6dfb921976b87b20ab56a3f2b4486d0d4  0002-x86-mm-Always-set-_PAGE_ACCESSED-on-L4e-updates.patch
 ba09c54451fae35f3fc70e4f2a76791bc652ad373e87402ebc30c53f8e7db2368d52a9018cc28a5efcbcd77e85c9ae45d9580550f215a3f9bbf63bbd21ef938d  0003-x86-Meltdown-band-aid-against-malicious-64-bit-PV-gu.patch
diff --git a/main/xen/xsa248.patch b/main/xen/xsa248.patch
new file mode 100644
index 0000000000..966c16e043
--- /dev/null
+++ b/main/xen/xsa248.patch
@@ -0,0 +1,164 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: x86/mm: don't wrongly set page ownership
+
+PV domains can obtain mappings of any pages owned by the correct domain,
+including ones that aren't actually assigned as "normal" RAM, but used
+by Xen internally.  At the moment such "internal" pages marked as owned
+by a guest include pages used to track logdirty bits, as well as p2m
+pages and the "unpaged pagetable" for HVM guests. Since the PV memory
+management and shadow code conflict in their use of struct page_info
+fields, and since shadow code is being used for log-dirty handling for
+PV domains, pages coming from the shadow pool must, for PV domains, not
+have the domain set as their owner.
+
+While the change could be done conditionally for just the PV case in
+shadow code, do it unconditionally (and for consistency also for HAP),
+just to be on the safe side.
+
+There's one special case though for shadow code: The page table used for
+running a HVM guest in unpaged mode is subject to get_page() (in
+set_shadow_status()) and hence must have its owner set.
+
+This is XSA-248.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Tim Deegan <tim@xen.org>
+Reviewed-by: George Dunlap <george.dunlap@citrix.com>
+---
+v2: Drop PGC_page_table related pieces.
+
+--- a/xen/arch/x86/mm/hap/hap.c
++++ b/xen/arch/x86/mm/hap/hap.c
+@@ -286,8 +286,7 @@ static struct page_info *hap_alloc_p2m_p
+     {
+         d->arch.paging.hap.total_pages--;
+         d->arch.paging.hap.p2m_pages++;
+-        page_set_owner(pg, d);
+-        pg->count_info |= 1;
++        ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
+     }
+     else if ( !d->arch.paging.p2m_alloc_failed )
+     {
+@@ -302,21 +301,23 @@ static struct page_info *hap_alloc_p2m_p
+ 
+ static void hap_free_p2m_page(struct domain *d, struct page_info *pg)
+ {
++    struct domain *owner = page_get_owner(pg);
++
+     /* This is called both from the p2m code (which never holds the 
+      * paging lock) and the log-dirty code (which always does). */
+     paging_lock_recursive(d);
+ 
+-    ASSERT(page_get_owner(pg) == d);
+-    /* Should have just the one ref we gave it in alloc_p2m_page() */
+-    if ( (pg->count_info & PGC_count_mask) != 1 ) {
+-        HAP_ERROR("Odd p2m page %p count c=%#lx t=%"PRtype_info"\n",
+-                     pg, pg->count_info, pg->u.inuse.type_info);
++    /* Should still have no owner and count zero. */
++    if ( owner || (pg->count_info & PGC_count_mask) )
++    {
++        HAP_ERROR("d%d: Odd p2m page %"PRI_mfn" d=%d c=%lx t=%"PRtype_info"\n",
++                  d->domain_id, mfn_x(page_to_mfn(pg)),
++                  owner ? owner->domain_id : DOMID_INVALID,
++                  pg->count_info, pg->u.inuse.type_info);
+         WARN();
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+     }
+-    pg->count_info &= ~PGC_count_mask;
+-    /* Free should not decrement domain's total allocation, since
+-     * these pages were allocated without an owner. */
+-    page_set_owner(pg, NULL);
+     d->arch.paging.hap.p2m_pages--;
+     d->arch.paging.hap.total_pages++;
+     hap_free(d, page_to_mfn(pg));
+--- a/xen/arch/x86/mm/shadow/common.c
++++ b/xen/arch/x86/mm/shadow/common.c
+@@ -1503,32 +1503,29 @@ shadow_alloc_p2m_page(struct domain *d)
+     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
+     d->arch.paging.shadow.p2m_pages++;
+     d->arch.paging.shadow.total_pages--;
++    ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
+ 
+     paging_unlock(d);
+ 
+-    /* Unlike shadow pages, mark p2m pages as owned by the domain.
+-     * Marking the domain as the owner would normally allow the guest to
+-     * create mappings of these pages, but these p2m pages will never be
+-     * in the domain's guest-physical address space, and so that is not
+-     * believed to be a concern. */
+-    page_set_owner(pg, d);
+-    pg->count_info |= 1;
+     return pg;
+ }
+ 
+ static void
+ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
+ {
+-    ASSERT(page_get_owner(pg) == d);
+-    /* Should have just the one ref we gave it in alloc_p2m_page() */
+-    if ( (pg->count_info & PGC_count_mask) != 1 )
++    struct domain *owner = page_get_owner(pg);
++
++    /* Should still have no owner and count zero. */
++    if ( owner || (pg->count_info & PGC_count_mask) )
+     {
+-        SHADOW_ERROR("Odd p2m page count c=%#lx t=%"PRtype_info"\n",
++        SHADOW_ERROR("d%d: Odd p2m page %"PRI_mfn" d=%d c=%lx t=%"PRtype_info"\n",
++                     d->domain_id, mfn_x(page_to_mfn(pg)),
++                     owner ? owner->domain_id : DOMID_INVALID,
+                      pg->count_info, pg->u.inuse.type_info);
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+     }
+-    pg->count_info &= ~PGC_count_mask;
+     pg->u.sh.type = SH_type_p2m_table; /* p2m code reuses type-info */
+-    page_set_owner(pg, NULL);
+ 
+     /* This is called both from the p2m code (which never holds the
+      * paging lock) and the log-dirty code (which always does). */
+@@ -3132,7 +3129,9 @@ int shadow_enable(struct domain *d, u32
+         e = __map_domain_page(pg);
+         write_32bit_pse_identmap(e);
+         unmap_domain_page(e);
++        pg->count_info = 1;
+         pg->u.inuse.type_info = PGT_l2_page_table | 1 | PGT_validated;
++        page_set_owner(pg, d);
+     }
+ 
+     paging_lock(d);
+@@ -3170,7 +3169,11 @@ int shadow_enable(struct domain *d, u32
+     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
+         p2m_teardown(p2m);
+     if ( rv != 0 && pg != NULL )
++    {
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+         shadow_free_p2m_page(d, pg);
++    }
+     domain_unpause(d);
+     return rv;
+ }
+@@ -3279,7 +3282,22 @@ out:
+ 
+     /* Must be called outside the lock */
+     if ( unpaged_pagetable )
++    {
++        if ( page_get_owner(unpaged_pagetable) == d &&
++             (unpaged_pagetable->count_info & PGC_count_mask) == 1 )
++        {
++            unpaged_pagetable->count_info &= ~PGC_count_mask;
++            page_set_owner(unpaged_pagetable, NULL);
++        }
++        /* Complain here in cases where shadow_free_p2m_page() won't. */
++        else if ( !page_get_owner(unpaged_pagetable) &&
++                  !(unpaged_pagetable->count_info & PGC_count_mask) )
++            SHADOW_ERROR("d%d: Odd unpaged pt %"PRI_mfn" c=%lx t=%"PRtype_info"\n",
++                         d->domain_id, mfn_x(page_to_mfn(unpaged_pagetable)),
++                         unpaged_pagetable->count_info,
++                         unpaged_pagetable->u.inuse.type_info);
+         shadow_free_p2m_page(d, unpaged_pagetable);
++    }
+ }
+ 
+ void shadow_final_teardown(struct domain *d)
diff --git a/main/xen/xsa249.patch b/main/xen/xsa249.patch
new file mode 100644
index 0000000000..ecfa4305e5
--- /dev/null
+++ b/main/xen/xsa249.patch
@@ -0,0 +1,42 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: x86/shadow: fix refcount overflow check
+
+Commit c385d27079 ("x86 shadow: for multi-page shadows, explicitly track
+the first page") reduced the refcount width to 25, without adjusting the
+overflow check. Eliminate the disconnect by using a manifest constant.
+
+Interestingly, up to commit 047782fa01 ("Out-of-sync L1 shadows: OOS
+snapshot") the refcount was 27 bits wide, yet the check was already
+using 26.
+
+This is XSA-249.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: George Dunlap <george.dunlap@citrix.com>
+Reviewed-by: Tim Deegan <tim@xen.org>
+---
+v2: Simplify expression back to the style it was.
+
+--- a/xen/arch/x86/mm/shadow/private.h
++++ b/xen/arch/x86/mm/shadow/private.h
+@@ -529,7 +529,7 @@ static inline int sh_get_ref(struct doma
+     x = sp->u.sh.count;
+     nx = x + 1;
+ 
+-    if ( unlikely(nx >= 1U<<26) )
++    if ( unlikely(nx >= (1U << PAGE_SH_REFCOUNT_WIDTH)) )
+     {
+         SHADOW_PRINTK("shadow ref overflow, gmfn=%lx smfn=%lx\n",
+                        __backpointer(sp), mfn_x(smfn));
+--- a/xen/include/asm-x86/mm.h
++++ b/xen/include/asm-x86/mm.h
+@@ -82,7 +82,8 @@ struct page_info
+             unsigned long type:5;   /* What kind of shadow is this? */
+             unsigned long pinned:1; /* Is the shadow pinned? */
+             unsigned long head:1;   /* Is this the first page of the shadow? */
+-            unsigned long count:25; /* Reference count */
++#define PAGE_SH_REFCOUNT_WIDTH 25
++            unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
+         } sh;
+ 
+         /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
diff --git a/main/xen/xsa250.patch b/main/xen/xsa250.patch
new file mode 100644
index 0000000000..26aeb33fed
--- /dev/null
+++ b/main/xen/xsa250.patch
@@ -0,0 +1,67 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: x86/shadow: fix ref-counting error handling
+
+The old-Linux handling in shadow_set_l4e() mistakenly ORed together the
+results of sh_get_ref() and sh_pin(). As the latter failing is not a
+correctness problem, simply ignore its return value.
+
+In sh_set_toplevel_shadow() a failing sh_get_ref() must not be
+accompanied by installing the entry, despite the domain being crashed.
+
+This is XSA-250.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Tim Deegan <tim@xen.org>
+
+--- a/xen/arch/x86/mm/shadow/multi.c
++++ b/xen/arch/x86/mm/shadow/multi.c
+@@ -923,7 +923,7 @@ static int shadow_set_l4e(struct domain
+                           shadow_l4e_t new_sl4e,
+                           mfn_t sl4mfn)
+ {
+-    int flags = 0, ok;
++    int flags = 0;
+     shadow_l4e_t old_sl4e;
+     paddr_t paddr;
+     ASSERT(sl4e != NULL);
+@@ -938,15 +938,16 @@ static int shadow_set_l4e(struct domain
+     {
+         /* About to install a new reference */
+         mfn_t sl3mfn = shadow_l4e_get_mfn(new_sl4e);
+-        ok = sh_get_ref(d, sl3mfn, paddr);
+-        /* Are we pinning l3 shadows to handle wierd linux behaviour? */
+-        if ( sh_type_is_pinnable(d, SH_type_l3_64_shadow) )
+-            ok |= sh_pin(d, sl3mfn);
+-        if ( !ok )
++
++        if ( !sh_get_ref(d, sl3mfn, paddr) )
+         {
+             domain_crash(d);
+             return SHADOW_SET_ERROR;
+         }
++
++        /* Are we pinning l3 shadows to handle weird Linux behaviour? */
++        if ( sh_type_is_pinnable(d, SH_type_l3_64_shadow) )
++            sh_pin(d, sl3mfn);
+     }
+ 
+     /* Write the new entry */
+@@ -3965,14 +3966,15 @@ sh_set_toplevel_shadow(struct vcpu *v,
+ 
+     /* Take a ref to this page: it will be released in sh_detach_old_tables()
+      * or the next call to set_toplevel_shadow() */
+-    if ( !sh_get_ref(d, smfn, 0) )
++    if ( sh_get_ref(d, smfn, 0) )
++        new_entry = pagetable_from_mfn(smfn);
++    else
+     {
+         SHADOW_ERROR("can't install %#lx as toplevel shadow\n", mfn_x(smfn));
+         domain_crash(d);
++        new_entry = pagetable_null();
+     }
+ 
+-    new_entry = pagetable_from_mfn(smfn);
+-
+  install_new_entry:
+     /* Done.  Install it */
+     SHADOW_PRINTK("%u/%u [%u] gmfn %#"PRI_mfn" smfn %#"PRI_mfn"\n",
diff --git a/main/xen/xsa251.patch b/main/xen/xsa251.patch
new file mode 100644
index 0000000000..582ef622eb
--- /dev/null
+++ b/main/xen/xsa251.patch
@@ -0,0 +1,21 @@
+From: Jan Beulich <jbeulich@suse.com>
+Subject: x86/paging: don't unconditionally BUG() on finding SHARED_M2P_ENTRY
+
+PV guests can fully control the values written into the P2M.
+
+This is XSA-251.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
+
+--- a/xen/arch/x86/mm/paging.c
++++ b/xen/arch/x86/mm/paging.c
+@@ -274,7 +274,7 @@ void paging_mark_pfn_dirty(struct domain
+         return;
+ 
+     /* Shared MFNs should NEVER be marked dirty */
+-    BUG_ON(SHARED_M2P(pfn_x(pfn)));
++    BUG_ON(paging_mode_translate(d) && SHARED_M2P(pfn_x(pfn)));
+ 
+     /*
+      * Values with the MSB set denote MFNs that aren't really part of the
-- 
2.15.0