From: Daniel Sabogal
To: alpine-aports@lists.alpinelinux.org
Subject: [alpine-aports] [PATCH 3.7-stable/edge] main/xen: security fixes for (XSA-248, XSA-249, XSA-250, XSA-251)
Date: Fri, 15 Dec 2017 17:38:55 -0500
Message-Id: <20171215223855.3700-2-dsabogalcc@gmail.com>
X-Mailer: git-send-email 2.15.0
In-Reply-To: <20171215223855.3700-1-dsabogalcc@gmail.com>
References: <20171215223855.3700-1-dsabogalcc@gmail.com>

---
 main/xen/APKBUILD     |  15 ++++-
 main/xen/xsa248.patch | 164 ++++++++++++++++++++++++++++++++++++++++++++++++++
 main/xen/xsa249.patch |  42 +++++++++++++
 main/xen/xsa250.patch |  67 +++++++++++++++++++++
 main/xen/xsa251.patch |  21 +++++++
 5 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 main/xen/xsa248.patch
 create mode 100644 main/xen/xsa249.patch
 create mode 100644 main/xen/xsa250.patch
 create mode 100644 main/xen/xsa251.patch

diff --git a/main/xen/APKBUILD b/main/xen/APKBUILD
index bb02b2bee9..067f1b3648 100644
--- a/main/xen/APKBUILD
+++ b/main/xen/APKBUILD
@@ -3,7 +3,7 @@
 # Maintainer: William Pitcock
 pkgname=xen
 pkgver=4.9.1
-pkgrel=1
+pkgrel=2
 pkgdesc="Xen hypervisor"
 url="http://www.xen.org/"
 arch="x86_64 armhf aarch64"
@@ -101,6 +101,11 @@ options="!strip"
 # 4.9.1-r1:
 # - XSA-246
 # - XSA-247
+# 4.9.1-r2:
+# - XSA-248
+# - XSA-249
+# - XSA-250
+# - XSA-251
 
 case "$CARCH" in
 x86*)
@@ -151,6 +156,10 @@ source="https://downloads.xenproject.org/release/$pkgname/$pkgver/$pkgname-$pkgv
 	xsa246-4.9.patch
 	xsa247-4.9-1.patch
 	xsa247-4.9-2.patch
+	xsa248.patch
+	xsa249.patch
+	xsa250.patch
+	xsa251.patch
 
 	qemu-coroutine-gthread.patch
 	qemu-xen_paths.patch
@@ -413,6 +422,10 @@ c2bc9ffc8583aeae71cee9ddcc4418969768d4e3764d47307da54f93981c0109fb07d84b061b3a36
 b00f42d2069f273e204698177d2c36950cee759a92dfe7833c812ddff4dedde2c4a842980927ec4fc46d1f54b49879bf3a3681c6faf30b72fb3ad6a7eba060b2  xsa246-4.9.patch
 c5e064543048751fda86ce64587493518da87d219ff077abb83ac13d8381ceb29f1b6479fc0b761b8f7a04c8c70203791ac4a8cc79bbc6f4dcfa6661c4790c5e  xsa247-4.9-1.patch
 71aefbe27cbd1d1d363b7d5826c69a238e4aad2958a1c6da330ae5daee791f54ce1d01fb79db84ed4248ab8b1593c9c28c3de5108f4d0953b04f7819af23a1d1  xsa247-4.9-2.patch
+6415689190b8f4ead7a3482a2285485af4acd4f3565521736f8fe975c74c7c70b27608e0142a7165b4f735b547b688db99a6027697e77b3e1d15c09e14b4f0a6  xsa248.patch
+05a2e954bab1877500eb5ed3a8c49edb27411ed3ec9dbfb2115b7804a3b03c6d45c9f08a7ed96ff2b586346f321142065a8c5a5d996468496b373637b6ee31b9  xsa249.patch
+b3030f09ddb4f9e4a356519c7b74d393e8db085278a1e616788c81d19988699a6efdd8568277c25514f3298ca92e5a09e3cd08b0a308a4d2ddb55374a8445657  xsa250.patch
+928153b48af2bd6b334058c5919880cfc7d665c63e0232932866941cbea6deb8d0d83f70dff0974d3df27fc84096beca51139a0b1c0585978f298256b3fd82eb  xsa251.patch
 c3c46f232f0bd9f767b232af7e8ce910a6166b126bd5427bb8dc325aeb2c634b956de3fc225cab5af72649070c8205cc8e1cab7689fc266c204f525086f1a562  qemu-coroutine-gthread.patch
 1936ab39a1867957fa640eb81c4070214ca4856a2743ba7e49c0cd017917071a9680d015f002c57fa7b9600dbadd29dcea5887f50e6c133305df2669a7a933f3  qemu-xen_paths.patch
 f095ea373f36381491ad36f0662fb4f53665031973721256b23166e596318581da7cbb0146d0beb2446729adfdb321e01468e377793f6563a67d68b8b0f7ffe3  hotplug-vif-vtrill.patch
diff --git a/main/xen/xsa248.patch b/main/xen/xsa248.patch
new file mode 100644
index 0000000000..966c16e043
--- /dev/null
+++ b/main/xen/xsa248.patch
@@ -0,0 +1,164 @@
+From: Jan Beulich
+Subject: x86/mm: don't wrongly set page ownership
+
+PV domains can obtain mappings of any pages owned by the correct domain,
+including ones that aren't actually assigned as "normal" RAM, but used
+by Xen internally. At the moment such "internal" pages marked as owned
+by a guest include pages used to track logdirty bits, as well as p2m
+pages and the "unpaged pagetable" for HVM guests. Since the PV memory
+management and shadow code conflict in their use of struct page_info
+fields, and since shadow code is being used for log-dirty handling for
+PV domains, pages coming from the shadow pool must, for PV domains, not
+have the domain set as their owner.
+
+While the change could be done conditionally for just the PV case in
+shadow code, do it unconditionally (and for consistency also for HAP),
+just to be on the safe side.
+
+There's one special case though for shadow code: The page table used for
+running a HVM guest in unpaged mode is subject to get_page() (in
+set_shadow_status()) and hence must have its owner set.
+
+This is XSA-248.
+
+Signed-off-by: Jan Beulich
+Reviewed-by: Tim Deegan
+Reviewed-by: George Dunlap
+---
+v2: Drop PGC_page_table related pieces.
+
+--- a/xen/arch/x86/mm/hap/hap.c
++++ b/xen/arch/x86/mm/hap/hap.c
+@@ -286,8 +286,7 @@ static struct page_info *hap_alloc_p2m_p
+     {
+         d->arch.paging.hap.total_pages--;
+         d->arch.paging.hap.p2m_pages++;
+-        page_set_owner(pg, d);
+-        pg->count_info |= 1;
++        ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
+     }
+     else if ( !d->arch.paging.p2m_alloc_failed )
+     {
+@@ -302,21 +301,23 @@ static struct page_info *hap_alloc_p2m_p
+ 
+ static void hap_free_p2m_page(struct domain *d, struct page_info *pg)
+ {
++    struct domain *owner = page_get_owner(pg);
++
+     /* This is called both from the p2m code (which never holds the
+      * paging lock) and the log-dirty code (which always does). */
+     paging_lock_recursive(d);
+ 
+-    ASSERT(page_get_owner(pg) == d);
+-    /* Should have just the one ref we gave it in alloc_p2m_page() */
+-    if ( (pg->count_info & PGC_count_mask) != 1 ) {
+-        HAP_ERROR("Odd p2m page %p count c=%#lx t=%"PRtype_info"\n",
+-                  pg, pg->count_info, pg->u.inuse.type_info);
++    /* Should still have no owner and count zero. */
++    if ( owner || (pg->count_info & PGC_count_mask) )
++    {
++        HAP_ERROR("d%d: Odd p2m page %"PRI_mfn" d=%d c=%lx t=%"PRtype_info"\n",
++                  d->domain_id, mfn_x(page_to_mfn(pg)),
++                  owner ? owner->domain_id : DOMID_INVALID,
++                  pg->count_info, pg->u.inuse.type_info);
+         WARN();
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+     }
+-    pg->count_info &= ~PGC_count_mask;
+-    /* Free should not decrement domain's total allocation, since
+-     * these pages were allocated without an owner. */
+-    page_set_owner(pg, NULL);
+     d->arch.paging.hap.p2m_pages--;
+     d->arch.paging.hap.total_pages++;
+     hap_free(d, page_to_mfn(pg));
+--- a/xen/arch/x86/mm/shadow/common.c
++++ b/xen/arch/x86/mm/shadow/common.c
+@@ -1503,32 +1503,29 @@ shadow_alloc_p2m_page(struct domain *d)
+     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
+     d->arch.paging.shadow.p2m_pages++;
+     d->arch.paging.shadow.total_pages--;
++    ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
+ 
+     paging_unlock(d);
+ 
+-    /* Unlike shadow pages, mark p2m pages as owned by the domain.
+-     * Marking the domain as the owner would normally allow the guest to
+-     * create mappings of these pages, but these p2m pages will never be
+-     * in the domain's guest-physical address space, and so that is not
+-     * believed to be a concern. */
+-    page_set_owner(pg, d);
+-    pg->count_info |= 1;
+     return pg;
+ }
+ 
+ static void
+ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
+ {
+-    ASSERT(page_get_owner(pg) == d);
+-    /* Should have just the one ref we gave it in alloc_p2m_page() */
+-    if ( (pg->count_info & PGC_count_mask) != 1 )
++    struct domain *owner = page_get_owner(pg);
++
++    /* Should still have no owner and count zero. */
++    if ( owner || (pg->count_info & PGC_count_mask) )
+     {
+-        SHADOW_ERROR("Odd p2m page count c=%#lx t=%"PRtype_info"\n",
++        SHADOW_ERROR("d%d: Odd p2m page %"PRI_mfn" d=%d c=%lx t=%"PRtype_info"\n",
++                     d->domain_id, mfn_x(page_to_mfn(pg)),
++                     owner ? owner->domain_id : DOMID_INVALID,
+                      pg->count_info, pg->u.inuse.type_info);
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+     }
+-    pg->count_info &= ~PGC_count_mask;
+     pg->u.sh.type = SH_type_p2m_table; /* p2m code reuses type-info */
+-    page_set_owner(pg, NULL);
+ 
+     /* This is called both from the p2m code (which never holds the
+      * paging lock) and the log-dirty code (which always does). */
+@@ -3132,7 +3129,9 @@ int shadow_enable(struct domain *d, u32
+         e = __map_domain_page(pg);
+         write_32bit_pse_identmap(e);
+         unmap_domain_page(e);
++        pg->count_info = 1;
+         pg->u.inuse.type_info = PGT_l2_page_table | 1 | PGT_validated;
++        page_set_owner(pg, d);
+     }
+ 
+     paging_lock(d);
+@@ -3170,7 +3169,11 @@ int shadow_enable(struct domain *d, u32
+     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
+         p2m_teardown(p2m);
+     if ( rv != 0 && pg != NULL )
++    {
++        pg->count_info &= ~PGC_count_mask;
++        page_set_owner(pg, NULL);
+         shadow_free_p2m_page(d, pg);
++    }
+     domain_unpause(d);
+     return rv;
+ }
+@@ -3279,7 +3282,22 @@ out:
+ 
+     /* Must be called outside the lock */
+     if ( unpaged_pagetable )
++    {
++        if ( page_get_owner(unpaged_pagetable) == d &&
++             (unpaged_pagetable->count_info & PGC_count_mask) == 1 )
++        {
++            unpaged_pagetable->count_info &= ~PGC_count_mask;
++            page_set_owner(unpaged_pagetable, NULL);
++        }
++        /* Complain here in cases where shadow_free_p2m_page() won't. */
++        else if ( !page_get_owner(unpaged_pagetable) &&
++                  !(unpaged_pagetable->count_info & PGC_count_mask) )
++            SHADOW_ERROR("d%d: Odd unpaged pt %"PRI_mfn" c=%lx t=%"PRtype_info"\n",
++                         d->domain_id, mfn_x(page_to_mfn(unpaged_pagetable)),
++                         unpaged_pagetable->count_info,
++                         unpaged_pagetable->u.inuse.type_info);
+         shadow_free_p2m_page(d, unpaged_pagetable);
++    }
+ }
+ 
+ void shadow_final_teardown(struct domain *d)
diff --git a/main/xen/xsa249.patch b/main/xen/xsa249.patch
new file mode 100644
index 0000000000..ecfa4305e5
--- /dev/null
+++ b/main/xen/xsa249.patch
@@ -0,0 +1,42 @@
+From: Jan Beulich
+Subject: x86/shadow: fix refcount overflow check
+
+Commit c385d27079 ("x86 shadow: for multi-page shadows, explicitly track
+the first page") reduced the refcount width to 25, without adjusting the
+overflow check. Eliminate the disconnect by using a manifest constant.
+
+Interestingly, up to commit 047782fa01 ("Out-of-sync L1 shadows: OOS
+snapshot") the refcount was 27 bits wide, yet the check was already
+using 26.
+
+This is XSA-249.
+
+Signed-off-by: Jan Beulich
+Reviewed-by: George Dunlap
+Reviewed-by: Tim Deegan
+---
+v2: Simplify expression back to the style it was.
+
+--- a/xen/arch/x86/mm/shadow/private.h
++++ b/xen/arch/x86/mm/shadow/private.h
+@@ -529,7 +529,7 @@ static inline int sh_get_ref(struct doma
+     x = sp->u.sh.count;
+     nx = x + 1;
+ 
+-    if ( unlikely(nx >= 1U<<26) )
++    if ( unlikely(nx >= (1U << PAGE_SH_REFCOUNT_WIDTH)) )
+     {
+         SHADOW_PRINTK("shadow ref overflow, gmfn=%lx smfn=%lx\n",
+                       __backpointer(sp), mfn_x(smfn));
+--- a/xen/include/asm-x86/mm.h
++++ b/xen/include/asm-x86/mm.h
+@@ -82,7 +82,8 @@ struct page_info
+         unsigned long type:5;   /* What kind of shadow is this? */
+         unsigned long pinned:1; /* Is the shadow pinned? */
+         unsigned long head:1;   /* Is this the first page of the shadow? */
+-        unsigned long count:25; /* Reference count */
++#define PAGE_SH_REFCOUNT_WIDTH 25
++        unsigned long count:PAGE_SH_REFCOUNT_WIDTH; /* Reference count */
+     } sh;
+ 
+     /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
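The XSA-249 change above is worth internalizing: a bitfield's width and its overflow check must come from one manifest constant, or they silently drift apart exactly as happened after commit c385d27079. Reduced to a minimal standalone C sketch (hypothetical names, not the Xen code):

#include <stdio.h>

/* Single source of truth for the refcount width -- the point of the
 * XSA-249 fix is that the field and the check share this constant. */
#define REFCOUNT_WIDTH 25

struct shadow_page {
    unsigned long type:5;
    unsigned long pinned:1;
    unsigned long head:1;
    unsigned long count:REFCOUNT_WIDTH;    /* reference count */
};

/* Returns 1 on success, 0 on would-overflow; the caller must fail the
 * operation rather than let the bitfield silently wrap. */
static int get_ref(struct shadow_page *sp)
{
    unsigned long nx = sp->count + 1;

    if (nx >= (1UL << REFCOUNT_WIDTH))  /* checked against the real width */
        return 0;
    sp->count = nx;
    return 1;
}

int main(void)
{
    struct shadow_page sp = { .count = (1UL << REFCOUNT_WIDTH) - 1 };

    /* With a stale hard-coded check (nx >= 1U<<26) this call would
     * "succeed" and wrap sp.count to 0; with the shared constant it
     * correctly reports failure. */
    printf("ref at limit: %d\n", get_ref(&sp));
    return 0;
}

With the stale 1U<<26 bound, the 25-bit counter wraps to zero before the check can ever fire, which is precisely the overflow the advisory describes.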
diff --git a/main/xen/xsa250.patch b/main/xen/xsa250.patch
new file mode 100644
index 0000000000..26aeb33fed
--- /dev/null
+++ b/main/xen/xsa250.patch
@@ -0,0 +1,67 @@
+From: Jan Beulich
+Subject: x86/shadow: fix ref-counting error handling
+
+The old-Linux handling in shadow_set_l4e() mistakenly ORed together the
+results of sh_get_ref() and sh_pin(). As the latter failing is not a
+correctness problem, simply ignore its return value.
+
+In sh_set_toplevel_shadow() a failing sh_get_ref() must not be
+accompanied by installing the entry, despite the domain being crashed.
+
+This is XSA-250.
+
+Signed-off-by: Jan Beulich
+Reviewed-by: Tim Deegan
+
+--- a/xen/arch/x86/mm/shadow/multi.c
++++ b/xen/arch/x86/mm/shadow/multi.c
+@@ -923,7 +923,7 @@ static int shadow_set_l4e(struct domain
+                           shadow_l4e_t new_sl4e,
+                           mfn_t sl4mfn)
+ {
+-    int flags = 0, ok;
++    int flags = 0;
+     shadow_l4e_t old_sl4e;
+     paddr_t paddr;
+     ASSERT(sl4e != NULL);
+@@ -938,15 +938,16 @@ static int shadow_set_l4e(struct domain
+     {
+         /* About to install a new reference */
+         mfn_t sl3mfn = shadow_l4e_get_mfn(new_sl4e);
+-        ok = sh_get_ref(d, sl3mfn, paddr);
+-        /* Are we pinning l3 shadows to handle wierd linux behaviour? */
+-        if ( sh_type_is_pinnable(d, SH_type_l3_64_shadow) )
+-            ok |= sh_pin(d, sl3mfn);
+-        if ( !ok )
++
++        if ( !sh_get_ref(d, sl3mfn, paddr) )
+         {
+             domain_crash(d);
+             return SHADOW_SET_ERROR;
+         }
++
++        /* Are we pinning l3 shadows to handle weird Linux behaviour? */
++        if ( sh_type_is_pinnable(d, SH_type_l3_64_shadow) )
++            sh_pin(d, sl3mfn);
+     }
+ 
+     /* Write the new entry */
+@@ -3965,14 +3966,15 @@ sh_set_toplevel_shadow(struct vcpu *v,
+ 
+     /* Take a ref to this page: it will be released in sh_detach_old_tables()
+      * or the next call to set_toplevel_shadow() */
+-    if ( !sh_get_ref(d, smfn, 0) )
++    if ( sh_get_ref(d, smfn, 0) )
++        new_entry = pagetable_from_mfn(smfn);
++    else
+     {
+         SHADOW_ERROR("can't install %#lx as toplevel shadow\n", mfn_x(smfn));
+         domain_crash(d);
++        new_entry = pagetable_null();
+     }
+ 
+-    new_entry = pagetable_from_mfn(smfn);
+-
+ install_new_entry:
+     /* Done. Install it */
+     SHADOW_PRINTK("%u/%u [%u] gmfn %#"PRI_mfn" smfn %#"PRI_mfn"\n",
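The shadow_set_l4e() half of XSA-250 is a textbook error-handling hazard: OR-ing the result of a best-effort call (sh_pin()) into the result of a must-succeed call (sh_get_ref()) lets the former's success mask the latter's failure. A runnable C sketch with hypothetical stand-ins for the two calls:

#include <stdio.h>

/* Pretend the critical ref-acquisition fails... */
static int get_ref(void) { return 0; }
/* ...while the optional pinning succeeds. */
static int pin(void)     { return 1; }

int main(void)
{
    /* Buggy pattern: ok ends up 1 even though get_ref() failed,
     * so the failure path is never taken. */
    int ok = get_ref();
    ok |= pin();
    if (!ok)
        printf("buggy check: caught the failure\n");   /* not reached */

    /* Fixed pattern: test the critical call in isolation, then make
     * the best-effort call without letting it affect the outcome. */
    if (!get_ref()) {
        printf("fixed check: caught the failure\n");   /* reached */
        return 1;
    }
    pin();
    return 0;
}

The fix takes exactly this shape in the patch: sh_get_ref() is tested alone, and sh_pin()'s return value is simply ignored because its failure is not a correctness problem.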
diff --git a/main/xen/xsa251.patch b/main/xen/xsa251.patch
new file mode 100644
index 0000000000..582ef622eb
--- /dev/null
+++ b/main/xen/xsa251.patch
@@ -0,0 +1,21 @@
+From: Jan Beulich
+Subject: x86/paging: don't unconditionally BUG() on finding SHARED_M2P_ENTRY
+
+PV guests can fully control the values written into the P2M.
+
+This is XSA-251.
+
+Signed-off-by: Jan Beulich
+Reviewed-by: Andrew Cooper
+
+--- a/xen/arch/x86/mm/paging.c
++++ b/xen/arch/x86/mm/paging.c
+@@ -274,7 +274,7 @@ void paging_mark_pfn_dirty(struct domain
+         return;
+ 
+     /* Shared MFNs should NEVER be marked dirty */
+-    BUG_ON(SHARED_M2P(pfn_x(pfn)));
++    BUG_ON(paging_mode_translate(d) && SHARED_M2P(pfn_x(pfn)));
+ 
+     /*
+      * Values with the MSB set denote MFNs that aren't really part of the
-- 
2.15.0
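A closing observation on XSA-251: BUG()-style assertions must encode only invariants the hypervisor itself enforces, never states an untrusted guest can produce. SHARED_M2P_ENTRY stopped being such an invariant once PV guests could write it into the P2M, so the fix qualifies the assertion by paging_mode_translate(). A standalone C sketch of that before/after shape (hypothetical names, not the Xen code):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define SHARED_ENTRY 0xdeadbeefUL  /* sentinel a guest could store */

/* Stand-in for paging_mode_translate(d): true when the hypervisor,
 * not the guest, controls the table entries. */
static bool table_is_hypervisor_controlled(void)
{
    return false;  /* PV-like case: the guest writes the entries */
}

static void mark_dirty(unsigned long entry)
{
    /* Buggy shape -- crashes on a value the guest can freely write:
     *   if (entry == SHARED_ENTRY) abort();
     *
     * Fixed shape: only treat the sentinel as an impossible state
     * when the hypervisor actually owns the table. */
    if (table_is_hypervisor_controlled() && entry == SHARED_ENTRY)
        abort();

    printf("entry %#lx handled without crashing\n", entry);
}

int main(void)
{
    mark_dirty(SHARED_ENTRY);  /* guest-chosen value; must not crash */
    return 0;
}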