X-Original-To: alpine-devel@lists.alpinelinux.org
Delivered-To: alpine-devel@mail.alpinelinux.org
Received: from SMTP.EU.CITRIX.COM (smtp.eu.citrix.com [46.33.159.39])
	(using TLSv1 with cipher RC4-SHA (128/128 bits))
	(No client certificate requested)
	by mail.alpinelinux.org (Postfix) with ESMTPS id 254CEDC0164
	for ; Mon, 3 Dec 2012 18:51:57 +0000 (UTC)
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="16131334"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5; 03 Dec 2012 18:51:55 +0000
Received: from mac.citrite.net (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	18:51:55 +0000
From: Roger Pau Monne
To: 
CC: Roger Pau Monne
Subject: [alpine-devel] [PATCH] xen: security fixes
Date: Mon, 3 Dec 2012 19:51:49 +0100
Message-ID: <1354560709-99004-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
X-Mailinglist: alpine-devel
Precedence: list
List-Id: Alpine Development
List-Unsubscribe: 
List-Post: 
List-Help: 
List-Subscribe: 
MIME-Version: 1.0
Content-Type: text/plain

This covers:

XSA-26 (CVE-2012-5510)
XSA-27 (CVE-2012-5511)
XSA-29 (CVE-2012-5513)
XSA-30 (CVE-2012-5514)
XSA-31 (CVE-2012-5515)
XSA-32 (CVE-2012-5525)
---
 main/xen/APKBUILD                 |   14 ++++-
 main/xen/xsa26-4.2.patch          |  105 ++++++++++++++++++++++++++++
 main/xen/xsa27-4.2.patch          |  136 +++++++++++++++++++++++++++++++++++++
 main/xen/xsa29-4.2-unstable.patch |   49 +++++++++++++
 main/xen/xsa30-4.2.patch          |   56 +++++++++++++++
 main/xen/xsa31-4.2-unstable.patch |   50 ++++++++++++++
 main/xen/xsa32-4.2.patch          |   22 ++++++
 7 files changed, 431 insertions(+), 1 deletions(-)
 create mode 100644 main/xen/xsa26-4.2.patch
 create mode 100644 main/xen/xsa27-4.2.patch
 create mode 100644 main/xen/xsa29-4.2-unstable.patch
 create mode 100644 main/xen/xsa30-4.2.patch
 create mode 100644 main/xen/xsa31-4.2-unstable.patch
 create mode 100644 main/xen/xsa32-4.2.patch

diff --git a/main/xen/APKBUILD b/main/xen/APKBUILD
index fcb9306..e9503e8 100644
--- a/main/xen/APKBUILD
+++ b/main/xen/APKBUILD
@@ -3,7 +3,7 @@
 # Maintainer: William Pitcock 
 pkgname=xen
 pkgver=4.2.0
-pkgrel=6
+pkgrel=7
 pkgdesc="Xen hypervisor"
 url="http://www.xen.org/"
 arch="x86 x86_64"
@@ -24,6 +24,12 @@ source="http://bits.xensource.com/oss-xen/release/$pkgver/$pkgname-$pkgver.tar.g
 	xsa23-4.2-unstable.patch
 	xsa24.patch
 	xsa25-4.2.patch
+	xsa26-4.2.patch
+	xsa27-4.2.patch
+	xsa29-4.2-unstable.patch
+	xsa30-4.2.patch
+	xsa31-4.2-unstable.patch
+	xsa32-4.2.patch
 
 	xenstored.initd
 	xenstored.confd
@@ -143,6 +149,12 @@ fb7e76f00c2a4e63b408cb67df7d1a7b  xsa20.patch
 9151e7c648b12f518826ad0f0a67da42  xsa23-4.2-unstable.patch
 9bd8b30094f8eb2408846c1b6ed0cad6  xsa24.patch
 9fc7097ed2e5e756c4ae91145c143433  xsa25-4.2.patch
+281ad5fefa8856a5b431a7830be6c370  xsa26-4.2.patch
+d8cb820b85f86caa58ce1cc215aac069  xsa27-4.2.patch
+405531d7e434be9bc663c601d4dc67a4  xsa29-4.2-unstable.patch
+23f5ca5789f5358b8d2f8ce998db5ed6  xsa30-4.2.patch
+78fa8ac0ac907dd3ae7ef02bea623bb5  xsa31-4.2-unstable.patch
+2bd8f676273e644910e6a907372dfa31  xsa32-4.2.patch
 95d8af17bf844d41a015ff32aae51ba1  xenstored.initd
 b017ccdd5e1c27bbf1513e3569d4ff07  xenstored.confd
 ed262f15fb880badb53575539468646c  xenconsoled.initd
diff --git a/main/xen/xsa26-4.2.patch b/main/xen/xsa26-4.2.patch
new file mode 100644
index 0000000..44b8f34
--- /dev/null
+++ b/main/xen/xsa26-4.2.patch
@@ -0,0 +1,105 @@
+gnttab: fix releasing of memory upon switches between versions
+
+gnttab_unpopulate_status_frames() incompletely freed the pages
+previously used as status frame in that they did not get removed from
+the domain's xenpage_list, thus causing subsequent list corruption
+when those pages did get allocated again for the same or another purpose.
+
+Similarly, grant_table_create() and gnttab_grow_table() both improperly
+clean up in the event of an error - pages already shared with the guest
+can't be freed by just passing them to free_xenheap_page(). Fix this by
+sharing the pages only after all allocations succeeded.
+
+This is CVE-2012-5510 / XSA-26.
+
+Signed-off-by: Jan Beulich
+Acked-by: Ian Campbell
+
+diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
+index c01ad00..6fb2be9 100644
+--- a/xen/common/grant_table.c
++++ b/xen/common/grant_table.c
+@@ -1173,12 +1173,13 @@ fault:
+ }
+ 
+ static int
+-gnttab_populate_status_frames(struct domain *d, struct grant_table *gt)
++gnttab_populate_status_frames(struct domain *d, struct grant_table *gt,
++                              unsigned int req_nr_frames)
+ {
+     unsigned i;
+     unsigned req_status_frames;
+ 
+-    req_status_frames = grant_to_status_frames(gt->nr_grant_frames);
++    req_status_frames = grant_to_status_frames(req_nr_frames);
+     for ( i = nr_status_frames(gt); i < req_status_frames; i++ )
+     {
+         if ( (gt->status[i] = alloc_xenheap_page()) == NULL )
+@@ -1209,7 +1210,12 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
+ 
+     for ( i = 0; i < nr_status_frames(gt); i++ )
+     {
+-        page_set_owner(virt_to_page(gt->status[i]), dom_xen);
++        struct page_info *pg = virt_to_page(gt->status[i]);
++
++        BUG_ON(page_get_owner(pg) != d);
++        if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) )
++            put_page(pg);
++        BUG_ON(pg->count_info & ~PGC_xen_heap);
+         free_xenheap_page(gt->status[i]);
+         gt->status[i] = NULL;
+     }
+@@ -1247,19 +1253,18 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
+         clear_page(gt->shared_raw[i]);
+     }
+ 
+-    /* Share the new shared frames with the recipient domain */
+-    for ( i = nr_grant_frames(gt); i < req_nr_frames; i++ )
+-        gnttab_create_shared_page(d, gt, i);
+-
+-    gt->nr_grant_frames = req_nr_frames;
+-
+     /* Status pages - version 2 */
+     if (gt->gt_version > 1)
+     {
+-        if ( gnttab_populate_status_frames(d, gt) )
++        if ( gnttab_populate_status_frames(d, gt, req_nr_frames) )
+             goto shared_alloc_failed;
+     }
+ 
++    /* Share the new shared frames with the recipient domain */
++    for ( i = nr_grant_frames(gt); i < req_nr_frames; i++ )
++        gnttab_create_shared_page(d, gt, i);
++    gt->nr_grant_frames = req_nr_frames;
++
+     return 1;
+ 
+ shared_alloc_failed:
+@@ -2157,7 +2162,7 @@ gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+ 
+     if ( op.version == 2 && gt->gt_version < 2 )
+     {
+-        res = gnttab_populate_status_frames(d, gt);
++        res = gnttab_populate_status_frames(d, gt, nr_grant_frames(gt));
+         if ( res < 0)
+             goto out_unlock;
+     }
+@@ -2600,14 +2605,15 @@ grant_table_create(
+         clear_page(t->shared_raw[i]);
+     }
+ 
+-    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
+-        gnttab_create_shared_page(d, t, i);
+-
+     /* Status pages for grant table - for version 2 */
+     t->status = xzalloc_array(grant_status_t *,
+                               grant_to_status_frames(max_nr_grant_frames));
+     if ( t->status == NULL )
+         goto no_mem_4;
++
++    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
++        gnttab_create_shared_page(d, t, i);
++
+     t->nr_status_frames = 0;
+ 
+     /* Okay, install the structure. */
diff --git a/main/xen/xsa27-4.2.patch b/main/xen/xsa27-4.2.patch
new file mode 100644
index 0000000..62a8d76
--- /dev/null
+++ b/main/xen/xsa27-4.2.patch
@@ -0,0 +1,136 @@
+hvm: Limit the size of large HVM op batches
+
+Doing large p2m updates for HVMOP_track_dirty_vram without preemption
+ties up the physical processor. Integrating preemption into the p2m
+updates is hard so simply limit to 1GB which is sufficient for a 15000
+* 15000 * 32bpp framebuffer.
+
+For HVMOP_modified_memory and HVMOP_set_mem_type preemptible add the
+necessary machinery to handle preemption.
+
+This is CVE-2012-5511 / XSA-27.
+
+Signed-off-by: Tim Deegan
+Signed-off-by: Ian Campbell
+Acked-by: Ian Jackson
+
+v2: Provide definition of GB to fix x86-32 compile.
+
+Signed-off-by: Jan Beulich
+Acked-by: Ian Jackson
+
+
+diff -r 7c4d806b3753 xen/arch/x86/hvm/hvm.c
+--- a/xen/arch/x86/hvm/hvm.c	Fri Nov 16 15:56:14 2012 +0000
++++ b/xen/arch/x86/hvm/hvm.c	Mon Nov 19 14:42:10 2012 +0000
+@@ -3969,6 +3969,9 @@ long do_hvm_op(unsigned long op, XEN_GUE
+         if ( !is_hvm_domain(d) )
+             goto param_fail2;
+ 
++        if ( a.nr > GB(1) >> PAGE_SHIFT )
++            goto param_fail2;
++
+         rc = xsm_hvm_param(d, op);
+         if ( rc )
+             goto param_fail2;
+@@ -3995,7 +3998,6 @@ long do_hvm_op(unsigned long op, XEN_GUE
+     {
+         struct xen_hvm_modified_memory a;
+         struct domain *d;
+-        unsigned long pfn;
+ 
+         if ( copy_from_guest(&a, arg, 1) )
+             return -EFAULT;
+@@ -4022,9 +4024,11 @@ long do_hvm_op(unsigned long op, XEN_GUE
+         if ( !paging_mode_log_dirty(d) )
+             goto param_fail3;
+ 
+-        for ( pfn = a.first_pfn; pfn < a.first_pfn + a.nr; pfn++ )
++        while ( a.nr > 0 )
+         {
++            unsigned long pfn = a.first_pfn;
+             struct page_info *page;
++
+             page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
+             if ( page )
+             {
+@@ -4034,6 +4038,19 @@ long do_hvm_op(unsigned long op, XEN_GUE
+                 sh_remove_shadows(d->vcpu[0], _mfn(page_to_mfn(page)), 1, 0);
+                 put_page(page);
+             }
++
++            a.first_pfn++;
++            a.nr--;
++
++            /* Check for continuation if it's not the last interation */
++            if ( a.nr > 0 && hypercall_preempt_check() )
++            {
++                if ( copy_to_guest(arg, &a, 1) )
++                    rc = -EFAULT;
++                else
++                    rc = -EAGAIN;
++                break;
++            }
+         }
+ 
+     param_fail3:
+@@ -4089,7 +4106,6 @@ long do_hvm_op(unsigned long op, XEN_GUE
+     {
+         struct xen_hvm_set_mem_type a;
+         struct domain *d;
+-        unsigned long pfn;
+ 
+         /* Interface types to internal p2m types */
+         p2m_type_t memtype[] = {
+@@ -4122,8 +4138,9 @@ long do_hvm_op(unsigned long op, XEN_GUE
+         if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
+             goto param_fail4;
+ 
+-        for ( pfn = a.first_pfn; pfn < a.first_pfn + a.nr; pfn++ )
++        while ( a.nr )
+         {
++            unsigned long pfn = a.first_pfn;
+             p2m_type_t t;
+             p2m_type_t nt;
+             mfn_t mfn;
+@@ -4163,6 +4180,19 @@ long do_hvm_op(unsigned long op, XEN_GUE
+                 }
+             }
+             put_gfn(d, pfn);
++
++            a.first_pfn++;
++            a.nr--;
++
++            /* Check for continuation if it's not the last interation */
++            if ( a.nr > 0 && hypercall_preempt_check() )
++            {
++                if ( copy_to_guest(arg, &a, 1) )
++                    rc = -EFAULT;
++                else
++                    rc = -EAGAIN;
++                goto param_fail4;
++            }
+         }
+ 
+         rc = 0;
+diff -r 7c4d806b3753 xen/include/asm-x86/config.h
+--- a/xen/include/asm-x86/config.h	Fri Nov 16 15:56:14 2012 +0000
++++ b/xen/include/asm-x86/config.h	Mon Nov 19 14:42:10 2012 +0000
+@@ -119,6 +119,9 @@ extern char wakeup_start[];
+ extern unsigned int video_mode, video_flags;
+ extern unsigned short boot_edid_caps;
+ extern unsigned char boot_edid_info[128];
++
++#define GB(_gb) (_gb ## UL << 30)
++
+ #endif
+ 
+ #define asmlinkage
+@@ -134,7 +137,6 @@ extern unsigned char boot_edid_info[128]
+ #define PML4_ADDR(_slot)                                            \
+     ((((_slot ## UL) >> 8) * 0xffff000000000000UL) |                \
+      (_slot ## UL << PML4_ENTRY_BITS))
+-#define GB(_gb) (_gb ## UL << 30)
+ #else
+ #define PML4_ENTRY_BYTES        (1 << PML4_ENTRY_BITS)
+ #define PML4_ADDR(_slot)                                            \
diff --git a/main/xen/xsa29-4.2-unstable.patch b/main/xen/xsa29-4.2-unstable.patch
new file mode 100644
index 0000000..ec3111f
--- /dev/null
+++ b/main/xen/xsa29-4.2-unstable.patch
@@ -0,0 +1,49 @@
+xen: add missing guest address range checks to XENMEM_exchange handlers
+
+Ever since its existence (3.0.3 iirc) the handler for this has been
+using non address range checking guest memory accessors (i.e.
+the ones prefixed with two underscores) without first range
+checking the accessed space (via guest_handle_okay()), allowing
+a guest to access and overwrite hypervisor memory.
+
+This is XSA-29 / CVE-2012-5513.
+
+Signed-off-by: Jan Beulich
+Acked-by: Ian Campbell
+Acked-by: Ian Jackson
+
+diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
+index 996151c..a49f51b 100644
+--- a/xen/common/compat/memory.c
++++ b/xen/common/compat/memory.c
+@@ -115,6 +115,12 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
+                  (cmp.xchg.out.nr_extents << cmp.xchg.out.extent_order)) )
+                 return -EINVAL;
+ 
++            if ( !compat_handle_okay(cmp.xchg.in.extent_start,
++                                     cmp.xchg.in.nr_extents) ||
++                 !compat_handle_okay(cmp.xchg.out.extent_start,
++                                     cmp.xchg.out.nr_extents) )
++                return -EFAULT;
++
+             start_extent = cmp.xchg.nr_exchanged;
+             end_extent = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.xchg)) /
+                          (((1U << ABS(order_delta)) + 1) *
+diff --git a/xen/common/memory.c b/xen/common/memory.c
+index 83e2666..bdb6ed8 100644
+--- a/xen/common/memory.c
++++ b/xen/common/memory.c
+@@ -308,6 +308,13 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
+         goto fail_early;
+     }
+ 
++    if ( !guest_handle_okay(exch.in.extent_start, exch.in.nr_extents) ||
++         !guest_handle_okay(exch.out.extent_start, exch.out.nr_extents) )
++    {
++        rc = -EFAULT;
++        goto fail_early;
++    }
++
+     /* Only privileged guests can allocate multi-page contiguous extents. */
+     if ( !multipage_allocation_permitted(current->domain,
+                                          exch.in.extent_order) ||
diff --git a/main/xen/xsa30-4.2.patch b/main/xen/xsa30-4.2.patch
new file mode 100644
index 0000000..c46571d
--- /dev/null
+++ b/main/xen/xsa30-4.2.patch
@@ -0,0 +1,56 @@
+xen: fix error handling of guest_physmap_mark_populate_on_demand()
+
+The only user of the "out" label bypasses a necessary unlock, thus
+enabling the caller to lock up Xen.
+
+Also, the function was never meant to be called by a guest for itself,
+so rather than inspecting the code paths in depth for potential other
+problems this might cause, and adjusting e.g. the non-guest printk()
+in the above error path, just disallow the guest access to it.
+
+Finally, the printk() (considering its potential of spamming the log,
+the more that it's not using XENLOG_GUEST), is being converted to
+P2M_DEBUG(), as debugging is what it apparently was added for in the
+first place.
+
+This is XSA-30 / CVE-2012-5514.
+
+Signed-off-by: Jan Beulich
+Acked-by: Ian Campbell
+Acked-by: George Dunlap
+Acked-by: Ian Jackson
+
+diff -r 7c4d806b3753 xen/arch/x86/mm/p2m-pod.c
+--- a/xen/arch/x86/mm/p2m-pod.c	Fri Nov 16 15:56:14 2012 +0000
++++ b/xen/arch/x86/mm/p2m-pod.c	Thu Nov 22 17:02:32 2012 +0000
+@@ -1117,6 +1117,9 @@ guest_physmap_mark_populate_on_demand(st
+     mfn_t omfn;
+     int rc = 0;
+ 
++    if ( !IS_PRIV_FOR(current->domain, d) )
++        return -EPERM;
++
+     if ( !paging_mode_translate(d) )
+         return -EINVAL;
+ 
+@@ -1135,8 +1138,7 @@ guest_physmap_mark_populate_on_demand(st
+         omfn = p2m->get_entry(p2m, gfn + i, &ot, &a, 0, NULL);
+         if ( p2m_is_ram(ot) )
+         {
+-            printk("%s: gfn_to_mfn returned type %d!\n",
+-                   __func__, ot);
++            P2M_DEBUG("gfn_to_mfn returned type %d!\n", ot);
+             rc = -EBUSY;
+             goto out;
+         }
+@@ -1160,9 +1162,9 @@ guest_physmap_mark_populate_on_demand(st
+         pod_unlock(p2m);
+     }
+ 
++out:
+     gfn_unlock(p2m, gfn, order);
+ 
+-out:
+     return rc;
+ }
+ 
diff --git a/main/xen/xsa31-4.2-unstable.patch b/main/xen/xsa31-4.2-unstable.patch
new file mode 100644
index 0000000..2229c4c
--- /dev/null
+++ b/main/xen/xsa31-4.2-unstable.patch
@@ -0,0 +1,50 @@
+memop: limit guest specified extent order
+
+Allowing unbounded order values here causes almost unbounded loops
+and/or partially incomplete requests, particularly in PoD code.
+
+The added range checks in populate_physmap(), decrease_reservation(),
+and the "in" one in memory_exchange() architecturally all could use
+PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
+MAX_ORDER.
+
+This is XSA-31 / CVE-2012-5515.
+
+Signed-off-by: Jan Beulich
+Acked-by: Tim Deegan
+Acked-by: Ian Jackson
+
+diff --git a/xen/common/memory.c b/xen/common/memory.c
+index 83e2666..2e56d46 100644
+--- a/xen/common/memory.c
++++ b/xen/common/memory.c
+@@ -115,7 +115,8 @@ static void populate_physmap(struct memop_args *a)
+ 
+         if ( a->memflags & MEMF_populate_on_demand )
+         {
+-            if ( guest_physmap_mark_populate_on_demand(d, gpfn,
++            if ( a->extent_order > MAX_ORDER ||
++                 guest_physmap_mark_populate_on_demand(d, gpfn,
+                                                        a->extent_order) < 0 )
+                 goto out;
+         }
+@@ -235,7 +236,8 @@ static void decrease_reservation(struct memop_args *a)
+     xen_pfn_t gmfn;
+ 
+     if ( !guest_handle_subrange_okay(a->extent_list, a->nr_done,
+-                                     a->nr_extents-1) )
++                                     a->nr_extents-1) ||
++         a->extent_order > MAX_ORDER )
+         return;
+ 
+     for ( i = a->nr_done; i < a->nr_extents; i++ )
+@@ -297,6 +299,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
+     if ( (exch.nr_exchanged > exch.in.nr_extents) ||
+          /* Input and output domain identifiers match? */
+          (exch.in.domid != exch.out.domid) ||
++         /* Extent orders are sensible? */
++         (exch.in.extent_order > MAX_ORDER) ||
++         (exch.out.extent_order > MAX_ORDER) ||
+          /* Sizes of input and output lists do not overflow a long? */
+          ((~0UL >> exch.in.extent_order) < exch.in.nr_extents) ||
+          ((~0UL >> exch.out.extent_order) < exch.out.nr_extents) ||
diff --git a/main/xen/xsa32-4.2.patch b/main/xen/xsa32-4.2.patch
new file mode 100644
index 0000000..9800609
--- /dev/null
+++ b/main/xen/xsa32-4.2.patch
@@ -0,0 +1,22 @@
+x86: get_page_from_gfn() must return NULL for invalid GFNs
+
+... also in the non-translated case.
+
+This is XSA-32 / CVE-2012-xxxx.
+
+Signed-off-by: Jan Beulich
+Acked-by: Tim Deegan
+
+diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
+index 7a7c7eb..d5665b8 100644
+--- a/xen/include/asm-x86/p2m.h
++++ b/xen/include/asm-x86/p2m.h
+@@ -400,7 +400,7 @@ static inline struct page_info *get_page_from_gfn(
+     if (t)
+         *t = p2m_ram_rw;
+     page = __mfn_to_page(gfn);
+-    return get_page(page, d) ? page : NULL;
++    return mfn_valid(gfn) && get_page(page, d) ? page : NULL;
+ }
+ 
+ 
-- 
1.7.7.5 (Apple Git-26)


---
Unsubscribe:  alpine-devel+unsubscribe@lists.alpinelinux.org
Help:         alpine-devel+help@lists.alpinelinux.org
---