From: Roger Pau Monne
To: alpine-devel@lists.alpinelinux.org
CC: Roger Pau Monne
Subject: [alpine-devel] [PATCH] xen: security fixes
Date: Mon, 10 Dec 2012 12:20:12 +0100
Message-ID: <1355138412-18333-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
X-Mailinglist: alpine-devel
List-Id: Alpine Development

This covers XSA-12 through XSA-31, to be applied against the
2.4-stable branch.

fixes #1407
---
 main/xen/APKBUILD        |   46 ++++-
 main/xen/xsa-12.patch    |   32 +++
 main/xen/xsa-13.patch    |   41 ++++
 main/xen/xsa-14.patch    |   32 +++
 main/xen/xsa-15-1.patch  |   37 +++
 main/xen/xsa-15-2.patch  |   54 +++++
 main/xen/xsa-15-3.patch  |   32 +++
 main/xen/xsa-15-4.patch  |   51 +++++
 main/xen/xsa-15-5.patch  |  563 ++++++++++++++++++++++++++++++++++++++++++++++
 main/xen/xsa-16.patch    |   42 ++++
 main/xen/xsa-17.patch    |  122 ++++++++++
 main/xen/xsa-20.patch    |   56 +++++
 main/xen/xsa-21.patch    |   42 ++++
 main/xen/xsa-22.patch    |   51 +++++
 main/xen/xsa-23.patch    |   44 ++++
 main/xen/xsa-24.patch    |   41 ++++
 main/xen/xsa-25.patch    |  474 ++++++++++++++++++++++++++++++++++++++
 main/xen/xsa26-4.1.patch |  107 +++++++
 main/xen/xsa27-4.1.patch |  168 ++++++++++++++
 main/xen/xsa28-4.1.patch |   36 +++
 main/xen/xsa29-4.1.patch |   49 ++++
 main/xen/xsa30-4.1.patch |   57 +++++
 main/xen/xsa31-4.1.patch |   50 ++++
 23 files changed, 2226 insertions(+), 1 deletions(-)
 create mode 100644 main/xen/xsa-12.patch
 create mode 100644 main/xen/xsa-13.patch
 create mode 100644 main/xen/xsa-14.patch
 create mode 100644 main/xen/xsa-15-1.patch
 create mode 100644 main/xen/xsa-15-2.patch
 create mode 100644 main/xen/xsa-15-3.patch
 create mode 100644 main/xen/xsa-15-4.patch
 create mode 100644 main/xen/xsa-15-5.patch
 create mode 100644 main/xen/xsa-16.patch
 create mode 100644 main/xen/xsa-17.patch
 create mode 100644 main/xen/xsa-20.patch
 create mode 100644 main/xen/xsa-21.patch
 create mode 100644 main/xen/xsa-22.patch
 create mode 100644 main/xen/xsa-23.patch
 create mode 100644 main/xen/xsa-24.patch
 create mode 100644 main/xen/xsa-25.patch
 create mode 100644 main/xen/xsa26-4.1.patch
 create mode 100644 main/xen/xsa27-4.1.patch
 create mode 100644 main/xen/xsa28-4.1.patch
 create mode 100644 main/xen/xsa29-4.1.patch
 create mode 100644 main/xen/xsa30-4.1.patch
 create mode 100644 main/xen/xsa31-4.1.patch

diff --git a/main/xen/APKBUILD b/main/xen/APKBUILD
index 007f300..2d8c316 100644
--- a/main/xen/APKBUILD
+++ b/main/xen/APKBUILD
@@ -3,7 +3,7 @@
 # Maintainer: William Pitcock
 pkgname=xen
 pkgver=4.1.3
-pkgrel=0
+pkgrel=1
 pkgdesc="Xen hypervisor"
 url="http://www.xen.org/"
 arch="x86 x86_64"
@@ -22,6 +22,28 @@ source="http://bits.xensource.com/oss-xen/release/$pkgver/$pkgname-$pkgver.tar.gz
 	define_fsimage_dir.patch
 	librt.patch
 	busybox-sed.patch
+	xsa-12.patch
+	xsa-13.patch
+	xsa-14.patch
+	xsa-15-1.patch
+	xsa-15-2.patch
+	xsa-15-3.patch
+	xsa-15-4.patch
+	xsa-15-5.patch
+	xsa-16.patch
+	xsa-17.patch
+	xsa-20.patch
+	xsa-21.patch
+	xsa-22.patch
+	xsa-23.patch
+	xsa-24.patch
+	xsa-25.patch
+	xsa26-4.1.patch
+	xsa27-4.1.patch
+	xsa28-4.1.patch
+	xsa29-4.1.patch
+	xsa30-4.1.patch
+	xsa31-4.1.patch
 	xenstored.initd
 	xenstored.confd
@@ -121,6 +143,28 @@ b973dc1ffcc6872e222b36f3b7b4836b  fix_bswap_blktap2.patch
 0bb8a435020a5a49b38b1a447fb69977  define_fsimage_dir.patch
 fa06495a175571f4aa3b6cb88937953e  librt.patch
 1bea3543ddc712330527b62fd9ff6520  busybox-sed.patch
+8a7bf334786c3fcafa9f9ae407c56a86  xsa-12.patch
+73ba610d9098022f4fc3fb8b10111836  xsa-13.patch
+e7b2e777128fb45386642be324fbd85e  xsa-14.patch
+e38a8799c358c5959bc0146f7fb7bd2f  xsa-15-1.patch
+8097d6451ab6d33f8374df316562a961  xsa-15-2.patch
+75a12e9f62188677e5be6b33990b8b3c  xsa-15-3.patch
+060667428ae4f12ab93a218238d3ff14  xsa-15-4.patch
+ff8c8c952808a5d1c8258013608b8f19  xsa-15-5.patch
+bbd0801bd2f8b501b366368be1520090  xsa-16.patch
+a9d0b6ea849e40be80bed23b93138bba  xsa-17.patch
+c7ba117dba1ee05c8944cc62c4dd1ce1  xsa-20.patch
+1324c3a8999ca113ca0b60daad361bb0  xsa-21.patch
+24e3db4c01c2ad8d17539b7a20d057d2  xsa-22.patch
+4d16a72c8c5e28ab439227e7e96d5650  xsa-23.patch
+8d58e94aa0d3ed43178fdcbd1af7d9fc  xsa-24.patch
+e5f0a25cc5cb7245be0be5c9d19adef2  xsa-25.patch
+0ef1933756429c5204d4cb59e20a609a  xsa26-4.1.patch
+075932924d01e4fc231bf37a14241ccb  xsa27-4.1.patch
+0d59195f99e5f871cd4606cfb9c2ffdf  xsa28-4.1.patch
+8e69a2b2819a26504ca12353dc6ce829  xsa29-4.1.patch
+4acab55051e542761d9ac5f782964dcb  xsa30-4.1.patch
+2b9c5737b2910dc077c6e654b973947f  xsa31-4.1.patch
 6e5739dad7e2bd1b625e55ddc6c782b7  xenstored.initd
 b017ccdd5e1c27bbf1513e3569d4ff07  xenstored.confd
 ed262f15fb880badb53575539468646c  xenconsoled.initd
diff --git a/main/xen/xsa-12.patch b/main/xen/xsa-12.patch
new file mode 100644
index 0000000..8f7295a
--- /dev/null
+++ b/main/xen/xsa-12.patch
@@ -0,0 +1,32 @@
+From ab4f401a1a2149dff14667a383f2e4a1757682ef Mon Sep 17 00:00:00 2001
+From: Ian Jackson
+Date: Wed, 5 Sep 2012 12:27:54 +0100
+Subject: [PATCH] xen: prevent a 64 bit guest setting reserved bits in DR7
+
+The upper 32 bits of this register are reserved and should be written as
+zero.
+
+This is XSA-12 / CVE-2012-3494
+
+Signed-off-by: Jan Beulich
+Reviewed-by: Ian Campbell
+---
+ xen/include/asm-x86/debugreg.h |    2 +-
+ 1 files changed, 1 insertions(+), 1 deletions(-)
+
+diff --git a/xen/include/asm-x86/debugreg.h b/xen/include/asm-x86/debugreg.h
+index 9b7e971..24021b8 100644
+--- a/xen/include/asm-x86/debugreg.h
++++ b/xen/include/asm-x86/debugreg.h
+@@ -58,7 +58,7 @@
+    We can slow the instruction pipeline for instructions coming via the
+    gdt or the ldt if we want to.  I am not sure why this is an advantage */
+
+-#define DR_CONTROL_RESERVED_ZERO (0x0000d800ul) /* Reserved, read as zero */
++#define DR_CONTROL_RESERVED_ZERO (~0xffff27fful) /* Reserved, read as zero */
+ #define DR_CONTROL_RESERVED_ONE  (0x00000400ul) /* Reserved, read as one */
+ #define DR_LOCAL_EXACT_ENABLE    (0x00000100ul) /* Local exact enable */
+ #define DR_GLOBAL_EXACT_ENABLE   (0x00000200ul) /* Global exact enable */
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-13.patch b/main/xen/xsa-13.patch
new file mode 100644
index 0000000..1a28194
--- /dev/null
+++ b/main/xen/xsa-13.patch
@@ -0,0 +1,41 @@
+From 06f0a9bfc333916cece9516a7ff1e35289e9f16f Mon Sep 17 00:00:00 2001
+From: Ian Jackson
+Date: Wed, 5 Sep 2012 12:28:17 +0100
+Subject: [PATCH] xen: handle out-of-pirq condition correctly in
+ PHYSDEVOP_get_free_pirq
+
+This is XSA-13 / CVE-2012-3495
+
+Signed-off-by: Ian Campbell
+Signed-off-by: Jan Beulich
+---
+ xen/arch/x86/physdev.c |   11 ++++++++---
+ 1 files changed, 8 insertions(+), 3 deletions(-)
+
+diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
+index 8c24559..e037a94 100644
+--- a/xen/arch/x86/physdev.c
++++ b/xen/arch/x86/physdev.c
+@@ -587,11 +587,16 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+             break;
+
+         spin_lock(&d->event_lock);
+-        out.pirq = get_free_pirq(d, out.type, 0);
+-        d->arch.pirq_irq[out.pirq] = PIRQ_ALLOCATED;
++        ret = get_free_pirq(d, out.type, 0);
++        if ( ret >= 0 )
++            d->arch.pirq_irq[ret] = PIRQ_ALLOCATED;
+         spin_unlock(&d->event_lock);
+
+-        ret = copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
++        if ( ret >= 0 )
++        {
++            out.pirq = ret;
++            ret = copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
++        }
+
+         rcu_unlock_domain(d);
+         break;
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-14.patch b/main/xen/xsa-14.patch
new file mode 100644
index 0000000..6029ced
--- /dev/null
+++ b/main/xen/xsa-14.patch
@@ -0,0 +1,32 @@
+From 161e4805de91e10fbe30d91c1f1bbe967d4365d8 Mon Sep 17 00:00:00 2001
+From: Ian Jackson
+Date: Wed, 5 Sep 2012 12:29:05 +0100
+Subject: [PATCH] xen: Don't BUG_ON() PoD operations on a non-translated
+ guest.
+
+This is XSA-14 / CVE-2012-3496
+
+Signed-off-by: Tim Deegan
+Reviewed-by: Ian Campbell
+Tested-by: Ian Campbell
+---
+ xen/arch/x86/mm/p2m.c |    3 ++-
+ 1 files changed, 2 insertions(+), 1 deletions(-)
+
+diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
+index e095118..eda9c8f 100644
+--- a/xen/arch/x86/mm/p2m.c
++++ b/xen/arch/x86/mm/p2m.c
+@@ -2414,7 +2414,8 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
+     int pod_count = 0;
+     int rc = 0;
+
+-    BUG_ON(!paging_mode_translate(d));
++    if ( !paging_mode_translate(d) )
++        return -EINVAL;
+
+     rc = gfn_check_limit(d, gfn, order);
+     if ( rc != 0 )
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-15-1.patch b/main/xen/xsa-15-1.patch
new file mode 100644
index 0000000..4a68bc5
--- /dev/null
+++ b/main/xen/xsa-15-1.patch
@@ -0,0 +1,37 @@
+From 16bae237b197bc925a60c5c783004e609e9b43c1 Mon Sep 17 00:00:00 2001
+From: Ian Campbell
+Date: Tue, 25 Sep 2012 12:24:06 +0200
+Subject: [PATCH] tmem: only allow tmem control operations from privileged
+ domains
+
+This is part of XSA-15 / CVE-2012-3497.
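The diff below shows what went wrong: the -EPERM return for unprivileged
callers had been commented out (with a note that dom0 "sometimes fails
here"), so every domain could reach the control subops. The fix makes the
entry point fail closed. As a minimal sketch of the pattern (illustrative
only; do_control_op() is a stand-in, only tmh_current_is_privileged() is
the real check):

    static int do_control_op(struct tmem_op *op)
    {
        /* Reject unprivileged callers before touching any subop;
         * never fall through on a failed privilege check. */
        if ( !tmh_current_is_privileged() )
            return -EPERM;
        /* ... dispatch TMEMC_* subops only after the check ... */
        return 0;
    }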
+
+Signed-off-by: Ian Campbell
+Acked-by: Dan Magenheimer
+Acked-by: Jan Beulich
+xen-unstable changeset: 25850:0dba5a888655
+xen-unstable date: Tue Sep 11 12:06:30 UTC 2012
+---
+ xen/common/tmem.c |    6 ++----
+ 1 files changed, 2 insertions(+), 4 deletions(-)
+
+diff --git a/xen/common/tmem.c b/xen/common/tmem.c
+index 1c155db..06c9e0e 100644
+--- a/xen/common/tmem.c
++++ b/xen/common/tmem.c
+@@ -2544,10 +2544,8 @@ static NOINLINE int do_tmem_control(struct tmem_op *op)
+     OID *oidp = (OID *)(&op->u.ctrl.oid[0]);
+
+     if (!tmh_current_is_privileged())
+-    {
+-        /* don't fail... mystery: sometimes dom0 fails here */
+-        /* return -EPERM; */
+-    }
++        return -EPERM;
++
+     switch(subop)
+     {
+     case TMEMC_THAW:
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-15-2.patch b/main/xen/xsa-15-2.patch
new file mode 100644
index 0000000..c099431
--- /dev/null
+++ b/main/xen/xsa-15-2.patch
@@ -0,0 +1,54 @@
+From afdc11f69ab71efa24145f9e747964066bcecefd Mon Sep 17 00:00:00 2001
+From: Ian Campbell
+Date: Tue, 25 Sep 2012 12:24:37 +0200
+Subject: [PATCH] tmem: consistently make pool_id a uint32_t
+
+Treating it as an int could allow a malicious guest to provide a
+negative pool_id, bypassing the MAX_POOLS_PER_DOMAIN limit check and
+allowing access to the negative offsets of the pool array.
+
+This is part of XSA-15 / CVE-2012-3497.
+
+Signed-off-by: Ian Campbell
+Acked-by: Dan Magenheimer
+Acked-by: Jan Beulich
+xen-unstable changeset: 25851:fcf567acc92a
+xen-unstable date: Tue Sep 11 12:06:43 UTC 2012
+---
+ xen/common/tmem.c |    6 +++---
+ 1 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/xen/common/tmem.c b/xen/common/tmem.c
+index 06c9e0e..63a9ed0 100644
+--- a/xen/common/tmem.c
++++ b/xen/common/tmem.c
+@@ -2420,7 +2420,7 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id,
+     return rc;
+ }
+
+-static NOINLINE int tmemc_save_get_next_page(int cli_id, int pool_id,
++static NOINLINE int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
+                       tmem_cli_va_t buf, uint32_t bufsize)
+ {
+     client_t *client = tmh_client_from_cli_id(cli_id);
+@@ -2512,7 +2512,7 @@ out:
+     return ret;
+ }
+
+-static int tmemc_restore_put_page(int cli_id, int pool_id, OID *oidp,
++static int tmemc_restore_put_page(int cli_id, uint32_t pool_id, OID *oidp,
+                       uint32_t index, tmem_cli_va_t buf, uint32_t bufsize)
+ {
+     client_t *client = tmh_client_from_cli_id(cli_id);
+@@ -2524,7 +2524,7 @@ static int tmemc_restore_put_page(int cli_id, int pool_id, OID *oidp,
+     return do_tmem_put(pool,oidp,index,0,0,0,bufsize,buf.p);
+ }
+
+-static int tmemc_restore_flush_page(int cli_id, int pool_id, OID *oidp,
++static int tmemc_restore_flush_page(int cli_id, uint32_t pool_id, OID *oidp,
+                       uint32_t index)
+ {
+     client_t *client = tmh_client_from_cli_id(cli_id);
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-15-3.patch b/main/xen/xsa-15-3.patch
new file mode 100644
index 0000000..7a4dec5
--- /dev/null
+++ b/main/xen/xsa-15-3.patch
@@ -0,0 +1,32 @@
+From c7cafe83e0bee450cb0bde6c4921d5819dc96542 Mon Sep 17 00:00:00 2001
+From: Ian Campbell
+Date: Tue, 25 Sep 2012 12:24:57 +0200
+Subject: [PATCH] tmem: check the pool_id is valid when destroying a tmem pool
+
+This is part of XSA-15 / CVE-2012-3497.
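Taken together, xsa-15-2 and xsa-15-3 enforce one rule: a guest-chosen pool
index must be held in an unsigned type and compared against
MAX_POOLS_PER_DOMAIN before it ever indexes client->pools, since a negative
int passes the upper-bound test and reads memory before the array. A
minimal sketch of the rule (the lookup_pool() helper is hypothetical, not
the actual tmem code):

    static pool_t *lookup_pool(client_t *client, uint32_t pool_id)
    {
        /* uint32_t: a "negative" id becomes a huge value and is
         * caught by the single bounds check below. */
        if ( pool_id >= MAX_POOLS_PER_DOMAIN )
            return NULL;
        return client->pools[pool_id];
    }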
+
+Signed-off-by: Ian Campbell
+Acked-by: Dan Magenheimer
+Acked-by: Jan Beulich
+xen-unstable changeset: 25852:d189d99ef00c
+xen-unstable date: Tue Sep 11 12:06:54 UTC 2012
+---
+ xen/common/tmem.c |    2 ++
+ 1 files changed, 2 insertions(+), 0 deletions(-)
+
+diff --git a/xen/common/tmem.c b/xen/common/tmem.c
+index 63a9ed0..6ad0670 100644
+--- a/xen/common/tmem.c
++++ b/xen/common/tmem.c
+@@ -1873,6 +1873,8 @@ static NOINLINE int do_tmem_destroy_pool(uint32_t pool_id)
+
+     if ( client->pools == NULL )
+         return 0;
++    if ( pool_id >= MAX_POOLS_PER_DOMAIN )
++        return 0;
+     if ( (pool = client->pools[pool_id]) == NULL )
+         return 0;
+     client->pools[pool_id] = NULL;
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-15-4.patch b/main/xen/xsa-15-4.patch
new file mode 100644
index 0000000..b9c6cfe
--- /dev/null
+++ b/main/xen/xsa-15-4.patch
@@ -0,0 +1,51 @@
+From e28173699e5651c323f66062e73cca29a65c097f Mon Sep 17 00:00:00 2001
+From: Ian Campbell
+Date: Tue, 25 Sep 2012 12:25:25 +0200
+Subject: [PATCH] tmem: check for a valid client ("domain") in the save subops
+
+This is part of XSA-15 / CVE-2012-3497.
+
+Signed-off-by: Ian Campbell
+Acked-by: Jan Beulich
+Acked-by: Dan Magenheimer
+xen-unstable changeset: 25853:f53c5aadbba9
+xen-unstable date: Tue Sep 11 12:17:27 UTC 2012
+---
+ xen/common/tmem.c |    8 ++++++++
+ 1 files changed, 8 insertions(+), 0 deletions(-)
+
+diff --git a/xen/common/tmem.c b/xen/common/tmem.c
+index 6ad0670..83e3a06 100644
+--- a/xen/common/tmem.c
++++ b/xen/common/tmem.c
+@@ -2382,12 +2382,18 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id,
+         rc = MAX_POOLS_PER_DOMAIN;
+         break;
+     case TMEMC_SAVE_GET_CLIENT_WEIGHT:
++        if ( client == NULL )
++            break;
+         rc = client->weight == -1 ? -2 : client->weight;
+         break;
+     case TMEMC_SAVE_GET_CLIENT_CAP:
++        if ( client == NULL )
++            break;
+         rc = client->cap == -1 ? -2 : client->cap;
+         break;
+     case TMEMC_SAVE_GET_CLIENT_FLAGS:
++        if ( client == NULL )
++            break;
+         rc = (client->compress ? TMEM_CLIENT_COMPRESS : 0 ) |
+              (client->was_frozen ? TMEM_CLIENT_FROZEN : 0 );
+         break;
+@@ -2411,6 +2417,8 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id,
+         *uuid = pool->uuid[1];
+         rc = 0;
+     case TMEMC_SAVE_END:
++        if ( client == NULL )
++            break;
+         client->live_migrating = 0;
+         if ( !list_empty(&client->persistent_invalidated_list) )
+             list_for_each_entry_safe(pgp,pgp2,
+--
+1.7.7.5 (Apple Git-26)
+
diff --git a/main/xen/xsa-15-5.patch b/main/xen/xsa-15-5.patch
new file mode 100644
index 0000000..d45e96d
--- /dev/null
+++ b/main/xen/xsa-15-5.patch
@@ -0,0 +1,563 @@
+From 1266749162fd232c79898706f01082847a0e618e Mon Sep 17 00:00:00 2001
+From: Jan Beulich
+Date: Tue, 25 Sep 2012 12:26:06 +0200
+Subject: [PATCH] tmem: don't access guest memory without using the accessors
+ intended for this
+
+This is not permitted, not even for buffers coming from Dom0 (and it
+would also break the moment Dom0 runs in HVM mode). An implication from
+the changes here is that tmh_copy_page() can't be used anymore for
+control operations calling tmh_copy_{from,to}_client() (as those pass
+the buffer by virtual address rather than MFN).
+
+Note that tmemc_save_get_next_page() previously didn't set the returned
+handle's pool_id field, while the new code does. It needs to be
+confirmed that this is not a problem (otherwise the copy-out operation
+will require further tmh_...() abstractions to be added).
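The rule this patch enforces is that hypervisor code must never dereference
a guest-supplied buffer address as if it were a hypervisor pointer; all
transfers have to go through the copy_{from,to}_guest accessors, which
validate the address and fail cleanly. A minimal sketch of the difference
(an illustrative fragment, not the patched tmem code):

    /* Wrong: treats a guest virtual address as directly mappable. */
    memcpy(dst, (void *)guest_ptr, len);

    /* Right: copy through the guest-handle accessor and handle
     * failure instead of faulting inside the hypervisor. */
    if ( copy_from_guest(dst, guest_handle, len) )
        return -EFAULT;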
+ +Further note that the patch removes (rather than adjusts) an invalid +call to unmap_domain_page() (no matching map_domain_page()) from +tmh_compress_from_client() and adds a missing one to an error return +path in tmh_copy_from_client(). + +Finally note that the patch adds a previously missing return statement +to cli_get_page() (without which that function could de-reference a +NULL pointer, triggerable from guest mode). + +This is part of XSA-15 / CVE-2012-3497. + +Signed-off-by: Jan Beulich +Acked-by: Dan Magenheimer +xen-unstable changeset: 25854:ccd60ed6c555 +xen-unstable date: Tue Sep 11 12:17:49 UTC 2012 +--- + xen/common/tmem.c | 90 ++++++++++++++++++++++++-------------------- + xen/common/tmem_xen.c | 90 ++++++++++++++++++++++++++++++------------- + xen/include/xen/tmem_xen.h | 20 ++++++--- + 3 files changed, 125 insertions(+), 75 deletions(-) + +diff --git a/xen/common/tmem.c b/xen/common/tmem.c +index 83e3a06..ae02b37 100644 +--- a/xen/common/tmem.c ++++ b/xen/common/tmem.c +@@ -387,11 +387,13 @@ static NOINLINE int pcd_copy_to_client(tmem_cli_mfn_t cmfn, pgp_t *pgp) + pcd = pgp->pcd; + if ( pgp->size < PAGE_SIZE && pgp->size != 0 && + pcd->size < PAGE_SIZE && pcd->size != 0 ) +- ret = tmh_decompress_to_client(cmfn, pcd->cdata, pcd->size, NULL); ++ ret = tmh_decompress_to_client(cmfn, pcd->cdata, pcd->size, ++ tmh_cli_buf_null); + else if ( tmh_tze_enabled() && pcd->size < PAGE_SIZE ) + ret = tmh_copy_tze_to_client(cmfn, pcd->tze, pcd->size); + else +- ret = tmh_copy_to_client(cmfn, pcd->pfp, 0, 0, PAGE_SIZE, NULL); ++ ret = tmh_copy_to_client(cmfn, pcd->pfp, 0, 0, PAGE_SIZE, ++ tmh_cli_buf_null); + tmem_read_unlock(&pcd_tree_rwlocks[firstbyte]); + return ret; + } +@@ -1447,7 +1449,7 @@ static inline void tmem_ensure_avail_pages(void) + /************ TMEM CORE OPERATIONS ************************************/ + + static NOINLINE int do_tmem_put_compress(pgp_t *pgp, tmem_cli_mfn_t cmfn, +- void *cva) ++ tmem_cli_va_t clibuf) + { + void *dst, *p; + size_t size; +@@ -1466,7 +1468,7 @@ static NOINLINE int do_tmem_put_compress(pgp_t *pgp, tmem_cli_mfn_t cmfn, + if ( pgp->pfp != NULL ) + pgp_free_data(pgp, pgp->us.obj->pool); + START_CYC_COUNTER(compress); +- ret = tmh_compress_from_client(cmfn, &dst, &size, cva); ++ ret = tmh_compress_from_client(cmfn, &dst, &size, clibuf); + if ( (ret == -EFAULT) || (ret == 0) ) + goto out; + else if ( (size == 0) || (size >= tmem_subpage_maxsize()) ) { +@@ -1493,7 +1495,8 @@ out: + } + + static NOINLINE int do_tmem_dup_put(pgp_t *pgp, tmem_cli_mfn_t cmfn, +- pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len, void *cva) ++ pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len, ++ tmem_cli_va_t clibuf) + { + pool_t *pool; + obj_t *obj; +@@ -1515,7 +1518,7 @@ static NOINLINE int do_tmem_dup_put(pgp_t *pgp, tmem_cli_mfn_t cmfn, + /* can we successfully manipulate pgp to change out the data? 
*/ + if ( len != 0 && client->compress && pgp->size != 0 ) + { +- ret = do_tmem_put_compress(pgp,cmfn,cva); ++ ret = do_tmem_put_compress(pgp, cmfn, clibuf); + if ( ret == 1 ) + goto done; + else if ( ret == 0 ) +@@ -1533,7 +1536,8 @@ copy_uncompressed: + goto failed_dup; + pgp->size = 0; + /* tmh_copy_from_client properly handles len==0 and offsets != 0 */ +- ret = tmh_copy_from_client(pgp->pfp,cmfn,tmem_offset,pfn_offset,len,0); ++ ret = tmh_copy_from_client(pgp->pfp, cmfn, tmem_offset, pfn_offset, len, ++ tmh_cli_buf_null); + if ( ret == -EFAULT ) + goto bad_copy; + if ( tmh_dedup_enabled() && !is_persistent(pool) ) +@@ -1585,7 +1589,7 @@ cleanup: + static NOINLINE int do_tmem_put(pool_t *pool, + OID *oidp, uint32_t index, + tmem_cli_mfn_t cmfn, pagesize_t tmem_offset, +- pagesize_t pfn_offset, pagesize_t len, void *cva) ++ pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_t clibuf) + { + obj_t *obj = NULL, *objfound = NULL, *objnew = NULL; + pgp_t *pgp = NULL, *pgpdel = NULL; +@@ -1599,7 +1603,8 @@ static NOINLINE int do_tmem_put(pool_t *pool, + { + ASSERT_SPINLOCK(&objfound->obj_spinlock); + if ((pgp = pgp_lookup_in_obj(objfound, index)) != NULL) +- return do_tmem_dup_put(pgp,cmfn,tmem_offset,pfn_offset,len,cva); ++ return do_tmem_dup_put(pgp, cmfn, tmem_offset, pfn_offset, len, ++ clibuf); + } + + /* no puts allowed into a frozen pool (except dup puts) */ +@@ -1634,7 +1639,7 @@ static NOINLINE int do_tmem_put(pool_t *pool, + if ( len != 0 && client->compress ) + { + ASSERT(pgp->pfp == NULL); +- ret = do_tmem_put_compress(pgp,cmfn,cva); ++ ret = do_tmem_put_compress(pgp, cmfn, clibuf); + if ( ret == 1 ) + goto insert_page; + if ( ret == -ENOMEM ) +@@ -1658,7 +1663,8 @@ copy_uncompressed: + goto delete_and_free; + } + /* tmh_copy_from_client properly handles len==0 (TMEM_NEW_PAGE) */ +- ret = tmh_copy_from_client(pgp->pfp,cmfn,tmem_offset,pfn_offset,len,cva); ++ ret = tmh_copy_from_client(pgp->pfp, cmfn, tmem_offset, pfn_offset, len, ++ clibuf); + if ( ret == -EFAULT ) + goto bad_copy; + if ( tmh_dedup_enabled() && !is_persistent(pool) ) +@@ -1728,12 +1734,13 @@ free: + + static NOINLINE int do_tmem_get(pool_t *pool, OID *oidp, uint32_t index, + tmem_cli_mfn_t cmfn, pagesize_t tmem_offset, +- pagesize_t pfn_offset, pagesize_t len, void *cva) ++ pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_t clibuf) + { + obj_t *obj; + pgp_t *pgp; + client_t *client = pool->client; + DECL_LOCAL_CYC_COUNTER(decompress); ++ int rc = -EFAULT; + + if ( !_atomic_read(pool->pgp_count) ) + return -EEMPTY; +@@ -1758,16 +1765,18 @@ static NOINLINE int do_tmem_get(pool_t *pool, OID *oidp, uint32_t index, + if ( tmh_dedup_enabled() && !is_persistent(pool) && + pgp->firstbyte != NOT_SHAREABLE ) + { +- if ( pcd_copy_to_client(cmfn, pgp) == -EFAULT ) ++ rc = pcd_copy_to_client(cmfn, pgp); ++ if ( rc <= 0 ) + goto bad_copy; + } else if ( pgp->size != 0 ) { + START_CYC_COUNTER(decompress); +- if ( tmh_decompress_to_client(cmfn, pgp->cdata, +- pgp->size, cva) == -EFAULT ) ++ rc = tmh_decompress_to_client(cmfn, pgp->cdata, ++ pgp->size, clibuf); ++ if ( rc <= 0 ) + goto bad_copy; + END_CYC_COUNTER(decompress); + } else if ( tmh_copy_to_client(cmfn, pgp->pfp, tmem_offset, +- pfn_offset, len, cva) == -EFAULT) ++ pfn_offset, len, clibuf) == -EFAULT) + goto bad_copy; + if ( is_ephemeral(pool) ) + { +@@ -1807,8 +1816,7 @@ static NOINLINE int do_tmem_get(pool_t *pool, OID *oidp, uint32_t index, + bad_copy: + /* this should only happen if the client passed a bad mfn */ + failed_copies++; +- return -EFAULT; +- ++ return 
rc; + } + + static NOINLINE int do_tmem_flush_page(pool_t *pool, OID *oidp, uint32_t index) +@@ -2348,7 +2356,6 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id, + pool_t *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN) + ? NULL : client->pools[pool_id]; + uint32_t p; +- uint64_t *uuid; + pgp_t *pgp, *pgp2; + int rc = -1; + +@@ -2412,9 +2419,7 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id, + case TMEMC_SAVE_GET_POOL_UUID: + if ( pool == NULL ) + break; +- uuid = (uint64_t *)buf.p; +- *uuid++ = pool->uuid[0]; +- *uuid = pool->uuid[1]; ++ tmh_copy_to_client_buf(buf, pool->uuid, 2); + rc = 0; + case TMEMC_SAVE_END: + if ( client == NULL ) +@@ -2439,7 +2444,7 @@ static NOINLINE int tmemc_save_get_next_page(int cli_id, uint32_t pool_id, + pgp_t *pgp; + OID oid; + int ret = 0; +- struct tmem_handle *h; ++ struct tmem_handle h; + unsigned int pagesize = 1 << (pool->pageshift+12); + + if ( pool == NULL || is_ephemeral(pool) ) +@@ -2470,11 +2475,13 @@ static NOINLINE int tmemc_save_get_next_page(int cli_id, uint32_t pool_id, + pgp_t,us.pool_pers_pages); + pool->cur_pgp = pgp; + oid = pgp->us.obj->oid; +- h = (struct tmem_handle *)buf.p; +- *(OID *)&h->oid[0] = oid; +- h->index = pgp->index; +- buf.p = (void *)(h+1); +- ret = do_tmem_get(pool, &oid, h->index,0,0,0,pagesize,buf.p); ++ h.pool_id = pool_id; ++ BUILD_BUG_ON(sizeof(h.oid) != sizeof(oid)); ++ memcpy(h.oid, oid.oid, sizeof(h.oid)); ++ h.index = pgp->index; ++ tmh_copy_to_client_buf(buf, &h, 1); ++ tmh_client_buf_add(buf, sizeof(h)); ++ ret = do_tmem_get(pool, &oid, pgp->index, 0, 0, 0, pagesize, buf); + + out: + tmem_spin_unlock(&pers_lists_spinlock); +@@ -2486,7 +2493,7 @@ static NOINLINE int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_t buf, + { + client_t *client = tmh_client_from_cli_id(cli_id); + pgp_t *pgp; +- struct tmem_handle *h; ++ struct tmem_handle h; + int ret = 0; + + if ( client == NULL ) +@@ -2512,10 +2519,11 @@ static NOINLINE int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_t buf, + pgp_t,client_inv_pages); + client->cur_pgp = pgp; + } +- h = (struct tmem_handle *)buf.p; +- h->pool_id = pgp->pool_id; +- *(OID *)&h->oid = pgp->inv_oid; +- h->index = pgp->index; ++ h.pool_id = pgp->pool_id; ++ BUILD_BUG_ON(sizeof(h.oid) != sizeof(pgp->inv_oid)); ++ memcpy(h.oid, pgp->inv_oid.oid, sizeof(h.oid)); ++ h.index = pgp->index; ++ tmh_copy_to_client_buf(buf, &h, 1); + ret = 1; + out: + tmem_spin_unlock(&pers_lists_spinlock); +@@ -2531,7 +2539,7 @@ static int tmemc_restore_put_page(int cli_id, uint32_t pool_id, OID *oidp, + + if ( pool == NULL ) + return -1; +- return do_tmem_put(pool,oidp,index,0,0,0,bufsize,buf.p); ++ return do_tmem_put(pool, oidp, index, 0, 0, 0, bufsize, buf); + } + + static int tmemc_restore_flush_page(int cli_id, uint32_t pool_id, OID *oidp, +@@ -2735,19 +2743,19 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops) + break; + case TMEM_NEW_PAGE: + tmem_ensure_avail_pages(); +- rc = do_tmem_put(pool, oidp, +- op.u.gen.index, op.u.gen.cmfn, 0, 0, 0, NULL); ++ rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn, 0, 0, 0, ++ tmh_cli_buf_null); + break; + case TMEM_PUT_PAGE: + tmem_ensure_avail_pages(); +- rc = do_tmem_put(pool, oidp, +- op.u.gen.index, op.u.gen.cmfn, 0, 0, PAGE_SIZE, NULL); ++ rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn, 0, 0, ++ PAGE_SIZE, tmh_cli_buf_null); + if (rc == 1) succ_put = 1; + else non_succ_put = 1; + break; + case TMEM_GET_PAGE: + rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn, +- 0, 0, PAGE_SIZE, 
0); ++ 0, 0, PAGE_SIZE, tmh_cli_buf_null); + if (rc == 1) succ_get = 1; + else non_succ_get = 1; + break; +@@ -2766,13 +2774,13 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops) + case TMEM_READ: + rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn, + op.u.gen.tmem_offset, op.u.gen.pfn_offset, +- op.u.gen.len,0); ++ op.u.gen.len, tmh_cli_buf_null); + break; + case TMEM_WRITE: + rc = do_tmem_put(pool, oidp, + op.u.gen.index, op.u.gen.cmfn, + op.u.gen.tmem_offset, op.u.gen.pfn_offset, +- op.u.gen.len, NULL); ++ op.u.gen.len, tmh_cli_buf_null); + break; + case TMEM_XCHG: + /* need to hold global lock to ensure xchg is atomic */ +diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c +index 4934c42..bcd49f2 100644 +--- a/xen/common/tmem_xen.c ++++ b/xen/common/tmem_xen.c +@@ -50,6 +50,7 @@ DECL_CYC_COUNTER(pg_copy); + #define LZO_DSTMEM_PAGES 2 + static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, workmem); + static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem); ++static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page); + + #ifdef COMPARE_COPY_PAGE_SSE2 + #include /* REMOVE ME AFTER TEST */ +@@ -140,12 +141,12 @@ static inline void cli_put_page(void *cli_va, pfp_t *cli_pfp, + + EXPORT int tmh_copy_from_client(pfp_t *pfp, + tmem_cli_mfn_t cmfn, pagesize_t tmem_offset, +- pagesize_t pfn_offset, pagesize_t len, void *cli_va) ++ pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_t clibuf) + { + unsigned long tmem_mfn, cli_mfn = 0; +- void *tmem_va; ++ char *tmem_va, *cli_va = NULL; + pfp_t *cli_pfp = NULL; +- bool_t tmemc = cli_va != NULL; /* if true, cli_va is control-op buffer */ ++ int rc = 1; + + ASSERT(pfp != NULL); + tmem_mfn = page_to_mfn(pfp); +@@ -156,62 +157,76 @@ EXPORT int tmh_copy_from_client(pfp_t *pfp, + unmap_domain_page(tmem_va); + return 1; + } +- if ( !tmemc ) ++ if ( guest_handle_is_null(clibuf) ) + { + cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 0); + if ( cli_va == NULL ) ++ { ++ unmap_domain_page(tmem_va); + return -EFAULT; ++ } + } + mb(); +- if (len == PAGE_SIZE && !tmem_offset && !pfn_offset) ++ if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va ) + tmh_copy_page(tmem_va, cli_va); + else if ( (tmem_offset+len <= PAGE_SIZE) && + (pfn_offset+len <= PAGE_SIZE) ) +- memcpy((char *)tmem_va+tmem_offset,(char *)cli_va+pfn_offset,len); +- if ( !tmemc ) ++ { ++ if ( cli_va ) ++ memcpy(tmem_va + tmem_offset, cli_va + pfn_offset, len); ++ else if ( copy_from_guest_offset(tmem_va + tmem_offset, clibuf, ++ pfn_offset, len) ) ++ rc = -EFAULT; ++ } ++ if ( cli_va ) + cli_put_page(cli_va, cli_pfp, cli_mfn, 0); + unmap_domain_page(tmem_va); +- return 1; ++ return rc; + } + + EXPORT int tmh_compress_from_client(tmem_cli_mfn_t cmfn, +- void **out_va, size_t *out_len, void *cli_va) ++ void **out_va, size_t *out_len, tmem_cli_va_t clibuf) + { + int ret = 0; + unsigned char *dmem = this_cpu(dstmem); + unsigned char *wmem = this_cpu(workmem); ++ char *scratch = this_cpu(scratch_page); + pfp_t *cli_pfp = NULL; + unsigned long cli_mfn = 0; +- bool_t tmemc = cli_va != NULL; /* if true, cli_va is control-op buffer */ ++ void *cli_va = NULL; + + if ( dmem == NULL || wmem == NULL ) + return 0; /* no buffer, so can't compress */ +- if ( !tmemc ) ++ if ( guest_handle_is_null(clibuf) ) + { + cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 0); + if ( cli_va == NULL ) + return -EFAULT; + } ++ else if ( !scratch ) ++ return 0; ++ else if ( copy_from_guest(scratch, clibuf, PAGE_SIZE) ) ++ return -EFAULT; + mb(); +- ret = lzo1x_1_compress(cli_va, PAGE_SIZE, dmem, out_len, wmem); 
++ ret = lzo1x_1_compress(cli_va ?: scratch, PAGE_SIZE, dmem, out_len, wmem); + ASSERT(ret == LZO_E_OK); + *out_va = dmem; +- if ( !tmemc ) ++ if ( cli_va ) + cli_put_page(cli_va, cli_pfp, cli_mfn, 0); +- unmap_domain_page(cli_va); + return 1; + } + + EXPORT int tmh_copy_to_client(tmem_cli_mfn_t cmfn, pfp_t *pfp, +- pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len, void *cli_va) ++ pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len, ++ tmem_cli_va_t clibuf) + { + unsigned long tmem_mfn, cli_mfn = 0; +- void *tmem_va; ++ char *tmem_va, *cli_va = NULL; + pfp_t *cli_pfp = NULL; +- bool_t tmemc = cli_va != NULL; /* if true, cli_va is control-op buffer */ ++ int rc = 1; + + ASSERT(pfp != NULL); +- if ( !tmemc ) ++ if ( guest_handle_is_null(clibuf) ) + { + cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 1); + if ( cli_va == NULL ) +@@ -219,37 +234,48 @@ EXPORT int tmh_copy_to_client(tmem_cli_mfn_t cmfn, pfp_t *pfp, + } + tmem_mfn = page_to_mfn(pfp); + tmem_va = map_domain_page(tmem_mfn); +- if (len == PAGE_SIZE && !tmem_offset && !pfn_offset) ++ if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va ) + tmh_copy_page(cli_va, tmem_va); + else if ( (tmem_offset+len <= PAGE_SIZE) && (pfn_offset+len <= PAGE_SIZE) ) +- memcpy((char *)cli_va+pfn_offset,(char *)tmem_va+tmem_offset,len); ++ { ++ if ( cli_va ) ++ memcpy(cli_va + pfn_offset, tmem_va + tmem_offset, len); ++ else if ( copy_to_guest_offset(clibuf, pfn_offset, ++ tmem_va + tmem_offset, len) ) ++ rc = -EFAULT; ++ } + unmap_domain_page(tmem_va); +- if ( !tmemc ) ++ if ( cli_va ) + cli_put_page(cli_va, cli_pfp, cli_mfn, 1); + mb(); +- return 1; ++ return rc; + } + + EXPORT int tmh_decompress_to_client(tmem_cli_mfn_t cmfn, void *tmem_va, +- size_t size, void *cli_va) ++ size_t size, tmem_cli_va_t clibuf) + { + unsigned long cli_mfn = 0; + pfp_t *cli_pfp = NULL; ++ void *cli_va = NULL; ++ char *scratch = this_cpu(scratch_page); + size_t out_len = PAGE_SIZE; +- bool_t tmemc = cli_va != NULL; /* if true, cli_va is control-op buffer */ + int ret; + +- if ( !tmemc ) ++ if ( guest_handle_is_null(clibuf) ) + { + cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 1); + if ( cli_va == NULL ) + return -EFAULT; + } +- ret = lzo1x_decompress_safe(tmem_va, size, cli_va, &out_len); ++ else if ( !scratch ) ++ return 0; ++ ret = lzo1x_decompress_safe(tmem_va, size, cli_va ?: scratch, &out_len); + ASSERT(ret == LZO_E_OK); + ASSERT(out_len == PAGE_SIZE); +- if ( !tmemc ) ++ if ( cli_va ) + cli_put_page(cli_va, cli_pfp, cli_mfn, 1); ++ else if ( copy_to_guest(clibuf, scratch, PAGE_SIZE) ) ++ return -EFAULT; + mb(); + return 1; + } +@@ -419,6 +445,11 @@ static int cpu_callback( + struct page_info *p = alloc_domheap_pages(0, workmem_order, 0); + per_cpu(workmem, cpu) = p ? page_to_virt(p) : NULL; + } ++ if ( per_cpu(scratch_page, cpu) == NULL ) ++ { ++ struct page_info *p = alloc_domheap_page(NULL, 0); ++ per_cpu(scratch_page, cpu) = p ? 
page_to_virt(p) : NULL; ++ } + break; + } + case CPU_DEAD: +@@ -435,6 +466,11 @@ static int cpu_callback( + free_domheap_pages(p, workmem_order); + per_cpu(workmem, cpu) = NULL; + } ++ if ( per_cpu(scratch_page, cpu) != NULL ) ++ { ++ free_domheap_page(virt_to_page(per_cpu(scratch_page, cpu))); ++ per_cpu(scratch_page, cpu) = NULL; ++ } + break; + } + default: +diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h +index 3dbf6f8..68ece29 100644 +--- a/xen/include/xen/tmem_xen.h ++++ b/xen/include/xen/tmem_xen.h +@@ -482,27 +482,33 @@ static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops) + return copy_from_guest(op, uops, 1); + } + ++#define tmh_cli_buf_null guest_handle_from_ptr(NULL, char) ++ + static inline void tmh_copy_to_client_buf_offset(tmem_cli_va_t clibuf, int off, + char *tmembuf, int len) + { + copy_to_guest_offset(clibuf,off,tmembuf,len); + } + ++#define tmh_copy_to_client_buf(clibuf, tmembuf, cnt) \ ++ copy_to_guest(guest_handle_cast(clibuf, void), tmembuf, cnt) ++ ++#define tmh_client_buf_add guest_handle_add_offset ++ + #define TMH_CLI_ID_NULL ((cli_id_t)((domid_t)-1L)) + + #define tmh_cli_id_str "domid" + #define tmh_client_str "domain" + +-extern int tmh_decompress_to_client(tmem_cli_mfn_t,void*,size_t,void*); ++int tmh_decompress_to_client(tmem_cli_mfn_t, void *, size_t, tmem_cli_va_t); + +-extern int tmh_compress_from_client(tmem_cli_mfn_t,void**,size_t *,void*); ++int tmh_compress_from_client(tmem_cli_mfn_t, void **, size_t *, tmem_cli_va_t); + +-extern int tmh_copy_from_client(pfp_t *pfp, +- tmem_cli_mfn_t cmfn, pagesize_t tmem_offset, +- pagesize_t pfn_offset, pagesize_t len, void *cva); ++int tmh_copy_from_client(pfp_t *, tmem_cli_mfn_t, pagesize_t tmem_offset, ++ pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_t); + +-extern int tmh_copy_to_client(tmem_cli_mfn_t cmfn, pfp_t *pfp, +- pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len, void *cva); ++int tmh_copy_to_client(tmem_cli_mfn_t, pfp_t *, pagesize_t tmem_offset, ++ pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_t); + + extern int tmh_copy_tze_to_client(tmem_cli_mfn_t cmfn, void *tmem_va, pagesize_t len); + +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-16.patch b/main/xen/xsa-16.patch new file mode 100644 index 0000000..a61988e --- /dev/null +++ b/main/xen/xsa-16.patch @@ -0,0 +1,42 @@ +From 8195a57243996267cdd9ba84dfea3c954a217a75 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 5 Sep 2012 12:29:56 +0100 +Subject: [PATCH] x86/pvhvm: properly range-check + PHYSDEVOP_map_pirq/MAP_PIRQ_TYPE_GSI + +This is being used as a array index, and hence must be validated before +use. + +This is XSA-16 / CVE-2012-3498. 
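The fix pairs a runtime range check on the guest-supplied index with a
compile-time BUILD_BUG_ON() tying the bound to the actual array size, so
the two cannot silently drift apart. Schematically (a sketch with
placeholder names table/idx, not the physdev code):

    BUILD_BUG_ON(ARRAY_SIZE(table) < NR_HVM_IRQS); /* bound covers array */

    if ( idx < 0 || idx >= NR_HVM_IRQS )   /* validate guest input first */
        return -EINVAL;
    use(&table[idx]);                      /* only then index the array */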
+ +Signed-off-by: Jan Beulich +--- + xen/arch/x86/physdev.c | 7 +++++++ + 1 files changed, 7 insertions(+), 0 deletions(-) + +diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c +index e037a94..a5a58a1 100644 +--- a/xen/arch/x86/physdev.c ++++ b/xen/arch/x86/physdev.c +@@ -40,11 +40,18 @@ static int physdev_hvm_map_pirq( + struct hvm_girq_dpci_mapping *girq; + uint32_t machine_gsi = 0; + ++ if ( map->index < 0 || map->index >= NR_HVM_IRQS ) ++ { ++ ret = -EINVAL; ++ break; ++ } ++ + /* find the machine gsi corresponding to the + * emulated gsi */ + hvm_irq_dpci = domain_get_irq_dpci(d); + if ( hvm_irq_dpci ) + { ++ BUILD_BUG_ON(ARRAY_SIZE(hvm_irq_dpci->girq) < NR_HVM_IRQS); + list_for_each_entry ( girq, + &hvm_irq_dpci->girq[map->index], + list ) +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-17.patch b/main/xen/xsa-17.patch new file mode 100644 index 0000000..891037b --- /dev/null +++ b/main/xen/xsa-17.patch @@ -0,0 +1,122 @@ +From a56ae4b5069c7b23ee657b15f08443a9b14a8e7b Mon Sep 17 00:00:00 2001 +From: Ian Campbell +Date: Wed, 5 Sep 2012 12:31:40 +0100 +Subject: [PATCH] console: bounds check whenever changing the cursor due to an escape code + +This is XSA-17 / CVE-2012-3515 + +Signed-off-by: Ian Campbell +--- + console.c | 57 ++++++++++++++++++++++++++++----------------------------- + 1 files changed, 28 insertions(+), 29 deletions(-) + +diff --git a/tools/ioemu-qemu-xen/console.c b/tools/ioemu-qemu-xen/console.c +index 5e6e3d0..9984d6f 100644 +--- a/tools/ioemu-qemu-xen/console.c ++++ b/tools/ioemu-qemu-xen/console.c +@@ -794,6 +794,26 @@ static void console_clear_xy(TextConsole *s, int x, int y) + update_xy(s, x, y); + } + ++/* set cursor, checking bounds */ ++static void set_cursor(TextConsole *s, int x, int y) ++{ ++ if (x < 0) { ++ x = 0; ++ } ++ if (y < 0) { ++ y = 0; ++ } ++ if (y >= s->height) { ++ y = s->height - 1; ++ } ++ if (x >= s->width) { ++ x = s->width - 1; ++ } ++ ++ s->x = x; ++ s->y = y; ++} ++ + static void console_putchar(TextConsole *s, int ch) + { + TextCell *c; +@@ -869,7 +889,8 @@ static void console_putchar(TextConsole *s, int ch) + s->esc_params[s->nb_esc_params] * 10 + ch - '0'; + } + } else { +- s->nb_esc_params++; ++ if (s->nb_esc_params < MAX_ESC_PARAMS) ++ s->nb_esc_params++; + if (ch == ';') + break; + #ifdef DEBUG_CONSOLE +@@ -883,59 +904,37 @@ static void console_putchar(TextConsole *s, int ch) + if (s->esc_params[0] == 0) { + s->esc_params[0] = 1; + } +- s->y -= s->esc_params[0]; +- if (s->y < 0) { +- s->y = 0; +- } ++ set_cursor(s, s->x, s->y - s->esc_params[0]); + break; + case 'B': + /* move cursor down */ + if (s->esc_params[0] == 0) { + s->esc_params[0] = 1; + } +- s->y += s->esc_params[0]; +- if (s->y >= s->height) { +- s->y = s->height - 1; +- } ++ set_cursor(s, s->x, s->y + s->esc_params[0]); + break; + case 'C': + /* move cursor right */ + if (s->esc_params[0] == 0) { + s->esc_params[0] = 1; + } +- s->x += s->esc_params[0]; +- if (s->x >= s->width) { +- s->x = s->width - 1; +- } ++ set_cursor(s, s->x + s->esc_params[0], s->y); + break; + case 'D': + /* move cursor left */ + if (s->esc_params[0] == 0) { + s->esc_params[0] = 1; + } +- s->x -= s->esc_params[0]; +- if (s->x < 0) { +- s->x = 0; +- } ++ set_cursor(s, s->x - s->esc_params[0], s->y); + break; + case 'G': + /* move cursor to column */ +- s->x = s->esc_params[0] - 1; +- if (s->x < 0) { +- s->x = 0; +- } ++ set_cursor(s, s->esc_params[0] - 1, s->y); + break; + case 'f': + case 'H': + /* move cursor to row, column */ +- s->x = s->esc_params[1] - 1; +- if (s->x < 0) { 
+- s->x = 0; +- } +- s->y = s->esc_params[0] - 1; +- if (s->y < 0) { +- s->y = 0; +- } ++ set_cursor(s, s->esc_params[1] - 1, s->esc_params[0] - 1); + break; + case 'J': + switch (s->esc_params[0]) { +-- +1.7.2.5 + diff --git a/main/xen/xsa-20.patch b/main/xen/xsa-20.patch new file mode 100644 index 0000000..a8c8838 --- /dev/null +++ b/main/xen/xsa-20.patch @@ -0,0 +1,56 @@ +From f2e0eab9afae9245f38a31deeafe90953d63ea07 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 14 Nov 2012 11:33:36 +0000 +Subject: [PATCH] VCPU/timers: Prevent overflow in calculations, leading to + DoS vulnerability + +The timer action for a vcpu periodic timer is to calculate the next +expiry time, and to reinsert itself into the timer queue. If the +deadline ends up in the past, Xen never leaves __do_softirq(). The +affected PCPU will stay in an infinite loop until Xen is killed by the +watchdog (if enabled). + +This is a security problem, XSA-20 / CVE-2012-4535. + +Signed-off-by: Andrew Cooper +Acked-by: Ian Campbell +Committed-by: Ian Jackson + +xen-unstable changeset: 26148:bf58b94b3cef +Backport-requested-by: security@xen.org +Committed-by: Ian Jackson +--- + xen/common/domain.c | 3 +++ + xen/include/xen/time.h | 2 ++ + 2 files changed, 5 insertions(+), 0 deletions(-) + +diff --git a/xen/common/domain.c b/xen/common/domain.c +index 98e36ee..054f7c4 100644 +--- a/xen/common/domain.c ++++ b/xen/common/domain.c +@@ -873,6 +873,9 @@ long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg) + if ( set.period_ns < MILLISECS(1) ) + return -EINVAL; + ++ if ( set.period_ns > STIME_DELTA_MAX ) ++ return -EINVAL; ++ + v->periodic_period = set.period_ns; + vcpu_force_reschedule(v); + +diff --git a/xen/include/xen/time.h b/xen/include/xen/time.h +index 0f67e90..c7b420b 100644 +--- a/xen/include/xen/time.h ++++ b/xen/include/xen/time.h +@@ -53,6 +53,8 @@ struct tm gmtime(unsigned long t); + #define MILLISECS(_ms) ((s_time_t)((_ms) * 1000000ULL)) + #define MICROSECS(_us) ((s_time_t)((_us) * 1000ULL)) + #define STIME_MAX ((s_time_t)((uint64_t)~0ull>>1)) ++/* Chosen so (NOW() + delta) wont overflow without an uptime of 200 years */ ++#define STIME_DELTA_MAX ((s_time_t)((uint64_t)~0ull>>2)) + + extern void update_vcpu_system_time(struct vcpu *v); + extern void update_domain_wallclock_time(struct domain *d); +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-21.patch b/main/xen/xsa-21.patch new file mode 100644 index 0000000..d82e570 --- /dev/null +++ b/main/xen/xsa-21.patch @@ -0,0 +1,42 @@ +From 50d1a7fea0a60cd66733cfd8666a95f00d586549 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 14 Nov 2012 11:35:06 +0000 +Subject: [PATCH] x86/physdev: Range check pirq parameter from guests + +Otherwise Xen will read beyond either end of the struct +domain.arch.pirq_emuirq array, usually resulting in a fatal page fault. + +This vulnerability was introduced by c/s 23241:d21100f1d00e, which adds +a call to domain_pirq_to_emuirq() which uses the guest provided pirq +value before range checking it, and was fixed by c/s 23573:584c2e5e03d9 +which changed the behaviour of the domain_pirq_to_emuirq() macro to use +radix trees instead of a flat array. + +This is XSA-21 / CVE-2012-4536. 
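The arithmetic behind the XSA-20 check above is worth spelling out: the
expiry computation is roughly NOW() + period in a signed 64-bit s_time_t,
so a guest-chosen period near STIME_MAX wraps the sum negative, the
deadline is permanently "in the past", and the timer softirq re-fires
forever. Capping the period at STIME_DELTA_MAX ((s_time_t)(~0ull >> 2))
leaves enough headroom that the sum stays representable for any realistic
uptime. A simplified sketch of the hazard and the guard (not the actual
scheduler code):

    s_time_t expires = NOW() + set.period_ns;   /* can wrap negative */

    /* Guard applied before arming the timer: */
    if ( set.period_ns > STIME_DELTA_MAX )
        return -EINVAL;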
+ +Signed-off-by: Andrew Cooper +Acked-by: Jan Beulich +Acked-by: Ian Campbell +Committed-by: Ian Jackson +--- + xen/arch/x86/physdev.c | 4 ++++ + 1 files changed, 4 insertions(+), 0 deletions(-) + +diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c +index 7d5a023..de364f0 100644 +--- a/xen/arch/x86/physdev.c ++++ b/xen/arch/x86/physdev.c +@@ -234,6 +234,10 @@ static int physdev_unmap_pirq(struct physdev_unmap_pirq *unmap) + if ( ret ) + return ret; + ++ ret = -EINVAL; ++ if ( unmap->pirq < 0 || unmap->pirq >= d->nr_pirqs ) ++ goto free_domain; ++ + if ( is_hvm_domain(d) ) + { + spin_lock(&d->event_lock); +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-22.patch b/main/xen/xsa-22.patch new file mode 100644 index 0000000..570005f --- /dev/null +++ b/main/xen/xsa-22.patch @@ -0,0 +1,51 @@ +From 9070a6ef041756341286e88e7fad7de3e01c66f9 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 14 Nov 2012 11:40:45 +0000 +Subject: [PATCH] x86/physmap: Prevent incorrect updates of m2p mappings + +In certain conditions, such as low memory, set_p2m_entry() can fail. +Currently, the p2m and m2p tables will get out of sync because we still +update the m2p table after the p2m update has failed. + +If that happens, subsequent guest-invoked memory operations can cause +BUG()s and ASSERT()s to kill Xen. + +This is fixed by only updating the m2p table iff the p2m was +successfully updated. + +This is a security problem, XSA-22 / CVE-2012-4537. + +Signed-off-by: Andrew Cooper +Acked-by: Ian Campbell +Acked-by: Ian Jackson +Committed-by: Ian Jackson +--- + xen/arch/x86/mm/p2m.c | 4 ++++ + 1 files changed, 4 insertions(+), 0 deletions(-) + +diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c +index 82e1b55..f494d25 100644 +--- a/xen/arch/x86/mm/p2m.c ++++ b/xen/arch/x86/mm/p2m.c +@@ -2558,7 +2558,10 @@ guest_physmap_add_entry(struct p2m_domain *p2m, unsigned long gfn, + if ( mfn_valid(_mfn(mfn)) ) + { + if ( !set_p2m_entry(p2m, gfn, _mfn(mfn), page_order, t, p2m->default_access) ) ++ { + rc = -EINVAL; ++ goto out; /* Failed to update p2m, bail without updating m2p. */ ++ } + if ( !p2m_is_grant(t) ) + { + for ( i = 0; i < (1UL << page_order); i++ ) +@@ -2579,6 +2582,7 @@ guest_physmap_add_entry(struct p2m_domain *p2m, unsigned long gfn, + } + } + ++out: + audit_p2m(p2m, 1); + p2m_unlock(p2m); + +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-23.patch b/main/xen/xsa-23.patch new file mode 100644 index 0000000..35a6470 --- /dev/null +++ b/main/xen/xsa-23.patch @@ -0,0 +1,44 @@ +From 1d435032de14b9dec808102940309047903f7ed0 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 14 Nov 2012 11:43:29 +0000 +Subject: [PATCH] xen/mm/shadow: check toplevel pagetables are present before + unhooking them. + +If the guest has not fully populated its top-level PAE entries when it calls +HVMOP_pagetable_dying, the shadow code could try to unhook entries from +MFN 0. Add a check to avoid that case. + +This issue was introduced by c/s 21239:b9d2db109cf5. + +This is a security problem, XSA-23 / CVE-2012-4538. 
+ +Signed-off-by: Tim Deegan +Tested-by: Andrew Cooper +Acked-by: Ian Campbell +Committed-by: Ian Jackson +--- + xen/arch/x86/mm/shadow/multi.c | 8 ++++++-- + 1 files changed, 6 insertions(+), 2 deletions(-) + +diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c +index d510e7f..91879cf 100644 +--- a/xen/arch/x86/mm/shadow/multi.c ++++ b/xen/arch/x86/mm/shadow/multi.c +@@ -4737,8 +4737,12 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa) + } + for ( i = 0; i < 4; i++ ) + { +- if ( fast_path ) +- smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i])); ++ if ( fast_path ) { ++ if ( pagetable_is_null(v->arch.shadow_table[i]) ) ++ smfn = _mfn(INVALID_MFN); ++ else ++ smfn = _mfn(pagetable_get_pfn(v->arch.shadow_table[i])); ++ } + else + { + /* retrieving the l2s */ +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-24.patch b/main/xen/xsa-24.patch new file mode 100644 index 0000000..53c6fe7 --- /dev/null +++ b/main/xen/xsa-24.patch @@ -0,0 +1,41 @@ +From 95a0c10357c502a0b0b960e7133e792695ef0b58 Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Wed, 14 Nov 2012 11:46:12 +0000 +Subject: [PATCH] compat/gnttab: Prevent infinite loop in compat code + +c/s 20281:95ea2052b41b, which introduces Grant Table version 2 +hypercalls introduces a vulnerability whereby the compat hypercall +handler can fall into an infinite loop. + +If the watchdog is enabled, Xen will die after the timeout. + +This is a security problem, XSA-24 / CVE-2012-4539. + +Signed-off-by: Andrew Cooper +Acked-by: Jan Beulich +Acked-by: Ian Jackson +Committed-by: Ian Jackson + +xen-unstable changeset: 26151:b64a7d868f06 +Backport-requested-by: security@xen.org +Committed-by: Ian Jackson +--- + xen/common/compat/grant_table.c | 2 ++ + 1 files changed, 2 insertions(+), 0 deletions(-) + +diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c +index ca60395..d09a65b 100644 +--- a/xen/common/compat/grant_table.c ++++ b/xen/common/compat/grant_table.c +@@ -310,6 +310,8 @@ int compat_grant_table_op(unsigned int cmd, + #undef XLAT_gnttab_get_status_frames_HNDL_frame_list + if ( unlikely(__copy_to_guest(cmp_uop, &cmp.get_status, 1)) ) + rc = -EFAULT; ++ else ++ i = 1; + } + break; + } +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa-25.patch b/main/xen/xsa-25.patch new file mode 100644 index 0000000..0157f20 --- /dev/null +++ b/main/xen/xsa-25.patch @@ -0,0 +1,474 @@ +From 7ce4c765975097bddffaa2fc482d5c16355687af Mon Sep 17 00:00:00 2001 +From: Ian Jackson +Date: Fri, 26 Oct 2012 16:10:04 +0100 +Subject: [PATCH] libxc: builder: limit maximum size of kernel/ramdisk. + +Allowing user supplied kernels of arbitrary sizes, especially during +decompression, can swallow up dom0 memory leading to either virtual +address space exhaustion in the builder process or allocation +failures/OOM killing of both toolstack and unrelated processes. + +We disable these checks when building in a stub domain for pvgrub +since this uses the guest's own memory and is isolated. + +Decompression of gzip compressed kernels and ramdisks has been safe +since 14954:58205257517d (Xen 3.1.0 onwards). + +This is XSA-25 / CVE-2012-4544. + +Also make explicit checks for buffer overflows in various +decompression routines. These were already ruled out due to other +properties of the code but check them as a belt-and-braces measure. 
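Every decompression site in the patch applies the same two-step guard
before growing its output buffer: first reject arithmetic overflow of the
size itself, then reject sizes beyond the configured ceiling
(XC_DOM_DECOMPRESS_MAX, 1GB by default). A condensed sketch of that
loop-body check (names simplified from the diff below):

    /* Double the output buffer, refusing overflow and oversize. */
    if ( outsize > SIZE_MAX / 2 )                      /* 2x would wrap */
        goto fail;
    if ( xc_dom_kernel_check_size(dom, outsize * 2) )  /* over the cap  */
        goto fail;
    tmp_buf = realloc(out_buf, outsize * 2);
    if ( tmp_buf == NULL )
        goto fail;
    out_buf = tmp_buf;
    outsize *= 2;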
+ +Signed-off-by: Ian Campbell +Acked-by: Ian Jackson +[ Includes 25589:60f09d1ab1fe for CVE-2012-2625 ] +--- + stubdom/grub/kexec.c | 4 ++ + tools/libxc/xc_dom.h | 23 ++++++++++- + tools/libxc/xc_dom_bzimageloader.c | 59 +++++++++++++++++++++++++-- + tools/libxc/xc_dom_core.c | 77 ++++++++++++++++++++++++++++++++++-- + tools/pygrub/src/pygrub | 61 ++++++++++++++++++++-------- + 5 files changed, 198 insertions(+), 26 deletions(-) + +diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c +index 06bef52..b21c91a 100644 +--- a/stubdom/grub/kexec.c ++++ b/stubdom/grub/kexec.c +@@ -137,6 +137,10 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char + dom = xc_dom_allocate(xc_handle, cmdline, features); + dom->allocate = kexec_allocate; + ++ /* We are using guest owned memory, therefore no limits. */ ++ xc_dom_kernel_max_size(dom, 0); ++ xc_dom_ramdisk_max_size(dom, 0); ++ + dom->kernel_blob = kernel; + dom->kernel_size = kernel_size; + +diff --git a/tools/libxc/xc_dom.h b/tools/libxc/xc_dom.h +index e72f066..7043f96 100644 +--- a/tools/libxc/xc_dom.h ++++ b/tools/libxc/xc_dom.h +@@ -52,6 +52,9 @@ struct xc_dom_image { + void *ramdisk_blob; + size_t ramdisk_size; + ++ size_t max_kernel_size; ++ size_t max_ramdisk_size; ++ + /* arguments and parameters */ + char *cmdline; + uint32_t f_requested[XENFEAT_NR_SUBMAPS]; +@@ -175,6 +178,23 @@ void xc_dom_release_phys(struct xc_dom_image *dom); + void xc_dom_release(struct xc_dom_image *dom); + int xc_dom_mem_init(struct xc_dom_image *dom, unsigned int mem_mb); + ++/* Set this larger if you have enormous ramdisks/kernels. Note that ++ * you should trust all kernels not to be maliciously large (e.g. to ++ * exhaust all dom0 memory) if you do this (see CVE-2012-4544 / ++ * XSA-25). You can also set the default independently for ++ * ramdisks/kernels in xc_dom_allocate() or call ++ * xc_dom_{kernel,ramdisk}_max_size. 
++ */ ++#ifndef XC_DOM_DECOMPRESS_MAX ++#define XC_DOM_DECOMPRESS_MAX (1024*1024*1024) /* 1GB */ ++#endif ++ ++int xc_dom_kernel_check_size(struct xc_dom_image *dom, size_t sz); ++int xc_dom_kernel_max_size(struct xc_dom_image *dom, size_t sz); ++ ++int xc_dom_ramdisk_check_size(struct xc_dom_image *dom, size_t sz); ++int xc_dom_ramdisk_max_size(struct xc_dom_image *dom, size_t sz); ++ + size_t xc_dom_check_gzip(xc_interface *xch, + void *blob, size_t ziplen); + int xc_dom_do_gunzip(xc_interface *xch, +@@ -224,7 +244,8 @@ void xc_dom_log_memory_footprint(struct xc_dom_image *dom); + void *xc_dom_malloc(struct xc_dom_image *dom, size_t size); + void *xc_dom_malloc_page_aligned(struct xc_dom_image *dom, size_t size); + void *xc_dom_malloc_filemap(struct xc_dom_image *dom, +- const char *filename, size_t * size); ++ const char *filename, size_t * size, ++ const size_t max_size); + char *xc_dom_strdup(struct xc_dom_image *dom, const char *str); + + /* --- alloc memory pool ------------------------------------------- */ +diff --git a/tools/libxc/xc_dom_bzimageloader.c b/tools/libxc/xc_dom_bzimageloader.c +index 9852e67..73cfad1 100644 +--- a/tools/libxc/xc_dom_bzimageloader.c ++++ b/tools/libxc/xc_dom_bzimageloader.c +@@ -47,13 +47,19 @@ static int xc_try_bzip2_decode( + char *out_buf; + char *tmp_buf; + int retval = -1; +- int outsize; ++ unsigned int outsize; + uint64_t total; + + stream.bzalloc = NULL; + stream.bzfree = NULL; + stream.opaque = NULL; + ++ if ( dom->kernel_size == 0) ++ { ++ DOMPRINTF("BZIP2: Input is 0 size"); ++ return -1; ++ } ++ + ret = BZ2_bzDecompressInit(&stream, 0, 0); + if ( ret != BZ_OK ) + { +@@ -66,6 +72,17 @@ static int xc_try_bzip2_decode( + * the input buffer to start, and we'll realloc as needed. + */ + outsize = dom->kernel_size; ++ ++ /* ++ * stream.avail_in and outsize are unsigned int, while kernel_size ++ * is a size_t. Check we aren't overflowing. 
++ */ ++ if ( outsize != dom->kernel_size ) ++ { ++ DOMPRINTF("BZIP2: Input too large"); ++ goto bzip2_cleanup; ++ } ++ + out_buf = malloc(outsize); + if ( out_buf == NULL ) + { +@@ -98,13 +115,20 @@ static int xc_try_bzip2_decode( + if ( stream.avail_out == 0 ) + { + /* Protect against output buffer overflow */ +- if ( outsize > INT_MAX / 2 ) ++ if ( outsize > UINT_MAX / 2 ) + { + DOMPRINTF("BZIP2: output buffer overflow"); + free(out_buf); + goto bzip2_cleanup; + } + ++ if ( xc_dom_kernel_check_size(dom, outsize * 2) ) ++ { ++ DOMPRINTF("BZIP2: output too large"); ++ free(out_buf); ++ goto bzip2_cleanup; ++ } ++ + tmp_buf = realloc(out_buf, outsize * 2); + if ( tmp_buf == NULL ) + { +@@ -172,9 +196,15 @@ static int xc_try_lzma_decode( + unsigned char *out_buf; + unsigned char *tmp_buf; + int retval = -1; +- int outsize; ++ size_t outsize; + const char *msg; + ++ if ( dom->kernel_size == 0) ++ { ++ DOMPRINTF("LZMA: Input is 0 size"); ++ return -1; ++ } ++ + ret = lzma_alone_decoder(&stream, 128*1024*1024); + if ( ret != LZMA_OK ) + { +@@ -251,13 +281,20 @@ static int xc_try_lzma_decode( + if ( stream.avail_out == 0 ) + { + /* Protect against output buffer overflow */ +- if ( outsize > INT_MAX / 2 ) ++ if ( outsize > SIZE_MAX / 2 ) + { + DOMPRINTF("LZMA: output buffer overflow"); + free(out_buf); + goto lzma_cleanup; + } + ++ if ( xc_dom_kernel_check_size(dom, outsize * 2) ) ++ { ++ DOMPRINTF("LZMA: output too large"); ++ free(out_buf); ++ goto lzma_cleanup; ++ } ++ + tmp_buf = realloc(out_buf, outsize * 2); + if ( tmp_buf == NULL ) + { +@@ -327,6 +364,12 @@ static int xc_try_lzo1x_decode( + 0x89, 0x4c, 0x5a, 0x4f, 0x00, 0x0d, 0x0a, 0x1a, 0x0a + }; + ++ /* ++ * lzo_uint should match size_t. Check that this is the case to be ++ * sure we won't overflow various lzo_uint fields. 
++ */ ++ XC_BUILD_BUG_ON(sizeof(lzo_uint) != sizeof(size_t)); ++ + ret = lzo_init(); + if ( ret != LZO_E_OK ) + { +@@ -406,6 +449,14 @@ static int xc_try_lzo1x_decode( + if ( src_len <= 0 || src_len > dst_len || src_len > left ) + break; + ++ msg = "Output buffer overflow"; ++ if ( *size > SIZE_MAX - dst_len ) ++ break; ++ ++ msg = "Decompressed image too large"; ++ if ( xc_dom_kernel_check_size(dom, *size + dst_len) ) ++ break; ++ + msg = "Failed to (re)alloc memory"; + tmp_buf = realloc(out_buf, *size + dst_len); + if ( tmp_buf == NULL ) +diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c +index fea9de5..2a01d7c 100644 +--- a/tools/libxc/xc_dom_core.c ++++ b/tools/libxc/xc_dom_core.c +@@ -159,7 +159,8 @@ void *xc_dom_malloc_page_aligned(struct xc_dom_image *dom, size_t size) + } + + void *xc_dom_malloc_filemap(struct xc_dom_image *dom, +- const char *filename, size_t * size) ++ const char *filename, size_t * size, ++ const size_t max_size) + { + struct xc_dom_mem *block = NULL; + int fd = -1; +@@ -171,6 +172,13 @@ void *xc_dom_malloc_filemap(struct xc_dom_image *dom, + lseek(fd, 0, SEEK_SET); + *size = lseek(fd, 0, SEEK_END); + ++ if ( max_size && *size > max_size ) ++ { ++ xc_dom_panic(dom->xch, XC_OUT_OF_MEMORY, ++ "tried to map file which is too large"); ++ goto err; ++ } ++ + block = malloc(sizeof(*block)); + if ( block == NULL ) + goto err; +@@ -222,6 +230,40 @@ char *xc_dom_strdup(struct xc_dom_image *dom, const char *str) + } + + /* ------------------------------------------------------------------------ */ ++/* decompression buffer sizing */ ++int xc_dom_kernel_check_size(struct xc_dom_image *dom, size_t sz) ++{ ++ /* No limit */ ++ if ( !dom->max_kernel_size ) ++ return 0; ++ ++ if ( sz > dom->max_kernel_size ) ++ { ++ xc_dom_panic(dom->xch, XC_INVALID_KERNEL, ++ "kernel image too large"); ++ return 1; ++ } ++ ++ return 0; ++} ++ ++int xc_dom_ramdisk_check_size(struct xc_dom_image *dom, size_t sz) ++{ ++ /* No limit */ ++ if ( !dom->max_ramdisk_size ) ++ return 0; ++ ++ if ( sz > dom->max_ramdisk_size ) ++ { ++ xc_dom_panic(dom->xch, XC_INVALID_KERNEL, ++ "ramdisk image too large"); ++ return 1; ++ } ++ ++ return 0; ++} ++ ++/* ------------------------------------------------------------------------ */ + /* read files, copy memory blocks, with transparent gunzip */ + + size_t xc_dom_check_gzip(xc_interface *xch, void *blob, size_t ziplen) +@@ -235,7 +277,7 @@ size_t xc_dom_check_gzip(xc_interface *xch, void *blob, size_t ziplen) + + gzlen = blob + ziplen - 4; + unziplen = gzlen[3] << 24 | gzlen[2] << 16 | gzlen[1] << 8 | gzlen[0]; +- if ( (unziplen < 0) || (unziplen > (1024*1024*1024)) ) /* 1GB limit */ ++ if ( (unziplen < 0) || (unziplen > XC_DOM_DECOMPRESS_MAX) ) + { + xc_dom_printf + (xch, +@@ -288,6 +330,9 @@ int xc_dom_try_gunzip(struct xc_dom_image *dom, void **blob, size_t * size) + if ( unziplen == 0 ) + return 0; + ++ if ( xc_dom_kernel_check_size(dom, unziplen) ) ++ return 0; ++ + unzip = xc_dom_malloc(dom, unziplen); + if ( unzip == NULL ) + return -1; +@@ -588,6 +633,9 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *xch, + memset(dom, 0, sizeof(*dom)); + dom->xch = xch; + ++ dom->max_kernel_size = XC_DOM_DECOMPRESS_MAX; ++ dom->max_ramdisk_size = XC_DOM_DECOMPRESS_MAX; ++ + if ( cmdline ) + dom->cmdline = xc_dom_strdup(dom, cmdline); + if ( features ) +@@ -608,10 +656,25 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *xch, + return NULL; + } + ++int xc_dom_kernel_max_size(struct xc_dom_image *dom, size_t sz) ++{ ++ DOMPRINTF("%s: 
kernel_max_size=%zx", __FUNCTION__, sz); ++ dom->max_kernel_size = sz; ++ return 0; ++} ++ ++int xc_dom_ramdisk_max_size(struct xc_dom_image *dom, size_t sz) ++{ ++ DOMPRINTF("%s: ramdisk_max_size=%zx", __FUNCTION__, sz); ++ dom->max_ramdisk_size = sz; ++ return 0; ++} ++ + int xc_dom_kernel_file(struct xc_dom_image *dom, const char *filename) + { + DOMPRINTF("%s: filename=\"%s\"", __FUNCTION__, filename); +- dom->kernel_blob = xc_dom_malloc_filemap(dom, filename, &dom->kernel_size); ++ dom->kernel_blob = xc_dom_malloc_filemap(dom, filename, &dom->kernel_size, ++ dom->max_kernel_size); + if ( dom->kernel_blob == NULL ) + return -1; + return xc_dom_try_gunzip(dom, &dom->kernel_blob, &dom->kernel_size); +@@ -621,7 +684,9 @@ int xc_dom_ramdisk_file(struct xc_dom_image *dom, const char *filename) + { + DOMPRINTF("%s: filename=\"%s\"", __FUNCTION__, filename); + dom->ramdisk_blob = +- xc_dom_malloc_filemap(dom, filename, &dom->ramdisk_size); ++ xc_dom_malloc_filemap(dom, filename, &dom->ramdisk_size, ++ dom->max_ramdisk_size); ++ + if ( dom->ramdisk_blob == NULL ) + return -1; + // return xc_dom_try_gunzip(dom, &dom->ramdisk_blob, &dom->ramdisk_size); +@@ -781,7 +846,11 @@ int xc_dom_build_image(struct xc_dom_image *dom) + void *ramdiskmap; + + unziplen = xc_dom_check_gzip(dom->xch, dom->ramdisk_blob, dom->ramdisk_size); ++ if ( xc_dom_ramdisk_check_size(dom, unziplen) != 0 ) ++ unziplen = 0; ++ + ramdisklen = unziplen ? unziplen : dom->ramdisk_size; ++ + if ( xc_dom_alloc_segment(dom, &dom->ramdisk_seg, "ramdisk", 0, + ramdisklen) != 0 ) + goto err; +diff --git a/tools/pygrub/src/pygrub b/tools/pygrub/src/pygrub +index 17c0083..1a3c1c3 100644 +--- a/tools/pygrub/src/pygrub ++++ b/tools/pygrub/src/pygrub +@@ -28,6 +28,7 @@ import grub.LiloConf + import grub.ExtLinuxConf + + PYGRUB_VER = 0.6 ++FS_READ_MAX = 1024 * 1024 + + def enable_cursor(ison): + if ison: +@@ -421,7 +422,8 @@ class Grub: + if self.__dict__.get('cf', None) is None: + raise RuntimeError, "couldn't find bootloader config file in the image provided." 
+ f = fs.open_file(self.cf.filename) +- buf = f.read() ++ # limit read size to avoid pathological cases ++ buf = f.read(FS_READ_MAX) + del f + self.cf.parse(buf) + +@@ -670,6 +672,37 @@ if __name__ == "__main__": + def usage(): + print >> sys.stderr, "Usage: %s [-q|--quiet] [-i|--interactive] [-n|--not-really] [--output=] [--kernel=] [--ramdisk=] [--args=] [--entry=] [--output-directory=] [--output-format=sxp|simple|simple0] <image>" %(sys.argv[0],) + ++ def copy_from_image(fs, file_to_read, file_type, output_directory, ++ not_really): ++ if not_really: ++ if fs.file_exists(file_to_read): ++ return "<%s:%s>" % (file_type, file_to_read) ++ else: ++ sys.exit("The requested %s file does not exist" % file_type) ++ try: ++ datafile = fs.open_file(file_to_read) ++ except Exception, e: ++ print >>sys.stderr, e ++ sys.exit("Error opening %s in guest" % file_to_read) ++ (tfd, ret) = tempfile.mkstemp(prefix="boot_"+file_type+".", ++ dir=output_directory) ++ dataoff = 0 ++ while True: ++ data = datafile.read(FS_READ_MAX, dataoff) ++ if len(data) == 0: ++ os.close(tfd) ++ del datafile ++ return ret ++ try: ++ os.write(tfd, data) ++ except Exception, e: ++ print >>sys.stderr, e ++ os.close(tfd) ++ os.unlink(ret) ++ del datafile ++ sys.exit("Error writing temporary copy of "+file_type) ++ dataoff += len(data) ++ + try: + opts, args = getopt.gnu_getopt(sys.argv[1:], 'qinh::', + ["quiet", "interactive", "not-really", "help", +@@ -786,24 +819,18 @@ if __name__ == "__main__": + if not fs: + raise RuntimeError, "Unable to find partition containing kernel" + +- if not_really: +- bootcfg["kernel"] = "<kernel:%s>" % chosencfg["kernel"] +- else: +- data = fs.open_file(chosencfg["kernel"]).read() +- (tfd, bootcfg["kernel"]) = tempfile.mkstemp(prefix="boot_kernel.", +- dir=output_directory) +- os.write(tfd, data) +- os.close(tfd) ++ bootcfg["kernel"] = copy_from_image(fs, chosencfg["kernel"], "kernel", ++ output_directory, not_really) + + if chosencfg["ramdisk"]: +- if not_really: +- bootcfg["ramdisk"] = "<ramdisk:%s>" % chosencfg["ramdisk"] +- else: +- data = fs.open_file(chosencfg["ramdisk"],).read() +- (tfd, bootcfg["ramdisk"]) = tempfile.mkstemp( +- prefix="boot_ramdisk.", dir=output_directory) +- os.write(tfd, data) +- os.close(tfd) ++ try: ++ bootcfg["ramdisk"] = copy_from_image(fs, chosencfg["ramdisk"], ++ "ramdisk", output_directory, ++ not_really) ++ except: ++ if not not_really: ++ os.unlink(bootcfg["kernel"]) ++ raise + else: + initrd = None + +-- +1.7.7.5 (Apple Git-26) + diff --git a/main/xen/xsa26-4.1.patch b/main/xen/xsa26-4.1.patch new file mode 100644 index 0000000..e8b8e7d --- /dev/null +++ b/main/xen/xsa26-4.1.patch @@ -0,0 +1,107 @@ +gnttab: fix releasing of memory upon switches between versions + +gnttab_unpopulate_status_frames() incompletely freed the pages +previously used as status frames in that they did not get removed from +the domain's xenpage_list, thus causing subsequent list corruption +when those pages did get allocated again for the same or another purpose. + +Similarly, grant_table_create() and gnttab_grow_table() both improperly +clean up in the event of an error - pages already shared with the guest +can't be freed by just passing them to free_xenheap_page(). Fix this by +sharing the pages only after all allocations succeeded. + +This is CVE-2012-5510 / XSA-26.
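
The ordering rule the fix enforces, do every fallible allocation first and publish to the guest last, can be shown with a standalone sketch (hypothetical helper names; alloc_frame() and publish_frame() are user-space stand-ins for alloc_xenheap_page() and gnttab_create_shared_page(), this is not Xen code):

    #include <stdio.h>
    #include <stdlib.h>

    #define NR_FRAMES 4

    /* Stand-ins for alloc_xenheap_page() / gnttab_create_shared_page(). */
    static void *alloc_frame(void) { return calloc(1, 4096); }
    static void publish_frame(int i) { printf("frame %d now visible to guest\n", i); }

    static int grow_table(void *frames[], int nr)
    {
        int i;

        /* Phase 1: perform every allocation that can fail. */
        for ( i = 0; i < nr; i++ )
            if ( (frames[i] = alloc_frame()) == NULL )
                goto fail;       /* nothing published yet: plain free is safe */

        /* Phase 2: only now expose the frames to the guest. */
        for ( i = 0; i < nr; i++ )
            publish_frame(i);

        return 0;

     fail:
        while ( i-- > 0 )
            free(frames[i]);
        return -1;
    }

    int main(void)
    {
        void *frames[NR_FRAMES];
        return grow_table(frames, NR_FRAMES) ? EXIT_FAILURE : EXIT_SUCCESS;
    }

Keeping the publish step last means every error path still owns its pages outright and may free them directly, which is exactly what the reordered hunks below arrange.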
+ +Signed-off-by: Jan Beulich +Acked-by: Ian Campbell + +diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c +index 6c0aa6f..a180aef 100644 +--- a/xen/common/grant_table.c ++++ b/xen/common/grant_table.c +@@ -1126,12 +1126,13 @@ fault: + } + + static int +-gnttab_populate_status_frames(struct domain *d, struct grant_table *gt) ++gnttab_populate_status_frames(struct domain *d, struct grant_table *gt, ++ unsigned int req_nr_frames) + { + unsigned i; + unsigned req_status_frames; + +- req_status_frames = grant_to_status_frames(gt->nr_grant_frames); ++ req_status_frames = grant_to_status_frames(req_nr_frames); + for ( i = nr_status_frames(gt); i < req_status_frames; i++ ) + { + if ( (gt->status[i] = alloc_xenheap_page()) == NULL ) +@@ -1162,7 +1163,12 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt) + + for ( i = 0; i < nr_status_frames(gt); i++ ) + { +- page_set_owner(virt_to_page(gt->status[i]), dom_xen); ++ struct page_info *pg = virt_to_page(gt->status[i]); ++ ++ BUG_ON(page_get_owner(pg) != d); ++ if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) ) ++ put_page(pg); ++ BUG_ON(pg->count_info & ~PGC_xen_heap); + free_xenheap_page(gt->status[i]); + gt->status[i] = NULL; + } +@@ -1200,19 +1206,18 @@ gnttab_grow_table(struct domain *d, unsigned int req_nr_frames) + clear_page(gt->shared_raw[i]); + } + +- /* Share the new shared frames with the recipient domain */ +- for ( i = nr_grant_frames(gt); i < req_nr_frames; i++ ) +- gnttab_create_shared_page(d, gt, i); +- +- gt->nr_grant_frames = req_nr_frames; +- + /* Status pages - version 2 */ + if (gt->gt_version > 1) + { +- if ( gnttab_populate_status_frames(d, gt) ) ++ if ( gnttab_populate_status_frames(d, gt, req_nr_frames) ) + goto shared_alloc_failed; + } + ++ /* Share the new shared frames with the recipient domain */ ++ for ( i = nr_grant_frames(gt); i < req_nr_frames; i++ ) ++ gnttab_create_shared_page(d, gt, i); ++ gt->nr_grant_frames = req_nr_frames; ++ + return 1; + + shared_alloc_failed: +@@ -2134,7 +2139,7 @@ gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop)) + + if ( op.version == 2 && gt->gt_version < 2 ) + { +- res = gnttab_populate_status_frames(d, gt); ++ res = gnttab_populate_status_frames(d, gt, nr_grant_frames(gt)); + if ( res < 0) + goto out_unlock; + } +@@ -2449,9 +2454,6 @@ grant_table_create( + clear_page(t->shared_raw[i]); + } + +- for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ ) +- gnttab_create_shared_page(d, t, i); +- + /* Status pages for grant table - for version 2 */ + t->status = xmalloc_array(grant_status_t *, + grant_to_status_frames(max_nr_grant_frames)); +@@ -2459,6 +2461,10 @@ grant_table_create( + goto no_mem_4; + memset(t->status, 0, + grant_to_status_frames(max_nr_grant_frames) * sizeof(t->status[0])); ++ ++ for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ ) ++ gnttab_create_shared_page(d, t, i); ++ + t->nr_status_frames = 0; + + /* Okay, install the structure. */ diff --git a/main/xen/xsa27-4.1.patch b/main/xen/xsa27-4.1.patch new file mode 100644 index 0000000..f0764cb --- /dev/null +++ b/main/xen/xsa27-4.1.patch @@ -0,0 +1,168 @@ +hvm: Limit the size of large HVM op batches + +Doing large p2m updates for HVMOP_track_dirty_vram without preemption +ties up the physical processor. Integrating preemption into the p2m +updates is hard so simply limit to 1GB which is sufficient for a 15000 +* 15000 * 32bpp framebuffer. + +For HVMOP_modified_memory and HVMOP_set_mem_type preemptible add the +necessary machinery to handle preemption. 
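
The preemption machinery added below follows the usual Xen continuation pattern: consume the batch one element at a time and, when a preemption check fires, write the updated request back to the caller and return a retry code. A self-contained sketch of that pattern, with a stubbed preempt_check() and a plain pointer standing in for the guest handle (illustrative only, not the hypervisor API):

    #include <stdio.h>

    #define RC_AGAIN (-2)   /* stand-in for the hypercall's -EAGAIN path */

    struct op { unsigned long first_pfn; unsigned long nr; };

    /* Stub: pretend a preemption request is pending every fourth element. */
    static int preempt_check(unsigned long pfn) { return (pfn % 4) == 3; }

    static int process_batch(struct op *guest_arg)
    {
        struct op a = *guest_arg;          /* copy_from_guest() stand-in */

        while ( a.nr > 0 )
        {
            printf("handling pfn %lu\n", a.first_pfn);

            a.first_pfn++;
            a.nr--;

            /* Check for continuation if it's not the last iteration. */
            if ( a.nr > 0 && preempt_check(a.first_pfn) )
            {
                *guest_arg = a;            /* copy_to_guest() stand-in */
                return RC_AGAIN;           /* caller re-issues the op */
            }
        }
        return 0;
    }

    int main(void)
    {
        struct op a = { 0, 10 };
        int rc;

        while ( (rc = process_batch(&a)) == RC_AGAIN )
            puts("-- continuation --");
        return rc;
    }

The caller simply re-issues the operation until the retry code stops coming back, which is what the hypercall continuation does on the guest's behalf.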
+ +This is CVE-2012-5511 / XSA-27. + +Signed-off-by: Tim Deegan +Signed-off-by: Ian Campbell +Acked-by: Ian Jackson + +x86/paging: Don't allocate user-controlled amounts of stack memory. + +This is XSA-27 / CVE-2012-5511. + +Signed-off-by: Tim Deegan +Acked-by: Jan Beulich +v2: Provide definition of GB to fix x86-32 compile. + +Signed-off-by: Jan Beulich +Acked-by: Ian Jackson + + +diff -r 5639047d6c9f xen/arch/x86/hvm/hvm.c +--- a/xen/arch/x86/hvm/hvm.c Mon Nov 19 09:43:48 2012 +0100 ++++ b/xen/arch/x86/hvm/hvm.c Mon Nov 19 16:00:33 2012 +0000 +@@ -3471,6 +3471,9 @@ long do_hvm_op(unsigned long op, XEN_GUE + if ( !is_hvm_domain(d) ) + goto param_fail2; + ++ if ( a.nr > GB(1) >> PAGE_SHIFT ) ++ goto param_fail2; ++ + rc = xsm_hvm_param(d, op); + if ( rc ) + goto param_fail2; +@@ -3498,7 +3501,6 @@ long do_hvm_op(unsigned long op, XEN_GUE + struct xen_hvm_modified_memory a; + struct domain *d; + struct p2m_domain *p2m; +- unsigned long pfn; + + if ( copy_from_guest(&a, arg, 1) ) + return -EFAULT; +@@ -3526,8 +3528,9 @@ long do_hvm_op(unsigned long op, XEN_GUE + goto param_fail3; + + p2m = p2m_get_hostp2m(d); +- for ( pfn = a.first_pfn; pfn < a.first_pfn + a.nr; pfn++ ) ++ while ( a.nr > 0 ) + { ++ unsigned long pfn = a.first_pfn; + p2m_type_t t; + mfn_t mfn = gfn_to_mfn(p2m, pfn, &t); + if ( p2m_is_paging(t) ) +@@ -3548,6 +3551,19 @@ long do_hvm_op(unsigned long op, XEN_GUE + /* don't take a long time and don't die either */ + sh_remove_shadows(d->vcpu[0], mfn, 1, 0); + } ++ ++ a.first_pfn++; ++ a.nr--; ++ ++ /* Check for continuation if it's not the last iteration */ ++ if ( a.nr > 0 && hypercall_preempt_check() ) ++ { ++ if ( copy_to_guest(arg, &a, 1) ) ++ rc = -EFAULT; ++ else ++ rc = -EAGAIN; ++ break; ++ } + } + + param_fail3: +@@ -3595,7 +3611,6 @@ long do_hvm_op(unsigned long op, XEN_GUE + struct xen_hvm_set_mem_type a; + struct domain *d; + struct p2m_domain *p2m; +- unsigned long pfn; + + /* Interface types to internal p2m types */ + p2m_type_t memtype[] = { +@@ -3625,8 +3640,9 @@ long do_hvm_op(unsigned long op, XEN_GUE + goto param_fail4; + + p2m = p2m_get_hostp2m(d); +- for ( pfn = a.first_pfn; pfn < a.first_pfn + a.nr; pfn++ ) ++ while ( a.nr > 0 ) + { ++ unsigned long pfn = a.first_pfn; + p2m_type_t t; + p2m_type_t nt; + mfn_t mfn; +@@ -3662,6 +3678,19 @@ long do_hvm_op(unsigned long op, XEN_GUE + goto param_fail4; + } + } ++ ++ a.first_pfn++; ++ a.nr--; ++ ++ /* Check for continuation if it's not the last iteration */ ++ if ( a.nr > 0 && hypercall_preempt_check() ) ++ { ++ if ( copy_to_guest(arg, &a, 1) ) ++ rc = -EFAULT; ++ else ++ rc = -EAGAIN; ++ goto param_fail4; ++ } + } + + rc = 0; +diff -r 5639047d6c9f xen/arch/x86/mm/paging.c +--- a/xen/arch/x86/mm/paging.c Mon Nov 19 09:43:48 2012 +0100 ++++ b/xen/arch/x86/mm/paging.c Mon Nov 19 16:00:33 2012 +0000 +@@ -529,13 +529,18 @@ int paging_log_dirty_range(struct domain + + if ( !d->arch.paging.log_dirty.fault_count && + !d->arch.paging.log_dirty.dirty_count ) { +- int size = (nr + BITS_PER_LONG - 1) / BITS_PER_LONG; +- unsigned long zeroes[size]; +- memset(zeroes, 0x00, size * BYTES_PER_LONG); ++ static uint8_t zeroes[PAGE_SIZE]; ++ int off, size; ++ ++ size = ((nr + BITS_PER_LONG - 1) / BITS_PER_LONG) * sizeof (long); + rv = 0; +- if ( copy_to_guest_offset(dirty_bitmap, 0, (uint8_t *) zeroes, +- size * BYTES_PER_LONG) != 0 ) +- rv = -EFAULT; ++ for ( off = 0; !rv && off < size; /* off advances by todo below */ ) ++ { ++ int todo = min(size - off, (int) PAGE_SIZE); ++ if ( copy_to_guest_offset(dirty_bitmap, off, zeroes, todo) ) ++ rv
= -EFAULT; ++ off += todo; ++ } + goto out; + } + d->arch.paging.log_dirty.fault_count = 0; +diff -r 5639047d6c9f xen/include/asm-x86/config.h +--- a/xen/include/asm-x86/config.h Mon Nov 19 09:43:48 2012 +0100 ++++ b/xen/include/asm-x86/config.h Mon Nov 19 16:00:33 2012 +0000 +@@ -108,6 +108,9 @@ extern unsigned int trampoline_xen_phys_ + extern unsigned char trampoline_cpu_started; + extern char wakeup_start[]; + extern unsigned int video_mode, video_flags; ++ ++#define GB(_gb) (_gb ## UL << 30) ++ + #endif + + #define asmlinkage +@@ -123,7 +126,6 @@ extern unsigned int video_mode, video_fl + #define PML4_ADDR(_slot) \ + ((((_slot ## UL) >> 8) * 0xffff000000000000UL) | \ + (_slot ## UL << PML4_ENTRY_BITS)) +-#define GB(_gb) (_gb ## UL << 30) + #else + #define PML4_ENTRY_BYTES (1 << PML4_ENTRY_BITS) + #define PML4_ADDR(_slot) \ diff --git a/main/xen/xsa28-4.1.patch b/main/xen/xsa28-4.1.patch new file mode 100644 index 0000000..fe4638e --- /dev/null +++ b/main/xen/xsa28-4.1.patch @@ -0,0 +1,36 @@ +x86/HVM: range check xen_hvm_set_mem_access.hvmmem_access before use + +Otherwise an out of bounds array access can happen if changing the +default access is being requested, which - if it doesn't crash Xen - +would subsequently allow reading arbitrary memory through +HVMOP_get_mem_access (again, unless that operation crashes Xen). + +This is XSA-28 / CVE-2012-5512. + +Signed-off-by: Jan Beulich +Acked-by: Tim Deegan +Acked-by: Ian Campbell + +diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c +index 66cf805..08b6418 100644 +--- a/xen/arch/x86/hvm/hvm.c ++++ b/xen/arch/x86/hvm/hvm.c +@@ -3699,7 +3699,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg) + return rc; + + rc = -EINVAL; +- if ( !is_hvm_domain(d) ) ++ if ( !is_hvm_domain(d) || a.hvmmem_access >= ARRAY_SIZE(memaccess) ) + goto param_fail5; + + p2m = p2m_get_hostp2m(d); +@@ -3719,9 +3719,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg) + ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) ) + goto param_fail5; + +- if ( a.hvmmem_access >= ARRAY_SIZE(memaccess) ) +- goto param_fail5; +- + for ( pfn = a.first_pfn; pfn < a.first_pfn + a.nr; pfn++ ) + { + p2m_type_t t; diff --git a/main/xen/xsa29-4.1.patch b/main/xen/xsa29-4.1.patch new file mode 100644 index 0000000..f8f6e38 --- /dev/null +++ b/main/xen/xsa29-4.1.patch @@ -0,0 +1,49 @@ +xen: add missing guest address range checks to XENMEM_exchange handlers + +Ever since its existence (3.0.3 iirc) the handler for this has been +using non address range checking guest memory accessors (i.e. +the ones prefixed with two underscores) without first range +checking the accessed space (via guest_handle_okay()), allowing +a guest to access and overwrite hypervisor memory. + +This is XSA-29 / CVE-2012-5513. 
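
The invariant being restored: the double-underscore guest accessors perform no range checking of their own, so guest_handle_okay() must validate the whole array before any of them run. A user-space analogue of that check-before-access discipline (hypothetical window size and helper names, not the Xen accessors):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define GUEST_SPAN 0x1000UL   /* hypothetical guest-accessible window, bytes */

    static uint8_t guest_mem[GUEST_SPAN];

    /* Analogue of guest_handle_okay(): the whole array must fit the window. */
    static int handle_okay(uint64_t off, uint64_t nr, uint64_t esz)
    {
        return off <= GUEST_SPAN && nr <= (GUEST_SPAN - off) / esz;
    }

    /* Analogue of a __copy-style accessor: performs no checks of its own. */
    static void raw_copy(uint64_t off, const void *src, size_t len)
    {
        memcpy(guest_mem + off, src, len);
    }

    static int do_exchange(uint64_t in_off, uint64_t in_nr,
                           uint64_t out_off, uint64_t out_nr)
    {
        uint64_t extent = 42;

        /* The XSA-29 rule: validate both arrays up front, otherwise the
         * raw accessor below could write outside the window. */
        if ( !handle_okay(in_off, in_nr, sizeof(extent)) ||
             !handle_okay(out_off, out_nr, sizeof(extent)) )
            return -1;                     /* -EFAULT in the real handler */

        raw_copy(out_off, &extent, sizeof(extent));
        return 0;
    }

    int main(void)
    {
        printf("in range:     %d\n", do_exchange(0, 8, 64, 8));
        printf("out of range: %d\n", do_exchange(0, 8, GUEST_SPAN, 1));
        return 0;
    }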
+ +Signed-off-by: Jan Beulich +Acked-by: Ian Campbell +Acked-by: Ian Jackson + +diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c +index 2402984..1d877fc 100644 +--- a/xen/common/compat/memory.c ++++ b/xen/common/compat/memory.c +@@ -114,6 +114,12 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat) + (cmp.xchg.out.nr_extents << cmp.xchg.out.extent_order)) ) + return -EINVAL; + ++ if ( !compat_handle_okay(cmp.xchg.in.extent_start, ++ cmp.xchg.in.nr_extents) || ++ !compat_handle_okay(cmp.xchg.out.extent_start, ++ cmp.xchg.out.nr_extents) ) ++ return -EFAULT; ++ + start_extent = cmp.xchg.nr_exchanged; + end_extent = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.xchg)) / + (((1U << ABS(order_delta)) + 1) * +diff --git a/xen/common/memory.c b/xen/common/memory.c +index 4e7c234..59379d3 100644 +--- a/xen/common/memory.c ++++ b/xen/common/memory.c +@@ -289,6 +289,13 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg) + goto fail_early; + } + ++ if ( !guest_handle_okay(exch.in.extent_start, exch.in.nr_extents) || ++ !guest_handle_okay(exch.out.extent_start, exch.out.nr_extents) ) ++ { ++ rc = -EFAULT; ++ goto fail_early; ++ } ++ + /* Only privileged guests can allocate multi-page contiguous extents. */ + if ( !multipage_allocation_permitted(current->domain, + exch.in.extent_order) || diff --git a/main/xen/xsa30-4.1.patch b/main/xen/xsa30-4.1.patch new file mode 100644 index 0000000..817879a --- /dev/null +++ b/main/xen/xsa30-4.1.patch @@ -0,0 +1,57 @@ +xen: fix error handling of guest_physmap_mark_populate_on_demand() + +The only user of the "out" label bypasses a necessary unlock, thus +enabling the caller to lock up Xen. + +Also, the function was never meant to be called by a guest for itself, +so rather than inspecting the code paths in depth for potential other +problems this might cause, and adjusting e.g. the non-guest printk() +in the above error path, just disallow the guest access to it. + +Finally, the printk() (considering its potential of spamming the log, +the more that it's not using XENLOG_GUEST), is being converted to +P2M_DEBUG(), as debugging is what it apparently was added for in the +first place. + +This is XSA-30 / CVE-2012-5514. + +Signed-off-by: Jan Beulich +Acked-by: Ian Campbell +Acked-by: George Dunlap +Acked-by: Ian Jackson + +diff -r 5639047d6c9f xen/arch/x86/mm/p2m.c +--- a/xen/arch/x86/mm/p2m.c Mon Nov 19 09:43:48 2012 +0100 ++++ b/xen/arch/x86/mm/p2m.c Thu Nov 22 17:07:37 2012 +0000 +@@ -2412,6 +2412,9 @@ guest_physmap_mark_populate_on_demand(st + mfn_t omfn; + int rc = 0; + ++ if ( !IS_PRIV_FOR(current->domain, d) ) ++ return -EPERM; ++ + if ( !paging_mode_translate(d) ) + return -EINVAL; + +@@ -2430,8 +2433,7 @@ guest_physmap_mark_populate_on_demand(st + omfn = gfn_to_mfn_query(p2m, gfn + i, &ot); + if ( p2m_is_ram(ot) ) + { +- printk("%s: gfn_to_mfn returned type %d!\n", +- __func__, ot); ++ P2M_DEBUG("gfn_to_mfn returned type %d!\n", ot); + rc = -EBUSY; + goto out; + } +@@ -2453,10 +2455,10 @@ guest_physmap_mark_populate_on_demand(st + BUG_ON(p2m->pod.entry_count < 0); + } + ++out: + audit_p2m(p2m, 1); + p2m_unlock(p2m); + +-out: + return rc; + } + diff --git a/main/xen/xsa31-4.1.patch b/main/xen/xsa31-4.1.patch new file mode 100644 index 0000000..1f3d929 --- /dev/null +++ b/main/xen/xsa31-4.1.patch @@ -0,0 +1,50 @@ +memop: limit guest specified extent order + +Allowing unbounded order values here causes almost unbounded loops +and/or partially incomplete requests, particularly in PoD code. 
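
To see the failure mode, note that such a request covers nr_extents chunks of (1UL << extent_order) pages each, so an attacker-chosen order inflates both the shift and the per-extent work. A standalone sketch of the kind of check being added (illustrative MAX_ORDER value only; Xen's is architecture-specific):

    #include <stdio.h>

    #define MAX_ORDER 20   /* illustrative cap, stand-in for Xen's value */

    /* Returns 0 iff a (nr_extents, extent_order) pair is safe to loop over. */
    static int check_extents(unsigned long nr, unsigned int order)
    {
        /* Must come first: a huge order makes the shift below undefined
         * and the per-extent loop effectively unbounded. */
        if ( order > MAX_ORDER )
            return -1;

        /* Total page count nr << order must not overflow an unsigned long. */
        if ( (~0UL >> order) < nr )
            return -1;

        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_extents(512, 9));    /* accepted */
        printf("%d\n", check_extents(~0UL, 12));  /* rejected: count overflows */
        printf("%d\n", check_extents(1, 64));     /* rejected: order too large */
        return 0;
    }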
+ +The added range checks in populate_physmap(), decrease_reservation(), +and the "in" one in memory_exchange() architecturally all could use +PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to +MAX_ORDER. + +This is XSA-31 / CVE-2012-5515. + +Signed-off-by: Jan Beulich +Acked-by: Tim Deegan +Acked-by: Ian Jackson + +diff --git a/xen/common/memory.c b/xen/common/memory.c +index 4e7c234..9b9fb18 100644 +--- a/xen/common/memory.c ++++ b/xen/common/memory.c +@@ -117,7 +117,8 @@ static void populate_physmap(struct memop_args *a) + + if ( a->memflags & MEMF_populate_on_demand ) + { +- if ( guest_physmap_mark_populate_on_demand(d, gpfn, ++ if ( a->extent_order > MAX_ORDER || ++ guest_physmap_mark_populate_on_demand(d, gpfn, + a->extent_order) < 0 ) + goto out; + } +@@ -216,7 +217,8 @@ static void decrease_reservation(struct memop_args *a) + xen_pfn_t gmfn; + + if ( !guest_handle_subrange_okay(a->extent_list, a->nr_done, +- a->nr_extents-1) ) ++ a->nr_extents-1) || ++ a->extent_order > MAX_ORDER ) + return; + + for ( i = a->nr_done; i < a->nr_extents; i++ ) +@@ -278,6 +280,9 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg) + if ( (exch.nr_exchanged > exch.in.nr_extents) || + /* Input and output domain identifiers match? */ + (exch.in.domid != exch.out.domid) || ++ /* Extent orders are sensible? */ ++ (exch.in.extent_order > MAX_ORDER) || ++ (exch.out.extent_order > MAX_ORDER) || + /* Sizes of input and output lists do not overflow a long? */ + ((~0UL >> exch.in.extent_order) < exch.in.nr_extents) || + ((~0UL >> exch.out.extent_order) < exch.out.nr_extents) || -- 1.7.7.5 (Apple Git-26) --- Unsubscribe: alpine-devel+unsubscribe@lists.alpinelinux.org Help: alpine-devel+help@lists.alpinelinux.org ---