author | Ard Biesheuvel <ardb@kernel.org> | 2022-10-20 15:54:33 +0200 |
---|---|---|
committer | Ard Biesheuvel <ardb@kernel.org> | 2023-09-11 10:13:17 +0200 |
commit | cf8e8658100d4eae80ce9b21f7a81cb024dd5057 (patch) | |
tree | 31d3b640bebf97c33d354768fc44dfd532c2df81 /arch/ia64/mm/contig.c | |
parent | acpi: Provide ia64 dummy implementation of acpi_proc_quirk_mwait_check() (diff) | |
arch: Remove Itanium (IA-64) architecture

The Itanium architecture is obsolete, and an informal survey [0] reveals
that any residual use of Itanium hardware in production is mostly HP-UX
or OpenVMS based. The use of Linux on Itanium appears to be limited to
enthusiasts who occasionally boot a fresh Linux kernel to see whether
things are still working as intended, and perhaps to churn out some
distro packages that are rarely used in practice.

None of the original companies behind Itanium still produce or support
any hardware or software for the architecture, and it is listed as
'Orphaned' in the MAINTAINERS file, as apparently none of the engineers
who contributed on behalf of those companies (nor anyone else, for that
matter) have been willing to support or maintain the architecture
upstream or even be responsible for applying the odd fix. The Intel
firmware team removed all IA-64 support from the Tianocore/EDK2
reference implementation of EFI in 2018. (Itanium is the original
architecture for which EFI was developed, and the way Linux supports it
deviates significantly from other architectures.) Some distros, such as
Debian and Gentoo, still maintain [unofficial] ia64 ports, but many
dropped support years ago.

While the argument is being made [1] that there is a 'for the common
good' angle to being able to build and run existing projects such as the
Grid Community Toolkit [2] on Itanium for interoperability testing, the
fact remains that none of those projects are known to be deployed on
Linux/ia64, and very few people actually have access to such a system in
the first place. Even if there were ways imaginable in which Linux/ia64
could be put to good use today, what matters is whether anyone is
actually doing that, and this does not appear to be the case.

There are no emulators widely available, and so boot testing Itanium is
generally infeasible for ordinary contributors. GCC still supports IA-64
but its compile farm [3] no longer has any IA-64 machines. GLIBC would
like to get rid of IA-64 [4] too because it would permit some overdue
code cleanups. In summary, the benefits to the ecosystem of having IA-64
be part of it are mostly theoretical, whereas the maintenance overhead
of keeping it supported is real.

So let's rip off the band aid and remove the IA-64 arch code entirely.
This follows the timeline proposed by the Debian/ia64 maintainer [5],
which removes support in a controlled manner, leaving IA-64 in a known
good state in the most recent LTS release. Other projects will follow
once the kernel support is removed.

[0] https://lore.kernel.org/all/CAMj1kXFCMh_578jniKpUtx_j8ByHnt=s7S+yQ+vGbKt9ud7+kQ@mail.gmail.com/
[1] https://lore.kernel.org/all/0075883c-7c51-00f5-2c2d-5119c1820410@web.de/
[2] https://gridcf.org/gct-docs/latest/index.html
[3] https://cfarm.tetaneutral.net/machines/list/
[4] https://lore.kernel.org/all/87bkiilpc4.fsf@mid.deneb.enyo.de/
[5] https://lore.kernel.org/all/ff58a3e76e5102c94bb5946d99187b358def688a.camel@physik.fu-berlin.de/

Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Diffstat (limited to 'arch/ia64/mm/contig.c')
-rw-r--r-- | arch/ia64/mm/contig.c | 208 |
1 file changed, 0 insertions, 208 deletions
diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
deleted file mode 100644
index 1e9eaa107eb7..000000000000
--- a/arch/ia64/mm/contig.c
+++ /dev/null
@@ -1,208 +0,0 @@
-/*
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 1998-2003 Hewlett-Packard Co
- * David Mosberger-Tang <davidm@hpl.hp.com>
- * Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 2000, Rohit Seth <rohit.seth@intel.com>
- * Copyright (C) 1999 VA Linux Systems
- * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 2003 Silicon Graphics, Inc. All rights reserved.
- *
- * Routines used by ia64 machines with contiguous (or virtually contiguous)
- * memory.
- */
-#include <linux/efi.h>
-#include <linux/memblock.h>
-#include <linux/mm.h>
-#include <linux/nmi.h>
-#include <linux/swap.h>
-#include <linux/sizes.h>
-
-#include <asm/efi.h>
-#include <asm/meminit.h>
-#include <asm/sections.h>
-#include <asm/mca.h>
-
-/* physical address where the bootmem map is located */
-unsigned long bootmap_start;
-
-#ifdef CONFIG_SMP
-static void *cpu_data;
-/**
- * per_cpu_init - setup per-cpu variables
- *
- * Allocate and setup per-cpu data areas.
- */
-void *per_cpu_init(void)
-{
-        static bool first_time = true;
-        void *cpu0_data = __cpu0_per_cpu;
-        unsigned int cpu;
-
-        if (!first_time)
-                goto skip;
-        first_time = false;
-
-        /*
-         * get_free_pages() cannot be used before cpu_init() done.
-         * BSP allocates PERCPU_PAGE_SIZE bytes for all possible CPUs
-         * to avoid that AP calls get_zeroed_page().
-         */
-        for_each_possible_cpu(cpu) {
-                void *src = cpu == 0 ? cpu0_data : __phys_per_cpu_start;
-
-                memcpy(cpu_data, src, __per_cpu_end - __per_cpu_start);
-                __per_cpu_offset[cpu] = (char *)cpu_data - __per_cpu_start;
-                per_cpu(local_per_cpu_offset, cpu) = __per_cpu_offset[cpu];
-
-                /*
-                 * percpu area for cpu0 is moved from the __init area
-                 * which is setup by head.S and used till this point.
-                 * Update ar.k3. This move is ensures that percpu
-                 * area for cpu0 is on the correct node and its
-                 * virtual address isn't insanely far from other
-                 * percpu areas which is important for congruent
-                 * percpu allocator.
-                 */
-                if (cpu == 0)
-                        ia64_set_kr(IA64_KR_PER_CPU_DATA, __pa(cpu_data) -
-                                    (unsigned long)__per_cpu_start);
-
-                cpu_data += PERCPU_PAGE_SIZE;
-        }
-skip:
-        return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
-}
-
-static inline __init void
-alloc_per_cpu_data(void)
-{
-        size_t size = PERCPU_PAGE_SIZE * num_possible_cpus();
-
-        cpu_data = memblock_alloc_from(size, PERCPU_PAGE_SIZE,
-                                       __pa(MAX_DMA_ADDRESS));
-        if (!cpu_data)
-                panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
-                      __func__, size, PERCPU_PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
-}
-
-/**
- * setup_per_cpu_areas - setup percpu areas
- *
- * Arch code has already allocated and initialized percpu areas. All
- * this function has to do is to teach the determined layout to the
- * dynamic percpu allocator, which happens to be more complex than
- * creating whole new ones using helpers.
- */
-void __init
-setup_per_cpu_areas(void)
-{
-        struct pcpu_alloc_info *ai;
-        struct pcpu_group_info *gi;
-        unsigned int cpu;
-        ssize_t static_size, reserved_size, dyn_size;
-
-        ai = pcpu_alloc_alloc_info(1, num_possible_cpus());
-        if (!ai)
-                panic("failed to allocate pcpu_alloc_info");
-        gi = &ai->groups[0];
-
-        /* units are assigned consecutively to possible cpus */
-        for_each_possible_cpu(cpu)
-                gi->cpu_map[gi->nr_units++] = cpu;
-
-        /* set parameters */
-        static_size = __per_cpu_end - __per_cpu_start;
-        reserved_size = PERCPU_MODULE_RESERVE;
-        dyn_size = PERCPU_PAGE_SIZE - static_size - reserved_size;
-        if (dyn_size < 0)
-                panic("percpu area overflow static=%zd reserved=%zd\n",
-                      static_size, reserved_size);
-
-        ai->static_size = static_size;
-        ai->reserved_size = reserved_size;
-        ai->dyn_size = dyn_size;
-        ai->unit_size = PERCPU_PAGE_SIZE;
-        ai->atom_size = PAGE_SIZE;
-        ai->alloc_size = PERCPU_PAGE_SIZE;
-
-        pcpu_setup_first_chunk(ai, __per_cpu_start + __per_cpu_offset[0]);
-        pcpu_free_alloc_info(ai);
-}
-#else
-#define alloc_per_cpu_data() do { } while (0)
-#endif /* CONFIG_SMP */
-
-/**
- * find_memory - setup memory map
- *
- * Walk the EFI memory map and find usable memory for the system, taking
- * into account reserved areas.
- */
-void __init
-find_memory (void)
-{
-        reserve_memory();
-
-        /* first find highest page frame number */
-        min_low_pfn = ~0UL;
-        max_low_pfn = 0;
-        efi_memmap_walk(find_max_min_low_pfn, NULL);
-        max_pfn = max_low_pfn;
-
-        memblock_add_node(0, PFN_PHYS(max_low_pfn), 0, MEMBLOCK_NONE);
-
-        find_initrd();
-
-        alloc_per_cpu_data();
-}
-
-static int __init find_largest_hole(u64 start, u64 end, void *arg)
-{
-        u64 *max_gap = arg;
-
-        static u64 last_end = PAGE_OFFSET;
-
-        /* NOTE: this algorithm assumes efi memmap table is ordered */
-
-        if (*max_gap < (start - last_end))
-                *max_gap = start - last_end;
-        last_end = end;
-        return 0;
-}
-
-static void __init verify_gap_absence(void)
-{
-        unsigned long max_gap;
-
-        /* Forbid FLATMEM if hole is > than 1G */
-        efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
-        if (max_gap >= SZ_1G)
-                panic("Cannot use FLATMEM with %ldMB hole\n"
-                      "Please switch over to SPARSEMEM\n",
-                      (max_gap >> 20));
-}
-
-/*
- * Set up the page tables.
- */
-
-void __init
-paging_init (void)
-{
-        unsigned long max_dma;
-        unsigned long max_zone_pfns[MAX_NR_ZONES];
-
-        memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-        max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
-        max_zone_pfns[ZONE_DMA32] = max_dma;
-        max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
-
-        verify_gap_absence();
-
-        free_area_init(max_zone_pfns);
-        zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
-}