[ Merge note: this pull request depends on you having merged
  two locking commits in the locking tree, part of the
  locking-core-2025-03-22 pull request. ]
 
 x86 CPU features support:
   - Generate the <asm/cpufeaturemasks.h> header based on build config
     (H. Peter Anvin, Xin Li)
   - x86 CPUID parsing updates and fixes (Ahmed S. Darwish)
   - Introduce the 'setcpuid=' boot parameter (Brendan Jackman)
   - Enable modifying CPU bug flags with '{clear,set}cpuid='
     (Brendan Jackman)
   - Utilize CPU-type for CPU matching (Pawan Gupta)
   - Warn about unmet CPU feature dependencies (Sohil Mehta)
   - Prepare for new Intel Family numbers (Sohil Mehta)
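
  The <asm/cpufeaturemasks.h> item above replaces the old
  required-features.h / disabled-features.h pair with a single header
  generated from the .config. A standalone C sketch of the underlying
  idea follows; the mask and helper names are illustrative, not the
  kernel's (only the DISABLED_MASK12 / bit 17 pairing is taken from the
  FRED example in the Kconfig.cpufeatures comment later in this diff):

    /* Toy model: a build-time mask lets a feature test fold to a constant. */
    #include <stdio.h>

    #define DISABLED_MASK12     (1u << 17)  /* e.g. X86_FRED=n in .config */
    #define FEATURE_FRED_BIT    17

    static int runtime_cpuid_check(int bit)
    {
            return 0;                       /* stand-in for a CPUID lookup */
    }

    /* The condition is a compile-time constant, so the dead branch is
     * discarded entirely -- the effect the generated header provides for
     * cpu_feature_enabled()-style checks. */
    #define feature_enabled(bit) \
            ((DISABLED_MASK12 & (1u << (bit))) ? 0 : runtime_cpuid_check(bit))

    int main(void)
    {
            printf("FRED enabled: %d\n", feature_enabled(FEATURE_FRED_BIT));
            return 0;
    }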
 
 Percpu code:
   - Standardize & reorganize the x86 percpu layout and
     related cleanups (Brian Gerst)
   - Convert the stackprotector canary to a regular percpu
     variable (Brian Gerst)
   - Add a percpu subsection for cache hot data (Brian Gerst)
   - Unify __pcpu_op{1,2}_N() macros to __pcpu_op_N() (Uros Bizjak)
   - Construct __percpu_seg_override from __percpu_seg (Uros Bizjak)
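
  As a rough userspace analogue of the percpu work above (including the
  stack-protector canary becoming a regular percpu variable): GCC's
  __thread storage is addressed through a segment register (%fs on
  x86-64), much like kernel percpu data is addressed through %gs. A
  minimal sketch, assuming a POSIX threads environment:

    #include <pthread.h>
    #include <stdio.h>

    static __thread unsigned long counter;  /* one instance per thread */

    static void *worker(void *arg)
    {
            for (int i = 0; i < 1000; i++)
                    counter++;              /* compiles to %fs-relative accesses */
            printf("thread %ld: counter=%lu\n", (long)arg, counter);
            return NULL;
    }

    int main(void)
    {
            pthread_t t[2];

            for (long i = 0; i < 2; i++)
                    pthread_create(&t[i], NULL, worker, (void *)i);
            for (int i = 0; i < 2; i++)
                    pthread_join(t[i], NULL);
            return 0;                       /* build with -pthread */
    }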
 
 MM:
   - Add support for broadcast TLB invalidation using AMD's INVLPGB instruction
     (Rik van Riel)
   - Rework ROX cache to avoid writable copy (Mike Rapoport)
   - PAT: restore large ROX pages after fragmentation
     (Kirill A. Shutemov, Mike Rapoport)
   - Make memremap(MEMREMAP_WB) map memory as encrypted by default
     (Kirill A. Shutemov)
   - Robustify page table initialization (Kirill A. Shutemov)
   - Fix flush_tlb_range() when used for zapping normal PMDs (Jann Horn)
   - Clear _PAGE_DIRTY for kernel mappings when we clear _PAGE_RW
     (Matthew Wilcox)
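
  On the last MM item: a PTE with Write=0 and Dirty=1 is the encoding
  that shadow-stack capable CPUs treat specially, so dropping _PAGE_RW
  without also dropping _PAGE_DIRTY can leave an odd-looking kernel
  mapping. A toy sketch of the combined clear (the bit positions match
  x86; the helper name here is made up, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>

    #define _PAGE_RW     (1ull << 1)
    #define _PAGE_DIRTY  (1ull << 6)

    static uint64_t pte_wrprotect_clean(uint64_t pte)
    {
            /* Clear RW and Dirty together so Write=0/Dirty=1 never appears. */
            return pte & ~(_PAGE_RW | _PAGE_DIRTY);
    }

    int main(void)
    {
            uint64_t pte = 0x1000 | _PAGE_RW | _PAGE_DIRTY | 0x1 /* present */;

            printf("before %#llx after %#llx\n",
                   (unsigned long long)pte,
                   (unsigned long long)pte_wrprotect_clean(pte));
            return 0;
    }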
 
 KASLR:
   - x86/kaslr: Reduce KASLR entropy on most x86 systems,
     to support PCI BAR space beyond the 10TiB region
     (CONFIG_PCI_P2PDMA=y) (Balbir Singh)
 
 CPU bugs:
   - Implement FineIBT-BHI mitigation (Peter Zijlstra)
   - speculation: Simplify and make CALL_NOSPEC consistent (Pawan Gupta)
   - speculation: Add a conditional CS prefix to CALL_NOSPEC (Pawan Gupta)
   - RFDS: Exclude P-only parts from the RFDS affected list (Pawan Gupta)
 
 System calls:
   - Break up entry/common.c (Brian Gerst)
   - Move sysctls into arch/x86 (Joel Granados)
 
 Intel LAM support updates: (Maciej Wieczor-Retman)
   - selftests/lam: Move cpu_has_la57() to use cpuinfo flag
   - selftests/lam: Skip test if LAM is disabled
   - selftests/lam: Test get_user() LAM pointer handling
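
  For context on the get_user() selftest: LAM (Linear Address Masking)
  lets user pointers carry metadata in their upper bits (bits 62:57 for
  LAM_U57), and kernel accessors such as get_user() have to accept such
  tagged pointers. A small userspace illustration of the tagging itself;
  no LAM hardware is assumed, so the tag is stripped in software before
  the dereference:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define LAM_U57_MASK   (0x3full << 57)  /* metadata bits 62:57 */

    static void *untag(void *p)
    {
            return (void *)((uintptr_t)p & ~(uintptr_t)LAM_U57_MASK);
    }

    int main(void)
    {
            int *obj = malloc(sizeof(*obj));
            *obj = 42;

            /* Stash a 6-bit tag in the pointer's metadata bits. */
            int *tagged = (int *)((uintptr_t)obj | ((uintptr_t)0x2a << 57));

            /* With LAM enabled, *tagged would work as-is; the selftest checks
             * that get_user() in the kernel copes with such pointers too. */
            printf("value = %d\n", *(int *)untag(tagged));

            free(obj);
            return 0;
    }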
 
 AMD SMN access updates:
   - Add SMN offsets to exclusive region access (Mario Limonciello)
   - Add support for debugfs access to SMN registers (Mario Limonciello)
   - Have HSMP use SMN through AMD_NODE (Yazen Ghannam)
 
 Power management updates: (Patryk Wlazlyn)
   - Allow calling mwait_play_dead with an arbitrary hint
   - ACPI/processor_idle: Add FFH state handling
   - intel_idle: Provide the default enter_dead() handler
   - Eliminate mwait_play_dead_cpuid_hint()
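
  The removed mwait_play_dead_cpuid_hint() derived its deepest-C-state
  MWAIT hint from CPUID leaf 0x5; after this series the callers pass a
  hint in directly. A small sketch of reading that leaf, assuming a
  GCC/Clang <cpuid.h> environment (EDX holds, in 4-bit fields, the
  number of MWAIT sub-states per C-state):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(0x5, &eax, &ebx, &ecx, &edx)) {
                    puts("CPUID leaf 0x5 not available");
                    return 1;
            }

            for (int cstate = 0; cstate < 8; cstate++)
                    printf("MWAIT C%d: %u sub-states\n",
                           cstate, (edx >> (cstate * 4)) & 0xf);
            return 0;
    }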
 
 Build system:
   - Raise the minimum GCC version to 8.1 (Brian Gerst)
   - Raise the minimum LLVM version to 15.0.0
     (Nathan Chancellor)
 
 Kconfig: (Arnd Bergmann)
   - Add cmpxchg8b support back to Geode CPUs
   - Drop 32-bit "bigsmp" machine support
   - Rework CONFIG_GENERIC_CPU compiler flags
   - Drop configuration options for early 64-bit CPUs
   - Remove CONFIG_HIGHMEM64G support
   - Drop CONFIG_SWIOTLB for PAE
   - Drop support for CONFIG_HIGHPTE
   - Document CONFIG_X86_INTEL_MID as 64-bit-only
   - Remove old STA2x11 support
   - Only allow CONFIG_EISA for 32-bit
 
 Headers:
   - Replace __ASSEMBLY__ with __ASSEMBLER__ in UAPI and non-UAPI headers
     (Thomas Huth)
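
  Unlike the hand-maintained __ASSEMBLY__ define, __ASSEMBLER__ is
  predefined by both GCC and Clang whenever they preprocess assembly
  sources. A minimal shared-header sketch (file and symbol names are
  hypothetical):

    /* example.h -- usable from both .c and .S files */
    #define MY_FLAG_BIT  3                  /* plain constants work for both */

    #ifndef __ASSEMBLER__
    /* C-only declarations, hidden from assembly sources */
    static inline unsigned long my_flag_mask(void)
    {
            return 1UL << MY_FLAG_BIT;
    }
    #endif /* __ASSEMBLER__ */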
 
 Assembly code & machine code patching:
   - x86/alternatives: Simplify alternative_call() interface (Josh Poimboeuf)
   - x86/alternatives: Simplify callthunk patching (Peter Zijlstra)
   - KVM: VMX: Use named operands in inline asm (Josh Poimboeuf)
   - x86/hyperv: Use named operands in inline asm (Josh Poimboeuf)
   - x86/traps: Cleanup and robustify decode_bug() (Peter Zijlstra)
   - x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
     (Uros Bizjak)
   - Use named operands in inline asm (Uros Bizjak)
   - Improve performance by using asm_inline() for atomic locking instructions
     (Uros Bizjak)
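
  A short userspace sketch of the named-operand style referenced in
  several items above; the kernel's asm_inline() additionally hints the
  compiler that the statement is small for inlining purposes, while
  plain asm() is used here:

    #include <stdio.h>

    static inline void atomic_add(int *var, int val)
    {
            /* Operands are referenced as %[name] instead of %0/%1. */
            asm volatile("lock addl %[val], %[var]"
                         : [var] "+m" (*var)
                         : [val] "ir" (val)
                         : "memory");
    }

    int main(void)
    {
            int counter = 40;

            atomic_add(&counter, 2);
            printf("counter = %d\n", counter);
            return 0;
    }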
 
 Earlyprintk:
   - Harden early_serial (Peter Zijlstra)
 
 NMI handler:
   - Add an emergency handler in nmi_desc & use it in nmi_shootdown_cpus()
     (Waiman Long)
 
 Miscellaneous fixes and cleanups:
 
   - by Ahmed S. Darwish, Andy Shevchenko, Ard Biesheuvel,
     Artem Bityutskiy, Borislav Petkov, Brendan Jackman, Brian Gerst,
     Dan Carpenter, Dr. David Alan Gilbert, H. Peter Anvin,
     Ingo Molnar, Josh Poimboeuf, Kevin Brodsky, Mike Rapoport,
     Lukas Bulwahn, Maciej Wieczor-Retman, Max Grobecker,
     Patryk Wlazlyn, Pawan Gupta, Peter Zijlstra,
     Philip Redkin, Qasim Ijaz, Rik van Riel, Thomas Gleixner,
     Thorsten Blum, Tom Lendacky, Tony Luck, Uros Bizjak,
     Vitaly Kuznetsov, Xin Li, liuye.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmfenkQRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1g1FRAAi6OFTSn/5aeLMI0IMNBxJ6ddQiFc3imd
 7+C/vU5nul4CyDs8mKyj/+f/DDrbkG9lKz3VG631Yl237lXHjD8XWcVMeC/1z/q0
 3zInDIloE9/nBHRPkF6F7fARBLBZ0LFgaBsGrCo7mwpGybiQdqGcqcxllvTbtXaw
 OHta4q6ok+lBDNlfc0v6H4cRnzhmmlKu6Ng0j6UI3V7uFhi3vtxas32ltDQtzorq
 2+jbV6/+kbrrv+xPC+jlzOFhTEKRupNPQXmvyQteoQg6G3kqAKMDvBthGXd1rHuX
 Qa+BoDIifE/2NiVeRwNrhoqYH/pHCzUzDREW5IW8+ca+4XNKuzAC6EuC8CeCzyK1
 q8ZjZjooQW4zEeVFeJYllHONzJYfxfSH5CLsnbcuhq99yfGlrQhF1qL72/Omn1w/
 DfPJM8Zt5zyKvLqUg3Md+fkVCO2wyDNhB61QPzRgHF+yD+rvuDpoqvUWir+w7cSn
 fwEDVZGXlFx6dumtSrqRaTd1nvFt80s8yP2ll09DMvGQ8D/yruS7hndGAmmJVCSW
 NAfd8pSjq5v2+ux2UR92/Cc3VF3SjaUqHBOp/Nq9rESya18ZVa3cJpHhVYYtPIVf
 THW0h07RIkGVKs1uq+5ekLCr/8uAZg58UPIqmhTuW0ttymRHCNfohR45FQZzy+0M
 tJj1oc2TIZw=
 =Dcb3
 -----END PGP SIGNATURE-----

Merge tag 'x86-core-2025-03-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core x86 updates from Ingo Molnar:
 "x86 CPU features support:
   - Generate the <asm/cpufeaturemasks.h> header based on build config
     (H. Peter Anvin, Xin Li)
   - x86 CPUID parsing updates and fixes (Ahmed S. Darwish)
   - Introduce the 'setcpuid=' boot parameter (Brendan Jackman)
   - Enable modifying CPU bug flags with '{clear,set}cpuid=' (Brendan
     Jackman)
   - Utilize CPU-type for CPU matching (Pawan Gupta)
   - Warn about unmet CPU feature dependencies (Sohil Mehta)
   - Prepare for new Intel Family numbers (Sohil Mehta)

  Percpu code:
   - Standardize & reorganize the x86 percpu layout and related cleanups
     (Brian Gerst)
   - Convert the stackprotector canary to a regular percpu variable
     (Brian Gerst)
   - Add a percpu subsection for cache hot data (Brian Gerst)
   - Unify __pcpu_op{1,2}_N() macros to __pcpu_op_N() (Uros Bizjak)
   - Construct __percpu_seg_override from __percpu_seg (Uros Bizjak)

  MM:
   - Add support for broadcast TLB invalidation using AMD's INVLPGB
     instruction (Rik van Riel)
   - Rework ROX cache to avoid writable copy (Mike Rapoport)
   - PAT: restore large ROX pages after fragmentation (Kirill A.
     Shutemov, Mike Rapoport)
   - Make memremap(MEMREMAP_WB) map memory as encrypted by default
     (Kirill A. Shutemov)
   - Robustify page table initialization (Kirill A. Shutemov)
   - Fix flush_tlb_range() when used for zapping normal PMDs (Jann Horn)
   - Clear _PAGE_DIRTY for kernel mappings when we clear _PAGE_RW
     (Matthew Wilcox)

  KASLR:
   - x86/kaslr: Reduce KASLR entropy on most x86 systems, to support PCI
     BAR space beyond the 10TiB region (CONFIG_PCI_P2PDMA=y) (Balbir
     Singh)

  CPU bugs:
   - Implement FineIBT-BHI mitigation (Peter Zijlstra)
   - speculation: Simplify and make CALL_NOSPEC consistent (Pawan Gupta)
   - speculation: Add a conditional CS prefix to CALL_NOSPEC (Pawan
     Gupta)
   - RFDS: Exclude P-only parts from the RFDS affected list (Pawan
     Gupta)

  System calls:
   - Break up entry/common.c (Brian Gerst)
   - Move sysctls into arch/x86 (Joel Granados)

  Intel LAM support updates: (Maciej Wieczor-Retman)
   - selftests/lam: Move cpu_has_la57() to use cpuinfo flag
   - selftests/lam: Skip test if LAM is disabled
   - selftests/lam: Test get_user() LAM pointer handling

  AMD SMN access updates:
   - Add SMN offsets to exclusive region access (Mario Limonciello)
   - Add support for debugfs access to SMN registers (Mario Limonciello)
   - Have HSMP use SMN through AMD_NODE (Yazen Ghannam)

  Power management updates: (Patryk Wlazlyn)
   - Allow calling mwait_play_dead with an arbitrary hint
   - ACPI/processor_idle: Add FFH state handling
   - intel_idle: Provide the default enter_dead() handler
   - Eliminate mwait_play_dead_cpuid_hint()

  Build system:
   - Raise the minimum GCC version to 8.1 (Brian Gerst)
   - Raise the minimum LLVM version to 15.0.0 (Nathan Chancellor)

  Kconfig: (Arnd Bergmann)
   - Add cmpxchg8b support back to Geode CPUs
   - Drop 32-bit "bigsmp" machine support
   - Rework CONFIG_GENERIC_CPU compiler flags
   - Drop configuration options for early 64-bit CPUs
   - Remove CONFIG_HIGHMEM64G support
   - Drop CONFIG_SWIOTLB for PAE
   - Drop support for CONFIG_HIGHPTE
   - Document CONFIG_X86_INTEL_MID as 64-bit-only
   - Remove old STA2x11 support
   - Only allow CONFIG_EISA for 32-bit

  Headers:
   - Replace __ASSEMBLY__ with __ASSEMBLER__ in UAPI and non-UAPI
     headers (Thomas Huth)

  Assembly code & machine code patching:
   - x86/alternatives: Simplify alternative_call() interface (Josh
     Poimboeuf)
   - x86/alternatives: Simplify callthunk patching (Peter Zijlstra)
   - KVM: VMX: Use named operands in inline asm (Josh Poimboeuf)
   - x86/hyperv: Use named operands in inline asm (Josh Poimboeuf)
   - x86/traps: Cleanup and robustify decode_bug() (Peter Zijlstra)
   - x86/kexec: Merge x86_32 and x86_64 code using macros from
     <asm/asm.h> (Uros Bizjak)
   - Use named operands in inline asm (Uros Bizjak)
   - Improve performance by using asm_inline() for atomic locking
     instructions (Uros Bizjak)

  Earlyprintk:
   - Harden early_serial (Peter Zijlstra)

  NMI handler:
   - Add an emergency handler in nmi_desc & use it in
     nmi_shootdown_cpus() (Waiman Long)

  Miscellaneous fixes and cleanups:
   - by Ahmed S. Darwish, Andy Shevchenko, Ard Biesheuvel, Artem
     Bityutskiy, Borislav Petkov, Brendan Jackman, Brian Gerst, Dan
     Carpenter, Dr. David Alan Gilbert, H. Peter Anvin, Ingo Molnar,
     Josh Poimboeuf, Kevin Brodsky, Mike Rapoport, Lukas Bulwahn, Maciej
     Wieczor-Retman, Max Grobecker, Patryk Wlazlyn, Pawan Gupta, Peter
     Zijlstra, Philip Redkin, Qasim Ijaz, Rik van Riel, Thomas Gleixner,
     Thorsten Blum, Tom Lendacky, Tony Luck, Uros Bizjak, Vitaly
     Kuznetsov, Xin Li, liuye"

* tag 'x86-core-2025-03-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (211 commits)
  zstd: Increase DYNAMIC_BMI2 GCC version cutoff from 4.8 to 11.0 to work around compiler segfault
  x86/asm: Make asm export of __ref_stack_chk_guard unconditional
  x86/mm: Only do broadcast flush from reclaim if pages were unmapped
  perf/x86/intel, x86/cpu: Replace Pentium 4 model checks with VFM ones
  perf/x86/intel, x86/cpu: Simplify Intel PMU initialization
  x86/headers: Replace __ASSEMBLY__ with __ASSEMBLER__ in non-UAPI headers
  x86/headers: Replace __ASSEMBLY__ with __ASSEMBLER__ in UAPI headers
  x86/locking/atomic: Improve performance by using asm_inline() for atomic locking instructions
  x86/asm: Use asm_inline() instead of asm() in clwb()
  x86/asm: Use CLFLUSHOPT and CLWB mnemonics in <asm/special_insns.h>
  x86/hweight: Use asm_inline() instead of asm()
  x86/hweight: Use ASM_CALL_CONSTRAINT in inline asm()
  x86/hweight: Use named operands in inline asm()
  x86/stackprotector/64: Only export __ref_stack_chk_guard on CONFIG_SMP
  x86/head/64: Avoid Clang < 17 stack protector in startup code
  x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
  x86/runtime-const: Add the RUNTIME_CONST_PTR assembly macro
  x86/cpu/intel: Limit the non-architectural constant_tsc model checks
  x86/mm/pat: Replace Intel x86_model checks with VFM ones
  x86/cpu/intel: Fix fast string initialization for extended Families
  ...
Linus Torvalds 2025-03-24 22:06:11 -07:00
commit e34c38057a
346 changed files with 5008 additions and 4677 deletions

View File

@ -29,14 +29,6 @@ Below is the list of affected Intel processors [#f1]_:
RAPTORLAKE_S 06_BFH
=================== ============
As an exception to this table, Intel Xeon E family parts ALDERLAKE(06_97H) and
RAPTORLAKE(06_B7H) codenamed Catlow are not affected. They are reported as
vulnerable in Linux because they share the same family/model with an affected
part. Unlike their affected counterparts, they do not enumerate RFDS_CLEAR or
CPUID.HYBRID. This information could be used to distinguish between the
affected and unaffected parts, but it is deemed not worth adding complexity as
the reporting is fixed automatically when these parts enumerate RFDS_NO.
Mitigation
==========
Intel released a microcode update that enables software to clear sensitive

View File

@ -180,10 +180,6 @@ Dump-capture kernel config options (Arch Dependent, i386 and x86_64)
1) On i386, enable high memory support under "Processor type and
features"::
CONFIG_HIGHMEM64G=y
or::
CONFIG_HIGHMEM4G
2) With CONFIG_SMP=y, usually nr_cpus=1 need specified on the kernel

View File

@ -416,10 +416,6 @@
Format: { quiet (default) | verbose | debug }
Change the amount of debugging information output
when initialising the APIC and IO-APIC components.
For X86-32, this can also be used to specify an APIC
driver name.
Format: apic=driver_name
Examples: apic=bigsmp
apic_extnmi= [APIC,X86,EARLY] External NMI delivery setting
Format: { bsp (default) | all | none }
@ -7679,13 +7675,6 @@
16 - SIGBUS faults
Example: user_debug=31
userpte=
[X86,EARLY] Flags controlling user PTE allocations.
nohigh = do not allocate PTE pages in
HIGHMEM regardless of setting
of CONFIG_HIGHPTE.
vdso= [X86,SH,SPARC]
On X86_32, this is an alias for vdso32=. Otherwise:

View File

@ -20,11 +20,7 @@ It has several drawbacks, though:
features (wheel, extra buttons, touchpad mode) of the real PS/2 mouse may
not be available.
2) If CONFIG_HIGHMEM64G is enabled, the PS/2 mouse emulation can cause
system crashes, because the SMM BIOS is not expecting to be in PAE mode.
The Intel E7505 is a typical machine where this happens.
3) If AMD64 64-bit mode is enabled, again system crashes often happen,
2) If AMD64 64-bit mode is enabled, again system crashes often happen,
because the SMM BIOS isn't expecting the CPU to be in 64-bit mode. The
BIOS manufacturers only test with Windows, and Windows doesn't do 64-bit
yet.
@ -38,11 +34,6 @@ Problem 1)
compiled-in, too.
Problem 2)
can currently only be solved by either disabling HIGHMEM64G
in the kernel config or USB Legacy support in the BIOS. A BIOS update
could help, but so far no such update exists.
Problem 3)
is usually fixed by a BIOS update. Check the board
manufacturers web site. If an update is not available, disable USB
Legacy support in the BIOS. If this alone doesn't help, try also adding

View File

@ -1014,6 +1014,9 @@ CC_FLAGS_CFI := -fsanitize=kcfi
ifdef CONFIG_CFI_ICALL_NORMALIZE_INTEGERS
CC_FLAGS_CFI += -fsanitize-cfi-icall-experimental-normalize-integers
endif
ifdef CONFIG_FINEIBT_BHI
CC_FLAGS_CFI += -fsanitize-kcfi-arity
endif
ifdef CONFIG_RUST
# Always pass -Zsanitizer-cfi-normalize-integers as CONFIG_RUST selects
# CONFIG_CFI_ICALL_NORMALIZE_INTEGERS.

View File

@ -381,7 +381,7 @@ void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size);
void iounmap(volatile void __iomem *io_addr);
#define iounmap iounmap
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size);
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size, unsigned long flags);
#define arch_memremap_wb arch_memremap_wb
/*

View File

@ -436,7 +436,7 @@ void __arm_iomem_set_ro(void __iomem *ptr, size_t size)
set_memory_ro((unsigned long)ptr, PAGE_ALIGN(size) / PAGE_SIZE);
}
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size)
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size, unsigned long flags)
{
return (__force void *)arch_ioremap_caller(phys_addr, size,
MT_MEMORY_RW,

View File

@ -248,7 +248,7 @@ void __iomem *pci_remap_cfgspace(resource_size_t res_cookie, size_t size)
EXPORT_SYMBOL_GPL(pci_remap_cfgspace);
#endif
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size)
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size, unsigned long flags)
{
return (void *)phys_addr;
}

View File

@ -136,7 +136,7 @@ __io_writes_outs(outs, u64, q, __io_pbr(), __io_paw())
#include <asm-generic/io.h>
#ifdef CONFIG_MMU
#define arch_memremap_wb(addr, size) \
#define arch_memremap_wb(addr, size, flags) \
((__force void *)ioremap_prot((addr), (size), _PAGE_KERNEL))
#endif

View File

@ -440,25 +440,24 @@ void __init arch_cpu_finalize_init(void)
os_check_bugs();
}
void apply_seal_endbr(s32 *start, s32 *end, struct module *mod)
void apply_seal_endbr(s32 *start, s32 *end)
{
}
void apply_retpolines(s32 *start, s32 *end, struct module *mod)
void apply_retpolines(s32 *start, s32 *end)
{
}
void apply_returns(s32 *start, s32 *end, struct module *mod)
void apply_returns(s32 *start, s32 *end)
{
}
void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
s32 *start_cfi, s32 *end_cfi, struct module *mod)
s32 *start_cfi, s32 *end_cfi)
{
}
void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
struct module *mod)
void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
{
}

View File

@ -85,6 +85,7 @@ config X86
select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN
select ARCH_HAS_EARLY_DEBUG if KGDB
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_EXECMEM_ROX if X86_64
select ARCH_HAS_FAST_MULTIPLIER
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
@ -132,7 +133,7 @@ config X86
select ARCH_SUPPORTS_AUTOFDO_CLANG
select ARCH_SUPPORTS_PROPELLER_CLANG if X86_64
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if X86_CMPXCHG64
select ARCH_USE_CMPXCHG_LOCKREF if X86_CX8
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
@ -232,7 +233,7 @@ config X86
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_EISA
select HAVE_EISA if X86_32
select HAVE_EXIT_THREAD
select HAVE_GUP_FAST
select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
@ -277,7 +278,7 @@ config X86
select HAVE_PCI
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
select MMU_GATHER_RCU_TABLE_FREE
select MMU_GATHER_MERGE_VMAS
select HAVE_POSIX_CPU_TIMERS_TASK_WORK
select HAVE_REGS_AND_STACK_ACCESS_API
@ -285,7 +286,7 @@ config X86
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_SETUP_PER_CPU_AREA
select HAVE_SOFTIRQ_ON_OWN_STACK
select HAVE_STACKPROTECTOR if CC_HAS_SANE_STACKPROTECTOR
select HAVE_STACKPROTECTOR
select HAVE_STACK_VALIDATION if HAVE_OBJTOOL
select HAVE_STATIC_CALL
select HAVE_STATIC_CALL_INLINE if HAVE_OBJTOOL
@ -426,15 +427,6 @@ config PGTABLE_LEVELS
default 3 if X86_PAE
default 2
config CC_HAS_SANE_STACKPROTECTOR
bool
default $(success,$(srctree)/scripts/gcc-x86_64-has-stack-protector.sh $(CC) $(CLANG_FLAGS)) if 64BIT
default $(success,$(srctree)/scripts/gcc-x86_32-has-stack-protector.sh $(CC) $(CLANG_FLAGS))
help
We have to make sure stack protector is unconditionally disabled if
the compiler produces broken code or if it does not let us control
the segment on 32-bit kernels.
menu "Processor type and features"
config SMP
@ -530,12 +522,6 @@ config X86_FRED
ring transitions and exception/interrupt handling if the
system supports it.
config X86_BIGSMP
bool "Support for big SMP systems with more than 8 CPUs"
depends on SMP && X86_32
help
This option is needed for the systems that have more than 8 CPUs.
config X86_EXTENDED_PLATFORM
bool "Support for extended (non-PC) x86 platforms"
default y
@ -553,13 +539,12 @@ config X86_EXTENDED_PLATFORM
AMD Elan
RDC R-321x SoC
SGI 320/540 (Visual Workstation)
STA2X11-based (e.g. Northville)
Moorestown MID devices
64-bit platforms (CONFIG_64BIT=y):
Numascale NumaChip
ScaleMP vSMP
SGI Ultraviolet
Merrifield/Moorefield MID devices
If you have one of these systems, or if you want to build a
generic distribution kernel, say Y here - otherwise say N.
@ -604,8 +589,31 @@ config X86_UV
This option is needed in order to support SGI Ultraviolet systems.
If you don't have one of these, you should say N here.
# Following is an alphabetically sorted list of 32 bit extended platforms
# Please maintain the alphabetic order if and when there are additions
config X86_INTEL_MID
bool "Intel Z34xx/Z35xx MID platform support"
depends on X86_EXTENDED_PLATFORM
depends on X86_PLATFORM_DEVICES
depends on PCI
depends on X86_64 || (EXPERT && PCI_GOANY)
depends on X86_IO_APIC
select I2C
select DW_APB_TIMER
select INTEL_SCU_PCI
help
Select to build a kernel capable of supporting 64-bit Intel MID
(Mobile Internet Device) platform systems which do not have
the PCI legacy interfaces.
The only supported devices are the 22nm Merrified (Z34xx)
and Moorefield (Z35xx) SoC used in the Intel Edison board and
a small number of Android devices such as the Asus Zenfone 2,
Asus FonePad 8 and Dell Venue 7.
If you are building for a PC class system or non-MID tablet
SoCs like Bay Trail (Z36xx/Z37xx), say N here.
Intel MID platforms are based on an Intel processor and chipset which
consume less power than most of the x86 derivatives.
config X86_GOLDFISH
bool "Goldfish (Virtual Platform)"
@ -615,6 +623,9 @@ config X86_GOLDFISH
for Android development. Unless you are building for the Android
Goldfish emulator say N here.
# Following is an alphabetically sorted list of 32 bit extended platforms
# Please maintain the alphabetic order if and when there are additions
config X86_INTEL_CE
bool "CE4100 TV platform"
depends on PCI
@ -630,24 +641,6 @@ config X86_INTEL_CE
This option compiles in support for the CE4100 SOC for settop
boxes and media devices.
config X86_INTEL_MID
bool "Intel MID platform support"
depends on X86_EXTENDED_PLATFORM
depends on X86_PLATFORM_DEVICES
depends on PCI
depends on X86_64 || (PCI_GOANY && X86_32)
depends on X86_IO_APIC
select I2C
select DW_APB_TIMER
select INTEL_SCU_PCI
help
Select to build a kernel capable of supporting Intel MID (Mobile
Internet Device) platform systems which do not have the PCI legacy
interfaces. If you are building for a PC class system say N here.
Intel MID platforms are based on an Intel processor and chipset which
consume less power than most of the x86 derivatives.
config X86_INTEL_QUARK
bool "Intel Quark platform support"
depends on X86_32
@ -729,18 +722,6 @@ config X86_RDC321X
as R-8610-(G).
If you don't have one of these chips, you should say N here.
config X86_32_NON_STANDARD
bool "Support non-standard 32-bit SMP architectures"
depends on X86_32 && SMP
depends on X86_EXTENDED_PLATFORM
help
This option compiles in the bigsmp and STA2X11 default
subarchitectures. It is intended for a generic binary
kernel. If you select them all, kernel will probe it one by
one and will fallback to default.
# Alphabetically sorted list of Non standard 32 bit platforms
config X86_SUPPORTS_MEMORY_FAILURE
def_bool y
# MCE code calls memory_failure():
@ -750,19 +731,6 @@ config X86_SUPPORTS_MEMORY_FAILURE
depends on X86_64 || !SPARSEMEM
select ARCH_SUPPORTS_MEMORY_FAILURE
config STA2X11
bool "STA2X11 Companion Chip Support"
depends on X86_32_NON_STANDARD && PCI
select SWIOTLB
select MFD_STA2X11
select GPIOLIB
help
This adds support for boards based on the STA2X11 IO-Hub,
a.k.a. "ConneXt". The chip is used in place of the standard
PC chipset, so all "standard" peripherals are missing. If this
option is selected the kernel will still be able to boot on
standard PC machines.
config X86_32_IRIS
tristate "Eurobraille/Iris poweroff module"
depends on X86_32
@ -1012,8 +980,7 @@ config NR_CPUS_RANGE_BEGIN
config NR_CPUS_RANGE_END
int
depends on X86_32
default 64 if SMP && X86_BIGSMP
default 8 if SMP && !X86_BIGSMP
default 8 if SMP
default 1 if !SMP
config NR_CPUS_RANGE_END
@ -1026,7 +993,6 @@ config NR_CPUS_RANGE_END
config NR_CPUS_DEFAULT
int
depends on X86_32
default 32 if X86_BIGSMP
default 8 if SMP
default 1 if !SMP
@ -1102,7 +1068,7 @@ config UP_LATE_INIT
config X86_UP_APIC
bool "Local APIC support on uniprocessors" if !PCI_MSI
default PCI_MSI
depends on X86_32 && !SMP && !X86_32_NON_STANDARD
depends on X86_32 && !SMP
help
A local APIC (Advanced Programmable Interrupt Controller) is an
integrated interrupt controller in the CPU. If you have a single-CPU
@ -1127,7 +1093,7 @@ config X86_UP_IOAPIC
config X86_LOCAL_APIC
def_bool y
depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
depends on X86_64 || SMP || X86_UP_APIC || PCI_MSI
select IRQ_DOMAIN_HIERARCHY
config ACPI_MADT_WAKEUP
@ -1396,15 +1362,11 @@ config X86_CPUID
with major 203 and minors 0 to 31 for /dev/cpu/0/cpuid to
/dev/cpu/31/cpuid.
choice
prompt "High Memory Support"
default HIGHMEM4G
config HIGHMEM4G
bool "High Memory Support"
depends on X86_32
config NOHIGHMEM
bool "off"
help
Linux can use up to 64 Gigabytes of physical memory on x86 systems.
Linux can use up to 4 Gigabytes of physical memory on x86 systems.
However, the address space of 32-bit x86 processors is only 4
Gigabytes large. That means that, if you have a large amount of
physical memory, not all of it can be "permanently mapped" by the
@ -1420,38 +1382,9 @@ config NOHIGHMEM
possible.
If the machine has between 1 and 4 Gigabytes physical RAM, then
answer "4GB" here.
answer "Y" here.
If more than 4 Gigabytes is used then answer "64GB" here. This
selection turns Intel PAE (Physical Address Extension) mode on.
PAE implements 3-level paging on IA32 processors. PAE is fully
supported by Linux, PAE mode is implemented on all recent Intel
processors (Pentium Pro and better). NOTE: If you say "64GB" here,
then the kernel will not boot on CPUs that don't support PAE!
The actual amount of total physical memory will either be
auto detected or can be forced by using a kernel command line option
such as "mem=256M". (Try "man bootparam" or see the documentation of
your boot loader (lilo or loadlin) about how to pass options to the
kernel at boot time.)
If unsure, say "off".
config HIGHMEM4G
bool "4GB"
help
Select this if you have a 32-bit processor and between 1 and 4
gigabytes of physical RAM.
config HIGHMEM64G
bool "64GB"
depends on X86_HAVE_PAE
select X86_PAE
help
Select this if you have a 32-bit processor and more than 4
gigabytes of physical RAM.
endchoice
If unsure, say N.
choice
prompt "Memory split" if EXPERT
@ -1497,14 +1430,12 @@ config PAGE_OFFSET
depends on X86_32
config HIGHMEM
def_bool y
depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
def_bool HIGHMEM4G
config X86_PAE
bool "PAE (Physical Address Extension) Support"
depends on X86_32 && X86_HAVE_PAE
select PHYS_ADDR_T_64BIT
select SWIOTLB
help
PAE is required for NX support, and furthermore enables
larger swapspace support for non-overcommit purposes. It
@ -1574,8 +1505,7 @@ config AMD_MEM_ENCRYPT
config NUMA
bool "NUMA Memory Allocation and Scheduler Support"
depends on SMP
depends on X86_64 || (X86_32 && HIGHMEM64G && X86_BIGSMP)
default y if X86_BIGSMP
depends on X86_64
select USE_PERCPU_NUMA_NODE_ID
select OF_NUMA if OF
help
@ -1588,9 +1518,6 @@ config NUMA
For 64-bit this is recommended if the system is Intel Core i7
(or later), AMD Opteron, or EM64T NUMA.
For 32-bit this is only needed if you boot a 32-bit
kernel on a 64-bit NUMA platform.
Otherwise, you should say N.
config AMD_NUMA
@ -1629,7 +1556,7 @@ config ARCH_FLATMEM_ENABLE
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on X86_64 || NUMA || X86_32 || X86_32_NON_STANDARD
depends on X86_64 || NUMA || X86_32
select SPARSEMEM_STATIC if X86_32
select SPARSEMEM_VMEMMAP_ENABLE if X86_64
@ -1675,15 +1602,6 @@ config X86_PMEM_LEGACY
Say Y if unsure.
config HIGHPTE
bool "Allocate 3rd-level pagetables from highmem"
depends on HIGHMEM
help
The VM uses one page table entry for each page of physical memory.
For systems with a lot of RAM, this can be wasteful of precious
low memory. Setting this option will put user-space page table
entries in high memory.
config X86_CHECK_BIOS_CORRUPTION
bool "Check for low memory corruption"
help
@ -2451,18 +2369,20 @@ config CC_HAS_NAMED_AS
def_bool $(success,echo 'int __seg_fs fs; int __seg_gs gs;' | $(CC) -x c - -S -o /dev/null)
depends on CC_IS_GCC
#
# -fsanitize=kernel-address (KASAN) and -fsanitize=thread (KCSAN)
# are incompatible with named address spaces with GCC < 13.3
# (see GCC PR sanitizer/111736 and also PR sanitizer/115172).
#
config CC_HAS_NAMED_AS_FIXED_SANITIZERS
def_bool CC_IS_GCC && GCC_VERSION >= 130300
def_bool y
depends on !(KASAN || KCSAN) || GCC_VERSION >= 130300
depends on !(UBSAN_BOOL && KASAN) || GCC_VERSION >= 140200
config USE_X86_SEG_SUPPORT
def_bool y
depends on CC_HAS_NAMED_AS
#
# -fsanitize=kernel-address (KASAN) and -fsanitize=thread
# (KCSAN) are incompatible with named address spaces with
# GCC < 13.3 - see GCC PR sanitizer/111736.
#
depends on !(KASAN || KCSAN) || CC_HAS_NAMED_AS_FIXED_SANITIZERS
def_bool CC_HAS_NAMED_AS
depends on CC_HAS_NAMED_AS_FIXED_SANITIZERS
config CC_HAS_SLS
def_bool $(cc-option,-mharden-sls=all)
@ -2473,6 +2393,10 @@ config CC_HAS_RETURN_THUNK
config CC_HAS_ENTRY_PADDING
def_bool $(cc-option,-fpatchable-function-entry=16,16)
config CC_HAS_KCFI_ARITY
def_bool $(cc-option,-fsanitize=kcfi -fsanitize-kcfi-arity)
depends on CC_IS_CLANG && !RUST
config FUNCTION_PADDING_CFI
int
default 59 if FUNCTION_ALIGNMENT_64B
@ -2498,6 +2422,10 @@ config FINEIBT
depends on X86_KERNEL_IBT && CFI_CLANG && MITIGATION_RETPOLINE
select CALL_PADDING
config FINEIBT_BHI
def_bool y
depends on FINEIBT && CC_HAS_KCFI_ARITY
config HAVE_CALL_THUNKS
def_bool y
depends on CC_HAS_ENTRY_PADDING && MITIGATION_RETHUNK && OBJTOOL
@ -3202,4 +3130,6 @@ config HAVE_ATOMIC_IOMAP
source "arch/x86/kvm/Kconfig"
source "arch/x86/Kconfig.cpufeatures"
source "arch/x86/Kconfig.assembler"

View File

@ -1,9 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
# Put here option for CPU selection and depending optimization
choice
prompt "Processor family"
default M686 if X86_32
default GENERIC_CPU if X86_64
prompt "x86-32 Processor family"
depends on X86_32
default M686
help
This is the processor type of your CPU. This information is
used for optimizing purposes. In order to compile a kernel
@ -31,7 +31,6 @@ choice
- "Pentium-4" for the Intel Pentium 4 or P4-based Celeron.
- "K6" for the AMD K6, K6-II and K6-III (aka K6-3D).
- "Athlon" for the AMD K7 family (Athlon/Duron/Thunderbird).
- "Opteron/Athlon64/Hammer/K8" for all K8 and newer AMD CPUs.
- "Crusoe" for the Transmeta Crusoe series.
- "Efficeon" for the Transmeta Efficeon series.
- "Winchip-C6" for original IDT Winchip.
@ -42,13 +41,10 @@ choice
- "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
- "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above).
- "VIA C7" for VIA C7.
- "Intel P4" for the Pentium 4/Netburst microarchitecture.
- "Core 2/newer Xeon" for all core2 and newer Intel CPUs.
- "Intel Atom" for the Atom-microarchitecture CPUs.
- "Generic-x86-64" for a kernel which runs on any x86-64 CPU.
See each option's help text for additional details. If you don't know
what to do, choose "486".
what to do, choose "Pentium-Pro".
config M486SX
bool "486SX"
@ -114,11 +110,11 @@ config MPENTIUMIII
extensions.
config MPENTIUMM
bool "Pentium M"
bool "Pentium M/Pentium Dual Core/Core Solo/Core Duo"
depends on X86_32
help
Select this for Intel Pentium M (not Pentium-4 M)
notebook chips.
"Merom" Core Solo/Duo notebook chips
config MPENTIUM4
bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
@ -139,22 +135,10 @@ config MPENTIUM4
-Mobile Pentium 4
-Mobile Pentium 4 M
-Extreme Edition (Gallatin)
-Prescott
-Prescott 2M
-Cedar Mill
-Presler
-Smithfiled
Xeons (Intel Xeon, Xeon MP, Xeon LV, Xeon MV) corename:
-Foster
-Prestonia
-Gallatin
-Nocona
-Irwindale
-Cranford
-Potomac
-Paxville
-Dempsey
config MK6
bool "K6/K6-II/K6-III"
@ -172,13 +156,6 @@ config MK7
some extended instructions, and passes appropriate optimization
flags to GCC.
config MK8
bool "Opteron/Athlon64/Hammer/K8"
help
Select this for an AMD Opteron or Athlon64 Hammer-family processor.
Enables use of some extended instructions, and passes appropriate
optimization flags to GCC.
config MCRUSOE
bool "Crusoe"
depends on X86_32
@ -258,42 +235,14 @@ config MVIAC7
Select this for a VIA C7. Selecting this uses the correct cache
shift and tells gcc to treat the CPU as a 686.
config MPSC
bool "Intel P4 / older Netburst based Xeon"
depends on X86_64
help
Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
Xeon CPUs with Intel 64bit which is compatible with x86-64.
Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
Netburst core and shouldn't use this option. You can distinguish them
using the cpu family field
in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
config MCORE2
bool "Core 2/newer Xeon"
help
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
53xx) CPUs. You can distinguish newer from older Xeons by the CPU
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
config MATOM
bool "Intel Atom"
help
Select this for the Intel Atom platform. Intel Atom CPUs have an
in-order pipelining architecture and thus can benefit from
accordingly optimized code. Use a recent GCC with specific Atom
support in order to fully benefit from selecting this option.
config GENERIC_CPU
bool "Generic-x86-64"
depends on X86_64
help
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
endchoice
config X86_GENERIC
@ -317,8 +266,8 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
default "7" if MPENTIUM4
default "6" if MK7 || MPENTIUMM || MATOM || MVIAC7 || X86_GENERIC || X86_64
default "4" if MELAN || M486SX || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
@ -336,51 +285,35 @@ config X86_ALIGNMENT_16
config X86_INTEL_USERCOPY
def_bool y
depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK7 || MEFFICEON
config X86_USE_PPRO_CHECKSUM
def_bool y
depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
#
# P6_NOPs are a relatively minor optimization that require a family >=
# 6 processor, except that it is broken on certain VIA chips.
# Furthermore, AMD chips prefer a totally different sequence of NOPs
# (which work on all CPUs). In addition, it looks like Virtual PC
# does not understand them.
#
# As a result, disallow these if we're not compiling for X86_64 (these
# NOPs do work on all x86-64 capable chips); the list of processors in
# the right-hand clause are the cores that benefit from this optimization.
#
config X86_P6_NOP
def_bool y
depends on X86_64
depends on (MCORE2 || MPENTIUM4 || MPSC)
depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MATOM
config X86_TSC
def_bool y
depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MATOM) || X86_64
config X86_HAVE_PAE
def_bool y
depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC7 || MCORE2 || MATOM || X86_64
depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC7 || MATOM || X86_64
config X86_CMPXCHG64
config X86_CX8
def_bool y
depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7
depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7 || MGEODEGX1 || MGEODE_LX
# this should be set for all -march=.. options where the compiler
# generates cmov.
config X86_CMOV
def_bool y
depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || MATOM || MGEODE_LX || X86_64)
config X86_MINIMUM_CPU_FAMILY
int
default "64" if X86_64
default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
default "5" if X86_32 && X86_CMPXCHG64
default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MK7)
default "5" if X86_32 && X86_CX8
default "4"
config X86_DEBUGCTLMSR
@ -401,6 +334,10 @@ menuconfig PROCESSOR_SELECT
This lets you choose what x86 vendor support code your kernel
will include.
config BROADCAST_TLB_FLUSH
def_bool y
depends on CPU_SUP_AMD && 64BIT
config CPU_SUP_INTEL
default y
bool "Support Intel processors" if PROCESSOR_SELECT

View File

@ -0,0 +1,201 @@
# SPDX-License-Identifier: GPL-2.0
#
# x86 feature bits (see arch/x86/include/asm/cpufeatures.h) that are
# either REQUIRED to be enabled, or DISABLED (always ignored) for this
# particular compile-time configuration. The tests for these features
# are turned into compile-time constants via the generated
# <asm/cpufeaturemasks.h>.
#
# The naming of these variables *must* match asm/cpufeatures.h, e.g.,
# X86_FEATURE_ALWAYS <==> X86_REQUIRED_FEATURE_ALWAYS
# X86_FEATURE_FRED <==> X86_DISABLED_FEATURE_FRED
#
# And these REQUIRED and DISABLED config options are manipulated in an
# AWK script as the following example:
#
#                      +----------------------+
#                      |    X86_FRED = y ?    |
#                      +----------------------+
#                           /           \
#                        Y /             \ N
# +-------------------------------------+   +-------------------------------+
# | X86_DISABLED_FEATURE_FRED undefined |   | X86_DISABLED_FEATURE_FRED = y |
# +-------------------------------------+   +-------------------------------+
#                                                           |
#                                                           |
# +-------------------------------------------+             |
# | X86_FEATURE_FRED: feature word 12, bit 17 | ----------->|
# +-------------------------------------------+             |
#                                                           |
#                                                           |
#                                           +-------------------------------+
#                                           | set bit 17 of DISABLED_MASK12 |
#                                           +-------------------------------+
#
config X86_REQUIRED_FEATURE_ALWAYS
def_bool y
config X86_REQUIRED_FEATURE_NOPL
def_bool y
depends on X86_64 || X86_P6_NOP
config X86_REQUIRED_FEATURE_CX8
def_bool y
depends on X86_CX8
# this should be set for all -march=.. options where the compiler
# generates cmov.
config X86_REQUIRED_FEATURE_CMOV
def_bool y
depends on X86_CMOV
# this should be set for all -march= options where the compiler
# generates movbe.
config X86_REQUIRED_FEATURE_MOVBE
def_bool y
depends on MATOM
config X86_REQUIRED_FEATURE_CPUID
def_bool y
depends on X86_64
config X86_REQUIRED_FEATURE_UP
def_bool y
depends on !SMP
config X86_REQUIRED_FEATURE_FPU
def_bool y
depends on !MATH_EMULATION
config X86_REQUIRED_FEATURE_PAE
def_bool y
depends on X86_64 || X86_PAE
config X86_REQUIRED_FEATURE_PSE
def_bool y
depends on X86_64 && !PARAVIRT_XXL
config X86_REQUIRED_FEATURE_PGE
def_bool y
depends on X86_64 && !PARAVIRT_XXL
config X86_REQUIRED_FEATURE_MSR
def_bool y
depends on X86_64
config X86_REQUIRED_FEATURE_FXSR
def_bool y
depends on X86_64
config X86_REQUIRED_FEATURE_XMM
def_bool y
depends on X86_64
config X86_REQUIRED_FEATURE_XMM2
def_bool y
depends on X86_64
config X86_REQUIRED_FEATURE_LM
def_bool y
depends on X86_64
config X86_DISABLED_FEATURE_UMIP
def_bool y
depends on !X86_UMIP
config X86_DISABLED_FEATURE_VME
def_bool y
depends on X86_64
config X86_DISABLED_FEATURE_K6_MTRR
def_bool y
depends on X86_64
config X86_DISABLED_FEATURE_CYRIX_ARR
def_bool y
depends on X86_64
config X86_DISABLED_FEATURE_CENTAUR_MCR
def_bool y
depends on X86_64
config X86_DISABLED_FEATURE_PCID
def_bool y
depends on !X86_64
config X86_DISABLED_FEATURE_PKU
def_bool y
depends on !X86_INTEL_MEMORY_PROTECTION_KEYS
config X86_DISABLED_FEATURE_OSPKE
def_bool y
depends on !X86_INTEL_MEMORY_PROTECTION_KEYS
config X86_DISABLED_FEATURE_LA57
def_bool y
depends on !X86_5LEVEL
config X86_DISABLED_FEATURE_PTI
def_bool y
depends on !MITIGATION_PAGE_TABLE_ISOLATION
config X86_DISABLED_FEATURE_RETPOLINE
def_bool y
depends on !MITIGATION_RETPOLINE
config X86_DISABLED_FEATURE_RETPOLINE_LFENCE
def_bool y
depends on !MITIGATION_RETPOLINE
config X86_DISABLED_FEATURE_RETHUNK
def_bool y
depends on !MITIGATION_RETHUNK
config X86_DISABLED_FEATURE_UNRET
def_bool y
depends on !MITIGATION_UNRET_ENTRY
config X86_DISABLED_FEATURE_CALL_DEPTH
def_bool y
depends on !MITIGATION_CALL_DEPTH_TRACKING
config X86_DISABLED_FEATURE_LAM
def_bool y
depends on !ADDRESS_MASKING
config X86_DISABLED_FEATURE_ENQCMD
def_bool y
depends on !INTEL_IOMMU_SVM
config X86_DISABLED_FEATURE_SGX
def_bool y
depends on !X86_SGX
config X86_DISABLED_FEATURE_XENPV
def_bool y
depends on !XEN_PV
config X86_DISABLED_FEATURE_TDX_GUEST
def_bool y
depends on !INTEL_TDX_GUEST
config X86_DISABLED_FEATURE_USER_SHSTK
def_bool y
depends on !X86_USER_SHADOW_STACK
config X86_DISABLED_FEATURE_IBT
def_bool y
depends on !X86_KERNEL_IBT
config X86_DISABLED_FEATURE_FRED
def_bool y
depends on !X86_FRED
config X86_DISABLED_FEATURE_SEV_SNP
def_bool y
depends on !KVM_AMD_SEV
config X86_DISABLED_FEATURE_INVLPGB
def_bool y
depends on !BROADCAST_TLB_FLUSH

View File

@ -142,14 +142,7 @@ ifeq ($(CONFIG_X86_32),y)
KBUILD_CFLAGS += -ffreestanding
endif
ifeq ($(CONFIG_STACKPROTECTOR),y)
ifeq ($(CONFIG_SMP),y)
KBUILD_CFLAGS += -mstack-protector-guard-reg=fs \
-mstack-protector-guard-symbol=__ref_stack_chk_guard
else
KBUILD_CFLAGS += -mstack-protector-guard=global
endif
endif
percpu_seg := fs
else
BITS := 64
UTS_MACHINE := x86_64
@ -180,25 +173,24 @@ else
# Use -mskip-rax-setup if supported.
KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += -march=k8
cflags-$(CONFIG_MPSC) += -march=nocona
cflags-$(CONFIG_MCORE2) += -march=core2
cflags-$(CONFIG_MATOM) += -march=atom
cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
KBUILD_CFLAGS += $(cflags-y)
rustflags-$(CONFIG_MK8) += -Ctarget-cpu=k8
rustflags-$(CONFIG_MPSC) += -Ctarget-cpu=nocona
rustflags-$(CONFIG_MCORE2) += -Ctarget-cpu=core2
rustflags-$(CONFIG_MATOM) += -Ctarget-cpu=atom
rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic
KBUILD_RUSTFLAGS += $(rustflags-y)
KBUILD_CFLAGS += -march=x86-64 -mtune=generic
KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64 -Ztune-cpu=generic
KBUILD_CFLAGS += -mno-red-zone
KBUILD_CFLAGS += -mcmodel=kernel
KBUILD_RUSTFLAGS += -Cno-redzone=y
KBUILD_RUSTFLAGS += -Ccode-model=kernel
percpu_seg := gs
endif
ifeq ($(CONFIG_STACKPROTECTOR),y)
ifeq ($(CONFIG_SMP),y)
KBUILD_CFLAGS += -mstack-protector-guard-reg=$(percpu_seg)
KBUILD_CFLAGS += -mstack-protector-guard-symbol=__ref_stack_chk_guard
else
KBUILD_CFLAGS += -mstack-protector-guard=global
endif
endif
#
@ -278,6 +270,21 @@ archscripts: scripts_basic
archheaders:
$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
###
# <asm/cpufeaturemasks.h> header generation
cpufeaturemasks.hdr := arch/x86/include/generated/asm/cpufeaturemasks.h
cpufeaturemasks.awk := $(srctree)/arch/x86/tools/cpufeaturemasks.awk
cpufeatures_hdr := $(srctree)/arch/x86/include/asm/cpufeatures.h
targets += $(cpufeaturemasks.hdr)
quiet_cmd_gen_featuremasks = GEN $@
cmd_gen_featuremasks = $(AWK) -f $(cpufeaturemasks.awk) $(cpufeatures_hdr) $(KCONFIG_CONFIG) > $@
$(cpufeaturemasks.hdr): $(cpufeaturemasks.awk) $(cpufeatures_hdr) $(KCONFIG_CONFIG) FORCE
$(shell mkdir -p $(dir $@))
$(call if_changed,gen_featuremasks)
archprepare: $(cpufeaturemasks.hdr)
###
# Kernel objects

View File

@ -24,7 +24,6 @@ cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsosever to performance at this time.
cflags-$(CONFIG_MK7) += -march=athlon
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)
cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)
cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
@ -32,9 +31,7 @@ cflags-$(CONFIG_MWINCHIP3D) += $(call cc-option,-march=winchip2,-march=i586)
cflags-$(CONFIG_MCYRIXIII) += $(call cc-option,-march=c3,-march=i486) $(align)
cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
cflags-$(CONFIG_MVIAC7) += -march=i686
cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
cflags-$(CONFIG_MATOM) += -march=atom
# AMD Elan support
cflags-$(CONFIG_MELAN) += -march=i486

View File

@ -16,7 +16,7 @@
#define STACK_SIZE 1024 /* Minimum number of bytes for stack */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/stdarg.h>
#include <linux/types.h>
@ -327,6 +327,6 @@ void probe_cards(int unsafe);
/* video-vesa.c */
void vesa_store_edid(void);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* BOOT_BOOT_H */

View File

@ -235,7 +235,7 @@ static void handle_relocations(void *output, unsigned long output_len,
/*
* Process relocations: 32 bit relocations first then 64 bit after.
* Three sets of binary relocations are added to the end of the kernel
* Two sets of binary relocations are added to the end of the kernel
* before compression. Each relocation table entry is the kernel
* address of the location which needs to be updated stored as a
* 32-bit value which is sign extended to 64 bits.
@ -245,8 +245,6 @@ static void handle_relocations(void *output, unsigned long output_len,
* kernel bits...
* 0 - zero terminator for 64 bit relocations
* 64 bit relocation repeated
* 0 - zero terminator for inverse 32 bit relocations
* 32 bit inverse relocation repeated
* 0 - zero terminator for 32 bit relocations
* 32 bit relocation repeated
*
@ -263,16 +261,6 @@ static void handle_relocations(void *output, unsigned long output_len,
*(uint32_t *)ptr += delta;
}
#ifdef CONFIG_X86_64
while (*--reloc) {
long extended = *reloc;
extended += map;
ptr = (unsigned long)extended;
if (ptr < min_addr || ptr > max_addr)
error("inverse 32-bit relocation outside of kernel!\n");
*(int32_t *)ptr -= delta;
}
for (reloc--; *reloc; reloc--) {
long extended = *reloc;
extended += map;

View File

@ -22,10 +22,11 @@
# include "boot.h"
#endif
#include <linux/types.h>
#include <asm/cpufeaturemasks.h>
#include <asm/intel-family.h>
#include <asm/processor-flags.h>
#include <asm/required-features.h>
#include <asm/msr-index.h>
#include "string.h"
#include "msr.h"

View File

@ -3,7 +3,6 @@
#include "bitops.h"
#include <asm/processor-flags.h>
#include <asm/required-features.h>
#include <asm/msr-index.h>
#include "cpuflags.h"

View File

@ -12,8 +12,6 @@
#include <stdio.h>
#include "../include/asm/required-features.h"
#include "../include/asm/disabled-features.h"
#include "../include/asm/cpufeatures.h"
#include "../include/asm/vmxfeatures.h"
#include "../kernel/cpu/capflags.c"
@ -23,6 +21,7 @@ int main(void)
int i, j;
const char *str;
printf("#include <asm/cpufeaturemasks.h>\n\n");
printf("static const char x86_cap_strs[] =\n");
for (i = 0; i < NCAPINTS; i++) {

View File

@ -1,6 +1,4 @@
# global x86 required specific stuff
# On 32-bit HIGHMEM4G is not allowed
CONFIG_HIGHMEM64G=y
CONFIG_64BIT=y
# These enable us to allow some of the

View File

@ -17,6 +17,7 @@
*/
#include <linux/linkage.h>
#include <linux/objtool.h>
#include <asm/frame.h>
#define STATE1 %xmm0
@ -1071,6 +1072,7 @@ SYM_FUNC_END(_aesni_inc)
* size_t len, u8 *iv)
*/
SYM_FUNC_START(aesni_ctr_enc)
ANNOTATE_NOENDBR
FRAME_BEGIN
cmp $16, LEN
jb .Lctr_enc_just_ret

View File

@ -16,6 +16,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/frame.h>
#define CAMELLIA_TABLE_BYTE_LEN 272
@ -882,7 +883,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk16)
jmp .Ldec_max24;
SYM_FUNC_END(__camellia_dec_blk16)
SYM_FUNC_START(camellia_ecb_enc_16way)
SYM_TYPED_FUNC_START(camellia_ecb_enc_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@ -907,7 +908,7 @@ SYM_FUNC_START(camellia_ecb_enc_16way)
RET;
SYM_FUNC_END(camellia_ecb_enc_16way)
SYM_FUNC_START(camellia_ecb_dec_16way)
SYM_TYPED_FUNC_START(camellia_ecb_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@ -937,7 +938,7 @@ SYM_FUNC_START(camellia_ecb_dec_16way)
RET;
SYM_FUNC_END(camellia_ecb_dec_16way)
SYM_FUNC_START(camellia_cbc_dec_16way)
SYM_TYPED_FUNC_START(camellia_cbc_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)

View File

@ -6,6 +6,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/frame.h>
#define CAMELLIA_TABLE_BYTE_LEN 272

View File

@ -6,6 +6,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
.file "camellia-x86_64-asm_64.S"
.text
@ -177,7 +178,7 @@
bswapq RAB0; \
movq RAB0, 4*2(RIO);
SYM_FUNC_START(__camellia_enc_blk)
SYM_TYPED_FUNC_START(__camellia_enc_blk)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -224,7 +225,7 @@ SYM_FUNC_START(__camellia_enc_blk)
RET;
SYM_FUNC_END(__camellia_enc_blk)
SYM_FUNC_START(camellia_dec_blk)
SYM_TYPED_FUNC_START(camellia_dec_blk)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -411,7 +412,7 @@ SYM_FUNC_END(camellia_dec_blk)
bswapq RAB1; \
movq RAB1, 12*2(RIO);
SYM_FUNC_START(__camellia_enc_blk_2way)
SYM_TYPED_FUNC_START(__camellia_enc_blk_2way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -460,7 +461,7 @@ SYM_FUNC_START(__camellia_enc_blk_2way)
RET;
SYM_FUNC_END(__camellia_enc_blk_2way)
SYM_FUNC_START(camellia_dec_blk_2way)
SYM_TYPED_FUNC_START(camellia_dec_blk_2way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst

View File

@ -9,6 +9,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/frame.h>
#include "glue_helper-asm-avx.S"
@ -656,7 +657,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
RET;
SYM_FUNC_END(__serpent_dec_blk8_avx)
SYM_FUNC_START(serpent_ecb_enc_8way_avx)
SYM_TYPED_FUNC_START(serpent_ecb_enc_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -674,7 +675,7 @@ SYM_FUNC_START(serpent_ecb_enc_8way_avx)
RET;
SYM_FUNC_END(serpent_ecb_enc_8way_avx)
SYM_FUNC_START(serpent_ecb_dec_8way_avx)
SYM_TYPED_FUNC_START(serpent_ecb_dec_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -692,7 +693,7 @@ SYM_FUNC_START(serpent_ecb_dec_8way_avx)
RET;
SYM_FUNC_END(serpent_ecb_dec_8way_avx)
SYM_FUNC_START(serpent_cbc_dec_8way_avx)
SYM_TYPED_FUNC_START(serpent_cbc_dec_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst

View File

@ -6,6 +6,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
.file "twofish-x86_64-asm-3way.S"
.text
@ -220,7 +221,7 @@
rorq $32, RAB2; \
outunpack3(mov, RIO, 2, RAB, 2);
SYM_FUNC_START(__twofish_enc_blk_3way)
SYM_TYPED_FUNC_START(__twofish_enc_blk_3way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@ -269,7 +270,7 @@ SYM_FUNC_START(__twofish_enc_blk_3way)
RET;
SYM_FUNC_END(__twofish_enc_blk_3way)
SYM_FUNC_START(twofish_dec_blk_3way)
SYM_TYPED_FUNC_START(twofish_dec_blk_3way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst

View File

@ -8,6 +8,7 @@
.text
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/asm-offsets.h>
#define a_offset 0
@ -202,7 +203,7 @@
xor %r8d, d ## D;\
ror $1, d ## D;
SYM_FUNC_START(twofish_enc_blk)
SYM_TYPED_FUNC_START(twofish_enc_blk)
pushq R1
/* %rdi contains the ctx address */
@ -255,7 +256,7 @@ SYM_FUNC_START(twofish_enc_blk)
RET
SYM_FUNC_END(twofish_enc_blk)
SYM_FUNC_START(twofish_dec_blk)
SYM_TYPED_FUNC_START(twofish_dec_blk)
pushq R1
/* %rdi contains the ctx address */

View File

@ -7,12 +7,13 @@ KASAN_SANITIZE := n
UBSAN_SANITIZE := n
KCOV_INSTRUMENT := n
CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_syscall_32.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_syscall_64.o = $(CC_FLAGS_FTRACE)
CFLAGS_common.o += -fno-stack-protector
CFLAGS_syscall_32.o += -fno-stack-protector
CFLAGS_syscall_64.o += -fno-stack-protector
obj-y := entry.o entry_$(BITS).o syscall_$(BITS).o
obj-y += common.o
obj-y += vdso/
obj-y += vsyscall/
@ -23,4 +24,3 @@ CFLAGS_REMOVE_entry_fred.o += -pg $(CC_FLAGS_FTRACE)
obj-$(CONFIG_X86_FRED) += entry_64_fred.o entry_fred.o
obj-$(CONFIG_IA32_EMULATION) += entry_64_compat.o syscall_32.o
obj-$(CONFIG_X86_X32_ABI) += syscall_x32.o

View File

@ -431,6 +431,7 @@ For 32-bit we have the following conventions - kernel is built with
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func
SYM_FUNC_START(\name)
ANNOTATE_NOENDBR
pushq %rbp
movq %rsp, %rbp

View File

@ -1,524 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* common.c - C code for kernel entry and exit
* Copyright (c) 2015 Andrew Lutomirski
*
* Based on asm and ptrace code by many authors. The code here originated
* in ptrace.c and signal.c.
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/entry-common.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/export.h>
#include <linux/nospec.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/init.h>
#ifdef CONFIG_XEN_PV
#include <xen/xen-ops.h>
#include <xen/events.h>
#endif
#include <asm/apic.h>
#include <asm/desc.h>
#include <asm/traps.h>
#include <asm/vdso.h>
#include <asm/cpufeature.h>
#include <asm/fpu/api.h>
#include <asm/nospec-branch.h>
#include <asm/io_bitmap.h>
#include <asm/syscall.h>
#include <asm/irq_stack.h>
#ifdef CONFIG_X86_64
static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
{
/*
* Convert negative numbers to very high and thus out of range
* numbers for comparisons.
*/
unsigned int unr = nr;
if (likely(unr < NR_syscalls)) {
unr = array_index_nospec(unr, NR_syscalls);
regs->ax = x64_sys_call(regs, unr);
return true;
}
return false;
}
static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
{
/*
* Adjust the starting offset of the table, and convert numbers
* < __X32_SYSCALL_BIT to very high and thus out of range
* numbers for comparisons.
*/
unsigned int xnr = nr - __X32_SYSCALL_BIT;
if (IS_ENABLED(CONFIG_X86_X32_ABI) && likely(xnr < X32_NR_syscalls)) {
xnr = array_index_nospec(xnr, X32_NR_syscalls);
regs->ax = x32_sys_call(regs, xnr);
return true;
}
return false;
}
/* Returns true to return using SYSRET, or false to use IRET */
__visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr)
{
add_random_kstack_offset();
nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
if (!do_syscall_x64(regs, nr) && !do_syscall_x32(regs, nr) && nr != -1) {
/* Invalid system call, but still a system call. */
regs->ax = __x64_sys_ni_syscall(regs);
}
instrumentation_end();
syscall_exit_to_user_mode(regs);
/*
* Check that the register state is valid for using SYSRET to exit
* to userspace. Otherwise use the slower but fully capable IRET
* exit path.
*/
/* XEN PV guests always use the IRET path */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/* SYSRET requires RCX == RIP and R11 == EFLAGS */
if (unlikely(regs->cx != regs->ip || regs->r11 != regs->flags))
return false;
/* CS and SS must match the values set in MSR_STAR */
if (unlikely(regs->cs != __USER_CS || regs->ss != __USER_DS))
return false;
/*
* On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
* in kernel space. This essentially lets the user take over
* the kernel, since userspace controls RSP.
*
* TASK_SIZE_MAX covers all user-accessible addresses other than
* the deprecated vsyscall page.
*/
if (unlikely(regs->ip >= TASK_SIZE_MAX))
return false;
/*
* SYSRET cannot restore RF. It can restore TF, but unlike IRET,
* restoring TF results in a trap from userspace immediately after
* SYSRET.
*/
if (unlikely(regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF)))
return false;
/* Use SYSRET to exit to userspace */
return true;
}
#endif
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
static __always_inline int syscall_32_enter(struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_IA32_EMULATION))
current_thread_info()->status |= TS_COMPAT;
return (int)regs->orig_ax;
}
#ifdef CONFIG_IA32_EMULATION
bool __ia32_enabled __ro_after_init = !IS_ENABLED(CONFIG_IA32_EMULATION_DEFAULT_DISABLED);
static int ia32_emulation_override_cmdline(char *arg)
{
return kstrtobool(arg, &__ia32_enabled);
}
early_param("ia32_emulation", ia32_emulation_override_cmdline);
#endif
/*
* Invoke a 32-bit syscall. Called with IRQs on in CT_STATE_KERNEL.
*/
static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, int nr)
{
/*
* Convert negative numbers to very high and thus out of range
* numbers for comparisons.
*/
unsigned int unr = nr;
if (likely(unr < IA32_NR_syscalls)) {
unr = array_index_nospec(unr, IA32_NR_syscalls);
regs->ax = ia32_sys_call(regs, unr);
} else if (nr != -1) {
regs->ax = __ia32_sys_ni_syscall(regs);
}
}
#ifdef CONFIG_IA32_EMULATION
static __always_inline bool int80_is_external(void)
{
const unsigned int offs = (0x80 / 32) * 0x10;
const u32 bit = BIT(0x80 % 32);
/* The local APIC on XENPV guests is fake */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/*
* If vector 0x80 is set in the APIC ISR then this is an external
* interrupt. Either from broken hardware or injected by a VMM.
*
* Note: In guest mode this is only valid for secure guests where
* the secure module fully controls the vAPIC exposed to the guest.
*/
return apic_read(APIC_ISR + offs) & bit;
}
/**
* do_int80_emulation - 32-bit legacy syscall C entry from asm
* @regs: syscall arguments in struct pt_regs on the stack.
*
* This entry point can be used by 32-bit and 64-bit programs to perform
* 32-bit system calls. Instances of INT $0x80 can be found inline in
* various programs and libraries. It is also used by the vDSO's
* __kernel_vsyscall fallback for hardware that doesn't support a faster
* entry method. Restarted 32-bit system calls also fall back to INT
* $0x80 regardless of what instruction was originally used to do the
* system call.
*
* This is considered a slow path. It is not used by most libc
* implementations on modern hardware except during process startup.
*
* The arguments for the INT $0x80 based syscall are on stack in the
* pt_regs structure:
* eax: system call number
* ebx, ecx, edx, esi, edi, ebp: arg1 - arg6
*/
__visible noinstr void do_int80_emulation(struct pt_regs *regs)
{
int nr;
/* Kernel does not use INT $0x80! */
if (unlikely(!user_mode(regs))) {
irqentry_enter(regs);
instrumentation_begin();
panic("Unexpected external interrupt 0x80\n");
}
/*
* Establish kernel context for instrumentation, including for
* int80_is_external() below which calls into the APIC driver.
* Identical for soft and external interrupts.
*/
enter_from_user_mode(regs);
instrumentation_begin();
add_random_kstack_offset();
/* Validate that this is a soft interrupt to the extent possible */
if (unlikely(int80_is_external()))
panic("Unexpected external interrupt 0x80\n");
/*
* The low level idtentry code pushed -1 into regs::orig_ax
* and regs::ax contains the syscall number.
*
* User tracing code (ptrace or signal handlers) might assume
* that the regs::orig_ax contains a 32-bit number on invoking
* a 32-bit syscall.
*
* Establish the syscall convention by saving the 32bit truncated
* syscall number in regs::orig_ax and by invalidating regs::ax.
*/
regs->orig_ax = regs->ax & GENMASK(31, 0);
regs->ax = -ENOSYS;
nr = syscall_32_enter(regs);
local_irq_enable();
nr = syscall_enter_from_user_mode_work(regs, nr);
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#ifdef CONFIG_X86_FRED
/*
* A FRED-specific INT80 handler is warranted for the following reasons:
*
* 1) As INT instructions and hardware interrupts are separate event
* types, FRED does not preclude the use of vector 0x80 for external
* interrupts. As a result, the FRED setup code does not reserve
* vector 0x80 and calling int80_is_external() is not merely
* suboptimal but actively incorrect: it could cause a system call
* to be incorrectly ignored.
*
* 2) It is called only for handling vector 0x80 of event type
* EVENT_TYPE_SWINT and will never be called to handle any external
* interrupt (event type EVENT_TYPE_EXTINT).
*
* 3) FRED has separate entry flows depending on if the event came from
* user space or kernel space, and because the kernel does not use
* INT insns, the FRED kernel entry handler fred_entry_from_kernel()
* falls through to fred_bad_type() if the event type is
* EVENT_TYPE_SWINT, i.e., INT insns. So if the kernel is handling
* an INT insn, it can only be from a user level.
*
* 4) int80_emulation() does a CLEAR_BRANCH_HISTORY. FRED will likely
* take a different approach if this is ever needed: it probably
* belongs in either fred_intx()/fred_other() or
* asm_fred_entrypoint_user(), depending on whether this ought to be
* done for all entries from userspace or only for system calls.
*
* 5) INT $0x80 is the fast path for 32-bit system calls under FRED.
*/
DEFINE_FREDENTRY_RAW(int80_emulation)
{
int nr;
enter_from_user_mode(regs);
instrumentation_begin();
add_random_kstack_offset();
/*
* FRED pushed 0 into regs::orig_ax and regs::ax contains the
* syscall number.
*
* User tracing code (ptrace or signal handlers) might assume
* that the regs::orig_ax contains a 32-bit number on invoking
* a 32-bit syscall.
*
* Establish the syscall convention by saving the 32bit truncated
* syscall number in regs::orig_ax and by invalidating regs::ax.
*/
regs->orig_ax = regs->ax & GENMASK(31, 0);
regs->ax = -ENOSYS;
nr = syscall_32_enter(regs);
local_irq_enable();
nr = syscall_enter_from_user_mode_work(regs, nr);
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#endif
#else /* CONFIG_IA32_EMULATION */
/* Handles int $0x80 on a 32bit kernel */
__visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
{
int nr = syscall_32_enter(regs);
add_random_kstack_offset();
/*
* Subtlety here: if ptrace pokes something larger than 2^31-1 into
* orig_ax, the int return value truncates it. This matches
* the semantics of syscall_get_nr().
*/
nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#endif /* !CONFIG_IA32_EMULATION */
static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
{
int nr = syscall_32_enter(regs);
int res;
add_random_kstack_offset();
/*
* This cannot use syscall_enter_from_user_mode() as it has to
* fetch EBP before invoking any of the syscall entry work
* functions.
*/
syscall_enter_from_user_mode_prepare(regs);
instrumentation_begin();
/* Fetch EBP from where the vDSO stashed it. */
if (IS_ENABLED(CONFIG_X86_64)) {
/*
* Micro-optimization: the pointer we're following is
* explicitly 32 bits, so it can't be out of range.
*/
res = __get_user(*(u32 *)&regs->bp,
(u32 __user __force *)(unsigned long)(u32)regs->sp);
} else {
res = get_user(*(u32 *)&regs->bp,
(u32 __user __force *)(unsigned long)(u32)regs->sp);
}
if (res) {
/* User code screwed up. */
regs->ax = -EFAULT;
local_irq_disable();
instrumentation_end();
irqentry_exit_to_user_mode(regs);
return false;
}
nr = syscall_enter_from_user_mode_work(regs, nr);
/* Now this is just like a normal syscall. */
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
return true;
}
/* Returns true to return using SYSEXIT/SYSRETL, or false to use IRET */
__visible noinstr bool do_fast_syscall_32(struct pt_regs *regs)
{
/*
* Called using the internal vDSO SYSENTER/SYSCALL32 calling
* convention. Adjust regs so it looks like we entered using int80.
*/
unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
vdso_image_32.sym_int80_landing_pad;
/*
* SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
* so that 'regs->ip -= 2' lands back on an int $0x80 instruction.
* Fix it up.
*/
regs->ip = landing_pad;
/* Invoke the syscall. If it failed, keep it simple: use IRET. */
if (!__do_fast_syscall_32(regs))
return false;
/*
* Check that the register state is valid for using SYSRETL/SYSEXIT
* to exit to userspace. Otherwise use the slower but fully capable
* IRET exit path.
*/
/* XEN PV guests always use the IRET path */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/* EIP must point to the VDSO landing pad */
if (unlikely(regs->ip != landing_pad))
return false;
/* CS and SS must match the values set in MSR_STAR */
if (unlikely(regs->cs != __USER32_CS || regs->ss != __USER_DS))
return false;
/* If the TF, RF, or VM flags are set, use IRET */
if (unlikely(regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF | X86_EFLAGS_VM)))
return false;
/* Use SYSRETL/SYSEXIT to exit to userspace */
return true;
}
/* Returns true to return using SYSEXIT/SYSRETL, or false to use IRET */
__visible noinstr bool do_SYSENTER_32(struct pt_regs *regs)
{
/* SYSENTER loses RSP, but the vDSO saved it in RBP. */
regs->sp = regs->bp;
/* SYSENTER clobbers EFLAGS.IF. Assume it was set in usermode. */
regs->flags |= X86_EFLAGS_IF;
return do_fast_syscall_32(regs);
}
#endif
SYSCALL_DEFINE0(ni_syscall)
{
return -ENOSYS;
}
#ifdef CONFIG_XEN_PV
#ifndef CONFIG_PREEMPTION
/*
* Some hypercalls issued by the toolstack can take many 10s of
* seconds. Allow tasks running hypercalls via the privcmd driver to
* be voluntarily preempted even if full kernel preemption is
* disabled.
*
* Such preemptible hypercalls are bracketed by
* xen_preemptible_hcall_begin() and xen_preemptible_hcall_end()
* calls.
*/
DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
/*
* In case of scheduling the flag must be cleared and restored after
* returning from schedule as the task might move to a different CPU.
*/
static __always_inline bool get_and_clear_inhcall(void)
{
bool inhcall = __this_cpu_read(xen_in_preemptible_hcall);
__this_cpu_write(xen_in_preemptible_hcall, false);
return inhcall;
}
static __always_inline void restore_inhcall(bool inhcall)
{
__this_cpu_write(xen_in_preemptible_hcall, inhcall);
}
#else
static __always_inline bool get_and_clear_inhcall(void) { return false; }
static __always_inline void restore_inhcall(bool inhcall) { }
#endif
static void __xen_pv_evtchn_do_upcall(struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
inc_irq_stat(irq_hv_callback_count);
xen_evtchn_do_upcall();
set_irq_regs(old_regs);
}
__visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
{
irqentry_state_t state = irqentry_enter(regs);
bool inhcall;
instrumentation_begin();
run_sysvec_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs);
inhcall = get_and_clear_inhcall();
if (inhcall && !WARN_ON_ONCE(state.exit_rcu)) {
irqentry_exit_cond_resched();
instrumentation_end();
restore_inhcall(inhcall);
} else {
instrumentation_end();
irqentry_exit(regs, state);
}
}
#endif /* CONFIG_XEN_PV */
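A minimal stand-alone sketch of the range check used by do_syscall_x64() above: assigning the signed syscall number to an unsigned variable turns -1 into 0xffffffff, so one unsigned compare rejects both negative and out-of-range numbers (the table size below is a made-up stand-in for NR_syscalls):
#include <stdbool.h>
#include <stdio.h>
#define EXAMPLE_NR_SYSCALLS 450		/* stand-in for NR_syscalls */
static bool nr_in_range(int nr)
{
	unsigned int unr = nr;		/* -1 becomes 0xffffffff */
	return unr < EXAMPLE_NR_SYSCALLS;
}
int main(void)
{
	/* prints "1 0 0": only 0 is accepted, -1 and 1000 are rejected */
	printf("%d %d %d\n", nr_in_range(0), nr_in_range(-1), nr_in_range(1000));
	return 0;
}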

View File

@ -5,6 +5,7 @@
#include <linux/export.h>
#include <linux/linkage.h>
#include <linux/objtool.h>
#include <asm/msr-index.h>
#include <asm/unwind_hints.h>
#include <asm/segment.h>
@ -17,6 +18,7 @@
.pushsection .noinstr.text, "ax"
SYM_FUNC_START(entry_ibpb)
ANNOTATE_NOENDBR
movl $MSR_IA32_PRED_CMD, %ecx
movl $PRED_CMD_IBPB, %eax
xorl %edx, %edx
@ -52,7 +54,6 @@ EXPORT_SYMBOL_GPL(mds_verw_sel);
THUNK warn_thunk_thunk, __warn_thunk
#ifndef CONFIG_X86_64
/*
* Clang's implementation of TLS stack cookies requires the variable in
* question to be a TLS variable. If the variable happens to be defined as an
@ -63,7 +64,6 @@ THUNK warn_thunk_thunk, __warn_thunk
* entirely in the C code, and use an alias emitted by the linker script
* instead.
*/
#ifdef CONFIG_STACKPROTECTOR
#if defined(CONFIG_STACKPROTECTOR) && defined(CONFIG_SMP)
EXPORT_SYMBOL(__ref_stack_chk_guard);
#endif
#endif

View File

@ -1153,7 +1153,7 @@ SYM_CODE_START(asm_exc_nmi)
* is using the thread stack right now, so it's safe for us to use it.
*/
movl %esp, %ebx
movl PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %esp
movl PER_CPU_VAR(cpu_current_top_of_stack), %esp
call exc_nmi
movl %ebx, %esp
@ -1217,7 +1217,7 @@ SYM_CODE_START(rewind_stack_and_make_dead)
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp
movl PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %esi
movl PER_CPU_VAR(cpu_current_top_of_stack), %esi
leal -TOP_OF_KERNEL_STACK_PADDING-PTREGS_SIZE(%esi), %esp
call make_task_dead

View File

@ -92,7 +92,7 @@ SYM_CODE_START(entry_SYSCALL_64)
/* tss.sp2 is scratch space. */
movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
movq PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
SYM_INNER_LABEL(entry_SYSCALL_64_safe_stack, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
@ -175,6 +175,7 @@ SYM_CODE_END(entry_SYSCALL_64)
*/
.pushsection .text, "ax"
SYM_FUNC_START(__switch_to_asm)
ANNOTATE_NOENDBR
/*
* Save callee-saved registers
* This must match the order in inactive_task_frame
@ -192,7 +193,7 @@ SYM_FUNC_START(__switch_to_asm)
#ifdef CONFIG_STACKPROTECTOR
movq TASK_stack_canary(%rsi), %rbx
movq %rbx, PER_CPU_VAR(fixed_percpu_data + FIXED_stack_canary)
movq %rbx, PER_CPU_VAR(__stack_chk_guard)
#endif
/*
@ -742,6 +743,7 @@ _ASM_NOKPROBE(common_interrupt_return)
* Is in entry.text as it shouldn't be instrumented.
*/
SYM_FUNC_START(asm_load_gs_index)
ANNOTATE_NOENDBR
FRAME_BEGIN
swapgs
.Lgs_change:
@ -1166,7 +1168,7 @@ SYM_CODE_START(asm_exc_nmi)
FENCE_SWAPGS_USER_ENTRY
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
movq %rsp, %rdx
movq PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
UNWIND_HINT_IRET_REGS base=%rdx offset=8
pushq 5*8(%rdx) /* pt_regs->ss */
pushq 4*8(%rdx) /* pt_regs->rsp */
@ -1484,7 +1486,7 @@ SYM_CODE_START_NOALIGN(rewind_stack_and_make_dead)
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp
movq PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %rax
movq PER_CPU_VAR(cpu_current_top_of_stack), %rax
leaq -PTREGS_SIZE(%rax), %rsp
UNWIND_HINT_REGS
@ -1526,6 +1528,7 @@ SYM_CODE_END(rewind_stack_and_make_dead)
* refactored in the future if needed.
*/
SYM_FUNC_START(clear_bhb_loop)
ANNOTATE_NOENDBR
push %rbp
mov %rsp, %rbp
movl $5, %ecx

View File

@ -57,7 +57,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
popq %rax
movq PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/* Construct struct pt_regs on stack */
pushq $__USER_DS /* pt_regs->ss */
@ -193,7 +193,7 @@ SYM_CODE_START(entry_SYSCALL_compat)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
/* Switch to the kernel stack */
movq PER_CPU_VAR(pcpu_hot + X86_top_of_stack), %rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL)
ANNOTATE_NOENDBR

View File

@ -58,6 +58,7 @@ SYM_CODE_END(asm_fred_entrypoint_kernel)
#if IS_ENABLED(CONFIG_KVM_INTEL)
SYM_FUNC_START(asm_fred_entry_from_kvm)
ANNOTATE_NOENDBR
push %rbp
mov %rsp, %rbp

View File

@ -1,10 +1,16 @@
// SPDX-License-Identifier: GPL-2.0
/* System call table for i386. */
// SPDX-License-Identifier: GPL-2.0-only
/* 32-bit system call dispatch */
#include <linux/linkage.h>
#include <linux/sys.h>
#include <linux/cache.h>
#include <linux/syscalls.h>
#include <linux/entry-common.h>
#include <linux/nospec.h>
#include <linux/uaccess.h>
#include <asm/apic.h>
#include <asm/traps.h>
#include <asm/cpufeature.h>
#include <asm/syscall.h>
#ifdef CONFIG_IA32_EMULATION
@ -41,4 +47,324 @@ long ia32_sys_call(const struct pt_regs *regs, unsigned int nr)
#include <asm/syscalls_32.h>
default: return __ia32_sys_ni_syscall(regs);
}
};
}
static __always_inline int syscall_32_enter(struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_IA32_EMULATION))
current_thread_info()->status |= TS_COMPAT;
return (int)regs->orig_ax;
}
#ifdef CONFIG_IA32_EMULATION
bool __ia32_enabled __ro_after_init = !IS_ENABLED(CONFIG_IA32_EMULATION_DEFAULT_DISABLED);
static int __init ia32_emulation_override_cmdline(char *arg)
{
return kstrtobool(arg, &__ia32_enabled);
}
early_param("ia32_emulation", ia32_emulation_override_cmdline);
#endif
/*
* Invoke a 32-bit syscall. Called with IRQs on in CT_STATE_KERNEL.
*/
static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, int nr)
{
/*
* Convert negative numbers to very high and thus out of range
* numbers for comparisons.
*/
unsigned int unr = nr;
if (likely(unr < IA32_NR_syscalls)) {
unr = array_index_nospec(unr, IA32_NR_syscalls);
regs->ax = ia32_sys_call(regs, unr);
} else if (nr != -1) {
regs->ax = __ia32_sys_ni_syscall(regs);
}
}
#ifdef CONFIG_IA32_EMULATION
static __always_inline bool int80_is_external(void)
{
const unsigned int offs = (0x80 / 32) * 0x10;
const u32 bit = BIT(0x80 % 32);
/* The local APIC on XENPV guests is fake */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/*
* If vector 0x80 is set in the APIC ISR then this is an external
* interrupt. Either from broken hardware or injected by a VMM.
*
* Note: In guest mode this is only valid for secure guests where
* the secure module fully controls the vAPIC exposed to the guest.
*/
return apic_read(APIC_ISR + offs) & bit;
}
/**
* do_int80_emulation - 32-bit legacy syscall C entry from asm
* @regs: syscall arguments in struct pt_regs on the stack.
*
* This entry point can be used by 32-bit and 64-bit programs to perform
* 32-bit system calls. Instances of INT $0x80 can be found inline in
* various programs and libraries. It is also used by the vDSO's
* __kernel_vsyscall fallback for hardware that doesn't support a faster
* entry method. Restarted 32-bit system calls also fall back to INT
* $0x80 regardless of what instruction was originally used to do the
* system call.
*
* This is considered a slow path. It is not used by most libc
* implementations on modern hardware except during process startup.
*
* The arguments for the INT $0x80 based syscall are on stack in the
* pt_regs structure:
* eax: system call number
* ebx, ecx, edx, esi, edi, ebp: arg1 - arg6
*/
__visible noinstr void do_int80_emulation(struct pt_regs *regs)
{
int nr;
/* Kernel does not use INT $0x80! */
if (unlikely(!user_mode(regs))) {
irqentry_enter(regs);
instrumentation_begin();
panic("Unexpected external interrupt 0x80\n");
}
/*
* Establish kernel context for instrumentation, including for
* int80_is_external() below which calls into the APIC driver.
* Identical for soft and external interrupts.
*/
enter_from_user_mode(regs);
instrumentation_begin();
add_random_kstack_offset();
/* Validate that this is a soft interrupt to the extent possible */
if (unlikely(int80_is_external()))
panic("Unexpected external interrupt 0x80\n");
/*
* The low level idtentry code pushed -1 into regs::orig_ax
* and regs::ax contains the syscall number.
*
* User tracing code (ptrace or signal handlers) might assume
* that the regs::orig_ax contains a 32-bit number on invoking
* a 32-bit syscall.
*
* Establish the syscall convention by saving the 32bit truncated
* syscall number in regs::orig_ax and by invalidating regs::ax.
*/
regs->orig_ax = regs->ax & GENMASK(31, 0);
regs->ax = -ENOSYS;
nr = syscall_32_enter(regs);
local_irq_enable();
nr = syscall_enter_from_user_mode_work(regs, nr);
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#ifdef CONFIG_X86_FRED
/*
* A FRED-specific INT80 handler is warranted for the following reasons:
*
* 1) As INT instructions and hardware interrupts are separate event
* types, FRED does not preclude the use of vector 0x80 for external
* interrupts. As a result, the FRED setup code does not reserve
* vector 0x80 and calling int80_is_external() is not merely
* suboptimal but actively incorrect: it could cause a system call
* to be incorrectly ignored.
*
* 2) It is called only for handling vector 0x80 of event type
* EVENT_TYPE_SWINT and will never be called to handle any external
* interrupt (event type EVENT_TYPE_EXTINT).
*
* 3) FRED has separate entry flows depending on if the event came from
* user space or kernel space, and because the kernel does not use
* INT insns, the FRED kernel entry handler fred_entry_from_kernel()
* falls through to fred_bad_type() if the event type is
* EVENT_TYPE_SWINT, i.e., INT insns. So if the kernel is handling
* an INT insn, it can only be from a user level.
*
* 4) int80_emulation() does a CLEAR_BRANCH_HISTORY. FRED will likely
* take a different approach if this is ever needed: it probably
* belongs in either fred_intx()/fred_other() or
* asm_fred_entrypoint_user(), depending on whether this ought to be
* done for all entries from userspace or only for system calls.
*
* 5) INT $0x80 is the fast path for 32-bit system calls under FRED.
*/
DEFINE_FREDENTRY_RAW(int80_emulation)
{
int nr;
enter_from_user_mode(regs);
instrumentation_begin();
add_random_kstack_offset();
/*
* FRED pushed 0 into regs::orig_ax and regs::ax contains the
* syscall number.
*
* User tracing code (ptrace or signal handlers) might assume
* that the regs::orig_ax contains a 32-bit number on invoking
* a 32-bit syscall.
*
* Establish the syscall convention by saving the 32bit truncated
* syscall number in regs::orig_ax and by invalidating regs::ax.
*/
regs->orig_ax = regs->ax & GENMASK(31, 0);
regs->ax = -ENOSYS;
nr = syscall_32_enter(regs);
local_irq_enable();
nr = syscall_enter_from_user_mode_work(regs, nr);
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#endif /* CONFIG_X86_FRED */
#else /* CONFIG_IA32_EMULATION */
/* Handles int $0x80 on a 32bit kernel */
__visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
{
int nr = syscall_32_enter(regs);
add_random_kstack_offset();
/*
* Subtlety here: if ptrace pokes something larger than 2^31-1 into
* orig_ax, the int return value truncates it. This matches
* the semantics of syscall_get_nr().
*/
nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#endif /* !CONFIG_IA32_EMULATION */
static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
{
int nr = syscall_32_enter(regs);
int res;
add_random_kstack_offset();
/*
* This cannot use syscall_enter_from_user_mode() as it has to
* fetch EBP before invoking any of the syscall entry work
* functions.
*/
syscall_enter_from_user_mode_prepare(regs);
instrumentation_begin();
/* Fetch EBP from where the vDSO stashed it. */
if (IS_ENABLED(CONFIG_X86_64)) {
/*
* Micro-optimization: the pointer we're following is
* explicitly 32 bits, so it can't be out of range.
*/
res = __get_user(*(u32 *)&regs->bp,
(u32 __user __force *)(unsigned long)(u32)regs->sp);
} else {
res = get_user(*(u32 *)&regs->bp,
(u32 __user __force *)(unsigned long)(u32)regs->sp);
}
if (res) {
/* User code screwed up. */
regs->ax = -EFAULT;
local_irq_disable();
instrumentation_end();
irqentry_exit_to_user_mode(regs);
return false;
}
nr = syscall_enter_from_user_mode_work(regs, nr);
/* Now this is just like a normal syscall. */
do_syscall_32_irqs_on(regs, nr);
instrumentation_end();
syscall_exit_to_user_mode(regs);
return true;
}
/* Returns true to return using SYSEXIT/SYSRETL, or false to use IRET */
__visible noinstr bool do_fast_syscall_32(struct pt_regs *regs)
{
/*
* Called using the internal vDSO SYSENTER/SYSCALL32 calling
* convention. Adjust regs so it looks like we entered using int80.
*/
unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
vdso_image_32.sym_int80_landing_pad;
/*
* SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
* so that 'regs->ip -= 2' lands back on an int $0x80 instruction.
* Fix it up.
*/
regs->ip = landing_pad;
/* Invoke the syscall. If it failed, keep it simple: use IRET. */
if (!__do_fast_syscall_32(regs))
return false;
/*
* Check that the register state is valid for using SYSRETL/SYSEXIT
* to exit to userspace. Otherwise use the slower but fully capable
* IRET exit path.
*/
/* XEN PV guests always use the IRET path */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/* EIP must point to the VDSO landing pad */
if (unlikely(regs->ip != landing_pad))
return false;
/* CS and SS must match the values set in MSR_STAR */
if (unlikely(regs->cs != __USER32_CS || regs->ss != __USER_DS))
return false;
/* If the TF, RF, or VM flags are set, use IRET */
if (unlikely(regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF | X86_EFLAGS_VM)))
return false;
/* Use SYSRETL/SYSEXIT to exit to userspace */
return true;
}
/* Returns true to return using SYSEXIT/SYSRETL, or false to use IRET */
__visible noinstr bool do_SYSENTER_32(struct pt_regs *regs)
{
/* SYSENTER loses RSP, but the vDSO saved it in RBP. */
regs->sp = regs->bp;
/* SYSENTER clobbers EFLAGS.IF. Assume it was set in usermode. */
regs->flags |= X86_EFLAGS_IF;
return do_fast_syscall_32(regs);
}
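For the int80_is_external() check above, the ISR indexing arithmetic can be worked through in isolation; the only assumption baked in is the usual local-APIC layout of one 32-vector ISR word every 0x10 bytes:
#include <stdio.h>
int main(void)
{
	unsigned int vector = 0x80;
	unsigned int offs = (vector / 32) * 0x10;	/* word 4 -> offset 0x40 */
	unsigned int bit  = 1u << (vector % 32);	/* bit 0  -> mask 0x1   */
	printf("APIC_ISR + 0x%x, mask 0x%x\n", offs, bit);
	return 0;
}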

View File

@ -1,15 +1,20 @@
// SPDX-License-Identifier: GPL-2.0
/* System call table for x86-64. */
// SPDX-License-Identifier: GPL-2.0-only
/* 64-bit system call dispatch */
#include <linux/linkage.h>
#include <linux/sys.h>
#include <linux/cache.h>
#include <linux/syscalls.h>
#include <linux/entry-common.h>
#include <linux/nospec.h>
#include <asm/syscall.h>
#define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
#include <asm/syscalls_64.h>
#ifdef CONFIG_X86_X32_ABI
#include <asm/syscalls_x32.h>
#endif
#undef __SYSCALL
#undef __SYSCALL_NORETURN
@ -33,4 +38,104 @@ long x64_sys_call(const struct pt_regs *regs, unsigned int nr)
#include <asm/syscalls_64.h>
default: return __x64_sys_ni_syscall(regs);
}
};
}
#ifdef CONFIG_X86_X32_ABI
long x32_sys_call(const struct pt_regs *regs, unsigned int nr)
{
switch (nr) {
#include <asm/syscalls_x32.h>
default: return __x64_sys_ni_syscall(regs);
}
}
#endif
static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
{
/*
* Convert negative numbers to very high and thus out of range
* numbers for comparisons.
*/
unsigned int unr = nr;
if (likely(unr < NR_syscalls)) {
unr = array_index_nospec(unr, NR_syscalls);
regs->ax = x64_sys_call(regs, unr);
return true;
}
return false;
}
static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
{
/*
* Adjust the starting offset of the table, and convert numbers
* < __X32_SYSCALL_BIT to very high and thus out of range
* numbers for comparisons.
*/
unsigned int xnr = nr - __X32_SYSCALL_BIT;
if (IS_ENABLED(CONFIG_X86_X32_ABI) && likely(xnr < X32_NR_syscalls)) {
xnr = array_index_nospec(xnr, X32_NR_syscalls);
regs->ax = x32_sys_call(regs, xnr);
return true;
}
return false;
}
/* Returns true to return using SYSRET, or false to use IRET */
__visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr)
{
add_random_kstack_offset();
nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
if (!do_syscall_x64(regs, nr) && !do_syscall_x32(regs, nr) && nr != -1) {
/* Invalid system call, but still a system call. */
regs->ax = __x64_sys_ni_syscall(regs);
}
instrumentation_end();
syscall_exit_to_user_mode(regs);
/*
* Check that the register state is valid for using SYSRET to exit
* to userspace. Otherwise use the slower but fully capable IRET
* exit path.
*/
/* XEN PV guests always use the IRET path */
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return false;
/* SYSRET requires RCX == RIP and R11 == EFLAGS */
if (unlikely(regs->cx != regs->ip || regs->r11 != regs->flags))
return false;
/* CS and SS must match the values set in MSR_STAR */
if (unlikely(regs->cs != __USER_CS || regs->ss != __USER_DS))
return false;
/*
* On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
* in kernel space. This essentially lets the user take over
* the kernel, since userspace controls RSP.
*
* TASK_SIZE_MAX covers all user-accessible addresses other than
* the deprecated vsyscall page.
*/
if (unlikely(regs->ip >= TASK_SIZE_MAX))
return false;
/*
* SYSRET cannot restore RF. It can restore TF, but unlike IRET,
* restoring TF results in a trap from userspace immediately after
* SYSRET.
*/
if (unlikely(regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF)))
return false;
/* Use SYSRET to exit to userspace */
return true;
}
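The x64_sys_call() switch above is produced by re-including the generated syscall header with a different __SYSCALL definition. A self-contained user-space sketch of the same define-once, expand-twice pattern, with made-up entry names standing in for the generated table:
#include <stdio.h>
#define SYSCALL_LIST(X)		\
	X(0, hello)		\
	X(1, goodbye)
/* First expansion: one handler per entry. */
#define DEFINE_HANDLER(nr, name) static long do_##name(void) { return 100 + (nr); }
SYSCALL_LIST(DEFINE_HANDLER)
#undef DEFINE_HANDLER
/* Second expansion: one switch case per entry. */
static long dispatch(unsigned int nr)
{
#define DISPATCH_CASE(nr, name) case nr: return do_##name();
	switch (nr) {
	SYSCALL_LIST(DISPATCH_CASE)
	default: return -1;	/* the ni_syscall equivalent */
	}
#undef DISPATCH_CASE
}
int main(void)
{
	printf("%ld %ld %ld\n", dispatch(0), dispatch(1), dispatch(7));
	return 0;
}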

View File

@ -1,25 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/* System call table for x32 ABI. */
#include <linux/linkage.h>
#include <linux/sys.h>
#include <linux/cache.h>
#include <linux/syscalls.h>
#include <asm/syscall.h>
#define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
#include <asm/syscalls_x32.h>
#undef __SYSCALL
#undef __SYSCALL_NORETURN
#define __SYSCALL_NORETURN __SYSCALL
#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
long x32_sys_call(const struct pt_regs *regs, unsigned int nr)
{
switch (nr) {
#include <asm/syscalls_x32.h>
default: return __x64_sys_ni_syscall(regs);
}
};

View File

@ -396,7 +396,7 @@
381 i386 pkey_alloc sys_pkey_alloc
382 i386 pkey_free sys_pkey_free
383 i386 statx sys_statx
384 i386 arch_prctl sys_arch_prctl compat_sys_arch_prctl
384 i386 arch_prctl sys_arch_prctl
385 i386 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
386 i386 rseq sys_rseq
393 i386 semget sys_semget

View File

@ -133,6 +133,7 @@ KBUILD_CFLAGS_32 += -fno-stack-protector
KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
KBUILD_CFLAGS_32 += -DBUILD_VDSO
ifdef CONFIG_MITIGATION_RETPOLINE
ifneq ($(RETPOLINE_VDSO_CFLAGS),)

View File

@ -7,7 +7,7 @@
* vDSO uses a dedicated handler; the addresses are relative to the overall
* exception table, not each individual entry.
*/
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#define _ASM_VDSO_EXTABLE_HANDLE(from, to) \
ASM_VDSO_EXTABLE_HANDLE from to

View File

@ -48,8 +48,7 @@ int __init init_vdso_image(const struct vdso_image *image)
apply_alternatives((struct alt_instr *)(image->data + image->alt),
(struct alt_instr *)(image->data + image->alt +
image->alt_len),
NULL);
image->alt_len));
return 0;
}

View File

@ -2849,7 +2849,7 @@ static bool is_uprobe_at_func_entry(struct pt_regs *regs)
return true;
/* endbr64 (64-bit only) */
if (user_64bit_mode(regs) && is_endbr(*(u32 *)auprobe->insn))
if (user_64bit_mode(regs) && is_endbr((u32 *)auprobe->insn))
return true;
return false;

View File

@ -4735,9 +4735,9 @@ static int adl_hw_config(struct perf_event *event)
return -EOPNOTSUPP;
}
static enum hybrid_cpu_type adl_get_hybrid_cpu_type(void)
static enum intel_cpu_type adl_get_hybrid_cpu_type(void)
{
return HYBRID_INTEL_CORE;
return INTEL_CPU_TYPE_CORE;
}
static inline bool erratum_hsw11(struct perf_event *event)
@ -5082,7 +5082,8 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
static struct x86_hybrid_pmu *find_hybrid_pmu_for_cpu(void)
{
u8 cpu_type = get_this_hybrid_cpu_type();
struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
enum intel_cpu_type cpu_type = c->topo.intel_type;
int i;
/*
@ -5091,7 +5092,7 @@ static struct x86_hybrid_pmu *find_hybrid_pmu_for_cpu(void)
* on it. There should be a fixup function provided for these
* troublesome CPUs (->get_hybrid_cpu_type).
*/
if (cpu_type == HYBRID_INTEL_NONE) {
if (cpu_type == INTEL_CPU_TYPE_UNKNOWN) {
if (x86_pmu.get_hybrid_cpu_type)
cpu_type = x86_pmu.get_hybrid_cpu_type();
else
@ -5108,16 +5109,16 @@ static struct x86_hybrid_pmu *find_hybrid_pmu_for_cpu(void)
enum hybrid_pmu_type pmu_type = x86_pmu.hybrid_pmu[i].pmu_type;
u32 native_id;
if (cpu_type == HYBRID_INTEL_CORE && pmu_type == hybrid_big)
if (cpu_type == INTEL_CPU_TYPE_CORE && pmu_type == hybrid_big)
return &x86_pmu.hybrid_pmu[i];
if (cpu_type == HYBRID_INTEL_ATOM) {
if (cpu_type == INTEL_CPU_TYPE_ATOM) {
if (x86_pmu.num_hybrid_pmus == 2 && pmu_type == hybrid_small)
return &x86_pmu.hybrid_pmu[i];
native_id = get_this_hybrid_cpu_native_id();
if (native_id == skt_native_id && pmu_type == hybrid_small)
native_id = c->topo.intel_native_model_id;
if (native_id == INTEL_ATOM_SKT_NATIVE_ID && pmu_type == hybrid_small)
return &x86_pmu.hybrid_pmu[i];
if (native_id == cmt_native_id && pmu_type == hybrid_tiny)
if (native_id == INTEL_ATOM_CMT_NATIVE_ID && pmu_type == hybrid_tiny)
return &x86_pmu.hybrid_pmu[i];
}
}
@ -6583,15 +6584,21 @@ __init int intel_pmu_init(void)
char *name;
struct x86_hybrid_pmu *pmu;
/* Architectural Perfmon was introduced starting with Core "Yonah" */
if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) {
switch (boot_cpu_data.x86) {
case 0x6:
return p6_pmu_init();
case 0xb:
case 6:
if (boot_cpu_data.x86_vfm < INTEL_CORE_YONAH)
return p6_pmu_init();
break;
case 11:
return knc_pmu_init();
case 0xf:
case 15:
return p4_pmu_init();
}
pr_cont("unsupported CPU family %d model %d ",
boot_cpu_data.x86, boot_cpu_data.x86_model);
return -ENODEV;
}
@ -6739,7 +6746,7 @@ __init int intel_pmu_init(void)
case INTEL_ATOM_SILVERMONT_D:
case INTEL_ATOM_SILVERMONT_MID:
case INTEL_ATOM_AIRMONT:
case INTEL_ATOM_AIRMONT_MID:
case INTEL_ATOM_SILVERMONT_MID2:
memcpy(hw_cache_event_ids, slm_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,

View File

@ -10,6 +10,7 @@
#include <linux/perf_event.h>
#include <asm/perf_event_p4.h>
#include <asm/cpu_device_id.h>
#include <asm/hardirq.h>
#include <asm/apic.h>
@ -732,9 +733,9 @@ static bool p4_event_match_cpu_model(unsigned int event_idx)
{
/* INSTR_COMPLETED event only exists for model 3, 4, 6 (Prescott) */
if (event_idx == P4_EVENT_INSTR_COMPLETED) {
if (boot_cpu_data.x86_model != 3 &&
boot_cpu_data.x86_model != 4 &&
boot_cpu_data.x86_model != 6)
if (boot_cpu_data.x86_vfm != INTEL_P4_PRESCOTT &&
boot_cpu_data.x86_vfm != INTEL_P4_PRESCOTT_2M &&
boot_cpu_data.x86_vfm != INTEL_P4_CEDARMILL)
return false;
}

View File

@ -2,6 +2,8 @@
#include <linux/perf_event.h>
#include <linux/types.h>
#include <asm/cpu_device_id.h>
#include "../perf_event.h"
/*
@ -248,30 +250,8 @@ __init int p6_pmu_init(void)
{
x86_pmu = p6_pmu;
switch (boot_cpu_data.x86_model) {
case 1: /* Pentium Pro */
if (boot_cpu_data.x86_vfm == INTEL_PENTIUM_PRO)
x86_add_quirk(p6_pmu_rdpmc_quirk);
break;
case 3: /* Pentium II - Klamath */
case 5: /* Pentium II - Deschutes */
case 6: /* Pentium II - Mendocino */
break;
case 7: /* Pentium III - Katmai */
case 8: /* Pentium III - Coppermine */
case 10: /* Pentium III Xeon */
case 11: /* Pentium III - Tualatin */
break;
case 9: /* Pentium M - Banias */
case 13: /* Pentium M - Dothan */
break;
default:
pr_cont("unsupported p6 CPU model %d ", boot_cpu_data.x86_model);
return -ENODEV;
}
memcpy(hw_cache_event_ids, p6_hw_cache_event_ids,
sizeof(hw_cache_event_ids));

View File

@ -674,18 +674,6 @@ enum {
#define PERF_PEBS_DATA_SOURCE_GRT_MAX 0x10
#define PERF_PEBS_DATA_SOURCE_GRT_MASK (PERF_PEBS_DATA_SOURCE_GRT_MAX - 1)
/*
* CPUID.1AH.EAX[31:0] uniquely identifies the microarchitecture
* of the core. Bits 31-24 indicates its core type (Core or Atom)
* and Bits [23:0] indicates the native model ID of the core.
* Core type and native model ID are defined in below enumerations.
*/
enum hybrid_cpu_type {
HYBRID_INTEL_NONE,
HYBRID_INTEL_ATOM = 0x20,
HYBRID_INTEL_CORE = 0x40,
};
#define X86_HYBRID_PMU_ATOM_IDX 0
#define X86_HYBRID_PMU_CORE_IDX 1
#define X86_HYBRID_PMU_TINY_IDX 2
@ -702,11 +690,6 @@ enum hybrid_pmu_type {
hybrid_big_small_tiny = hybrid_big | hybrid_small_tiny,
};
enum atom_native_id {
cmt_native_id = 0x2, /* Crestmont */
skt_native_id = 0x3, /* Skymont */
};
struct x86_hybrid_pmu {
struct pmu pmu;
const char *name;
@ -993,7 +976,7 @@ struct x86_pmu {
*/
int num_hybrid_pmus;
struct x86_hybrid_pmu *hybrid_pmu;
enum hybrid_cpu_type (*get_hybrid_cpu_type) (void);
enum intel_cpu_type (*get_hybrid_cpu_type) (void);
};
struct x86_perf_task_context_opt {

View File

@ -239,5 +239,4 @@ void hyperv_setup_mmu_ops(void)
pr_info("Using hypercall for remote TLB flush\n");
pv_ops.mmu.flush_tlb_multi = hyperv_flush_tlb_multi;
pv_ops.mmu.tlb_remove_table = tlb_remove_table;
}

View File

@ -8,6 +8,7 @@ generated-y += syscalls_x32.h
generated-y += unistd_32_ia32.h
generated-y += unistd_64_x32.h
generated-y += xen-hypercalls.h
generated-y += cpufeaturemasks.h
generic-y += early_ioremap.h
generic-y += fprobe.h

View File

@ -15,7 +15,7 @@
#define ALT_DIRECT_CALL(feature) ((ALT_FLAG_DIRECT_CALL << ALT_FLAGS_SHIFT) | (feature))
#define ALT_CALL_ALWAYS ALT_DIRECT_CALL(X86_FEATURE_ALWAYS)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/stddef.h>
@ -87,20 +87,19 @@ extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
* instructions were patched in already:
*/
extern int alternatives_patched;
struct module;
extern void alternative_instructions(void);
extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
struct module *mod);
extern void apply_retpolines(s32 *start, s32 *end, struct module *mod);
extern void apply_returns(s32 *start, s32 *end, struct module *mod);
extern void apply_seal_endbr(s32 *start, s32 *end, struct module *mod);
extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
extern void apply_retpolines(s32 *start, s32 *end);
extern void apply_returns(s32 *start, s32 *end);
extern void apply_seal_endbr(s32 *start, s32 *end);
extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine,
s32 *start_cfi, s32 *end_cfi, struct module *mod);
s32 *start_cfi, s32 *end_cfi);
struct module;
struct callthunk_sites {
s32 *call_start, *call_end;
struct alt_instr *alt_start, *alt_end;
};
#ifdef CONFIG_CALL_THUNKS
@ -237,10 +236,12 @@ static inline int alternatives_text_reserved(void *start, void *end)
* references: i.e., if used for a function, it would add the PLT
* suffix.
*/
#define alternative_call(oldfunc, newfunc, ft_flags, output, input...) \
#define alternative_call(oldfunc, newfunc, ft_flags, output, input, clobbers...) \
asm_inline volatile(ALTERNATIVE("call %c[old]", "call %c[new]", ft_flags) \
: ALT_OUTPUT_SP(output) \
: [old] "i" (oldfunc), [new] "i" (newfunc), ## input)
: [old] "i" (oldfunc), [new] "i" (newfunc) \
COMMA(input) \
: clobbers)
/*
* Like alternative_call, but there are two features and respective functions.
@ -249,24 +250,14 @@ static inline int alternatives_text_reserved(void *start, void *end)
* Otherwise, old function is used.
*/
#define alternative_call_2(oldfunc, newfunc1, ft_flags1, newfunc2, ft_flags2, \
output, input...) \
output, input, clobbers...) \
asm_inline volatile(ALTERNATIVE_2("call %c[old]", "call %c[new1]", ft_flags1, \
"call %c[new2]", ft_flags2) \
: ALT_OUTPUT_SP(output) \
: [old] "i" (oldfunc), [new1] "i" (newfunc1), \
[new2] "i" (newfunc2), ## input)
/*
* use this macro(s) if you need more than one output parameter
* in alternative_io
*/
#define ASM_OUTPUT2(a...) a
/*
* use this macro if you need clobbers but no inputs in
* alternative_{input,io,call}()
*/
#define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
[new2] "i" (newfunc2) \
COMMA(input) \
: clobbers)
#define ALT_OUTPUT_SP(...) ASM_CALL_CONSTRAINT, ## __VA_ARGS__
@ -286,7 +277,7 @@ static inline int alternatives_text_reserved(void *start, void *end)
void BUG_func(void);
void nop_func(void);
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#ifdef CONFIG_SMP
.macro LOCK_PREFIX
@ -369,6 +360,6 @@ void nop_func(void);
ALTERNATIVE_2 oldinstr, newinstr_no, X86_FEATURE_ALWAYS, \
newinstr_yes, ft_flags
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_ALTERNATIVE_H */
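A hypothetical caller of the reworked alternative_call(): old_impl/new_impl and the constraints are made up, the point being that output, input and clobber lists are now three separate arguments, with multi-operand lists wrapped in ASM_OUTPUT()/ASM_INPUT() so they pass through as single macro parameters:
static __always_inline unsigned long example_feature_call(unsigned long arg)
{
	unsigned long ret;
	/* old_impl/new_impl are hypothetical asm-callable helpers */
	alternative_call(old_impl, new_impl, X86_FEATURE_ALWAYS,
			 ASM_OUTPUT("=a" (ret)),
			 ASM_INPUT("D" (arg)),
			 "memory", "cc");
	return ret;
}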

View File

@ -27,7 +27,6 @@ struct amd_l3_cache {
};
struct amd_northbridge {
struct pci_dev *root;
struct pci_dev *misc;
struct pci_dev *link;
struct amd_l3_cache l3_cache;

View File

@ -30,7 +30,31 @@ static inline u16 amd_num_nodes(void)
return topology_amd_nodes_per_pkg() * topology_max_packages();
}
#ifdef CONFIG_AMD_NODE
int __must_check amd_smn_read(u16 node, u32 address, u32 *value);
int __must_check amd_smn_write(u16 node, u32 address, u32 value);
/* Should only be used by the HSMP driver. */
int __must_check amd_smn_hsmp_rdwr(u16 node, u32 address, u32 *value, bool write);
#else
static inline int __must_check amd_smn_read(u16 node, u32 address, u32 *value) { return -ENODEV; }
static inline int __must_check amd_smn_write(u16 node, u32 address, u32 value) { return -ENODEV; }
static inline int __must_check amd_smn_hsmp_rdwr(u16 node, u32 address, u32 *value, bool write)
{
return -ENODEV;
}
#endif /* CONFIG_AMD_NODE */
/* helper for use with read_poll_timeout */
static inline int smn_read_register(u32 reg)
{
int data, rc;
rc = amd_smn_read(0, reg, &data);
if (rc)
return rc;
return data;
}
#endif /*_ASM_X86_AMD_NODE_H_*/
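A hypothetical user of the smn_read_register() helper above, built on read_poll_timeout() from <linux/iopoll.h>; the register offset and status bit are made-up values:
#define EXAMPLE_SMN_STATUS_REG	0x12345678	/* hypothetical SMN offset */
#define EXAMPLE_SMN_DONE	BIT(0)		/* hypothetical status bit */
static int example_wait_for_done(void)
{
	int regval, err;
	/* Poll until the done bit is set, or the read itself fails. */
	err = read_poll_timeout(smn_read_register, regval,
				regval < 0 || (regval & EXAMPLE_SMN_DONE),
				100, 10000, false, EXAMPLE_SMN_STATUS_REG);
	if (err)
		return err;			/* timed out */
	return regval < 0 ? regval : 0;		/* propagate SMN read errors */
}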

View File

@ -99,8 +99,8 @@ static inline void native_apic_mem_write(u32 reg, u32 v)
volatile u32 *addr = (volatile u32 *)(APIC_BASE + reg);
alternative_io("movl %0, %1", "xchgl %0, %1", X86_BUG_11AP,
ASM_OUTPUT2("=r" (v), "=m" (*addr)),
ASM_OUTPUT2("0" (v), "m" (*addr)));
ASM_OUTPUT("=r" (v), "=m" (*addr)),
ASM_INPUT("0" (v), "m" (*addr)));
}
static inline u32 native_apic_mem_read(u32 reg)

View File

@ -16,9 +16,10 @@ static __always_inline unsigned int __arch_hweight32(unsigned int w)
{
unsigned int res;
asm (ALTERNATIVE("call __sw_hweight32", "popcntl %1, %0", X86_FEATURE_POPCNT)
: "="REG_OUT (res)
: REG_IN (w));
asm_inline (ALTERNATIVE("call __sw_hweight32",
"popcntl %[val], %[cnt]", X86_FEATURE_POPCNT)
: [cnt] "=" REG_OUT (res), ASM_CALL_CONSTRAINT
: [val] REG_IN (w));
return res;
}
@ -44,9 +45,10 @@ static __always_inline unsigned long __arch_hweight64(__u64 w)
{
unsigned long res;
asm (ALTERNATIVE("call __sw_hweight64", "popcntq %1, %0", X86_FEATURE_POPCNT)
: "="REG_OUT (res)
: REG_IN (w));
asm_inline (ALTERNATIVE("call __sw_hweight64",
"popcntq %[val], %[cnt]", X86_FEATURE_POPCNT)
: [cnt] "=" REG_OUT (res), ASM_CALL_CONSTRAINT
: [val] REG_IN (w));
return res;
}

View File

@ -16,10 +16,10 @@
#include <asm/gsseg.h>
#include <asm/nospec-branch.h>
#ifndef CONFIG_X86_CMPXCHG64
#ifndef CONFIG_X86_CX8
extern void cmpxchg8b_emu(void);
#endif
#if defined(__GENKSYMS__) && defined(CONFIG_STACKPROTECTOR)
#ifdef CONFIG_STACKPROTECTOR
extern unsigned long __ref_stack_chk_guard;
#endif

View File

@ -2,7 +2,7 @@
#ifndef _ASM_X86_ASM_H
#define _ASM_X86_ASM_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
# define __ASM_FORM(x, ...) x,## __VA_ARGS__
# define __ASM_FORM_RAW(x, ...) x,## __VA_ARGS__
# define __ASM_FORM_COMMA(x, ...) x,## __VA_ARGS__,
@ -113,7 +113,7 @@
#endif
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifndef __pic__
static __always_inline __pure void *rip_rel_ptr(void *p)
{
@ -144,7 +144,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
# include <asm/extable_fixup_types.h>
/* Exception table entry */
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
# define _ASM_EXTABLE_TYPE(from, to, type) \
.pushsection "__ex_table","a" ; \
@ -164,7 +164,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
# define _ASM_NOKPROBE(entry)
# endif
#else /* ! __ASSEMBLY__ */
#else /* ! __ASSEMBLER__ */
# define DEFINE_EXTABLE_TYPE_REG \
".macro extable_type_reg type:req reg:req\n" \
@ -213,6 +213,17 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
/* For C file, we already have NOKPROBE_SYMBOL macro */
/* Insert a comma if args are non-empty */
#define COMMA(x...) __COMMA(x)
#define __COMMA(...) , ##__VA_ARGS__
/*
* Combine multiple asm inline constraint args into a single arg for passing to
* another macro.
*/
#define ASM_OUTPUT(x...) x
#define ASM_INPUT(x...) x
/*
* This output constraint should be used for any inline asm which has a "call"
* instruction. Otherwise the asm may be inserted before the frame pointer
@ -221,7 +232,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
*/
register unsigned long current_stack_pointer asm(_ASM_SP);
#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#define _ASM_EXTABLE(from, to) \
_ASM_EXTABLE_TYPE(from, to, EX_TYPE_DEFAULT)
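Illustrative expansions of the helpers added above, to make the empty-argument behaviour explicit:
/*
 * COMMA()          expands to nothing (## swallows the comma), so an
 *                  optional operand list can be spliced into an asm()
 *                  statement without leaving a stray comma behind;
 * COMMA("r" (x))   expands to: , "r" (x)
 * ASM_OUTPUT(a, b) and ASM_INPUT(a, b) simply pass a multi-operand
 * list through as a single macro argument, replacing the old
 * ASM_OUTPUT2() wrapper.
 */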

View File

@ -30,14 +30,14 @@ static __always_inline void arch_atomic_set(atomic_t *v, int i)
static __always_inline void arch_atomic_add(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "addl %1,%0"
asm_inline volatile(LOCK_PREFIX "addl %1, %0"
: "+m" (v->counter)
: "ir" (i) : "memory");
}
static __always_inline void arch_atomic_sub(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "subl %1,%0"
asm_inline volatile(LOCK_PREFIX "subl %1, %0"
: "+m" (v->counter)
: "ir" (i) : "memory");
}
@ -50,14 +50,14 @@ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
static __always_inline void arch_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
asm_inline volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter) :: "memory");
}
#define arch_atomic_inc arch_atomic_inc
static __always_inline void arch_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
asm_inline volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter) :: "memory");
}
#define arch_atomic_dec arch_atomic_dec
@ -116,7 +116,7 @@ static __always_inline int arch_atomic_xchg(atomic_t *v, int new)
static __always_inline void arch_atomic_and(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "andl %1,%0"
asm_inline volatile(LOCK_PREFIX "andl %1, %0"
: "+m" (v->counter)
: "ir" (i)
: "memory");
@ -134,7 +134,7 @@ static __always_inline int arch_atomic_fetch_and(int i, atomic_t *v)
static __always_inline void arch_atomic_or(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "orl %1,%0"
asm_inline volatile(LOCK_PREFIX "orl %1, %0"
: "+m" (v->counter)
: "ir" (i)
: "memory");
@ -152,7 +152,7 @@ static __always_inline int arch_atomic_fetch_or(int i, atomic_t *v)
static __always_inline void arch_atomic_xor(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "xorl %1,%0"
asm_inline volatile(LOCK_PREFIX "xorl %1, %0"
: "+m" (v->counter)
: "ir" (i)
: "memory");

View File

@ -48,17 +48,20 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
ATOMIC64_EXPORT(atomic64_##sym)
#endif
#ifdef CONFIG_X86_CMPXCHG64
#define __alternative_atomic64(f, g, out, in...) \
asm volatile("call %c[func]" \
#ifdef CONFIG_X86_CX8
#define __alternative_atomic64(f, g, out, in, clobbers...) \
asm volatile("call %c[func]" \
: ALT_OUTPUT_SP(out) \
: [func] "i" (atomic64_##g##_cx8), ## in)
: [func] "i" (atomic64_##g##_cx8) \
COMMA(in) \
: clobbers)
#define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8)
#else
#define __alternative_atomic64(f, g, out, in...) \
alternative_call(atomic64_##f##_386, atomic64_##g##_cx8, \
X86_FEATURE_CX8, ASM_OUTPUT2(out), ## in)
#define __alternative_atomic64(f, g, out, in, clobbers...) \
alternative_call(atomic64_##f##_386, atomic64_##g##_cx8, \
X86_FEATURE_CX8, ASM_OUTPUT(out), \
ASM_INPUT(in), clobbers)
#define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8); \
ATOMIC64_DECL_ONE(sym##_386)
@ -69,8 +72,8 @@ ATOMIC64_DECL_ONE(inc_386);
ATOMIC64_DECL_ONE(dec_386);
#endif
#define alternative_atomic64(f, out, in...) \
__alternative_atomic64(f, f, ASM_OUTPUT2(out), ## in)
#define alternative_atomic64(f, out, in, clobbers...) \
__alternative_atomic64(f, f, ASM_OUTPUT(out), ASM_INPUT(in), clobbers)
ATOMIC64_DECL(read);
ATOMIC64_DECL(set);
@ -105,9 +108,10 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
s64 o;
unsigned high = (unsigned)(n >> 32);
unsigned low = (unsigned)n;
alternative_atomic64(xchg, "=&A" (o),
"S" (v), "b" (low), "c" (high)
: "memory");
alternative_atomic64(xchg,
"=&A" (o),
ASM_INPUT("S" (v), "b" (low), "c" (high)),
"memory");
return o;
}
#define arch_atomic64_xchg arch_atomic64_xchg
@ -116,23 +120,25 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
unsigned high = (unsigned)(i >> 32);
unsigned low = (unsigned)i;
alternative_atomic64(set, /* no output */,
"S" (v), "b" (low), "c" (high)
: "eax", "edx", "memory");
alternative_atomic64(set,
/* no output */,
ASM_INPUT("S" (v), "b" (low), "c" (high)),
"eax", "edx", "memory");
}
static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
{
s64 r;
alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
alternative_atomic64(read, "=&A" (r), "c" (v), "memory");
return r;
}
static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
alternative_atomic64(add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
ASM_NO_INPUT_CLOBBER("memory"));
ASM_OUTPUT("+A" (i), "+c" (v)),
/* no input */,
"memory");
return i;
}
#define arch_atomic64_add_return arch_atomic64_add_return
@ -140,8 +146,9 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
alternative_atomic64(sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
ASM_NO_INPUT_CLOBBER("memory"));
ASM_OUTPUT("+A" (i), "+c" (v)),
/* no input */,
"memory");
return i;
}
#define arch_atomic64_sub_return arch_atomic64_sub_return
@ -149,8 +156,10 @@ static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
static __always_inline s64 arch_atomic64_inc_return(atomic64_t *v)
{
s64 a;
alternative_atomic64(inc_return, "=&A" (a),
"S" (v) : "memory", "ecx");
alternative_atomic64(inc_return,
"=&A" (a),
"S" (v),
"memory", "ecx");
return a;
}
#define arch_atomic64_inc_return arch_atomic64_inc_return
@ -158,8 +167,10 @@ static __always_inline s64 arch_atomic64_inc_return(atomic64_t *v)
static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v)
{
s64 a;
alternative_atomic64(dec_return, "=&A" (a),
"S" (v) : "memory", "ecx");
alternative_atomic64(dec_return,
"=&A" (a),
"S" (v),
"memory", "ecx");
return a;
}
#define arch_atomic64_dec_return arch_atomic64_dec_return
@ -167,28 +178,34 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v)
static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
__alternative_atomic64(add, add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
ASM_NO_INPUT_CLOBBER("memory"));
ASM_OUTPUT("+A" (i), "+c" (v)),
/* no input */,
"memory");
}
static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
__alternative_atomic64(sub, sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
ASM_NO_INPUT_CLOBBER("memory"));
ASM_OUTPUT("+A" (i), "+c" (v)),
/* no input */,
"memory");
}
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
__alternative_atomic64(inc, inc_return, /* no output */,
"S" (v) : "memory", "eax", "ecx", "edx");
__alternative_atomic64(inc, inc_return,
/* no output */,
"S" (v),
"memory", "eax", "ecx", "edx");
}
#define arch_atomic64_inc arch_atomic64_inc
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
__alternative_atomic64(dec, dec_return, /* no output */,
"S" (v) : "memory", "eax", "ecx", "edx");
__alternative_atomic64(dec, dec_return,
/* no output */,
"S" (v),
"memory", "eax", "ecx", "edx");
}
#define arch_atomic64_dec arch_atomic64_dec
@ -197,8 +214,9 @@ static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
unsigned low = (unsigned)u;
unsigned high = (unsigned)(u >> 32);
alternative_atomic64(add_unless,
ASM_OUTPUT2("+A" (a), "+c" (low), "+D" (high)),
"S" (v) : "memory");
ASM_OUTPUT("+A" (a), "+c" (low), "+D" (high)),
"S" (v),
"memory");
return (int)a;
}
#define arch_atomic64_add_unless arch_atomic64_add_unless
@ -206,8 +224,10 @@ static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
static __always_inline int arch_atomic64_inc_not_zero(atomic64_t *v)
{
int r;
alternative_atomic64(inc_not_zero, "=&a" (r),
"S" (v) : "ecx", "edx", "memory");
alternative_atomic64(inc_not_zero,
"=&a" (r),
"S" (v),
"ecx", "edx", "memory");
return r;
}
#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
@ -215,8 +235,10 @@ static __always_inline int arch_atomic64_inc_not_zero(atomic64_t *v)
static __always_inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
s64 r;
alternative_atomic64(dec_if_positive, "=&A" (r),
"S" (v) : "ecx", "memory");
alternative_atomic64(dec_if_positive,
"=&A" (r),
"S" (v),
"ecx", "memory");
return r;
}
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
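Schematically (hand-expanded, not literal preprocessor output), arch_atomic64_read() on a build without CONFIG_X86_CX8 now funnels into the reworked alternative_call() like this:
/* alternative_atomic64(read, "=&A" (r), "c" (v), "memory") becomes: */
alternative_call(atomic64_read_386, atomic64_read_cx8, X86_FEATURE_CX8,
		 ASM_OUTPUT("=&A" (r)),
		 ASM_INPUT("c" (v)),
		 "memory");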

View File

@ -22,14 +22,14 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "addq %1,%0"
asm_inline volatile(LOCK_PREFIX "addq %1, %0"
: "=m" (v->counter)
: "er" (i), "m" (v->counter) : "memory");
}
static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "subq %1,%0"
asm_inline volatile(LOCK_PREFIX "subq %1, %0"
: "=m" (v->counter)
: "er" (i), "m" (v->counter) : "memory");
}
@ -42,7 +42,7 @@ static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "incq %0"
asm_inline volatile(LOCK_PREFIX "incq %0"
: "=m" (v->counter)
: "m" (v->counter) : "memory");
}
@ -50,7 +50,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "decq %0"
asm_inline volatile(LOCK_PREFIX "decq %0"
: "=m" (v->counter)
: "m" (v->counter) : "memory");
}
@ -110,7 +110,7 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new)
static __always_inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "andq %1,%0"
asm_inline volatile(LOCK_PREFIX "andq %1, %0"
: "+m" (v->counter)
: "er" (i)
: "memory");
@ -128,7 +128,7 @@ static __always_inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
static __always_inline void arch_atomic64_or(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "orq %1,%0"
asm_inline volatile(LOCK_PREFIX "orq %1, %0"
: "+m" (v->counter)
: "er" (i)
: "memory");
@ -146,7 +146,7 @@ static __always_inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
static __always_inline void arch_atomic64_xor(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "xorq %1,%0"
asm_inline volatile(LOCK_PREFIX "xorq %1, %0"
: "+m" (v->counter)
: "er" (i)
: "memory");

View File

@ -52,12 +52,12 @@ static __always_inline void
arch_set_bit(long nr, volatile unsigned long *addr)
{
if (__builtin_constant_p(nr)) {
asm volatile(LOCK_PREFIX "orb %b1,%0"
asm_inline volatile(LOCK_PREFIX "orb %b1,%0"
: CONST_MASK_ADDR(nr, addr)
: "iq" (CONST_MASK(nr))
: "memory");
} else {
asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
asm_inline volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
}
}
@ -72,11 +72,11 @@ static __always_inline void
arch_clear_bit(long nr, volatile unsigned long *addr)
{
if (__builtin_constant_p(nr)) {
asm volatile(LOCK_PREFIX "andb %b1,%0"
asm_inline volatile(LOCK_PREFIX "andb %b1,%0"
: CONST_MASK_ADDR(nr, addr)
: "iq" (~CONST_MASK(nr)));
} else {
asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
asm_inline volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
}
}
@ -98,7 +98,7 @@ static __always_inline bool arch_xor_unlock_is_negative_byte(unsigned long mask,
volatile unsigned long *addr)
{
bool negative;
asm volatile(LOCK_PREFIX "xorb %2,%1"
asm_inline volatile(LOCK_PREFIX "xorb %2,%1"
CC_SET(s)
: CC_OUT(s) (negative), WBYTE_ADDR(addr)
: "iq" ((char)mask) : "memory");
@ -122,11 +122,11 @@ static __always_inline void
arch_change_bit(long nr, volatile unsigned long *addr)
{
if (__builtin_constant_p(nr)) {
asm volatile(LOCK_PREFIX "xorb %b1,%0"
asm_inline volatile(LOCK_PREFIX "xorb %b1,%0"
: CONST_MASK_ADDR(nr, addr)
: "iq" (CONST_MASK(nr)));
} else {
asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
asm_inline volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
}
}
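A worked example for the constant-nr fast path above, assuming the usual CONST_MASK()/CONST_MASK_ADDR() definitions (byte-granular address, bit-within-byte mask):
/*
 * arch_set_bit(13, addr) with a constant nr folds to a single
 * LOCK-prefixed byte-wide RMW:  lock orb $0x20, 1(addr)
 *   byte offset: 13 / 8        == 1
 *   byte mask:   1 << (13 % 8) == 0x20
 * instead of the register-width bts used for a variable nr.
 */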

View File

@ -74,7 +74,7 @@
# define BOOT_STACK_SIZE 0x1000
#endif
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
extern unsigned int output_len;
extern const unsigned long kernel_text_size;
extern const unsigned long kernel_total_size;

View File

@ -17,13 +17,17 @@
* In clang we have UD1s reporting UBSAN failures on X86, 64 and 32bit.
*/
#define INSN_ASOP 0x67
#define INSN_LOCK 0xf0
#define OPCODE_ESCAPE 0x0f
#define SECOND_BYTE_OPCODE_UD1 0xb9
#define SECOND_BYTE_OPCODE_UD2 0x0b
#define BUG_NONE 0xffff
#define BUG_UD1 0xfffe
#define BUG_UD2 0xfffd
#define BUG_UD2 0xfffe
#define BUG_UD1 0xfffd
#define BUG_UD1_UBSAN 0xfffc
#define BUG_EA 0xffea
#define BUG_LOCK 0xfff0
#ifdef CONFIG_GENERIC_BUG

View File

@ -101,6 +101,16 @@ enum cfi_mode {
extern enum cfi_mode cfi_mode;
#ifdef CONFIG_FINEIBT_BHI
extern bool cfi_bhi;
#else
#define cfi_bhi (0)
#endif
typedef u8 bhi_thunk[32];
extern bhi_thunk __bhi_args[];
extern bhi_thunk __bhi_args_end[];
struct pt_regs;
#ifdef CONFIG_CFI_CLANG
@ -125,6 +135,18 @@ static inline int cfi_get_offset(void)
#define cfi_get_offset cfi_get_offset
extern u32 cfi_get_func_hash(void *func);
extern int cfi_get_func_arity(void *func);
#ifdef CONFIG_FINEIBT
extern bool decode_fineibt_insn(struct pt_regs *regs, unsigned long *target, u32 *type);
#else
static inline bool
decode_fineibt_insn(struct pt_regs *regs, unsigned long *target, u32 *type)
{
return false;
}
#endif
#else
static inline enum bug_trap_type handle_cfi_failure(struct pt_regs *regs)
@ -137,6 +159,10 @@ static inline u32 cfi_get_func_hash(void *func)
{
return 0;
}
static inline int cfi_get_func_arity(void *func)
{
return 0;
}
#endif /* CONFIG_CFI_CLANG */
#if HAS_KERNEL_IBT == 1

View File

@ -44,22 +44,22 @@ extern void __add_wrong_size(void)
__typeof__ (*(ptr)) __ret = (arg); \
switch (sizeof(*(ptr))) { \
case __X86_CASE_B: \
asm volatile (lock #op "b %b0, %1\n" \
asm_inline volatile (lock #op "b %b0, %1" \
: "+q" (__ret), "+m" (*(ptr)) \
: : "memory", "cc"); \
break; \
case __X86_CASE_W: \
asm volatile (lock #op "w %w0, %1\n" \
asm_inline volatile (lock #op "w %w0, %1" \
: "+r" (__ret), "+m" (*(ptr)) \
: : "memory", "cc"); \
break; \
case __X86_CASE_L: \
asm volatile (lock #op "l %0, %1\n" \
asm_inline volatile (lock #op "l %0, %1" \
: "+r" (__ret), "+m" (*(ptr)) \
: : "memory", "cc"); \
break; \
case __X86_CASE_Q: \
asm volatile (lock #op "q %q0, %1\n" \
asm_inline volatile (lock #op "q %q0, %1" \
: "+r" (__ret), "+m" (*(ptr)) \
: : "memory", "cc"); \
break; \
@ -91,7 +91,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_B: \
{ \
volatile u8 *__ptr = (volatile u8 *)(ptr); \
asm volatile(lock "cmpxchgb %2,%1" \
asm_inline volatile(lock "cmpxchgb %2, %1" \
: "=a" (__ret), "+m" (*__ptr) \
: "q" (__new), "0" (__old) \
: "memory"); \
@ -100,7 +100,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_W: \
{ \
volatile u16 *__ptr = (volatile u16 *)(ptr); \
asm volatile(lock "cmpxchgw %2,%1" \
asm_inline volatile(lock "cmpxchgw %2, %1" \
: "=a" (__ret), "+m" (*__ptr) \
: "r" (__new), "0" (__old) \
: "memory"); \
@ -109,7 +109,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_L: \
{ \
volatile u32 *__ptr = (volatile u32 *)(ptr); \
asm volatile(lock "cmpxchgl %2,%1" \
asm_inline volatile(lock "cmpxchgl %2, %1" \
: "=a" (__ret), "+m" (*__ptr) \
: "r" (__new), "0" (__old) \
: "memory"); \
@ -118,7 +118,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_Q: \
{ \
volatile u64 *__ptr = (volatile u64 *)(ptr); \
asm volatile(lock "cmpxchgq %2,%1" \
asm_inline volatile(lock "cmpxchgq %2, %1" \
: "=a" (__ret), "+m" (*__ptr) \
: "r" (__new), "0" (__old) \
: "memory"); \
@ -165,7 +165,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_B: \
{ \
volatile u8 *__ptr = (volatile u8 *)(_ptr); \
asm volatile(lock "cmpxchgb %[new], %[ptr]" \
asm_inline volatile(lock "cmpxchgb %[new], %[ptr]" \
CC_SET(z) \
: CC_OUT(z) (success), \
[ptr] "+m" (*__ptr), \
@ -177,7 +177,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_W: \
{ \
volatile u16 *__ptr = (volatile u16 *)(_ptr); \
asm volatile(lock "cmpxchgw %[new], %[ptr]" \
asm_inline volatile(lock "cmpxchgw %[new], %[ptr]" \
CC_SET(z) \
: CC_OUT(z) (success), \
[ptr] "+m" (*__ptr), \
@ -189,7 +189,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_L: \
{ \
volatile u32 *__ptr = (volatile u32 *)(_ptr); \
asm volatile(lock "cmpxchgl %[new], %[ptr]" \
asm_inline volatile(lock "cmpxchgl %[new], %[ptr]" \
CC_SET(z) \
: CC_OUT(z) (success), \
[ptr] "+m" (*__ptr), \
@ -201,7 +201,7 @@ extern void __add_wrong_size(void)
case __X86_CASE_Q: \
{ \
volatile u64 *__ptr = (volatile u64 *)(_ptr); \
asm volatile(lock "cmpxchgq %[new], %[ptr]" \
asm_inline volatile(lock "cmpxchgq %[new], %[ptr]" \
CC_SET(z) \
: CC_OUT(z) (success), \
[ptr] "+m" (*__ptr), \

View File

@ -19,7 +19,7 @@ union __u64_halves {
union __u64_halves o = { .full = (_old), }, \
n = { .full = (_new), }; \
\
asm volatile(_lock "cmpxchg8b %[ptr]" \
asm_inline volatile(_lock "cmpxchg8b %[ptr]" \
: [ptr] "+m" (*(_ptr)), \
"+a" (o.low), "+d" (o.high) \
: "b" (n.low), "c" (n.high) \
@ -45,7 +45,7 @@ static __always_inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new
n = { .full = (_new), }; \
bool ret; \
\
asm volatile(_lock "cmpxchg8b %[ptr]" \
asm_inline volatile(_lock "cmpxchg8b %[ptr]" \
CC_SET(e) \
: CC_OUT(e) (ret), \
[ptr] "+m" (*(_ptr)), \
@ -69,7 +69,7 @@ static __always_inline bool __try_cmpxchg64_local(volatile u64 *ptr, u64 *oldp,
return __arch_try_cmpxchg64(ptr, oldp, new,);
}
#ifdef CONFIG_X86_CMPXCHG64
#ifdef CONFIG_X86_CX8
#define arch_cmpxchg64 __cmpxchg64
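These cmpxchg8b helpers back the generic cmpxchg64()/try_cmpxchg64() API. A minimal sketch of the usual try_cmpxchg retry loop; the function and its saturation policy are illustrative, not kernel code:

/* Illustrative only: add 'inc' to *v but never let it exceed 'limit'. */
static u64 example_add_capped(u64 *v, u64 inc, u64 limit)
{
	u64 old = READ_ONCE(*v), new;

	do {
		new = min(old + inc, limit);
	} while (!try_cmpxchg64(v, &old, new));	/* 'old' is refreshed on failure */

	return new;
}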

View File

@ -38,7 +38,7 @@ union __u128_halves {
union __u128_halves o = { .full = (_old), }, \
n = { .full = (_new), }; \
\
asm volatile(_lock "cmpxchg16b %[ptr]" \
asm_inline volatile(_lock "cmpxchg16b %[ptr]" \
: [ptr] "+m" (*(_ptr)), \
"+a" (o.low), "+d" (o.high) \
: "b" (n.low), "c" (n.high) \
@ -65,7 +65,7 @@ static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old,
n = { .full = (_new), }; \
bool ret; \
\
asm volatile(_lock "cmpxchg16b %[ptr]" \
asm_inline volatile(_lock "cmpxchg16b %[ptr]" \
CC_SET(e) \
: CC_OUT(e) (ret), \
[ptr] "+m" (*(_ptr)), \

View File

@ -12,7 +12,6 @@
#ifndef CONFIG_SMP
#define cpu_physical_id(cpu) boot_cpu_physical_apicid
#define cpu_acpi_id(cpu) 0
#define safe_smp_processor_id() 0
#endif /* CONFIG_SMP */
#ifdef CONFIG_HOTPLUG_CPU
@ -50,20 +49,6 @@ static inline void split_lock_init(void) {}
static inline void bus_lock_init(void) {}
#endif
#ifdef CONFIG_CPU_SUP_INTEL
u8 get_this_hybrid_cpu_type(void);
u32 get_this_hybrid_cpu_native_id(void);
#else
static inline u8 get_this_hybrid_cpu_type(void)
{
return 0;
}
static inline u32 get_this_hybrid_cpu_native_id(void)
{
return 0;
}
#endif
#ifdef CONFIG_IA32_FEAT_CTL
void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
#else

View File

@ -57,7 +57,7 @@
#define X86_CPU_ID_FLAG_ENTRY_VALID BIT(0)
/**
* X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
* X86_MATCH_CPU - Base macro for CPU matching
* @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
* The name is expanded to X86_VENDOR_@_vendor
* @_family: The family number or X86_FAMILY_ANY
@ -74,46 +74,17 @@
* into another macro at the usage site for good reasons, then please
* start this local macro with X86_MATCH to allow easy grepping.
*/
#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
_steppings, _feature, _data) { \
.vendor = X86_VENDOR_##_vendor, \
.family = _family, \
.model = _model, \
.steppings = _steppings, \
.feature = _feature, \
.flags = X86_CPU_ID_FLAG_ENTRY_VALID, \
.driver_data = (unsigned long) _data \
}
#define X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
_steppings, _feature, _data) { \
#define X86_MATCH_CPU(_vendor, _family, _model, _steppings, _feature, _type, _data) { \
.vendor = _vendor, \
.family = _family, \
.model = _model, \
.steppings = _steppings, \
.feature = _feature, \
.flags = X86_CPU_ID_FLAG_ENTRY_VALID, \
.type = _type, \
.driver_data = (unsigned long) _data \
}
/**
* X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
* @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
* The name is expanded to X86_VENDOR_@_vendor
* @_family: The family number or X86_FAMILY_ANY
* @_model: The model number, model constant or X86_MODEL_ANY
* @_feature: A X86_FEATURE bit or X86_FEATURE_ANY
* @_data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
* set to wildcards.
*/
#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
X86_STEPPING_ANY, feature, data)
/**
* X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
* @vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
@ -123,13 +94,10 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
* set to wildcards.
*/
#define X86_MATCH_VENDOR_FAM_FEATURE(vendor, family, feature, data) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, \
X86_MODEL_ANY, feature, data)
#define X86_MATCH_VENDOR_FAM_FEATURE(vendor, family, feature, data) \
X86_MATCH_CPU(X86_VENDOR_##vendor, family, X86_MODEL_ANY, \
X86_STEPPING_ANY, feature, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VENDOR_FEATURE - Macro for matching vendor and CPU feature
@ -139,12 +107,10 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
* set to wildcards.
*/
#define X86_MATCH_VENDOR_FEATURE(vendor, feature, data) \
X86_MATCH_VENDOR_FAM_FEATURE(vendor, X86_FAMILY_ANY, feature, data)
#define X86_MATCH_VENDOR_FEATURE(vendor, feature, data) \
X86_MATCH_CPU(X86_VENDOR_##vendor, X86_FAMILY_ANY, X86_MODEL_ANY, \
X86_STEPPING_ANY, feature, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_FEATURE - Macro for matching a CPU feature
@ -152,12 +118,10 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
* set to wildcards.
*/
#define X86_MATCH_FEATURE(feature, data) \
X86_MATCH_VENDOR_FEATURE(ANY, feature, data)
#define X86_MATCH_FEATURE(feature, data) \
X86_MATCH_CPU(X86_VENDOR_ANY, X86_FAMILY_ANY, X86_MODEL_ANY, \
X86_STEPPING_ANY, feature, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VENDOR_FAM_MODEL - Match vendor, family and model
@ -168,13 +132,10 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
* set to wildcards.
*/
#define X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, data) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, \
X86_FEATURE_ANY, data)
#define X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, data) \
X86_MATCH_CPU(X86_VENDOR_##vendor, family, model, X86_STEPPING_ANY, \
X86_FEATURE_ANY, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VENDOR_FAM - Match vendor and family
@ -184,12 +145,10 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is casted to unsigned long internally.
*
* All other missing arguments to X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
* set of wildcards.
*/
#define X86_MATCH_VENDOR_FAM(vendor, family, data) \
X86_MATCH_VENDOR_FAM_MODEL(vendor, family, X86_MODEL_ANY, data)
#define X86_MATCH_VENDOR_FAM(vendor, family, data) \
X86_MATCH_CPU(X86_VENDOR_##vendor, family, X86_MODEL_ANY, \
X86_STEPPING_ANY, X86_FEATURE_ANY, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VFM - Match encoded vendor/family/model
@ -197,34 +156,26 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is cast to unsigned long internally.
*
* Stepping and feature are set to wildcards
*/
#define X86_MATCH_VFM(vfm, data) \
X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE( \
VFM_VENDOR(vfm), \
VFM_FAMILY(vfm), \
VFM_MODEL(vfm), \
X86_STEPPING_ANY, X86_FEATURE_ANY, data)
#define X86_MATCH_VFM(vfm, data) \
X86_MATCH_CPU(VFM_VENDOR(vfm), VFM_FAMILY(vfm), VFM_MODEL(vfm), \
X86_STEPPING_ANY, X86_FEATURE_ANY, X86_CPU_TYPE_ANY, data)
#define __X86_STEPPINGS(mins, maxs) GENMASK(maxs, mins)
/**
* X86_MATCH_VFM_STEPPINGS - Match encoded vendor/family/model/stepping
* X86_MATCH_VFM_STEPS - Match encoded vendor/family/model and steppings
* range.
* @vfm: Encoded 8-bits each for vendor, family, model
* @steppings: Bitmask of steppings to match
* @min_step: Lowest stepping number to match
* @max_step: Highest stepping number to match
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is cast to unsigned long internally.
*
* feature is set to wildcard
*/
#define X86_MATCH_VFM_STEPS(vfm, min_step, max_step, data) \
X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE( \
VFM_VENDOR(vfm), \
VFM_FAMILY(vfm), \
VFM_MODEL(vfm), \
__X86_STEPPINGS(min_step, max_step), \
X86_FEATURE_ANY, data)
#define X86_MATCH_VFM_STEPS(vfm, min_step, max_step, data) \
X86_MATCH_CPU(VFM_VENDOR(vfm), VFM_FAMILY(vfm), VFM_MODEL(vfm), \
__X86_STEPPINGS(min_step, max_step), X86_FEATURE_ANY, \
X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VFM_FEATURE - Match encoded vendor/family/model/feature
@ -233,15 +184,22 @@
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is cast to unsigned long internally.
*
* Steppings is set to wildcard
*/
#define X86_MATCH_VFM_FEATURE(vfm, feature, data) \
X86_MATCH_VENDORID_FAM_MODEL_STEPPINGS_FEATURE( \
VFM_VENDOR(vfm), \
VFM_FAMILY(vfm), \
VFM_MODEL(vfm), \
X86_STEPPING_ANY, feature, data)
#define X86_MATCH_VFM_FEATURE(vfm, feature, data) \
X86_MATCH_CPU(VFM_VENDOR(vfm), VFM_FAMILY(vfm), VFM_MODEL(vfm), \
X86_STEPPING_ANY, feature, X86_CPU_TYPE_ANY, data)
/**
* X86_MATCH_VFM_CPU_TYPE - Match encoded vendor/family/model/type
* @vfm: Encoded 8-bits each for vendor, family, model
* @type: CPU type e.g. P-core, E-core
* @data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is cast to unsigned long internally.
*/
#define X86_MATCH_VFM_CPU_TYPE(vfm, type, data) \
X86_MATCH_CPU(VFM_VENDOR(vfm), VFM_FAMILY(vfm), VFM_MODEL(vfm), \
X86_STEPPING_ANY, X86_FEATURE_ANY, type, data)
extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
extern bool x86_match_min_microcode_rev(const struct x86_cpu_id *table);
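The consolidated X86_MATCH_CPU() core keeps the caller-facing pattern unchanged: drivers still build a NULL-terminated struct x86_cpu_id table and probe it with x86_match_cpu(). A hedged sketch; the table contents, stepping range and names are examples only:

static const struct x86_cpu_id example_ids[] __initconst = {
	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, NULL),
	X86_MATCH_VFM_STEPS(INTEL_RAPTORLAKE, 0x2, 0xf, NULL),
	X86_MATCH_FEATURE(X86_FEATURE_INVLPGB, NULL),
	{}
};

static int __init example_probe(void)
{
	const struct x86_cpu_id *m = x86_match_cpu(example_ids);

	if (!m)
		return -ENODEV;
	/* m->driver_data carries the per-entry cookie, when one is supplied. */
	return 0;
}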

View File

@ -4,11 +4,12 @@
#include <asm/processor.h>
#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
#if defined(__KERNEL__) && !defined(__ASSEMBLER__)
#include <asm/asm.h>
#include <linux/bitops.h>
#include <asm/alternative.h>
#include <asm/cpufeaturemasks.h>
enum cpuid_leafs
{
@ -37,92 +38,19 @@ enum cpuid_leafs
NR_CPUID_WORDS,
};
#define X86_CAP_FMT_NUM "%d:%d"
#define x86_cap_flag_num(flag) ((flag) >> 5), ((flag) & 31)
extern const char * const x86_cap_flags[NCAPINTS*32];
extern const char * const x86_power_flags[32];
#define X86_CAP_FMT "%s"
#define x86_cap_flag(flag) x86_cap_flags[flag]
/*
* In order to save room, we index into this array by doing
* X86_BUG_<name> - NCAPINTS*32.
*/
extern const char * const x86_bug_flags[NBUGINTS*32];
#define x86_bug_flag(flag) x86_bug_flags[flag]
#define test_cpu_cap(c, bit) \
arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
/*
* There are 32 bits/features in each mask word. The high bits
* (selected with (bit>>5) give us the word number and the low 5
* bits give us the bit/feature number inside the word.
* (1UL<<((bit)&31) gives us a mask for the feature_bit so we can
* see if it is set in the mask word.
*/
#define CHECK_BIT_IN_MASK_WORD(maskname, word, bit) \
(((bit)>>5)==(word) && (1UL<<((bit)&31) & maskname##word ))
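A worked instance of the decomposition described in the comment above, using an arbitrary feature bit purely as an example:

/*
 * Illustrative only: for feature bit 147,
 *   word = 147 >> 5        = 4
 *   mask = 1UL << (147 & 31) = 1UL << 19
 * so CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 4, 147) tests bit 19 of
 * REQUIRED_MASK4 and evaluates to 0 for every other word index.
 */
static inline unsigned int  feature_word(unsigned int bit) { return bit >> 5; }
static inline unsigned long feature_mask(unsigned int bit) { return 1UL << (bit & 31); }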
/*
* {REQUIRED,DISABLED}_MASK_CHECK below may seem duplicated with the
* following BUILD_BUG_ON_ZERO() check but when NCAPINTS gets changed, all
* header macros which use NCAPINTS need to be changed. The duplicated macro
* use causes the compiler to issue errors for all headers so that all usage
* sites can be corrected.
*/
#define REQUIRED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 0, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 1, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 2, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 3, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 4, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 5, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 6, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 7, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 8, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 9, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 10, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 11, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 12, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 13, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 14, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 15, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 21, feature_bit) || \
REQUIRED_MASK_CHECK || \
BUILD_BUG_ON_ZERO(NCAPINTS != 22))
#define DISABLED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 1, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 2, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 3, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 4, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 5, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 6, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 7, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 8, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 9, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 10, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 11, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 12, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 13, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 14, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 15, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 21, feature_bit) || \
DISABLED_MASK_CHECK || \
BUILD_BUG_ON_ZERO(NCAPINTS != 22))
#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
test_cpu_cap(c, bit))
@ -149,6 +77,7 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
extern void setup_clear_cpu_cap(unsigned int bit);
extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit);
void check_cpufeature_deps(struct cpuinfo_x86 *c);
#define setup_force_cpu_cap(bit) do { \
\
@ -208,5 +137,5 @@ t_no:
#define CPU_FEATURE_TYPEVAL boot_cpu_data.x86_vendor, boot_cpu_data.x86, \
boot_cpu_data.x86_model
#endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */
#endif /* defined(__KERNEL__) && !defined(__ASSEMBLER__) */
#endif /* _ASM_X86_CPUFEATURE_H */

View File

@ -2,14 +2,6 @@
#ifndef _ASM_X86_CPUFEATURES_H
#define _ASM_X86_CPUFEATURES_H
#ifndef _ASM_X86_REQUIRED_FEATURES_H
#include <asm/required-features.h>
#endif
#ifndef _ASM_X86_DISABLED_FEATURES_H
#include <asm/disabled-features.h>
#endif
/*
* Defines x86 CPU feature bits
*/
@ -338,6 +330,7 @@
#define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */
#define X86_FEATURE_IRPERF (13*32+ 1) /* "irperf" Instructions Retired Count */
#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* "xsaveerptr" Always save/restore FP error pointers */
#define X86_FEATURE_INVLPGB (13*32+ 3) /* INVLPGB and TLBSYNC instructions supported */
#define X86_FEATURE_RDPRU (13*32+ 4) /* "rdpru" Read processor register at user level */
#define X86_FEATURE_WBNOINVD (13*32+ 9) /* "wbnoinvd" WBNOINVD instruction */
#define X86_FEATURE_AMD_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */

View File

@ -1,222 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* CPUID-related helpers/definitions
*/
#ifndef _ASM_X86_CPUID_H
#define _ASM_X86_CPUID_H
#include <linux/types.h>
#include <asm/string.h>
struct cpuid_regs {
u32 eax, ebx, ecx, edx;
};
enum cpuid_regs_idx {
CPUID_EAX = 0,
CPUID_EBX,
CPUID_ECX,
CPUID_EDX,
};
#define CPUID_LEAF_MWAIT 0x5
#define CPUID_LEAF_DCA 0x9
#define CPUID_LEAF_XSTATE 0x0d
#define CPUID_LEAF_TSC 0x15
#define CPUID_LEAF_FREQ 0x16
#define CPUID_LEAF_TILE 0x1d
#ifdef CONFIG_X86_32
bool have_cpuid_p(void);
#else
static inline bool have_cpuid_p(void)
{
return true;
}
#endif
static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx)
{
/* ecx is often an input as well as an output. */
asm volatile("cpuid"
: "=a" (*eax),
"=b" (*ebx),
"=c" (*ecx),
"=d" (*edx)
: "0" (*eax), "2" (*ecx)
: "memory");
}
#define native_cpuid_reg(reg) \
static inline unsigned int native_cpuid_##reg(unsigned int op) \
{ \
unsigned int eax = op, ebx, ecx = 0, edx; \
\
native_cpuid(&eax, &ebx, &ecx, &edx); \
\
return reg; \
}
/*
* Native CPUID functions returning a single datum.
*/
native_cpuid_reg(eax)
native_cpuid_reg(ebx)
native_cpuid_reg(ecx)
native_cpuid_reg(edx)
#ifdef CONFIG_PARAVIRT_XXL
#include <asm/paravirt.h>
#else
#define __cpuid native_cpuid
#endif
/*
* Generic CPUID function
* clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx
* resulting in stale register contents being returned.
*/
static inline void cpuid(unsigned int op,
unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx)
{
*eax = op;
*ecx = 0;
__cpuid(eax, ebx, ecx, edx);
}
/* Some CPUID calls want 'count' to be placed in ecx */
static inline void cpuid_count(unsigned int op, int count,
unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx)
{
*eax = op;
*ecx = count;
__cpuid(eax, ebx, ecx, edx);
}
/*
* CPUID functions returning a single datum
*/
static inline unsigned int cpuid_eax(unsigned int op)
{
unsigned int eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return eax;
}
static inline unsigned int cpuid_ebx(unsigned int op)
{
unsigned int eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return ebx;
}
static inline unsigned int cpuid_ecx(unsigned int op)
{
unsigned int eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return ecx;
}
static inline unsigned int cpuid_edx(unsigned int op)
{
unsigned int eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return edx;
}
static inline void __cpuid_read(unsigned int leaf, unsigned int subleaf, u32 *regs)
{
regs[CPUID_EAX] = leaf;
regs[CPUID_ECX] = subleaf;
__cpuid(regs + CPUID_EAX, regs + CPUID_EBX, regs + CPUID_ECX, regs + CPUID_EDX);
}
#define cpuid_subleaf(leaf, subleaf, regs) { \
static_assert(sizeof(*(regs)) == 16); \
__cpuid_read(leaf, subleaf, (u32 *)(regs)); \
}
#define cpuid_leaf(leaf, regs) { \
static_assert(sizeof(*(regs)) == 16); \
__cpuid_read(leaf, 0, (u32 *)(regs)); \
}
static inline void __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
enum cpuid_regs_idx regidx, u32 *reg)
{
u32 regs[4];
__cpuid_read(leaf, subleaf, regs);
*reg = regs[regidx];
}
#define cpuid_subleaf_reg(leaf, subleaf, regidx, reg) { \
static_assert(sizeof(*(reg)) == 4); \
__cpuid_read_reg(leaf, subleaf, regidx, (u32 *)(reg)); \
}
#define cpuid_leaf_reg(leaf, regidx, reg) { \
static_assert(sizeof(*(reg)) == 4); \
__cpuid_read_reg(leaf, 0, regidx, (u32 *)(reg)); \
}
static __always_inline bool cpuid_function_is_indexed(u32 function)
{
switch (function) {
case 4:
case 7:
case 0xb:
case 0xd:
case 0xf:
case 0x10:
case 0x12:
case 0x14:
case 0x17:
case 0x18:
case 0x1d:
case 0x1e:
case 0x1f:
case 0x24:
case 0x8000001d:
return true;
}
return false;
}
#define for_each_possible_hypervisor_cpuid_base(function) \
for (function = 0x40000000; function < 0x40010000; function += 0x100)
static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
{
uint32_t base, eax, signature[3];
for_each_possible_hypervisor_cpuid_base(base) {
cpuid(base, &eax, &signature[0], &signature[1], &signature[2]);
/*
* This must not compile to "call memcmp" because it's called
* from PVH early boot code before instrumentation is set up
* and memcmp() itself may be instrumented.
*/
if (!__builtin_memcmp(sig, signature, 12) &&
(leaves == 0 || ((eax - base) >= leaves)))
return base;
}
return 0;
}
#include <asm/cpuid/api.h>
#endif /* _ASM_X86_CPUID_H */

View File

@ -0,0 +1,210 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_CPUID_API_H
#define _ASM_X86_CPUID_API_H
#include <asm/cpuid/types.h>
#include <linux/build_bug.h>
#include <linux/types.h>
#include <asm/string.h>
/*
* Raw CPUID accessors:
*/
#ifdef CONFIG_X86_32
bool have_cpuid_p(void);
#else
static inline bool have_cpuid_p(void)
{
return true;
}
#endif
static inline void native_cpuid(u32 *eax, u32 *ebx,
u32 *ecx, u32 *edx)
{
/* ecx is often an input as well as an output. */
asm volatile("cpuid"
: "=a" (*eax),
"=b" (*ebx),
"=c" (*ecx),
"=d" (*edx)
: "0" (*eax), "2" (*ecx)
: "memory");
}
#define NATIVE_CPUID_REG(reg) \
static inline u32 native_cpuid_##reg(u32 op) \
{ \
u32 eax = op, ebx, ecx = 0, edx; \
\
native_cpuid(&eax, &ebx, &ecx, &edx); \
\
return reg; \
}
/*
* Native CPUID functions returning a single datum:
*/
NATIVE_CPUID_REG(eax)
NATIVE_CPUID_REG(ebx)
NATIVE_CPUID_REG(ecx)
NATIVE_CPUID_REG(edx)
#ifdef CONFIG_PARAVIRT_XXL
# include <asm/paravirt.h>
#else
# define __cpuid native_cpuid
#endif
/*
* Generic CPUID function
*
* Clear ECX since some CPUs (Cyrix MII) do not set or clear ECX
* resulting in stale register contents being returned.
*/
static inline void cpuid(u32 op,
u32 *eax, u32 *ebx,
u32 *ecx, u32 *edx)
{
*eax = op;
*ecx = 0;
__cpuid(eax, ebx, ecx, edx);
}
/* Some CPUID calls want 'count' to be placed in ECX */
static inline void cpuid_count(u32 op, int count,
u32 *eax, u32 *ebx,
u32 *ecx, u32 *edx)
{
*eax = op;
*ecx = count;
__cpuid(eax, ebx, ecx, edx);
}
/*
* CPUID functions returning a single datum:
*/
static inline u32 cpuid_eax(u32 op)
{
u32 eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return eax;
}
static inline u32 cpuid_ebx(u32 op)
{
u32 eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return ebx;
}
static inline u32 cpuid_ecx(u32 op)
{
u32 eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return ecx;
}
static inline u32 cpuid_edx(u32 op)
{
u32 eax, ebx, ecx, edx;
cpuid(op, &eax, &ebx, &ecx, &edx);
return edx;
}
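The single-datum helpers compose naturally with the generic cpuid()/cpuid_count() accessors. An illustrative reader (the function name is made up) that fetches the leaf-0 vendor string, whose 12 bytes come back in EBX, EDX, ECX order:

static void example_read_vendor(char vendor[13])
{
	u32 eax, ebx, ecx, edx;

	cpuid(0, &eax, &ebx, &ecx, &edx);
	memcpy(vendor,     &ebx, 4);
	memcpy(vendor + 4, &edx, 4);
	memcpy(vendor + 8, &ecx, 4);
	vendor[12] = '\0';		/* e.g. "GenuineIntel" or "AuthenticAMD" */
}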
static inline void __cpuid_read(u32 leaf, u32 subleaf, u32 *regs)
{
regs[CPUID_EAX] = leaf;
regs[CPUID_ECX] = subleaf;
__cpuid(regs + CPUID_EAX, regs + CPUID_EBX, regs + CPUID_ECX, regs + CPUID_EDX);
}
#define cpuid_subleaf(leaf, subleaf, regs) { \
static_assert(sizeof(*(regs)) == 16); \
__cpuid_read(leaf, subleaf, (u32 *)(regs)); \
}
#define cpuid_leaf(leaf, regs) { \
static_assert(sizeof(*(regs)) == 16); \
__cpuid_read(leaf, 0, (u32 *)(regs)); \
}
static inline void __cpuid_read_reg(u32 leaf, u32 subleaf,
enum cpuid_regs_idx regidx, u32 *reg)
{
u32 regs[4];
__cpuid_read(leaf, subleaf, regs);
*reg = regs[regidx];
}
#define cpuid_subleaf_reg(leaf, subleaf, regidx, reg) { \
static_assert(sizeof(*(reg)) == 4); \
__cpuid_read_reg(leaf, subleaf, regidx, (u32 *)(reg)); \
}
#define cpuid_leaf_reg(leaf, regidx, reg) { \
static_assert(sizeof(*(reg)) == 4); \
__cpuid_read_reg(leaf, 0, regidx, (u32 *)(reg)); \
}
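The static_assert()s above pin down the contract: cpuid_leaf()/cpuid_subleaf() take a 16-byte, four-register object such as struct cpuid_regs, while the *_reg() variants take a plain u32. A short sketch of both; the leaf choices are just examples:

static void example_structured_read(void)
{
	struct cpuid_regs regs;
	u32 xstate_eax;

	cpuid_leaf(CPUID_LEAF_MWAIT, &regs);	/* fills regs.eax .. regs.edx */
	cpuid_subleaf_reg(CPUID_LEAF_XSTATE, 1, CPUID_EAX, &xstate_eax);
}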
static __always_inline bool cpuid_function_is_indexed(u32 function)
{
switch (function) {
case 4:
case 7:
case 0xb:
case 0xd:
case 0xf:
case 0x10:
case 0x12:
case 0x14:
case 0x17:
case 0x18:
case 0x1d:
case 0x1e:
case 0x1f:
case 0x24:
case 0x8000001d:
return true;
}
return false;
}
#define for_each_possible_hypervisor_cpuid_base(function) \
for (function = 0x40000000; function < 0x40010000; function += 0x100)
static inline u32 hypervisor_cpuid_base(const char *sig, u32 leaves)
{
u32 base, eax, signature[3];
for_each_possible_hypervisor_cpuid_base(base) {
cpuid(base, &eax, &signature[0], &signature[1], &signature[2]);
/*
* This must not compile to "call memcmp" because it's called
* from PVH early boot code before instrumentation is set up
* and memcmp() itself may be instrumented.
*/
if (!__builtin_memcmp(sig, signature, 12) &&
(leaves == 0 || ((eax - base) >= leaves)))
return base;
}
return 0;
}
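As a usage sketch, modeled on how a KVM guest probes for its hypervisor: the well-known 12-byte signature is passed as @sig, and 0 for @leaves skips the minimum-leaf check.

/* Illustrative: returns the KVM CPUID base leaf, or 0 if not running on KVM. */
static u32 example_kvm_cpuid_base(void)
{
	return hypervisor_cpuid_base("KVMKVMKVM\0\0\0", 0);
}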
#endif /* _ASM_X86_CPUID_API_H */

View File

@ -0,0 +1,32 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_CPUID_TYPES_H
#define _ASM_X86_CPUID_TYPES_H
#include <linux/types.h>
/*
* Types for raw CPUID access:
*/
struct cpuid_regs {
u32 eax;
u32 ebx;
u32 ecx;
u32 edx;
};
enum cpuid_regs_idx {
CPUID_EAX = 0,
CPUID_EBX,
CPUID_ECX,
CPUID_EDX,
};
#define CPUID_LEAF_MWAIT 0x05
#define CPUID_LEAF_DCA 0x09
#define CPUID_LEAF_XSTATE 0x0d
#define CPUID_LEAF_TSC 0x15
#define CPUID_LEAF_FREQ 0x16
#define CPUID_LEAF_TILE 0x1d
#endif /* _ASM_X86_CPUID_TYPES_H */

View File

@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_CPUMASK_H
#define _ASM_X86_CPUMASK_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/cpumask.h>
extern void setup_cpu_local_masks(void);
@ -34,5 +34,5 @@ static __always_inline void arch_cpumask_clear_cpu(int cpu, struct cpumask *dstp
#define arch_cpu_is_offline(cpu) unlikely(!arch_cpu_online(cpu))
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_CPUMASK_H */

View File

@ -5,52 +5,28 @@
#include <linux/build_bug.h>
#include <linux/compiler.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/cache.h>
#include <asm/percpu.h>
struct task_struct;
struct pcpu_hot {
union {
struct {
struct task_struct *current_task;
int preempt_count;
int cpu_number;
#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
u64 call_depth;
#endif
unsigned long top_of_stack;
void *hardirq_stack_ptr;
u16 softirq_pending;
#ifdef CONFIG_X86_64
bool hardirq_stack_inuse;
#else
void *softirq_stack_ptr;
#endif
};
u8 pad[64];
};
};
static_assert(sizeof(struct pcpu_hot) == 64);
DECLARE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot);
/* const-qualified alias to pcpu_hot, aliased by linker. */
DECLARE_PER_CPU_ALIGNED(const struct pcpu_hot __percpu_seg_override,
const_pcpu_hot);
DECLARE_PER_CPU_CACHE_HOT(struct task_struct *, current_task);
/* const-qualified alias provided by the linker. */
DECLARE_PER_CPU_CACHE_HOT(struct task_struct * const __percpu_seg_override,
const_current_task);
static __always_inline struct task_struct *get_current(void)
{
if (IS_ENABLED(CONFIG_USE_X86_SEG_SUPPORT))
return this_cpu_read_const(const_pcpu_hot.current_task);
return this_cpu_read_const(const_current_task);
return this_cpu_read_stable(pcpu_hot.current_task);
return this_cpu_read_stable(current_task);
}
#define current get_current()
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_CURRENT_H */

View File

@ -46,7 +46,6 @@ struct gdt_page {
} __attribute__((aligned(PAGE_SIZE)));
DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
DECLARE_INIT_PER_CPU(gdt_page);
/* Provide the original GDT */
static inline struct desc_struct *get_cpu_gdt_rw(unsigned int cpu)

View File

@ -58,7 +58,7 @@
#define DESC_USER (_DESC_DPL(3))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
@ -166,7 +166,7 @@ struct desc_ptr {
unsigned long address;
} __attribute__((packed)) ;
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
/* Boot IDT definitions */
#define BOOT_IDT_ENTRIES 32

View File

@ -1,161 +0,0 @@
#ifndef _ASM_X86_DISABLED_FEATURES_H
#define _ASM_X86_DISABLED_FEATURES_H
/* These features, although they might be available in a CPU
* will not be used because the compile options to support
* them are not present.
*
* This code allows them to be checked and disabled at
* compile time without an explicit #ifdef. Use
* cpu_feature_enabled().
*/
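The caller-facing interface is unchanged by this removal: code keeps using cpu_feature_enabled(), and the compile-time masks now come from the generated <asm/cpufeaturemasks.h> instead of this hand-maintained header. A minimal sketch:

/* Illustrative only: folds to false at compile time when CONFIG_X86_5LEVEL is off. */
static bool example_five_level_paging(void)
{
	return cpu_feature_enabled(X86_FEATURE_LA57);
}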
#ifdef CONFIG_X86_UMIP
# define DISABLE_UMIP 0
#else
# define DISABLE_UMIP (1<<(X86_FEATURE_UMIP & 31))
#endif
#ifdef CONFIG_X86_64
# define DISABLE_VME (1<<(X86_FEATURE_VME & 31))
# define DISABLE_K6_MTRR (1<<(X86_FEATURE_K6_MTRR & 31))
# define DISABLE_CYRIX_ARR (1<<(X86_FEATURE_CYRIX_ARR & 31))
# define DISABLE_CENTAUR_MCR (1<<(X86_FEATURE_CENTAUR_MCR & 31))
# define DISABLE_PCID 0
#else
# define DISABLE_VME 0
# define DISABLE_K6_MTRR 0
# define DISABLE_CYRIX_ARR 0
# define DISABLE_CENTAUR_MCR 0
# define DISABLE_PCID (1<<(X86_FEATURE_PCID & 31))
#endif /* CONFIG_X86_64 */
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
# define DISABLE_PKU 0
# define DISABLE_OSPKE 0
#else
# define DISABLE_PKU (1<<(X86_FEATURE_PKU & 31))
# define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31))
#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
#ifdef CONFIG_X86_5LEVEL
# define DISABLE_LA57 0
#else
# define DISABLE_LA57 (1<<(X86_FEATURE_LA57 & 31))
#endif
#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
# define DISABLE_PTI 0
#else
# define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31))
#endif
#ifdef CONFIG_MITIGATION_RETPOLINE
# define DISABLE_RETPOLINE 0
#else
# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \
(1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
#endif
#ifdef CONFIG_MITIGATION_RETHUNK
# define DISABLE_RETHUNK 0
#else
# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
#endif
#ifdef CONFIG_MITIGATION_UNRET_ENTRY
# define DISABLE_UNRET 0
#else
# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))
#endif
#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
# define DISABLE_CALL_DEPTH_TRACKING 0
#else
# define DISABLE_CALL_DEPTH_TRACKING (1 << (X86_FEATURE_CALL_DEPTH & 31))
#endif
#ifdef CONFIG_ADDRESS_MASKING
# define DISABLE_LAM 0
#else
# define DISABLE_LAM (1 << (X86_FEATURE_LAM & 31))
#endif
#ifdef CONFIG_INTEL_IOMMU_SVM
# define DISABLE_ENQCMD 0
#else
# define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
#endif
#ifdef CONFIG_X86_SGX
# define DISABLE_SGX 0
#else
# define DISABLE_SGX (1 << (X86_FEATURE_SGX & 31))
#endif
#ifdef CONFIG_XEN_PV
# define DISABLE_XENPV 0
#else
# define DISABLE_XENPV (1 << (X86_FEATURE_XENPV & 31))
#endif
#ifdef CONFIG_INTEL_TDX_GUEST
# define DISABLE_TDX_GUEST 0
#else
# define DISABLE_TDX_GUEST (1 << (X86_FEATURE_TDX_GUEST & 31))
#endif
#ifdef CONFIG_X86_USER_SHADOW_STACK
#define DISABLE_USER_SHSTK 0
#else
#define DISABLE_USER_SHSTK (1 << (X86_FEATURE_USER_SHSTK & 31))
#endif
#ifdef CONFIG_X86_KERNEL_IBT
#define DISABLE_IBT 0
#else
#define DISABLE_IBT (1 << (X86_FEATURE_IBT & 31))
#endif
#ifdef CONFIG_X86_FRED
# define DISABLE_FRED 0
#else
# define DISABLE_FRED (1 << (X86_FEATURE_FRED & 31))
#endif
#ifdef CONFIG_KVM_AMD_SEV
#define DISABLE_SEV_SNP 0
#else
#define DISABLE_SEV_SNP (1 << (X86_FEATURE_SEV_SNP & 31))
#endif
/*
* Make sure to add features to the correct mask
*/
#define DISABLED_MASK0 (DISABLE_VME)
#define DISABLED_MASK1 0
#define DISABLED_MASK2 0
#define DISABLED_MASK3 (DISABLE_CYRIX_ARR|DISABLE_CENTAUR_MCR|DISABLE_K6_MTRR)
#define DISABLED_MASK4 (DISABLE_PCID)
#define DISABLED_MASK5 0
#define DISABLED_MASK6 0
#define DISABLED_MASK7 (DISABLE_PTI)
#define DISABLED_MASK8 (DISABLE_XENPV|DISABLE_TDX_GUEST)
#define DISABLED_MASK9 (DISABLE_SGX)
#define DISABLED_MASK10 0
#define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET| \
DISABLE_CALL_DEPTH_TRACKING|DISABLE_USER_SHSTK)
#define DISABLED_MASK12 (DISABLE_FRED|DISABLE_LAM)
#define DISABLED_MASK13 0
#define DISABLED_MASK14 0
#define DISABLED_MASK15 0
#define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
DISABLE_ENQCMD)
#define DISABLED_MASK17 0
#define DISABLED_MASK18 (DISABLE_IBT)
#define DISABLED_MASK19 (DISABLE_SEV_SNP)
#define DISABLED_MASK20 0
#define DISABLED_MASK21 0
#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)
#endif /* _ASM_X86_DISABLED_FEATURES_H */

View File

@ -2,7 +2,7 @@
#ifndef _ASM_X86_DWARF2_H
#define _ASM_X86_DWARF2_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#warning "asm/dwarf2.h should be only included in pure assembly files"
#endif

View File

@ -54,8 +54,9 @@ typedef struct user_i387_struct elf_fpregset_t;
#define R_X86_64_GLOB_DAT 6 /* Create GOT entry */
#define R_X86_64_JUMP_SLOT 7 /* Create PLT entry */
#define R_X86_64_RELATIVE 8 /* Adjust by program base */
#define R_X86_64_GOTPCREL 9 /* 32 bit signed pc relative
offset to GOT */
#define R_X86_64_GOTPCREL 9 /* 32 bit signed pc relative offset to GOT */
#define R_X86_64_GOTPCRELX 41
#define R_X86_64_REX_GOTPCRELX 42
#define R_X86_64_32 10 /* Direct 32 bit zero extended */
#define R_X86_64_32S 11 /* Direct 32 bit sign extended */
#define R_X86_64_16 12 /* Direct 16 bit zero extended */

View File

@ -31,7 +31,7 @@
/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
#define FIXMAP_PMD_TOP 507
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/kernel.h>
#include <asm/apicdef.h>
#include <asm/page.h>
@ -196,5 +196,5 @@ void __init *early_memremap_decrypted_wp(resource_size_t phys_addr,
void __early_set_fixmap(enum fixed_addresses idx,
phys_addr_t phys, pgprot_t flags);
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* _ASM_X86_FIXMAP_H */

View File

@ -11,7 +11,7 @@
#ifdef CONFIG_FRAME_POINTER
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
.macro FRAME_BEGIN
push %_ASM_BP
@ -51,7 +51,7 @@
.endm
#endif /* CONFIG_X86_64 */
#else /* !__ASSEMBLY__ */
#else /* !__ASSEMBLER__ */
#define FRAME_BEGIN \
"push %" _ASM_BP "\n" \
@ -82,18 +82,18 @@ static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
#endif /* CONFIG_X86_64 */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#define FRAME_OFFSET __ASM_SEL(4, 8)
#else /* !CONFIG_FRAME_POINTER */
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
.macro ENCODE_FRAME_POINTER ptregs_offset=0
.endm
#else /* !__ASSEMBLY */
#else /* !__ASSEMBLER__ */
#define ENCODE_FRAME_POINTER

View File

@ -32,7 +32,7 @@
#define FRED_CONFIG_INT_STKLVL(l) (_AT(unsigned long, l) << 9)
#define FRED_CONFIG_ENTRYPOINT(p) _AT(unsigned long, (p))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_X86_FRED
#include <linux/kernel.h>
@ -113,6 +113,6 @@ static inline void fred_entry_from_kvm(unsigned int type, unsigned int vector) {
static inline void fred_sync_rsp0(unsigned long rsp0) { }
static inline void fred_update_rsp0(void) { }
#endif /* CONFIG_X86_FRED */
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* ASM_X86_FRED_H */

View File

@ -2,7 +2,7 @@
#ifndef _ASM_FSGSBASE_H
#define _ASM_FSGSBASE_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_X86_64
@ -80,6 +80,6 @@ extern unsigned long x86_fsgsbase_read_task(struct task_struct *task,
#endif /* CONFIG_X86_64 */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_FSGSBASE_H */

View File

@ -22,7 +22,7 @@
#define ARCH_SUPPORTS_FTRACE_OPS 1
#endif
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
extern void __fentry__(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
@ -36,21 +36,9 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
static inline unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
{
#ifdef CONFIG_X86_KERNEL_IBT
u32 instr;
/* We want to be extra safe in case entry ip is on the page edge,
* but otherwise we need to avoid get_kernel_nofault()'s overhead.
*/
if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
return fentry_ip;
} else {
instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
}
if (is_endbr(instr))
if (is_endbr((void*)(fentry_ip - ENDBR_INSN_SIZE)))
fentry_ip -= ENDBR_INSN_SIZE;
#endif
return fentry_ip;
}
#define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip)
@ -118,11 +106,11 @@ struct dyn_arch_ftrace {
};
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* CONFIG_FUNCTION_TRACER */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
unsigned long frame_pointer);
@ -166,6 +154,6 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
}
#endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_IA32_EMULATION */
#endif /* !COMPILE_OFFSETS */
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* _ASM_X86_FTRACE_H */

View File

@ -3,7 +3,6 @@
#define _ASM_X86_HARDIRQ_H
#include <linux/threads.h>
#include <asm/current.h>
typedef struct {
#if IS_ENABLED(CONFIG_KVM_INTEL)
@ -66,7 +65,8 @@ extern u64 arch_irq_stat_cpu(unsigned int cpu);
extern u64 arch_irq_stat(void);
#define arch_irq_stat arch_irq_stat
#define local_softirq_pending_ref pcpu_hot.softirq_pending
DECLARE_PER_CPU_CACHE_HOT(u16, __softirq_pending);
#define local_softirq_pending_ref __softirq_pending
#if IS_ENABLED(CONFIG_KVM_INTEL)
/*

View File

@ -16,7 +16,7 @@
#include <asm/irq_vectors.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/percpu.h>
#include <linux/profile.h>
@ -128,6 +128,6 @@ extern char spurious_entries_start[];
typedef struct irq_desc* vector_irq_t[NR_VECTORS];
DECLARE_PER_CPU(vector_irq_t, vector_irq);
#endif /* !ASSEMBLY_ */
#endif /* !__ASSEMBLER__ */
#endif /* _ASM_X86_HW_IRQ_H */

View File

@ -21,7 +21,7 @@
#define HAS_KERNEL_IBT 1
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_X86_64
#define ASM_ENDBR "endbr64\n\t"
@ -41,7 +41,7 @@
_ASM_PTR fname "\n\t" \
".popsection\n\t"
static inline __attribute_const__ u32 gen_endbr(void)
static __always_inline __attribute_const__ u32 gen_endbr(void)
{
u32 endbr;
@ -56,7 +56,7 @@ static inline __attribute_const__ u32 gen_endbr(void)
return endbr;
}
static inline __attribute_const__ u32 gen_endbr_poison(void)
static __always_inline __attribute_const__ u32 gen_endbr_poison(void)
{
/*
* 4 byte NOP that isn't NOP4 (in fact it is OSP NOP3), such that it
@ -65,19 +65,24 @@ static inline __attribute_const__ u32 gen_endbr_poison(void)
return 0x001f0f66; /* osp nopl (%rax) */
}
static inline bool is_endbr(u32 val)
static inline bool __is_endbr(u32 val)
{
if (val == gen_endbr_poison())
return true;
/* See cfi_fineibt_bhi_preamble() */
if (IS_ENABLED(CONFIG_FINEIBT_BHI) && val == 0x001f0ff5)
return true;
val &= ~0x01000000U; /* ENDBR32 -> ENDBR64 */
return val == gen_endbr();
}
extern __noendbr bool is_endbr(u32 *val);
extern __noendbr u64 ibt_save(bool disable);
extern __noendbr void ibt_restore(u64 save);
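is_endbr() now takes a pointer to the instruction bytes and performs the 4-byte read itself, rather than being handed a pre-loaded u32. A hedged caller sketch; 'addr' is assumed to be a kernel text address:

static bool example_starts_with_endbr(unsigned long addr)
{
	return is_endbr((u32 *)addr);
}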
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#ifdef CONFIG_X86_64
#define ENDBR endbr64
@ -85,29 +90,29 @@ extern __noendbr void ibt_restore(u64 save);
#define ENDBR endbr32
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#else /* !IBT */
#define HAS_KERNEL_IBT 0
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#define ASM_ENDBR
#define IBT_NOSEAL(name)
#define __noendbr
static inline bool is_endbr(u32 val) { return false; }
static inline bool is_endbr(u32 *val) { return false; }
static inline u64 ibt_save(bool disable) { return 0; }
static inline void ibt_restore(u64 save) { }
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#define ENDBR
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* CONFIG_X86_KERNEL_IBT */

View File

@ -7,7 +7,7 @@
#define IDT_ALIGN (8 * (1 + HAS_KERNEL_IBT))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/entry-common.h>
#include <linux/hardirq.h>
@ -474,7 +474,7 @@ static inline void fred_install_sysvec(unsigned int vector, const idtentry_t fun
idt_install_sysvec(vector, asm_##function); \
}
#else /* !__ASSEMBLY__ */
#else /* !__ASSEMBLER__ */
/*
* The ASM variants for DECLARE_IDTENTRY*() which emit the ASM entry stubs.
@ -579,7 +579,7 @@ SYM_CODE_START(spurious_entries_start)
SYM_CODE_END(spurious_entries_start)
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
/*
* The actual entry points. Note that DECLARE_IDTENTRY*() serves two

View File

@ -2,7 +2,11 @@
#ifndef _ASM_X86_INIT_H
#define _ASM_X86_INIT_H
#if defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 170000
#define __head __section(".head.text") __no_sanitize_undefined __no_stack_protector
#else
#define __head __section(".head.text") __no_sanitize_undefined
#endif
struct x86_mapping_info {
void *(*alloc_pgt_page)(void *); /* allocate buf for page table */

View File

@ -6,7 +6,7 @@
#ifndef X86_ASM_INST_H
#define X86_ASM_INST_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#define REG_NUM_INVALID 100

View File

@ -45,7 +45,18 @@
/* Wildcard match so X86_MATCH_VFM(ANY) works */
#define INTEL_ANY IFM(X86_FAMILY_ANY, X86_MODEL_ANY)
/* Family 5 */
#define INTEL_FAM5_START IFM(5, 0x00) /* Notational marker, also P5 A-step */
#define INTEL_PENTIUM_75 IFM(5, 0x02) /* P54C */
#define INTEL_PENTIUM_MMX IFM(5, 0x04) /* P55C */
#define INTEL_QUARK_X1000 IFM(5, 0x09) /* Quark X1000 SoC */
/* Family 6 */
#define INTEL_PENTIUM_PRO IFM(6, 0x01)
#define INTEL_PENTIUM_II_KLAMATH IFM(6, 0x03)
#define INTEL_PENTIUM_III_DESCHUTES IFM(6, 0x05)
#define INTEL_PENTIUM_III_TUALATIN IFM(6, 0x0B)
#define INTEL_PENTIUM_M_DOTHAN IFM(6, 0x0D)
#define INTEL_CORE_YONAH IFM(6, 0x0E)
@ -110,9 +121,9 @@
#define INTEL_SAPPHIRERAPIDS_X IFM(6, 0x8F) /* Golden Cove */
#define INTEL_EMERALDRAPIDS_X IFM(6, 0xCF)
#define INTEL_EMERALDRAPIDS_X IFM(6, 0xCF) /* Raptor Cove */
#define INTEL_GRANITERAPIDS_X IFM(6, 0xAD)
#define INTEL_GRANITERAPIDS_X IFM(6, 0xAD) /* Redwood Cove */
#define INTEL_GRANITERAPIDS_D IFM(6, 0xAE)
/* "Hybrid" Processors (P-Core/E-Core) */
@ -126,16 +137,16 @@
#define INTEL_RAPTORLAKE_P IFM(6, 0xBA)
#define INTEL_RAPTORLAKE_S IFM(6, 0xBF)
#define INTEL_METEORLAKE IFM(6, 0xAC)
#define INTEL_METEORLAKE IFM(6, 0xAC) /* Redwood Cove / Crestmont */
#define INTEL_METEORLAKE_L IFM(6, 0xAA)
#define INTEL_ARROWLAKE_H IFM(6, 0xC5)
#define INTEL_ARROWLAKE_H IFM(6, 0xC5) /* Lion Cove / Skymont */
#define INTEL_ARROWLAKE IFM(6, 0xC6)
#define INTEL_ARROWLAKE_U IFM(6, 0xB5)
#define INTEL_LUNARLAKE_M IFM(6, 0xBD)
#define INTEL_LUNARLAKE_M IFM(6, 0xBD) /* Lion Cove / Skymont */
#define INTEL_PANTHERLAKE_L IFM(6, 0xCC)
#define INTEL_PANTHERLAKE_L IFM(6, 0xCC) /* Cougar Cove / Crestmont */
/* "Small Core" Processors (Atom/E-Core) */
@ -149,9 +160,9 @@
#define INTEL_ATOM_SILVERMONT IFM(6, 0x37) /* Bay Trail, Valleyview */
#define INTEL_ATOM_SILVERMONT_D IFM(6, 0x4D) /* Avaton, Rangely */
#define INTEL_ATOM_SILVERMONT_MID IFM(6, 0x4A) /* Merriefield */
#define INTEL_ATOM_SILVERMONT_MID2 IFM(6, 0x5A) /* Anniedale */
#define INTEL_ATOM_AIRMONT IFM(6, 0x4C) /* Cherry Trail, Braswell */
#define INTEL_ATOM_AIRMONT_MID IFM(6, 0x5A) /* Moorefield */
#define INTEL_ATOM_AIRMONT_NP IFM(6, 0x75) /* Lightning Mountain */
#define INTEL_ATOM_GOLDMONT IFM(6, 0x5C) /* Apollo Lake */
@ -176,16 +187,35 @@
#define INTEL_XEON_PHI_KNL IFM(6, 0x57) /* Knights Landing */
#define INTEL_XEON_PHI_KNM IFM(6, 0x85) /* Knights Mill */
/* Family 5 */
#define INTEL_QUARK_X1000 IFM(5, 0x09) /* Quark X1000 SoC */
/* Notational marker denoting the last Family 6 model */
#define INTEL_FAM6_LAST IFM(6, 0xFF)
/* Family 15 - NetBurst */
#define INTEL_P4_WILLAMETTE IFM(15, 0x01) /* Also Xeon Foster */
#define INTEL_P4_PRESCOTT IFM(15, 0x03)
#define INTEL_P4_PRESCOTT_2M IFM(15, 0x04)
#define INTEL_P4_CEDARMILL IFM(15, 0x06) /* Also Xeon Dempsey */
/* Family 19 */
#define INTEL_PANTHERCOVE_X IFM(19, 0x01) /* Diamond Rapids */
/* CPU core types */
/*
* Intel CPU core types
*
* CPUID.1AH.EAX[31:0] uniquely identifies the microarchitecture
* of the core. Bits [31:24] indicate its core type (Core or Atom)
* and bits [23:0] indicate the native model ID of the core.
* Core type and native model ID are defined in the enumerations below.
*/
enum intel_cpu_type {
INTEL_CPU_TYPE_UNKNOWN,
INTEL_CPU_TYPE_ATOM = 0x20,
INTEL_CPU_TYPE_CORE = 0x40,
};
enum intel_native_id {
INTEL_ATOM_CMT_NATIVE_ID = 0x2, /* Crestmont */
INTEL_ATOM_SKT_NATIVE_ID = 0x3, /* Skymont */
};
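A worked decode of CPUID.1AH.EAX following the layout in the comment above; the helper name and the Skymont check are illustrative only:

static bool example_is_skymont_ecore(void)
{
	u32 eax = cpuid_eax(0x1a);
	enum intel_cpu_type type = eax >> 24;
	u32 native_id = eax & GENMASK(23, 0);

	return type == INTEL_CPU_TYPE_ATOM &&
	       native_id == INTEL_ATOM_SKT_NATIVE_ID;
}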
#endif /* _ASM_X86_INTEL_FAMILY_H */

View File

@ -175,6 +175,9 @@ extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, un
extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
#define ioremap_encrypted ioremap_encrypted
void *arch_memremap_wb(phys_addr_t phys_addr, size_t size, unsigned long flags);
#define arch_memremap_wb arch_memremap_wb
/**
* ioremap - map bus memory into CPU space
* @offset: bus address of the memory

View File

@ -116,7 +116,7 @@
ASM_CALL_ARG2
#define call_on_irqstack(func, asm_call, argconstr...) \
call_on_stack(__this_cpu_read(pcpu_hot.hardirq_stack_ptr), \
call_on_stack(__this_cpu_read(hardirq_stack_ptr), \
func, asm_call, argconstr)
/* Macros to assert type correctness for run_*_on_irqstack macros */
@ -135,7 +135,7 @@
* User mode entry and interrupt on the irq stack do not \
* switch stacks. If from user mode the task stack is empty. \
*/ \
if (user_mode(regs) || __this_cpu_read(pcpu_hot.hardirq_stack_inuse)) { \
if (user_mode(regs) || __this_cpu_read(hardirq_stack_inuse)) { \
irq_enter_rcu(); \
func(c_args); \
irq_exit_rcu(); \
@ -146,9 +146,9 @@
* places. Invoke the stack switch macro with the call \
* sequence which matches the above direct invocation. \
*/ \
__this_cpu_write(pcpu_hot.hardirq_stack_inuse, true); \
__this_cpu_write(hardirq_stack_inuse, true); \
call_on_irqstack(func, asm_call, constr); \
__this_cpu_write(pcpu_hot.hardirq_stack_inuse, false); \
__this_cpu_write(hardirq_stack_inuse, false); \
} \
}
@ -212,9 +212,9 @@
*/
#define do_softirq_own_stack() \
{ \
__this_cpu_write(pcpu_hot.hardirq_stack_inuse, true); \
__this_cpu_write(hardirq_stack_inuse, true); \
call_on_irqstack(__do_softirq, ASM_CALL_ARG0); \
__this_cpu_write(pcpu_hot.hardirq_stack_inuse, false); \
__this_cpu_write(hardirq_stack_inuse, false); \
}
#endif

View File

@ -4,7 +4,7 @@
#include <asm/processor-flags.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/nospec-branch.h>
@ -79,7 +79,7 @@ static __always_inline void native_local_irq_restore(unsigned long flags)
#ifdef CONFIG_PARAVIRT_XXL
#include <asm/paravirt.h>
#else
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
static __always_inline unsigned long arch_local_save_flags(void)
@ -133,10 +133,10 @@ static __always_inline unsigned long arch_local_irq_save(void)
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* CONFIG_PARAVIRT_XXL */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
static __always_inline int arch_irqs_disabled_flags(unsigned long flags)
{
return !(flags & X86_EFLAGS_IF);
@ -154,6 +154,6 @@ static __always_inline void arch_local_irq_restore(unsigned long flags)
if (!arch_irqs_disabled_flags(flags))
arch_local_irq_enable();
}
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif

View File

@ -7,7 +7,7 @@
#include <asm/asm.h>
#include <asm/nops.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/stringify.h>
#include <linux/types.h>
@ -55,6 +55,6 @@ l_yes:
extern int arch_jump_entry_size(struct jump_entry *entry);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif

View File

@ -23,7 +23,7 @@
(1ULL << (__VIRTUAL_MASK_SHIFT - \
KASAN_SHADOW_SCALE_SHIFT)))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_KASAN
void __init kasan_early_init(void);

View File

@ -13,11 +13,12 @@
# define KEXEC_CONTROL_PAGE_SIZE 4096
# define KEXEC_CONTROL_CODE_MAX_SIZE 2048
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/string.h>
#include <linux/kernel.h>
#include <asm/asm.h>
#include <asm/page.h>
#include <asm/ptrace.h>
@ -71,41 +72,32 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
if (oldregs) {
memcpy(newregs, oldregs, sizeof(*newregs));
} else {
#ifdef CONFIG_X86_32
asm volatile("movl %%ebx,%0" : "=m"(newregs->bx));
asm volatile("movl %%ecx,%0" : "=m"(newregs->cx));
asm volatile("movl %%edx,%0" : "=m"(newregs->dx));
asm volatile("movl %%esi,%0" : "=m"(newregs->si));
asm volatile("movl %%edi,%0" : "=m"(newregs->di));
asm volatile("movl %%ebp,%0" : "=m"(newregs->bp));
asm volatile("movl %%eax,%0" : "=m"(newregs->ax));
asm volatile("movl %%esp,%0" : "=m"(newregs->sp));
asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
asm volatile("movl %%ds, %%eax;" :"=a"(newregs->ds));
asm volatile("movl %%es, %%eax;" :"=a"(newregs->es));
asm volatile("pushfl; popl %0" :"=m"(newregs->flags));
#else
asm volatile("movq %%rbx,%0" : "=m"(newregs->bx));
asm volatile("movq %%rcx,%0" : "=m"(newregs->cx));
asm volatile("movq %%rdx,%0" : "=m"(newregs->dx));
asm volatile("movq %%rsi,%0" : "=m"(newregs->si));
asm volatile("movq %%rdi,%0" : "=m"(newregs->di));
asm volatile("movq %%rbp,%0" : "=m"(newregs->bp));
asm volatile("movq %%rax,%0" : "=m"(newregs->ax));
asm volatile("movq %%rsp,%0" : "=m"(newregs->sp));
asm volatile("movq %%r8,%0" : "=m"(newregs->r8));
asm volatile("movq %%r9,%0" : "=m"(newregs->r9));
asm volatile("movq %%r10,%0" : "=m"(newregs->r10));
asm volatile("movq %%r11,%0" : "=m"(newregs->r11));
asm volatile("movq %%r12,%0" : "=m"(newregs->r12));
asm volatile("movq %%r13,%0" : "=m"(newregs->r13));
asm volatile("movq %%r14,%0" : "=m"(newregs->r14));
asm volatile("movq %%r15,%0" : "=m"(newregs->r15));
asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
asm volatile("pushfq; popq %0" :"=m"(newregs->flags));
asm volatile("mov %%" _ASM_BX ",%0" : "=m"(newregs->bx));
asm volatile("mov %%" _ASM_CX ",%0" : "=m"(newregs->cx));
asm volatile("mov %%" _ASM_DX ",%0" : "=m"(newregs->dx));
asm volatile("mov %%" _ASM_SI ",%0" : "=m"(newregs->si));
asm volatile("mov %%" _ASM_DI ",%0" : "=m"(newregs->di));
asm volatile("mov %%" _ASM_BP ",%0" : "=m"(newregs->bp));
asm volatile("mov %%" _ASM_AX ",%0" : "=m"(newregs->ax));
asm volatile("mov %%" _ASM_SP ",%0" : "=m"(newregs->sp));
#ifdef CONFIG_X86_64
asm volatile("mov %%r8,%0" : "=m"(newregs->r8));
asm volatile("mov %%r9,%0" : "=m"(newregs->r9));
asm volatile("mov %%r10,%0" : "=m"(newregs->r10));
asm volatile("mov %%r11,%0" : "=m"(newregs->r11));
asm volatile("mov %%r12,%0" : "=m"(newregs->r12));
asm volatile("mov %%r13,%0" : "=m"(newregs->r13));
asm volatile("mov %%r14,%0" : "=m"(newregs->r14));
asm volatile("mov %%r15,%0" : "=m"(newregs->r15));
#endif
asm volatile("mov %%ss,%k0" : "=a"(newregs->ss));
asm volatile("mov %%cs,%k0" : "=a"(newregs->cs));
#ifdef CONFIG_X86_32
asm volatile("mov %%ds,%k0" : "=a"(newregs->ds));
asm volatile("mov %%es,%k0" : "=a"(newregs->es));
#endif
asm volatile("pushf\n\t"
"pop %0" : "=m"(newregs->flags));
newregs->ip = _THIS_IP_;
}
}
@ -225,6 +217,6 @@ unsigned int arch_crash_get_elfcorehdr_size(void);
#define crash_get_elfcorehdr_size arch_crash_get_elfcorehdr_size
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_KEXEC_H */

View File

@ -38,7 +38,7 @@
#define ASM_FUNC_ALIGN __stringify(__FUNC_ALIGN)
#define SYM_F_ALIGN __FUNC_ALIGN
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define RET jmp __x86_return_thunk
@ -50,7 +50,7 @@
#endif
#endif /* CONFIG_MITIGATION_RETPOLINE */
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
#define ASM_RET "jmp __x86_return_thunk\n\t"
@ -62,7 +62,7 @@
#endif
#endif /* CONFIG_MITIGATION_RETPOLINE */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
/*
* Depending on -fpatchable-function-entry=N,N usage (CONFIG_CALL_PADDING) the
@ -119,33 +119,27 @@
/* SYM_FUNC_START -- use for global functions */
#define SYM_FUNC_START(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_F_ALIGN) \
ENDBR
SYM_START(name, SYM_L_GLOBAL, SYM_F_ALIGN)
/* SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment */
#define SYM_FUNC_START_NOALIGN(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE) \
ENDBR
SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
/* SYM_FUNC_START_LOCAL -- use for local functions */
#define SYM_FUNC_START_LOCAL(name) \
SYM_START(name, SYM_L_LOCAL, SYM_F_ALIGN) \
ENDBR
SYM_START(name, SYM_L_LOCAL, SYM_F_ALIGN)
/* SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o alignment */
#define SYM_FUNC_START_LOCAL_NOALIGN(name) \
SYM_START(name, SYM_L_LOCAL, SYM_A_NONE) \
ENDBR
SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
/* SYM_FUNC_START_WEAK -- use for weak functions */
#define SYM_FUNC_START_WEAK(name) \
SYM_START(name, SYM_L_WEAK, SYM_F_ALIGN) \
ENDBR
SYM_START(name, SYM_L_WEAK, SYM_F_ALIGN)
/* SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment */
#define SYM_FUNC_START_WEAK_NOALIGN(name) \
SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \
ENDBR
SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
#endif /* _ASM_X86_LINKAGE_H */

Some files were not shown because too many files have changed in this diff.