Pull non-MM updates from Andrew Morton:
- "panic: sys_info: Refactor and fix a potential issue" (Andy Shevchenko)
fixes a build issue and does some cleanup in ib/sys_info.c
- "Implement mul_u64_u64_div_u64_roundup()" (David Laight)
enhances the 64-bit math code on behalf of a PWM driver and beefs up
the test module for these library functions
- "scripts/gdb/symbols: make BPF debug info available to GDB" (Ilya Leoshkevich)
makes BPF symbol names, sizes, and line numbers available to the GDB
debugger
- "Enable hung_task and lockup cases to dump system info on demand" (Feng Tang)
adds a sysctl which can be used to cause additional info dumping when
the hung-task and lockup detectors fire
- "lib/base64: add generic encoder/decoder, migrate users" (Kuan-Wei Chiu)
adds a general base64 encoder/decoder to lib/ and migrates several
users away from their private implementations
- "rbree: inline rb_first() and rb_last()" (Eric Dumazet)
makes TCP a little faster
- "liveupdate: Rework KHO for in-kernel users" (Pasha Tatashin)
reworks the KEXEC Handover interfaces in preparation for Live Update
Orchestrator (LUO), and possibly for other future clients
- "kho: simplify state machine and enable dynamic updates" (Pasha Tatashin)
increases the flexibility of KEXEC Handover. Also preparation for LUO
- "Live Update Orchestrator" (Pasha Tatashin)
is a major new feature targeted at cloud environments. Quoting the
cover letter:
This series introduces the Live Update Orchestrator, a kernel
subsystem designed to facilitate live kernel updates using a
kexec-based reboot. This capability is critical for cloud
environments, allowing hypervisors to be updated with minimal
downtime for running virtual machines. LUO achieves this by
preserving the state of selected resources, such as memory,
devices and their dependencies, across the kernel transition.
As a key feature, this series includes support for preserving
memfd file descriptors, which allows critical in-memory data, such
as guest RAM or any other large memory region, to be maintained in
RAM across the kexec reboot.
Mike Rapoport merits a mention here, for his extensive review and
testing work.
- "kexec: reorganize kexec and kdump sysfs" (Sourabh Jain)
moves the kexec and kdump sysfs entries from /sys/kernel/ to
/sys/kernel/kexec/ and adds backward-compatibility symlinks which can
hopefully be removed one day
- "kho: fixes for vmalloc restoration" (Mike Rapoport)
fixes a BUG which was being hit during KHO restoration of vmalloc()
regions
* tag 'mm-nonmm-stable-2025-12-06-11-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (139 commits)
calibrate: update header inclusion
Reinstate "resource: avoid unnecessary lookups in find_next_iomem_res()"
vmcoreinfo: track and log recoverable hardware errors
kho: fix restoring of contiguous ranges of order-0 pages
kho: kho_restore_vmalloc: fix initialization of pages array
MAINTAINERS: TPM DEVICE DRIVER: update the W-tag
init: replace simple_strtoul with kstrtoul to improve lpj_setup
KHO: fix boot failure due to kmemleak access to non-PRESENT pages
Documentation/ABI: new kexec and kdump sysfs interface
Documentation/ABI: mark old kexec sysfs deprecated
kexec: move sysfs entries to /sys/kernel/kexec
test_kho: always print restore status
kho: free chunks using free_page() instead of kfree()
selftests/liveupdate: add kexec test for multiple and empty sessions
selftests/liveupdate: add simple kexec-based selftest for LUO
selftests/liveupdate: add userspace API selftests
docs: add documentation for memfd preservation via LUO
mm: memfd_luo: allow preserving memfd
liveupdate: luo_file: add private argument to store runtime state
mm: shmem: export some functions to internal.h
...
This directory contains a mix of tests integrated with kselftest and
standalone stress tests.

kselftest tests
===============

sve-probe-vls - Checks the SVE vector length enumeration interface
sve-ptrace - Checks the SVE ptrace interface

Running the non-kselftest tests
===============================

sve-stress performs an SVE context switch stress test, as described
below.

(The fpsimd-stress test works the same way; just substitute "fpsimd"
for "sve" in the following commands.)

The test runs until killed by the user.

If no context switch error was detected, you will see output such as
the following:

$ ./sve-stress
(wait for some time)
^C
Vector length: 512 bits
PID: 1573
Terminated by signal 15, no error, iterations=9467, signals=1014
Vector length: 512 bits
PID: 1575
Terminated by signal 15, no error, iterations=9448, signals=1028
Vector length: 512 bits
PID: 1577
Terminated by signal 15, no error, iterations=9436, signals=1039
Vector length: 512 bits
PID: 1579
Terminated by signal 15, no error, iterations=9421, signals=1039
Vector length: 512 bits
PID: 1581
Terminated by signal 15, no error, iterations=9403, signals=1039
Vector length: 512 bits
PID: 1583
Terminated by signal 15, no error, iterations=9385, signals=1036
Vector length: 512 bits
PID: 1585
Terminated by signal 15, no error, iterations=9376, signals=1039
Vector length: 512 bits
PID: 1587
Terminated by signal 15, no error, iterations=9361, signals=1039
Vector length: 512 bits
PID: 1589
Terminated by signal 15, no error, iterations=9350, signals=1039

If an error was detected, details of the mismatch will be printed
instead of "no error".

Ideally, the test should be allowed to run for many minutes or hours
to maximise test coverage.

KVM stress testing
==================

To try to reproduce the bugs that we have been observing, sve-stress
should be run in parallel in two KVM guests, while simultaneously
running on the host.

1) Start 2 guests, using the following command for each:

$ lkvm run --console=virtio -pconsole=hvc0 --sve Image

(Depending on the hardware GIC implementation, you may also need
--irqchip=gicv3. New kvmtool defaults to that if appropriate, but I
can't remember whether my branch is new enough for that. Try without
the option first.)

Kvmtool occupies the terminal until you kill it (Ctrl+A x), or until
the guest terminates. It is therefore recommended to run each instance
in a separate terminal (use screen or ssh etc.). This allows multiple
guests to be run in parallel while running other commands on the host.

Within the guest, the host filesystem is accessible, mounted on /host.

2) Run sve-stress on *each* guest with the vector length set to 32:

guest$ ./vlset --inherit 32 ./sve-stress

(Roughly what vlset does via prctl() is sketched at the end of this
file.)

3) Run sve-stress on the host with the maximum vector length:

host$ ./vlset --inherit --max ./sve-stress

Again, the test should be allowed to run for many minutes or hours to
maximise test coverage.

If no error is detected, you will see output from each sve-stress
instance similar to that illustrated above; otherwise details of the
observed mismatches will be printed.
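For reference, roughly what vlset does under the hood: a task's SVE
vector length is set with prctl(PR_SVE_SET_VL), in bytes, and the
PR_SVE_VL_INHERIT flag keeps the setting across exec. A minimal
sketch, not the actual vlset source; it assumes an SVE-capable kernel
and CPU, and trims error handling for brevity:

#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef PR_SVE_SET_VL
#define PR_SVE_SET_VL		50		/* from <linux/prctl.h> */
#define PR_SVE_VL_INHERIT	(1 << 17)
#endif

/*
 * Roughly "vlset --inherit 32 CMD": set this task's SVE vector length
 * to 32 bytes (256 bits), mark it inherited across exec, then run CMD.
 */
int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s CMD [ARGS...]\n", argv[0]);
		return 1;
	}
	if (prctl(PR_SVE_SET_VL, 32 | PR_SVE_VL_INHERIT) < 0) {
		perror("PR_SVE_SET_VL");
		return 1;
	}
	execvp(argv[1], argv + 1);
	perror("execvp");
	return 1;
}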