From 830a9e37cfb203aa0f73cd947eda89eda89cc48c Mon Sep 17 00:00:00 2001
From: Laura Brehm
Date: Thu, 3 Jul 2025 14:02:44 +0200
Subject: [PATCH 1/9] coredump: fix PIDFD_INFO_COREDUMP ioctl check

In commit 1d8db6fd698de1f73b1a7d72aea578fdd18d9a87 ("pidfs, coredump:
add PIDFD_INFO_COREDUMP"), the following code was added:

        if (mask & PIDFD_INFO_COREDUMP) {
                kinfo.mask |= PIDFD_INFO_COREDUMP;
                kinfo.coredump_mask = READ_ONCE(pidfs_i(inode)->__pei.coredump_mask);
        }

        [...]

        if (!(kinfo.mask & PIDFD_INFO_COREDUMP)) {
                task_lock(task);
                if (task->mm)
                        kinfo.coredump_mask = pidfs_coredump_mask(task->mm->flags);
                task_unlock(task);
        }

The second hunk in particular looks off to me - the condition in essence
checks whether PIDFD_INFO_COREDUMP was **not** requested, and if so
fetches the coredump_mask into kinfo, since it's checking
!(kinfo.mask & PIDFD_INFO_COREDUMP), and that bit is set unconditionally
in the earlier hunk whenever the flag was requested.

I'm tempted to assume the idea in the second hunk was to calculate the
coredump mask if one was requested but not fetched in the first hunk, in
which case the check should be

        if ((kinfo.mask & PIDFD_INFO_COREDUMP) && !(kinfo.coredump_mask))

which might be more legibly written as

        if ((mask & PIDFD_INFO_COREDUMP) && !(kinfo.coredump_mask))

This could also instead be achieved by changing the first hunk to:

        if (mask & PIDFD_INFO_COREDUMP) {
                kinfo.coredump_mask = READ_ONCE(pidfs_i(inode)->__pei.coredump_mask);
                if (kinfo.coredump_mask)
                        kinfo.mask |= PIDFD_INFO_COREDUMP;
        }

and the second hunk to:

        if ((mask & PIDFD_INFO_COREDUMP) && !(kinfo.mask & PIDFD_INFO_COREDUMP)) {
                task_lock(task);
                if (task->mm) {
                        kinfo.coredump_mask = pidfs_coredump_mask(task->mm->flags);
                        kinfo.mask |= PIDFD_INFO_COREDUMP;
                }
                task_unlock(task);
        }

However, when looking at this, the supposition that the second hunk is
meant to cover cases where the coredump info was requested but the first
hunk failed to fetch it starts getting doubtful, so apologies if I'm
completely off-base.

This patch addresses the issue by fixing the check in the second hunk.

Signed-off-by: Laura Brehm
Link: https://lore.kernel.org/20250703120244.96908-3-laurabrehm@hey.com
Cc: brauner@kernel.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Christian Brauner
---
 fs/pidfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/pidfs.c b/fs/pidfs.c
index 69919be1c9d8..4625e097e3a0 100644
--- a/fs/pidfs.c
+++ b/fs/pidfs.c
@@ -319,7 +319,7 @@ static long pidfd_info(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!c)
 		return -ESRCH;
 
-	if (!(kinfo.mask & PIDFD_INFO_COREDUMP)) {
+	if ((kinfo.mask & PIDFD_INFO_COREDUMP) && !(kinfo.coredump_mask)) {
 		task_lock(task);
 		if (task->mm)
 			kinfo.coredump_mask = pidfs_coredump_mask(task->mm->flags);
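
For illustration, here is a minimal userspace sketch of how this ioctl is
consumed - assuming uapi headers recent enough to ship PIDFD_GET_INFO,
struct pidfd_info and PIDFD_INFO_COREDUMP. The point the fix restores:
the caller requests flags in .mask and must check the *returned* .mask
before trusting .coredump_mask:

        #include <linux/pidfd.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                /* Request only the coredump information. */
                struct pidfd_info info = { .mask = PIDFD_INFO_COREDUMP };
                int pidfd = syscall(SYS_pidfd_open, getpid(), 0);

                if (pidfd < 0 || ioctl(pidfd, PIDFD_GET_INFO, &info) < 0)
                        return 1;

                /* Only trust coredump_mask if the kernel echoed the
                 * flag back in info.mask. */
                if (info.mask & PIDFD_INFO_COREDUMP)
                        printf("coredump_mask: 0x%x\n", info.coredump_mask);
                return 0;
        }
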
From 98f99394a104cc80296da34a62d4e1ad04127013 Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Mon, 7 Jul 2025 13:21:11 +0200
Subject: [PATCH 2/9] secretmem: use SB_I_NOEXEC

Anonymous inodes may never ever be executable, and the only way to
enforce this is to raise SB_I_NOEXEC on the superblock, which can never
be unset. I've made the exec code yell at anyone who does not abide by
this rule.

For good measure also kill any pretense that device nodes are supported
on the secretmem filesystem.

> WARNING: fs/exec.c:119 at path_noexec+0x1af/0x200 fs/exec.c:118, CPU#1: syz-executor260/5835
> Modules linked in:
> CPU: 1 UID: 0 PID: 5835 Comm: syz-executor260 Not tainted 6.16.0-rc4-next-20250703-syzkaller #0 PREEMPT(full)
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
> RIP: 0010:path_noexec+0x1af/0x200 fs/exec.c:118
> Code: 02 31 ff 48 89 de e8 f0 b1 89 ff d1 eb eb 07 e8 07 ad 89 ff b3 01 89 d8 5b 41 5e 41 5f 5d c3 cc cc cc cc cc e8 f2 ac 89 ff 90 <0f> 0b 90 e9 48 ff ff ff 44 89 f1 80 e1 07 80 c1 03 38 c1 0f 8c a6
> RSP: 0018:ffffc90003eefbd8 EFLAGS: 00010293
> RAX: ffffffff8235f22e RBX: ffff888072be0940 RCX: ffff88807763bc00
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
> RBP: 0000000000080000 R08: ffff88807763bc00 R09: 0000000000000003
> R10: 0000000000000003 R11: 0000000000000000 R12: 0000000000000011
> R13: 1ffff920007ddf90 R14: 0000000000000000 R15: dffffc0000000000
> FS:  000055556832d380(0000) GS:ffff888125d1e000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f21e34810d0 CR3: 00000000718a8000 CR4: 00000000003526f0
> Call Trace:
>  <TASK>
>  do_mmap+0xa43/0x10d0 mm/mmap.c:472
>  vm_mmap_pgoff+0x31b/0x4c0 mm/util.c:579
>  ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:607
>  do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
>  do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
>  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f21e340a9f9
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 c1 17 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007ffd23ca3468 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
> RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f21e340a9f9
> RDX: 0000000000000000 RSI: 0000000000004000 RDI: 0000200000ff9000
> RBP: 00007f21e347d5f0 R08: 0000000000000003 R09: 0000000000000000
> R10: 0000000000000011 R11: 0000000000000246 R12: 0000000000000001
> R13: 431bde82d7b634db R14: 0000000000000001 R15: 0000000000000001
>  </TASK>

Link: https://lore.kernel.org/686ba948.a00a0220.c7b3.0080.GAE@google.com
Signed-off-by: Christian Brauner
---
 mm/secretmem.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 9a11a38a6770..e042a4a0bc0c 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -261,7 +261,15 @@ err_put_fd:
 
 static int secretmem_init_fs_context(struct fs_context *fc)
 {
-	return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM;
+	struct pseudo_fs_context *ctx;
+
+	ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+	if (!ctx)
+		return -ENOMEM;
+
+	fc->s_iflags |= SB_I_NOEXEC;
+	fc->s_iflags |= SB_I_NODEV;
+	return 0;
 }
 
 static struct file_system_type secretmem_fs = {
@@ -279,9 +287,6 @@ static int __init secretmem_init(void)
 	if (IS_ERR(secretmem_mnt))
 		return PTR_ERR(secretmem_mnt);
 
-	/* prevent secretmem mappings from ever getting PROT_EXEC */
-	secretmem_mnt->mnt_flags |= MNT_NOEXEC;
-
 	return 0;
 }
 fs_initcall(secretmem_init);
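
To see the enforced behaviour from userspace, a small sketch - assuming
CONFIG_SECRETMEM (plus the secretmem.enable boot parameter on kernels
that gate it) and a libc exposing __NR_memfd_secret; if I read do_mmap()
right, the PROT_EXEC mapping should fail with EPERM via path_noexec():

        #include <sys/mman.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <errno.h>
        #include <stdio.h>

        int main(void)
        {
                int fd = syscall(__NR_memfd_secret, 0);
                void *p;

                if (fd < 0 || ftruncate(fd, 4096) < 0)
                        return 1;

                /* path_noexec() now trips on SB_I_NOEXEC on the
                 * superblock rather than on the MNT_NOEXEC mount flag. */
                p = mmap(NULL, 4096, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
                if (p == MAP_FAILED && errno == EPERM)
                        printf("PROT_EXEC mapping refused, as intended\n");
                return 0;
        }
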
From 6b89819b06d8d339da414f06ef3242f79508be5e Mon Sep 17 00:00:00 2001
From: Zizhi Wo
Date: Thu, 3 Jul 2025 10:44:18 +0800
Subject: [PATCH 3/9] cachefiles: Fix the incorrect return value in __cachefiles_write()

In __cachefiles_write(), if the return value of the write operation > 0,
it is set to 0. This makes it impossible to distinguish scenarios where
a partial write has occurred, and it affects the outer calling functions:

1) cachefiles_write_complete() will call "term_func" such as
   netfs_write_subrequest_terminated(). When "ret" from
   __cachefiles_write() is used as the "transferred_or_error" of that
   function, it cannot convey the amount of data written, which makes
   the WARN there meaningless.

2) cachefiles_ondemand_fd_write_iter() can only assume that all writes
   were successful when "ret" is 0, and unconditionally returns the full
   length specified by user space.

Fix it by making "ret" reflect the actual number of bytes written.
Furthermore, returning a value greater than 0 from __cachefiles_write()
does not affect other call paths, such as cachefiles_issue_write() and
fscache_write().

Fixes: 047487c947e8 ("cachefiles: Implement the I/O routines")
Signed-off-by: Zizhi Wo
Link: https://lore.kernel.org/20250703024418.2809353-1-wozizhi@huaweicloud.com
Signed-off-by: Christian Brauner
---
 fs/cachefiles/io.c       | 2 --
 fs/cachefiles/ondemand.c | 4 +---
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index c08e4a66ac07..3e0576d9db1d 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -347,8 +347,6 @@ int __cachefiles_write(struct cachefiles_object *object,
 	default:
 		ki->was_async = false;
 		cachefiles_write_complete(&ki->iocb, ret);
-		if (ret > 0)
-			ret = 0;
 		break;
 	}
 
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index d9bc67176128..a7ed86fa98bb 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -83,10 +83,8 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
 
 	trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
 	ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
-	if (!ret) {
-		ret = len;
+	if (ret > 0)
 		kiocb->ki_pos += ret;
-	}
 
 out:
 	fput(file);
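
To make the caller-visible difference concrete, a toy model in plain
userspace C - none of this is kernel code, the function names are
invented - showing why collapsing a positive byte count to 0 loses the
partial-write information:

        #include <stdio.h>
        #include <sys/types.h>

        /* Old behaviour: a positive byte count is collapsed to "0 == success". */
        static ssize_t write_collapsed(size_t actually_written)
        {
                ssize_t ret = (ssize_t)actually_written;
                if (ret > 0)
                        ret = 0;
                return ret;
        }

        /* New behaviour: report how many bytes really landed. */
        static ssize_t write_fixed(size_t actually_written)
        {
                return (ssize_t)actually_written;
        }

        int main(void)
        {
                size_t len = 8192, pos = 0;
                ssize_t ret;

                /* ret == 0 forces the caller to assume all of 'len' landed. */
                if (write_collapsed(4096) == 0)
                        pos += len;
                printf("collapsed: pos=%zu (wrong, only 4096 written)\n", pos);

                pos = 0;
                ret = write_fixed(4096);
                if (ret > 0)
                        pos += (size_t)ret;
                printf("fixed:     pos=%zu\n", pos);
                return 0;
        }
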
From 0a9e7405131380b57e155f10242b2e25d2e51852 Mon Sep 17 00:00:00 2001
From: Jan Kara
Date: Wed, 9 Jul 2025 11:55:46 +0200
Subject: [PATCH 4/9] isofs: Verify inode mode when loading from disk

Verify that the inode mode is sane when loading it from the disk to
avoid complaints from VFS about setting up invalid inodes.

Reported-by: syzbot+895c23f6917da440ed0d@syzkaller.appspotmail.com
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara
Link: https://lore.kernel.org/20250709095545.31062-2-jack@suse.cz
Acked-by: Christian Brauner
Signed-off-by: Christian Brauner
---
 fs/isofs/inode.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
index d5da9817df9b..33e6a620c103 100644
--- a/fs/isofs/inode.c
+++ b/fs/isofs/inode.c
@@ -1440,9 +1440,16 @@ static int isofs_read_inode(struct inode *inode, int relocated)
 		inode->i_op = &page_symlink_inode_operations;
 		inode_nohighmem(inode);
 		inode->i_data.a_ops = &isofs_symlink_aops;
-	} else
+	} else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
+		   S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
 		/* XXX - parse_rock_ridge_inode() had already set i_rdev. */
 		init_special_inode(inode, inode->i_mode, inode->i_rdev);
+	} else {
+		printk(KERN_DEBUG "ISOFS: Invalid file type 0%04o for inode %lu.\n",
+		       inode->i_mode, inode->i_ino);
+		ret = -EIO;
+		goto fail;
+	}
 
 	ret = 0;
 out:

From 177bb4cba97aae951b910d8ca248480715d09009 Mon Sep 17 00:00:00 2001
From: Jinliang Zheng
Date: Fri, 11 Jul 2025 16:12:07 +0800
Subject: [PATCH 5/9] iomap: avoid unnecessary ifs_set_range_uptodate() with locks

In the buffer write path, iomap_set_range_uptodate() is called every
time iomap_end_write() is called. But if folio_test_uptodate() holds, we
know that all blocks in this folio are already uptodate, so there is no
need to go deep into the critical section of state_lock to execute
bitmap_set().

This is safe because folios only ever creep towards the
ifs_is_fully_uptodate() state, and once they get there
folio_mark_uptodate() is called, marking the folio uptodate. Once a
folio is uptodate, there is no route back to !uptodate without going
through removal of the folio from the page cache. Therefore, it's fine
to use folio_test_uptodate() to short-circuit unnecessary code paths.

Although state_lock may not see significant contention thanks to the
folio lock, this patch at least reduces the number of instructions
executed, especially the expensive lock-prefixed ones.

Signed-off-by: Jinliang Zheng
Link: https://lore.kernel.org/20250711081207.1782667-1-alexjlzheng@tencent.com
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Signed-off-by: Christian Brauner
---
 fs/iomap/buffered-io.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 3729391a18f3..fb4519158f3a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -71,6 +71,9 @@ static void iomap_set_range_uptodate(struct folio *folio, size_t off,
 	unsigned long flags;
 	bool uptodate = true;
 
+	if (folio_test_uptodate(folio))
+		return;
+
 	if (ifs) {
 		spin_lock_irqsave(&ifs->state_lock, flags);
 		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
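
The soundness argument here - uptodate is monotonic for a folio that
stays in the page cache - is the classic justification for an unlocked
fast-path test. A standalone sketch of the pattern, in userspace C with
invented names rather than the iomap code, using a 64-block folio
analogue:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Toy analogue of struct iomap_folio_state: per-block bits plus
         * a monotonic "fully uptodate" flag that only goes false -> true. */
        struct folio_state {
                pthread_spinlock_t state_lock;
                unsigned long bitmap;           /* one bit per block */
                atomic_bool uptodate;
        };

        static void set_range_uptodate(struct folio_state *fs,
                                       unsigned int first, unsigned int nbits)
        {
                /* Fast path: once fully uptodate, nothing can clear it
                 * again short of dropping the "folio", so the lock and
                 * the bitmap update can be skipped entirely. */
                if (atomic_load(&fs->uptodate))
                        return;

                pthread_spin_lock(&fs->state_lock);
                fs->bitmap |= ((nbits < 64 ? (1UL << nbits) : 0) - 1) << first;
                if (fs->bitmap == ~0UL)
                        atomic_store(&fs->uptodate, true);
                pthread_spin_unlock(&fs->state_lock);
        }

        int main(void)
        {
                struct folio_state fs = { .bitmap = 0 };

                pthread_spin_init(&fs.state_lock, PTHREAD_PROCESS_PRIVATE);
                set_range_uptodate(&fs, 0, 64);
                printf("uptodate: %d\n", atomic_load(&fs.uptodate));
                return 0;
        }
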
From fdfe0133473a528e3f5da69c35419ce6711d6b89 Mon Sep 17 00:00:00 2001
From: Al Viro
Date: Sat, 12 Jul 2025 18:18:43 +0100
Subject: [PATCH 6/9] fix a leak in fcntl_dirnotify()

[into #fixes, unless somebody objects]

The lifetime of new_dn_mark is controlled by that of its ->fsn_mark,
pointed to by new_fsn_mark. Unfortunately, a failure exit had been
inserted between the allocation of new_dn_mark and the call of
fsnotify_init_mark(), ending up with a leak.

Fixes: 1934b212615d ("file: reclaim 24 bytes from f_owner")
Signed-off-by: Al Viro
Link: https://lore.kernel.org/20250712171843.GB1880847@ZenIV
Signed-off-by: Christian Brauner
---
 fs/notify/dnotify/dnotify.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
index c4cdaf5fa7ed..9fb73bafd41d 100644
--- a/fs/notify/dnotify/dnotify.c
+++ b/fs/notify/dnotify/dnotify.c
@@ -308,6 +308,10 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned int arg)
 		goto out_err;
 	}
 
+	error = file_f_owner_allocate(filp);
+	if (error)
+		goto out_err;
+
 	/* new fsnotify mark, we expect most fcntl calls to add a new mark */
 	new_dn_mark = kmem_cache_alloc(dnotify_mark_cache, GFP_KERNEL);
 	if (!new_dn_mark) {
@@ -315,10 +319,6 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned int arg)
 		goto out_err;
 	}
 
-	error = file_f_owner_allocate(filp);
-	if (error)
-		goto out_err;
-
 	/* set up the new_fsn_mark and new_dn_mark */
 	new_fsn_mark = &new_dn_mark->fsn_mark;
 	fsnotify_init_mark(new_fsn_mark, dnotify_group);

From 4c238e30774e3022a505fa54311273add7570f13 Mon Sep 17 00:00:00 2001
From: David Howells
Date: Fri, 11 Jul 2025 16:10:00 +0100
Subject: [PATCH 7/9] netfs: Fix copy-to-cache so that it performs collection with ceph+fscache

The netfs copy-to-cache facility, as used by Ceph with local caching,
sets up a new request to write data that has just been read to the
cache. The request is started and then left to look after itself whilst
the app continues. The request gets notified by the backing fs upon
completion of the async DIO write, but then tries to wake up the app
because NETFS_RREQ_OFFLOAD_COLLECTION isn't set - yet the app isn't
waiting there, and so the request just hangs.

Fix this by setting NETFS_RREQ_OFFLOAD_COLLECTION, which causes the
notification from the backing filesystem to put the collection onto a
work queue instead.

Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item")
Reported-by: Max Kellermann
Link: https://lore.kernel.org/r/CAKPOu+8z_ijTLHdiCYGU_Uk7yYD=shxyGLwfe-L7AV3DhebS3w@mail.gmail.com/
Signed-off-by: David Howells
Link: https://lore.kernel.org/20250711151005.2956810-2-dhowells@redhat.com
Reviewed-by: Paulo Alcantara (Red Hat)
cc: Paulo Alcantara
cc: Viacheslav Dubeyko
cc: Alex Markuze
cc: Ilya Dryomov
cc: netfs@lists.linux.dev
cc: ceph-devel@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: stable@vger.kernel.org
Signed-off-by: Christian Brauner
---
 fs/netfs/read_pgpriv2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
index 5bbe906a551d..080d2a6a51d9 100644
--- a/fs/netfs/read_pgpriv2.c
+++ b/fs/netfs/read_pgpriv2.c
@@ -110,6 +110,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
 	if (!creq->io_streams[1].avail)
 		goto cancel_put;
 
+	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
 	trace_netfs_write(creq, netfs_write_trace_copy_to_cache);
 	netfs_stat(&netfs_n_wh_copy_to_cache);
 	rreq->copy_to_cache = creq;
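
A compressed model of the hang and the fix, in userspace C with invented
names: if completion only ever signals a sleeping issuer, a
fire-and-forget request has nobody to signal, so such requests have to
route completion to a worker instead:

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        struct request {
                bool offload_collection;        /* fire-and-forget? */
                pthread_mutex_t lock;
                pthread_cond_t done;
        };

        static void queue_collector_work(struct request *req)
        {
                (void)req;
                /* Stand-in for queue_work(): collection runs off-thread. */
                printf("collection queued for async request\n");
        }

        static void on_write_terminated(struct request *req)
        {
                if (req->offload_collection) {
                        /* Nobody sleeps on ->done; a wakeup would be lost. */
                        queue_collector_work(req);
                } else {
                        pthread_mutex_lock(&req->lock);
                        pthread_cond_signal(&req->done); /* issuer waits here */
                        pthread_mutex_unlock(&req->lock);
                }
        }

        int main(void)
        {
                struct request req = {
                        .offload_collection = true,
                        .lock = PTHREAD_MUTEX_INITIALIZER,
                        .done = PTHREAD_COND_INITIALIZER,
                };

                on_write_terminated(&req);      /* without the flag: hang */
                return 0;
        }
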
From 89635eae076cd8eaa5cb752f66538c9dc6c9fdc3 Mon Sep 17 00:00:00 2001
From: David Howells
Date: Fri, 11 Jul 2025 16:10:01 +0100
Subject: [PATCH 8/9] netfs: Fix race between cache write completion and ALL_QUEUED being set

When netfslib is issuing subrequests, the subrequests start processing
immediately and may complete before we reach the end of the issuing
function. At the end of the issuing function we set NETFS_RREQ_ALL_QUEUED
to indicate to the collector that we aren't going to issue any more
subreqs and that it can do the final notifications and cleanup.

Now, this isn't a problem if the request is synchronous
(NETFS_RREQ_OFFLOAD_COLLECTION is unset) as the result collection will
be done in-thread and we're guaranteed an opportunity to run the
collector.

However, if the request is asynchronous, collection is primarily
triggered by the termination of subrequests queuing it on a workqueue.
Now, a race can occur here if the app thread sets ALL_QUEUED after the
last subrequest terminates. This can happen most easily with the
copy2cache code (as used by Ceph) where, in the collection routine of a
read request, an asynchronous write request is spawned to copy data to
the cache. Folios are added to the write request as they're unlocked,
but there may be a delay before ALL_QUEUED is set as the write
subrequests may complete before we get there.

If all the write subreqs have finished by the ALL_QUEUED point, no
further events happen and the collection never happens, leaving the
request hanging.

Fix this by queuing the collector after setting ALL_QUEUED. This is a
bit heavy-handed and it may be sufficient to do it only if there are no
extant subreqs.

Also add a tracepoint to cross-reference both requests in a
copy-to-cache operation and add a trace to the netfs_rreq tracepoint to
indicate the setting of ALL_QUEUED.

Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item")
Reported-by: Max Kellermann
Link: https://lore.kernel.org/r/CAKPOu+8z_ijTLHdiCYGU_Uk7yYD=shxyGLwfe-L7AV3DhebS3w@mail.gmail.com/
Signed-off-by: David Howells
Link: https://lore.kernel.org/20250711151005.2956810-3-dhowells@redhat.com
Reviewed-by: Paulo Alcantara (Red Hat)
cc: Paulo Alcantara
cc: Viacheslav Dubeyko
cc: Alex Markuze
cc: Ilya Dryomov
cc: netfs@lists.linux.dev
cc: ceph-devel@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: stable@vger.kernel.org
Signed-off-by: Christian Brauner
---
 fs/netfs/read_pgpriv2.c      |  4 ++++
 include/trace/events/netfs.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
index 080d2a6a51d9..8097bc069c1d 100644
--- a/fs/netfs/read_pgpriv2.c
+++ b/fs/netfs/read_pgpriv2.c
@@ -111,6 +111,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
 		goto cancel_put;
 
 	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
+	trace_netfs_copy2cache(rreq, creq);
 	trace_netfs_write(creq, netfs_write_trace_copy_to_cache);
 	netfs_stat(&netfs_n_wh_copy_to_cache);
 	rreq->copy_to_cache = creq;
@@ -155,6 +156,9 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
 	netfs_issue_write(creq, &creq->io_streams[1]);
 	smp_wmb(); /* Write lists before ALL_QUEUED. */
 	set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+	trace_netfs_rreq(rreq, netfs_rreq_trace_end_copy_to_cache);
+	if (list_empty_careful(&creq->io_streams[1].subrequests))
+		netfs_wake_collector(creq);
 	netfs_put_request(creq, netfs_rreq_trace_put_return);
 	creq->copy_to_cache = NULL;
 
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 73e96ccbe830..64a382fbc31a 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -55,6 +55,7 @@
 	EM(netfs_rreq_trace_copy,		"COPY   ")	\
 	EM(netfs_rreq_trace_dirty,		"DIRTY  ")	\
 	EM(netfs_rreq_trace_done,		"DONE   ")	\
+	EM(netfs_rreq_trace_end_copy_to_cache,	"END-C2C")	\
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
 	EM(netfs_rreq_trace_ki_complete,	"KI-CMPL")	\
 	EM(netfs_rreq_trace_recollect,		"RECLLCT")	\
@@ -559,6 +560,35 @@ TRACE_EVENT(netfs_write,
 		      __entry->start, __entry->start + __entry->len - 1)
 	    );
 
+TRACE_EVENT(netfs_copy2cache,
+	    TP_PROTO(const struct netfs_io_request *rreq,
+		     const struct netfs_io_request *creq),
+
+	    TP_ARGS(rreq, creq),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int, rreq)
+		    __field(unsigned int, creq)
+		    __field(unsigned int, cookie)
+		    __field(unsigned int, ino)
+		    ),
+
+	    TP_fast_assign(
+		    struct netfs_inode *__ctx = netfs_inode(rreq->inode);
+		    struct fscache_cookie *__cookie = netfs_i_cookie(__ctx);
+		    __entry->rreq = rreq->debug_id;
+		    __entry->creq = creq->debug_id;
+		    __entry->cookie = __cookie ? __cookie->debug_id : 0;
+		    __entry->ino = rreq->inode->i_ino;
+		    ),
+
+	    TP_printk("R=%08x CR=%08x c=%08x i=%x ",
+		      __entry->rreq,
+		      __entry->creq,
+		      __entry->cookie,
+		      __entry->ino)
+	    );
+
 TRACE_EVENT(netfs_collect,
 	    TP_PROTO(const struct netfs_io_request *wreq),
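
The fix is the standard remedy for this lost-wakeup shape. A minimal
sketch of the invariant in C11 atomics - invented names, not the netfs
code: whichever side acts last, the final completion or the thread
setting the all-queued flag, must be the one that triggers collection:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_int nr_outstanding;
        static atomic_bool all_queued;

        static void queue_collector(void)
        {
                printf("collector queued\n");  /* stand-in for queue_work() */
        }

        /* Runs when a subrequest terminates. */
        static void subreq_terminated(void)
        {
                if (atomic_fetch_sub(&nr_outstanding, 1) == 1 &&
                    atomic_load(&all_queued))
                        queue_collector();
        }

        /* Runs in the issuing thread once no more subrequests will be added. */
        static void finish_issuing(void)
        {
                atomic_store(&all_queued, true);
                /* Re-check after setting the flag: if the last subrequest
                 * terminated before all_queued became visible, nobody above
                 * queued the collector and the request would hang. */
                if (atomic_load(&nr_outstanding) == 0)
                        queue_collector();
        }

        int main(void)
        {
                atomic_store(&nr_outstanding, 1);
                subreq_terminated();    /* completes before ALL_QUEUED is set */
                finish_issuing();       /* must queue the collector itself */
                return 0;
        }

In some interleavings both sides queue the collector, which is redundant
but harmless - matching the commit's note that queuing unconditionally
is "a bit heavy-handed".
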
From a88cddaaa3bf7445357a79a5c351c6b6d6f95b7d Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Wed, 16 Jul 2025 10:40:45 +0200
Subject: [PATCH 9/9] MAINTAINERS: add block and fsdevel lists to iov_iter

We've had multiple instances where people didn't Cc fsdevel or block,
which are easily the two subsystems most affected by iov_iter changes.
Put a stop to that and make sure both lists are Cced so we can catch
stuff like [1] early.

Link: https://lore.kernel.org/linux-nvme/20250715132750.9619-4-aaptel@nvidia.com [1]
Link: https://lore.kernel.org/20250716-eklig-rasten-ec8c4dc05a1e@brauner
Reviewed-by: Jens Axboe
Signed-off-by: Christian Brauner
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index fad6cb025a19..fcefe222f093 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -25907,6 +25907,8 @@ F:	fs/hostfs/
 
 USERSPACE COPYIN/COPYOUT (UIOVEC)
 M:	Alexander Viro
+L:	linux-block@vger.kernel.org
+L:	linux-fsdevel@vger.kernel.org
 S:	Maintained
 F:	include/linux/uio.h
 F:	lib/iov_iter.c