messages from 2024-03-18 02:25:39 to 2024-03-28 09:29:10 UTC
[PATCH] [RFC]: fs: claw back a few FMODE_* bits
2024-03-28 9:29 UTC (8+ messages)
[PATCH v4 02/14] mm: Switch mm->get_unmapped_area() to a flag
2024-03-28 3:32 UTC (8+ messages)
[PATCH] io_uring: Remove unused function
2024-03-28 2:23 UTC
[PATCH liburing] io_uring.h: Avoid anonymous enums
2024-03-28 0:16 UTC
[RFC PATCH 0/4] Read/Write with meta buffer
2024-03-27 23:38 UTC (8+ messages)
` [RFC PATCH 1/4] io_uring/rw: Get rid of flags field in struct io_rw
` [RFC PATCH 2/4] io_uring/rw: support read/write with metadata
` [RFC PATCH 3/4] block: modify bio_integrity_map_user to accept iov_iter as argument
` [RFC PATCH 4/4] block: add support to pass the meta buffer
[PATCHSET v2 0/10] Move away from remap_pfn_range()
2024-03-27 20:31 UTC (13+ messages)
` [PATCH 01/10] mm: add nommu variant of vm_insert_pages()
` [PATCH 02/10] io_uring: get rid of remap_pfn_range() for mapping rings/sqes
` [PATCH 03/10] io_uring: use vmap() for ring mapping
` [PATCH 04/10] io_uring: unify io_pin_pages()
` [PATCH 05/10] io_uring/kbuf: get rid of lower BGID lists
` [PATCH 06/10] io_uring/kbuf: get rid of bl->is_ready
` [PATCH 07/10] io_uring/kbuf: vmap pinned buffer ring
` [PATCH 08/10] io_uring/kbuf: protect io_buffer_list teardown with a reference
` [PATCH 09/10] io_uring/kbuf: use vm_insert_pages() for mmap'ed pbuf ring
` [PATCH 10/10] io_uring: use unpin_user_pages() where appropriate
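The series above moves io_uring's ring/buffer mappings away from remap_pfn_range() toward vmap() for the kernel-side view and vm_insert_pages() for the userspace mapping. As a rough illustration only (not the actual io_uring code; the driver and field names are made up), a hypothetical ->mmap handler for an array of already-allocated pages might look like this:

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical example: map 'nr_pages' kernel pages into a userspace VMA
 * with vm_insert_pages() instead of remap_pfn_range(). 'struct my_dev' and
 * its 'pages' array are invented for this sketch.
 */
static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *dev = file->private_data;
	unsigned long nr_pages = dev->nr_pages;
	unsigned long vm_len = vma->vm_end - vma->vm_start;

	if (vm_len > nr_pages << PAGE_SHIFT)
		return -EINVAL;

	/* Pages become ordinary struct-page-backed user mappings. */
	return vm_insert_pages(vma, vma->vm_start, dev->pages, &nr_pages);
}

On the kernel side, the same pages can be given a contiguous kernel address with vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL), which is the other half of what the series switches the ring mapping to.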
[PATCH v6 00/10] block atomic writes
2024-03-27 20:31 UTC (15+ messages)
` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer
` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
` [PATCH v6 03/10] fs: Initial atomic write support
` [PATCH v6 04/10] fs: Add initial atomic write support info to statx
` [PATCH v6 05/10] block: Add core atomic write support
` [PATCH v6 06/10] block: Add atomic write support for statx
` [PATCH v6 07/10] block: Add fops atomic write support
` [PATCH v6 08/10] scsi: sd: Atomic "
` [PATCH v6 09/10] scsi: scsi_debug: "
` [PATCH v6 10/10] nvme: "
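The atomic writes series above advertises per-file atomic write limits through new statx fields (stx_atomic_write_unit_min/max) and adds an RWF_ATOMIC flag for pwritev2(). A minimal userspace sketch of how the proposed interface might be exercised; RWF_ATOMIC is defined locally here because it is an assumption based on the proposed uapi, not a stable header at the time of this posting:

#define _GNU_SOURCE
#include <sys/uio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Assumption: flag value as proposed by the series; may differ once merged. */
#ifndef RWF_ATOMIC
#define RWF_ATOMIC	0x00000040
#endif

int main(int argc, char **argv)
{
	size_t len = 4096;	/* one 4K block, written untorn */
	struct iovec iov;
	void *buf;
	ssize_t ret;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, len, len))
		return 1;
	memset(buf, 0xab, len);
	iov.iov_base = buf;
	iov.iov_len = len;

	/*
	 * The write length and alignment must fall within the device's
	 * advertised atomic write unit min/max, which the series exposes
	 * via statx (STATX_WRITE_ATOMIC).
	 */
	ret = pwritev2(fd, &iov, 1, 0, RWF_ATOMIC);
	if (ret < 0)
		perror("pwritev2(RWF_ATOMIC)");
	close(fd);
	return ret == (ssize_t)len ? 0 : 1;
}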
[PATCHSET 0/4] Use io_wq_work_list for task_work
2024-03-27 18:04 UTC (14+ messages)
` [PATCH 1/4] io_uring: use the right type for work_llist empty check
` [PATCH 2/4] io_uring: switch deferred task_work to an io_wq_work_list
` [PATCH 3/4] io_uring: switch fallback work to io_wq_work_list
` [PATCH 4/4] io_uring: switch normal task_work "
[PATCH] io_uring: refill request cache in memory order
2024-03-26 18:47 UTC
[PATCH] io_uring/poll: shrink alloc cache size to 32
2024-03-26 18:47 UTC
[PATCH] io_uring: releasing CPU resources when polling
2024-03-26 3:39 UTC (3+ messages)
` "
` io_uring: "
[RFC PATCH 0/2] Introduce per-task io utilization boost
2024-03-25 17:23 UTC (23+ messages)
` [RFC PATCH 1/2] sched/fair: Introduce per-task io util boost
` [RFC PATCH 2/2] cpufreq/schedutil: Remove iowait boost
[PATCHSET v2 0/17] Improve async state handling
2024-03-25 14:55 UTC (26+ messages)
` [PATCH 01/17] io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr
` [PATCH 02/17] io_uring/net: switch io_recv() "
` [PATCH 03/17] io_uring/net: unify cleanup handling
` [PATCH 04/17] io_uring/net: always setup an io_async_msghdr
` [PATCH 05/17] io_uring/net: get rid of ->prep_async() for receive side
` [PATCH 06/17] io_uring/net: get rid of ->prep_async() for send side
` [PATCH 07/17] io_uring: kill io_msg_alloc_async_prep()
` [PATCH 08/17] io_uring/net: add iovec recycling
` [PATCH 09/17] io_uring/net: drop 'kmsg' parameter from io_req_msg_cleanup()
` [PATCH 10/17] io_uring/rw: always setup io_async_rw for read/write requests
` [PATCH 11/17] io_uring: get rid of struct io_rw_state
` [PATCH 12/17] io_uring/rw: add iovec recycling
` [PATCH 13/17] io_uring/net: move connect to always using async data
` [PATCH 14/17] io_uring/uring_cmd: switch to always allocating "
` [PATCH 15/17] io_uring/uring_cmd: defer SQE copying until we need it
` [PATCH 16/17] io_uring: drop ->prep_async()
` [PATCH 17/17] io_uring/alloc_cache: switch to array based caching
[GIT PULL] io_uring fixes for 6.9-rc1
2024-03-22 20:05 UTC (2+ messages)
[PATCHSET 0/6] Switch kbuf mappings to vm_insert_pages()
2024-03-21 14:45 UTC (7+ messages)
` [PATCH 1/6] io_uring/kbuf: get rid of lower BGID lists
` [PATCH 2/6] io_uring/kbuf: get rid of bl->is_ready
` [PATCH 3/6] io_uring/kbuf: vmap pinned buffer ring
` [PATCH 4/6] io_uring/kbuf: protect io_buffer_list teardown with a reference
` [PATCH 5/6] mm: add nommu variant of vm_insert_pages()
` [PATCH 6/6] io_uring/kbuf: use vm_insert_pages() for mmap'ed pbuf ring
[PATCH] io_uring/net: drop unused 'fast_iov_one' entry
2024-03-20 14:41 UTC (3+ messages)
[PATCHSET 0/15] Get rid of ->prep_async()
2024-03-20 1:17 UTC (16+ messages)
` [PATCH 01/15] io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr
` [PATCH 02/15] io_uring/net: switch io_recv() "
` [PATCH 03/15] io_uring/net: unify cleanup handling
` [PATCH 04/15] io_uring/net: always setup an io_async_msghdr
` [PATCH 05/15] io_uring/net: get rid of ->prep_async() for receive side
` [PATCH 06/15] io_uring/net: get rid of ->prep_async() for send side
` [PATCH 07/15] io_uring: kill io_msg_alloc_async_prep()
` [PATCH 08/15] io_uring/net: add iovec recycling
` [PATCH 09/15] io_uring/net: drop 'kmsg' parameter from io_req_msg_cleanup()
` [PATCH 10/15] io_uring/rw: always setup io_async_rw for read/write requests
` [PATCH 11/15] io_uring: get rid of struct io_rw_state
` [PATCH 12/15] io_uring/rw: add iovec recycling
` [PATCH 13/15] io_uring/net: move connect to always using async data
` [PATCH 14/15] io_uring/uring_cmd: switch to always allocating "
` [PATCH 15/15] io_uring: drop ->prep_async()
[PATCH] io_uring/alloc_cache: shrink default max entries from 512 to 128
2024-03-19 23:37 UTC
[PATCH] net: Do not break out of sk_stream_wait_memory() with TIF_NOTIFY_SIGNAL
2024-03-19 15:08 UTC (9+ messages)
[PATCH] io_uring/sqpoll: early exit thread if task_context wasn't allocated
2024-03-19 2:25 UTC
[bug report] Kernel panic - not syncing: Fatal hardware error!
2024-03-19 2:20 UTC (2+ messages)
[PATCH v3 00/13] Remove aux CQE caches
2024-03-19 2:19 UTC (21+ messages)
` [PATCH v3 01/13] io_uring/cmd: move io_uring_try_cancel_uring_cmd()
` [PATCH v3 02/13] io_uring/cmd: kill one issue_flags to tw conversion
` [PATCH v3 03/13] io_uring/cmd: fix tw <-> issue_flags conversion
` [PATCH v3 04/13] io_uring/cmd: introduce io_uring_cmd_complete
` [PATCH v3 05/13] nvme/io_uring: don't hard code IO_URING_F_UNLOCKED
` [PATCH v3 06/13] io_uring/rw: avoid punting to io-wq directly
` [PATCH v3 07/13] io_uring: force tw ctx locking
` [PATCH v3 08/13] io_uring: remove struct io_tw_state::locked
` [PATCH v3 09/13] io_uring: refactor io_fill_cqe_req_aux
` [PATCH v3 10/13] io_uring: get rid of intermediate aux cqe caches
` [PATCH v3 11/13] io_uring: remove current check from complete_post
` [PATCH v3 12/13] io_uring: refactor io_req_complete_post()
` [PATCH v3 13/13] io_uring: clean up io_lockdep_assert_cq_locked
[PATCH v2 00/14] remove aux CQE caches
2024-03-18 15:16 UTC (31+ messages)
` [PATCH v2 02/14] io_uring/cmd: fix tw <-> issue_flags conversion
` [PATCH v2 03/14] io_uring/cmd: make io_uring_cmd_done irq safe
` [PATCH v2 05/14] ublk: don't hard code IO_URING_F_UNLOCKED
` [PATCH v2 06/14] nvme/io_uring: "