messages from 2024-03-16 16:36:22 to 2024-03-26 17:11:59 UTC
[PATCH v6 00/10] block atomic writes
2024-03-26 17:11 UTC (12+ messages)
` [PATCH v6 01/10] block: Pass blk_queue_get_max_sectors() a request pointer
` [PATCH v6 02/10] block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
` [PATCH v6 03/10] fs: Initial atomic write support
` [PATCH v6 04/10] fs: Add initial atomic write support info to statx
` [PATCH v6 05/10] block: Add core atomic write support
` [PATCH v6 06/10] block: Add atomic write support for statx
` [PATCH v6 07/10] block: Add fops atomic write support
` [PATCH v6 08/10] scsi: sd: Atomic "
` [PATCH v6 09/10] scsi: scsi_debug: "
` [PATCH v6 10/10] nvme: "
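  (Hedged sketch, not taken from the posted patches: the userspace-facing side of the
  interface this series proposes is a new RWF_ATOMIC flag for pwritev2(), with the
  per-file limits exposed via statx in patches 04 and 06, e.g. stx_atomic_write_unit_min/max.
  The flag value and exact semantics below follow the interface as it was later merged and
  may differ from this v6 posting; treat them as assumptions.)

	/* sketch: issue one all-or-nothing write with the proposed RWF_ATOMIC flag */
	#define _GNU_SOURCE
	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/uio.h>
	#include <unistd.h>

	#ifndef RWF_ATOMIC
	#define RWF_ATOMIC 0x00000040 /* assumed value; check linux/fs.h from the series */
	#endif

	int main(int argc, char **argv)
	{
		/* 4 KiB payload, assumed to fall inside the device's atomic write unit */
		static char buf[4096] __attribute__((aligned(4096)));
		struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
		ssize_t ret;
		int fd;

		if (argc < 2) {
			fprintf(stderr, "usage: %s <file or block device>\n", argv[0]);
			return 1;
		}

		fd = open(argv[1], O_WRONLY | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(buf, 0xab, sizeof(buf));

		/* Either the whole range is written or none of it is torn onto media. */
		ret = pwritev2(fd, &iov, 1, 0, RWF_ATOMIC);
		if (ret < 0)
			perror("pwritev2(RWF_ATOMIC)"); /* EINVAL/EOPNOTSUPP if unsupported */
		else
			printf("atomically wrote %zd bytes\n", ret);

		close(fd);
		return 0;
	}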
[PATCH v4 02/14] mm: Switch mm->get_unmapped_area() to a flag
2024-03-26 11:57 UTC (3+ messages)
[PATCH] io_uring: releasing CPU resources when polling
2024-03-26 3:39 UTC (3+ messages)
` "
` io_uring: "
[RFC PATCH 0/2] Introduce per-task io utilization boost
2024-03-25 17:23 UTC (23+ messages)
` [RFC PATCH 1/2] sched/fair: Introduce per-task io util boost
` [RFC PATCH 2/2] cpufreq/schedutil: Remove iowait boost
[PATCHSET v2 0/17] Improve async state handling
2024-03-25 14:55 UTC (26+ messages)
` [PATCH 01/17] io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr
` [PATCH 02/17] io_uring/net: switch io_recv() "
` [PATCH 03/17] io_uring/net: unify cleanup handling
` [PATCH 04/17] io_uring/net: always setup an io_async_msghdr
` [PATCH 05/17] io_uring/net: get rid of ->prep_async() for receive side
` [PATCH 06/17] io_uring/net: get rid of ->prep_async() for send side
` [PATCH 07/17] io_uring: kill io_msg_alloc_async_prep()
` [PATCH 08/17] io_uring/net: add iovec recycling
` [PATCH 09/17] io_uring/net: drop 'kmsg' parameter from io_req_msg_cleanup()
` [PATCH 10/17] io_uring/rw: always setup io_async_rw for read/write requests
` [PATCH 11/17] io_uring: get rid of struct io_rw_state
` [PATCH 12/17] io_uring/rw: add iovec recycling
` [PATCH 13/17] io_uring/net: move connect to always using async data
` [PATCH 14/17] io_uring/uring_cmd: switch to always allocating "
` [PATCH 15/17] io_uring/uring_cmd: defer SQE copying until we need it
` [PATCH 16/17] io_uring: drop ->prep_async()
` [PATCH 17/17] io_uring/alloc_cache: switch to array based caching
[GIT PULL] io_uring fixes for 6.9-rc1
2024-03-22 20:05 UTC (2+ messages)
[RFC PATCH 0/4] Read/Write with meta buffer
2024-03-22 18:50 UTC (5+ messages)
` [RFC PATCH 1/4] io_uring/rw: Get rid of flags field in struct io_rw
` [RFC PATCH 2/4] io_uring/rw: support read/write with metadata
` [RFC PATCH 3/4] block: modify bio_integrity_map_user to accept iov_iter as argument
` [RFC PATCH 4/4] block: add support to pass the meta buffer
[PATCHSET 0/6] Switch kbuf mappings to vm_insert_pages()
2024-03-21 14:45 UTC (7+ messages)
` [PATCH 1/6] io_uring/kbuf: get rid of lower BGID lists
` [PATCH 2/6] io_uring/kbuf: get rid of bl->is_ready
` [PATCH 3/6] io_uring/kbuf: vmap pinned buffer ring
` [PATCH 4/6] io_uring/kbuf: protect io_buffer_list teardown with a reference
` [PATCH 5/6] mm: add nommu variant of vm_insert_pages()
` [PATCH 6/6] io_uring/kbuf: use vm_insert_pages() for mmap'ed pbuf ring
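  (Hedged sketch, userspace side only: this series reworks how the kernel maps the
  provided-buffer ring (vmap, vm_insert_pages()), which is invisible to applications.
  The snippet below just shows the existing liburing registration path that exercises
  that mmap'ed pbuf ring; buffer group id and sizes are arbitrary example values.)

	/* sketch: register a provided-buffer ring with liburing and publish buffers */
	#include <liburing.h>
	#include <stdio.h>

	#define BGID     7    /* arbitrary buffer group id for this example */
	#define ENTRIES  8    /* must be a power of two */
	#define BUF_SIZE 4096

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_buf_ring *br;
		static char bufs[ENTRIES][BUF_SIZE];
		int i, err;

		if (io_uring_queue_init(8, &ring, 0) < 0)
			return 1;

		/* Allocates/maps the ring and registers it via IORING_REGISTER_PBUF_RING */
		br = io_uring_setup_buf_ring(&ring, ENTRIES, BGID, 0, &err);
		if (!br) {
			fprintf(stderr, "setup_buf_ring failed: %d\n", err);
			return 1;
		}

		/* Hand all buffers to the kernel in one batch */
		for (i = 0; i < ENTRIES; i++)
			io_uring_buf_ring_add(br, bufs[i], BUF_SIZE, i,
					      io_uring_buf_ring_mask(ENTRIES), i);
		io_uring_buf_ring_advance(br, ENTRIES);

		/* Requests submitted with IOSQE_BUFFER_SELECT and ->buf_group = BGID
		 * will now pick their buffers from this ring. */

		io_uring_free_buf_ring(&ring, br, ENTRIES, BGID);
		io_uring_queue_exit(&ring);
		return 0;
	}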
[PATCH] io_uring/net: drop unused 'fast_iov_one' entry
2024-03-20 14:41 UTC (3+ messages)
[PATCHSET 0/15] Get rid of ->prep_async()
2024-03-20 1:17 UTC (16+ messages)
` [PATCH 01/15] io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr
` [PATCH 02/15] io_uring/net: switch io_recv() "
` [PATCH 03/15] io_uring/net: unify cleanup handling
` [PATCH 04/15] io_uring/net: always setup an io_async_msghdr
` [PATCH 05/15] io_uring/net: get rid of ->prep_async() for receive side
` [PATCH 06/15] io_uring/net: get rid of ->prep_async() for send side
` [PATCH 07/15] io_uring: kill io_msg_alloc_async_prep()
` [PATCH 08/15] io_uring/net: add iovec recycling
` [PATCH 09/15] io_uring/net: drop 'kmsg' parameter from io_req_msg_cleanup()
` [PATCH 10/15] io_uring/rw: always setup io_async_rw for read/write requests
` [PATCH 11/15] io_uring: get rid of struct io_rw_state
` [PATCH 12/15] io_uring/rw: add iovec recycling
` [PATCH 13/15] io_uring/net: move connect to always using async data
` [PATCH 14/15] io_uring/uring_cmd: switch to always allocating "
` [PATCH 15/15] io_uring: drop ->prep_async()
[PATCH] io_uring/alloc_cache: shrink default max entries from 512 to 128
2024-03-19 23:37 UTC
[PATCH] net: Do not break out of sk_stream_wait_memory() with TIF_NOTIFY_SIGNAL
2024-03-19 15:08 UTC (9+ messages)
[PATCH] io_uring/sqpoll: early exit thread if task_context wasn't allocated
2024-03-19 2:25 UTC
[bug report] Kernel panic - not syncing: Fatal hardware error!
2024-03-19 2:20 UTC (2+ messages)
[PATCH v3 00/13] Remove aux CQE caches
2024-03-19 2:19 UTC (21+ messages)
` [PATCH v3 01/13] io_uring/cmd: move io_uring_try_cancel_uring_cmd()
` [PATCH v3 02/13] io_uring/cmd: kill one issue_flags to tw conversion
` [PATCH v3 03/13] io_uring/cmd: fix tw <-> issue_flags conversion
` [PATCH v3 04/13] io_uring/cmd: introduce io_uring_cmd_complete
` [PATCH v3 05/13] nvme/io_uring: don't hard code IO_URING_F_UNLOCKED
` [PATCH v3 06/13] io_uring/rw: avoid punting to io-wq directly
` [PATCH v3 07/13] io_uring: force tw ctx locking
` [PATCH v3 08/13] io_uring: remove struct io_tw_state::locked
` [PATCH v3 09/13] io_uring: refactor io_fill_cqe_req_aux
` [PATCH v3 10/13] io_uring: get rid of intermediate aux cqe caches
` [PATCH v3 11/13] io_uring: remove current check from complete_post
` [PATCH v3 12/13] io_uring: refactor io_req_complete_post()
` [PATCH v3 13/13] io_uring: clean up io_lockdep_assert_cq_locked
[PATCH v2 00/14] remove aux CQE caches
2024-03-18 15:16 UTC (41+ messages)
` [PATCH v2 01/14] io_uring/cmd: kill one issue_flags to tw conversion
` [PATCH v2 02/14] io_uring/cmd: fix tw <-> issue_flags conversion
` [PATCH v2 03/14] io_uring/cmd: make io_uring_cmd_done irq safe
` [PATCH v2 04/14] io_uring/cmd: introduce io_uring_cmd_complete
` [PATCH v2 05/14] ublk: don't hard code IO_URING_F_UNLOCKED
` [PATCH v2 06/14] nvme/io_uring: "
` [PATCH v2 07/14] io_uring/rw: avoid punting to io-wq directly
` [PATCH v2 08/14] io_uring: force tw ctx locking
` [PATCH v2 09/14] io_uring: remove struct io_tw_state::locked
` [PATCH v2 10/14] io_uring: refactor io_fill_cqe_req_aux
` [PATCH v2 11/14] io_uring: get rid of intermediate aux cqe caches
` [PATCH v2 12/14] io_uring: remove current check from complete_post
` [PATCH v2 13/14] io_uring: refactor io_req_complete_post()
` [PATCH v2 14/14] io_uring: clean up io_lockdep_assert_cq_locked
[PATCH 00/11] remove aux CQE caches
2024-03-18 1:49 UTC (19+ messages)
` (subset) "
[RFC PATCH v4 00/16] Zero copy Rx using io_uring
2024-03-17 21:30 UTC (11+ messages)
` [RFC PATCH v4 13/16] io_uring: add io_recvzc request
[PATCH v2] io_uring/net: ensure async prep handlers always initialize ->done_io
2024-03-17 20:45 UTC (22+ messages)
[PATCHSET 0/2] Sanitize request setup
2024-03-16 17:29 UTC (3+ messages)
` [PATCH 1/2] io_uring/net: ensure async prep handlers always initialize ->done_io
` [PATCH 2/2] io_uring: clear opcode specific data for an early failure
[syzbot] [io-uring?] KMSAN: uninit-value in io_sendrecv_fail
2024-03-16 17:18 UTC (3+ messages)
[PATCH] io_uring: remove timeout/poll specific cancelations
2024-03-16 17:12 UTC