* [PATCH for-next 0/7] normal tw optimisation + refactoring
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Patches 1-5 are assorted refactoring patches
Patch 6 is a prep patch, which also helps to inline handle_tw_list()
Patch 7 brings back a linked-request tw run optimisation for normal tw
Pavel Begunkov (7):
io_uring: use user visible tail in io_uring_poll()
io_uring: kill outdated comment about overflow flush
io_uring: improve io_get_sqe
io_uring: refactor req allocation
io_uring: refactor io_put_task helpers
io_uring: refactor tctx_task_work
io_uring: return normal tw run linking optimisation
io_uring/io_uring.c | 57 +++++++++++++++++++++++++++------------------
io_uring/io_uring.h | 19 ++++++++-------
io_uring/notif.c | 3 +--
3 files changed, 46 insertions(+), 33 deletions(-)
--
2.38.1
* [PATCH for-next 1/7] io_uring: use user visible tail in io_uring_poll()
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We return POLLIN from io_uring_poll() depending on whether there are
CQEs for userspace, so we should use the user-visible tail pointer
instead of a transient cached value.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index aef30d265a13..c42c1124ad5c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2888,7 +2888,7 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
* pushes them to do the flush.
*/
- if (io_cqring_events(ctx) || io_has_work(ctx))
+ if (__io_cqring_events_user(ctx) || io_has_work(ctx))
mask |= EPOLLIN | EPOLLRDNORM;
return mask;
--
2.38.1
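For reference, the gist of the two counting helpers is roughly the
following (a simplified sketch, not verbatim kernel code). The
kernel-private cached tail can run ahead of the tail committed to the
shared ring, so counting with it may report CQEs userspace can't see yet:

/* sketch: kernel-private count; cached_cq_tail may be ahead of the ring */
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
{
	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
}

/* sketch: the count exactly as userspace sees it, from shared ring memory */
static unsigned __io_cqring_events_user(struct io_ring_ctx *ctx)
{
	return READ_ONCE(ctx->rings->cq.tail) - READ_ONCE(ctx->rings->cq.head);
}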
* [PATCH for-next 2/7] io_uring: kill outdated comment about overflow flush
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
__io_cqring_overflow_flush() doesn't return anything anymore; remove
the outdated comment.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c42c1124ad5c..118b2fe254ba 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -666,7 +666,6 @@ static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
}
}
-/* Returns true if there are no backlogged entries after the flush */
static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
{
size_t cqe_size = sizeof(struct io_uring_cqe);
--
2.38.1
* [PATCH for-next 3/7] io_uring: improve io_get_sqe
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Return the SQE from io_get_sqe() via an out parameter and use the
boolean return value to indicate success or failure. This lets the
compiler compile out the sqe NULL check when we know the returned SQE
is valid.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 118b2fe254ba..6af11a60dc8a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2370,7 +2370,7 @@ static void io_commit_sqring(struct io_ring_ctx *ctx)
* used, it's important that those reads are done through READ_ONCE() to
* prevent a re-load down the line.
*/
-static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
+static const bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
{
unsigned head, mask = ctx->sq_entries - 1;
unsigned sq_idx = ctx->cached_sq_head++ & mask;
@@ -2388,14 +2388,15 @@ static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
/* double index for 128-byte SQEs, twice as long */
if (ctx->flags & IORING_SETUP_SQE128)
head <<= 1;
- return &ctx->sq_sqes[head];
+ *sqe = &ctx->sq_sqes[head];
+ return true;
}
/* drop invalid entries */
ctx->cq_extra--;
WRITE_ONCE(ctx->rings->sq_dropped,
READ_ONCE(ctx->rings->sq_dropped) + 1);
- return NULL;
+ return false;
}
int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
@@ -2419,8 +2420,7 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
if (unlikely(!io_alloc_req_refill(ctx)))
break;
req = io_alloc_req(ctx);
- sqe = io_get_sqe(ctx);
- if (unlikely(!sqe)) {
+ if (unlikely(!io_get_sqe(ctx, &sqe))) {
io_req_add_to_cache(req, ctx);
break;
}
--
2.38.1
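The same out-parameter idiom in a self-contained userspace form, for
illustration (ring_get() is a hypothetical stand-in, not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* returns true and sets *out on success; *out is left untouched on failure */
static bool ring_get(const int *ring, unsigned head, unsigned tail,
		     unsigned mask, const int **out)
{
	if (head == tail)
		return false;
	*out = &ring[head & mask];
	return true;
}

int main(void)
{
	int ring[8] = { 42 };
	const int *e;

	/* on the success branch no separate NULL test on e is needed */
	if (ring_get(ring, 0, 1, 7, &e))
		printf("%d\n", *e);
	return 0;
}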
* [PATCH for-next 4/7] io_uring: refactor req allocation
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Follow the io_get_sqe() pattern of returning the result via an out
pointer, and hide the request cache refill inside io_alloc_req().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 7 +++----
io_uring/io_uring.h | 19 +++++++++++--------
io_uring/notif.c | 3 +--
3 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6af11a60dc8a..8a99791a507a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2417,9 +2417,8 @@ int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
const struct io_uring_sqe *sqe;
struct io_kiocb *req;
- if (unlikely(!io_alloc_req_refill(ctx)))
+ if (unlikely(!io_alloc_req(ctx, &req)))
break;
- req = io_alloc_req(ctx);
if (unlikely(!io_get_sqe(ctx, &sqe))) {
io_req_add_to_cache(req, ctx);
break;
@@ -2738,14 +2737,14 @@ static int io_eventfd_unregister(struct io_ring_ctx *ctx)
static void io_req_caches_free(struct io_ring_ctx *ctx)
{
+ struct io_kiocb *req;
int nr = 0;
mutex_lock(&ctx->uring_lock);
io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
while (!io_req_cache_empty(ctx)) {
- struct io_kiocb *req = io_alloc_req(ctx);
-
+ req = io_extract_req(ctx);
kmem_cache_free(req_cachep, req);
nr++;
}
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 3ee6fc74f020..1cc6c2a8696b 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -333,16 +333,9 @@ static inline bool io_req_cache_empty(struct io_ring_ctx *ctx)
return !ctx->submit_state.free_list.next;
}
-static inline bool io_alloc_req_refill(struct io_ring_ctx *ctx)
-{
- if (unlikely(io_req_cache_empty(ctx)))
- return __io_alloc_req_refill(ctx);
- return true;
-}
-
extern struct kmem_cache *req_cachep;
-static inline struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
+static inline struct io_kiocb *io_extract_req(struct io_ring_ctx *ctx)
{
struct io_kiocb *req;
@@ -352,6 +345,16 @@ static inline struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
return req;
}
+static inline bool io_alloc_req(struct io_ring_ctx *ctx, struct io_kiocb **req)
+{
+ if (unlikely(io_req_cache_empty(ctx))) {
+ if (!__io_alloc_req_refill(ctx))
+ return false;
+ }
+ *req = io_extract_req(ctx);
+ return true;
+}
+
static inline bool io_allowed_defer_tw_run(struct io_ring_ctx *ctx)
{
return likely(ctx->submitter_task == current);
diff --git a/io_uring/notif.c b/io_uring/notif.c
index c4bb793ebf0e..09dfd0832d19 100644
--- a/io_uring/notif.c
+++ b/io_uring/notif.c
@@ -68,9 +68,8 @@ struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx)
struct io_kiocb *notif;
struct io_notif_data *nd;
- if (unlikely(!io_alloc_req_refill(ctx)))
+ if (unlikely(!io_alloc_req(ctx, ¬if)))
return NULL;
- notif = io_alloc_req(ctx);
notif->opcode = IORING_OP_NOP;
notif->flags = 0;
notif->file = NULL;
--
2.38.1
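With the refill folded in, every allocation site collapses to the same
shape (a sketch based on the hunks above):

struct io_kiocb *req;

if (unlikely(!io_alloc_req(ctx, &req)))
	return NULL;	/* cache was empty and __io_alloc_req_refill() failed */
/* req is valid here; the refill is no longer the caller's concern */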
* [PATCH for-next 5/7] io_uring: refactor io_put_task helpers
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Add a helper for putting refs from the target task context, rename
__io_put_task() and add a couple of comments. Use the remote version
in __io_req_complete_post(); the local one is only needed for
__io_submit_flush_completions().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 8a99791a507a..faada7e76f2d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -713,7 +713,8 @@ static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
io_cqring_do_overflow_flush(ctx);
}
-static void __io_put_task(struct task_struct *task, int nr)
+/* can be called by any task */
+static void io_put_task_remote(struct task_struct *task, int nr)
{
struct io_uring_task *tctx = task->io_uring;
@@ -723,13 +724,19 @@ static void __io_put_task(struct task_struct *task, int nr)
put_task_struct_many(task, nr);
}
+/* used by a task to put its own references */
+static void io_put_task_local(struct task_struct *task, int nr)
+{
+ task->io_uring->cached_refs += nr;
+}
+
/* must to be called somewhat shortly after putting a request */
static inline void io_put_task(struct task_struct *task, int nr)
{
if (likely(task == current))
- task->io_uring->cached_refs += nr;
+ io_put_task_local(task, nr);
else
- __io_put_task(task, nr);
+ io_put_task_remote(task, nr);
}
void io_task_refs_refill(struct io_uring_task *tctx)
@@ -982,7 +989,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
* we don't hold ->completion_lock. Clean them here to avoid
* deadlocks.
*/
- io_put_task(req->task, 1);
+ io_put_task_remote(req->task, 1);
wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
ctx->locked_free_nr++;
}
@@ -1105,7 +1112,7 @@ __cold void io_free_req(struct io_kiocb *req)
io_req_put_rsrc(req);
io_dismantle_req(req);
- io_put_task(req->task, 1);
+ io_put_task_remote(req->task, 1);
spin_lock(&ctx->completion_lock);
wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
--
2.38.1
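The dispatch helper makes the cost split explicit; an annotated sketch
of the code above:

static inline void io_put_task(struct task_struct *task, int nr)
{
	if (likely(task == current))
		io_put_task_local(task, nr);	/* plain add to cached_refs, no atomics */
	else
		io_put_task_remote(task, nr);	/* atomic refcount puts, safe from any task */
}

__io_req_complete_post() needs the remote variant because it can run
from a context other than req->task (e.g. an io-wq worker), where
banking refs into current's cache would be wrong.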
* [PATCH for-next 6/7] io_uring: refactor tctx_task_work
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Merge the almost identical sections of tctx_task_work(); this will
make later code modifications easier and also allows handle_tw_list()
to be inlined.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index faada7e76f2d..586e70f686ce 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1166,7 +1166,7 @@ static unsigned int handle_tw_list(struct llist_node *node,
{
unsigned int count = 0;
- while (node != last) {
+ while (node && node != last) {
struct llist_node *next = node->next;
struct io_kiocb *req = container_of(node, struct io_kiocb,
io_task_work.node);
@@ -1226,7 +1226,7 @@ void tctx_task_work(struct callback_head *cb)
task_work);
struct llist_node fake = {};
struct llist_node *node;
- unsigned int loops = 1;
+ unsigned int loops = 0;
unsigned int count;
if (unlikely(current->flags & PF_EXITING)) {
@@ -1234,15 +1234,12 @@ void tctx_task_work(struct callback_head *cb)
return;
}
- node = io_llist_xchg(&tctx->task_list, &fake);
- count = handle_tw_list(node, &ctx, &uring_locked, NULL);
- node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
- while (node != &fake) {
+ do {
loops++;
node = io_llist_xchg(&tctx->task_list, &fake);
count += handle_tw_list(node, &ctx, &uring_locked, &fake);
node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
- }
+ } while (node != &fake);
ctx_flush_and_put(ctx, &uring_locked);
--
2.38.1
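For clarity, the merged loop annotated (same code as the hunk above,
comments added):

do {
	loops++;
	/* detach all queued work; leave &fake behind as a sentinel so the
	 * list stays non-empty for concurrent io_req_task_work_add()
	 */
	node = io_llist_xchg(&tctx->task_list, &fake);
	count += handle_tw_list(node, &ctx, &uring_locked, &fake);
	/* if nothing new was queued on top of the sentinel, swap it for
	 * NULL; otherwise the cmpxchg fails and we go around again
	 */
	node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
} while (node != &fake);

Note that the first drained batch terminates at NULL rather than at
&fake, which is why handle_tw_list() above gains the `node &&`
termination check.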
* [PATCH for-next 7/7] io_uring: return normal tw run linking optimisation
From: Pavel Begunkov @ 2023-01-23 14:37 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
io_submit_flush_completions() may produce new task_work items, so it's
a good idea to recheck the task_work list after flushing completions.
This optimisation is not new; it was accidentally removed by
f88262e60bb9 ("io_uring: lockless task list").
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 586e70f686ce..8c4d92e64c20 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1238,6 +1238,15 @@ void tctx_task_work(struct callback_head *cb)
loops++;
node = io_llist_xchg(&tctx->task_list, &fake);
count += handle_tw_list(node, &ctx, &uring_locked, &fake);
+
+ /* skip expensive cmpxchg if there are items in the list */
+ if (READ_ONCE(tctx->task_list.first) != &fake)
+ continue;
+ if (uring_locked && !wq_list_empty(&ctx->submit_state.compl_reqs)) {
+ io_submit_flush_completions(ctx);
+ if (READ_ONCE(tctx->task_list.first) != &fake)
+ continue;
+ }
node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
} while (node != &fake);
--
2.38.1
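The case this helps, roughly: flushing the completion of a linked
request queues task_work to start the next request in the chain, and
the recheck lets that work run within the same tctx_task_work()
invocation instead of requiring a fresh one. A hypothetical
liburing-style chain that would exercise this (assuming an initialised
struct io_uring ring):

struct io_uring_sqe *sqe;

sqe = io_uring_get_sqe(&ring);
io_uring_prep_nop(sqe);
sqe->flags |= IOSQE_IO_LINK;	/* completing this arms the next request */

sqe = io_uring_get_sqe(&ring);
io_uring_prep_nop(sqe);

io_uring_submit(&ring);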
* Re: [PATCH for-next 1/7] io_uring: use user visible tail in io_uring_poll()
From: Jens Axboe @ 2023-01-23 18:25 UTC
To: Pavel Begunkov, io-uring
On 1/23/23 7:37 AM, Pavel Begunkov wrote:
> We return POLLIN from io_uring_poll() depending on whether there are
> CQEs for userspace, so we should use the user-visible tail pointer
> instead of a transient cached value.
Should we mark this one for stable as well?
--
Jens Axboe
* Re: [PATCH for-next 1/7] io_uring: use user visible tail in io_uring_poll()
From: Pavel Begunkov @ 2023-01-23 20:56 UTC
To: Jens Axboe, io-uring
On 1/23/23 18:25, Jens Axboe wrote:
> On 1/23/23 7:37 AM, Pavel Begunkov wrote:
>> We return POLLIN from io_uring_poll() depending on whether there are
>> CQEs for userspace, so we should use the user-visible tail pointer
>> instead of a transient cached value.
>
> Should we mark this one for stable as well?
Yeah, we can. The bug makes us overestimate the number of ready CQEs
and can cause spurious POLLINs, but that should be extremely rare and
only happen when the poll is queued (not on the wq wakeup).
--
Pavel Begunkov
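For userspace the takeaway is to treat POLLIN on the ring fd as a hint
and peek non-blockingly rather than assume a CQE is ready; a
hypothetical liburing-style sketch (handle() is a placeholder):

struct io_uring_cqe *cqe;

/* poll()/epoll said readable, but a spurious wakeup is still possible */
if (io_uring_peek_cqe(&ring, &cqe) == 0) {
	handle(cqe);			/* placeholder consumer */
	io_uring_cqe_seen(&ring, cqe);
}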
* Re: [PATCH for-next 1/7] io_uring: use user visible tail in io_uring_poll()
From: Jens Axboe @ 2023-01-23 21:09 UTC
To: Pavel Begunkov, io-uring
On 1/23/23 1:56 PM, Pavel Begunkov wrote:
> On 1/23/23 18:25, Jens Axboe wrote:
>> On 1/23/23 7:37 AM, Pavel Begunkov wrote:
>>> We return POLLIN from io_uring_poll() depending on whether there are
>>> CQEs for userspace, so we should use the user-visible tail pointer
>>> instead of a transient cached value.
>>
>> Should we mark this one for stable as well?
> Yeah, we can. The bug makes us overestimate the number of ready CQEs
> and can cause spurious POLLINs, but that should be extremely rare and
> only happen when the poll is queued (not on the wq wakeup).
Right, it's not critical, but we may as well.
--
Jens Axboe
* Re: [PATCH for-next 0/7] normal tw optimisation + refactoring
From: Jens Axboe @ 2023-01-23 21:14 UTC
To: io-uring, Pavel Begunkov
On Mon, 23 Jan 2023 14:37:12 +0000, Pavel Begunkov wrote:
> Patches 1-5 are assorted refactoring patches
> Patch 6 is a prep patch, which also helps to inline handle_tw_list()
> Patch 7 brings back a linked-request tw run optimisation for normal tw
>
> Pavel Begunkov (7):
> io_uring: use user visible tail in io_uring_poll()
> io_uring: kill outdated comment about overflow flush
> io_uring: improve io_get_sqe
> io_uring: refactor req allocation
> io_uring: refactor io_put_task helpers
> io_uring: refactor tctx_task_work
> io_uring: return normal tw run linking optimisation
>
> [...]
Applied, thanks!
[1/7] io_uring: use user visible tail in io_uring_poll()
commit: 10d6d8338e7b984897ceb905f4b63576aac5b721
[2/7] io_uring: kill outdated comment about overflow flush
commit: 89126f155a5d13c178a3e5d97c6a805626f10406
[3/7] io_uring: improve io_get_sqe
commit: d5a6846a1c5fc7b864b63e90d136a3af6034e37c
[4/7] io_uring: refactor req allocation
commit: 3b70c8766b2a668664e64ee5921a4e300353d451
[5/7] io_uring: refactor io_put_task helpers
commit: dfb27668173462154929f5b8da80cc4b1ba94672
[6/7] io_uring: refactor tctx_task_work
commit: b5b57128d0cd58a487c6ffd04ed526f569232c03
[7/7] io_uring: return normal tw run linking optimisation
commit: 73b62ca46fe7e10334f601643c2ccd4fca4a4874
Best regards,
--
Jens Axboe