* [PATCH 5.9] io_uring: replace rw->task_work with rq->task_work
From: Pavel Begunkov @ 2020-07-12 17:42 UTC
To: Jens Axboe, io-uring
io_kiocb::task_work was de-unionised, and is not planned to be shared
back, because it's too useful and commonly used. Hence, instead of
keeping a separate task_work in struct io_async_rw just reuse
req->task_work.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 31 ++++---------------------------
1 file changed, 4 insertions(+), 27 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index fda2089f7b13..6eae2fb469f9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -505,7 +505,6 @@ struct io_async_rw {
 	ssize_t				nr_segs;
 	ssize_t				size;
 	struct wait_page_queue		wpq;
-	struct callback_head		task_work;
 };
 
 struct io_async_ctx {
@@ -2900,33 +2899,11 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
-static void io_async_buf_cancel(struct callback_head *cb)
-{
-	struct io_async_rw *rw;
-	struct io_kiocb *req;
-
-	rw = container_of(cb, struct io_async_rw, task_work);
-	req = rw->wpq.wait.private;
-	__io_req_task_cancel(req, -ECANCELED);
-}
-
-static void io_async_buf_retry(struct callback_head *cb)
-{
-	struct io_async_rw *rw;
-	struct io_kiocb *req;
-
-	rw = container_of(cb, struct io_async_rw, task_work);
-	req = rw->wpq.wait.private;
-
-	__io_req_task_submit(req);
-}
-
 static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
 			     int sync, void *arg)
 {
 	struct wait_page_queue *wpq;
 	struct io_kiocb *req = wait->private;
-	struct io_async_rw *rw = &req->io->rw;
 	struct wait_page_key *key = arg;
 	int ret;
 
@@ -2938,17 +2915,17 @@ static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
 
 	list_del_init(&wait->entry);
 
-	init_task_work(&rw->task_work, io_async_buf_retry);
+	init_task_work(&req->task_work, io_req_task_submit);
 	/* submit ref gets dropped, acquire a new one */
 	refcount_inc(&req->refs);
-	ret = io_req_task_work_add(req, &rw->task_work);
+	ret = io_req_task_work_add(req, &req->task_work);
 	if (unlikely(ret)) {
 		struct task_struct *tsk;
 
 		/* queue just for cancelation */
-		init_task_work(&rw->task_work, io_async_buf_cancel);
+		init_task_work(&req->task_work, io_req_task_cancel);
 		tsk = io_wq_get_task(req->ctx->io_wq);
-		task_work_add(tsk, &rw->task_work, 0);
+		task_work_add(tsk, &req->task_work, 0);
 		wake_up_process(tsk);
 	}
 	return 1;
--
2.24.0
* Re: [PATCH 5.9] io_uring: replace rw->task_work with rq->task_work
From: Jens Axboe @ 2020-07-12 20:29 UTC
To: Pavel Begunkov, io-uring
On 7/12/20 11:42 AM, Pavel Begunkov wrote:
> io_kiocb::task_work was de-unionised, and is not planned to be shared
> back, because it's too useful and commonly used. Hence, instead of
> keeping a separate task_work in struct io_async_rw just reuse
> req->task_work.
This is a good idea, req->task_work is a first class citizen these days.
Unfortunately it doesn't do much good for io_async_ctx, since it's so
huge with the msghdr related bits. It'd be nice to do something about
that too, though not a huge priority as allocating async context is
somewhat of a slow path. Though with the proliferation of task_work,
it's no longer nearly as expensive as it used to be with the async
thread offload. Could be argued to be a full-on fast path these days.
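For context, the msghdr side of that union is what dominates. It looks roughly like this (a sketch reconstructed from the 5.8-era source rather than quoted from the thread, so member names may differ slightly); the 128-byte fast_iov cache and struct sockaddr_storage, plus struct msghdr itself, account for most of the 368 bytes shown by pahole further down the thread:

struct io_async_msghdr {
	struct iovec			fast_iov[UIO_FASTIOV];	/* 8 * sizeof(struct iovec) = 128 bytes */
	struct iovec			*iov;
	struct sockaddr __user		*uaddr;
	struct msghdr			msg;
	struct sockaddr_storage		addr;			/* 128 bytes */
};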
Applied, thanks.
--
Jens Axboe
* Re: [PATCH 5.9] io_uring: replace rw->task_work with rq->task_work
From: Pavel Begunkov @ 2020-07-13 8:03 UTC
To: Jens Axboe, io-uring
On 12/07/2020 23:29, Jens Axboe wrote:
> On 7/12/20 11:42 AM, Pavel Begunkov wrote:
>> io_kiocb::task_work was de-unionised, and is not planned to be shared
>> back, because it's too useful and commonly used. Hence, instead of
>> keeping a separate task_work in struct io_async_rw just reuse
>> req->task_work.
>
> This is a good idea, req->task_work is a first class citizen these days.
> Unfortunately it doesn't do much good for io_async_ctx, since it's so
> huge with the msghdr related bits. It'd be nice to do something about
> that too, though not a huge priority as allocating async context is
We could allocate just the particular member of struct/union io_async_ctx that's
needed rather than the whole thing. Should be a bit better for writes.
And if we can save another 16B in io_async_rw, it'd fit in 3 cachelines. E.g. there
are two 4B holes in struct wait_page_queue: one after "int bit_nr", the other inside
"wait_queue_entry_t wait" (sketched below, after the pahole output).
# pahole -C io_async_ctx ./fs/io_uring.o
struct io_async_ctx {
	union {
		struct io_async_rw rw;                   /*     0   208 */
		struct io_async_msghdr msg;              /*     0   368 */
		struct io_async_connect connect;         /*     0   128 */
		struct io_timeout_data timeout __attribute__((__aligned__(8))); /*     0    96 */
	} __attribute__((__aligned__(8)));               /*     0   368 */

	/* size: 368, cachelines: 6, members: 1 */
	/* forced alignments: 1 */
	/* last cacheline: 48 bytes */
} __attribute__((__aligned__(8)));
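For reference, the two holes mentioned above fall roughly here. This is a sketch reconstructed from the generic pagemap.h/wait.h definitions, not pahole output captured for this thread; offsets assume x86-64:

struct wait_page_queue {
	struct page		*page;		/*  0     8 */
	int			bit_nr;		/*  8     4 */
						/* XXX 4-byte hole */
	wait_queue_entry_t	wait;		/* 16    40 */
	/* wait starts with "unsigned int flags", so there is a second
	 * 4-byte hole inside it, before its "void *private" member. */
};						/* size: 56 */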
> somewhat of a slow path. Though with the proliferation of task_work,
> it's no longer nearly as expensive as it used to be with the async
> thread offload. Could be argued to be a full-on fast path these days.
>
> Applied, thanks.
>
--
Pavel Begunkov
* Re: [PATCH 5.9] io_uring: replace rw->task_work with rq->task_work
From: Jens Axboe @ 2020-07-13 14:11 UTC
To: Pavel Begunkov, io-uring
On 7/13/20 2:03 AM, Pavel Begunkov wrote:
> On 12/07/2020 23:29, Jens Axboe wrote:
>> On 7/12/20 11:42 AM, Pavel Begunkov wrote:
>>> io_kiocb::task_work was de-unionised, and is not planned to be shared
>>> back, because it's too useful and commonly used. Hence, instead of
>>> keeping a separate task_work in struct io_async_rw just reuse
>>> req->task_work.
>>
>> This is a good idea, req->task_work is a first class citizen these days.
>> Unfortunately it doesn't do much good for io_async_ctx, since it's so
>> huge with the msghdr related bits. It'd be nice to do something about
>> that too, though not a huge priority as allocating async context is
>
> We can allocate not an entire struct/union io_async_ctx but its particular
> member. Should be a bit better for writes.
Right, we probably just need to turn req->io into a:
void *async_ctx;
and have it be assigned with the various types that are needed for
async deferral.
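A minimal sketch of that idea, with hypothetical names (io_alloc_async_ctx() here is illustrative, not the applied change):

struct io_kiocb {
	/* ... */
	void			*async_ctx;	/* replaces the typed req->io */
	/* ... */
};

static int io_alloc_async_ctx(struct io_kiocb *req, size_t size)
{
	/* allocate only the async state the current opcode needs */
	req->async_ctx = kmalloc(size, GFP_KERNEL);
	return req->async_ctx ? 0 : -ENOMEM;
}

/* e.g. read/write would pass sizeof(struct io_async_rw), while
 * sendmsg/recvmsg would pass sizeof(struct io_async_msghdr). */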
> And if we can save another 16B in io_async_rw, it'd be 3 cachelines for
> io_async_rw. E.g. there are two 4B holes in struct wait_page_queue, one is
> from "int bit_nr", the second is inside "wait_queue_entry_t wait".
An easy 8 bytes is just turning nr_segs and size into 32-bit types. The
size will never be more than 2G, and segs is limited at 1k iirc.
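As a sketch of that saving (hypothetical, showing only the fields visible in the hunk above; two ssize_t members become two 32-bit ones, trimming 8 bytes):

struct io_async_rw {
	/* ... iovec members unchanged ... */
	unsigned int			nr_segs;	/* was: ssize_t */
	unsigned int			size;		/* was: ssize_t */
	struct wait_page_queue		wpq;
};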
--
Jens Axboe