* [PATCHSET v2 0/2] io_uring async poll fixes
@ 2020-04-13 17:26 Jens Axboe
  2020-04-13 17:26 ` [PATCH 1/2] io_uring: check for need to re-wait in polled async handling
  2020-04-13 17:26 ` [PATCH 2/2] io_uring: io_async_task_func() should check and honor cancelation

From: Jens Axboe @ 2020-04-13 17:26 UTC
To: io-uring

Just two minor fixes here that should go into 5.7:

- Honor async request cancelation instead of trying to re-issue it when
  it triggers.

- Apply the same re-wait approach for async poll that we do for regular
  poll, in case we get spurious wakeups.

Quick v2, since the one I sent out was not the tested one. Patch 2 was
missing a return after the io_put_req() on cancelation.

-- 
Jens Axboe
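[For background on the second fix, the spurious-wakeup case: a poll waitqueue callback is handed the event mask that triggered the wakeup, and io_uring's wake handler filters on it. A minimal illustrative sketch of that filter follows -- the function name is made up, and the real handler also queues task_work to process the request:]

	static int sketch_poll_wake(struct wait_queue_entry *wait, unsigned mode,
				    int sync, void *key)
	{
		struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
							 wait);
		__poll_t mask = key_to_poll(key);

		/* not an event we subscribed to: ignore this wakeup */
		if (mask && !(mask & poll->events))
			return 0;

		/*
		 * A wakeup with no poll key (mask == 0) passes the filter
		 * above, so the handler can run without the file actually
		 * being ready -- which is why the re-wait check in patch 1
		 * is needed.
		 */
		return 1;
	}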
* [PATCH 1/2] io_uring: check for need to re-wait in polled async handling
  2020-04-13 17:26 [PATCHSET v2 0/2] io_uring async poll fixes Jens Axboe
@ 2020-04-13 17:26 ` Jens Axboe

From: Jens Axboe @ 2020-04-13 17:26 UTC
To: io-uring; +Cc: Jens Axboe

We added this for just the regular poll requests in commit a6ba632d2c24
("io_uring: retry poll if we got woken with non-matching mask"); we
should do the same for the poll handler used for pollable async
requests. Move the re-wait check and arm into a helper, and call it
from io_async_task_func() as well.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0d1b5d5f1251..7b41f6231955 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4156,6 +4156,26 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
 	return 1;
 }
 
+static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+	__acquires(&req->ctx->completion_lock)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		struct poll_table_struct pt = { ._key = poll->events };
+
+		req->result = vfs_poll(req->file, &pt) & poll->events;
+	}
+
+	spin_lock_irq(&ctx->completion_lock);
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		add_wait_queue(poll->head, &poll->wait);
+		return true;
+	}
+
+	return false;
+}
+
 static void io_async_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
@@ -4164,14 +4184,16 @@ static void io_async_task_func(struct callback_head *cb)
 
 	trace_io_uring_task_run(req->ctx, req->opcode, req->user_data);
 
-	WARN_ON_ONCE(!list_empty(&req->apoll->poll.wait.entry));
-
-	if (hash_hashed(&req->hash_node)) {
-		spin_lock_irq(&ctx->completion_lock);
-		hash_del(&req->hash_node);
+	if (io_poll_rewait(req, &apoll->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
+		return;
 	}
 
+	if (hash_hashed(&req->hash_node))
+		hash_del(&req->hash_node);
+
+	spin_unlock_irq(&ctx->completion_lock);
+
 	/* restore ->work in case we need to retry again */
 	memcpy(&req->work, &apoll->work, sizeof(req->work));
 
@@ -4436,18 +4458,11 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_poll_iocb *poll = &req->poll;
 
-	if (!req->result && !READ_ONCE(poll->canceled)) {
-		struct poll_table_struct pt = { ._key = poll->events };
-
-		req->result = vfs_poll(req->file, &pt) & poll->events;
-	}
-
-	spin_lock_irq(&ctx->completion_lock);
-	if (!req->result && !READ_ONCE(poll->canceled)) {
-		add_wait_queue(poll->head, &poll->wait);
+	if (io_poll_rewait(req, poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
 		return;
 	}
+
 	hash_del(&req->hash_node);
 	io_poll_complete(req, req->result, 0);
 	req->flags |= REQ_F_COMP_LOCKED;
-- 
2.26.0
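[A detail worth noting in the io_poll_rewait() helper above: the poll_table is initialized with only ._key set, which leaves its _qproc callback NULL, and poll_wait() only queues a waiter when _qproc is non-NULL. That makes the vfs_poll() call a pure "what is ready right now?" query; the explicit add_wait_queue() afterwards is what actually re-arms the request. A trimmed extract of the helper with that spelled out:]

	/*
	 * ._key-only initialization zeroes _qproc, so vfs_poll() just
	 * reports the current readiness mask and does not queue us on
	 * the file's waitqueue.
	 */
	struct poll_table_struct pt = { ._key = poll->events };

	req->result = vfs_poll(req->file, &pt) & poll->events;

	spin_lock_irq(&ctx->completion_lock);
	if (!req->result && !READ_ONCE(poll->canceled)) {
		/* still not ready and not canceled: re-arm and wait again */
		add_wait_queue(poll->head, &poll->wait);
		return true;
	}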
* [PATCH 2/2] io_uring: io_async_task_func() should check and honor cancelation
  2020-04-13 17:26 [PATCHSET v2 0/2] io_uring async poll fixes Jens Axboe
@ 2020-04-13 17:26 ` Jens Axboe

From: Jens Axboe @ 2020-04-13 17:26 UTC
To: io-uring; +Cc: Jens Axboe

If the request has been marked as canceled, don't try to issue it.
Instead, just fill a canceled event and finish the request.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7b41f6231955..aac54772e12e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4181,6 +4181,7 @@ static void io_async_task_func(struct callback_head *cb)
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
 	struct async_poll *apoll = req->apoll;
 	struct io_ring_ctx *ctx = req->ctx;
+	bool canceled;
 
 	trace_io_uring_task_run(req->ctx, req->opcode, req->user_data);
 
@@ -4192,8 +4193,22 @@ static void io_async_task_func(struct callback_head *cb)
 	if (hash_hashed(&req->hash_node))
 		hash_del(&req->hash_node);
 
+	canceled = READ_ONCE(apoll->poll.canceled);
+	if (canceled) {
+		io_cqring_fill_event(req, -ECANCELED);
+		io_commit_cqring(ctx);
+	}
+
 	spin_unlock_irq(&ctx->completion_lock);
 
+	if (canceled) {
+		kfree(apoll);
+		io_cqring_ev_posted(ctx);
+		req_set_fail_links(req);
+		io_put_req(req);
+		return;
+	}
+
 	/* restore ->work in case we need to retry again */
 	memcpy(&req->work, &apoll->work, sizeof(req->work));
 
-- 
2.26.0
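[One ordering detail in the diff above: the completion event is filled and the CQ ring committed while ->completion_lock is held, but io_cqring_ev_posted(), which wakes anyone sleeping on the CQ ring, is called only after the lock is dropped, so woken waiters don't immediately contend on it. The cancelation path from the patch, extracted with comments:]

	canceled = READ_ONCE(apoll->poll.canceled);
	if (canceled) {
		/* post a canceled completion instead of issuing the request */
		io_cqring_fill_event(req, -ECANCELED);
		io_commit_cqring(ctx);
	}

	spin_unlock_irq(&ctx->completion_lock);

	if (canceled) {
		kfree(apoll);			/* no retry, async poll state can go */
		io_cqring_ev_posted(ctx);	/* wake CQ waiters, outside the lock */
		req_set_fail_links(req);	/* fail any linked requests */
		io_put_req(req);		/* drop the request reference */
		return;				/* the return that was missing in the untested v1 */
	}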
* [PATCHSET 0/2] io_uring async poll fixes
@ 2020-04-13 17:21 Jens Axboe
  2020-04-13 17:21 ` [PATCH 1/2] io_uring: check for need to re-wait in polled async handling

From: Jens Axboe @ 2020-04-13 17:21 UTC
To: io-uring

Just two minor fixes here that should go into 5.7:

- Honor async request cancelation instead of trying to re-issue it when
  it triggers.

- Apply the same re-wait approach for async poll that we do for regular
  poll, in case we get spurious wakeups.

-- 
Jens Axboe
* [PATCH 1/2] io_uring: check for need to re-wait in polled async handling
  2020-04-13 17:21 [PATCHSET 0/2] io_uring async poll fixes Jens Axboe
@ 2020-04-13 17:21 ` Jens Axboe

From: Jens Axboe @ 2020-04-13 17:21 UTC
To: io-uring; +Cc: Jens Axboe

We added this for just the regular poll requests in commit a6ba632d2c24
("io_uring: retry poll if we got woken with non-matching mask"); we
should do the same for the poll handler used for pollable async
requests. Move the re-wait check and arm into a helper, and call it
from io_async_task_func() as well.

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0d1b5d5f1251..7b41f6231955 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4156,6 +4156,26 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
 	return 1;
 }
 
+static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+	__acquires(&req->ctx->completion_lock)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		struct poll_table_struct pt = { ._key = poll->events };
+
+		req->result = vfs_poll(req->file, &pt) & poll->events;
+	}
+
+	spin_lock_irq(&ctx->completion_lock);
+	if (!req->result && !READ_ONCE(poll->canceled)) {
+		add_wait_queue(poll->head, &poll->wait);
+		return true;
+	}
+
+	return false;
+}
+
 static void io_async_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
@@ -4164,14 +4184,16 @@ static void io_async_task_func(struct callback_head *cb)
 
 	trace_io_uring_task_run(req->ctx, req->opcode, req->user_data);
 
-	WARN_ON_ONCE(!list_empty(&req->apoll->poll.wait.entry));
-
-	if (hash_hashed(&req->hash_node)) {
-		spin_lock_irq(&ctx->completion_lock);
-		hash_del(&req->hash_node);
+	if (io_poll_rewait(req, &apoll->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
+		return;
 	}
 
+	if (hash_hashed(&req->hash_node))
+		hash_del(&req->hash_node);
+
+	spin_unlock_irq(&ctx->completion_lock);
+
 	/* restore ->work in case we need to retry again */
 	memcpy(&req->work, &apoll->work, sizeof(req->work));
 
@@ -4436,18 +4458,11 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_poll_iocb *poll = &req->poll;
 
-	if (!req->result && !READ_ONCE(poll->canceled)) {
-		struct poll_table_struct pt = { ._key = poll->events };
-
-		req->result = vfs_poll(req->file, &pt) & poll->events;
-	}
-
-	spin_lock_irq(&ctx->completion_lock);
-	if (!req->result && !READ_ONCE(poll->canceled)) {
-		add_wait_queue(poll->head, &poll->wait);
+	if (io_poll_rewait(req, poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
 		return;
 	}
+
 	hash_del(&req->hash_node);
 	io_poll_complete(req, req->result, 0);
 	req->flags |= REQ_F_COMP_LOCKED;
-- 
2.26.0