From: Hao Xu <[email protected]>
To: Xiaoguang Wang <[email protected]>,
[email protected]
Cc: [email protected], [email protected],
Joseph Qi <[email protected]>
Subject: Re: [RFC 1/3] io_uring: reduce frequent add_wait_queue() overhead for multi-shot poll request
Date: Thu, 23 Sep 2021 01:52:31 +0800 [thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>
On 2021/9/22 8:34 PM, Xiaoguang Wang wrote:
> Running echo_server to evaluate io_uring's multi-shot poll performance, perf
> shows that add_wait_queue() has noticeable overhead. Introduce a new state
> 'active' in io_poll_iocb to indicate whether io_poll_wake() should queue
> a task_work. This new state is initially true, is set to false when a
> task_work is queued, and is set back to true once a poll cqe has been
> committed. One concern is that this method may lose wake-up events, but
> that seems to be OK:
>
>     io_poll_wake                 io_poll_task_func
> t1                               |
> t2                               | WRITE_ONCE(req->poll.active, true);
> t3                               |
> t4                               | io_commit_cqring(ctx);
> t5                               |
> t6                               |
> If a wake-up event happens before or at t4, it's OK: the user app will
> always see a cqe. If a wake-up event happens after t4, then IIUC
> io_poll_wake() will see the new req->poll.active value via READ_ONCE().
>
> With this patch, a pure echo_server (1000 connections, 16-byte packets)
> shows about a 1.6% improvement in requests per second.
>
> Signed-off-by: Xiaoguang Wang <[email protected]>
> ---
> fs/io_uring.c | 20 ++++++++++++++++----
> 1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 1294b1ef4acb..ca4464a75c7b 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -487,6 +487,7 @@ struct io_poll_iocb {
> __poll_t events;
> bool done;
> bool canceled;
> + bool active;
> struct wait_queue_entry wait;
> };
>
> @@ -5025,8 +5026,6 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
>
> trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
>
> - list_del_init(&poll->wait.entry);
> -
> req->result = mask;
> req->io_task_work.func = func;
>
> @@ -5057,7 +5056,10 @@ static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
>
> spin_lock(&ctx->completion_lock);
> if (!req->result && !READ_ONCE(poll->canceled)) {
> - add_wait_queue(poll->head, &poll->wait);
> + if (req->opcode == IORING_OP_POLL_ADD)
> + WRITE_ONCE(req->poll.active, true);
> + else
> + add_wait_queue(poll->head, &poll->wait);
> return true;
> }
>
> @@ -5133,6 +5135,9 @@ static inline bool io_poll_complete(struct io_kiocb *req, __poll_t mask)
> return done;
> }
>
> +static bool __io_poll_remove_one(struct io_kiocb *req,
> + struct io_poll_iocb *poll, bool do_cancel);
> +
> static void io_poll_task_func(struct io_kiocb *req, bool *locked)
> {
> struct io_ring_ctx *ctx = req->ctx;
> @@ -5146,10 +5151,11 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
> done = __io_poll_complete(req, req->result);
> if (done) {
> io_poll_remove_double(req);
> + __io_poll_remove_one(req, io_poll_get_single(req), true);
This may cause race problems; for example, there may be multiple cancelled
cqes considering io_poll_add() running in parallel. The hash_del() is
redundant as well. __io_poll_remove_one() may not be the best choice here,
and since we now don't delete the wait entry in between, the code in
_arm_poll should probably be tweaked as well (not very sure, I will dive
into it tomorrow).
Regards,
Hao
> hash_del(&req->hash_node);
> } else {
> req->result = 0;
> - add_wait_queue(req->poll.head, &req->poll.wait);
> + WRITE_ONCE(req->poll.active, true);
> }
> io_commit_cqring(ctx);
> spin_unlock(&ctx->completion_lock);
> @@ -5204,6 +5210,7 @@ static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
> poll->head = NULL;
> poll->done = false;
> poll->canceled = false;
> + poll->active = true;
> #define IO_POLL_UNMASK (EPOLLERR|EPOLLHUP|EPOLLNVAL|EPOLLRDHUP)
> /* mask in events that we always want/need */
> poll->events = events | IO_POLL_UNMASK;
> @@ -5301,6 +5308,7 @@ static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
> trace_io_uring_poll_wake(req->ctx, req->opcode, req->user_data,
> key_to_poll(key));
>
> + list_del_init(&poll->wait.entry);
> return __io_async_wake(req, poll, key_to_poll(key), io_async_task_func);
> }
>
> @@ -5569,6 +5577,10 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
> struct io_kiocb *req = wait->private;
> struct io_poll_iocb *poll = &req->poll;
>
> + if (!READ_ONCE(poll->active))
> + return 0;
> +
> + WRITE_ONCE(poll->active, false);
> return __io_async_wake(req, poll, key_to_poll(key), io_poll_task_func);
> }
>
>
Thread overview: 11+ messages
2021-09-22 12:34 [RFC 0/3] improvements for poll requests Xiaoguang Wang
2021-09-22 12:34 ` [RFC 1/3] io_uring: reduce frequent add_wait_queue() overhead for multi-shot poll request Xiaoguang Wang
2021-09-22 17:52 ` Hao Xu [this message]
2021-09-22 12:34 ` [RFC 2/3] io_uring: don't get completion_lock in io_poll_rewait() Xiaoguang Wang
2021-09-22 12:34 ` [RFC 3/3] io_uring: try to batch poll request completion Xiaoguang Wang
2021-09-22 16:24 ` Pavel Begunkov
2021-09-24 4:28 ` Xiaoguang Wang
2021-09-22 17:00 ` Hao Xu
2021-09-22 17:01 ` Hao Xu
2021-09-22 17:09 ` Hao Xu
2021-09-22 13:00 ` [RFC 0/3] improvements for poll requests Jens Axboe