From: Jann Horn <[email protected]>
To: Jens Axboe <[email protected]>
Cc: io-uring <[email protected]>,
Will Deacon <[email protected]>, Kees Cook <[email protected]>,
Kernel Hardening <[email protected]>
Subject: Re: [PATCH 07/11] io_uring: use atomic_t for refcounts
Date: Tue, 10 Dec 2019 23:04:58 +0100 [thread overview]
Message-ID: <CAG48ez3yh7zRhMyM+VhH1g9Gp81_3FMjwAyj3TB6HQYETpxHmA@mail.gmail.com> (raw)
In-Reply-To: <[email protected]>
On Tue, Dec 10, 2019 at 4:57 PM Jens Axboe <[email protected]> wrote:
> Recently had a regression that turned out to be because
> CONFIG_REFCOUNT_FULL was set.
I assume "regression" here refers to a performance regression? Do you
have more concrete numbers on this? Is one of the refcounting calls
particularly problematic compared to the others?
I really don't like it when raw atomic_t is used for refcounting
purposes - not only because that gets rid of the overflow checks, but
also because it is less clear semantically.
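To make that concrete, here is a minimal sketch (not from this patch; the struct and helper names are made up) of what the conversion changes semantically:

#include <linux/atomic.h>
#include <linux/refcount.h>

struct obj_checked   { refcount_t refs; };
struct obj_unchecked { atomic_t   refs; };

static void obj_checked_get(struct obj_checked *o)
{
        /*
         * With CONFIG_REFCOUNT_FULL (or an arch fast path), refcount_inc()
         * saturates instead of overflowing and WARNs when incrementing a
         * count that is already 0, so an over-increment or an increment
         * after the last put becomes a loud warning plus a leak.
         */
        refcount_inc(&o->refs);
}

static void obj_unchecked_get(struct obj_unchecked *o)
{
        /*
         * atomic_inc() has no such checks: it wraps past INT_MAX and
         * increments from 0, so the same bugs silently become a premature
         * free or a use-after-free.
         */
        atomic_inc(&o->refs);
}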
> Our ref count usage is really simple,
In my opinion, for a refcount to qualify as "really simple", it must
be possible to annotate each relevant struct member and local variable
with the (fixed) bias it carries when alive and non-NULL. This
refcount is more complicated than that.
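As a made-up illustration (none of this is io_uring code), "annotatable" would look something like:

#include <linux/refcount.h>

struct blob {
        refcount_t refs;
};

/*
 * Every place that can hold the object documents the fixed number of
 * references it accounts for, and each get/put pairs with exactly one
 * of these annotations.
 */
struct blob_user {
        struct blob *blob;      /* holds 1 reference while non-NULL */
};

struct blob_cache {
        struct blob *cached;    /* holds 1 reference while non-NULL */
};

Here, by contrast, req->refs starts at 2 ("one is dropped after submission, the other at completion"), __io_double_put_req() drops two at once, and the *_inc_not_zero() callers take references opportunistically, which doesn't map onto that kind of annotation.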
> so let's just use atomic_t and get rid of the dependency on the full
> reference count checking being enabled or disabled.
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> fs/io_uring.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 9a596b819334..05419a152b32 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -360,7 +360,7 @@ struct io_kiocb {
> };
> struct list_head link_list;
> unsigned int flags;
> - refcount_t refs;
> + atomic_t refs;
> #define REQ_F_NOWAIT 1 /* must not punt to workers */
> #define REQ_F_IOPOLL_COMPLETED 2 /* polled IO has completed */
> #define REQ_F_FIXED_FILE 4 /* ctx owns file */
> @@ -770,7 +770,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
> WRITE_ONCE(ctx->rings->cq_overflow,
> atomic_inc_return(&ctx->cached_cq_overflow));
> } else {
> - refcount_inc(&req->refs);
> + atomic_inc(&req->refs);
> req->result = res;
> list_add_tail(&req->list, &ctx->cq_overflow_list);
> }
> @@ -852,7 +852,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
> req->ctx = ctx;
> req->flags = 0;
> /* one is dropped after submission, the other at completion */
> - refcount_set(&req->refs, 2);
> + atomic_set(&req->refs, 2);
> req->result = 0;
> INIT_IO_WORK(&req->work, io_wq_submit_work);
> return req;
> @@ -1035,13 +1035,13 @@ static void io_put_req_find_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
> {
> io_req_find_next(req, nxtptr);
>
> - if (refcount_dec_and_test(&req->refs))
> + if (atomic_dec_and_test(&req->refs))
> __io_free_req(req);
> }
>
> static void io_put_req(struct io_kiocb *req)
> {
> - if (refcount_dec_and_test(&req->refs))
> + if (atomic_dec_and_test(&req->refs))
> io_free_req(req);
> }
>
> @@ -1052,14 +1052,14 @@ static void io_put_req(struct io_kiocb *req)
> static void __io_double_put_req(struct io_kiocb *req)
> {
> /* drop both submit and complete references */
> - if (refcount_sub_and_test(2, &req->refs))
> + if (atomic_sub_and_test(2, &req->refs))
> __io_free_req(req);
> }
>
> static void io_double_put_req(struct io_kiocb *req)
> {
> /* drop both submit and complete references */
> - if (refcount_sub_and_test(2, &req->refs))
> + if (atomic_sub_and_test(2, &req->refs))
> io_free_req(req);
> }
>
> @@ -1108,7 +1108,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
> io_cqring_fill_event(req, req->result);
> (*nr_events)++;
>
> - if (refcount_dec_and_test(&req->refs)) {
> + if (atomic_dec_and_test(&req->refs)) {
> /* If we're not using fixed files, we have to pair the
> * completion part with the file put. Use regular
> * completions for those, only batch free for fixed
> @@ -3169,7 +3169,7 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
> if (!list_empty(&req->link_list)) {
> prev = list_entry(req->link_list.prev, struct io_kiocb,
> link_list);
> - if (refcount_inc_not_zero(&prev->refs)) {
> + if (atomic_inc_not_zero(&prev->refs)) {
> list_del_init(&req->link_list);
> prev->flags &= ~REQ_F_LINK_TIMEOUT;
> } else
> @@ -4237,7 +4237,7 @@ static void io_get_work(struct io_wq_work *work)
> {
> struct io_kiocb *req = container_of(work, struct io_kiocb, work);
>
> - refcount_inc(&req->refs);
> + atomic_inc(&req->refs);
> }
>
> static int io_sq_offload_start(struct io_ring_ctx *ctx,
> @@ -4722,7 +4722,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
> if (req->work.files != files)
> continue;
> /* req is being completed, ignore */
> - if (!refcount_inc_not_zero(&req->refs))
> + if (!atomic_inc_not_zero(&req->refs))
> continue;
> cancel_req = req;
> break;
> --
> 2.24.0
>
Thread overview: 25+ messages
2019-12-10 15:57 [PATCHSET 0/11] io_uring improvements/fixes for 5.5-rc Jens Axboe
2019-12-10 15:57 ` [PATCH 01/11] io_uring: allow unbreakable links Jens Axboe
2019-12-10 21:10 ` Pavel Begunkov
2019-12-10 21:12 ` Jens Axboe
2019-12-10 21:28 ` Pavel Begunkov
2019-12-10 22:17 ` Jens Axboe
2019-12-10 15:57 ` [PATCH 02/11] io-wq: remove worker->wait waitqueue Jens Axboe
2019-12-10 15:57 ` [PATCH 03/11] io-wq: briefly spin for new work after finishing work Jens Axboe
2019-12-10 15:57 ` [PATCH 04/11] io_uring: sqthread should grab ctx->uring_lock for submissions Jens Axboe
2019-12-10 15:57 ` [PATCH 05/11] io_uring: deferred send/recvmsg should assign iov Jens Axboe
2019-12-10 15:57 ` [PATCH 06/11] io_uring: don't dynamically allocate poll data Jens Axboe
2019-12-10 15:57 ` [PATCH 07/11] io_uring: use atomic_t for refcounts Jens Axboe
2019-12-10 22:04 ` Jann Horn [this message]
2019-12-10 22:21 ` Jens Axboe
2019-12-10 22:46 ` Kees Cook
2019-12-10 22:55 ` Jens Axboe
2019-12-11 10:20 ` Will Deacon
2019-12-11 16:56 ` Kees Cook
2019-12-11 17:00 ` Jens Axboe
2019-12-10 15:57 ` [PATCH 08/11] io_uring: run next sqe inline if possible Jens Axboe
2019-12-10 15:57 ` [PATCH 09/11] io_uring: only hash regular files for async work execution Jens Axboe
2019-12-10 15:57 ` [PATCH 10/11] net: make socket read/write_iter() honor IOCB_NOWAIT Jens Axboe
2019-12-10 19:37 ` David Miller
2019-12-10 20:43 ` Jens Axboe
2019-12-10 15:57 ` [PATCH 11/11] io_uring: add sockets to list of files that support non-blocking issue Jens Axboe