public inbox for [email protected]
From: Stefan Roesch <[email protected]>
To: Kanchan Joshi <[email protected]>
Cc: [email protected], [email protected]
Subject: Re: [PATCH v2 06/12] io_uring: modify io_get_cqe for CQE32
Date: Fri, 22 Apr 2022 16:59:25 -0700	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <CA+1E3rKEr4ULc=065kRu_p1265vTE4x+0q+XNa49ie-YRXabdA@mail.gmail.com>



On 4/21/22 6:25 PM, Kanchan Joshi wrote:
> On Thu, Apr 21, 2022 at 3:54 PM Stefan Roesch <[email protected]> wrote:
>>
>> Modify accesses to the CQE array to take large CQEs into account. The
>> index needs to be shifted by one for large CQEs.
>>
>> Signed-off-by: Stefan Roesch <[email protected]>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>>  fs/io_uring.c | 9 +++++++--
>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index c93a9353c88d..bd352815b9e7 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -1909,8 +1909,12 @@ static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
>>  {
>>         struct io_rings *rings = ctx->rings;
>>         unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
>> +       unsigned int shift = 0;
>>         unsigned int free, queued, len;
>>
>> +       if (ctx->flags & IORING_SETUP_CQE32)
>> +               shift = 1;
>> +
>>         /* userspace may cheat modifying the tail, be safe and do min */
>>         queued = min(__io_cqring_events(ctx), ctx->cq_entries);
>>         free = ctx->cq_entries - queued;
>> @@ -1922,12 +1926,13 @@ static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
>>         ctx->cached_cq_tail++;
>>         ctx->cqe_cached = &rings->cqes[off];
>>         ctx->cqe_sentinel = ctx->cqe_cached + len;
>> -       return ctx->cqe_cached++;
>> +       ctx->cqe_cached++;
>> +       return &rings->cqes[off << shift];
>>  }
>>
>>  static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
>>  {
>> -       if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
>> +       if (likely(ctx->cqe_cached < ctx->cqe_sentinel && !(ctx->flags & IORING_SETUP_CQE32))) {
>>                 ctx->cached_cq_tail++;
>>                 return ctx->cqe_cached++;
>>         }
> 
> This excludes CQE-caching for 32b CQEs.
> How about something like below to have that enabled (adding
> io_get_cqe32 for the new ring) -
> 

What you describe below is exactly what I tried to avoid: keep the current
indexes and pointers as they are, and only calculate the correct offset into
the cqe array at the point where an element is accessed.

I'll add caching support in a slightly different way in v3.
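To make the offset-at-access idea concrete, here is a hypothetical user-space model of the indexing scheme (struct and function names are mine, not the kernel's): the ring is an array of 16-byte slots, a 32-byte CQE occupies two consecutive slots, and the logical tail is masked first and only left-shifted by one at access time.

```c
#include <assert.h>
#include <stdint.h>

/* A 16-byte slot; a CQE32 entry spans two of these. */
struct cqe16 { uint64_t user_data; int32_t res; uint32_t flags; };

struct ring {
	struct cqe16 entries[8 * 2];	/* room for 8 large CQEs */
	unsigned int cq_entries;	/* logical ring size (power of two) */
	unsigned int cached_cq_tail;	/* logical tail, never shifted */
	unsigned int cqe32;		/* 1 if ring was set up with CQE32 */
};

/* Mirrors the patched __io_get_cqe(): mask first, shift last. */
static struct cqe16 *ring_get_cqe(struct ring *r)
{
	unsigned int shift = r->cqe32 ? 1 : 0;
	unsigned int off = r->cached_cq_tail & (r->cq_entries - 1);

	r->cached_cq_tail++;
	return &r->entries[off << shift];
}
```

Because the shift happens after the mask, all bookkeeping (tail, wraparound) stays in units of logical CQEs regardless of CQE size; only the final array access changes.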

> +static noinline struct io_uring_cqe *__io_get_cqe32(struct io_ring_ctx *ctx)
> +{
> +       struct io_rings *rings = ctx->rings;
> +       unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
> +       unsigned int free, queued, len;
> +
> +       /* userspace may cheat modifying the tail, be safe and do min */
> +       queued = min(__io_cqring_events(ctx), ctx->cq_entries);
> +       free = ctx->cq_entries - queued;
> +       /* we need a contiguous range, limit based on the current array offset */
> +       len = min(free, ctx->cq_entries - off);
> +       if (!len)
> +               return NULL;
> +
> +       ctx->cached_cq_tail++;
> +       /* double increment for 32 CQEs */
> +       ctx->cqe_cached = &rings->cqes[off << 1];
> +       ctx->cqe_sentinel = ctx->cqe_cached + (len << 1);
> +       return ctx->cqe_cached;
> +}
> +
> +static inline struct io_uring_cqe *io_get_cqe32(struct io_ring_ctx *ctx)
> +{
> +       struct io_uring_cqe *cqe32;
> +       if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
> +               ctx->cached_cq_tail++;
> +               cqe32 = ctx->cqe_cached;
> +       } else
> +               cqe32 = __io_get_cqe32(ctx);
> +       /* double increment for 32b CQE */
> +       ctx->cqe_cached += 2;
> +       return cqe32;
> +}
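For comparison, the cached-pointer variant can be modeled in user space roughly as follows (names are illustrative; the slow path that refills the cache is omitted). Since the cache stores raw 16-byte slot pointers, each large CQE advances the cached pointer by two slots, and the pointer is only advanced when an entry is actually handed out:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A 16-byte slot; a CQE32 entry spans two of these. */
struct cqe16 { uint64_t user_data; int32_t res; uint32_t flags; };

struct cqe_cache {
	struct cqe16 *cqe_cached;	/* next free slot */
	struct cqe16 *cqe_sentinel;	/* end of the cached range */
	unsigned int cached_cq_tail;
};

static struct cqe16 *get_cqe32_cached(struct cqe_cache *c)
{
	struct cqe16 *cqe = NULL;

	if (c->cqe_cached < c->cqe_sentinel) {
		c->cached_cq_tail++;
		cqe = c->cqe_cached;
		c->cqe_cached += 2;	/* double increment for 32-byte CQEs */
	}
	/* slow path (refilling the cache from the ring) omitted */
	return cqe;
}
```

Guarding the double increment behind the sentinel check keeps the cached pointer and the tail consistent when the cache is exhausted.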


Thread overview: 29+ messages
2022-04-20 19:14 [PATCH v2 00/12] add large CQE support for io-uring Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 01/12] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 02/12] io_uring: wire up inline completion path for CQE32 Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 03/12] io_uring: change ring size calculation " Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 04/12] io_uring: add CQE32 setup processing Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 05/12] io_uring: add CQE32 completion processing Stefan Roesch
2022-04-22  1:34   ` Kanchan Joshi
2022-04-22 21:39     ` Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 06/12] io_uring: modify io_get_cqe for CQE32 Stefan Roesch
2022-04-22  1:25   ` Kanchan Joshi
2022-04-22 23:59     ` Stefan Roesch [this message]
2022-04-20 19:14 ` [PATCH v2 07/12] io_uring: flush completions " Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 08/12] io_uring: overflow processing " Stefan Roesch
2022-04-22  2:15   ` Kanchan Joshi
2022-04-22 21:27     ` Stefan Roesch
2022-04-25 10:31       ` Kanchan Joshi
2022-04-20 19:14 ` [PATCH v2 09/12] io_uring: add tracing for additional CQE32 fields Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 10/12] io_uring: support CQE32 in /proc info Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 11/12] io_uring: enable CQE32 Stefan Roesch
2022-04-20 19:14 ` [PATCH v2 12/12] io_uring: support CQE32 for nop operation Stefan Roesch
2022-04-20 22:51 ` [PATCH v2 00/12] add large CQE support for io-uring Jens Axboe
2022-04-21 18:42   ` Pavel Begunkov
2022-04-21 18:49     ` Stefan Roesch
2022-04-21 18:54       ` Jens Axboe
2022-04-21 18:57       ` Pavel Begunkov
2022-04-21 18:59         ` Jens Axboe
2022-04-22  3:09           ` Kanchan Joshi
2022-04-22  5:06             ` Kanchan Joshi
2022-04-22 21:03             ` Stefan Roesch
