From: Pavel Begunkov <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Subject: Re: [PATCHSET v2 0/7] Improve MSG_RING DEFER_TASKRUN performance
Date: Wed, 5 Jun 2024 16:50:04 +0100
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

On 6/4/24 19:57, Jens Axboe wrote:
> On 6/3/24 7:53 AM, Pavel Begunkov wrote:
>> On 5/30/24 16:23, Jens Axboe wrote:
>>> Hi,
>>>
>>> For v1 and replies to that and tons of perf measurements, go here:
>>
>> I'd really prefer the task_work version rather than carving
>> yet another path specific to msg_ring. Perf might sound better,
>> but it duplicates wake-up paths, isn't integrated with batch
>> waiting, it's not clear how it affects different workloads with
>> target locking, and it would behave oddly in terms of ordering.
> 
> The duplication is really minor, basically non-existent imho. It's a
> wakeup call, it's literally 2 lines of code. I do agree on the batching,

Well, v3 tries to add msg_ring/nr_overflow handling to the local
task work, which is what I mean by duplicating paths, and we'll
keep gutting the hot path to support msg_ring this way.
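
To make the "duplicating paths" point concrete, it has to look roughly
like the below. This is only a sketch: io_overflow_cqe, cq_overflow_list
and check_cq are the existing bits, but the helper itself and the exact
flagging/wake sequence are made up here, not the actual v3 code.

/*
 * Sketch: the sender stashes a pre-filled overflow CQE on the target
 * ring, then has to reproduce the wake-up that local task work already
 * does for everything else.
 */
static void msg_add_remote_overflow(struct io_ring_ctx *target,
                                    struct io_overflow_cqe *ocqe)
{
        spin_lock(&target->completion_lock);
        list_add_tail(&ocqe->list, &target->cq_overflow_list);
        set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &target->check_cq);
        spin_unlock(&target->completion_lock);

        /* duplicated wake-up path: tell the target's waiters about it */
        if (waitqueue_active(&target->cq_wait))
                wake_up(&target->cq_wait);
}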

Does it work with eventfd? I can't find any handling, so next
you'd be adding:

io_commit_cqring_flush(ctx);

Draining accounting around cq_extra likely needs patching as well.
Yes, it's all fixable, but it'll be a pile of fun, and without many
users it'll take time to discover it all.
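
For reference, io_commit_cqring_flush() is the piece that ends up
signalling a registered eventfd; simplified from memory (the real
check covers a couple more slow-path bits), it is roughly:

static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
{
        /* slow-path work after posting CQEs: eventfd signalling etc. */
        if (unlikely(ctx->drain_active || ctx->has_evfd))
                __io_commit_cqring_flush(ctx);
}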

> though I don't think that's really a big concern as most usage I'd
> expect from this would be sending single messages. You're not batch
> waiting on those. But there could obviously be cases where you have a
> lot of mixed traffic, and for those it would make sense to have the
> batch wakeups.
> 
> What I do like with this version is that we end up with just one method
> for delivering the CQE, rather than needing to split it into two. And it
> gets rid of the uring_lock double locking for non-SINGLE_ISSUER. I know

You can't get rid of target locking for fd passing; the file tables
are synchronised by the lock. Beyond that it only matters for IOPOLL,
because with normal rings it can, and IIRC does, take the
completion_lock for CQE posting. I don't see a problem here, unless
you care that much about IOPOLL?
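
To spell it out, installing a fixed fd into the target can't look much
different from the sketch below; __io_fixed_fd_install() is the real
helper and expects uring_lock to be held, the wrapper around it is just
for illustration.

/* Sketch: the target's uring_lock protects its fixed file table, so
 * fd passing has to take it no matter how the CQE itself is posted. */
static int msg_install_remote_fd(struct io_ring_ctx *target,
                                 struct file *file, unsigned int slot)
{
        int ret;

        mutex_lock(&target->uring_lock);
        ret = __io_fixed_fd_install(target, file, slot);
        mutex_unlock(&target->uring_lock);
        return ret;
}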

> we always try and push people towards DEFER_TASKRUN|SINGLE_ISSUER, but
> that doesn't mean we should just ignore the cases where that isn't true.
> Unifying that code and making it faster all around is a worthy goal in
> and of itself. The code is CERTAINLY a lot cleaner after the change than
> all the IOPOLL etc.
> 
>> If the swing back is that expensive, another option is to
>> allocate a new request and let the target ring deallocate
>> it once the message is delivered (similar to that overflow
>> entry).
> 
> I can give it a shot, and then run some testing. If we get close enough
> with the latencies and performance, then I'd certainly be more amenable
> to going either route.
> 
> We'd definitely need to pass in the required memory and avoid the return

Right, same as with CQEs.

> round trip, as that basically doubles the cost (and latency) of sending

The sender's latency, which IMHO is not important at all.

> a message. The downside of what you suggest here is that while that
> should integrate nicely with existing local task_work, it'll also mean
> that we'll need hot path checks for treating that request type as a
> special thing. Things like req->ctx being not local, freeing the request
> rather than recycling, etc. And that'll need to happen in multiple
> spots.

I'm not suggesting feeding that request into flush_completions()
and the common completion infra; it can be killed right in the tw
callback.
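
Something along these lines; io_post_aux_cqe() is real, the carrier
struct and the callback name are made up for illustration, and it
could just as well be an io_kiocb as suggested above.

/*
 * Sketch: the sender allocates a small carrier, the target's task_work
 * posts the CQE and frees the carrier right in the callback, without
 * touching flush_completions() or the request recycling paths.
 */
struct io_msg_carrier {
        struct callback_head    cb;
        struct io_ring_ctx      *target;
        u64                     user_data;
        s32                     res;
};

static void io_msg_carrier_tw(struct callback_head *cb)
{
        struct io_msg_carrier *msg =
                container_of(cb, struct io_msg_carrier, cb);

        io_post_aux_cqe(msg->target, msg->user_data, msg->res, 0);
        kfree(msg);             /* killed right here, no recycling */
}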

-- 
Pavel Begunkov

Thread overview: 15+ messages
2024-05-30 15:23 [PATCHSET v2 0/7] Improve MSG_RING DEFER_TASKRUN performance Jens Axboe
2024-05-30 15:23 ` [PATCH 1/7] io_uring/msg_ring: split fd installing into a helper Jens Axboe
2024-05-30 15:23 ` [PATCH 2/7] io_uring/msg_ring: tighten requirement for remote posting Jens Axboe
2024-05-30 15:23 ` [PATCH 3/7] io_uring/msg_ring: avoid double indirection task_work for data messages Jens Axboe
2024-05-30 15:23 ` [PATCH 4/7] io_uring/msg_ring: avoid double indirection task_work for fd passing Jens Axboe
2024-05-30 15:23 ` [PATCH 5/7] io_uring/msg_ring: add an alloc cache for CQE entries Jens Axboe
2024-05-30 15:23 ` [PATCH 6/7] io_uring/msg_ring: remove callback_head from struct io_msg Jens Axboe
2024-05-30 15:23 ` [PATCH 7/7] io_uring/msg_ring: remove non-remote message passing Jens Axboe
2024-06-03 13:53 ` [PATCHSET v2 0/7] Improve MSG_RING DEFER_TASKRUN performance Pavel Begunkov
2024-06-04 18:57   ` Jens Axboe
2024-06-04 19:55     ` Jens Axboe
2024-06-05 15:50     ` Pavel Begunkov [this message]
2024-06-05 16:41       ` Jens Axboe
2024-06-05 19:20         ` Pavel Begunkov
2024-06-05 19:36           ` Jens Axboe
