public inbox for [email protected]
From: dormando <[email protected]>
To: Josef <[email protected]>
Cc: io-uring <[email protected]>
Subject: Re: User questions: client code and SQE/CQE starvation
Date: Fri, 14 Jan 2022 13:25:13 -0800 (PST)	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <CAAss7+qkBUzADaG+B6WTHz5hdZbbGvLFkD56sRhUzni7Js7amA@mail.gmail.com>



On Fri, 14 Jan 2022, Josef wrote:

> sorry, I accidentally pressed send...
>
> running out of SQEs should not be a problem: when
> io_uring_get_sqe(https://github.com/axboe/liburing/blob/master/src/queue.c#L409)
> returns NULL, you can run io_uring_submit.
> in netty we do that automatically when it's full:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringSubmissionQueue.java#L117

Thanks! Unless I'm completely misreading the liburing code,
io_uring_submit() can return -EBUSY and fail to submit the SQEs if there
is currently a backlog of CQEs beyond the limit (i.e., with
IORING_FEAT_NODROP). Which would mean you can't reliably submit when
get_sqe() returns NULL? I hope I have this wrong since it would be much
simpler otherwise :)
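Roughly the workaround I'm picturing, as a sketch only (liburing assumed;
process_cqe() is a made-up stand-in for whatever the application does with
a completion, and I haven't tested this against a real ring):

```c
#include <liburing.h>

static void process_cqe(struct io_uring_cqe *cqe);  /* app-defined placeholder */

/* A submit wrapper that survives -EBUSY (CQ backlog under
 * IORING_FEAT_NODROP): reap completions to make room, then retry. */
static int submit_or_drain(struct io_uring *ring)
{
    for (;;) {
        int ret = io_uring_submit(ring);
        if (ret != -EBUSY)
            return ret;  /* number of SQEs submitted, or a hard error */

        /* The CQ ring is backed up: drain it, then retry the submit. */
        struct io_uring_cqe *cqe;
        unsigned head, count = 0;
        io_uring_for_each_cqe(ring, head, cqe) {
            process_cqe(cqe);
            count++;
        }
        io_uring_cq_advance(ring, count);
    }
}
```

The catch, of course, is that process_cqe() may itself want fresh SQEs,
which is exactly the starvation loop I'm worried about.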

> In theory you could run out of CQEs; netty's io_uring approach is a
> little bit different:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringCompletionQueue.java#L86
> (similar to io_uring_for_each_cqe) to make sure the kernel sees that, and
> the process function is called here:
> https://github.com/netty/netty-incubator-transport-io_uring/blob/main/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/IOUringEventLoop.java#L203

Thanks. I'll study these a bit more.
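For the SQE side, the only fully safe pattern I've come up with so far is
to never call io_uring_get_sqe() directly from callbacks, and instead park
requests on an overflow list when the SQ ring is full. A sketch (liburing
assumed; struct pending, queue_req, and retry_pending are names I invented
for illustration):

```c
#include <liburing.h>
#include <stddef.h>

/* Callbacks queue work through queue_req() instead of calling
 * io_uring_get_sqe() directly.  When the SQ ring is full, the request
 * is parked on a singly-linked list and retried after the next submit. */
struct pending {
    struct pending *next;
    void (*prep)(struct io_uring_sqe *sqe, void *udata);  /* fills in the sqe */
    void *udata;
};

static struct pending *pending_head;  /* requests waiting for a free SQE */

static void queue_req(struct io_uring *ring, struct pending *req)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    if (sqe) {
        req->prep(sqe, req->udata);
    } else {
        /* SQ full: park it and retry later. */
        req->next = pending_head;
        pending_head = req;
    }
}

static void retry_pending(struct io_uring *ring)
{
    while (pending_head) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        if (!sqe)
            break;  /* still full; try again on the next loop iteration */
        struct pending *req = pending_head;
        pending_head = req->next;
        req->prep(sqe, req->udata);
    }
}
```

retry_pending() would run at the top of the event loop, right after
io_uring_submit_and_wait() and before processing CQEs.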

>
>
> > On Wed, 12 Jan 2022 at 22:17, dormando <[email protected]> wrote:
> > >
> > > Hey,
> > >
> > > Been integrating io_uring in my stack which has been going well-ish.
> > > Wondering if you folks have seen implementations of client libraries that
> > > feel clean and user friendly?
> > >
> > > I.e.: with poll/select/epoll/kqueue, most client libraries (like
> > > libcurl) implement functions like "client_send_data(ctx, etc)", which
> > > return -WANT_READ/-WANT_WRITE/etc. and an fd if they need more data to
> > > move forward. With the syscalls themselves externalized in io_uring I'm
> > > struggling to come up with abstractions I like and haven't found much
> > > public on a googlin'. Do any public ones exist yet?
> > >
> > > On implementing networked servers, it feels natural to do a core loop
> > > like:
> > >
> > >       while (1) {
> > >           io_uring_submit_and_wait(&t->ring, 1);
> > >
> > >           struct io_uring_cqe *cqe;
> > >           uint32_t head = 0;
> > >           uint32_t count = 0;
> > >
> > >           io_uring_for_each_cqe(&t->ring, head, cqe) {
> > >
> > >               event *pe = io_uring_cqe_get_data(cqe);
> > >               pe->callback(pe->udata, cqe);
> > >
> > >               count++;
> > >           }
> > >           io_uring_cq_advance(&t->ring, count);
> > >       }
> > >
> > > ... but A) you can run out of SQEs if they're generated from within
> > > callbacks (retries, fetching further data, writes after reads, etc.),
> > > and B) you can run out of CQEs with IORING_FEAT_NODROP, after which
> > > submit fails and you can no longer free up SQEs.
> > >
> > > So this loop doesn't work under pressure :)
> > >
> > > I see that qemu's implementation walks an object queue, which calls
> > > io_uring_submit() if SQEs are exhausted. I don't recall it trying to do
> > > anything if submit returns -EBUSY because of CQE exhaustion? I've not
> > > found other merged code implementing non-toy network servers, and most
> > > examples are rewrites of CLI tooling, which are much more constrained
> > > problems. Have I missed anything?
> > >
> > > I can make this work, but a lot of solutions involve double-walking
> > > lists (fetch all CQEs into an array, advance them, then process), or
> > > not being able to take advantage of any of the batching APIs. Hoping
> > > the community's got some better examples to untwist my brain a bit :)
> > >
> > > For now I have things working but want to do a cleanup pass before making
> > > my clients/server bits public facing.
> > >
> > > Thanks!
> > > -Dormando
> >
> >
> >
> > --
> > Josef Grieb
>
> --
> Josef Grieb
>

  reply	other threads:[~2022-01-14 21:25 UTC|newest]

Thread overview: 4+ messages
2022-01-11 20:39 User questions: client code and SQE/CQE starvation dormando
     [not found] ` <CAAss7+q_qjYBbiN+RaGrd3ngOPPGRwJiQU+Gkq1YPzfy7X8wqg@mail.gmail.com>
2022-01-14  9:19   ` Josef
2022-01-14 21:25     ` dormando [this message]
2022-01-15 23:32 ` Noah Goldstein
