public inbox for [email protected]
* Possible unnecessary IORING_OP_READs executed in Async
@ 2021-06-07 20:26 Olivier Langlois
  2021-06-09 22:01 ` Olivier Langlois
  0 siblings, 1 reply; 3+ messages in thread
From: Olivier Langlois @ 2021-06-07 20:26 UTC (permalink / raw)
  To: io-uring

Hi,

I was trying to understand why I was ending up with io worker threads
when io_uring fast polling should have been enough to manage my read
operations.

I have found 2 possible scenarios:

1. Concurrent read requests on the same socket fd.

I have documented this scenario here:
https://github.com/axboe/liburing/issues/351

In a nutshell, the idea is that if 2 read operations on the same fd
are queued with io_uring fast poll, then on the next
io_uring_cqring_wait() call, when events become available on the fd,
the first serviced request grabs all the available data. This pushes
the second request into the io-wq because, when it is serviced, the
read returns EAGAIN and req->flags has REQ_F_POLLED set.
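This race can be reproduced in plain userspace, outside the kernel.
The sketch below is my own illustration (the socketpair setup and the
function name are mine, not anything from io_uring): once the first
read drains a non-blocking socket, a second read on the same fd fails
with EAGAIN, which is exactly the condition that would push the second
queued IORING_OP_READ into the io-wq.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 when the second read fails with EAGAIN after the first
 * read drained the socket, mirroring the fate of the second queued
 * IORING_OP_READ. */
int second_read_gets_eagain(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    /* the reading end is non-blocking, like a fast-poll read */
    fcntl(sv[0], F_SETFL, O_NONBLOCK);

    if (write(sv[1], "hello", 5) != 5)
        return -1;

    /* first serviced request: grabs all the available data */
    ssize_t first = read(sv[0], buf, sizeof(buf));
    /* second request: nothing left, the read returns EAGAIN */
    ssize_t second = read(sv[0], buf, sizeof(buf));
    int ok = (first == 5 && second == -1 && errno == EAGAIN);

    close(sv[0]);
    close(sv[1]);
    return ok;
}
```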

I was supposed to investigate my application to find out why it does
that, but I have put that investigation on hold to fix the core dump
generation problem that I was experiencing with io_uring. I did solve
that mystery, BTW.

io_uring interrupts core dump generation by setting TIF_NOTIFY_SIGNAL
through calling task_work_add().
(I sent out a patch last week that seems to have fallen into
/dev/null. I need to resend it...)

Now that I am back to my io worker thread creation concern, I am not
able to recreate scenario #1, but I have found a second way that io
workers can be spawned:

2. Race between io_issue_sqe() and io_arm_poll_handler().

In __io_queue_sqe():
a) io_issue_sqe() returns EAGAIN
b) between the io_issue_sqe() call and the vfs_poll() call done inside
io_arm_poll_handler(), data becomes available
c) io_arm_poll_handler() returns false because vfs_poll() returned a
non-empty mask.

I am throwing this idea out to the group:
Would it be a good idea to detect that situation and call
io_issue_sqe() again in that case instead of punting the request to
the io-wq?

On busy TCP sockets, this scenario seems to happen very often (i.e. a
few times every second).
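A userspace analog of steps a) to c) above (the helper name and the
socketpair setup are my own illustration, not kernel code) suggests
why retrying could work: by the time poll() reports a non-empty mask,
a plain retry of the read already succeeds.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 when, after an EAGAIN, data arrives before poll() is
 * consulted, poll() reports a non-empty mask, and a simple retry of
 * the read succeeds -- no io-wq punt needed. */
int retry_read_instead_of_punt(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    fcntl(sv[0], F_SETFL, O_NONBLOCK);

    /* a) the issue attempt finds no data: EAGAIN */
    if (read(sv[0], buf, sizeof(buf)) != -1 || errno != EAGAIN)
        return -1;

    /* b) data becomes available before the poll handler is armed */
    if (write(sv[1], "data", 4) != 4)
        return -1;

    /* c) the vfs_poll() analog returns a non-empty mask */
    struct pollfd pfd = { .fd = sv[0], .events = POLLIN };
    if (poll(&pfd, 1, 0) != 1 || !(pfd.revents & POLLIN))
        return -1;

    /* the proposed fix: just issue the read again */
    ssize_t n = read(sv[0], buf, sizeof(buf));
    int ok = (n == 4);

    close(sv[0]);
    close(sv[1]);
    return ok;
}
```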

Greetings,
Olivier



* Re: Possible unnecessary IORING_OP_READs executed in Async
  2021-06-07 20:26 Possible unnecessary IORING_OP_READs executed in Async Olivier Langlois
@ 2021-06-09 22:01 ` Olivier Langlois
  2021-06-09 22:08   ` Olivier Langlois
  0 siblings, 1 reply; 3+ messages in thread
From: Olivier Langlois @ 2021-06-09 22:01 UTC (permalink / raw)
  To: io-uring

On Mon, 2021-06-07 at 16:26 -0400, Olivier Langlois wrote:
> In __io_queue_sqe():
> a) io_issue_sqe() returns EAGAIN
> b) in between io_issue_sqe() call and vfs_poll() call done inside
> io_arm_poll_handler(), data becomes available
> c) io_arm_poll_handler() returns false because vfs_poll() returned
> a non-empty mask.
> 
> I am throwing this idea to the group.
> Would it be a good idea to detect that situation and recall
> io_issue_sqe() in that case instead of pushing the request to the io-
> wq?
> 
> On busy TCP sockets, this scenario seems to happen very often (i.e.
> a few times every second)

I didn't wait for an answer and I went straight to trying out an
io_uring modification.

It works like a charm. My code uses io_uring like a maniac, and with
the modification, zero io worker threads get created.

That means a definite gain in terms of latency...

I will send out a patch soon to share this discovery with io_uring
devs.

Greetings,




* Re: Possible unnecessary IORING_OP_READs executed in Async
  2021-06-09 22:01 ` Olivier Langlois
@ 2021-06-09 22:08   ` Olivier Langlois
  0 siblings, 0 replies; 3+ messages in thread
From: Olivier Langlois @ 2021-06-09 22:08 UTC (permalink / raw)
  To: io-uring

On Wed, 2021-06-09 at 18:01 -0400, Olivier Langlois wrote:
> 
> I didn't wait for an answer and I went straight to trying out an
> io_uring modification.
> 
> It works like a charm. My code is using io_uring like a maniac and
> with
> the modification, zero io worker threads get created.
> 
> That means a definite gain in terms of latency...
> 
> I will send out a patch soon to share this discovery with io_uring
> devs.
> 
When reviewing my patch, keep in mind that I have only tested it with
IORING_OP_READ... It might not be universally applicable to all
operations; I haven't tested beyond my personal io_uring usage.


