* [PATCH liburing] Don't enter the kernel to wait on cqes if they are already available.
From: Marcelo Diop-Gonzalez @ 2020-12-03 16:07 UTC
To: axboe; +Cc: io-uring, Marcelo Diop-Gonzalez
In _io_uring_get_cqe(), if we can see from userspace that there are
already wait_nr cqes available, then there is no need to call
__sys_io_uring_enter2 (unless we have to submit IO or flush overflow),
since the kernel will just end up returning immediately after
performing the same check, anyway.
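For illustration, the userspace-visible check boils down to comparing the
CQ ring's head and tail indices. A minimal sketch of that check (not part
of the patch; the helper name cq_ready_hint() is invented here, and
liburing's public io_uring_cq_ready() performs the same computation):

	#include "liburing.h"

	/* Number of cqes already visible to userspace. */
	static unsigned cq_ready_hint(struct io_uring *ring)
	{
		/*
		 * The kernel publishes new cqes with a release store to the
		 * tail, so pair it with an acquire load. Only userspace
		 * advances the head, so a plain load suffices there.
		 */
		unsigned tail = io_uring_smp_load_acquire(ring->cq.ktail);
		unsigned head = *ring->cq.khead;

		return tail - head;
	}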
Signed-off-by: Marcelo Diop-Gonzalez <[email protected]>
---
src/queue.c | 39 +++++++++++++++++++++++++--------------
1 file changed, 25 insertions(+), 14 deletions(-)
diff --git a/src/queue.c b/src/queue.c
index 53ca588..df388f6 100644
--- a/src/queue.c
+++ b/src/queue.c
@@ -39,29 +39,38 @@ static inline bool cq_ring_needs_flush(struct io_uring *ring)
 }
 
 static int __io_uring_peek_cqe(struct io_uring *ring,
-			       struct io_uring_cqe **cqe_ptr)
+			       struct io_uring_cqe **cqe_ptr,
+			       unsigned *nr_available)
 {
 	struct io_uring_cqe *cqe;
-	unsigned head;
 	int err = 0;
+	unsigned available;
+	unsigned mask = *ring->cq.kring_mask;
 
 	do {
-		io_uring_for_each_cqe(ring, head, cqe)
+		unsigned tail = io_uring_smp_load_acquire(ring->cq.ktail);
+		unsigned head = *ring->cq.khead;
+
+		cqe = NULL;
+		available = tail - head;
+		if (!available)
 			break;
-		if (cqe) {
-			if (cqe->user_data == LIBURING_UDATA_TIMEOUT) {
-				if (cqe->res < 0)
-					err = cqe->res;
-				io_uring_cq_advance(ring, 1);
-				if (!err)
-					continue;
-				cqe = NULL;
-			}
+
+		cqe = &ring->cq.cqes[head & mask];
+		if (cqe->user_data == LIBURING_UDATA_TIMEOUT) {
+			if (cqe->res < 0)
+				err = cqe->res;
+			io_uring_cq_advance(ring, 1);
+			if (!err)
+				continue;
+			cqe = NULL;
 		}
+
 		break;
 	} while (1);
 
 	*cqe_ptr = cqe;
+	*nr_available = available;
 	return err;
 }
 
@@ -83,8 +92,9 @@ static int _io_uring_get_cqe(struct io_uring *ring, struct io_uring_cqe **cqe_pt
 	do {
 		bool cq_overflow_flush = false;
 		unsigned flags = 0;
+		unsigned nr_available;
 
-		err = __io_uring_peek_cqe(ring, &cqe);
+		err = __io_uring_peek_cqe(ring, &cqe, &nr_available);
 		if (err)
 			break;
 		if (!cqe && !to_wait && !data->submit) {
@@ -100,7 +110,8 @@ static int _io_uring_get_cqe(struct io_uring *ring, struct io_uring_cqe **cqe_pt
 		flags = IORING_ENTER_GETEVENTS | data->get_flags;
 		if (data->submit)
 			sq_ring_needs_enter(ring, &flags);
-		if (data->wait_nr || data->submit || cq_overflow_flush)
+		if (data->wait_nr > nr_available || data->submit ||
+		    cq_overflow_flush)
 			ret = __sys_io_uring_enter2(ring->ring_fd, data->submit,
 						    data->wait_nr, flags, data->arg,
 						    data->sz);
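
To sketch the effect for callers (illustrative only; assumes a ring already
set up with io_uring_queue_init() that has completions pending): a plain
wait now completes without a syscall.

	struct io_uring_cqe *cqe;

	/*
	 * With this patch, if at least one cqe is already in the CQ ring
	 * and there is nothing to submit, this returns without calling
	 * io_uring_enter(2).
	 */
	int ret = io_uring_wait_cqe(&ring, &cqe);
	if (ret == 0) {
		/* ... handle cqe->res ... */
		io_uring_cqe_seen(&ring, cqe);
	}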
--
2.20.1
* Re: [PATCH liburing] Don't enter the kernel to wait on cqes if they are already available.
From: Jens Axboe @ 2020-12-07 15:41 UTC
To: Marcelo Diop-Gonzalez; +Cc: io-uring
On 12/3/20 9:07 AM, Marcelo Diop-Gonzalez wrote:
> In _io_uring_get_cqe(), if we can see from userspace that there are
> already wait_nr cqes available, then there is no need to call
> __sys_io_uring_enter2 (unless we have to submit IO or flush overflow),
> since the kernel will just end up returning immediately after
> performing the same check, anyway.
Applied, thanks.
--
Jens Axboe