* [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd
@ 2024-09-21 7:59 Jens Axboe
2024-09-21 7:59 ` [PATCH 1/6] io_uring/eventfd: abstract out ev_fd put helper Jens Axboe
` (6 more replies)
0 siblings, 7 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring
Hi,
For some reason this ended up being a series of 6 patches, when the
goal was really just to move evfd_last_cq_tail out of io_ring_ctx
and into struct io_ev_fd, where it belongs. But this slowly builds to
that goal, and the final patch does the move unceremoniously.
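For orientation, the end state after the final patch has struct io_ev_fd
carrying the tail tracking itself, roughly:

struct io_ev_fd {
	struct eventfd_ctx	*cq_ev_fd;
	unsigned int		eventfd_async;
	/* protected by ->completion_lock */
	unsigned		last_cq_tail;
	refcount_t		refs;
	atomic_t		ops;
	struct rcu_head		rcu;
};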
Patches are on top of current -git with for-6.12/io_uring pulled in.
io_uring/eventfd.c | 137 ++++++++++++++++++++++++++++++---------------
1 file changed, 92 insertions(+), 45 deletions(-)
--
Jens Axboe
* [PATCH 1/6] io_uring/eventfd: abstract out ev_fd put helper
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 7:59 ` [PATCH 2/6] io_uring/eventfd: check for the need to async notifier earlier Jens Axboe
` (5 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
We call this in two spots, so add a helper for it. This is in preparation
for extending this part.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index e37fddd5d9ce..8b628ab6bbff 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -41,6 +41,12 @@ static void io_eventfd_do_signal(struct rcu_head *rcu)
io_eventfd_free(rcu);
}
+static void io_eventfd_put(struct io_ev_fd *ev_fd)
+{
+ if (refcount_dec_and_test(&ev_fd->refs))
+ call_rcu(&ev_fd->rcu, io_eventfd_free);
+}
+
void io_eventfd_signal(struct io_ring_ctx *ctx)
{
struct io_ev_fd *ev_fd = NULL;
@@ -77,8 +83,7 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
}
}
out:
- if (refcount_dec_and_test(&ev_fd->refs))
- call_rcu(&ev_fd->rcu, io_eventfd_free);
+ io_eventfd_put(ev_fd);
}
void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
@@ -152,8 +157,7 @@ int io_eventfd_unregister(struct io_ring_ctx *ctx)
if (ev_fd) {
ctx->has_evfd = false;
rcu_assign_pointer(ctx->io_ev_fd, NULL);
- if (refcount_dec_and_test(&ev_fd->refs))
- call_rcu(&ev_fd->rcu, io_eventfd_free);
+ io_eventfd_put(ev_fd);
return 0;
}
--
2.45.2
* [PATCH 2/6] io_uring/eventfd: check for the need to async notifier earlier
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
2024-09-21 7:59 ` [PATCH 1/6] io_uring/eventfd: abstract out ev_fd put helper Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 7:59 ` [PATCH 3/6] io_uring/eventfd: move actual signaling part into separate helper Jens Axboe
` (4 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
It's not necessary to do this check after grabbing a reference. With the
check moved earlier, we can drop the out goto path as well.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index 8b628ab6bbff..829873806f9f 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -69,10 +69,10 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
*/
if (unlikely(!ev_fd))
return;
+ if (ev_fd->eventfd_async && !io_wq_current_is_worker())
+ return;
if (!refcount_inc_not_zero(&ev_fd->refs))
return;
- if (ev_fd->eventfd_async && !io_wq_current_is_worker())
- goto out;
if (likely(eventfd_signal_allowed())) {
eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE);
@@ -82,7 +82,6 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
return;
}
}
-out:
io_eventfd_put(ev_fd);
}
--
2.45.2
* [PATCH 3/6] io_uring/eventfd: move actual signaling part into separate helper
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
2024-09-21 7:59 ` [PATCH 1/6] io_uring/eventfd: abstract out ev_fd put helper Jens Axboe
2024-09-21 7:59 ` [PATCH 2/6] io_uring/eventfd: check for the need to async notifier earlier Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 7:59 ` [PATCH 4/6] io_uring/eventfd: move trigger check into a helper Jens Axboe
` (3 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
In preparation for using this from multiple spots, move the signaling
into a helper.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 28 ++++++++++++++++++----------
1 file changed, 18 insertions(+), 10 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index 829873806f9f..58e76f4d1e00 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -47,6 +47,22 @@ static void io_eventfd_put(struct io_ev_fd *ev_fd)
call_rcu(&ev_fd->rcu, io_eventfd_free);
}
+/*
+ * Returns true if the caller should put the ev_fd reference, false if not.
+ */
+static bool __io_eventfd_signal(struct io_ev_fd *ev_fd)
+{
+ if (eventfd_signal_allowed()) {
+ eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE);
+ return true;
+ }
+ if (!atomic_fetch_or(BIT(IO_EVENTFD_OP_SIGNAL_BIT), &ev_fd->ops)) {
+ call_rcu_hurry(&ev_fd->rcu, io_eventfd_do_signal);
+ return false;
+ }
+ return true;
+}
+
void io_eventfd_signal(struct io_ring_ctx *ctx)
{
struct io_ev_fd *ev_fd = NULL;
@@ -73,16 +89,8 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
return;
if (!refcount_inc_not_zero(&ev_fd->refs))
return;
-
- if (likely(eventfd_signal_allowed())) {
- eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE);
- } else {
- if (!atomic_fetch_or(BIT(IO_EVENTFD_OP_SIGNAL_BIT), &ev_fd->ops)) {
- call_rcu_hurry(&ev_fd->rcu, io_eventfd_do_signal);
- return;
- }
- }
- io_eventfd_put(ev_fd);
+ if (__io_eventfd_signal(ev_fd))
+ io_eventfd_put(ev_fd);
}
void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
--
2.45.2
* [PATCH 4/6] io_uring/eventfd: move trigger check into a helper
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
` (2 preceding siblings ...)
2024-09-21 7:59 ` [PATCH 3/6] io_uring/eventfd: move actual signaling part into separate helper Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 7:59 ` [PATCH 5/6] io_uring/eventfd: abstract out ev_fd grab + release helpers Jens Axboe
` (2 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
It's a bit hard to read what guards the triggering; move it into a
helper and add a comment explaining it. This also moves the
ev_fd == NULL check in there.
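For context (not part of this patch), eventfd_async corresponds to the
eventfd having been registered with IORING_REGISTER_EVENTFD_ASYNC rather
than IORING_REGISTER_EVENTFD. A minimal userspace sketch, assuming
liburing is available; setup_ring_eventfd() is just an illustrative name:

#include <sys/eventfd.h>
#include <unistd.h>
#include <liburing.h>

/*
 * Illustrative only: register an eventfd that the kernel will only signal
 * for completions posted from async (io-wq) context, which is what the
 * eventfd_async flag gates in the io_eventfd_trigger() helper added below.
 */
static int setup_ring_eventfd(struct io_uring *ring, int *efd_out)
{
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;
	/* io_uring_register_eventfd() would leave eventfd_async clear */
	if (io_uring_register_eventfd_async(ring, efd) < 0) {
		close(efd);
		return -1;
	}
	*efd_out = efd;
	return 0;
}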
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index 58e76f4d1e00..0946d3da88d3 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -63,6 +63,17 @@ static bool __io_eventfd_signal(struct io_ev_fd *ev_fd)
return true;
}
+/*
+ * Trigger if eventfd_async isn't set, or if it's set and the caller is
+ * an async worker. If ev_fd isn't valid, obviously return false.
+ */
+static bool io_eventfd_trigger(struct io_ev_fd *ev_fd)
+{
+ if (ev_fd)
+ return !ev_fd->eventfd_async || io_wq_current_is_worker();
+ return false;
+}
+
void io_eventfd_signal(struct io_ring_ctx *ctx)
{
struct io_ev_fd *ev_fd = NULL;
@@ -83,9 +94,7 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
* completed between the NULL check of ctx->io_ev_fd at the start of
* the function and rcu_read_lock.
*/
- if (unlikely(!ev_fd))
- return;
- if (ev_fd->eventfd_async && !io_wq_current_is_worker())
+ if (!io_eventfd_trigger(ev_fd))
return;
if (!refcount_inc_not_zero(&ev_fd->refs))
return;
--
2.45.2
* [PATCH 5/6] io_uring/eventfd: abstract out ev_fd grab + release helpers
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
` (3 preceding siblings ...)
2024-09-21 7:59 ` [PATCH 4/6] io_uring/eventfd: move trigger check into a helper Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 7:59 ` [PATCH 6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd Jens Axboe
2024-09-30 14:28 ` [PATCHSET next 0/6] Move eventfd cq tracking " Jens Axboe
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
In preparation for needing to grab (and release) the ev_fd from
another path, abstract out two helpers for that.
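For reference, the calling convention the two helpers establish (and which
the final patch reuses for io_eventfd_flush_signal()) looks roughly like:

	struct io_ev_fd *ev_fd;

	/* on success, returns with the RCU read lock held and a ref grabbed */
	ev_fd = io_eventfd_grab(ctx);
	if (ev_fd) {
		bool put_ref;

		/* work that decides whether the caller still owns the ref */
		put_ref = __io_eventfd_signal(ev_fd);

		/* drops the RCU read lock, and the ref if still owned */
		io_eventfd_release(ev_fd, put_ref);
	}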
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 41 ++++++++++++++++++++++++++++++-----------
1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index 0946d3da88d3..d1fdecd0c458 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -47,6 +47,13 @@ static void io_eventfd_put(struct io_ev_fd *ev_fd)
call_rcu(&ev_fd->rcu, io_eventfd_free);
}
+static void io_eventfd_release(struct io_ev_fd *ev_fd, bool put_ref)
+{
+ if (put_ref)
+ io_eventfd_put(ev_fd);
+ rcu_read_unlock();
+}
+
/*
* Returns true if the caller should put the ev_fd reference, false if not.
*/
@@ -74,14 +81,18 @@ static bool io_eventfd_trigger(struct io_ev_fd *ev_fd)
return false;
}
-void io_eventfd_signal(struct io_ring_ctx *ctx)
+/*
+ * On success, returns with an ev_fd reference grabbed and the RCU read
+ * lock held.
+ */
+static struct io_ev_fd *io_eventfd_grab(struct io_ring_ctx *ctx)
{
- struct io_ev_fd *ev_fd = NULL;
+ struct io_ev_fd *ev_fd;
if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
- return;
+ return NULL;
- guard(rcu)();
+ rcu_read_lock();
/*
* rcu_dereference ctx->io_ev_fd once and use it for both for checking
@@ -90,16 +101,24 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
ev_fd = rcu_dereference(ctx->io_ev_fd);
/*
- * Check again if ev_fd exists incase an io_eventfd_unregister call
+ * Check again if ev_fd exists in case an io_eventfd_unregister call
* completed between the NULL check of ctx->io_ev_fd at the start of
* the function and rcu_read_lock.
*/
- if (!io_eventfd_trigger(ev_fd))
- return;
- if (!refcount_inc_not_zero(&ev_fd->refs))
- return;
- if (__io_eventfd_signal(ev_fd))
- io_eventfd_put(ev_fd);
+ if (io_eventfd_trigger(ev_fd) && refcount_inc_not_zero(&ev_fd->refs))
+ return ev_fd;
+
+ rcu_read_unlock();
+ return NULL;
+}
+
+void io_eventfd_signal(struct io_ring_ctx *ctx)
+{
+ struct io_ev_fd *ev_fd;
+
+ ev_fd = io_eventfd_grab(ctx);
+ if (ev_fd)
+ io_eventfd_release(ev_fd, __io_eventfd_signal(ev_fd));
}
void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
--
2.45.2
* [PATCH 6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
` (4 preceding siblings ...)
2024-09-21 7:59 ` [PATCH 5/6] io_uring/eventfd: abstract out ev_fd grab + release helpers Jens Axboe
@ 2024-09-21 7:59 ` Jens Axboe
2024-09-21 8:18 ` Jens Axboe
2024-09-30 14:28 ` [PATCHSET next 0/6] Move eventfd cq tracking " Jens Axboe
6 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 7:59 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Everything else about the io_uring eventfd support is nicely kept
private to that code, except the cached_cq_tail tracking. With
everything else in place, move io_eventfd_flush_signal() to using
the ev_fd grab+release helpers, which then enables the direct use of
io_ev_fd for this tracking too.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/eventfd.c | 50 +++++++++++++++++++++++++++-------------------
1 file changed, 29 insertions(+), 21 deletions(-)
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index d1fdecd0c458..fab936d31ba8 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -13,10 +13,12 @@
struct io_ev_fd {
struct eventfd_ctx *cq_ev_fd;
- unsigned int eventfd_async: 1;
- struct rcu_head rcu;
+ unsigned int eventfd_async;
+ /* protected by ->completion_lock */
+ unsigned last_cq_tail;
refcount_t refs;
atomic_t ops;
+ struct rcu_head rcu;
};
enum {
@@ -123,25 +125,31 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
{
- bool skip;
-
- spin_lock(&ctx->completion_lock);
-
- /*
- * Eventfd should only get triggered when at least one event has been
- * posted. Some applications rely on the eventfd notification count
- * only changing IFF a new CQE has been added to the CQ ring. There's
- * no depedency on 1:1 relationship between how many times this
- * function is called (and hence the eventfd count) and number of CQEs
- * posted to the CQ ring.
- */
- skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
- ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
- spin_unlock(&ctx->completion_lock);
- if (skip)
- return;
+ struct io_ev_fd *ev_fd;
- io_eventfd_signal(ctx);
+ ev_fd = io_eventfd_grab(ctx);
+ if (ev_fd) {
+ bool skip, put_ref = true;
+
+ /*
+ * Eventfd should only get triggered when at least one event
+ * has been posted. Some applications rely on the eventfd
+ * notification count only changing IFF a new CQE has been
+ * added to the CQ ring. There's no dependency on 1:1
+ * relationship between how many times this function is called
+ * (and hence the eventfd count) and number of CQEs posted to
+ * the CQ ring.
+ */
+ spin_lock(&ctx->completion_lock);
+ skip = ctx->cached_cq_tail == ev_fd->last_cq_tail;
+ ev_fd->last_cq_tail = ctx->cached_cq_tail;
+ spin_unlock(&ctx->completion_lock);
+
+ if (!skip)
+ put_ref = __io_eventfd_signal(ev_fd);
+
+ io_eventfd_release(ev_fd, put_ref);
+ }
}
int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
@@ -172,7 +180,7 @@ int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
}
spin_lock(&ctx->completion_lock);
- ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+ ev_fd->last_cq_tail = ctx->cached_cq_tail;
spin_unlock(&ctx->completion_lock);
ev_fd->eventfd_async = eventfd_async;
--
2.45.2
* Re: [PATCH 6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd
2024-09-21 7:59 ` [PATCH 6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd Jens Axboe
@ 2024-09-21 8:18 ` Jens Axboe
0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-21 8:18 UTC (permalink / raw)
To: io-uring
On 9/21/24 1:59 AM, Jens Axboe wrote:
> Everything else about the io_uring eventfd support is nicely kept
> private to that code, except the cached_cq_tail tracking. With
> everything else in place, move io_eventfd_flush_signal() to using
> the ev_fd grab+release helpers, which then enables the direct use of
> io_ev_fd for this tracking too.
Of course I forgot to refresh io_uring_types.h as well, so this patch is
missing the hunk that deletes evfd_last_cq_tail from io_ring_ctx.
Refreshed version below; that's the only change.
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 4b9ba523978d..0deec302595a 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -413,10 +413,6 @@ struct io_ring_ctx {
DECLARE_HASHTABLE(napi_ht, 4);
#endif
-
- /* protected by ->completion_lock */
- unsigned evfd_last_cq_tail;
-
/*
* If IORING_SETUP_NO_MMAP is used, then the below holds
* the gup'ed pages for the two rings, and the sqes.
diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
index d1fdecd0c458..fab936d31ba8 100644
--- a/io_uring/eventfd.c
+++ b/io_uring/eventfd.c
@@ -13,10 +13,12 @@
struct io_ev_fd {
struct eventfd_ctx *cq_ev_fd;
- unsigned int eventfd_async: 1;
- struct rcu_head rcu;
+ unsigned int eventfd_async;
+ /* protected by ->completion_lock */
+ unsigned last_cq_tail;
refcount_t refs;
atomic_t ops;
+ struct rcu_head rcu;
};
enum {
@@ -123,25 +125,31 @@ void io_eventfd_signal(struct io_ring_ctx *ctx)
void io_eventfd_flush_signal(struct io_ring_ctx *ctx)
{
- bool skip;
-
- spin_lock(&ctx->completion_lock);
-
- /*
- * Eventfd should only get triggered when at least one event has been
- * posted. Some applications rely on the eventfd notification count
- * only changing IFF a new CQE has been added to the CQ ring. There's
- * no depedency on 1:1 relationship between how many times this
- * function is called (and hence the eventfd count) and number of CQEs
- * posted to the CQ ring.
- */
- skip = ctx->cached_cq_tail == ctx->evfd_last_cq_tail;
- ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
- spin_unlock(&ctx->completion_lock);
- if (skip)
- return;
+ struct io_ev_fd *ev_fd;
- io_eventfd_signal(ctx);
+ ev_fd = io_eventfd_grab(ctx);
+ if (ev_fd) {
+ bool skip, put_ref = true;
+
+ /*
+ * Eventfd should only get triggered when at least one event
+ * has been posted. Some applications rely on the eventfd
+ * notification count only changing IFF a new CQE has been
+ * added to the CQ ring. There's no dependency on 1:1
+ * relationship between how many times this function is called
+ * (and hence the eventfd count) and number of CQEs posted to
+ * the CQ ring.
+ */
+ spin_lock(&ctx->completion_lock);
+ skip = ctx->cached_cq_tail == ev_fd->last_cq_tail;
+ ev_fd->last_cq_tail = ctx->cached_cq_tail;
+ spin_unlock(&ctx->completion_lock);
+
+ if (!skip)
+ put_ref = __io_eventfd_signal(ev_fd);
+
+ io_eventfd_release(ev_fd, put_ref);
+ }
}
int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
@@ -172,7 +180,7 @@ int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
}
spin_lock(&ctx->completion_lock);
- ctx->evfd_last_cq_tail = ctx->cached_cq_tail;
+ ev_fd->last_cq_tail = ctx->cached_cq_tail;
spin_unlock(&ctx->completion_lock);
ev_fd->eventfd_async = eventfd_async;
--
Jens Axboe
* Re: [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd
2024-09-21 7:59 [PATCHSET next 0/6] Move eventfd cq tracking into io_ev_fd Jens Axboe
` (5 preceding siblings ...)
2024-09-21 7:59 ` [PATCH 6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd Jens Axboe
@ 2024-09-30 14:28 ` Jens Axboe
6 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-09-30 14:28 UTC (permalink / raw)
To: io-uring, Jens Axboe
On Sat, 21 Sep 2024 01:59:46 -0600, Jens Axboe wrote:
> For some reason this ended up being a series of 6 patches, when the
> goal was really just to move evfd_last_cq_tail out of io_ring_ctx
> and into struct io_ev_fd, where it belongs. But this slowly builds to
> that goal, and the final patch does the move unceremoniously.
>
> Patches are on top of current -git with for-6.12/io_uring pulled in.
>
> [...]
Applied, thanks!
[1/6] io_uring/eventfd: abstract out ev_fd put helper
commit: a2656907e0e2bd817f18ae5ba0c0bc47373031e0
[2/6] io_uring/eventfd: check for the need to async notifier earlier
commit: e18ccce7024f18ada55e7bb714b28990e1fd3034
[3/6] io_uring/eventfd: move actual signaling part into separate helper
commit: 0fa1e1b85857005aa7777dd3c92fa1f9324909ef
[4/6] io_uring/eventfd: move trigger check into a helper
commit: bc9cd77386b3e5b9e7e4f2b016d0dbe4db2344bd
[5/6] io_uring/eventfd: abstract out ev_fd grab + release helpers
commit: bb4fd7c94e41777b7980e0d06ea521311f7330be
[6/6] io_uring/eventfd: move ctx->evfd_last_cq_tail into io_ev_fd
commit: e4355ab51156d5ca83733229fc6b3dfd7c5a4785
Best regards,
--
Jens Axboe