* [PATCH v1 00/11] add large CQE support for io-uring
@ 2022-04-19 20:56 Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
` (10 more replies)
0 siblings, 11 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr
This adds large CQE support for io_uring. Large CQEs are 16 bytes longer
than regular CQEs, which affects both how the CQE array is allocated and
how CQEs are accessed.
Each large CQE is twice as big, so the allocation size is doubled and the
ring size calculation needs to take this into account.
All accesses to large CQEs need to shift the index by 1 to account for the
bigger size of each CQE. The existing index manipulation itself does not
need to change.
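The index handling described above can be sketched as follows. This is a
simplified userspace model, not the kernel code; the names (cqe_offset,
struct cqe) are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SETUP_CQE32 (1U << 9)

/* 16-byte CQE as laid out in a default-sized ring */
struct cqe {
	uint64_t user_data;
	int32_t  res;
	uint32_t flags;
};

/*
 * Map a logical CQE index to a byte offset into the CQ array.
 * The index math itself (tail & mask) is unchanged; with CQE32
 * each entry occupies two 16-byte slots, so only the resulting
 * index is shifted left by one.
 */
static size_t cqe_offset(unsigned int flags, unsigned int tail,
			 unsigned int cq_entries)
{
	unsigned int shift = (flags & SETUP_CQE32) ? 1 : 0;
	unsigned int off = tail & (cq_entries - 1);

	return (size_t)(off << shift) * sizeof(struct cqe);
}
```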
The setup and the completion processing need to take the new fields into
account and initialize them. For the completion processing these fields
need to be passed through.
The flush completion processing needs to fill the additional CQE32 fields.
The code for overflows needs to be adapted accordingly: the order of the
fields in the io overflow structure is changed and the allocation is
enlarged for big CQEs, so that the overflow path can take large CQEs into
account. In addition, the two new fields need to be copied for large CQEs.
The new fields are added to the tracing statements, so the extra1 and extra2
fields are exposed in tracing.
For testing purposes the extra1 and extra2 fields are used by the nop operation.
Testing:
The existing tests have been run with the following configurations and
they all pass:
- Default config
- Large SQE
- Large CQE
- Large SQE and large CQE.
In addition, a new test has been added to liburing to verify that extra1
and extra2 are set as expected for the nop operation.
Note:
Using this patch series also requires the corresponding changes to the
liburing client library; a separate patch series has been sent out for that.
This series is based on the jens/io_uring-big-sqe branch.
Stefan Roesch (11):
io_uring: support CQE32 in io_uring_cqe
io_uring: wire up inline completion path for CQE32
io_uring: change ring size calculation for CQE32
io_uring: add CQE32 setup processing
io_uring: add CQE32 completion processing
io_uring: modify io_get_cqe for CQE32
io_uring: flush completions for CQE32
io_uring: overflow processing for CQE32
io_uring: add tracing for additional CQE32 fields
io_uring: enable CQE32
io_uring: support CQE32 for nop operation
fs/io_uring.c | 209 +++++++++++++++++++++++++++-----
include/trace/events/io_uring.h | 18 ++-
include/uapi/linux/io_uring.h | 12 ++
3 files changed, 207 insertions(+), 32 deletions(-)
--
2.30.2
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-22 1:51 ` Kanchan Joshi
2022-04-19 20:56 ` [PATCH v1 02/11] io_uring: wire up inline completion path for CQE32 Stefan Roesch
` (9 subsequent siblings)
10 siblings, 1 reply; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This adds struct io_uring_cqe_extra to struct io_uring_cqe to support
large CQEs.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
include/uapi/linux/io_uring.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index ee677dbd6a6d..6f9f9b6a9d15 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -111,6 +111,7 @@ enum {
#define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */
#define IORING_SETUP_SUBMIT_ALL (1U << 7) /* continue submit on error */
#define IORING_SETUP_SQE128 (1U << 8) /* SQEs are 128b */
+#define IORING_SETUP_CQE32 (1U << 9) /* CQEs are 32b */
enum {
IORING_OP_NOP,
@@ -201,6 +202,11 @@ enum {
#define IORING_POLL_UPDATE_EVENTS (1U << 1)
#define IORING_POLL_UPDATE_USER_DATA (1U << 2)
+struct io_uring_cqe_extra {
+ __u64 extra1;
+ __u64 extra2;
+};
+
/*
* IO completion data structure (Completion Queue Entry)
*/
@@ -208,6 +214,12 @@ struct io_uring_cqe {
__u64 user_data; /* sqe->data submission passed back */
__s32 res; /* result code for this event */
__u32 flags;
+
+ /*
+ * If the ring is initialized with IORING_SETUP_CQE32, then this field
+ * contains 16-bytes of padding, doubling the size of the CQE.
+ */
+ struct io_uring_cqe_extra b[0];
};
/*
--
2.30.2
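The layout introduced by this patch can be checked in userspace: the
zero-length array adds no size of its own (a GNU C extension, as used
throughout the kernel), so a default ring still sees 16-byte CQEs and the
extra fields start right after them. A sketch using fixed-width stand-ins
for the kernel's __u64/__s32/__u32 types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct io_uring_cqe_extra {
	uint64_t extra1;
	uint64_t extra2;
};

/* Mirror of the uapi struct from this patch, with stand-in types */
struct io_uring_cqe {
	uint64_t user_data;	/* sqe->data submission passed back */
	int32_t  res;		/* result code for this event */
	uint32_t flags;

	/* present only when the ring is set up with IORING_SETUP_CQE32 */
	struct io_uring_cqe_extra b[0];
};
```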
* [PATCH v1 02/11] io_uring: wire up inline completion path for CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 03/11] io_uring: change ring size calculation " Stefan Roesch
` (8 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
Rather than always use the slower locked path, wire up use of the
deferred completion path that normal CQEs can take. This reuses the
hash list node for the storage we need to hold the two 64-bit values
that must be passed back.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4c32cf987ef3..bf2b02518332 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -964,7 +964,13 @@ struct io_kiocb {
atomic_t poll_refs;
struct io_task_work io_task_work;
/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
- struct hlist_node hash_node;
+ union {
+ struct hlist_node hash_node;
+ struct {
+ u64 extra1;
+ u64 extra2;
+ };
+ };
/* internal polling, see IORING_FEAT_FAST_POLL */
struct async_poll *apoll;
/* opcode allocated if it needs to store data for async defer */
--
2.30.2
* [PATCH v1 03/11] io_uring: change ring size calculation for CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 02/11] io_uring: wire up inline completion path for CQE32 Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 04/11] io_uring: add CQE32 setup processing Stefan Roesch
` (7 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This changes the function rings_size() to take large CQEs into account.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index bf2b02518332..9712483d3a17 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9693,8 +9693,8 @@ static void *io_mem_alloc(size_t size)
return (void *) __get_free_pages(gfp, get_order(size));
}
-static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
- size_t *sq_offset)
+static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
+ unsigned int cq_entries, size_t *sq_offset)
{
struct io_rings *rings;
size_t off, sq_array_size;
@@ -9702,6 +9702,10 @@ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
off = struct_size(rings, cqes, cq_entries);
if (off == SIZE_MAX)
return SIZE_MAX;
+ if (ctx->flags & IORING_SETUP_CQE32) {
+ if (check_shl_overflow(off, 1, &off))
+ return SIZE_MAX;
+ }
#ifdef CONFIG_SMP
off = ALIGN(off, SMP_CACHE_BYTES);
@@ -11365,7 +11369,7 @@ static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
ctx->sq_entries = p->sq_entries;
ctx->cq_entries = p->cq_entries;
- size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
+ size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
if (size == SIZE_MAX)
return -EOVERFLOW;
--
2.30.2
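The size calculation above can be modeled in userspace. This is a
simplified sketch (the name cq_array_size is made up; the kernel uses
struct_size() and check_shl_overflow()): the CQE array takes cq_entries
default-sized entries, and with CQE32 that amount is doubled via an
overflow-checked shift, returning SIZE_MAX to signal -EOVERFLOW:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Model of the CQE-array part of rings_size(): n entries of 16
 * bytes each, doubled for CQE32 rings. SIZE_MAX means overflow,
 * mirroring how rings_size() reports failure.
 */
static size_t cq_array_size(unsigned int cq_entries, bool cqe32)
{
	size_t off = (size_t)cq_entries * 16;	/* default CQE size */

	if (cqe32) {
		if (off > (SIZE_MAX >> 1))
			return SIZE_MAX;	/* shift would overflow */
		off <<= 1;
	}
	return off;
}
```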
* [PATCH v1 04/11] io_uring: add CQE32 setup processing
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (2 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 03/11] io_uring: change ring size calculation " Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 05/11] io_uring: add CQE32 completion processing Stefan Roesch
` (6 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr
This adds two new functions to set up and fill the CQE32 result structure.
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9712483d3a17..abbd2efbe255 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2175,12 +2175,70 @@ static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
req->cqe.res, req->cqe.flags);
}
+static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
+ struct io_kiocb *req)
+{
+ struct io_uring_cqe *cqe;
+ u64 extra1 = req->extra1;
+ u64 extra2 = req->extra2;
+
+ trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+ req->cqe.res, req->cqe.flags);
+
+ /*
+ * If we can't get a cq entry, userspace overflowed the
+ * submission (by quite a lot). Increment the overflow count in
+ * the ring.
+ */
+ cqe = io_get_cqe(ctx);
+ if (likely(cqe)) {
+ memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
+ cqe->b[0].extra1 = extra1;
+ cqe->b[0].extra2 = extra2;
+ return true;
+ }
+
+ return io_cqring_event_overflow(ctx, req->cqe.user_data,
+ req->cqe.res, req->cqe.flags, extra1, extra2);
+}
+
static inline bool __io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
{
trace_io_uring_complete(req->ctx, req, req->cqe.user_data, res, cflags);
return __io_fill_cqe(req->ctx, req->cqe.user_data, res, cflags);
}
+static void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags,
+ u64 extra1, u64 extra2)
+{
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_uring_cqe *cqe;
+
+ if (WARN_ON_ONCE(!(ctx->flags & IORING_SETUP_CQE32)))
+ return;
+ if (req->flags & REQ_F_CQE_SKIP)
+ return;
+
+ trace_io_uring_complete(ctx, req, req->user_data, res, cflags);
+
+ /*
+ * If we can't get a cq entry, userspace overflowed the
+ * submission (by quite a lot). Increment the overflow count in
+ * the ring.
+ */
+ cqe = io_get_cqe(ctx);
+ if (likely(cqe)) {
+ WRITE_ONCE(cqe->user_data, req->cqe.user_data);
+ WRITE_ONCE(cqe->res, res);
+ WRITE_ONCE(cqe->flags, cflags);
+ WRITE_ONCE(cqe->b[0].extra1, extra1);
+ WRITE_ONCE(cqe->b[0].extra2, extra2);
+ return;
+ }
+
+ io_cqring_event_overflow(ctx, req->cqe.user_data, res, cflags);
+}
+
static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
s32 res, u32 cflags)
{
--
2.30.2
* [PATCH v1 05/11] io_uring: add CQE32 completion processing
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (3 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 04/11] io_uring: add CQE32 setup processing Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 06/11] io_uring: modify io_get_cqe for CQE32 Stefan Roesch
` (5 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This adds the completion processing for large CQEs and makes sure that
the extra1 and extra2 fields are passed through.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 55 +++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 47 insertions(+), 8 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index abbd2efbe255..c93a9353c88d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2247,18 +2247,15 @@ static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
return __io_fill_cqe(ctx, user_data, res, cflags);
}
-static void __io_req_complete_post(struct io_kiocb *req, s32 res,
- u32 cflags)
+static void __io_req_complete_put(struct io_kiocb *req)
{
- struct io_ring_ctx *ctx = req->ctx;
-
- if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req(req, res, cflags);
/*
* If we're the last reference to this request, add to our locked
* free_list cache.
*/
if (req_ref_put_and_test(req)) {
+ struct io_ring_ctx *ctx = req->ctx;
+
if (req->flags & IO_REQ_LINK_FLAGS) {
if (req->flags & IO_DISARM_MASK)
io_disarm_next(req);
@@ -2281,8 +2278,23 @@ static void __io_req_complete_post(struct io_kiocb *req, s32 res,
}
}
-static void io_req_complete_post(struct io_kiocb *req, s32 res,
- u32 cflags)
+static void __io_req_complete_post(struct io_kiocb *req, s32 res,
+ u32 cflags)
+{
+ if (!(req->flags & REQ_F_CQE_SKIP))
+ __io_fill_cqe_req(req, res, cflags);
+ __io_req_complete_put(req);
+}
+
+static void __io_req_complete_post32(struct io_kiocb *req, s32 res,
+ u32 cflags, u64 extra1, u64 extra2)
+{
+ if (!(req->flags & REQ_F_CQE_SKIP))
+ __io_fill_cqe32_req(req, res, cflags, extra1, extra2);
+ __io_req_complete_put(req);
+}
+
+static void io_req_complete_post(struct io_kiocb *req, s32 res, u32 cflags)
{
struct io_ring_ctx *ctx = req->ctx;
@@ -2293,6 +2305,18 @@ static void io_req_complete_post(struct io_kiocb *req, s32 res,
io_cqring_ev_posted(ctx);
}
+static void io_req_complete_post32(struct io_kiocb *req, s32 res,
+ u32 cflags, u64 extra1, u64 extra2)
+{
+ struct io_ring_ctx *ctx = req->ctx;
+
+ spin_lock(&ctx->completion_lock);
+ __io_req_complete_post32(req, res, cflags, extra1, extra2);
+ io_commit_cqring(ctx);
+ spin_unlock(&ctx->completion_lock);
+ io_cqring_ev_posted(ctx);
+}
+
static inline void io_req_complete_state(struct io_kiocb *req, s32 res,
u32 cflags)
{
@@ -2310,6 +2334,21 @@ static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
io_req_complete_post(req, res, cflags);
}
+static inline void __io_req_complete32(struct io_kiocb *req,
+ unsigned int issue_flags, s32 res,
+ u32 cflags, u64 extra1, u64 extra2)
+{
+ if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
+ req->cqe.res = res;
+ req->cqe.flags = cflags;
+ req->extra1 = extra1;
+ req->extra2 = extra2;
+ req->flags |= REQ_F_COMPLETE_INLINE;
+ } else {
+ io_req_complete_post32(req, res, cflags, extra1, extra2);
+ }
+}
+
static inline void io_req_complete(struct io_kiocb *req, s32 res)
{
__io_req_complete(req, 0, res, 0);
--
2.30.2
* [PATCH v1 06/11] io_uring: modify io_get_cqe for CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (4 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 05/11] io_uring: add CQE32 completion processing Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 07/11] io_uring: flush completions " Stefan Roesch
` (4 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr
Modify accesses to the CQE array to take large CQEs into account. The
index needs to be shifted by one for large CQEs.
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c93a9353c88d..bd352815b9e7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1909,8 +1909,12 @@ static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
{
struct io_rings *rings = ctx->rings;
unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
+ unsigned int shift = 0;
unsigned int free, queued, len;
+ if (ctx->flags & IORING_SETUP_CQE32)
+ shift = 1;
+
/* userspace may cheat modifying the tail, be safe and do min */
queued = min(__io_cqring_events(ctx), ctx->cq_entries);
free = ctx->cq_entries - queued;
@@ -1922,12 +1926,13 @@ static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
ctx->cached_cq_tail++;
ctx->cqe_cached = &rings->cqes[off];
ctx->cqe_sentinel = ctx->cqe_cached + len;
- return ctx->cqe_cached++;
+ ctx->cqe_cached++;
+ return &rings->cqes[off << shift];
}
static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
{
- if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
+ if (likely(ctx->cqe_cached < ctx->cqe_sentinel && !(ctx->flags & IORING_SETUP_CQE32))) {
ctx->cached_cq_tail++;
return ctx->cqe_cached++;
}
--
2.30.2
* [PATCH v1 07/11] io_uring: flush completions for CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (5 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 06/11] io_uring: modify io_get_cqe for CQE32 Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 08/11] io_uring: overflow processing " Stefan Roesch
` (3 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr
This flushes the completions according to their CQE type: the same
processing is done for the default CQE size, but for large CQEs the
extra1 and extra2 fields are filled in.
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index bd352815b9e7..ff6229b6df16 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2877,8 +2877,12 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
- if (!(req->flags & REQ_F_CQE_SKIP))
- __io_fill_cqe_req_filled(ctx, req);
+ if (!(req->flags & REQ_F_CQE_SKIP)) {
+ if (!(ctx->flags & IORING_SETUP_CQE32))
+ __io_fill_cqe_req_filled(ctx, req);
+ else
+ __io_fill_cqe32_req_filled(ctx, req);
+ }
}
io_commit_cqring(ctx);
--
2.30.2
* [PATCH v1 08/11] io_uring: overflow processing for CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (6 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 07/11] io_uring: flush completions " Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 09/11] io_uring: add tracing for additional CQE32 fields Stefan Roesch
` (2 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This adds the overflow processing for large CQEs.
It adds two parameters to the io_cqring_event_overflow function and
uses these fields to initialize the large CQE fields.
Allocate enough space for large CQEs in the overflow structure. If no
large CQEs are used, the size of the allocation is unchanged.
The cqe field can have a different size depending on whether it is a
large CQE or not. To be able to allocate different sizes, the two
fields in the structure are reordered.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ff6229b6df16..50efced63ec9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -220,8 +220,8 @@ struct io_mapped_ubuf {
struct io_ring_ctx;
struct io_overflow_cqe {
- struct io_uring_cqe cqe;
struct list_head list;
+ struct io_uring_cqe cqe;
};
struct io_fixed_file {
@@ -2016,13 +2016,17 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
while (!list_empty(&ctx->cq_overflow_list)) {
struct io_uring_cqe *cqe = io_get_cqe(ctx);
struct io_overflow_cqe *ocqe;
+ size_t cqe_size = sizeof(struct io_uring_cqe);
+
+ if (ctx->flags & IORING_SETUP_CQE32)
+ cqe_size <<= 1;
if (!cqe && !force)
break;
ocqe = list_first_entry(&ctx->cq_overflow_list,
struct io_overflow_cqe, list);
if (cqe)
- memcpy(cqe, &ocqe->cqe, sizeof(*cqe));
+ memcpy(cqe, &ocqe->cqe, cqe_size);
else
io_account_cq_overflow(ctx);
@@ -2111,11 +2115,15 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
}
static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
- s32 res, u32 cflags)
+ s32 res, u32 cflags, u64 extra1, u64 extra2)
{
struct io_overflow_cqe *ocqe;
+ size_t ocq_size = sizeof(struct io_overflow_cqe);
- ocqe = kmalloc(sizeof(*ocqe), GFP_ATOMIC | __GFP_ACCOUNT);
+ if (ctx->flags & IORING_SETUP_CQE32)
+ ocq_size += sizeof(struct io_uring_cqe);
+
+ ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
if (!ocqe) {
/*
* If we're in ring overflow flush mode, or in task cancel mode,
@@ -2134,6 +2142,10 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
ocqe->cqe.user_data = user_data;
ocqe->cqe.res = res;
ocqe->cqe.flags = cflags;
+ if (ctx->flags & IORING_SETUP_CQE32) {
+ ocqe->cqe.b[0].extra1 = extra1;
+ ocqe->cqe.b[0].extra2 = extra2;
+ }
list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
return true;
}
@@ -2155,7 +2167,7 @@ static inline bool __io_fill_cqe(struct io_ring_ctx *ctx, u64 user_data,
WRITE_ONCE(cqe->flags, cflags);
return true;
}
- return io_cqring_event_overflow(ctx, user_data, res, cflags);
+ return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
}
static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
@@ -2177,7 +2189,7 @@ static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
return true;
}
return io_cqring_event_overflow(ctx, req->cqe.user_data,
- req->cqe.res, req->cqe.flags);
+ req->cqe.res, req->cqe.flags, 0, 0);
}
static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
@@ -2241,7 +2253,7 @@ static void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags,
return;
}
- io_cqring_event_overflow(ctx, req->cqe.user_data, res, cflags);
+ io_cqring_event_overflow(ctx, req->cqe.user_data, res, cflags, extra1, extra2);
}
static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
--
2.30.2
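The reordering and variable-size allocation can be sketched in userspace.
This is a hedged model (struct names cqe16/overflow_cqe and the function
ocqe_alloc_size are made up; the kernel uses struct list_head and
kmalloc): with the list head in front and the cqe field last, a CQE32
ring can simply tack one extra default-CQE-sized chunk (extra1 + extra2,
16 bytes) onto the end of the allocation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* default-sized (16-byte) CQE */
struct cqe16 {
	unsigned long long user_data;
	int res;
	unsigned int flags;
};

/* list linkage first, CQE last, so the CQE may grow past the end */
struct overflow_cqe {
	struct overflow_cqe *next;	/* stand-in for struct list_head */
	struct cqe16 cqe;		/* must stay the last field */
};

/* model of the kmalloc size computed in io_cqring_event_overflow() */
static size_t ocqe_alloc_size(bool cqe32)
{
	size_t sz = sizeof(struct overflow_cqe);

	if (cqe32)
		sz += sizeof(struct cqe16);	/* room for extra1/extra2 */
	return sz;
}
```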
* [PATCH v1 09/11] io_uring: add tracing for additional CQE32 fields
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (7 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 08/11] io_uring: overflow processing " Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 10/11] io_uring: enable CQE32 Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 11/11] io_uring: support CQE32 for nop operation Stefan Roesch
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This adds tracing for the extra1 and extra2 fields.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 11 ++++++-----
include/trace/events/io_uring.h | 18 ++++++++++++++----
2 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 50efced63ec9..366f49969b31 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2176,7 +2176,7 @@ static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
struct io_uring_cqe *cqe;
trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
- req->cqe.res, req->cqe.flags);
+ req->cqe.res, req->cqe.flags, 0, 0);
/*
* If we can't get a cq entry, userspace overflowed the
@@ -2200,7 +2200,7 @@ static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
u64 extra2 = req->extra2;
trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
- req->cqe.res, req->cqe.flags);
+ req->cqe.res, req->cqe.flags, extra1, extra2);
/*
* If we can't get a cq entry, userspace overflowed the
@@ -2221,7 +2221,7 @@ static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
static inline bool __io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
{
- trace_io_uring_complete(req->ctx, req, req->cqe.user_data, res, cflags);
+ trace_io_uring_complete(req->ctx, req, req->cqe.user_data, res, cflags, 0, 0);
return __io_fill_cqe(req->ctx, req->cqe.user_data, res, cflags);
}
@@ -2236,7 +2236,8 @@ static void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags,
if (req->flags & REQ_F_CQE_SKIP)
return;
- trace_io_uring_complete(ctx, req, req->user_data, res, cflags);
+ trace_io_uring_complete(ctx, req, req->cqe.user_data, res, cflags,
+ extra1, extra2);
/*
* If we can't get a cq entry, userspace overflowed the
@@ -2260,7 +2261,7 @@ static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
s32 res, u32 cflags)
{
ctx->cq_extra++;
- trace_io_uring_complete(ctx, NULL, user_data, res, cflags);
+ trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);
return __io_fill_cqe(ctx, user_data, res, cflags);
}
diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
index 8477414d6d06..2eb4f4e47de4 100644
--- a/include/trace/events/io_uring.h
+++ b/include/trace/events/io_uring.h
@@ -318,13 +318,16 @@ TRACE_EVENT(io_uring_fail_link,
* @user_data: user data associated with the request
* @res: result of the request
* @cflags: completion flags
+ * @extra1: extra 64-bit data for CQE32
+ * @extra2: extra 64-bit data for CQE32
*
*/
TRACE_EVENT(io_uring_complete,
- TP_PROTO(void *ctx, void *req, u64 user_data, int res, unsigned cflags),
+ TP_PROTO(void *ctx, void *req, u64 user_data, int res, unsigned cflags,
+ u64 extra1, u64 extra2),
- TP_ARGS(ctx, req, user_data, res, cflags),
+ TP_ARGS(ctx, req, user_data, res, cflags, extra1, extra2),
TP_STRUCT__entry (
__field( void *, ctx )
@@ -332,6 +335,8 @@ TRACE_EVENT(io_uring_complete,
__field( u64, user_data )
__field( int, res )
__field( unsigned, cflags )
+ __field( u64, extra1 )
+ __field( u64, extra2 )
),
TP_fast_assign(
@@ -340,12 +345,17 @@ TRACE_EVENT(io_uring_complete,
__entry->user_data = user_data;
__entry->res = res;
__entry->cflags = cflags;
+ __entry->extra1 = extra1;
+ __entry->extra2 = extra2;
),
- TP_printk("ring %p, req %p, user_data 0x%llx, result %d, cflags 0x%x",
+ TP_printk("ring %p, req %p, user_data 0x%llx, result %d, cflags 0x%x "
+ "extra1 %llu extra2 %llu ",
__entry->ctx, __entry->req,
__entry->user_data,
- __entry->res, __entry->cflags)
+ __entry->res, __entry->cflags,
+ (unsigned long long) __entry->extra1,
+ (unsigned long long) __entry->extra2)
);
/**
--
2.30.2
* [PATCH v1 10/11] io_uring: enable CQE32
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (8 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 09/11] io_uring: add tracing for additional CQE32 fields Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 11/11] io_uring: support CQE32 for nop operation Stefan Roesch
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This enables large CQEs in the io_uring setup.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 366f49969b31..70877f1ca0a9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -11731,7 +11731,7 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
- IORING_SETUP_SQE128))
+ IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
return -EINVAL;
return io_uring_create(entries, &p, params);
--
2.30.2
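The effect of the one-line change above can be modeled as a flags-mask
check. This is a simplified sketch of the io_uring_setup() validation
(only the two large-entry flags are modeled here, and the helper name
setup_flags_valid is made up): any bit outside the recognized mask still
yields -EINVAL, but IORING_SETUP_CQE32 is now part of the mask:

```c
#include <assert.h>
#include <stdbool.h>

#define IORING_SETUP_SQE128	(1U << 8)	/* SQEs are 128b */
#define IORING_SETUP_CQE32	(1U << 9)	/* CQEs are 32b */

/* model of the setup-flags check in io_uring_setup() */
static bool setup_flags_valid(unsigned int flags)
{
	const unsigned int allowed = IORING_SETUP_SQE128 |
				     IORING_SETUP_CQE32;

	return (flags & ~allowed) == 0;
}
```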
* [PATCH v1 11/11] io_uring: support CQE32 for nop operation
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
` (9 preceding siblings ...)
2022-04-19 20:56 ` [PATCH v1 10/11] io_uring: enable CQE32 Stefan Roesch
@ 2022-04-19 20:56 ` Stefan Roesch
10 siblings, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-19 20:56 UTC (permalink / raw)
To: io-uring, kernel-team; +Cc: shr, Jens Axboe
This adds support for filling the extra1 and extra2 fields for large
CQEs.
Co-developed-by: Jens Axboe <[email protected]>
Signed-off-by: Stefan Roesch <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
---
fs/io_uring.c | 28 ++++++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 70877f1ca0a9..dd00b77742ac 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -744,6 +744,12 @@ struct io_msg {
u32 len;
};
+struct io_nop {
+ struct file *file;
+ u64 extra1;
+ u64 extra2;
+};
+
struct io_async_connect {
struct sockaddr_storage address;
};
@@ -937,6 +943,7 @@ struct io_kiocb {
struct io_msg msg;
struct io_xattr xattr;
struct io_socket sock;
+ struct io_nop nop;
};
u8 opcode;
@@ -4863,6 +4870,19 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
return 0;
}
+static int io_nop_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+ /*
+ * If the ring is setup with CQE32, relay back addr/addr
+ */
+ if (req->ctx->flags & IORING_SETUP_CQE32) {
+ req->nop.extra1 = READ_ONCE(sqe->addr);
+ req->nop.extra2 = READ_ONCE(sqe->addr2);
+ }
+
+ return 0;
+}
+
/*
* IORING_OP_NOP just posts a completion event, nothing else.
*/
@@ -4873,7 +4893,11 @@ static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
- __io_req_complete(req, issue_flags, 0, 0);
+ if (!(ctx->flags & IORING_SETUP_CQE32))
+ __io_req_complete(req, issue_flags, 0, 0);
+ else
+ __io_req_complete32(req, issue_flags, 0, 0, req->nop.extra1,
+ req->nop.extra2);
return 0;
}
@@ -7345,7 +7369,7 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
switch (req->opcode) {
case IORING_OP_NOP:
- return 0;
+ return io_nop_prep(req, sqe);
case IORING_OP_READV:
case IORING_OP_READ_FIXED:
case IORING_OP_READ:
--
2.30.2
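The nop relay path can be sketched as a pure function. This is a
userspace model (nop_complete and struct nop_cqe are illustrative names):
with CQE32, io_nop_prep() latches sqe->addr and sqe->addr2, and io_nop()
relays them back as extra1/extra2; without CQE32 the extra fields are
simply absent:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* the two extra fields of a 32-byte CQE */
struct nop_cqe {
	uint64_t extra1;
	uint64_t extra2;
};

/*
 * Model of io_nop_prep() + io_nop(): addr/addr2 from the SQE are
 * relayed into extra1/extra2 only when the ring uses CQE32.
 */
static struct nop_cqe nop_complete(bool cqe32, uint64_t addr,
				   uint64_t addr2)
{
	struct nop_cqe c = { 0, 0 };

	if (cqe32) {
		c.extra1 = addr;
		c.extra2 = addr2;
	}
	return c;
}
```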
* Re: [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe
2022-04-19 20:56 ` [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
@ 2022-04-22 1:51 ` Kanchan Joshi
2022-04-22 5:18 ` Kanchan Joshi
2022-04-22 21:26 ` Stefan Roesch
0 siblings, 2 replies; 15+ messages in thread
From: Kanchan Joshi @ 2022-04-22 1:51 UTC (permalink / raw)
To: Stefan Roesch; +Cc: io-uring, kernel-team, Jens Axboe
On Thu, Apr 21, 2022 at 12:02 PM Stefan Roesch <[email protected]> wrote:
>
> This adds the struct io_uring_cqe_extra in the structure io_uring_cqe to
> support large CQE's.
>
> Co-developed-by: Jens Axboe <[email protected]>
> Signed-off-by: Stefan Roesch <[email protected]>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> include/uapi/linux/io_uring.h | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> index ee677dbd6a6d..6f9f9b6a9d15 100644
> --- a/include/uapi/linux/io_uring.h
> +++ b/include/uapi/linux/io_uring.h
> @@ -111,6 +111,7 @@ enum {
> #define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */
> #define IORING_SETUP_SUBMIT_ALL (1U << 7) /* continue submit on error */
> #define IORING_SETUP_SQE128 (1U << 8) /* SQEs are 128b */
> +#define IORING_SETUP_CQE32 (1U << 9) /* CQEs are 32b */
>
> enum {
> IORING_OP_NOP,
> @@ -201,6 +202,11 @@ enum {
> #define IORING_POLL_UPDATE_EVENTS (1U << 1)
> #define IORING_POLL_UPDATE_USER_DATA (1U << 2)
>
> +struct io_uring_cqe_extra {
> + __u64 extra1;
> + __u64 extra2;
> +};
> +
> /*
> * IO completion data structure (Completion Queue Entry)
> */
> @@ -208,6 +214,12 @@ struct io_uring_cqe {
> __u64 user_data; /* sqe->data submission passed back */
> __s32 res; /* result code for this event */
> __u32 flags;
> +
> + /*
> + * If the ring is initialized with IORING_SETUP_CQE32, then this field
> + * contains 16-bytes of padding, doubling the size of the CQE.
> + */
> + struct io_uring_cqe_extra b[0];
> };
Would it be better to replace the struct b[0] with "__u64 extra[]"?
With that, the new fields would be referred to as cqe->extra[0] and cqe->extra[1].
And if we go that route, maybe "aux" sounds better than "extra".
* Re: [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe
2022-04-22 1:51 ` Kanchan Joshi
@ 2022-04-22 5:18 ` Kanchan Joshi
2022-04-22 21:26 ` Stefan Roesch
1 sibling, 0 replies; 15+ messages in thread
From: Kanchan Joshi @ 2022-04-22 5:18 UTC (permalink / raw)
To: Kanchan Joshi; +Cc: Stefan Roesch, io-uring, kernel-team, Jens Axboe
On Fri, Apr 22, 2022 at 07:21:12AM +0530, Kanchan Joshi wrote:
>On Thu, Apr 21, 2022 at 12:02 PM Stefan Roesch <[email protected]> wrote:
>>
>> This adds the struct io_uring_cqe_extra in the structure io_uring_cqe to
>> support large CQE's.
>>
>> Co-developed-by: Jens Axboe <[email protected]>
>> Signed-off-by: Stefan Roesch <[email protected]>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> include/uapi/linux/io_uring.h | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>> index ee677dbd6a6d..6f9f9b6a9d15 100644
>> --- a/include/uapi/linux/io_uring.h
>> +++ b/include/uapi/linux/io_uring.h
>> @@ -111,6 +111,7 @@ enum {
>> #define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */
>> #define IORING_SETUP_SUBMIT_ALL (1U << 7) /* continue submit on error */
>> #define IORING_SETUP_SQE128 (1U << 8) /* SQEs are 128b */
>> +#define IORING_SETUP_CQE32 (1U << 9) /* CQEs are 32b */
>>
>> enum {
>> IORING_OP_NOP,
>> @@ -201,6 +202,11 @@ enum {
>> #define IORING_POLL_UPDATE_EVENTS (1U << 1)
>> #define IORING_POLL_UPDATE_USER_DATA (1U << 2)
>>
>> +struct io_uring_cqe_extra {
>> + __u64 extra1;
>> + __u64 extra2;
>> +};
>> +
>> /*
>> * IO completion data structure (Completion Queue Entry)
>> */
>> @@ -208,6 +214,12 @@ struct io_uring_cqe {
>> __u64 user_data; /* sqe->data submission passed back */
>> __s32 res; /* result code for this event */
>> __u32 flags;
>> +
>> + /*
>> + * If the ring is initialized with IORING_SETUP_CQE32, then this field
>> + * contains 16-bytes of padding, doubling the size of the CQE.
>> + */
>> + struct io_uring_cqe_extra b[0];
>> };
>Would it be better to replace the struct b[0] with "__u64 extra[]"?
>With that, the new fields would be referred to as cqe->extra[0] and cqe->extra[1].
>
>And if we go that route, maybe "aux" sounds better than "extra".
Sorry, I picked v1 (rather than v2) here. This part is the same, though.
* Re: [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe
2022-04-22 1:51 ` Kanchan Joshi
2022-04-22 5:18 ` Kanchan Joshi
@ 2022-04-22 21:26 ` Stefan Roesch
1 sibling, 0 replies; 15+ messages in thread
From: Stefan Roesch @ 2022-04-22 21:26 UTC (permalink / raw)
To: Kanchan Joshi; +Cc: io-uring, kernel-team, Jens Axboe
On 4/21/22 6:51 PM, Kanchan Joshi wrote:
> On Thu, Apr 21, 2022 at 12:02 PM Stefan Roesch <[email protected]> wrote:
>>
>> This adds the struct io_uring_cqe_extra in the structure io_uring_cqe to
>> support large CQE's.
>>
>> Co-developed-by: Jens Axboe <[email protected]>
>> Signed-off-by: Stefan Roesch <[email protected]>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> include/uapi/linux/io_uring.h | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>> index ee677dbd6a6d..6f9f9b6a9d15 100644
>> --- a/include/uapi/linux/io_uring.h
>> +++ b/include/uapi/linux/io_uring.h
>> @@ -111,6 +111,7 @@ enum {
>> #define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */
>> #define IORING_SETUP_SUBMIT_ALL (1U << 7) /* continue submit on error */
>> #define IORING_SETUP_SQE128 (1U << 8) /* SQEs are 128b */
>> +#define IORING_SETUP_CQE32 (1U << 9) /* CQEs are 32b */
>>
>> enum {
>> IORING_OP_NOP,
>> @@ -201,6 +202,11 @@ enum {
>> #define IORING_POLL_UPDATE_EVENTS (1U << 1)
>> #define IORING_POLL_UPDATE_USER_DATA (1U << 2)
>>
>> +struct io_uring_cqe_extra {
>> + __u64 extra1;
>> + __u64 extra2;
>> +};
>> +
>> /*
>> * IO completion data structure (Completion Queue Entry)
>> */
>> @@ -208,6 +214,12 @@ struct io_uring_cqe {
>> __u64 user_data; /* sqe->data submission passed back */
>> __s32 res; /* result code for this event */
>> __u32 flags;
>> +
>> + /*
>> + * If the ring is initialized with IORING_SETUP_CQE32, then this field
>> + * contains 16-bytes of padding, doubling the size of the CQE.
>> + */
>> + struct io_uring_cqe_extra b[0];
>> };
> Would it be better to replace the struct b[0] with "__u64 extra[]"?
> With that, the new fields would be referred to as cqe->extra[0] and cqe->extra[1].
>
> And if we go that route, maybe "aux" sounds better than "extra".
Let's use __u64 big_cqe[];
Thread overview: 15+ messages
2022-04-19 20:56 [PATCH v1 00/11] add large CQE support for io-uring Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 01/11] io_uring: support CQE32 in io_uring_cqe Stefan Roesch
2022-04-22 1:51 ` Kanchan Joshi
2022-04-22 5:18 ` Kanchan Joshi
2022-04-22 21:26 ` Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 02/11] io_uring: wire up inline completion path for CQE32 Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 03/11] io_uring: change ring size calculation " Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 04/11] io_uring: add CQE32 setup processing Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 05/11] io_uring: add CQE32 completion processing Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 06/11] io_uring: modify io_get_cqe for CQE32 Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 07/11] io_uring: flush completions " Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 08/11] io_uring: overflow processing " Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 09/11] io_uring: add tracing for additional CQE32 fields Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 10/11] io_uring: enable CQE32 Stefan Roesch
2022-04-19 20:56 ` [PATCH v1 11/11] io_uring: support CQE32 for nop operation Stefan Roesch