* [PATCH 1/8] io_uring: kill an outdated comment
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Request referencing changed a while ago, and there is no notion of
submission/completion references left; kill an outdated comment mentioning them.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0482087b7c64..339bc19a708a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1885,10 +1885,6 @@ static void io_queue_async(struct io_kiocb *req, int ret)
io_req_task_queue(req);
break;
case IO_APOLL_ABORTED:
- /*
- * Queued up for async execution, worker will release
- * submit reference when the iocb is actually submitted.
- */
io_kbuf_recycle(req, 0);
io_queue_iowq(req, NULL);
break;
--
2.37.2
* [PATCH 2/8] io_uring: use io_cq_lock consistently
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
There is one place where we forgot to replace hand-coded spin locking
with io_cq_lock(); change it to be consistent with the rest. Note that
the unlock side already uses __io_cq_unlock_post().
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 339bc19a708a..b5245c5d102c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1327,7 +1327,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
struct io_wq_work_node *node, *prev;
struct io_submit_state *state = &ctx->submit_state;
- spin_lock(&ctx->completion_lock);
+ io_cq_lock(ctx);
wq_list_for_each(node, prev, &state->compl_reqs) {
struct io_kiocb *req = container_of(node, struct io_kiocb,
comp_list);
--
2.37.2
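For context, io_cq_lock() is a thin wrapper taking ctx->completion_lock,
and __io_cq_unlock_post() pairs the unlock with committing the CQ ring
and waking waiters. A rough sketch of the pattern (simplified, not
verbatim kernel source):

/* simplified sketch of the CQ locking helpers this patch standardises on */
static inline void io_cq_lock(struct io_ring_ctx *ctx)
	__acquires(ctx->completion_lock)
{
	spin_lock(&ctx->completion_lock);
}

static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
	__releases(ctx->completion_lock)
{
	io_commit_cqring(ctx);		/* publish the new CQE tail */
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);	/* eventfd signalling and waiter wakeups */
}

Funnelling both sides through the helpers keeps the commit/wakeup
pairing in one place should the CQ locking scheme ever change.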
* [PATCH 3/8] io_uring/net: reshuffle error handling
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We should prioritise send/recv retry cases over failures, as they're
more important. Move the -ERESTARTSYS check after the retry handling.
The reorder doesn't change behaviour: the retry branch requires ret > 0
while -ERESTARTSYS is negative, so at most one of them can match.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 7047c1342541..0bba804a955d 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -291,13 +291,13 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
if (ret < min_ret) {
if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
return io_setup_async_msg(req, kmsg, issue_flags);
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
sr->done_io += ret;
req->flags |= REQ_F_PARTIAL_IO;
return io_setup_async_msg(req, kmsg, issue_flags);
}
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
req_set_fail(req);
}
/* fast path, check for non-NULL to avoid function call */
@@ -352,8 +352,6 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
if (ret < min_ret) {
if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
return -EAGAIN;
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
sr->len -= ret;
sr->buf += ret;
@@ -361,6 +359,8 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
req->flags |= REQ_F_PARTIAL_IO;
return -EAGAIN;
}
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
req_set_fail(req);
}
if (ret >= 0)
@@ -751,13 +751,13 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
}
return ret;
}
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
sr->done_io += ret;
req->flags |= REQ_F_PARTIAL_IO;
return io_setup_async_msg(req, kmsg, issue_flags);
}
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
req_set_fail(req);
} else if ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
req_set_fail(req);
@@ -847,8 +847,6 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
return -EAGAIN;
}
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
sr->len -= ret;
sr->buf += ret;
@@ -856,6 +854,8 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
req->flags |= REQ_F_PARTIAL_IO;
return -EAGAIN;
}
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
req_set_fail(req);
} else if ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
out_free:
--
2.37.2
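With the patch all four handlers share the same shape. A condensed
sketch of the resulting order (illustrative, mirroring the io_sendmsg()
hunk above):

if (ret < min_ret) {
	if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
		return io_setup_async_msg(req, kmsg, issue_flags);
	/* partial transfer: save progress and retry; handled first as
	 * it's the case we actually care about */
	if (ret > 0 && io_net_retry(sock, flags)) {
		sr->done_io += ret;
		req->flags |= REQ_F_PARTIAL_IO;
		return io_setup_async_msg(req, kmsg, issue_flags);
	}
	/* hard failures last: convert a restart into -EINTR and fail */
	if (ret == -ERESTARTSYS)
		ret = -EINTR;
	req_set_fail(req);
}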
* [PATCH 4/8] io_uring/net: use async caches for async prep
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
send/recv have async_data caches, but they are only used from within
the issue handlers. Extend their use to ->prep_async as well; that
should be handy with links and IOSQE_ASYNC.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 16 +++++++++++++---
io_uring/opdef.c | 2 ++
2 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 0bba804a955d..fa54a35191d7 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -126,8 +126,8 @@ static void io_netmsg_recycle(struct io_kiocb *req, unsigned int issue_flags)
}
}
-static struct io_async_msghdr *io_recvmsg_alloc_async(struct io_kiocb *req,
- unsigned int issue_flags)
+static struct io_async_msghdr *io_msg_alloc_async(struct io_kiocb *req,
+ unsigned int issue_flags)
{
struct io_ring_ctx *ctx = req->ctx;
struct io_cache_entry *entry;
@@ -148,6 +148,12 @@ static struct io_async_msghdr *io_recvmsg_alloc_async(struct io_kiocb *req,
return NULL;
}
+static inline struct io_async_msghdr *io_msg_alloc_async_prep(struct io_kiocb *req)
+{
+ /* ->prep_async is always called from the submission context */
+ return io_msg_alloc_async(req, 0);
+}
+
static int io_setup_async_msg(struct io_kiocb *req,
struct io_async_msghdr *kmsg,
unsigned int issue_flags)
@@ -156,7 +162,7 @@ static int io_setup_async_msg(struct io_kiocb *req,
if (req_has_async_data(req))
return -EAGAIN;
- async_msg = io_recvmsg_alloc_async(req, issue_flags);
+ async_msg = io_msg_alloc_async(req, issue_flags);
if (!async_msg) {
kfree(kmsg->free_iov);
return -ENOMEM;
@@ -217,6 +223,8 @@ int io_sendmsg_prep_async(struct io_kiocb *req)
{
int ret;
+ if (!io_msg_alloc_async_prep(req))
+ return -ENOMEM;
ret = io_sendmsg_copy_hdr(req, req->async_data);
if (!ret)
req->flags |= REQ_F_NEED_CLEANUP;
@@ -504,6 +512,8 @@ int io_recvmsg_prep_async(struct io_kiocb *req)
{
int ret;
+ if (!io_msg_alloc_async_prep(req))
+ return -ENOMEM;
ret = io_recvmsg_copy_hdr(req, req->async_data);
if (!ret)
req->flags |= REQ_F_NEED_CLEANUP;
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 04693e4a33c7..c6e089900394 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -146,6 +146,7 @@ const struct io_op_def io_op_defs[] = {
.unbound_nonreg_file = 1,
.pollout = 1,
.ioprio = 1,
+ .manual_alloc = 1,
.name = "SENDMSG",
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
@@ -163,6 +164,7 @@ const struct io_op_def io_op_defs[] = {
.pollin = 1,
.buffer_select = 1,
.ioprio = 1,
+ .manual_alloc = 1,
.name = "RECVMSG",
#if defined(CONFIG_NET)
.async_size = sizeof(struct io_async_msghdr),
--
2.37.2
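A userspace sketch of when this matters, using liburing (hypothetical
example, error handling trimmed): an IOSQE_ASYNC sendmsg skips the
inline issue attempt and goes through ->prep_async, which can now be
fed from the cache.

#include <liburing.h>

/* Sketch: force a sendmsg through async prep with IOSQE_ASYNC */
static int send_forced_async(struct io_uring *ring, int sockfd,
			     struct msghdr *msg)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;
	io_uring_prep_sendmsg(sqe, sockfd, msg, 0);
	/* skip the inline attempt, go straight to async execution */
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);

	ret = io_uring_submit(ring);
	if (ret < 0)
		return ret;
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	ret = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	return ret;
}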
* [PATCH 5/8] io_uring/net: io_async_msghdr caches for sendzc
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
We already keep io_async_msghdr caches for normal send/recv requests;
use them for zerocopy sends as well.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index fa54a35191d7..ff1fed00876f 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -196,10 +196,9 @@ int io_sendzc_prep_async(struct io_kiocb *req)
if (!zc->addr || req_has_async_data(req))
return 0;
- if (io_alloc_async_data(req))
+ io = io_msg_alloc_async_prep(req);
+ if (!io)
return -ENOMEM;
-
- io = req->async_data;
ret = move_addr_to_kernel(zc->addr, zc->addr_len, &io->addr);
return ret;
}
@@ -212,9 +211,9 @@ static int io_setup_async_addr(struct io_kiocb *req,
if (!addr || req_has_async_data(req))
return -EAGAIN;
- if (io_alloc_async_data(req))
+ io = io_msg_alloc_async(req, issue_flags);
+ if (!io)
return -ENOMEM;
- io = req->async_data;
memcpy(&io->addr, addr, sizeof(io->addr));
return -EAGAIN;
}
--
2.37.2
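For reference, the shared allocation path sendzc now uses looks roughly
like this (simplified sketch of io_msg_alloc_async() from the previous
patch, not verbatim source):

static struct io_async_msghdr *io_msg_alloc_async(struct io_kiocb *req,
						  unsigned int issue_flags)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_cache_entry *entry;

	/* ring locked: try to reuse a cached entry first */
	if (!(issue_flags & IO_URING_F_UNLOCKED)) {
		entry = io_alloc_cache_get(&ctx->netmsg_cache);
		if (entry) {
			struct io_async_msghdr *hdr;

			hdr = container_of(entry, struct io_async_msghdr, cache);
			req->flags |= REQ_F_ASYNC_DATA;
			req->async_data = hdr;
			return hdr;
		}
	}
	/* otherwise fall back to a plain allocation */
	if (!io_alloc_async_data(req))
		return req->async_data;
	return NULL;
}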
* [PATCH 6/8] io_uring/net: add non-bvec sg chunking callback
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Add an sg_from_iter() callback for non-bvec zerocopy sends, which lets
us remove some extra steps from io_sg_from_iter(). The only thing the
new function has to do before handing control to
__zerocopy_sg_from_iter() is to check whether the skb has managed frags
and, if so, downgrade them.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index ff1fed00876f..4dbdb59968c3 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -948,6 +948,13 @@ int io_sendzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
return 0;
}
+static int io_sg_from_iter_iovec(struct sock *sk, struct sk_buff *skb,
+ struct iov_iter *from, size_t length)
+{
+ skb_zcopy_downgrade_managed(skb);
+ return __zerocopy_sg_from_iter(NULL, sk, skb, from, length);
+}
+
static int io_sg_from_iter(struct sock *sk, struct sk_buff *skb,
struct iov_iter *from, size_t length)
{
@@ -958,13 +965,10 @@ static int io_sg_from_iter(struct sock *sk, struct sk_buff *skb,
ssize_t copied = 0;
unsigned long truesize = 0;
- if (!shinfo->nr_frags)
+ if (!frag)
shinfo->flags |= SKBFL_MANAGED_FRAG_REFS;
-
- if (!skb_zcopy_managed(skb) || !iov_iter_is_bvec(from)) {
- skb_zcopy_downgrade_managed(skb);
+ else if (unlikely(!skb_zcopy_managed(skb)))
return __zerocopy_sg_from_iter(NULL, sk, skb, from, length);
- }
bi.bi_size = min(from->count, length);
bi.bi_bvec_done = from->iov_offset;
@@ -1044,6 +1048,7 @@ int io_sendzc(struct io_kiocb *req, unsigned int issue_flags)
(u64)(uintptr_t)zc->buf, zc->len);
if (unlikely(ret))
return ret;
+ msg.sg_from_iter = io_sg_from_iter;
} else {
ret = import_single_range(WRITE, zc->buf, zc->len, &iov,
&msg.msg_iter);
@@ -1052,6 +1057,7 @@ int io_sendzc(struct io_kiocb *req, unsigned int issue_flags)
ret = io_notif_account_mem(zc->notif, zc->len);
if (unlikely(ret))
return ret;
+ msg.sg_from_iter = io_sg_from_iter_iovec;
}
msg_flags = zc->msg_flags | MSG_ZEROCOPY;
@@ -1062,7 +1068,6 @@ int io_sendzc(struct io_kiocb *req, unsigned int issue_flags)
msg.msg_flags = msg_flags;
msg.msg_ubuf = &io_notif_to_data(zc->notif)->uarg;
- msg.sg_from_iter = io_sg_from_iter;
ret = sock_sendmsg(sock, &msg);
if (unlikely(ret < min_ret)) {
--
2.37.2
* [PATCH 7/8] io_uring/net: refactor io_sr_msg types
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
In preparation for using struct io_sr_msg for zerocopy sends, clean up
its types. First, flags can be u16 as userspace provides it in the u16
sqe->ioprio field, and the same is true for addr_len. This saves us
4 bytes. Also use unsigned int for len and done_io, both of which are
likewise limited to u32.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 4dbdb59968c3..97778cd1306c 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -55,21 +55,21 @@ struct io_sr_msg {
struct user_msghdr __user *umsg;
void __user *buf;
};
+ unsigned len;
+ unsigned done_io;
unsigned msg_flags;
- unsigned flags;
- size_t len;
- size_t done_io;
+ u16 flags;
};
struct io_sendzc {
struct file *file;
void __user *buf;
- size_t len;
+ unsigned len;
+ unsigned done_io;
unsigned msg_flags;
- unsigned flags;
- unsigned addr_len;
+ u16 flags;
+ u16 addr_len;
void __user *addr;
- size_t done_io;
struct io_kiocb *notif;
};
--
2.37.2
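The narrowing is safe because both values originate from 16-bit SQE
fields; the prep helpers read them along the lines of (sketch, field
names per the 6.1-era uapi):

/* sketch: both values come out of __u16 SQE fields, so u16 storage
 * in io_sr_msg loses nothing */
sr->flags = READ_ONCE(sqe->ioprio);		/* __u16 in io_uring_sqe */
zc->addr_len = READ_ONCE(sqe->addr_len);	/* __u16, sendzc only */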
* [PATCH 8/8] io_uring/net: use io_sr_msg for sendzc
From: Pavel Begunkov @ 2022-09-08 12:20 UTC
To: io-uring; +Cc: Jens Axboe, asml.silence
Reuse struct io_sr_msg for zerocopy sends, which is handy. There is
only one zerocopy-specific field, namely .notif, and we have enough
space for it.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/net.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/io_uring/net.c b/io_uring/net.c
index 97778cd1306c..acafb2e3dd09 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -59,15 +59,7 @@ struct io_sr_msg {
unsigned done_io;
unsigned msg_flags;
u16 flags;
-};
-
-struct io_sendzc {
- struct file *file;
- void __user *buf;
- unsigned len;
- unsigned done_io;
- unsigned msg_flags;
- u16 flags;
+ /* used only for sendzc */
u16 addr_len;
void __user *addr;
struct io_kiocb *notif;
@@ -190,7 +182,7 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
int io_sendzc_prep_async(struct io_kiocb *req)
{
- struct io_sendzc *zc = io_kiocb_to_cmd(req, struct io_sendzc);
+ struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
struct io_async_msghdr *io;
int ret;
@@ -890,7 +882,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
void io_sendzc_cleanup(struct io_kiocb *req)
{
- struct io_sendzc *zc = io_kiocb_to_cmd(req, struct io_sendzc);
+ struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
zc->notif->flags |= REQ_F_CQE_SKIP;
io_notif_flush(zc->notif);
@@ -899,7 +891,7 @@ void io_sendzc_cleanup(struct io_kiocb *req)
int io_sendzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
- struct io_sendzc *zc = io_kiocb_to_cmd(req, struct io_sendzc);
+ struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
struct io_ring_ctx *ctx = req->ctx;
struct io_kiocb *notif;
@@ -1009,7 +1001,7 @@ static int io_sg_from_iter(struct sock *sk, struct sk_buff *skb,
int io_sendzc(struct io_kiocb *req, unsigned int issue_flags)
{
struct sockaddr_storage __address, *addr = NULL;
- struct io_sendzc *zc = io_kiocb_to_cmd(req, struct io_sendzc);
+ struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
struct msghdr msg;
struct iovec iov;
struct socket *sock;
--
2.37.2
* Re: [PATCH for-next 0/8] random io_uring 6.1 changes
From: Jens Axboe @ 2022-09-08 14:59 UTC
To: io-uring, Pavel Begunkov
On Thu, 8 Sep 2022 13:20:26 +0100, Pavel Begunkov wrote:
> The highlight here is expanding use of io_sr_msg caches, but nothing
> noteworthy in general, just prep cleanups before some other upcoming
> work.
>
> Pavel Begunkov (8):
> io_uring: kill an outdated comment
> io_uring: use io_cq_lock consistently
> io_uring/net: reshuffle error handling
> io_uring/net: use async caches for async prep
> io_uring/net: io_async_msghdr caches for sendzc
> io_uring/net: add non-bvec sg chunking callback
> io_uring/net: refactor io_sr_msg types
> io_uring/net: use io_sr_msg for sendzc
>
> [...]
Applied, thanks!
[1/8] io_uring: kill an outdated comment
commit: 89030aa91f5f8bab7e35c5a82e0b279975034b2c
[2/8] io_uring: use io_cq_lock consistently
commit: 794aeb01d83ec5c1cb5bcf4a14de60303f82aea4
[3/8] io_uring/net: reshuffle error handling
commit: 6b505138766270555754093a47fb3400cff167f1
[4/8] io_uring/net: use async caches for async prep
commit: 211fd9172521f9631ba9d207410810f40e6990e2
[5/8] io_uring/net: io_async_msghdr caches for sendzc
commit: 4331248c61de902ac5831f5c0c55a3d93ab2e3ba
[6/8] io_uring/net: add non-bvec sg chunking callback
commit: 6f8a4bc02e2f9d2e66d3d06eb8323dbd344ec417
[7/8] io_uring/net: refactor io_sr_msg types
commit: b3f3e9e18b240f0ecde85901ad0c7f19e12870b9
[8/8] io_uring/net: use io_sr_msg for sendzc
commit: d5e0b61591ea1993c8c79bfa1aedb437016d212c
Best regards,
--
Jens Axboe