* [PATCH 00/12] bundled cleanups and improvements
@ 2020-10-10 17:34 Pavel Begunkov
2020-10-10 17:34 ` [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs Pavel Begunkov
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Only [1] considerably affects performance (as reported by Roman Gershman); the
others are mostly cleanups.
[1-2] are on the surface cleanups following ->files changes.
[3-5] address ->file grabbing
[6-7] are some preparations around timeouts
[8,9] are independent cleanups
[10-12] toss around files_register() bits
Pavel Begunkov (12):
io_uring: don't io_prep_async_work() linked reqs
io_uring: clean up ->files grabbing
io_uring: kill extra check in fixed io_file_get()
io_uring: simplify io_file_get()
io_uring: improve submit_state.ios_left accounting
io_uring: use a separate struct for timeout_remove
io_uring: remove timeout.list after hrtimer cancel
io_uring: clean leftovers after splitting issue
io_uring: don't delay io_init_req() error check
io_uring: clean file_data access in files_register
io_uring: refactor *files_register()'s error paths
io_uring: keep a pointer ref_node in file_data
fs/io_uring.c | 275 ++++++++++++++++++++------------------------------
1 file changed, 107 insertions(+), 168 deletions(-)
--
2.24.0
* [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: Roman Gershman
There is no real reason left for preparing the io-wq work context for linked
requests in advance; remove it, as it might become a bottleneck in some
cases.
Reported-by: Roman Gershman <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 09494ca1b990..272abe03a79e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5672,9 +5672,6 @@ static int io_req_defer_prep(struct io_kiocb *req,
ret = io_prep_work_files(req);
if (unlikely(ret))
return ret;
-
- io_prep_async_work(req);
-
return io_req_prep(req, sqe);
}
--
2.24.0
* [PATCH 02/12] io_uring: clean up ->files grabbing
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Move work.files grabbing into io_prep_async_work(), alongside all the other
work resource initialisation. We don't need to keep it separate now that
->ring_fd/file are gone. It also allows us to not grab it at all when a
request is not going to io-wq.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 52 +++++++++++++--------------------------------------
1 file changed, 13 insertions(+), 39 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 272abe03a79e..3a65bcba5a7b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -967,7 +967,6 @@ static void io_queue_linked_timeout(struct io_kiocb *req);
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
struct io_uring_files_update *ip,
unsigned nr_args);
-static int io_prep_work_files(struct io_kiocb *req);
static void __io_clean_op(struct io_kiocb *req);
static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
int fd, struct file **out_file, bool fixed);
@@ -1222,16 +1221,28 @@ static bool io_req_clean_work(struct io_kiocb *req)
static void io_prep_async_work(struct io_kiocb *req)
{
const struct io_op_def *def = &io_op_defs[req->opcode];
+ struct io_ring_ctx *ctx = req->ctx;
io_req_init_async(req);
if (req->flags & REQ_F_ISREG) {
- if (def->hash_reg_file || (req->ctx->flags & IORING_SETUP_IOPOLL))
+ if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
io_wq_hash_work(&req->work, file_inode(req->file));
} else {
if (def->unbound_nonreg_file)
req->work.flags |= IO_WQ_WORK_UNBOUND;
}
+ if (!req->work.files && io_op_defs[req->opcode].file_table &&
+ !(req->flags & REQ_F_NO_FILE_TABLE)) {
+ req->work.files = get_files_struct(current);
+ get_nsproxy(current->nsproxy);
+ req->work.nsproxy = current->nsproxy;
+ req->flags |= REQ_F_INFLIGHT;
+
+ spin_lock_irq(&ctx->inflight_lock);
+ list_add(&req->inflight_entry, &ctx->inflight_list);
+ spin_unlock_irq(&ctx->inflight_lock);
+ }
if (!req->work.mm && def->needs_mm) {
mmgrab(current->mm);
req->work.mm = current->mm;
@@ -5662,16 +5673,10 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
static int io_req_defer_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
- int ret;
-
if (!sqe)
return 0;
if (io_alloc_async_data(req))
return -EAGAIN;
-
- ret = io_prep_work_files(req);
- if (unlikely(ret))
- return ret;
return io_req_prep(req, sqe);
}
@@ -6015,33 +6020,6 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
return io_file_get(state, req, fd, &req->file, fixed);
}
-static int io_grab_files(struct io_kiocb *req)
-{
- struct io_ring_ctx *ctx = req->ctx;
-
- io_req_init_async(req);
-
- if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))
- return 0;
-
- req->work.files = get_files_struct(current);
- get_nsproxy(current->nsproxy);
- req->work.nsproxy = current->nsproxy;
- req->flags |= REQ_F_INFLIGHT;
-
- spin_lock_irq(&ctx->inflight_lock);
- list_add(&req->inflight_entry, &ctx->inflight_list);
- spin_unlock_irq(&ctx->inflight_lock);
- return 0;
-}
-
-static inline int io_prep_work_files(struct io_kiocb *req)
-{
- if (!io_op_defs[req->opcode].file_table)
- return 0;
- return io_grab_files(req);
-}
-
static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
{
struct io_timeout_data *data = container_of(timer,
@@ -6153,9 +6131,6 @@ static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs)
if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
if (!io_arm_poll_handler(req)) {
punt:
- ret = io_prep_work_files(req);
- if (unlikely(ret))
- goto err;
/*
* Queued up for async execution, worker will release
* submit reference when the iocb is actually submitted.
@@ -6169,7 +6144,6 @@ static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs)
}
if (unlikely(ret)) {
-err:
/* un-prep timeout, so it'll be killed as any other linked */
req->flags &= ~REQ_F_LINK_TIMEOUT;
req_set_fail_links(req);
--
2.24.0
* [PATCH 03/12] io_uring: kill extra check in fixed io_file_get()
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
ctx->nr_user_files == 0 IFF ctx->file_data == NULL, and in that case fixed
files are not used. Hence, verifying fds only against ctx->nr_user_files is
enough. Remove the other check from the hot path.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3a65bcba5a7b..39c37cef9ce0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5987,8 +5987,7 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
struct file *file;
if (fixed) {
- if (unlikely(!ctx->file_data ||
- (unsigned) fd >= ctx->nr_user_files))
+ if (unlikely((unsigned int)fd >= ctx->nr_user_files))
return -EBADF;
fd = array_index_nospec(fd, ctx->nr_user_files);
file = io_file_from_index(ctx, fd);
--
2.24.0
* [PATCH 04/12] io_uring: simplify io_file_get()
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Keep the ->needs_file_no_error check out of io_file_get() and let callers
handle it; that makes it more straightforward. Also, as the only error it can
hand back is -EBADF, make it return a file pointer or NULL.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 39c37cef9ce0..ffdaea55e820 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -968,8 +968,8 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
struct io_uring_files_update *ip,
unsigned nr_args);
static void __io_clean_op(struct io_kiocb *req);
-static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
- int fd, struct file **out_file, bool fixed);
+static struct file *io_file_get(struct io_submit_state *state,
+ struct io_kiocb *req, int fd, bool fixed);
static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs);
static void io_file_put_work(struct work_struct *work);
@@ -3486,7 +3486,6 @@ static int __io_splice_prep(struct io_kiocb *req,
{
struct io_splice* sp = &req->splice;
unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
- int ret;
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
@@ -3498,10 +3497,10 @@ static int __io_splice_prep(struct io_kiocb *req,
if (unlikely(sp->flags & ~valid_flags))
return -EINVAL;
- ret = io_file_get(NULL, req, READ_ONCE(sqe->splice_fd_in), &sp->file_in,
- (sp->flags & SPLICE_F_FD_IN_FIXED));
- if (ret)
- return ret;
+ sp->file_in = io_file_get(NULL, req, READ_ONCE(sqe->splice_fd_in),
+ (sp->flags & SPLICE_F_FD_IN_FIXED));
+ if (!sp->file_in)
+ return -EBADF;
req->flags |= REQ_F_NEED_CLEANUP;
if (!S_ISREG(file_inode(sp->file_in)->i_mode)) {
@@ -5980,15 +5979,15 @@ static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
return table->files[index & IORING_FILE_TABLE_MASK];
}
-static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
- int fd, struct file **out_file, bool fixed)
+static struct file *io_file_get(struct io_submit_state *state,
+ struct io_kiocb *req, int fd, bool fixed)
{
struct io_ring_ctx *ctx = req->ctx;
struct file *file;
if (fixed) {
if (unlikely((unsigned int)fd >= ctx->nr_user_files))
- return -EBADF;
+ return NULL;
fd = array_index_nospec(fd, ctx->nr_user_files);
file = io_file_from_index(ctx, fd);
if (file) {
@@ -6000,11 +5999,7 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
file = __io_file_get(state, fd);
}
- if (file || io_op_defs[req->opcode].needs_file_no_error) {
- *out_file = file;
- return 0;
- }
- return -EBADF;
+ return file;
}
static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
@@ -6016,7 +6011,10 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
if (unlikely(!fixed && io_async_submit(req->ctx)))
return -EBADF;
- return io_file_get(state, req, fd, &req->file, fixed);
+ req->file = io_file_get(state, req, fd, fixed);
+ if (req->file || io_op_defs[req->opcode].needs_file_no_error)
+ return 0;
+ return -EBADF;
}
static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
--
2.24.0
* [PATCH 05/12] io_uring: improve submit_state.ios_left accounting
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
state->ios_left isn't decremented for requests that don't need a file, so it
might be larger than the number of SQEs left. In some circumstances that
makes us grab more file references than needed, imposing an extra put.
Decrement ios_left once for each request.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ffdaea55e820..250eefbe13cb 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2581,7 +2581,6 @@ static struct file *__io_file_get(struct io_submit_state *state, int fd)
if (state->file) {
if (state->fd == fd) {
state->has_refs--;
- state->ios_left--;
return state->file;
}
__io_state_file_put(state);
@@ -2591,8 +2590,7 @@ static struct file *__io_file_get(struct io_submit_state *state, int fd)
return NULL;
state->fd = fd;
- state->ios_left--;
- state->has_refs = state->ios_left;
+ state->has_refs = state->ios_left - 1;
return state->file;
}
@@ -6386,7 +6384,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
struct io_submit_state *state)
{
unsigned int sqe_flags;
- int id;
+ int id, ret;
req->opcode = READ_ONCE(sqe->opcode);
req->user_data = READ_ONCE(sqe->user_data);
@@ -6432,7 +6430,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if (!io_op_defs[req->opcode].needs_file)
return 0;
- return io_req_set_file(state, req, READ_ONCE(sqe->fd));
+ ret = io_req_set_file(state, req, READ_ONCE(sqe->fd));
+ state->ios_left--;
+ return ret;
}
static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
--
2.24.0
* [PATCH 06/12] io_uring: use a separate struct for timeout_remove
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't use struct io_timeout for both IORING_OP_TIMEOUT and
IORING_OP_TIMEOUT_REMOVE; they're quite different. Splitting them in two
allows us to remove an unused field from struct io_timeout and to kill
->flags, which neither of them uses. It also makes the code easier to
follow, especially for timeout remove.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 250eefbe13cb..09b8f2c9ae7e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -434,13 +434,16 @@ struct io_cancel {
struct io_timeout {
struct file *file;
- u64 addr;
- int flags;
u32 off;
u32 target_seq;
struct list_head list;
};
+struct io_timeout_rem {
+ struct file *file;
+ u64 addr;
+};
+
struct io_rw {
/* NOTE: kiocb has the file as the first member, so don't do it here */
struct kiocb kiocb;
@@ -644,6 +647,7 @@ struct io_kiocb {
struct io_sync sync;
struct io_cancel cancel;
struct io_timeout timeout;
+ struct io_timeout_rem timeout_rem;
struct io_connect connect;
struct io_sr_msg sr_msg;
struct io_open open;
@@ -5360,14 +5364,10 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
return -EINVAL;
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
return -EINVAL;
- if (sqe->ioprio || sqe->buf_index || sqe->len)
- return -EINVAL;
-
- req->timeout.addr = READ_ONCE(sqe->addr);
- req->timeout.flags = READ_ONCE(sqe->timeout_flags);
- if (req->timeout.flags)
+ if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->timeout_flags)
return -EINVAL;
+ req->timeout_rem.addr = READ_ONCE(sqe->addr);
return 0;
}
@@ -5380,7 +5380,7 @@ static int io_timeout_remove(struct io_kiocb *req)
int ret;
spin_lock_irq(&ctx->completion_lock);
- ret = io_timeout_cancel(ctx, req->timeout.addr);
+ ret = io_timeout_cancel(ctx, req->timeout_rem.addr);
io_cqring_fill_event(req, ret);
io_commit_cqring(ctx);
--
2.24.0
* [PATCH 07/12] io_uring: remove timeout.list after hrtimer cancel
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Remove a timeout from ctx->timeout_list only after hrtimer_try_to_cancel()
has successfully cancelled it. With this we don't need to care whether there
was a race and it was already removed in io_timeout_fn(), and that will be
handy for the following patches.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 09b8f2c9ae7e..3ce72d48eb21 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5301,16 +5301,10 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
unsigned long flags;
spin_lock_irqsave(&ctx->completion_lock, flags);
+ list_del_init(&req->timeout.list);
atomic_set(&req->ctx->cq_timeouts,
atomic_read(&req->ctx->cq_timeouts) + 1);
- /*
- * We could be racing with timeout deletion. If the list is empty,
- * then timeout lookup already found it and will be handling it.
- */
- if (!list_empty(&req->timeout.list))
- list_del_init(&req->timeout.list);
-
io_cqring_fill_event(req, -ETIME);
io_commit_cqring(ctx);
spin_unlock_irqrestore(&ctx->completion_lock, flags);
@@ -5326,11 +5320,10 @@ static int __io_timeout_cancel(struct io_kiocb *req)
struct io_timeout_data *io = req->async_data;
int ret;
- list_del_init(&req->timeout.list);
-
ret = hrtimer_try_to_cancel(&io->timer);
if (ret == -1)
return -EALREADY;
+ list_del_init(&req->timeout.list);
req_set_fail_links(req);
req->flags |= REQ_F_COMP_LOCKED;
--
2.24.0
* [PATCH 08/12] io_uring: clean leftovers after splitting issue
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Kill the extra ifs in io_issue_sqe() and place the send/recv[msg] calls
directly under their respective switch cases.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3ce72d48eb21..2e0105c373ae 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5831,18 +5831,16 @@ static int io_issue_sqe(struct io_kiocb *req, bool force_nonblock,
ret = io_sync_file_range(req, force_nonblock);
break;
case IORING_OP_SENDMSG:
+ ret = io_sendmsg(req, force_nonblock, cs);
+ break;
case IORING_OP_SEND:
- if (req->opcode == IORING_OP_SENDMSG)
- ret = io_sendmsg(req, force_nonblock, cs);
- else
- ret = io_send(req, force_nonblock, cs);
+ ret = io_send(req, force_nonblock, cs);
break;
case IORING_OP_RECVMSG:
+ ret = io_recvmsg(req, force_nonblock, cs);
+ break;
case IORING_OP_RECV:
- if (req->opcode == IORING_OP_RECVMSG)
- ret = io_recvmsg(req, force_nonblock, cs);
- else
- ret = io_recv(req, force_nonblock, cs);
+ ret = io_recv(req, force_nonblock, cs);
break;
case IORING_OP_TIMEOUT:
ret = io_timeout(req);
--
2.24.0
* [PATCH 09/12] io_uring: don't delay io_init_req() error check
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't postpone io_init_req() error checks; do them right after calling it.
There are no control-flow statements or dependencies between it and the
sqe/submitted accounting, so do that accounting earlier. It makes the code
flow a bit more natural.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2e0105c373ae..22d1fb9cc80f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6466,12 +6466,11 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
submitted = -EAGAIN;
break;
}
-
- err = io_init_req(ctx, req, sqe, &state);
io_consume_sqe(ctx);
/* will complete beyond this point, count as submitted */
submitted++;
+ err = io_init_req(ctx, req, sqe, &state);
if (unlikely(err)) {
fail_req:
io_put_req(req);
--
2.24.0
* [PATCH 10/12] io_uring: clean file_data access in files_register
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Keep file_data in a local variable and use it to replace long dereference
chains such as ctx->file_data.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 69 ++++++++++++++++++++++++---------------------------
1 file changed, 33 insertions(+), 36 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 22d1fb9cc80f..c3ca82f20f3d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7099,13 +7099,13 @@ static int io_sqe_files_scm(struct io_ring_ctx *ctx)
}
#endif
-static int io_sqe_alloc_file_tables(struct io_ring_ctx *ctx, unsigned nr_tables,
- unsigned nr_files)
+static int io_sqe_alloc_file_tables(struct fixed_file_data *file_data,
+ unsigned nr_tables, unsigned nr_files)
{
int i;
for (i = 0; i < nr_tables; i++) {
- struct fixed_file_table *table = &ctx->file_data->table[i];
+ struct fixed_file_table *table = &file_data->table[i];
unsigned this_files;
this_files = min(nr_files, IORING_MAX_FILES_TABLE);
@@ -7120,7 +7120,7 @@ static int io_sqe_alloc_file_tables(struct io_ring_ctx *ctx, unsigned nr_tables,
return 0;
for (i = 0; i < nr_tables; i++) {
- struct fixed_file_table *table = &ctx->file_data->table[i];
+ struct fixed_file_table *table = &file_data->table[i];
kfree(table->files);
}
return 1;
@@ -7287,6 +7287,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
int fd, ret = 0;
unsigned i;
struct fixed_file_ref_node *ref_node;
+ struct fixed_file_data *file_data;
if (ctx->file_data)
return -EBUSY;
@@ -7295,37 +7296,33 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
if (nr_args > IORING_MAX_FIXED_FILES)
return -EMFILE;
- ctx->file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL);
- if (!ctx->file_data)
+ file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL);
+ if (!file_data)
return -ENOMEM;
- ctx->file_data->ctx = ctx;
- init_completion(&ctx->file_data->done);
- INIT_LIST_HEAD(&ctx->file_data->ref_list);
- spin_lock_init(&ctx->file_data->lock);
+ file_data->ctx = ctx;
+ init_completion(&file_data->done);
+ INIT_LIST_HEAD(&file_data->ref_list);
+ spin_lock_init(&file_data->lock);
nr_tables = DIV_ROUND_UP(nr_args, IORING_MAX_FILES_TABLE);
- ctx->file_data->table = kcalloc(nr_tables,
- sizeof(struct fixed_file_table),
- GFP_KERNEL);
- if (!ctx->file_data->table) {
- kfree(ctx->file_data);
- ctx->file_data = NULL;
+ file_data->table = kcalloc(nr_tables, sizeof(file_data->table),
+ GFP_KERNEL);
+ if (!file_data->table) {
+ kfree(file_data);
return -ENOMEM;
}
- if (percpu_ref_init(&ctx->file_data->refs, io_file_ref_kill,
+ if (percpu_ref_init(&file_data->refs, io_file_ref_kill,
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) {
- kfree(ctx->file_data->table);
- kfree(ctx->file_data);
- ctx->file_data = NULL;
+ kfree(file_data->table);
+ kfree(file_data);
return -ENOMEM;
}
- if (io_sqe_alloc_file_tables(ctx, nr_tables, nr_args)) {
- percpu_ref_exit(&ctx->file_data->refs);
- kfree(ctx->file_data->table);
- kfree(ctx->file_data);
- ctx->file_data = NULL;
+ if (io_sqe_alloc_file_tables(file_data, nr_tables, nr_args)) {
+ percpu_ref_exit(&file_data->refs);
+ kfree(file_data->table);
+ kfree(file_data);
return -ENOMEM;
}
@@ -7342,7 +7339,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
continue;
}
- table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+ table = &file_data->table[i >> IORING_FILE_TABLE_SHIFT];
index = i & IORING_FILE_TABLE_MASK;
file = fget(fd);
@@ -7372,16 +7369,16 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
fput(file);
}
for (i = 0; i < nr_tables; i++)
- kfree(ctx->file_data->table[i].files);
+ kfree(file_data->table[i].files);
- percpu_ref_exit(&ctx->file_data->refs);
- kfree(ctx->file_data->table);
- kfree(ctx->file_data);
- ctx->file_data = NULL;
+ percpu_ref_exit(&file_data->refs);
+ kfree(file_data->table);
+ kfree(file_data);
ctx->nr_user_files = 0;
return ret;
}
+ ctx->file_data = file_data;
ret = io_sqe_files_scm(ctx);
if (ret) {
io_sqe_files_unregister(ctx);
@@ -7394,11 +7391,11 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
return PTR_ERR(ref_node);
}
- ctx->file_data->cur_refs = &ref_node->refs;
- spin_lock(&ctx->file_data->lock);
- list_add(&ref_node->node, &ctx->file_data->ref_list);
- spin_unlock(&ctx->file_data->lock);
- percpu_ref_get(&ctx->file_data->refs);
+ file_data->cur_refs = &ref_node->refs;
+ spin_lock(&file_data->lock);
+ list_add(&ref_node->node, &file_data->ref_list);
+ spin_unlock(&file_data->lock);
+ percpu_ref_get(&file_data->refs);
return ret;
}
--
2.24.0
* [PATCH 11/12] io_uring: refactor *files_register()'s error paths
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't keep repeating the cleanup sequences in the error paths; write them
once at the end and use goto labels. It's less error-prone and looks
cleaner.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 78 +++++++++++++++++++++------------------------------
1 file changed, 32 insertions(+), 46 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c3ca82f20f3d..fc4ef725ae09 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7282,10 +7282,9 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
unsigned nr_args)
{
__s32 __user *fds = (__s32 __user *) arg;
- unsigned nr_tables;
+ unsigned nr_tables, i;
struct file *file;
- int fd, ret = 0;
- unsigned i;
+ int fd, ret = -ENOMEM;
struct fixed_file_ref_node *ref_node;
struct fixed_file_data *file_data;
@@ -7307,45 +7306,32 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
nr_tables = DIV_ROUND_UP(nr_args, IORING_MAX_FILES_TABLE);
file_data->table = kcalloc(nr_tables, sizeof(file_data->table),
GFP_KERNEL);
- if (!file_data->table) {
- kfree(file_data);
- return -ENOMEM;
- }
+ if (!file_data->table)
+ goto out_free;
if (percpu_ref_init(&file_data->refs, io_file_ref_kill,
- PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) {
- kfree(file_data->table);
- kfree(file_data);
- return -ENOMEM;
- }
+ PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+ goto out_free;
- if (io_sqe_alloc_file_tables(file_data, nr_tables, nr_args)) {
- percpu_ref_exit(&file_data->refs);
- kfree(file_data->table);
- kfree(file_data);
- return -ENOMEM;
- }
+ if (io_sqe_alloc_file_tables(file_data, nr_tables, nr_args))
+ goto out_ref;
for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
struct fixed_file_table *table;
unsigned index;
- ret = -EFAULT;
- if (copy_from_user(&fd, &fds[i], sizeof(fd)))
- break;
+ if (copy_from_user(&fd, &fds[i], sizeof(fd))) {
+ ret = -EFAULT;
+ goto out_fput;
+ }
/* allow sparse sets */
- if (fd == -1) {
- ret = 0;
+ if (fd == -1)
continue;
- }
- table = &file_data->table[i >> IORING_FILE_TABLE_SHIFT];
- index = i & IORING_FILE_TABLE_MASK;
file = fget(fd);
-
ret = -EBADF;
if (!file)
- break;
+ goto out_fput;
/*
* Don't allow io_uring instances to be registered. If UNIX
@@ -7356,28 +7342,13 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
*/
if (file->f_op == &io_uring_fops) {
fput(file);
- break;
+ goto out_fput;
}
- ret = 0;
+ table = &file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+ index = i & IORING_FILE_TABLE_MASK;
table->files[index] = file;
}
- if (ret) {
- for (i = 0; i < ctx->nr_user_files; i++) {
- file = io_file_from_index(ctx, i);
- if (file)
- fput(file);
- }
- for (i = 0; i < nr_tables; i++)
- kfree(file_data->table[i].files);
-
- percpu_ref_exit(&file_data->refs);
- kfree(file_data->table);
- kfree(file_data);
- ctx->nr_user_files = 0;
- return ret;
- }
-
ctx->file_data = file_data;
ret = io_sqe_files_scm(ctx);
if (ret) {
@@ -7397,6 +7368,21 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
spin_unlock(&file_data->lock);
percpu_ref_get(&file_data->refs);
return ret;
+out_fput:
+ for (i = 0; i < ctx->nr_user_files; i++) {
+ file = io_file_from_index(ctx, i);
+ if (file)
+ fput(file);
+ }
+ for (i = 0; i < nr_tables; i++)
+ kfree(file_data->table[i].files);
+ ctx->nr_user_files = 0;
+out_ref:
+ percpu_ref_exit(&file_data->refs);
+out_free:
+ kfree(file_data->table);
+ kfree(file_data);
+ return ret;
}
static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
--
2.24.0
* [PATCH 12/12] io_uring: keep a pointer ref_node in file_data
From: Pavel Begunkov @ 2020-10-10 17:34 UTC (permalink / raw)
To: Jens Axboe, io-uring
->cur_refs of struct fixed_file_data always points to percpu_ref
embedded into struct fixed_file_ref_node. Don't overuse container_of()
and offsetting, and point directly to fixed_file_ref_node.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index fc4ef725ae09..c729ee8033f8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -210,7 +210,7 @@ struct fixed_file_data {
struct fixed_file_table *table;
struct io_ring_ctx *ctx;
- struct percpu_ref *cur_refs;
+ struct fixed_file_ref_node *node;
struct percpu_ref refs;
struct completion done;
struct list_head ref_list;
@@ -5980,7 +5980,7 @@ static struct file *io_file_get(struct io_submit_state *state,
fd = array_index_nospec(fd, ctx->nr_user_files);
file = io_file_from_index(ctx, fd);
if (file) {
- req->fixed_file_refs = ctx->file_data->cur_refs;
+ req->fixed_file_refs = &ctx->file_data->node->refs;
percpu_ref_get(req->fixed_file_refs);
}
} else {
@@ -7362,7 +7362,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
return PTR_ERR(ref_node);
}
- file_data->cur_refs = &ref_node->refs;
+ file_data->node = ref_node;
spin_lock(&file_data->lock);
list_add(&ref_node->node, &file_data->ref_list);
spin_unlock(&file_data->lock);
@@ -7432,14 +7432,12 @@ static int io_queue_file_removal(struct fixed_file_data *data,
struct file *file)
{
struct io_file_put *pfile;
- struct percpu_ref *refs = data->cur_refs;
- struct fixed_file_ref_node *ref_node;
+ struct fixed_file_ref_node *ref_node = data->node;
pfile = kzalloc(sizeof(*pfile), GFP_KERNEL);
if (!pfile)
return -ENOMEM;
- ref_node = container_of(refs, struct fixed_file_ref_node, refs);
pfile->file = file;
list_add(&pfile->list, &ref_node->file_list);
@@ -7522,10 +7520,10 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
}
if (needs_switch) {
- percpu_ref_kill(data->cur_refs);
+ percpu_ref_kill(&data->node->refs);
spin_lock(&data->lock);
list_add(&ref_node->node, &data->ref_list);
- data->cur_refs = &ref_node->refs;
+ data->node = ref_node;
spin_unlock(&data->lock);
percpu_ref_get(&ctx->file_data->refs);
} else
--
2.24.0
* Re: [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs
2020-10-10 17:34 ` [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs Pavel Begunkov
@ 2020-10-10 18:45 ` Pavel Begunkov
2020-10-10 18:49 ` Jens Axboe
0 siblings, 1 reply; 17+ messages in thread
From: Pavel Begunkov @ 2020-10-10 18:45 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: Reported-by : Roman Gershman
On 10/10/2020 18:34, Pavel Begunkov wrote:
> There is no real reason left for preparing io-wq work context for linked
> requests in advance, remove it as this might become a bottleneck in some
> cases.
>
> Reported-by: Reported-by: Roman Gershman <[email protected]>
It looks like "Reported-by:" got duplicated.
s/Reported-by: Reported-by:/Reported-by:/
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
> fs/io_uring.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 09494ca1b990..272abe03a79e 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -5672,9 +5672,6 @@ static int io_req_defer_prep(struct io_kiocb *req,
> ret = io_prep_work_files(req);
> if (unlikely(ret))
> return ret;
> -
> - io_prep_async_work(req);
> -
> return io_req_prep(req, sqe);
> }
>
>
--
Pavel Begunkov
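The `s/…/…/` note above is a sed substitution; applied to a file containing the malformed tag it would look like the following (the temp-file path is illustrative, and the email address is omitted here since it is elided in the archive):

```shell
printf 'Reported-by: Reported-by: Roman Gershman\n' > /tmp/tag.txt
sed -i 's/Reported-by: Reported-by:/Reported-by:/' /tmp/tag.txt
cat /tmp/tag.txt	# prints the de-duplicated tag
```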
* Re: [PATCH 00/12] bundled cleanups and improvements
2020-10-10 17:34 [PATCH 00/12] bundled cleanups and improvements Pavel Begunkov
` (11 preceding siblings ...)
2020-10-10 17:34 ` [PATCH 12/12] io_uring: keep a pointer ref_node in file_data Pavel Begunkov
@ 2020-10-10 18:48 ` Jens Axboe
12 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2020-10-10 18:48 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 10/10/20 11:34 AM, Pavel Begunkov wrote:
> Only [1] considerably affects performance (as by Roman Gershman), others
> are rather cleanups.
>
> [1-2] are on the surface cleanups following ->files changes.
> [3-5] address ->file grabbing
> [6-7] are some preparations around timeouts
> [8,9] are independent cleanups
> [10-12] toss around files_register() bits
>
> Pavel Begunkov (12):
> io_uring: don't io_prep_async_work() linked reqs
> io_uring: clean up ->files grabbing
> io_uring: kill extra check in fixed io_file_get()
> io_uring: simplify io_file_get()
> io_uring: improve submit_state.ios_left accounting
> io_uring: use a separate struct for timeout_remove
> io_uring: remove timeout.list after hrtimer cancel
> io_uring: clean leftovers after splitting issue
> io_uring: don't delay io_init_req() error check
> io_uring: clean file_data access in files_register
> io_uring: refactor *files_register()'s error paths
> io_uring: keep a pointer ref_node in file_data
>
> fs/io_uring.c | 275 ++++++++++++++++++++------------------------------
> 1 file changed, 107 insertions(+), 168 deletions(-)
Thanks, nice cleanups! LGTM, and they test out fine too. Applied.
--
Jens Axboe
* Re: [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs
2020-10-10 18:45 ` Pavel Begunkov
@ 2020-10-10 18:49 ` Jens Axboe
2020-10-10 18:55 ` Pavel Begunkov
0 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2020-10-10 18:49 UTC (permalink / raw)
To: Pavel Begunkov, io-uring; +Cc: Reported-by : Roman Gershman
On 10/10/20 12:45 PM, Pavel Begunkov wrote:
> On 10/10/2020 18:34, Pavel Begunkov wrote:
>> There is no real reason left for preparing io-wq work context for linked
>> requests in advance, remove it as this might become a bottleneck in some
>> cases.
>>
>> Reported-by: Reported-by: Roman Gershman <[email protected]>
>
> It looks like "Reported-by:" got duplicated.
>
> s/Reported-by: Reported-by:/Reported-by:/
I fixed it up.
--
Jens Axboe
* Re: [PATCH 01/12] io_uring: don't io_prep_async_work() linked reqs
2020-10-10 18:49 ` Jens Axboe
@ 2020-10-10 18:55 ` Pavel Begunkov
0 siblings, 0 replies; 17+ messages in thread
From: Pavel Begunkov @ 2020-10-10 18:55 UTC (permalink / raw)
To: Jens Axboe, io-uring; +Cc: Roman Gershman
On 10/10/2020 19:49, Jens Axboe wrote:
> On 10/10/20 12:45 PM, Pavel Begunkov wrote:
>> On 10/10/2020 18:34, Pavel Begunkov wrote:
>>> There is no real reason left for preparing io-wq work context for linked
>>> requests in advance, remove it as this might become a bottleneck in some
>>> cases.
>>>
>>> Reported-by: Reported-by: Roman Gershman <[email protected]>
>>
>> It looks like "Reported-by:" got duplicated.
>>
>> s/Reported-by: Reported-by:/Reported-by:/
>
> I fixed it up.
Great, thanks!
--
Pavel Begunkov