* [PATCH v2 5.13 0/5] poll update into poll remove
@ 2021-04-14 12:38 Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 1/5] io_uring: improve sqpoll event/state handling Pavel Begunkov
` (5 more replies)
0 siblings, 6 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
1-2 are small improvements.
The rest moves all poll updating into IORING_OP_POLL_REMOVE;
see 5/5 for the justification.
v2: fix io_poll_remove_one() being called on the remove request
instead of the request we're cancelling.
Pavel Begunkov (5):
io_uring: improve sqpoll event/state handling
io_uring: refactor io_ring_exit_work()
io_uring: fix POLL_REMOVE removing apoll
io_uring: add helper for parsing poll events
io_uring: move poll update into remove not add
fs/io_uring.c | 197 ++++++++++++++++++++++++--------------------------
1 file changed, 94 insertions(+), 103 deletions(-)
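For illustration, a minimal userspace sketch of the ABI this series ends
up with (the SQE field mapping is taken from io_poll_update_prep() in
5/5; the wrapper name is hypothetical, and the IORING_POLL_UPDATE_*
flags are assumed from the 5.13 uapi headers):

#include <string.h>
#include <poll.h>
#include <linux/io_uring.h>

/* Sketch: update an armed poll request via IORING_OP_POLL_REMOVE. */
static void prep_poll_update(struct io_uring_sqe *sqe, __u64 old_udata,
			     __u64 new_udata, __u32 events)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_POLL_REMOVE;
	sqe->addr = old_udata;	/* user_data of the poll to update */
	sqe->len = IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA;
	sqe->off = new_udata;	/* only valid with UPDATE_USER_DATA */
	sqe->poll32_events = events;	/* only valid with UPDATE_EVENTS;
					 * needs swahw32() on big-endian */
}

With sqe->len == 0 the request degrades to a plain removal by
user_data, matching the old IORING_OP_POLL_REMOVE behaviour.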
--
2.24.0
* [PATCH v2 1/5] io_uring: improve sqpoll event/state handling
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
@ 2021-04-14 12:38 ` Pavel Begunkov
2021-04-14 16:41 ` Jens Axboe
2021-04-14 12:38 ` [PATCH v2 2/5] io_uring: refactor io_ring_exit_work() Pavel Begunkov
` (4 subsequent siblings)
5 siblings, 1 reply; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
As sqd->state changes rarely, don't check the events one by one but
look at them all at once. Add a helper function for that. Also, don't
go to sleep waiting for events with the STOP flag set.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 45 ++++++++++++++++++++++++++++-----------------
1 file changed, 28 insertions(+), 17 deletions(-)
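The shape of the change, as a generic sketch (the helper names here are
made up; the real ones are in the diff below): gate all rare-event
handling behind a single read of the state word instead of testing
every bit on each loop iteration.

	/* Illustration only. sqd->state is written rarely, so one
	 * READ_ONCE() usually short-circuits the per-event checks.
	 */
	while (1) {
		if (READ_ONCE(sqd->state) || signal_pending(current)) {
			if (handle_events(sqd))	/* hypothetical; true on STOP */
				break;		/* never sleep with STOP set */
			continue;
		}
		/* fast path: submit/complete SQEs, maybe idle */
	}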
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e3b679c5547e..693fb5c5e58c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6742,6 +6742,11 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
return submitted;
}
+static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
+{
+ return READ_ONCE(sqd->state);
+}
+
static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
{
/* Tell userspace we may need a wakeup call */
@@ -6796,6 +6801,24 @@ static void io_sqd_update_thread_idle(struct io_sq_data *sqd)
sqd->sq_thread_idle = sq_thread_idle;
}
+static bool io_sqd_handle_event(struct io_sq_data *sqd)
+{
+ bool did_sig = false;
+ struct ksignal ksig;
+
+ if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
+ signal_pending(current)) {
+ mutex_unlock(&sqd->lock);
+ if (signal_pending(current))
+ did_sig = get_signal(&ksig);
+ cond_resched();
+ mutex_lock(&sqd->lock);
+ }
+ io_run_task_work();
+ io_run_task_work_head(&sqd->park_task_work);
+ return did_sig || test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+}
+
static int io_sq_thread(void *data)
{
struct io_sq_data *sqd = data;
@@ -6818,29 +6841,17 @@ static int io_sq_thread(void *data)
/* a user may have exited before the thread started */
io_run_task_work_head(&sqd->park_task_work);
- while (!test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state)) {
+ while (1) {
int ret;
bool cap_entries, sqt_spin, needs_sched;
- if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
- signal_pending(current)) {
- bool did_sig = false;
-
- mutex_unlock(&sqd->lock);
- if (signal_pending(current)) {
- struct ksignal ksig;
-
- did_sig = get_signal(&ksig);
- }
- cond_resched();
- mutex_lock(&sqd->lock);
- io_run_task_work();
- io_run_task_work_head(&sqd->park_task_work);
- if (did_sig)
+ if (io_sqd_events_pending(sqd) || signal_pending(current)) {
+ if (io_sqd_handle_event(sqd))
break;
timeout = jiffies + sqd->sq_thread_idle;
continue;
}
+
sqt_spin = false;
cap_entries = !list_is_singular(&sqd->ctx_list);
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
@@ -6877,7 +6888,7 @@ static int io_sq_thread(void *data)
}
}
- if (needs_sched && !test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state)) {
+ if (needs_sched && !io_sqd_events_pending(sqd)) {
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
io_ring_set_wakeup_flag(ctx);
--
2.24.0
* [PATCH v2 2/5] io_uring: refactor io_ring_exit_work()
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 1/5] io_uring: improve sqpoll event/state handling Pavel Begunkov
@ 2021-04-14 12:38 ` Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 3/5] io_uring: fix POLL_REMOVE removing apoll Pavel Begunkov
` (3 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't reinit io_ring_exit_work()'s exit work/completions on each
iteration; that's wasteful. Also add list_rotate_left(), so if we fail
to complete the task job we don't retry it again and again, but defer
it until the others are processed.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
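list_rotate_left() here is the stock <linux/list.h> primitive that
moves the head entry to the tail. A minimal sketch of why that helps
(entry names are illustrative):

	/* tctx_list: A -> B -> C, and A's task_work keeps stalling */
	node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
				ctx_node);
	list_rotate_left(&ctx->tctx_list);	/* now B -> C -> A */
	/* if waiting on A never completes this round, the next loop
	 * iteration starts from B instead of spinning on A again
	 */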
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 693fb5c5e58c..6a70bf455c49 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8601,6 +8601,9 @@ static void io_ring_exit_work(struct work_struct *work)
WARN_ON_ONCE(time_after(jiffies, timeout));
} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
+ init_completion(&exit.completion);
+ init_task_work(&exit.task_work, io_tctx_exit_cb);
+ exit.ctx = ctx;
/*
* Some may use context even when all refs and requests have been put,
* and they are free to do so while still holding uring_lock or
@@ -8613,9 +8616,8 @@ static void io_ring_exit_work(struct work_struct *work)
node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
ctx_node);
- exit.ctx = ctx;
- init_completion(&exit.completion);
- init_task_work(&exit.task_work, io_tctx_exit_cb);
+ /* don't spin on a single task if cancellation failed */
+ list_rotate_left(&ctx->tctx_list);
ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
if (WARN_ON_ONCE(ret))
continue;
@@ -8623,7 +8625,6 @@ static void io_ring_exit_work(struct work_struct *work)
mutex_unlock(&ctx->uring_lock);
wait_for_completion(&exit.completion);
- cond_resched();
mutex_lock(&ctx->uring_lock);
}
mutex_unlock(&ctx->uring_lock);
--
2.24.0
* [PATCH v2 3/5] io_uring: fix POLL_REMOVE removing apoll
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 1/5] io_uring: improve sqpoll event/state handling Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 2/5] io_uring: refactor io_ring_exit_work() Pavel Begunkov
@ 2021-04-14 12:38 ` Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 4/5] io_uring: add helper for parsing poll events Pavel Begunkov
` (2 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
Don't allow IORING_OP_POLL_REMOVE to kill apoll requests; users should
not know about them. Also, remove the odd -EACCES in io_poll_update():
it shouldn't know anything about apoll, and it has to work even if a
poll and an async-polled request happen to share the same user_data.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
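The user-visible effect, sketched with raw SQE fields (the value names
are illustrative):

	/* user_data that matches only an internal async poll (apoll):
	 * after this patch, explicit removal no longer touches it.
	 */
	sqe->opcode = IORING_OP_POLL_REMOVE;
	sqe->addr = apoll_req_user_data;	/* hypothetical value */
	/* CQE: res == -ENOENT; the async-polled request keeps running */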
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6a70bf455c49..ce75a859a376 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5258,7 +5258,8 @@ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
return posted != 0;
}
-static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr)
+static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr,
+ bool poll_only)
__must_hold(&ctx->completion_lock)
{
struct hlist_head *list;
@@ -5268,18 +5269,20 @@ static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr)
hlist_for_each_entry(req, list, hash_node) {
if (sqe_addr != req->user_data)
continue;
+ if (poll_only && req->opcode != IORING_OP_POLL_ADD)
+ continue;
return req;
}
-
return NULL;
}
-static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr)
+static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr,
+ bool poll_only)
__must_hold(&ctx->completion_lock)
{
struct io_kiocb *req;
- req = io_poll_find(ctx, sqe_addr);
+ req = io_poll_find(ctx, sqe_addr, poll_only);
if (!req)
return -ENOENT;
if (io_poll_remove_one(req))
@@ -5311,7 +5314,7 @@ static int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
int ret;
spin_lock_irq(&ctx->completion_lock);
- ret = io_poll_cancel(ctx, req->poll_remove.addr);
+ ret = io_poll_cancel(ctx, req->poll_remove.addr, true);
spin_unlock_irq(&ctx->completion_lock);
if (ret < 0)
@@ -5412,14 +5415,10 @@ static int io_poll_update(struct io_kiocb *req)
int ret;
spin_lock_irq(&ctx->completion_lock);
- preq = io_poll_find(ctx, req->poll_update.old_user_data);
+ preq = io_poll_find(ctx, req->poll_update.old_user_data, true);
if (!preq) {
ret = -ENOENT;
goto err;
- } else if (preq->opcode != IORING_OP_POLL_ADD) {
- /* don't allow internal poll updates */
- ret = -EACCES;
- goto err;
}
/*
@@ -5748,7 +5747,7 @@ static void io_async_find_and_cancel(struct io_ring_ctx *ctx,
ret = io_timeout_cancel(ctx, sqe_addr);
if (ret != -ENOENT)
goto done;
- ret = io_poll_cancel(ctx, sqe_addr);
+ ret = io_poll_cancel(ctx, sqe_addr, false);
done:
if (!ret)
ret = success_ret;
@@ -5790,7 +5789,7 @@ static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
ret = io_timeout_cancel(ctx, sqe_addr);
if (ret != -ENOENT)
goto done;
- ret = io_poll_cancel(ctx, sqe_addr);
+ ret = io_poll_cancel(ctx, sqe_addr, false);
if (ret != -ENOENT)
goto done;
spin_unlock_irq(&ctx->completion_lock);
--
2.24.0
* [PATCH v2 4/5] io_uring: add helper for parsing poll events
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
` (2 preceding siblings ...)
2021-04-14 12:38 ` [PATCH v2 3/5] io_uring: fix POLL_REMOVE removing apoll Pavel Begunkov
@ 2021-04-14 12:38 ` Pavel Begunkov
2021-04-14 12:38 ` [PATCH v2 5/5] io_uring: move poll update into remove not add Pavel Begunkov
2021-04-14 16:20 ` [PATCH v2 5.13 0/5] poll update into poll remove Jens Axboe
5 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
Isolate poll mask SQE parsing and preparation into a new function,
which will be reused shortly.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)
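On the userspace side, the helper's inputs are sqe->poll32_events plus
the flags word; a hedged sketch of arming a multishot poll with raw SQE
fields (the user_data value is illustrative):

	/* Sketch: multishot poll add. Without IORING_POLL_ADD_MULTI the
	 * kernel ORs in EPOLLONESHOT, so the poll would complete after
	 * the first event. On big-endian hosts the mask must be
	 * pre-swapped, which the kernel-side swahw32() undoes.
	 */
	sqe->opcode = IORING_OP_POLL_ADD;
	sqe->len = IORING_POLL_ADD_MULTI;	/* stay armed across events */
	sqe->poll32_events = POLLIN;		/* classic poll(2) mask */
	sqe->user_data = 0xcafe;		/* handle for later update */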
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ce75a859a376..da5061f38fd6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5291,6 +5291,20 @@ static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr,
return -EALREADY;
}
+static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
+ unsigned int flags)
+{
+ u32 events;
+
+ events = READ_ONCE(sqe->poll32_events);
+#ifdef __BIG_ENDIAN
+ events = swahw32(events);
+#endif
+ if (!(flags & IORING_POLL_ADD_MULTI))
+ events |= EPOLLONESHOT;
+ return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
+}
+
static int io_poll_remove_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
@@ -5352,14 +5366,8 @@ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
if (flags & ~(IORING_POLL_ADD_MULTI | IORING_POLL_UPDATE_EVENTS |
IORING_POLL_UPDATE_USER_DATA))
return -EINVAL;
- events = READ_ONCE(sqe->poll32_events);
-#ifdef __BIG_ENDIAN
- events = swahw32(events);
-#endif
- if (!(flags & IORING_POLL_ADD_MULTI))
- events |= EPOLLONESHOT;
- events = demangle_poll(events) |
- (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
+
+ events = io_poll_parse_events(sqe, flags);
if (flags & (IORING_POLL_UPDATE_EVENTS|IORING_POLL_UPDATE_USER_DATA)) {
struct io_poll_update *poll_upd = &req->poll_update;
--
2.24.0
* [PATCH v2 5/5] io_uring: move poll update into remove not add
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
` (3 preceding siblings ...)
2021-04-14 12:38 ` [PATCH v2 4/5] io_uring: add helper for parsing poll events Pavel Begunkov
@ 2021-04-14 12:38 ` Pavel Begunkov
2021-04-14 16:20 ` [PATCH v2 5.13 0/5] poll update into poll remove Jens Axboe
5 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 12:38 UTC (permalink / raw)
To: Jens Axboe, io-uring
Having the poll update handling as part of IORING_OP_POLL_ADD is not
great: we have to hack around struct layouts and add overhead to the
much more popular POLL_ADD path. An even more serious drawback is that
POLL_ADD requires a file and always grabs it, and so does poll update,
which doesn't need one.
Incorporate poll update into IORING_OP_POLL_REMOVE instead of
IORING_OP_POLL_ADD. It's also more consistent with timeout
remove/update.
Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 104 ++++++++++++++++++--------------------------------
1 file changed, 38 insertions(+), 66 deletions(-)
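The new prep rejects ambiguous flag combinations; reading
io_poll_update_prep() in the diff below, the rules sketch out as
follows (illustrative setups, each considered on a fresh zeroed SQE):

	sqe->len = 0;				/* plain removal, as before */

	sqe->len = IORING_POLL_ADD_MULTI;	/* -EINVAL: meaningless
						 * without an update flag */

	sqe->len = IORING_POLL_UPDATE_EVENTS;
	sqe->off = 42;				/* -EINVAL: new user_data
						 * without UPDATE_USER_DATA */

	sqe->len = IORING_POLL_UPDATE_USER_DATA;
	sqe->poll32_events = POLLIN;		/* -EINVAL: events given
						 * without UPDATE_EVENTS */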
diff --git a/fs/io_uring.c b/fs/io_uring.c
index da5061f38fd6..e9d60dee075e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -501,11 +501,6 @@ struct io_poll_update {
bool update_user_data;
};
-struct io_poll_remove {
- struct file *file;
- u64 addr;
-};
-
struct io_close {
struct file *file;
int fd;
@@ -715,7 +710,6 @@ enum {
REQ_F_COMPLETE_INLINE_BIT,
REQ_F_REISSUE_BIT,
REQ_F_DONT_REISSUE_BIT,
- REQ_F_POLL_UPDATE_BIT,
/* keep async read/write and isreg together and in order */
REQ_F_ASYNC_READ_BIT,
REQ_F_ASYNC_WRITE_BIT,
@@ -763,8 +757,6 @@ enum {
REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT),
/* don't attempt request reissue, see io_rw_reissue() */
REQ_F_DONT_REISSUE = BIT(REQ_F_DONT_REISSUE_BIT),
- /* switches between poll and poll update */
- REQ_F_POLL_UPDATE = BIT(REQ_F_POLL_UPDATE_BIT),
/* supports async reads */
REQ_F_ASYNC_READ = BIT(REQ_F_ASYNC_READ_BIT),
/* supports async writes */
@@ -795,7 +787,6 @@ struct io_kiocb {
struct io_rw rw;
struct io_poll_iocb poll;
struct io_poll_update poll_update;
- struct io_poll_remove poll_remove;
struct io_accept accept;
struct io_sync sync;
struct io_cancel cancel;
@@ -5305,35 +5296,36 @@ static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
}
-static int io_poll_remove_prep(struct io_kiocb *req,
+static int io_poll_update_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
+ struct io_poll_update *upd = &req->poll_update;
+ u32 flags;
+
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
- if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
- sqe->poll_events)
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+ flags = READ_ONCE(sqe->len);
+ if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
+ IORING_POLL_ADD_MULTI))
+ return -EINVAL;
+ /* meaningless without update */
+ if (flags == IORING_POLL_ADD_MULTI)
return -EINVAL;
- req->poll_remove.addr = READ_ONCE(sqe->addr);
- return 0;
-}
-
-/*
- * Find a running poll command that matches one specified in sqe->addr,
- * and remove it if found.
- */
-static int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
-{
- struct io_ring_ctx *ctx = req->ctx;
- int ret;
+ upd->old_user_data = READ_ONCE(sqe->addr);
+ upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
+ upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
- spin_lock_irq(&ctx->completion_lock);
- ret = io_poll_cancel(ctx, req->poll_remove.addr, true);
- spin_unlock_irq(&ctx->completion_lock);
+ upd->new_user_data = READ_ONCE(sqe->off);
+ if (!upd->update_user_data && upd->new_user_data)
+ return -EINVAL;
+ if (upd->update_events)
+ upd->events = io_poll_parse_events(sqe, flags);
+ else if (sqe->poll32_events)
+ return -EINVAL;
- if (ret < 0)
- req_set_fail_links(req);
- __io_req_complete(req, issue_flags, ret, 0);
return 0;
}
@@ -5356,40 +5348,22 @@ static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
- u32 events, flags;
+ struct io_poll_iocb *poll = &req->poll;
+ u32 flags;
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
- if (sqe->ioprio || sqe->buf_index)
+ if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->addr)
return -EINVAL;
flags = READ_ONCE(sqe->len);
- if (flags & ~(IORING_POLL_ADD_MULTI | IORING_POLL_UPDATE_EVENTS |
- IORING_POLL_UPDATE_USER_DATA))
+ if (flags & ~IORING_POLL_ADD_MULTI)
return -EINVAL;
- events = io_poll_parse_events(sqe, flags);
-
- if (flags & (IORING_POLL_UPDATE_EVENTS|IORING_POLL_UPDATE_USER_DATA)) {
- struct io_poll_update *poll_upd = &req->poll_update;
-
- req->flags |= REQ_F_POLL_UPDATE;
- poll_upd->events = events;
- poll_upd->old_user_data = READ_ONCE(sqe->addr);
- poll_upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
- poll_upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
- if (poll_upd->update_user_data)
- poll_upd->new_user_data = READ_ONCE(sqe->off);
- } else {
- struct io_poll_iocb *poll = &req->poll;
-
- poll->events = events;
- if (sqe->off || sqe->addr)
- return -EINVAL;
- }
+ poll->events = io_poll_parse_events(sqe, flags);
return 0;
}
-static int __io_poll_add(struct io_kiocb *req)
+static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_poll_iocb *poll = &req->poll;
struct io_ring_ctx *ctx = req->ctx;
@@ -5415,7 +5389,7 @@ static int __io_poll_add(struct io_kiocb *req)
return ipt.error;
}
-static int io_poll_update(struct io_kiocb *req)
+static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_ring_ctx *ctx = req->ctx;
struct io_kiocb *preq;
@@ -5429,6 +5403,12 @@ static int io_poll_update(struct io_kiocb *req)
goto err;
}
+ if (!req->poll_update.update_events && !req->poll_update.update_user_data) {
+ completing = true;
+ ret = io_poll_remove_one(preq) ? 0 : -EALREADY;
+ goto err;
+ }
+
/*
* Don't allow racy completion with singleshot, as we cannot safely
* update those. For multishot, if we're racing with completion, just
@@ -5456,14 +5436,13 @@ static int io_poll_update(struct io_kiocb *req)
}
if (req->poll_update.update_user_data)
preq->user_data = req->poll_update.new_user_data;
-
spin_unlock_irq(&ctx->completion_lock);
/* complete update request, we're done with it */
io_req_complete(req, ret);
if (!completing) {
- ret = __io_poll_add(preq);
+ ret = io_poll_add(preq, issue_flags);
if (ret < 0) {
req_set_fail_links(preq);
io_req_complete(preq, ret);
@@ -5472,13 +5451,6 @@ static int io_poll_update(struct io_kiocb *req)
return 0;
}
-static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
-{
- if (!(req->flags & REQ_F_POLL_UPDATE))
- return __io_poll_add(req);
- return io_poll_update(req);
-}
-
static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
{
struct io_timeout_data *data = container_of(timer,
@@ -5883,7 +5855,7 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
case IORING_OP_POLL_ADD:
return io_poll_add_prep(req, sqe);
case IORING_OP_POLL_REMOVE:
- return io_poll_remove_prep(req, sqe);
+ return io_poll_update_prep(req, sqe);
case IORING_OP_FSYNC:
return io_fsync_prep(req, sqe);
case IORING_OP_SYNC_FILE_RANGE:
@@ -6114,7 +6086,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
ret = io_poll_add(req, issue_flags);
break;
case IORING_OP_POLL_REMOVE:
- ret = io_poll_remove(req, issue_flags);
+ ret = io_poll_update(req, issue_flags);
break;
case IORING_OP_SYNC_FILE_RANGE:
ret = io_sync_file_range(req, issue_flags);
--
2.24.0
* Re: [PATCH v2 5.13 0/5] poll update into poll remove
2021-04-14 12:38 [PATCH v2 5.13 0/5] poll update into poll remove Pavel Begunkov
` (4 preceding siblings ...)
2021-04-14 12:38 ` [PATCH v2 5/5] io_uring: move poll update into remove not add Pavel Begunkov
@ 2021-04-14 16:20 ` Jens Axboe
2021-04-14 19:32 ` Pavel Begunkov
5 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2021-04-14 16:20 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 4/14/21 6:38 AM, Pavel Begunkov wrote:
> 1-2 are small improvements.
>
> The rest moves all poll updating into IORING_OP_POLL_REMOVE;
> see 5/5 for the justification.
>
> v2: fix io_poll_remove_one() being called on the remove request
> instead of the request we're cancelling.
I like moving update into remove, imho it makes a lot more sense. Will
you be sending a test case patch for it too?
--
Jens Axboe
* Re: [PATCH v2 1/5] io_uring: improve sqpoll event/state handling
2021-04-14 12:38 ` [PATCH v2 1/5] io_uring: improve sqpoll event/state handling Pavel Begunkov
@ 2021-04-14 16:41 ` Jens Axboe
2021-04-14 19:27 ` Pavel Begunkov
0 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2021-04-14 16:41 UTC (permalink / raw)
To: Pavel Begunkov, io-uring
On 4/14/21 6:38 AM, Pavel Begunkov wrote:
> As sqd->state changes rarely, don't check the events one by one but
> look at them all at once. Add a helper function for that. Also, don't
> go to sleep waiting for events with the STOP flag set.
Can we defer this one to post -rc1? It'll cause a conflict with
5.12.
--
Jens Axboe
* Re: [PATCH v2 1/5] io_uring: improve sqpoll event/state handling
2021-04-14 16:41 ` Jens Axboe
@ 2021-04-14 19:27 ` Pavel Begunkov
0 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 19:27 UTC (permalink / raw)
To: Jens Axboe, io-uring
On 14/04/2021 17:41, Jens Axboe wrote:
> On 4/14/21 6:38 AM, Pavel Begunkov wrote:
>> As sqd->state changes rarely, don't check the events one by one but
>> look at them all at once. Add a helper function for that. Also, don't
>> go to sleep waiting for events with the STOP flag set.
>
> Can we defer this one to post -rc1? It'll cause a conflict with
> 5.12.
Yeah, as always just drop what doesn't fit, and I'll resend after the
rebase or for 5.14.
--
Pavel Begunkov
* Re: [PATCH v2 5.13 0/5] poll update into poll remove
2021-04-14 16:20 ` [PATCH v2 5.13 0/5] poll update into poll remove Jens Axboe
@ 2021-04-14 19:32 ` Pavel Begunkov
0 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2021-04-14 19:32 UTC (permalink / raw)
To: Jens Axboe, io-uring
On 14/04/2021 17:20, Jens Axboe wrote:
> On 4/14/21 6:38 AM, Pavel Begunkov wrote:
>> 1-2 are small improvements.
>>
>> The rest moves all poll updating into IORING_OP_POLL_REMOVE;
>> see 5/5 for the justification.
>>
>> v2: fix io_poll_remove_one() being called on the remove request
>> instead of the request we're cancelling.
>
> I like moving update into remove, imho it makes a lot more sense. Will
> you be sending a test case patch for it too?
I was testing with your test case; it's a 3-line change, I can send it.
We definitely need a helper though. There was something for liburing
from Joakim, right?
--
Pavel Begunkov
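A liburing-style prep helper along the lines discussed above might look
like this (a sketch only; the name, signature, and flag handling are
assumptions, not the final liburing API):

#include <stdint.h>
#include "liburing.h"

/* Hypothetical helper: maps straight onto the kernel-side
 * io_poll_update_prep() field layout from 5/5.
 */
static inline void io_uring_prep_poll_update(struct io_uring_sqe *sqe,
					     __u64 old_user_data,
					     __u64 new_user_data,
					     unsigned poll_mask,
					     unsigned flags)
{
	io_uring_prep_rw(IORING_OP_POLL_REMOVE, sqe, -1,
			 (void *) (uintptr_t) old_user_data,
			 flags, new_user_data);
	sqe->poll32_events = poll_mask;	/* may need swapping on BE */
}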