* [PATCH for-next 0/4] random 5.20 patches
@ 2022-06-21 9:08 Pavel Begunkov
2022-06-21 9:08 ` [PATCH for-next 1/4] io_uring: fix poll_add error handling Pavel Begunkov
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Pavel Begunkov @ 2022-06-21 9:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Just random patches, 1/4 is a poll fix.
Pavel Begunkov (4):
io_uring: fix poll_add error handling
io_uring: improve io_run_task_work()
io_uring: move list helpers to a separate file
io_uring: dedup io_run_task_work
io_uring/filetable.h | 2 +
io_uring/io-wq.c | 18 ++----
io_uring/io-wq.h | 131 ----------------------------------------
io_uring/io_uring.h | 3 +-
io_uring/poll.c | 9 +--
io_uring/slist.h | 138 +++++++++++++++++++++++++++++++++++++++++++
6 files changed, 148 insertions(+), 153 deletions(-)
create mode 100644 io_uring/slist.h
--
2.36.1
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH for-next 1/4] io_uring: fix poll_add error handling
2022-06-21 9:08 [PATCH for-next 0/4] random 5.20 patches Pavel Begunkov
@ 2022-06-21 9:08 ` Pavel Begunkov
2022-06-21 9:09 ` [PATCH for-next 2/4] io_uring: improve io_run_task_work() Pavel Begunkov
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2022-06-21 9:08 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We should first check the return value of __io_arm_poll_handler() and
only look at ipt.error when it is zero, not the other way around.
Currently we may queue a tw for such a request and then release it
inline, causing a use-after-free (UAF).
Fixes: 9c1d09f56425e ("io_uring: handle completions in the core")
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/poll.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 8f4fff76d3b4..528418aaf3f6 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -782,16 +782,11 @@ int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
req->flags &= ~REQ_F_HASH_LOCKED;
ret = __io_arm_poll_handler(req, poll, &ipt, poll->events);
- if (ipt.error) {
- return ipt.error;
- } else if (ret > 0) {
+ if (ret) {
io_req_set_res(req, ret, 0);
return IOU_OK;
- } else if (!ret) {
- return IOU_ISSUE_SKIP_COMPLETE;
}
-
- return ret;
+ return ipt.error ?: IOU_ISSUE_SKIP_COMPLETE;
}
int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
--
2.36.1
* [PATCH for-next 2/4] io_uring: improve io_run_task_work()
2022-06-21 9:08 [PATCH for-next 0/4] random 5.20 patches Pavel Begunkov
2022-06-21 9:08 ` [PATCH for-next 1/4] io_uring: fix poll_add error handling Pavel Begunkov
@ 2022-06-21 9:09 ` Pavel Begunkov
2022-06-21 9:09 ` [PATCH for-next 3/4] io_uring: move list helpers to a separate file Pavel Begunkov
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2022-06-21 9:09 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
Since SQPOLL now uses TWA_SIGNAL_NO_IPI, there won't be task work items
without TIF_NOTIFY_SIGNAL set. Simplify io_run_task_work() by removing
the task->task_works check. Even though it doesn't appear to cause
extra cache bouncing, it's still nice to not touch the field an extra
time when it might not be cached.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io_uring.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 7a00bbe85d35..4c4d38ffc5ec 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -203,7 +203,7 @@ static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
static inline bool io_run_task_work(void)
{
- if (test_thread_flag(TIF_NOTIFY_SIGNAL) || task_work_pending(current)) {
+ if (test_thread_flag(TIF_NOTIFY_SIGNAL)) {
__set_current_state(TASK_RUNNING);
clear_notify_signal();
if (task_work_pending(current))
--
2.36.1
* [PATCH for-next 3/4] io_uring: move list helpers to a separate file
2022-06-21 9:08 [PATCH for-next 0/4] random 5.20 patches Pavel Begunkov
2022-06-21 9:08 ` [PATCH for-next 1/4] io_uring: fix poll_add error handling Pavel Begunkov
2022-06-21 9:09 ` [PATCH for-next 2/4] io_uring: improve io_run_task_work() Pavel Begunkov
@ 2022-06-21 9:09 ` Pavel Begunkov
2022-06-21 9:09 ` [PATCH for-next 4/4] io_uring: dedup io_run_task_work Pavel Begunkov
2022-06-21 15:17 ` [PATCH for-next 0/4] random 5.20 patches Jens Axboe
4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2022-06-21 9:09 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
It's annoying to have io-wq.h as a dependency every time we want some
of the struct io_wq_work_list helpers, so move them into a separate file.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/io-wq.c | 1 +
io_uring/io-wq.h | 131 -----------------------------------------
io_uring/io_uring.h | 1 +
io_uring/slist.h | 138 ++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 140 insertions(+), 131 deletions(-)
create mode 100644 io_uring/slist.h
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 824623bcf1a5..3e34dfbdf946 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -18,6 +18,7 @@
#include <uapi/linux/io_uring.h>
#include "io-wq.h"
+#include "slist.h"
#define WORKER_IDLE_TIMEOUT (5 * HZ)
diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
index 10b80ef78bb8..31228426d192 100644
--- a/io_uring/io-wq.h
+++ b/io_uring/io-wq.h
@@ -21,137 +21,6 @@ enum io_wq_cancel {
IO_WQ_CANCEL_NOTFOUND, /* work not found */
};
-#define wq_list_for_each(pos, prv, head) \
- for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
-
-#define wq_list_for_each_resume(pos, prv) \
- for (; pos; prv = pos, pos = (pos)->next)
-
-#define wq_list_empty(list) (READ_ONCE((list)->first) == NULL)
-#define INIT_WQ_LIST(list) do { \
- (list)->first = NULL; \
-} while (0)
-
-static inline void wq_list_add_after(struct io_wq_work_node *node,
- struct io_wq_work_node *pos,
- struct io_wq_work_list *list)
-{
- struct io_wq_work_node *next = pos->next;
-
- pos->next = node;
- node->next = next;
- if (!next)
- list->last = node;
-}
-
-/**
- * wq_list_merge - merge the second list to the first one.
- * @list0: the first list
- * @list1: the second list
- * Return the first node after mergence.
- */
-static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
- struct io_wq_work_list *list1)
-{
- struct io_wq_work_node *ret;
-
- if (!list0->first) {
- ret = list1->first;
- } else {
- ret = list0->first;
- list0->last->next = list1->first;
- }
- INIT_WQ_LIST(list0);
- INIT_WQ_LIST(list1);
- return ret;
-}
-
-static inline void wq_list_add_tail(struct io_wq_work_node *node,
- struct io_wq_work_list *list)
-{
- node->next = NULL;
- if (!list->first) {
- list->last = node;
- WRITE_ONCE(list->first, node);
- } else {
- list->last->next = node;
- list->last = node;
- }
-}
-
-static inline void wq_list_add_head(struct io_wq_work_node *node,
- struct io_wq_work_list *list)
-{
- node->next = list->first;
- if (!node->next)
- list->last = node;
- WRITE_ONCE(list->first, node);
-}
-
-static inline void wq_list_cut(struct io_wq_work_list *list,
- struct io_wq_work_node *last,
- struct io_wq_work_node *prev)
-{
- /* first in the list, if prev==NULL */
- if (!prev)
- WRITE_ONCE(list->first, last->next);
- else
- prev->next = last->next;
-
- if (last == list->last)
- list->last = prev;
- last->next = NULL;
-}
-
-static inline void __wq_list_splice(struct io_wq_work_list *list,
- struct io_wq_work_node *to)
-{
- list->last->next = to->next;
- to->next = list->first;
- INIT_WQ_LIST(list);
-}
-
-static inline bool wq_list_splice(struct io_wq_work_list *list,
- struct io_wq_work_node *to)
-{
- if (!wq_list_empty(list)) {
- __wq_list_splice(list, to);
- return true;
- }
- return false;
-}
-
-static inline void wq_stack_add_head(struct io_wq_work_node *node,
- struct io_wq_work_node *stack)
-{
- node->next = stack->next;
- stack->next = node;
-}
-
-static inline void wq_list_del(struct io_wq_work_list *list,
- struct io_wq_work_node *node,
- struct io_wq_work_node *prev)
-{
- wq_list_cut(list, node, prev);
-}
-
-static inline
-struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
-{
- struct io_wq_work_node *node = stack->next;
-
- stack->next = node->next;
- return node;
-}
-
-static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
-{
- if (!work->list.next)
- return NULL;
-
- return container_of(work->list.next, struct io_wq_work, list);
-}
-
typedef struct io_wq_work *(free_work_fn)(struct io_wq_work *);
typedef void (io_wq_work_fn)(struct io_wq_work *);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 4c4d38ffc5ec..f026d2670959 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -5,6 +5,7 @@
#include <linux/lockdep.h>
#include <linux/io_uring_types.h>
#include "io-wq.h"
+#include "slist.h"
#include "filetable.h"
#ifndef CREATE_TRACE_POINTS
diff --git a/io_uring/slist.h b/io_uring/slist.h
new file mode 100644
index 000000000000..f27601fa4660
--- /dev/null
+++ b/io_uring/slist.h
@@ -0,0 +1,138 @@
+#ifndef INTERNAL_IO_SLIST_H
+#define INTERNAL_IO_SLIST_H
+
+#include <linux/io_uring_types.h>
+
+#define wq_list_for_each(pos, prv, head) \
+ for (pos = (head)->first, prv = NULL; pos; prv = pos, pos = (pos)->next)
+
+#define wq_list_for_each_resume(pos, prv) \
+ for (; pos; prv = pos, pos = (pos)->next)
+
+#define wq_list_empty(list) (READ_ONCE((list)->first) == NULL)
+
+#define INIT_WQ_LIST(list) do { \
+ (list)->first = NULL; \
+} while (0)
+
+static inline void wq_list_add_after(struct io_wq_work_node *node,
+ struct io_wq_work_node *pos,
+ struct io_wq_work_list *list)
+{
+ struct io_wq_work_node *next = pos->next;
+
+ pos->next = node;
+ node->next = next;
+ if (!next)
+ list->last = node;
+}
+
+/**
+ * wq_list_merge - merge the second list to the first one.
+ * @list0: the first list
+ * @list1: the second list
+ * Return the first node after mergence.
+ */
+static inline struct io_wq_work_node *wq_list_merge(struct io_wq_work_list *list0,
+ struct io_wq_work_list *list1)
+{
+ struct io_wq_work_node *ret;
+
+ if (!list0->first) {
+ ret = list1->first;
+ } else {
+ ret = list0->first;
+ list0->last->next = list1->first;
+ }
+ INIT_WQ_LIST(list0);
+ INIT_WQ_LIST(list1);
+ return ret;
+}
+
+static inline void wq_list_add_tail(struct io_wq_work_node *node,
+ struct io_wq_work_list *list)
+{
+ node->next = NULL;
+ if (!list->first) {
+ list->last = node;
+ WRITE_ONCE(list->first, node);
+ } else {
+ list->last->next = node;
+ list->last = node;
+ }
+}
+
+static inline void wq_list_add_head(struct io_wq_work_node *node,
+ struct io_wq_work_list *list)
+{
+ node->next = list->first;
+ if (!node->next)
+ list->last = node;
+ WRITE_ONCE(list->first, node);
+}
+
+static inline void wq_list_cut(struct io_wq_work_list *list,
+ struct io_wq_work_node *last,
+ struct io_wq_work_node *prev)
+{
+ /* first in the list, if prev==NULL */
+ if (!prev)
+ WRITE_ONCE(list->first, last->next);
+ else
+ prev->next = last->next;
+
+ if (last == list->last)
+ list->last = prev;
+ last->next = NULL;
+}
+
+static inline void __wq_list_splice(struct io_wq_work_list *list,
+ struct io_wq_work_node *to)
+{
+ list->last->next = to->next;
+ to->next = list->first;
+ INIT_WQ_LIST(list);
+}
+
+static inline bool wq_list_splice(struct io_wq_work_list *list,
+ struct io_wq_work_node *to)
+{
+ if (!wq_list_empty(list)) {
+ __wq_list_splice(list, to);
+ return true;
+ }
+ return false;
+}
+
+static inline void wq_stack_add_head(struct io_wq_work_node *node,
+ struct io_wq_work_node *stack)
+{
+ node->next = stack->next;
+ stack->next = node;
+}
+
+static inline void wq_list_del(struct io_wq_work_list *list,
+ struct io_wq_work_node *node,
+ struct io_wq_work_node *prev)
+{
+ wq_list_cut(list, node, prev);
+}
+
+static inline
+struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
+{
+ struct io_wq_work_node *node = stack->next;
+
+ stack->next = node->next;
+ return node;
+}
+
+static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
+{
+ if (!work->list.next)
+ return NULL;
+
+ return container_of(work->list.next, struct io_wq_work, list);
+}
+
+#endif // INTERNAL_IO_SLIST_H
\ No newline at end of file
--
2.36.1
* [PATCH for-next 4/4] io_uring: dedup io_run_task_work
2022-06-21 9:08 [PATCH for-next 0/4] random 5.20 patches Pavel Begunkov
` (2 preceding siblings ...)
2022-06-21 9:09 ` [PATCH for-next 3/4] io_uring: move list helpers to a separate file Pavel Begunkov
@ 2022-06-21 9:09 ` Pavel Begunkov
2022-06-21 15:17 ` [PATCH for-next 0/4] random 5.20 patches Jens Axboe
4 siblings, 0 replies; 6+ messages in thread
From: Pavel Begunkov @ 2022-06-21 9:09 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe, asml.silence
We have an identical copy of io_run_task_work() for io-wq called
io_flush_signals(); deduplicate them.
Signed-off-by: Pavel Begunkov <[email protected]>
---
io_uring/filetable.h | 2 ++
io_uring/io-wq.c | 17 +++--------------
2 files changed, 5 insertions(+), 14 deletions(-)
diff --git a/io_uring/filetable.h b/io_uring/filetable.h
index 6b58aa48bc45..fb5a274c08ff 100644
--- a/io_uring/filetable.h
+++ b/io_uring/filetable.h
@@ -2,6 +2,8 @@
#ifndef IOU_FILE_TABLE_H
#define IOU_FILE_TABLE_H
+#include <linux/file.h>
+
struct io_ring_ctx;
struct io_kiocb;
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 3e34dfbdf946..77df5b43bf52 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -19,6 +19,7 @@
#include "io-wq.h"
#include "slist.h"
+#include "io_uring.h"
#define WORKER_IDLE_TIMEOUT (5 * HZ)
@@ -519,23 +520,11 @@ static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
return NULL;
}
-static bool io_flush_signals(void)
-{
- if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
- __set_current_state(TASK_RUNNING);
- clear_notify_signal();
- if (task_work_pending(current))
- task_work_run();
- return true;
- }
- return false;
-}
-
static void io_assign_current_work(struct io_worker *worker,
struct io_wq_work *work)
{
if (work) {
- io_flush_signals();
+ io_run_task_work();
cond_resched();
}
@@ -655,7 +644,7 @@ static int io_wqe_worker(void *data)
last_timeout = false;
__io_worker_idle(wqe, worker);
raw_spin_unlock(&wqe->lock);
- if (io_flush_signals())
+ if (io_run_task_work())
continue;
ret = schedule_timeout(WORKER_IDLE_TIMEOUT);
if (signal_pending(current)) {
--
2.36.1
* Re: [PATCH for-next 0/4] random 5.20 patches
2022-06-21 9:08 [PATCH for-next 0/4] random 5.20 patches Pavel Begunkov
` (3 preceding siblings ...)
2022-06-21 9:09 ` [PATCH for-next 4/4] io_uring: dedup io_run_task_work Pavel Begunkov
@ 2022-06-21 15:17 ` Jens Axboe
4 siblings, 0 replies; 6+ messages in thread
From: Jens Axboe @ 2022-06-21 15:17 UTC (permalink / raw)
To: io-uring, asml.silence
On Tue, 21 Jun 2022 10:08:58 +0100, Pavel Begunkov wrote:
> Just random patches, 1/4 is a poll fix.
>
> Pavel Begunkov (4):
> io_uring: fix poll_add error handling
> io_uring: improve io_run_task_work()
> io_uring: move list helpers to a separate file
> io_uring: dedup io_run_task_work
>
> [...]
Applied, thanks!
[1/4] io_uring: fix poll_add error handling
(no commit info)
[2/4] io_uring: improve io_run_task_work()
(no commit info)
[3/4] io_uring: move list helpers to a separate file
(no commit info)
[4/4] io_uring: dedup io_run_task_work
(no commit info)
Best regards,
--
Jens Axboe