* [PATCH v2 0/3] cancel_hash per entry lock

From: Hao Xu @ 2022-06-06  6:57 UTC
To: io-uring
Cc: Jens Axboe, Pavel Begunkov

From: Hao Xu <[email protected]>

Make a per-entry lock for the cancel_hash array. This reduces use of
the completion_lock and contention between cancel_hash entries.

v1->v2:
- Add a per-entry lock for the poll/apoll task work code, which was
  missed in v1
- Add a member to io_kiocb to track the req's index in cancel_hash

Hao Xu (3):
  io_uring: add hash_index and its logic to track req in cancel_hash
  io_uring: add an io_hash_bucket structure for smaller granularity lock
  io_uring: switch cancel_hash to use per list spinlock

 io_uring/cancel.c         | 15 +++++++--
 io_uring/cancel.h         |  6 ++++
 io_uring/fdinfo.c         |  9 ++++--
 io_uring/io_uring.c       |  8 +++--
 io_uring/io_uring_types.h |  3 +-
 io_uring/poll.c           | 64 +++++++++++++++++++++------------------
 6 files changed, 67 insertions(+), 38 deletions(-)

base-commit: d8271bf021438f468dab3cd84fe5279b5bbcead8
--
2.25.1
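In miniature, the locking change the series makes looks like the sketch
below, condensed from patch 3 later in this thread. The identifiers
(ctx, req, hb, io_hash_bucket) are taken from the series itself; the
"before" side folds the caller's locking into one place for contrast,
so it is not a literal quote of the old code:

	/* Before: every hash insert serializes on the ring-wide lock. */
	spin_lock(&ctx->completion_lock);
	hlist_add_head(&req->hash_node, &ctx->cancel_hash[index]);
	spin_unlock(&ctx->completion_lock);

	/* After: each bucket carries its own spinlock, so inserts into
	 * different buckets no longer contend with each other. */
	struct io_hash_bucket *hb = &ctx->cancel_hash[index];

	spin_lock(&hb->lock);
	hlist_add_head(&req->hash_node, &hb->list);
	spin_unlock(&hb->lock);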
* [PATCH 1/3] io_uring: add hash_index and its logic to track req in cancel_hash

From: Hao Xu @ 2022-06-06  6:57 UTC
To: io-uring
Cc: Jens Axboe, Pavel Begunkov

From: Hao Xu <[email protected]>

Add a new member hash_index in struct io_kiocb to track the req index
in the cancel_hash array. This is needed in later patches.

Signed-off-by: Hao Xu <[email protected]>
---
 io_uring/io_uring_types.h | 1 +
 io_uring/poll.c           | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
index 7c22cf35a7e2..2041ee83467d 100644
--- a/io_uring/io_uring_types.h
+++ b/io_uring/io_uring_types.h
@@ -474,6 +474,7 @@ struct io_kiocb {
 			u64		extra2;
 		};
 	};
+	unsigned int			hash_index;
 	/* internal polling, see IORING_FEAT_FAST_POLL */
 	struct async_poll		*apoll;
 	/* opcode allocated if it needs to store data for async defer */
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 0df5eca93b16..95e28f32b49c 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -74,8 +74,10 @@ static void io_poll_req_insert(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct hlist_head *list;
+	u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);

-	list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
+	req->hash_index = index;
+	list = &ctx->cancel_hash[index];
 	hlist_add_head(&req->hash_node, list);
 }
--
2.25.1
* Re: [PATCH 1/3] io_uring: add hash_index and its logic to track req in cancel_hash

From: Pavel Begunkov @ 2022-06-06 11:59 UTC
To: Hao Xu, io-uring
Cc: Jens Axboe

On 6/6/22 07:57, Hao Xu wrote:
> From: Hao Xu <[email protected]>
>
> Add a new member hash_index in struct io_kiocb to track the req index
> in the cancel_hash array. This is needed in later patches.
>
> Signed-off-by: Hao Xu <[email protected]>
> ---
>  io_uring/io_uring_types.h | 1 +
>  io_uring/poll.c           | 4 +++-
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
> index 7c22cf35a7e2..2041ee83467d 100644
> --- a/io_uring/io_uring_types.h
> +++ b/io_uring/io_uring_types.h
> @@ -474,6 +474,7 @@ struct io_kiocb {
>  			u64		extra2;
>  		};
>  	};
> +	unsigned int			hash_index;

Didn't take a closer look, but can we get rid of it? E.g. by computing
it again when ejecting a request from the hash, or by keeping it in
struct io_poll?

>  	/* internal polling, see IORING_FEAT_FAST_POLL */
>  	struct async_poll		*apoll;
>  	/* opcode allocated if it needs to store data for async defer */
> diff --git a/io_uring/poll.c b/io_uring/poll.c
> index 0df5eca93b16..95e28f32b49c 100644
> --- a/io_uring/poll.c
> +++ b/io_uring/poll.c
> @@ -74,8 +74,10 @@ static void io_poll_req_insert(struct io_kiocb *req)
>  {
>  	struct io_ring_ctx *ctx = req->ctx;
>  	struct hlist_head *list;
> +	u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);
>
> -	list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
> +	req->hash_index = index;
> +	list = &ctx->cancel_hash[index];
>  	hlist_add_head(&req->hash_node, list);
>  }

--
Pavel Begunkov
* Re: [PATCH 1/3] io_uring: add hash_index and its logic to track req in cancel_hash

From: Hao Xu @ 2022-06-06 13:47 UTC
To: Pavel Begunkov, io-uring
Cc: Jens Axboe

On 6/6/22 19:59, Pavel Begunkov wrote:
> On 6/6/22 07:57, Hao Xu wrote:
>> From: Hao Xu <[email protected]>
>>
>> Add a new member hash_index in struct io_kiocb to track the req index
>> in the cancel_hash array. This is needed in later patches.
>>
>> [...]
>> +	unsigned int			hash_index;
>
> Didn't take a closer look, but can we get rid of it? E.g. by computing
> it again when ejecting a request from the hash, or by keeping it in
> struct io_poll?

Good point. I'd prefer moving it into io_poll over recomputing it,
since the point of this patchset is to make things faster.
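For illustration, the alternative Hao leans toward might look roughly
like the sketch below. The surrounding io_poll fields are paraphrased
from the kernel source of around that time, and the exact placement of
the new field is only a guess, not the posted code:

	struct io_poll {
		struct file			*file;
		struct wait_queue_head		*head;
		__poll_t			events;
		/* moved out of the generic io_kiocb, since only poll
		 * requests are ever hashed into cancel_hash */
		unsigned int			hash_index;
		struct wait_queue_entry		wait;
	};

io_poll_req_insert() would then store the bucket index here, and the
delete path would read it back through the request's poll data rather
than through req->hash_index.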
* [PATCH 2/3] io_uring: add an io_hash_bucket structure for smaller granularity lock

From: Hao Xu @ 2022-06-06  6:57 UTC
To: io-uring
Cc: Jens Axboe, Pavel Begunkov

From: Hao Xu <[email protected]>

Add a new io_hash_bucket structure so that each bucket in cancel_hash
has a separate spinlock. This is a prep patch for later use.

Signed-off-by: Hao Xu <[email protected]>
---
 io_uring/cancel.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/io_uring/cancel.h b/io_uring/cancel.h
index 4f35d8696325..b9218310611c 100644
--- a/io_uring/cancel.h
+++ b/io_uring/cancel.h
@@ -4,3 +4,8 @@ int io_async_cancel_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
 int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags);

 int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd);
+
+struct io_hash_bucket {
+	spinlock_t		lock;
+	struct hlist_head	list;
+};
--
2.25.1
* Re: [PATCH 2/3] io_uring: add an io_hash_bucket structure for smaller granularity lock

From: Pavel Begunkov @ 2022-06-06 11:55 UTC
To: Hao Xu, io-uring
Cc: Jens Axboe

On 6/6/22 07:57, Hao Xu wrote:
> From: Hao Xu <[email protected]>
>
> Add a new io_hash_bucket structure so that each bucket in cancel_hash
> has a separate spinlock. This is a prep patch for later use.
>
> Signed-off-by: Hao Xu <[email protected]>
> ---
> [...]
> +
> +struct io_hash_bucket {
> +	spinlock_t		lock;
> +	struct hlist_head	list;
> +};

Please, in the future just merge such patches into the next one.
Separately it doesn't do anything meaningful: the struct is not used
here, and IMHO it only makes reviewing harder.

--
Pavel Begunkov
* [PATCH 3/3] io_uring: switch cancel_hash to use per list spinlock

From: Hao Xu @ 2022-06-06  6:57 UTC
To: io-uring
Cc: Jens Axboe, Pavel Begunkov

From: Hao Xu <[email protected]>

Use a per-list lock for cancel_hash. This removes some completion_lock
invocations and removes contention between different cancel_hash
entries.

Signed-off-by: Hao Xu <[email protected]>
---
 io_uring/cancel.c         | 15 ++++++++--
 io_uring/cancel.h         |  1 +
 io_uring/fdinfo.c         |  9 ++++--
 io_uring/io_uring.c       |  8 +++--
 io_uring/io_uring_types.h |  2 +-
 io_uring/poll.c           | 62 +++++++++++++++++++++------------------
 6 files changed, 59 insertions(+), 38 deletions(-)

diff --git a/io_uring/cancel.c b/io_uring/cancel.c
index 83cceb52d82d..c3e5b8058b0d 100644
--- a/io_uring/cancel.c
+++ b/io_uring/cancel.c
@@ -6,6 +6,7 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/namei.h>
+#include <linux/list.h>
 #include <linux/io_uring.h>

 #include <uapi/linux/io_uring.h>
@@ -93,14 +94,14 @@ int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd)
 	if (!ret)
 		return 0;

-	spin_lock(&ctx->completion_lock);
 	ret = io_poll_cancel(ctx, cd);
 	if (ret != -ENOENT)
 		goto out;
+	spin_lock(&ctx->completion_lock);
 	if (!(cd->flags & IORING_ASYNC_CANCEL_FD))
 		ret = io_timeout_cancel(ctx, cd);
-out:
 	spin_unlock(&ctx->completion_lock);
+out:
 	return ret;
 }

@@ -192,3 +193,13 @@ int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
 	io_req_set_res(req, ret, 0);
 	return IOU_OK;
 }
+
+inline void init_hash_table(struct io_hash_bucket *hash_table, unsigned size)
+{
+	unsigned int i;
+
+	for (i = 0; i < size; i++) {
+		spin_lock_init(&hash_table[i].lock);
+		INIT_HLIST_HEAD(&hash_table[i].list);
+	}
+}
diff --git a/io_uring/cancel.h b/io_uring/cancel.h
index b9218310611c..f682e9811e68 100644
--- a/io_uring/cancel.h
+++ b/io_uring/cancel.h
@@ -4,6 +4,7 @@ int io_async_cancel_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
 int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags);

 int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd);
+inline void init_hash_table(struct io_hash_bucket *hash_table, unsigned size);

 struct io_hash_bucket {
 	spinlock_t		lock;
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index fcedde4b4b1e..f941c73f5502 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -13,6 +13,7 @@
 #include "io_uring.h"
 #include "sqpoll.h"
 #include "fdinfo.h"
+#include "cancel.h"

 #ifdef CONFIG_PROC_FS
 static __cold int io_uring_show_cred(struct seq_file *m, unsigned int id,
@@ -157,17 +158,19 @@ static __cold void __io_uring_show_fdinfo(struct io_ring_ctx *ctx,
 		mutex_unlock(&ctx->uring_lock);

 	seq_puts(m, "PollList:\n");
-	spin_lock(&ctx->completion_lock);
 	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
-		struct hlist_head *list = &ctx->cancel_hash[i];
+		struct io_hash_bucket *hb = &ctx->cancel_hash[i];
 		struct io_kiocb *req;

-		hlist_for_each_entry(req, list, hash_node)
+		spin_lock(&hb->lock);
+		hlist_for_each_entry(req, &hb->list, hash_node)
 			seq_printf(m, "  op=%d, task_works=%d\n", req->opcode,
 					task_work_pending(req->task));
+		spin_unlock(&hb->lock);
 	}

 	seq_puts(m, "CqOverflowList:\n");
+	spin_lock(&ctx->completion_lock);
 	list_for_each_entry(ocqe, &ctx->cq_overflow_list, list) {
 		struct io_uring_cqe *cqe = &ocqe->cqe;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1572ebe3cff1..b67ab76b9e56 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -725,11 +725,13 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	if (hash_bits <= 0)
 		hash_bits = 1;
 	ctx->cancel_hash_bits = hash_bits;
-	ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
-					GFP_KERNEL);
+	ctx->cancel_hash =
+		kmalloc((1U << hash_bits) * sizeof(struct io_hash_bucket),
+			GFP_KERNEL);
 	if (!ctx->cancel_hash)
 		goto err;
-	__hash_init(ctx->cancel_hash, 1U << hash_bits);
+
+	init_hash_table(ctx->cancel_hash, 1U << hash_bits);

 	ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
 	if (!ctx->dummy_ubuf)
diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
index 2041ee83467d..59231f7345ac 100644
--- a/io_uring/io_uring_types.h
+++ b/io_uring/io_uring_types.h
@@ -230,7 +230,7 @@ struct io_ring_ctx {
 		 * manipulate the list, hence no extra locking is needed there.
 		 */
 		struct io_wq_work_list	iopoll_list;
-		struct hlist_head	*cancel_hash;
+		struct io_hash_bucket	*cancel_hash;
 		unsigned		cancel_hash_bits;
 		bool			poll_multi_queue;
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 95e28f32b49c..d40fad768d58 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -19,6 +19,7 @@
 #include "opdef.h"
 #include "kbuf.h"
 #include "poll.h"
+#include "cancel.h"

 struct io_poll_update {
 	struct file			*file;
@@ -73,12 +74,22 @@ static struct io_poll *io_poll_get_single(struct io_kiocb *req)
 static void io_poll_req_insert(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	struct hlist_head *list;
 	u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);
+	struct io_hash_bucket *hb = &ctx->cancel_hash[index];

 	req->hash_index = index;
-	list = &ctx->cancel_hash[index];
-	hlist_add_head(&req->hash_node, list);
+	spin_lock(&hb->lock);
+	hlist_add_head(&req->hash_node, &hb->list);
+	spin_unlock(&hb->lock);
+}
+
+static void io_poll_req_delete(struct io_kiocb *req, struct io_ring_ctx *ctx)
+{
+	spinlock_t *lock = &ctx->cancel_hash[req->hash_index].lock;
+
+	spin_lock(lock);
+	hash_del(&req->hash_node);
+	spin_unlock(lock);
 }

 static void io_init_poll_iocb(struct io_poll *poll, __poll_t events,
@@ -222,8 +233,8 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
 	}

 	io_poll_remove_entries(req);
+	io_poll_req_delete(req, ctx);
 	spin_lock(&ctx->completion_lock);
-	hash_del(&req->hash_node);
 	req->cqe.flags = 0;
 	__io_req_complete_post(req);
 	io_commit_cqring(ctx);
@@ -233,7 +244,6 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)

 static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 {
-	struct io_ring_ctx *ctx = req->ctx;
 	int ret;

 	ret = io_poll_check_events(req, locked);
@@ -241,9 +251,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
 		return;

 	io_poll_remove_entries(req);
-	spin_lock(&ctx->completion_lock);
-	hash_del(&req->hash_node);
-	spin_unlock(&ctx->completion_lock);
+	io_poll_req_delete(req, req->ctx);

 	if (!ret)
 		io_req_task_submit(req, locked);
@@ -437,9 +445,7 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 		return 0;
 	}

-	spin_lock(&ctx->completion_lock);
 	io_poll_req_insert(req);
-	spin_unlock(&ctx->completion_lock);

 	if (mask && (poll->events & EPOLLET)) {
 		/* can't multishot if failed, just queue the event we've got */
@@ -536,32 +542,32 @@ __cold bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
 	bool found = false;
 	int i;

-	spin_lock(&ctx->completion_lock);
 	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
-		struct hlist_head *list;
+		struct io_hash_bucket *hb = &ctx->cancel_hash[i];

-		list = &ctx->cancel_hash[i];
-		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+		spin_lock(&hb->lock);
+		hlist_for_each_entry_safe(req, tmp, &hb->list, hash_node) {
 			if (io_match_task_safe(req, tsk, cancel_all)) {
 				hlist_del_init(&req->hash_node);
 				io_poll_cancel_req(req);
 				found = true;
 			}
 		}
+		spin_unlock(&hb->lock);
 	}
-	spin_unlock(&ctx->completion_lock);
 	return found;
 }

 static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, bool poll_only,
 				     struct io_cancel_data *cd)
-	__must_hold(&ctx->completion_lock)
 {
-	struct hlist_head *list;
 	struct io_kiocb *req;
+	u32 index = hash_long(cd->data, ctx->cancel_hash_bits);
+	struct io_hash_bucket *hb = &ctx->cancel_hash[index];

-	list = &ctx->cancel_hash[hash_long(cd->data, ctx->cancel_hash_bits)];
-	hlist_for_each_entry(req, list, hash_node) {
+	spin_lock(&hb->lock);
+	hlist_for_each_entry(req, &hb->list, hash_node) {
 		if (cd->data != req->cqe.user_data)
 			continue;
 		if (poll_only && req->opcode != IORING_OP_POLL_ADD)
@@ -571,47 +577,48 @@ static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, bool poll_only,
 				continue;
 			req->work.cancel_seq = cd->seq;
 		}
+		spin_unlock(&hb->lock);
 		return req;
 	}
+	spin_unlock(&hb->lock);
 	return NULL;
 }

 static struct io_kiocb *io_poll_file_find(struct io_ring_ctx *ctx,
 					  struct io_cancel_data *cd)
-	__must_hold(&ctx->completion_lock)
 {
 	struct io_kiocb *req;
 	int i;

 	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
-		struct hlist_head *list;
+		struct io_hash_bucket *hb = &ctx->cancel_hash[i];

-		list = &ctx->cancel_hash[i];
-		hlist_for_each_entry(req, list, hash_node) {
+		spin_lock(&hb->lock);
+		hlist_for_each_entry(req, &hb->list, hash_node) {
 			if (!(cd->flags & IORING_ASYNC_CANCEL_ANY) &&
 			    req->file != cd->file)
 				continue;
 			if (cd->seq == req->work.cancel_seq)
 				continue;
 			req->work.cancel_seq = cd->seq;
+			spin_unlock(&hb->lock);
 			return req;
 		}
+		spin_unlock(&hb->lock);
 	}
 	return NULL;
 }

 static bool io_poll_disarm(struct io_kiocb *req)
-	__must_hold(&ctx->completion_lock)
 {
 	if (!io_poll_get_ownership(req))
 		return false;
 	io_poll_remove_entries(req);
-	hash_del(&req->hash_node);
+	io_poll_req_delete(req, req->ctx);
 	return true;
 }

 int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
-	__must_hold(&ctx->completion_lock)
 {
 	struct io_kiocb *req;

@@ -720,14 +727,11 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 	int ret2, ret = 0;
 	bool locked;

-	spin_lock(&ctx->completion_lock);
 	preq = io_poll_find(ctx, true, &cd);
 	if (!preq || !io_poll_disarm(preq)) {
-		spin_unlock(&ctx->completion_lock);
 		ret = preq ? -EALREADY : -ENOENT;
 		goto out;
 	}
-	spin_unlock(&ctx->completion_lock);

 	if (poll_update->update_events || poll_update->update_user_data) {
 		/* only mask one event flags, keep behavior flags */
--
2.25.1
* Re: [PATCH v2 0/3] cancel_hash per entry lock

From: Hao Xu @ 2022-06-06  7:06 UTC
To: io-uring
Cc: Jens Axboe, Pavel Begunkov

On 6/6/22 14:57, Hao Xu wrote:
> From: Hao Xu <[email protected]>
>
> Make a per-entry lock for the cancel_hash array. This reduces use of
> the completion_lock and contention between cancel_hash entries.
>
> v1->v2:
> - Add a per-entry lock for the poll/apoll task work code, which was
>   missed in v1
> - Add a member to io_kiocb to track the req's index in cancel_hash

Tried to test it with many poll_add IOSQE_ASYNC requests, but it turned
out that there is little completion_lock contention, so there is no
visible change in the numbers. But I still think this may be good for
cancel_hash access in some real cases where the completion lock
matters.

Regards,
Hao
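For reference, a userspace test along the lines Hao describes might
look roughly like the sketch below: it queues many poll requests with
IOSQE_ASYNC so that io-wq workers insert into cancel_hash concurrently.
This is not the actual test that was run; the queue depth and the
eventfd used as a poll target are arbitrary choices here.

	#include <liburing.h>
	#include <poll.h>
	#include <sys/eventfd.h>

	#define QD 4096

	int main(void)
	{
		struct io_uring ring;
		int i, fd = eventfd(0, 0);

		if (fd < 0 || io_uring_queue_init(QD, &ring, 0) < 0)
			return 1;

		for (i = 0; i < QD; i++) {
			struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

			if (!sqe)
				break;
			io_uring_prep_poll_add(sqe, fd, POLLIN);
			sqe->flags |= IOSQE_ASYNC;	/* force io-wq offload */
			sqe->user_data = i;
		}
		/* all requests park in cancel_hash until the eventfd fires */
		io_uring_submit(&ring);

		io_uring_queue_exit(&ring);
		return 0;
	}

As Hao notes below, whether this actually produces contention depends
on how the io-wq workers get scheduled across CPUs.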
* Re: [PATCH v2 0/3] cancel_hash per entry lock

From: Pavel Begunkov @ 2022-06-06 12:02 UTC
To: Hao Xu, io-uring
Cc: Jens Axboe

On 6/6/22 08:06, Hao Xu wrote:
> On 6/6/22 14:57, Hao Xu wrote:
>> From: Hao Xu <[email protected]>
>>
>> Make a per-entry lock for the cancel_hash array. This reduces use of
>> the completion_lock and contention between cancel_hash entries.
>>
>> v1->v2:
>> - Add a per-entry lock for the poll/apoll task work code, which was
>>   missed in v1
>> - Add a member to io_kiocb to track the req's index in cancel_hash
>
> Tried to test it with many poll_add IOSQE_ASYNC requests, but it
> turned out that there is little completion_lock contention, so there
> is no visible change in the numbers. But I still think this may be
> good for cancel_hash access in some real cases where the completion
> lock matters.

Conceptually I don't mind it, but let me ask: in what circumstances do
you expect it to make a difference? And what can we do to get
favourable numbers? For instance, how many CPUs was io-wq using?

--
Pavel Begunkov
* Re: [PATCH v2 0/3] cancel_hash per entry lock

From: Pavel Begunkov @ 2022-06-06 12:09 UTC
To: Hao Xu, io-uring
Cc: Jens Axboe

On 6/6/22 13:02, Pavel Begunkov wrote:
> On 6/6/22 08:06, Hao Xu wrote:
>> Tried to test it with many poll_add IOSQE_ASYNC requests, but it
>> turned out that there is little completion_lock contention, so there
>> is no visible change in the numbers. [...]
>
> Conceptually I don't mind it, but let me ask: in what circumstances do
> you expect it to make a difference? And what can we do to get
> favourable numbers? For instance, how many CPUs was io-wq using?

Btw, I couldn't find ____cacheline_aligned_in_smp anywhere, which I'd
expect around those new spinlocks to keep them from sharing cache
lines.

--
Pavel Begunkov
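Concretely, the annotation Pavel refers to would attach to the struct
from patch 2, along the lines of this sketch (not part of the posted
series). The trade-off is memory: each bucket then occupies a full
cache line (typically 64 bytes) instead of roughly 16:

	struct io_hash_bucket {
		spinlock_t		lock;
		struct hlist_head	list;
		/* pad each bucket to its own cache line so CPUs working
		 * on adjacent buckets don't bounce the same line */
	} ____cacheline_aligned_in_smp;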
* Re: [PATCH v2 0/3] cancel_hash per entry lock

From: Hao Xu @ 2022-06-06 13:39 UTC
To: Pavel Begunkov, io-uring
Cc: Jens Axboe

On 6/6/22 20:02, Pavel Begunkov wrote:
> On 6/6/22 08:06, Hao Xu wrote:
>> Tried to test it with many poll_add IOSQE_ASYNC requests, but it
>> turned out that there is little completion_lock contention, so there
>> is no visible change in the numbers. [...]
>
> Conceptually I don't mind it, but let me ask: in what circumstances do
> you expect it to make a difference?

I suppose there are cases where a bunch of users access cancel_hash[]
at the same time, e.g. when people use multiple threads to submit sqes
or use IOSQE_ASYNC, and those io-workers or task works run in parallel
on different CPUs.

> And what can we do to get favourable numbers? For instance, how many
> CPUs was io-wq using?

It is not easy to construct that manually, since it depends on task
scheduling. For example, if we just issue many IOSQE_ASYNC polls on an
idle machine with many CPUs, there won't be much contention, because
the threads start at different times (and thus access cancel_hash at
different times).
Thread overview: 11+ messages

2022-06-06  6:57 [PATCH v2 0/3] cancel_hash per entry lock Hao Xu
2022-06-06  6:57 ` [PATCH 1/3] io_uring: add hash_index and its logic to track req in cancel_hash Hao Xu
2022-06-06 11:59   ` Pavel Begunkov
2022-06-06 13:47     ` Hao Xu
2022-06-06  6:57 ` [PATCH 2/3] io_uring: add an io_hash_bucket structure for smaller granularity lock Hao Xu
2022-06-06 11:55   ` Pavel Begunkov
2022-06-06  6:57 ` [PATCH 3/3] io_uring: switch cancel_hash to use per list spinlock Hao Xu
2022-06-06  7:06 ` [PATCH v2 0/3] cancel_hash per entry lock Hao Xu
2022-06-06 12:02   ` Pavel Begunkov
2022-06-06 12:09     ` Pavel Begunkov
2022-06-06 13:39     ` Hao Xu