public inbox for [email protected]
From: kernel test robot <[email protected]>
To: Stefan Roesch <[email protected]>,
	[email protected], [email protected],
	[email protected], [email protected]
Cc: [email protected], [email protected]
Subject: Re: [PATCH v2 06/13] fs: Add gfp_t parameter to create_page_buffers()
Date: Mon, 21 Feb 2022 08:18:48 +0800	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <[email protected]>

Hi Stefan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on 9195e5e0adbb8a9a5ee9ef0f9dedf6340d827405]

url:    https://github.com/0day-ci/linux/commits/Stefan-Roesch/Support-sync-buffered-writes-for-io-uring/20220220-172629
base:   9195e5e0adbb8a9a5ee9ef0f9dedf6340d827405
config: sparc64-randconfig-s031-20220220 (https://download.01.org/0day-ci/archive/20220221/[email protected]/config)
compiler: sparc64-linux-gcc (GCC) 11.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-dirty
        # https://github.com/0day-ci/linux/commit/e98a7c2a17960f81efc5968cbc386af7c088a8ed
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Stefan-Roesch/Support-sync-buffered-writes-for-io-uring/20220220-172629
        git checkout e98a7c2a17960f81efc5968cbc386af7c088a8ed
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=sparc64 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)
>> fs/buffer.c:2010:60: sparse: sparse: incorrect type in argument 4 (different base types) @@     expected restricted gfp_t [usertype] flags @@     got unsigned int flags @@
   fs/buffer.c:2010:60: sparse:     expected restricted gfp_t [usertype] flags
   fs/buffer.c:2010:60: sparse:     got unsigned int flags
>> fs/buffer.c:2147:87: sparse: sparse: incorrect type in argument 6 (different base types) @@     expected unsigned int flags @@     got restricted gfp_t [assigned] [usertype] gfp @@
   fs/buffer.c:2147:87: sparse:     expected unsigned int flags
   fs/buffer.c:2147:87: sparse:     got restricted gfp_t [assigned] [usertype] gfp
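
Both reports come from sparse's __bitwise checking: gfp_t is declared as a
restricted integer type (typedef unsigned int __bitwise gfp_t; in
include/linux/types.h), so implicitly converting between it and a plain
unsigned int is reported as "different base types" in either direction. A
minimal, self-contained illustration of the rule follows; the names
takes_gfp, caller, plain and gfp are made up for the example, and the #ifdef
block simply mirrors the kernel's own annotation macros:

        /* Illustration only -- run sparse on this file (sparse defines
         * __CHECKER__ itself) to see the same class of warning as above.
         */
        #ifdef __CHECKER__
        # define __bitwise      __attribute__((bitwise))
        # define __force        __attribute__((force))
        #else
        # define __bitwise
        # define __force
        #endif

        typedef unsigned int __bitwise gfp_t;   /* as in include/linux/types.h */

        void takes_gfp(gfp_t flags) { (void)flags; }    /* hypothetical callee */

        void caller(unsigned int plain, gfp_t gfp)
        {
                takes_gfp(plain);                /* sparse: incorrect type in argument 1
                                                  * (different base types) */
                takes_gfp(gfp);                  /* clean: types match */
                takes_gfp((__force gfp_t)plain); /* clean, but __force hides the mismatch */
        }
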

vim +2010 fs/buffer.c

  1991	
  1992	int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
  1993				get_block_t *get_block, const struct iomap *iomap,
  1994				unsigned int flags)
  1995	{
  1996		unsigned from = pos & (PAGE_SIZE - 1);
  1997		unsigned to = from + len;
  1998		struct inode *inode = folio->mapping->host;
  1999		unsigned block_start, block_end;
  2000		sector_t block;
  2001		int err = 0;
  2002		unsigned blocksize, bbits;
  2003		struct buffer_head *bh, *head, *wait[2], **wait_bh=wait;
  2004	
  2005		BUG_ON(!folio_test_locked(folio));
  2006		BUG_ON(from > PAGE_SIZE);
  2007		BUG_ON(to > PAGE_SIZE);
  2008		BUG_ON(from > to);
  2009	
> 2010		head = create_page_buffers(&folio->page, inode, 0, flags);
  2011		blocksize = head->b_size;
  2012		bbits = block_size_bits(blocksize);
  2013	
  2014		block = (sector_t)folio->index << (PAGE_SHIFT - bbits);
  2015	
  2016		for(bh = head, block_start = 0; bh != head || !block_start;
  2017		    block++, block_start=block_end, bh = bh->b_this_page) {
  2018			block_end = block_start + blocksize;
  2019			if (block_end <= from || block_start >= to) {
  2020				if (folio_test_uptodate(folio)) {
  2021					if (!buffer_uptodate(bh))
  2022						set_buffer_uptodate(bh);
  2023				}
  2024				continue;
  2025			}
  2026			if (buffer_new(bh))
  2027				clear_buffer_new(bh);
  2028			if (!buffer_mapped(bh)) {
  2029				WARN_ON(bh->b_size != blocksize);
  2030				if (get_block) {
  2031					err = get_block(inode, block, bh, 1);
  2032					if (err)
  2033						break;
  2034				} else {
  2035					iomap_to_bh(inode, block, bh, iomap);
  2036				}
  2037	
  2038				if (buffer_new(bh)) {
  2039					clean_bdev_bh_alias(bh);
  2040					if (folio_test_uptodate(folio)) {
  2041						clear_buffer_new(bh);
  2042						set_buffer_uptodate(bh);
  2043						mark_buffer_dirty(bh);
  2044						continue;
  2045					}
  2046					if (block_end > to || block_start < from)
  2047						folio_zero_segments(folio,
  2048							to, block_end,
  2049							block_start, from);
  2050					continue;
  2051				}
  2052			}
  2053			if (folio_test_uptodate(folio)) {
  2054				if (!buffer_uptodate(bh))
  2055					set_buffer_uptodate(bh);
  2056				continue; 
  2057			}
  2058			if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
  2059			    !buffer_unwritten(bh) &&
  2060			     (block_start < from || block_end > to)) {
  2061				ll_rw_block(REQ_OP_READ, 0, 1, &bh);
  2062				*wait_bh++=bh;
  2063			}
  2064		}
  2065		/*
  2066		 * If we issued read requests - let them complete.
  2067		 */
  2068		while(wait_bh > wait) {
  2069			wait_on_buffer(*--wait_bh);
  2070			if (!buffer_uptodate(*wait_bh))
  2071				err = -EIO;
  2072		}
  2073		if (unlikely(err))
  2074			page_zero_new_buffers(&folio->page, from, to);
  2075		return err;
  2076	}
  2077	
  2078	int __block_write_begin(struct page *page, loff_t pos, unsigned len,
  2079			get_block_t *get_block)
  2080	{
  2081		return __block_write_begin_int(page_folio(page), pos, len, get_block,
  2082					       NULL, 0);
  2083	}
  2084	EXPORT_SYMBOL(__block_write_begin);
  2085	
  2086	static int __block_commit_write(struct inode *inode, struct page *page,
  2087			unsigned from, unsigned to)
  2088	{
  2089		unsigned block_start, block_end;
  2090		int partial = 0;
  2091		unsigned blocksize;
  2092		struct buffer_head *bh, *head;
  2093	
  2094		bh = head = page_buffers(page);
  2095		blocksize = bh->b_size;
  2096	
  2097		block_start = 0;
  2098		do {
  2099			block_end = block_start + blocksize;
  2100			if (block_end <= from || block_start >= to) {
  2101				if (!buffer_uptodate(bh))
  2102					partial = 1;
  2103			} else {
  2104				set_buffer_uptodate(bh);
  2105				mark_buffer_dirty(bh);
  2106			}
  2107			if (buffer_new(bh))
  2108				clear_buffer_new(bh);
  2109	
  2110			block_start = block_end;
  2111			bh = bh->b_this_page;
  2112		} while (bh != head);
  2113	
  2114		/*
  2115		 * If this is a partial write which happened to make all buffers
  2116		 * uptodate then we can optimize away a bogus readpage() for
  2117		 * the next read(). Here we 'discover' whether the page went
  2118		 * uptodate as a result of this (potentially partial) write.
  2119		 */
  2120		if (!partial)
  2121			SetPageUptodate(page);
  2122		return 0;
  2123	}
  2124	
  2125	/*
  2126	 * block_write_begin takes care of the basic task of block allocation and
  2127	 * bringing partial write blocks uptodate first.
  2128	 *
  2129	 * The filesystem needs to handle block truncation upon failure.
  2130	 */
  2131	int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
  2132			unsigned flags, struct page **pagep, get_block_t *get_block)
  2133	{
  2134		pgoff_t index = pos >> PAGE_SHIFT;
  2135		struct page *page;
  2136		int status;
  2137		gfp_t gfp = 0;
  2138		bool no_wait = (flags & AOP_FLAG_NOWAIT);
  2139	
  2140		if (no_wait)
  2141			gfp = GFP_ATOMIC | __GFP_NOWARN;
  2142	
  2143		page = grab_cache_page_write_begin(mapping, index, flags);
  2144		if (!page)
  2145			return -ENOMEM;
  2146	
> 2147		status = __block_write_begin_int(page_folio(page), pos, len, get_block, NULL, gfp);
  2148		if (unlikely(status)) {
  2149			unlock_page(page);
  2150			put_page(page);
  2151			page = NULL;
  2152		}
  2153	
  2154		*pagep = page;
  2155		return status;
  2156	}
  2157	EXPORT_SYMBOL(block_write_begin);
  2158	
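The two warnings are two views of the same mismatch: at line 2010,
__block_write_begin_int() forwards its plain `unsigned int flags` into
create_page_buffers(), which with this patch expects a gfp_t, while at line
2147, block_write_begin() passes its gfp_t value into that same unsigned int
parameter. One way to make both call sites type-clean is sketched below; this
is only one option, not necessarily the change intended for the next
revision, and any other callers of __block_write_begin_int() would need the
same update:

        /* Sketch: type the pass-through parameter as gfp_t end to end.  Bodies
         * are abbreviated from the excerpt above; this is not a complete patch.
         */
        int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
                                    get_block_t *get_block, const struct iomap *iomap,
                                    gfp_t flags)
        {
                ...
                /* line 2010: a gfp_t is now passed where a gfp_t is expected */
                head = create_page_buffers(&folio->page, inode, 0, flags);
                ...
        }

        int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
                              unsigned flags, struct page **pagep, get_block_t *get_block)
        {
                ...
                /* line 2147: gfp now matches the declared parameter type */
                status = __block_write_begin_int(page_folio(page), pos, len,
                                                 get_block, NULL, gfp);
                ...
        }

Sparse accepts the constant 0 for __bitwise types, so the existing
__block_write_begin_int(..., NULL, 0) call at line 2082 would still check
clean. Adding __force casts at the two warned call sites would also silence
sparse, but that hides the type information instead of fixing the signature.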

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]

Thread overview: 32+ messages
2022-02-18 19:57 [PATCH v2 00/13] Support sync buffered writes for io-uring Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 01/13] fs: Add flags parameter to __block_write_begin_int Stefan Roesch
2022-02-18 19:59   ` Matthew Wilcox
2022-02-18 20:08     ` Stefan Roesch
2022-02-18 20:13       ` Matthew Wilcox
2022-02-18 20:14         ` Stefan Roesch
2022-02-18 20:22           ` Matthew Wilcox
2022-02-18 20:25             ` Stefan Roesch
2022-02-18 20:35               ` Matthew Wilcox
2022-02-18 20:39                 ` Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 02/13] mm: Introduce do_generic_perform_write Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 03/13] mm: Add support for async buffered writes Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 04/13] fs: split off __alloc_page_buffers function Stefan Roesch
2022-02-18 20:42   ` Matthew Wilcox
2022-02-18 20:50     ` Stefan Roesch
2022-02-19  7:35   ` Christoph Hellwig
2022-02-20  4:23     ` Matthew Wilcox
2022-02-20  4:38       ` Jens Axboe
2022-02-20  4:51         ` Jens Axboe
2022-02-22  8:18       ` Christoph Hellwig
2022-02-22 23:19         ` Jens Axboe
2022-02-18 19:57 ` [PATCH v2 05/13] fs: split off __create_empty_buffers function Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 06/13] fs: Add gfp_t parameter to create_page_buffers() Stefan Roesch
2022-02-21  0:18   ` kernel test robot [this message]
2022-02-18 19:57 ` [PATCH v2 07/13] fs: add support for async buffered writes Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 08/13] io_uring: " Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 09/13] io_uring: Add tracepoint for short writes Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 10/13] sched: add new fields to task_struct Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 11/13] mm: support write throttling for async buffered writes Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 12/13] io_uring: " Stefan Roesch
2022-02-18 19:57 ` [PATCH v2 13/13] block: enable async buffered writes for block devices Stefan Roesch
2022-02-20 22:38 ` [PATCH v2 00/13] Support sync buffered writes for io-uring Dave Chinner
