prevent cifs_writepages() from skipping unwritten pages
Fixes data corruption under heavy stress in which pages could be left
dirty after all open instances of an inode have been closed.

In order to write contiguous pages whenever possible, cifs_writepages()
asks pagevec_lookup_tag() for more pages than it may be able to write at
one time.  Normally, it then resets index to just past the last page written
before calling pagevec_lookup_tag() again.

If cifs_writepages() couldn't write the first page returned, however, it was
not resetting index, so the next call to pagevec_lookup_tag() skipped all of
the pages previously returned, even though cifs_writepages() had done nothing
with them.  This can result in data loss when the file descriptor is about
to be closed.

This patch ensures that index is set back to the next returned page so
that none of them are skipped.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: Jeff Layton <jlayton@redhat.com>
Cc: Shirish S Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
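
For context, the scan loop in cifs_writepages() ends up looking roughly like
the sketch below once this fix is applied.  This is a simplified rendering
based on the 2.6.28-era source rather than the verbatim kernel code; per-page
locking, error handling and the write itself are elided.

/*
 * Simplified sketch of the scan loop in cifs_writepages() with this fix
 * applied, based on the 2.6.28-era source; per-page locking, error
 * handling and the write itself are elided.
 */
while (!done && (index <= end) &&
       (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
                                      PAGECACHE_TAG_DIRTY,
                                      min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1))) {
        n_iov = 0;
        for (i = 0; i < nr_pages; i++) {
                /*
                 * Try to add pvec.pages[i] to a contiguous write.  "next"
                 * is advanced to just past the last page actually queued;
                 * pages that cannot be written right now are skipped.
                 */
        }
        if (n_iov) {
                /* a write was issued; resume just past the last page written */
                if ((wbc->nr_to_write -= n_iov) <= 0)
                        done = 1;
                index = next;
        } else
                /*
                 * Nothing was written.  pagevec_lookup_tag() already advanced
                 * index past every page it returned, so pull it back to just
                 * after the first returned page, otherwise the remaining
                 * pages would be skipped on the next lookup.
                 */
                index = pvec.pages[0]->index + 1;

        pagevec_release(&pvec);
}
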
Dave Kleikamp authored and Steve French committed Nov 18, 2008
1 parent 2c55608 commit b066a48
Showing 1 changed file with 4 additions and 1 deletion: fs/cifs/file.c
@@ -1404,7 +1404,10 @@ static int cifs_writepages(struct address_space *mapping,
                        if ((wbc->nr_to_write -= n_iov) <= 0)
                                done = 1;
                        index = next;
-               }
+               } else
+                       /* Need to re-find the pages we skipped */
+                       index = pvec.pages[0]->index + 1;
+
                pagevec_release(&pvec);
        }
        if (!scanned && !done) {
