
jbd: fix race between write_metadata_buffer and get_write_access

The function journal_write_metadata_buffer() calls jbd_unlock_bh_state(bh_in)
too early; this could potentially allow another thread to call get_write_access
on the buffer head, modify the data, and dirty it, allowing the wrong data to
be written into the journal.  Fortunately, if we lose this race, the only time
this will actually cause filesystem corruption is if there is a system crash or
other unclean shutdown of the system before the next commit can take place.

Signed-off-by: dingdinghua <dingdinghua85@gmail.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jan Kara <jack@suse.cz>
dingdinghua committed 16 years ago
commit f1015c4477
1 changed file with 11 additions and 9 deletions

+ 11 - 9
fs/jbd/journal.c

@@ -287,6 +287,7 @@ int journal_write_metadata_buffer(transaction_t *transaction,
 	struct page *new_page;
 	unsigned int new_offset;
 	struct buffer_head *bh_in = jh2bh(jh_in);
+	journal_t *journal = transaction->t_journal;
 
 	/*
 	 * The buffer really shouldn't be locked: only the current committing
@@ -300,6 +301,11 @@ int journal_write_metadata_buffer(transaction_t *transaction,
 	J_ASSERT_BH(bh_in, buffer_jbddirty(bh_in));
 
 	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
+	/* keep subsequent assertions sane */
+	new_bh->b_state = 0;
+	init_buffer(new_bh, NULL, NULL);
+	atomic_set(&new_bh->b_count, 1);
+	new_jh = journal_add_journal_head(new_bh);	/* This sleeps */
 
 	/*
 	 * If a new transaction has already done a buffer copy-out, then
@@ -361,14 +367,6 @@ int journal_write_metadata_buffer(transaction_t *transaction,
 		kunmap_atomic(mapped_data, KM_USER0);
 	}
 
-	/* keep subsequent assertions sane */
-	new_bh->b_state = 0;
-	init_buffer(new_bh, NULL, NULL);
-	atomic_set(&new_bh->b_count, 1);
-	jbd_unlock_bh_state(bh_in);
-
-	new_jh = journal_add_journal_head(new_bh);	/* This sleeps */
-
 	set_bh_page(new_bh, new_page, new_offset);
 	new_jh->b_transaction = NULL;
 	new_bh->b_size = jh2bh(jh_in)->b_size;
@@ -385,7 +383,11 @@ int journal_write_metadata_buffer(transaction_t *transaction,
 	 * copying is moved to the transaction's shadow queue.
 	 */
 	JBUFFER_TRACE(jh_in, "file as BJ_Shadow");
-	journal_file_buffer(jh_in, transaction, BJ_Shadow);
+	spin_lock(&journal->j_list_lock);
+	__journal_file_buffer(jh_in, transaction, BJ_Shadow);
+	spin_unlock(&journal->j_list_lock);
+	jbd_unlock_bh_state(bh_in);
+
 	JBUFFER_TRACE(new_jh, "file as BJ_IO");
 	journal_file_buffer(new_jh, transaction, BJ_IO);