
Opened 12 years ago

Closed 10 years ago

#2 closed defect (fixed)

When downloading a large file (>0x7fffffff) from server, connection is aborted

Reported by: ben
Owned by: chris
Priority: critical
Milestone: 0.11
Component: box libraries
Version: 0.10
Keywords: windows unix large file BackupStore BadBackupStoreFile 4/11
Cc: dave@…

Description (last modified by ben)

PartialReadStream is used to grab a portion of the incoming data. However, it uses signed ints where it shouldn't, and it all goes horribly wrong.

To fix:

test/common/testcommon.cpp ... write a stream class which gives > 0x7fffffff bytes of zeros. Then create a PartialReadStream from it, asking for > 0x7fffffff bytes, and check that the results are as expected. With the current code, this should fail.
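For illustration only, here is a stand-alone sketch of the test idea. The ZeroStream class and the simplified Read signature below are assumptions, not the real IOStream/PartialReadStream API; the point is that a 64-bit counter survives a request over 0x7fffffff where a plain int would wrap.

// Sketch only: not the real Box Backup IOStream/PartialReadStream API.
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical zero-producing source stream (an assumption for this sketch).
class ZeroStream
{
public:
    // Returns up to NBytes of zeros; never runs out.
    int Read(void *pBuffer, int NBytes)
    {
        std::memset(pBuffer, 0, NBytes);
        return NBytes;
    }
};

int main()
{
    const int64_t bytesWanted = 0x80000000LL; // just over 0x7fffffff
    ZeroStream source;
    char buffer[4096];

    // A 64-bit counter (analogous to pos_type) keeps working; an int
    // counter would wrap negative partway through and the loop logic
    // would go wrong, which is the bug this ticket describes.
    int64_t bytesLeft = bytesWanted;
    while (bytesLeft > 0)
    {
        int toRead = static_cast<int>(
            std::min<int64_t>(bytesLeft, sizeof(buffer)));
        int got = source.Read(buffer, toRead);
        bytesLeft -= got;
    }
    assert(bytesLeft == 0);
    return 0;
}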

lib/common/PartialReadStream.*

Change

int mBytesLeft;

to

pos_type mBytesLeft;

then change all related variables (e.g. BytesToRead in the constructor) to pos_type too. (Don't change the arguments to Read, otherwise it won't implement the IOStream interface.)
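A rough sketch of the shape of that change, with simplified declarations that are assumptions rather than the actual Box Backup code:

// Sketch of the member/constructor change only; the real class wraps an
// IOStream source and has more members. pos_type stands for the project's
// 64-bit stream position type.
#include <cstdint>

typedef int64_t pos_type;   // assumption: a 64-bit position type

class PartialReadStream
{
public:
    // BytesToRead becomes pos_type so requests over 0x7fffffff survive.
    PartialReadStream(/* IOStream &rSource, */ pos_type BytesToRead)
        : mBytesLeft(BytesToRead)
    {
    }

    // Read keeps its int-based signature so the IOStream interface is
    // still implemented; only the internal accounting widens.
    int Read(void *pBuffer, int NBytes)
    {
        // Real implementation reads from the wrapped source stream and
        // decrements mBytesLeft; omitted in this sketch.
        (void)pBuffer; (void)NBytes;
        return 0;
    }

private:
    pos_type mBytesLeft;    // was: int mBytesLeft;
};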

Change History (11)

comment:1 Changed 12 years ago by ben

Component: bbackupd → box libraries
Description: modified (diff)
Milestone: 0.11
Owner: ben deleted
Priority: major → critical

comment:2 Changed 11 years ago by chris

Owner: set to chris
Status: new → assigned

Should be fixed by [1585], [1586], [1587] in chris/merge, needs to be merged to trunk.

comment:3 Changed 11 years ago by chris

(In [1598]) Fix getting files with uncertain size (over 2GB) from the store. Failure to drain the stream will leave the EOF byte in it, which breaks further communications with the store over the same connection. (refs #2, refs #3)
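As an illustration of the draining pattern described here: the Stream interface and DrainStream helper below are assumed names for this sketch, not the real IOStream API.

// Sketch only: drain whatever is left in a stream (e.g. the EOF marker)
// before discarding it, so the connection stays in a usable state.
#include <cstddef>

struct Stream
{
    virtual ~Stream() {}
    virtual bool StreamDataLeft() = 0;              // any bytes still pending?
    virtual int  Read(void *pBuffer, int NBytes) = 0;
};

// Read and throw away any remaining data, including trailing markers.
void DrainStream(Stream &rStream)
{
    char discard[4096];
    while (rStream.StreamDataLeft())
    {
        if (rStream.Read(discard, sizeof(discard)) == 0)
        {
            break;  // nothing more available right now
        }
    }
}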

comment:4 Changed 11 years ago by chris

(In [1599]) Fix bbackupd choosing an invalid (too large) block size for large files (over 2GB) which will cause compare to fail: when rBlockSizeOut == BACKUP_FILE_MAX_BLOCK_SIZE we would have proceeded around the loop one more time and doubled the block size again. (refs #2, refs #3)
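A hedged reconstruction of the kind of loop this comment describes; the constant values and the ChooseBlockSize helper below are assumptions, only rBlockSizeOut and BACKUP_FILE_MAX_BLOCK_SIZE come from the comment itself.

// Reconstruction for illustration, not the real bbackupd code.
#include <cstdint>

const int32_t BACKUP_FILE_MIN_BLOCK_SIZE = 4096;       // assumed value
const int32_t BACKUP_FILE_MAX_BLOCK_SIZE = 512 * 1024; // assumed value
const int64_t MAX_BLOCKS_PER_FILE = 4096;              // assumed value

// Pick a block size so the file needs no more than MAX_BLOCKS_PER_FILE blocks.
void ChooseBlockSize(int64_t FileSize, int32_t &rBlockSizeOut)
{
    rBlockSizeOut = BACKUP_FILE_MIN_BLOCK_SIZE;
    while (FileSize / rBlockSizeOut > MAX_BLOCKS_PER_FILE)
    {
        // The reported bug: without a check like this, once rBlockSizeOut
        // reached BACKUP_FILE_MAX_BLOCK_SIZE the loop would run once more
        // and double past the maximum, producing an invalid block size.
        if (rBlockSizeOut >= BACKUP_FILE_MAX_BLOCK_SIZE)
        {
            rBlockSizeOut = BACKUP_FILE_MAX_BLOCK_SIZE;
            break;
        }
        rBlockSizeOut *= 2;
    }
}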

comment:5 Changed 11 years ago by chris

(In [1623]) Read any remaining data from the encoded stream (such as EOF marker) before discarding it, to ensure that we don't break the protocol. (refs #2, refs #3)

comment:6 Changed 11 years ago by chris

Keywords: win32 unix large file BackupStore BadBackupStoreFile 4/11 added

comment:7 Changed 11 years ago by chris

Keywords: windows added; win32 removed

comment:8 Changed 11 years ago by chris

Cc: Dave Bamford <dave@…> added

Dave Bamford reports that this is still not fixed on Windows:

Date: Mon, 24 Sep 2007 22:23:19 +0100

Is the backup/restore of large files over 2GB fixed? According to http://bbdev.fluffy.co.uk/trac/wiki/WindowsClientReleases, Build 1662 says it's fixed, but it's still listed under known bugs for the Windows client.

I am having trouble backing up a 2.5GB file. It's reporting 00001 blocks on the server, and a "get" restores a file of 0 bytes. However, the event log reports it being backed up with no issues.

I am using release 1857 from http://bbdev.fluffy.co.uk/svn/box/chris/win32/releases/ as the client on a Windows 2003 server, backing up to a Debian Etch server running the vanilla 0.10 release.

comment:9 Changed 11 years ago by James O'Gorman

Cc: dave@… added; Dave Bamford <dave@…> removed

Correcting CC: field (email addresses only).

comment:10 Changed 10 years ago by chris

Hi Dave,

You need to upgrade the server as well to fix this problem. There are bugs on both sides. You also need to delete any old copies of the file on the server.

Cheers, Chris.

comment:11 Changed 10 years ago by chris

Resolution: fixed
Status: assigned → closed

No reports of problems in 9 months. I'm closing this ticket now. If anyone still has issues with this, please contact us or reopen the ticket.
