
I'm trying to implement an HTML5 Amazon S3 uploader (using the REST API), and stumbled upon the following issue: when uploading a small text file, everything works like a charm. When uploading a binary file, the file ends up larger on S3 and, obviously, corrupted. Here's what I'm doing:

// multipart upload init / finish code hidden; if you need it, I'll add it
// file is read by using a file input
var blob = file.slice(start, end);
var reader = new FileReader();
reader.readAsBinaryString(blob);

// in reader.onloadend:
var path = "/" + settings.key;

path += "?partNumber=" + chunk + "&uploadId=" + u.upload_id;
var method = "PUT";
var authorization = "AWS " + settings.access_key + ":" + signature;

var xhr = new XMLHttpRequest();
xhr.open(method, settings.host + path, true);
xhr.setRequestHeader("x-amz-date", date);
xhr.setRequestHeader("Authorization", authorization);

// application/octet-stream used
xhr.setRequestHeader("Content-Type", settings.content_type); 
xhr.send(e.target.result);

Also, I've tried to create a 10mb file with text (10 million lines of 0123456789) and that one works correctly.

If anyone has a solution to this problem, or stumbled upon it, let me know.


1 Answer


It seems StackOverflow is also good for figuring things out on your own. I solved it right after finishing writing up the question: the xhr.send() method can accept the blob returned by file.slice() directly, so there's no need for FileReader at all.
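A minimal sketch of that fix, reusing the same hypothetical `settings`, `u.upload_id`, `signature`, and `date` values from the question: drop the `FileReader` round-trip and hand the `Blob` from `file.slice()` straight to `xhr.send()`, so the browser transmits the raw bytes unmodified.

```javascript
// Upload one part of a multipart upload, sending the Blob directly.
// All names (settings, u, signature, date) mirror the question's code
// and are assumed to be defined elsewhere.
function uploadChunk(file, chunk, start, end, settings, u, signature, date) {
  var blob = file.slice(start, end); // a Blob, not a decoded string

  var path = "/" + settings.key +
             "?partNumber=" + chunk + "&uploadId=" + u.upload_id;

  var xhr = new XMLHttpRequest();
  xhr.open("PUT", settings.host + path, true);
  xhr.setRequestHeader("x-amz-date", date);
  xhr.setRequestHeader("Authorization",
                       "AWS " + settings.access_key + ":" + signature);
  xhr.setRequestHeader("Content-Type", settings.content_type);

  xhr.send(blob); // no FileReader, no string re-encoding
}
```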

I hope this helps other people who run into this problem.

Answered 2012-09-15T14:43:07.143