
Autodesk Forge: File upload resumable returns always 202 even for final chunk

I am trying to upload a file using the endpoint buckets/:bucketKey/objects/:objectName/resumable. I always get response code 202, even for the final chunk. According to the documentation, I should receive a 200 response containing the urn details of the completed upload. How can I fix this? For testing I used a 17 MB file, but my main goal is to upload much larger files. Here is my code:

    byte[] bytes = uploadObjectRequest.getInputStream().readAllBytes();
    int fileSize = bytes.length;    
            
    System.out.println("File size in bytes: "+ fileSize);
    int chunkSize = 5 * 1024 * 1024 ;
    int nbChunks = (fileSize / chunkSize) + 1;
    try(ByteArrayInputStream isReader = new ByteArrayInputStream(bytes)){
        for(int i = 0; i < nbChunks; i++){
            int start = i * chunkSize;
            int end = Math.min(fileSize, (i + 1) * chunkSize) - 1;
            String range = "bytes " + start + "-" + end + "/" + fileSize;

            // length of this piece
            int contentLength = end - start + 1; 
            byte[] buffer = new byte[contentLength];
            
            int count = isReader.read(buffer, 0, contentLength);
            ByteArrayInputStream is = new ByteArrayInputStream(buffer);

            uploadObjectRequest.setContentLength(contentLength);
            uploadObjectRequest.setContentRange(range);
            String sessionId = UUID.randomUUID().toString();
            uploadObjectRequest.setSessionId(sessionId);
            uploadObjectRequest.setInputStream(is);
            System.out.println(String.format("For Chunk %s contentLength %s, contentRange %s, sessionId %s", i, contentLength, range, sessionId));
            HttpResponse res = datamanagementAPI.uploadObjsInChunk(uploadObjectRequest, authenticator);
            int status = res.getStatusLine().getStatusCode();
        }
    }

I found a Node.js sample here. Back to your code, you need to modify it like this:

    // Create the session id ONCE, outside the loop, so every chunk shares it
    String sessionId = UUID.randomUUID().toString();
    byte[] bytes = uploadObjectRequest.getInputStream().readAllBytes();
    int fileSize = bytes.length;    
        
    System.out.println("File size in bytes: "+ fileSize);
    int chunkSize = 5 * 1024 * 1024 ;

    if ( fileSize < chunkSize ) {
        // Do a standard upload since OSS will reject any chunk less than 2Mb
        // At the same time, using the chunk approach for small files has a cost.
        // So let's say 5Mb is our limit.
        ...
        return;
    }

    int nbChunks = fileSize / chunkSize; // integer division already floors
    if ((fileSize % chunkSize) != 0)
        nbChunks++;
    try(ByteArrayInputStream isReader = new ByteArrayInputStream(bytes)){
        for(int i = 0; i < nbChunks; i++){
            int start = i * chunkSize;
            int end = start + chunkSize - 1;
            if (end > fileSize - 1)
                end = fileSize - 1;
            String range = "bytes " + start + "-" + end + "/" + fileSize;

            // length of this piece
            int contentLength = end - start + 1; 
            byte[] buffer = new byte[contentLength];
            
            int count = isReader.read(buffer, 0, contentLength);
            ByteArrayInputStream is = new ByteArrayInputStream(buffer);

            uploadObjectRequest.setContentLength(contentLength);
            uploadObjectRequest.setContentRange(range);
            uploadObjectRequest.setSessionId(sessionId);
            uploadObjectRequest.setInputStream(is);
            System.out.println(String.format("For Chunk %s contentLength %s, contentRange %s, sessionId %s", i, contentLength, range, sessionId));
            HttpResponse res = datamanagementAPI.uploadObjsInChunk(uploadObjectRequest, authenticator);
            int status = res.getStatusLine().getStatusCode();
        }
    }

That said, note that the sessionId must be the same for all chunk uploads. Otherwise OSS cannot merge the chunks back together.
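The chunk/range arithmetic above can be sketched in isolation. This is my own minimal illustration (the class name `ChunkRanges` and its method are not part of any Forge SDK, and the Forge request/API calls are omitted); it only shows how the `Content-Range` header values for a resumable upload are derived:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkRanges {

    // Returns one "bytes start-end/total" header value per chunk.
    public static List<String> ranges(long fileSize, long chunkSize) {
        List<String> result = new ArrayList<>();
        long nbChunks = fileSize / chunkSize;       // integer division floors
        if (fileSize % chunkSize != 0)
            nbChunks++;                             // partial final chunk
        for (long i = 0; i < nbChunks; i++) {
            long start = i * chunkSize;
            long end = Math.min(fileSize - 1, start + chunkSize - 1);
            result.add("bytes " + start + "-" + end + "/" + fileSize);
        }
        return result;
    }

    public static void main(String[] args) {
        // A 17 MB file with 5 MB chunks splits into 4 chunks;
        // the last one covers only the remaining 2 MB.
        for (String r : ranges(17L * 1024 * 1024, 5L * 1024 * 1024))
            System.out.println(r);
    }
}
```

For a 17 MB file this prints four ranges, ending with `bytes 15728640-17825791/17825792`; the server can only answer 200 on the request whose range reaches `fileSize - 1`, and only if every chunk carried the same session id.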