I am trying to upload a file by splitting it into chunks and sending them to the server, and I have run into a problem with the code below. For now I only chunk the file without sending the chunks anywhere. The trade-off shows up with a 2 GB file: if chunkCount goes above 1000, the burst of requests can effectively DDoS my own server, while a chunkCount under 100 makes each chunk so large that the client lags badly.
So why not upload the file normally instead of in chunks? Because chunking lets me pause the upload, and if a chunk fails because of a connection issue, I can retry from that specific chunk instead of starting over.
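(A possible middle ground: fix the chunk size and derive the count from the file size, instead of hardcoding the count. This is only a minimal sketch, assuming an arbitrary 5 MB chunk size that my current code does not use:

// Sketch: derive the chunk count from a fixed chunk size.
// The 5 MB value is an assumed example, not part of my current code.
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per chunk

function getChunkCount(file) {
  return Math.ceil(file.size / CHUNK_SIZE);
}

// A 2 GB file then yields roughly 410 chunks of 5 MB each,
// instead of 1700 chunks whose size depends on the file.

)

Here is my current page: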
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>File Chunk Processor</title>
    <link
      href="https://cdn.jsdelivr.net/npm/daisyui@4.12.10/dist/full.min.css"
      rel="stylesheet"
      type="text/css"
    />
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body>
    <span class="loading loading-spinner loading-lg"></span>
    <h1>File Chunk Processor</h1>
    <input type="file" id="fileInput" />
    <button id="processButton" onclick="DoChunk()">Process File</button>
    <script>
      function DoChunk() {
        let chunkCount = 1700;
        let chunkIndex = 0;
        processNextChunk(chunkIndex, chunkCount);
      }

      function processNextChunk(chunkIndex, chunkCount) {
        if (chunkIndex < chunkCount) {
          processFileChunk('fileInput', chunkIndex, chunkCount, function () {
            setTimeout(() => {
              processNextChunk(chunkIndex + 1, chunkCount);
            }, 100); // Adjust the delay as needed
          });
        }
      }

      function processFileChunk(elementId, chunkIndex, chunkCount, callback) {
        // Get the file input element
        const inputElement = document.getElementById(elementId);

        // Check that the input element and file are available
        if (!inputElement || !inputElement.files || !inputElement.files[0]) {
          console.error('No file selected or element not found');
          return;
        }

        // Get the selected file
        const file = inputElement.files[0];

        // Calculate the size of each chunk
        const chunkSize = Math.ceil(file.size / chunkCount);
        const start = chunkIndex * chunkSize;
        const end = Math.min(start + chunkSize, file.size);

        // Create a Blob for the specific chunk
        const chunk = file.slice(start, end);

        // Create a FileReader to read the chunk
        const reader = new FileReader();

        reader.onload = function (event) {
          // Get the chunk content as a Base64 string (strip the data URL prefix)
          const base64String = event.target.result.split(',')[1];

          // Output or process the chunk as needed
          console.log(`Chunk ${chunkIndex + 1} of ${chunkCount}:`);
          console.log(base64String);

          if (callback) {
            callback();
          }
        };

        reader.onerror = function (error) {
          console.error('Error reading file chunk:', error);
          if (callback) {
            callback();
          }
        };

        // Read the chunk as a Data URL (Base64 string)
        reader.readAsDataURL(chunk);
      }
    </script>
</body>
</html>
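To turn the console logging into an actual upload that can pause and retry, something like the sketch below could run for each chunk. The /api/upload-chunk URL and the body field names are assumptions on my side and would need to match the real route and FileChunkRequest model:

// Sketch: POST one chunk with retries and exponential backoff.
// The endpoint URL and body shape are assumed, not the real API contract.
async function uploadChunk(base64String, chunkIndex, chunkCount, uploadToken) {
  const maxRetries = 3;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch('/api/upload-chunk', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          uploadToken: uploadToken, // should match request.UploadToken server-side
          chunkIndex: chunkIndex,
          chunkCount: chunkCount,
          data: base64String,
        }),
      });
      if (response.ok) return; // chunk accepted, move to the next one
      throw new Error(`Server responded with ${response.status}`);
    } catch (err) {
      if (attempt === maxRetries) throw err; // caller can resume from this index later
      // Exponential backoff: wait 1 s, 2 s, 4 s before retrying.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}

Because processNextChunk already carries chunkIndex, resuming after a failed chunk is just a matter of calling it again with the last index that succeeded.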
Additionally, here is my API action written in ASP.NET. It accepts each chunk as a string, where the string holds that chunk's Base64 value.
public async Task<IActionResult> UploadChunkAsync([FromBody] FileChunkRequest request,
    CancellationToken cancellationToken = default)
{
    var requestToken = _jwtTokenRepository.GetJwtToken();
    var loggedInUser = _jwtTokenRepository.ExtractUserDataFromToken(requestToken);

    var blobTableClient = _blobClientFactory.BlobTableClient(TableName.StashChunkDetail);
    var stashChunkDetail = blobTableClient
        .Query<StashChunkDetail>(x => x.RowKey == request.UploadToken && x.UserId == loggedInUser.id)
        .SingleOrDefault();

    ...
    [API code continues here]
    ...

        return StatusCode(StatusCodes.Status201Created, responseChunkProgress);
    }

    return BadRequest("Please Request New Upload Token");
}
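For reference, the JSON body the client sends to bind to [FromBody] FileChunkRequest would look roughly like this. Only UploadToken is confirmed by the controller code above; the other property names are guesses at the model. (ASP.NET Core's default JSON model binding is case-insensitive, so camelCase keys from the client still bind to PascalCase properties.)

// Assumed request body shape; only UploadToken appears in the controller code.
const exampleBody = {
  uploadToken: 'token-issued-by-a-previous-request',
  chunkIndex: 0,
  chunkCount: 410,
  data: 'AAAA...' // Base64 content of this chunk (truncated here)
};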