Search Results

Search found 35291 results on 1412 pages for 'start menu'.

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which covers the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, block blobs are very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. This code has been well covered in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, say a 6 GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file (a sketch of the relevant settings appears a little further below), and it must be a number less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout appears to be 2-3 minutes. Hence, when we need to upload a large file, we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break through the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:

    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from byte index 512 up to (but not including) byte index 768, and returns a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. File and Blob are very useful for implementing the chunked upload.
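    Before moving to the client code, one reference point: the request-size ceiling mentioned earlier lives in web.config. Below is a hedged sketch of the relevant settings (the values are illustrative, not recommendations; maxRequestLength is in kilobytes, maxAllowedContentLength in bytes):

        <configuration>
          <system.web>
            <!-- size in KB; roughly 2 GB here -->
            <httpRuntime maxRequestLength="2097152" executionTimeout="3600" />
          </system.web>
          <system.webServer>
            <security>
              <requestFiltering>
                <!-- IIS 7+ request filtering limit, in bytes -->
                <requestLimits maxAllowedContentLength="2147483648" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    Even with these raised, the load balancer timeout still applies, which is why chunking is the real fix.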
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10 MB file with 512 KB chunks, we can read it in 512 KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB, and a button to start the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to access File, Blob and FormData on the "window" object. If any of them is "undefined", the condition is "false", which means the browser doesn't support these features and it's time to get the browser updated. FormData is another new feature we are going to use later; it can generate a temporary form for us, and we will use it to create a form with the chunk and its associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that, we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side, and on the server side, when a chunk is received, it is uploaded as a block into Blob Storage and finally committed with the index list through BlockBlob.PutBlockList. But since all these invocations are ajax calls, they are not synchronized, so we need to introduce a JavaScript library to help us coordinate the asynchronous operations, named "async.js". The async.js library and its documentation can be found on its project page; I will not explain it in depth in this post. We put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                ... ...

                // define the function array and push all chunk upload operations into it
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see, I used the File.slice method to read each chunk based on the start and end byte indexes we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js. Once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:

    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service in Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. As the client side uploads chunks by invoking ajax calls to the URL "/Home/UploadInFormData", I created a new action in the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    Then I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage each block must have an index (ID) associated with it, so that at the end we can put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that, our blob is shown in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk)
                        // along with the index, file name and index list for later use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into it
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load a blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one
                        // then invoke the commit ajax call to put blocks into the blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, our application uploads chunks of the specified size to the server through ajax calls in the background, and then commits all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request size limitation as well as the Windows Azure load balancer timeout, but it may introduce a performance problem, since we upload the chunks in sequence. In order to improve upload performance, we can modify our client-side code a bit so that the upload operations are invoked in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invoked the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we change this code to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This works well if the file is not very large and the chunk size is not very small. But for a large file this may introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // along with the index, file name and index list for later use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel with a concurrency limit of 4
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service that stores them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

    - File.slice: reads part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element that lets us pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload may overload the browser and the server if there are too many chunks. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core at 1.7 GHz, 1.75 GB memory, 100 Mbps bandwidth).
    The test cases were:

    - Chunk size: 512 KB, 1 MB, 2 MB, 4 MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100 MB.
    - Each case was tested 3 times.

    Some thoughts from the test results, but not guidance or best practice:

    - Parallel gets better performance than sequential.
    - There is no significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transferring binary provides better performance than base64.
    - In all cases, a chunk size of 1 MB - 2 MB gets better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • How to avoid HDD spin up at system start? (Ubuntu from SSD)

    - by Oliver
    Thanks to hdparm -B1 /dev/sdb my HDD no longer spins up when powered on at boot. But after the BIOS POST messages complete and Ubuntu starts, the HDD gets a signal over the SATA data cable and spins up. Leaving the data cable unplugged (but with the SATA power cable still plugged in) lets the system boot up completely from my SSD without spinning up the HDD. What causes the HDD to spin up? Maybe Grub2? Edit: nope, it doesn't seem to be Grub2 that spins up the drive. I just set up Grub to show its menu with no timer. Nothing happens until I hit the Ubuntu standard boot option; then, a few seconds later, the drive spins up.


  • Argument list too long and copying to Samba Share

    - by Copy Run Start
    Ubuntu 12.04 LTS 64 bit. I'm trying to make a scheduled task copy from a directory with thousands of files to a Samba share (while skipping duplicates). I mapped my Samba share through the GUI. The command I tried:

        cp /home/security/Brick/* ~/.gvfs/"cam on atm-bak-01.local/Brick" -n

    I found this, but I don't know how to change the syntax to what I need:

        find -maxdepth 1 -name '*.prj' -exec mv -t ../prjshp {} +

    Any hints are greatly appreciated.
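    For reference, a hedged sketch of how that find pattern adapts to this copy (untested here): find hands the file list to cp in batches via -exec ... {} +, so the shell's argument-list limit is never hit, and cp -n skips files that already exist at the destination.

        find /home/security/Brick -maxdepth 1 -type f \
          -exec cp -n -t ~/.gvfs/"cam on atm-bak-01.local/Brick" {} +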


  • How to fix LibreOffice 3.5 to start from launcher icon?

    - by Bucic
    Recently I installed LibreOffice 3.5 from debs (including the ones for desktop integration). Since then it behaves in a weird manner; basically, it won't launch from launcher shortcuts. Example:

    - from the Dash, LibreOffice launched (the launcher icon shows with a 2-second delay)
    - chose 'keep in launcher'
    - closed LO
    - clicked the LO icon on the launcher
    - it pulsated for 5 seconds, then stopped
    - LO didn't launch

    I didn't look for open processes or anything and... from the Dash, LibreOffice launched immediately without problems. It also didn't remember recently opened items, but now that got fixed somehow. Other symptoms:

    - the icon on the launcher and all menu categories appear with a delay of a few seconds
    - two Writer icons appear on the launcher

    What I did:

    - used 3.4.5, uninstalled it
    - installed 3.5 RC3; the problems started to occur
    - uninstalled RC3 'clean' per http://wiki.documentfoundation.org/Installing_LibreOffice_on_Linux#Debian_.2F_Ubuntu_3
    - installed 3.5 stable, which is identical to RC3
    - still the same issues

    I don't know if it is related, but I use the Unity test PPA:vanvugt/unity, which I needed to solve the stuttering windows bug. Any ideas?


  • How do I right-align the 'help' menu item in WPF?

    - by paxdiablo
    I have the following (simplified) section in my XAML file:

        <Menu Width="Auto" Height="20" Background="#FFA9D1F4" DockPanel.Dock="Top">
            <MenuItem Header="File">
                <MenuItem Header="Exit"/>
            </MenuItem>
            <MenuItem Header="Edit">
                <MenuItem Header="Cut"/>
            </MenuItem>
            <MenuItem Header="Help">
                <MenuItem Header="About"/>
            </MenuItem>
        </Menu>

    and it results in:

        +-------------------------------------------+
        | File  Edit  Help                          |
        +-------------------------------------------+
        |                                           |

    What do I need to do if I want the Help menu item on the right-hand side?

        +-------------------------------------------+
        | File  Edit                           Help |
        +-------------------------------------------+
        |                                           |
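    For reference, a hedged sketch of one common approach (not the only one): swap the Menu's ItemsPanel for a DockPanel; with LastChildFill left at its default of true, the last item takes the remaining width and its header can then be right-aligned.

        <Menu Width="Auto" Height="20" Background="#FFA9D1F4" DockPanel.Dock="Top">
            <Menu.ItemsPanel>
                <ItemsPanelTemplate>
                    <DockPanel HorizontalAlignment="Stretch"/>
                </ItemsPanelTemplate>
            </Menu.ItemsPanel>
            <MenuItem Header="File">
                <MenuItem Header="Exit"/>
            </MenuItem>
            <MenuItem Header="Edit">
                <MenuItem Header="Cut"/>
            </MenuItem>
            <!-- the last child fills the leftover space; align it to the right -->
            <MenuItem Header="Help" HorizontalAlignment="Right">
                <MenuItem Header="About"/>
            </MenuItem>
        </Menu>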


  • Can I turn off context menu scrolling in VS2010?

    - by Jane McDowell
    When I right-click in the middle of a code editor window in Visual Studio 2010 RTM, a context menu appears. This takes up about a fourth of the height of the screen but doesn't show all options; instead, it scrolls up and down when you move the pointer to the top or bottom of the menu. If I click near the top or bottom of the screen, the menu is normal and doesn't scroll. Can I turn this behavior off? It's stupid. You can't even scroll using the mouse wheel. EDIT: I reckon this might just be a bug - I've found a few.


  • Session timeout issue

    - by Kumar
    I have a role-based ASP.NET C# web application in which I put the menu object into a session variable, and I have a session timeout configured in web.config as below:

        <forms defaultUrl="Home.aspx" loginUrl="Login.aspx" name=".ASPXFORMSAUTH" timeout="10"></forms>

    I first logged into the system as an employee, waited until the session expired, and then, when I clicked a link in the menu, I was rightly redirected to the login page with the ReturnUrl parameter. Now when I try to log into the system as an administrator, I still see the employee menu and not the admin menu. The method which loads the menu first checks whether the menu session object is not null: if so, it loads the menu from the session; if not, it builds the menu and puts it into the session. So when the system times out, the menu session object is not being cleared. How can I fix this?
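    For reference: the forms-authentication timeout and the session-state timeout are independent, so the auth ticket can expire (forcing the redirect to Login.aspx) while the Session, and the menu cached in it, lives on. A hedged sketch of one fix, assuming a Membership-based login ("Menu" is a hypothetical key; use whatever key the application actually stores the menu object under):

        if (Membership.ValidateUser(UserName.Text, Password.Text))
        {
            // the previous user's menu may still be cached in session state,
            // so drop it before issuing the new authentication ticket
            Session.Remove("Menu");
            FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
        }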


  • Javascript performance issue

    - by Daniel
    Hello, I've built a top menu based on Superfish, but the number of displayed items in the menu is huge, and there is also a lot of jQuery on the top menu. Now to the problem: every time I load any page that has the menu, the browser (IE7) locks itself for about 1-2 seconds while the page is being loaded. I'm sure that the top menu is the issue, and I would like to improve the performance of the page (besides removing the menu or the menu items). I've used Firebug to see which calls take most of the time, and the calls are standard jQuery or Superfish. The top menu is an ascx control. Are there any good ways to let the page load first and the menu later, or any other good ideas to improve the performance?


  • error C2440: '=' : cannot convert from 'std::string []' to 'std::string []'

    - by Bach
    Now what is wrong with this code!

    Header:

        #pragma once
        #include <string>
        using namespace std;

        class Menu
        {
        public:
            Menu(string []);
            ~Menu(void);
        };

    Implementation:

        #include "Menu.h"

        string _choices[];

        Menu::Menu(string items[])
        {
            _choices = items;
        }

        Menu::~Menu(void)
        {
        }

    The compiler is complaining: error C2440: '=' : cannot convert from 'std::string []' to 'std::string []'. There are no conversions to array types, although there are conversions to references or pointers to arrays. There is no conversion! So what is it on about? Please help, I just need to pass a bloody array of strings and set it to the Menu class's _choices[] attribute. Thanks
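    For reference: raw arrays in C++ cannot be assigned or copied, which is exactly what that error is complaining about. A hedged sketch of the usual fix is to hold the strings in a std::vector instead:

        #include <string>
        #include <vector>

        class Menu
        {
        public:
            // a vector is copyable and assignable, unlike a raw array
            explicit Menu(const std::vector<std::string>& items) : _choices(items) {}
            ~Menu(void) {}
        private:
            std::vector<std::string> _choices;
        };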


  • How should I build a privacy drop-down (select) menu?

    - by animuson
    I'm trying to build something similar to Facebook's privacy selection menu, except without the 'custom' option. It will only list a few options such as 'show to all', 'show to friends only', or 'completely hidden'. Right now I'm thinking of using simple JavaScript to change a hidden input field to the new value they click on, so if they clicked on the division for 'show to friends only' it would change the corresponding field, say 'email_privacy', to 1. Is there a better way to do this or am I pretty much on track? P.S. I am not planning on using a select element, I was planning on building a custom drop-down menu using CSS since select elements are so highly non-customizable. I'm doing it this way to save space, rather than having this massive selection menu at the right which takes up a bunch of space. Note: I'm not really interested in using jQuery, that's just extra libraries and crap that I don't want to load. I can do it in JavaScript just as easily so I might as well use that.


  • With ASP .NET, how do I display a side submenu depending on what the user picked from the main menu?

    - by Ole Media
    Hello, I come from the PHP side, but I need to develop a site using ASP.NET. So far I have managed to create a master page. Now I'm dealing with menus and submenus. What I'm trying to do is: if the user picks "About Us" from the main menu, under the About Us page I want to display a submenu with the options "who we are, what we do, contact us". If the user picks "Our Products" from the main menu, then under the products page I want to display a submenu with the options "product 1, product 2, product 3". Is this possible with only one master page? I read something about the Menu control but am not sure that is what I need. So far I have found how to display a sitemap, but not specific sections of a sitemap, if this is the way to do things. Any links, sample code or points of reference will be appreciated. Thanks
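    For reference, a hedged sketch of one common way to do this with a single master page: describe both menu levels in Web.sitemap and bind a second Menu to a SiteMapDataSource that starts from the current page's node, so the submenu changes by itself as the user navigates.

        <%-- in the master page; assumes a Web.sitemap describing both menu levels --%>
        <asp:SiteMapDataSource ID="SubMenuSource" runat="server"
            StartFromCurrentNode="true" ShowStartingNode="false" />
        <asp:Menu ID="SubMenu" runat="server"
            DataSourceID="SubMenuSource" Orientation="Vertical" />

    For pages deeper than one level you may need SiteMapDataSource.StartingNodeOffset (or a custom provider) so the submenu keeps showing the section's children rather than the current leaf's.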


  • Why is a menu item disabled when using SWTBot?

    - by reprogrammer
    I've written a GUI test using SWTBot to test the Extract Method refactoring. I use editor.selectRange() to select a statement to extract into a method. But when I run the unit test, the Extract Method refactoring menu item is disabled, so SWTBot fails to invoke the refactoring. When we change org.eclipse.jdt.ui.actions.ExtractMethodAction so that the "Extract Method..." menu item is always enabled, our SWTBot test passes. But SWTBot should let us select the menu item without hacking the org.eclipse.jdt.ui plugin. The whole project containing the above unit test is available on github. I've also reported the problem on the Eclipse forum for SWTBot, but we haven't received a solution from the forum.


  • jQuery to target CSS a:hover class

    - by songdogtech
    Confused here. What's the best way to change the hover CSS of the link below, and how? toggleClass? addClass? I want to show an image on hover. I have been able to change the color of the text with $("#menu-item-1099 a:contains('rss')").css("color", "#5d5d5d"); but I can't seem to target the hover class.

        <div id="access">
            <div class="menu-header">
                <ul id="menu-main-menu" class="menu">
                    <li id="menu-item-1099" class="menu-item menu-item-type-custom menu-item-1099">
                        <a href="http://mydomain.com/feed/">rss</a></li>
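    For reference: jQuery's .css() cannot reach the stylesheet's :hover rules. A hedged sketch of the usual pattern instead defines a normal CSS class holding the hover look (including the background image) and toggles it from .hover(); the "menu-hover" class name here is a hypothetical one you would add to your stylesheet:

        // in the stylesheet, e.g.:
        // .menu-hover { color: #5d5d5d; background: url(icon.png) no-repeat left center; }
        $("#menu-item-1099 a:contains('rss')").hover(
            function () { $(this).addClass("menu-hover"); },   // mouse enters
            function () { $(this).removeClass("menu-hover"); } // mouse leaves
        );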


  • How to keep the popup menu of a JComboBox open when populating it?

    - by Stormshadow
    I have a JComboBox on my panel. One of the popup menu items is 'More', and when I click it I fetch more menu items and add them to the existing list. After this, I wish to keep the popup menu open so that the user realizes that more items have been fetched; however, the popup closes. The event handler code I am using is as follows:

        public void actionPerformed(ActionEvent e) {
            if (e.getSource() == myCombo) {
                JComboBox selectedBox = (JComboBox) e.getSource();
                String item = (String) selectedBox.getSelectedItem();
                if (item.toLowerCase().equals("more")) {
                    fetchItems(selectedBox);
                }
                selectedBox.showPopup();
                selectedBox.setPopupVisible(true);
            }
        }

        private void fetchItems(JComboBox box) {
            box.removeAllItems();
            /* code to fetch items and store them in the Set<String> items */
            for (String s : items) {
                box.addItem(s);
            }
        }

    I do not understand why the showPopup() and setPopupVisible() methods are not functioning as expected.


  • How to select a MenuItem programmatically

    - by Shaung
    I am trying to add a global shortcut to a gtk.MenuItem which has a sub menu. Here is my code:

        import pygtk, gtk
        import keybinder

        dlg = gtk.Dialog('menu test')
        dlg.set_size_request(200, 40)

        menubar = gtk.MenuBar()
        menubar.show()

        menuitem = gtk.MenuItem('foo')
        menuitem.show()
        menubar.append(menuitem)

        mitem = gtk.MenuItem('bar')
        mitem.show()
        menu = gtk.Menu()
        menu.add(mitem)
        menu.show()
        menuitem.set_submenu(menu)

        def show_menu_cb():
            menubar.select_item(menuitem)

        keybinder.bind('<Super>i', show_menu_cb)

        dlg.vbox.pack_start(menubar)
        dlg.show()
        dlg.run()

    When I press the key, the menu pops up, and I can then select items in the sub menu or press Esc to make it disappear. But after that the menuitem stays selected, and other windows never get input focus again. I have to click on the menuitem twice to get everything back to normal.


  • Dismiss Menu when Focus is lost, not on mouseout. jQuery

    - by Stacey
    I have jQuery that drops a menu down when its parent is clicked, and dismisses it when the user hovers away from the menu. I want to change the behavior so that it is only dismissed if they click somewhere else on the page, or on a different menu. Is this possible?

        jQuery.fn.dropdown = function () {
            return this.each(function () {
                $('.ui-dropdown-list > li > a').click(function () {
                    $(this).addClass("ui-dropdown-hover");
                });
                $("ul.ui-dropdown-list > li > a").click(function () {
                    $(this).parent().find("ul").show();
                    $(this).parent().hover(function () {
                    }, function () {
                        $(this).parent().find("ul").hide();
                        $(this).find('> a').removeClass("ui-dropdown-hover");
                    });
                });
            });
        };
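    For reference, a hedged sketch of the usual "click outside to dismiss" pattern: drop the hover handler and instead let a document-level click handler close any open dropdown whenever the click lands outside of it.

        $(document).click(function (e) {
            // ignore clicks that happen inside a dropdown
            if ($(e.target).closest('.ui-dropdown-list').length === 0) {
                $('.ui-dropdown-list ul').hide();
                $('.ui-dropdown-list .ui-dropdown-hover').removeClass('ui-dropdown-hover');
            }
        });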


  • Looking for a Silverlight 3 or 4 Menu control providing decent keyboard support.

    - by Geo
    I have an N-Tier application using Silverlight for the client. The customer has one particular request - I thought it was more than reasonable: all actions - including menu navigation - have to be available through the keyboard. When I tried Silverlight 4, I was surprised not to find any menu control, so I downloaded several open source and commercial menu controls. I was very disappointed: after having searched for a couple of hours, I didn't manage to find any control that provides decent keyboard support. Most controls provide no support, or some basic support, but not one control made it possible to put focus on the first item through the keyboard. You are able to use the keyboard (arrow keys), but you need to select the control with the mouse first! Not one control provided support for keyboard shortcuts. Does anyone know of any Silverlight control providing decent support?


  • Auto-Selecting Navigation works for active page but how to add class to parent menu items?

    - by jacqueschoquette
    I am following this article: http://docs.jquery.com/Tutorials:Auto-Selecting_Navigation I am able to successfully add a class to the active page's li menu item, but does anyone know how to modify or add to this script so that any parent menu li items also get the active class? I would like to avoid having to add IDs, as the menu items will be changing a lot. My menus have three levels max. Here is the jQuery script I am using:

        $(function () {
            var path = location.pathname.substring(1);
            if (path)
                $('.topLevel a[href$="' + path + '"]').attr('class', 'underline');
        });

    which works on the current page's li a. I thought I could go this route:

        $('.topLevel a[href$="' + path + '"]').attr('class', 'underline').parent().attr('class', 'underline');

    but it does not seem to work. Any ideas? A working example can be found here: whistlerwebandprint.com/home.html
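    For reference, a hedged sketch of one way to reach the ancestors: .parent() selects the <li> itself (and .attr('class', ...) overwrites any existing classes), so instead walk up with .parents('li') and put the class on each ancestor's own link with .addClass():

        $(function () {
            var path = location.pathname.substring(1);
            if (path) {
                $('.topLevel a[href$="' + path + '"]')
                    .addClass('underline')      // the current page's link
                    .parents('li')              // every ancestor menu item, up to three levels
                    .children('a')              // that item's own link, not the whole subtree
                    .addClass('underline');
            }
        });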


  • Where do you start your design - code, UI or workflow?

    - by Mmarquee
    Hi. I was discussing this at work and was wondering where people start their designs? We tend to start with designing code to solve the problem presented to us, but that is probably because all of us are (or were) programmers. I was wondering where other people and organisations start their design. Do they start with solving the problem as a coding problem, sit down and design the UI to use, or map out the data or workflow? Thanks


  • Where do you start your design - code, UI, workflow or whatever?

    - by Mmarquee
    Hi. I was discussing this at work and was wondering where people start their designs? We tend to start with designing code to solve the problem presented to us, but that is probably because all of us are (or were) programmers. I was wondering where other people and organisations start their design. Do they start with solving the problem as a coding problem, sit down and design the UI to use, or map out the data or workflow? Thanks


  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find ourselves either working for a firm which already has good procedures (TDD, automated testing, proper agile development, etc.) or working for a firm which doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like, and what the differences would be between a small project and a long-term project with many team members.


  • What are the ways to start making actual/real-world programs using Java/C++ to improve my programming skills?

    - by Umer Hassan
    The programming that we learn at university is not that vast; the exercises are really small, meant to build our logic. Everyone knows that this will not be the scenario when I get out into the market as a professional programmer. I really want to make real-life programs which would actually make an impact and be useful. Tell me, in the light of your experience, how should I start making those programs and polishing myself as a professional programmer? If there are any resources available for this, then kindly recommend them to me.


  • Should I use Ruby version 1.8.7 or 1.9.2 to start developing Rails apps?

    - by BeachRunnerJoe
    Hello. I'm diving into RoR and I see that the current version of Rails (3.0.5) works with both 1.8.7 and 1.9.2. Currently, I have both versions of Ruby installed using RVM, but I'm wondering which version I should be using as I dive into Rails and start developing apps. I suppose I'd prefer to use the newest version (1.9.2), but I don't know the technologies well enough to know pros/cons of using either. Thanks so much!


  • How to start an LXDE session automatically after tightvncserver starts, so the desktop is visible when connecting to the host via a VNC client?

    - by Oleksandr Dudchenko
    I have a system equipped with an Intel Celeron 1.1 GHz (Socket 370) processor and 384 MB of RAM on an Intel D815EGEW motherboard, which supports the wake-on-LAN function. I want to use this PC for Internet sharing on the local network. The PC is also a DHCP + DNS server as well as a router/gateway. Based on the above, I decided to install Lubuntu, as it is a lightweight system. I installed Lubuntu 10.04.4 LTS from the alternate ISO. The system has no auto login. It boots and has acceptable performance.

    The host PC has 4 onboard network adapters:

    - eth0 - ethernet controller used for local network connections; has the static address 10.0.0.1
    - eth1 - ethernet controller which is not used and not configured so far; I plan to connect a printer here later on
    - eth2 - ethernet controller used to connect to the Internet, which we plan to share with the local network
    - wlan0 - wireless controller used as an access point for the local network; has the address 10.0.0.2

    We want to control our gateway remotely, so we need to be able to power it on remotely. To allow this I did the following things:

        $ cd /etc/init.d/

    Made a new file with the command:

        $ sudo vim wakeonlanconfig

    Wrote the following lines to the newly created file, then saved and closed it:

        #!/bin/bash
        ethtool -s eth0 wol g
        ethtool -s eth2 wol g
        exit

    Made the abovementioned file executable:

        $ sudo chmod a+x wakeonlanconfig

    Then included it in the autostart sequence during boot:

        $ sudo update-rc.d -f wakeonlanconfig defaults

    After a system reboot, we are able to power the system on remotely. Then we need to be able to connect remotely to the host via SSH and VNC, so I installed the following packages:

        $ sudo apt-get update
        $ sudo apt-get install openssh-server tightvncserver

    Added the ssh daemon to the autostart sequence during boot:

        $ sudo update-rc.d -f ssh defaults

    Powered off the host PC:

        $ sudo halt

    Then I went to a remote place, sent the magic packet and powered the host up. The system started, and I connected to the host via Putty from a remote system under Windows, logged in, and ran the command to start the VNC server:

        $ tightvncserver -geometry 800x600 -depth 16 :2

    The VNC server started successfully, and I got a message like the following:

        New 'X' desktop is gateway:2

        Starting applications specified in /home/dolv/.vnc/xstartup
        Log file is /home/dolv/.vnc/gateway:2.log

    Using the UltraVNC Viewer program under Windows, I connected to the host's VNC server and entered the password, and... saw only the mouse cursor, in the form of a cross, on a grey 800x600 background - no desktop. Here is my .vnc/xstartup file:

        #!/bin/sh

        xrdb $HOME/.Xresources
        xsetroot -solid grey
        #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        #x-window-manager &
        # Fix to make GNOME work
        export XKL_XMODMAP_DISABLE=1
        /etc/X11/Xsession

    The question: what do I have to change, and where, to make an LXDE session start automatically after tightvncserver starts?
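    For reference, a hedged sketch of the usual fix on Lubuntu: replace the generic /etc/X11/Xsession line in ~/.vnc/xstartup with the LXDE session launcher (startlxde is the standard LXDE session command), then kill and restart the tightvncserver instance so the new xstartup takes effect.

        #!/bin/sh

        xrdb $HOME/.Xresources
        xsetroot -solid grey
        # launch the LXDE session inside the VNC desktop instead of the bare Xsession
        startlxde &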


  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he can trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I could build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good. But again the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practice. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code; it's a real pain to fix a bug, all the logic is in the controllers, and there is little object-oriented design. I'm having this persistent thought that I have to rewrite the whole project. However, I can't do it... they keep asking when it is going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency and the web application's performance. On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the actual code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. What do you think I should do?

