Search Results

Search found 9387 results on 376 pages for 'double byte'.


  • google collections ordering on map values

    - by chris-gr
    I would like to order a map based on the values.

        Function<Map.Entry<A, Double>, Double> getSimFunction =
            new Function<Map.Entry<A, Double>, Double>() {
                public Double apply(Map.Entry<A, Double> entry) {
                    return entry.getValue();
                }
            };
        final Ordering<Map.Entry<A, Double>> entryOrdering =
            Ordering.natural().onResultOf(getSimFunction);
        ImmutableSortedMap.orderedBy(entryOrdering).putAll(....).build();

    How can I create a new SortedMap based on the ordering results, or a SortedSet based on map.keySet()?

    Read the article

  • C# Struct No Parameterless Constructor? See what I need to accomplish

    - by Changeling
    I am using a struct to pass to an unmanaged DLL, like so:

        [StructLayout(LayoutKind.Sequential)]
        public struct valTable
        {
            public byte type;
            public byte map;
            public byte spare1;
            public byte spare2;
            public int par;
            public int min;
            public byte[] name;

            public valTable()
            {
                name = new byte[24];
            }
        }

    The code above will not compile, because VS 2005 complains that "Structs cannot contain explicit parameterless constructors". In order to pass this struct to my DLL, I have to pass an array of structs, like so:

        valTable[] val = new valTable[281];

    What I would like to do is have the constructor run when I say new, creating the byte array in each element, because the DLL is looking for that byte array of size 24 in each element. How can I accomplish this?
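    One workaround to consider (a sketch, not from the original question): let the interop marshaler create the fixed-size array by marking the field with MarshalAs, and use a static factory method in place of the forbidden parameterless constructor. The Create name below is illustrative.

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct valTable
        {
            public byte type;
            public byte map;
            public byte spare1;
            public byte spare2;
            public int par;
            public int min;

            // The marshaler lays this out as a fixed 24-byte inline array when the
            // struct crosses the native boundary, so no constructor is needed for
            // the interop itself.
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 24)]
            public byte[] name;

            // A factory method stands in for the parameterless constructor when
            // the array must also exist on the managed side.
            public static valTable Create()
            {
                valTable v = new valTable();
                v.name = new byte[24];
                return v;
            }
        }

    Element initialization then becomes an explicit loop assigning val[i] = valTable.Create(); — new valTable[281] alone can never run per-element code, because array allocation only zero-fills the memory.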

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause/resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it's a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the network load balancer of a cloud service will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:

    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and extending it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, specify the block size in KB in an input box, and click a button to start the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have the JavaScript function to upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these features and it's time for you to get your browser updated. FormData is another new feature we are going to use. It can generate a temporary form for us; we will use this interface to create a form with the chunk and its associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your brwoser is awesome, let's rock!");
            }
            else {
                alert("Oh man plz update to a modern browser before try is cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...
            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; when the server receives a chunk, it uploads it as a block into Blob Storage, and finally the blocks are committed with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which are asynchronous, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library in depth in this post. We put all the procedures we want to execute into a function array, and pass it to the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we define an array and push the function for each chunk upload into it.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...

            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                ... ...

                // define the function array and push all chunk upload operation into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see below, I use the File.slice method to read each chunk based on the start and end byte indexes we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...
            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                ... ...
                // define the function array and push all chunk upload operation into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load blob based on the start and end index for each chunks
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all functions have executed successfully we invoke another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...
            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                ... ...
                // define the function array and push all chunk upload operation into this array
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:

    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with async.js.
    - Finally, commit the chunks by invoking the backend service, which commits the blocks into a blob in Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieve the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    And then I use the Windows Azure SDK to create a blob container (in this case the container named "test") and create a blob reference with the blob name (same as the file name), and upload the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that at the end we can put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieve the blob name from the Request.Form. I also retrieve the chunk ID list — the block ID list — from the Request.Form as a string, split it into a list, and then invoke the BlockBlob.PutBlockList method. After that our blob shows up in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. Below is the full client-side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solves the ASP.NET request size limitation as well as the Windows Azure load balancer timeout, but it may introduce a performance problem, since we upload the chunks in sequence. In order to improve upload performance, we can modify our client-side code a bit to make the upload operations run in parallel. The good news is that the async.js library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...
            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                ... ...
                // define the function array and push all chunk upload operation into this array
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This works well if the file is not very large and the chunk size is not very small. But for a large file this may introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            ... ...
            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                ... ...
                // define the function array and push all chunk upload operation into this array
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

    - File.slice: reads part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the async.js JavaScript library to help us control the asynchronous calls and the parallelism limit.

    Regarding the chunk size and the parallel limit value, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1.7GHz single-core CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:

    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Some thoughts from the results, but not guidance or best practice:

    - Parallel gets better performance than serial.
    - There is no significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using AChartEngine library for graphs, not able to get value for different x-axis value

    - by kundan Chaudhary
        public static ArrayList<double[]> Value = new ArrayList<double[]>();
        private double[] x = new double[10];
        private double[] y = new double[10];
        int counter = -1;

        add.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                counter++;
                x[counter] = Double.parseDouble(income_1.getText().toString());
                y[counter] = Double.parseDouble(income_2.getText().toString());
                income_1.setText("");
                income_2.setText("");
            }
        });

        publish.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                if (Value != null) {
                    Value.add(x);
                    Value.add(y);
                    Intent intent = salesStackedBarChart.execute(BarChart.this, Value, counter);
                    startActivity(intent);
                }
            }
        });

        // and in the SalesStackedBarChart.java class
        public Intent execute(Context context, ArrayList<double[]> values, int counter) {
            int count = counter + 1;
            double fcount = counter + 1.5;
            String[] titles = new String[] { "Android", "iPhone" };
            int[] colors = new int[] { Color.GREEN, Color.CYAN };
            XYMultipleSeriesRenderer renderer = buildBarRenderer(colors);
            setChartSettings(renderer, "Yearly revenue in the last " + count + " years", "Years",
                    "revenue in $", 0.5, fcount, 0, 24000, Color.GRAY, Color.LTGRAY);
            renderer.setXLabels(count);
            renderer.setYLabels(10);
            renderer.setDisplayChartValues(true);
            renderer.setXLabelsAlign(Align.LEFT);
            renderer.setYLabelsAlign(Align.LEFT);
            renderer.setZoomRate(1.1f);
            renderer.setBarSpacing(0.5);
            return ChartFactory.getBarChartIntent(context, buildBarDataset(titles, values), renderer, Type.DEFAULT);
        }

        // in the AbstractDemoChart.java class
        protected XYMultipleSeriesDataset buildBarDataset(String[] titles, List<double[]> values) {
            XYMultipleSeriesDataset dataset = new XYMultipleSeriesDataset();
            int length = titles.length;
            for (int i = 0; i < length; i++) {
                CategorySeries series = new CategorySeries(titles[i]);
                double[] v = values.get(i);
                int seriesLength = v.length;
                for (int k = 0; k < seriesLength; k++) {
                    series.add(v[k]);
                }
                dataset.addSeries(series.toXYSeries());
            }
            return dataset;
        }

    When I run this project I get a graph with the x-axis values 1, 2, 3, 4, 5, ... but I want it to show the values 2005, 2006, 2007, 2008, ... I changed some code, like:

        setChartSettings(renderer, "Yearly revenue in the last " + count + " years", "Years",
                "revenue in $", 2005, 2010, 0, 24000, Color.GRAY, Color.LTGRAY);

    and when I run the project the x-axis shows 2005, 2006, 2007, ... but the bars lose their values: the values at all x positions are null. How can I make this work?

    Read the article

  • android get duration from maps.google.com directions

    - by urobo
    At the moment I am using this code to query Google Maps for directions from one address to another, and then I simply draw the route on a MapView from its GeometryCollection. But this isn't enough: I also need to extract the total expected duration from the KML. Can someone give me a little sample code to help? Thanks.

        StringBuilder urlString = new StringBuilder();
        urlString.append("http://maps.google.com/maps?f=d&hl=en");
        urlString.append("&saddr=");  // from
        urlString.append(Double.toString((double) src.getLatitudeE6() / 1.0E6));
        urlString.append(",");
        urlString.append(Double.toString((double) src.getLongitudeE6() / 1.0E6));
        urlString.append("&daddr=");  // to
        urlString.append(Double.toString((double) dest.getLatitudeE6() / 1.0E6));
        urlString.append(",");
        urlString.append(Double.toString((double) dest.getLongitudeE6() / 1.0E6));
        urlString.append("&ie=UTF8&0&om=0&output=kml");
        //Log.d("xxx", "URL=" + urlString.toString());

        // get the kml (XML) doc and parse it to get the coordinates (direction route)
        Document doc = null;
        HttpURLConnection urlConnection = null;
        URL url = null;
        try {
            url = new URL(urlString.toString());
            urlConnection = (HttpURLConnection) url.openConnection();
            urlConnection.setRequestMethod("GET");
            urlConnection.setDoOutput(true);
            urlConnection.setDoInput(true);
            urlConnection.connect();

            dbf = DocumentBuilderFactory.newInstance();
            db = dbf.newDocumentBuilder();
            doc = db.parse(urlConnection.getInputStream());

            if (doc.getElementsByTagName("GeometryCollection").getLength() > 0) {
                //String path = doc.getElementsByTagName("GeometryCollection").item(0).getFirstChild().getFirstChild().getNodeName();
                String path = doc.getElementsByTagName("GeometryCollection").item(0)
                        .getFirstChild().getFirstChild().getFirstChild().getNodeValue();
                //Log.d("xxx", "path=" + path);
                String[] pairs = path.split(" ");
                String[] lngLat = pairs[0].split(",");
                // lngLat[0]=longitude lngLat[1]=latitude lngLat[2]=height

                // src
                GeoPoint startGP = new GeoPoint(
                        (int) (Double.parseDouble(lngLat[1]) * 1E6),
                        (int) (Double.parseDouble(lngLat[0]) * 1E6));
                mMapView01.getOverlays().add(new MyOverLay(startGP, startGP, 1));
                GeoPoint gp1;
                GeoPoint gp2 = startGP;
                for (int i = 1; i < pairs.length; i++) { // the last one would crash
                    lngLat = pairs[i].split(",");
                    gp1 = gp2;
                    // watch out! For GeoPoint, first: latitude, second: longitude
                    gp2 = new GeoPoint(
                            (int) (Double.parseDouble(lngLat[1]) * 1E6),
                            (int) (Double.parseDouble(lngLat[0]) * 1E6));
                    mMapView01.getOverlays().add(new MyOverLay(gp1, gp2, 2, color));
                    //Log.d("xxx", "pair:" + pairs[i]);
                }
                mMapView01.getOverlays().add(new MyOverLay(dest, dest, 3)); // use the default color
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        } catch (ParserConfigurationException e) {
            e.printStackTrace();
        }
        }

    Read the article

  • Is there a potential for resource leak/double free here?

    - by nhed
    The following sample (not compiled, so I won't vouch for the syntax) pulls two resources from resource pools (not allocated with new), then "binds" them together with MyClass for the duration of a certain transaction. The transaction, implemented here by myFunc, attempts to protect against leakage of these resources by tracking their "ownership". The local resource pointers are cleared once it is clear that instantiation of MyClass was successful. The local catch, as well as the destructor ~MyClass, returns the resources to their pool (double-frees are protected against by the above-mentioned clearing of the local pointers). Instantiation of MyClass can fail and result in an exception at two steps: (1) the actual memory allocation, or (2) the constructor body itself. I do not have a problem with #1, but in the case of #2, the exception may be thrown AFTER m_resA and m_resB were set, causing both ~MyClass and the cleanup code of myFunc to assume responsibility for returning these resources to their pools. Is this a reasonable concern?

    Options I have considered, but didn't like:

    - Smart pointers (like boost's shared_ptr). I didn't see how to apply them to a resource pool (aside from wrapping it in yet another instance).
    - Allowing the double-free to occur at this level but protecting against it in the resource pools.
    - Trying to use the exception type - trying to deduce that if bad_alloc was caught then MyClass did not take ownership. This would require a try-catch in the constructor to make sure that any allocation failures in ABC() ...more code here... won't be confused with failures to allocate MyClass itself.

    Is there a clean, simple solution that I have overlooked?

        class SomeExtResourceA;
        class SomeExtResourceB;

        class MyClass {
        private:
            // These resources come out of a resource pool not allocated with "new" for each use by MyClass
            SomeResourceA* m_resA;
            SomeResourceB* m_resB;

        public:
            MyClass(SomeResourceA* resA, SomeResourceB* resB):
                m_resA(resA), m_resB(resB) {
                ABC(); // ... more code here, could throw exceptions
            }

            ~MyClass(){
                if(m_resA){
                    m_resA->Release();
                }
                if(m_resB){
                    m_resB->Release();
                }
            }
        };

        void myFunc(void)
        {
            SomeResourceA* resA = NULL;
            SomeResourceB* resB = NULL;
            MyClass* pMyInst = NULL;

            try {
                resA = g_pPoolA->Allocate();
                resB = g_pPoolB->Allocate();
                pMyInst = new MyClass(resA,resB);
                resA=NULL; // ownership succesfully transfered to pMyInst
                resB=NULL; // ownership succesfully transfered to pMyInst

                // Do some work with pMyInst;
                ...;

                delete pMyInst;
            } catch (...) {
                // cleanup
                // need to check if resA or resB were allocated prior
                // to construction of pMyInst.
                if(resA) resA->Release();
                if(resB) resB->Release();
                delete pMyInst;
                throw; // rethrow caught exception
            }
        }

    Read the article

  • Best way to blend colors in tile lighting? (XNA)

    - by Lemoncreme
    I have made a decent, fast, recursive, colored tile-lighting system in my game. It does everything I need except one thing: different colors are not blended at all. Here is my color blend code:

        return (new Color(
            (byte)MathHelper.Clamp(color.R / factor, 0, 255),
            (byte)MathHelper.Clamp(color.G / factor, 0, 255),
            (byte)MathHelper.Clamp(color.B / factor, 0, 255)));

    As you can see, it does not take the color already in place into account. color is the color of the previous light, which is weakened by the code above by factor. If I wanted to blend with the color already in place, I would use the variable blend. Here is an example of a blend that I tried, using blend, that failed:

        return (new Color(
            (byte)MathHelper.Clamp(((color.R + blend.R) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.G + blend.G) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.B + blend.B) / 2) / factor, 0, 255)));

    This color blend produces inaccurate and strange results. I need a blend that is accurate, like the first example, that blends the two colors together. What is the best way to do this?
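    For what it's worth, one common approach in tile lighting is a per-channel maximum: each tile keeps the brighter of the light already on it and the incoming attenuated light. A sketch under that assumption (not a definitive answer):

        // Per-channel "brightest wins" blend: compare the existing light (blend)
        // against the incoming light attenuated by factor, channel by channel.
        return new Color(
            (byte)MathHelper.Clamp(Math.Max(blend.R, (float)(color.R / factor)), 0f, 255f),
            (byte)MathHelper.Clamp(Math.Max(blend.G, (float)(color.G / factor)), 0f, 255f),
            (byte)MathHelper.Clamp(Math.Max(blend.B, (float)(color.B / factor)), 0f, 255f));

    Unlike averaging, taking the maximum never darkens a channel that already carries light, which is why averaged blends tend to look muddy and inaccurate.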

    Read the article

  • Java - Using Linear Coordinates to Check Against AI [closed]

    - by Oliver Jones
    I'm working on some artificial intelligence, and I want my AI not to run into given coordinates, as these are references to a wall/boundary. To begin with, every time my AI hits a wall, it makes a reference to that position (x, y). When it hits the same wall three times, it uses linear checkpoints to 'imagine' there is a wall going through these coordinates. I now want to prevent my AI from going into that wall again. To detect whether my coordinates make a straight line, I use:

        private boolean collinear(double x1, double y1, double x2, double y2, double x3, double y3) {
            return (y1 - y2) * (x1 - x3) == (y1 - y3) * (x1 - x2);
        }

    This returns true if the given points are collinear. So my problems are:

    - How do I determine whether my robot is approaching the wall from its current trajectory?
    - Instead of Java 'imagining' there is a line from point 1 to point 3, how can it 'imagine' a line through these collinear coordinates all the way to infinity (or close to it)?

    I have a feeling this is going to require some confusing trigonometry.

    (REPOST: http://stackoverflow.com/questions/13542592/java-using-linear-coordinates-to-check-against-ai)

    Read the article

  • How should I group these variables?

    - by stariz77
    I have a shape that will be defined by:

        char s_type;
        char color;
        double height;
        double width;

    These variables are scanned in from a request string sent to my server and passed into my printing function, which then prints out the shape. Currently they are just local variables sitting in my main(); however, I was wondering if there would be any advantage to creating a struct containing these variables and then passing the struct to my printing function. How else might I improve my program's structure/style, and would passing a struct by reference have any kind of performance benefit if there were many requests and therefore many printing function calls?

        printer(char st, char cr, double ht, double wd);

        int main()
        {
            // Other main functionality.

            char s_type;
            char color;
            double height;
            double width;

            sscanf(serv_req, "GET /%c/%c/%lf/%lf", &s_type, &color, &height, &width);
            printer(s_type, color, height, width);

            // Other main functionality.
            return 0;
        }

    It seemed "neater" to have a struct or something that didn't leave me with declarations in the middle of everything else going on in main. I'm interested in structure/style as well as performance.

    EDIT: I didn't mean to put the printer declaration inside main.

    Read the article

  • Basic questions while making a toy calculator

    - by Jwan622
    I am making a calculator to better understand how to program, and I have a question about the following lines of code. I wanted to make my equals sign with this C# code:

        private void btnEquals_Click(object sender, EventArgs e)
        {
            if (plusButtonClicked == true)
            {
                total2 = total1 + Convert.ToDouble(txtDisplay.Text); //double.Parse(txtDisplay.Text);
            }
            else if (minusButtonClicked == true)
            {
                total2 = total1 - double.Parse(txtDisplay.Text);
            }
            txtDisplay.Text = total2.ToString();
            total1 = 0;
        }

    However, my friend said this way of writing the code was superior, with changes in the minus sign:

        private void btnEquals_Click(object sender, EventArgs e)
        {
            if (plusButtonClicked == true)
            {
                total2 = total1 + Convert.ToDouble(txtDisplay.Text); //double.Parse(txtDisplay.Text);
            }
            else if (minusButtonClicked == true)
            {
                double d1;
                if (double.TryParse(txtDisplay.Text, out d1))
                {
                    total2 = total1 - d1;
                }
            }
            txtDisplay.Text = total2.ToString();
            total1 = 0;
        }

    My questions:

    1) What does the "out d1" section of this minus-sign code mean?

    2) My assumption here is that the "TryParse" code results in fewer system crashes? If I just use "double.Parse" and I don't put anything in the textbox, the program will sometimes crash, right?
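    For reference, a minimal illustration of out and TryParse (the names here are illustrative):

        double value;                                 // TryParse assigns this via 'out'
        if (double.TryParse("3.14", out value))       // returns false instead of throwing
        {
            Console.WriteLine(value);                 // 3.14
        }
        else
        {
            Console.WriteLine("not a valid number");  // e.g. for "" or "abc"
        }

    The out keyword means the method writes its result into the caller's variable. double.Parse("") throws a FormatException, which is exactly the crash TryParse avoids by reporting failure through its return value instead.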

    Read the article

  • Need help with fixing Genetic Algorithm that's not evolving correctly

    - by EnderMB
    I am working on a maze-solving application that uses a Genetic Algorithm to evolve a set of genes (within Individuals) in a Population of Individuals that steer an Agent through a maze. The majority of the code appears to be working fine, but when the code runs it is not correctly selecting the best Individuals for the new Population. When I run the application it outputs the following:

        Total Fitness: 380.0 - Best Fitness: 11.0
        Total Fitness: 406.0 - Best Fitness: 15.0
        Total Fitness: 344.0 - Best Fitness: 12.0
        Total Fitness: 373.0 - Best Fitness: 11.0
        Total Fitness: 415.0 - Best Fitness: 12.0
        Total Fitness: 359.0 - Best Fitness: 11.0
        Total Fitness: 436.0 - Best Fitness: 13.0
        Total Fitness: 390.0 - Best Fitness: 12.0
        Total Fitness: 379.0 - Best Fitness: 15.0
        Total Fitness: 370.0 - Best Fitness: 11.0
        Total Fitness: 361.0 - Best Fitness: 11.0
        Total Fitness: 413.0 - Best Fitness: 16.0

    As you can clearly see, the fitnesses are not improving, and neither are the best fitnesses. The main code responsible for this problem is here, and I believe the problem to be within the main method, most likely where the selection methods are called:

        package GeneticAlgorithm;

        import GeneticAlgorithm.Individual.Action;
        import Robot.Robot.Direction;
        import Maze.Maze;
        import Robot.Robot;
        import java.util.ArrayList;
        import java.util.Random;

        public class RunGA {

            protected static ArrayList tmp1, tmp2 = new ArrayList();
            // Implementation of Elitism
            protected static int ELITISM_K = 5;
            // Population size
            protected static int POPULATION_SIZE = 50 + ELITISM_K;
            // Max number of Iterations
            protected static int MAX_ITERATIONS = 200;
            // Probability of Mutation
            protected static double MUTATION_PROB = 0.05;
            // Probability of Crossover
            protected static double CROSSOVER_PROB = 0.7;
            // Instantiate Random object
            private static Random rand = new Random();
            // Instantiate Population of Individuals
            private Individual[] startPopulation;
            // Total Fitness of Population
            private double totalFitness;

            Robot robot = new Robot();
            Maze maze;

            public void setElitism(int result) { ELITISM_K = result; }

            public void setPopSize(int result) { POPULATION_SIZE = result + ELITISM_K; }

            public void setMaxIt(int result) { MAX_ITERATIONS = result; }

            public void setMutProb(double result) { MUTATION_PROB = result; }

            public void setCrossoverProb(double result) { CROSSOVER_PROB = result; }

            /**
             * Constructor for Population
             */
            public RunGA(Maze maze) {
                // Create a population of population plus elitism
                startPopulation = new Individual[POPULATION_SIZE];
                // For every individual in population fill with x genes from 0 to 1
                for (int i = 0; i < POPULATION_SIZE; i++) {
                    startPopulation[i] = new Individual();
                    startPopulation[i].randGenes();
                }
                // Evaluate the current population's fitness
                this.evaluate(maze, startPopulation);
            }

            /**
             * Set Population
             * @param newPop
             */
            public void setPopulation(Individual[] newPop) {
                System.arraycopy(newPop, 0, this.startPopulation, 0, POPULATION_SIZE);
            }

            /**
             * Get Population
             * @return
             */
            public Individual[] getPopulation() {
                return this.startPopulation;
            }

            /**
             * Evaluate fitness
             * @return
             */
            public double evaluate(Maze maze, Individual[] newPop) {
                this.totalFitness = 0.0;
                ArrayList<Double> fitnesses = new ArrayList<Double>();
                for (int i = 0; i < POPULATION_SIZE; i++) {
                    maze = new Maze(8, 8);
                    maze.fillMaze();
                    fitnesses.add(startPopulation[i].evaluate(maze, newPop));
                    //this.totalFitness += startPopulation[i].evaluate(maze, newPop);
                }
                //totalFitness = (Math.round(totalFitness / POPULATION_SIZE));
                StringBuilder sb = new StringBuilder();
                for (Double tmp : fitnesses) {
                    sb.append(tmp + ", ");
                    totalFitness += tmp;
                }
                // Progress of each Individual
                //System.out.println(sb.toString());
                return this.totalFitness;
            }

            /**
             * Roulette Wheel Selection
             * @return
             */
            public Individual rouletteWheelSelection() {
                // Calculate sum of all chromosome fitnesses in population - sum S.
                double randNum = rand.nextDouble() * this.totalFitness;
                int i;
                for (i = 0; i < POPULATION_SIZE && randNum > 0; ++i) {
                    randNum -= startPopulation[i].getFitnessValue();
                }
                return startPopulation[i - 1];
            }

            /**
             * Tournament Selection
             * @return
             */
            public Individual tournamentSelection() {
                double randNum = rand.nextDouble() * this.totalFitness;
                // Get random number of population (add 1 to stop nullpointerexception)
                int k = rand.nextInt(POPULATION_SIZE) + 1;
                int i;
                for (i = 1; i < POPULATION_SIZE && i < k && randNum > 0; ++i) {
                    randNum -= startPopulation[i].getFitnessValue();
                }
                return startPopulation[i - 1];
            }

            /**
             * Finds the best individual
             * @return
             */
            public Individual findBestIndividual() {
                int idxMax = 0;
                double currentMax = 0.0;
                double currentMin = 1.0;
                double currentVal;
                for (int idx = 0; idx < POPULATION_SIZE; ++idx) {
                    currentVal = startPopulation[idx].getFitnessValue();
                    if (currentMax < currentMin) {
                        currentMax = currentMin = currentVal;
                        idxMax = idx;
                    }
                    if (currentVal > currentMax) {
                        currentMax = currentVal;
                        idxMax = idx;
                    }
                }
                // Double check to see if this has the right one
                //System.out.println(startPopulation[idxMax].getFitnessValue());
                // Maximisation
                return startPopulation[idxMax];
            }

            /**
             * One Point Crossover
             * @param firstPerson
             * @param secondPerson
             * @return
             */
            public static Individual[] onePointCrossover(Individual firstPerson, Individual secondPerson) {
                Individual[] newPerson = new Individual[2];
                newPerson[0] = new Individual();
                newPerson[1] = new Individual();
                int size = Individual.SIZE;
                int randPoint = rand.nextInt(size);
                int i;
                for (i = 0; i < randPoint; ++i) {
                    newPerson[0].setGene(i, firstPerson.getGene(i));
                    newPerson[1].setGene(i, secondPerson.getGene(i));
                }
                for (; i < Individual.SIZE; ++i) {
                    newPerson[0].setGene(i, secondPerson.getGene(i));
                    newPerson[1].setGene(i, firstPerson.getGene(i));
                }
                return newPerson;
            }

            /**
             * Uniform Crossover
             * @param firstPerson
             * @param secondPerson
             * @return
             */
            public static Individual[] uniformCrossover(Individual firstPerson, Individual secondPerson) {
                Individual[] newPerson = new Individual[2];
                newPerson[0] = new Individual();
                newPerson[1] = new Individual();
                for (int i = 0; i < Individual.SIZE; ++i) {
                    double r = rand.nextDouble();
                    if (r > 0.5) {
                        newPerson[0].setGene(i, firstPerson.getGene(i));
                        newPerson[1].setGene(i, secondPerson.getGene(i));
                    } else {
                        newPerson[0].setGene(i, secondPerson.getGene(i));
                        newPerson[1].setGene(i, firstPerson.getGene(i));
                    }
                }
                return newPerson;
            }

            public double getTotalFitness() {
                return totalFitness;
            }

            public static void main(String[] args) {
                // Initialise Environment
                Maze maze = new Maze(8, 8);
                maze.fillMaze();

                // Instantiate Population
                //Population pop = new Population();
                RunGA pop = new RunGA(maze);

                // Instantiate Individuals for Population
                Individual[] newPop = new Individual[POPULATION_SIZE];
                // Instantiate two individuals to use for selection
                Individual[] people = new Individual[2];

                Action action = null;
                Direction direction = null;
                String result = "";
                /*result += "Total Fitness: " + pop.getTotalFitness() +
                        " - Best Fitness: " + pop.findBestIndividual().getFitnessValue();*/

                // Print Current Population
                System.out.println("Total Fitness: " + pop.getTotalFitness() +
                        " - Best Fitness: " + pop.findBestIndividual().getFitnessValue());

                // Instantiate counter for selection
                int count;

                for (int i = 0; i < MAX_ITERATIONS; i++) {
                    count = 0;

                    // Elitism
                    for (int j = 0; j < ELITISM_K; ++j) {
                        // This one has the best fitness
                        newPop[count] = pop.findBestIndividual();
                        count++;
                    }

                    // Build New Population (Population size = Steps (28))
                    while (count < POPULATION_SIZE) {
                        // Roulette Wheel Selection
                        people[0] = pop.rouletteWheelSelection();
                        people[1] = pop.rouletteWheelSelection();

                        // Tournament Selection
                        //people[0] = pop.tournamentSelection();
                        //people[1] = pop.tournamentSelection();

                        // Crossover
                        if (rand.nextDouble() < CROSSOVER_PROB) {
                            // One Point Crossover
                            //people = onePointCrossover(people[0], people[1]);
                            // Uniform Crossover
                            people = uniformCrossover(people[0], people[1]);
                        }

                        // Mutation
                        if (rand.nextDouble() < MUTATION_PROB) {
                            people[0].mutate();
                        }
                        if (rand.nextDouble() < MUTATION_PROB) {
                            people[1].mutate();
                        }

                        // Add to New Population
                        newPop[count] = people[0];
                        newPop[count + 1] = people[1];
                        count += 2;
                    }

                    // Make new population the current population
                    pop.setPopulation(newPop);
                    // Re-evaluate the current population
                    //pop.evaluate();
                    pop.evaluate(maze, newPop);

                    // Print results to screen
                    System.out.println("Total Fitness: " + pop.totalFitness +
                            " - Best Fitness: " + pop.findBestIndividual().getFitnessValue());
                    //result += "\nTotal Fitness: " + pop.totalFitness + " - Best Fitness: " + pop.findBestIndividual().getFitnessValue();
                }

                // Best Individual
                Individual bestIndiv = pop.findBestIndividual();
                //return result;
            }
        }

    I have uploaded the full project to RapidShare if you require the extra files, although if needed I can add their code here. This problem has been depressing me for days now, and if you guys can help me I will forever be in your debt.

    Read the article

  • Read and write from a byte stream if the endianness of the data is different from that of the current machine

    - by Sam Holder
    I have a stream of bytes which contains a flag in the header that identifies the endianness of the data. I want to read doubles from the stream, which will presumably need to be done differently if the endianness of the data differs from that of the machine. I am currently using a BinaryReader and calling ReadDouble to read the data from the stream, but if the endianness flag indicates that the data stream has a different endianness than the machine architecture, then presumably this will not work. How should this be handled? Should I check the endianness of my data against that of the current machine, and then, when I want to read a double, instead read the raw bytes into a byte array and do Array.Reverse to reverse the data before using BitConverter.ToDouble() with the reversed data and a zero offset? I could just test this, but I do not have a source of data in both endiannesses, so I am a bit concerned that test data I create for the parsing might differ from what 'real' data looks like.
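    A sketch of that approach (assuming the header flag has already been read into a bool such as dataIsLittleEndian):

        static double ReadDoubleRespectingEndianness(BinaryReader reader, bool dataIsLittleEndian)
        {
            byte[] bytes = reader.ReadBytes(8);            // raw bytes of one double
            if (dataIsLittleEndian != BitConverter.IsLittleEndian)
            {
                Array.Reverse(bytes);                      // flip to the machine's byte order
            }
            return BitConverter.ToDouble(bytes, 0);
        }

    For test data, writing a known value with BinaryWriter and reversing the resulting 8 bytes by hand yields a sample of the opposite endianness without needing a second machine.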

    Read the article

  • How to pass a string containing both single and double quotes as a parameter to XSLT in PHP?

    - by Boaz
    Hi, I have a simple PHP-based XSLT transform code that looks like this:

        $xsl = new XSLTProcessor();
        $xsl->registerPHPFunctions();
        $xsl->setParameter("", "searchterms", $searchterms);
        $xsl->importStylesheet($xslDoc);
        echo $xsl->transformToXML($doc);

    The code passes the variable $searchterms, which contains a string, as a parameter to the XSLT stylesheet, which in turn uses it as text:

        <title>search feed for <xsl:value-of select="$searchterms"/></title>

    This works fine until you try to pass a string with a mix of quotes in it, say:

        $searchterms = '"some"' . " text's quotes are mixed.";

    At that point the XSLT processor screams:

        Cannot create XPath expression (string contains both quote and double-quotes)

    What is the correct way to safely pass arbitrary strings as input to XSLT? Note that these strings will be used as a text value in the resulting XML and not as an XPath parameter. Thanks, Boaz

    Read the article

  • DataGridView SelectedRows property does not get data if I double-click a row?

    - by programmerist
    When I double-click any row of the DataGridView, I do not get the clicked row's data:

        private void gwStudies_CellDoubleClick(object sender, DataGridViewCellEventArgs e)
        {
            GoruntuyuAc();
        }

        private void GoruntuyuAc()
        {
            olduid = "";
            DataRowView ro = (gwStudies.SelectedRows[0].DataBoundItem as DataRowView);
            string uid = "";
            uid = ro["StudyInstanceUid"].ToString();
            string tarih = "";
            DateTime t1 = Convert.ToDateTime(ro["StudyDate"]);
            //........
            //............
        }

    The error occurs on (gwStudies.SelectedRows[0].DataBoundItem as DataRowView);
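    One likely fix (a sketch, assuming the cause is that no row is actually selected at double-click time, e.g. when the grid's SelectionMode is not FullRowSelect): use the row index carried by the event instead of SelectedRows:

        private void gwStudies_CellDoubleClick(object sender, DataGridViewCellEventArgs e)
        {
            if (e.RowIndex < 0) return;   // ignore double-clicks on the column headers

            // The event identifies the clicked row regardless of selection state.
            DataRowView ro = gwStudies.Rows[e.RowIndex].DataBoundItem as DataRowView;
            if (ro == null) return;

            string uid = ro["StudyInstanceUid"].ToString();
            DateTime t1 = Convert.ToDateTime(ro["StudyDate"]);
        }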

    Read the article

  • What does !! (double exclamation point) mean?

    - by molecules
    In the code below, from a blog post by Alias, I noticed the use of the double exclamation point !!. I was wondering what it means and where I could go in the future to find explanations for Perl syntax like this. (Yes, I already searched for '!!' in perlsyn.)

        package Foo;
        use vars qw{$DEBUG};
        BEGIN {
            $DEBUG = 0 unless defined $DEBUG;
        }
        use constant DEBUG => !! $DEBUG;

        sub foo {
            debug('In sub foo') if DEBUG;
            ...
        }

    UPDATE: Thanks for all of your answers. Here is something else I just found that is related: The List Squash Operator x!!

    Read the article

  • java: how to get a string representation of a compressed byte array?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that take the name of the resource and its content as a String. (like data.txt + "hello world"). The repository is moking a filesystem but is not, so I can not use File directly. I want to be able to do the following: client send to server a file 'data.txt' server compress 'data.txt' into a compressed file 'data.zip' server send a string representation of data.zip to the repository repository store data.zip client download from repository data.zip and his able to open it with its favorite zip tool The problem arise at step 3 when I try to get a string representation of my compressed file. Here is a sample class, using the zip*stream and that emulate the repository showcasing my problem. The created zip file is working, but after its 'serialization' it's get corrupted. (the sample class use jakarta commons.io ) Many thanks for your help. package zip; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.util.zip.ZipEntry; import java.util.zip.ZipInputStream; import java.util.zip.ZipOutputStream; import org.apache.commons.io.FileUtils; /** * Date: May 19, 2010 - 6:13:07 PM * * @author Guillaume AME. */ public class ZipMe { public static void addOrUpdate(File zipFile, File ... files) throws IOException { File tempFile = File.createTempFile(zipFile.getName(), null); // delete it, otherwise you cannot rename your existing zip to it. tempFile.delete(); boolean renameOk = zipFile.renameTo(tempFile); if (!renameOk) { throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath()); } byte[] buf = new byte[1024]; ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile)); ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile)); ZipEntry entry = zin.getNextEntry(); while (entry != null) { String name = entry.getName(); boolean notInFiles = true; for (File f : files) { if (f.getName().equals(name)) { notInFiles = false; break; } } if (notInFiles) { // Add ZIP entry to output stream. out.putNextEntry(new ZipEntry(name)); // Transfer bytes from the ZIP file to the output file int len; while ((len = zin.read(buf)) > 0) { out.write(buf, 0, len); } } entry = zin.getNextEntry(); } // Close the streams zin.close(); // Compress the files if (files != null) { for (File file : files) { InputStream in = new FileInputStream(file); // Add ZIP entry to output stream. 
out.putNextEntry(new ZipEntry(file.getName())); // Transfer bytes from the file to the ZIP file int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } // Complete the entry out.closeEntry(); in.close(); } // Complete the ZIP file } tempFile.delete(); out.close(); } public static void main(String[] args) throws IOException { final String zipArchivePath = "c:/temp/archive.zip"; final String tempFilePath = "c:/temp/data.txt"; final String resultZipFile = "c:/temp/resultingArchive.zip"; File zipArchive = new File(zipArchivePath); FileUtils.touch(zipArchive); File tempFile = new File(tempFilePath); FileUtils.writeStringToFile(tempFile, "hello world"); addOrUpdate(zipArchive, tempFile); //archive.zip exists and contains a compressed data.txt that can be read using winrar //now simulate writing of the zip into a in memory cache String archiveText = FileUtils.readFileToString(zipArchive); FileUtils.writeStringToFile(new File(resultZipFile), archiveText); //resultingArchive.zip exists, contains a compressed data.txt, but it can not //be read using winrar: CRC failed in data.txt. The file is corrupt } }
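    The corruption is expected: readFileToString/writeStringToFile push binary data through a character encoding, which is lossy for arbitrary bytes. One fix, not taken from the article, is to Base64-encode the zip bytes so the repository only ever sees plain ASCII text. A minimal sketch, assuming Java 8's java.util.Base64 (commons-codec offers the same on older JDKs) plus the commons-io byte-array helpers:

    import java.io.File;
    import java.io.IOException;
    import java.util.Base64;
    import org.apache.commons.io.FileUtils;

    public class ZipAsString {
        // Base64 maps arbitrary bytes to ASCII, so the round trip is lossless.
        public static String encode(File zip) throws IOException {
            byte[] raw = FileUtils.readFileToByteArray(zip);
            return Base64.getEncoder().encodeToString(raw);
        }

        public static void decode(String text, File target) throws IOException {
            FileUtils.writeByteArrayToFile(target, Base64.getDecoder().decode(text));
        }
    }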

    Read the article

  • How to keep material on one double page in latex ?

    - by drasto
    I have a two-sided document (so I use the twoside option) in LaTeX. I need to keep some material (text, pictures...) on one double page. In other words, I want to allow page breaks from an odd to an even page but prohibit breaks from an even to an odd page. I tried to write a macro: \newcommand{\nl}{ \\ \ifodd\c@page \relax \else \nopagebreak \fi} and use it instead of \\ (I don't use any other line-break commands in my document), but it does not work. Thanks for all answers.
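    One approach worth noting, not taken from the article: instead of policing individual line breaks, force the material to start on an even (left-hand) page, so the only break that can fall inside it is the allowed odd-to-even one. A minimal sketch, assuming the material fits on a single spread:

    % Flush to the next even page, emitting a blank page if needed.
    \newcommand{\cleartoleftpage}{%
      \clearpage
      \ifodd\value{page}\hbox{}\newpage\fi
    }

    \cleartoleftpage
    % material that must sit on one double page (spread) goes here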

    Read the article

  • How can I put double quotes inside a string within an ajax JSON response?

    - by karlthorwald
    I receive a JSON response in an Ajax request from the server. This way it works: { "a": "1", "b": "hello 'kitty'" }. But I did not succeed in putting double quotes around kitty. When I convert " to \x22 in the Ajax response, it is still interpreted as " by JavaScript and I cannot parse the JSON. Should I also escape the \ and unescape it later (which would be possible)? How do I do this? Edit: I am not sure if I expressed it well: after the parse I want this string inside of "b": hello "kitty". If necessary I could also add an additional step after the parse to convert "b", but I guess that is not necessary and there is a more elegant way in which this happens automatically?
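    For what it's worth, a minimal sketch (not from the article) of how the escaping looks on the wire: inside a JSON string a double quote is written as \", so the raw response body carries a backslash followed by a quote, and JSON.parse turns that back into a bare quote:

    // The server must emit the two characters \" inside the string;
    // in this JavaScript literal, \\ stands for one real backslash.
    var body = '{ "a": "1", "b": "hello \\"kitty\\"" }';
    var data = JSON.parse(body);
    console.log(data.b);   // hello "kitty"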

    Read the article

  • Why aren't double quotes and backslashes allowed in strings in the JSON standard?

    - by Dan Herbert
    If I run this in a JavaScript console in Chrome or Firebug, it works fine. JSON.parse('"\u0027"') // Escaped single-quote But if I run either of these two lines in a JavaScript console, it throws an error. JSON.parse('"\u0022"') // Escaped double-quote JSON.parse('"\u005C"') // Escaped backslash RFC 4627 section 2.5 seems to imply that \ and " are allowed characters as long as they're properly escaped. The two browsers I've tried this in don't seem to allow it, however. Is there something I'm doing wrong here or are they really not allowed in strings? I've also tried using \" and \\ in place of \u0022 and \u005C respectively. I feel like I'm just doing something very wrong, because I find it hard to believe that JSON would not allow these characters in strings, especially since the specification doesn't seem to mention anything that I could find saying they're not allowed.
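    The catch, sketched below: the \u escapes are consumed by the JavaScript string literal before JSON.parse ever runs, so the parser receives a raw " or \ inside the string, which is indeed invalid JSON. Doubling the backslash defers the escape to the JSON layer:

    // '"\u0022"' becomes the three characters " " " -- broken JSON.
    // Escaping the backslash hands the \u sequence to JSON.parse intact.
    JSON.parse('"\\u0022"');   // returns '"'  (JSON text: "\u0022")
    JSON.parse('"\\u005C"');   // returns '\\' (JSON text: "\u005C")
    JSON.parse('"\\""');       // returns '"'  (JSON text: "\"")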

    Read the article

  • Ways to divide the high/low byte from a 16bit address?

    - by Grissiom
    Hello, I'm developing software on an 8051 processor. A frequent job is to split a 16-bit address into its high and low bytes. I want to see how many ways there are to achieve it. The ways I have come up with so far are (say ptr is a 16-bit pointer, and int is a 16-bit int): ADDH = (unsigned int) ptr >> 8; ADDL = (unsigned int) ptr & 0x00FF; and ADDH = ((unsigned char *)&ptr)[0]; ADDL = ((unsigned char *)&ptr)[1]; Does anyone have any other bright ideas? ;) And can anyone tell me which way is more efficient?
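    Another option sometimes used on the 8051, offered as a sketch rather than a verdict on efficiency: a union lets the compiler address the two bytes directly, with no shift or mask. Which element holds the high byte depends on the compiler's byte order; the comment below assumes Keil C51, which stores the high byte first.

    #include <stdint.h>

    union addr16 {
        uint16_t w;     /* the full 16-bit address        */
        uint8_t  b[2];  /* its two bytes, compiler order  */
    };

    void split(uint16_t addr, uint8_t *high, uint8_t *low)
    {
        union addr16 u;
        u.w = addr;
        *high = u.b[0];  /* big-endian assumption (Keil C51) */
        *low  = u.b[1];
    }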

    Read the article

  • A 4-byte Unsigned Int for Sql Server 2008?

    - by Jeff Meatball Yang
    I understand there are multiple questions about this on SO, but I have yet to find a definitive answer of "yes, here's how..." So here it is again: What are the possible ways to store an unsigned integer value (32-bit value or 32-bit bitmap) in a 4-byte field in SQL Server? Here are the ideas I have seen: 1) Use a -1*2^31 offset for all values. Disadvantages: need to perform math on the values before reading/writing/aggregating. 2) Use 4 tinyint fields. Disadvantages: need to concatenate values to perform any operations. 3) Use binary(4). Disadvantages: actually uses 4 + 2 bytes of space.
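    A minimal sketch of option 1 (the offset trick), with hypothetical table and column names, just to make the boundary math concrete:

    -- The stored INT is the unsigned value shifted down by 2^31.
    CREATE TABLE dbo.Counters (
        Value INT NOT NULL   -- holds unsigned 0..4294967295 as -2^31..2^31-1
    );

    -- Write: unsigned 4000000000 does not fit in a signed INT directly.
    INSERT INTO dbo.Counters (Value)
    VALUES (CAST(4000000000 - 2147483648 AS INT));

    -- Read: widen to BIGINT before undoing the offset.
    SELECT CAST(Value AS BIGINT) + 2147483648 AS UnsignedValue
    FROM dbo.Counters;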

    Read the article

  • How to double the size of 8x8 Grid whilst keeping the relative position of certain tiles intact?

    - by ke3pup
    Hi guys, I have a grid of size 8x8, a total of 64 tiles. I'm using this grid to implement Java search algorithms such as BFS and DFS. The grid has given forbidden tiles (meaning they can't be traversed or be a neighbour of any other tile) and goal and start tiles. For example, tiles 19, 20, 21, 22 and 35, 39 are forbidden and 14 and 43 are the goal and start nodes when the program runs. My question is, how can I double the size of the grid to 16x16 whilst keeping the relative positions of the forbidden tiles, as well as the relative positions of the start and goal tiles, intact? On paper I know I can do this by adding 4 rows and columns on all sides, but in coding terms I don't know how to make it work. Can someone please give any sort of hint?
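    One way to code that remapping, sketched under the assumption of 0-based, row-major tile ids (if the question's numbering is 1-based, subtract 1 first and add 1 back at the end): treat each id as a (row, column) pair, add the border offset, and re-linearise against the new width.

    // Centre the old board in the new one by adding `border` tiles per side.
    static int remap(int tile, int oldSize, int border) {
        int row = tile / oldSize;            // position on the 8x8 board
        int col = tile % oldSize;
        int newSize = oldSize + 2 * border;  // 8 + 2*4 = 16
        return (row + border) * newSize + (col + border);
    }

    // remap(19, 8, 4) gives tile 19's id on the 16x16 grid; apply the same
    // call to the forbidden, start and goal tiles alike.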

    Read the article

  • In C# how can I serialize a List<int> to a byte[] in order to store it in a DB field?

    - by Matt
    In C# how can I serialize a List<int> to a byte[] in order to store it in a DB field? I know how to serialize to a file on the disk, but how do I just serialize to a variable? Here is how I serialized to the disk: List<int> l = IenumerableofInts.ToList(); Stream s = File.OpenWrite("file.bin"); BinaryFormatter bf = new BinaryFormatter(); bf.Serialize(s, l); s.Close(); I'm sure it's much the same but I just can't wrap my head around it.
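    A minimal sketch of the in-memory variant (method names ToBytes/FromBytes are illustrative): swap the FileStream for a MemoryStream and read its buffer back out, which yields a byte[] suitable for a varbinary column.

    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static byte[] ToBytes(List<int> list)
    {
        BinaryFormatter bf = new BinaryFormatter();
        using (MemoryStream ms = new MemoryStream())
        {
            bf.Serialize(ms, list);   // same call as the file version
            return ms.ToArray();      // the serialized bytes
        }
    }

    static List<int> FromBytes(byte[] data)
    {
        BinaryFormatter bf = new BinaryFormatter();
        using (MemoryStream ms = new MemoryStream(data))
        {
            return (List<int>)bf.Deserialize(ms);
        }
    }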

    Read the article

  • How to print a variable in reversed byte order in Perl?

    - by jth
    Hi, I'm trying to convert the variable $num into its reverse byte order and print it out. This is what I have done so far: my $num=0x5514ddb7; my $s=pack('I!',$num); print "$s\n"; It prints out as some non-printable characters, and in a hex editor it looks right, but how can I make it readable on the console? I already tried print sprintf("%#x\n",$s); but it complains about a non-numeric argument, so I think pack returns a string. Any ideas how I can print out 0xb7dd1455 on the console, based on $num?
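    One way to get there, sketched with pack/unpack only: pack the number with one fixed endianness and unpack it with the other, which swaps the bytes while handing back a plain number that printf can format.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $num = 0x5514ddb7;

    # 'N' packs 32 bits big-endian, 'V' unpacks them little-endian,
    # so the round trip reverses the byte order.
    my $swapped = unpack('V', pack('N', $num));

    printf "%#x\n", $swapped;   # prints 0xb7dd1455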

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >