Search Results

Search found 2773 results on 111 pages for 'calculate'.

Page 23 of 111

  • Calculating the 2D edge normals of a triangle

    - by Kazade
    What's a reliable way to calculate a 2D normal vector for each edge of a triangle, so that each normal is pointing outwards from the triangle? To clarify, given any triangle - for each edge (e.g p2-p1), I need to calculate a 2D normal vector pointing away from the triangle at right angles to the edge (for simplicity we can assume that the points are being specified in an anti-clockwise direction). I've coded a couple of hacky attempts, but I'm sure I'm overlooking some simple method and Google isn't being that helpful today - that and I haven't had my daily caffeine yet!
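    A minimal sketch of one common way to do this (my own illustration, not from the post): for a triangle wound anti-clockwise, the outward normal of a directed edge p1 -> p2 is the edge vector rotated 90 degrees clockwise, i.e. (dy, -dx), normalised. The C# below assumes a Vector2 type such as System.Numerics.Vector2.

        // Outward-facing normal of the edge p1 -> p2 of an anti-clockwise-wound triangle.
        // Rotating the edge vector 90 degrees clockwise points away from the interior.
        static Vector2 EdgeNormal(Vector2 p1, Vector2 p2)
        {
            float dx = p2.X - p1.X;
            float dy = p2.Y - p1.Y;
            return Vector2.Normalize(new Vector2(dy, -dx));
        }

        // Usage for triangle a-b-c (anti-clockwise):
        //   var n0 = EdgeNormal(a, b);
        //   var n1 = EdgeNormal(b, c);
        //   var n2 = EdgeNormal(c, a);

    If the winding order is not guaranteed, a quick check is to dot the candidate normal with the vector from the edge midpoint to the opposite vertex and flip the normal when the dot product is positive.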

    Read the article

  • Calculating Hit Accuracy score in a game

    - by N0xus
    I'm currently in the process of making a scoreboard for my game. One of the things I would like to display is the player's accuracy, based on the number of hits they landed in game. However, I have never done this before and I've no idea how to go about it. Is there a commonly used algorithm that can help me calculate this, or has someone found a way to calculate it fairly easily? Any help with this would be appreciated.
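    For what it's worth, hit accuracy is normally just hits divided by shots fired, reported as a percentage. A minimal C# sketch (the names are my own, not from the question):

        // Tracks shots fired and shots landed, and reports accuracy as a percentage.
        public class AccuracyTracker
        {
            public int ShotsFired { get; private set; }
            public int ShotsHit { get; private set; }

            public void RegisterShot(bool hit)
            {
                ShotsFired++;
                if (hit) ShotsHit++;
            }

            // Returns 0-100; avoids dividing by zero before the first shot.
            public double AccuracyPercent =>
                ShotsFired == 0 ? 0.0 : 100.0 * ShotsHit / ShotsFired;
        }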

    Read the article

  • 12.1.3 Spares Management Enhancements Transfer of Information (TOI)

    - by Oracle_EBS
    Transfer of Information (TOI) presentation is available. It covers the following enhancements made to the EBS Spares Management product:
    - Restrict Sources with no Shipping Network definition
    - Create Internal Order when Source is Manned Warehouse
    - Display Delivery status in Parts Requirement UI
    - Order Sources by distance when Shipping cost remains same
    - Calculate Parts Shipping Distances using Navteq Data
    - Consider Warehouse Calendar to calculate Parts Arrival Date
    - Create Requisitions in Operating Unit of Destination Inventory Org
    - Uptake of HZ address structure in Parts Requirement UI

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- No significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binaries provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets better performance.
Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Why is TDD not working here?

    - by TobiMcNamobi
    I want to write a class A that has a method calculate(<params>). That method should calculate a value using database data. So I wrote a class Test_A for unit testing (TDD). The database access is done using another class which I have mocked with a class, let's call it Accessor_Mockup. Now, the TDD cycle requires me to add a test that fails and make the simplest changes to A so that the test passes. So I add data to Accessor_Mockup and call A.calculate with appropriate parameters. But why should A use the accessor class at all? It would be simpler (!) if the class just "knows" the values it could retrieve from the database. For every test I write I could introduce such a new value (or an if-branch or whatever). But wait ... TDD is more. There is the refactoring part. But that sounds to me like "OK, I can do this all with a big if-elseif construct. I could refactor it using a new class ... but instead I make use of the DB accessor and do this in a totally different way. The code will not necessarily look better afterwards but I know I WANT to use the database".

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures. Even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows - have I got this right? The normal is a unit vector facing outwards from the surface of the triangle; for a vertex I take the average of the normals of the triangles using that vertex. The tangent is a unit vector in the direction of the 'u' coordinate of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction, so I should be able to calculate it as the difference between vertices in the x direction (and normalize it). The bitangent is a unit vector in the direction of the 'v' coordinate of the texture map; by the same reasoning it should simply be the vector along the surface in the y direction, so I should be able to calculate it as the difference between vertices in the y direction (and normalize it). However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I described above and it simply doesn't work - the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, when the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
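    One way to sanity-check the simple interpretation is a central-difference sketch like the one below (my own illustration, not the borrowed code; GetPosition and Vector3 are assumed helpers, e.g. System.Numerics.Vector3). For a regular grid whose u,v coordinates increase with x and y, the tangent and bitangent really are just the normalised surface differences along x and y, and the normal is their cross product; the extra u,v terms in typical tangent-space code only matter when the texture coordinates are not aligned with the grid axes.

        // Per-vertex basis for a heightmap grid where u follows x and v follows y.
        // GetPosition(x, y) is assumed to return the world-space vertex position;
        // clamp x and y to the grid bounds in real code.
        void ComputeBasis(int x, int y,
                          out Vector3 normal, out Vector3 tangent, out Vector3 bitangent)
        {
            Vector3 dx = GetPosition(x + 1, y) - GetPosition(x - 1, y);  // central difference along x
            Vector3 dy = GetPosition(x, y + 1) - GetPosition(x, y - 1);  // central difference along y

            tangent   = Vector3.Normalize(dx);                   // follows u
            bitangent = Vector3.Normalize(dy);                   // follows v
            normal    = Vector3.Normalize(Vector3.Cross(dx, dy));
            // If the normal points into the ground, the handedness convention differs
            // from this sketch; swap the cross-product arguments.
        }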

    Read the article

  • Drawing particles as a smooth blob

    - by Nömmik
    I'm new to game/graphics development and I'm playing around with particles (in 2D). I want to draw particles close to each other as a blob, just as liquid/water. I do not want to draw big circles overlapping as the blob won't be smooth (and too big). I don't really know physics but I assume what I want is something looking similar to surface tension. I haven't been able to find anything on stackexchange or on Google (maybe I do not know the correct keywords?). So far I have found two possible solutions, but I am unable to find any concrete information about algorithms. One of them is to calculate the concave hull of particles I consider being a blob. I can calculate the blob by creating an equivalence class (on the relation "close to each other"). Strangely enough I haven't been able to find any algorithm explaining how to calculate the concave hull. Many posts (and among stackexchange) links to libraries or commercial products that do this (I need libraries to work in C#), but never any algorithm. Also this solution might have a problem with a circle of particles, which would not detect the empty space in the middle. While researching concave hull I stumbled upon something called alpha shapes. Which seems to be exactly what I want to do, however just as with concave hull I haven't found any source explaining how they actually work. I have found some presentation materials but not enough to go on. It's like a big secret everyone knows except me :-/ After calculating the concave hull or alpha shape I want to make it a Bézier curve to make it smooth and nice. Although I do find my approach a bit too complex, maybe I am trying to solve this the wrong way? If you can either suggest any other solution to my problem, or explain the pieces I am missing I would be very happy and grateful :-) Thanks.

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in two different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me: let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, but the second it attempted to travel diagonally it could only travel X/2! So like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector d to describe the magnitude and direction of purple from green:
        d = purple - green
        d = (purple.x, purple.y) - (green.x, green.y)
        d = (2, 2) - (3, 3)
        d = <-1, -1>
    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:
        euc_magnitude = sqrt((x*x)+(y*y))
        euc_magnitude = sqrt((-1*-1)+(-1*-1))
        euc_magnitude = sqrt(1+1)
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41
    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not - it's 1.41. 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory - haven't programmed a game in my life!

    Read the article

  • PHP Calculating Text to Content Ratio

    - by James
    I am using the following code to calculate text to code ratio. I think it is crazy that no one can agree on how to properly calculate the result. I am looking any suggestions or ideas to improve this code that may make it more accurate. <?php // Returns the size of the content in bytes function findKb($content){ $count=0; $order = array("\r\n", "\n", "\r", "chr(13)", "\t", "\0", "\x0B"); $content = str_replace($order, "12", $content); for ($index = 0; $index < strlen($content); $index ++){ $byte = ord($content[$index]); if ($byte <= 127) { $count++; } else if ($byte >= 194 && $byte <= 223) { $count=$count+2; } else if ($byte >= 224 && $byte <= 239) { $count=$count+3; } else if ($byte >= 240 && $byte <= 244) { $count=$count+4; } } return $count; } // Collect size of entire code $filesize = findKb($content); // Remove anything within script tags $code = preg_replace("@<script[^>]*>.+</script[^>]*>@i", "", $content); // Remove anything within style tags $code = preg_replace("@<style[^>]*>.+</style[^>]*>@i", "", $content); // Remove all tags from the system $code = strip_tags($code); // Remove Extra whitespace from the content $code = preg_replace( '/\s+/', ' ', $code ); // Find the size of the remaining code $codesize = findKb($code); // Calculate Percentage $percent = $codesize/$filesize; $percentage = $percent*100; echo $percentage; ?> I don't know the exact calculations that are used so this function is just my guess. Does anyone know what the proper calculations are or if my functions are close enough for a good judgement.

    Read the article

  • jQuery calculator

    - by Nemanja
    I want to make a calculator for summation. I have this jQuery code: <script type="text/javascript"> $(document).ready(function() { $("#calculate").click(function() { if ($("#input").val() != '' && $("#input").val() != undefined) { $("#result").html("Result is: " + parseInt($("#input").val()) * 30 ); } else { $("#result").html("Please enter some value"); } }); }); </script> <div id="calculator"> <br /> <input type="text" style="left:20px;" id="input" /> <div id="result"></div> <input type="button" id="calculate" value="calculate" style="left:20px;" /> </div> I want something like this: I type the number 5, then type + from the keyboard and enter another value. Then I click the summation button and get the result. Can anyone help me with this? Thank you very much!

    Read the article

  • How do I make cars on a one-dimensional track avoid collisions?

    - by user990827
    Using three.js, I use a simple spline to represent a road. Cars can only move forward on the spline. A car should be able to slow-down behind a slow moving car. I know how to calculate the distance between 2 cars, but how to calculate the proper speed in each game update? At the moment I simply do something like this: this.speed += (this.maxSpeed - this.speed) * 0.02; // linear interpolation to maxSpeed // the position on the spline (0.0 - 1.0) this.position += this.speed / this.road.spline.getLength(); This works. But how to implement the slow-down part? // transform from floats (0.0 - 1.0) into actual units var carInFrontPosition = carInFront.position * this.road.spline.getLength(); var myPosition = this.position * this.road.spline.getLength(); var distance = carInFrontPosition - myPosition; // WHAT TO DO HERE WITH THE DISTANCE? // HOW TO CALCULATE MY NEW SPEED? Obviously I have to somehow take current speed of the cars into account for calculation. Besides different maxSpeeds, I want each car to also have a different mass (causing it to accelerate slower/faster). But this mass has to be then also taken into account for braking (slowing down) so they don't crash into each other.
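    One common approach (a sketch of my own, written in C# purely for illustration even though the question is three.js/JavaScript) is to derive a desired speed from the gap to the car in front and then ease the current speed toward it, with acceleration and braking limited by the car's mass. The fields used below (speed, maxSpeed, mass, engineForce, brakeForce, road.SplineLength) either mirror the question or are assumed:

        // dt is the frame time in seconds; gap is the distance to the car in front in world units.
        void UpdateSpeed(double dt, double gap)
        {
            const double safeDistance = 10.0;   // tuning value; could also scale with current speed

            // Full speed when the gap is comfortable, scaled down towards zero as the gap closes.
            double desired = maxSpeed * Math.Min(1.0, gap / safeDistance);

            // Heavier cars change speed more slowly.
            double maxAccel = engineForce / mass;
            double maxBrake = brakeForce / mass;

            double diff  = desired - speed;
            double limit = (diff > 0 ? maxAccel : maxBrake) * dt;
            speed += Math.Clamp(diff, -limit, limit);

            // Advance the normalised spline position (the question's update, with dt applied).
            position += speed * dt / road.SplineLength;
        }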

    Read the article

  • How to use an adjacency matrix to determine which rows to 'pass' to a function in r?

    - by dubhousing
    New to R, and I have a long-ish question: I have a shapefile/map, and I'm aiming to calculate a certain index for every polygon in that map, based on attributes of that polygon and each polygon that neighbors it. I have an adjacency matrix -- which I think is the same as a "1st-order queen contiguity weights matrix", although I'm not sure -- that describes which polygons border which other polygons, e.g., POLYID A B C D E A 0 0 1 0 1 B 0 0 1 0 0 C 1 1 0 1 0 D 0 0 1 0 1 E 1 0 0 1 0 The above indicates, for instance, that polygons 'C' and 'E' adjoin polygon 'A'; polygon 'B' adjoins only polygon 'C', etc. The attribute table I have has one polygon per row: POLYID TOT L10K 10_15K 15_20K ... A 500 24 30 77 ... Where TOT, L10K, etc. are the variables I use to calculate an index. There are 525 polygons/rows in my data, so I'd like to use the adjacency matrix to determine which rows' attributes to incorporate into the calculation of the index of interest. For now, I can calculate the index when I subset the rows that correspond to one 'bundle' of neighboring polygons, and then use a loop (if it's of interest, I'm calculating the Centile Gap Index, a measure of local income segregation). E.g., subsetting the 'neighborhood' of the Detroit City Schools: Detroit <- UNSD00[c(142,150,164,221,226,236,295,327,157,177,178,364,233,373,418,424,449,451,487),] Then record the marginal column proportions and a running total: catprops <- vector() for(i in 4:19) { catprops[(i-3)]<-sum(Detroit[,i])/sum(Detroit[,3]) } catprops <- as.data.frame(catprops) catprops[,2]<-cumsum(catprops[,1]) Columns 4:19 are the necessary ones in the attribute table. Then I use the following code to calculate the index -- note that the loop has "i in 1:19" because the Detroit subset has 19 polygons. cgidistsum <- 0 for(i in 1:19) { pranks <- vector() for(j in 4:19) { if (Detroit[i,j]==0) pranks <- append(pranks,0) else if (j == 4) pranks <- append(pranks,seq(0,catprops[1,2],by=catprops[1,2]/Detroit[i,j])) else pranks <- append(pranks,seq(catprops[j-4,2],catprops[j-3,2],by=catprops[j-3,1]/Detroit[i,j])) } distpranks <- vector() distpranks<-abs(pranks-median(pranks)) cgidistsum <- cgidistsum + sum(distpranks) } cgi <- (.25-(cgidistsum/sum(Detroit[,3])))/.25 My apologies if I've provided more information than is necessary. I would really like to exploit the adjacency matrix in order to calculate the CGI for each 'bundle' of these rows. If you happen to know how I could started with this, that would be great. and my apologies for any novice mistakes, I'm new to R!

    Read the article

  • Developing an Implementation Plan with Iterations by Russ Pitts

    - by user535886
    Developing an Implementation Plan with Iterations by Russ Pitts  Ok, so you have come to grips with understanding that applying the iterative concept, as defined by OUM is simply breaking up the project effort you have estimated for each phase into one or more six week calendar duration blocks of work. Idea being the business user(s) or key recipient(s) of work product(s) being developed never go longer than six weeks without having some sort of review or prototyping of the work results for an iteration…”think-a-little”, “do-a-little”, and “show-a-little” in a six week or less timeframe…ideally the business user(s) or key recipients(s) are involved throughout. You also understand the OUM concept that you only plan for that which you have knowledge of. The concept further defined, a project plan initially is developed at a high-level, and becomes more detailed as project knowledge grows. Agreeing to this concept means you also have to admit to the fallacy that one can plan with precision beyond six weeks into a project…Anything beyond six weeks is a best guess in most cases when dealing with software implementation projects. Project planning, as defined by OUM begins with the Implementation Plan view, which is a very high-level perspective of the effort estimated for each of the five OUM phases, as well as the number of iterations within each phase. You might wonder how can you predict the number of iterations for each phase at this early point in the project. Remember project planning is not an exact science, and initially is high-level and abstract in nature, and then becomes more detailed and precise as the project proceeds. So where do you start in defining iterations for each phase for a project? The following are three easy steps to initially define the number of iterations for each phase: Step 1 => Start with identifying the known factors… …Prior to starting a project you should know: · The agreed upon time-period for an iteration (e.g 6 weeks, or 4 weeks, or…) within a phase (recommend keeping iteration time-period consistent within a phase, if not for the entire project) · The number of resources available for the project · The number of total number of man-day (effort) you have estimated for each of the five OUM phases of the project · The number of work days for a week Step 2 => Calculate the man-days of effort required for an iteration within a phase… Lets assume for the sake of this example there are 10 project resources, and you have estimated 2,536 man-days of work effort which will need to occur for the elaboration phase of the project. Let’s also assume a week for this project is defined as 5 business days, and that each iteration in the elaboration phase will last a calendar duration of 6 weeks. 
A simple calculation is performed to calculate the daily burn rate for a single iteration, which produces a result of… ((Number of resources * days per week) * duration of iteration) = Number of days required per iteration ((10 resources * 5 days/week) * 6 weeks) = 300 man days of effort required per iteration Step 3 => Calculate the number of iterations that can occur within a phase Next calculate the number of iterations that can occur for the amount of man-days of effort estimated for the phase being considered… (number of man-days of effort estimated / number of man-days required per iteration) = # of iterations for phase (2,536 man-days of estimated effort for phase / 300 man days of effort required per iteration) = 8.45 iterations, which should be rounded to a whole number such as 9 iterations* *Note - It is important to note this is an approximate calculation, not an exact science. This particular example is a simple one, which assumes all resources are utilized throughout the phase, including tech resources, etc. (rounding down or up to a whole number based on project factor considerations). It is also best in many cases to round up to higher number, as this provides some calendar scheduling contingency.
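    The same arithmetic as a small C# sketch (a top-level snippet using the example numbers above):

        // Rough iteration count for a phase:
        // effort / (resources * days per week * weeks per iteration), rounded up.
        int resources = 10, daysPerWeek = 5, weeksPerIteration = 6;
        double phaseEffortManDays = 2536;

        double manDaysPerIteration = resources * daysPerWeek * weeksPerIteration;            // 300
        int plannedIterations = (int)Math.Ceiling(phaseEffortManDays / manDaysPerIteration); // 9

        Console.WriteLine($"{plannedIterations} iterations of {manDaysPerIteration} man-days each");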

    Read the article

  • A proposal for #DAX Code Formatting #ssas #powerpivot #tabular

    - by Marco Russo (SQLBI)
    I recently published a set of rules for DAX code formatting. The following is an example of what I obtain: CALCULATE (     SUMX (         Orders,         Orders[Amount]     ),     FILTER (         ALL ( Customers ),         CALCULATE (             COUNTROWS ( Sales ),             ALL ( Calendar[Date] )         ) > 42 + 8 – 25 * ( 3 - 1 )             + 2 – 1 + 2 – 1             + CALCULATE (                   2 + 2 – 2                   + 2 - 2               )             – CALCULATE ( 4 )     ) ) The goal is to improve code readability and I look forward to implement a code formatting feature in DAX Studio. The DAX Editor already supports the rules described in the article. I am also considering whether to add a rule specific for ADDCOLUMNS / SUMMARIZE because I would like to see the “pairs” of arguments to define a column in the same row or with a special indentation rule (DAX expression for a column is indented in the line following the column name). EVALUATE CALCULATETABLE (        CALCULATETABLE (         SUMMARIZE (             Audience,             'Date'[Year],             Individuals[Gender],             Individuals[AgeRange],             "Num of Rows", FORMAT (COUNTROWS (Audience), "#,#"),             "Weighted Mean Age",                 SUMX (Audience, Audience[Weight] * Audience[Age]) / SUM (Audience[Weight])         ),         SUMMARIZE (             BridgeIndividualsTargets,             Individuals[ID_Individual]         ),         Audience[Weight] > 0        ),        Targets[Target] = "Maschi",     'Date'[Year] = 2010,     'Date'[MonthName] = "January" ) I would like to get feedback for that – you can use comments here or comments in original article. Thanks!

    Read the article

  • How to Display a Bmp in a RTF control in VB.net

    - by Gerolkae
    I Started with this C# Question I'm trying to Display a bmp image inside a rtf Box for a Bot program I'm making. This function is supposed to convert a bitmap to rtf code whis is inserted to another rtf formatter srtring with additional text. Kind of like Smilies being used in a chat program. For some reason the output of this function gets rejected by the RTF Box and Vanishes completly. I'm not sure if it the way I'm converting the bmp to a Binary string or if its tied in with the header tags 'returns the RTF string representation of our picture Public Shared Function PictureToRTF(ByVal Bmp As Bitmap) As String Dim stream As New MemoryStream() Bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp) Dim bytes As Byte() = stream.ToArray() Dim str As String = BitConverter.ToString(bytes, 0).Replace("-", String.Empty) 'header to string we want to insert Using g As Graphics = Main.CreateGraphics() xDpi = g.DpiX yDpi = g.DpiY End Using Dim _rtf As New StringBuilder() ' Calculate the current width of the image in (0.01)mm Dim picw As Integer = CInt(Math.Round((Bmp.Width / xDpi) * HMM_PER_INCH)) ' Calculate the current height of the image in (0.01)mm Dim pich As Integer = CInt(Math.Round((Bmp.Height / yDpi) * HMM_PER_INCH)) ' Calculate the target width of the image in twips Dim picwgoal As Integer = CInt(Math.Round((Bmp.Width / xDpi) * TWIPS_PER_INCH)) ' Calculate the target height of the image in twips Dim pichgoal As Integer = CInt(Math.Round((Bmp.Height / yDpi) * TWIPS_PER_INCH)) ' Append values to RTF string _rtf.Append("{\pict\wbitmap0") _rtf.Append("\picw") _rtf.Append(Bmp.Width.ToString) ' _rtf.Append(picw.ToString) _rtf.Append("\pich") _rtf.Append(Bmp.Height.ToString) ' _rtf.Append(pich.ToString) _rtf.Append("\wbmbitspixel24\wbmplanes1") _rtf.Append("\wbmwidthbytes40") _rtf.Append("\picwgoal") _rtf.Append(picwgoal.ToString) _rtf.Append("\pichgoal") _rtf.Append(pichgoal.ToString) _rtf.Append("\bin ") _rtf.Append(str.ToLower & "}") Return _rtf.ToString End Function

    Read the article

  • In WPF, accessing containers within ListBox

    - by Jim
    I'm creating a DerivedListBox : ListBox and a DerivedHeaderedContentControl : HeaderedContentControl, which will serve as a container for each item in the ListBox. In order to calculate the size available for the expanded content of the DerivedHeaderedContentControls, I am storing each container object in a list within the DerivedListBox. This way I can calculate the height of the headers of each DerivedHeaderedContentControl and subtract that from the total size available to the DerivedListBox. This would be the size available for the expanded content of a DerivedHeaderedContentControl. public class DerivedHeaderedContentControl : HeaderedContentControl { // Do some binding to DerivedListBox to calculate height. } public class DerivedListBox : ListBox { private List<DerivedHeaderedContentControl> containers; protected override DependencyObject GetContainerForItemOverride() { DerivedHeaderedContentControl val = new DerivedHeaderedContentControl(); this.containers.Add(val); return val; } // Do some binding to calculate height available for an expanded // container by iterating over containers. } The problem comes in when the DerivedListBox's ItemsSource is cleared (or an item in the items source is removed). How can I determine when the ItemsSource is cleared so that I can clear the containers list?
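    If the goal is to know when the ItemsSource is cleared (or items are removed) so the containers list can be pruned, one option - a sketch, not necessarily how the poster's classes are wired up - is to override ItemsControl.OnItemsChanged, which WPF calls for every change to the Items collection, including a Reset when the source is cleared or replaced:

        using System.Collections.Generic;
        using System.Collections.Specialized;
        using System.Windows;
        using System.Windows.Controls;

        public class DerivedListBox : ListBox
        {
            private readonly List<DerivedHeaderedContentControl> containers = new();

            protected override DependencyObject GetContainerForItemOverride()
            {
                var container = new DerivedHeaderedContentControl();
                containers.Add(container);
                return container;
            }

            protected override void OnItemsChanged(NotifyCollectionChangedEventArgs e)
            {
                base.OnItemsChanged(e);
                if (e.Action == NotifyCollectionChangedAction.Reset)
                    containers.Clear();   // ItemsSource was cleared or replaced
                // Remove/Replace actions could prune individual containers here as well.
            }
        }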

    Read the article

  • Valgrind says "stack allocation," I say "heap allocation"

    - by Joel J. Adamson
    Dear Friends, I am trying to trace a segfault with valgrind. I get the following message from valgrind: ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C5: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) ==3683== ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C7: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) However, here's the offending line: /* allocate mating table */ age_dep_data->mtable = malloc (age_dep_data->geno * sizeof (double *)); if (age_dep_data->mtable == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); for (int j = 0; j < age_dep_data->geno; j++) { 131=> age_dep_data->mtable[j] = calloc (age_dep_data->geno, sizeof (double)); if (age_dep_data->mtable[j] == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); } What gives? I thought any call to malloc or calloc allocated heap space; there is no other variable allocated here, right? Is it possible there's another allocation going on (the offending stack allocation) that I'm not seeing? You asked to see the code, here goes: /* Copyright 2010 Joel J. Adamson <[email protected]> $Id: age_dep.c 1010 2010-04-21 19:19:16Z joel $ age_dep.c:main file Joel J. Adamson -- http://www.unc.edu/~adamsonj Servedio Lab University of North Carolina at Chapel Hill CB #3280, Coker Hall Chapel Hill, NC 27599-3280 This file is part of an investigation of age-dependent sexual selection. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with haploid. If not, see <http://www.gnu.org/licenses/>. 
*/ #include "age_dep.h" /* global variables */ extern struct argp age_dep_argp; /* global error message variables */ char * nullmsg = "Null pointer: %i"; /* error message for conversions: */ char * errmsg = "Representation error: %s"; /* precision for formatted output: */ const char prec[] = "%-#9.8f "; const size_t age_max = AGEMAX; /* maximum age of males */ static int keep_going_p = 1; int main (int argc, char ** argv) { /* often used counters: */ int i, j; /* read the command line */ struct age_dep_args age_dep_args = { NULL, NULL, NULL }; argp_parse (&age_dep_argp, argc, argv, 0, 0, &age_dep_args); /* set the parameters here: */ /* initialize an age_dep_params structure, set the members */ age_dep_params_t * params = malloc (sizeof (age_dep_params_t)); if (params == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); age_dep_init_params (params, &age_dep_args); /* initialize frequencies: this initializes a list of pointers to initial frqeuencies, terminated by a NULL pointer*/ params->freqs = age_dep_init (&age_dep_args); params->by = 0.0; /* what range of parameters do we want, and with what stepsize? */ /* we should go from 0 to half-of-theta with a step size of about 0.01 */ double from = 0.0; double to = params->theta / 2.0; double stepsz = 0.01; /* did you think I would spell the whole word? */ unsigned int numparts = floor(to / stepsz); do { #pragma omp parallel for private(i) firstprivate(params) \ shared(stepsz, numparts) for (i = 0; i < numparts; i++) { params->by = i * stepsz; int tries = 0; while (keep_going_p) { /* each time through, modify mfreqs and mating table, then go again */ keep_going_p = age_dep_iterate (params, ++tries); if (keep_going_p == ERANGE) error (ERANGE, ERANGE, "Failure to converge\n"); } fprintf (stdout, "%i iterations\n", tries); } /* for i < numparts */ params->freqs = params->freqs->next; } while (params->freqs->next != NULL); return 0; } inline double age_dep_pmate (double age_dep_t, unsigned int genot, double bp, double ba) { /* the probability of mating between these phenotypes */ /* the female preference depends on whether the female has the preference allele, the strength of preference (parameter bp) and the male phenotype (age_dep_t); if the female lacks the preference allele, then this will return 0, which is not quite accurate; it should return 1 */ return bits_isset (genot, CLOCI)? 1.0 - exp (-bp * age_dep_t) + ba: 1.0; } inline double age_dep_trait (int age, unsigned int genot, double by) { /* return the male trait, a function of the trait locus, age, the age-dependent scaling parameter (bx) and the males condition genotype */ double C; double T; /* get the male's condition genotype */ C = (double) bits_popcount (bits_extract (0, CLOCI, genot)); /* get his trait genotype */ T = bits_isset (genot, CLOCI + 1)? 1.0: 0.0; /* return the trait value */ return T * by * exp (age * C); } int age_dep_iterate (age_dep_params_t * data, unsigned int tries) { /* main driver routine */ /* number of bytes for female frequencies */ size_t geno = data->age_dep_data->geno; size_t genosize = geno * sizeof (double); /* female frequencies are equal to male frequencies at birth (before selection) */ double ffreqs[geno]; if (ffreqs == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); /* do not set! 
Use memcpy (we need to alter male frequencies (selection) without altering female frequencies) */ memmove (ffreqs, data->freqs->freqs[0], genosize); /* for (int i = 0; i < geno; i++) */ /* ffreqs[i] = data->freqs->freqs[0][i]; */ #ifdef PRMTABLE age_dep_pr_mfreqs (data); #endif /* PRMTABLE */ /* natural selection: */ age_dep_ns (data); /* normalized mating table with new frequencies */ age_dep_norm_mtable (ffreqs, data); #ifdef PRMTABLE age_dep_pr_mtable (data); #endif /* PRMTABLE */ double * newfreqs; /* mutate here */ /* i.e. get the new frequency of 0-year-olds using recombination; */ newfreqs = rec_mating (data->age_dep_data); /* return block */ { if (sim_stop_ck (data->freqs->freqs[0], newfreqs, GENO, TOL) == 0) { /* if we have converged, stop the iterations and handle the data */ age_dep_sim_out (data, stdout); return 0; } else if (tries > MAXTRIES) return ERANGE; else { /* advance generations */ for (int j = age_max - 1; j < 0; j--) memmove (data->freqs->freqs[j], data->freqs->freqs[j-1], genosize); /* advance the first age-class */ memmove (data->freqs->freqs[0], newfreqs, genosize); return 1; } } } void age_dep_ns (age_dep_params_t * data) { /* calculate the new frequency of genotypes given additive fitness and selection coefficient s */ size_t geno = data->age_dep_data->geno; double w[geno]; double wbar, dtheta, ttheta, dcond, tcond; double t, cond; /* fitness parameters */ double mu, nu; mu = data->wparams[0]; nu = data->wparams[1]; /* calculate fitness */ for (int j = 0; j < age_max; j++) { int i; for (i = 0; i < geno; i++) { /* calculate male trait: */ t = age_dep_trait(j, i, data->by); /* calculate condition: */ cond = (double) bits_popcount (bits_extract(0, CLOCI, i)); /* trait-based fitness term */ dtheta = data->theta - t; ttheta = (dtheta * dtheta) / (2.0 * nu * nu); /* condition-based fitness term */ dcond = CLOCI - cond; tcond = (dcond * dcond) / (2.0 * mu * mu); /* calculate male fitness */ w[i] = 1 + exp(-tcond) - exp(-ttheta); } /* calculate mean fitness */ /* as long as we calculate wbar before altering any values of freqs[], we're safe */ wbar = gen_mean (data->freqs->freqs[j], w, geno); for (i = 0; i < geno; i++) data->freqs->freqs[j][i] = (data->freqs->freqs[j][i] * w[i]) / wbar; } } void age_dep_norm_mtable (double * ffreqs, age_dep_params_t * params) { /* this function produces a single mating table that forms the input for recombination () */ /* i is female genotype; j is male genotype; k is male age */ int i,j,k; double norm_denom; double trait; size_t geno = params->age_dep_data->geno; for (i = 0; i < geno; i++) { double norm_mtable[geno]; /* initialize the denominator: */ norm_denom = 0.0; /* find the probability of mating and add it to the denominator */ for (j = 0; j < geno; j++) { /* initialize entry: */ norm_mtable[j] = 0.0; for (k = 0; k < age_max; k++) { trait = age_dep_trait (k, j, params->by); norm_mtable[j] += age_dep_pmate (trait, i, params->bp, params->ba) * (params->freqs->freqs)[k][j]; } norm_denom += norm_mtable[j]; } /* now calculate entry (i,j) */ for (j = 0; j < geno; j++) params->age_dep_data->mtable[i][j] = (ffreqs[i] * norm_mtable[j]) / norm_denom; } } My current suspicion is the array newfreqs: I can't memmove, memcpy or assign a stack variable then hope it will persist, can I? rec_mating() returns double *.

    Read the article

  • Linear regression confidence intervals in SQL

    - by Matt Howells
    I'm using some fairly straight-forward SQL code to calculate the coefficients of regression (intercept and slope) of some (x,y) data points, using least-squares. This gives me a nice best-fit line through the data. However we would like to be able to see the 95% and 5% confidence intervals for the line of best-fit (the curves below). What these mean is that the true line has 95% probability of being below the upper curve and 95% probability of being above the lower curve. How can I calculate these curves? I have already read wikipedia etc. and done some googling but I haven't found understandable mathematical equations to be able to calculate this. Edit: here is the essence of what I have right now. --sample data create table #lr (x real not null, y real not null) insert into #lr values (0,1) insert into #lr values (4,9) insert into #lr values (2,5) insert into #lr values (3,7) declare @slope real declare @intercept real --calculate slope and intercept select @slope = ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/ ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)), @intercept = avg(y) - ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/ ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)) * avg(x) from #lr Thank you in advance.
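    For reference (this worked method is not from the original post), the standard closed form for the confidence band around the fitted line at a point x0 is y_hat(x0) ± t * s * sqrt(1/n + (x0 - x_bar)^2 / Sxx), where s is the residual standard error with n - 2 degrees of freedom, Sxx = sum((x_i - x_bar)^2), and t is the critical value from a t-table for the chosen confidence level. A minimal Python sketch of that calculation follows; the same sums translate directly into SQL aggregates (COUNT(*), SUM(x*y), AVG(x) and so on) in the style of the slope/intercept query above.

        import math

        def confidence_band(points, x0, t_crit):
            """Fitted value and confidence limits for the mean response at x0.

            points : list of (x, y) pairs, needs more than 2 points
            t_crit : t critical value with n - 2 degrees of freedom, looked up
                     in a t-table for the desired confidence level
            """
            n = len(points)
            xs = [x for x, _ in points]
            ys = [y for _, y in points]
            x_bar = sum(xs) / float(n)
            y_bar = sum(ys) / float(n)
            sxx = sum((x - x_bar) ** 2 for x in xs)
            sxy = sum((x - x_bar) * (y - y_bar) for x, y in points)
            slope = sxy / sxx
            intercept = y_bar - slope * x_bar
            sse = sum((y - (intercept + slope * x)) ** 2 for x, y in points)
            s = math.sqrt(sse / (n - 2.0))          # residual standard error
            y_hat = intercept + slope * x0
            half = t_crit * s * math.sqrt(1.0 / n + (x0 - x_bar) ** 2 / sxx)
            return y_hat, y_hat - half, y_hat + half

    Note that the four sample points in the post happen to lie exactly on y = 2x + 1, so on that data the band collapses onto the fitted line; with real measurements s is nonzero and the band widens as x0 moves away from the mean of x.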

    Read the article

  • How can I handle dynamically calculated attributes in a model in Django?

    - by bullfish
    In Django I calculate the breadcrumb (a list of fathers) for a geographical object. Since it is not going to change very often, I am thinking of pre-calculating it once the object is saved or initialized. 1.) What would be better? Which solution would have better performance? To calculate it at __init__ or to calculate it when the object is saved (the object takes about 500-2000 characters in the DB)? 2.) I tried to override the __init__ or save() methods but I don't know how to use attributes of the just-saved object. Accessing *args, **kwargs did not work. How can I access them? Do I have to save, access the father and then save again? 3.) If I decide to save the breadcrumb, what's the best way to do it? I used http://www.djangosnippets.org/snippets/1694/ and have crumb = PickledObjectField(). That's the method that calculates the crumb attribute: def _breadcrumb(self): breadcrumb = [ ] x = self while True: x = x.father try: if hasattr(x, 'country'): breadcrumb.append(x.country) elif hasattr(x, 'region'): breadcrumb.append(x.region) elif hasattr(x, 'city'): breadcrumb.append(x.city) else: break except: break breadcrumb.reverse() return breadcrumb That's my save() method: def save(self,*args, **kwargs): # how can I access the father of the object? father = self.father # does obviously not work father = kwargs['father'] # does not work either # the breadcrumb gets calculated here self.crumb = self._breadcrumb(father) super(GeoObject, self).save(*args,**kwargs) Please help me out. I have been working on this for days now. Thank you.
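    For what it's worth, inside an overridden save() the instance attributes are already populated, so the father does not need to be pulled out of *args/**kwargs. A minimal sketch, assuming the field and method names from the post (note that _breadcrumb() as defined above takes no argument; it walks self.father itself):

        def save(self, *args, **kwargs):
            # self.father is already set on the instance by the time save() runs
            self.crumb = self._breadcrumb()
            super(GeoObject, self).save(*args, **kwargs)

    On the __init__-versus-save() question: computing the crumb in save() runs once per write, while __init__ runs every time the object is instantiated, including for every row returned by a query, so doing it on save is usually the cheaper choice.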

    Read the article

  • Naive Bayesian classification (spam filtering) - Doubt in one calculation? Which one is right? Plz c

    - by Microkernel
    Hi guys, I am implementing a Naive Bayesian classifier for spam filtering. I have a doubt about one calculation; please clarify what I should do. Here is my question. In this method, you have to calculate P(S|W) - the probability that a message is spam given that word W occurs in it, P(W|S) - the probability that word W occurs in a spam message, and P(W|H) - the probability that word W occurs in a ham message. So, to calculate P(W|S), should I do (1) (number of times W occurs in spam) / (total number of times W occurs in all the messages), OR (2) (number of times W occurs in spam) / (total number of words in the spam messages)? (I thought it should be (2), but I am not sure, so please clarify.) I am referring to http://en.wikipedia.org/wiki/Bayesian_spam_filtering for the info, by the way. I have to complete the implementation by this weekend :( Thanks and regards, MicroKernel :) @sth: Hmm... shouldn't repeated occurrence of word 'W' increase a message's spam score? In your approach it wouldn't, right? Let's take a scenario and discuss... Let's say we have 100 training messages, of which 50 are spam and 50 are ham, and say the word count of each message = 100. And let's say word W occurs 5 times in each spam message and 1 time in each ham message. So the total number of times W occurs in all the spam messages = 5*50 = 250 times, and the total number of times W occurs in all the ham messages = 1*50 = 50 times. The total occurrence of W in all of the training messages = (250+50) = 300 times. So, in this scenario, how do you calculate P(W|S) and P(W|H)? Naturally we should expect P(W|S) > P(W|H), right? Please share your thoughts...
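    For reference, here is how the numbers in that scenario work out under interpretation (2), which matches the Wikipedia formulation; the 50/50 priors come from the scenario itself and the last line is plain Bayes' rule (this worked example is not from the original post):

        # interpretation (2): P(W|S) = occurrences of W in spam / total words in spam
        spam_msgs, ham_msgs = 50, 50
        words_per_msg = 100

        w_in_spam = 5 * spam_msgs        # 250 occurrences of W in spam
        w_in_ham = 1 * ham_msgs          # 50 occurrences of W in ham

        p_w_given_s = w_in_spam / float(spam_msgs * words_per_msg)   # 250/5000 = 0.05
        p_w_given_h = w_in_ham / float(ham_msgs * words_per_msg)     # 50/5000  = 0.01

        p_s = p_h = 0.5                  # equal priors: 50 spam, 50 ham

        # Bayes' rule for P(S|W)
        p_s_given_w = (p_w_given_s * p_s) / (p_w_given_s * p_s + p_w_given_h * p_h)
        print(p_s_given_w)               # ~0.833, and P(W|S) > P(W|H) as expected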

    Read the article

  • How to achieve interaction between a GUI class and a logic class

    - by volting
    I'm new to GUI programming and haven't done much OOP. I'm working on a basic calculator app to help me learn GUI design and to brush up on OOP. I understand that anything GUI related should be kept separate from the logic, but I'm unsure how to implement interaction between the logic and GUI classes when needed, i.e. basically passing variables back and forth... I'm using Tkinter, and when I pass a Tkinter variable to my logic it only seems to hold the string PY_VAR0. def on_equal_btn_click(self): self.entryVariable.set(self.entryVariable.get() + "=") calculator = Calc(self.entryVariable) self.entryVariable.set(calculator.calculate()) I'm sure that I'm probably doing something fundamentally wrong and probably really stupid; I spent a considerable amount of time experimenting (and searching for answers online) but I'm getting nowhere. Any help would be appreciated. Thanks, V The Full Program (well, just enough to show the structure..) import Tkinter class Gui(Tkinter.Tk): def __init__(self,parent): Tkinter.Tk.__init__(self,parent) self.parent = parent self.initialize() def initialize(self): self.grid() self.create_widgets() """ grid config """ #self.grid_columnconfigure(0,weight=1,pad=0) self.resizable(False, False) def create_widgets(self): """row 0 of grid""" """Create Text Entry Box""" self.entryVariable = Tkinter.StringVar() self.entry = Tkinter.Entry(self,width=30,textvariable=self.entryVariable) self.entry.grid(column=0,row=0, columnspan = 3 ) self.entry.bind("<Return>", self.on_press_enter) """create equal button""" equal_btn = Tkinter.Button(self,text="=",width=4,command=self.on_equal_btn_click) equal_btn.grid(column=3, row=0) """row 1 of grid""" """create number 1 button""" number1_btn = Tkinter.Button(self,text="1",width=8,command=self.on_number1_btn_click) number1_btn.grid(column=0, row=1) . . . def on_equal_btn_click(self): self.entryVariable.set(self.entryVariable.get() + "=") calculator = Calc(self.entryVariable) self.entryVariable.set(calculator.calculate()) class Calc(): def __init__(self, equation): self.equation = equation def calculate(self): #TODO: parse string and calculate... return self.equation if __name__ == "__main__": app = Gui(None) app.title('Calculator') app.mainloop()
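    The PY_VAR0 string is the Tcl name of the StringVar object itself, not its contents, so one common fix is to hand the plain string from entryVariable.get() to the logic class and push the result back with set(), keeping Tkinter types out of Calc entirely. A rough sketch along those lines, reusing the class and method names from the post (the actual expression parsing is still left as a TODO):

        def on_equal_btn_click(self):
            expression = self.entryVariable.get()      # plain str, not the StringVar object
            calculator = Calc(expression)              # the logic class never sees Tkinter types
            self.entryVariable.set(expression + "=" + str(calculator.calculate()))

        class Calc(object):
            def __init__(self, equation):
                self.equation = equation               # e.g. "1+2"

            def calculate(self):
                # TODO: parse self.equation and return the numeric result;
                # echoed back unchanged here only to keep the sketch runnable
                return self.equation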

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX Workshop I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which could be frequently used checking a slicer, like in the Parameter Table pattern). Let’s start with the problem. What happen when you have such a statement? Sales :=     SWITCH (         VALUES ( Period[Period] ),         "Current", [Internet Total Sales],         "MTD", [MTD Sales],         "QTD", [QTD Sales],         "YTD", [YTD Sales],          BLANK ()     ) The SWITCH statement is in reality just syntax sugar for a nested IF statement. When you place such a measure in a pivot table, for every cell of the pivot table the IF options are evaluated. In order to optimize performance, the DAX engine usually does not compute cell-by-cell, but tries to compute the values in bulk-mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk-mode, so that for every cell the result from one of the branches will be already available in a pre-calculated dataset. The price for that could be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it’s not required, and also when YTD is not selected at all in a Pivot Table. The actual optimization made by the DAX engine could be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don’t live in an ideal world, so it could be better trying to help the engine finding a better execution plan. One student (Niek de Wit) proposed this approach: Selection := IF (     HASONEVALUE ( Period[Period] ),     VALUES ( Period[Period] ) ) Sales := CALCULATE (     [Internet Total Sales],     FILTER (         VALUES ( 'Internet Sales'[Order Quantity] ),         'Internet Sales'[Order Quantity]             = IF (                 [Selection] = "Current",                 'Internet Sales'[Order Quantity],                 -1             )     ) )     + CALCULATE (         [MTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "MTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     )     + CALCULATE (         [QTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "QTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     )     + CALCULATE (         [YTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "YTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     ) At first sight, you might think it’s impossible that this approach could be faster. However, if you examine with the profiler what happens, there is a different story. Every original IF’s execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty. 
    I used the ‘Internet Sales’[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or use a column that has always the same value for every row (so it will be compressed very well!). Because the value –1 is never used in this column, the IF comparison in the filter discards all the values iterated in the filter if the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I've seen this optimization be useful in many other implementations. Please write your feedback if you find scenarios (in both Power Pivot and Tabular) where you obtain performance improvements using this technique!

    Read the article

  • Ninety-Fifth Percentile Calculation for Bandwidth

    - by Kyle Brandt
    I am trying to calculate the bandwidth of my current internet connection. I am pulling the current input and output transfer rate via SNMP. If the argument to the following function is a sorted ascending list of the sum of each input and output sample, is this the right way to calculate the 95th percentile? sub ninetyFifth { #Expects Sorted Data my $ninetyFifthLine = (@_ * .95) - 1; return $_[$ninetyFifthLine]; }
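    The usual burstable-billing convention is the nearest-rank definition: sort the samples ascending, discard the top 5%, and report the largest remaining value. The Perl above relies on the fractional index being truncated, which can land one element away from that definition for some sample counts. A small Python sketch of the nearest-rank version (not from the original post):

        import math

        def ninety_fifth(sorted_samples):
            """Nearest-rank 95th percentile of an ascending-sorted list."""
            n = len(sorted_samples)
            rank = int(math.ceil(0.95 * n))     # e.g. rank 285 for 300 samples
            return sorted_samples[rank - 1]     # 1-based rank -> 0-based index

    With five-minute samples over a 30-day month (8640 of them) this discards the top 432 values before reading off the billing rate.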

    Read the article

  • Monitoring folder diffs across servers with zabbix

    - by Marcus
    Problem: I want to make sure that a certain folder has identical contents across my servers. I do not want an automatic file sync to keep them equal; changes are made manually. My initial thought was to calculate some CRC/hash of the folder once a day, send it to Zabbix, and trigger when the values differ. Are there any good tools out there that can calculate a CRC or similar for a folder? Does anyone know of another solution that solves my problem?
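    One low-tech option is to compute a single digest over the folder's relative paths and file contents on each server, ship that value to Zabbix (for example from a daily cron via zabbix_sender, or as a UserParameter), and trigger when the values differ between hosts. A minimal Python sketch of the digest part; /etc/myapp is only a placeholder path, and permissions/ownership are deliberately not included in the hash:

        import hashlib
        import os

        def folder_digest(root):
            """One SHA-256 hex digest covering relative file names and contents."""
            digest = hashlib.sha256()
            for dirpath, dirnames, filenames in os.walk(root):
                dirnames.sort()                          # deterministic walk order
                for name in sorted(filenames):
                    path = os.path.join(dirpath, name)
                    digest.update(os.path.relpath(path, root).encode("utf-8"))
                    with open(path, "rb") as fh:
                        for chunk in iter(lambda: fh.read(65536), b""):
                            digest.update(chunk)
            return digest.hexdigest()

        print(folder_digest("/etc/myapp"))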

    Read the article

  • Amortization Schedule in Excel - Know how much interest will be saved by large payment

    - by hubbas
    I have a really nice Amortization Schedule built in Excel using the steps from this page: http://www.wikihow.com/Prepare-Amortization-Schedule-in-Excel It works really nicely, but I am planning to make some large payments and I would love to calculate how much interest I will save, over the life of the loan, by making these larger payments. E.g., if I pay $10k in one payment, I will save $4000 in interest over the life of the loan, etc. Is there a way to calculate this?
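    Outside the spreadsheet, the same answer can be had by running the schedule twice, once with and once without the lump sum, and differencing the total interest. A rough Python sketch; the $200,000 / 5% / 30-year figures below are placeholders only, and final-payment rounding is ignored:

        def total_interest(principal, annual_rate, monthly_payment, extra=None):
            """Total interest paid until the balance reaches zero.

            extra: optional (month_number, amount) lump sum applied in that month.
            """
            rate = annual_rate / 12.0
            assert monthly_payment > principal * rate, "payment must cover interest"
            balance, paid, month = principal, 0.0, 0
            while balance > 0:
                month += 1
                interest = balance * rate
                paid += interest
                balance += interest - monthly_payment
                if extra and month == extra[0]:
                    balance -= extra[1]
            return paid

        base = total_interest(200000, 0.05, 1073.64)
        saved = base - total_interest(200000, 0.05, 1073.64, extra=(12, 10000))
        print(round(saved, 2))   # interest saved by a $10,000 extra payment in month 12

    In the Excel schedule itself, the equivalent trick is to copy the sheet, add the extra amount to the payment in the chosen row, and compare the SUM of the interest column on the two sheets.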

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >