Search Results

Search found 3784 results on 152 pages for 'push'.


  • Is my implementation of A* wrong?

    - by Bloodyaugust
    I've implemented the A* algorithm in my program. However, it would seem to be functioning incorrectly at times. Below is a screenshot of one such time. The obviously shorter line is to go immediately right at the second to last row. Instead, they move down, around the tower, and continue to their destination (bottom right from top left). Below is my actual code implementation: nodeMap.prototype.findPath = function(p1, p2) { var openList = []; var closedList = []; var nodes = this.nodes; for (var i = 0; i < nodes.length; i++) { //reset heuristics and parents for nodes var curNode = nodes[i]; curNode.f = 0; curNode.g = 0; curNode.h = 0; curNode.parent = null; if (curNode.pathable === false) { closedList.push(curNode); } } openList.push(this.getNode(p1)); while(openList.length > 0) { // Grab the lowest f(x) to process next var lowInd = 0; for(i=0; i<openList.length; i++) { if(openList[i].f < openList[lowInd].f) { lowInd = i; } } var currentNode = openList[lowInd]; if (currentNode === this.getNode(p2)) { var curr = currentNode; var ret = []; while(curr.parent) { ret.push(curr); curr = curr.parent; } return ret.reverse(); } closedList.push(currentNode); for (i = 0; i < openList.length; i++) { //remove currentNode from openList if (openList[i] === currentNode) { openList.splice(i, 1); break; } } for (i = 0; i < currentNode.neighbors.length; i++) { if(closedList.indexOf(currentNode.neighbors[i]) !== -1 ) { continue; } if (currentNode.neighbors[i].isPathable === false) { closedList.push(currentNode.neighbors[i]); continue; } var gScore = currentNode.g + 1; // 1 is the distance from a node to it's neighbor var gScoreIsBest = false; if (openList.indexOf(currentNode.neighbors[i]) === -1) { //save g, h, and f then save the current parent gScoreIsBest = true; currentNode.neighbors[i].h = currentNode.neighbors[i].heuristic(this.getNode(p2)); openList.push(currentNode.neighbors[i]); } else if (gScore < currentNode.neighbors[i].g) { //current g better than previous g gScoreIsBest = true; } if (gScoreIsBest) { currentNode.neighbors[i].parent = currentNode; currentNode.neighbors[i].g = gScore; currentNode.neighbors[i].f = currentNode.neighbors[i].g + currentNode.neighbors[i].h; } } } return false; } Towers block pathability. Is there perhaps something I am missing here, or does A* not always find the shortest path in a situation such as this? Thanks in advance for any help.
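    A* is only guaranteed to return a shortest path when the heuristic never overestimates the true remaining cost and the per-step costs agree with it. The heuristic() implementation isn't shown in the question, so the following is a hedged sketch rather than a diagnosis: a Manhattan estimate is admissible for 4-directional movement at cost 1, while an octile estimate is the admissible choice if diagonal neighbours are allowed (the node.x/node.y fields and the goal argument are assumptions about the node shape).

    // Hedged sketch, not the question's code: admissible distance estimates
    // for a uniform-cost grid. Pairing a uniform step cost of 1 with diagonal
    // neighbours makes many routes tie in length, which can produce
    // visually surprising (but equally long) paths.
    function manhattan(node, goal) {          // 4-directional movement, cost 1
      return Math.abs(node.x - goal.x) + Math.abs(node.y - goal.y);
    }

    function octile(node, goal) {             // 8-directional movement
      var dx = Math.abs(node.x - goal.x);
      var dy = Math.abs(node.y - goal.y);
      // straight steps cost 1, diagonal steps cost Math.SQRT2
      return (dx + dy) + (Math.SQRT2 - 2) * Math.min(dx, dy);
    }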

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blobs and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blobs and block blobs. A page blob is optimized for random reads and writes, which is very useful when you need to store VHD files. A block blob is optimized for sequential/chunked reads and writes, which is the more common case. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, this is a very powerful way to upload large files: we can upload the blocks in parallel and provide a pause/resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel, and commit. This code has been well covered in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • how do I add a texture to a triangle in three.js?

    - by Kae Verens
    I've created a scene which has many cubes, spheres, etc, in it, and have been able to apply textures to those primitives, but when I want to add a texture to a simple triangle, I'm totally lost. Here's the closest I've come to getting it right: var texture=THREE.ImageUtils.loadTexture('/f/3d-images/texture-steel.jpg'); var materialDecalRoof=new THREE.MeshBasicMaterial( { 'map': texture, 'wireframe': false, 'overdraw': true } ); var geometry = new THREE.Geometry(); geometry.vertices.push(new THREE.Vector3(tentX/2+1, tentY, tentZ/2+1)); geometry.vertices.push(new THREE.Vector3(0, tentT, 0)); geometry.vertices.push(new THREE.Vector3(0, tentT, 0)); geometry.vertices.push(new THREE.Vector3(-tentX/2-1, tentY, tentZ/2+1)); geometry.faces.push(new THREE.Face4(0, 1, 2, 3)); geometry.faceVertexUvs[0].push([ new THREE.UV(0, 0), new THREE.UV(0, 0), new THREE.UV(0, 0), new THREE.UV(0, 0) ]); geometry.computeFaceNormals(); geometry.computeCentroids(); geometry.computeVertexNormals(); var mesh= new THREE.Mesh( geometry, materialDecalRoof); scene.add(mesh); This does not work. All i get is a flickering triangle (when the scene is moved) where an image should be. To test this, go to https://www.poptents.eu/3dFrame.php, upload an image in the Roof section, and drag the view to see the roof flicker. Does anyone know what I'm doing wrong here?
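    One thing that stands out in the snippet above is that all four UV coordinates are (0, 0), which maps the entire face to a single texel of the texture, so no image detail can appear. The sketch below is a hedged suggestion that assumes the same (old) three.js revision the question already uses, where THREE.UV and THREE.Face4 exist; the corner ordering is an assumption and may need to match the vertex order. The two identical vertices at (0, tentT, 0) also collapse the quad into a degenerate face, which may contribute to the flickering.

    // Hedged sketch: give each corner of the face its own texture coordinate
    // instead of (0, 0) four times. The (u, v) values below assume the
    // vertices are pushed in the order vertex 0..3 shown in the question.
    geometry.faceVertexUvs[0].push([
      new THREE.UV(0, 0),   // vertex 0
      new THREE.UV(1, 0),   // vertex 1
      new THREE.UV(1, 1),   // vertex 2
      new THREE.UV(0, 1)    // vertex 3
    ]);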

    Read the article

  • Issue with getting 2 chars from string using indexer

    - by Learner
    I am facing an issue in reading char values. See my program below. I want to evaluate an infix expression. As you can see I want to read '10' , '*', '20' and then use them...but if I use string indexer s[0] will be '1' and not '10' and hence I am not able to get the expected result. Can you guys suggest me something? Code is in c# class Program { static void Main(string[] args) { string infix = "10*2+20-20+3"; float result = EvaluateInfix(infix); Console.WriteLine(result); Console.ReadKey(); } public static float EvaluateInfix(string s) { Stack<float> operand = new Stack<float>(); Stack<char> operator1 = new Stack<char>(); int len = s.Length; for (int i = 0; i < len; i++) { if (isOperator(s[i])) // I am having an issue here as s[i] gives each character and I want the number 10 operator1.Push(s[i]); else { operand.Push(s[i]); if (operand.Count == 2) Compute(operand, operator1); } } return operand.Pop(); } public static void Compute(Stack<float> operand, Stack<char> operator1) { float operand1 = operand.Pop(); float operand2 = operand.Pop(); char op = operator1.Pop(); if (op == '+') operand.Push(operand1 + operand2); else if(op=='-') operand.Push(operand1 - operand2); else if(op=='*') operand.Push(operand1 * operand2); else if(op=='/') operand.Push(operand1 / operand2); } public static bool isOperator(char c) { bool result = false; if (c == '+' || c == '-' || c == '*' || c == '/') result = true; return result; } } }
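    Since s[i] is a single char, two things go wrong in the loop above: a multi-digit operand such as 10 is split into '1' and '0', and operand.Push(s[i]) stores the character code (for example '1' becomes 49f) rather than the digit's value. Below is a hedged sketch of one way to fix the tokenising, assuming operands are unsigned integers; the helper name ReadNumber is an illustration, not part of the original code.

    // Hedged sketch: accumulate consecutive digits into one operand value.
    public static float ReadNumber(string s, ref int i)
    {
        float value = 0;
        while (i < s.Length && char.IsDigit(s[i]))
        {
            value = value * 10 + (s[i] - '0');
            i++;
        }
        i--; // step back so the enclosing for-loop's i++ lands on the next token
        return value;
    }

    Inside the loop, the else branch would then call operand.Push(ReadNumber(s, ref i)) instead of operand.Push(s[i]). Note that computing as soon as two operands are on the stack still evaluates strictly left to right and ignores operator precedence, which is a separate concern from the indexing issue.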

    Read the article

  • Why am I getting an error on Heroku that suggests I need to migrate my app to Bamboo?

    - by user242065
    When I type: git push heroku master, this is what happens @68-185-86-134:sample_app git push heroku master Counting objects: 110, done. Delta compression using up to 2 threads. Compressing objects: 100% (94/94), done. Writing objects: 100% (110/110), 87.48 KiB, done. Total 110 (delta 19), reused 0 (delta 0) -----> Heroku receiving push -----> Rails app detected ! This version of Rails is only supported on the Bamboo stack ! Please migrate your app to Bamboo and push again. ! See http://docs.heroku.com/bamboo for more information ! Heroku push rejected, incompatible Rails version error: hooks/pre-receive exited with error code 1 To [email protected]:blazing-frost-89.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to '[email protected]:blazing-frost-89.git' My .gems file: rails --version 2.3.8 My .git/config file: [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true ignorecase = true [remote "origin"] url = [email protected]:csmeder/sample_app.git fetch = +refs/heads/*:refs/remotes/origin/* [remote "heroku"] url = [email protected]:blazing-frost-89.git fetch = +refs/heads/*:refs/remotes/heroku/*

    Read the article

  • Converting "A* Search" code from C++ to Java [on hold]

    - by mr5
    Updated! I get this code from this site It's A* Search Algorithm(finding shortest path with heuristics) I modify most of variable names and some if conditions from the original version to satisfy my syntactic taste. It works in C++ (as I can't see any trouble with it) but fails in Java version. Java Code: String findPath(int startX, int startY, int finishX, int finishY) { @SuppressWarnings("unchecked") LinkedList<Node>[] nodeList = (LinkedList<Node>[]) new LinkedList<?>[2]; nodeList[0] = new LinkedList<Node>(); nodeList[1] = new LinkedList<Node>(); Node n0; Node m0; int nlIndex = 0; // queueList index // reset the node maps for(int y = 0;y < ROW_COUNT; ++y) { for(int x = 0;x < COL_COUNT; ++x) { close_nodes_map[y][x] = 0; open_nodes_map[y][x] = 0; } } // create the start node and push into list of open nodes n0 = new Node( startX, startY, 0, 0 ); n0.updatePriority( finishX, finishY ); nodeList[nlIndex].push( n0 ); open_nodes_map[startY][startX] = n0.getPriority(); // mark it on the open nodes map // A* search while( !nodeList[nlIndex].isEmpty() ) { LinkedList<Node> pq = nodeList[nlIndex]; // get the current node w/ the highest priority // from the list of open nodes n0 = new Node( pq.peek().getX(), pq.peek().getY(), pq.peek().getIterCount(), pq.peek().getPriority()); int x = n0.getX(); int y = n0.getY(); nodeList[nlIndex].pop(); // remove the node from the open list open_nodes_map[y][x] = 0; // mark it on the closed nodes map close_nodes_map[y][x] = 1; // quit searching when the goal state is reached //if((*n0).estimate(finishX, finishY) == 0) if( x == finishX && y == finishY ) { // generate the path from finish to start // by following the directions String path = ""; while( !( x == startX && y == startY) ) { int j = dir_map[y][x]; int c = '0' + ( j + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; path = (char)c + path; x += DIR_X[j]; y += DIR_Y[j]; } return path; } // generate moves (child nodes) in all possible directions for(int i = 0; i < Node.DIRECTION_COUNT; ++i) { int xdx = x + DIR_X[i]; int ydy = y + DIR_Y[i]; // boundary check if (!(xdx >= 0 && xdx < COL_COUNT && ydy >= 0 && ydy < ROW_COUNT)) continue; if ( ( gridMap.getData( ydy, xdx ) == GridMap.WALKABLE || gridMap.getData( ydy, xdx ) == GridMap.FINISH) && close_nodes_map[ydy][xdx] != 1 ) { // generate a child node m0 = new Node( xdx, ydy, n0.getIterCount(), n0.getPriority() ); m0.nextLevel( i ); m0.updatePriority( finishX, finishY ); // if it is not in the open list then add into that if( open_nodes_map[ydy][xdx] == 0 ) { open_nodes_map[ydy][xdx] = m0.getPriority(); nodeList[nlIndex].push( m0 ); // mark its parent node direction dir_map[ydy][xdx] = ( i + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; } else if( open_nodes_map[ydy][xdx] > m0.getPriority() ) { // update the priority info open_nodes_map[ydy][xdx] = m0.getPriority(); // update the parent direction info dir_map[ydy][xdx] = ( i + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; // replace the node // by emptying one queueList to the other one // except the node to be replaced will be ignored // and the new node will be pushed in instead while( !(nodeList[nlIndex].peek().getX() == xdx && nodeList[nlIndex].peek().getY() == ydy ) ) { nodeList[1 - nlIndex].push( nodeList[nlIndex].pop() ); } nodeList[nlIndex].pop(); // remove the wanted node // empty the larger size queueList to the smaller one if( nodeList[nlIndex].size() > nodeList[ 1 - nlIndex ].size() ) nlIndex = 1 - nlIndex; while( !nodeList[nlIndex].isEmpty() ) { nodeList[1 - nlIndex].push( 
nodeList[nlIndex].pop() ); } nlIndex = 1 - nlIndex; nodeList[nlIndex].push( m0 ); // add the better node instead } } } } return ""; // no route found } Output1: Legends . = PATH ? = START X = FINISH 3,2,1 = OBSTACLES (Misleading path) Output2: Changing these lines: n0 = new Node( a, b, c, d ); m0 = new Node( e, f, g, h ); to n0.set( a, b, c, d ); m0.set( e, f, g, h ); I get (I'm really confused) C++ Code: std::string A_Star::findPath(int startX, int startY, int finishX, int finishY) { typedef std::queue<Node> List_Container; List_Container nodeList[2]; // list of open (not-yet-tried) nodes Node n0; Node m0; int pqIndex = 0; // nodeList index // reset the node maps for(int y = 0;y < ROW_COUNT; ++y) { for(int x = 0;x < COL_COUNT; ++x) { close_nodes_map[y][x] = 0; open_nodes_map[y][x] = 0; } } // create the start node and push into list of open nodes n0 = Node( startX, startY, 0, 0 ); n0.updatePriority( finishX, finishY ); nodeList[pqIndex].push( n0 ); open_nodes_map[startY][startX] = n0.getPriority(); // mark it on the open nodes map // A* search while( !nodeList[pqIndex].empty() ) { List_Container &pq = nodeList[pqIndex]; // get the current node w/ the highest priority // from the list of open nodes n0 = Node( pq.front().getX(), pq.front().getY(), pq.front().getIterCount(), pq.front().getPriority()); int x = n0.getX(); int y = n0.getY(); nodeList[pqIndex].pop(); // remove the node from the open list open_nodes_map[y][x] = 0; // mark it on the closed nodes map close_nodes_map[y][x] = 1; // quit searching when the goal state is reached //if((*n0).estimate(finishX, finishY) == 0) if( x == finishX && y == finishY ) { // generate the path from finish to start // by following the directions std::string path = ""; while( !( x == startX && y == startY) ) { int j = dir_map[y][x]; char c = '0' + ( j + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; path = c + path; x += DIR_X[j]; y += DIR_Y[j]; } return path; } // generate moves (child nodes) in all possible directions for(int i = 0; i < DIRECTION_COUNT; ++i) { int xdx = x + DIR_X[i]; int ydy = y + DIR_Y[i]; // boundary check if (!( xdx >= 0 && xdx < COL_COUNT && ydy >= 0 && ydy < ROW_COUNT)) continue; if ( ( pGrid->getData(ydy,xdx) == WALKABLE || pGrid->getData(ydy, xdx) == FINISH) && close_nodes_map[ydy][xdx] != 1 ) { // generate a child node m0 = Node( xdx, ydy, n0.getIterCount(), n0.getPriority() ); m0.nextLevel( i ); m0.updatePriority( finishX, finishY ); // if it is not in the open list then add into that if( open_nodes_map[ydy][xdx] == 0 ) { open_nodes_map[ydy][xdx] = m0.getPriority(); nodeList[pqIndex].push( m0 ); // mark its parent node direction dir_map[ydy][xdx] = ( i + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; } else if( open_nodes_map[ydy][xdx] > m0.getPriority() ) { // update the priority info open_nodes_map[ydy][xdx] = m0.getPriority(); // update the parent direction info dir_map[ydy][xdx] = ( i + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; // replace the node // by emptying one nodeList to the other one // except the node to be replaced will be ignored // and the new node will be pushed in instead while ( !( nodeList[pqIndex].front().getX() == xdx && nodeList[pqIndex].front().getY() == ydy ) ) { nodeList[1 - pqIndex].push( nodeList[pqIndex].front() ); nodeList[pqIndex].pop(); } nodeList[pqIndex].pop(); // remove the wanted node // empty the larger size nodeList to the smaller one if( nodeList[pqIndex].size() > nodeList[ 1 - pqIndex ].size() ) pqIndex = 1 - pqIndex; while( !nodeList[pqIndex].empty() ) { 
nodeList[1-pqIndex].push(nodeList[pqIndex].front()); nodeList[pqIndex].pop(); } pqIndex = 1 - pqIndex; nodeList[pqIndex].push( m0 ); // add the better node instead } } } } return ""; // no route found } Output: Legends . = PATH ? = START X = FINISH 3,2,1 = OBSTACLES (Just right) From what I read in Java's documentation, I came to the conclusion: C++'s std::queue<T>::front() == Java's LinkedList<T>.peek() Java's LinkedList<T>.pop() == C++'s std::queue<T>::front() + std::queue<T>::pop() What might I be missing in my Java version? In what way does it become algorithmically different from the C++ version?
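    One concrete behavioural difference between the two versions, offered as a strong suspicion rather than a verified diagnosis: C++'s std::queue is FIFO (push appends at the back, front/pop work at the front), whereas Java's LinkedList.push and pop operate on the head of the list (they are addFirst and removeFirst), so the ported open list is processed in LIFO order and nodes are expanded and replaced in a different sequence. A hedged sketch of a FIFO mapping using Deque follows; the Node class here is a stand-in, not the question's Node.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hedged sketch: Deque operations that preserve std::queue's FIFO order.
    // LinkedList.push()/pop() act on the head (a stack), which silently
    // reverses the processing order when used to port std::queue code.
    class QueuePortSketch {
        static class Node { int x, y; Node(int x, int y) { this.x = x; this.y = y; } }

        public static void main(String[] args) {
            Deque<Node> open = new ArrayDeque<Node>();
            open.addLast(new Node(0, 0));     // equivalent of std::queue::push
            Node current = open.peekFirst();  // equivalent of std::queue::front
            open.pollFirst();                 // equivalent of std::queue::pop
            System.out.println(current.x + "," + current.y);
        }
    }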

    Read the article

  • Sort string array by analysing date details in those strings

    - by Jason Evans
    I have a requirement for the project I'm working on which is proving a bit tricky for me. Basically I have to sort an array of items based on the Text property of those items. Here are my items: var answers = [], answer1 = { Id: 1, Text: '3-4 weeks ago' }, answer2 = { Id: 2, Text: '1-2 weeks ago' }, answer3 = { Id: 3, Text: '7-8 weeks ago' }, answer4 = { Id: 4, Text: '5-6 weeks ago' }, answer5 = { Id: 5, Text: '1-2 days ago' }, answer6 = { Id: 6, Text: 'More than 1 month ago' }; answers.push(answer1); answers.push(answer2); answers.push(answer3); answers.push(answer4); answers.push(answer5); answers.push(answer6); I need to analyse the Text property of each item so that, after the sorting, the array looks like this: answers[0] = { Id: 6, Text: 'More than 1 month ago' } answers[1] = { Id: 3, Text: '7-8 weeks ago' } answers[2] = { Id: 4, Text: '5-6 weeks ago' } answers[3] = { Id: 1, Text: '3-4 weeks ago' } answers[4] = { Id: 2, Text: '1-2 weeks ago' } answers[5] = { Id: 5, Text: '1-2 days ago' } The logic is that the further away the date, the higher its priority, so it should appear first in the array. So "1-2 days" is less of a priority than "7-8 weeks". I need to extract the number values, and then the units (e.g. days, weeks), and somehow sort the array based on those details. Quite honestly I'm finding it very difficult to come up with a solution, and I'd appreciate any help.
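    One possible approach, offered as a hedged sketch rather than a definitive answer: convert each Text value into an approximate age in days and sort descending so the oldest entries come first. The unit table, the use of the upper bound of each range, and the special-casing of 'More than 1 month ago' are all assumptions about the requirement.

    // Hedged sketch: parse "N-M <unit>s ago" into an approximate age in days.
    function ageInDays(text) {
      if (/^More than/i.test(text)) {
        return 9999; // treat open-ended ranges as the oldest
      }
      var match = text.match(/(\d+)-(\d+)\s+(day|week|month)s?/i);
      if (!match) {
        return -1; // unrecognised text sorts last
      }
      var upper = parseInt(match[2], 10); // use the upper bound of the range
      var unitDays = { day: 1, week: 7, month: 30 }[match[3].toLowerCase()];
      return upper * unitDays;
    }

    // Sort descending by age, i.e. furthest-away dates first.
    answers.sort(function (a, b) {
      return ageInDays(b.Text) - ageInDays(a.Text);
    });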

    Read the article

  • Bad idea to force creation of Mercurial remote heads (ie. branches)?

    - by Chad Johnson
    I am developing a centralized web application, and I have a centralized Mercurial repository. Locally I created a branch in my repository: hg branch my_branch. I then made some changes and committed. Then when I try to push, I get: abort: push creates new remote branch 'my_branch'! (did you forget to merge? use push -f to force) I've just been using push -f. Is this bad? I WANT multiple branches in my central, remote repository, as I want to 1) back up my work and 2) allow other developers to develop with me on that branch. Is it bad to have branches in my remote repository? Should I not be doing push -f (and if not, what should I do)? Why does Joel say this in his tutorial: Occasionally I've made a change in a branch, pushed, switched to another branch, and changes I had made in the branch I switched to were mysteriously reverted to a previous version from several commits ago. Maybe this is a symptom of forcing a push?

    Read the article

  • OpenVPN not connecting

    - by LandArch
    There have been a number of post similar to this, but none seem to satisfy my need. Plus I am a Ubuntu newbie. I followed this tutorial to completely set up OpenVPN on Ubuntu 12.04 server. Here is my server.conf file ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) local 192.168.13.8 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. port 1194 # TCP or UDP server? proto tcp ;proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "/etc/openvpn/ca.crt" cert "/etc/openvpn/server.crt" key "/etc/openvpn/server.key" # This file should be kept secret # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh dh1024.pem # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. ;server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. 
If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. server-bridge 192.168.13.101 255.255.255.0 192.168.13.105 192.168.13.200 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. push "route 192.168.13.1 255.255.255.0" push "dhcp-option DNS 192.168.13.201" push "dhcp-option DOMAIN blahblah.dyndns-wiki.com" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). # EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). 
;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. # If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. user nobody group nogroup # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I am using Windows 7 as the Client and set that up accordingly using the OpenVPN GUI. 
That conf file is as follows: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. blahblah.dyndns-wiki.com 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) user nobody group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\OpenVPN\config\\ca.crt" cert "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.crt" key "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Not sure whats left to do.

    Read the article

  • Catching an exception class within a template

    - by Todd Bauer
    I'm having a problem using the exception class Overflow() for a Stack template I'm creating. If I define the class regularly there is no problem. If I define the class as a template, I cannot make my call to catch() work properly. I have a feeling it's simply syntax, but I can't figure it out for the life of me. #include<iostream> #include<exception> using namespace std; template <class T> class Stack { private: T *stackArray; int size; int top; public: Stack(int size) { this->size = size; stackArray = new T[size]; top = 0; } ~Stack() { delete[] stackArray; } void push(T value) { if (isFull()) throw Overflow(); stackArray[top] = value; top++; } bool isFull() { if (top == size) return true; else return false; } class Overflow {}; }; int main() { try { Stack<double> Stack(5); Stack.push( 5.0); Stack.push(10.1); Stack.push(15.2); Stack.push(20.3); Stack.push(25.4); Stack.push(30.5); } catch (Stack::Overflow) { cout << "ERROR! The stack is full.\n"; } return 0; } The problem is in the catch (Stack::Overflow) statement. As I said, if the class is not a template, this works just fine. However, once I define it as a template, this ceases to work. I've tried all sorts of syntaxes, but I always get one of two sets of error messages from the compiler. If I use catch(Stack::Overflow): ch18pr01.cpp(89) : error C2955: 'Stack' : use of class template requires template argument list ch18pr01.cpp(13) : see declaration of 'Stack' ch18pr01.cpp(89) : error C2955: 'Stack' : use of class template requires template argument list ch18pr01.cpp(13) : see declaration of 'Stack' ch18pr01.cpp(89) : error C2316: 'Stack::Overflow' : cannot be caught as the destructor and/or copy constructor are inaccessible EDIT: I meant If I use catch(Stack<double>::Overflow) or any variety thereof: ch18pr01.cpp(89) : error C2061: syntax error : identifier 'Stack' ch18pr01.cpp(89) : error C2310: catch handlers must specify one type ch18pr01.cpp(95) : error C2317: 'try' block starting on line '75' has no catch handlers I simply can not figure this out. Does anyone have any idea?
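
    A minimal sketch of a version that compiles is below. Two details differ from the code in the question, and both are guesses at the culprit rather than the only possible fix: the local variable is no longer named Stack (an object that shares the template's name shadows it inside the try block), and the handler names the instantiated type Stack&lt;double&gt;::Overflow and catches it by const reference, which also sidesteps the copy-constructor accessibility complaint.

        #include <iostream>

        template <class T>
        class Stack {
            T*  stackArray;
            int size;
            int top;
        public:
            class Overflow {};                          // nested exception type
            explicit Stack(int n) : stackArray(new T[n]), size(n), top(0) {}
            ~Stack() { delete[] stackArray; }
            void push(const T& value) {
                if (top == size) throw Overflow();      // thrown as Stack<T>::Overflow
                stackArray[top++] = value;
            }
        };

        int main() {
            try {
                Stack<double> s(1);                     // renamed from "Stack"
                s.push(5.0);
                s.push(10.1);                           // overflows
            } catch (const Stack<double>::Overflow&) {  // template arguments are required here
                std::cout << "ERROR! The stack is full.\n";
            }
            return 0;
        }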

    Read the article

  • Issue with class template partial specialization

    - by DeadMG
    I've been trying to implement a function that needs partial template specializations and fallen back to the static struct technique, and I'm having a number of problems. template<typename T> struct PushImpl<const T&> { typedef T* result_type; typedef const T& argument_type; template<int StackSize> static result_type Push(IStack<StackSize>* sptr, argument_type ref) { // Code if the template is T& } }; template<typename T> struct PushImpl<const T*> { typedef T* result_type; typedef const T* argument_type; template<int StackSize> static result_type Push(IStack<StackSize>* sptr, argument_type ptr) { return PushImpl<const T&>::Push(sptr, *ptr); } }; template<typename T> struct PushImpl { typedef T* result_type; typedef const T& argument_type; template<int StackSize> static result_type Push(IStack<StackSize>* sptr, argument_type ref) { // Code if the template is neither T* nor T& } }; template<typename T> typename PushImpl<T>::result_type Push(typename PushImpl<T>::argument_type ref) { return PushImpl<T>::Push(this, ref); } First: The struct is nested inside another class (the one that offers Push as a member func), but it can't access the template parameter (StackSize), even though my other nested classes all could. I've worked around it, but it would be cleaner if they could just access StackSize like a normal class. Second: The compiler complains that it doesn't use or can't deduce T. Really? Thirdly: The compiler complains that it can't specialize a template in the current scope (class scope). I can't see what the problem is. Have I accidentally invoked some bad syntax?
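
    A hedged sketch of the usual cure for the second and third complaints is below: the primary template has to appear before its partial specializations, and the partial specializations have to live at namespace scope rather than inside the enclosing class (a nested member class template cannot be partially specialized in class scope). The "cannot deduce T" message is expected with the Push wrapper shown above, because T only appears in the non-deduced context PushImpl&lt;T&gt;::argument_type, so callers must spell the type out. IStack below is a hypothetical stand-in for the real stack class.

        #include <iostream>

        template <int StackSize>
        struct IStack {};                      // hypothetical stand-in

        // 1) primary template first
        template <typename T>
        struct PushImpl {
            typedef const T& argument_type;
            template <int StackSize>
            static T* Push(IStack<StackSize>*, argument_type) {
                std::cout << "general case\n";
                return 0;
            }
        };

        // 2) partial specializations afterwards, at namespace scope
        template <typename T>
        struct PushImpl<const T*> {
            typedef const T* argument_type;
            template <int StackSize>
            static T* Push(IStack<StackSize>* sptr, argument_type ptr) {
                return PushImpl<T>::Push(sptr, *ptr);   // delegate to the general case
            }
        };

        int main() {
            IStack<16> stack;
            double d = 1.0;
            const double* p = &d;
            // 3) T sits in a non-deduced context, so it is named explicitly
            PushImpl<const double*>::Push(&stack, p);
            return 0;
        }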

    Read the article

  • BES - BlackBerry exception to policy Disable Third Party Software Downloads

    - by ICTdesk.net
    Hi everybody, Running BES 5.0, with a Salesforce Mobile application push. Works fine. We don't want the users to download and install other applications, so we disable this with a BES policy (disable Third Party Software Downloads). But when we disable this, the application push does not work anymore. Does anybody know of/have experience with making an exception for certain applications, or an exception for applications that are provided by application push from the BES? Thank you, kindest regards, Marcel

    Read the article

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (ie a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to my remote instance, nor adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now I when I try to connect with the Github server from my remote instance, I get the following: Cannot clone: [lucas@ecoinstance]~/node$ git clone [email protected]:lucasExample/test.git test Cloning into 'test'... Permission denied (publickey). fatal: The remote end hung up unexpectedly Cannot push: [lucas@ecoinstance]~/node/nodetest1$ git status # On branch master # Your branch is ahead of 'origin/master' by 1 commit. # nothing to commit (working directory clean) [lucas@ecoinstance]~/node/nodetest1$ git push Permission denied (publickey). fatal: The remote end hung up unexpectedly Additional info: [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l Could not open a connection to your authentication agent. [lucas@ecoinstance]~/.ssh$ ls authorized_keys known_hosts As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I re-installed my local OS. I can still clone, push, and pull on my local machine, it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept where I can authenticate from a remote instance via my local machine, so any reference are appreciated as well.
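
    What is described here (the remote instance authenticating to GitHub with the key that lives on the local machine) is what SSH agent forwarding provides, and a reinstalled OS would have lost both the key and the forwarding setup. A sketch of re-enabling it, assuming OpenSSH on both ends and a public key registered with GitHub; the host alias and addresses are placeholders:

        # on the local machine: load the key into an agent and forward it
        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa
        ssh -A lucas@ecoinstance            # -A forwards the agent for this session

        # or make it permanent in ~/.ssh/config on the local machine:
        #   Host ecoinstance
        #       HostName <instance address>
        #       User lucas
        #       ForwardAgent yes

        # on the remote instance, the forwarded key should now show up:
        ssh-add -l
        ssh -T git@github.com               # should greet you instead of "Permission denied"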

    Read the article

  • Trouble with site-to-site OpenVPN & pfSense not passing traffic

    - by JohnCC
    I'm trying to get an OpenVPN tunnel going on pfSense 1.2.3-RELEASE running on embedded routers. I have a local LAN 10.34.43.0/254. The remote LAN is 10.200.1.0/24. The local pfSense is configured as the client, and the remote is configured as the server. My OpenVPN tunnel is using the IP range 10.99.89.0/24 internally. There are also some additional LANs on the remote side routed through the tunnel, but the issue is not with those since my connectivity fails before that point in the chain. The tunnel comes up fine and the logs look healthy. What I find is this:- I can ping and telnet to the remote LAN and the additional remote LANs from the local pfSense box's shell. I cannot ping or telnet to any remote LANs from the local network. I cannot ping or telnet to the local network from the remote LAN or the remote pfSense box's shell. If I tcpdump the tun interfaces on both sides and ping from the local LAN, I see the packets hit the tunnel locally, but they do not appear on the remote side (nor do they appear on the remote LAN interface if I tcpdump that). If I tcpdump the tun interfaces on both sides and ping from the local pfSense shell, I see the packets hit the tunnel locally, and exit the remote side. I can also tcpdump the remote LAN interface and see them pass there too. If I tcpdump the tun interfaces on both sides and ping from the remote pfSense shell, I see the packets hit the remote tun but they do not emerge from the local one. Here is the config file the remote side is using:- #user nobody #group nobody daemon keepalive 10 60 ping-timer-rem persist-tun persist-key dev tun proto udp cipher BF-CBC up /etc/rc.filter_configure down /etc/rc.filter_configure server 10.99.89.0 255.255.255.0 client-config-dir /var/etc/openvpn_csc push "route 10.200.1.0 255.255.255.0" lport <port> route 10.34.43.0 255.255.255.0 ca /var/etc/openvpn_server0.ca cert /var/etc/openvpn_server0.cert key /var/etc/openvpn_server0.key dh /var/etc/openvpn_server0.dh comp-lzo push "route 205.217.5.128 255.255.255.224" push "route 205.217.5.64 255.255.255.224" push "route 165.193.147.128 255.255.255.224" push "route 165.193.147.32 255.255.255.240" push "route 192.168.1.16 255.255.255.240" push "route 192.168.2.16 255.255.255.240" Here is the local config:- writepid /var/run/openvpn_client0.pid #user nobody #group nobody daemon keepalive 10 60 ping-timer-rem persist-tun persist-key dev tun proto udp cipher BF-CBC up /etc/rc.filter_configure down /etc/rc.filter_configure remote <host> <port> client lport 1194 ifconfig 10.99.89.2 10.99.89.1 ca /var/etc/openvpn_client0.ca cert /var/etc/openvpn_client0.cert key /var/etc/openvpn_client0.key comp-lzo You can see the relevant parts of the routing tables extracted from pfSense here http://pastie.org/5365800 The local firewall permits all ICMP from the LAN, and my PC is allowed everything to anywhere. The remote firewall treats its LAN as trusted and permits all traffic on that interface. Can anyone suggest why this is not working, and what I could try next?
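
    One common cause of exactly this pattern (the tunnel endpoints can talk, but traffic sourced from the 10.34.43.0/24 LAN never comes out on the server side) is a missing iroute. The route directive in the server config only adds a kernel route; in server mode OpenVPN also does its own internal routing and will drop packets whose source subnet has not been tied to a client with iroute, typically logging a "bad source address from client" message. A sketch of the client-config-dir entry that would fix that, assuming it is not already there; the file name must match the client certificate's common name:

        # /var/etc/openvpn_csc/<client-common-name>
        iroute 10.34.43.0 255.255.255.0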

    Read the article

  • Replace DNS on Openvpn client without redirect-gateway

    - by Gabor Vincze
    I am trying to push DNS to the client from the OpenVPN server with the config directive: push "dhcp-option DNS 192.168.x.x" It works well, but what I really need is that during the VPN connection the clients do not use their primary resolvers; they should use only the DNS server provided by the VPN. It can be done with push redirect-gateway, but I do not want to tunnel all connections from the client through the VPN, only specific networks. Is it possible to do this somehow? Linux clients are OK with a script; on Windows I am not sure.
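
    A sketch of a server-side approach, assuming a reasonably recent OpenVPN build on the Windows clients: push only the routes that should use the tunnel together with the DNS option, and add block-outside-dns (available from OpenVPN 2.3.9 on, Windows only) so the Windows resolver cannot leak queries to the local DNS servers. The network below is a placeholder:

        push "route 192.168.x.0 255.255.255.0"
        push "dhcp-option DNS 192.168.x.x"
        push "block-outside-dns"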

    Read the article

  • Tracking a subdomain separately within the main domain account [closed]

    - by Vinay
    I have a website, for example: xyz.com and info.xyz.com. I created a profile for xyz and tracking is good. I added a new profile for info.xyz.com in xyz.com. Analytics tracking for info.xyz.com is showing traffic from both xyz.com and info.xyz.com. How do I change to show only info.xyz traffic in the info.xyz.com profile. I used the following code: Analytics code for xyz.com domain: <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-xxxxxx-x']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> Analytics code for info.xyz.com <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-xxxxxx-x']); _gaq.push(['_setDomainName', 'xyz.com']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script>

    Read the article

  • How can I modify my Shunting-Yard Algorithm so it accepts unary operators?

    - by KingNestor
    I've been working on implementing the Shunting-Yard Algorithm in JavaScript for class. Here is my work so far: var userInput = prompt("Enter in a mathematical expression:"); var postFix = InfixToPostfix(userInput); var result = EvaluateExpression(postFix); document.write("Infix: " + userInput + "<br/>"); document.write("Postfix (RPN): " + postFix + "<br/>"); document.write("Result: " + result + "<br/>"); function EvaluateExpression(expression) { var tokens = expression.split(/([0-9]+|[*+-\/()])/); var evalStack = []; while (tokens.length != 0) { var currentToken = tokens.shift(); if (isNumber(currentToken)) { evalStack.push(currentToken); } else if (isOperator(currentToken)) { var operand1 = evalStack.pop(); var operand2 = evalStack.pop(); var result = PerformOperation(parseInt(operand1), parseInt(operand2), currentToken); evalStack.push(result); } } return evalStack.pop(); } function PerformOperation(operand1, operand2, operator) { switch(operator) { case '+': return operand1 + operand2; case '-': return operand1 - operand2; case '*': return operand1 * operand2; case '/': return operand1 / operand2; default: return; } } function InfixToPostfix(expression) { var tokens = expression.split(/([0-9]+|[*+-\/()])/); var outputQueue = []; var operatorStack = []; while (tokens.length != 0) { var currentToken = tokens.shift(); if (isNumber(currentToken)) { outputQueue.push(currentToken); } else if (isOperator(currentToken)) { while ((getAssociativity(currentToken) == 'left' && getPrecedence(currentToken) <= getPrecedence(operatorStack[operatorStack.length-1])) || (getAssociativity(currentToken) == 'right' && getPrecedence(currentToken) < getPrecedence(operatorStack[operatorStack.length-1]))) { outputQueue.push(operatorStack.pop()) } operatorStack.push(currentToken); } else if (currentToken == '(') { operatorStack.push(currentToken); } else if (currentToken == ')') { while (operatorStack[operatorStack.length-1] != '(') { if (operatorStack.length == 0) throw("Parenthesis balancing error! Shame on you!"); outputQueue.push(operatorStack.pop()); } operatorStack.pop(); } } while (operatorStack.length != 0) { if (!operatorStack[operatorStack.length-1].match(/([()])/)) outputQueue.push(operatorStack.pop()); else throw("Parenthesis balancing error! Shame on you!"); } return outputQueue.join(" "); } function isOperator(token) { if (!token.match(/([*+-\/])/)) return false; else return true; } function isNumber(token) { if (!token.match(/([0-9]+)/)) return false; else return true; } function getPrecedence(token) { switch (token) { case '^': return 9; case '*': case '/': case '%': return 8; case '+': case '-': return 6; default: return -1; } } function getAssociativity(token) { switch(token) { case '+': case '-': case '*': case '/': return 'left'; case '^': return 'right'; } } It works fine so far. If I give it: ((5+3) * 8) It will output: Infix: ((5+3) * 8) Postfix (RPN): 5 3 + 8 * Result: 64 However, I'm struggling with implementing the unary operators so I could do something like: ((-5+3) * 8) What would be the best way to implement unary operators (negation, etc)? Also, does anyone have any suggestions for handling floating point numbers as well? One last thing, if anyone sees me doing anything weird in JavaScript let me know. This is my first JavaScript program and I'm not used to it yet.
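
    One common approach, shown as a minimal sketch rather than a drop-in patch: before the operator handling, rewrite '-' as a distinct unary token (the name 'u-' is made up here) whenever it cannot be a binary operator, i.e. at the start of the expression, after '(' or after another operator. The new token then gets a high precedence (above * and /), right associativity, and an evaluation rule that pops a single operand and negates it. For floating point, the token regex would also need to accept decimals (e.g. [0-9]*\.?[0-9]+) and the evaluator should use parseFloat instead of parseInt.

        function markUnaryMinus(tokens) {
            var out = [], prev = null;
            for (var i = 0; i < tokens.length; i++) {
                var t = tokens[i];
                if (t === '-' && (prev === null || prev === '(' ||
                                  '+-*/^'.indexOf(prev) !== -1)) {
                    out.push('u-');                 // unary minus gets its own token
                } else {
                    out.push(t);
                }
                prev = t;
            }
            return out;
        }

        // example: ((-5+3) * 8)
        console.log(markUnaryMinus(['(', '(', '-', '5', '+', '3', ')', '*', '8', ')']));
        // -> ['(', '(', 'u-', '5', '+', '3', ')', '*', '8', ')']

        // in EvaluateExpression, 'u-' pops one operand instead of two:
        // if (currentToken === 'u-') { evalStack.push(-parseFloat(evalStack.pop())); }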

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working with Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare) and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately it's impossible to add a new repo without SSHing to the server and adding it manually, i.e. git push http://servername/allgitrepos/repo2.git (allgitrepos is available for everyone to read-write and execute) would fail complaining about git update-server-info (which seems to be a general error message). So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP creds to make the git commit. So, how can I push new repos to the server, and how can I use the LDAP creds to make the git commit? Thanks
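
    A sketch of one way the authentication side is commonly wired up, using Apache, git-http-backend and mod_authnz_ldap; every path and the LDAP URL below are assumptions for illustration. Two caveats worth keeping in mind: git-http-backend only serves repositories that already exist on disk, so creating repo2.git still needs a git init --bare on the server (or a small authenticated script that does it), and the LDAP credentials only authenticate the push itself; the author recorded in each commit still comes from the committer's local user.name and user.email.

        SetEnv GIT_PROJECT_ROOT /var/www/allgitrepos
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /allgitrepos/ /usr/lib/git-core/git-http-backend/

        <Location /allgitrepos/>
            AuthType Basic
            AuthName "Git repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid?sub?(objectClass=*)"
            Require valid-user
        </Location>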

    Read the article

  • Does anyone get zero-height select fields in Firefox 3.6.3?

    - by user350635
    If you open this HTML in Firefox 3.6.3 (confirmed in some earlier versions too), and click the drawStuff() link repeatedly, it doesn't render the contents of the last div consistently. Looking more closely it seems like it's rendering select fields with height=0. Any idea why this would happen? <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title> A Page </title> <script type="text/javascript"> function drawStuff() { for (var i = 1; i <= 5; i++) { var curHtmlArr = []; for (var j = 0; j < 5; j++){ curHtmlArr.push("<select>"); curHtmlArr.push(getOptgroup()); curHtmlArr.push(getOptgroup()); curHtmlArr.push(getOptgroup()); curHtmlArr.push("<\/select>"); } var foobar = document.getElementById('elem_' + i); foobar.innerHTML = curHtmlArr.join(''); } } function getOptgroup(){ var htmlArr = []; htmlArr.push('<optgroup label="Whatever">'); for (var ii = 0; ii < 32; ii++){ htmlArr.push(' <option value="' + ii + '"> Blah ' + "<\/option>"); } htmlArr.push("<\/optgroup>"); return htmlArr.join(''); } </script> </head> <body> <table border=1 style="width:900px;" summary="A Table"> <tr> <td> <div id="elem_1"></div> </td> <td> <div id="elem_2"></div> </td> <td> <div id="elem_3"></div> </td> <td> <div id="elem_4"></div> </td> <td> <div>abc</div> <div id="elem_5"></div> </td> </tr> </table> <a href="javascript:drawStuff()"> drawStuff() </a> <script type="text/javascript"> drawStuff(); </script> </body> </html>

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony,mysql,git) that I don't want to spend time duplicating when I can just ssh and develop remotely or through netbeans remote editing features. My question is how can I use git to separate my site into three areas: live, staging and dev. Here's my initial thought: public_html (live site and git repo) testing: a mirror of the site used for visual tests (full git repo) dev/ticket# : git branches of public_html used for features and bug fixes (full git repo) Version Control with git: Initial setup: cd public_html git init git add * git commit -m ‘initial commit of the site’ cd .. git clone public_html testing mkdir dev Development: cd /dev git clone ../testing ticket# all work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test make granular commits as necessary until dev is done git push origin master:ticket# if the above fails: merge latest testing state into current dev work: git merge origin/master then try the push again mark ticket# as ready for integration integration and deployment process: cd ../../testing git merge ticket# -m "integration test for ticket# --no-ff (check for conflicts ) run hudson tests visit www.domain.com/testing for visual test if all tests pass: if this ticket marks the end of a big dev sprint: make a snapshot with git tag git push --tags origin else git push origin cd ../public_html git checkout -f (live site should have the latest dev from ticket#) else: revert the merge: git checkout master~1; git commit -m "reverting ticket#" update ticket# that testing failed with the failure details Snapshots: Each major deployment sprint should have a standard name and should be tracked. Method: git tag Naming convention: TBD Reverting site to previous state If something goes wrong, then revert to previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again. My questions: Does this workflow make sense, if not, any recommendations Is my approach for reverting correct or is there a better way to say 'revert to before x commit'
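
    On the last question ("revert to before x commit"), a short sketch of the usual options; <sha> and <tag> are placeholders. Note that the git checkout master~1 step in the integration flow above only detaches HEAD; it does not move the branch itself.

        git revert <sha>              # new commit that undoes <sha>; history is preserved (safe on shared branches)
        git revert -m 1 <merge-sha>   # same idea for undoing a merge commit
        git reset --hard <tag>        # move the branch pointer back to <tag>; later commits are discarded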

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano; the original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The customer needs that drive this trend are often very different, but they are almost always united by two main objectives: elasticity of the infrastructure, both hardware and software, and investments that follow the progressive needs of the current infrastructure; in short, characteristics of innovation and economy. A concrete use case that I worked on recently demanded exactly these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that can process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to handle the likely load peaks during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid tying up investments in owned hardware and software architectures, so we were asked to look for a solution based on cloud technologies and architectures already offered by the market. Coherence can already address the requirement of a large cache spread across different nodes in the cluster, and it adds technology for searching and parallel computing that uses all of the hardware infrastructure resources simultaneously. Moreover, thanks to its "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, it satisfies the need for a resilient infrastructure that can also rely on nodes temporarily hosted in cloud architectures. There are different types of configurations that can be realized using Coherence "Push Replication": Active-Passive, Hub and Spoke, Active-Active, Multi-Master, Centralized Replication. Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is allowed to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to change this configuration into an Active-Active configuration. Although very simple, the small sample in this post, inspired by that project, is effective for understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, in two different domain contexts, one of them "hosted" at home (on-premise) and the other one hosted by any cloud provider on the network (or just on the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, and a simple client will be able to insert data only into Site 1. On both sites a listener will be subscribed that listens for changes to specific objects within the various caches. 
To implement these features, you need 4 simple classes: CachedResponse.java is the POJO class that will be inserted into the cache; it holds useful information about the hypothetical link navigation. ResponseSimulatorHelper.java is a link simulator whose task is to randomly create objects of type CachedResponse to be added to the caches. CacheCommands.java is the model of our example: it receives instructions from the controller and performs the basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache. Shell.java is our controller, which issues the commands to be executed against the caches of the two sites. So, in summary, we run the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create the new "CachedResponse" instances. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1. Now we create the configurations for the two respective sites/clusters: Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with read and write enabled, using the cache store class for "push replication", a functionality offered by the Oracle Coherence "incubator" project. For Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled so that it can receive updates from Site 1. Coherence Cache Config XML file for the "storage node" on "Site 1": site1-prod-cache-config.xml. Coherence Cache Config XML file for the "storage node" on "Site Cloud": site2-prod-cache-config.xml. For the two "Shell" clients, which connect respectively to the two clusters, we have provided two simple access configurations. Coherence Cache Config XML file for the Shell on "Site 1": site1-shell-prod-cache-config.xml. Coherence Cache Config XML file for the Shell on "Site Cloud": site2-shell-prod-cache-config.xml. Now we just have to put everything together and run our tests. 
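
Before running the full example, a minimal sketch of the kind of calls a class like CacheCommands makes against the standard Coherence NamedCache API (the cache name is taken from the example; everything else here is made up for illustration):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    public class CacheSketch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test");
            // subscribe to insert/delete events, as the Shell does on Site Cloud
            cache.addMapListener(new AbstractMapListener() {
                public void entryInserted(MapEvent e) { System.out.println("added: "   + e.getNewValue()); }
                public void entryDeleted(MapEvent e)  { System.out.println("removed: " + e.getOldValue()); }
            });
            // insert an entry, as the Shell does on Site 1
            cache.put("response-1", "hypothetical CachedResponse payload");
            CacheFactory.shutdown();
        }
    }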
To start at least one "storage" node (which holds the data) for the "Cloud Site", we can run the standard class  provided OOTB by Oracle Coherence com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud To start at least one "storage" node (which holds the data) for the "Site 1", we can perform again the standard class provided by Coherence  com.tangosol.net.DefaultCacheServer with the following parameters and values:-Xmx128m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1 Then, we start the first client "Shell" for the "Cloud Site", launching the java class it.javac.Shell  using these parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9002-Dtangosol.coherence.site=SiteCloud Finally, we start the second client "Shell" for the "Site 1", re-launching a new instance of class  it.javac.Shell  using  the following parameters and values: -Xmx64m-Xms64m-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml-Dtangosol.coherence.clusterport=9001-Dtangosol.coherence.site=Site1  And now, let’s execute some tests to validate and better understand our configuration. TEST 1The purpose of this test is to load the objects into the "Site 1" cache and seeing how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access the "Site 1", let’s write and run the command: load test/100 Within the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: size passive-cache Expected result If all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster where they will have been posted into a respective cache, which we named "passive-cache". TEST 2The purpose of this test is to listen to deleting and adding events happening on the "Site 1" and that are replicated within the cache on "Cloud Site". 
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Farmyard

    - by Richard Jones
    Moooooooo     For a while now we've been using Apple's enterprise device app distribution mechanism. This allows a user to click on a URL on their iOS device and pull down a new version of an enterprise app off our servers. It's really nice; have a look at http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html   I've embedded this into a check on application launch: a web service is called to detect whether a newer version of the software is available, and if so the app calls the URL and the new version is deployed. You can alert users that a new app update is available by sending them a push notification. See screenshot at the top. We send our push notifications out to users using a simple C# service. The fun part is this: you can instruct the push notification to play a sound (already embedded in the app). So our push notifications play a random farmyard noise, i.e. one from a selection of cow.wav, dogbrk.wav, duck.wav, goose.wav, horse.wav, lamb.wav, monkey.wav (left field, I know), rooster.wav. Imagine my amusement being able to periodically send out an update and watch our office (of about 60 people) turn into a farm for a few seconds. I've messed up a few times, with people being interrupted on customer conference calls, but people seem good humoured about it (so far). Simple(ish) pleasures…
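
    For reference, the sound is simply named in the notification payload; a sketch of the JSON a sender like that C# service would hand to APNs (the alert text is a placeholder, and the .wav has to already be bundled in the app):

        {
          "aps": {
            "alert": "A new version is available",
            "sound": "cow.wav"
          }
        }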

    Read the article

  • Is there an elegant way to track multiple domains under separate accounts with google analytics?

    - by J_M_J
    I have a situation where a content management system uses the same template for multiple websites with different domain names and I can't make a separate template for each. However, each website needs to be tracked with Google analytics. Would this be appropriate to track each domain like this by putting in some conditional code? And would this be robust enough not to break? Is there a more elegant way to do this? <script type="text/javascript"> var _gaq = _gaq || []; switch (location.hostname){ case 'www.aaa.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-1']); break; case 'www.bbb.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-2']); break; case 'www.ccc.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-3']); break; } _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga); })(); </script> Just to be clear, each website is a separate domain name and must be tracked separately, NOT different domains with same pages on one analytics profile.
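
    The switch works, but it silently tracks nothing when the hostname is not an exact match (for example aaa.com without the www). A slightly more defensive sketch of the same idea, with a lookup table and a fallback property; the UA codes are the placeholders from above:

        var accounts = {
            'www.aaa.com': 'UA-xxxxxxx-1',
            'www.bbb.com': 'UA-xxxxxxx-2',
            'www.ccc.com': 'UA-xxxxxxx-3'
        };
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', accounts[location.hostname] || 'UA-xxxxxxx-1']);
        _gaq.push(['_trackPageview']);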

    Read the article

  • Not able to track traffic on subdomain using Google Analytics

    - by Steven
    I'm trying to track traffic for my sub-domain, but it's not happening. This is how it's set up. My partner has a domain called sub1.partner.com. This domain points to partner1.mydomain.com. The idea is that users think they are browsing my partner's website, when they are in fact browsing pages on my server. My tracking code looks like this: var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']); _gaq.push(['_setDomainName', '.mysite.com']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); In Google Analytics I've created a new account under my main account and called it partner1.mysite.com. On this account I have created a filter: Filter type: include Filter field: Host name Filter pattern: partner1.mysite.no Case sensitive: No What more can I try to track traffic on my subdomain? UPDATE Question 1 Is this line correct? _gaq.push(['_setDomainName', '.mysite.com']); Question 2 Is it correct that I have to add \ before any punctuation, like \., in filters?
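
    On the two follow-up questions, hedged rather than definitive: _setDomainName with '.mysite.com' is appropriate only if the tracked pages are actually served from a subdomain of mysite.com, and since profile filter patterns are treated as regular expressions, escaping the dots is the safer form. Note also that the excerpt mixes partner1.mysite.no and partner1.mysite.com; the filter has to match the hostname visitors really hit. A sketch of the filter, with the .com hostname assumed:

        Filter type:    include
        Filter field:   Host name
        Filter pattern: partner1\.mysite\.com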

    Read the article

  • Can we use both Google Analytics (Asynchronous) and Google Analytics with Display Advertising code in same page

    - by Gadde
    I have the Google Analytics (Asynchronous) script <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-XXXXX-X']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> and the Google Analytics with Display Advertising script <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-XXXXX-X-yz']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 'stats.g.doubleclick.net/dc.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> The UA codes are different; can I use both? I've read somewhere that Universal Analytics will not interfere with previous versions of Google Analytics. If I have upgraded to Universal Analytics and the UA codes are different, should I use only the Universal Analytics script, or should I use both the Universal Analytics script and the Google Analytics script? Please advise.
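
    A sketch of one commonly used arrangement for classic (non-Universal) Analytics, hedged since the exact account setup is not shown: load a single tracking script (dc.js already does everything ga.js does, plus Display Advertising) and, if both properties really must receive hits, send the second property's hits through a named tracker. The tracker name 'b' and the UA codes are placeholders:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXX-X']);
          _gaq.push(['_trackPageview']);
          // second property via a named tracker
          _gaq.push(['b._setAccount', 'UA-XXXXX-X-yz']);
          _gaq.push(['b._trackPageview']);
          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 'stats.g.doubleclick.net/dc.js';
            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
          })();
        </script>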

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >