Search Results

Search found 8824 results on 353 pages for 'cloud virtualization vmware density scalability'.


  • Box2D `ApplyLinearImpulse` is not working whereas `SetLinearVelocity` works

    - by Narek
    I need to mimic jumping behavior for the player in my game. The player consists of two fixtures with circle and rectangle shapes. I use the rectangle to detect the ground, and it is a sensor. At some point, to jump, I do this: float impulseY = body->GetMass() * PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180); body->ApplyLinearImpulse(b2Vec2(0, impulseY), body->GetWorldCenter(), true); and the player does not jump. But when I do this: body->SetLinearVelocity(b2Vec2(0, PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180))); my player jumps. Also, when I change the rectangle shape to be a normal (non-sensor) shape, it works again. Why? Just in case, here are the parameters of my rectangular sensor: b2PolygonShape boxShape; boxShape.SetAsBox(width * 0.5/2/PTM_RATIO, height * 0.2/2/PTM_RATIO, b2Vec2(0, -height * 0.4 /PTM_RATIO), 0); b2FixtureDef boxFixtureDef; boxFixtureDef.friction = 0; boxFixtureDef.restitution = 0; boxFixtureDef.density = 1; boxFixtureDef.isSensor = true; boxFixtureDef.userData = static_cast<void*>(PLAYER_GROUP);
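
    (Aside, not part of the original question: a minimal sketch of the usual impulse-for-jump arithmetic in Box2D, assuming the foot sensor is given zero density so it adds no mass of its own; the helper name and the zero-density choice are illustrative, not taken from the question.)

      // Sketch only: an impulse of mass * dv changes velocity by dv, so for a body
      // starting at rest this should land the same take-off speed as SetLinearVelocity.
      void Jump(b2Body* body, float desiredVy)   // desiredVy in world units (m/s)
      {
          float dvY = desiredVy - body->GetLinearVelocity().y;
          b2Vec2 impulse(0.0f, body->GetMass() * dvY);
          body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);
      }

      // Assumption: a zero-density foot sensor still reports contacts but contributes
      // no mass and does not shift the center of mass.
      b2FixtureDef footFixtureDef;
      footFixtureDef.isSensor = true;
      footFixtureDef.density  = 0.0f;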

    Read the article

  • Impact Earth Lets You Simulate Asteroid Impacts

    - by Jason Fitzpatrick
    If you’re looking for a little morbid simulation to cap off your Friday afternoon, this interactive asteroid impact simulator makes it easy to see the results of asteroid impacts big and small. The simulator is the result of a collaboration between Purdue University and Imperial College London. You can adjust the size, density, impact angle, and impact velocity of the asteroid, as well as change the target from water to land. The only feature missing is the ability to select a specific location as the point of impact (if you want to know what a direct strike to Paris would yield, for example, you’ll have to do your own layering). Once you plug all that information in, you’re treated to a little 3D animation as the simulator crunches the numbers. After it finishes you’ll see a breakdown of a variety of effects, including the size of the crater, the energy of the impact, seismic effects, and more. Hit up the link below to take it for a spin. Impact Earth [via Boing Boing]

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments with rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate engine would be either a rendering engine or game engine with a modular design that might be ready to go out of the box but would be simple enough in case I need to rip out some of the guts in the rendering management and implement my own. It's a tough call because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work with rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and less likely for me to run into some obscure less-documented problem. But OpenGL might be acceptable if the recommended framework was definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (part of adding to the density of geometry detail).

    Read the article

  • trying to upgrade memory

    - by user214876
    I've been using Ubuntu on my laptop for a while now. Not quite used to it yet. I've got an Acer Aspire with the original 4 GB of memory and a 500 GB HDD, currently running 12.04 32-bit. I have the 13.04 upgrade disc and want to upgrade my memory to 8 GB. Every time I install the 8 GB of memory, the system won't boot to either version. I downloaded the 64-bit version of both releases of Ubuntu but no results yet. Can anyone offer a suggestion here? I'm kind of lost. Additional information: the memory was purchased through Acer/Kingston and is recommended for this computer. I watched the video on installing it, so I doubt it's installed wrong (there's only one way of putting it in). I swapped operating systems, from Ubuntu 12.04 to 13.04 to 13.10 and now to Xubuntu 13.10 64-bit, and I'm still not having any luck with this upgrade. Would it be necessary to upgrade the CPU? It's just a thought; I don't know what else could keep me from utilizing the new memory. Additional information: I called Kingston this afternoon, and they are sending replacement lower-density memory modules, 2/4 gig - 8 gig. Tech support says I need to upgrade the BIOS via DOS to utilize the new memory, since it is no longer a Windows system. I'm not sure how to go about that, but it's a learning process I can live with. Thank you all for your help and support. I realize this isn't an Ubuntu problem, but each new user of this OS seems to share similar problems, and maybe someone can use this info to their advantage.

    Read the article

  • Function for building an isosurface (a sphere cut by planes)

    - by GameDevEnthusiast
    I want to build an octree over a quarter of a sphere (for debugging and testing). The octree generator relies on the AIsosurface interface to compute the density and normal at any given point in space. For example, for a full sphere the corresponding code is: // returns <0 if the point is inside the solid virtual float GetDensity( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); Float3 v = Float3_Subtract( P, m_origin ); float l = Float3_LengthSquared( v ); float d = Float_Sqrt(l) - m_radius; return d; } // estimates the gradient at the given point virtual Float3 GetNormal( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); float d = this->AIsosurface::GetDensity( P ); float Nx = this->GetDensity( _x + 0.001f, _y, _z ) - d; float Ny = this->GetDensity( _x, _y + 0.001f, _z ) - d; float Nz = this->GetDensity( _x, _y, _z + 0.001f ) - d; Float3 N = Float3_Normalized( Float3_Set( Nx, Ny, Nz ) ); return N; } What is a nice and fast way to compute those values when the shape is bounded by a low number of half-spaces?
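
    (Aside, not part of the original question: one common way to express "a sphere cut by planes" as a single density function is to take the max of the sphere's signed distance and the signed distance to each clipping half-space, i.e. a CSG intersection. The sketch below keeps the convention above that negative means inside; the plane normals are assumed to point out of the solid, and the small vector types are stand-ins for the engine's Float3 helpers.)

      #include <algorithm>
      #include <cmath>
      #include <vector>

      struct Vec3  { float x, y, z; };
      struct Plane { Vec3 n; float d; };   // signed distance to the plane = dot(n, p) + d

      static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

      // max() of signed distances = intersection: the sphere clipped by the half-spaces.
      // Not an exact distance near edges and corners, but good enough for the
      // finite-difference normal estimate used in GetNormal above.
      float CutSphereDensity(const Vec3& p, const Vec3& center, float radius,
                             const std::vector<Plane>& planes)
      {
          Vec3 v{ p.x - center.x, p.y - center.y, p.z - center.z };
          float d = std::sqrt(Dot(v, v)) - radius;
          for (const Plane& pl : planes)
              d = std::max(d, Dot(pl.n, p) + pl.d);
          return d;   // < 0 inside the cut sphere
      }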

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount them as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side, which means that once you have received a big file, stream or binary, how to upload it into blob storage in blocks through the .NET SDK.  But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user’s machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
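    (Aside, not part of Shaun's original post: the two request-size limits he alludes to live in web.config. A sketch with illustrative values, in case you want to raise them rather than chunk; chunking is still needed anyway because of the load-balancer timeout.)

      <!-- Illustrative values only: roughly 1 GB on both limits -->
      <configuration>
        <system.web>
          <!-- ASP.NET limit, in kilobytes, plus a longer execution timeout in seconds -->
          <httpRuntime maxRequestLength="1048576" executionTimeout="3600" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <!-- IIS limit, in bytes -->
              <requestLimits maxAllowedContentLength="1073741824" />
            </requestFiltering>
          </security>
        </system.webServer>
      </configuration>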
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form through which we can pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunked uploading. Sequential upload cannot maximize upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the “async.js” JavaScript library to help us control the asynchronous calls and the parallelism limit.   Regarding the chunk size and the parallelism limit there is no “best” value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (single-core CPU, 1.75GB memory, 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binary. - Target file: 100MB. - Each case was tested 3 times. The test results are charted in the original post. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - There is no significant performance improvement between parallel with 4 threads and 8 threads. - Transferring binary chunks provides better performance than base64 strings. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Mongodb vs. Cassandra

    - by ming yeow
    I am evaluating what might be the best migration option. Currently, I am on a sharded MySQL setup (horizontal partitioning), with most of my data stored in JSON blobs. I do not have any complex SQL queries (I already migrated away from them when I partitioned my db). Right now, it seems like both MongoDB and Cassandra would be likely options. My situation: lots of reads in every query, less regular writes; not worried about "massive" scalability; more concerned about simple setup, maintenance and code; minimize hardware/server cost.

    Read the article

  • installing bitnami stacks with virtualbox

    - by dreftymac
    Background: I cannot seem to get the vmdk files to work with VirtualBox as a way of using Bitnami stacks. The documentation says you can use VirtualBox, but there is no detail beyond how to use VMware Player. I know how to get .iso files working with VirtualBox, but not the files that Bitnami uses. Question: Does anyone have experience getting this specific configuration to work?
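
    (Aside, not from the original question: one way this is commonly done is to create an empty VirtualBox VM and attach the Bitnami .vmdk as its existing disk, either through the GUI ("Use an existing virtual hard disk file") or with VBoxManage. A sketch, assuming a reasonably recent VirtualBox; the VM name, OS type, memory size and path are illustrative.)

      # Create and register an empty VM, then give it some memory and NAT networking
      VBoxManage createvm --name bitnami-stack --ostype Linux_64 --register
      VBoxManage modifyvm bitnami-stack --memory 1024 --nic1 nat
      # Add a SATA controller and attach the Bitnami disk image to it
      VBoxManage storagectl bitnami-stack --name "SATA" --add sata
      VBoxManage storageattach bitnami-stack --storagectl "SATA" --port 0 --device 0 --type hdd --medium /path/to/bitnami-stack.vmdk
      VBoxManage startvm bitnami-stack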

    Read the article

  • Httpd + Expect Script Fail (no more ptys) if httpd is not run through cli

    - by Apostolis
    I have a CentOS virtual server running on VMware. The server runs an httpd daemon which serves a PHP page with a form. The users complete the form, and by clicking submit the PHP page calls an expect script. If I run httpd through the default init.d script I get a "no more ptys" error, but if I run httpd from a root terminal the script runs without problems. How can I make httpd run the expect scripts without having to start the httpd daemon by hand?
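
    (Aside, not from the original question: a few hedged checks for things that commonly differ between a root shell and the init.d-started httpd environment; "apache" is assumed to be the service user, and these are diagnostics rather than a fix.)

      getenforce                        # SELinux enforcing? pty denials often show up in /var/log/audit/audit.log
      ls -l /dev/ptmx /dev/pts          # are the pty device nodes present and usable?
      sudo -u apache expect -c 'spawn echo ok; expect eof'   # try to reproduce as the service user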

    Read the article

  • Unable to Install SQL Server 2008 on Win Server 2008 R2 Datacenter

    - by MikeKusold
    I have been trying for the past three days to install SQL Server 2008 with SharePoint integrated mode in VMware Player, however I keep getting the following error: Reporting Services in SharePoint integrated mode is not supported for WORKGROUP edition I set up AD DS and have my computer joined to that domain (therefore not a WORKGROUP). I am currently at my wits' end and any help would be appreciated. Current roles installed: Application Server, Active Directory Domain Services, Web Server (IIS) Features: Desktop Experience, Group Policy Management, Ink and Handwriting Services, Remote Server Administration Tools, Windows Process Activation Service, .NET Framework 3.5.1 Features

    Read the article

  • installation of mac os 10.6.2

    - by mahesh
    Hi, this is Mahesh. I run Mac OS using VMware Workstation 7.0 on my Windows PC. I installed Mac OS 10.4.8 successfully, but I am not able to install version 10.6.2. It shows the error "invalid front-side bus frequency 66000000 hz, disabling cpu". Please help me solve this, and suggest any link that explains how to install Mac OS 10.6.2 successfully.

    Read the article

  • Why can I not get a WDS-originated PXE boot to progress past the first file download?

    - by Jeff Shattock
    I'm trying to work out an automated Windows install process, and thought I'd give WDS a look. After some promising initial progress, I seem to have hit a wall. I imported the boot and install WIMs, and created the capture WIM successfully. However, whenever I try to PXE boot the reference machine against the WDS server, it kinda craps out. It finds the server and downloads WDSNBP.COM successfully, and then gives the message "TFTP download failed." According to Wireshark, the only communication between the WDS box and the client box is the successful TFTP request and download of boot\x86\WDSNBP.COM. No further requests are sent. The WDS log on the server shows the same thing, one successful download and no more activity. I've tried every combination of the following, with exactly zero change in behaviour: Win Server 2008R2 vs 2012 vs 2012R2; WDS virtualized on KVM, ESXi, VirtualBox, VMware Workstation; client virtualized on KVM, ESXi, VirtualBox, VMware Workstation; every network adaptor type offered by the virtualization platforms; "actual" network vs isolated, virtual network; MS DHCP server vs Linux isc-dhcp-server; joined to a domain vs stand-alone. I tried changing the boot filename in DHCP to pxeboot.com instead, and it has no problem downloading that file, but it then crabs about Boot\BCD being corrupted. Also, with 2012, it doesn't appear that WDSNBP.com does the architecture detection, or at least doesn't report that it did. 2008 reports that it found x64, and then errors. I find myself out of things to check, and I don't see anything immediately wrong. Where do I go from here? WDS server is at 192.168.1.50, DHCP/DNS at 192.168.1.7. Console of the client computer after the boot: MAC: 52:54:00:28:94:0E UUID: blah blah Searching for server (DHCP)..... Me: 192.168.1.155, DHCP: 192.168.1.7, Gateway 192.168.1.1 Loading 192.168.1.50:boot\x86\wdsnbp.com ...(PXE).................done Downloaded WDSNCP... TFPT download failed Interesting parts of /etc/dhcp/dhcpd.conf on the Linux DHCP server: allow booting; allow bootp; option option-60 code 60 = string; option option-66 code 66 = string; option option-67 code 67 = string; subnet 192.168.1.0 netmask 255.255.255.0 { range 192.168.1.110 192.168.1.253; next-server 192.168.1.50; option tftp-server-name "192.168.1.50"; option option-60 "PXEClient"; filename "boot\\x86\\wdsnbp.com"; option bootfile-name "boot\\x86\\wdsnbp.com"; }

    Read the article

  • Windows 7 Enterprise MAK needs Reactivation?

    - by chris
    I have a VMware image that contains our "standard workstation" where I do a lot of testing. I used a Windows 7 Enterprise MAK key (from MSDN) for activation because the doc said that MAK keys don't have to be reactivated when the hardware changes. Activation was done with slmgr.vbs /ipk LICENSE-KEY slmgr.vbs /ato Now after some testing where the virtual hw was changed it says it "needs to activate because the hw has changed". What did I do wrong?
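
    (Aside, not from the original question: a couple of stock slmgr.vbs switches that are handy for checking the current activation state from an elevated prompt before deciding whether re-activation is really needed.)

      rem Show detailed licensing/activation information (channel, remaining rearm count, and so on)
      cscript //nologo slmgr.vbs /dlv
      rem Show when the current activation expires, if ever
      cscript //nologo slmgr.vbs /xpr
      rem Re-run online activation with the installed key
      cscript //nologo slmgr.vbs /ato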

    Read the article

  • Apache2 not restarting on ubuntu

    - by Mirage
    I have installed Apache 2 on Ubuntu, but when I try to restart it I get this error: vmware@ubuntu:~$ sudo /etc/init.d/apache2 restart * Restarting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Mon Apr 12 17:37:43 2010] [warn] NameVirtualHost *:80 has no VirtualHosts apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Mon Apr 12 17:37:43 2010] [warn] NameVirtualHost *:80 has no VirtualHosts When I try http://localhost nothing comes up.
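
    (Aside, not from the original question: both messages are warnings rather than fatal errors. A hedged sketch of the usual way to quiet the FQDN warning on an Ubuntu 10.04-era Apache 2.2 layout; the "no VirtualHosts" warning usually means no site is enabled, which would also explain the empty http://localhost.)

      # Give Apache an explicit ServerName (path per the old conf.d layout; adjust if yours differs)
      echo "ServerName localhost" | sudo tee /etc/apache2/conf.d/servername
      # Make sure the default site is enabled, then restart
      sudo a2ensite default
      sudo /etc/init.d/apache2 restart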

    Read the article

  • Google Chrome OS ISO

    - by Taylor
    First of all, a few questions, as I'm a bit confused :P Is the Chrome OS that ships on Chromebooks "Chromium"? I'm wanting to buy a Chromebook, but I want to download an ISO or similar format first so I can try it out in VMware. To be clear, I want the exact version that ships on Chromebooks, as I want to get a good feel for what I'll be buying. Where can I find this? I've found other things, but I'm not sure they're the actual Chrome OS.

    Read the article

  • Download Microsoft Windows OS for Test Environments

    - by Gavin
    I need to create development environments with Windows XP, Windows Server 2003 and Windows Server 2008 on VMware. Are there free (unrestricted) development-type versions of those respective operating systems distributed by Microsoft? Or is there some kind of cheap software network/membership I can sign up for? Other companies like Oracle make it very easy to download dev copies of Solaris and OEL, but Microsoft seems a little too protective of their operating systems and doesn't make it easy.

    Read the article

  • How to configure Ubuntu for X11 client only?

    - by Shannon Severance
    I have an Ubuntu 10.04 LTS server running within a VM (VMware Fusion). Currently I can only access the machine via a character-mode console. I would like to install whatever I need to run X11 client software on Ubuntu, without installing a full X11 server. (I'll be using the X11 server within OS X, and ssh -X.) What should I do?
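
    (Aside, not from the original question: for running X11 clients over ssh -X you mostly need xauth on the Ubuntu side plus whatever client programs you want, no X server required. A sketch; x11-apps is only there to give you something to test with, and the package names assume a stock 10.04 install.)

      # On the Ubuntu server VM
      sudo apt-get install xauth x11-apps
      # X11 forwarding is on by default in Ubuntu's sshd_config; verify with:
      grep -i x11forwarding /etc/ssh/sshd_config
      # From the OS X side, with its X11 server running:
      ssh -X user@ubuntu-vm xeyes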

    Read the article

  • Error activating XKB configuration (Ubuntu 9.10)?

    - by Mohammad
    I'm absolutely new to Ubuntu. I wanted to learn it, so I installed VMware 7.0.1 build-227600 on my Win7 x64 machine, and afterward I installed Ubuntu 9.10 on it as a virtual machine. I just enabled the root account, and when I logged in as root for the first time, I received the following error: What's wrong with it? Am I missing something?

    Read the article

  • Real World Experience of db4o and/or Eloquera Database

    - by user341127
    I am evaluating two object databases, db4o (http://www.db4o.com) and Eloquera Database (http://eloquera.com), for an upcoming project. I have to choose one. My basic requirements are scalability, multi-user support and easy type evolution for RAD. Please share your real-world experience. If you have used both, can you compare them? Which do you prefer?

    Read the article

  • Getting HAPROXY to redirect http to https in users browser session

    - by Jon
    We are currently using a Internet cloud provider to host our SaaS platform. The platform consists of a Firewall - Cloud Provider SLB - - Apache Web Server - HAPROXY SLB - Liferay Platform We have had to use HAPROXY because of an issue with the cloud providers SLB that meant we were unable to use it for load balancing the Liferay platform applications. I have implemented HAPROXY in our secure tier and that seems to do the trick of load balancing the requests quite adequately. However during testing we encountered a functional issue whereby selecting a sub-menu from the web portal resulted in the application hanging, using an http analyser we saw that the request being passed back to the users browser was in http, from discussing this with the software vendor it transpires that the Liferay application has some hard-coded http links, and that other customers have worked around this by using physical NLB's such as F5 and redirecting the http traffic to https. The entry in the HAPROXY logs reads: haproxy[2717]: haproxy[2717]: <Apache Web Agent>:37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1" :37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1" The corresponding HTTP browser entry shows: http://<FQDN of 
ServiceDesk>/servicedesk/controller?docommand=renderradform&!key=esd_org019_frm_contact_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365665987887&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&sso_token=3NxsXYORMPp32SwL8ftVUCMH2QdWLH82&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fapplication-setup&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-10-26%2019:00:25.08 From reading through the forums and other sites it looks like we should be use to use HAPROXY to redirect the traffic to https, but try as I might I cant get it to work. This is our HAPROXY configuration: global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 4000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend http-openfire bind *:7070 default_backend openfire backend openfire balance roundrobin server <serverName> <IPv4 Address>:7070 check server <serverName> <IPv4 Address>:7070 check frontend http-uapi bind *:7080 default_backend uapi backend uapi balance roundrobin server <serverName> <IPv4 Address>:7080 check server <serverName> <IPv4 Address>:7080 check frontend http-sec bind *:8080 default_backend sec backend sec balance roundrobin server <serverName> <IPv4 Address>:8080 check server <serverName> <IPv4 Address>:8080 check frontend http-wall bind *:9080 default_backend wall backend wall balance roundrobin server <serverName> <IPv4 Address>:9080 check server <serverName> <IPv4 Address>:9080 check frontend http-xmpp bind *:9090 default_backend xmpp backend xmpp balance roundrobin server <serverName> <IPv4 Address>:9090 check server <serverName> <IPv4 Address>:9090 check frontend http-aim bind *:10080 default_backend aim backend aim balance roundrobin server <serverName> <IPv4 Address>:10080 check server <serverName> <IPv4 Address>:10080 check frontend http-servicedesk bind *:8081 default_backend servicedesk backend servicedesk balance roundrobin server <serverName> <IPv4 Address>:8081 check server <serverName> <IPv4 Address>:8081 check listen stats :1936 mode http stats enable stats hide-version stats realm Haproxy\ Statistics stats uri / stats auth haproxy:<Password> I have tried following the articles listed posted on http://stackoverflow.com/questions/13227544/haproxy-redirecting-http-to-https-ssl and http://parsnips.net/haproxy-http-to-https-redirect/ but that hasn't made any difference. Am I on the right track with this or are we trying to achieve the impossible?, I'm hoping I'm just being an idiot and one of you good people can point me in the right direction.
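
    (Aside, not part of the original question or a confirmed answer: with HAProxy 1.5 or later, one common pattern is a per-frontend redirect that bounces plain-http requests back to https. A sketch against the http-uapi frontend above, assuming the upstream Apache/SLB adds an X-Forwarded-Proto header when the original request arrived over TLS; if it does not, the condition would have to come from somewhere else, since SSL is not terminated on HAPROXY here.)

      frontend http-uapi
          bind *:7080
          # send the browser back over https unless the original request already was https
          redirect scheme https code 301 if !{ hdr(X-Forwarded-Proto) -i https }
          default_backend uapi

    With that in place, the hard-coded http links coming out of Liferay should just cost one extra 301 round trip instead of leaving the browser stuck on a plain-http request.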

    Read the article

  • open source VDI solution [closed]

    - by sysconfig
    Looking to build a 10-node, eventually 50-node, VDI solution. The only OS on the desktops will be Ubuntu (or some other Linux). Looking for easy setup and administration, remote administration, etc. Will probably just use diskless PCs as clients for now, but would want a solution that can accommodate thin clients as well, and maybe there it's just XDMCP from the server. Must be completely open source (no VMware). Thoughts?

    Read the article
