Search Results

Search found 1774 results on 71 pages for 'azure'.


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which covers the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, block blobs are very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how do you upload it into blob storage in blocks through the .NET SDK? But the problem is, how can we upload these large files from the client side, for example, from a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer a file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitations

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the page is shown as below and the upload is terminated. In ASP.NET there is a limit on request length; the maximum request length is defined in the web.config file, and it must be a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate a connection that exceeds the timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from the 512th byte to the 768th byte and returns a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.
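    As a side note on the server-side limits mentioned above: the ASP.NET maximum request length lives in web.config. The snippet below is a hedged sketch of the standard settings (not from the original post), assuming IIS 7+ with classic ASP.NET; note that maxRequestLength is in kilobytes while maxAllowedContentLength is in bytes.

        <!-- Raise the upload limits to ~100MB; the values are illustrative. -->
        <system.web>
          <!-- maxRequestLength is in KB -->
          <httpRuntime maxRequestLength="102400" executionTimeout="360" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <!-- maxAllowedContentLength is in bytes -->
              <requestLimits maxAllowedContentLength="104857600" />
            </requestFiltering>
          </security>
        </system.webServer>

    Raising these limits only stretches the ceiling; it does not remove the load balancer timeout, which is why the chunked approach below is still the better answer.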
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, we could read it in 512KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined" the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later. It can generate a temporary form for us. We will use this interface to create a form with the chunk and its associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation, and currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received, it will be uploaded as a block into Blob Storage and finally committed with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which means they are not synchronous, so we need to introduce a JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute into a function array, and pass it into the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we will define an array and push the chunk upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see below, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all functions have executed successfully we invoke another ajax call to the backend service to commit all these chunks (blocks) as a blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read the chunks from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by making ajax calls to the URL "/Home/UploadInFormData", I created a new action on the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    Next, I used the Windows Azure SDK to create a blob container (in this case the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that we can finally put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into a blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that, our blob is shown in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. The whole uploading process looks like the diagram below. Below is the full client-side JavaScript code.
    <script type="text/javascript" src="~/Scripts/async.js"></script>
    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
                // assert the browser supports html5
                if (window.File && window.Blob && window.FormData) {
                    alert("Your browser is awesome, let's rock!");
                }
                else {
                    alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                    return;
                }

                // start to upload each file in chunks
                var files = $("#upload_files")[0].files;
                for (var i = 0; i < files.length; i++) {
                    var file = files[i];
                    var fileSize = file.size;
                    var fileName = file.name;

                    // calculate the start and end byte index for each block (chunk)
                    // with the index, file name and index list for future use
                    var blockSizeInKB = $("#block_size").val();
                    var blockSize = blockSizeInKB * 1024;
                    var blocks = [];
                    var offset = 0;
                    var index = 0;
                    var list = "";
                    while (offset < fileSize) {
                        var start = offset;
                        var end = Math.min(offset + blockSize, fileSize);

                        blocks.push({
                            name: fileName,
                            index: index,
                            start: start,
                            end: end
                        });
                        list += index + ",";

                        offset = end;
                        index++;
                    }

                    // define the function array and push all chunk upload operations into it
                    var putBlocks = [];
                    blocks.forEach(function (block) {
                        putBlocks.push(function (callback) {
                            // load a blob based on the start and end index of each chunk
                            var blob = file.slice(block.start, block.end);
                            // put the file name, index and blob into a temporary form
                            var fd = new FormData();
                            fd.append("name", block.name);
                            fd.append("index", block.index);
                            fd.append("file", blob);
                            // post the form to the backend service (asp.net mvc controller action)
                            $.ajax({
                                url: "/Home/UploadInFormData",
                                data: fd,
                                processData: false,
                                contentType: "multipart/form-data",
                                type: "POST",
                                success: function (result) {
                                    if (!result.success) {
                                        alert(result.error);
                                    }
                                    callback(null, block.index);
                                }
                            });
                        });
                    });

                    // invoke the functions one by one
                    // then invoke the commit ajax call to put the blocks into a blob in azure storage
                    async.series(putBlocks, function (error, result) {
                        var data = {
                            name: fileName,
                            list: list
                        };
                        $.post("/Home/Commit", data, function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            else {
                                alert("done!");
                            }
                        });
                    });
                }
            });
        });
    </script>

    And below is the full ASP.NET MVC controller code.
    public class HomeController : Controller
    {
        private CloudStorageAccount _account;
        private CloudBlobClient _client;

        public HomeController()
            : base()
        {
            _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
            _client = _account.CreateCloudBlobClient();
        }

        public ActionResult Index()
        {
            ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

            return View();
        }

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }
    }

    If we select a file in the browser, we can see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all the chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem since we uploaded the chunks in sequence. In order to improve upload performance, we can modify our client-side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember the code that invokes the service to upload chunks, it used "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions with limited parallelism
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: reads part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallelism limit.

    Regarding the chunk size and the parallelism limit, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role instance (single-core CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than series.
    - There is no significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, chunk sizes of 1MB - 2MB get better performance.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • SQL SERVER – Generate Database Script for SQL Azure

    - by pinaldave
    When talking about SQL Azure, the common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure. As the above images are very clear, I will not write more about them. SQL Azure does not support filegroups. Let us generate the script for a table created on the PRIMARY filegroup in stand-alone SQL Server and compare it with the script generated for SQL Azure. You can clearly see that there is no filegroup in the code generated for SQL Azure (a sketch of the difference follows below). Give it a try, and please comment here about what you think. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Azure
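    To make the filegroup difference concrete, here is a hedged sketch of what the generated scripts typically look like (the table and column names are invented for illustration; this is not the script from the original post):

        -- Script targeting stand-alone SQL Server: note the filegroup clause
        CREATE TABLE [dbo].[Customers](
            [CustomerID] INT NOT NULL,
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
        ) ON [PRIMARY]

        -- Script targeting SQL Azure: the ON [PRIMARY] clause is omitted
        -- (SQL Azure also requires every table to have a clustered index)
        CREATE TABLE [dbo].[Customers](
            [CustomerID] INT NOT NULL,
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
        )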


  • Controlling azure worker roles concurrency in multiple instance

    - by NER1808
    I have a simple worker role in Azure that does some data processing on a SQL Azure database. The worker basically adds data from a 3rd party datasource to my database every 2 minutes. When I have two instances of the role, this obviously doubles up the work unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime SLA, but do not want them both processing at the same time as they will just duplicate the same job. Is there a standard pattern for this that I am missing? I know I could set flags in the database, but am hoping there is another easier or better way to manage this. Thanks
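    One common answer (a hedged sketch, not from the original thread) is to let the instances race for a blob lease, so that only the lease holder runs the job each cycle. This assumes the classic Microsoft.WindowsAzure.Storage SDK; the container, blob and setting names, and the ProcessThirdPartyData method, are invented for illustration:

        // Only the instance that wins the lease processes this cycle.
        var account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        var container = account.CreateCloudBlobClient().GetContainerReference("locks");
        container.CreateIfNotExists();
        var lockBlob = container.GetBlockBlobReference("worker-lock");
        if (!lockBlob.Exists())
        {
            // The blob exists only to be leased; its content does not matter.
            using (var empty = new MemoryStream())
            {
                lockBlob.UploadFromStream(empty);
            }
        }

        try
        {
            // Lease durations must be 15-60 seconds (or infinite); renew if the job runs longer.
            var leaseId = lockBlob.AcquireLease(TimeSpan.FromSeconds(60), null);
            try
            {
                ProcessThirdPartyData(); // the actual 2-minute job
            }
            finally
            {
                lockBlob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
            }
        }
        catch (StorageException)
        {
            // Another instance holds the lease; skip this cycle and stay idle for redundancy.
        }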


  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links "down on paper". I last did one of these on Azure in Feb 2010.

    Cloud Links:
    - Article on Debugging in the Cloud
    - http://code.msdn.microsoft.com/azurescale - a sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed.
    - Azure In Action book is imminent :)
    - Running Memcached in Windows Azure from the MS UK team
    - Using Microsoft Codename Dallas as a data source for Drupal, also from the MS UK team
    - I often mention them – but this post is the biz! Metodi on fault and upgrade domains
    - Detailed blog post comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux.
    - AppFabric LABS allows you to test out and play with experimental AppFabric technologies.
    - Details of the upcoming VM support in Windows Azure
    - Nice series of posts from J D Meier in the Patterns and Practices team: How To Use ASP.NET Forms Auth with Azure Tables; How To Use ASP.NET Forms Auth with Roles in Azure Tables; How To Use ASP.NET Forms Auth with SQL Server on Windows Azure

    And sessions from MIX10, held March 15th to 17th:
    - Lap around the Windows Azure Platform – Steve Marx
    - Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 – Jim Nakashima
    - Building PHP Applications using the Windows Azure Platform – Craig Kitterman, Sumit Chawla
    - Using Ruby on Rails to Build Windows Azure Applications – Sriram Krishnan
    - Microsoft Project Code Name "Dallas": Data for your apps – Moe Khosravy
    - Using Storage in the Windows Azure Platform – Chris Auld
    - Building Web Applications with Windows Azure Storage – Brad Calder
    - Building Web Applications with Microsoft SQL Azure – David Robinson
    - Connecting Your Applications in the Cloud with Windows Azure AppFabric – Clemens Vasters
    - Microsoft Silverlight and Windows Azure: A Match Made for the Web – Matt Kerner

    Something for everyone :)


  • Free Windows Azure event next Monday in London (29th March)

    - by Eric Nelson
    I just heard that we still have spaces for this event happening next week (29th March 2010). Whilst the event is designed for start-ups, I'm sure nobody would notice if you snuck in :-) Just keep it to yourself ;-) Register using invitation code: 79F2AB. Hope to see you there. The agenda is looking pretty swish:

    09:00 - 09:30 Registration
    09:30 - 10:15 Keynote: 'I've looked at clouds from both sides now....' – John Taysom, Active Seed Investor
    10:15 - 10:45 The Microsoft Vision for Cloud Computing – Steve Clayton, Director Software + Services, EMEA
    10:45 - 11:00 Break
    11:00 - 12:30 "Windows Azure in the Real World" – hear from startups that have built their business around the Azure platform, moderated by Alistair Beagley, Azure UK Developer and Platform Lead
    12:30 - 13:15 Lunch and networking
    13:15 - 14:15 Breakout Tracks, moderated by our Azure Experts
        1. Windows Azure Technical Overview – David Gristwood, Application Architect, Microsoft
        2. SQL Azure Technical Overview – Eric Nelson, Application Architect, Microsoft
        3. Commercial insight into Windows Azure and what this means for BizSpark Start-ups – Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
    14:15 - 14:30 Session change over
    14:30 - 15:30 Breakout Tracks, moderated by our Azure Experts
        1. SQL Azure Technical Overview (repeat) – Eric Nelson, Application Architect, Microsoft
        2. Deep dive into Windows Azure – Neil Kidd, Architect, Microsoft Technology Centre
        3. Lessons Learnt - Windows Azure in the Real World interactive session – two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    15:30 - 16:00 Break & session change over
    16:00 - 17:00 Breakout Tracks, moderated by our Azure Experts
        1. PHP / Ruby on Azure – Simon Davies, Architect, UK Windows Azure Incubation Team, Microsoft
        2. Commercial insight into Windows Azure and what this means for BizSpark Start-ups (repeat) – Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
        3. Lessons Learnt - Windows Azure in the Real World interactive session #2 – two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    17:00 - 18:00 Pitches and Judging
    18:15 Wrap-up and close
    18:15 - 20:00 Drinks & Networking


  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it.

    In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. A Data Scientist (as far as I can make out this early in the term's use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods, as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I've divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it's just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or by other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it's a foundational choice that deals not only with the expense of purchasing systems or of using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also with the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level and provides a high degree of ACID compliance, as well as offering a well-known set-based language to work with the data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage.
    In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but also with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you're on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I'm excited

    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, that allow me to explore. And I'm happy to share what I learn along the way.


  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed.

    I have a bunch of files stored on Microsoft's SkyDrive platform, which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free!

    So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an ASP.NET web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure, and since the site has low bandwidth/space requirements the cost would be competitive with the existing host provider: Azure Pricing Calculator. And Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free.

    I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command:

        RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak'

        Msg 40510, Level 16, State 1, Line 1
        Statement 'RESTORE DATABASE' is not supported in this version of SQL Server.

    Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, so I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive.
    Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/. Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page….and voila, I was in like Flynn.

    There are other options for deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it. I uploaded the aspx, cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0, so I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow.

    The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out. However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted. If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess.

    Another problem. While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory with some test code, to see if there was a permissions problem:

        String MyPath = MapPath(@"~\PDFs\Test.txt");
        File.WriteAllText(MyPath, "Hello Azure");

    I got this message: Access to the path <my path> is denied. After some research, I understood that since Azure is a cloud-based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options:
    - Use the Azure APIs to get a path. That way the location of the storage is separated from the application; however, the web site is then tied to Azure and can't be moved to another hosting platform.
    - Use the ApplicationData folder (not recommended).
    - Write to BLOB storage (a sketch follows below).
    - Or, stream the PDF output directly to the email and not save a file at all.
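    For the blob option, here is a hedged sketch using the classic storage client (the "pdfs" container name and "DataConnectionString" setting are invented for illustration; it assumes the report has been rendered to a byte array):

        // Render the report to bytes, then persist to blob storage instead of the local disk.
        byte[] pdfBytes = reportViewer.LocalReport.Render("PDF");
        var account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("DataConnectionString"));
        var container = account.CreateCloudBlobClient().GetContainerReference("pdfs");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference("Test.pdf");
        blob.Properties.ContentType = "application/pdf";
        using (var ms = new MemoryStream(pdfBytes))
        {
            blob.UploadFromStream(ms); // no local file system involved
        }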
    I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note: the author of the BLOG added a comment saying he has updated his entry).

    Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it).

    *Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not in Visual Studio 2010 Express.

    I hope someone finds this useful.

    Steve Wellens
    CodeProject


  • Azure Table Storage rejects an entity with a Property whose value is an Interface

    - by Andrew B Schultz
    I have a type called "Comment" that I'm saving to Azure Table Storage. Since a comment can be about any number of other types, I created an interface which all of these types implement, and then put a property of type ICommentable on the comment. So Comment has a property called About of type ICommentable. When I try to save a Comment to Azure Table Storage, if the Comment.About property has a value, I get the worthless invalid input error. However, if there is no value for Comment.About, I have no problem. Why would this be? Comment.About is not the only property that is a reference type. For example, Comment.From is a reference type, but Comment.About is the only property whose type is an interface.

    Fails:

        var comment = new Comment();
        comment.CommentText = "It fails!";
        comment.PartitionKey = "TEST";
        comment.RowKey = "TEST123";
        comment.About = sow1;
        comment.From = person1;

    Works:

        var comment = new Comment();
        comment.CommentText = "It works!";
        comment.PartitionKey = "TEST";
        comment.RowKey = "TEST123";
        //comment.About = sow1;
        comment.From = person1;

    Thanks!
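    A hedged explanation and workaround (not from the original thread): table storage only persists public read/write properties of a small set of primitive types (string, int, long, double, bool, DateTime, Guid, byte[]), so a populated interface-typed property has no table representation and can make the insert fail. A common workaround is to flatten the reference into primitive keys; the property names and the assumption that targets expose an Id are invented for illustration:

        // Sketch: store primitive stand-ins for the ICommentable reference.
        public class Comment : TableServiceEntity
        {
            public string CommentText { get; set; }

            // Serializable stand-ins for the About reference.
            public string AboutType { get; set; } // e.g. "Sow"
            public string AboutId { get; set; }   // key of the entity the comment is about

            // Keep live object references out of the persisted entity;
            // resolve them separately when reading the comment back.
        }

        // Usage (assumes sow1 exposes an Id):
        // comment.AboutType = sow1.GetType().Name;
        // comment.AboutId = sow1.Id;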


  • Upcoming presentations by me at Windows Azure Events

    - by ScottGu
    I recently blogged about a big wave of improvements we recently released for Windows Azure.  I also delivered a keynote on June 7th that discussed and demoed the enhancements – you can watch a recorded version of it online. Over the next few weeks I'll be doing several more speaking events about Windows Azure in North America and Europe.  Below are details on some of the upcoming events and how you can sign up to attend one in person:

    Scottsdale, Arizona on June 19th, 2012
    Attend this FREE all-day event in Scottsdale, Arizona on Tuesday, June 19th to learn more about Windows Azure, ASP.NET, Web API and SignalR.  I'll be doing a 2 hour presentation on Windows Azure, followed by Scott Hanselman on ASP.NET and Web API, and Brady Gaster on SignalR.  Learn more about the event and register to attend here.

    Cambridge, United Kingdom on June 21st, 2012
    Attend this FREE two-hour event in Cambridge (UK) the evening of Thursday, June 21st.  I'll be covering the new Windows Azure release – expect lots of demos and audience participation. Learn more about the event and register to attend here.

    London, United Kingdom on June 22nd, 2012
    Attend the FREE all-day Microsoft Cloud Day conference in London (UK) on Friday, June 22nd to learn about Windows Azure and Windows 8.  I'll be kicking off the event with a two hour keynote, and will be followed by some other fantastic speakers. Learn more about the conference and register to attend here.

    TechEd Europe in Amsterdam, Netherlands on June 26th, 2012
    I'll be at TechEd Europe this year where I'll be presenting on Windows Azure.  I'll be in the general session keynote and also have a foundation track session on Windows Azure on Tuesday, June 26th. Learn more about TechEd Europe and register to attend here.

    Amsterdam, Netherlands on June 26th, 2012
    Not attending TechEd Europe but near Amsterdam and still want to see me talk?  The good news is that the leaders of the Windows Azure User Group NL have set up a FREE event during the evening of Tuesday, June 26th where I'll be presenting along with Clemens Vasters. Learn more about the event and register to attend here.

    Dallas, Texas on July 10th, 2012
    I'll be in Dallas, Texas on Tuesday, July 10th presenting at a FREE all-day Microsoft Cloud Summit.  I'll kick off the day with a keynote, which will be followed by a great set of additional Windows Azure talks as well as a "Grill the Gu" Q&A session with me over lunch. Learn more about the event and register to attend here.

    Additional Events
    I'll be doing many more events and talks in the months ahead – I'll blog details of additional conferences/events I'm doing as they are fixed. Hope to see some of you at the above ones!

    Scott


  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don't always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us "lazy" in our design – we sometimes don't take the time to learn another architecture because the one we've spent so much time with can handle what we want to do. But there's a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you're in an architect role or not.  So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I'm not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I'm not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the "little grey cells" and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.


  • Azure flexibility in geolocation and specs

    - by bitbitbit
    Hi, I'm reading about Azure and there are a couple of things I don't understand:
    - If I choose to put a server in Europe, why can't I move it to another datacenter afterwards?
    - If I choose a size for a worker/web role, can I upgrade it to a bigger one afterwards?
    - I think I've read that I cannot have instances of a worker/web role with different sizes working together. Am I right? And if so, why?
    - Which is cheaper: a big VM with a nicely multithreaded worker role, or multiple single-threaded worker roles in several small VMs?
    Thanks in advance.
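    On the role-size questions, a hedged sketch of the likely reason (not from the original thread): in classic cloud services the VM size is declared once per role in the service definition, so every instance of a role shares one size, and changing it means editing the .csdef and redeploying the package. The instance count, by contrast, lives in the service configuration and can be changed at runtime. The service and role names below are illustrative:

        <!-- ServiceDefinition.csdef: one vmsize per role, fixed at deployment time -->
        <ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WebRole name="MyWebRole" vmsize="Small">
            <!-- sites, endpoints, etc. -->
          </WebRole>
        </ServiceDefinition>

        <!-- ServiceConfiguration.cscfg: the instance count can change without redeploying -->
        <ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="MyWebRole">
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>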


  • Azure Blobs - ArgumentNullException when calling UploadFile()

    - by Ariel
    I'm getting the following exception when trying to upload a file with the following code:

        string encodedUrl = "videos/Sample.mp4";
        CloudBlockBlob encodedVideoBlob = blobClient.GetBlockBlobReference(encodedUrl);
        Log(string.Format("Got blob reference for {0}", encodedUrl), EventLogEntryType.Information);
        encodedVideoBlob.Properties.ContentType = contentType;
        encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);

    I see the "Got blob reference" message, so I assume the reference resolves correctly.

        Void Run() C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs (40)
        System.ArgumentNullException: Value cannot be null.
        Parameter name: value
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFile(String fileName, BlobRequestOptions options)
           at EncoderWorkerRole.WorkerRole.ProcessJobOutput(IJob job, String videoBlobToEncodeUrl) in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 144
           at EncoderWorkerRole.WorkerRole.Run() in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 40

    Interestingly, I'm running that same snippet from an on-premises server, i.e., outside of Azure, and it works correctly. Ideas welcome, thanks!
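    A hedged place to look (an answer sketch, not from the original thread): the failure happens inside the StorageClient task machinery before a request goes out, which suggests one of the values fed into the request - the ContentType, a metadata value, or the file path - is null in the Azure environment while being set on-premises. Guarding the inputs and falling back to UploadFromStream can narrow it down; the fallback value below is an assumption:

        // Hypothetical precondition checks; the variable names match the snippet above.
        if (!File.Exists(localEncodedBlobPath))
            throw new InvalidOperationException("Missing local file: " + localEncodedBlobPath);
        if (!string.IsNullOrEmpty(contentType))
            encodedVideoBlob.Properties.ContentType = contentType; // only set when non-null
        if (!string.IsNullOrEmpty(description))
            encodedVideoBlob.Metadata[BlobProperty.Description] = description; // null metadata values are a suspect

        // Bypass UploadFile to rule out its path handling:
        using (var fs = File.OpenRead(localEncodedBlobPath))
        {
            encodedVideoBlob.UploadFromStream(fs);
        }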

    Read the article

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that logs show up immediately in my Azure Table Storage. I wonder if this is expected with custom trace listeners or because I am in a development environment. My diagnostics.wadcfg:

<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" />
  <Directories scheduledTransferPeriod="PT1M">
    <IISLogs container="wad-iis-logfiles" />
    <CrashDumps container="wad-crash-dumps" />
  </Directories>
  <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" />
</DiagnosticMonitorConfiguration>

I have changed my approach a bit. Now I am defining the listener in the web.config of my web role. I notice that when I set autoflush to true in the web.config, everything works, but scheduledTransferPeriod is not honored because the Flush method pushes straight to table storage. I would like scheduledTransferPeriod to trigger the flush, or to trigger a flush after a certain number of log entries, i.e. when the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the custom TraceListener where I can listen for the scheduledTransferPeriod?

<system.diagnostics>
  <!-- http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx
       By default autoflush is false.
       By default useGlobalLock is true. While we try to be threadsafe, we keep this
       default for now. Later, if we would like to increase performance, we can remove this.
       See http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx -->
  <trace>
    <listeners>
      <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" />
      <remove name="Default" />
    </listeners>
  </trace>
</system.diagnostics>

I have modified the custom trace listener to the following:

namespace Pos.Services.Implementation
{
    class TableTraceListener : TraceListener
    {
        #region Fields

        // Connection string for Azure storage
        readonly string _connectionString;

        // Custom storage table for logs. TODO: put in config
        readonly string _diagnosticsTable;

        [ThreadStatic]
        static StringBuilder _messageBuffer;

        readonly object _initializationSection = new object();
        bool _isInitialized;
        CloudTableClient _tableStorage;
        readonly object _traceLogAccess = new object();
        readonly List<LogEntry> _traceLog = new List<LogEntry>();

        #endregion

        #region Constructors

        public TableTraceListener() : base("TableTraceListener")
        {
            _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
            _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
        }

        #endregion

        #region Methods

        /// <summary>
        /// Flushes the entries to the storage table
        /// </summary>
        public override void Flush()
        {
            if (!_isInitialized)
            {
                lock (_initializationSection)
                {
                    if (!_isInitialized)
                    {
                        Initialize();
                    }
                }
            }

            var context = _tableStorage.GetTableServiceContext();
            context.MergeOption = MergeOption.AppendOnly;
            lock (_traceLogAccess)
            {
                _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
                _traceLog.Clear();
            }

            if (context.Entities.Count > 0)
            {
                context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null);
            }
        }

        /// <summary>
        /// Creates the storage table object. This class does not need to be locked because the caller is locked.
        /// </summary>
        private void Initialize()
        {
            var account = CloudStorageAccount.Parse(_connectionString);
            _tableStorage = account.CreateCloudTableClient();
            _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
            _isInitialized = true;
        }

        public override bool IsThreadSafe
        {
            get { return true; }
        }

        #region Trace and Write Methods

        /// <summary>
        /// Writes the message to a string buffer
        /// </summary>
        /// <param name="message">the Message</param>
        public override void Write(string message)
        {
            if (_messageBuffer == null)
                _messageBuffer = new StringBuilder();
            _messageBuffer.Append(message);
        }

        /// <summary>
        /// Writes the message with a line breaker to a string buffer
        /// </summary>
        /// <param name="message">the Message</param>
        public override void WriteLine(string message)
        {
            if (_messageBuffer == null)
                _messageBuffer = new StringBuilder();
            _messageBuffer.AppendLine(message);
        }

        /// <summary>
        /// Appends the trace information and message
        /// </summary>
        /// <param name="eventCache">the Event Cache</param>
        /// <param name="source">the Source</param>
        /// <param name="eventType">the Event Type</param>
        /// <param name="id">the Id</param>
        /// <param name="message">the Message</param>
        public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
        {
            base.TraceEvent(eventCache, source, eventType, id, message);
            AppendEntry(id, eventType, eventCache);
        }

        /// <summary>
        /// Adds the trace information to a collection of LogEntry objects
        /// </summary>
        /// <param name="id">the Id</param>
        /// <param name="eventType">the Event Type</param>
        /// <param name="eventCache">the EventCache</param>
        private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
        {
            if (_messageBuffer == null)
                _messageBuffer = new StringBuilder();

            var message = _messageBuffer.ToString();
            _messageBuffer.Length = 0;

            if (message.EndsWith(Environment.NewLine))
                message = message.Substring(0, message.Length - Environment.NewLine.Length);

            if (message.Length == 0)
                return;

            var entry = new LogEntry()
            {
                PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
                RowKey = string.Format("{0:D19}", eventCache.Timestamp),
                EventTickCount = eventCache.Timestamp,
                Level = (int)eventType,
                EventId = id,
                Pid = eventCache.ProcessId,
                Tid = eventCache.ThreadId,
                Message = message
            };

            lock (_traceLogAccess)
                _traceLog.Add(entry);
        }

        #endregion

        #endregion
    }
}
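There is no built-in event on TraceListener tied to scheduledTransferPeriod, so one option - purely a sketch, with invented names - is to approximate the schedule yourself with a timer plus a buffer-size threshold:

// Sketch: possible additions to the TableTraceListener above. FlushThreshold
// and _flushTimer are hypothetical names; the interval is hard-coded here but
// could be read from configuration instead.
private const int FlushThreshold = 100;          // flush once this many entries are buffered
private System.Threading.Timer _flushTimer;

public TableTraceListener() : base("TableTraceListener")
{
    _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
    _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");

    // Poor man's scheduledTransferPeriod: push buffered entries every 5 minutes.
    _flushTimer = new System.Threading.Timer(_ => Flush(), null,
        TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
}

// And at the end of AppendEntry, flush early when the buffer fills up:
lock (_traceLogAccess)
{
    _traceLog.Add(entry);
    if (_traceLog.Count >= FlushThreshold)
        System.Threading.ThreadPool.QueueUserWorkItem(_ => Flush()); // keep the I/O off the caller's thread
}

A final Flush() call from RoleEntryPoint.OnStop would cover the server-shutdown case.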

    Read the article

  • "One of the request inputs not valid" error when attempting to update Azure Table Storage

    - by sako73
I am attempting to update an entry in Azure Table Storage. The function is:

public void SaveBug(DaBug bug)
{
    bug.PartitionKey = "bugs";
    bug.Timestamp = DateTime.UtcNow;

    if (bug.RowKey == null || bug.RowKey == string.Empty)
    {
        bug.RowKey = Guid.NewGuid().ToString();
        _context.AddObject(c_TableName, bug);
    }
    else
    {
        _context.AttachTo(c_TableName, bug);
        _context.UpdateObject(bug);
    }

    _context.SaveChanges();
}

If it is a new entry (the "bug.RowKey == null" path), then it works fine. If it is an update to an existing entity, then the "AttachTo" and the "UpdateObject" calls work, but when it gets to "SaveChanges" it throws the "One of the request inputs not valid" exception. The class that is being stored is:

[DataContract]
[DataServiceKey("RowKey")]
public class DaBug
{
    [DataMember] public bool IsOpen { get; set; }
    [DataMember] public string Title { get; set; }
    [DataMember] public string Description { get; set; }
    [DataMember] public string SubmittedBy { get; set; }
    [DataMember] public DateTime SubmittedDate { get; set; }
    [DataMember] public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public string PartitionKey { get; set; }
}

Does anyone know what the problem is? Thanks for any help.
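Two usual suspects for this error, sketched below rather than confirmed: attaching without an ETag (the three-argument AttachTo overload with "*" forces an unconditional update), and an uninitialized DateTime member, since Table Storage rejects dates before 1601 and default(DateTime) is exactly that:

// Sketch of the same method's update branch with both guards applied.
if (bug.SubmittedDate == default(DateTime))
    bug.SubmittedDate = DateTime.UtcNow;       // default(DateTime) is out of range for Table Storage

if (string.IsNullOrEmpty(bug.RowKey))
{
    bug.RowKey = Guid.NewGuid().ToString();
    _context.AddObject(c_TableName, bug);
}
else
{
    _context.Detach(bug);                      // in case the instance is already tracked
    _context.AttachTo(c_TableName, bug, "*");  // "*" = unconditional update, no ETag check
    _context.UpdateObject(bug);
}
_context.SaveChanges();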

    Read the article

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog. I have tried following Brad's "How to deploy your app to the Cloud" tutorial, however I'm struggling with it, even though it is in the same context as the first tutorial.

The Question: Is the application structure essentially the same as the original "non-cloud based version"? If not, which parts are different? (I get that there is a Cloud Service project added to the solution - but what else?)

Connection String Issue: In my "non-cloud based application", I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:

<connectionStrings>
  <add name="InmZenEntities"
       connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>

However, the connection string that I get from SQL Azure looks like:

Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;

So how do I go about merging the two when I move the "non-cloud based application" to THE CLOUD? Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.
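For what it's worth, the usual way to merge the two is to keep the Entity Framework metadata wrapper and swap only the inner provider connection string. A sketch using the values from the question - the database name and credentials are the poster's placeholders, and the user@server form is what SQL Azure expects:

<!-- Sketch: the poster's EF connection string with the SQL Azure details
     substituted into the inner 'provider connection string'. -->
<connectionStrings>
  <add name="InmZenEntities"
       connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=InmarsatZenith;User ID=simongilbert@k12ioy1rsi;Password=myPassword;Trusted_Connection=False;Encrypt=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>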

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical file via code... which worked great and sped up the page load dramatically. This was of course before I realised that the files were only available for the duration of the role. Next I looked at a start-up task to create the files when the role was started - however, I then realised that any separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a start-up task to a function in the Run method of the role that checks the database every 10 minutes to see if changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it? Just to clarify what I'm trying to achieve:

- A View is created in the user interface running on the web role and stored in the database
- A separate web role (front end) has a client-side application with a VirtualPathProvider pointing View requests to local storage (LocalResource)
- A separate worker role creates the View structure and loads it into the client-side web role's local storage
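Local storage is per-instance and per-role, so a worker role cannot reach into a web role's LocalResource directly; the common pattern is to stage the files in blob storage and have each web-role instance pull them down itself. A rough sketch - the container name, blob names, and the "Views" local resource are invented for illustration:

// Worker role side - publish the generated view to a shared container.
var container = blobClient.GetContainerReference("views");
container.CreateIfNotExists();
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(renderedView)))
    container.GetBlockBlobReference("Home/Index.cshtml").UploadFromStream(ms);

// Web role side - e.g. on a timer inside Run(): copy blobs into this
// instance's own local storage, where the VirtualPathProvider reads them.
var localRoot = RoleEnvironment.GetLocalResource("Views").RootPath;  // "Views" defined in the csdef
var blob = container.GetBlockBlobReference("Home/Index.cshtml");
using (var fs = File.Create(Path.Combine(localRoot, "Index.cshtml")))
    blob.DownloadToStream(fs);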

    Read the article

  • Tellago && Tellago Studios 2010

    - by gsusx
With 2011 around the corner, we at Tellago and Tellago Studios have been spending a lot of time evaluating our successes and failures (yes, those too ;)) of 2010 and delineating some of our goals and strategies for 2011. When I look at 2010, here are some of the things that quickly jump off the page:

- Growing Tellago by 300%
- Launching a brand new company: Tellago Studios
- Expanding our customer base
- Establishing our business intelligence practice http://tellago.com/what-we-say/events/business-intelligence...(read more)

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
Three days ago I tweeted this:

Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily
- Jamie Thomson (@jamiet) March 24, 2012

Evidently I wasn't the only one that thought this was a good idea because, as you can see from the screenshot, that tweet has, so far, been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on, so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means.

*By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid - if you want to do that then you can get the bits and host it yourself.

This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done:

- Host additional databases (Northwind anyone??)
- Host in more datacentres (this first one is in Western Europe)
- Make a charitable donation

That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.

OK, with the prickly subject of begging for cash out of the way, let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure:

Server:   mhknbn2kdz.database.windows.net
Database: AdventureWorks2012
User:     sqlfamily
Password: sqlf@m1ly

That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up:

CREATE USER sqlfamily FOR LOGIN sqlfamily;
CREATE ROLE sqlfamilyrole;
EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';
GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;
GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;
GRANT SHOWPLAN TO sqlfamilyrole;
EXEC sp_addrolemember 'db_datareader','sqlfamilyrole';

You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net.

Lastly, just for a bit of fun I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card. Simply execute the following SQL statement (changing the values of course):

INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])
VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)');

[Id] is an IDENTITY field and there is a default constraint on [DT], hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE, so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :)

Thank you for reading this far and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you.

@jamiet

AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
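As a footnote to the entry above: if you'd rather test the connection from code than from SSMS, here is a minimal ADO.NET sketch using the published credentials (note the user@server form that SQL Azure expects; the sample query just assumes the standard Person.Person table in AdventureWorks2012):

using System;
using System.Data.SqlClient;

class Probe
{
    static void Main()
    {
        var cs = "Server=tcp:mhknbn2kdz.database.windows.net;" +
                 "Database=AdventureWorks2012;" +
                 "User ID=sqlfamily@mhknbn2kdz;Password=sqlf@m1ly;" +
                 "Encrypt=True;";
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT TOP 5 FirstName, LastName FROM Person.Person", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
        }
    }
}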

    Read the article

  • .NET 4.5 now supported with Windows Azure Web Sites

    - by ScottGu
    This week we finished rolling out .NET 4.5 to all of our Windows Azure Web Site clusters.  This means that you can now publish and run ASP.NET 4.5 based apps, and use  .NET 4.5 libraries and features (for example: async and the new spatial data-type support in EF), with Windows Azure Web Sites.  This enables a ton of really great capabilities - check out Scott Hanselman’s great post of videos that highlight a few of them. Visual Studio 2012 includes built-in publishing support to Windows Azure, which makes it really easy to publish and deploy .NET 4.5 based sites within Visual Studio (you can deploy both apps + databases).  With the Migrations feature of EF Code First you can also do incremental database schema updates as part of publishing (which enables a really slick automated deployment workflow). Each Windows Azure account is eligible to host 10 free web-sites using our free-tier.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today. In the next few days we’ll also be releasing support for .NET 4.5 and Windows Server 2012 with Windows Azure Cloud Services (Web and Worker Roles) – together with some great new Azure SDK enhancements.  Keep an eye out on my blog for details about these soon. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
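As a taste of the spatial data-type support mentioned above, a minimal EF5 sketch - the model and query are invented for illustration:

// DbGeography ships in System.Data.Spatial on .NET 4.5 / EF5.
using System.Data.Spatial;

public class Store
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DbGeography Location { get; set; }  // maps to SQL Server's geography type
}

// e.g. stores within 5 km of a point (Distance is in meters):
// var here = DbGeography.FromText("POINT(-122.336 47.610)");
// var near = db.Stores.Where(s => s.Location.Distance(here) < 5000);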

    Read the article

• .NET 4.5 now supported on Windows Azure Web Sites

    - by Leniel Macaferi
This week we finished rolling out .NET 4.5 to all of our clusters that host Windows Azure Web Sites. This means you can now publish and run ASP.NET 4.5-based applications, and use .NET 4.5 libraries and features (for example: async and the new spatial data type support in Entity Framework), with Windows Azure Web Sites. This enables a ton of great capabilities - check out Scott Hanselman's post with videos (in English) that highlight some of these features. Visual Studio 2012 includes built-in support for publishing to Windows Azure, which makes it very easy to publish and deploy .NET 4.5-based sites from Visual Studio (you can publish apps + databases). With the Migrations feature of the Entity Framework Code First approach you can also make incremental database schema updates as part of the publishing process (which enables a highly automated publishing workflow). Each Windows Azure account is eligible to host up to 10 web sites for free, using our "Shared" scalable tier. If you don't have a Windows Azure account yet, you can sign up for a free trial and start using these features today. In the next few days we will also release support for .NET 4.5 and Windows Server 2012 for Windows Azure Cloud Services (Web and Worker Roles) - together with some great new improvements to the Windows Azure SDK. Keep an eye on my blog for more information about these releases soon. Hope this helps, - Scott PS: In addition to the blog, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu. Text translated from the original post by Leniel Macaferi.

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspx

Didn't make it to TechEd this year? Don't worry! This month, we'll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros. Today's article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform.

NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future. And this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges.

In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs.

Which scenarios does the new enhancement best fit?

Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as:

- On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.
- "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start up the remaining VMs only when needed to support peak loads.
- Disaster Recovery - Start up "cold" VMs when needed to recover from disaster scenarios.

BUT... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned. These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses. This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric.

In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state. Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity. Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below. Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each...

How many ways can I shutdown a VM?
In Windows Azure Infrastructure Services, there are three general ways to safely shut down VMs:

1. Shutdown VM via Windows Azure Management Portal
2. Shutdown Guest Operating System inside the VM
3. Stop VM via Windows PowerShell using the Windows Azure PowerShell Module

Although each of these options performs a safe shutdown of the guest operating system and the VM itself, each option handles the VM shutdown end state differently.

Shutdown VM via Windows Azure Management Portal

When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shut down and "deallocated" from fabric resources. [Figure: Shutdown button on Virtual Machines page in Windows Azure Management Portal] When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped (Deallocated)" status. [Figure: Virtual Machine in a "Stopped (Deallocated)" status]

"Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to accumulate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address (called "Dynamic IPs" or "DIPs" in Windows Azure) the next time it is started.

TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs. Virtual networks permit you to assign a specific IP address space for use with VMs that are assigned to that virtual network. As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using.

What about consistency of external IP addresses? Great question! External IP addresses (called "Virtual IPs" or "VIPs" in Windows Azure) are associated with the cloud service in which one or more Windows Azure VMs are running. As long as at least one VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved. If all VMs inside a cloud service are in a "Stopped (Deallocated)" status, then the cloud service may receive a different VIP when VMs are next restarted.

TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service.

Shutdown Guest Operating System inside the VM

When performing a Guest OS shutdown or restart (i.e., a shutdown or restart operation initiated from the Guest OS running inside the VM), the VM configuration will not be deallocated from fabric resources. In this scenario the VM is shown with a "Stopped" VM status rather than the "Stopped (Deallocated)" VM status described above. [Figure: Virtual Machine in a "Stopped" status] Note that it may take a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than going through an Azure management endpoint.

VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future.

Stop VM via Windows PowerShell

In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell:

- When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped (Deallocated)" status, just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.
- When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is, the VM will be left in a "Stopped" status, just like the end result when performing a Guest OS shutdown operation.

So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis.

TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module. So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality.

Want to Learn More about Windows Azure Infrastructure Services?

To learn more about Windows Azure Infrastructure Services, be sure to check out these additional FREE resources:

- Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!
- Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!

Looking forward to seeing you in the Cloud!
- Keith

Build Your Lab! Download Windows Server 2012
Don't Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines
Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
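To make the two Stop-AzureVM behaviors described above concrete, here is a quick sketch; the service and VM names are made up, and it assumes the Windows Azure PowerShell module is imported with a subscription selected:

# Deallocate: compute billing stops, but the DIP (and possibly the cloud
# service's VIP) may change on the next start.
Stop-AzureVM -ServiceName "contoso-svc" -Name "web01"

# Keep the fabric reservation: billing continues, IP addresses are preserved.
Stop-AzureVM -ServiceName "contoso-svc" -Name "web01" -StayProvisioned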

    Read the article

  • Can we run windowservice or EXE in Azure website or in Virtual Machine?

    - by Arun Rana
I have experience with cloud services/hosted services on Azure. However, for another project I am confused about what to select in terms of functionality. I have a project (a 2-tier ASP.NET app) with which I need to run a Windows service or EXE that will do some work every day (like fetching data), so my points of confusion are as below.

Regarding Azure Web Sites:
- Can I access RDP if I move to a reserved instance?
- Can I run a Windows service/EXE?

Regarding Virtual Machines:
- Is it the same as a dedicated server?
- Can I use WASD (Windows Azure SQL Database) as the database from an application residing in the same VM?
- I think I can run any EXE and install anything; however, Azure is going to recycle this - and if so, what happens on recycling?
- Can I use the new Windows Server 2012 (VHD) in it?

Azure Web Sites and Virtual Machines are both in preview mode, so is it reliable to use them for production?
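One pattern worth noting alongside the question: in the cloud-service model the poster already knows, recurring daily work is often just a loop in a worker role's Run method rather than a separate Windows service or EXE. A minimal sketch - FetchData is a stand-in for the poster's daily logic:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Sketch: a worker role that wakes up once a day to do its work.
public class DailyWorker : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            FetchData();                          // the day's work
            Thread.Sleep(TimeSpan.FromDays(1));   // crude schedule; a real one would track clock time
        }
    }

    private void FetchData() { /* ... */ }
}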

    Read the article

  • Newbie error with Windows Azure–403 not authorised

    - by simonsabin
Azure uses certificates to manage secure access to the management side of things. I've started looking into Azure for some SQLBits work, and in doing so I needed to configure certificates to connect. VS will nicely create a cert for you and copy a path for you to use to upload to Azure. I went to the Azure portal ( https://windows.azure.com/Default.aspx ), found the certificate bit and added the certificate. However, after doing this I couldn't connect. After trying numerous fruitless things, including...(read more)

    Read the article

  • SQLAuthority News – Windows Azure Training Kit Updated October 2012

    - by pinaldave
Microsoft has recently released an update to the Windows Azure Training Kit. Earlier this month they updated the kit and included quite a lot of new material. The training kit now contains 47 hands-on labs, 24 demos and 38 presentations. The best part is that the kit is now available to download in two different formats: 1) Full Package (324.5 MB) and 2) Web Installer (2.4 MB). The full package enables you to download all of the hands-on labs and presentations to your local machine. The Web Installer allows you to select and download just the specific hands-on labs and presentations that you need. This Windows Azure Training Kit contains hands-on labs, presentations, videos and demos. I encourage all of you to try this out as well. The kit also contains details about samples and tools. The training kit is the most authoritative learning resource on Windows Azure. You can download the Windows Azure Training Kit from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • OData.org updated, gives clues about future SQL Azure enhancements

    - by jamiet
The OData website at http://www.odata.org/home has been updated today to provide a much more engaging page than the previous sterile attempt. Moreover, it's now chock-full of information about the progress of OData, including a blog, a list of products that produce OData feeds plus some live OData feeds that you can hit up today, a list of OData-compliant clients, and an FAQ. Most interestingly, SQL Azure is listed as a producer of OData feeds: "If you have a SQL Azure database account you can easily expose an OData feed through a simple configuration portal. You can select authenticated or anonymous access and expose different OData views according to permissions granted to the specified SQL Azure database user." A preview of this upcoming SQL Azure service is available at https://www.sqlazurelabs.com/OData.aspx and it enables you to select one of your existing SQL Azure databases and, with a few clicks, turn it into an OData feed. It looks as though SQL Azure will soon be added to the stable of products that natively support OData - good news indeed. @Jamiet Related blog posts: OData gunning for ubiquity across Microsoft products
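For readers who haven't used OData before, feeds like these are queried with standard OData URI conventions. A couple of illustrative requests - the service root below is made up, only the query options are real:

# Hypothetical service root; $top, $filter and $orderby are standard OData query options.
https://example.sqlazurelabs.com/OData.svc/Customers?$top=10
https://example.sqlazurelabs.com/OData.svc/Customers?$filter=Country eq 'UK'&$orderby=Name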

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >