Search Results

Search found 1999 results on 80 pages for 'temporary'.


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx
    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, block blobs are very powerful for uploading large files: we can upload the blocks in parallel and provide a pause-resume feature.
    There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer, helping them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage by the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed and save them as blocks into Windows Azure Blob Storage.
    Traditional Upload, Works with Limitation
    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } Then, in the backend controller, we retrieve the whole content of the file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. This code has been well covered in the community.
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: }
    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limit on the request length, and the maximum request length is defined in the web.config file; it is a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.
    Upload in Chunks through HTML5 and JavaScript
    In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can control the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the chunk size is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.
    It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.
    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from byte index 512 up to (but not including) byte index 768, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload.
    We will use the File interface to represent the file the user selected in the browser and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file in 512KB chunks, we can read it as 512KB blobs by calling File.slice in a loop.
    Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB and a button to start the upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have a JavaScript function to upload the file in chunks when the user clicks the button. 1: <script type="text/javascript"> 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script>
    First we need to ensure the client browser supports the interfaces we are going to use. Just try to access File, Blob and FormData on the "window" object. If any of them is "undefined" the condition will be "false", which means the browser doesn't support these features and it's time to get it updated. FormData is another new feature we are going to use later. It can generate a temporary form for us. We will use this interface to create a form containing the chunk and its associated metadata when we invoke the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces with its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher.
    After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array along with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into the blob in Azure Storage once all chunks have been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
    4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: });
    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; when the server receives a chunk it will upload it as a block into Blob Storage, and finally commit them all with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which means they are asynchronous. So we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we will define an array and push the chunk upload functions into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: });
    As you can see below, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
    13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: });
    Then we invoke these functions one by one using async.js. Once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });
    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which assembles them into the blob in Windows Azure Storage.
    Save Chunks as Blocks into Blob Storage
    Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller that only accepts HTTP POST requests. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and chunk content from the Request.Form object, which were passed from the client side.
    Then I used the Windows Azure SDK to create a blob container (in this case we use the container named "test") and create a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with its index. In Blob Storage each block must have an index (ID) associated with it, so that finally we can put all the blocks together as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: }
    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list (the block ID list) from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that our blob appears in the container, ready to be downloaded. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: }
    Now we have finished all the code we need. Below is the full client-side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
    15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });
    In this way all chunks will be uploaded to the server at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });
    Summary
    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element with which we can pass the chunk along with some metadata to the backend service.
    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximum upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.
    Regarding the chunk size and the parallel limit, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server-side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1 core CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100MB.
    - Each case was tested 3 times.
    Some thoughts from the test results, but not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance difference between parallel with 4 threads and with 8 threads.
    - Transferring binary provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets better performance.
    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
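    A side note, not from the original post: the same "cap the concurrency" idea can also be applied to the server-side Parallel.ForEach used in the very first example, for instance via ParallelOptions.MaxDegreeOfParallelism. A minimal sketch, assuming blockDataList and blob are the Dictionary<string, byte[]> and block blob reference from that example:
    // Sketch only: limit how many blocks are uploaded concurrently on the server,
    // mirroring the client-side async.parallelLimit(putBlocks, 4, ...) approach.
    // Requires: using System.IO; using System.Linq; using System.Threading.Tasks;
    var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
    Parallel.ForEach(blockDataList, options, bi =>
    {
        using (var ms = new MemoryStream(bi.Value))
        {
            blob.PutBlock(bi.Key, ms, null);   // same PutBlock call as in the first example
        }
    });
    blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());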


  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow-on from my previous post where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 environments that you want to upgrade to 11.1.1.4. In this example I'm just upgrading SOA Suite on OEL 64-bit, but the steps will be the same; some of the downloads may be different based on your environment. To upgrade to 11.1.1.4 you need to have access to http://support.oracle.com as this is where the downloads reside. Oracle provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using the OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html). The high-level steps to upgrade are as follows:
    - Download the software
    - Shut down your SOA environment
    - Upgrade WLS to 11.1.1.4
    - Upgrade SOA Suite to 11.1.1.4
    - Upgrade OSB to 11.1.1.4
    - Upgrade the MDS schemas
    Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads. If you are using 64-bit, use the generic version. The downloads are found at the following location - http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC For the purpose of this post I downloaded the following patches:
    - 11060985 - WLS Server Generic
    - 11060960 - SOA Suite
    - 11061005 - OSB Suite
    You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have provided the link from Oracle Support.
    - 11060956 - RCU
    Make sure you have the Java executable set in your PATH, e.g. export PATH=$JAVA_HOME/bin:$PATH. Make sure your whole WebLogic environment has been shut down before performing the upgrade. Extract the WLS patch 11060985 to a temporary directory and start the installer: java -jar wls1034_upgrade_generic.jar Please note that if you are not running 64-bit, the upgrade executable will be just a .bin file which you can execute directly. Choose the right Oracle home for your WebLogic Server install. In the Register for Security Updates step you can enter your details or just click Next; if you do not enter details, confirm that you don't want to receive these updates. Select the products you want to upgrade and click Next. It is recommended that you accept the defaults. Confirm the directories that will be upgraded. The upgrade of WLS has now been completed.
    Extract both of your SOA downloads to a temporary directory and run the installer found in Disk1: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre Please note that the Java location and version may be different for your environment. Skip the Software Updates screen. Ensure your system meets the prerequisites. Set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server; since you are upgrading from 11.1.1.x you will be on WebLogic. Start the install. Once the upgrade of SOA Suite has completed, accept the defaults to finish.
    In my environment I have OSB installed, so I need to upgrade this next. If you don't have OSB you can go straight to completing the DB schema updates at Step 24. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder.
    ./runInstaller -jreLoc /java/jdk1.6.0_20/jre
    Skip the software updates, select the Oracle home for your environment, accept the warning to continue the upgrade, and point to the location of your WebLogic Server installation. Install the OSB upgrade. Once the upgrade has completed, accept the defaults.
    Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed, and execute the following command to update the MDS schema. Please note that in my examples I have the context set to DEV; yours may be different. This means that all my schemas are prefixed by DEV.
    ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS
    You will be asked for the passwords for sys and the schema:
    Enter the database administrator password for "sys":
    Enter the schema password for schema user "DEV_MDS":
    Change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant for SOA Suite is installed, and execute the following command to update the SOA and BAM schemas:
    ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA
    To check that the schemas have been upgraded correctly, run the following SQL as sysdba:
    SELECT owner, version, status FROM schema_version_registry;
    OWNER           VERSION      STATUS
    --------------- ------------ -------
    DEV_MDS         11.1.1.4.0   VALID
    DEV_SOAINFRA    11.1.1.4.0   VALID
    Don't stress if the versions are not all sitting at 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.


  • How to nest transactions nicely - "begin transaction" vs "save transaction" and SQL Server

    - by Brian Biales
    Do you write stored procedures that might be used by others? And those others may or may not have already started a transaction? And your SP does several things, but if any of them fail, you have to undo them all and return with a code indicating it failed?
    Well, I have written such code, and it wasn't working right until I finally figured out how to handle the case where we are already in a transaction, as well as the case where the caller did not start a transaction. When a problem occurred, my "ROLLBACK TRANSACTION" would roll back not just my nested transaction, but the caller's transaction as well. So when I tested the procedure stand-alone it seemed to work fine, but when others used it, it would cause a problem if it had to roll back. When something went wrong in my procedure, their entire transaction was rolled back. This was not appreciated.
    Now, I knew one could "nest" transactions, but the technical documentation was very confusing. And I still have not found the approach below documented anywhere. So here is a very brief description of how I got it to work; I hope you find it helpful.
    My example is a stored procedure that must figure out on its own if the caller has started a transaction or not. This can be done in SQL Server by checking the @@TRANCOUNT value. If no BEGIN TRANSACTION has occurred yet, this will have a value of 0. Any number greater than zero means that a transaction is in progress. If there is no current transaction, my SP begins a transaction. But if a transaction is already in progress, my SP uses SAVE TRANSACTION and gives it a name. SAVE TRANSACTION creates a "save point". Note that creating a save point has no effect on @@TRANCOUNT. So my SP starts with something like this:
    DECLARE @startingTranCount int
    SET @startingTranCount = @@TRANCOUNT
    IF @startingTranCount > 0
        SAVE TRANSACTION mySavePointName
    ELSE
        BEGIN TRANSACTION
    -- ...
    Then, when ready to commit the changes, you only need to commit if we started the transaction ourselves:
    IF @startingTranCount = 0
        COMMIT TRANSACTION
    And finally, to roll back just your changes so far:
    -- Roll back changes...
    IF @startingTranCount > 0
        ROLLBACK TRANSACTION mySavePointName
    ELSE
        ROLLBACK TRANSACTION
    Here is some code that you can try that will demonstrate how the save points work inside a transaction. This sample code creates a temporary table, then executes selects and updates, documenting what is going on, then deletes the temporary table. If running in SQL Server Management Studio, set Query Results to Text for best readability of the results.
    -- Create a temporary table to test with, we'll drop it at the end.
CREATE TABLE #ATable( [Column_A] [varchar](5) NULL ) ON [PRIMARY] GO SET NOCOUNT ON -- Ensure just one row - delete all rows, add one DELETE #ATable -- Insert just one row INSERT INTO #ATable VALUES('000') SELECT 'Before TRANSACTION starts, value in table is: ' AS Note, * FROM #ATable SELECT @@trancount AS CurrentTrancount --insert into a values ('abc') UPDATE #ATable SET Column_A = 'abc' SELECT 'UPDATED without a TRANSACTION, value in table is: ' AS Note, * FROM #ATable BEGIN TRANSACTION SELECT 'BEGIN TRANSACTION, trancount is now ' AS Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '123' SELECT 'Row updated inside TRANSACTION, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION MySavepoint SELECT 'Save point MySavepoint created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '456' SELECT 'Updated after MySavepoint created, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION point2 SELECT 'Save point point2 created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '789' SELECT 'Updated after point2 savepoint created, value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION point2 SELECT 'Just rolled back savepoint "point2", value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION MySavepoint SELECT 'Just rolled back savepoint "MySavepoint", value in table is: ' AS Note, * FROM #ATable SELECT 'Both save points were rolled back, transaction count still:' as Note, @@TRANCOUNT AS TranCount ROLLBACK TRANSACTION SELECT 'Just rolled back the entire transaction..., value in table is: ' AS Note, * FROM #ATable DROP TABLE #ATable The output should look like this: Note                                           Column_A ---------------------------------------------- -------- Before TRANSACTION starts, value in table is:  000 CurrentTrancount ---------------- 0 Note                                               Column_A -------------------------------------------------- -------- UPDATED without a TRANSACTION, value in table is:  abc Note                                 TranCount ------------------------------------ ----------- BEGIN TRANSACTION, trancount is now  1 Note                                                Column_A --------------------------------------------------- -------- Row updated inside TRANSACTION, value in table is:  123 Note                                                   TranCount ------------------------------------------------------ ----------- Save point MySavepoint created, transaction count now: 1 Note                                                   Column_A ------------------------------------------------------ -------- Updated after MySavepoint created, value in table is:  456 Note                                              TranCount ------------------------------------------------- ----------- Save point point2 created, transaction count now: 1 Note                                                        Column_A ----------------------------------------------------------- -------- Updated after point2 savepoint created, value in table is:  789 Note                                                     Column_A -------------------------------------------------------- -------- Just rolled back savepoint "point2", value in table is:  456 Note                                                          Column_A ------------------------------------------------------------- -------- Just rolled back savepoint "MySavepoint", value in table is:  123 
Note                                                        TranCount ----------------------------------------------------------- ----------- Both save points were rolled back, transaction count still: 1 Note                                                            Column_A --------------------------------------------------------------- -------- Just rolled back the entire transaction..., value in table is:  abc
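    For illustration, a caller that has already started its own transaction before invoking such a stored procedure might look like this in C# (a sketch only; the procedure name usp_DoSeveralThings and the connection string are hypothetical):
    // Requires: using System.Data; using System.Data.SqlClient;
    using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
    {
        conn.Open();
        // The caller starts its own transaction, so inside the SP @@TRANCOUNT will be > 0
        // and the SP will create a save point instead of beginning a new transaction.
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            try
            {
                var cmd = new SqlCommand("usp_DoSeveralThings", conn, tx)
                {
                    CommandType = CommandType.StoredProcedure
                };
                cmd.ExecuteNonQuery();

                // ... more work on the same transaction ...

                tx.Commit();
            }
            catch
            {
                // If the SP rolled back to its save point, only its own changes were
                // undone; the caller still decides the fate of the outer transaction here.
                tx.Rollback();
                throw;
            }
        }
    }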


  • any clue in these logs why keyboard audio and internet are messed up

    - by mmj
    Jun 7 00:01:18 Isis lightdm: pam_unix(lightdm-autologin:session): session opened for user mimi by (uid=0) Jun 7 00:01:18 Isis lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Jun 7 00:01:26 Isis polkitd(authority=local): Registered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session1 (system bus name :1.36 [/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) Jun 7 00:01:29 Isis dbus[610]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.44" (uid=1000 pid=1763 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.15" (uid=0 pid=1219 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jun 7 00:07:55 Isis sudo: pam_unix(sudo:auth): authentication failure; logname=mimi uid=1000 euid=0 tty=/dev/pts/1 ruser=mimi rhost= user=mimi Jun 7 00:08:11 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/add-apt-repository ppa:colingille/freshlight Jun 7 00:08:11 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 00:08:32 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 00:11:20 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/apt-get install gparted Jun 7 00:11:20 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 00:11:59 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 00:17:02 Isis CRON[2651]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 00:17:02 Isis CRON[2651]: pam_unix(cron:session): session closed for user root Jun 7 00:17:32 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain ONE-SHOT authorization for action com.ubuntu.pkexec.gparted for unix-process:2655:96838 [/bin/sh /usr/bin/gparted-pkexec] (owned by unix-user:mimi) Jun 7 00:17:32 Isis pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000) Jun 7 00:17:32 Isis pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device Jun 7 00:17:32 Isis pkexec[2657]: mimi: Executing command [USER=root] [TTY=unknown] [CWD=/home/mimi] [COMMAND=/usr/sbin/gparted] Jun 7 00:48:15 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain ONE-SHOT authorization for action com.ubuntu.pkexec.gparted for unix-process:3813:281120 [/bin/sh /usr/bin/gparted-pkexec] (owned by unix-user:mimi) Jun 7 00:48:15 Isis pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000) Jun 7 00:48:15 Isis pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device Jun 7 00:48:15 Isis pkexec[3815]: mimi: Executing command [USER=root] [TTY=unknown] [CWD=/home/mimi] [COMMAND=/usr/sbin/gparted] Jun 7 01:17:01 Isis CRON[3960]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 01:17:01 Isis CRON[3960]: pam_unix(cron:session): session closed for user root Jun 7 02:08:52 Isis gnome-screensaver-dialog: gkr-pam: unlocked login keyring Jun 7 02:17:01 Isis CRON[4246]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 02:17:01 Isis CRON[4246]: pam_unix(cron:session): session closed for user root Jun 7 02:17:05 Isis sudo: 
mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/apt-get install unetbootin Jun 7 02:17:05 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 02:17:57 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:18:59 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:19:26 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:33:21 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:40:04 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin rootcheck=no Jun 7 02:40:04 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 03:17:01 Isis CRON[5506]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 03:17:01 Isis CRON[5506]: pam_unix(cron:session): session closed for user root Jun 7 03:33:24 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: mimi : 3 incorrect password attempts ; TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin showall=yes 'rootcheck=no' Jun 7 03:33:43 Isis sudo: unable 
to execute /usr/sbin/sendmail: No such file or directory Jun 7 04:17:01 Isis CRON[6119]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 04:17:01 Isis CRON[6119]: pam_unix(cron:session): session closed for user root Jun 7 04:18:35 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.79 [/usr/bin/python /usr/bin/landscape-client-ui-install] (owned by unix-user:mimi) Jun 7 04:19:11 Isis groupadd[6702]: group added to /etc/group: name=landscape, GID=127 Jun 7 04:19:11 Isis groupadd[6702]: group added to /etc/gshadow: name=landscape Jun 7 04:19:11 Isis groupadd[6702]: new group: name=landscape, GID=127 Jun 7 04:19:11 Isis useradd[6706]: new user: name=landscape, UID=115, GID=127, home=/var/lib/landscape, shell=/bin/false Jun 7 04:19:12 Isis usermod[6711]: change user 'landscape' password Jun 7 04:19:12 Isis chage[6716]: changed password expiry for landscape Jun 7 04:19:37 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6146:1543697 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:20:20 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6832:1555313 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:21:04 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:6827:1555123 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:21:08 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:6827:1555123 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:21:44 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.87 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:22:27 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:7830:1567424 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:25:50 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:7876:1584865 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:25:52 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action 
com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:7876:1584865 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 05:11:57 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.95 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 05:17:02 Isis CRON[8708]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 05:17:02 Isis CRON[8708]: pam_unix(cron:session): session closed for user root Jun 7 05:28:03 Isis lightdm: pam_unix(lightdm-autologin:session): session opened for user mimi by (uid=0) Jun 7 05:28:03 Isis lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Jun 7 05:28:17 Isis polkitd(authority=local): Registered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session1 (system bus name :1.32 [/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) Jun 7 05:28:32 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.44" (uid=1000 pid=1736 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jun 7 06:17:01 Isis CRON[2391]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 06:17:02 Isis CRON[2391]: pam_unix(cron:session): session closed for user root Jun 7 06:25:02 Isis CRON[2492]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 06:25:02 Isis CRON[2492]: pam_unix(cron:session): session closed for user root Jun 7 07:17:01 Isis CRON[3174]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 07:17:01 Isis CRON[3174]: pam_unix(cron:session): session closed for user root Jun 7 07:30:01 Isis CRON[3397]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 07:30:01 Isis CRON[3397]: pam_unix(cron:session): session closed for user root Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:01 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output Jun 7 08:09:01 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): 
conversation failed Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:09:59 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output Jun 7 08:09:59 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 08:10:55 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output Jun 7 08:10:55 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 08:17:01 Isis CRON[4215]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 08:17:01 Isis CRON[4215]: pam_unix(cron:session): session closed for user root Jun 7 09:17:02 Isis CRON[4766]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 09:17:02 Isis CRON[4766]: pam_unix(cron:session): session closed for user root Jun 7 10:17:02 Isis CRON[5046]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 10:17:02 Isis CRON[5046]: pam_unix(cron:session): session closed for user root Jun 7 11:17:02 Isis CRON[5325]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 11:17:02 Isis CRON[5325]: pam_unix(cron:session): session closed for user root Jun 7 12:17:01 Isis CRON[5617]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 12:17:01 Isis CRON[5617]: pam_unix(cron:session): session closed for user root Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): authentication failure; logname= uid=1000 euid=1000 tty=:0.0 ruser= rhost= user=mimi Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): getting password (0x00000388) Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): pam_get_item returned a password Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_USER_UNKNOWN (10), NTSTATUS: NT_STATUS_NO_SUCH_USER, Error message was: No such user Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): conversation failed Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): auth could not identify password for [mimi] Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): getting password (0x00000388) Jun 7 13:08:08 Isis lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) Jun 7 13:08:08 Isis lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :1 Jun 7 13:08:13 Isis lightdm: pam_succeed_if(lightdm:auth): 
requirement "user ingroup nopasswdlogin" not met by user "mimi" Jun 7 13:08:16 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.91" (uid=104 pid=5961 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jun 7 13:08:18 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.98" (uid=104 pid=5999 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jun 7 13:10:15 Isis lightdm: pam_unix(lightdm:session): session closed for user lightdm Jun 7 13:17:02 Isis CRON[6181]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 13:17:02 Isis CRON[6181]: pam_unix(cron:session): session closed for user root Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 13:55:14 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 13:55:14 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 14:02:33 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6736:3087856 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 14:02:51 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6752:3089992 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 14:03:14 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6763:3092515 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 14:17:01 Isis CRON[6933]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 14:17:01 Isis CRON[6933]: pam_unix(cron:session): session closed for user root Jun 7 15:17:02 Isis CRON[7611]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 15:17:02 Isis CRON[7611]: pam_unix(cron:session): session closed for user root

    Read the article

  • How do I get the C# Winforms Webbrowser control to show PDF content embedded in the html file as bin

    - by uniball
    I am using a WinForms WebBrowser control to display HTML content stored in my SQL DB table. However, some of my contents are PDFs, which I intend to store as binary data in the SQL DB and feed to the WebBrowser control. I wish to avoid the hassle of storing the binary content in a temporary file on the client machine and then creating HTML container code to reference this temporary PDF file. Is there a way in which I can embed the PDF binary content into the HTML directly and show that HTML in the WebBrowser control? I found this previous thread referring to a COM 'hack': http://stackoverflow.com/questions/290035/how-do-i-get-a-c-webbrowser-control-to-show-jpeg-files-raw Is there a simpler, easier way? Thanks, uniball
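    A minimal sketch of the data-URI idea is below, assuming the PDF bytes are already loaded from the database; GetPdfFromDatabase and recordId are hypothetical placeholders, and the IE engine behind the WebBrowser control may refuse data: URIs for application/pdf, in which case the temporary-file route remains the pragmatic fallback.

    // Sketch only: embed the PDF bytes in the HTML as a base64 data URI.
    byte[] pdfBytes = GetPdfFromDatabase(recordId);            // hypothetical data-access helper
    string base64Pdf = Convert.ToBase64String(pdfBytes);
    string html =
        "<html><body>" +
        "<embed type=\"application/pdf\" width=\"100%\" height=\"100%\" " +
        "src=\"data:application/pdf;base64," + base64Pdf + "\" />" +
        "</body></html>";
    webBrowser1.DocumentText = html;                           // webBrowser1 is the WinForms WebBrowser control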

    Read the article

  • ASP.NET error: "The type 'foo' exists in both 'temp1.dll' and 'temp2.dll'" (pt 2)

    - by Greg
    I'm experiencing the same problem as in this question, but none of the answers fixed my problem. (Edit: setting the web.config batch attribute works, but that's a cover-up, not a solution.) The problem I'm having is with a User Control that I moved from the root directory to a subdirectory within the same Web Application project. It used to work fine before I moved it; when I moved it, it started giving me the error message. It's saying that the class name exists in two DLL files in Temporary ASP.NET Files. Sure enough, when I open Reflector, it's in two DLLs. If I rename the class and the ascx file, everything works fine. No usages of the original name exist within any of the files in my entire application. After I renamed the file, I opened all of the DLL files in Temporary ASP.NET Files with Reflector, and no references to the original class name exist. So where's this phantom reference coming from, and how can I fix it?

    Read the article

  • Start position for a reused T-SQL cursor?

    - by Span
    I'm working on a stored procedure that uses a cursor on a temporary table (I have read a bit about why cursors are undesirable, but in this situation I believe I still need to use one). In my procedure I need to step through the rows of the table twice. Having declared the cursor, stepped through the temporary table once and closed the cursor, would the position of the cursor remain at the end of the table when it is re-opened, or would it reposition itself to the initial starting position (i.e. before the first row)? Alternatively, to reposition the cursor, must I do a 'FETCH FIRST' before stepping through again? Am I right to assume that the 'cost' of repositioning and reusing the cursor would be less than that of deallocating and reallocating it?

    Read the article

  • How do I read 64-bit registry values from VBScript running as an MSI post-installation task?

    - by Joergen Bech
    I need to read the location of the Temporary ASP.NET Files folder from VBScript as part of a post-installation task in an installer created using a Visual Studio 2008 deployment project. I thought I would do something like this: Set oShell = CreateObject("Wscript.Shell") strPath = oShell.RegRead("HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0\Path") and then concatenate strPath with "\Temporary ASP.NET Files" and be done with it. On an x64 system, however, I am getting the value from the WOW6432Node (HKLM\SOFTWARE\Wow6432Node\Microsoft\ASP.NET\2.0.50727.0), which gives me the 32-bit framework path (C:\Windows\Microsoft.NET\Framework\v2.0.50727), but on an x64 system, I actually want the 64-bit path, i.e. C:\Windows\Microsoft.NET\Framework64\v2.0.50727. I understand that this happens because the .vbs file is run using the 32-bit script host due to the parent process (the installer) being 32-bit itself. How can I run the script using the 64-bit script host - or - how can I read the 64-bit values even if the script is run using the 32-bit script host?
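    For comparison only: if a managed custom action were an option and .NET 4 or later were available (both assumptions, since this is a VS2008 deployment project), the 64-bit registry view can be opened explicitly even from a 32-bit process, which bypasses exactly the WOW64 redirection the 32-bit script host runs into. A rough sketch:

    using Microsoft.Win32;
    using System.IO;

    static class RegistryProbe
    {
        // Sketch: read the 64-bit HKLM view even from a 32-bit process (.NET 4+ only).
        public static string GetTempAspNetFilesPath64()
        {
            using (RegistryKey hklm64 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
            using (RegistryKey aspNet = hklm64.OpenSubKey(@"SOFTWARE\Microsoft\ASP.NET\2.0.50727.0"))
            {
                if (aspNet == null) return null;
                string frameworkPath = (string)aspNet.GetValue("Path");   // ...\Framework64\v2.0.50727
                return frameworkPath == null
                    ? null
                    : Path.Combine(frameworkPath, "Temporary ASP.NET Files");
            }
        }
    }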

    Read the article

  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
    I have the following tables (removed columns that aren't used for my examples): CREATE TABLE `person` ( `id` int(11) NOT NULL, `name` varchar(1024) NOT NULL, `sortname` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `sortname` (`sortname`(255)), KEY `name` (`name`(255)) ); CREATE TABLE `personalias` ( `id` int(11) NOT NULL, `person` int(11) NOT NULL, `name` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `person` (`person`), KEY `name` (`name`(255)) ) Currently, I'm using this query which works just fine: select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | 1 | SIMPLE | p | index_merge | name,sortname | name,sortname | 767,767 | NULL | 3 | Using sort_union(name,sortname); Using where | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ 1 row in set (0.00 sec) Now I'd like to extend this query to also consider aliases. First, I've tried using a join: select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | 1 | SIMPLE | a | ALL | ref,name | NULL | NULL | NULL | 87401 | Using temporary | | 1 | SIMPLE | p | eq_ref | PRIMARY,name,sortname | PRIMARY | 4 | musicbrainz.a.ref | 1 | Using where | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ 2 rows in set (0.00 sec) This looks bad: no index, 87401 rows, using temporary. Using temporary only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it. 
Next, I've tried to replace the join with a subquery: select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer'); mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer'); +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | 1 | PRIMARY | p | ALL | name,sortname | NULL | NULL | NULL | 540309 | Using where | | 2 | DEPENDENT SUBQUERY | a | index_subquery | person,name | person | 4 | func | 1 | Using where | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ 2 rows in set (0.00 sec) Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well. Why doesn't MySQL use any indices for both of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be another way to do this with a single query. I'm currently out of ideas though. Could I somehow force MySQL into using another (better) query plan?

    Read the article

  • AppFabric Cache errors

    - by Joseph
    The AppFabric Cache in our production environment crashes almost every day and is highly unstable. The errors below are logged: Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (Sufficient secondaries not present or they are in throttled state.) Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (The request did not find the primary.) AppFabric Caching service crashed.{Lease with external store expired: Microsoft.Fabric.Federation.ExternalRingStateStoreException: Lease already expired at Microsoft.Fabric.Data.ExternalStoreAuthority.UpdateNode(NodeInfo nodeInfo, TimeSpan timeout) at Microsoft.Fabric.Federation.SiteNode.PerformExternalRingStateStoreOperations(Boolean& canFormRing, Boolean isInsert, Boolean isJoining)} Could someone please provide me with some input? This is an HA-enabled cache environment with 3 cache hosts. All of them are running on Windows Server 2008 Enterprise Edition, and SQL Server is used for the configuration store.

    Read the article

  • Many ascx-to-one ascx.cs bug in VS2008

    - by pukipuki
    I'm developing second-language support for the site, so I made duplicate .ascx and .aspx files for the existing .ascx.cs and .aspx.cs files. Most of the time everything works fine, but suddenly I'm getting: Type 'ctrl_car' exists both in 'c:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\rzhdengine\d072cc72\b9d5698b\App_Web_xdmblegv.dll', and in 'c:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\rzhdengine\d072cc72\b9d5698b\App_Web_gkptrzo2.dll' (translated from Russian). The offending line is: ctrl_car ctrl = (ctrl_car) LoadControl("car.ascx"); I have a few such lines of code, and the same error occurs with one of them, without any changes from me to those files. Sometimes it works in another Visual Studio instance, sometimes re-getting the project helps, but it is always a waste of time.

    Read the article

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared: # Move and replace existing release directory. mv /path/to/httpdocs /path/to/httpdocs.before mv /path/to/$newReleaseName /path/to/httpdocs Under this scheme, in about 1 of every 15 releases a user happens to be using a file in the original release directory at exactly the moment the commands above run, and a fatal error occurs for that user. I am wondering whether using a symlink, like the following, would be significantly faster in terms of processing time, thereby helping to lessen the likelihood of this problem: # Remove and replace existing release symlink. ln -sf /path/to/$newReleaseName /path/to/httpdocs

    Read the article

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo: GET http://api.example.com/v1/item/foo New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item: POST http://api.example.com/v1/item/ hello=world&hokey=pokey With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here. Case 1: the item doesn't exist; it's created. This case is easy. 201 Created Location: http://api.example.com/v1/item/bar Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? Location: http://api.example.com/v1/item/foo I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case: The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary. The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers. It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is. At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
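    If the API happens to be built on ASP.NET MVC (an assumption here, purely for illustration), the choice can be kept in one place by setting the status code and Location header explicitly; a minimal sketch of the two branches, with FindExistingItemId and CreateItem as hypothetical helpers:

    [HttpPost]
    public ActionResult Create(string hello, string hokey)
    {
        string existingId = FindExistingItemId(hello, hokey);    // hypothetical lookup
        if (existingId != null)
        {
            // Item already exists: point the client at the canonical resource with a GET.
            Response.StatusCode = 303;   // See Other
            Response.RedirectLocation = Url.Action("Get", "Item", new { id = existingId });
            return new EmptyResult();
        }

        // Item is new: create it and report 201 with its location.
        string newId = CreateItem(hello, hokey);                  // hypothetical creation
        Response.StatusCode = 201;   // Created
        Response.RedirectLocation = Url.Action("Get", "Item", new { id = newId });
        return new EmptyResult();
    }

    Swapping 303 for 301 or 302 is then a one-line change once the semantics are settled.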

    Read the article

  • PHP, sockets, and connecting to a POP3 server

    - by Ockonal
    Hi guys, I'm trying to connect to a POP3 server with PHP. Here is my code: $pop3Server = 'mail.roller.ru'; $pop_conn = fsockopen($pop3Server, 110, $errno, $errstr, 10); print fgets($pop_conn, 1024); Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /home/ockonal/public_html/mail_parser.php on line 45 Warning: fsockopen() [function.fsockopen]: unable to connect to mail.roller.ru:110 (php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution) in /home/ockonal/public_html/mail_parser.php on line 45 Warning: fgets() expects parameter 1 to be resource, boolean given in /home/ockonal/public_html/mail_parser.php on line 46 What's wrong? I'm sure the POP3 server address is correct. It doesn't print anything.

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that provides an NSInputStream for STDIN, which may hold several hundred megabytes (or perhaps gigabytes, though that is unlikely) of data. When a caller gets this NSInputStream, it should be able to read from it without worrying about exhausting the data it contains. In other words, another block of code may request the NSInputStream and will expect to be able to read from it. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances backed by that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary-file route?

    Read the article

  • Cannot execute a program. The command being executed "dqiitg0c.cmdline"

    - by Laxman
    I have used impersonation in this application. Whenever this error occurs, I have to restart IIS. Please guide me to solve this issue. Error: Cannot execute a program. The command being executed was "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\csc.exe" /noconfig /fullpaths @"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\c825d188\1fae8a71\dqiitg0c.cmdline". Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Runtime.InteropServices.ExternalException: Cannot execute a program. The command being executed was "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\csc.exe" /noconfig /fullpaths @"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\c825d188\1fae8a71\dqiitg0c.cmdline".

    Read the article

  • MySQL Create tables without commiting current transaction

    - by user276648
    I'd like my program to be able to install plugins and roll back all the changes made if an error occurs, so I create a transaction that covers everything added while installing the plugin. The problem is that the plugin may want to create tables, and doing so automatically commits the current transaction in MySQL (see "Statements That Cause an Implicit Commit" on the MySQL web site). Any idea how I could do it? I thought of using temporary tables, as they are not automatically committed (unless they use too much memory), but it looks like temporary tables cannot be rolled back anyway (and I haven't found a way to convert them to permanent tables). I just found out about savepoints (http://dev.mysql.com/doc/refman/5.1/en/savepoint.html), but I don't really understand how or when they should be used, nor whether they can help me achieve what I want.

    Read the article

  • Linq to SQL gives NotSupportedException when using local variables

    - by zwanz0r
    It appears to me that it matters whether or not you use a variable to temporarily store an IQueryable. See the simplified example below. This works: List<string> jobNames = new List<string> { "ICT" }; var ictPeops = from p in dataContext.Persons where ( from j in dataContext.Jobs where jobNames.Contains(j.Name) select j.ID).Contains(p.JobID) select p; But when I use a variable to temporarily store the subquery, I get an exception: List<string> jobNames = new List<string> { "ICT" }; var jobs = from j in dataContext.Jobs where jobNames.Contains(j.Name) select j.ID; var ictPeops = from p in dataContext.Persons where jobs.Contains(p.JobID) select p; "System.NotSupportedException: Queries with local collections are not supported" I don't see what the problem is. Isn't this exactly the kind of logic that is supposed to work in LINQ?
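    One common workaround (a sketch only, assuming j.ID and p.JobID are ints; it costs an extra round-trip and is not an explanation of the provider's behaviour) is to materialise the job IDs into a plain local list first, so that Contains is translated the same way it is for jobNames:

    List<string> jobNames = new List<string> { "ICT" };

    // Run the job query up front; jobIds is now an ordinary in-memory collection.
    List<int> jobIds = (from j in dataContext.Jobs
                        where jobNames.Contains(j.Name)
                        select j.ID).ToList();

    var ictPeops = from p in dataContext.Persons
                   where jobIds.Contains(p.JobID)   // translated to an IN (...) clause
                   select p;

    Whether this is acceptable depends on how many job IDs there can be, since they travel back to the client and then into the second query.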

    Read the article

  • Why would the assignment operator ever do something different than its matching constructor?

    - by Neil G
    I was reading some boost code, and came across this: inline sparse_vector &assign_temporary(sparse_vector &v) { swap(v); return *this; } template<class AE> inline sparse_vector &operator=(const sparse_vector<AE> &ae) { self_type temporary(ae); return assign_temporary(temporary); } It seems to be mapping all of the constructors to assignment operators. Great. But why did C++ ever opt to make them do different things? All I can think of is scoped_ptr?

    Read the article

  • Can Oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types as explained in an excellent answer by APC. However, some GTTs are being updated based on a SELECT query against another table. For example: UPDATE my_gtt_1 c SET (street, city, STATE, zip) = (SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code FROM (SELECT mbr.ROWID row_id, unit_address, RTRIM(a.unit_city) unit_city, RTRIM(a.unit_state) unit_state, RTRIM(a.unit_zip_code) unit_zip_code FROM table_1 b, table_2 a, my_gtt_1 mbr WHERE type = 'ABC' AND id = b.ssn_head AND a.h_id = b.h_id AND row_id >= v_start_row AND row_id <= v_end_row) src WHERE c.ROWID = src.row_id) WHERE state IS NULL OR state = ' '; If my_gtt_1 were not a global temporary table but an Oracle collection type, would it be possible to do updates this complex? Or are we better off using the global temporary table in these cases?

    Read the article

  • Creating multiple datasets in a Reporting Services report

    - by brijit
    Hi, I have an SSRS report and am using PL/SQL for the dataset creation. My report needs two tables: (1) one that gives a detailed view (dataset 1), and (2) one below it that gives a summary table (its data should come from calculations based on the data in the first table). I am using a temporary table for dataset 1. What are the methods to get the calculated result for dataset 2? I wrote two procedures, one for each. Since the first table is a temporary one, I am not getting results for the second dataset. What are the options? Can I have multiple datasets out of a single procedure? Please give me an idea. Thanks, brijit

    Read the article

  • How does one programmatically create a topic with HornetQ?

    - by mP
    I have been looking at the org.hornetq.core.server package, which seems to have the most interesting low-level APIs relating to managing the server. The server session has a few methods whose names mention Queue, but none mention Topic: ServerSession void createQueue(SimpleString address, SimpleString name, SimpleString filterString, boolean temporary, boolean durable) throws Exception; void deleteQueue(SimpleString name) throws Exception interface QueueFactory Queue createQueue(long persistenceID, final SimpleString address, SimpleString name, Filter filter, boolean durable, boolean temporary); However, I could not figure out how to create a topic. Am I missing something? Is a JMS topic implemented as a queue?

    Read the article

  • How to get an IPv6 public address on an XP machine?

    - by SR Dusad
    C:\>netsh netsh>interface ipv6 netsh interface ipv6>show address Querying active state... Interface 6: Local Area Connection 3 Addr Type DAD State Valid Life Pref. Life Address Temporary Preferred 6d23h38m55s 23h36m8s 2001:db8:1dde:1:6d16:9d1:b1ec:2245 Public Preferred 29d23h59m30s 6d23h59m30s 2001:db8:1dde:1:201:2ff:fe29:23b6 Link Preferred infinite infinite fe80::201:2ff:fe29:23b6 The book says that after IPv6 is enabled on XP, autoconfiguration will give it three addresses: a Link, a Public, and a Temporary one. But on my computer I could only see the Link address. What is the problem? I used the XP machine, but when I try it on Windows 2007 it works fine and shows me a public address. How do I get a public IPv6 address from the XP machine? On my XP machine IPv6 is enabled, but when I use the command tracert6 www.kame.net it shows "No route to destination". Why? Any type of help will be appreciated.

    Read the article

  • Oracle/C#: How do I use bind variables with select statements to return multiple records?

    - by twiga
    I have a question regarding Oracle bind variables and select statements. What I would like to achieve is a select on a number of different values for the primary key. I would like to pass these values via an array using bind variables. select * from tb_customers where cust_id = :1 int[] cust_id = { 11, 23, 31, 44, 51 }; I then bind a DataReader to get the values into a table. The problem is that the resulting table only contains a single record (for cust_id = 51). Thus it seems that each statement is executed independently (as it should be), but I would like the results to be available as a collective (a single table). A workaround is to create a temporary table, insert all the values of cust_id, and then do a join against tb_customers. The problem with this approach is that I would require temporary tables for every different type of primary key, as I would like to use this against a number of tables (some even have composite primary keys). Is there anything I am missing?
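    One hedged workaround, assuming ODP.NET (Oracle.DataAccess.Client) and an already-open OracleConnection named connection, is to generate one bind variable per ID and build an IN list, so a single SELECT returns all matching rows in one result set; a rough sketch:

    using System.Collections.Generic;
    using System.Data;
    using Oracle.DataAccess.Client;

    int[] custIds = { 11, 23, 31, 44, 51 };
    var placeholders = new List<string>();

    using (OracleCommand cmd = connection.CreateCommand())
    {
        cmd.BindByName = true;
        for (int i = 0; i < custIds.Length; i++)
        {
            string name = ":p" + i;                      // :p0, :p1, ...
            placeholders.Add(name);
            cmd.Parameters.Add(new OracleParameter(name, custIds[i]));
        }

        cmd.CommandText = "select * from tb_customers where cust_id in ("
                          + string.Join(",", placeholders.ToArray()) + ")";

        using (OracleDataReader reader = cmd.ExecuteReader())
        {
            DataTable customers = new DataTable();
            customers.Load(reader);                      // all requested customers in one table
        }
    }

    Oracle caps an IN list at 1,000 items, so very large ID sets would still need the temporary-table (or collection-type) approach.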

    Read the article

  • Amazon S3 copy request timeouts

    - by jean27
    We're uploading files to a temporary folder in a bucket. After that, we try to copy the uploaded files to their actual folder and then delete the files in the temporary folder. It doesn't time out when working with a single file. We're using the ThreeSharp API. Stack Trace: [WebException: The operation has timed out] System.Net.HttpWebRequest.GetRequestStream() +5322142 Affirma.ThreeSharp.Query.ThreeSharpQuery.GenerateAndSendHttpWebRequest(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:386 Affirma.ThreeSharp.Query.ThreeSharpQuery.Invoke(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:479

    Read the article
