Search Results

Search found 10859 results on 435 pages for 'raid controller'.

  • Rails 3 NoMethodError on #<ActiveRecord::Relation>

    - by Dodi
    Hi! I'm writing a static page controller. I get the menu name in routes.rb and it calls the static controller's show method: match '/:menuname' => 'static#show'. And in static_controller.rb: @static = Staticpage.where("menuname = ?", params[:menuname]). But if I try to print @static.title in the view, I get this error: undefined method `title' for #<ActiveRecord::Relation>. What's wrong? The SQL query looks good: SELECT staticpages.* FROM staticpages WHERE (menuname = 'asd')

    Read the article

  • UIPicker reloading content

    - by kumaravel
    Hi, I have a problem with UIPickerView. I'm keeping my picker in a navigation controller, and I want the picker content to reload whenever the navigation controller pushes the picker view. But the picker view content stays the same across every navigation. How can I change this?

    Read the article

  • [iPhone] Error reading plist file to fill a table

    - by Matthew
    Hi, I'm developing an iPhone app but I have a problem... I have a view with some text fields, and the information written in them is saved to a plist file. With @class and #import declarations I import this view controller into another controller that manages a table view. The code I've just written appears to be right, but my table is filled with the same row three times... I don't know why there are three rows... Can anyone help me?

    Read the article

  • respond_with not working in Rails 3

    - by MDM
    As no one can help me with my UJS Ajax problem, I am trying "respond_with", but I am getting this error: RuntimeError (In order to use respond_with, first you need to declare the formats your controller responds to in the class level) Yet I have declared it in my controller: class HomepagesController < ApplicationController respond_to :html, :xml, :js def index @homepages = Homepage.search(params[:search]) respond_with(@homepages) end Does anyone have any idea why?

    Read the article

  • Multiple controllers with a single model

    - by Eric K
    I'm setting up a directory application for which I need to have two separate interfaces for the same Users table. Basically, administrators use the Users controller and views to list, edit, and add users, while non-admins need a separate interface which lists users in a completely different manner. To do this, would I be able to just set up another controller with different views but which accesses the Users model? Sorry if this is a simple question, but I've had a hard time finding how to do this.

    Read the article

  • MEF C# Service - DLL Updating

    - by connerb
    Currently I have a C# service that runs off of many DLLs and has modules/plugins that it imports at startup. I would like to create an update system that basically stops the service, deletes any files it is told to delete (old versions), downloads new versions from a server, and starts the service. I believe I have coded this right except for the delete part, because as long as I am not overwriting anything, the file will download. If I try to overwrite something it won't work, which is why I am trying to delete it beforehand. However, when I call File.Delete() on the path I want, it tells me "Access to the path is denied". Here is my code: new Thread(new ThreadStart(() => { ServiceController controller = new ServiceController("client"); controller.Stop(); controller.WaitForStatus(ServiceControllerStatus.Stopped); try { if (um.FilesUpdated != null) { foreach (FilesUpdated file in um.FilesUpdated) { if (file.OldFile != null) { File.Delete(Path.Combine(Utility.AssemblyDirectory, file.OldFile)); } if (file.NewFile != null) { wc.DownloadFile(cs.UpdateUrl + "/updates/client/" + file.NewFile, Path.Combine(Utility.AssemblyDirectory, file.NewFile)); } } } if (um.ModulesUpdated != null) { foreach (ModulesUpdated module in um.ModulesUpdated) { if (module.OldModule != null) { File.Delete(Path.Combine(cs.ModulePath, module.OldModule)); } if (module.NewModule != null) { wc.DownloadFile(cs.UpdateUrl + "/updates/client/modules/" + module.NewModule, Path.Combine(cs.ModulePath, module.NewModule)); } } } } catch (Exception ex) { Logger.log(ex); } controller.Start(); })).Start(); I believe it is because the files are in use, but I can't seem to unload them. I thought stopping the service would work, but apparently not. I have also checked the files and they are not read-only (but the folder is, which is located in Program Files; however, I couldn't seem to make it not read-only either programmatically or manually). The service is also run as an administrator (NT AUTHORITY\SYSTEM). I've read about unloading the AppDomain, but AppDomain.Unload(AppDomain.CurrentDomain); threw an exception as well. I'm not sure whether this is a problem with MEF or my program just not having the correct permissions... I would assume it's mainly because the file is in use.
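
    One angle worth checking before blaming MEF (a sketch only, using standard System.IO/System.Threading calls and a hypothetical helper name, not code from the question): clear the read-only attribute on the target file and retry the delete a few times, since the stopping service process may still hold the old DLL for a moment after it reports Stopped.

    using System;
    using System.IO;
    using System.Threading;

    // Hypothetical helper: best-effort delete that clears the read-only flag
    // and retries, in case the old DLL is still briefly locked after the
    // service stops.
    static class FileReplaceHelper
    {
        public static bool TryDelete(string path, int attempts = 5)
        {
            for (var i = 0; i < attempts; i++)
            {
                try
                {
                    if (!File.Exists(path)) return true;             // nothing to delete
                    File.SetAttributes(path, FileAttributes.Normal); // drop ReadOnly
                    File.Delete(path);
                    return true;
                }
                catch (UnauthorizedAccessException) { }
                catch (IOException) { }
                Thread.Sleep(500);                                   // wait, then retry
            }
            return false;
        }
    }

    In the update thread above, the bare File.Delete(...) calls would then be replaced with FileReplaceHelper.TryDelete(...), and a false return value logged instead of throwing.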

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is a very powerful way to upload large files: we can upload the blocks in parallel and provide a pause/resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, that is, once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user’s machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
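
One detail worth calling out before walking through the storage calls: the integer "index" posted from the client has to be turned into an Azure block ID. Block IDs must be Base64-encoded strings, and all IDs committed for a given blob must have the same length, which is why the post encodes the 4-byte index with Convert.ToBase64String. A minimal sketch of that conversion (an illustration assuming the post's scheme, not code from the original article):

using System;

// Illustration only: turn a chunk index into a valid block ID. Base64 of a
// 4-byte int is always an 8-character string, so every block ID of the blob
// has the same length, as Blob Storage requires.
static class BlockIdHelper
{
    public static string FromIndex(int index)
    {
        return Convert.ToBase64String(BitConverter.GetBytes(index));
        // e.g. 0 -> "AAAAAA==", 1 -> "AQAAAA=="
    }
}

As long as the upload action and the commit action derive the IDs the same way, PutBlock and PutBlockList will agree on which blocks make up the blob.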
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk Format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than series.
- No significant performance improvement between parallel 4 threads and 8 threads.
- Transform with binaries provides better performance than base64.
- In all cases, chunk size in 1MB - 2MB gets better performance.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Determining cause of high NFS/IO utilization without iotop

    - by Matt
    I have a server that is doing an NFSv4 export for user's home directories. There are roughly 25 users (mostly developers/analysts) and about 40 servers mounting the home directory export. Performance is miserable, with users often seeing multi-second lags for simple commands (like ls, or writing a small text file). Sometimes the home directory mount completely hangs for minutes, with users getting "permission denied" errors. The hardware is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for the home directories. What I find curious, and suspicious, is that the sda device often has very high utilization, even though it only contains the OS. I would expect this virtual drive to be idle almost all the time. The system is not swapping, according to "free" and "vmstat". Why would there be major load on this device? Here is a 30-second snapshot from iostat: Time: 09:37:28 AM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sdb 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 sdb1 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 dm-0 0.00 0.00 0.03 151.82 0.13 607.26 8.00 1.25 8.23 5.16 78.35 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 dm-3 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 Looks like iotop is the ideal tool to use to sniff out these kinds of issues. But I'm on CentOS 5.6, which doesn't have a new enough kernel to support that program. I looked at Determining which process is causing heavy disk I/O?, and besides iotop, one of the suggestions said to do a "echo 1 /proc/sys/vm/block_dump". I did that (after directing kernel messages to tempfs). In about 13 minutes I had about 700k reads or writes, roughly half from kjournald and the other half from nfsd: # egrep " kernel: .*(READ|WRITE)" messages | wc -l 768439 # egrep " kernel: kjournald.*(READ|WRITE)" messages | wc -l 403615 # egrep " kernel: nfsd.*(READ|WRITE)" messages | wc -l 314028 For what it's worth, for the last hour, utilization has constantly been over 90% for the home directory drive. My 30-second iostat keeps showing output like this: Time: 09:36:30 PM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sdb 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 sdb1 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 dm-0 0.00 0.00 0.20 17.76 0.80 71.04 8.00 0.38 21.21 9.22 16.57 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.81 dm-3 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.82

    Read the article

  • ESXI guests not using available CPU resources

    - by Alan M
    I have a VMWare ESXI 5.0.0 (it's a bit old, I know) host with three guest VMs on it. For reasons unknown, the guests will not use much of the availble CPU resources. I have all three guests in a single pool, with the hosts all configured to use the same amount of resource shares, so they're basically 33% each. The three guests are basically identically configured as far as their VM resources go. So the problem is, even when the guests are performing what should be very 'busy' activitites, such as at bootup, the actual host CPU consumed is something tiny, like 33mhz, when seen via vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings; cranking up the reservation, etc. No matter. The guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions? Update after reading various comments below Per the suggestions below, I did remove the guests from the application pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to do a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (guest is w2k8r2 server). Host graphs for CPU, Mem, Disk are basically flatlining; very little demand. Same is true for the guest stats; while the guest itself seems to be crawling, the guest resource graphing shows very little activity across CPU, Mem, Disk. Host is a Dell PowerEdge 2900, has 2 physical CPU,20gb RAM. (it's a test/dev environment using surplus gear) Guest1 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest2 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest3 has: VM ver. 7, 1vCPU, 2gb RAM, 2tb storage which lives on a RAID-5 ISCSI NAS box Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (mem, disk) on demand. Another Update After checking the stats, it would appear that the host is indeed not busy at all, neither is the guest. I believe I have a good idea on the issue, though; a messed-up VMWare Tools install. The guest has VMware Tools on it, but the host says it does not. VMWare Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigation. I do not know the origin of the guest itself, nor the specifics on the original VMWare Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is; the problem truly is the guest; the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly. My Final Update I am 99% certain the guest VM had something fundamentally wrong with it re VMWare Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMWare tools. The guest runs just great, and takes up it's allotment of resources when it needs to; e.g. 
it eats up about 850mhz CPU during startup, then ticks down to idle once the guest OS is stable.

    Read the article

  • File server share access intermittent/slow/machine unstable: win2k8r2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing. All drivers/firmware/updates are installed. The file server is used as a dump for a bunch of test machines - so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: Shares became either very slow/intermittent or could not be accessed at all. Logging in to the server, you could use it like normal, but Windows would start freezing and eventually you had to hard reboot it because nothing was responsive. After rebooting, it would work fine for 20min-2hours and then degrade into this broken state again. Some info after investigation: The HP RAID Config utility shows the RAID array as functioning properly (RAID5, btw). The event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connection. a. AFAIK (not part of my job) the test machines haven't changed the way they log information to this server, and the number of them hasn't increased. b. Nothing is infected; this server was scanned fully, and the test machines are re-imaged almost daily. Nothing in Performance Monitor shows anything being pegged at maximum (CPU/HD/Network/RAM). I installed MS Network Monitor and it is showing a lot of traffic. The server was using one gigabit Ethernet connection; I connected the second one as well, with the same results. Forgot to add - one of the commonly written-to dirs on the share has over 16k subdirs in it, with a crapton of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory - perfmon doesn't show much activity on the HD though, so I'm not sure if this crowded dir is the cause. Here is one important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine so I swapped them out (thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing. So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do.) Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated. Update: Another thought - the drive with the share is RAID5 across 6 SCSI320 300GB HDs. They are near full capacity, with about 100GB of 1TB left. Could the amount of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past, but I'm no expert on RAID.

    Read the article

  • libvirt upgrade caused vms to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1 and now libvirt (via open nebula) successfully runs vms but they aren't finding the 2 drives (specifically, the boot drive). One is "hd" the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a vnc terminal and I didn't copy the output anywhere so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder) and this new machine has the same problem as the old machine. In case it matters (I can't see why it would) I'm using open nebula to manage the machines. There's nothing relevant in any of the logs: syslog, libvirtd, oned. Which is to say nothing interesting/anomalous is reported when the machine is brought up. Versions libvirt 0.9.8-2ubuntu17.4 qemu-kvm 1.0+noroms-0ubuntu14.3 The libvirt xml config portions (relavent) <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> ... <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/one//203/images/disk.0'/> <target dev='sda' bus='scsi'/> <alias name='scsi0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/var/lib/one//203/images/disk.1'/> <target dev='sdc' bus='scsi'/> <readonly/> <alias name='scsi0-0-2'/> <address type='drive' controller='0' bus='0' unit='2'/> </disk> <controller type='scsi' index='0'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> ... </devices> The libvirt/qemu log contains 2012-11-25 22:19:24.328+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204 -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1 -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3 -netdev tap,fd=19,id=hostnet1 -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4 -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom" kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"

    Read the article

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background I have a storage server that has several virtual machine images stored on them. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps. Configuration In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds. Issue I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still from 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue. Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and is unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings. [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speeds though at least two NICs are bonded with LACP is still reaching the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well. Another EDIT and More Findings I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer for a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?

    Read the article

  • Unable to connect to Samba printer

    - by user127236
    I have a headless Ubuntu 12.04 server for files and printers. It shares files via Samba just fine. However, the HP PSC-750xi connected to the server via USB is not accessible from my Ubuntu 12.04 laptop. I can browse for it in the Printing control panel, but any attempt to authenticate my ID to the printer with my user credentials results in the error "This print share is not accessible". I have included the Samba smb.conf file below. Any help appreciated. Thanks... JGB # # Sample configuration file for the Samba suite for Debian GNU/Linux. # # # This is the main Samba configuration file. You should read the # smb.conf(5) manual page in order to understand the options listed # here. Samba has a huge number of configurable options most of which # are not shown in this example # # Some options that are often worth tuning have been included as # commented-out examples in this file. # - When such options are commented with ";", the proposed setting # differs from the default Samba behaviour # - When commented with "#", the proposed setting is the default # behaviour of Samba but the option is considered important # enough to be mentioned here # # NOTE: Whenever you modify this file you should run the command # "testparm" to check that you have not made any basic syntactic # errors. # A well-established practice is to name the original file # "smb.conf.master" and create the "real" config file with # testparm -s smb.conf.master >smb.conf # This minimizes the size of the really used smb.conf file # which, according to the Samba Team, impacts performance # However, use this with caution if your smb.conf file contains nested # "include" statements. See Debian bug #483187 for a case # where using a master file is not a good idea. # #======================= Global Settings ======================= [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no writeable = yes server string = %h server (Samba, Ubuntu) unix password sync = yes workgroup = WORKGROUP syslog = 0 panic action = /usr/share/samba/panic-action %d usershare allow guests = yes max log size = 1000 pam password change = yes ## Browsing/Identification ### # Change this to the workgroup/NT-domain name your Samba server will part of # server string is the equivalent of the NT Description field # Windows Internet Name Serving Support Section: # WINS Support - Tells the NMBD component of Samba to enable its WINS Server # wins support = no # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # This will prevent nmbd to search for NetBIOS names through DNS. # What naming service and in what order should we use to resolve host names # to IP addresses ; name resolve order = lmhosts host wins bcast #### Networking #### # The specific set of interfaces / networks to bind to # This can be either the interface name or an IP address/netmask; # interface names are normally preferred ; interfaces = 127.0.0.0/8 eth0 # Only bind to the named interfaces and/or networks; you must use the # 'interfaces' option above to use this. # It is recommended that you enable this feature if your Samba machine is # not protected by a firewall or is a firewall itself. 
However, this # option cannot handle dynamic or non-broadcast interfaces correctly. ; bind interfaces only = yes #### Debugging/Accounting #### # This tells Samba to use a separate log file for each machine # that connects # Cap the size of the individual log files (in KiB). # If you want Samba to only log through syslog then set the following # parameter to 'yes'. # syslog only = no # We want Samba to log a minimum amount of information to syslog. Everything # should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log # through syslog you should set the following parameter to something higher. # Do something sensible when Samba crashes: mail the admin a backtrace ####### Authentication ####### # "security = user" is always a good idea. This will require a Unix account # in this server for every user accessing the server. See # /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html # in the samba-doc package for details. # security = user # You may wish to use password encryption. See the section on # 'encrypt passwords' in the smb.conf(5) manpage before enabling. # If you are using encrypted passwords, Samba will need to know what # password database type you are using. # This boolean parameter controls whether Samba attempts to sync the Unix # password with the SMB password when the encrypted SMB password in the # passdb is changed. # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. 
# The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. ; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username ;[homes] ; comment = Home Directories ; browseable = no # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. ; read only = yes # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0700 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0700 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. 
Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes ; valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) # The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers browseable = yes writeable = no path = /var/lib/samba/printers # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom [mediafiles] path = /media/multimedia/

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
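
    The post's own snippets cover the server-side error filter; as a companion, here is a minimal, hedged sketch of a controller action that calls the WCF proxy and returns either the data or an ErrorModel as JSON for the $.ajax success handler to inspect. The names WidgetController, MyServiceClient and GetWidget are assumptions for illustration (the real proxy comes from SvcUtil), and instead of the partial-class Dispose fix described above it uses an explicit Close/Abort pattern:

        using System;
        using System.ServiceModel;
        using System.Web.Mvc;
        using MySite.Web.ServiceProxies;

        public class WidgetController : Controller
        {
            [HttpPost]
            public ActionResult GetWidget(int id)
            {
                var client = new MyServiceClient();    // hypothetical SvcUtil-generated proxy
                try
                {
                    var widget = client.GetWidget(id); // DataContract type from the common assembly
                    client.Close();
                    return Json(widget);               // jQuery's success handler sees a plain JS object
                }
                catch (CommunicationException)
                {
                    client.Abort();                    // never Close/Dispose a faulted channel
                    return Json(new ErrorModel { Error = "Service unavailable" });
                }
                catch (TimeoutException)
                {
                    client.Abort();
                    return Json(new ErrorModel { Error = "Service timed out" });
                }
            }
        }

        public class ErrorModel
        {
            public string Error { get; set; }          // keep the payload minimal, as discussed above
        }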

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: Clients/servers that are involved in the authentication process cannot use Kerberos. The client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post, however when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principle Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a users windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop, subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). 
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to workaround the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In the illustration above, the tiers represent the following: Client tier (computer 1): The client computer from which an application makes a request. Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now). Back end tier (computer 3): The Database/Analysis Services server/Cluster where the requested data is stored. In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services. Service Principle Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered --SQL Server Service SETSPN -S mssqlsvc/servername:1433 Domain\SQL For named instances, or if the default instance is running under a different port, then the specific port number should be used. --Reporting Services Service SETSPN -S http/servername Domain\SSRS SETSPN -S http/servername.domain.com Domain\SSRS The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered SETSPN -S http/www.reports.com Domain\SSRS --Analysis Services Service SETSPN -S msolapsvc.3/servername Domain\SSAS Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer: Location Description Client 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated. Servers 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs. 
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected): We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation:   We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports). Finally, and this is the part that sometimes gets over-looked, we need to configure the authentication type correctly for reporting services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server. <Authentication> <AuthenticationTypes>           <RSWindowsNegotiate/> </AuthenticationTypes> <EnableAuthPersistence>true</EnableAuthPersistence> </Authentication> This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly by running a report from report manager that is configured to use Windows Integrated authentication (either connecting to Analysis Services or SQL Server back-end). Resources: Manage Kerberos Authentication Issues in a Reporting Services Environment http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176 How to: Configure Windows Authentication in Reporting Services http://msdn.microsoft.com/en-us/library/cc281253.aspx RSReportServer Configuration File http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication Planning for Browser Support http://msdn.microsoft.com/en-us/library/ms156511.aspx

    Read the article

  • NHibernate: Collection was modified; enumeration operation may not execute

    - by Daoming Yang
    Hi All, I'm currently struggling with this "Collection was modified; enumeration operation may not execute" issue. I have searched for this error message, and everything I found relates it to foreach statements. I do have some foreach statements, but they simply display the data; I am not adding to or removing from the collection inside any of them. NOTE: The error happens randomly (about 4-5 times a day). The application is an MVC website. About 5 users operate this application (about 150 orders a day). Could it be that another user modified the collection, triggering this error? I have log4net set up and the settings can be found here. As for "Make sure that the controller has a parameterless public constructor": I do have a parameterless public constructor in AdminProductController. Does anyone know why this happens and how to resolve this issue? A friend (Oskar) mentioned: "Theory: Maybe the problem is that your configuration and session factory is initialized on the first request after application restart. If a second request comes in before the first request is finished, maybe it will also try to initialize and then triggering this problem somehow." Many thanks. Daoming Here is the error message: System.InvalidOperationException Collection was modified; enumeration operation may not execute. System.InvalidOperationException: An error occurred when trying to create a controller of type 'WebController.Controllers.Admin.AdminProductController'. Make sure that the controller has a parameterless public constructor. --- System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --- NHibernate.MappingException: Could not configure datastore from input stream DomainModel.Entities.Mappings.OrderProductVariant.hbm.xml --- System.InvalidOperationException: Collection was modified; enumeration operation may not execute. 
at System.Collections.ArrayList.ArrayListEnumeratorSimple.MoveNext() at System.Xml.Schema.XmlSchemaSet.AddSchemaToSet(XmlSchema schema) at System.Xml.Schema.XmlSchemaSet.Add(String targetNamespace, XmlSchema schema) at System.Xml.Schema.XmlSchemaSet.Add(XmlSchema schema) at NHibernate.Cfg.Configuration.LoadMappingDocument(XmlReader hbmReader, String name) at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name) --- End of inner exception stack trace --- at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception) at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name) at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly) at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly) at DomainModel.RepositoryBase..ctor() at WebController.Controllers._baseController..ctor() at WebController.Controllers.Admin.AdminProductController..ctor() at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) --- End of inner exception stack trace --- at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) --- End of inner exception stack trace --- at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) at System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) UPDATE CODE: In my Global.asax.cs, I'm doing this: protected void Application_BeginRequest(object sender, EventArgs e) { ManagedWebSessionContext.Bind(HttpContext.Current, SessionManager.SessionFactory.OpenSession()); } protected void Application_EndRequest(object sender, EventArgs e) { ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, SessionManager.SessionFactory); if (session != null) { try { if (session.Transaction != null && session.Transaction.IsActive) { session.Transaction.Rollback(); } else { session.Flush(); } } finally { session.Close(); } } } In the SessionManager class, I'm doing: public class SessionManager { private readonly ISessionFactory sessionFactory; public static ISessionFactory SessionFactory { get { return Instance.sessionFactory; } } private ISessionFactory GetSessionFactory() { return sessionFactory; } public static SessionManager Instance { get { return NestedSessionManager.sessionManager; } } public static ISession OpenSession() { return Instance.GetSessionFactory().OpenSession(); } public static ISession CurrentSession { get { return Instance.GetSessionFactory().GetCurrentSession(); } } private SessionManager() { Configuration config = new Configuration().Configure(); config.AddAssembly(Assembly.GetExecutingAssembly()); sessionFactory = config.BuildSessionFactory(); } class NestedSessionManager { internal static readonly SessionManager sessionManager = new SessionManager(); } } In 
the Repository, I'm doing this: public IEnumerable<User> GetAll() { ICriteria criteria = SessionManager.CurrentSession.CreateCriteria(typeof(User)); return criteria.List<User>(); } In the Controller, I'm doing this: public class UserController : _baseController { IUserRoleRepository _userRoleRepository; internal static readonly ILogger log = LogManager.GetLogger(typeof(UserController)); public UserController() { _userRoleRepository = new UserRoleRepository(); } public ActionResult UserList() { var myList = _usersRepository.GetAll(); return View(myList); } }
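
    One detail worth a second look: the stack trace above reaches NHibernate.Cfg.Configuration.AddAssembly from DomainModel.RepositoryBase..ctor(), which suggests the mapping XML may be re-parsed every time a repository (and therefore a controller) is constructed. Below is a hedged sketch of an alternative where configuration happens only once, in SessionManager, and the repository merely resolves a session. The class names mirror the post, but the code is illustrative rather than the poster's actual fix, and it assumes GetCurrentSession is wired up to the ManagedWebSessionContext binding shown in Global.asax.cs:

        using System.Collections.Generic;
        using NHibernate;

        public abstract class RepositoryBase
        {
            // No Configuration/AddAssembly work here: the session factory is built exactly once
            // by SessionManager, so concurrent first requests cannot re-run the mapping load.
            protected ISession Session
            {
                get { return SessionManager.CurrentSession; }
            }
        }

        public class UserRepository : RepositoryBase
        {
            public IList<User> GetAll()
            {
                return Session.CreateCriteria(typeof(User)).List<User>();
            }
        }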

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such feed exposition is of course blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example: public BlogController( IOrchardServices services, IBlogService blogService, IBlogSlugConstraint blogSlugConstraint, IFeedManager feedManager, RouteCollection routeCollection) { If you look a little further in that same controller, in the Item action, you’ll see a call to the Register method of the feed manager: _feedManager.Register(blog); This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead, that is just an implementation detail: feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } }); feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } }); What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed (“rss”) and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you’re only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feedmanager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we’re interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here’s the code for OnResultExecuting: model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write( _feedManager.GetRegisteredLinks(html))); This is another piece of code whose execution is differed. It is saying that whenever comes time to render the “head” zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here’s what can be found in an Orchard blog’s head section: <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire"     href="/rss?containerid=5" /> <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire - Comments"     href="/rss?commentedoncontainer=5" /> The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext. 
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed or on something even more arbitrary such as the destination device (you could imagine for example giving shorter text only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture. The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument. Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again done through a Match method. ContainerFeedQuery for example chimes in when a “containerid” parameter is found in the context (see URL in the link tag above): public FeedQueryMatch Match(FeedContext context) { var containerIdValue = context.ValueProvider.GetValue("containerid"); if (containerIdValue == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of them. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed. The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types into a single feed. That will be extremely important when you’ll want to expose a single feed for all your site. So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
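
    To make the "declare and you're done" point concrete, here is a hedged sketch of a hypothetical controller registering its own feed with the generic Register overload quoted above. The EventsController and its action are invented for illustration, and the IFeedManager namespace may differ depending on the Orchard version you are running:

        using System.Web.Mvc;
        using System.Web.Routing;
        using Orchard.Core.Feeds; // namespace assumed; adjust to wherever IFeedManager lives in your build

        public class EventsController : Controller
        {
            private readonly IFeedManager _feedManager;

            public EventsController(IFeedManager feedManager)
            {
                _feedManager = feedManager; // injected, as in BlogController
            }

            public ActionResult Index(int containerId)
            {
                // One call is all it takes: FeedFilter renders the <link> tags into the head zone
                // and FeedController answers /rss?containerid=... when the feed is requested.
                _feedManager.Register("Upcoming events", "rss",
                    new RouteValueDictionary { { "containerid", containerId } });
                return View();
            }
        }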

    Read the article

  • 1 Million IOPS

    - by GrumpyOldDBA
    As a keen follower of storage performance, I couldn't help but be drawn to this article in The Register http://www.theregister.co.uk/2010/04/14/lsi_million_iops/ this morning. I gave my 5-year-old laptop a new lease of life with an SSD and, in combination with the old drive made external, managed to reduce the time of a demo query from 50-odd minutes down to 6 minutes. I also have 4 Silicon Power 32GB SSDs set up as a RAID 0 on my home server, an overblown PC. http://www.futurestorage.co.uk/index.asp?selmanuf...(read more)

    Read the article

  • Storage Configuration

    - by jchang
    Storage performance is not an inherently complicated subject. The concepts are relatively simple. In fact, scaling storage performance is far easier compared with the difficulties encountered in scaling processor performance in NUMA systems. Storage performance is achieved by properly distributing IO over: 1) multiple independent PCI-E ports (system memory and IO bandwidth is key), 2) multiple RAID controllers or host bus adapters (HBAs), 3) multiple storage IO channels (SAS or FC, complete path), most importantly,...(read more)

    Read the article

  • The Basics of Desktop Data Recovery

    Desktop data recovery is an important part of computer repairs, as it is pretty common for a hard drive or server RAID to fail and lose major amounts of data. With desktops especially, it is sometime... [Author: Richard Cuthbertson - Computers and Internet - April 07, 2010]

    Read the article

  • Uses for Types of Data Recovery Services

    There are several different types of data recovery services, including hard drive, server raid, and smart media recovery. What makes things tricky is to know when to use which service, and how to kno... [Author: Richard Cuthbertson - Computers and Internet - April 08, 2010]

    Read the article

  • ASP.NET MVC 2 throws exception for ‘favicon.ico’

    - by nmarun
    I must be on fire or something – third blog in 2 days… awesome! Before I begin, in case you’re wondering, favicon.ico is the small image that appears to the left of your web address, once the page loads. In order to learn more about MVC or any thing for that matter, it’s better to look at the source itself. Since MVC is open source (at least some part of it is), I started looking at the source code that’s available for download. While doing so, I hit Steve Sanderson’s blog site where he explains in great detail the way to debug your app using ASP.NET MVC source code. For those who are not aware, Steve Sanderson’s book - Pro ASP.NET MVC Framework, is one of the best books to learn about MVC. Alrighty, I followed the article and I hit F5 to debug the default / unchanged MVC project. I put a breakpoint in the DefaultControllerFactory.cs, CreateController() method. To know a little more about this class and the method, read this. Sure enough, the control stopped at the breakpoint and I hit F5 again and the page rendered just fine. But then what’s this? The breakpoint was hit again, as if something else was being requested. I now hovered my mouse over the ‘controllerName’ parameter and it says – favicon.ico. This by itself was more than enough for me to raise my eye-brows, but what happened next just took the ground below my feet. Oh, oh, I’m sorry I’m just typing, no code, no image, so here are a couple of screen captures. The first one shows the request for the Home controller; I get ‘Home’ when I hover over the parameter: And here’s the one that shows the same for call for ‘favicon.ico’. So, I step through the code and when the control reaches line 91 – GetControllerInstance() method, I step in. This is when I had the ‘ground-losing’ experience. Wow, an exception is being thrown for this file and that too in RTM. For some reason MVC thinks, this as a controller and tries to run it through the MvcHandler and it hits this snag. So it seems like this will happen for any MVC 2 site and this did not happen for me in the previous version of MVC. Before I get to how to resolve it, here’s another way of reproducing this exception. Revert back all your changes that you did as mentioned in Steve’s blog above. Now, add a class to your MVC project and call it say, MyControllerFactory and let this inherit from DefaultControllerFactory class. (Read this for details on the DefaultControllerFactory class is and how it is used in a different context). Add an override for the CreateController() method and for the sake of this blog, just copy the same content from the DefaultControllerFactory class. The last step is to tell your MVC app to use the MyControllerFactory class instead of the default one. 
To do this, go to your Global.asax.cs file and add line 6 of the snippet below:
1: protected void Application_Start()
2: {
3: AreaRegistration.RegisterAllAreas();
4:
5: RegisterRoutes(RouteTable.Routes);
6: ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
7: }
Now, you’re ready to reproduce the issue. Just F5 the project and when you hit the overridden CreateController() method for the second time, this is what it looks like for me: And continuing further gives me the same exception. I believe this is something that MS should fix, as not having a ‘favicon.ico’ file will be common for most applications. So I think that when you create an MVC project, line 6 should be added by default by Visual Studio itself:
1: public class MvcApplication : System.Web.HttpApplication
2: {
3: public static void RegisterRoutes(RouteCollection routes)
4: {
5: routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
6: routes.IgnoreRoute("favicon.ico");
7:
8: routes.MapRoute(
9: "Default", // Route name
10: "{controller}/{action}/{id}", // URL with parameters
11: new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
12: );
13: }
There it is, that’s the solution to avoid the exception altogether. I tried this in both the IE8 and Firefox browsers and was able to successfully reproduce the error. Hope someone will look at this issue and find a fix. Just before I finish up, I found another ‘bug’, if you want to call it that, with Visual Studio 2008. Remember how you could change which browser you want your application to run in by just right-clicking on the .aspx file and choosing ‘Browse with…’? Seems like that's missing when you're working with an MVC project. In order to test the above bug in the other browser, I had to load a classic ASP.NET project, change the settings and then run my MVC project. Felt kinda ‘icky’, for lack of a better word.
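
    Going back to the repro steps above, here is a hedged, simplified sketch of the MyControllerFactory class this post describes. Instead of copying the body of DefaultControllerFactory.CreateController() as the post does, it just logs the incoming controller name and delegates to the base class, which is enough to watch "favicon.ico" arrive as a second request:

        using System.Diagnostics;
        using System.Web.Mvc;
        using System.Web.Routing;

        public class MyControllerFactory : DefaultControllerFactory
        {
            public override IController CreateController(RequestContext requestContext, string controllerName)
            {
                // On a default MVC 2 project this prints "Home" for the first request,
                // then "favicon.ico" for the browser's follow-up request.
                Debug.WriteLine("CreateController asked for: " + controllerName);
                return base.CreateController(requestContext, controllerName);
            }
        }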

    Read the article

< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >