Search Results

Search found 12038 results on 482 pages for 'controller actions'.

Page 147/482 | < Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >

  • How to find selected node in Tree?

    - by badgirl
    Hello. I have a dynamic tree (and nodes can have children of their own). Every node has some actions: when I right-click a node, it offers them. One action, for example createChildNode, creates a child node, which in turn creates MyObject2. MyObject2 must be created with the help of MyObject1, which was created in the parent node (the one I right-clicked for actions). How do I get that object from the selected node? The objects in the nodes are put there with lookups.singleton(MyObjectX).
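    A minimal sketch of one way to read the parent object back out of the selected node (assuming a NetBeans Platform tree where the node's Lookup was populated via Lookups.singleton(myObject1); the holder class and method names below are illustrative):

        import org.openide.nodes.Node;

        public final class ChildNodeHelper {            // illustrative holder class
            // Called from the context-menu action on the right-clicked node.
            static void createChildNode(Node selectedNode) {
                // The node's Lookup holds the MyObject1 registered via Lookups.singleton().
                MyObject1 parent = selectedNode.getLookup().lookup(MyObject1.class);
                if (parent != null) {
                    MyObject2 child = new MyObject2(parent); // hypothetical constructor
                    // ...create and attach the new child node here
                }
            }
        }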

    Read the article

  • How to post data from a user control - ASP.NET MVC

    - by Dharmesh
    Currently I am working on a project in which a login user control sits in the master page. Earlier I had a separate login.aspx page and was able to call the Login method of the Home controller (with AcceptVerbs = Post). Now we have changed the idea and want to place the login control on the master page (home page). When I click the login button of the login control, it calls the Login method of the controller class with AcceptVerbs = Get. How can I call the Login method with AcceptVerbs = Post?
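    A minimal sketch of one way to do this (ASP.NET MVC 1/2 ASPX syntax; the control, action and field names are taken from the question or are illustrative): render an explicit POST form inside the login user control, so the submit always targets the POST action.

        <%-- Login.ascx, referenced from the master page --%>
        <% using (Html.BeginForm("Login", "Home", FormMethod.Post)) { %>
            <%= Html.TextBox("username") %>
            <%= Html.Password("password") %>
            <input type="submit" value="Login" />
        <% } %>

        // HomeController
        [HttpPost]   // on MVC 1 use [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Login(string username, string password)
        {
            // authenticate here, then redirect
            return RedirectToAction("Index");
        }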

    Read the article

  • Add multiple ActionName for button

    - by NewToBirtReporting
    I have one controller on which I handle the Save button click. I'm using the same controller and view for both Add and Edit. My code is as below: [HttpPost] [Button(ButtonName = "Save")] [ActionName("Create")] [ValidateAntiForgeryToken(Salt = "PostData")] public ActionResult Save(Ntegra m_Ntegra, FormCollection form) {} Since I'm using ActionName("Create") here, the button cannot also serve ActionName("Edit"). Can anyone tell me how I can achieve this? Thanks for the help. :)
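    One possible workaround, sketched below: keep a single shared implementation and expose it through two thin actions, one per ActionName (the Button attribute is the custom attribute from the question; Ntegra and the Salt value are likewise taken from it).

        [HttpPost]
        [Button(ButtonName = "Save")]
        [ActionName("Create")]
        [ValidateAntiForgeryToken(Salt = "PostData")]
        public ActionResult CreatePost(Ntegra m_Ntegra, FormCollection form)
        {
            return SaveInternal(m_Ntegra, form);
        }

        [HttpPost]
        [Button(ButtonName = "Save")]
        [ActionName("Edit")]
        [ValidateAntiForgeryToken(Salt = "PostData")]
        public ActionResult EditPost(Ntegra m_Ntegra, FormCollection form)
        {
            return SaveInternal(m_Ntegra, form);
        }

        // shared save logic for both Add and Edit
        private ActionResult SaveInternal(Ntegra m_Ntegra, FormCollection form)
        {
            // ...persist and redirect
            return RedirectToAction("Index");
        }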

    Read the article

  • Rewrite routes for URL

    - by user348173
    I have the following URL: http://localhost:12981/BaseEvent/EventOverview/12?type=Film This is the route: routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); I want the URL in the browser to look like: http://localhost:12981/Film/Overview/12 How can I do this? One more example: http://localhost:12981/BaseEvent/EventOverview/15?type=Sport should be http://localhost:12981/Sport/Overview/15 Thanks.
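    A minimal sketch of one way to get URLs of that shape: add a more specific route and register it before the generic "Default" route, so /Film/Overview/12 reaches BaseEvent/EventOverview with type = "Film" bound from the URL (the route name below is illustrative).

        routes.MapRoute(
            "EventOverview",                          // route name
            "{type}/Overview/{id}",                   // e.g. /Film/Overview/12, /Sport/Overview/15
            new { controller = "BaseEvent", action = "EventOverview" }
        );

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );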

    Read the article

  • Rails 3 NoMethodError on #&lt;ActiveRecord::Relation&gt;

    - by Dodi
    Hi! I'm writing a static page controller. I get the menuname in routes.rb and it calls the static controller's show method: match '/:menuname' => 'static#show' And in static_controller.rb: @static = Staticpage.where("menuname = ?", params[:menuname]) But if I try to print @static.title in the view, I get this error: undefined method `title' for # What's wrong? The SQL query looks good: SELECT staticpages.* FROM staticpages WHERE (menuname = 'asd')
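    The likely cause, sketched below: where returns an ActiveRecord::Relation (a collection), not a single record, so title is not defined on it. Take a single record from the relation instead:

        # static_controller.rb
        @static = Staticpage.where("menuname = ?", params[:menuname]).first
        # or, equivalently
        @static = Staticpage.find_by_menuname(params[:menuname])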

    Read the article

  • MySQL Query Ranking

    - by eft0
    Hey there, I have this MySQL query that lists a "Vote Ranking" for "Actions", and it works fine, but I want the "id_user" not to repeat, as if there were a "DISTINCT" on that field. SELECT count(v.id) votos, v.id_action, a.id_user, a.id_user_to, a.action, a.descripcion FROM votes v, actions a WHERE v.id_action = a.id GROUP BY v.id_action ORDER BY votos DESC
    The result:
        votes  act  id_user
        3      3    745059251
        2      20   1245069513
        2      23   1245069513
        2      26   100000882722297
        2      29   1245069513
        2      44   1040560484
        2      49   1257441644
        2      50   1040560484
    The expected result:
        votes  act  id_user
        3      3    745059251
        2      20   1245069513
        2      26   100000882722297
        2      44   1040560484
        2      49   1257441644
        2      50   1040560484
    Thanks in advance!

    Read the article

  • UIPicker reloading content

    - by kumaravel
    Hi, I have a problem with UIPickerView. I keep my picker in a navigation controller, and I want the picker's content to reload whenever the navigation controller pushes the picker view. But the picker view keeps the same content across navigations. How can I change this?
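    A minimal sketch of one common approach: reload the picker every time its view controller is about to appear (pickerView here is an illustrative outlet name).

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            // refresh the data backing the picker here if needed, then:
            [self.pickerView reloadAllComponents];
        }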

    Read the article

  • Would popup blockers stop a URL which pops up only when the user clicks on something?

    - by tomeaton
    I'm currently building a web application that can track a user's actions on a particular website and pop up a URL if the user takes certain actions, such as: a first click, responding to a question by clicking yes / no, clicking a submit button, or exiting the site. It is important that these URLs are served to the user and are not blocked by pop-up blockers. It is my understanding that there are certain exceptions within the major internet browsers that allow pop-ups if they are triggered by some user action, rather than being served as unsolicited pop-ups. Is this true? How do I design this web application so that it can serve these pop-ups (and not have them blocked)?
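    That understanding is broadly correct: most browsers allow a window opened synchronously inside a genuine user gesture (click, submit) and block ones opened from timers or other asynchronous callbacks. A small sketch of the pattern (the element id and URL are illustrative):

        document.getElementById('submitBtn').addEventListener('click', function () {
            // opened directly in the click handler – usually allowed
            var win = window.open('https://example.com/offer', '_blank');
            if (!win) {
                // still blocked (strict settings) – fall back to an in-page link or message
            }
        });

    Windows opened on "exiting the site" (unload handlers) are the case most likely to be blocked, since there is no direct user gesture behind them.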

    Read the article

  • [iPhone] Error reading plist file to fill a table

    - by Matthew
    Hi, I'm developing an app for iPhone but I have a problem... I have a view with some text fields, and the information written in them is saved in a plist file. With @class and #import declarations I import this view controller into another controller that manages a table view. The code I've just written appears to be right, but my table is filled with the same row 3 times... I don't know why there are 3 rows... Can anyone help me?

    Read the article

  • respond_with not working in Rails 3

    - by MDM
    As no one can help me with my UJS ajax problem, I am trying respond_with, but I am getting this error: RuntimeError (In order to use respond_with, first you need to declare the formats your controller responds to in the class level) Yet I have declared it in my controller: class HomepagesController < ApplicationController respond_to :html, :xml, :js def index @homepages = Homepage.search(params[:search]) respond_with(@homepages) end Does anyone have any idea why?

    Read the article

  • Multiple controllers with a single model

    - by Eric K
    I'm setting up a directory application for which I need to have two separate interfaces for the same Users table. Basically, administrators use the Users controller and views to list, edit, and add users, while non-admins need a separate interface which lists users in a completely different manner. To do this, would I be able to just set up another controller with different views but which accesses the Users model? Sorry if this is a simple question, but I've had a hard time finding how to do this.
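    Yes, nothing ties a controller to a single model, or a model to a single controller. A minimal sketch (the controller and action names here are illustrative):

        # app/controllers/directory_controller.rb
        class DirectoryController < ApplicationController
          def index
            # same User model the admin-facing UsersController uses
            @users = User.all
          end
        end

    with its own views under app/views/directory/ and a route pointing at it.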

    Read the article

  • Can we use (id) in an if-else condition?

    - by srikanth rongali
    I have written my code in the following way in cocos2d: id actionTo = [CCFadeOut actionWithDuration:4.0f]; id actionTo0 = [CCSequence actionWithDuration:2.0f]; if (m < enemyNumber) id actionTo1 = [CCCallFunc actionWithTarget:self selector:@selector(goToNextScene)]; else id actionTo1 = [CCCallFunc actionWithTarget:self selector:@selector(goToEndScene)]; id actionSeq = [CCSequence actions:actionTo, actionTo0, actionTo1, nil]; [targetE runAction: [CCSequence actions:actionSeq, nil]]; error: expected expression before 'id' I am getting the above error. Can we not use (id) in an if condition? I want one of two selectors to be called depending on the if-else condition. How can I do that? Thank you.
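    A sketch of the usual fix: in C (and Objective-C), a declaration cannot be the sole statement of an un-braced if/else branch, and even with braces each branch would create its own scope. Declare the variable once before the branch and only assign inside it (CCDelayTime below is an assumption – the original used [CCSequence actionWithDuration:], which is unusual for a plain delay):

        id actionTo  = [CCFadeOut actionWithDuration:4.0f];
        id actionTo0 = [CCDelayTime actionWithDuration:2.0f];   // assumed: a 2-second delay
        id actionTo1 = nil;
        if (m < enemyNumber) {
            actionTo1 = [CCCallFunc actionWithTarget:self selector:@selector(goToNextScene)];
        } else {
            actionTo1 = [CCCallFunc actionWithTarget:self selector:@selector(goToEndScene)];
        }
        id actionSeq = [CCSequence actions:actionTo, actionTo0, actionTo1, nil];
        [targetE runAction:actionSeq];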

    Read the article

  • Initialise a WiX CheckBox's check state based on a property?

    - by MauriceL
    How does one initialise a WiX check box based on the value of a property? So far, I've done the following: <Control Id="Checkbox" Type="CheckBox" X="0" Y="0" Width="100" Height="15" Property="CHECKBOX_SELECTION" Text="I want this feature" CheckBoxValue="1" TabSkip="no"> <Condition Action="hide">HIDE_CHECKBOX</Condition> <Condition Action="show">NOT HIDE_CHECKBOX</Condition> </Control> Currently I have two custom actions to set HIDE_CHECKBOX and CHECKBOX_SELECTION. The CHECKBOX_SELECTION custom action occurs immediately after the HIDE_CHECKBOX action. What I'm seeing is that HIDE_CHECKBOX is behaving correctly (i.e. the checkbox is hidden), which suggests that I've got the ordering of custom actions correct, but CHECKBOX_SELECTION is not changing the check state of the check box. Is this a safe assumption? Also, I've confirmed that SELECTION is being set to '1' in the logs.
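    A sketch of one approach, assuming the value can be computed before the dialog is shown: set CHECKBOX_SELECTION with a type-51 custom action scheduled early in the InstallUISequence, before the dialog that hosts the control is created (the IDs and condition below are illustrative). One thing worth checking: an MSI checkbox shows as checked whenever its property has any value at all, so to have it start unchecked the property must be left undefined rather than set to "0".

        <CustomAction Id="InitCheckboxSelection" Property="CHECKBOX_SELECTION" Value="1" />
        <InstallUISequence>
          <!-- run before the dialog that hosts the checkbox is built -->
          <Custom Action="InitCheckboxSelection" After="AppSearch">SOME_CONDITION</Custom>
        </InstallUISequence>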

    Read the article

  • Multifunctioning in JavaScript

    - by Starx
    The concept is running multiple functions concurrently. The reason is, I have a page which performs various actions through ajax. These actions include making multiple backups of new files uploaded to the upload directory, and I want this process to be initiated by a moderator. As this is a very lengthy process (it might even take hours to complete), it blocks other ajax requests from executing until it completes. I want to execute functions in parallel with the previously started one. I am using jQuery's ajax to initiate the request.

    Read the article

  • Rails layouts per action?

    - by mrbrdo
    I use a different layout for some actions (mostly for the new action in most of the controllers). I am wondering what the best way to specify the layout would be? (Consider that I am using 3 or more different layouts in the same controller.) I don't like using render :layout => 'name' that much. I did like doing layout 'name', :only => [:new], but it turns out I can't use that to specify 2 different layouts (e.g. if I call layout 2 times in the same controller, with different layout names and different :only options, the first one gets ignored - those actions don't display in the layout I specified). I'm using Rails 2.
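    A minimal sketch of one way around that in Rails 2: hand layout a method and pick the layout per action there, instead of stacking several layout ... :only declarations (the controller and layout names below are illustrative).

        class ProjectsController < ApplicationController
          layout :choose_layout

          # actions...

          private

          def choose_layout
            case action_name
            when 'new', 'create' then 'minimal'
            when 'report'        then 'print'
            else 'application'
            end
          end
        end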

    Read the article

  • (Rails) How do I reference resources from non-restful views...?

    - by humble_coder
    Hi All, How do you ensure that your JavaScript includes are included properly for every view that uses a particular layout? Basically, I have added some non-RESTful actions. I haven't added any routes for them; however, normal text rendering works fine. It's when I start requiring different "swf" and "js" files (files which are properly placed in public) that things get hairy. I start receiving "406 Not Acceptable" errors telling me that it's looking for the files in a subdirectory of the current controller. As it stands, I'm including the JS in the "application" layout file. It works for the index action, but doesn't seem to work for any of the non-RESTful actions. Thoughts?
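    One thing to check, sketched below: if the layout references the assets with relative paths, the browser resolves them under the current controller/action URL, which matches the "subdirectory of the current controller" symptom. Absolute (leading-slash) paths, or the bare helper form that expands to /javascripts/..., avoid that (the file name here is illustrative).

        <%# app/views/layouts/application.html.erb %>
        <%= javascript_include_tag 'player' %>                   <%# expands to /javascripts/player.js %>
        <%= javascript_include_tag '/javascripts/player.js' %>   <%# explicit absolute path %>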

    Read the article

  • MEF C# Service - DLL Updating

    - by connerb
    Currently, I have a C# service that runs off of many .dll's and has modules/plugins that it imports at startup. I would like to create an update system that basically stops the service, deletes any files it is told to delete (old versions), downloads new versions from a server, and starts the service. I believe I have coded this right except for the delete part, because as long as I am not overwriting anything, the file will download. If I try to overwrite something, it won't work, which is why I am trying to delete it before hand. However, when I do File.Delete() to the path that I want to do, it gives me access to the path is denied. Here is my code: new Thread(new ThreadStart(() => { ServiceController controller = new ServiceController("client"); controller.Stop(); controller.WaitForStatus(ServiceControllerStatus.Stopped); try { if (um.FilesUpdated != null) { foreach (FilesUpdated file in um.FilesUpdated) { if (file.OldFile != null) { File.Delete(Path.Combine(Utility.AssemblyDirectory, file.OldFile)); } if (file.NewFile != null) { wc.DownloadFile(cs.UpdateUrl + "/updates/client/" + file.NewFile, Path.Combine(Utility.AssemblyDirectory, file.NewFile)); } } } if (um.ModulesUpdated != null) { foreach (ModulesUpdated module in um.ModulesUpdated) { if (module.OldModule != null) { File.Delete(Path.Combine(cs.ModulePath, module.OldModule)); } if (module.NewModule != null) { wc.DownloadFile(cs.UpdateUrl + "/updates/client/modules/" + module.NewModule, Path.Combine(cs.ModulePath, module.NewModule)); } } } } catch (Exception ex) { Logger.log(ex); } controller.Start(); })).Start(); I believe it is because the files are in use, but I can't seem to unload them. I though stopping the service would work, but apparently not. I have also checked the files and they are not read-only (but the folder is, which is located in Program Files, however I couldn't seem to get it to not be read-only programmatically or manually). The service is also being run as an administrator (NT AUTHORITY\SYSTEM). I've read about unloading the AppDomain but AppDomain.Unload(AppDomain.CurrentDomain); returned an exception as well. Not too sure even if this is a problem with MEF or my program just not having the correct permissions...I would assume that it's mainly because the file is in use.

    Read the article

  • Error reading file with accented vowels

    - by Daniel Dcs
    The following statement to fill a list from a file: action = [] with open(os.getcwd() + "/files/" + "actions.txt") as temp: action = list(temp) gives me the following error: (result, consumed) = self._buffer_decode(data, self.errors, end) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 67: invalid continuation byte If I add errors='ignore': action = [] with open(os.getcwd() + "/files/" + "actions.txt", errors='ignore') as temp: action = list(temp) it reads the file, but without the ñ and the accented vowels á-é-í-ó-ú, given that Python 3, as I understand it, defaults to 'utf-8'. I've been looking for a solution for two or more days, and I'm only getting more confused. Thank you very much in advance for any suggestions.
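    A minimal sketch of the usual fix: byte 0xf1 is 'ñ' in Latin-1/CP1252, so the file is almost certainly not UTF-8. Open it with the encoding it was actually saved in, instead of throwing the accents away with errors='ignore' (whether it is latin-1 or cp1252 depends on the editor that produced the file):

        import os

        path = os.path.join(os.getcwd(), "files", "actions.txt")
        with open(path, encoding="latin-1") as temp:   # or encoding="cp1252"
            action = list(temp)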

    Read the article

  • CakePHP ACL use case(s)

    - by Jonathan
    I have a simple web app in development, and I want to establish a few user groups: Admin, Doctors & Patients. Each group would have its access restricted to particular controller actions rather than individual content. So, for example, Doctors can view patient records (index & view actions) but cannot delete them. Usually I would create a groups model, assign the various users to a group, and filter in the beforeFilter() method to determine whether the user has access. But if ACL can do the job, why write the code, right? Thanks
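    This is the case CakePHP's ACL + Auth combination is aimed at. A rough sketch of the wiring in the CakePHP 1.x style of that era (the group/ARO setup and the ACO tree are assumed to exist already):

        // app_controller.php
        function beforeFilter() {
            $this->Auth->authorize  = 'actions';        // let AuthComponent ask the ACL per controller action
            $this->Auth->actionPath = 'controllers/';   // root of the ACO tree for controller/action nodes
        }

    Granting "index and view but not delete" then becomes ACL allows/denies on the corresponding ACO nodes instead of hand-written group checks.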

    Read the article

  • What does the & sign mean in PHP?

    - by jeffkee
    I was trying to find the answer on Google, but I guess the & symbol works as some operator, or is just not a very searchable term for whatever reason. Anyhow, I saw this code snippet while learning how to create WordPress plugins, so I just need to know what the & means when it precedes a variable that holds a class object. //Actions and Filters if (isset($dl_pluginSeries)) { //Actions add_action('wp_head', array(&$dl_pluginSeries, 'addHeaderCode'), 1); //Filters add_filter('the_content', array(&$dl_pluginSeries, 'addContent')); }
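    For context, a small illustration of what the operator does in general: & takes a reference, so the callback array points at the same object rather than a copy (in PHP 4 this mattered because objects were copied on assignment; since PHP 5 objects are passed by handle anyway, and the & in snippets like the one above is largely a PHP 4-era habit).

        <?php
        $a = 1;
        $b = &$a;   // $b is now an alias for $a, not a copy
        $b = 2;
        echo $a;    // prints 2 – both names refer to the same value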

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential upload.
- No significant performance improvement between 4 and 8 parallel threads.
- Transferring binary chunks provides better performance than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gets better performance.
Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have a HP Server with SmartArray P400 controller (incl. 256 MB Cache/Battery Backup) with a logicaldrive with replaced failed physicaldrive that does not rebuild. This is how it looked when I detected the error: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) unassigned physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK) ~# I thought that I had drive 2I:1:8 configured as a spare for Array A and Array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even if only 1 physicaldrive of the RAID5 is failed. Does someone know why this could happen? The logicaldrive should go into "Degraded" mode but still be fully accessible from the host os!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible: ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8 Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# Interestingly it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried was to reenable the logicaldrive (to add the spare afterwards): ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# But as you can see, re-enabling the logicaldrive this was not possible. Now I replaced the failed drive by hotswapping it with the unassigned drive. The status now looks like this: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) ~# The logical drive is still not accessible. Why is it not rebuilding? What can I do? 
FYI, this is the configuration of my controller: ~# /usr/sbin/hpacucli ctrl slot=0 show Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: XXXX Cache Serial Number: XXXX RAID 6 (ADG) Status: Enabled Controller Status: OK Chassis Slot: Hardware Revision: Rev E Firmware Version: 5.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Analysis Inconsistency Notification: Disabled Raid1 Write Buffering: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True ~# Thanks for you help in advance.

    Read the article

  • Custom fail2ban Filter

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log Custom log The format of the /var/log/phpmyadmin_auth.log file is: phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php Custom filter [Definition] # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; phpMyAdmin jail [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 6 The fail2ban log contains: 2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails 2012-10-04 10:52:23,091 fail2ban.jail : INFO Jail 'ssh-iptables' stopped 2012-10-04 10:52:23,866 fail2ban.jail : INFO Jail 'fail2ban' stopped 2012-10-04 10:52:23,994 fail2ban.jail : INFO Jail 'ssh' stopped 2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban 2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6 2012-10-04 10:52:24,253 fail2ban.jail : INFO Creating new jail 'ssh' 2012-10-04 10:52:24,253 fail2ban.jail : INFO Jail 'ssh' uses poller 2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6 2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,279 fail2ban.jail : INFO Creating new jail 'ssh-iptables' 2012-10-04 10:52:24,279 fail2ban.jail : INFO Jail 'ssh-iptables' uses poller 2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,287 fail2ban.jail : INFO Creating new jail 'fail2ban' 2012-10-04 10:52:24,287 fail2ban.jail : INFO Jail 'fail2ban' uses poller 2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log 2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3 2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800 2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800 2012-10-04 10:52:24,292 fail2ban.jail : INFO Jail 'ssh' started 2012-10-04 10:52:24,293 fail2ban.jail : INFO Jail 'ssh-iptables' started 2012-10-04 10:52:24,297 fail2ban.jail : INFO Jail 'fail2ban' started When I issue: sudo service fail2ban restart fail2ban emails me to say ssh has restarted, but I receive no such email about my phpmyadmin jail. Repeated failed logins to phpMyAdmin does not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong? 
Update: added changes from default installation Starting with a clean fail2ban installation: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local Change email address to my own, action to: action = %(action_mwl)s Append the following to jail.local [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 4 Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf # phpmyadmin configuration file # # Author: Michael Robinson # [Definition] # Option: failregex # Notes.: regex to match the password failures messages in the logfile. The # host must be matched by a group named "host". The tag "<HOST>" can # be used for standard IP/hostname matching and is only an alias for # (?:::f{4,6}:)?(?P<host>\S+) # Values: TEXT # # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; # Option: ignoreregex # Notes.: regex to ignore. If this regex matches, the line is ignored. # Values: TEXT # # Ignore our own bans, to keep our counts exact. # In your config, name your jail 'fail2ban', or change this line! ignoreregex = Restart fail2ban sudo service fail2ban restart PS: I like eggs

    Read the article

