Search Results

Search found 9271 results on 371 pages for 'drop handler'.

Page 48/371 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so that just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios:

      - It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up
      - The username, password, and role name are hardcoded in the app (although they could be pulled from config)
      - Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed

    However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes.

    Challenge #1: Running on startup without requiring the user to install or configure anything

    I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing additional NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience. Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named Initializer, then dropped this attribute in it:

        [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")]

    That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace.

    Challenge #2: Only one PreApplicationStartMethod per assembly

    In .NET 4, PreApplicationStartMethod is marked AllowMultiple = false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple-instance problem if you're on .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here: Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner-level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod.

    Challenge #3: PreApplicationStartMethod kicks in before configuration has been read

    This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that?
    Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator

    The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start:

        [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")]

    This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered:

        public static void Run()
        {
            if (!_hasInited)
            {
                RunPreStartMethods();

                // Register our module to handle any Post Start methods.
                // But outside of ASP.NET, just run them now
                if (HostingEnvironment.IsHosted)
                {
                    Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule));
                }
                else
                {
                    RunPostStartMethods();
                }

                _hasInited = true;
            }
        }

    Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy:

        using System;
        using System.Web;
        using Microsoft.Web.Infrastructure.DynamicModuleHelper;

        [assembly: PreApplicationStartMethod(typeof(MyAppStart), "Start")]

        public class CoolModule : IHttpModule
        {
            // implementation not important
            // imagine something cool here
        }

        public static class MyAppStart
        {
            public static void Start()
            {
                DynamicModuleUtility.RegisterModule(typeof(CoolModule));
            }
        }

    Challenge #5: Cooperating with SimpleMembership

    The ASP.NET MVC Internet template includes SimpleMembership, which is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema on you, it can work with your database schema. In the MVC 4 Internet template's case, it uses Entity Framework Code First to define the user model. The SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). The comment in that method says "Ensure ASP.NET Simple Membership is initialized only once per app start", which sounds like good advice. I figured the best thing would be to call that directly:

        new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null);

    I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems: first, directly calling a method on a filter, while legal, seems odd. Worse, though, the filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here.
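    For illustration, here is a minimal sketch of that duplication workaround (not from the original post): it calls WebSecurity.InitializeDatabaseConnection directly from the drop-in file, so there is no dependency on the application's namespace. The connection string, table, and column names are assumptions based on the MVC 4 Internet template defaults.

        using WebMatrix.WebData;

        public static class SimpleMembershipBootstrap
        {
            private static readonly object lockObject = new object();

            public static void EnsureInitialized()
            {
                lock (lockObject)
                {
                    // Mirrors the template's InitializeSimpleMembershipAttribute without
                    // referencing the application's namespace.
                    if (!WebSecurity.Initialized)
                    {
                        WebSecurity.InitializeDatabaseConnection(
                            "DefaultConnection",   // connection string name (assumed)
                            "UserProfile",         // user table (assumed)
                            "UserId",              // user id column (assumed)
                            "UserName",            // user name column (assumed)
                            autoCreateTables: true);
                    }
                }
            }
        }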
    Challenge #6: Module Init methods are called more than once

    When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the module's Init method:

        void IHttpModule.Init(HttpApplication context)
        {
            lock (lockObject)
            {
                if (!initialized)
                {
                    //Do stuff
                }
                initialized = true;
            }
        }

    Putting it all together

    With all of that out of the way, here's the code I came up with:

        using Mvc4SampleApplication.Filters;
        using System.Web;
        using System.Web.Security;
        using WebMatrix.WebData;

        [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")]

        public static class PreApplicationTasks
        {
            public static void Initializer()
            {
                Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility
                    .RegisterModule(typeof(UserInitializationModule));
            }
        }

        public class UserInitializationModule : IHttpModule
        {
            private static bool initialized;
            private static object lockObject = new object();
            private const string _username = "Owner";
            private const string _password = "p@ssword123";
            private const string _role = "Administrator";

            void IHttpModule.Init(HttpApplication context)
            {
                lock (lockObject)
                {
                    if (!initialized)
                    {
                        new InitializeSimpleMembershipAttribute().OnActionExecuting(null);

                        if (!WebSecurity.UserExists(_username))
                            WebSecurity.CreateUserAndAccount(_username, _password);
                        if (!Roles.RoleExists(_role))
                            Roles.CreateRole(_role);
                        if (!Roles.IsUserInRole(_username, _role))
                            Roles.AddUserToRole(_username, _role);
                    }
                    initialized = true;
                }
            }

            void IHttpModule.Dispose() { }
        }

    The Verdict: Is this a good thing?

    Maybe. I think you'll agree that the journey was undoubtedly worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful. Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better (a sketch of that alternative appears below):

      - It's a little more clear what's going on
      - Installing a NuGet package might be a little less error-prone than copying a file
      - A novice user could uninstall the package when complete
      - It's a good introduction to NuGet, which is a good thing for beginners to see
      - This code requires either duplicating a little code from that filter or modifying the file to use the application's namespace

    Honestly, I'm undecided at this point, but I'm glad that I can weigh the options.

    If you're interested: Why are you doing this?

    I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs is using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair number of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done.
    It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or assigning the first user who registers with the username "Administrator" to the Administrator role - but they both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.
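    As a companion to the verdict above, here is a minimal sketch (not from the original post) of what the WebActivator-based alternative could look like: the same seeding logic runs as a PostApplicationStartMethod, after configuration has been read, so Challenges #3 and #4 don't apply. The type and method names are illustrative assumptions, and the SimpleMembership initialization from Challenge #5 is still required.

        using System.Web.Security;
        using WebMatrix.WebData;

        [assembly: WebActivator.PostApplicationStartMethod(typeof(AdminUserSeeder), "SeedAdminUser")]

        public static class AdminUserSeeder
        {
            public static void SeedAdminUser()
            {
                // Runs after Application_Start, so membership configuration is available.
                // SimpleMembership still needs to be initialized first (see Challenge #5).
                if (!WebSecurity.UserExists("Owner"))
                    WebSecurity.CreateUserAndAccount("Owner", "p@ssword123");
                if (!Roles.RoleExists("Administrator"))
                    Roles.CreateRole("Administrator");
                if (!Roles.IsUserInRole("Owner", "Administrator"))
                    Roles.AddUserToRole("Owner", "Administrator");
            }
        }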

    Read the article

  • Is there a way to fix the width of a drop-down list?

    - by Harry Pham
    Here is what I've got:

        <select id="box1">
          <option>ABCDEFG</option>
        </select>
        <select id="box2">
          <option>ABCDEFGHIJKLMNO</option>
        </select>

    I have two different drop-down lists. Since the width of a drop-down list depends on the width of its longest option text, I end up with two drop-down lists of two different widths, which makes my webpage look goofy. What I want is for both of my drop-down lists to have the same width (I'd prefer the width to be quite long, so that even the longest item won't be truncated).

    Read the article

  • Drag and drop no longer working once the application gets installed

    - by Sdry
    I have an application with drag-and-drop functionality for importing images and videos. While developing and testing through Visual Studio this has never given any problems. After installing through a setup project, everything in the application works fine except the drag and drop, which seems to do nothing. Are there any security settings that need to be set through an installer, or something of that nature, that could be preventing drag and drop after installation?
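    For reference, here is a minimal WinForms sketch (WinForms is an assumption; the question doesn't name the UI framework) of the kind of file drag-and-drop wiring being described. If wiring like this works under Visual Studio but does nothing once installed, one commonly reported culprit - not stated in the question - is an elevation mismatch: Windows blocks drag-and-drop from a non-elevated Explorer window into an application running as administrator.

        using System;
        using System.Windows.Forms;

        public class DropForm : Form
        {
            public DropForm()
            {
                // Drag-and-drop must be explicitly enabled on the drop target.
                AllowDrop = true;

                DragEnter += (s, e) =>
                {
                    // Accept only file drops (dropped images/videos arrive as FileDrop data).
                    e.Effect = e.Data.GetDataPresent(DataFormats.FileDrop)
                        ? DragDropEffects.Copy
                        : DragDropEffects.None;
                };

                DragDrop += (s, e) =>
                {
                    var files = (string[])e.Data.GetData(DataFormats.FileDrop);
                    foreach (var file in files)
                        Console.WriteLine("Dropped: " + file); // import logic would go here
                };
            }
        }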

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, then shoves the result down the ASP.NET Response object pipeline, writing the content into Response.OutputStream and separately sending the HTTP headers via the Response.Headers collection. The headers turned out to be the problem, specifically HTTP cookies, which for some reason ended up getting stripped out in some scenarios.

    My handler works like this: the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and then dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the HttpHandler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario.

    It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically, I have a couple of headers, including a Set-Cookie header, and some output that gets written into the Response object.

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config set handler (see entry below).
                 *
                 * Best way is to look at the HTTP headers in Fiddler
                 * or Chrome/FireBug/IE tools and look for the
                 * WWTHREADSID cookie in the outgoing Response headers
                 * (if the cookie is not there you see the problem!)
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used the Set-Cookie header gets removed!
                    // If commented out, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

                    // *** This always works, even when the explicit
                    //     Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior, this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" />

    Note: Oddly enough, this problem manifests only when the handler is configured through web.config - not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller.

    What's the problem exactly?

    The code above simulates the more complex code in my live handler, which picks up the HTTP response from the backend application, peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie headers. I found that the Set-Cookie headers were not making it into the Response headers output.
    Here's the Chrome HTTP inspector trace (the screenshots are not included in this excerpt): notice - no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine.

    As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers.

    Bizarro Land: Response.ClearHeaders() causes the cookie to get stripped

    Now, here is where it really gets bizarre. The problem occurs only if:

      - Response.ClearHeaders() was called before headers are added
      - the handler is an HttpHandler declared in web.config

    Clearly this is an edge of an edge case, but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page or MVC controller it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.

    Workarounds

    As is often the case, the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for cookies

    The easiest and most obvious solution to this problem is simply not to use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case, where I dynamically read headers from the original output and then programmatically write header key/value pairs back into the Response.Headers collection, I don't actually look at each header specifically, so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values, which can be notoriously brittle.

    Don't use Response.ClearHeaders()

    The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to be stripped. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft or the IIS team would have to comment on that.
    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent calls were mainly for safety, but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear them out:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }
            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }
            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders() with this and I don't get the funky side effects from Response.ClearHeaders().

    Summary

    I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks or even in ASHX handlers - only in web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem, or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, an extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7.

    Read the article

  • Is it time to drop Courier from your monospace font stacks?

    - by Jeff
    I've been fine-tuning my font stacks lately and was wondering: is it safe to drop Courier from my monospace font stack yet? Would you feel comfortable dropping it? Of course, monospace is my final fallback.

    Note 1: OS testbed: WinXP, WinVista, Win7, iPhone, iPad. Based on my research, these browsers now substitute Courier New for Courier by default: IE9+, Chrome 2+, Firefox 10+, Safari 3.1+, and iDevices.

    Note 2: The default "font-family: monospace;" renders as Courier New in every browser I've tested, from IE6 through the latest iPhone/iPad devices. EDIT: One exception is Opera 12, which renders Consolas on Windows; Opera 10 renders Courier New.

    Note 3: I've noticed that Courier refuses to render with any font smoothing (anti-aliasing) in any browser I've tested, regardless of system and/or browser display settings - probably because it's an old bitmap font. This could be because of my system setup, however.

    Read the article

  • Why would URLs submitted in Google Webmaster Tools drop to 0?

    - by ambient
    Why would URLs submitted in Google Webmaster Tools drop to 0? It's a small site, only about 20 pages. I submitted the XML sitemap and for about a week it said 20 URLs submitted. A day or so ago it indexed about 17 of the pages, but today it not only says that 0 are indexed but also that 0 have been submitted. I did a site search on Google and found clearly that the pages are indexed, so is this just an error in Google Webmaster Tools? Any help or thoughts would be appreciated. Thanks!

    Read the article

  • In Open Office Calc, how do I drag and drop cells to insert rather than replace their destination?

    - by joachim
    I want to rearrange rows with the mouse in Calc. In Excel, I select the whole row, then drag and drop it while holding SHIFT. This causes the drag and drop cursor to turn into a bar rather than cells, and the cells are inserted at the bar's position. Is there a way to accomplish the same sort of thing in Calc without going round the houses inserting columns before the drag operation?

    Read the article

  • In OpenOffice Calc, how do I drag and drop cells to insert rather than replace their destination?

    - by joachim
    I want to rearrange rows with the mouse in Calc. In Excel, I select the whole row, then drag and drop it while holding Shift. This causes the drag and drop cursor to turn into a bar rather than cells, and the cells are inserted at the bar's position. Is there a way to accomplish the same sort of thing in Calc without going around the houses inserting columns before the drag operation?

    Read the article

  • Why does the JSF action tag handler in JSP invoke rendering immediately after creation?

    - by Pentius
    Dear fellows, I read the article "Improving JSF by Dumping JSP" by Hans Bergsten. There I read the following:

        "The JSP container processes the page and invokes the JSF action tag handlers as they are encountered. A JSF tag handler looks for the JSF component it represents in the component tree. If it can't find the component, it creates it and adds it to the component tree. It then asks the component to render itself."

    and furthermore:

        "On the first request, the action creates its component and asks it to render itself."

    I understand that the immediate rendering after the creation of the component is the problem here (the reference to the input component can't be resolved in the example). That's one reason why JSF doesn't fit with JSP. But it reads as if the action tag handler itself asks the component to render. Or is it JSP that triggers the rendering directly after the action tag handler has created the component? If it is the action tag handler, I don't understand why this is the fault of JSP. What is different here from what JSF intended? Thanks for your help - I need this for my thesis.

    Read the article

  • C#, WinForms: Why aren't KeyDown events chaining up to the Form? I have to add an event handler to each child control

    - by blak3r
    As I understand it, when a key is pressed it should first invoke the KeyDown handler of the control that has focus, then call the KeyDown handler for any parent controls, all the way up to the main form. The only thing that would stop the chain would be if, somewhere along the chain, one of the event handlers did:

        e.SuppressKeyPress = true;
        e.Handled = true;

    In my case, KeyDown events never get to the main form. I have Form - Panel - Button, for example. Panel doesn't offer a KeyDown event, but that shouldn't stop it from reaching the main form, right? Right now, as a workaround, I set every single control to call an event handler I wrote. I'm basically trying to prevent Alt-F4 from closing the application and instead minimize it.
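    For comparison, a minimal sketch (not from the question) of a different technique for the stated goal that avoids wiring KeyDown on every control: cancel the user-initiated close on the form itself. Note that CloseReason.UserClosing also covers the title-bar close button and the system menu's Close command, not just Alt+F4, which may be broader than intended.

        using System.Windows.Forms;

        public class MainForm : Form
        {
            protected override void OnFormClosing(FormClosingEventArgs e)
            {
                base.OnFormClosing(e);

                // Intercept closes initiated by the user (Alt+F4, the X button, system menu)
                // and minimize instead of closing.
                if (e.CloseReason == CloseReason.UserClosing)
                {
                    e.Cancel = true;
                    WindowState = FormWindowState.Minimized;
                }
            }
        }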

    Read the article

  • Can a CouchDB document update handler get an update conflict?

    - by jhs
    How likely is a revision conflict when using an update handler? Should I concern myself with conflict-handling code when writing a robust update function?

    As described in Document Update Handlers, CouchDB 0.10 and later allows on-demand server-side document modification. Update handlers can process non-JSON formats; but the other major features are these:

      - An HTTP front-end to arbitrarily complex document modification code
      - Similar code needn't be written for all possible clients - a DRY architecture
      - Execution is faster and less likely to hit a revision conflict

    I am unclear about the third point. Executing locally, the update handler will run much faster and with lower latency. But in situations with high contention, that does not guarantee a successful update. Or does the update handler guarantee a successful update?

    Read the article

  • What is the chance a CouchDB document update handler will get a revision conflict?

    - by jhs
    How likely is a revision conflict when using an update handler? Should I concern myself with conflict-handling code when writing a robust update function?

    As described in Document Update Handlers, CouchDB 0.10 and later allows on-demand server-side document modification. Update handlers can process non-JSON formats; but the other major features are these:

      - An HTTP front-end to arbitrarily complex document modification code
      - Similar code needn't be written for all possible clients - a DRY architecture
      - Execution is faster and less likely to hit a revision conflict

    I am unclear about the third point. Executing locally, the update handler will run much faster and with lower latency. But in situations with high contention, that does not guarantee a successful update. Or does the update handler guarantee a successful update?

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites and thought I had been penalized for this. I asked this question and some experienced members told me that the crosslinking may not necessarily be the reason. The sites are on the same host, on different C-class IPs, and every site is linked to each other. Each site targets long-tail keywords:

      - Site 1 - BMW Used Cars - and my area
      - Site 2 - WW Used Cars - and my area
      - And so on...

    When I crosslinked them (in the sidebar), I did it for the users; instead of repeating the terms "used cars" and my location over and over (since my users are targeted) I just crosslinked them using the brand: BMW, WW. Targeting locally, my niches are not overly competitive, so I did not need too many external links to rank in various positions on the 1st page. I'm thinking that when I chose to link using only the brand, Google might have thought I wanted to actually rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now no-followed the links and am noticing a slight recovery, but if it's not an interlinking penalty it would be a shame not to benefit from my links.

    Read the article

  • What could have caused a large traffic drop from Google in early May?

    - by Scott Schluer
    I have a website (www.equispot.com) that has been indexed for almost 2 years in Google. I managed to get myself on the first page (average position 6-8) on Google for my target keyword of "horses for sale" and held there pretty solidly for months. Suddenly, with no changes to the site, traffic from Google dropped like a rock in early May. I slowly fell in position until now I'm sitting at the bottom of page 4. I have never hired an SEO firm, have not used any "black hat" techniques that Google would have penalized me for in their May update, etc. I'm not familiar enough with SEO to know how to look at link profiles, etc. to tell if there's something wrong. I've run my site through a DNS checker and it came back with no errors. Google Webmaster Tools shows no messages or notices of any kind, just a drop in traffic. GWT also shows only 2 server errors and 1 404. Is there anyone who can tell me by quickly checking my domain if there's an obvious reason that my traffic would have fallen so far, something that I can fix?

    Read the article

  • In Spring MVC, is it possible to have different return types in one request handler method?

    - by Bobo
    For example, if a request succeeds, I will return a View; if not, I'll return a String containing an error message and set the content type to either XML or JSON. Based on what I've read, it seems like I should use "void" as the return type for handler methods. Check this out:

        "void if the method handles the response itself (by writing the response content directly, declaring an argument of type ServletResponse / HttpServletResponse for that purpose) or if the view name is supposed to be implicitly determined through a RequestToViewNameTranslator (not declaring a response argument in the handler method signature)." (Spring Framework reference)

    What I don't understand is what "the view name is supposed to be implicitly determined through a RequestToViewNameTranslator (not declaring a response argument in the handler method signature)" means. Can anyone give me an example?

    Read the article

  • Block Skype on Cisco IOS

    - by ensnare
    I'm trying to block Skype via policy routing but it's not working... here's my configuration:

        class-map match-any block
         match protocol skype
        policy-map QoS-Priority-Input
         class block
          police 1000000 31250 31250 conform-action drop exceed-action drop violate-action drop
        policy-map QoS-Priority-Output
         class block
          police 1000000 31250 31250 conform-action drop exceed-action drop violate-action drop
        interface FastEthernet4
         description WAN
         service-policy input QoS-Priority-Input
         service-policy output QoS-Priority-Output

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether the rules make sense - and whether I've left out anything essential. In addition to port 80, I also need to have ports 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh

        # Clear all existing rules.
        iptables -F

        # ACCEPT connections for loopback network connection, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT

        # ALLOW established traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
        iptables -A INPUT -f -j DROP

        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # ALLOW ssh traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT

        # ALLOW http traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT

        # ALLOW mysql traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT

        # DROP any other traffic.
        iptables -A INPUT -j DROP

    Read the article

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to iptables, but I am trying to set up a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Setup default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However, I've locked down my outgoing connections and that means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James

    Read the article

  • New install of Steam not running on new install of Ubuntu 13.10

    - by inferKNOX
    I tried purging Steam, un-installing and reinstalling Steam, deleting /home/.steam/share/steam/appcache/, deleting everything in /home/.steam/share/steam/, and nothing helped. I installed Ubuntu, then Steam into it directly afterward. I installed Steam from the Ubuntu Software Centre, launched it, it updated 206MB, then closed. When I tried to launch it again, it momentarily flashes the checking-for-updates dialogue, then closes, every time. Then (in an unrelated event) Ubuntu said some system updates were necessary and one of them was the Steam launcher. I did the update, tried to launch Steam; same story. I really need help on this, as I did a complete re-install of Ubuntu, then Steam again, and it did not help at all. Here's the log:

        user@computer:~$ steam
        Running Steam on ubuntu 13.10 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        unlinked 0 orphaned pipes
        removing stale semaphore last operated on by process 2297 with name 0eBlobRegistryMutex_313E4D748EE12691A95DDE8913185F7E
        removing stale semaphore last operated on by process 2297 with name 0eBlobRegistrySignal_313E4D748EE12691A95DDE8913185F7E
        removing stale semaphore last operated on by process 2297 with name 0emSteamEngineInstance
        removing stale semaphore last operated on by process 2297 with name 0eSteamEngineLock
        Gtk-Message: Failed to load module "overlay-scrollbar"
        Gtk-Message: Failed to load module "unity-gtk-module"
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number
        [1030/115016:WARNING:proxy_service.cc(958)] PAC support disabled because there is no system implementation
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Installing breakpad exception handler for appid(steam)/version(1381282832_client)
        Steam: An X Error occurred
        X Error of failed request: BadValue (integer parameter out of range for operation)
        Major opcode of failed request: 18 (X_ChangeProperty)
        Value in failed request: 0x0
        Serial number of failed request: 105
        xerror_handler: X failed, continuing
        Uploading dump (out-of-process) [proxy '']
        /tmp/dumps/crash_20131030115012_1.dmp
        /home/user/.local/share/Steam/steam.sh: line 717: 2650 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        Finished uploading minidump (out-of-process): success = yes
        response: CrashID=bp-484ddae7-0b1c-4ae4-be84-42a9c2131030

    Thanks in advance.

    Read the article

  • Warning and error information in stored procedures revisited

    - by user13334359
    Originally, the way to handle warnings and errors in a MySQL stored routine was designed as follows:

      - if a warning was generated during execution of a stored routine which has a handler for such a warning/error, MySQL remembered the handler, ignored the warning and continued execution
      - after the routine was executed, MySQL checked if there was a remembered handler and activated it, if any

    This logic was not ideal and caused several problems, particularly:

      - it was not possible to choose the right handler for an instruction which generated several warnings or errors, because only the first one was chosen
      - handling conditions in the current scope messed with conditions in different scopes
      - generated warnings/errors were not placed in the Diagnostic Area, which is against the SQL Standard

    The first attempt to fix this was made in version 5.5. The patch left the Diagnostic Area intact after stored routine execution, but cleared it at the beginning of each statement which can generate warnings or work with tables. The Diagnostic Area was checked after stored routine execution. This patch solved the issue with the order of condition handlers, but led to new issues. The most common was that an outer stored routine could see warnings which should already have been handled by a handler inside an inner stored routine, even though the latter has a handler. I even had to write a blog post about it.

    And now I am happy to announce that this behaviour has changed a third time. Since version 5.6, the Diagnostic Area is cleared after an instruction leaves its handler. This means that only one handler will see the condition it is supposed to process, and handlers fire in the proper order. All the past problems are solved. I am happy that my old blog post describing the weird behaviour in version 5.5 is not true any more.

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.

    Handling exceptions

    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.

    Exception handler control flow

    try blocks: You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.

    catch blocks: The only way of entering a catch block is if it is run by the SEH. At the start of the block's execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:

        .try {
            // ...
            newobj instance void [mscorlib]System.Exception::.ctor()
            throw
        MidTry:
            // ...
            leave.s RestOfMethod
        }
        catch [mscorlib]System.Exception {
            // ...
            leave.s MidTry
        }
        RestOfMethod:
        // ...

    As far as I know, this mechanism is not exposed in C# or VB.

    finally/fault blocks: The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.

    filter blocks: I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.

    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.

    Read the article
