Search Results

Search found 5416 results on 217 pages for 'urls py'.

Page 106 of 217

  • Why should I install Python packages into `~/.local`?

    - by Matthew Rankin
    Background I don't develop using OS X's system provided Python versions (on OS X 10.6 that's Python 2.5.4 and 2.6.1). I don't install anything in the site-packages directory for the OS provided versions of Python. (The only exception is Mercurial installed from a binary package, which installs two packages in the Python 2.6.1 site-packages directory.) I installed three versions of Python, all using the Mac OS X installer disk image: Python 2.6.6 Python 2.7 Python 3.1.2 I don't like polluting the site-packages directory for my Python installations. So I only install the following five base packages in the site-packages directory. For the actual method/commands used to install these, see SO Question 4324558. setuptools/ez_setup distribute pip virtualenv virtualenvwrapper All other packages are installed in virtualenvs. I am the only user of this MacBook. Questions Given the above background, why should I install the five base packages in ~/.local? Since I'm installing these base packages into the site-packages directories of Python distributions that I've installed, I'm isolated from the OS X's Python distributions. Using this method, should I be concerned about Glyph's comment that other things could potentially break (see his comment below)? Again, I'm only interested in where to install those five base packages. Related Questions/Info I'm asking because of Glyph's comment to my answer to SO question 4314376, which stated: NO. NEVER EVER do sudo python setup.py install whatever. Write a ~/.pydistutils.cfg that puts your pip installation into ~/.local or something. Especially files named ez_setup.py tend to suck down newer versions of things like setuptools and easy_install, which can potentially break other things on your operating system. Previously, I asked What's the proper way to install pip, virtualenv, and distribute for Python?. However, no one answered the "why" of using ~/.local.
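    For reference, this is a minimal sketch of the kind of ~/.pydistutils.cfg Glyph is alluding to (the prefix-style layout is illustrative, not prescriptive; depending on the platform and Python build you may also need to put the resulting site-packages directory on PYTHONPATH):

        [install]
        # install into a per-user prefix instead of the interpreter's site-packages;
        # libraries land under ~/.local/lib/pythonX.Y/site-packages, scripts under ~/.local/bin
        prefix = ~/.local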

    Read the article

  • inserts 'Array' into mysql table

    - by Noah Smith
    I want to insert an array into a MySQL table. The array is produced by a script that scans all the links on a page, converts them into absolute links and then collects them in an array. I decided to mysql_query the array into the table, but now I am stuck: it only inserts the literal string 'Array' instead of inserting each element of the array as its own row. Any ideas?

        <?php
        require_once('simplehtmldom_1_5/simple_html_dom.php');
        require_once('url_to_absolute/url_to_absolute.php');

        $connect = mysql_connect("xxxx", "xxxx", "xxx") or die('Couldn\'t connect to MySQL Server: ' . mysql_error());
        mysql_select_db("xxxx", $connect ) or die('Couldn\'t Select the database: ' . mysql_error( $connect ));

        $links = Array();
        $URL = 'http://www.theqlick.com'; // change it for urls to grab

        // grabs the urls from URL
        $file = file_get_html($URL);
        foreach ($file->find('a') as $theelement) {
            $links[] = url_to_absolute($URL, $theelement->href);
        }

        print_r($links);

        mysql_query("INSERT INTO pages (url) VALUES ('$links[]')");
        mysql_close($connect);

    Read the article

  • singledispatch in class, how to dispatch self type

    - by yanxinyou
    Using Python 3.4. I want to use singledispatch to dispatch on different types in the __mul__ method. The code looks like this:

        class Vector(object):
            ## some code not pasted

            @functools.singledispatch
            def __mul__(self, other):
                raise NotImplementedError("can't mul these type")

            @__mul__.register(int)
            @__mul__.register(object)  # because I can't use Vector, I have to use object
            def _(self, other):
                result = Vector(len(self))  # start with vector of zeros
                for j in range(len(self)):
                    result[j] = self[j]*other
                return result

            @__mul__.register(Vector)  # how can I use the class's own type here?
            @__mul__.register(object)
            def _(self, other):
                pass  # needs implementation

    As you can see from the code, I want to support Vector*Vector, but this raises a NameError:

        Traceback (most recent call last):
          File "p_algorithms\vector.py", line 6, in <module>
            class Vector(object):
          File "p_algorithms\vector.py", line 84, in Vector
            @__mul__.register(Vector)  # how can I use the class's own type here?
        NameError: name 'Vector' is not defined

    The question really is: how can I use the class's own name as a type inside the class's method definitions? I know C++ has forward class declarations for this. How does Python solve my problem? It also seems strange that result = Vector(len(self)) works, i.e. that Vector can be used inside the method body.

    Read the article

  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is the one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC. Loading jQuery Properly From CDN Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in user's browsers already as jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links gives the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use jQuery CDN plus a fallback link on my WebLog for example: <!DOCTYPE HTML> <html> <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> <script> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript " + "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E")); </script> <title>Rick Strahl's Web Log</title> ... </head>   You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (jQuery object exists). If it didn't load another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web  assembly, but the URL can also be local script like src="/scripts/jquery.min.js". Important: Use the proper Protocol/Scheme for  for CDN Urls [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to use this is by using a protocol relative URL: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-) Version Numbers When you use a CDN you notice that you have to reference a specific version of jQuery. 
When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personaly I think it makes sense to have a single place where you can specify common script libraries that you want to load and more importantly which versions thereof and where they are loaded from. Loading Scripts via Server Code Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages  the process is more raw, requiring you to embed script references in the right place. But its also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that merely a string of the script tags shown at the begging of this post along with some options on how that string is comprised. You'll be able to specify in one place which version loads and then all places where the help function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics. 
Providing Resources in ControlResources In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string contants that reference those resource IDs. The library also provides a few helper methods for loading common scriptscripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading: A Script LoadMode (CDN, Resource, or script url) A default CDN Url A fallback url They are  static properties in the ControlResources class: public class ControlResources { /// <summary> /// Determines what location jQuery is loaded from /// </summary> public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"; /// <summary> /// jQuery UI fallback Url if CDN is unavailable or WebResource is used /// Note: The file needs to exist and hold the minimized version of jQuery ui /// </summary> public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js"; } These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these default in Application_Init or similar startup code if you need to change them for your application: protected void Application_Start(object sender, EventArgs e) { // Force jQuery to be loaded off Google Content Network ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; // Allow overriding of the Cdn url ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"; // Route to our own internal handler App.OnApplicationStart(); } With these basic settings in place you can then embed expressions into a page easily. In WebForms use: <!DOCTYPE html> <html> <head runat="server"> <%= ControlResources.jQueryLink() %> <script src="scripts/ww.jquery.min.js"></script> </head> In Razor use: <!DOCTYPE html> <html> <head> @Html.Raw(ControlResources.jQueryLink()) <script src="scripts/ww.jquery.min.js"></script> </head> Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text. 
Both the WebForms and Razor output produce: <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));</script> <script src="scripts/ww.jquery.min.js"></script> </head> which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course: /// <summary> /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings /// of this class. Default load is by CDN plus WebResource fallback /// </summary> /// <param name="url"> /// An optional explicit URL to load jQuery from. Url is resolved. /// When specified no fallback is applied /// </param> /// <returns>full script tag and fallback script for jQuery to load</returns> public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null) { string jQueryUrl = string.Empty; string fallbackScript = string.Empty; if (jQueryLoadMode == JQueryLoadModes.Default) jQueryLoadMode = ControlResources.jQueryLoadMode; if (!string.IsNullOrEmpty(url)) jQueryUrl = WebUtils.ResolveUrl(url); else if (jQueryLoadMode == JQueryLoadModes.WebResource) { Page page = new Page(); jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); } else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork) { jQueryUrl = ControlResources.jQueryCdnUrl; if (!string.IsNullOrEmpty(jQueryCdnUrl)) { // check if jquery loaded - if it didn't we're not online and use WebResource fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>"; fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl)); } } string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>"; // add in the CDN fallback script code if (!string.IsNullOrEmpty(fallbackScript)) output += "\r\n" + fallbackScript + "\r\n"; return output; } There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately. 
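For reference, WebUtils.ResolveUrl() itself isn't shown in the post; a minimal stand-in (my own sketch, not the actual Westwind implementation) that resolves application-relative URLs without needing a Page or Control could look like this:

    public static class WebUtils
    {
        // Minimal stand-in: resolve "~/" application-relative urls without a Page or Control.
        // Anything else (absolute, protocol-relative, already-resolved urls) is passed through as-is.
        // Note: query strings on "~/" urls are not handled in this sketch.
        public static string ResolveUrl(string originalUrl)
        {
            if (string.IsNullOrEmpty(originalUrl) || !originalUrl.StartsWith("~"))
                return originalUrl;

            // VirtualPathUtility takes care of the application root (e.g. /MyApp/)
            return System.Web.VirtualPathUtility.ToAbsolute(originalUrl);
        }
    }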
I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can maybe apply your own logic to determine where scripts come from and how they load. You could even automate this some more by using configuration settings, or by reading the locations/preferences out of some sort of data/metadata store that can be updated dynamically instead of requiring recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset. Resources: Google CDN for jQuery, Full ControlResources Source Code, ControlResource Documentation, Westwind.Web NuGet. This method is part of the Westwind.Web library of the West Wind Web Toolkit, or you can grab the Web library from NuGet and add it to your Visual Studio project. This package includes a host of Web related utilities and script support features. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery.

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
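To make that concrete, here's a minimal sketch of registering a Page route with MapPageRoute() and reading the route value on the page (the page name and route are illustrative; the "users/{userId}" template matches the example above):

    // Global.asax.cs (requires using System.Web.Routing) - no custom IRouteHandler needed for Page routes
    protected void Application_Start(object sender, EventArgs e)
    {
        RouteTable.Routes.MapPageRoute(
            "users",                // route name
            "users/{userId}",       // URL template
            "~/UserDetails.aspx");  // physical Web Form that serves the request (illustrative)
    }

    // In the code-behind of UserDetails.aspx the route value is available via Page.RouteData:
    protected void Page_Load(object sender, EventArgs e)
    {
        string userId = (string)RouteData.Values["userId"];
    }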
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect respond. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and Status code properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serverAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it here's what it looks like: public class PreWarmupProvider System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
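Circling back to the Output Cache provider model described above: a custom provider derives from System.Web.Caching.OutputCacheProvider and overrides four members. The skeleton below is an illustrative in-memory version (a real provider would typically target an out-of-process store), and it still needs to be registered under the <caching><outputCache> section in web.config:

    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
    {
        private class CacheItem
        {
            public object Entry;
            public DateTime UtcExpiry;
        }

        private static readonly ConcurrentDictionary<string, CacheItem> _items =
            new ConcurrentDictionary<string, CacheItem>();

        public override object Get(string key)
        {
            CacheItem item;
            if (_items.TryGetValue(key, out item) && item.UtcExpiry > DateTime.UtcNow)
                return item.Entry;
            return null;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Add must return the existing entry if one is already cached for this key
            object existing = Get(key);
            if (existing != null)
                return existing;

            Set(key, entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            _items[key] = new CacheItem { Entry = entry, UtcExpiry = utcExpiry };
        }

        public override void Remove(string key)
        {
            CacheItem removed;
            _items.TryRemove(key, out removed);
        }
    }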
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add on to make it easy building cross-platform connected mobile aps. Pusher allows quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk , Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built-into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from an Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MDSN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.   The Advantages of REST Methodology over SOAP Simpler API for CRUD ops Standardize Development methodology - consistent and intuitive Standards based à client interop Wide industry adoption, Ease of use à easy to add new devs Avoid service method signature blowout Smaller payloads than SOAP Stateless à no session data means multi-tenant scalability Cache-ability Testability   General RESTful API Design Overview · utilize HTTP Protocol - Usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and Mime Types · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers. · keep the API semantic, resource-centric – A RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC Hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. utilize Uri to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters} · entity CRUD operations are matched to HTTP methods: · Create - POST / PUT · Read – GET - cacheable · Update – PUT · Delete - DELETE · Use Uris to represent a hierarchies - Resources in RESTful URLs are often chained · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them). · Cache indication - Leverage HTTP headers to label cacheable content and indicate the permitted duration of cache · PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines they key. PUT takes a second param – the id. POST creates a new resource. The server assigns the URI for the new object and returns this URI as part of the response message. Note: The PUT method replaces the entire entity. That is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred DELETE deletes a resource at a specified URI – typically takes an id param · Leverage Common HTTP Response Codes in response headers 200 OK: Success 201 Created - Used on POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: Invalid Request. 401 Unauthorized: Authentication. 403 Forbidden: Authorization 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - For POST / PUT requests if the resource already exists. 500 Internal Server Error 503 Service Unavailable · Leverage uncommon HTTP Verbs to reduce payload sizes HEAD - retrieves just the resource meta-information. OPTIONS returns the actions supported for the specified resource. PATCH - partial modification of a resource. · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state. · Utilize Headers for content negotiation, caching, authorization, throttling o Content Negotiation – choose representation (e.g. JSON or XML and version), language & compression. 
Signal via RequestHeader.Accept & ResponseHeader.Content-Type Accept: application/json;version=1.0 Accept-Language: en-US Accept-Charset: UTF-8 Accept-Encoding: gzip o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time) o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ= o Rate Limiting - Not currently part of HTTP so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990 · HATEOAS Methodology - Hypermedia As The Engine Of Application State – leverage API as a state machine where resources are states and the transitions between states are links between resources and are included in their representation (hypermedia) – get API metadata signatures from the response Link header - in a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client. · error responses - Do not just send back a 200 OK with every response. Response should consist of HTTP error status code (JQuery has automated support for this), A human readable message , A Link to a meaningful state transition , & the original data payload that was problematic. · the URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuff all your calls into just four methods is not as crazy as it sounds. · Scoping - Path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm · Mapping URIs to Controllers - have one controller for each resource is not a rule – can consolidate - route requests to the appropriate controller and action method · Keep URls Consistent - Sometimes it’s tempting to just shorten our URIs. not recommend this as this can cause confusion · Join Naming – for m-m entity relations there may be multiple hierarchy traversal paths · Routing – useful level of indirection for versioning, server backend mocking in development ASPNET WebAPI Considerations ASPNET WebAPI implements a lot (but not all) RESTful API design considerations as part of its infrastructure and via its coding convention. Overview When developing an API there are basically three main steps: 1. Plan out your URIs 2. Setup return values and response codes for your URIs 3. Implement a framework for your API.   Design · Leverage Models MVC folder · Repositories – support IoC for tests, abstraction · Create DTO classes – a level of indirection decouples & allows swap out · Self links can be generated using the UrlHelper · Use IQueryable to support projections across the wire · Models can support restful navigation properties – ICollection<T> · async mechanism for long running ops - return a response with a ticket – the client can then poll or be pushed the final result later. · Design for testability - Test using HttpClient , JQuery ( $.getJSON , $.each) , fiddler, browser debug. Leverage IDependencyResolver – IoC wrapper for mocking · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab     Routing · HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.) · {id}, if present, is matched to a method parameter named id. 
    · Query parameters are matched to parameter names when possible.
    · Done in config via Routes.MapHttpRoute – similar to MVC routing.
    · Can alternatively:
    o decorate controller action methods with HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut, plus the ActionAttribute
    o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD
    o use NonActionAttribute to prevent a method from getting invoked as an action
    · Route table URIs can support placeholders (via curly braces {}) – these can support default values, constraints, and optional values.
    · The framework selects the first route in the route table that matches the URI.

    Response customization
    · Response code: by default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non-GET methods should return HttpResponseMessage.
    · Location: when the server creates a resource, it should include the URI of the new resource in the Location header of the response.
    public HttpResponseMessage PostProduct(Product item)
    {
        item = repository.Add(item);
        var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);
        string uri = Url.Link("DefaultApi", new { id = item.Id });
        response.Headers.Location = new Uri(uri);
        return response;
    }

    Validation
    · Decorate models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute.
    · Check payloads using ModelState.IsValid.
    · Under-posting – leaving values out of the JSON payload -> the JSON formatter assigns a default value. Use with RequiredAttribute.
    · Over-posting - if the model has read-only properties -> use a DTO instead of the model.
    · Can hook into the pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting.

    Config
    · Done in the App_Start folder > WebApiConfig.cs – static Register method with an HttpConfiguration param. The HttpConfiguration object contains the following members:
    DependencyResolver – enables dependency injection for controllers.
    Filters – action filters, e.g. exception filters.
    Formatters – media-type formatters; by default contains JsonFormatter, XmlFormatter.
    IncludeErrorDetailPolicy – specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages.
    Initializer – a function that performs final initialization of the HttpConfiguration.
    MessageHandlers – HTTP message handlers - plug into the pipeline.
    ParameterBindingRules – a collection of rules for binding parameters on controller actions.
    Properties – a generic property bag.
    Routes – the collection of routes.
    Services – the collection of services.
    · Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects

    Documentation generation
    · Create a help page for a web API by using the ApiExplorer class.
    · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection.
    · Create the help page as an MVC view:
    public ILookup<string, ApiDescription> GetApis()
    {
        return _explorer.ApiDescriptions.ToLookup(
            api => api.ActionDescriptor.ControllerDescriptor.ControllerName);
    }
    · Provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g.
    extract XML comments or define custom attributes to apply to the controller:
    [ApiDoc("Gets a product by ID.")]
    [ApiParameterDoc("id", "The ID of the product.")]
    public HttpResponseMessage Get(int id)
    · GlobalConfiguration.Configuration.Services – add the documentation provider.
    · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute.

    Plugging into the Message Handler pipeline
    · Plug into the request / response pipeline – derive from DelegatingHandler and override the SendAsync method – e.g. for logging error codes or adding a custom response header.
    · Can be applied globally or to a specific route.

    Exception Handling
    · Throw HttpResponseException on method failures – specify an HttpStatusCode enum value – examine this enum, as its values map well to typical op problems.
    · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on controllers or action methods, or add to the global HttpConfiguration.Filters collection.
    · The HttpError object provides a consistent way to return error information in the HttpResponseException response body.
    · For model validation, you can pass the model state to CreateErrorResponse to include the validation errors in the response:
    public HttpResponseMessage PostProduct(Product item)
    {
        if (!ModelState.IsValid)
        {
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);
        }
        // ...
    }

    Cookie Management
    · Cookie header in a request and Set-Cookie headers in a response - a collection of CookieState objects.
    · Specify Expiry, max-age: resp.Headers.AddCookies(new CookieHeaderValue[] { cookie });

    Internet Media Types, formatters and serialization
    · Defaults to application/json.
    · The request Accept header and response Content-Type header determine how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data.
    · Customizable formatters can be inserted into the pipeline.
    · POCO serialization is opt-out via JsonIgnoreAttribute, or use DataMemberAttribute for opt-in.
    · The JSON serializer leverages Newtonsoft Json.NET.
    · Loosely structured JSON objects are serialized as JObject, which derives from Dynamic.
    · To handle circular references in JSON: json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.All -> {"$ref":"1"}.
    · To preserve object references in XML: [DataContract(IsReference=true)]
    · Content negotiation headers:
    Accept: which media types are acceptable for the response, such as “application/json,” “application/xml,” or a custom media type such as "application/vnd.example+xml"
    Accept-Charset: which character sets are acceptable, such as UTF-8 or ISO 8859-1.
    Accept-Encoding: which content encodings are acceptable, such as gzip.
    Accept-Language: the preferred natural language, such as “en-us”.
    o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.)
    · Controller methods can take JSON representations of DTOs as params – auto-deserialization.
    · Typical jQuery GET request:
    function find() {
        var id = $('#prodId').val();
        $.getJSON("api/products/" + id,
            function (data) {
                var str = data.Name + ': $' + data.Price;
                $('#product').text(str);
            })
        .fail(
            function (jqXHR, textStatus, err) {
                $('#product').text('Error: ' + err);
            });
    }
    · Typical GET response:
    HTTP/1.1 200 OK
    Server: ASP.NET Development Server/10.0.0.0
    Date: Mon, 18 Jun 2012 04:30:33 GMT
    X-AspNet-Version: 4.0.30319
    Cache-Control: no-cache
    Pragma: no-cache
    Expires: -1
    Content-Type: application/json; charset=utf-8
    Content-Length: 175
    Connection: Close
    [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer","Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost":2.05}]

    True OData support
    · Leverage the query options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable] attribute.
    [Queryable]
    public IQueryable<Supplier> GetSuppliers()
    · Query: ~/Suppliers?$filter=Name eq ‘Microsoft’
    · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)
    · Will pass the result to the formatter.
    · True support for the OData format is still limited - no support for creates, updates, deletes, $metadata, code generation etc.
    · vNext: ability to configure how EditLinks, SelfLinks and Ids are generated.

    Self Hosting
    No dependency on ASP.NET or IIS:
    using (var server = new HttpSelfHostServer(config))
    {
        server.OpenAsync().Wait();

    Tracing
    · Traceability tools, metrics – e.g. send to Nagios.
    · Use your choice of tracing/logging library, whether that is ETW, NLog, log4net, or simply System.Diagnostics.Trace.
    · To collect traces, implement the ITraceWriter interface:
    public class SimpleTracer : ITraceWriter
    {
        public void Trace(HttpRequestMessage request, string category, TraceLevel level,
            Action<TraceRecord> traceAction)
        {
            TraceRecord rec = new TraceRecord(request, category, level);
            traceAction(rec);
            WriteTrace(rec);
    · Register the service with config.
    · Programmatically trace – has helper extension methods: Configuration.Services.GetTraceWriter().Info(
    · Performance tracing - the pipeline writes traces at the beginning and end of an operation - the TraceRecord class includes a TimeStamp property and a Kind property set to TraceKind.Begin / End.

    Security
    · Roles class methods: RoleExists, AddUserToRole
    · WebSecurity class methods: UserExists, CreateUserAndAccount
    · Request.IsAuthenticated
    · Leverage the HTTP 401 (Unauthorized) response.
    · [AuthorizeAttribute(Roles="Administrator")] – can be applied to a controller or its action methods.
    · See the section in the WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS -> the Web API host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer.
    http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/
    · Use the MVC membership provider infrastructure and add a DelegatingHandler child class to the Web API pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions.
    · Then use AuthorizeAttribute on controllers and methods for role mapping - http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/
    · An alternate option is to rely on the MVC app: http://forums.asp.net/t/1831767.aspx/1

    Read the article

  • 301 redirect root domain to www subdomain on godaddy windows hosting account

    - by Greg
    If I type in domain.com and www.domain.com, they both show the same website but show different URLs in the address bar. I'd like visitors and search engines that request just "domain.com" to be redirected to "www.domain.com". I'm using IIS 7 on a GoDaddy hosting account. How do I redirect all requests for "domain.com" to "www.domain.com"? I have the default DNS setup: "domain.com" is my A record, and the CNAME "www" points to the A record.
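    One way to do this on IIS 7 is a canonical host name rule in web.config, assuming the URL Rewrite module is available on the GoDaddy plan. The rule below is an illustrative sketch (domain.com is a placeholder), not something tested on their shared hosting:

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to www" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^domain\.com$" />
            </conditions>
            <!-- 301 so visitors and search engines settle on www.domain.com as canonical -->
            <action type="Redirect" url="http://www.domain.com/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>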

    Read the article

  • C# RegEx - find html tags (div and anchor)

    - by czesio
    Hi I have to retrieve several div section (of specific class name "row ") with it's content, and additionally find all anchor tags (link urls) (with class "underline red bold"). Shortly speaing : get section of: ... (divs, tags ...) and collections of urls string[] urls = {"/searchClickThru? pid=prod56534895&q=&rpos=109181&rpp=10&_dyncharset=UTF-8&sort=&url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"} the entire page looks like that: <html> ... a lot of stuff <div class="row "> <div class="photo"> <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"> <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f0607827.jpg"> </a> </div> <div class="desc"> <div class="l1"> <div class="icons"> </div> <table cellspacing="0" cellpadding="0" border="0"> <tbody> <tr> <td> <div class="fleft"> <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"> Culture And Gender <br>Intimate Relation</a> </div> <div class="fleft"> </div> </td> </tr> </tbody> </table> </div> <div class="l2"> <div> </div> <div> <div class="but"> </div> </div> </div> <div class="l3"> Long description <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"> more<img alt="" src="/b/img/arr_red_sm.gif"> </a> </div> </div> </div> <div class="omit"></div> <div class="row "> <div class="photo"> <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534899,p"> <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f06078222.jpg"> </a> </div> <div class="desc"> <div class="l1"> <div class="icons"> </div> <table cellspacing="0" cellpadding="0" border="0"> <tbody> <tr> <td> <div class="fleft"> <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod5653489225,p"> Culture And Gender <br>Intimate Relation</a> </div> <div class="fleft"> </div> </td> </tr> </tbody> </table> </div> <div class="l2"> <div> </div> <div> <div class="but"> </div> </div> </div> <div class="l3"> Long description <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"> more<img alt="" src="/b/img/arr_red_sm.gif"> </a> </div> </div> </div> Can anybody help me to create suitable reg ex?

    Read the article

  • Django doesn't refresh my request object when reloading the current page.

    - by Boris Rusev
    I have a Django web site which I want ot be viewable in different languages. Until this morning everything was working fine. Here is the deal. I go to my say About Us page and it is in English. Below it there is the change language button and when I press it everything "magically" translates to Bulgarian just the way I want it. On the other hand I have a JS menu from which the user is able to browse through the products. I click on 'T-Shirt' then a sub-menu opens bellow the previously pressed containing different categories - Men, Women, Children. The link guides me to a page where the exact clothes I have requested are listed. BUT... When I try to change the language THEN, nothing happens. I go to the Abouts Page, change the language from there, return to the clothes catalog and the language is changed... I will no paste some code. This is my change button code: function changeLanguage() { if (getCookie('language') == 'EN') { setCookie("language", 'BG'); } else { setCookie("language", 'EN'); } window.location.reload(); } These are my URL patterns: urlpatterns = patterns('', # Example: # (r'^enter_clothing/', include('enter_clothing.foo.urls')), # Uncomment the admin/doc line below and add 'django.contrib.admindocs' # to INSTALLED_APPS to enable admin documentation: # (r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: (r'^site_media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/home/boris/Projects/enter_clothing/templates/media', 'show_indexes': True}), (r'^$', 'enter_clothing.clothes_app.views.index'), (r'^home', 'enter_clothing.clothes_app.views.home'), (r'^products', 'enter_clothing.clothes_app.views.products'), (r'^orders', 'enter_clothing.clothes_app.views.orders'), (r'^aboutUs', 'enter_clothing.clothes_app.views.aboutUs'), (r'^contactUs', 'enter_clothing.clothes_app.views.contactUs'), (r'^admin/', include(admin.site.urls)), (r'^(\w+)/(\w+)/page=(\d+)', 'enter_clothing.clothes_app.views.displayClothes'), ) My About Us page: @base def aboutUs(request): return """<b>%s</b>""" % getTranslation("About Us Text", request.COOKIES['language']) The @base method: def base(myfunc): def inner_func(*args, **kwargs): try: args[0].COOKIES['language'] except: args[0].COOKIES['language'] = 'BG' resetGlobalVariables() initCollections(args[0]) categoriesByCollection = dict((collection, getCategoriesFromCollection(args[0], collection)) for collection in collections) if args[0].COOKIES['language'] == 'BG': for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (translateCategory(args[0], x), translateCollection(args[0], k), str(x)), v), "") else: for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (str(x), str(k), str(x)), v), "") contents = myfunc(*args, **kwargs) return render_to_response('index.html', {'title': title, 'categoriesByCollection': categoriesByCollection.iteritems(), 'keys': enumerate(keys), 'values': enumerate(values), 'contents': contents, 'btnHome':getTranslation("Home Button", args[0].COOKIES['language']), 'btnProducts':getTranslation("Products Button", args[0].COOKIES['language']), 'btnOrders':getTranslation("Orders Button", args[0].COOKIES['language']), 'btnAboutUs':getTranslation("About Us Button", args[0].COOKIES['language']), 'btnContacts':getTranslation("Contact Us Button", 
args[0].COOKIES['language']), 'btnChangeLanguage':getTranslation("Button Change Language", args[0].COOKIES['language'])}) return inner_func And the catalog page: @base def displayClothes(request, category, collection, page): clothesToDisplay = getClothesFromCollectionAndCategory(request, category, collection) contents = "" pageCount = len(clothesToDisplay) / ( rowCount * columnCount) + 1 matrixSize = rowCount * columnCount currentPage = str(page).replace("page=", "") currentPage = int(currentPage) - 1 #raise Exception(request) # this is for the clothes layout for x in range(currentPage * matrixSize, matrixSize * (currentPage + 1)): if x < len(clothesToDisplay): if request.COOKIES['language'] == 'EN': contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getEnglishHTML() else: contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getBulgarianHTML() if (x + 1) % columnCount == 0: contents += """<div class="clear"></div>""" contents += """<div class="clear"></div>""" # this is for the page links if pageCount > 1: for x in range(0, pageCount): if x == currentPage: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: black;">%s</span></a>""" % (category, collection, x + 1, x + 1) else: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: blue;">%s</span></a>""" % (category, collection, x + 1, x + 1) return """%s""" % (contents) Let me explain that you needn't be alarmed by the large quantities of code I have posted. You don't have to understand it or even look at all of it. I've published it just in case because I really can't understand the origins of the bug. Now this is how I have narrowed the problem. I am debuging with "raise Exception(request)" every time I want to know what's inside my request object. When I place this in my aboutUs method, the language cookie value changes every time I press the language button. But NOT when I am in the displayClothes method. There the language stays the same. Also I tried putting the exception line in the beginning of the @base method. It turns out the situation there is exactly the same. When I am in my About Us page and click on the button, the language in my request object changes, but when I press the button while in the catalog page it remains unchanged. That is all I could find, and I have no idea as to how Django distinguishes my pages and in what way. P.S. The JavaScript I think works perfectly, I have tested it in multiple ways. Thank you, I hope some of you will read this enormous post, and don't hesitate to ask for more code excerpts.
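    One explanation that fits these symptoms exactly: document.cookie defaults to the path of the page that set it, so a language cookie written while on /aboutUs may only be sent back for that path and never reach /T-Shirt/Men/page=1. If the setCookie helper doesn't pass a path, forcing it to the site root is worth a try. This is only a sketch of the idea, since the real setCookie helper isn't shown here:

    function setCookie(name, value) {
        // path=/ makes the cookie visible to every URL, not just the page that set it
        document.cookie = name + "=" + value + "; path=/";
    }

    If the cookie still looks stale only on the catalog pages after that, the next suspect is the browser caching those GET responses; sending Cache-Control: no-cache (or Vary: Cookie) from displayClothes would rule that out.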

    Read the article

  • Having problems install py2app 0.5.2

    - by Francis Young
    Hi there, I am a beginner at Python, so please excuse any silly comments or rookie mistakes that I make. I was trying to install py2app 0.5.2 and I hit an error:
    $Best match: altgraph 0.7.1
    $Downloading http://pypi.python.org/packages/source/a/altgraph/altgraph-$0.7.1.tar.gz#md5=f65988bf153410a8514bcdad6a3a8ba6
    $Processing altgraph-0.7.1.tar.gz
    $Running altgraph-0.7.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-GGBuKJ/altgraph-$\0.7.1/egg-dist-tmp-NdWVjC
    $error: doc/changelog.rst: No such file or directory
    I was wondering what the solution to this problem is?
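    The failure in that log is inside the altgraph 0.7.1 source package that easy_install pulls in, not in py2app itself. One workaround to try (untested here, and assuming a newer altgraph release exists on PyPI) is to install altgraph separately first, then re-run the py2app install:

    $ sudo easy_install -U altgraph      # fetch the newest altgraph instead of the broken 0.7.1 tarball
    $ sudo python setup.py install       # retry the py2app installation

    If easy_install still picks 0.7.1, pinning a version explicitly (for example easy_install "altgraph>0.7.1") is the usual next step.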

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using django, nginx and apache. When I access my site with a URL (e.g., http://www.foo.com/) what appears in my browser address is the IP address with admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by django's urls.py (e.g., http://123.45.67.890/ - http://123.45.67.890/accounts/login/?next=/) I would like to have the name URL act the same way as the IP. That is, if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files: ; cpa.com (apache) NameVirtualHost *:8080 <VirtualHost *:8080> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm DocumentRoot /path/to/root ServerName www.foo.com <IfModule mod_rpaf.c> RPAFenable On RPAFsethostname On RPAFproxy_ips 127.0.0.1 </IfModule> <Directory /public/static> AllowOverride None AddHandler mod_python .py PythonHandler mod_python.publisher </Directory> Alias / /dj <Location /> SetHandler python-program PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE dj.settings PythonDebug On </Location> </VirtualHost> ; ; ports.conf (apache) Listen 127.0.0.1:8080 ; ; cpa.conf (nginx) server { listen 80; server_name www.foo.com; location /static { root /var/public; index index.html; } location /cpa/js { root /var/public/js; } location /cpa/css { root /var/public/css; } location /djmedia { alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/"; } location / { include /etc/nginx/proxy.conf; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080; } } ; ; proxy.conf (nginx) proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 500; proxy_buffers 32 4k;

    Read the article

  • How to Install Python Modules on Web Server?

    - by sanghan
    I'm running a Python CGI script in the cgi-bin directory which uses the sqlite3 module. When I run it, it says that it does not recognize the name. So how do I install this module, or other modules, on a server hosted by Network Solutions? The Python documentation has this: python setup.py install --home=<dir> but I have no idea where or how I would run that line. Any help would be appreciated.
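    For a shared host where you may only have SSH (or just the ability to upload files), the usual pattern is to install into a directory under your home and point the script at it. A rough sketch, assuming SSH access and that ~/python is writable (the paths are illustrative):

    $ cd <module-source-directory>
    $ python setup.py install --home=$HOME/python    # pure-Python packages land in ~/python/lib/python

    Then, at the top of the CGI script, before the failing import:

    import sys, os
    # make the privately installed packages importable
    sys.path.insert(0, os.path.expanduser("~/python/lib/python"))
    import sqlite3

    Note that sqlite3 is part of the standard library from Python 2.5 onward, so if the host's Python is older than that, the module to install is pysqlite2 instead.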

    Read the article

  • Nginx: Can I cache a URL matching a pattern at a different URL?

    - by Josh French
    I have a site with some URLs that look like this: /prefix/ID, where /prefix is static and ID is unique. Using Nginx as a reverse proxy, I'd like to cache these pages at the /ID portion only, omitting the prefix. Can I configure Nginx so that a request for the original URL is cached at the shortened URL? I tried this (I'm omitting some irrelevant parts) but obviously it's not the correct solution: http { map $request_uri $page_id { default $request_uri; ~^/prefix/(?<id>.+)$ $id; } location / { proxy_cache_key $page_id } }
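    A map like this can feed proxy_cache_key, but the key only takes effect when a cache zone is active in the matching location. A rough sketch of the idea follows; the directive names are real, while the zone name, cache path and upstream are invented and untested against your config:

    http {
        proxy_cache_path /var/cache/nginx keys_zone=pages:10m;

        map $request_uri $page_id {
            default $request_uri;
            ~^/prefix/(?<id>.+)$ $id;
        }

        server {
            location / {
                proxy_pass http://backend;
                proxy_cache pages;           # activate the zone in this location
                proxy_cache_key $page_id;    # store /prefix/ID under just ID
            }
        }
    }

    Keep in mind this changes only the cache key, not the URL the client sees, and any two URLs that map to the same ID will serve the same cached body.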

    Read the article

  • How Do I Cache Just the Homepage with Apache .htaccess?

    - by Volomike
    This config is close... <FilesMatch "\.(php)$"> Header set Cache-Control "max-age=7200, must-revalidate" </FilesMatch> ...but it does all php pages, not just the home page like I want. Basically the developer said he wants example.com to be cached, while: http://example.com/electronics/ would not be cached. Note the developer is using pretty URLs with an MVC framework that runs everything through index.php.
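    One way to scope it, assuming mod_setenvif and mod_headers are loaded, is to flag the bare "/" request and attach the header only when that flag is present. Sketch only; depending on how the framework's rewrite to index.php runs, the variable can resurface as REDIRECT_HOMEPAGE, which is why both spellings are listed:

    # mark the bare homepage request
    SetEnvIf Request_URI "^/$" HOMEPAGE
    # cache only when the flag is set
    Header set Cache-Control "max-age=7200, must-revalidate" env=HOMEPAGE
    Header set Cache-Control "max-age=7200, must-revalidate" env=REDIRECT_HOMEPAGE

    Every other URL, including /electronics/, keeps whatever Cache-Control the application sends.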

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    Running very high volume traffic on these servers configured with django, gunicorn, supervisor and nginx. But a lot of times I tend to see 502 errors. So I checked the nginx logs to see what error and this is what is recorded: [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream Can anyone help debug what might be causing this to happen? This is our nginx configuration: sendfile on; tcp_nopush on; tcp_nodelay off; listen 80 default_server; server_name imp.ourapp.com; access_log /mnt/ebs/nginx-log/ourapp-access.log; error_log /mnt/ebs/nginx-log/ourapp-error.log; charset utf-8; keepalive_timeout 60; client_max_body_size 8m; gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json; location / { proxy_pass http://unix:/tmp/gunicorn-ourapp.socket; proxy_pass_request_headers on; proxy_read_timeout 600s; proxy_connect_timeout 600s; proxy_redirect http://localhost/ http://imp.ourapp.com/; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-Forwarded-Proto $my_scheme; #proxy_set_header X-Forwarded-Ssl $my_ssl; } We have configure Django to run in Gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers: home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application This is what the gunicorn.conf.py looks like: import multiprocessing bind = 'unix:/tmp/gunicorn-ourapp.socket' workers = multiprocessing.cpu_count() * 3 + 1 timeout = 600 graceful_timeout = 40 Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 59481 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 50000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
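    A connect() on a unix socket failing with EAGAIN usually means the socket's listen queue is full: all workers are busy (the 600 second timeouts let slow requests pin them for a long time) and the backlog overflows, so nginx gives up and returns 502. Two knobs worth experimenting with in gunicorn.conf.py, offered as a sketch rather than a recommendation, with placeholder numbers to tune under your own load:

    import multiprocessing

    bind = 'unix:/tmp/gunicorn-ourapp.socket'
    workers = multiprocessing.cpu_count() * 3 + 1
    backlog = 4096         # let more pending connections queue before connect() starts failing
    timeout = 60           # fail a stuck request instead of pinning a worker for 10 minutes
    graceful_timeout = 40

    The kernel caps the effective backlog at net.core.somaxconn, so a larger value only helps if that sysctl is raised too, and checking whether the workers are genuinely saturated (CPU, database waits) is the more durable fix.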

    Read the article

  • How do I create multiple instances of Certificate Server on the same Windows installation?

    - by makerofthings7
    The following URLs describe a new feature of Windows Certificate Server: the ability to install multiple instances on the same server (see the end of the "transcript" link; it's a zip file).
    http://www.digitalsupporttech.com/mskb/896/896733_TechNet_Support_WebCast:_Best_Practices_for_Public_Key_Infrastructure:_Steps_to_build_an_offline_root_certification_authority_%28part_1_of_2%29.htm
    Quote: "Multiple Certificate Server instances on a single physical server"
    http://winintro.ru/certsvr.en/html/cf5622e1-daa9-42cc-8b43-14953e34f8b6.htm
    Quote: "Multiple instances of the Certificate Enrollment Web Service can be installed on a single computer in order to support multiple CAs."
    Question: How can I actually implement multiple CA instances on a Windows 2008 R2 server?

    Read the article

  • apache mod_cache in v2.2 - enable cache based on url

    - by Janning
    We are using apache2.2 as a front-end server with application servers as reverse proxies behind apache. We are using mod_cache for some images and enabled it like this: <IfModule mod_disk_cache.c> CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache CacheIgnoreCacheControl On CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The image urls vary completely and have no common start pattern, but they all end in ".png". Thats why we used the root in CacheEnable / If not served from the cache, the request is forwarded to an application server via reverse proxy. So far so good, cache is working fine. But I really only need to cache all image request ending in ".png". My above configuration still works as my application server send an appropriate Cache-Control: no-cache header on the way back to apache. So most pages send a no-cache header back and they get not cached at all. My ".png" responses doesn't send a Cache-Control header so apache is only going to cache all urls with ".png". Fine. But when a new request enters apache, apache does not know that only .png requests should be considered, so every request is checking a file on disk (recorded with strace -e trace=file -p pid): [pid 19063] open("/var/cache/apache2/mod_disk_cache/zK/q8/Kd/g6OIv@woJRC_ba_A.header", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) I don't want to have apache going to disk every request, as the majority of requests are not cached at all. And we have up to 10.000 request/s at peak time. Sometimes our read IO wait spikes. It is not getting really slow, but we try to tweak it for better performance. In apache 2.4 you can say: <LocationMatch .png$> CacheEnable disk </LocationMatch> This is not possible in 2.2 and as I see no backports for debian I am not going to upgrade. So I tried to tweak apache2.2 to follow my rules: <IfModule mod_disk_cache.c> SetEnvIf Request_URI "\.png$" image RequestHeader unset Cache-Control RequestHeader append Cache-Control no-cache env=!image CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache #CacheIgnoreCacheControl on CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The idea is to let apache decide to serve request from cache based on Cache-control header (CacheIgnoreCacheControl default to off). And before simply set a RequestHeader based on the request. If it is not an image request, set a Cache-control header, so it should bypass the cache at all. This does not work, I guess because of late processing of RequestHeader directive, see https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early I can't add early processing as "early" keyword can't be used together with a conditional "env=!image" I can't change the url requesting the images and I know there are of course other solutions. But I am only interested in configuring apache2.2 to reach my goal. Does anybody has an idea how to achieve my goal?

    Read the article

  • re-use '~/.profile` for Fish?

    - by Albert
    (I'm talking about the shell Fish, esp. Fish's Fish.) For Bash/ZSH, I had ~/.profile with some exports, aliases and other stuff. I don't want to have a separate config for environment variables for Fish, I want to re-use my ~/.profile. How? In The FAQ, it is stated that I can at least import those via /usr/local/share/fish/tools/import_bash_settings.py, however I don't really like running that for each Fish-instance.
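    If ~/.profile is mostly plain variable exports, one trick is to let bash evaluate it once and copy the resulting environment into fish from config.fish. This is only a sketch: it assumes single-line values, it won't carry over aliases or shell functions, and variables fish manages itself (such as PATH) may need to be skipped:

    # ~/.config/fish/config.fish
    bash -c 'source ~/.profile >/dev/null 2>&1; env' | while read -l line
        set -l name  (echo $line | cut -d = -f 1)
        set -l value (echo $line | cut -d = -f 2-)
        set -gx $name $value
    end

    Aliases still need to be redefined as fish functions, since fish's syntax is not POSIX sh.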

    Read the article

  • .htaccess working on remote server but does not work on localhost. Getting 404 errors on localhost

    - by Afsheen Khosravian
    MY PROBLEM: When I visit localhost the site does not work. It shows some text from the site but it seems the server can not locate any other files. Here is a snippet of the errors from firebug: "NetworkError: 404 Not Found - localhost/css/popup.css" "NetworkError: 404 Not Found - localhost/css/style.css" "NetworkError: 404 Not Found - localhost/css/player.css" "NetworkError: 404 Not Found - localhost/css/ui-lightness/jquery-ui-1.8.11.custom.css" "NetworkError: 404 Not Found - localhost/js/jquery.js" It seems my server is looking for the files in the wrong places. For example, localhost/css/popup.css is actually located at localhost/app/webroot/css/popup.css. I have my site setup on a remote server with the same exact configurations and it works perfectly fine. I am just having this issue trying to run the site on my laptop at localhost. I edited my VirtualHosts file DocumentRoot and to /home/user/public_html/site.com/public/app/webroot/ and this reduces some errors but I feel that this is wrong and sort of hacking it since I didn't use these setting on my production server which works. The last note I want to make is that the website uses dynamic URLs. I dont know if that has anything to do with it. For example, on the production server the URLS are: site.com/#hello/12321. HERES WHAT I AM WORKING WITH: I have a LAMP server setup on my laptop which runs on Ubuntu 11.10. I have enabled mod_rewrite: sudo a2enmod rewrite Then I edited my Virtual Hosts file: <VirtualHost *:80> ServerName localhost DirectoryIndex index.php DocumentRoot /home/user/public_html/site.com/public <Directory /home/user/public_html/site.com/public/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> Then I restarted apache. My website is using cakePHP. This is the directory structure of the website: "/home/user/public_html/site.com/public" contains: index.php app cake plugins vendors These are my .htaccess files: /home/user/public_html/site.com/public/app/.htaccess: <IfModule mod_rewrite.c> RewriteEngine on RewriteRule ^$ webroot/ [L] RewriteRule (.*) webroot/$1 [L] </IfModule> /home/user/public_html/site.com/public/app/webroot/.htaccess: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?url=$1 [QSA,L] </IfModule>
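    One thing worth checking against a stock CakePHP layout: there are normally three .htaccess files, and the one at the top of the document root (here /home/user/public_html/site.com/public/.htaccess) is what forwards /css/..., /js/... and similar paths into app/webroot/. If that file did not survive the copy to the laptop, the symptoms would look exactly like this. Its standard contents are roughly the following (verify against the CakePHP version in use rather than taking this sketch verbatim):

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule    ^$    app/webroot/     [L]
        RewriteRule    (.*)  app/webroot/$1   [L]
    </IfModule>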

    Read the article

  • install pymedia and python audio tools

    - by aaron
    I noticed a pattern of errors while trying to install PyMedia and Python Audio Tools. For both modules I run the following: $ python setup.py install Then I get a series of compilation errors, and then this: lipo: can't figure out the architecture type of: /var/folders/Kx/Kxxj4868HGi6VMhZLPyZN++++TI/-Tmp-//cch1y9AO.out error: command '/usr/bin/gcc-4.2' failed with exit status 1 I'm running Mac OS X 10.5, and this happens whether I'm using gcc-4.0 or gcc-4.2, Mac-Python 2.5 or 2.6, and MacPorts-Python 2.6. What's going on?
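    That lipo complaint usually comes from the universal (multi-architecture) build tripping over one of the C extensions rather than from anything specific to PyMedia or Python Audio Tools. A common workaround on 10.5, offered only as something to try, is to force distutils to build for a single architecture:

    $ export ARCHFLAGS="-arch i386"     # or "-arch x86_64" / "-arch ppc", matching your Python binary
    $ python setup.py install

    If the build then stops with an ordinary compiler error instead, that error is the real one to chase.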

    Read the article

  • Force view text file instead of download in Firefox?

    - by davr
    Often times I'll click on a random link to a .sh or .py or .cpp or ... file in Firefox, and all I want is to view the code. I don't have a Firefox handler set up for every text file extension under the sun, and I don't want to have to. Is there an easy way to force Firefox to view the file as text instead of trying to save (or open in external app)?

    Read the article

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up squid. It's running a redirect script that does some text search/replace on html pages, and then saves them to a location on the same machine on the nginx path - then issues the redirect to that URL (it's an art project :D). The relevant lines in squid.conf are http_port 3128 transparent redirect_program /etc/squid/jefferson_redirect.py The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/ The issue: I'm getting strange http redirect behavior. For example, here is the normal request/response from a PHP script that issues a header("Location:"); - a 302 redirect: http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive HTTP/1.1 302 Found Date: Tue, 13 Apr 2010 05:15:43 GMT Server: Apache X-Powered-By: PHP/5.2.11 Location: http://www.google.com/search?q=yreka Content-Type: text/html Vary: User-Agent,Accept-Encoding Content-Encoding: gzip Content-Length: 2108 Keep-Alive: timeout=3, max=100 Connection: Keep-Alive Here's what it looks like when running through the squid proxy (note that "redirector.mysite.com" is not the site running squid or nginx): http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Proxy-Connection: keep-alive If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT HTTP/1.0 200 OK Server: nginx/0.7.62 Date: Tue, 13 Apr 2010 05:21:10 GMT Content-Type: text/html Content-Length: 17865 Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT Accept-Ranges: bytes X-Cache: MISS from jefferson X-Cache-Lookup: HIT from jefferson:3128 Via: 1.1 jefferson:3128 (squid/2.7.STABLE6) Connection: keep-alive Proxy-Connection: keep-alive It is basically working - but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged, while displaying the google page (mostly broken as it's using URLs relative to redirector.mysite.com) I've experienced a similar thing with google results pages: when clicking to another page from google, I get a google URL, with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
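    For what it's worth, the behaviour described (original URL stays in the address bar while the rewritten page's content is served) is what Squid does when a redirector returns a bare URL: that is a silent rewrite, not an HTTP redirect. To make the client actually move to the new URL, the redirector's output can be prefixed with a status code. A minimal sketch of that pattern follows; modify_url stands in for whatever jefferson_redirect.py already does, and the exact no-change convention (blank line vs. echoing the URL) should be checked against the Squid 2.7 docs:

    import sys

    def modify_url(url):
        # placeholder for the existing search/replace and save-to-nginx logic
        return url

    for line in sys.stdin:
        url = line.split(' ')[0]                        # squid sends: URL ip/fqdn ident method ...
        new_url = modify_url(url)
        if new_url != url:
            sys.stdout.write('302:' + new_url + '\n')   # ask the client to redirect
        else:
            sys.stdout.write('\n')                      # no change for this request
        sys.stdout.flush()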

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >