Search Results

Search found 15691 results on 628 pages for 'browser caching'.


  • How do I keep the keyword.url setting in Firefox at its default when restarting the browser, without deleting prefs.js?

    - by user34801
    I am on the latest version of Firefox (not a beta or anything like that), and my keyword.url is currently stuck on search.google.com, which I don't remember setting (even though about:config says it's a user-set value). Can someone tell me how to set it back to the default and keep it at the default when I restart my browser? I do not want to delete prefs.js, because I don't want to go through setting up all my extension settings again just to make my location bar search Google (if that is the only way, then I'll stick with searching from the search bar instead). I've checked all my extensions that might affect the location bar, but could not find anything that says it would change the default search engine. I've also tried to open prefs.js in WordPad and Notepad, but they just freeze when I try to edit it at all (yes, the browser was closed at the time). I also deleted the older prefs-1.js (along with two others) after trying to rename those to prefs.js to see if that would correct it. It might have, but those files had such old extension settings that I went back to my latest prefs.js with this one issue, instead of the issue of setting a ton of extensions back up. I can give any other info if needed; someone please help me fix this issue if possible.
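    For reference, a minimal sketch of what the relevant line in prefs.js looks like (the search URL shown is illustrative, not necessarily what is in your file). Removing such a user_pref line while the browser is closed is what resets a preference to its default:

        // in prefs.js - deleting this one line resets keyword.URL to its default
        user_pref("keyword.URL", "http://search.google.com/search?q=");

    Alternatively, about:config lets you right-click the keyword.URL entry and choose Reset, without touching prefs.js at all.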

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is: corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server). We don't really understand how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deploying this (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)? NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

    Read the article

  • In Stud, which Private RSA Key should be concatenated in the x509 SSL certificate pem file to avoid "self-signed" browser warning?

    - by Aaron
    I'm trying to implement Stud as an SSL termination point in front of HAProxy as a proof of concept for WebSockets routing. My domain registrar, Gandi.net, offers free one-year SSL certs. Through OpenSSL, I generated a CSR, which gave me two files: domain.key and domain.csr. I gave domain.csr to my trusted authority and they gave me two files back: domain.cert and GandiStandardSSLCA.pem (I think the latter is referred to as the intermediate cert?). This is where I encountered friction: Stud, which uses OpenSSL, expects there to be an "RSA private key" in the "pem-file", which it describes as "SSL x509 certificate file. REQUIRED." If I add domain.key to the bottom of Stud's pem-file, Stud will start, but I receive the browser warning saying "The certificate is self-signed." If I omit domain.key, Stud will not start and throws an error triggered by an OpenSSL function that appears intended to determine whether or not my "pem-file" contains an RSA private key. At this point I cannot determine whether the problem is: a free SSL cert will always be self-signed and will always cause the browser to present a warning; I'm just not using Stud correctly; I'm using the wrong RSA private key; or the CA domain cert, the intermediate cert, and the private key are in the wrong order.
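    One framing note: a "self-signed" warning is often a symptom of the intermediate certificate being missing from the served chain rather than anything about the key itself. A hedged sketch of the commonly suggested layout for a combined PEM, using the file names above (the exact order stud expects is an assumption worth verifying against its documentation):

        # sketch: server cert first, then the intermediate, then the private key
        cat domain.cert GandiStandardSSLCA.pem domain.key > stud.pem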

    Read the article

  • ASP.NET Web Forms Extensibility: Control Adapters

    - by Ricardo Peres
    All ASP.NET controls from version 2.0 onwards can be associated with a control adapter. A control adapter is a class that inherits from ControlAdapter, and it gets the chance to interact with the control(s) it is targeting so as to change some of their properties or alter their output. I have talked about control adapters before, and they really are a cool feature. The ControlAdapter class exposes virtual methods for some well-known lifecycle events - OnInit, OnLoad, OnPreRender and OnUnload - that closely match their Control counterparts, but are fired before them. Because the control adapter has a reference to its target Control, it can cast it to its concrete class and do something with it before its lifecycle events are actually fired. The adapter is also notified before the control is rendered (BeginRender), after its children are rendered (RenderChildren) and after the control itself is rendered (Render): this way the adapter can modify the control's output. Control adapters may be specified for any class inheriting from Control, including abstract classes, web server controls and even pages. You can, for example, specify a control adapter for the WebControl and UserControl classes but, curiously, not for Control itself. When specifying a control adapter for a page, it must inherit from PageAdapter instead of ControlAdapter. The adapter for a control, if specified, can be found on the protected Adapter property, and for a page, on the PageAdapter property. The first use of control adapters that came to my attention was for changing the output of standard ASP.NET web controls so that they were based more on CSS and less on HTML tables: the CSS Friendly Control Adapters project, now available at http://code.google.com/p/aspnetcontroladapters/. They are interesting because you specify them in one location and they apply anywhere a control of the target type is created. Mind you, this applies to controls declared in markup as well as controls created in code with the new operator. So, how do you use control adapters? The most usual way is through a browser definition file, in which you specify a set of control adapters and their target controls for a given browser. This browser definition file is an XML file with the extension .browser, and can either be global (%WINDIR%\Microsoft.NET\Framework64\vXXXX\Config\Browsers) or local to the web application, in which case it must be placed inside the App_Browsers folder at the root of the web site. It looks like this:

        <browsers>
            <browser refID="Default">
                <controlAdapters>
                    <adapter controlType="System.Web.UI.WebControls.TextBox" adapterType="MyNamespace.TextBoxAdapter, MyAssembly" />
                </controlAdapters>
            </browser>
        </browsers>

    A browser definition file targets a specific browser, so you can have different definitions for Chrome, IE, Firefox and Opera, as well as for specific versions of each of those (like IE8 or Firefox3). Alternatively, if you set the target to Default, it will apply to all of them. The reason to pick a specific browser and version might be, for example, to circumvent some limitation present in that specific version, so that in markup you don't need to be concerned with it. Another option is through the current Browser object of the request:

        this.Context.Request.Browser.Adapters.Add(typeof(TextBox).FullName, typeof(TextBoxAdapter).FullName);

    This must happen very early in the page lifecycle, for example, in the OnPreInit event, or even in Application_Start.
    You have to specify the full class name for both the target control and the adapter. Of course, you have to do this for every request, because it won't be persisted. As an example, you may know that the classic TextBox control renders an HTML input tag if its TextMode is set to SingleLine, and a textarea if set to MultiLine. Because the textarea has no notion of maximum length, unlike the input, something must be done in order to enforce this. Here's a simple suggestion:

        public class TextBoxControlAdapter : ControlAdapter
        {
            protected TextBox Target
            {
                get { return (this.Control as TextBox); }
            }

            protected override void OnLoad(EventArgs e)
            {
                if ((this.Target.MaxLength > 0) && (this.Target.TextMode == TextBoxMode.MultiLine))
                {
                    // register the client-side length check once per page
                    if (this.Target.Page.ClientScript.IsClientScriptBlockRegistered(this.Target.Page.GetType(), "TextBox_KeyUp") == false)
                    {
                        String script = String.Concat("function TextBox_KeyUp(sender) { if (sender.value.length > ", this.Target.MaxLength, ") { sender.value = sender.value.substr(0, ", this.Target.MaxLength, "); } }\n");

                        this.Target.Page.ClientScript.RegisterClientScriptBlock(this.Target.Page.GetType(), "TextBox_KeyUp", script, true);
                    }

                    // wire every qualifying TextBox to the shared handler
                    this.Target.Attributes["onkeyup"] = "TextBox_KeyUp(this)";
                }

                base.OnLoad(e);
            }
        }

    What it does is: for every TextBox control that is set to multi-line and has a defined maximum length, it injects some JavaScript that trims any content exceeding that maximum length. This will occur for any TextBox that you may have on your site, or any class that inherits from it. You can use either of the previous options to register this adapter. Stay tuned for more ASP.NET Web Forms extensibility tips!
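    For the page case mentioned above but not shown, here is a minimal sketch of a page adapter (class and namespace names are illustrative); it is registered like any other adapter, but targeting System.Web.UI.Page:

        using System;
        using System.Web.UI.Adapters;

        // a minimal page adapter sketch - the name is made up for illustration
        public class MyPageAdapter : PageAdapter
        {
            protected override void OnLoad(EventArgs e)
            {
                // runs before the page's own OnLoad; the target page is available
                // through the adapter's protected Page property
                base.OnLoad(e);
            }
        }

    And the corresponding .browser registration:

        <adapter controlType="System.Web.UI.Page" adapterType="MyNamespace.MyPageAdapter, MyAssembly" />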

    Read the article

  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have Apache httpd in front of an application running in Tomcat. The application exposes URLs of the form /path/to/images?id={an-image-id}. The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache:

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on the response, otherwise the downstream caching proxy
            # won't cache!
            Header unset Set-Cookie

            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Note that I can't use ExpiresByType, since not all images served by the app have versioned URIs. I know that the ones served by the /path/to/images resource handler are versioned URIs though, which don't perform any sort of content negotiation, and thus are ripe for far-future Expires management. This is working well for us. Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this by changing my Apache config appropriately:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    This works fine in terms of serving the content, but the caching directives are no longer present on the response. I've tried playing around with [PT] and [P] for the RewriteRule, and adding a new LocationMatch directive:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        # /new/path/to/images/12345 -> /path/to/images?id=12345
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

        <LocationMatch "^/new/path/to/images/">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug Apache like this would be appreciated as well!
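    Since the question explicitly asks for debugging pointers: one hedged thing worth trying is the optional condition argument on Header. By default, Header directives apply only to the onsuccess response table, and depending on how a response is generated its headers can land in a different table; whether that is what is happening here is an assumption, not a diagnosis:

        # sketch: "always" applies the directive to all response header tables
        <LocationMatch "^/new/path/to/images/">
            Header always unset Set-Cookie
            Header always append Cache-Control "max-age=8640000"
        </LocationMatch>

    Inspecting the raw response before and after each config change (for example with curl -sI http://host/new/path/to/images/12345) also makes it much easier to see which directive is actually taking effect.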

    Read the article

  • "Best" language/architecture for a browser-based app with ODBC and sockets? (subjective)

    - by mawg
    Sorry to ask a subjective question, but I would welcome some advice. I am an experienced programmer of embedded s/w, but haven't done much network programming, although I have done a fair bit of hobbyist PHP. Anyway, I have to develop what is probably a fairly general type of app, as shown in this crude diagram:

        ---------------------------------------------------------------------------------
        | Browser / user interface. Takes input from user form and writes data to d/b.  |
        | Also gets data and updates browser contents when d/b contents are changed     |
        | because of info received over TCP/IP.                                         |
        |-------------------------------------------------------------------------------|
        |                                   ODBC                                        |
        |-------------------------------------------------------------------------------|
        |                                 database                                      |
        |-------------------------------------------------------------------------------|
        |                                   ODBC                                        |
        |-------------------------------------------------------------------------------|
        | Socket (TCP/IP)                                                               |
        | Send data out when d/b is updated from browser.                               |
        | Also, update d/b when data are received over TCP/IP.                          |
        ---------------------------------------------------------------------------------

    As I say, I imagine this to be a fairly typical architecture - am I right? The client is insisting on MSIE - unless I can show compelling technical reasons for Firefox or another browser, it will have to be MSIE (are there any compelling technical reasons?). So, with MSIE (almost) a given, I had thought to use PHP, since I know it, but the client seems awfully keen on Java (which ought to be OK since I am conversant with C++). It would seem to make sense to use the same language for the "upper" interface between the web pages (which the app generates) and the d/b, and for the "lower" interface between the d/b and the socket (a single language means a single set of tools, a single approach, etc.). So, the (probably highly subjective) question is: which language should I choose? As I say, the client is keen on Java. Any compelling reason why not? Is it generally a good choice for the sort of thing described here? Any other hints & tips gratefully appreciated (and up-voted): URLs, books, tool chain suggestions, etc.

    Read the article

  • What do I need to do to make a WPF Browser Application (XBAP) that requires Full Trust work on Windows 7?

    - by Benoit J. Girard
    So this is a Visual Studio 2008, .NET, WPF, XBAP, Windows 7 question regarding .NET trust policies. At work, we have several Web Browser Applications (.xbap files) developed with Visual Studio 2008 (so .NET 3.5) that we deployed internally. These required a .NET FullTrust policy; we found a way to make an .MSI that adjusted the policy on individual stations, and everything worked great. Users love in-browser apps. This was last year, on Windows XP. This year our company started upgrading users to Windows 7, and now none of our Web Browser Applications work. The error message is "Trust Not Granted", as if the policy-changing .MSI had not been run. Other details: I can confirm that our apps work on Windows XP in Internet Explorer 7 and Firefox, and do not work on Windows 7 in Internet Explorer 8 or Firefox. I must admit that .NET security policies mystify me. Still, I could not find any mention of this problem on the Net at large or on this site. Did anybody else encounter this problem? Any and all help welcome.
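    For anyone comparing notes: the usual way to grant an XBAP site full trust by hand is CasPol, and one Windows 7-specific wrinkle worth ruling out is that 64-bit machines have separate 32-bit and 64-bit policy stores, so an MSI that ran CasPol from the Framework directory but not Framework64 (or vice versa) would only fix one of them. A hedged sketch, where the URL is a placeholder and this is an assumption about the cause rather than a confirmed fix; run it from both the Framework and Framework64 v2.0.50727 directories:

        rem sketch: add a machine-level FullTrust code group for the deployment URL
        CasPol.exe -m -ag 1 -url "http://yourintranetserver/*" FullTrust -n "Internal XBAP apps"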

    Read the article

  • Can I use Data URLs in Android 2.1's Webkit-based browser?

    - by Sven Haiges
    Hi all, I am writing a tutorial about the HTML5 canvas for mobile and did some basic tests. While I can call the toDataURL() method on an HTML5 canvas element on an iPhone, it does not seem to return the data URL on Android 2.1 (Google Nexus One) and its WebKit-based default browser. Here is the sample:

        var dataURL = canvas.toDataURL();
        var img = document.createElement('img');
        img.setAttribute('src', dataURL);
        document.getElementById('box').appendChild(img);

    This works on the iPhone: it adds a new image element with the same content as the canvas. It does nothing, or fails, on Android 2.1. Has anyone ever gotten this to work? I am also wondering if anyone could help me understand the WebKit build numbers and what they mean with regard to what features I can expect. For the iPhone, I see a build number of 528.18; in Android 2.1's browser I see (from the user agent string) a WebKit build of 530.17. So it looks like Android 2.1's WebKit browser is more up to date, yet some features work in the iPhone's WebKit but not on Android. Does this comparison just make no sense? Thanx all!
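    Not an answer to the build-number question, but a hedged feature-detection sketch that sidesteps it: rather than inferring support from WebKit versions, you can probe toDataURL() directly at runtime (some broken builds are said to return a stub like "data:," or to throw, instead of returning a real image):

        function canvasToDataUrlSupported() {
            var c = document.createElement('canvas');
            if (!c.getContext) return false; // no canvas support at all
            try {
                // a working implementation returns a data: URL with a PNG payload
                return c.toDataURL('image/png').indexOf('data:image/png') === 0;
            } catch (e) {
                return false;
            }
        }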

    Read the article

  • Does anyone know of a good Commercial WPF Web Browser Control?

    - by VoidDweller
    I have an MDI WPF app that I need to add web content to. At first glance it looks like I have two options built into the framework: the Frame control and the WebBrowser control. Given that this is an MDI app, it doesn't take long to discover that neither of these will work. The WebBrowser control wraps up the IE WebBrowser ActiveX control, which uses the Win32 graphics pipeline. The "airspace" issue pretty much sums this up as "Sorry, the layouts will not play nice together". Yes, I have thought about taking snapshots of the web content, rendering those, and mapping the mouse and keyboard events back to the browser control, but I can't afford the performance penalty, and I really don't have time to write and thoroughly test it. I have looked for third-party controls, but so far I have only found Chris Cavanagh's WPF Chromium Web Browser control, which wraps up Awesomium 1.5. Together these are very cool and they play nice with the WPF layouts, but they do not meet my performance requirements: they are VERY HEAVY on memory consumption and not too friendly with CPU usage either. Not to mention still quite buggy. I'll elaborate if you are interested. So, do any of you know of a stable, performant WPF web browser control? Thanks.

    Read the article

  • Problem: How to display a Wordpress RSS feed in a browser that doesn't have a built in RSS reader?

    - by StephenMeehan
    If I can, I'd rather not use a service like FeedBurner. My setup: I've set up an RSS feed link on a self-hosted WordPress website. Clicking the RSS link in Safari shows the feed, because Safari has a built-in RSS reader. Great. Unfortunately, clicking the same RSS link in Chrome displays the raw XML feed. I know why this happens - Chrome doesn't have a built-in RSS reader. I assume this will also be the case in older versions of Internet Explorer. Possible solution? I've noticed http://www.bbc.co.uk/news has a nice solution: click the RSS feed link (top right of the page) in an RSS-enabled browser (Safari) and it uses the built-in RSS reader to display the feed. Click the same RSS feed link in Chrome (which has no built-in RSS reader) and it displays the feed using what looks like a custom page. Is there a way to check if a browser has a built-in RSS reader? How would I provide alternative content (like the BBC site) to a browser that doesn't have an RSS reader installed? Any help on this would be brilliant; thanks for taking the time to read this. Stephen
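    One well-known technique that produces BBC-style behaviour without any browser sniffing: attach an XSL stylesheet to the feed itself. Browsers with a built-in reader typically show their reader UI, while browsers without one apply the stylesheet and render a friendly page. A minimal sketch, where feed.xsl is a hypothetical stylesheet you would create:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
        <rss version="2.0">
            ...
        </rss>

    With WordPress this usually means hooking the feed output or template rather than editing a static file, so treat the placement above as illustrative.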

    Read the article

  • Is accessing USB from a web application possible at all, cross-browser and cross-OS?

    - by Ved
    Hey guys, I am wondering if there is any way we can achieve this. I have heard different things about Silverlight 4, JavaScript and ActiveX controls, but have not seen any demo or code for any of them. Does anyone know of a web component that is available, or how to write one? We would really like to capture the client's USB drive via the web and read/write data on it. This has to work on ANY operating system, in ANY web browser. Thanks. UPDATE: What about WPF in browser mode? I read that I can host my WPF apps inside the browser, sort of like a smart client. Here is a great example of doing this via Silverlight 4, but the author mentions two possibilities for accessing USB on a Mac: 1) enable executing AppleScripts - this option would give us the same amount of control on a Mac machine as we have on a Windows machine; or 2) add an overload to ComAutomationFactory.CreateObject() that calls the "Tell Application" command behind the scenes and gets an AppleScript object - this option would work extremely well for Office automation, but for any other operating system feature you'd have to code OS access twice. I did not quite understand it. Has anyone tried this?
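    For what it's worth, on the Windows side the Silverlight 4 piece of this usually means an out-of-browser application with elevated trust using COM automation. A hedged sketch of enumerating removable drives that way - Windows-only by design, which is exactly the cross-OS limitation discussed above:

        // Silverlight 4, out-of-browser + elevated trust assumed
        using System.Runtime.InteropServices.Automation;

        if (AutomationFactory.IsAvailable)
        {
            dynamic fso = AutomationFactory.CreateObject("Scripting.FileSystemObject");
            foreach (dynamic drive in fso.Drives)
            {
                if (drive.DriveType == 1 && drive.IsReady) // 1 == removable drive
                {
                    // read/write files on drive.RootFolder via fso here
                }
            }
        }

    As far as I know, there is no single mechanism that covers any browser on any OS, which is the crux of the question.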

    Read the article

  • How can I get HTML to link to a browser (or system) specified URL?

    - by MrHatken
    Hi all, I'd like to be able to create an HTML link that the user can click on to be taken to a URL (location) specified either in the browser (preferences?) or in a system environment variable. Is this possible? Any suggestions on how to do it, please? For example, it might look something like this (alternatively it could be a clickable image or even a submit button): "Click here to go to your preferred news site." When the user clicks on "here", the browser would go to a location specified not in the HTML, but somehow in the browser (preferences?) or in some OS-specific environment variable. Of course, the user would have to set up this preference or environment variable (or have some local application - or better, a web page - that could set it, when approved by the user). This is sort of like the way most OSes these days let you set a "preferred app" for image processing or playing media; I would like to set preferred websites for certain tasks. Thanks for any suggestions. Hopefully with JavaScript, modern browsers, and perhaps HTML5, something like this is possible. Cheers, Ashley.
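    Browsers don't expose their preferences or OS environment variables to page JavaScript, but HTML5 storage gets close to the spirit of this: a page (with the user's consent) can save the preferred URL once, and links can then resolve against it. A minimal sketch - the storage key and element id are made up for illustration:

        <a id="newsLink" href="http://example.com/fallback-news">your preferred news site</a>
        <script>
        // the user would have previously chosen this URL on some settings page
        document.getElementById('newsLink').addEventListener('click', function (e) {
            var url = localStorage.getItem('preferredNewsSite');
            if (url) {
                e.preventDefault();
                window.location.href = url; // otherwise fall back to the href
            }
        });
        </script>

    Note this is per-browser, per-site storage, not a true system-wide preference.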

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application running on the same site. If we put a few static file maps (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically, I've found that the two configurations perform on par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.
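    For context, the MVC-level output caching in question is the OutputCache action filter, which is declared the same way regardless of pipeline mode. A minimal sketch, with illustrative duration and vary keys and a placeholder LoadItem helper:

        // System.Web.Mvc - caches the rendered result of this action for an hour,
        // with a separate cache entry per distinct id parameter
        [OutputCache(Duration = 3600, VaryByParam = "id")]
        public ActionResult Details(int id)
        {
            return View(LoadItem(id)); // LoadItem is a hypothetical data accessor
        }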

    Read the article

  • Implementing a Tagging System with PHP and MySQL. Caching help!!!

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting - I have implemented the suggested three-table tagging system completely. To count the number of articles per tag, I am using an extra column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount). If I implement real-time editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag definition table is updated to adjust the counter of the added/removed tag. This costs an extra query for each modification (at the same time, the related entry linking the tag and the article is inserted into or deleted from the tag link table). An alternative is to not allow any real-time editing of the counter, and instead use cron jobs to update the counter of each tag after a specified time period. Here comes the problem that I want to discuss: this can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list, when a tag is explored, while the article counter for that tag is not up to date? For example: 1. the counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles); 2. the counter shows 50 articles, but there are in fact 45 entries in the tag link table. How would you handle these two scenarios? I am going to use APC to keep a cache of these counters - please consider that in your solution too. Also, please discuss the performance of real-time versus cron-based counter updates.
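    For reference, a sketch of the cron-side reconciliation, assuming hypothetical table names tag_definition and tag_link that match the columns described above - a single statement recounts every tag from the link table, which is typically all the periodic job needs to do:

        UPDATE tag_definition
        SET tagArticleCount = (
            SELECT COUNT(*)
            FROM tag_link
            WHERE tag_link.tagId = tag_definition.tagId
        );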

    Read the article

  • CSS - How to prevent the browser from showing scrollbars when a div goes outside of the window?

    - by xarfai
    I have a centered wrapper with the following CSS:

        div.wrapper {
            width: 1170px;
            padding-left: 30px;
            margin-top: 80px;
            margin-bottom: 20px;
            margin-left: auto;
            margin-right: auto;
            position: relative;
            background-color: black;
        }

    Inside it I have a div with the following CSS:

        position: absolute;
        top: -26px;
        left: 517px;
        height: 63px;
        z-index: 3;

    Inside this div is an image which is 759px wide; that makes the wrapper grow larger and makes the browser show a horizontal scrollbar on lower display resolutions. What I want is to let the image extend outside the wrapper but prevent the browser from showing the scrollbar, so that the right side of the image is only shown if the browser window is large enough, and the wrapper keeps its 1200px width. I can't make it a background image because it goes over some of the other content. Something that is compatible with >= IE7 would be nice.
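    The classic trick for this (a hedged sketch, but widely used and IE7-friendly): add an outer full-width element with overflow: hidden around the centered wrapper, so content positioned outside it is clipped instead of triggering a scrollbar. The class name is illustrative:

        /* hypothetical new outer element wrapping div.wrapper */
        div.clip {
            width: 100%;
            overflow: hidden;
        }

    The absolutely positioned image still hangs outside div.wrapper as before, but anything past the window edge is simply cut off rather than scrollable.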

    Read the article

  • MVC4 app opens with a directory listing and gives a 404 for any direct URLs entered in the browser

    - by ProfK
    I've just deployed a previously working MVC4 app (it worked on my local IIS) to IIS 7.5 on the dev server. After tweaking this and that - one knows how these things get forgotten - the app finally launches, but shows a directory listing of the app root. Clicking on most links there works, opening the directory listing of the relevant sub-directory. Elmah logs no errors, and /elmah.axd also gives a 404. The site has an appropriate localhost binding in the hosts file. I can find nothing wrong. MVC is installed on the server, as another MVC app works fine.
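    Directory listings plus 404s on extensionless URLs usually mean requests never reach the routing module. One hedged thing to compare against the working app is the modules configuration in web.config; the heavy-handed classic workaround is:

        <!-- sketch: forces managed modules (incl. UrlRoutingModule) to run for all requests -->
        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    Or, more surgically, check that the ExtensionlessUrlHandler entries are present and the app pool is in integrated mode. This is a guess at the cause, not a confirmed diagnosis.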

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML5 sessionStorage and localStorage are very useful and super easy to use for managing client-side state. For building rich client-side or SPA-style applications, being able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM is a vital feature. What might not be so obvious is that you can use the sessionStorage and localStorage objects even in classic server-rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy

    The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string-based key/value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array, with a length property and array indexers to set and get values. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");
        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is specific to the browser session and has the lifetime of the active browser session or window: shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size - but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and goes a long way. The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code; it's similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the localStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters, and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the localStorage counter intact.
    The code that deals with sessionStorage in the example (the localStorage version is not shown here) is isolated into a couple of wrapper methods to simplify things:

        function getSessionCount() {
            var count = 0;
            if (sessionStorage) {
                count = sessionStorage.getItem("ss_count");
                count = !count ? 0 : count * 1;
            }
            $("#txtSession").val(count);
            return count;
        }

        function setSessionCount(count) {
            if (sessionStorage)
                sessionStorage.setItem("ss_count", count.toString());
        }

    These two functions essentially load and store a session counter value. The two key methods used here are:

        sessionStorage.getItem(key);
        sessionStorage.setItem(key, stringVal);

    Note that the value given to setItem and returned by getItem has to be a string; if you pass another type you get an error. Don't let that limit you, though - you can easily serialize JSON data, so it's quite possible to store complex objects in a single sessionStorage value:

        var user = { name: "Rick", id: "ricks", level: 8 };
        sessionStorage.setItem("app_user", JSON.stringify(user));

    And to retrieve it:

        var user = sessionStorage.getItem("app_user");
        if (user)
            user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12), you can also check out the session and local storage state on the Resources tab, and use that tool to refresh or remove entries from storage. What we just looked at is a purely client-side implementation where a couple of counters are stored. For rich, client-centric AJAX applications, sessionStorage and localStorage provide a very nice and simple API for storing application state while the application is running. But you can also use these storage mechanisms to manage server-centric HTML applications, combining server rendering with some JavaScript to perform client-side data caching. You can store state information and data on the client (i.e. store a JSON object and carry it forward between server-rendered HTML requests), or you can use it for good old HTTP-based caching, where some rendered HTML is saved and then restored later. Let's look at the latter with a real-life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?

    I don't know about you, but in a lot of my existing server-driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. So you click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a breadcrumb trail or an application-level 'back to list' button and… you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting your browsing of the list from scratch, at the top. Not cool! Sound familiar? This is a pretty common scenario with server-rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean: scroll down a bit to look at a post or entry, drill in, then use the breadcrumbs or tab to go back…
    On StackOverflow that sort of works, because content is turning around so quickly that you probably do want to look at the top posts. Not always, though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, any time you're actively browsing the items in the list is when state becomes important, and if it's not handled, the user experience can be really disruptive.

    Content Caching

    If you're building client-centric SPA-style applications, this is a fairly easy problem to solve: you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server-rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to that page, based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A Real-World Use Case

    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames-based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely that ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames-based desktop site here: http://classifieds.gorge.net/. However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this, of course, was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly interface, which ditches the frames and uses some responsive design tweaking for mobile-capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px). In the mobile, non-frames view the classifieds posts become a plain list, with a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally, in mobile mode you scrolled through the list, found an item to look at, and drilled in to display the item detail. Then you clicked back to the list and BAM - you'd lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any additionally loaded items were no longer there, because the list was now rendering as if it were the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
    Using Client SessionStorage to cache Server Rendered Content

    To work around this problem, I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page, or from the login or write-entry forms, then point back to the list page with a back=true query string parameter. If the server side sees this parameter, it doesn't render the part of the page that is cached; instead, the client-side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. First, let's look at the implementation, starting with the server side, because that gives a quick idea of the document structure. As mentioned, the server renders data from an ASP.NET MVC view. Returning to the list page from the display page (or a host of other pages) looks like this:

        https://classifieds.gorge.net/list?back=True

    The query string value is a flag that indicates whether the server should render the HTML. Here's what the top-level MVC Razor view for the list page looks like:

        @model MessageListViewModel
        @{
            ViewBag.Title = "Classified Listing";
            bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
        }
        <form method="post" action="@Url.Action("list")">
            <div id="SizingContainer">
                @if (!isBack)
                {
                    @Html.Partial("List_CommandBar_Partial", Model)
                    <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                        @Html.Partial("List_Items_Partial", Model)
                        @if (Model.RequireLoadEntry)
                        {
                            <div class="postitem loadpostitems" style="padding: 15px;">
                                <div id="LoadProgress" class="smallprogressright"></div>
                                <div class="control-progress">
                                    Load additional listings...
                                </div>
                            </div>
                        }
                    </div>
                }
            </div>
        </form>

    As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside #SizingContainer holds essentially the entire page's HTML, sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense; in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored. OK, let's move on to the client. On the client there are two page-level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic that saves and restores values from sessionStorage into separate functions, because they are almost always used in several places:

        page.saveData = function(id) {
            if (!sessionStorage)
                return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            sessionStorage.setItem("list_html", JSON.stringify(data));
        };

        page.restoreData = function() {
            if (!sessionStorage)
                return;
            var data = sessionStorage.getItem("list_html");
            if (!data)
                return null;
            return JSON.parse(data);
        };

    The data that is saved is an object containing an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is restored from the cache.
    Finally, the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured can be a substantial amount of data. If you recall, I mentioned that the server-side code renders a small chunk of data initially and then gets more if the user reads through the first 50 or so items; the rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK: since I'm using sessionStorage, the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

        $( function() {
            …
            var isBack = getUrlEncodedKey("back", location.href);
            if (isBack) {
                // remove the back key from URL
                setUrlEncodedKey("back", "", location.href);

                var data = page.restoreData(); // restore from sessionStorage

                if (!data) {
                    // no data - force redisplay of the server side default list
                    window.location = "list";
                    return;
                }

                $("#SizingContainer").html(data.html);

                var el = $(".postitem[data-id=" + data.id + "]");
                $(".postitem").removeClass("highlight");
                el.addClass("highlight");

                $("#PostItemContainer").scrollTop(data.scroll);

                setTimeout(function() {
                    el.removeClass("highlight");
                }, 2500);
            }
            else if (window.noFrames)
                page.saveData(null); // save when page loads

            $("#SizingContainer").on("click", ".postitem", function() {
                var id = $(this).attr("data-id");
                if (!id)
                    return true;

                if (window.noFrames)
                    page.saveData(id);

                var contentFrame = window.parent.frames["Content"];
                if (contentFrame)
                    contentFrame.location.href = "show/" + id;
                else
                    window.location.href = "show/" + id;

                return false;
            });
            …

    The code starts by checking for the back query string flag, which triggers restoring from the client cache. If it is present, the cached data structure is read from sessionStorage. It's important to check whether data was actually returned: if the user had back=true on the query string but there is no cached data, they likely bookmarked this page, or shut down the browser and came back to this URL later. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so always check the cache retrieval result. Always! If there is data, the data.html is restored by simply injecting the HTML back into the document's #SizingContainer element:

        $("#SizingContainer").html(data.html);

    It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every initial page load, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, while the latter updates the cache with the selected element. For the click handling, I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry, and is also stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
    The overall save/restore process is pretty straightforward and doesn't require a lot of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for

    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle, and it works, but it took a few tweaks to the rest of the script code and server code to make everything work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

        - JavaScript event handling logic
        - Timing of manipulating DOM elements
        - Inline script code
        - Bookmarking the cache URL when no cache exists

    JavaScript Event Hookups

    The biggest issue I ran into with this approach, almost immediately, is that originally I had various static event handlers hooked up to UI elements that are now cached. If you have an event handler like:

        $("#btnSearch").click( function() {…});

    it works fine when the page loads with server-rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there yet. Luckily there's an easy workaround using deferred events; with jQuery you can use the .on() event handler instead:

        $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

    Timing of manipulating DOM Elements

    Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist yet. Ideally you'll want to check for the restore condition towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

    Inline Script Code

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache by simply assigning the HTML, or it may not run at the time you'd expect in the DOM rendering cycle. Here's a small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker, plus a helper function that enables keyboard date navigation using JavaScript logic. Because of MVC's limited 'object model', the only way to embed the widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected. As with the last bullet, it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead. That's a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

    Bookmarking Cached Urls

    When you cache HTML content, you have to decide whether caching on the client also means not rendering that same content on the server.
    In the Classifieds app I didn't render the server-side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook, and the problem may not surface until much later, when it's not at all obvious why the page is rendering nothing.

    Don't use <body> to replace Content

    Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top-level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

    Other Approaches

    The approach I've used for this application is somewhat specific to the existing server-rendered application we're running, so it's just one way to approach caching. For server-rendered content, though, this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

        - Using partial HTML rendering via AJAX: instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical, and removes the need for the server to decide whether the request needs to be rendered or not (i.e. no checking for a back=true switch). All the caching logic lives on the client in this case.

        - Using JSON data and client rendering: the hardcore client option is to do the whole UI SPA-style, pull JSON data from the server and render it using templates or client-side databinding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in logic between pulling the data from cache and rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA, caching may not even be required: the list could just stay in memory, be hidden, and be reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since no content is loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution.

    State of the Session

    sessionStorage and localStorage are easy to use in client code and can be integrated even with server-centric applications to provide nice caching features for content and data.
    In this post I've shown a very specific scenario: storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially when dynamically loaded content is involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff you can do with this beyond the specific scenario I've covered here…

    Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using Django, nginx and Apache. When I access my site with a URL (e.g. http://www.foo.com/), what appears in my browser's address bar is the IP address with admin appended (e.g. http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by Django's urls.py (e.g. http://123.45.67.890/ goes to http://123.45.67.890/accounts/login/?next=/). I would like the named URL to act the same way as the IP: that is, if the URL goes to a new view, the host in the browser address bar should remain the same and not change to the IP address. Where should I be looking to fix this? My files:

        ; cpa.com (apache)
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm
            DocumentRoot /path/to/root
            ServerName www.foo.com
            <IfModule mod_rpaf.c>
                RPAFenable On
                RPAFsethostname On
                RPAFproxy_ips 127.0.0.1
            </IfModule>
            <Directory /public/static>
                AllowOverride None
                AddHandler mod_python .py
                PythonHandler mod_python.publisher
            </Directory>
            Alias / /dj
            <Location />
                SetHandler python-program
                PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path"
                PythonHandler django.core.handlers.modpython
                SetEnv DJANGO_SETTINGS_MODULE dj.settings
                PythonDebug On
            </Location>
        </VirtualHost>

        ; ports.conf (apache)
        Listen 127.0.0.1:8080

        ; cpa.conf (nginx)
        server {
            listen 80;
            server_name www.foo.com;
            location /static {
                root /var/public;
                index index.html;
            }
            location /cpa/js {
                root /var/public/js;
            }
            location /cpa/css {
                root /var/public/css;
            }
            location /djmedia {
                alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/";
            }
            location / {
                include /etc/nginx/proxy.conf;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:8080;
            }
        }

        ; proxy.conf (nginx)
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 500;
        proxy_buffers 32 4k;

    Read the article

  • How to install Flash for the Opera browser? Ubuntu 12.04

    - by santosamaru
    In opera:plugins the Flash player plugin is set to enabled, and I also tried following the instructions from "I am testing the new Opera 11, but it keeps telling me I need to install flash player", but it still doesn't help me. What I got after following that link's instructions is:

        root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so
        cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory
        root@santos:/home/santos# sudo apt-get gecko-mediaplayer
        E: Invalid operation gecko-mediaplayer
        root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so
        cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory

    Can anyone help me solve this?
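    From the transcript, two things stand out: the apt-get line is missing the install verb, and the cp fails because the target directory doesn't exist yet - and, since the commands run as root, ~ expands to /root rather than the normal user's home. A hedged sketch of the fix, run as the user who actually runs Opera:

        # create the plugins directory first, then copy the library into it
        mkdir -p ~/.opera/plugins
        cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/

        # and the package command needs "install":
        sudo apt-get install gecko-mediaplayer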

    Read the article

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. Since that posting was based on real work and real-life results, I thought it might be interesting to you too. Here you can read what I did, how I did it, and what the results were.

Introduction
Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions, you will get what you measure - and right he is. Recently I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough for the environment this system lives in. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all of these counters have the same meaning as the similar counters under pure ASP.NET applications.

Measured counters
I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS):
Total number of objects added - how many objects were added to the output cache.
Total object discards - how many objects were deleted from the output cache.
Cache hit count - how many times requests were served from the cache.
Cache hit ratio - the percentage of requests served from the cache.
The first three counters are cumulative, while the last one is a coefficient. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load etc. before and after caching).

Measuring process
The measuring described here started from a freshly restarted web server. I measured the application over 12 hours, covering the time ranges when users are most active. The time range does not include late evening hours and night, because there is nothing to measure during those hours. During measuring we performed no maintenance or administrative tasks on the server; all tasks performed were related to usual daily content management and content monitoring. We also had no advertisement campaigns or other promotions running at the same time.

The results
[Graphs omitted from this excerpt: total number of objects added, total object discards, cache hit count, cache hit ratio.]
You can see that adds and discards grow at the same tempo. This is good, because the cache expires and less popular items are not kept in memory. If some content is markedly more popular, these lines may show a bigger distance between them. Cache hit count grows faster, which shows that more and more content is served from the cache. In the current case it shows that the cache is filled optimally, and we can do even better if we tune the caches more. The site also contains pages that are discarded when some subsite changes (a page was added/modified/deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count, because during the day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good. The suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7%. This is because new pages are requested most often, and after new pages are added the older ones are requested only occasionally - so they get discarded from the cache, and only some of them will sometimes return to it. Although this may also indicate the need for additional SEO work, the result is very good in technical terms.

Conclusion
Measuring the ASP.NET output cache is not a complex thing to do, and you can start by measuring the performance of the cache itself. Later you can move on and measure caching's effect on other counters such as disk I/O, network, processors etc. What you have to achieve is an optimal cache that is not full of items asked for only a couple of times per day (you can avoid this by not using overly long cache durations). After some tuning you should be able to boost the cache hit ratio up to at least 85%.
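As a quick worked example of the hit-ratio arithmetic (the numbers below are invented for illustration, not taken from the measured site), the ratio is simply hits divided by total requests:

// Cache hit ratio as a percentage: hits / (hits + misses) * 100
function cacheHitRatio(hits, misses) {
    var total = hits + misses;
    return total === 0 ? 0 : (hits / total) * 100;
}

// 9870 requests served from cache out of 10000 total
console.log(cacheHitRatio(9870, 130).toFixed(1) + "%");   // prints "98.7%"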

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure, which serves three main purposes: Access Control, Caching, and acting as a Service Bus.

Overview and Training - overview and general information about the Azure Application Fabric: what it is, how it works, and where you can learn more.
General Introduction and Overview: http://msdn.microsoft.com/en-us/library/ee922714.aspx
Access Control Service Overview: http://msdn.microsoft.com/en-us/magazine/gg490345.aspx
Microsoft Documentation: http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx

Learning and Examples - sources for online and other Azure Application Fabric training.
Application Fabric SDK: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en
Application Fabric Caching Service Primer: http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0
Hands-On Lab: Building Windows Azure Applications with the Caching Service: http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29

Architecture - Azure Application Fabric internals and architectures for scale-out and other use cases.
Azure Application Fabric Architecture Guide: http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx
Windows Azure AppFabric Service Bus - A Deep Dive (Video): http://www.msteched.com/2010/Europe/ASI410
Access Control Service (ACS) High Level Architecture: http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx

Applications and Programming - programming patterns and architectures for Application Fabric systems.
Various Examples from PDC 2010 on using the Azure Application Fabric as a Service Bus: http://tinyurl.com/2dcnt8o
Creating a Distributed Cache using the Application Fabric: http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx
Azure Application Fabric Java SDK: http://jdotnetservices.com/

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
As I installed Windows 8.1 last week, I decided I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head-scratchers to me.

Modern split windows don't resize the viewport, but zoom out
This one falls into the "WTF, really?" department: it looks like Modern Internet Explorer doesn't resize the browser window as every other browser does (including IE 11 on the desktop), but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as happens when a browser window is typically resized.

Here's what I'm talking about, in a couple of pictures. First, the full-screen Internet Explorer version (this shot is resized down since it's full screen at 1080p; click to see the full image). This brings up the first issue: on the desktop, who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience, and frankly most content that wide just looks weird. Even at typical 10" resolutions of 1280px width it's weird to look at things this way. At least this issue can be worked around with @media queries, either constraining the view or adding additional content to make use of the extra space. Still, running a desktop browser full screen is never optimal on a desktop machine.

Regardless, this view, while oversized, is what I expect: everything is rendered in the right ratios, with font sizes and the responsive design styling properly respected. But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screenshot is not resized but cropped - this is actual-size content, as you can see in the cropped Twitter window on the right half of the screen). What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect, the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in a crazy long portrait view.

There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable, as the fonts are too small and break up. As you go into split view and resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), throwing off any responsive layout to the point of unreadability, even on a big display, let alone a small tablet screen.

What could POSSIBLY be the benefit of this screwed-up behavior? I checked around a bit, trying different pages in this shrunk-down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which is undoubtedly optimized just for Internet Explorer specifically.

Bottom address bar opaquely overlays content
Another problematic feature for me is the browser address bar on the bottom. Modern IE shows the address bar opaquely on the bottom, overlaying the content area of the web page, until you click on the page. Until you do, the address bar covers the bottom content solidly - and not just a little bit, but by a good-sized chunk. In the application from the screenshot above I have an application toolbar on the bottom, and the IE address bar completely hides that toolbar when the page is first loaded, until the user clicks into the content, at which point the address bar shrinks down to a fat border-style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile-optimized applications, so I'd say this is a common use case. Even if you don't have toolbars on the bottom, there may be other fixed content at the bottom of the page that is vital to display. While other browsers also often show address bars and later hide them, those browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE: the address bar overlays content and stays visible until the content is clicked. No resize notification or viewport height change is sent to the browser.

So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective, as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full-screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone, with its more constrained size, doesn't hide content - in fact, the address bar on Windows Phone 8 is always visible.

What were they thinking?
Every time I use anything in the Modern/Metro interface in Windows 8/8.1 I get angry. I can pretty much ignore Metro/Modern in my everyday usage, but unfortunately I have to live with Internet Explorer in the Modern shell, because there will be users using it to access my sites. I think it's inexcusable for Microsoft to build such a crappy shell around the browser that it impacts the actual usability of web content. In both of the cases above I can only scratch my head at what could possibly have motivated anybody designing the UI for the browser to make these screwed-up choices that manipulate the content in a totally unmaintainable way.
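If you want to see for yourself what a browser is doing to your viewport, a rough diagnostic sketch like the following (my own illustration, not Rick's code; the 600px breakpoint is an arbitrary example) logs the CSS pixel width the browser reports and which media query it considers active:

// Log the viewport metrics the browser reports - in the zoom-out behavior
// described above, innerWidth stays large even when the visible window is narrow
function logViewport() {
    console.log("innerWidth: " + window.innerWidth +
                ", clientWidth: " + document.documentElement.clientWidth +
                ", devicePixelRatio: " + window.devicePixelRatio);
    // matchMedia reveals which responsive breakpoint the browser thinks applies
    console.log("narrow breakpoint active: " +
                window.matchMedia("(max-width: 600px)").matches);
}

window.addEventListener("resize", logViewport);  // Modern IE may never fire this in split view
logViewport();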

    Read the article

How to make a web icon open with a specific browser?

    - by David
I have an icon on my desktop for a website called QUAKE LIVE, and I use Google Chrome as my default browser. The website isn't compatible with Google Chrome, but it works with Mozilla Firefox. Is there any way to edit the properties of the icon so that it opens with Firefox instead of Chrome?

    Read the article

  • Is it possible to make a web browser proxy tunnel with Netcat/Socat?

    - by djangofan
Concerning the Netcat/Socat utilities: from the man page, it seems possible to create a secure proxy using netcat that I could point my web browser at like a proxy server, and that could fork and drive my web traffic through the proxy. Is this possible? Any hints on how to do this? Socat on Windows is preferable, but netcat on Linux is OK. http://www.dest-unreach.org/socat/doc/socat.html
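For what it's worth, the forwarding idea itself is easy to see in code. Here is a conceptual sketch in Node.js of what a forking TCP forwarder does - it illustrates the mechanism behind something like socat's TCP-LISTEN address with the fork option, it is not netcat/socat itself, and the upstream host/port are made-up examples:

var net = require("net");

// Accept local connections and pipe each one to an upstream proxy
var server = net.createServer(function (client) {
    var upstream = net.connect(3128, "proxy.example.com");  // assumed upstream proxy
    client.pipe(upstream);   // browser -> upstream
    upstream.pipe(client);   // upstream -> browser
    client.on("error", function () { upstream.destroy(); });
    upstream.on("error", function () { client.destroy(); });
});

// Point the browser's proxy setting at 127.0.0.1:8080
server.listen(8080, "127.0.0.1");

Note this only forwards raw TCP; it does no TLS of its own, so "secure" tunneling would still need an encrypting layer (which is where socat's OPENSSL addresses, or an SSH tunnel, usually come in).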

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >