Search Results

Search found 1790 results on 72 pages for 'tabbed browsing'.


  • Lenovo ThinkPad L520 slows down when AC power adapter is plugged in

    - by Aamir
    I have a new laptop, a Lenovo ThinkPad L520 (7859-5BG), Core i5-2520M (2.5 GHz) with 4 GB RAM, running Ubuntu 11.10 32-bit. While browsing with Chrome on GNOME Classic (no effects), I noticed 173% CPU usage by the Chrome browser process, and the system slowly became very, very slow. At that point I removed the power adapter, and the system suddenly got faster (the lagging stopped) and CPU usage dropped to 48%!

    Observation 1: I was browsing with Chrome when the system seemed to be seriously lagging, so I killed Chrome to see if things got any faster. It made no difference. CPU usage was a bit strange here: it showed no high activity, but as soon as I clicked on Applications in the GNOME panel, CPU usage shot up to 70, 80, 90 or 143%, depending on how quickly I clicked back and forth. At that instant I unplugged the laptop's AC adapter, and the system suddenly behaved. Clicking around the GNOME panel again, the same clicks in the application menu now took only 7%, 12% or 13% at most.

    Observation 2: At other times, with the AC adapter plugged in, top shows four Chromium processes taking 90%, 60%, 47% and 2% (for example), and once I pull out the AC adapter the same processes suddenly use less CPU.

    Intermediate conclusions: What does this indicate? I cannot find any "other" process in top that is suddenly being triggered; it is the same process that hogs my CPU once AC power is plugged in. NOTE: the problem is now CONFIRMED, as I can reproduce it whenever the power adapter is plugged in. Can anyone tell me what exactly this indicates? What is wrong? Is it a bug in power management, or something else?

    Read the article

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing at home is painful (slow browsing and slow loading times). I changed DNS servers to 8.8.0.0; it still doesn't help. And funnily, download speed is extremely high on this network (torrents, for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in the router settings, or what can I try? By the way, I use a wired connection to the router.

    EDIT: There are no problems when using Windows.

    EDIT: ifconfig:

        eth0      Link encap:Ethernet  HWaddr f2:4d:a0:c0:3f:4c
                  inet addr:192.168.11.8  Bcast:192.168.11.255  Mask:255.255.255.0
                  inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:206798 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:76680734 (76.6 MB)  TX bytes:21738160 (21.7 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:160 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:11094 (11.0 KB)  TX bytes:11094 (11.0 KB)

        ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        ping -c 2 google.com
        PING google.com (213.159.32.147) 56(84) bytes of data.
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms
        --- google.com ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms

    Read the article

  • Not able to track traffic on subdomain using Google Analytics

    - by Steven
    I'm trying to track traffic for my subdomain, but it's not happening. This is how it's set up: my partner has a domain called sub1.partner.com, which points to partner1.mydomain.com. The idea is that users think they are browsing my partner's website when they are in fact browsing pages on my server. My tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
        _gaq.push(['_setDomainName', '.mysite.com']);
        _gaq.push(['_trackPageview']);

        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    In Google Analytics I've created a new account under my main account and called it partner1.mysite.com. On this account I have created a filter:

        Filter type: include
        Filter field: Host name
        Filter pattern: partner1.mysite.no
        Case sensitive: No

    What more can I try to track traffic on my subdomain?

    UPDATE
    Question 1: Is this line correct? _gaq.push(['_setDomainName', '.mysite.com']);
    Question 2: Is it correct that I have to escape punctuation in filters, i.e. write \. instead of . ?
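
    For reference, here is a hedged sketch of how a ga.js subdomain setup commonly looks; the UA number is a placeholder, and note that the filter above says partner1.mysite.no while the tracker says .mysite.com, so the two need to agree. On question 2: GA filter patterns are regular expressions, so escaping dots as \. is the safe form.

        // Hedged sketch of a typical ga.js subdomain setup; 'UA-XXXXXXXX-X'
        // is a placeholder for the real profile ID.
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        // Share one cookie across the domain and all of its subdomains;
        // recent ga.js versions treat 'mysite.com' and '.mysite.com' alike.
        _gaq.push(['_setDomainName', 'mysite.com']);
        _gaq.push(['_trackPageview']);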

    Read the article

  • How can one-handed work in Ubuntu be eased?

    - by N.N.
    My right hand is temporarily immobilized and I would like to do some minor general work on my computer: mostly web browsing, mail, and file and directory browsing and editing. For this I currently use Firefox, Thunderbird, Nautilus and the GNOME Terminal (I have already asked a specific question about Emacs). Are there ways to ease such, or any other general, one-handed work in Ubuntu?

    I have found http://stackoverflow.com/questions/2391805/how-can-i-remain-productive-with-one-hand-completely-immobilized but that is not exactly what I am asking for. I want to ease whatever little time is spent one-handed in Ubuntu, and this is also interesting for situations where there is no injury involved, such as when one hand is occupied. I do realize I should avoid unnecessary strain.

    The main thing that is much slower one-handed is writing. Since I am only temporarily immobilized it makes no sense to learn a new keyboard layout; I would be surprised if I managed to learn and become more effective with a new layout (than with one-handed QWERTY) before I can use my other hand again.

    What I have already found:

    - Sticky keys make it easier to enter keyboard commands.
    - When writing one-handed there are more cases where it is useful to paste in phrases rather than retype them.
    - It is easier to use Super+S than Ctrl+Alt+arrow keys to switch workspaces.

    Read the article

  • What You Said: How Do You Browse Securely Away From Home?

    - by Jason Fitzpatrick
    Responses to this week’s Ask the Reader question show that just because you’re away from home doesn’t mean you have to give up the security and privacy that your home network provides. Earlier this week we asked you to share your tips and tricks for browsing securely away from home, and you obliged. JC offered one of the more entertaining tales of away-from-home browsing:

        Recently a bunch of us stayed at a high-end resort down in Mexico. Internet was offered as a pay-per-device service at about $80/week/device. Considering we had about 12 Wi-Fi devices there among us (a few geeks), I decided to plan ahead. I set up a WRT54G as a Wi-Fi client with a VPN back to my house and NAT. I set up a second one as a basic wireless access point with a password and plugged it into the first. On site we set up the devices and connected to the wireless with one paid account (tied to the MAC address). Everyone connected to the other device for wireless access, and it was all tunnelled through my home network with encryption.

    Read the article

  • Is there supposed to be a Windows Network folder in the file manager?

    - by Cindy
    I pulled my hard drive out of my computer and started with a bootable USB version of Ubuntu, which I am using at this point. At first boot, I see that there is a Windows folder when browsing the network. Since there is no operating system present, besides the USB stick that I boot from, should there be a Windows network folder?

    Original question: First of all I just want to say I wish I had tried Ubuntu a couple of years ago when I first heard about it, but I was like a lot of the population and went with the "easy way" and stuck with Windows, because I didn't want to take the time to learn something new. Well, about 3 months ago I realized someone had hacked into my computer, and then found they had hacked my Facebook account, so I decided I had better do a complete credit check. I found student loans (totalling about 30,000 so far) had recently shown up on my credit report. I think it's going to be a long, long road to recovery now, but I'm hoping Ubuntu will be a start and definitely an eye opener. My relationship with Windows is over. I had 3 antivirus programs running; none were protecting me like I thought they were. It turned out a free program that I downloaded was the only one that could detect and clean the virus, but by then it was too late. Anyhow, my question is the one summarized above. I am using a local ISP (and won't be much longer, because I am very paranoid at this point) and I want to make sure all is OK before I put my new hard drive in and install Ubuntu. Any help would be appreciated. Also, I want to thank Ubuntu and the community for giving people an alternative.

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 sessionStorage and localStorage are very useful and super easy to use for managing client-side state. For building rich client-side or SPA-style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server-rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy

    The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string-based key/value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array, with a length property and array indexers to set and get values. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage; just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");

        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is browser-session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is permanently stored in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache).

    Both sessionStorage and localStorage space is limited. The spec is ambiguous about this; supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5 MB for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5 MB worth of character data is quite a bit and goes a long way.

    The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the localStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
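
    The 2.5 MB ceiling mentioned above is worth guarding against in code. As a minimal sketch (the helper names here are mine, not part of the Plunker example), a small wrapper can JSON-encode values and swallow the quota error a full store throws on write:

        // Hypothetical helpers: JSON-encode values and guard against
        // quota errors when the storage area is full.
        function cacheSet(key, value) {
            if (!window.sessionStorage) return false;
            try {
                sessionStorage.setItem(key, JSON.stringify(value));
                return true;
            } catch (e) {
                return false; // QuotaExceededError or a vendor-specific equivalent
            }
        }
        function cacheGet(key) {
            if (!window.sessionStorage) return null;
            var raw = sessionStorage.getItem(key);
            return raw ? JSON.parse(raw) : null;
        }

    Callers can then treat a false return as "cache unavailable" and fall back to re-rendering.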
    The code to deal with the sessionStorage (localStorage is not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

        function getSessionCount() {
            var count = 0;
            if (sessionStorage) {
                count = sessionStorage.getItem("ss_count");
                count = !count ? 0 : count * 1;
            }
            $("#txtSession").val(count);
            return count;
        }

        function setSessionCount(count) {
            if (sessionStorage)
                sessionStorage.setItem("ss_count", count.toString());
        }

    These two functions essentially load and store a session counter value. The two key methods used here are:

        sessionStorage.getItem(key);
        sessionStorage.setItem(key, stringVal);

    Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though: you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them in a single sessionStorage value:

        var user = { name: "Rick", id: "ricks", level: 8 };
        sessionStorage.setItem("app_user", JSON.stringify(user));

    To retrieve it:

        var user = sessionStorage.getItem("app_user");
        if (user)
            user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage.

    What we just looked at is a purely client-side implementation where a couple of counters are stored. For rich, client-centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server-centric HTML applications when you combine server rendering with some JavaScript to perform client-side data caching. You can store some state information and data on the client (i.e. store a JSON object and carry it forward between server-rendered HTML requests), or you can use it for good old HTTP-based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real-life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?

    I don't know about you, but in a lot of my existing server-driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a breadcrumb trail or an application-level 'back to list' button and… you end up back at the top of the list. The scroll position, the current selection, in some cases even filter conditions: all gone with the wind. You've left behind the state of the list and are starting your browsing of the list from scratch, at the top. Not cool!

    Sound familiar? This is a pretty common scenario with server-rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean. Scroll down a bit to look at a post or entry, drill in, then use the breadcrumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverflow that sort of works, because content is turning around so quickly that you probably want to actually look at the top posts. Not always though: if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disrupting.

    Content Caching

    If you're building client-centric, SPA-style applications this is a fairly easy problem to solve: you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server-rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it?

    This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A real World Use Case

    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames-based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames-based desktop site here: http://classifieds.gorge.net/

    However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this, of course, was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly interface, which ditches the frames and uses some responsive design tweaking for mobile-capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px).

    As you can see in the mobile, non-frames view, the list of classifieds posts now is a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM: you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any additionally loaded items were no longer there, because the list was now rendering as if it were the first page hit. The additional listings, any filters, the selection of an item: all were lost. Major suckage!
    Using Client SessionStorage to cache Server Rendered Content

    To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page, or from the login or write-entry forms, then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client-side code retrieves the data from the sessionStorage cache and simply inserts it into the page.

    It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

    Let's start with the server side, because that gives a quick idea of the document structure. As I mentioned, the server renders data from an ASP.NET MVC view. Returning to the list page from the display page (or a host of other pages) looks like this:

        https://classifieds.gorge.net/list?back=True

    The query string value is a flag that indicates whether the server should render the HTML. Here's what the top-level MVC Razor view for the list page looks like:

        @model MessageListViewModel
        @{
            ViewBag.Title = "Classified Listing";
            bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
        }
        <form method="post" action="@Url.Action("list")">
            <div id="SizingContainer">
                @if (!isBack)
                {
                    @Html.Partial("List_CommandBar_Partial", Model)
                    <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                        @Html.Partial("List_Items_Partial", Model)
                        @if (Model.RequireLoadEntry)
                        {
                            <div class="postitem loadpostitems" style="padding: 15px;">
                                <div id="LoadProgress" class="smallprogressright"></div>
                                <div class="control-progress">
                                    Load additional listings...
                                </div>
                            </div>
                        }
                    </div>
                }
            </div>
        </form>

    As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense; in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

    OK, let's move on to the client. On the client there are two page-level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions because they are almost always used in several places:

        page.saveData = function(id) {
            if (!sessionStorage)
                return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            sessionStorage.setItem("list_html", JSON.stringify(data));
        };

        page.restoreData = function() {
            if (!sessionStorage)
                return;
            var data = sessionStorage.getItem("list_html");
            if (!data)
                return null;
            return JSON.parse(data);
        };

    The data that is saved is an object which contains an id (the element selected when the user clicks) and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
    Finally the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server-side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK. Since I'm using sessionStorage the storage space has no immediate limits.

    Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it is, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

        $( function() {
            …
            var isBack = getUrlEncodedKey("back", location.href);
            if (isBack) {
                // remove the back key from URL
                setUrlEncodedKey("back", "", location.href);

                var data = page.restoreData();  // restore from sessionState
                if (!data) {
                    // no data - force redisplay of the server side default list
                    window.location = "list";
                    return;
                }
                $("#SizingContainer").html(data.html);

                var el = $(".postitem[data-id=" + data.id + "]");
                $(".postitem").removeClass("highlight");
                el.addClass("highlight");

                $("#PostItemContainer").scrollTop(data.scroll);
                setTimeout(function() {
                    el.removeClass("highlight");
                }, 2500);
            }
            else if (window.noFrames)
                page.saveData(null);  // save when page loads

            $("#SizingContainer").on("click", ".postitem", function() {
                var id = $(this).attr("data-id");
                if (!id)
                    return true;

                if (window.noFrames)
                    page.saveData(id);

                var contentFrame = window.parent.frames["Content"];
                if (contentFrame)
                    contentFrame.location.href = "show/" + id;
                else
                    window.location.href = "show/" + id;
                return false;
            });
            …

    The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If it is present, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data; so make sure to always check the cache retrieval result. Always!

    If there is data, it's loaded, and the data.html content is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:

        $("#SizingContainer").html(data.html);

    It's that simple, and it's quite quick even with a fully loaded list of additional items, and on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element.

    For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry, as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
    The overall save/restore process is pretty straightforward and doesn't require a lot of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for

    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

    - Event handling logic
    - Timing of manipulating DOM elements
    - Inline script code
    - Bookmarking the cache URL when no cache exists

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd expect in the DOM rendering cycle.

    JavaScript Event Hookups

    The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

        $("#btnSearch").click( function() {…});

    it works fine when the page loads with server-rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there yet. Luckily there's an easy workaround by using deferred events. With jQuery you can use the .on() event handler instead:

        $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks the inner selector for the elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document the bindings still work. For any cached content, use deferred events.

    Timing of manipulating DOM Elements

    Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

    Inline Script Code

    Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet, it's a matter of timing. There's no good workaround for this; in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead. That's a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

    Bookmarking Cached Urls

    When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
    In the classifieds app I didn't render the content server side, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to; in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook, and you may not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

    Don't use <body> to replace Content

    Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads, specifically script tags and elements and possibly other embedded content. It's best to create a top-level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

    Other Approaches

    The approach I've used for this application is kind of specific to the existing server-rendered application we're running, so it's just one approach you can take with caching. However, for server-rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server-rendering logic. Here are a few other ways that come to mind:

    - Using partial HTML rendering via AJAX: instead of rendering the page initially on the server, the page loads empty and the client renders the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical, and removes the need for the server to decide whether a given request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching is handled on the client in this case.

    - Using JSON data and client rendering: the hardcore client option is to do the whole UI SPA style, pull data from the server and render using templates or client-side data binding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA then caching may not even be required: the list could just stay in memory and be hidden and reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always trade-offs, and it's important to look at all angles before deciding on any sort of caching solution in general.

    State of the Session

    sessionStorage and localStorage are easy to use in client code and can be integrated even with server-centric applications to provide nice caching features for content and data.
    In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

    Resources

    - Overview of localStorage (also applies to sessionStorage)
    - Web Storage Compatibility
    - Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • ASP.NET Deployment under IIS7/VS2010 as Web Application

    - by adchased
    I converted my VS2008 ASP.NET Web Site to a "Web Application" today using VS2010, so now it's possible to build a deployment package: a zip package which can be directly imported into IIS7. Usually I added a website in IIS7 called mydomain.com and put everything in its root directory. That worked. However, since I converted to a Web Application, the application is added beneath my "website container". Now I'm confused; this is how it actually looks when I try to open the website:

    - Browsing to mydomain.com gives a 404 error.
    - Browsing to mydomain.com/mydomain.com opens the actual website, but in a subfolder instead of the root directory (the application is named after the domain).

    How do I make this application the root of the website? I want the application to run under the mydomain.com ROOT and not in a subfolder. Thanks a lot!

    Read the article

  • Are fopen/fread/fgets PID-safe in C?

    - by Jane
    Various users are browsing through a website 100% programmed in C (CGI). Each webpage uses fopen/fgets/fread to read common data (like navigation bars) from files. Would calls to fopen/fgets/fread interfere with each other if various people are browsing the same page? If so, how can this be solved in C? (This is a Linux server, compiling is done with gcc, and this is for a CGI website programmed in C.) Example:

        FILE *DATAFILE = fopen(PATH, "r");
        if ( DATAFILE != NULL ) {
            while ( fgets( LINE, BUFFER, DATAFILE ) ) {
                /* do something */
            }
        }

    Read the article

  • Determine if on product page programmatically in Magento

    - by dfondente
    I want to insert tracking codes on all of the pages of a Magento site, and need to use a different syntax depending on whether the page is a CMS page, a category browsing page, or a product view page. I have a custom module set up with a block that inserts a generic tracking code on each page for now. From within the block, how can I distinguish between CMS pages, category pages, and product pages? I started with:

        Mage::app()->getRequest();

    I can see that Mage::app()->getRequest()->getParam('id') returns the product or category ID on product and category pages, but doesn't distinguish between those page types. Mage::app()->getRequest()->getRouteName() returns "cms" for CMS pages, but returns "catalog" for both category browsing and product view pages, so I can't use that to tell category and product pages apart. Is there some indicator in the request I can use safely? Or is there a better way to accomplish my goal of different tracking codes for different page types?

    Read the article

  • Context aware breadcrumbs with PHP sessions - Will search engines index each variation?

    - by Haroldo
    Some pages on my website appear differently depending on where the user has been, using PHP sessions. For example, with breadcrumbs, the standard crumb setup is:

        All Books - fiction - Lord Of the Flies

    If the visitor has just been on the 'William Golding page', a session will have been created to say this visitor is browsing by author, so I would check:

        if( $_SESSION['browsing by'] == 'author' ):

    and the breadcrumbs (for the exact same page as before) would now be:

        Authors - William Golding - Lord Of the Flies

    To summarise: one page exists for each book, but depending on where the user has come from, the page will show different breadcrumbs. The questions:

    - Can search engines create my 'browsing by' SESSION?
    - Would they index the same page multiple times (for each variation)?

    Read the article

  • How to suppress javascript errors for sites I'm not developing?

    - by Simon_Weaver
    I like to keep JavaScript debugging enabled in my browser, so when I'm developing my own code I can instantly see when I've made an error. Of course this means I see errors on apple.com, microsoft.com, stackoverflow.com, cnn.com, facebook.com. It's quite fun sometimes to see just how much awful code there is out there being run by major sites, but sometimes it gets really annoying. I've wondered for YEARS how to change this but never really got around to it. It's particularly annoying today and I'd really like to know of any solutions. The only solution I have is to use a different browser for everyday browsing. I'm hoping there's some quick and easy plugin someone can direct me to that I can toggle on and off based upon the domain I'm on. Edit: I generally use IE7 for everyday browsing.
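
    One blunt option, assuming you can run a userscript or bookmarklet on the page: returning true from window.onerror marks an error as handled, which keeps the browser (including IE) from surfacing it. A sketch, with a hypothetical whitelist of your own domains:

        // Suppress script errors everywhere except whitelisted dev sites.
        var myDomains = ['mydevsite.local']; // hypothetical list
        window.onerror = function (msg, url, line) {
            var dev = false;
            for (var i = 0; i < myDomains.length; i++) {
                if (location.hostname.indexOf(myDomains[i]) !== -1) dev = true;
            }
            return !dev; // true = swallow the error, false = report as usual
        };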

    Read the article

  • What can cause forms authentication to forget you are logged in?

    - by metanaito
    I have an ASP.NET web site that uses forms authentication. When users check "remember me", it will remember them if they close the browser and come back. I have the timeout set for a day or so. Recently I've noticed that I can be browsing the site and suddenly it forgets I'm logged in. It is not a 20-minute idle timeout, because it happens while I am actively browsing. My question is: what can cause this? If the server recycles the app or app pool, will the login be lost? On my local machine I can restart and when I come back I'm still logged in.

    Read the article

  • How to get a warning message when clicking the close (X) button of a browser with tabs in Google Chrome

    - by Shantanu
    How do I get a warning message if I accidentally click the close button of a Chrome window that has multiple tabs open at the time? I am a regular Chrome user and keep having this problem. I normally open multiple tabs inside a single window, but sometimes I accidentally click the window's close button, and as soon as the button is clicked Chrome doesn't give any warning about the multiple active tabs; it just closes the entire window. If a user like me is browsing in a normal Chrome window, the websites can be opened again from the history, but in a private window there is nothing to be done (this happens to me very regularly, because I normally browse in a private window). By contrast, if you accidentally click the close button in Mozilla Firefox with multiple tabs open, it shows a warning message and asks what you want to do.
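
    For what it's worth, Chrome itself has no built-in prompt for plain windows, but any page can request one: registering a window.onbeforeunload handler makes the browser show a confirmation dialog before the tab or window closes. A minimal sketch (modern browsers display their own generic wording and ignore the custom message text):

        // Ask for confirmation before the page is unloaded or its window closed.
        window.onbeforeunload = function () {
            return 'You still have tabs with work in progress. Close anyway?';
        };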

    Read the article

  • Android EditText won't take up remaining space

    - by Jamie
    In my Android app, I have a tabbed Activity. In one of the tabs I have two TextViews and two EditTexts. The first EditText is only one line, and that's fine. However, I want the other EditText, android:id="@+id/paste_code", to take up the remaining space, but no matter what I do to it, it will only show one line. I don't want to manually set the number of lines, since the number that would fit on the screen differs based on your device. Here's the relevant code. It's nested inside all the necessary components for a tabbed Activity.

        <ScrollView android:id="@+id/basicTab"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <LinearLayout
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Paste title"
                    android:layout_weight="0" />
                <EditText
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:hint="@string/paste_title_hint"
                    android:id="@+id/paste_title"
                    android:lines="1"
                    android:gravity="top|left"
                    android:layout_weight="0" />
                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Paste text"
                    android:layout_weight="0" />
                <EditText
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:hint="@string/paste_hint"
                    android:id="@+id/paste_code"
                    android:gravity="top|left"
                    android:layout_weight="1" />
            </LinearLayout>
        </ScrollView>

    Read the article

  • How do you reenable a validation control w/o it simultaneously performing an immediate validation?

    - by Velika2
    When I call this function from client JavaScript to enable a validator:

        ValidatorEnable(document.getElementById('<%=valPassportOtherText.ClientID%>'), true); // enable validation control

    the required-field validation control immediately performs its validation, finds the value in the associated text box blank, and sets focus to the textbox (because SetFocusOnError was set to true). As a result, the side effect is that focus is shifted to the control associated with the validation control, in the example, txtSpecifyOccupation.

        <asp:TextBox ID="txtSpecifyOccupation" runat="server" AutoCompleteType="Disabled"
            CssClass="DefaultTextBox DefaultWidth" MaxLength="24" Rows="2"></asp:TextBox>
        <asp:RequiredFieldValidator ID="valSpecifyOccupation" runat="server"
            ControlToValidate="txtSpecifyOccupation"
            ErrorMessage="1.14b Please specify your &lt;b&gt;Occupation&lt;/b&gt;"
            SetFocusOnError="True">&nbsp;Required</asp:RequiredFieldValidator>

    Is there a way to enable the (required) validator without having it simultaneously perform the validation (at least until the user has tabbed off the control)? I'd like validation of the txtSpecifyOccupation textbox to occur only on a page submit or when the user has tabbed off the required txtSpecifyOccupation textbox.
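
    One workaround, sketched against the client-side helpers in ASP.NET's WebUIValidation.js (the enabled expando and ValidatorUpdateDisplay; verify the behavior on your framework version): skip ValidatorEnable, which always runs an immediate validation pass, and flip the flag the validation routines check instead. The validator then participates normally on the next submit or blur-triggered validation.

        // Enable the validator without triggering an immediate validation pass.
        var val = document.getElementById('<%=valPassportOtherText.ClientID%>');
        val.enabled = true; // nothing fires here, unlike ValidatorEnable(val, true)

        // To turn it back off later, also clear any message still showing:
        // val.enabled = false;
        // ValidatorUpdateDisplay(val);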

    Read the article

  • XML Pretty Printer Missing 2 Key Edge Cases

    - by viatropos
    Using this XSLT file found on this blog to pretty print XML using Nokogiri, everything almost works, but not to the point where I can use it for HTML. First, if a node is empty, it is turned into a self-closing node, so:

        <textarea></textarea>

    gets converted to:

        <textarea/>

    But that messes up the HTML tree when rendered. Second, if a node just has text, the text isn't tabbed, and neither is the closing node, so:

        <li>
          <label>some text</label>
        </li>

    becomes:

        <li>
          <label>some text
        </label>
        </li>

    ...but it would ideally be:

        <li>
          <label>
            some text
          </label>
        </li>

    Does anyone who's pro at XSLT know a quick fix for this?

    Read the article

  • JQuery plugin: catch events for clicking/tabbing into and out of an input box

    - by poswald
    I'm creating a JavaScript jQuery timepicker control plugin (which I hope to open source soon) and I would like some advice on how to best register the events in the cleanest way. The control attaches to an <input> box and provides a graphical way to enter times of day (14:25, 2:45 AM, etc.). It does this by adding a <div> after the input box. What I want is to bind an openControl() function that fires when the input is clicked or tabbed to, and a closeControl() function that fires when the input box is tabbed away from or deselected, but not if the control itself is clicked. That is, I don't want to close the control if you're clicking inside the control's <input> or the <div>. Here's what I have been doing to try to get there:

        /* Close the control attached to the passed inputNode */
        function closeContainer(inputNode, options) {
            $input = $(inputNode);
            if ( $input.next().is(':visible') ) {
                $input.next().hide(options.hideAnim, options.hideOptions,
                    options.hideDuration, options.onHide );
            }
        }

        /* Open the control */
        function openContainer(node, options) {
            $input = $(node);
            $input.next().show(options.showAnim, options.showOptions,
                options.showDuration, options.onShow );

            // bind a click handler for closing the control
            $("body").bind('click', function (e) {
                $('.time-control').each( function () {
                    $input = $(this).prev();
                    // only close if click is outside of the control or the input box
                    if (jQuery.contains(this, e.target) || ($input.get(0) === e.target) ) {
                        closeContainer($input, options);
                        setTime($input, $input.next(), options);
                    } else {
                        closeContainer($input, options);
                    }
                });
            });
        }

    I want to add support for tabbing in/out, but I feel like this approach is wrong. Focus/blur wasn't working well, because the blur event fires if you click on the control. Should I be using those events but filtering out clicks that land inside the control's div? Anyone have a better way of doing this? Thanks!
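
    One pattern that may help, assuming a flag in the plugin's closure (the selectors here are placeholders for whatever the plugin binds to): mousedown fires before the input's blur, so a click inside the control's <div> can be detected and the blur-driven close skipped.

        var clickingPicker = false;

        // mousedown on the picker <div> fires before the input's blur
        $('.time-control').bind('mousedown', function () {
            clickingPicker = true;
        });

        $('#timeInput').bind('focus', function () {
            openContainer(this, options); // options from the plugin's scope
        });

        $('#timeInput').bind('blur', function () {
            var input = this;
            setTimeout(function () { // let the click (if any) land first
                if (clickingPicker) {
                    clickingPicker = false; // blur came from a click inside the control
                } else {
                    closeContainer(input, options); // real tab-out/deselect: close
                }
            }, 0);
        });

    Focus covers both clicking and tabbing into the input, so openControl/closeControl can hang off these two events alone.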

    Read the article

  • tabs using jquery

    - by sea_1987
    I currently have a tabbed system in place; however, it is not doing exactly what I need it to. I was hoping that by navigating to the URL with #tab2 suffixed on the end, it would navigate to my tabbed page and the tab named in the URL would be the one that is active. However, the first tab in the sequence is always active. Is there a way to check what is being passed in the URL first, and if a #tabid is present, make that tab the current tab? My JavaScript currently looks like this:

        $(".tab_content").hide(); // Hide all content
        $("ul.tabNavigation li.shortlist").addClass("active").show(); // Activate first tab (in this case second because of floats)
        $(".tab_content#shortlist").show(); // Show first tab content

        // On Click Event
        $("ul.tabNavigation li").click(function() {
            $("ul.tabNavigation li").removeClass("active"); // Remove any "active" class
            $(this).addClass("active"); // Add "active" class to selected tab
            $(".tab_content").hide(); // Hide all tab content

            var activeTab = $(this).find("a").attr("href"); // Find the href attribute value to identify the active tab + content
            $(activeTab).fadeIn(); // Fade in the active ID content
            return false;
        });
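
    A hedged sketch of the missing piece: before falling back to the default tab, check window.location.hash and, if it names a tab pane, activate that pair instead. This assumes, as in the code above, that each tab link's href matches its content pane's id:

        var hash = window.location.hash; // e.g. "#tab2"
        var $link = hash ? $('ul.tabNavigation li a[href="' + hash + '"]') : $();

        $(".tab_content").hide();
        if ($link.length) {
            $("ul.tabNavigation li").removeClass("active");
            $link.parent().addClass("active");
            $(hash).show(); // show the pane named in the URL
        } else {
            $("ul.tabNavigation li.shortlist").addClass("active").show();
            $(".tab_content#shortlist").show();
        }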

    Read the article

  • Twitter Bootstrap on page tabs: not hiding tab content

    - by user973424
    I'm trying to get Twitter Bootstrap's on-page tabbed content to work. I have the tabs working, with the active class switching around on the tabs. I've included jQuery and bootstrap-tabs.js, but with the following code I can't seem to get the tabbed content to hide and display as it should. Any help on what may be a simple fix would be appreciated.

        <div class="span8">
            <ul class="tabs" data-tabs="tabs">
                <li class="active"><a href="#2009">2009</a></li>
                <li><a href="#2010">2010</a></li>
                <li><a href="#2011">2011</a></li>
            </ul>
            <div class="pill-content">
                <div class="active" id="2009">
                    2009 copy
                </div>
                <div id="2010">
                    2010 copy
                </div>
                <div id="2011">
                    2011 copy
                </div>
            </div>
            <script>
                $(function () {
                    $('.tabs').tabs()
                })
            </script>
        </div><!-- end span 8 -->
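
    If the mismatch between the .tabs nav and the .pill-content panes is what's tripping up bootstrap-tabs.js (the plugin's CSS pairs tab navs with tab-content panes and pill navs with pill-content panes), a hedged fallback is to wire the same behavior directly in jQuery against the markup above, sidestepping the plugin entirely:

        // Plain-jQuery tab switching for the markup above (no bootstrap-tabs.js).
        $(function () {
            $('.pill-content > div').not('.active').hide(); // hide inactive panes up front
            $('.tabs a').click(function (e) {
                e.preventDefault();
                $('.tabs li').removeClass('active');
                $(this).parent().addClass('active');
                $('.pill-content > div').hide();
                $($(this).attr('href')).show(); // href="#2009" doubles as the pane selector
            });
        });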

    Read the article

  • Can not access network computers anymore

    - by Johny Skovdal
    Last Thursday (03/05/12) I got a new computer to be able to work from home. I plugged it, by cable, into the company network and installed most of the software needed by accessing a share on my stationary computer at work. I had no issues here whatsoever, and everything just worked.

    Yesterday evening I tried accessing the company network through Windows VPN, and while I was able to connect to the network, I was unable to connect to any computers on the network. I did get an error when connecting, but I can't seem to reproduce it to get the details of the error message.

    Today I am sitting on the company network again, and now I cannot access anything on the network like I could last Thursday, though I can ping all the computers I am attempting to access. Here is a list of details that might help in troubleshooting this issue (updated):

    List of observations / actions

    - My computer is identical to another computer that has no issues. It is not on the domain but rather on the default workgroup, but this was not an issue last Thursday, so I am assuming it still is not.
    - I am able to access my e-mail on the Exchange server.
    - I can connect to our TFS server from Visual Studio, but not from Explorer.
    - I can also connect to database servers and Remote Desktop.
    - I can see several computers when browsing network computers, but I am unable to connect to any of them.
    - When trying to connect to a computer I am consistently met with error code 0x80070035 (network path not found). I also get the 0x80070035 error when double-clicking the target computer in the Network UI.
    - I am not met with a login dialog when trying to access a computer, as I should be, since I am not on the domain. (I did log in to Exchange, Remote Desktop and TFS, though.)
    - Between Thursday, when it worked, and Sunday evening, when it did not, I installed quite a few security updates, plus various tools etc. that I need for programming.
    - I have tried accessing by computer name and by IP, and neither works. I can ping by computer name.
    - I have deleted all (1 entry) stored network credentials.
    - I am able to access my computer from the target computer.
    - Client and server can see each other on the network, so Network Discovery is enabled.
    - I am using the network profile "Work".
    - When accessing the network through VPN, I am unable to get anything to work using computer names, but all of the above applies when using IP addresses instead.
    - I run Windows 7 Home Premium on my computer.

    Using PowerShell, attempting to access a share I get the following error (ComputerName and ShareName being correct values, of course):

        PS C:\Users\MyUser> cd \\ComputerName\ShareName
        Set-Location : Cannot find path '\\ComputerName\ShareName' because it does not exist.
        At line:1 char:3
        + cd <<<< \\ComputerName\ShareName
            + CategoryInfo          : ObjectNotFound: (\\ComputerName\ShareName:String) [Set-Location], ItemNotFoundException
            + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand

    However, pinging the same machine (ping ComputerName) from PowerShell gets a response immediately.
    (As mentioned in the list of observations/actions, I tried the above with the IP address again on VPN, with the same result.)

    Conclusion

    So to sum up, pretty much the only thing I cannot do is access the other computers by browsing (explorer.exe, PowerShell, mapping a network drive, etc.). It comes down to the path failing to resolve when connecting to other computers through browsing, even though the path resolves perfectly through all kinds of other services. Any recommendations as to what I can try next to resolve the issue? :)

    Read the article
