Search Results

Search found 1776 results on 72 pages for 'cached'.


  • JS and CSS caching issue: possibly .htaccess related

    - by adamturtle
    I've been using the HTML5 Boilerplate for some web projects for a while now and have noticed the following issue cropping up on some sites. My CSS and JS files, when loaded by the browser, are being renamed to things like: ce.52b8fd529e8142bdb6c4f9e7f55aaec0.modernizr-1,o7,omin,l.js …in the case of modernizr-1.7.min.js The pattern always seems to add ce. or cc. in front of the filename. I'm not sure what's causing this, and it's frustrating since when I make updates to those files, the same old cached file is being loaded. I have to explicitly call modernizr-1.7.min.js?v=2 or something similar to get it to re-cache. I'd like to scrap it altogether but it still happens even when .htaccess is empty. Any ideas? Is anyone else experiencing this issue?
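
    The ce./cc. prefixes and the comma-encoded filename look a lot like URLs rewritten by Google's mod_pagespeed module (ce = cache extend, cc = combine CSS), which would also explain why an empty .htaccess changes nothing: the module is typically enabled at the server or host level. If that turns out to be the case, a possible .htaccess sketch, assuming the host actually runs mod_pagespeed and permits per-directory overrides:

        # turn the module off for this site entirely
        ModPagespeed off
        # or, less drastically, disable only the rewriting filters in question
        ModPagespeedDisableFilters extend_cache,combine_css,combine_javascript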

    Read the article

  • RAM Cache always full

    - by Tobias
    I have 6GB of RAM and an i5 2.4GHz processor running Ubuntu 11.10. When streaming online or opening several tabs in Chromium I soon have 4GB of memory in the cache, and this makes my notebook slow. When streaming a video, after a few minutes it really slows down and stutters and jerks. I partitioned my HD so that I have 8GB of swap. What could the problem be? How can I solve this? P.S.: I initially had 4GB and recently upgraded to 6GB, but I did not notice a significant change. P.P.S.: if I enter "free -g" in the terminal this is the result:

                     total       used       free     shared    buffers     cached
        Mem:             5          2          3          0          0          0
        -/+ buffers/cache:          1          4
        Swap:            8          0          8
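
    Worth noting: memory that free reports as "cached" is the kernel's page cache, which is reclaimed automatically when applications need it, so a mostly full cache is normal and rarely a cause of slowdowns by itself. To rule it out anyway, a small hedged sketch for inspecting and temporarily dropping the cache (safe, but it simply refills as files are read again):

        # the "-/+ buffers/cache" row shows memory with the cache counted as available
        free -m
        # flush filesystem buffers, then drop the clean page cache once, as root
        sync
        echo 3 | sudo tee /proc/sys/vm/drop_caches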

    Read the article

  • SQL Azure Data Sync

    - by kaleidoscope
    The Microsoft Sync Framework Power Pack for SQL Azure contains a series of components that improve the experience of synchronizing with SQL Azure, including runtime components that optimize performance and simplify the process of synchronizing with the cloud. SQL Azure Data Sync allows developers and DBAs to:
    · Link existing on-premises data stores to SQL Azure.
    · Create new applications in Windows Azure without abandoning existing on-premises applications.
    · Extend on-premises data to remote offices, retail stores and mobile workers via the cloud.
    · Take Windows Azure and SQL Azure based web applications offline to provide an "Outlook-like" cached-mode experience.
    The Microsoft Sync Framework Power Pack for SQL Azure is comprised of the following:
    · SqlAzureSyncProvider
    · SQL Azure Offline Visual Studio Plug-In
    · SQL Azure Data Sync Tool for SQL Server
    · New SQL Azure Events Automated Provisioning
    Geeta

    Read the article

  • Laptop freezes on boot, not sure where to start

    - by J. Pablo Fernández
    I have an Ubuntu laptop that stops responding while booting up. It will switch between consoles with Ctrl-Alt-Fn, but pressing Enter in a console does not even produce a blank line. The last line printed when booting in recovery mode was "Skipping EDID probe due to cached edid". Any ideas what might be wrong? Pressing Ctrl-Alt-Del successfully reboots it when it is in that state. Another symptom is that GRUB stopped booting automatically; I'm not sure if that is related (I doubt it).

    Read the article

  • ubuntu is very slow

    - by johnny smithens
    Hello all. I am new to Ubuntu, and it is very slow (even in Ubuntu 2D). Performance is degraded for almost any task. I just reinstalled with amd64 and tried updating the Nvidia drivers with Nvidia X Server, but it made no difference. This is the output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:          3006       1318       1688          0         61        699
        -/+ buffers/cache:        556       2449
        Swap:         3064          0       3064

    tl;dr - total: 3006, used: 1318. When I switch to the virtual console with Ctrl+Alt+F2, I constantly see "Asking for cache data failed" and "Assuming drive cache: write through". It is very frustrating. Thanks in advance!
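
    Those two console messages usually come from the kernel probing a drive's cache mode and are often harmless, but persistent sluggishness combined with repeated cache-probe failures can also point at a struggling disk. A hedged diagnostic sketch; the device name /dev/sda is an assumption, adjust it to the actual disk:

        # recent kernel messages mentioning the disk or I/O errors
        dmesg | grep -iE "error|fail|ata|sd[a-z]" | tail -n 40
        # SMART health summary (needs the smartmontools package)
        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda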

    Read the article

  • Check connection and reconnect wifi

    - by Ruud
    I'm building a wireless photo frame. The one thing I haven't been able to figure out is how to bring my wifi connection back up using a recommended method. Right now I have edited /etc/network/interfaces so that wlan0 is started at boot:

        auto wlan0
        iface wlan0 inet dhcp
        wireless-essid ourssid

    This works fine for booting. But I found that if I do not check the connection for a long time (it could be a week) it might be down, so I should reconnect. What I do now to verify that the connection is working is download a file from the server that can't be cached (http://server.ext/ping.php?randomize=123456). If I fail to retrieve the file, I assume that the connection is no longer working and I run a shell script like:

        #!/bin/bash
        ifconfig wlan0 up
        iwconfig wlan0 essid "ourssid"
        dhclient wlan0

    And the connection comes back. But I can't find anything on whether this is a good method. Can this be improved upon, or is this already right?
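
    One common pattern, offered here only as a sketch, is to wrap roughly the same commands in a small watchdog that reconnects only when connectivity is actually gone, and run it from cron every few minutes. The interface name, ESSID and test host below are taken from the question and are otherwise assumptions:

        #!/bin/bash
        # /usr/local/bin/wifi-watchdog.sh -- hypothetical watchdog sketch
        # Reconnect wlan0 only if two pings to the test host fail.
        if ! ping -c 2 -W 5 server.ext > /dev/null 2>&1; then
            ifconfig wlan0 up
            iwconfig wlan0 essid "ourssid"
            dhclient wlan0
        fi

    A matching crontab entry to run it every five minutes might look like:

        */5 * * * * /usr/local/bin/wifi-watchdog.sh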

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about, and trying out, ideas and input from several others.

    Must-haves

    Aliases:

        [alias]
            ci = commit
            st = status
            co = checkout
            oneline = log --pretty=oneline
            br = branch
            la = log --pretty="format:%ad %h (%an): %s" --date=short
            df = diff
            dc = diff --cached
            lg = log -p
            lol = log --graph --decorate --pretty=oneline --abbrev-commit
            lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
            ls = ls-files
            ign = ls-files -o -i --exclude-standard

    Colors:

        [color]
            ui = auto
        [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
        [color "diff"]
            meta = yellow bold
            frag = magenta bold
            old = red bold
            new = green bold
            whitespace = red reverse
        [color "status"]
            added = green
            changed = red
            untracked = cyan

    Core:

        [core]
            autocrlf = true
            excludesfile = c:/Users/<user>/.gitignore
            editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and diff:

        [merge]
            tool = kdiff3
        [mergetool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [mergetool "p4merge"]
            path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
            cmd = p4merge "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
            keepTemporaries = false
            trustExitCode = false
            keepBackup = false
        [diff]
            guitool = kdiff3
        [difftool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [difftool "p4merge"]
            path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
            cmd = "p4merge.exe $LOCAL $REMOTE"
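
    For anyone who would rather not edit the file by hand, the same settings can be written with the git config command; a short illustrative example using two of the aliases above:

        # add a couple of the aliases to the global ~/.gitconfig
        git config --global alias.st status
        git config --global alias.dc "diff --cached"
        # confirm what was written
        git config --global --list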

    Read the article

  • 64 bit Ubuntu sees half my RAM

    - by koehn
    This is on my AMD FX(tm)-4100 Quad-Core Processor (according to /proc/cpuinfo) on a machine with two 4GB RAM DIMMs. The BIOS shows 8GB of RAM installed. Any help would be appreciated.
    RAM: Extreme Performance Sector 5 G Series 8GB DDR3-1333 (PC3-1066) Enhanced Latency Dual Channel Desktop Memory Kit (two 4GB memory modules)
    MB: GA-78LMT-S2P Socket AM3+ 760G mATX AMD Motherboard
    CPU: FX 4100 Black Edition 3.6GHz Quad-Core Socket AM3+ Boxed Processor
    Here's what the software says:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3515100    3293656     221444          0      19260    2670352
        -/+ buffers/cache:     604044    2911056
        Swap:      3650556      90916    3559640

        $ uname -a
        Linux mythbuntu 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    From lshw:

        *-memory
             description: System Memory
             physical id: 20
             slot: System board or motherboard
             size: 4GiB
           *-bank:0
                description: DIMM 1066 MHz (0.9 ns)
                product: None
                vendor: None
                physical id: 0
                serial: None
                slot: A0
                size: 2GiB
                width: 64 bits
                clock: 1066MHz (0.9ns)
           *-bank:1
                description: DIMM 1066 MHz (0.9 ns)
                product: None
                vendor: None
                physical id: 1
                serial: None
                slot: A1
                size: 2GiB
                width: 64 bits
                clock: 1066MHz (0.9ns)
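
    A quick way to narrow this down, offered as a hedged sketch, is to compare what the firmware reports for the DIMMs against what the kernel was actually handed at boot; mismatches often point at a BIOS memory-remapping setting or a module that is not being detected at its full size:

        # what the DIMMs report to the firmware (run as root)
        sudo dmidecode --type memory | grep -iE "size|locator"
        # what the kernel saw at boot
        dmesg | grep -i "Memory:"
        # the total the running kernel manages
        grep MemTotal /proc/meminfo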

    Read the article

  • Do CDNs work with POST operations?

    - by iddqd
    I'm using a CDN (Level3) for the first time and I'm a bit confused. I'm accessing dynamic URLs such as http://cdn.mysite.com?getItem=1234 that return text data. Do CDNs work with HTTP POST operations? When I issue an HTTP POST, my "real" server receives the request every time, so I'm wondering if the CDN has a problem with POST operations. With HTTP GET it seems to work: I call the URL once (from my application) and I can see my server receiving the request; if I call it a second time, the CDN delivers it directly and my server doesn't get anything. However, if I open the same link manually from a second browser tab, my server is asked to deliver it again; shouldn't it be cached by now? Many thanks.
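
    A quick check worth sketching, with the URL taken from the question: dumping the response headers shows whether the origin is sending cache-friendly headers at all (Cache-Control, Expires, Vary) and whether the CDN adds any hit/miss indicator, which often explains why different clients trigger fresh origin fetches. POST responses are generally not cached by CDNs, so hitting the origin on every POST is expected.

        # print only the response headers for the GET resource
        curl -s -D - -o /dev/null "http://cdn.mysite.com?getItem=1234"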

    Read the article

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up with a new web host. I have yet to move my sites over. Upon the initial SSH connection, I checked the load and memory usage, and they do seem rather higher than I would like:

        # uptime
        12:06:51 up 71 days, 23:23, 1 user, load average: 9.02, 9.49, 9.45
        # free
                     total       used       free     shared    buffers     cached
        Mem:      33014800   31927192    1087608          0    2384812   17729816
        -/+ buffers/cache:   11812564   21202236
        Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:

        # uname -a
        Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
        # which yum
        /usr/bin/yum
        # which apt-get
        #

    That looks like CentOS / RHEL 6.2, possibly?
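
    On shared hosting the load average reflects every tenant on the machine, not just one account, so it is more meaningful alongside the core count and a process snapshot. A small hedged sketch:

        # number of cores the load average is spread across
        nproc
        # one non-interactive snapshot of the busiest processes
        top -b -n 1 | head -n 20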

    Read the article

  • How can I prevent Google mistakenly offering to translate a page?

    - by DisgruntledGoat
    Several of my site's pages are appearing in search results with [Translate this page] next to them. When I click that, it takes me to Google Translate and translates my page "from Catalan to English". The pages are in English but contain a couple of foreign words (actually Japanese romanisations, not Catalan) that appear to be tripping Google up. A few weeks ago I set the html tag to <html lang="en">, which from my research appears to be the best method of specifying the language of a document. Google has cached the pages with this attribute but is still offering to translate them. More research led me to a "notranslate" attribute which prevents translation entirely: <html lang="en" class="notranslate">. The problem now is that users cannot translate from English to their desired language! Are there any other solutions that force Google to parse my site as English only?
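
    Two narrower options worth sketching, based on Google's documented notranslate support (the markup below is illustrative, and the Japanese word is a placeholder): a page-level meta tag instead of the class on <html>, or scoping notranslate and a lang attribute to just the offending words so readers can still translate the rest of the page.

        <!-- page-level: ask Google not to offer translation for this page -->
        <meta name="google" content="notranslate">

        <!-- word-level: exempt only the romanised Japanese terms -->
        <p>The move is called <span class="notranslate" lang="ja">tsukkomi</span>.</p>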

    Read the article

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, it should just return a 404. If the user requests an unexpired item, the CDN should serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?
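
    What is described here is usually called an origin-pull (custom origin) CDN: on a cache miss the CDN fetches from the application server and keeps the response for as long as the origin's caching headers allow, and CloudFront's custom-origin mode works this way. A hedged Apache sketch for the origin side, assuming mod_expires is available; the content types and lifetimes are placeholders:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 7 days"
            ExpiresByType text/css  "access plus 1 day"
        </IfModule>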

    Read the article

  • cowbuilder --create --distribution lucid fails

    - by Daenyth
    I'm trying to create a build environment for Lucid, and calling cowbuilder --create --distribution lucid fails with the messages below:

        Get:1 http://us-east-1.ec2.archive.ubuntu.com lucid Release.gpg [189B]
        Hit http://us-east-1.ec2.archive.ubuntu.com lucid Release
        Hit http://us-east-1.ec2.archive.ubuntu.com lucid/main Packages
        Fetched 189B in 0s (2376B/s)
        Reading package lists...
        I: Obtaining the cached apt archive contents
        Reading package lists...
        Building dependency tree...
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Reading package lists...
        Building dependency tree...
        apt is already the newest version.
        Package cowdancer is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package cowdancer has no installation candidate
        I: unmounting dev/pts filesystem
        I: unmounting proc filesystem
        pbuilder create failed
        forking: rm -rf /opt/cowbuilder
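
    One possible explanation, offered strictly as a guess: the log shows apt reading only lucid/main, and cowdancer is shipped in Ubuntu's universe component, so the chroot being created may not have universe enabled. A hedged invocation sketch; the options are standard pbuilder/cowbuilder options, but check the man page for the installed version:

        # recreate the base chroot with the universe component enabled
        sudo cowbuilder --create --distribution lucid \
            --components "main universe" \
            --mirror http://archive.ubuntu.com/ubuntu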

    Read the article

  • Google Webmaster Tools shows invalid data

    - by Altar
    Webmaster Tools shows 1 URL error (not-found page). The report says that 5 pages are linking to a page (let's call it x) that does not exist (and because it doesn't exist it returns a soft 404). HOWEVER, I looked at those 5 pages (in the source code) and none of them links to the x page. It is as if Google sees an old version of those pages that was indeed pointing to x. What is the problem? How do I know if Google cached an old version of those 5 pages?
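
    At the time this was asked, the quickest way to check was Google's cache: search operator (since retired); entering it in the search box with the real URL substituted showed the stored snapshot of a page together with its retrieval date, which reveals whether Google is still working from an older version of the five linking pages:

        cache:www.example.com/one-of-the-five-linking-pages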

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools I'm trying to update the Content Keywords of my site, and I'm confused about the relationship between Submit to Index and Content Keywords. Does using Fetch as Google > Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls them for new content on its own? Does Submit to Index have anything to do with Content Keywords or with crawling new content, whether on a previously indexed page or a never-indexed page?

    Read the article

  • Integrated ads in phone apps - how to avoid wasting battery?

    - by Jarede
    Considering the PCWorld review that came out in March: Free Android Apps Packed with Ads are Major Battery Drains ...Researchers from Purdue University in collaboration with Microsoft claim that third-party advertising in free smartphone apps can be responsible for as much as 65 percent to 75 percent of an app's energy consumption... Is there a best practice for integrating advert support into mobile applications, so as not to drain the user's battery too much? ...When you fire up Angry Birds on your Android phone, the researchers found that the core gaming component only consumes about 18 percent of total app energy. The biggest battery suck comes from the software powering third-party ads and analytics, accounting for 45 percent of total app energy, according to the study... Has anyone found better ways of avoiding the "3G Tail", as the report puts it? Is it better/possible to download a large set of adverts that are cached for a few hours, and use them to populate your ad space, to avoid constant use of the Wi-Fi/3G radios? Are there any best practices for the inclusion of adverts in mobile apps?

    Read the article

  • How do Expires headers and cache manifest rules work together?

    - by Robert K
    I find the W3C's official Offline Web Applications specification to be rather vague about how the cache manifest interacts with headers such as ETag, Expires, or Pragma on cached assets. I know that the manifest should be checked with each request so that the browser knows when to check the other assets for updates. But because the specification doesn't define how the cache manifest interacts with normal cache instructions, I can't predict precisely how the browser will react. Will assets with a future expiration date be refreshed (no matter the cache headers) when the cache manifest is updated? Or, will those assets obey the normal caching rules? Which caching mechanism, HTTP cache versus cache manifest, will take precedence, and when?
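
    For readers unfamiliar with the mechanism being discussed, a minimal illustrative manifest (the file names are placeholders): browsers re-download the listed assets only after the manifest file itself changes byte for byte, which is why a version comment is conventionally bumped to force an update; how those re-fetches then interact with Expires and ETag headers is exactly the ambiguity the question raises.

        CACHE MANIFEST
        # v1.0.3 -- change this comment to make browsers refresh the cached assets

        CACHE:
        /css/site.css
        /js/app.js

        NETWORK:
        *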

    Read the article

  • Manually updating HTML5 local storage?

    - by hustlerinc
    I'm just starting out in HTML5 game development (and game dev in general) and, watching all the videos and tutorials available, something has crossed my mind. Everyone keeps saying I should set the cookies (or cached files) to expire after a certain amount of time, so that when that time is reached the browser automatically downloads all the assets again, even if they are the same assets. Wouldn't it be possible to manually define the version of the game? For example, the user has downloaded all the files for version 1.01 of the game; when updating, I change a simple variable to 1.02. When the user logs in, it would compare his version to the current one and download the files only if they are not equal. This could even be improved to download only specific files depending on what needs to be updated. Would this be possible or am I just dreaming? What are the possible downsides of this approach?
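
    The approach is workable; a minimal, hypothetical sketch of the version check described above using the browser's localStorage (the variable names and the re-download step are placeholders):

        var GAME_VERSION = '1.02';  // bumped by the developer on each release

        if (localStorage.getItem('gameVersion') !== GAME_VERSION) {
            // versions differ: re-fetch (or selectively re-fetch) the game assets here,
            // then record the version that was just installed
            localStorage.setItem('gameVersion', GAME_VERSION);
        }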

    Read the article

  • Why is facebook cache buggy?

    - by IAdapter
    I just started using Facebook and I see that many times when I add something to my profile and visit it later, it's not there. I bet the reason is that the page is cached and not updated very often. Is this on purpose or is it a bug? P.S. For example, I added the music I like and later saw that it had not been added, but the next day when I visited again it was there. I saw this in two web browsers, so it's a Facebook-side issue. Does it have something to do with scalability?

    Read the article

  • Wireless always disconnect

    - by Silas
    I upgraded from Ubuntu 10.10 to Ubuntu 12.04. I have an eMachines E625. With Ubuntu 12.04 my wireless disconnects roughly every 10 seconds; it didn't do that with 10.10. Here is my configuration:

        Linux sylvain-eMachines-E625 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:54:40 UTC 2012 i686 athlon i386 GNU/Linux

                     total       used       free     shared    buffers     cached
        Mem:          1759       1425        333          0         46        925
        -/+ buffers/cache:        452       1306
        Swap:         1788          0       1788

        02:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
        05:00.0 Ethernet controller: Atheros Communications Inc. AR8132 Fast Ethernet (rev c0)

    I would really appreciate an answer that solves this problem. Thank you in advance! Silas
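
    A note offered as a sketch rather than a confirmed fix: the BCM4312 LP-PHY chipset on 12.04 is normally driven either by the open b43 driver (which needs extra firmware) or by Broadcom's proprietary STA driver, and disconnect loops often come down to the wrong one being loaded. The package names below are the usual Ubuntu ones but may vary by release:

        # see which Broadcom-related modules are currently loaded
        lsmod | grep -E "b43|ssb|wl|brcm"
        # option A: firmware for the open b43 driver
        sudo apt-get install firmware-b43-installer
        # option B: Broadcom's proprietary STA driver (do not mix with option A)
        sudo apt-get install bcmwl-kernel-source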

    Read the article

  • Is it really a security problem to have non secure assets on an ssl page?

    - by blockhead
    My understanding is that this is just an example of being overly cautious: if my checkout form contains an insecure asset, that doesn't put anybody's credit card numbers at risk of being caught by a man-in-the-middle. I'm asking this because every once in a while, maybe because of cached content or whatnot, somebody writes in saying that they are seeing this "error" (even though there are no insecure assets on my page), and they want an explanation. So yes, I can tell them all about encryption and certificates and trust and men-in-the-middle. But what do I tell them about this? How do I convince them that the site is 100% safe (and if it isn't, let me know that I'm mistaken!)

    Read the article

  • How are hybrid VB6/.Net applications functioning in the Real World?

    - by Dabblernl
    I am maintaining a VB6 application and we are studying how to migrate to .NET. We are considering doing this gradually by implementing new features in COM-visible .NET classes and migrating existing functionality slowly. I found some instructive 'Hello World' examples of how to do this, and it works fine with our app. But how do these hybrid applications behave in the real world? Are they stable and maintainable? A particularity of our program is that several users on the same computer will use it by switching user accounts. EDIT: The VB6 app reads data from a USB connection and stores it in an Access database. The user can call up various views on the data. The data is cached in a hardware device, so interruptions in reading it are not fatal.

    Read the article

  • Disqus 2012 comments NOT being indexed by Google

    - by Buckers
    We run a high-traffic website at http://www.onedirection.net and we've been using Disqus throughout this year, initially to great effect. We accepted the upgrade to Disqus 2012 back in June, loving the improved user experience and the better community feel, albeit back inside an iframe again. The fact that we were specifically told the comments would now be indexed by Google was great, and the dynamic nature of the iframe suited our site (all our pages are cached, so with Disqus the comments are updated straight away). However, it seems that the Disqus 2012 comments are not being indexed, and we've noticed an obvious fall in traffic over the last few months. Initially we didn't put this down to Disqus and focused on other issues (Google algorithm updates etc.), but we're quickly coming around to the reasoning that our pages now contain less indexable text and we are getting less traffic because of it. We've tried emailing Disqus directly but they're very slow and don't seem keen to help. Any thoughts on this?

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one. This is needed to animate new or deleted rows. My view model contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of the application layer, but comparing two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to live in the application layer?

    Read the article

  • (Joomla 1.6) Template position descriptions don't refresh

    - by avanwieringen
    I want to change the description of a template position, so that when I go to Admin > Extensions > Module Manager I see a different description for that module position in the position list when I edit a module. For instance, for the template 'beez_20' I want to rename the description of the position 'debug', so I change the description (TPL_BEEZ_20_POSITION_DEBUG) in the language file 'languages\en-GB\en-GB.tpl_beez_20.sys.ini' to something different, say 'Abracadabra'. However, the change does not appear in the position list, and I can find no reference whatsoever to how or when the .ini files are read, or whether they are cached. Does anyone have a clue?

    Read the article
