Search Results

Search found 14653 results on 587 pages for 'disk cache'.

  • When the HDD becomes full, how do I create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have an Ubuntu Linux machine with an X GB hard disk. There is a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk at /dev/sda2, which is mounted at /hdd2. Is it possible to link /opt/software/data to /hdd2/software/data so that every file gets stored in /hdd2/software/data but can still be referred to from /opt/software/data? I can't reinstall the software that creates this data in order to change its default storage location.
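
    A minimal sketch of the usual move-and-symlink approach, assuming /hdd2 is already mounted and that whatever writes to /opt/software/data can be stopped while the data is moved:

      sudo mkdir -p /hdd2/software
      sudo mv /opt/software/data /hdd2/software/data     # move the existing data onto the new disk
      sudo ln -s /hdd2/software/data /opt/software/data  # leave a symlink behind at the old path
      ls -l /opt/software/data                           # should now show the symlink and its target

    A bind mount (mount --bind /hdd2/software/data /opt/software/data, plus an /etc/fstab entry) is an alternative if the software refuses to follow symlinks.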

    Read the article

  • What is the best way to shut down a hard disk?

    - by Sunil
    Right now I'm using the hdparm command on Unix to shut down the hard disk, but there are a few issues with it: when the disk wakes back up it consumes a lot of power. Is there any other way to do this? Many times when I put my hard disk to sleep, I see a few bursts of activity at the beginning, and only after a while does it actually go to sleep. I think that's because of the journaling file system in Ubuntu (which I use). Has anybody encountered that? What would be the best Linux/Unix operating system (e.g. Ubuntu/CentOS/Red Hat) for extensive hard disk operations? I would highly appreciate it if you could share any problems you encountered while doing this.
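
    For reference, a minimal hdparm sketch (the device name /dev/sda is just an example; check yours first):

      sudo hdparm -y /dev/sda       # put the drive into standby (spin down) immediately
      sudo hdparm -S 120 /dev/sda   # spin down automatically after 10 minutes idle (120 * 5 s)
      sudo hdparm -B 127 /dev/sda   # cap the APM level at a value that still permits spin-down

    The write bursts before the drive finally sleeps are often journal commits or log writes; they are a property of the filesystem and logging setup rather than of the distribution itself.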

    Read the article

  • How can I run fsck on a disk image via Mac Terminal?

    - by mvizual
    I want to run fsck on a disk image before I use it to restore (replace) a corrupted volume. Using Terminal, what would be the proper command, syntax, and options for this operation? I've just recently become acquainted with Terminal and line commands, so syntax and specific options aren't part of my computing vocabulary. I'm using Terminal 2.1.2, bash, OS 10.6.8. Ultimately, I'm trying to restore an image to a secondary startup volume (external drive). The image is mounted on my desktop and I want to check it for errors before I use it. Disk Utility runs "repair disk" successfully but the integrity of the image is suspect.
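
    A rough sketch of one way to do this from Terminal on 10.6, assuming an HFS+ (Mac OS Extended) image; the image name and the /dev/disk2 device node are examples, so use whatever hdiutil actually prints:

      hdiutil attach -nomount ~/Desktop/backup.dmg   # attach without mounting; note the /dev/diskNsN lines it prints
      fsck_hfs -fy /dev/rdisk2s2                     # check and repair the HFS+ partition inside the image
      hdiutil detach /dev/disk2                      # detach the image when finished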

    Read the article

  • VMware Workstation 10 connected to a remote server (Debian, Windows XP guest) does not allow raw disk access or shared folders

    - by Alex
    The setup: Ubuntu with local VMware Workstation 10 (everything works locally), which connects (File > Connect to Server) to a Debian server running the same VMware Workstation 10 with a Windows XP guest. The Debian setup allows neither raw disk access nor shared folders (most of the options simply don't exist): there is no shared folder option and no physical disk option. I use the root user on this machine, and it is a default install. I've tried to add a shared folder from the command line and it does not work. How can I enable shared folders or raw disk access? I have also created a new Windows 8 64-bit VM from scratch, and I cannot use a physical HDD there either, nor is there a shared folder option. I suspect this is something about the security policy of the remote server.

    Read the article

  • Ubuntu: Resize the root LVM(2?) partition

    - by user12259
    I have an Ubuntu virtual machine running in VirtualBox 2.2.4, and I created it on an 8 GB virtual disk, which is too small. So I am trying to increase the size of the disk. So far, I have done this:
    - created a new, larger virtual disk
    - added the 2nd disk to the machine
    - used CloneZilla to clone the first disk onto the 2nd disk
    - removed the first disk
    - booted up off the 2nd (larger) disk
    But now I'm still stuck with an 8 GB partition on my new 100 GB virtual disk. What's the easiest path from here to having a 100 GB partition? :) I gather GPart can resize partitions, but it doesn't seem to support LVM2 partitions, which mine seems to be. Thanks, Alex
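
    A rough sketch of the LVM route, assuming the root logical volume sits in a volume group on the cloned disk; vg0/root and /dev/sda5 are placeholders, so check the real names with sudo pvs and sudo lvs first:

      # 1. grow the partition that holds the LVM physical volume (fdisk/parted or a GParted live CD)
      sudo pvresize /dev/sda5                     # tell LVM the physical volume is now bigger
      sudo lvextend -l +100%FREE /dev/vg0/root    # hand all of the new space to the root LV
      sudo resize2fs /dev/vg0/root                # grow the ext filesystem to fill the LV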

    Read the article

  • Rails question: find returns no result

    - by Small Wolf
    Hey guys! I have a question and I'd like someone to help me solve it. The log of the problem looks like this:

      >> Department.find(EmeReference.find(:all, :select => :ref_config_id, :conditions => "emergency_id = 1"))
      ActiveRecord::RecordNotFound: Couldn't find Department with ID=0
          from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1591:in `find_one'
          from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1574:in `find_from_ids_without_cache'
          from (__DELEGATION__):2:in `__send__'
          from (__DELEGATION__):2:in `find_from_ids_without_cache'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/abstract.rb:158:in `find_from_keys'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/primary_key.rb:31:in `miss'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/abstract.rb:66:in `hit_or_miss'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:17:in `call'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:17:in `fetch'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:29:in `get'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/abstract.rb:65:in `hit_or_miss'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/abstract.rb:18:in `perform'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/primary_key.rb:17:in `perform'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/query/abstract.rb:7:in `perform'
          from /usr/lib/ruby/gems/1.8/gems/nkallen-cache-money-0.2.5/lib/cash/finders.rb:29:in `find_from_ids'
          from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:616:in `find'
          from (irb):135

    That's the question! Thank you and best regards!

    Read the article

  • ASP.NET AppendHeader not working in ASP.NET MVC

    - by Chao
    I'm having problems getting AppendHeader to work properly if I am also using an authorize filter. I'm using an action filter for my AJAX actions that applies Expires, Last-Modified, Cache-Control and Pragma (though while testing I have tried including it in the action method itself, with no change in results). If I don't have an authorize filter, the headers work fine. Once I add the filter, the headers I tried to add get stripped. The headers I want to add:

      Response.AppendHeader("Expires", "Sun, 19 Nov 1978 05:00:00 GMT");
      Response.AppendHeader("Last-Modified", String.Format("{0:r}", DateTime.Now));
      Response.AppendHeader("Cache-Control", "no-store, no-cache, must-revalidate");
      Response.AppendHeader("Cache-Control", "post-check=0, pre-check=0");
      Response.AppendHeader("Pragma", "no-cache");

    An example of the headers from a correct page:

      Server               ASP.NET Development Server/9.0.0.0
      Date                 Mon, 14 Jun 2010 17:22:24 GMT
      X-AspNet-Version     2.0.50727
      X-AspNetMvc-Version  2.0
      Pragma               no-cache
      Expires              Sun, 19 Nov 1978 05:00:00 GMT
      Last-Modified        Mon, 14 Jun 2010 18:22:24 GMT
      Cache-Control        no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Content-Type         text/html; charset=utf-8
      Content-Length       352
      Connection           Close

    And from an incorrect page:

      Server               ASP.NET Development Server/9.0.0.0
      Date                 Mon, 14 Jun 2010 17:27:34 GMT
      X-AspNet-Version     2.0.50727
      X-AspNetMvc-Version  2.0
      Pragma               no-cache, no-cache
      Cache-Control        private, s-maxage=0
      Content-Type         text/html; charset=utf-8
      Content-Length       4937
      Connection           Close

    Read the article

  • [ask][php] Find whether a dynamically named cache file exists

    - by r4ccoon
    Hi. I am writing a cache module in PHP. It writes a cache file with $string + timestamp as the filename. I don't have a problem with writing the cache; the problem is that I do a foreach loop to find the cache file that I want. This is the logic I use for getting the cache:

      foreach ($filenames as $filename) {
          if (strstr($filename, $cachename)) { // if found
              if (check_timestamp($filename, time()))
                  display_cache($filename);
              break;
          }
      }

    But when it tries to find and read the cache, it slows the server down. Imagine that I have 10,000 cache files in a folder, and I need to check every file in that cache folder. So what do you think is the best way of doing this? To explain again (because even I don't fully understand my own question :D): I write cache files in the format filename_timestamp, e.g. cache_function_random_news_191982899010, in the folder ./cache/. When I want to get the cache, I only pass "cache_function_random_news_" and scan that folder. If I find a file name containing that needle, I display it and break. But scanning 10,000 files in a folder is not a good thing, yeah? Please give me your opinion. Thanks.

    Read the article

  • Does anyone know why apt-cacher-ng always downloads the index file (Packages.gz) even though it exists in apt-cacher-ng's cache?

    - by soekarmana
    I just updated from 11.04 to 12.04 (fresh install), installed apt-cacher-ng, and noticed something strange about it: it always downloads the index file (Packages.gz) even though the file already exists in apt-cacher-ng's cache. This is what exactly happened:
    On Ubuntu 10.10 & 11.04: apt-cacher-ng installed & configured on my laptop, then I reload & install some packages. After that I configure my friend's laptop with the apt-cacher-ng proxy (192.168.1.1:3142); reloading the repository was blazingly fast, finished in a second without using my Internet connection (checked in System Monitor, total received just 15 kB).
    On Ubuntu 11.10 & 12.04: apt-cacher-ng installed & configured on my laptop, then I reload & install some packages. After that I configure my friend's laptop with the apt-cacher-ng proxy (192.168.1.1:3142); reloading the repository was really slow! apt-cacher-ng re-downloads the index file from the Internet.
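
    For reference, a minimal sketch of the client-side proxy setup described above (the file name 01proxy is only a convention; the address is the cache host from the question):

      echo 'Acquire::http::Proxy "http://192.168.1.1:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy
      sudo apt-get update    # index files should now be fetched through apt-cacher-ng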

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) make most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern, this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4; I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:
    - Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.
    - JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.
    - LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.
    - RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.
    - RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.
    - RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.
    I'm primarily considering either LVM or a JBOD span, simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?
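
    Since LVM over mixed-size disks is the front-runner here, a minimal sketch of what that looks like (device names are examples only, and these commands destroy whatever is on those disks):

      sudo pvcreate /dev/sdb /dev/sdc /dev/sdd           # mark each data disk as an LVM physical volume
      sudo vgcreate data_vg /dev/sdb /dev/sdc /dev/sdd   # pool them into one volume group
      sudo lvcreate -n data_lv -l 100%FREE data_vg       # one logical volume spanning all the disks
      sudo mkfs.ext4 /dev/data_vg/data_lv                # ext4, as planned in the question
      sudo mount /dev/data_vg/data_lv /srv/storage

    Disks of different sizes simply contribute whatever capacity they have, and vgextend can add more drives later.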

    Read the article

  • Linux 3.6 released as stable: hybrid sleep, TCP Fast Open, VFIO, Btrfs improvements and removal of the IPv4 route cache

    Linux 3.6 released as stable: addition of hybrid sleep, TCP Fast Open, VFIO, Btrfs improvements and removal of the IPv4 route cache. Linus Torvalds has just announced the release of the stable 3.6 version of the Linux kernel. The headline feature of this release is the introduction of a hybrid sleep mode, long supported by Windows and Mac OS X. The Suspend to Both option (combined sleep and hibernation) suspends the computer's activity while saving the contents of RAM to the hard disk (suspend-to-disk) and then keeping a copy of the system state in memory (suspend-to-RAM). The great advantage of combining these two techniques is that they allow the ret...

    Read the article

  • The Jelix PHP framework is available in version 1.4: PSR0 compatibility, virtual templates and HTTP cache handling in the spotlight

    Jelix 1.4 is available! PSR0 compatibility, virtual templates and HTTP cache handling headline this release of the PHP framework. In all the excitement around PHP framework updates, one could almost have missed the release of Jelix. Jelix is and remains one of the best existing PHP frameworks, thanks to a design that is often ahead of other tools. I am thinking of the modularity and the event handling that have been part of Jelix for many years and that are only just appearing in some so-called major frameworks. A new major version of Jelix ...

    Read the article

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an Ubuntu 8.04 server instance with an 8 GB virtual disk on VMware 1.0.9. For disk maintenance I made a copy of the virtual disk (by copying the 2 vmdk files of sda from the stopped VM on the host) and added it to the original VM. Now this VM has its original virtual disk sda plus a 1:1 copy (sdd). (There are 2 additional disks, sdb and sdc, which I ignore.) I would expect sdd not to be mounted when I start the VM. So I try to do an ext2 fsck on sdd from the running VM, but fsck reports that the device is busy:

      $ sudo fsck.ext3 -b 8193 /dev/sdd
      e2fsck 1.40.8 (13-Mar-2008)
      fsck.ext3: Device or resource busy while trying to open /dev/sdd
      Filesystem mounted or opened exclusively by another program?

    The "mount" command does not tell me sdd is mounted:

      $ sudo mount
      /dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro)
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      /sys on /sys type sysfs (rw,noexec,nosuid,nodev)
      varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)
      varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
      udev on /dev type tmpfs (rw,mode=0755)
      devshm on /dev/shm type tmpfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      /dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro)
      /dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro)
      securityfs on /sys/kernel/security type securityfs (rw)

    When I ignore the warning and continue the fsck, it reports many errors. How do I get this under control? Is there a better way to figure out whether sdd is mounted? Or why is it "busy"? How do I unmount it then? How do I prevent Ubuntu from mounting it automatically? Or is there something else I am missing? Also, from /var/log/syslog I cannot see that it is mounted; this is the last part of the startup sequence:

      kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF]
      kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line)
      kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3
      kernel: [ 14.639366] udev: renamed network interface eth0 to eth1
      kernel: [ 14.670187] eth1: link up
      kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/
      kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI
      kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
      kernel: [ 19.425637] NET: Registered protocol family 10
      kernel: [ 19.437550] lo: Disabled Privacy Extensions
      kernel: [ 24.328857] loop: module loaded
      kernel: [ 24.449293] lp0: using parport0 (interrupt-driven).
      kernel: [ 26.075499] EXT3 FS on sda1, internal journal
      kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds
      kernel: [ 28.381706] EXT3 FS on sdc1, internal journal
      kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode.
      kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds
      kernel: [ 28.445436] EXT3 FS on sdb1, internal journal
      kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode.
      kernel: [ 31.309766] eth1: no IPv6 routers present
      kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team
      mysqld_safe[4367]: started
      mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914
      mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections.
      mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3
      /etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary.
      /etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql
      /etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck
      /etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u
      /etc/mysql/debian-start[4436]: Checking for insecure root accounts.
      /etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.
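
    A few commands that help track down why a block device is reported busy (a sketch; the device-mapper and RAID checks only matter if the copied disk carries LVM or md signatures that the kernel picked up at boot):

      cat /proc/mounts            # the kernel's own view of what is mounted, independent of /etc/mtab
      sudo fuser -vm /dev/sdd     # list processes holding the device or a filesystem on it
      sudo lsof /dev/sdd          # same idea, via open file handles
      sudo dmsetup ls             # device-mapper targets (LVM etc.) that may have claimed the disk
      cat /proc/mdstat            # software RAID arrays that were auto-assembled at boot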

    Read the article

  • Optimizing Haskell code

    - by Masse
    I'm trying to learn Haskell, and after an article on reddit about Markov text chains I decided to implement Markov text generation, first in Python and now in Haskell. However, I noticed that my Python implementation is way faster than the Haskell version, even though Haskell is compiled to native code. I am wondering what I should do to make the Haskell code run faster. For now I believe it's so much slower because of using Data.Map instead of hashmaps, but I'm not sure. I'll post the Python code and the Haskell as well. With the same data, Python takes around 3 seconds and Haskell is closer to 16 seconds. It goes without saying that I'll take any constructive criticism :).

      import random
      import re
      import cPickle

      class Markov:
          def __init__(self, filenames):
              self.filenames = filenames
              self.cache = self.train(self.readfiles())
              picklefd = open("dump", "w")
              cPickle.dump(self.cache, picklefd)
              picklefd.close()

          def train(self, text):
              splitted = re.findall(r"(\w+|[.!?',])", text)
              print "Total of %d splitted words" % (len(splitted))
              cache = {}
              for i in xrange(len(splitted)-2):
                  pair = (splitted[i], splitted[i+1])
                  followup = splitted[i+2]
                  if pair in cache:
                      if followup not in cache[pair]:
                          cache[pair][followup] = 1
                      else:
                          cache[pair][followup] += 1
                  else:
                      cache[pair] = {followup: 1}
              return cache

          def readfiles(self):
              data = ""
              for filename in self.filenames:
                  fd = open(filename)
                  data += fd.read()
                  fd.close()
              return data

          def concat(self, words):
              sentence = ""
              for word in words:
                  if word in "'\",?!:;.":
                      sentence = sentence[0:-1] + word + " "
                  else:
                      sentence += word + " "
              return sentence

          def pickword(self, words):
              temp = [(k, words[k]) for k in words]
              results = []
              for (word, n) in temp:
                  results.append(word)
                  if n > 1:
                      for i in xrange(n-1):
                          results.append(word)
              return random.choice(results)

          def gentext(self, words):
              allwords = [k for k in self.cache]
              (first, second) = random.choice(filter(lambda (a,b): a.istitle(), [k for k in self.cache]))
              sentence = [first, second]
              while len(sentence) < words or sentence[-1] is not ".":
                  current = (sentence[-2], sentence[-1])
                  if current in self.cache:
                      followup = self.pickword(self.cache[current])
                      sentence.append(followup)
                  else:
                      print "Wasn't able to. Breaking"
                      break
              print self.concat(sentence)

      Markov(["76.txt"])

    And the Haskell:

      module Markov
      ( train
      , fox
      ) where

      import Debug.Trace
      import qualified Data.Map as M
      import qualified System.Random as R
      import qualified Data.ByteString.Char8 as B

      type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int)

      train :: [B.ByteString] -> Database
      train (x:y:[]) = M.empty
      train (x:y:z:xs) = let l = train (y:z:xs)
                         in M.insertWith' (\new old -> M.insertWith' (+) z 1 old) (x, y) (M.singleton z 1) `seq` l

      main = do
          contents <- B.readFile "76.txt"
          print $ train $ B.words contents

      fox="The quick brown fox jumps over the brown fox who is slow jumps over the brown fox who is dead."

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration. 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) The Varnish Servers performance are awfully bad in general, to the point that if we shut down one of them the other two are unable to fullfill all the requests and start to skip beats resulting in pending requests, timeouts, 404, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k request per seconds during our max peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output: acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared - Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 20 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group varnish (113) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support off [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] idle_send_timeout 60 [seconds] listen_address :80 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] pcre_match_limit 10000 [] pcre_match_limit_recursion 10000 [] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 600 [seconds] sess_timeout 5 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 2000 [threads] thread_pool_min 5 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user varnish (106) vcc_err_unref on [bool] vcl_dir /etc/varnish vcl_trace off [bool] vmod_dir /usr/lib/varnish/vmods waiter default (epoll, poll) This is our default.vcl file: LINK sub vcl_recv { # BASIC recv COMMANDS: # # lookup -> search the item in the cache # pass -> always serve a fresh item (no-caching) # pipe -> like pass but ensures a direct-connection with the backend (no-cache AND no-proxy) # Allow the backend to serve up stale content if it is responding slow. 
# This defines when Varnish should use a stale object if it has one in the cache. set req.grace = 30s; if (client.ip == "127.0.0.1") { # request from NGINX - do not alter X-Forwarded-For set req.http.HTTPS = "on"; } else { # Add an X-Forwarded-For to keep track of original request unset req.http.HTTPS; unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } set req.backend = www_director; # Strip all cookies to force an anonymous request when the back-end servers are down. if (!req.backend.healthy) { unset req.http.Cookie; } ## HHTP Accept-Encoding if (req.http.Accept-Encoding) { if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* non-RFC2616 or CONNECT */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { return (pass); } if (req.http.HTTPS ~ "on") { return (pass); } ###################################################### # COOKIE HANDLING ###################################################### # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) { if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { return (pass); } } return (lookup); } # Code determining what to do when serving items from the IIS Server sub vcl_fetch { unset beresp.http.Server; set beresp.http.Server = "Server-1"; # Allow items to be stale if needed. This is the maximum time Varnish should keep an object. set beresp.grace = 1h; if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") { unset beresp.http.set-cookie; } # Default Varnish VCL logic if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has specific TB_NC no-caching cookie if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { set beresp.http.X-Cacheable = "NO:Got Cookie"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control private else if (beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") { set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)"; set beresp.ttl = 120 s; return(hit_for_pass); } # If we reach to this point, the object is cacheable. # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish # and can be checked using the following command: # varnishadm param.show default_ttl else if (beresp.ttl < 1s) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } # Cacheable and with valid TTL. 
else { set beresp.http.X-Cacheable = "YES"; } # DEBUG INFO (Cookies) # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie; return(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; if (obj.status == 404) { synthetic {" <!-- Markup for the 404 page goes here --> "}; } else if (obj.status == 500) { synthetic {" <!-- Markup for the 500 page goes here --> "}; } else if (obj.status == 503) { if (req.restarts < 4) { return(restart); } else { synthetic {" <!-- Markup for the 503 page goes here --> "}; } } else { synthetic {" <!-- Markup for a generic error page goes here --> "}; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } Thanks in advance,

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk ?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a dir that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filename. When I access the files through http, the updates are not taken into account and lighty serves the old file. If I manually rename the file to something different, lighttpd returns a 404 error, and if I rename the file back, I get the correct, updated version. It seems like lighty is using some kind of cache mechanism of its own (which is fine) to return static files. Unfortunately, that mechanism doesn't seem to update itself when files are modified. I checked through Wireshark, and my browser really does request the file, so this is not a browser caching issue. It returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
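
    One directive that is commonly pointed at for this behaviour is lighttpd's stat cache; a minimal sketch of disabling it, assuming the stock Debian/Ubuntu layout with /etc/lighttpd/lighttpd.conf:

      echo 'server.stat-cache-engine = "disable"' | sudo tee -a /etc/lighttpd/lighttpd.conf
      sudo /etc/init.d/lighttpd restart

    With the stat cache disabled, lighttpd stats the file on every request, so a changed size or mtime is picked up immediately, at the cost of a few extra syscalls.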

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: Blocking Bad plan in cache Outdated statistics Hardware bottleneck To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocking <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to rollback, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out of date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCACHE which drops the procedure cache.  
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.

    Read the article

  • Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder?

    - by The Geek
    After you’ve used Disk Cleanup, you probably expect every temporary file to be completely deleted, but that’s not actually the case. Files are only deleted if they are older than 7 days, but you can tweak that number to something else. This is one of those tutorials that we’re showing you for the purpose of explaining how something works, but we’re not necessarily recommending that you implement it unless you really understand what’s going on. Keep reading for more.

    Read the article

  • How do you limit root partition disk access to allow the drive to go into standby mode?

    - by Casey
    When there are no users on my system, I would like the hard disk to spin down to a low-power state. I realize that this might not be 100% achievable for a straight 24 hours, but it seems reasonable that the system could remain idle for a few hours at a time when it is not in use. My system is headless and running a limited number of services. The primary services are: exim4, mythtv-backend, nfs, samba, cups, apt-cacher-ng. Assume that the drives are already enabled to go into standby mode. Also, it's not acceptable to increase the write-back timeout, since my system is not on a UPS.
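
    A small diagnostic sketch for finding out what keeps the disk awake, using the vm.block_dump facility that kernels of this era expose (turn it off again afterwards, and note that syslog must not be writing to the same disk while you watch, or it will flood the log):

      sudo sh -c 'echo 1 > /proc/sys/vm/block_dump'   # log every block read/write with the responsible process
      sleep 300                                       # let the system sit "idle" for a while
      dmesg | tail -n 50                              # see which processes and files touched the disk
      sudo sh -c 'echo 0 > /proc/sys/vm/block_dump'   # switch the logging back off

    Typical culprits for the services listed are mail queues, samba/nfs housekeeping and cups logging; once identified, their spool or log directories can be moved to another disk or to tmpfs.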

    Read the article

  • What can I do to give some more love and disk space to my database on Ubuntu?

    - by Yaron Naveh
    I'm new to Linux. I've deployed a database to an Ubuntu server on Amazon and found out I'm low on disk space. I ran df (see below) and found that I'm at 89% capacity on one file system, but less on others. What does this mean? Do I have a few partitions, and can I now utilize the others besides /dev/xvda1? Also, /dev/xvdb seems large; is it safe to put the database on it and use only that? If so, do I need to mount it or do something special?

      $> df -lah
      Filesystem   Size  Used  Avail  Use%  Mounted on
      /dev/xvda1   8.0G  6.7G   914M   89%  /
      proc            0     0      0     -  /proc
      sysfs           0     0      0     -  /sys
      none            0     0      0     -  /sys/fs/fuse/connections
      none            0     0      0     -  /sys/kernel/debug
      none            0     0      0     -  /sys/kernel/security
      udev         3.7G  8.0K   3.7G    1%  /dev
      devpts          0     0      0     -  /dev/pts
      tmpfs        1.5G  164K   1.5G    1%  /run
      none         5.0M     0   5.0M    0%  /run/lock
      none         3.7G     0   3.7G    0%  /run/shm
      /dev/xvdb    414G  199M   393G    1%  /mnt
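
    For the "do I need to mount it" part: /dev/xvdb is already formatted and mounted at /mnt, so using it is mostly a matter of moving the database's files there. A sketch, assuming MySQL purely as an example (adapt the paths to whatever database is actually installed, and take a backup first); note that on EC2 a large /dev/xvdb mounted at /mnt is often instance-store (ephemeral) storage that is wiped when the instance stops, so check that before trusting it with a database:

      sudo service mysql stop                    # stop the database before touching its files
      sudo mkdir -p /mnt/db
      sudo rsync -a /var/lib/mysql/ /mnt/db/     # copy the data directory to the big volume
      sudo mv /var/lib/mysql /var/lib/mysql.old  # keep the original until the move is verified
      sudo ln -s /mnt/db /var/lib/mysql          # or point datadir at /mnt/db in the DB config
      sudo service mysql start

    On Ubuntu, AppArmor may also need to be told about the new path before MySQL will start.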

    Read the article

  • Identifying CDs

    - by Chris
    I'd like to be able to determine what music album CD is in a CD drive. For example, if someone claims that the CD in their drive is Eminem - The Eminem Show, I would like to be able to verify that the CD is indeed The Eminem Show. Any ideas? I've applied for a Gracenote developer license, but they won't get back to me for five days. Also, how does this work? Is there some GUID or other unique identifier that music discs are encoded with? Lastly, might this be possible with data CDs, like, say, the Diablo II install Disc 1? If so, any directions you can point me in, for accomplishing this?
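
    Standard audio CDs do not carry a unique ID; lookup services such as CDDB/FreeDB and MusicBrainz compute an identifier from the disc's table of contents (the track count and track offsets). A quick sketch of inspecting that on Linux, assuming the cd-discid utility is available:

      sudo apt-get install cd-discid   # small tool that reads the TOC and prints the CDDB disc ID
      cd-discid /dev/cdrom             # prints the disc ID, track count, track frame offsets and total length

    The same trick does not apply to data CDs such as an install disc; those are usually identified by volume label or by hashing a known file on the disc.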

    Read the article

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that while built on top of ASP.NET's caching feature adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant but beyond that, it does offer a more modern API. Here's for example how I'm using the API in the new version of my Favicon module:

      _cacheManager.Get(
          "Vandelay.Favicon.Url",
          ctx => {
              ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
              var faviconSettings = ...;
              return faviconSettings.FaviconUrl;
          });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

      var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
      if (faviconUrl == null) {
          faviconUrl = ...;
          cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
      }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary as the type can be inferred from the return type of the Lambda. The Lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

      _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration or monitoring of the files under a virtual path, but you can also come up with your own.

    Read the article
