Search Results

Search found 16787 results on 672 pages for 'mod disk cache'.


  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which exposes some APIs to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM. In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response: it is used for internal tracking of requests, so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
          if (req.url ~ "^/api/.+\.xml") {
            unset req.http.cookie;
          }
        }

    We just strip cookies out and let the backend handle the rest as far as cache times are concerned (this is a workaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are effectively hitting separate cache objects, since &key=SOMEALPHANUM is considered part of the hash Varnish uses for storage. This is obviously not a great situation, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
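
    A minimal vcl_recv sketch of one way to do that (untested, Varnish 2.x-era syntax; the regexes assume key values are alphanumeric, as in the example):

        sub vcl_recv {
          if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
            unset req.http.cookie;
            # drop the tracking key before Varnish hashes the URL, so all
            # developers share one cache object per query
            set req.url = regsuball(req.url, "&key=[A-Za-z0-9]+", "");
            set req.url = regsub(req.url, "\?key=[A-Za-z0-9]+&", "?");
            set req.url = regsub(req.url, "\?key=[A-Za-z0-9]+$", "");
          }
        }

    Note that the backend then never sees the key, so if the tracking matters it would have to be logged in front of Varnish.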

    Read the article

  • Caching API Proxy Server

    - by edc1591
    I need to have a server that caches API responses and then forwards them along to a desktop app. I don't really have much experience with this, so I have a few questions. First of all, what kind of server should I get? I already use Linode for my websites, so ideally I'd like to go with them. I expect to get anywhere from 30 million to 40 million requests to my proxy server each month. Will a 512 Linode be able to support that? Also, is there any software out there that does this already, or will I have to write my own? The API responses are roughly 10 KB each on average, so doing the math, that's a lot of data each month. Should I just add more transfer to whatever server I buy, or can I somehow compress the API responses before sending them off to the user? Thanks for any help.
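
    Back-of-the-envelope numbers for the transfer side of the question (plain arithmetic):

        # rough monthly transfer and request rate for the stated workload
        requests_per_month = 40000000      # upper estimate from the question
        avg_response_kb = 10

        transfer_gb = requests_per_month * avg_response_kb / 1024.0 / 1024.0
        req_per_sec = requests_per_month / (30 * 24 * 3600.0)

        print("~%.0f GB/month, ~%.0f req/s average" % (transfer_gb, req_per_sec))
        # ~381 GB/month, ~15 req/s average; peaks will be higher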

    Read the article

  • SQL Service Broker enabled causes 100% CPU

    - by user40373
    I have a new set of code for a website that uses SqlCacheDependencies based on SQL commands. I have enabled SQL Service Broker and some triggers on update/insert/delete, and it is causing 100% CPU. Any ideas if I am doing something wrong, or suggestions to improve? Here are the SQL changes I ran:

        alter database DATABASE_NAME set enable_broker WITH ROLLBACK IMMEDIATE
        grant subscribe query notifications to CONNECTION_USER_NAME
        grant send on service::sqlquerynotificationservice to CONNECTION_USER_NAME
        ALTER AUTHORIZATION ON DATABASE::DATABASE_NAME TO CONNECTION_USER_NAME;
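
    A few diagnostic queries that may help narrow down where the CPU is going (a sketch; these DMVs exist in SQL Server 2005 and later):

        -- how many query-notification subscriptions are currently active
        SELECT COUNT(*) AS qn_subscriptions FROM sys.dm_qn_subscriptions;

        -- undeliverable broker messages piling up here can burn CPU
        SELECT TOP 10 * FROM sys.transmission_queue;

        -- what is actually consuming CPU right now
        SELECT TOP 10 session_id, status, command, cpu_time
        FROM sys.dm_exec_requests
        ORDER BY cpu_time DESC;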

    Read the article

  • postfix concurrency limit with round robin dns

    - by goose
    Take the following internal round-robin DNS setup:

        mymta.com. IN A 172.31.1.1
        mymta.com. IN A 172.31.1.2
        mymta.com. IN A 172.31.1.3
        mymta.com. IN A 172.31.1.4
        mymta.com. IN A 172.31.1.5
        mymta.com. IN A 172.31.1.6
        mymta.com. IN A 172.31.1.7
        mymta.com. IN A 172.31.1.8
        mymta.com. IN A 172.31.1.9
        mymta.com. IN A 172.31.1.10

    Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package).

    main.cf:

        smtp_connection_cache_destinations = mymta.com
        smtp_connection_cache_reuse_limit = 750
        smtp_destination_concurrency_limit = 75

    transport:

        * :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?
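
    One quick way to see what Postfix is actually doing on the wire (standard Linux tools; the awk fields assume netstat's default output layout):

        # total live outbound SMTP connections
        netstat -tn | awk '$5 ~ /:25$/ && $6 == "ESTABLISHED"' | wc -l

        # the same, broken down per destination IP, to see the spread
        # across the ten A records
        netstat -tn | awk '$5 ~ /:25$/ {split($5, a, ":"); print a[1]}' | sort | uniq -c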

    Read the article

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well, except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to keep repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of serving (and negatively caching) stale results?
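
    For reference, a minimal nscd.conf fragment that caches host lookups only (a sketch; the TTLs are illustrative, and one well-known caveat is that nscd applies these fixed lifetimes rather than honouring each record's DNS TTL):

        # /etc/nscd.conf
        enable-cache            hosts   yes
        positive-time-to-live   hosts   60
        negative-time-to-live   hosts   5
        enable-cache            passwd  no
        enable-cache            group   no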

    Read the article

  • IIS seems to be caching files on a system share?

    - by scott novell
    We are switching over to Windows 2008 and IIS 7.5, and it seems that whenever I make a change to a CSS file on a system share, the change does not show through the browser for a few minutes. The file is served through the browser using an ISAPI filter. I have turned off output caching in IIS and also turned off caching on the share itself. The browser is not caching either: even when I force a full 200 response (rather than a 304), the old content still comes back. Any ideas?
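
    If the delay turns out to be the SMB client's metadata caching (introduced with Vista/2008) rather than IIS itself, these registry values on the web server control it; a sketch to test with, not a blanket recommendation:

        rem lifetimes are in seconds; 0 disables that cache entirely
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileInfoCacheLifetime /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileNotFoundCacheLifetime /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f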

    Read the article

  • Image file can't be opened by a Windows Store app if the file is on an Ext4 external hard disk

    - by ? ?
    My question is described below. I have two disks, an external hard disk and a removable disk, both formatted with Ext4. I put the same image on both disks and use Ext2Fsd to mount them. When I open the file in Windows 8, it opens by default in the Windows Store app Photos. Opening succeeds if the file is on the Ext4 removable disk, but fails if the file is on the Ext4 external hard disk. I don't know why it fails only on the external hard disk. Does anyone have similar experiences?

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello, when running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one I should be looking at. Any possible solutions would be much appreciated. Cheers
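
    A sketch of the squid.conf timeout directives most likely to matter for a slow upstream response (the directive names are real; the values are illustrative, and since read_timeout defaults to 15 minutes, a cut-off at one minute may well come from something else in the chain):

        # time Squid waits between successive reads from the origin server
        read_timeout 10 minutes
        # maximum time to receive the client's complete request
        request_timeout 5 minutes
        # TCP connect to the origin
        connect_timeout 2 minutes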

    Read the article

  • If-Modified-Since vs If-None-Match

    - by Roger
    This question is based on this article.

    Response header:

        HTTP/1.1 200 OK
        Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
        ETag: "10c24bc-4ab-457e1c1f"
        Content-Length: 12195

    Request header:

        GET /i/yahoo.gif HTTP/1.1
        Host: us.yimg.com
        If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
        If-None-Match: "10c24bc-4ab-457e1c1f"

    Response:

        HTTP/1.1 304 Not Modified

    In this case the browser is sending both If-None-Match and If-Modified-Since. My question is: on the server side, do I need to match BOTH the ETag and If-Modified-Since before I send a 304? Or should I just look at the ETag and send a 304 if it matches, ignoring If-Modified-Since?
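
    RFC 7232 (the later HTTP/1.1 revision) makes the precedence explicit: when If-None-Match is present, the server evaluates it and must ignore If-Modified-Since; the date check applies only when no ETag was sent. A minimal sketch of that logic (Python; hypothetical helper, and last_modified must be a timezone-aware datetime):

        from email.utils import parsedate_to_datetime

        def should_send_304(headers, current_etag, last_modified):
            inm = headers.get("If-None-Match")
            if inm is not None:
                # If-None-Match wins; If-Modified-Since is ignored
                candidates = [tag.strip() for tag in inm.split(",")]
                return inm.strip() == "*" or current_etag in candidates
            ims = headers.get("If-Modified-Since")
            if ims is not None:
                return parsedate_to_datetime(ims) >= last_modified
            return False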

    Read the article

  • Dual boot new laptop win 7 / ubuntu 12.04 - 750gb + 32gb SSD

    - by Alex Waters
    I have just purchased a new HP dv7t-7000 and I would like to run Windows 7 / Ubuntu. How do I set up the dual boot? Can I install both operating systems from an 8 GB USB drive? Can I still make use of the 32 GB SSD? I'm unfamiliar with the efficacy of using an SSD for caching alongside a 750 GB 7200 RPM SATA 3 drive. I can only see using it for Windows 7, which I have installed in order to play games. Thank you!

    Read the article

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance, second is network performance. I’m trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming etc. The solution should work with both physical and virtual servers. I don’t need sophisticated analysis with different workloads, just one set of benchmarks which I would run against a reference server and later compare to new ones. One problem is that most benchmarks are not giving accurate results when running on a VM.
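
    A simple, repeatable pair that covers both metrics: run it on the reference server, record the numbers, and compare new machines against them (assumes fio and iperf are installed; sizes and runtimes are illustrative):

        # disk: sequential write, then random read, against a scratch file
        fio --name=seq --rw=write --bs=1M --size=4g --filename=/tmp/fio.test
        fio --name=rand --rw=randread --bs=4k --size=4g --runtime=60 --filename=/tmp/fio.test

        # network: iperf server on the reference box, client on the new one
        iperf -s                    # on the reference server
        iperf -c refserver -t 30    # on the server under test

    On VMs, repeating each run a few times and comparing medians helps smooth out the noisy results the question mentions.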

    Read the article

  • How to increase virtual hard drive space

    - by Chris
    I have a Microsoft Virtual PC hard drive (.vhd format) that has maxed out its 16 GB of hard drive space. What would be the best way to increase this disk space? Booting into the machine (Windows XP Professional) and using the Disk Management snap-in, I can see that the virtual drive has approximately 40 more unused gigs of space. Trying to use diskpart, I find out that Windows XP can't extend the boot partition. So I'm at an impasse; any suggestions on how to increase the partition, or to increase the actual virtual hard drive, would be great. Note: the virtual hard drive is running on Windows 7 using XP Mode.
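
    One route, since the host is Windows 7: attach the VHD to the host and extend the volume from there, because Windows 7's diskpart can extend a partition that XP itself cannot. A sketch (the VM must be fully shut down, not hibernated; the path and volume number are illustrative):

        rem elevated command prompt on the Windows 7 host
        diskpart
        select vdisk file="C:\VMs\XP Mode.vhd"
        attach vdisk
        list volume
        rem pick the XP system volume's number from the listing
        select volume 3
        extend
        detach vdisk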

    Read the article

  • Apache2 BufferedLogs On - anybody using it ?

    - by Qiqi
    Greetings, I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature is marked as experimental, but it has been for many years now, so I guess it's fairly stable. I am running some servers with constrained disk I/O capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I see several to several hundred requests per second, so by my reckoning there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for many unnecessary writes. (OCFS2 shared among several DomUs in Xen.)

    Read the article

  • How can I connect a SAS drive to USB? [closed]

    - by dave
    I have a Dell T710 with Seagate Cheetah 15k.7 SAS disks. If the T710 motherboard dies, I'll need to resort to one of my nightly off-site backups and salvage the journal/logfile from the SAS disk to bring the backup bang up-to-date. I need a way of reading the healthy-but-inaccessible SAS disc that does not depend on the only SAS-capable machine I have to hand. So I bought: SAS to SATA Adapter and: USB 2.0 to SATA Adapter with Power ...so that I could read the SAS drive via USB. I can plug it all together just fine. The chain looks like: USB - SATA - SAS. But the drive does not spin up and the computer doesn't even acknowledge anything being attached by USB. Is there a cheap external enclosure I can buy for SAS drives? I can't believe these USB to SATA adapters are everywhere but the USB to SAS adapters are almost non-existent...

    Read the article

  • Scan disk problem on XP

    - by Sarfraz Ahmed
    Hi, I have four drives on my computer. The problem is that each time I start the computer, the scan disk check runs for one particular drive, even if I shut down the computer properly. I ran a thorough scan disk check, but the check is still always performed for that drive at boot, no matter what. I wonder what is wrong, although everything is fine and the drive's data is accessible. Could you guys please help me out of this? I am using Windows XP SP2. Thanks
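
    Two built-in XP commands that may help pin this down (drive letter D: is an example):

        rem is the volume's dirty bit stuck on?
        fsutil dirty query d:

        rem tell autochk to skip that drive at boot while investigating
        chkntfs /x d:

        rem restore default boot-time checking afterwards
        chkntfs /d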

    Read the article

  • Throughput = BS * IOPS?

    - by Marvin
    I've seen in many places that throughput = block size * IOPS should hold. For example, writing at a 128 KB block size to a SAS disk that can support 190 IOPS should give a throughput of ~23 MB/s: 23.75 MB/s = 128 KB * 190 IOPS / 1024. Now when I tested it in a VM against a monster NetApp filer, I got these results:

        # dd if=/dev/zero of=/tmp/dd.out bs=4k count=2097152
        8589934592 bytes (8.6 GB) copied, 61.5996 seconds, 139 MB/s

    To view the I/O rate of the VM I used iostat and esxtop, and they both showed around 250 IOPS. So to my understanding the throughput was supposed to be ~1000 KB/s: 1000 KB/s = 4 KB * 250 IOPS. The dd of 8 GB is twice the size of RAM, of course, so no page caching here. What am I missing? Thanks!
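
    One consistency check on those numbers (plain arithmetic): if 139 MB/s really flowed at 250 IOPS, the average I/O must have been far larger than 4 KB, which points at the guest OS and/or hypervisor merging the 4 KB writes before they reach the filer:

        throughput_kb = 139 * 1024    # dd's reported rate, in KB/s
        iops = 250                    # as reported by iostat/esxtop

        avg_io_kb = throughput_kb / float(iops)
        merged = avg_io_kb / 4

        print("avg I/O ~%.0f KB, i.e. ~%.0f 4 KB writes per operation" % (avg_io_kb, merged))
        # avg I/O ~569 KB, i.e. ~142 4 KB writes per operation

    iostat -x reports the average request size (avgrq-sz, in 512-byte sectors), which would confirm whether merging is what's happening.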

    Read the article

  • Data drive disappearing.

    - by Mike Keller
    We have a Windows 2003 R2 server with SP2 here that randomly loses a partition. There are two partitions, C: and D: (the one that disappears). When I go into Disk Management, the space shows as available on the drive but not formatted. The two drives are set up in a RAID 1 array. Nothing in the event log sticks out as a trigger for this problem, and thank god we do daily backups of the data, but it gets kind of annoying to have to go back in there, reformat the partition and restore the data. Any places I can poke around to find the cause of this, or even better, solutions to the problem, would be appreciated.

    Read the article

  • copy boot-able partition

    - by Dima
    I have a disk image with 3 partitions. The first partition (hd0,0) is boot-able, with the following GRUB 1 configuration file:

        default=0
        timeout=5
        title Bank A
          root (hd0,1)
          chainloader +1
        title Bank B
          root (hd0,2)
          chainloader +1

    The partitions (hd0,1) and (hd0,2) are also boot-able. I'm trying to clone partition (hd0,1) to (hd0,2) by creating a device map using kpartx and copying the whole partition with the dd command. The problem is that after cloning, the cloned partition does not boot (but all the files are OK). What is wrong? I need both partitions to be identical (I'm using them for fail-over purposes in an embedded device).
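
    For reference, a clone-and-relabel sketch under stated assumptions (the device-mapper names from kpartx are illustrative, and both partitions must be unmounted; dd does copy the partition's boot sector, so a chainloader +1 entry should still find it):

        # raw copy of the second partition onto the third (sizes must match)
        dd if=/dev/mapper/loop0p2 of=/dev/mapper/loop0p3 bs=4M conv=fsync

        # an ext2/3/4 clone keeps the original UUID and label; if anything in
        # the boot chain or the clone's fstab selects a filesystem by UUID,
        # the duplicate can make the wrong bank mount or boot
        tune2fs -U random /dev/mapper/loop0p3
        e2label /dev/mapper/loop0p3 bankB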

    Read the article

  • How to configure squid for retrieving (and caching) directly my static resources?

    - by fabien7474
    I have an Apache/Tomcat/Spring tc Server stack running on a CentOS EC2 VM. I would like to install Squid on the same machine as a proxy for retrieving (directly, i.e. without forwarding the request to Apache/Tomcat) and caching static content ONLY, identified by the URI prefixes /images, /css and /js. Other URIs should be forwarded to the normal web server and not cached. Since I am a newbie, I couldn't work out from the Squid documentation how to achieve this behavior (or whether it is even possible). Could you please help me and tell me how I should configure Squid for this purpose? Thank you.
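
    A minimal squid.conf sketch in the Squid 2.7-era accelerator syntax (the hostname and ports are assumptions; note that Squid still fetches each static file from Apache/Tomcat once, then serves later hits from its own cache):

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=backend

        # cache only the static prefixes; everything else passes through uncached
        acl static urlpath_regex ^/(images|css|js)/
        cache allow static
        cache deny all

        cache_peer_access backend allow all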

    Read the article

  • Should I disable write caching on my Windows 2008 VM?

    - by javano
    I have a Windows Server 2008 x64 Standard virtual machine that runs on a host with a hardware RAID controller, a Perc 6/i, which has an on-board battery. Doing everything I can for additional performance, I think I should disable write caching in the guest. Is this very dangerous, though? My understanding is that battery-backed write caching gives a performance boost to the host OS by acknowledging writes while they are still sitting in the controller's cache waiting to be committed to disk. I can't see how the guest setting would be detrimental to performance, but is there a gain (even if marginal) to enabling or disabling it? P.S. The machine has backup power. A screenshot was attached for clarification [not included].

    Read the article

  • Weird caching bug where an old version of the same web page (same filename) is still served (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Let's say I have two machines, A and B, both running Windows 2008 R2. I have a web page called login.jsp on machine A, and a newer, modified version of login.jsp on machine B. Now, I copy the new login.jsp from machine B and paste it to machine A, replacing the older version with the same filename. For whatever reason, when I hit the web page in my browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it's been replaced! I tried restarting IIS and Apache Tomcat; that didn't work. I tried restarting machine A; that didn't work. I tried a cold reboot of my local machine, and that didn't work either. So I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen this on Windows 2003, but it is occurring with Windows 2008. What I don't understand is why that worked, what the heck this error is, and how I really diagnose and resolve it for good instead of using the hack my colleague proposed. Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Does anyone else have the same problem? Thank you for any help.
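
    The space-and-save fix is a strong clue: Tomcat's JSP engine (Jasper) recompiles a page only when its modification time changes, and a Windows copy/paste preserves the source file's original timestamp, so the replacement can look unchanged to Jasper. A touch-equivalent after copying may be all that's needed (the path is illustrative):

        rem in-place "touch" idiom for cmd.exe; rewrites nothing, updates the timestamp
        copy /b "C:\tomcat\webapps\myapp\login.jsp" +,,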

    Read the article

  • Windows: why does 8 GB of RAM feel like a few MB?

    - by Desmond Hume
    I'm on Windows 7 x64 with a 4-core Intel i7 and 8 GB of RAM, but lately it feels like my computer's "RAM" is located solely on the hard drive. The total amount of memory used by the processes listed in Task Manager is only about 1 GB [screenshot not included]. What has been happening on my computer for a few days now is that one program (Cataloger.exe) is continually processing large quantities of (rather big) files, repeatedly opening and reading them for the purposes of cataloging. It doesn't grow much in memory and stays at about 90 MB, but the amount of data it processes in, say, 30 minutes can be measured in gigabytes. So my guess was that Windows file caching has something to do with it. After some research on the topic, I came across a program called RamMap that displays detailed info on a computer's RAM [screenshot not included]. To me it looks like Windows keeps huge amounts of no-longer-needed data in RAM, redirecting RAM allocation requests to the pagefile on the hard drive. Even when I close Cataloger.exe, RamMap reports the size of the mapped file as about the same for a long time afterwards. And it's not just this particular program: earlier I noticed that a similar slowdown occurred after massive file operations with other programs, so it's really not an exception. Whatever it is, it slows down the computer by like 50 times. Opening a new tab in Chrome takes 20-30 seconds; opening a new program can take up to a minute. Due to the slowdown, some programs even crash. So what do you think: is the problem hiding in file caching or somewhere else? How do I solve it?

    Read the article

  • Free, Linux-based rescue CD for Windows machines

    - by Adam Matan
    Hi, too often I'm called on to help a friend who has screwed up a Windows machine by some creative method. The usual remedy is backing up the hard drive contents and reinstalling. Right now, this is done by moving the afflicted hard drive into my machine. I figured that using a rescue disk running some version of Linux might ease the process. I'm looking for: NTFS access; partition tools; a large variety of drivers (network, hard drives, etc.). A GUI and some rescue wizards would be a great plus. Any ideas? Adam

    Read the article

  • Record browsing history

    - by nc3b
    How can I record everything I browse so that, ideally, it might later enable me to re-surf the same pages without internet access? For instance, if I go to http://www.example.com/example.html I would like to be able to view the same page later exactly as it first appeared (but without reconnecting to www.example.com). Thank you in advance.
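
    One widely available approach is wget's mirroring options (a sketch; these are standard GNU wget flags):

        # save the page plus the images/CSS/JS it needs, rewriting links
        # so the local copy renders without a connection
        wget --page-requisites --convert-links --no-parent \
             http://www.example.com/example.html

        # or mirror a whole site for offline browsing
        wget --mirror --page-requisites --convert-links http://www.example.com/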

    Read the article
