Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 21/157 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to serve a cached version of my website to non-logged-in users. I can detect perfectly whether the user is logged in or out, but I can't cache pages whose URLs have no file extension. There is no file extension because nginx rewrites the URL to a PHP script (so caching on .php does not work). For example, I'd like Varnish to cache example.com, example.com/forum/ and example.com/contact/. I have tried:

        if (req.request == "GET" && req.url ~ "^/") { return(lookup); }
        if (req.request == "GET" && req.url ~ "") { return(lookup); }
        if (req.request == "GET" && req.url ~ "/") { return(lookup); }

    but nothing seems to work... any help?
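
    A minimal vcl_recv sketch (an editorial addition, untested, and not from the original question): the URL pattern is usually not what blocks caching of extensionless pages; it is the Cookie header, so the sketch keys on that instead. The cookie name is a placeholder for whatever marks a logged-in user.

        sub vcl_recv {
            # "sessionid" is a hypothetical cookie name
            if (req.request == "GET" && req.http.Cookie !~ "sessionid") {
                unset req.http.Cookie;   # strip cookies so the lookup can hit the cache
                return(lookup);
            }
            return(pass);
        }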

    Read the article

  • How can I affect DNS caching in a PHP/Memcache application

    - by Niro
    On a very heavily loaded Ubuntu/PHP web server I found that the PHP line $memcache->connect("int-aws_ec2.memcached.myapp.net", 11211); sometimes takes ~5 seconds. Replacing the hostname with the IP address drops the server load from ~20 to 0. My question is: where are the settings that affect DNS caching for this? Are they at the server level or in the memcache library? How can I change them? Additional info: Ubuntu 10.04 Lucid, PHP 5.3.2-1ubuntu4.10, Apache/2.2.14 (Ubuntu), Amazon EC2. Even more info per Celada's comment: DNS handling for the memcached server is done by Scalr (the platform I use to manage the cloud resources); they have a client located on the instances and their own DNS servers. /etc/nsswitch.conf has "hosts: files dns"; /etc/resolv.conf has "nameserver 172.16.0.23", "domain ec2.internal", "search ec2.internal". The domain is not in the hosts file. To check whether I run nscd I used /etc/init.d/nscd stop and received 'no such file', so I guess I don't run nscd. Thanks!
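
    Two commonly suggested workarounds, sketched editorially (not from the answers; the IP address is hypothetical): glibc itself does not cache DNS results, so either run a local name-service cache such as nscd, or pin the memcached hostname in /etc/hosts so the lookup never leaves the box.

        # install a local name-service cache
        sudo apt-get install nscd

        # or pin the memcached hostname locally (hypothetical IP)
        echo "10.0.1.25 int-aws_ec2.memcached.myapp.net" | sudo tee -a /etc/hosts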

    Read the article

  • One-way forest trust between geographically distributed forests using Server 2008 R2

    - by bwerks
    Hi all, I'm planning out a joinder between two domains, as would take place with contracting companies. Forests A and B exist in distant sites, and there is to be a one-way forest trust so that domain users in Forest A can be authenticated on machines in Forest B. To facilitate this, the domain controllers in each forest must be able to contact each other to set up and confirm the trust; my question is what underlying networking must be in place beneath it. So far the prevailing approach has been to maintain a VPN connection between the two sites, but the TechNet documentation seems to indicate that DNS forwarding may be the way to go. Is this the case? Furthermore, if DNS will suffice, does that mean that a DNS server must run on boundary servers in each domain so that they can be reached from across the internet? How must they be configured? Thanks!
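
    For the DNS piece, a conditional forwarder in each forest pointing at the other forest's DNS servers is the usual minimum; a hedged example using dnscmd on a 2008 R2 DNS server (the zone name and IP are placeholders). Note that the trust itself still needs Kerberos, LDAP and RPC reachability between the domain controllers, so forwarding alone does not remove the need for the VPN or equivalent connectivity.

        REM on a DNS server in Forest B, forward queries for Forest A's namespace
        dnscmd /ZoneAdd foresta.example.com /Forwarder 192.0.2.10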

    Read the article

  • http proxy caching headers

    - by David Hagan
    I have a service for which I'm about to upgrade the authentication. However, I'm trying to make the right decision about where the encryption takes place. I currently have two options:
    Option 1) the authentication module is deployed to the client as a JavaScript library over HTTPS and executes client-side, so that the client can POST back an encrypted string.
    Option 2) the authentication module is kept server-side, so that the client need only POST back an unencrypted string.
    I know that many HTTP proxies cache/log the query string (and therefore any query parameters), but does anyone know of any HTTP proxies that cache the headers as well? If the headers are being cached, then I'll clearly want to encrypt the password inside the SSL encryption, because to my understanding the headers of an HTTPS request may not always be encrypted (depending on the capabilities of the browser, etcetera). Can anyone shed any light on the caching of headers by HTTP proxies? Do you have one that does, or know of one that does?

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS replication so that all the contents stay replicated over the 100-Mbps link. We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:
    Would you go for a tape library, disk backup, or are there other options worth considering?
    Would you perform batch backups (i.e. nightly), or would you use continuous backup (i.e. while users are working)?
    Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there another way of doing things?
    My experience with backup hardware and overall setup is limited, so I appreciate any good advice you may have. Thanks.

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly -- with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync causes the proxy cache to grab these users' sandboxes when I'll never need them. I would create a symbolic link pointing the sandbox directory at /dev/null, but then I wouldn't be caching my own sandbox, which I am interested in. Is there any way to tell the Perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it"?
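
    If your p4p version offers no per-path filtering, a common workaround (an editorial sketch; the path and age are placeholders) is to let everything land in the cache and periodically evict files nobody on your side has read recently; the proxy simply re-fetches anything that is missing.

        # remove cached revisions not accessed in 30 days
        find /var/cache/p4proxy -type f -atime +30 -delete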

    Read the article

  • faking NAT with a VMware distributed switch across multiple hosts

    - by romant
    I need to set up NAT for certain machines within the network. I wish to do this with a dvSwitch, as that seems the logical way of attacking the problem; in this scenario there are just under 30 hosts. In order for the NAT'ed VMs to have access to the 'real' network, I am providing a 'router' VM, which has access to the WAN/outside network and also acts as the DHCP server for the NAT'ed machines. Problem space: when the machines connected to the NAT interface and the router are on the same host, they get an IP from the router VM and work perfectly (routed outside). Unfortunately, machines on other hosts that are connected to the dvSwitch do not get an IP, and tcpdump shows no network data getting through across the hosts within the dvSwitch. Has anyone achieved a NAT solution using a dvSwitch before that they could share? Thank you. EDIT: Including the diagram.

    Read the article

  • Rails' FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (Rails file-system cache). I was thinking about changing to the memcached store, but a short test shows it isn't a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45 GB of RAM left that isn't needed for the application and that could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID 10 system with almost 120 MB/s disk performance. How can I tell Linux to use the free RAM more deliberately and not to be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I see performance advantages from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
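
    Linux already gives otherwise-free RAM to the page cache by default, so often nothing needs tuning; two knobs that nudge it further, plus the ramdisk alternative, are sketched editorially below (the size and path are illustrative):

        # keep cached metadata longer and avoid swapping application memory out
        sysctl -w vm.vfs_cache_pressure=50
        sysctl -w vm.swappiness=10

        # or mount the Rails cache directory on a RAM-backed tmpfs (contents are lost on reboot)
        mount -t tmpfs -o size=40g tmpfs /path/to/app/tmp/cache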

    Read the article

  • Distributed Nagios Installation

    - by kruczkowski
    I'm looking for a plug-in or product that will act as a remote probe, perform tests, and then send the results back to the central Nagios server. The reason is that I'd like to monitor internal systems and servers at customer sites, but I don't want to allow all that monitoring traffic through the firewalls. Ideally I'd like a soft probe that would be installed on site, perform the tests, and send the results back (via SSH) to the central Nagios installation. Does anyone know of a product or plug-in that offers such a service? If not Nagios, is there any other monitoring system that does such a thing (ideally open source)?
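
    If SSH is the only channel you want open, check_by_ssh can run the standard plugins on the remote machines and return the result to the central server; a hedged command definition follows (the plugin path and thresholds are placeholders). For a probe that pushes results back instead of being polled, NSCA with passive checks is the usual Nagios answer.

        define command {
            command_name    check_remote_disk
            command_line    $USER1$/check_by_ssh -H $HOSTADDRESS$ -C "/usr/lib/nagios/plugins/check_disk -w 20% -c 10%"
        }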

    Read the article

  • One EC2 source with distributed varnish machines

    - by Elad Lachmi
    I have a web site hosted on an EC2 instance (2008 R2 + IIS 7.5 + SQL Server). I put up one Linux box running RHEL with Varnish. After some configuration trial and error, I found a configuration that works. Now I want to duplicate the Varnish boxes to other availability zones but continue to pull the pages from the original Windows box. It is my understanding that I can put the Varnish boxes in different zones and pull from the Windows box via its external IP. But what do I need to do so that each user receives content from the box physically closest to them? Is this even possible? Thank you!

    Read the article

  • How do I turn off caching in IIS7?

    - by jammus
    Hello. I'm developing an ASP classic site under Windows 7 (form a queue, ladies). The problem is that IIS seems to be making heavy use of its cache for both static and dynamic content, which really conflicts with my 'make a small change, alt-tab, hit Ctrl-F5' development style. Changes made to .asp files may take two or three refreshes to show up, whereas changes to .js files can take 20 times as many. How do I go about turning caching off on my development machine? Cheers. (in b4: stop using ASP classic)
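
    A hedged web.config fragment for the development site that turns off IIS output caching, kernel caching and client cache headers (verify the elements against your IIS 7 feature set; stale .js files are often the browser cache, which the clientCache element also addresses):

        <system.webServer>
          <caching enabled="false" enableKernelCache="false" />
          <staticContent>
            <clientCache cacheControlMode="DisableCache" />
          </staticContent>
        </system.webServer>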

    Read the article

  • Migration from Distributed File System 2003 to 2008

    - by miro23
    I have two Windows Server 2003 machines; both run DFS and replicate between each other. I would like to migrate the primary Windows 2003 DFS server to Windows 2008. What is the best way to do that? I found this article: Migrate a Domain-based Namespace to Windows Server 2008 Mode, http://technet.microsoft.com/en-us/library/cc753875.aspx, but I am interested only in migrating the DFS replication, not the namespace. Thanks.

    Read the article

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user-agent (mobile or not). I've tried something similar to this:

        # mobile users
        if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') {
            set $iphone_request '1';
        }
        if ($iphone_request = '1') {
            proxy_cache mobile;
        }
        if ($iphone_request = '') {
            proxy_cache site;
        }
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_pass http://real-site.tld;

    However, nginx gives an error, stating proxy_cache can't be used in an if-structure. Any other way to serve from a different cache depending on the browser? Thanks, Tuinslak
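
    Since proxy_cache cannot be set inside an if block, the usual workaround is to keep a single cache zone and fold the device class into proxy_cache_key via a map block; a hedged sketch (the regex is abbreviated and the zone name is a placeholder):

        # in the http {} context
        map $http_user_agent $device {
            default                                 desktop;
            "~*(iPhone|iPod|Android|BlackBerry)"    mobile;
        }

        # in the server/location block
        proxy_cache      site;
        proxy_cache_key  "$device$scheme://$host$request_uri";
        proxy_pass       http://real-site.tld;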

    Read the article

  • SQL Distributed Reporting Services Setup

    - by Praesagus
    I am setting up two servers: one IIS 7 and one SQL Server 2008. I need to use Reporting Services. What is the best way to set up Reporting Services so that my IIS box can serve the reports? I'm sure this is not an unusual configuration, but I'm having a lot of trouble finding an answer, probably because I am using the wrong terminology. Also, does this configuration require two SQL licenses (one for each server)? This sounds like it needs a lengthy explanation, so links, or even the correct terminology so I can find the answers myself, would be much appreciated. Thanks

    Read the article

  • How does NFS read cache work on Debian?

    - by Ztyx
    I am planning to use NFS to serve out many small files. They will be read very often, so client-side caching is crucial. Does NFS handle this? Is there a way to increase the client-side caching? Or should I look at another solution? Syncing periodically with rsync or unison is not an option, since the files are modified on the client side from time to time.
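
    The NFS client already caches file data in the kernel page cache and caches attributes for a short window; the attribute timeout can be stretched on the mount, and FS-Cache (cachefilesd) adds a persistent disk-backed cache. A hedged example (the export path is a placeholder):

        # hold cached attributes for 10 minutes instead of the default few seconds
        mount -t nfs -o actimeo=600 server:/export /mnt/export

        # optional persistent client-side cache (requires the cachefilesd package)
        mount -t nfs -o actimeo=600,fsc server:/export /mnt/export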

    Read the article

  • server caching problem on ASP.NET MVC page

    - by Rita
    Hi, I have a server caching problem on ASP.NET MVC pages. The scenario is like this: I have two applications, (1) an external Website and (2) an internal Adminsite, both pointing to the same database. There is a page called EditProfile on the external Website where a registered customer can update his profile information such as first name, last name and address. Similarly, there is equivalent functionality on the internal Adminsite on a page called CustomerProfile, where the site admin can update all these fields. When the profile information is updated from the Adminsite, those updates are not reflected back on the Website. I tried restarting the Website in IIS and that didn't help. When I both restart the Website in IIS and open a new browser, the updates do show up. I am wondering how I can get out of this caching problem without restarting the site and opening a new browser window every time. Are there any IIS settings that could help? This caching is happening only on a couple of tables, and all the updates are showing up in the database. Appreciate your responses. Thanks

    Read the article

  • Mod disk_cache: permanently caching images and disabling recurring header updates

    - by user135532
    I am trying to get mod_disk_cache to permanently cache images retrieved from an image server on the web server using ProxyPass. While the image is retrieved correctly from the image server and is served from the cache on further requests, the web server still calls the image server and causes the cached headers to be updated. Because of load concerns I need to never call the image server again for a specific URL after it has been cached once, or to extend the refresh time as far as possible. The web server is IHS 7.0. The modules are mod_disk_cache.so, mod_cache.so and mod_proxy.so, version 2.2.8.0. The following is from my httpd.conf:

        ProxyPass /webserver/media/images/ http://imageserver.com/ws/media/images/

        # Caching pictures
        <IfModule mod_cache.c>
          <IfModule mod_disk_cache.c>
            CacheDefaultExpire 2628000
            #CacheDisable
            CacheEnable disk /webserver/media/images/
            CacheIgnoreCacheControl On
            CacheIgnoreHeaders Cookie Referer User-Agent X-Forwarded-For X-Forwarded-Host X-Forwarded-Server Accept-Language Accept Host
            CacheIgnoreNoLastMod On
            CacheIgnoreQueryString Off
            #CacheIgnoreURLSessionIdentifiers
            CacheLastModifiedFactor 10000000.1
            #CacheLock on
            #CacheLockMaxAge 5
            #CacheLockPath
            CacheMaxExpire 1576800
            CacheStoreNoStore On
            CacheStorePrivate On
            CacheDirLength 2
            CacheDirLevels 3
            CacheMaxFileSize 1000000
            CacheMinFileSize 1
            CacheRoot c:/cacheroot2
          </IfModule>
        </IfModule>
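
    Two hedged observations, not from the original post: CacheMaxExpire (1576800 s, roughly 18 days) appears to cap how long an entry is served before the proxy revalidates against the origin, and the cleanest long-term fix is usually to have the image server send far-future Expires/Cache-Control headers, for example via mod_expires.

        # raise the freshness ceiling on the caching web server
        CacheMaxExpire 31536000

        # or, on the image server, mark images as cacheable for a year
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/jpeg "access plus 1 year"
            ExpiresByType image/png  "access plus 1 year"
        </IfModule>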

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to Apache 2 as the default backend (or nginx; it's Apache in this local test), and to the Node.js app if the URL has socket.io in the request AND a given domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   #if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     #if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   #forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   #forward to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it go to Apache when it should go to the Node.js app that serves those files? The weird thing is that if I access the socket.io file directly, after refreshing a few times it loads, so I suppose it is "caching" the forwarding for the client when the first request reaches the Apache server. Any suggestion on how this can be solved, or on what I can try or look at?
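
    One thing worth checking (an editorial note, not from the thread): HAProxy does not cache responses, and the is_soio ACL tests the URL with url_dom, which looks for a domain-like token rather than the request path, so /socket.io/socket.io.js may never match it reliably. Matching the path prefix is the usual approach; a hedged variant of the frontend rules:

        acl is_soio path_beg /socket.io          # match the request path, not the host part
        acl is_chat hdr_dom(host) -i chaturl.com
        use_backend chat_backend if is_chat is_soio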

    Read the article

  • Windows 7 not booting after failed SRT (SSD caching) install

    - by david
    This is a fairly new computer, only about a month old: i7 2700K, Z68 motherboard, with a 1.5 TB WD Black HD and a 128 GB Crucial M4 SSD. I followed the instructions for setting up SSD caching: the SATA controller was set to RAID, I installed the Intel software and enabled acceleration, and it said everything went fine. But when I went to reboot, I received the lovely "Reboot and Select proper Boot device" error message. I checked the BIOS, and it was booting from the correct HD (I tried the only other option anyway just in case; it was the ~50-odd GB of unformatted space left on the SSD). After that I entered the RAID utility (Ctrl-I at boot), removed the acceleration and deleted the RAID array (because it was being used as a cache, this was non-destructive). Still no boot. So I reinstalled Win7 directly on the SSD, booted, and checked the HDD to make sure it hadn't been wiped. It hadn't; all the files were still there, including all the Windows stuff. I backed up my data to an external drive just in case, but I'd really like to get this install booting again. I trawled the web a bit, and have tried entering recovery mode and using bootrec.exe and bootsect.exe to fix it, but to be honest I'm not sure what I'm doing with those. My question is basically: how do I make my hard drive bootable again?
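
    The usual sequence from the Windows 7 install DVD's recovery command prompt is sketched below (editorial, not from the original post); if /rebuildbcd cannot see the installation, bcdboot C:\Windows recreates the boot files from the existing install.

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd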

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point is not updated anymore. The code is always generated dynamically from a database with 10 million entries, so each page rendering looks up, say, 60 or 70 of those entries and then renders the page. For those expired pages I want to use a caching system that is VERY simple (just insert a record with the rendered HTML and, if needed, remove it). I tried doing it file-based, but the check for the existence of a file and then passing it through PHP to actually render it seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100 KB). It would hold about 150,000 such records (for now, at least). My question is: would it be faster to let MySQL do the lookup of the file and the passing to PHP, or is the file-based approach faster? The lookup code for the file-based version looks like this:

        $page = @file_get_contents(getCacheFilename($pageId));
        if ($page != NULL) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }

    which does one lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA RAID 1, and the mysql daemon can grab up to 2.5 GB of memory (I have a proxy running too, eating the rest of the 16 GB of the machine). In general the disk is quite busy already. My reason for not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eAccelerator to cache the code too). Any pointer as to what direction I should go would be greatly welcome. Thanks!

    Read the article

  • Best distributed version control system?

    - by afsharm
    I have been using SourceSafe and Subversion for years, but recently decided to choose a distributed version control system (or decentralized source control management) like Git, Mercurial or Bazaar, or any other. So which of them is best? I'm a Windows/Visual Studio user.

    Read the article

  • Distributed sequence number generation?

    - by Jon
    I've generally implemented sequence number generation using database sequences in the past, e.g. using the Postgres SERIAL type: http://neilconway.org/docs/sequences/ I'm curious, though, how to generate sequence numbers for large distributed systems where there is no database. Does anybody have any experience with, or suggestions for, a best practice for achieving sequence number generation in a thread-safe manner for multiple clients?

    Read the article

  • Distributed Transactions in SQL Server 2005

    - by AJM
    As part of a transaction I'm modifying rows in tables via a linked server, so I have to specify "SET XACT_ABORT ON" in my sproc, otherwise it won't execute. Now I'm noticing that SCOPE_IDENTITY() is returning NULL, which is presumably something to do with the distributed transaction scope? Does anyone know why, and how to resolve it?
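
    A common workaround when SCOPE_IDENTITY() misbehaves (a hedged sketch; the table and column names are hypothetical) is to capture the generated keys with an OUTPUT clause instead:

        DECLARE @new_ids TABLE (id INT);

        INSERT INTO dbo.Orders (CustomerName)
        OUTPUT INSERTED.OrderId INTO @new_ids (id)
        VALUES ('Contoso');

        SELECT id FROM @new_ids;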

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >