Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.

Page 145/286 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • Need to install libstdc++5 on Ubuntu 9.10

    - by xain
    Hi, I'm running Ubuntu 9.10, which has gcc 4 (libstdc++6) installed, and I need to install a program that requires the libstdc++5 that comes with gcc 3. I don't see gcc-3 listed when I run "apt-cache search", so: Is it necessary to install gcc-3 to get libstdc++5? If so, where can I get it? Is there a way to install just libstdc++5? Would a soft link to libstdc++6 do, or is that not recommended? Thanks
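
    A symlink will not help here, because libstdc++.so.5 and libstdc++.so.6 are ABI-incompatible, so the old runtime really is needed. A minimal sketch of the usual approach, assuming only the runtime library (not the whole gcc-3.x toolchain) is required; treat the archive.ubuntu.com hint as an assumption to verify, not a tested URL:

      apt-cache search libstdc++5      # confirm whether the runtime package is still indexed
      sudo apt-get install libstdc++5  # if it is still in universe, this is enough
      # otherwise, fetch the libstdc++5 .deb (built from the gcc-3.3 source package) from an
      # older release on archive.ubuntu.com and install it with:  sudo dpkg -i libstdc++5_*.deb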

    Read the article

  • What is the best way to archive (spider) a site that is going to be removed?

    - by Guy
    Three different blogs that I read have recently announced that they are going to be discontinued and removed from the web. Although the archived pages will probably be in Google's cache for a few weeks after they've gone, and some of the pages will be in the Wayback Machine, I'd like to archive those sites to my hard disk for future reference. What is the best way to do this? Is there any software that transforms a blog (e.g. Blogspot) into a chronological PDF?
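
    A minimal sketch using wget's mirroring mode (the blog URL is a placeholder): it fetches the pages along with the images and CSS needed to view them offline and rewrites links to point at the local copies. Tools such as HTTrack do roughly the same job with a GUI, though neither produces a chronological PDF by itself.

      wget --mirror --convert-links --page-requisites --html-extension \
           --wait=1 --random-wait -e robots=off \
           http://example.blogspot.com/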

    Read the article

  • Is Varnish -> Nginx -> Apache a good idea?

    - by Zoran Zaric
    Hey, I'm thinking about the architecture for a new web server. Would having Varnish as a cache in front of Nginx (as a reverse proxy that also serves static files), in front of Apache for all the heavy lifting, be a good idea? I'm going to run PHP and Ruby on Rails applications. Will there be too much overhead in passing PHP requests to Apache through two other processes? Thanks a lot!

    Read the article

  • Best Postfix spam RBL policy weight daemon?

    - by TRS-80
    I just heard about policyd-weight, so I did an apt-cache search policyd, which returns three options: policyd-weight, postfix-policyd and postfwd. Which one is the best, and do you have any tips on setting them up? Our current setup is whitelister plus postgrey to greylist RBL'd hosts, then fail2ban them for 10 minutes if they have 10 failures, followed by content filtering (Kaspersky Anti-Spam). The content filtering is pretty good, but there's still a lot of spam that gets through the RBL greylisting.
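
    For reference, a minimal sketch of how policyd-weight is usually wired into Postfix (12525 is its default port; the exact position in the restriction list is an assumption to adapt to the existing whitelister/postgrey setup):

      sudo apt-get install policyd-weight
      sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525'
      sudo /etc/init.d/postfix reload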

    Read the article

  • HP Smart Array p400i with Intel X25-M 160 SSD

    - by user67304
    I have a pair of Intel X25-M 160GB SSDs in an HP DL360 G5 with a P400i Smart Array with 512MB BBWC. The disk performance I am getting on this box and another identical one does not come close to matching the same two drives running through a cheap 3ware RAID card. Any ideas? I have played with the cache settings, but nothing allows me to get the same results. It seems like the Smart Array controller is the bottleneck.
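
    A minimal sketch of the cache-related knobs worth checking with HP's hpacucli tool (the slot number is an assumption; confirm it with the first command). Note that enabling the physical drive write cache trades power-loss safety for speed, since the BBWC only protects the controller's own cache:

      hpacucli ctrl all show config detail          # confirm slot number and current cache policy
      hpacucli ctrl slot=0 modify drivewritecache=enable
      hpacucli ctrl slot=0 modify cacheratio=25/75  # shift the BBWC toward write caching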

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OS X and Windows clients. When the OS X clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior also seem to depend on the version of OS X the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OS X client and create a new directory on it, the directory is created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server: [global] create mask = 0666 directory mask = 0777 [share] force directory mode = 0775 force create mode = 0660 I was under the impression that these settings should make sure that directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder has the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OS X client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming this is caused by the Windows ACLs we're using on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect anyone to do. This is the complete smb.conf:

      [global]
      encrypt passwords = yes
      dns proxy = no
      strict locking = no
      read raw = yes
      write raw = yes
      oplocks = yes
      max xmit = 65535
      deadtime = 15
      display charset = LOCALE
      max log size = 10
      syslog only = yes
      syslog = 1
      load printers = no
      printing = bsd
      printcap name = /dev/null
      disable spoolss = yes
      smb passwd file = /var/etc/private/smbpasswd
      private dir = /var/etc/private
      getwd cache = yes
      guest account = nobody
      map to guest = Bad Password
      obey pam restrictions = Yes
      # NOTE: read smb.conf.
      directory name cache size = 0
      max protocol = SMB2
      netbios name = freenas
      workgroup = COMPANY
      server string = FreeNAS Server
      store dos attributes = yes
      hostname lookups = yes
      security = user
      passdb backend = ldapsam:ldap://ldap.company.local
      ldap admin dn = cn=admin,dc=company,dc=local
      ldap suffix = dc=company,dc=local
      ldap user suffix = ou=Users
      ldap group suffix = ou=Groups
      ldap machine suffix = ou=Computers
      ldap ssl = off
      ldap replication sleep = 1000
      ldap passwd sync = yes
      #ldap debug level = 1
      #ldap debug threshold = 1
      ldapsam:trusted = yes
      idmap uid = 10000-39999
      idmap gid = 10000-39999
      create mask = 0666
      directory mask = 0777
      client ntlmv2 auth = yes
      dos charset = CP437
      unix charset = UTF-8
      log level = 1

      [share]
      path = /mnt/zfs0
      printable = no
      veto files = /.snap/.windows/.zfs/
      writeable = yes
      browseable = yes
      inherit owner = no
      inherit permissions = no
      vfs objects = zfsacl
      guest ok = no
      inherit acls = Yes
      map archive = No
      map readonly = no
      nfs4:mode = special
      nfs4:acedup = merge
      nfs4:chown = yes
      hide dot files
      force directory mode = 0775
      force create mode = 0660

    Read the article

  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi gods, we have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules (order, name, action, protocols, from/listener, to, condition): 1. traffic, ALLOW, all outbound, all networks, all networks, all users; 2. FTP, ALLOW, FTP Server, EXTERNAL/INTERNAL/Local host, 10.1.1.1. We have to ban a few websites (like Facebook, YouTube, etc.), so we made a new rule: 0. banned, DENY, HTTP, internal, denied pages, all users. In the denied pages we have the *.facebook.com domain set. After we enabled this rule, the entire internet slowed down. The banning rule works well, redirecting to an internal site, but the other sites suffer: if I open a page it normally takes 3-10 seconds to load, but after this rule it takes 2-4 minutes. In the monitoring / logging menu we get a few FAILED CONNECTION ATTEMPT entries like: Log type: Web Proxy (Forward) Status: 304 Not Modified Rule: All local traffic Source: Internal ( 10.1.1.1:0 ) Destination: External ( 172.24.28.22:3128 ) Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg Filter information: Req ID: 17270b72 Protocol: http User: anonymous Additional information Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072... Object source: Verified Cache Processing time: 9047 Cache info: 0x18801002 MIME type: - In the event log we get a few entries like: Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d If I type netstat -o -n -a | findstr 0.0:80 I get: tcp 0.0.0.0:80 0.0.0.0:0 LISTEN 4 udp 0.0.0.0:8031 *.* 2780 udp 0.0.0.0:8082 *.* 2780 Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. In the XAMPP port check menu I see: Service: Apache (http), Port: 80, Status: Process: System. Maybe this is the problem? I don't know what I should do now... Thank you, folks.

    Read the article

  • Jetty interprets JETTY_ARGS as file name

    - by Lena Schimmel
    I'm running Jetty (version "null 6.1.22") on Ubuntu 10.04. It's running fine until I need JSP support. According to several blog posts I need to set JETTY_ARGS to OPTIONS=Server,jsp. However, if I put this into /etc/default/jetty: JETTY_ARGS=OPTIONS=Server,jsp and restart Jetty via /etc/init.d/jetty stop && /etc/init.d/jetty start, it reports success but does not accept connections. I noticed that it logs something to /usr/share/jetty/logs/out.log:
    2012-09-11 11:19:05.110:WARN::EXCEPTION java.io.FileNotFoundException: /var/cache/jetty/tmp/OPTIONS=Server,jsp (No such file or directory)
      at java.io.FileInputStream.open(Native Method)
      at java.io.FileInputStream.<init>(FileInputStream.java:137)
      at java.io.FileInputStream.<init>(FileInputStream.java:96)
      at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:87)
      at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:178)
      at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:630)
      at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:189)
      at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:776)
      at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:741)
      at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
      at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1208)
      at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:525)
      at javax.xml.parsers.SAXParser.parse(SAXParser.java:392)
      at org.mortbay.xml.XmlParser.parse(XmlParser.java:188)
      at org.mortbay.xml.XmlParser.parse(XmlParser.java:204)
      at org.mortbay.xml.XmlConfiguration.<init>(XmlConfiguration.java:109)
      at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:969)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:616)
      at org.mortbay.start.Main.invokeMain(Main.java:194)
      at org.mortbay.start.Main.start(Main.java:534)
      at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:616)
      at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)
    That is, whatever I put into JETTY_ARGS, it interprets it as a filename inside /var/cache/jetty/tmp/ and tries to parse that file as XML (or does it parse some other XML and try to read that file as a DTD? I'm not sure). This doesn't seem to make any sense to me, especially since that directory is entirely empty. I've verified this with several other strings, not only OPTIONS=Server,jsp.

    Read the article

  • Authenticate Teamcity against LDAP using StartTLS

    - by aseq
    I am running version 6.5 of TeamCity on a Debian Squeeze server, and I use OpenLDAP to authenticate users. I know I can use LDAPS to get encrypted password authentication; however, this has been deprecated by the OpenLDAP developers, see: http://www.openldap.org/faq/data/cache/605.html I would like to know if there is a way to configure LDAP authentication in TeamCity to use StartTLS on port 389. I can't find anything about it here: http://confluence.jetbrains.net/display/TCD65/LDAP+Integration Or here: http://therightstuff.de/2009/02/02/How-To-Set-Up-Secure-LDAP-Authentication-With-TeamCity.aspx

    Read the article

  • Deleting a large number of files on Linux eats up CPU

    - by Sanjay
    I generate more than 50GB of cache files on my RHEL server (the typical file size is 200KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours. However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way I can keep the system load under control during the deletion? I tried using nice -n19 rm -rf * but that doesn't help with the system load.
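
    A minimal sketch of the usual workaround, assuming the CFQ I/O scheduler is in use (ionice's idle class has no effect on other schedulers) and that the cache path is a placeholder; the load during deletion is mostly I/O wait, which CPU niceness alone does not address:

      ionice -c3 nice -n19 find /path/to/cache -type f -delete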

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:
    Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.
    Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
    Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.
    Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count greater than 1, but that's not a problem since I have only a few dozen such files in my use case.
    Adjust some settings in /proc or sysctl so that inodes are locked in system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?
    Use a filesystem which has an online defragmentation tool which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
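
    Regarding the "lock inodes in memory" idea, a minimal sketch assuming the goal is only to stop dentry and inode caches from being reclaimed once they have been read (it does nothing for the slow first pass, and setting the knob to 0 can starve the system of reclaimable memory under pressure):

      sysctl -w vm.vfs_cache_pressure=0      # never reclaim dentry/inode caches
      ls -laR /media/myfs > /dev/null        # first (slow) pass populates the caches
      time ls -laR /media/myfs > /dev/null   # later passes should be served from memory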

    Read the article

  • Alternative to the DownThemAll extension for Firefox

    - by Nrew
    Do you know of any good alternative to the Firefox extension DownThemAll? When I try to download a second time (after the first file has been downloaded) from Megaupload, I get a temporary error which is not really temporary, because it lasts until you clear the cache/history.

    Read the article

  • Which is a better use of my SSD Drive [on hold]

    - by RS Conley
    I have the choice of setting up a system with two SSD drives in RAID 1 as my boot drive for Windows 7 64-bit, with the Program Files and user folders moved to a second, regular hard drive also configured using RAID 1. Or I can set up a single SSD drive (120 GB or 256 GB) as a cache drive using Intel Rapid Storage Technology, combined with two normal hard drives configured as RAID 1. Which setup would have the faster drive performance over the life of the computer?

    Read the article

  • What is preventing me from upgrading / installing Firefox extensions?

    - by Josh
    I am trying to install a Firefox extension into Firefox 3.6.13 under OS X 10.5.8, and I keep getting an error message: Firefox could not install the file at because: Download error -228 I have read this and found it unhelpful. My cache is 500 MB and ~/Library/Caches/TemporaryItems is writable: [jnet@Stan ~]$ ls -la ~/Library/Caches/TemporaryItems total 16 drwx------ 5 jnet jnet 170 Jan 27 20:51 . Any idea how I can correct this?

    Read the article

  • Sarg report error

    - by amyassin
    I have a proxy server that runs Ubuntu Server 11.10 and Squid 2.7.STABLE9. I installed sarg (version 2.3.1 Sep-18-2010) with an ordinary apt-get install to generate reports, and added a cron job to generate the current day's report every 5 minutes (overwriting the one from 5 minutes earlier): */5 * * * * /root/proxy_report.sh The content of /root/proxy_report.sh is: #!/bin/bash /usr/bin/sarg -nd `date +"%d/%m/%Y"` > /dev/null 2>&1 I added another cron job to generate a full report every hour at :32 (so as not to collide with the 5-minute job): */32 * * * * /root/proxy_report_full.sh The content of /root/proxy_report_full.sh is: #!/bin/bash /usr/bin/sarg -n > /dev/null 2>&1 And I added a small script to remove yesterday's full report (the full report ending yesterday, which won't be overwritten by today's new full report) in /etc/rc.local to run at startup: /usr/bin/rm_yesterday.sh &>> /var/log/rm_yesterday Where /usr/bin/rm_yesterday.sh is: #!/bin/bash find /var/www/sarg/ | grep `date -d Apr1 +"%Y%b%d"`-* | grep -v `date +"%Y%b%d"` | xargs rm -rf * Apr1 is the starting date of the proxy... ** I placed it in /usr/bin so it is mounted early at startup... That arrangement went OK for about a month and a half, except for one time when I noticed some errors and reports weren't generated; I fixed that by adding an offset (the two minutes in the 32 of the second cron job). However, it then stopped generating reports altogether. Trying to generate one manually gives the following error: root@proxy-server:~# sarg -n SARG: getword_atoll loop detected after 3 bytes. SARG: Line="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com" SARG: Record="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com" SARG: searching for 'x2f' SARG: getword backtrace: SARG: 1:sarg() [0x8050a4a] SARG: 2:sarg() [0x8050c8b] SARG: 3:sarg() [0x804fc2e] SARG: 4:/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x129113] SARG: 5:sarg() [0x80501c9] SARG: Maybe you have a broken date in your /var/log/squid/access.log file When I looked in the /var/log/squid/ directory, I noticed that it contains some rotated logs: root@proxy-server:~# ls /var/log/squid/ access.log access.log.1 cache.log cache.log.1 store.log store.log.1 So maybe sarg installed logrotate with it? Or does it come with standard Ubuntu? I don't remember installing it manually. The question is: what could have gone wrong? Does it have something to do with rotating the log? How can I trace the error and start generating reports again?
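
    The sarg error points at a record whose first field is "154" rather than a full UNIX timestamp, which suggests a truncated or corrupted log line rather than rotation as such. A minimal sketch for locating such records (the pattern assumes the native squid access.log format, where the first field looks like 1361728512.345); sarg can also be pointed at a specific, possibly rotated, log with its -l option:

      awk '$1 !~ /^[0-9]+\.[0-9]+$/ {print FILENAME ": " NR ": " $0}' /var/log/squid/access.log
      sarg -nd `date +"%d/%m/%Y"` -l /var/log/squid/access.log.1   # run against the rotated log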

    Read the article

  • Problems setting up Single Sign-On using Kerberos authentication

    - by user1124133
    I need to set up authentication against Active Directory, using Kerberos, for a Ruby on Rails application. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added LoadModule auth_kerb_module modules/mod_auth_kerb.so In /etc/krb5.conf I added the following configuration: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EU.ORG.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h forwardable = yes [realms] EU.ORG.COM = { kdc = eudc05.eu.org.com:88 admin_server = eudc05.eu.org.com:749 default_domain = eu.org.com } [domain_realm] .eu.org.com = EU.ORG.COM eu.org.com = EU.ORG.COM [appdefaults] pam = { debug = true ticket_lifetime = 36000 renew_lifetime = 36000 forwardable = true krb4_convert = false } When I test kinit validuser and enter the password, authentication is successful. klist returns: Ticket cache: FILE:/tmp/krb5cc_600 Default principal: [email protected] Valid starting Expires Service principal 02/08/13 13:46:40 02/08/13 23:46:47 krbtgt/[email protected] renew until 02/09/13 13:46:40 Kerberos 4 ticket cache: /tmp/tkt600 klist: You have no tickets cached In the application's Apache configuration I added:
    <IfModule mod_auth_kerb.c>
      <Location /winlogin>
        AuthType Kerberos
        AuthName "Kerberos Loginsss"
        KrbMethodNegotiate off
        KrbAuthoritative on
        KrbVerifyKDC off
        KrbAuthRealms EU.ORG.COM
        Krb5Keytab /home/crmdata/httpd/apache.keytab
        KrbSaveCredentials off
        Require valid-user
      </Location>
    </IfModule>
    I restarted Apache. Now some tests: When I try to access the application from Win7, I get a pop-up message box with the text: Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection) When I enter valid credentials, my application opens successfully and all works fine. Questions: Is it OK that such windows pop up for the user? If I use NTLM authentication there is no such pop-up. I checked IE's Internet Options and 'Enable Integrated Windows Authentication' is checked. Why does IE try to send the username and password to the application's Apache? If I understand correctly, Windows itself should authenticate via Active Directory using the Kerberos protocol. When I try to access the application from Win7 and enter incorrect credentials into the pop-up message box, the application says authentication failed (this is OK). In the Apache error log I see: [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database But then I don't get a chance to enter valid credentials; only when I restart IE do I get the pop-up box again. What could be incorrect or missing in my Kerberos setup? I read in a blog post that something probably needs to be done on the Active Directory side. What exactly?
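
    The Basic-auth warning is expected with KrbMethodNegotiate off, because mod_auth_kerb then falls back to prompting for a password (KrbMethodK5Passwd) instead of accepting a ticket. A minimal sketch of the Negotiate-enabled variant, assuming the keytab contains an HTTP/server-fqdn service principal mapped to an account in AD and that IE treats the site as part of the Local intranet zone (those prerequisites, not the directives below, are the usual Active Directory-side work):

      <IfModule mod_auth_kerb.c>
        <Location /winlogin>
          AuthType Kerberos
          AuthName "Kerberos Login"
          KrbMethodNegotiate on
          KrbMethodK5Passwd off
          KrbVerifyKDC off
          KrbAuthRealms EU.ORG.COM
          Krb5Keytab /home/crmdata/httpd/apache.keytab
          Require valid-user
        </Location>
      </IfModule>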

    Read the article

  • Custom URL rewrite stops working after 20 seconds?

    - by 101224863727594634919
    Hi all, I have a simple question about using the custom URL rewrite module described at http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx. After a period of time the redirects stop working. When I trace the non-working requests I find: URL_CACHE_ACCESS_END PhysicalPath URLInfoFromCache true URLInfoAddedToCache false ErrorCode 0 ErrorCode The operation completed successfully. (0x0) Is there any option to disable URL cache access for the website? Thanks in advance.

    Read the article

  • What virtual machine software should a Dell PowerEdge SC 1425 install?

    - by Nguyen Khanh Huy
    I'm setting up our local server and want to install a virtual machine host, but it seems VMware ESXi is not suited to our server. Server: Dell SC 1425; CPU: 2x Xeon 3.2GHz (800MHz bus, 2MB L2 cache); RAM: 6GB DDR ECC 266; Hard disk: 2x Hitachi SATA 1TB; RAID: Dell CERC 2s (RAID 0, 1); NIC: 2x Broadcom 1Gb/s. I'm wondering if you're familiar with this area and have any ideas about VM software for our server. I just want to use the server for a few purposes (web hosting, Subversion, and to experiment with some server OSes). Thank you for helping.

    Read the article

  • For the .com TLD, is there any known change tracking for WhoIs information?

    - by makerofthings7
    A client exposed information regarding what will become a controversial website in the domain's WHOIS record after she purchased it at auction. Is there any WHOIS cache that will detect, save, and share the old WHOIS information for that domain after it has changed? (It's a website to provide birth control in countries where it is banned, and she may receive death threats for the information shared on it, which is obviously something she wishes to avoid.)

    Read the article

  • How can I set up email alerts for disk failures on a Windows Server 2012 box?

    - by Leo
    I have a Windows Server 2012 box with 3 storage spaces set up, each containing a mirrored pair of 2TB drives. What is the best way to set up alerting so that I receive an alert when a physical disk fails? Ideally I would like these alerts to be sent via email to a pre-defined address. The current server setup is as follows: Intel Core i7 2600k 3.4GHz Socket 1155 8MB Cache; Asrock H77 PRO4/MVP Socket 1155 VGA DVI HDMI 7.1 Channel Audio ATX Motherboard; 16GB RAM; 1 x 60GB SSD (OS); 6 x 2TB SATA III 7200 HDD (DATA)

    Read the article

  • Fail-over caching reverse proxy

    - by sybreon
    Is there a way to configure Varnish, or any other caching reverse proxy, to serve pages from its cache when the back-end fails? At the moment, if the back-end goes down, a 503 Service Unavailable error is returned to the browser. I would prefer visitors to see a cached version rather than an error page while the back-end is being fixed. My setup: [varnish (public ip)] <=== [router] <=== [web server (private ip)] PS: I have only one back-end web server.
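
    A minimal sketch of Varnish "grace mode", which is the usual answer to this (Varnish 2.x/3.x VCL syntax; the grace periods are placeholders, and it assumes the backend definition includes a health probe so that req.backend.healthy reflects its state). Objects are kept in cache past their TTL and served stale while the backend is marked sick:

      sub vcl_recv {
          if (req.backend.healthy) {
              set req.grace = 30s;
          } else {
              set req.grace = 24h;   # accept stale objects while the backend is down
          }
      }
      sub vcl_fetch {
          set beresp.grace = 24h;    # keep expired objects around this long
      }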

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >