Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 107/232 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • Best Solution for Load Balancing NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America that receives 30-50 GB of updated data every day, and direct access from our Europe and Asia branches is very slow. I'm therefore trying to set up two replica servers on those continents. I'm currently using rsync, but I wonder if there is a better solution that acts more like a distributed RAID, allowing users to access files transparently whether they are synced or not, with requests dispatched to the remote server when a file has not yet been synced. I'm now looking into DRBD, but it doesn't seem to offer that kind of auto-dispatching of requests. Does anyone know of a better solution?
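
    A minimal sketch of the kind of one-way rsync job described, with placeholder host and paths (not from the question):

        # mirror the central share to a regional replica; -a preserves
        # attributes, -z compresses over the WAN, --partial survives drops
        rsync -az --delete --partial filer-na:/export/share/ /export/share/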

    Read the article

  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I'm currently using Munin to capture this and other data for the entire server, but I can't seem to find a way to break it down by vhost. This link describes a module called mod_watch which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ The file listed as compatible with Apache 2.x is reported to have problems with missing vhosts and with reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this, it should be easy enough to write a new Munin plugin. Edit: What I'd really like to see is something similar to the Apache server-status scoreboard page, with the number of connections / requests separated by virtual host. This would let me check which vhost may be experiencing a spike in traffic in real time, and would also provide the data needed for a Munin module (or some alternative performance monitoring / analysis system).
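
    Short of a per-vhost scoreboard, one low-tech source of per-vhost numbers that a Munin plugin could tail is a dedicated access log per virtual host; a sketch (names and paths are illustrative):

        <VirtualHost *:80>
            ServerName example-a.com
            # separate log so requests can be tallied per vhost
            CustomLog /var/log/apache2/example-a.access.log combined
        </VirtualHost>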

    Read the article

  • spawn-fcgi / FastCGI PHP crashes without traces in logs, on Gentoo

    - by user39046
    Hello, I recently moved from Apache to an nginx/FastCGI solution. I had it running on a Fedora system with no problems, but since I moved everything to Gentoo, the spawn-fcgi / FastCGI PHP daemon dies, and I can't find any error reports in /var/log/messages, so I don't know why this happens. I've noticed that FastCGI on Gentoo is somewhat different from the Fedora setup, with different conf files and init.d startup scripts. Can someone help me make it more stable? The number of requests isn't any different from what I had on Fedora, and I use the default conf that comes with the distro... and after a few hours it simply dies. Thank you very much

    Read the article

  • Network issue with Ubuntu 8.04 in VMware ESX

    - by hoberion
    OK, this is really pissing me off. I have an Ubuntu 8.04 instance running on VMware (ESX) which decided, after a reboot, to stop resolving DNS requests. I also can't connect to it using SSH, although I can ping the server, and it really is that server (when I shut down the server, the ping stops too). Stuff I tried:
      - reboot again :)
      - nslookup serverip
      - setting networking to DHCP
      - offering some cute kittens to Lucifer
      - removing the virtual NIC and adding another (to get a different MAC)
      - migrating the instance to another ESX host
      - drinking 20 cups of espresso
      - stopping all services
      - running dnsmasq on another server and connecting to that DNS
      - tcpdumping
      - disabling IPv6
    Symptoms:
      - can't resolve anything; nslookup just says "no servers found..." although I can ping the servers
      - traceroute to the gateway doesn't work (even with traceroute -4 -n gatewayip)
      - colleagues laughing at me
    Any thoughts?
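
    A few checks that narrow this kind of failure down (the resolver address is illustrative, not from the question):

        cat /etc/resolv.conf              # which resolver is configured?
        dig @208.67.222.222 example.com   # query a specific resolver directly
        dig +trace example.com            # walk from the roots, skipping it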

    Read the article

  • Will using Apache's ProxyPass directive on persistent Ajax connections alleviate the connection limit error?

    - by naurus
    I've got some JavaScript that keeps a persistent Ajax connection open for each client, and I know that this can cause some serious issues for Apache, but not for lighttpd. One thing I learned while researching how to get around this was how to use the ProxyPass directive to send all requests for a certain directory to another address:port combination (without letting the user know). What I want to know is: if I put my PHP in a directory proxied to lighttpd and call that with JavaScript, will this still count against my Apache connection limit? The reason I wonder is that Apache is still serving the content, just not processing it. It seems to me that this would still be a connection. Thanks
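
    For reference, the kind of mapping described; the port and path are assumptions:

        # hand one directory off to lighttpd; Apache serves everything else
        ProxyPass        /ajax/ http://127.0.0.1:8081/ajax/
        ProxyPassReverse /ajax/ http://127.0.0.1:8081/ajax/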

    Read the article

  • Load balancing and HTTPS strategies

    - by Dan
    I am faced with the following problem: servers get saturated because the current load-balancing strategy is based on client IP, and some corporate clients access our servers from behind large proxies, so all of their clients appear to our load balancer with the same IP. I think we are using a hardware load-balancing device (I can investigate further if necessary). We need to maintain session affinity (the site is built in ASP), so all requests with the same IP get routed to the same node. Since all communication goes over HTTPS, no request data (like a session id) is available to the balancer as a client discriminator. Is there a way to use some data other than the IP to distinguish between clients, so that clients coming from the same IP can be routed to different nodes? Note: the traffic between the balancer and the nodes must stay safe (encrypted).
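
    For contrast with the hardware device, a software-side sketch of the usual answer: terminate SSL at the balancer so a cookie becomes visible for affinity, then re-encrypt toward the nodes. This is an Apache mod_proxy_balancer flavored sketch (needs mod_headers); node names, certificate paths, and the cookie are all assumptions:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site.pem
            SSLCertificateKeyFile /etc/ssl/private/site.key
            SSLProxyEngine on

            <Proxy balancer://nodes>
                BalancerMember https://node1.internal route=node1
                BalancerMember https://node2.internal route=node2
                ProxySet stickysession=ROUTEID
            </Proxy>

            # issue the affinity cookie when a client is first assigned a node
            Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
            ProxyPass        / balancer://nodes/
            ProxyPassReverse / balancer://nodes/
        </VirtualHost>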

    Read the article

  • MySQL is killing the server's IO

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (~2-3k requests per second, running on Gigenet cloud). The database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow, according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.
    Specs:
      - Debian Lenny 64-bit
      - ~12 GHz (6 cores) CPU, 7520gb RAM, 160 GB disk
      - Kernel: 2.6.32-4-amd64
      - mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))
    Other software:
      - vBulletin 3.8.4
      - memcached 1.2.2
      - PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
      - lighttpd/1.4.28 (ssl) - a light and fast webserver
    PHP and vBulletin are configured to use memcached. MySQL settings:
        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96
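
    Before restructuring anything, it can help to confirm which queries are doing the disk work; a my.cnf sketch using 5.1-era option names (path illustrative):

        slow_query_log                = 1
        slow_query_log_file           = /var/log/mysql/slow.log
        long_query_time               = 1
        log_queries_not_using_indexes = 1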

    Read the article

  • Blocking RDP connections from secondary IP addresses

    - by user73995
    Hi, I have a web server running Windows 2008 R2 with multiple IPs assigned to it. I want to be able to RDP to the box via the main IP assigned to it, but not from any of the secondary IPs (i.e. the websites' IPs). I assume that if I had control of the firewall I could shut off the RDP port on the secondary IPs, but it's in a hosted environment and I don't have access to the firewall. I'll contact the host to see if that's available, but I was hoping there was a setting I could change on the machine itself so it doesn't answer RDP requests on the additional IPs. Any advice?
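
    If the built-in Windows Firewall is usable on the box, a host-side rule of this shape can do it (the address is a placeholder for one of the secondary IPs):

        netsh advfirewall firewall add rule name="Block RDP on site IPs" ^
            dir=in action=block protocol=TCP localport=3389 localip=203.0.113.25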

    Read the article

  • Apache2 mod_proxy and post-multipart size

    - by Pietro
    Hi, I have Apache2 configured to proxy all traffic directed at a specific virtual host to a local Tomcat instance. All is good and fine except for multipart posts larger than ~100 KB: such posts fail on the Tomcat end with an exception like SocketTimeoutException. If I connect directly to Tomcat (which listens on a port != 80), then all posts are handled just fine. The Apache virtual host config goes like this:
        NameVirtualHost *
        SetOutputFilter DEFLATE
        <VirtualHost *>
            ServerName foo.bar.com
            ErrorLog c:/wamp/logs/foo_error.log
            CustomLog c:/wamp/logs/foo_access.log combined
            ProxyTimeout 60
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
            ProxyPassReverseCookieDomain localhost bar.com
            ProxyPassReverseCookiePath /foo /
        </VirtualHost>
    I tried browsing the Apache2 and mod_proxy docs but found nothing useful. Any idea why Apache2 refuses to proxy requests bigger than X bytes? Thanks!

    Read the article

  • Apache hanging when MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) hangs when MaxClients is reached, rather than queueing up requests and serving them as child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5; hardware is dual Xeons (2.8) with 2 GB of RAM. It serves 30,000-50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server, and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:
        KeepAlive On
        MaxKeepAliveRequests 50
        KeepAliveTimeout 2
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 2000
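
    Worth noting when reading the block above: MinSpareThreads, MaxSpareThreads, and ThreadsPerChild configure the worker MPM, not prefork. The prefork spellings of the same spare-capacity intent look like this (values illustrative, not a recommendation):

        StartServers          2
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         150
        MaxClients          150
        MaxRequestsPerChild 2000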

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions:
      1. What can I do to fix this (other than caching more stuff)?
      2. Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django?
    The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files, if it matters.
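
    The rewrite flag mentioned is presumably the standard filesystem check; a sketch of the usual shared-hosting dispatch block (the .fcgi script name is a hypothetical placeholder):

        RewriteEngine On
        # only hand requests to Django when no real file matches
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]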

    Read the article

  • Access internal IP using public IP

    - by willvv
    Hi, I have a DSL modem with a public IP address (201.206.x.x), and I have a web server on my internal network (192.168.0.50). I set up the modem to forward requests on port 80 to my web server, so if I access 201.206.x.x from outside my network it shows my web page, and the same happens if I access 192.168.0.50 from a computer inside my network. The problem is when I try to access 201.206.x.x from my internal network: the browser connects to the DSL modem's configuration page instead of redirecting my request to my web server. Which settings do I have to change in the modem to set up this redirection? Thanks!

    Read the article

  • DNS - Redirect from old server to new.

    - by jyoseph
    I have a server (Server A) running Windows Server 2003 that I was hosting some sites on. Now they are hosted on a different server (Server B). I recently switched the DNS at GoDaddy to point to the new nameservers. Is there something I can do on Server A to point all requests that still reach Server A over to Server B (basically a redirect from Server A to B)? What type of record would that be? This is while I'm waiting for the DNS changes I made to fully propagate. Edit: To further clarify: test.com may still be resolving to Server A; I'd like a DNS record on Server A that tells it to go to the new server. Is that possible?

    Read the article

  • Reasons why mod_jk wouldn't work and how to trace them

    - by Bozho
    I've been using one server, then I reinstalled everything on another server, and mod_jk stopped working. Here is the situation:
      - Apache 2.0 sitting "in front"
      - mod_jk used to connect Apache to Tomcat
      - Tomcat 6.0.26 used to serve the actual requests
    I followed this tutorial. The result is:
      - accessing http://mysite.com opens the index.html in /var/www/
      - accessing http://mysite.com:8080/ works OK
      - the logs at /var/logs/apache2 show everything is OK:
        [Mon Mar 29 22:01:53.310 2010] [28349:3075389184] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 22:01:53 2010] [warn] No JkShmFile defined in httpd.conf. Using default /var/log/apache2/jk-runtime-status
        [Mon Mar 29 22:01:53 2010] [notice] Apache/2.2.9 (Debian) mod_jk/1.2.26 configured -- resuming normal operations
    I compared the server.xml, jk.conf, and sites-enabled/mysite from the new server to those from the old one, and they are identical. The domain name is the same (I updated the DNS record today, and it has refreshed successfully). So the question is: what can go wrong? Is there another place where problems would be logged, if such occur?
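
    For reference, the piece that most often goes missing in a setup of this shape is the vhost-level mapping; a sketch, where the worker name and path are assumptions:

        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/mod_jk.log
        # without a JkMount in the vhost, Apache serves /var/www itself
        JkMount /myapp/* worker1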

    Read the article

  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading stdout into a StringIO and then parsing for things like Registrar, Name Servers, etc. My question is: there seem to be many paid whois services, which suggests there must be a reason people don't just use this Ubuntu package. I'm wondering if there's a limit on the number of requests a single IP can make to the whois servers the package queries? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:
      - qmul.ac.uk is searchable
      - kat.ph is not searchable
      - ahram.org.eg is not searchable
    Any particular reason for that?
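
    A minimal sketch of the subprocess-based lookup described; the parsing is naive and the field names are assumptions, since whois output varies a lot by TLD:

        import subprocess

        def whois_lookup(domain):
            # shell out to the whois package and capture stdout
            proc = subprocess.Popen(['whois', domain], stdout=subprocess.PIPE)
            out, _ = proc.communicate()
            record = {}
            for line in out.decode('utf-8', 'replace').splitlines():
                key, sep, value = line.partition(':')
                if sep and value.strip():
                    # keep the first occurrence of each field
                    record.setdefault(key.strip(), value.strip())
            return record

        print(whois_lookup('qmul.ac.uk').get('Registrar'))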

    Read the article

  • Jumbled byte array after using TcpClient and TcpListener

    - by Dylan
    I want to use the TcpClient and TcpListener to send an mp3 file over a network. I implemented a solution for this using raw sockets, but there were some issues, so I am investigating a new/better way to send a file. I create a byte array which looks like this: length_of_filename|filename|file. This should then be transmitted using the above-mentioned classes, yet on the server side the byte array I read is completely messed up, and I'm not sure why. The method I use to send:

        public static void Send(String filePath)
        {
            try
            {
                IPEndPoint endPoint = new IPEndPoint(Settings.IpAddress, Settings.Port + 1);
                Byte[] fileData = File.ReadAllBytes(filePath);
                FileInfo fi = new FileInfo(filePath);

                List<byte> dataToSend = new List<byte>();
                dataToSend.AddRange(BitConverter.GetBytes(Encoding.Unicode.GetByteCount(fi.Name))); // length of filename
                dataToSend.AddRange(Encoding.Unicode.GetBytes(fi.Name)); // filename
                dataToSend.AddRange(fileData); // file binary data

                using (TcpClient client = new TcpClient())
                {
                    client.Connect(Settings.IpAddress, Settings.Port + 1);
                    // Get a client stream for reading and writing.
                    using (NetworkStream stream = client.GetStream())
                    {
                        // server is ready
                        stream.Write(dataToSend.ToArray(), 0, dataToSend.ToArray().Length);
                    }
                }
            }
            catch (ArgumentNullException e)
            {
                Debug.WriteLine(e);
            }
            catch (SocketException e)
            {
                Debug.WriteLine(e);
            }
        }

    Then on the server side it looks as follows:

        private void Listen()
        {
            TcpListener server = null;
            try
            {
                // Set up the TcpListener
                Int32 port = Settings.Port + 1;
                IPAddress localAddr = IPAddress.Parse("127.0.0.1");
                server = new TcpListener(localAddr, port);

                // Start listening for client requests.
                server.Start();

                // Buffer for reading data
                Byte[] bytes = new Byte[1024];
                List<byte> data;

                // Enter the listening loop.
                while (true)
                {
                    Debug.WriteLine("Waiting for a connection... ");
                    string filePath = string.Empty;

                    // Perform a blocking call to accept requests.
                    // You could also use server.AcceptSocket() here.
                    using (TcpClient client = server.AcceptTcpClient())
                    {
                        Debug.WriteLine("Connected to client!");
                        data = new List<byte>();

                        // Get a stream object for reading and writing
                        using (NetworkStream stream = client.GetStream())
                        {
                            // Loop to receive all the data sent by the client.
                            while ((stream.Read(bytes, 0, bytes.Length)) != 0)
                            {
                                data.AddRange(bytes);
                            }
                        }
                    }

                    int fileNameLength = BitConverter.ToInt32(data.ToArray(), 0);
                    filePath = Encoding.Unicode.GetString(data.ToArray(), 4, fileNameLength);
                    var binary = data.GetRange(4 + fileNameLength, data.Count - 4 - fileNameLength);
                    Debug.WriteLine("File successfully downloaded!");

                    // Write it to disk
                    using (BinaryWriter writer = new BinaryWriter(File.Open(filePath, FileMode.Append)))
                    {
                        writer.Write(binary.ToArray(), 0, binary.Count);
                    }
                }
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex);
            }
            finally
            {
                // Stop listening for new clients.
                server.Stop();
            }
        }

    Can anyone see something that I am missing/doing wrong?

    Read the article

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server using a Forefront machine as a web proxy. For the first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only thing that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?

    Read the article

  • Retrieving an RSA key from a running instance of Apache?

    - by Nathan Osman
    I created an RSA keypair for an SSL certificate and stored the private key in /etc/ssl/private/server.key. Unfortunately this was the only copy of the private key that I had. Then I accidentally overwrote the file on disk (yes, I know). Apache is still running and still serving SSL requests, leading me to believe that there may be hope in recovering the private key. (Perhaps there is a symbolic link somewhere in /proc or something?) This server is running Ubuntu 12.04 LTS.
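
    One avenue sometimes used while the process is still alive is dumping its memory and carving the key material out; a sketch, assuming gdb's gcore is installed (run it before any restart or reload):

        # dump the memory image of the parent apache2 process
        sudo gcore -o /tmp/apache2 $(pgrep -o apache2)
        # tools such as passe-partout attempt to extract live SSL keys from a
        # running process's memory; the gcore dump can also be searched by hand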

    Read the article

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like: TIME_WAIT timewait (0.00/0/0) where the timeout has expired but the connection is still hanging around, and I eventually run out of connections. Anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up? Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I'm not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
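
    A few inspection commands relevant to this situation (the conntrack knob only exists once nf_conntrack is loaded):

        netstat -ant | grep -c TIME_WAIT    # how many sockets are really there
        sysctl net.ipv4.tcp_fin_timeout
        sysctl net.netfilter.nf_conntrack_tcp_timeout_time_wait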

    Read the article

  • Running multiple services on Port 443, Tunnel SSH over HTTPS

    - by lajuette
    Situation: I want to tunnel SSH sessions through HTTPS, because I am behind a very restrictive firewall/proxy which only allows HTTP, FTP, and HTTPS traffic. What works: setting up a tunnel through the proxy to a remote Linux box that has sshd listening on port 443. The problem: I have to have a web server (lighty) running on port 443, and HTTPS traffic to other ports is forbidden by the proxy. Ideas so far: set up a virtual host and proxy all incoming requests to localhost:<port> (e.g. 22):
        $HTTP["host"] == "tunnel.mylinux.box" {
            proxy.server = ( "" => (("host" => "127.0.0.1", "port" => 22)) )
        }
    Unfortunately this won't work. Am I doing something wrong, or is there a reason that this won't work?

    Read the article

  • Gotchas for reverse proxy setups

    - by kojiro
    We run multiple web applications, some internal-only, some internal/external. I'm putting together a proposal that we use reverse proxy servers to isolate the origin servers, provide SSL termination and (when possible) provide load balancing. For much of our setup, I'm sure it will work nicely, but we do have a few lesser-known proprietary applications that may need special treatment when we move forward with reverse-proxying. What kinds of traps tend to cause problems when moving an origin server from being on the front lines to being behind a proxy? (For example, I can imagine problems if an application needed to know the IP address of incoming requests.)
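
    On the parenthetical example: Apache's mod_proxy forwards the original client address in a request header on its own, so what usually needs changing is the origin side, for instance its log format; an illustrative origin-side snippet:

        # origin httpd.conf: log the forwarded client address, not the proxy's
        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
        CustomLog /var/log/apache2/access.log proxied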

    Read the article

  • Apache - How to disable gzip content encoding (e.g. DEFLATE) for one set of URLs?

    - by Rory McCann
    I have an Ubuntu Apache web server and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this:
        <Location /myfolder>
            RemoveOutputFilter DEFLATE
        </Location>
    But that doesn't work. Rationale: I am trying to debug an XMLRPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
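
    For comparison, the opt-out mechanism mod_deflate itself documents is an environment variable rather than a filter removal; a sketch:

        <Location /myfolder>
            # mod_deflate skips any request where the no-gzip variable is set
            SetEnv no-gzip 1
        </Location>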

    Read the article

  • Apache redirect from one domain to another domain

    - by cpuguru
    Like many users, we tend to register the *.com and *.net versions of our domain names to prevent nefarious squatters. So if we wanted "foo.com" we'd also register "foo.net" and have them both resolve to the same IP address. I'm setting up Apache for the first time and need to know the proper way to redirect requests for "foo.net" to "foo.com", so that if a user types in "foo.net" they get magically redirected to "foo.com". I've been reading through the Apache URL Rewriting Guide (http://httpd.apache.org/docs/1.3/misc/rewriteguide.html) and it's not readily apparent how to do this seemingly simple task. Please advise this apprentice, oh wise Apache jedi...
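
    One common pattern, assuming name-based virtual hosts: a dedicated vhost for the .net name that issues a permanent redirect with mod_alias, no mod_rewrite needed:

        <VirtualHost *:80>
            ServerName  foo.net
            ServerAlias www.foo.net
            Redirect permanent / http://foo.com/
        </VirtualHost>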

    Read the article

  • Apache2 mod_proxy configuration for single-threaded servers

    - by The Doctor What
    I have multiple instances of thin running behind Apache 2.2's mod_proxy. The problem I have is that a couple of pages, by design, take a while to run. If I just configure Apache the obvious way (adding the thin URLs as BalancerMember lines with no other configuration), then if someone clicks on a long-running page and enough web requests come in while it is running, someone eventually gets routed to the same thin server and has to wait. Does anyone have some best practices or a suggested configuration for mod_proxy and thin? Ciao!
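
    One shape such a configuration can take; the ports and member count are assumptions, and max= is tracked per Apache process, so it is only approximate under prefork:

        <Proxy balancer://thin>
            BalancerMember http://127.0.0.1:3000 max=1
            BalancerMember http://127.0.0.1:3001 max=1
            BalancerMember http://127.0.0.1:3002 max=1
        </Proxy>
        # prefer the least-busy member over plain round-robin (2.2-era option)
        ProxyPass / balancer://thin/ lbmethod=bybusyness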

    Read the article

  • mod_rewrite to page with HTTP auth

    - by Joe
    I'm trying to use mod_rewrite to proxy http://myserver/cam1 to an internal, HTTP-auth protected server at http://admin:admin@192.168.99.130/cgi/mjpg/mjpg.cgi. No matter what I try, though, requests to http://myserver/cam1 always prompt me for the username and password. I've tried all of these to no avail:
        RewriteRule ^/cam1 http://admin:admin@192.168.99.130/cgi/mjpg/mjpg.cgi [P,L]
        RewriteRule ^/cam1 http://192.168.99.130/cgi/mjpg/mjpg.cgi [E=Authorization:Basic\ YWRtaW46YWRtaW4=,P,L]
        RewriteRule ^/cam1 http://192.168.99.130/cgi/mjpg/mjpg.cgi [E=HTTP_USERID:admin,E=HTTP_PASSWORD:admin,P,L]
    Anybody have any other ideas?
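
    One more variant sometimes tried for this, setting the header explicitly with mod_headers (same credentials as above; YWRtaW46YWRtaW4= is base64 for admin:admin):

        <Location /cam1>
            RequestHeader set Authorization "Basic YWRtaW46YWRtaW4="
        </Location>
        RewriteRule ^/cam1 http://192.168.99.130/cgi/mjpg/mjpg.cgi [P,L]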

    Read the article
