Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Aliased network interfaces and isc dhcp server

    - by Jonatan
    I have been banging my head against this for a long time now. There are many discussions on the net about this and similar problems, but none of the solutions seems to work for me. I have a Debian server with two Ethernet interfaces. One is connected to the internet, the other to my LAN. The LAN network is 10.11.100.0 (netmask 255.255.255.0). We have some custom hardware that uses network 10.4.1.0 (netmask 255.255.255.0) and we can't change that, but we need all hosts on 10.11.100.0 to be able to connect to devices on 10.4.1.0. So I added an alias for the LAN interface so that the Debian server acts as a gateway between 10.11.100.0 and 10.4.1.0. But then the DHCP server stopped working. The log says:

        No subnet declaration for eth1:0 (no IPv4 addresses). ** Ignoring requests on eth1:0. If this is not what you want, please write a subnet declaration in your dhcpd.conf file for the network segment to which interface eth1:0 is attached. **
        No subnet declaration for eth1:1 (no IPv4 addresses). ** Ignoring requests on eth1:1. If this is not what you want, please write a subnet declaration in your dhcpd.conf file for the network segment to which interface eth1:1 is attached. **

    I had another server before, also running Debian but with the older dhcp3 server, and it worked without any problems. I've tried everything I can think of in dhcpd.conf etc., and I've also compared with the working configuration on the old server. The DHCP server only needs to handle devices on 10.11.100.0. Any hints? Here are all the relevant config files.

    /etc/default/isc-dhcp-server:

        INTERFACES="eth1"

    /etc/network/interfaces (I've left out eth0, which connects to the internet, since there is no problem with that):

        auto eth1:0
        iface eth1:0 inet static
            address 10.11.100.202
            netmask 255.255.255.0

        auto eth1:1
        iface eth1:1 inet static
            address 10.4.1.248
            netmask 255.255.255.0

    /etc/dhcp/dhcpd.conf:

        ddns-update-style none;
        option domain-name "???.com";
        option domain-name-servers ?.?.?.?;
        default-lease-time 86400;
        max-lease-time 604800;
        authorative;
        subnet 10.11.100.0 netmask 255.255.255.0 {
            option subnet-mask 255.255.255.0;
            pool {
                range 10.11.100.50 10.11.100.99;
            }
            option routers 10.11.100.102;
        }

    I have tried to add shared-network etc., but didn't manage to get that to work. I get the same error message no matter what...
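
    One direction worth trying (an editor's sketch under assumptions, not a verified fix for this setup): isc-dhcp-server listens on physical interfaces, not aliases, so when two subnets live on the same wire it wants them declared together in a shared-network block. A hypothetical dhcpd.conf fragment, reusing the ranges from the question:

        # Hypothetical fragment: both subnets share physical eth1, so declare
        # them inside one shared-network. Only 10.11.100.0/24 gets a pool; the
        # 10.4.1.0/24 declaration exists only so dhcpd accepts the interface.
        shared-network eth1-net {
            subnet 10.11.100.0 netmask 255.255.255.0 {
                range 10.11.100.50 10.11.100.99;
                option routers 10.11.100.102;
            }
            subnet 10.4.1.0 netmask 255.255.255.0 {
                # no range: no leases are handed out on this subnet
            }
        }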

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response increases. For example, I ran four different tests; here are the results:

        15 KB response through HAProxy: avg. response time .34 s, transaction rate 763 trans/sec, throughput 11.08 MB/sec
        2 KB response through HAProxy: avg. response time .08 s, transaction rate 1171 trans/sec, throughput 2.51 MB/sec
        15 KB response directly to server: avg. response time .11 s, transaction rate 1046 trans/sec, throughput 15.20 MB/sec
        2 KB response directly to server: avg. response time .05 s, transaction rate 1158 trans/sec, throughput 2.48 MB/sec

    All transactions are HTTP requests. As you can see, the difference in response time is much bigger when the response is large than when it is small. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings, but I could not find a lot of information on this; most of the sysctl documentation is for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to do auto-tuning, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a note: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so any information to help me reach that goal would be much appreciated. Thank you very much for any information you can provide. If you need any more information from me, please let me know and I will get you anything I can.
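
    One hedged thing to try (an assumption, not a confirmed diagnosis for this setup): `option httpclose` in the defaults section closes every connection after each request, which hurts more as payloads grow. HAProxy 1.4 and later offer `option http-server-close`, which keeps the client-side connection alive while still closing the server side:

        # defaults section sketch, assuming HAProxy >= 1.4
        defaults
            mode http
            # keep client connections alive between requests;
            # replaces "option httpclose"
            option http-server-close
            option redispatch
            retries 3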

  • Apache consuming large memory with Apache Mobile Filter

    - by VarunGupta
    I installed and configured the Apache Mobile Filter module to redirect users to the mobile version of our website, depending on their user agents. I configured the module to use WURFL. But as soon as I start Apache it hogs a large amount of memory (without any incoming web requests):

        Resident memory: 300 to 400 MB
        Virtual memory: 300 to 650 MB

    Without this module, Apache was consuming far less memory (4 to 10 MB). What could be the reason here?
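
    A plausible explanation, offered as an assumption rather than something verified against this setup: the filter parses the full WURFL device database into memory, and under the prefork MPM each child process can end up holding its own copy. If that is the cause, capping the number of children at least bounds the total; a hypothetical prefork stanza with illustrative numbers:

        # httpd.conf (mpm_prefork) -- sizes are examples only
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            MaxClients           20
            MaxRequestsPerChild 1000
        </IfModule>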

  • Is there any way to ARP ping on Windows?

    - by e-t172
    On Linux and other systems, there is a utility called arping that can be used to send ARP requests ("pings") and show the answers, much like the ping utility but using ARP instead of ICMP. Is there any way to do the same on Windows? (I use Windows 7.)
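
    Not part of the original question, but one known option: Nmap, which has a Windows build (it needs WinPcap/Npcap installed), performs ARP ping discovery on the local Ethernet segment. Assuming a reasonably recent Nmap where -sn replaced the older -sP:

        # ARP-ping a single host on the local segment (no port scan):
        # -sn skips the port scan, -PR forces ARP ping discovery
        nmap -sn -PR 192.168.1.5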

  • Suggested managed DNS provider?

    - by Arelius
    We own/manage a few domains (nothing too large or too heavily trafficked). Currently our DNS servers are hosted onsite. For ease of management and lower-latency DNS requests we are interested in moving our domains offsite. Does anyone have recommendations for a good DNS provider?

  • How do I sniff this from iTunes?

    - by Alex
    If you have used Firebug, you know that you can see the "AJAX" requests going back and forth, along with the headers sent. I would like the same thing, except that I want to sniff iTunes. I want to know the REST API that iTunes uses to talk to the cloud, as well as the user-agent and headers sent.
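
    One hedged approach (my suggestion, not from the question): capture iTunes' plain-HTTP traffic with tshark and print the request line and headers of interest; anything over HTTPS will additionally need a man-in-the-middle proxy such as Fiddler or Charles. The interface name below is a placeholder:

        # Print host, URI and User-Agent of every HTTP request seen on the wire.
        # (-Y needs a reasonably recent Wireshark; older builds use -R with -2.)
        tshark -i en0 -Y http.request -T fields \
            -e http.host -e http.request.uri -e http.user_agent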

  • Running multiple FCGI/Django on Nginx for load sharing

    - by Barry
    I am running a web service on Nginx/FastCGI/Django. Our processing time is fairly long and CPU-intensive, and I would like to be able to run multiple Django/FastCGI processes to share the load. How do I set up Nginx to route requests from a single source to multiple instances of Django/FastCGI? (I can run the multiple instances on multiple ports/sockets, but I don't know how to make Nginx share the processing load between them.) Any help much appreciated.
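
    A minimal sketch of what that usually looks like, assuming two Django FastCGI instances are already listening on ports 8001 and 8002 (ports and paths are placeholders):

        # nginx.conf fragment: round-robin across two FastCGI backends
        upstream django_fcgi {
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
        }

        server {
            listen 80;
            location / {
                include fastcgi_params;
                fastcgi_pass django_fcgi;   # nginx picks a backend per request
            }
        }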

  • Empty $_POST data

    - by Antimony
    I am trying to post to my MyBB server from a Python script, but try as I might, I can't get it to work. The request shows up in the forensic log and the headers are in the $_SERVER variable, but $_POST is always an empty array. The error log shows nothing, even at the debug level. I've already tried searching, but I haven't found anything that's helped. I already checked post_max_size, which is 8M. Another factor is that it's only my own requests that aren't going through; browser-generated requests seem to do just fine. I've looked and looked, but I can't find anything I'm doing differently that should matter. Anyway, here is an example request:

        POST /newreply.php?tid=1&processed=1 HTTP/1.1
        Host: <redacted>
        Accept-Encoding: identity
        Content-Length: 1153
        Content-Type: multipart/form-data; boundary=-->0xa216654L
        Cookie: sid=<redacted>; mybb[lastvisit]=1354995469; mybb[lastactive]=1354995500; mybb[threadread]=a%3A1%3A%7Bi%3A1%3Bi%3A1354995469%3B%7D; mybb[forumread]=a%3A1%3A%7Bi%3A2%3Bi%3A1354995469%3B%7D; loginattempts=1; mybbuser=2_ZlVVfaYS9FstZGQzr4KiNRUm3Z4xAgJkTPPq2ouFcuaragOTVQ
        Accept: text/html
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1

        -->0xa216654L
        Content-Disposition: form-data; name="my_post_key"

        257b2bbef4334000d9088169154900a3
        -->0xa216654L
        Content-Disposition: form-data; name="quoted_ids"

        -->0xa216654L
        Content-Disposition: form-data; name="tid"

        1
        -->0xa216654L
        Content-Disposition: form-data; name="message"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="attachmentact"

        -->0xa216654L
        Content-Disposition: form-data; name="attachmentaid"

        -->0xa216654L
        Content-Disposition: form-data; name="icon"

        -1
        -->0xa216654L
        Content-Disposition: form-data; name="posthash"

        e93a2c78ce3f6807a86fd475ef4178cf
        -->0xa216654L
        Content-Disposition: form-data; name="postoptions[subscriptionmethod]"

        -->0xa216654L
        Content-Disposition: form-data; name="replyto"

        -->0xa216654L
        Content-Disposition: form-data; name="message_new"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="submit"

        Post Reply
        -->0xa216654L
        Content-Disposition: form-data; name="attachment"; filename=""
        Content-Type: application/octet-stream

        -->0xa216654L
        Content-Disposition: form-data; name="action"

        do_newreply
        -->0xa216654L
        Content-Disposition: form-data; name="subject"

        Lol
        -->0xa216654L
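
    Judging from the pasted request rather than from this particular server, one likely culprit: RFC 2046 requires each delimiter line in the body to be the declared boundary prefixed with "--" (here that would be ---->0xa216654L), but the body uses the boundary verbatim, so PHP's multipart parser finds no valid parts and $_POST stays empty (and ">" is not a legal character in an unquoted boundary in the first place). A hedged Python sketch that lets a library build the body instead; the URL, cookie, and field values are placeholders:

        # Sketch (not from the original post): let requests build the multipart
        # body so the delimiter lines are "--" + boundary, as RFC 2046 requires.
        import requests

        url = "http://forum.example.com/newreply.php?tid=1&processed=1"
        data = {
            "my_post_key": "257b2bbef4334000d9088169154900a3",
            "tid": "1",
            "message": "foo!2",
            "action": "do_newreply",
            "subject": "Lol",
        }
        # Any files= entry forces multipart/form-data with a valid boundary.
        files = {"attachment": ("", b"", "application/octet-stream")}
        resp = requests.post(url, data=data, files=files,
                             cookies={"sid": "<redacted>"})
        print(resp.status_code)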

  • Tomcat memory usage

    - by Adrian Mester
    I'm running Tomcat on an Ubuntu 10.04 VPS with 512 MB of RAM (1024 burstable). I'm using it for development, so performance isn't an issue, but memory is. Tomcat is currently using about 250 MB without any apps installed (I compared memory usage with Tomcat stopped and running), and I also need to run lighttpd and MySQL. Is there any way to get that number down? I don't need it to handle a large number of requests at once.
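
    A hedged starting point (the sizes below are guesses for a dev box, not tested values): the JVM sizes its default heap from system RAM, so pinning it down in Tomcat's startup environment usually shrinks the footprint. On Debian/Ubuntu packages this often goes in /etc/default/tomcat6; for a stock Tomcat it goes in bin/setenv.sh:

        # bin/setenv.sh -- cap heap and thread stacks for a low-memory dev VPS
        JAVA_OPTS="-Xms32m -Xmx128m -XX:MaxPermSize=64m -Xss256k"
        export JAVA_OPTS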

  • Do proxies really provide anonymity?

    - by Somebody still uses you MS-DOS
    Do web proxies really provide anonymity? I mean, short of someone asking the web proxy server's logs for who connected when, is it impossible to know who was behind a given IP address? I'm asking because I heard somewhere that some technologies (like Flash) can leak your real IP address in requests, or something like that. (I'm a noob in server configuration and concepts like DNS and proxies. Thanks!)

  • 503 service unavailable debugging PHP files

    - by user25932
    I have a web server with Apache 2.0 installed. It comes with the Zend Server install pack. When I try to debug my PHP files, Apache serves a blank page with 503 Service Unavailable. Of course slow server-side code is tying up Apache requests for far too long, but I need it to wait until my debugging session ends. How can I solve this problem?
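
    A hedged guess rather than a confirmed Zend Server fix: while a request sits at a breakpoint, the front end gives up on the idle backend, so raising the relevant timeouts may keep the connection open long enough. For plain Apache that is the Timeout directive (Zend Server's own gateway has separate timeout settings that would need matching):

        # httpd.conf -- let a request sit at a breakpoint for up to 10 minutes
        Timeout 600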

  • access_log item w/out IP. Starts with "::1 - - [<date>]"

    - by Meltemi
    Looking at our Apache log I see normal requests like:

        174.133.xxx.xxx - - [20/May/2010:17:36:44 -0700] "GET /index.html HTTP/1.1" 200 2004

    but every so often I get a cluster of these without an IP address:

        ::1 - - [20/May/2010:18:47:21 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:22 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:23 -0700] "OPTIONS * HTTP/1.0" 200 -

    What do they mean, and what causes them?

  • "The connection to the server was reset while the page was loading"

    - by andygeers
    I've just taken over managing the network for a small charity and am finding internet access very flaky: we keep getting "The connection to the server was reset while the page was loading" errors (HTTP error 12031 according to the Windows network diagnostic tool). It doesn't seem to have anything to do with our ISP, since it also affects internal traffic (even requests to an Apache instance on my localhost!). Adjusting the MTU setting in the Windows XP registry sometimes seems to help for a few minutes after rebooting, but the problem always returns.
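
    For reference, the registry value being described lives per network interface; a hedged sketch of what that looks like (the adapter GUID is a placeholder, 1400 is just a test value, and a reboot is needed afterwards):

        ; .reg sketch -- set a 1400-byte MTU (0x578) on one adapter
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{ADAPTER-GUID}]
        "MTU"=dword:00000578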

  • Can the Firefox password manager store and manage passwords for multiple sub-domains or different URLs in the same domain?

    - by Howiecamp
    Can the Firefox password manager store and manage passwords for multiple sub-domains, or for multiple URLs in the same domain? Firefox's default behavior is to treat all requests for *.domain.com the same. I'd like to have Firefox do the following:

    - Store and manage passwords separately for multiple sub-domains, e.g. mail.google.com and picasa.google.com
    - Store and manage passwords separately for different URLs in the same domain, e.g. http://mail.google.com/a/company1.com and http://mail.google.com/a/company2.com

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters with one puppet master acting as a CA: clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests (see this question for more info: multiple puppet masters). However, there are a couple of things I have had to do to get this working correctly, and I have an error, which I'll get to.

    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf VH file on PM2, and inventory started to work. Does this sound like an acceptable way to do things?

    Secondly, in the puppet.conf file I am setting the designated PM server for each client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and I sign it on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and rerun puppet agent --test, I do not get any errors. I can then revert the change in puppet.conf back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error? Cheers, Oli
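
    A sketch of the ProxyPassMatch idea the question ends with, under assumptions: PM2 fronts Puppet with Apache, the CA listens on puppet-master1.test.net port 8140, and the URL pattern accounts for the environment segment Puppet prepends to its REST paths. None of this is taken from the original setup:

        # Apache vhost fragment on PM2 -- forward CRL lookups to the CA master
        SSLProxyEngine On
        ProxyPassMatch ^/([^/]+)/certificate_revocation_list/(.*)$ https://puppet-master1.test.net:8140/$1/certificate_revocation_list/$2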

  • methods for preventing large scale data scraping from REST api

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates. I would like to go from simple software IP/request-rate analysis up to high-end, sophisticated software/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
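
    The simple end of that spectrum is plain per-IP rate limiting; a hedged nginx sketch (zone size, rates, and the backend name are arbitrary examples):

        # nginx.conf -- allow each client IP 5 req/s to the API, small bursts OK
        http {
            limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;

            server {
                location /api/ {
                    limit_req zone=api burst=10 nodelay;
                    proxy_pass http://backend;   # placeholder upstream
                }
            }
        }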

  • Good maintained privacy Add-On/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons I discovered:

    - HTTPS Everywhere: only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes makes "untrustworthy" decisions.
    - RequestPolicy: seems dead (last activity 6 months ago, some annoying bugs, official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests, blocks by default, and lets you add sites to a whitelist; once that is done it works interaction-free in the background.
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined filter list (can never be fully sure, need to trust others).
    - Disconnect: like AdBlock Edge, just looks different; has no interaction possibilities (can never be fully sure, need to trust others, cannot interact even if I wanted to).
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.

    All these add-ons together basically block everything insecure, but there is a lot of redundancy: NoScript has a mixed-content blocker, but Firefox has had its own for a while now; the cookie blocker in NoScript is redundant with my Firefox cookie settings; NoScript also has an XSS blocker, which is redundant with RequestPolicy; and Disconnect and AdBlock Edge are extremely (though not fully) redundant with each other. There are also some bugs (especially in RequestPolicy), and RequestPolicy seems to be dead.

    All in all, this list is great but has heavy drawbacks. My favourite set would be a "NoScript Light" (only script blocking, without all the functionality that duplicates other add-ons) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfill my wishes?

  • Using wget to save sequential files as well as renaming the file extension

    - by Ian
    I run a cron job that requests a snapshot from a remote webcam at a local address:

        wget http://user:[email protected]/snapshot.cgi

    This creates the files snapshot.cgi, snapshot.cgi.1, snapshot.cgi.2, and so on, each time it's run. My desired result would be for the files to be named something like file.1.jpg, file.2.jpg: basically, sequentially or date/time-named files with the correct file extension instead of .cgi. Any ideas?
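
    A hedged sketch of one way to do it (the output directory is a placeholder; the URL is kept as in the cron job): wget's -O flag writes to an explicit filename, so embedding a timestamp yields unique, correctly-suffixed files:

        #!/bin/sh
        # Save each snapshot as snapshot.YYYYmmdd-HHMMSS.jpg instead of snapshot.cgi.N
        wget -q -O "/var/snapshots/snapshot.$(date +%Y%m%d-%H%M%S).jpg" \
            "http://user:[email protected]/snapshot.cgi"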

  • Apache HTTPD as a proxy

    - by markovuksanovic
    I need to redirect all requests from localhost:8080/app1/ to localhost/app1. What is the best way to do it? The only requirement is that the user must never be aware that he is accessing the application at port 80. I guess I need to set up Apache HTTPD proxying; I'm just not sure of the best way to do it. Thanks in advance.
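
    A minimal mod_proxy sketch of that idea, assuming (per the question) that the user-facing Apache listens on 8080 and the application answers on port 80; swap the two if the direction is actually the other way around. mod_proxy and mod_proxy_http must be loaded:

        # Virtual host on the user-facing port
        Listen 8080
        <VirtualHost *:8080>
            ProxyPass        /app1/ http://localhost/app1/
            ProxyPassReverse /app1/ http://localhost/app1/
        </VirtualHost>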

  • How to pass $_GET variables to a PHP script via the command line?

    - by George Edison
    I am trying to create a webserver that serves PHP scripts. Currently it works as follows:

    1. The client requests /index.php?test=value
    2. The server invokes php index.php
    3. The server feeds the HTTP request headers as STDIN to the PHP process
    4. The server reads the output of php from STDOUT and returns it to the client

    All of this is working, except that the parameters are not being passed to the PHP script, because var_dump($_GET); returns:

        array(0) {
        }

    How do $_GET parameters get passed to the PHP binary when it is invoked?
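
    A hedged sketch of the standard answer: the CLI binary never fills $_GET; the CGI binary (php-cgi) does, taking the query string from environment variables defined by the CGI spec rather than from stdin. The script path is a placeholder:

        #!/bin/sh
        # Invoke PHP the CGI way: the query string arrives via the environment.
        # REDIRECT_STATUS is required by php-cgi builds with force-cgi-redirect.
        export GATEWAY_INTERFACE="CGI/1.1"
        export REQUEST_METHOD="GET"
        export QUERY_STRING="test=value"
        export SCRIPT_FILENAME="/var/www/index.php"
        export REDIRECT_STATUS=1
        php-cgi /var/www/index.php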

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2, 3, 4 & 5. All control is being executed via Webmin.

    After a reboot, neither MySQL nor Apache is running. Webmin's MySQL Database Server module (MySQL version 5.1.69) reports "MySQL is not running on your system - database list could not be retrieved" and offers to start it with /etc/rc.d/init.d/mysqld start. Webmin's Apache Webserver module (Apache version 2.2.15) likewise shows Apache stopped; the existing virtual hosts are the default server (address any, port any, document root /var/www/drupal) and a virtual server for port 443 (document root /var/www/drupal).

        # chkconfig --list mysqld
        mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off
        # chkconfig --list httpd
        httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    After manually restarting Apache and MySQL, everything runs okay, and the chkconfig outputs above are identical before and after. I tried:

        chkconfig --levels 235 httpd on
        /etc/init.d/httpd start

    and the same for mysqld, but with no change in behaviour. The log files show that shutdown completed successfully, but there is no sign of the service restarting until it is started manually:

        131112 13:59:15 InnoDB: Starting shutdown...
        131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021
        131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete
        131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M
        131112 14:09:52 InnoDB: Completed initialization of buffer pool

    And the Apache logs:

        [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 13:59:13 2013] [notice] Digest: done
        [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
        [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down
        [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 14:27:13 2013] [notice] Digest: done
        [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations

    Is anyone able to shed any light on this problem?
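
    One hedged diagnostic, not from the original post: chkconfig can report "on" while the actual S symlinks in the runlevel directory are missing or mis-ordered, so it is worth checking them directly and re-creating them. Runlevel 3 is assumed here:

        # Verify the start/kill symlinks chkconfig is supposed to maintain
        ls -l /etc/rc3.d/ | grep -E 'mysqld|httpd'
        # Rebuild them if they look wrong
        chkconfig mysqld off && chkconfig mysqld on
        chkconfig httpd off && chkconfig httpd on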
