Search Results

Search found 15035 results on 602 pages for 'request'.


  • Apache2 slow serving static files while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers
        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    Server load averages 2 on a 4-core machine. IO utilization is 10-15% and doesn't have many jumps over 70%. The machine has almost 4 GB free and uses no swap.

    The site on the machine is a PHP site. All PHP code is optimized and fast most of the time it gets accessed, but sometimes requests get stuck -- meaning no response for at least 10 seconds. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it until we decided to test requesting a static test.html page containing only:

        <html><body>test</body></html>

    This static resource also gets 'stuck' in the same manner the PHP pages get 'stuck'. How is this possible given the health of the system? I tested the network, but when the site monitoring shows the PHP 'slowness', the HTML test file also takes far longer than 10 seconds to load using:

        time lynx -dump http://127.0.0.1/test.html

    We are rather desperate to solve this problem, but we cannot seem to tackle it.
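
    A quick way to separate Apache from the network path is to time repeated local fetches of the static file while a stall is happening; a minimal sketch, assuming the test.html from the question is in the document root:

        # Time 50 consecutive local requests; any outlier over 10s points
        # at Apache itself rather than the network or PHP.
        for i in $(seq 50); do
            curl -s -o /dev/null -w '%{time_total}\n' http://127.0.0.1/test.html
        done | sort -n | tail -5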


  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/

    I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing the ssl-cert package:

        $ sudo apt-get install php5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5 is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up ssl-cert (1.0.28) ...
        Could not create certificate. Openssl output was:
        Generating a 2048 bit RSA private key
        writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
        -----
        problems making Certificate Request
        140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
        dpkg: error processing ssl-cert (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        ssl-cert
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any tips appreciated.
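
    The maxsize=64 in that openssl error is the X.509 common-name length limit; the ssl-cert package uses the machine's FQDN as the CN, and Azure-generated hostnames can exceed 64 characters. A hedged workaround (the short hostname below is illustrative) is to shorten the hostname and re-run the package configuration:

        # Assumption: the VM's FQDN is longer than 64 characters.
        sudo hostname myvm
        echo myvm | sudo tee /etc/hostname
        sudo dpkg-reconfigure ssl-cert    # or: sudo apt-get -f install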


  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors:

        full backup started for directory /www
        Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Xfer PIDs are now 30395
        Read EOF: Connection reset by peer
        Tried again: got 0 bytes
        Done: 0 files, 0 bytes
        Got fatal error during xfer (Unable to read 4 bytes)
        Backup aborted (Unable to read 4 bytes)
        Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

    First of all, yes, I have my SSH keys set up so I can ssh to the target server without a password. While troubleshooting, I tried the above ssh command directly from the command line, and it hangs. The end of the SSH debug messages shows:

        debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0

    Next I started looking at the rsync flags. I did not recognize --server and --sender, and sure enough the rsync man page says nothing about them. What are they there for?

    The BackupPC config has:

        RsyncClientPath = /usr/bin/rsync
        RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+

    and the argument list is:

        --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive

    Notice there is no --server, --sender or --ignore-times. Why are these getting added in? Are they part of the problem?
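
    For what it's worth, --server and --sender are rsync's internal, deliberately undocumented flags: the client adds them when starting the remote side of a transfer, so BackupPC injecting them is normal. The "Sending subsystem" / "failed on channel 0" lines suggest the remote command never starts at all; a couple of hedged sanity checks from the BackupPC host ('target' is the placeholder hostname from the question):

        ssh -q -x -l root target /usr/bin/rsync --version   # is rsync really at that path?
        ssh -q -x -l root target echo ok                    # does a plain remote command work?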


  • procmail doesn't execute PHP script

    - by Phliplip
    Hi, I have set up a Kannel SMS gateway on my FreeBSD 7.2 box -- the service works great. I'm now trying to set up an email2sms feature. For this I have created a system user called kannel, and all mail is forwarded to this user. The home dir of kannel contains the following files:

        -rw-r--r--  1 kannel  kannel    81B 17 jan 09:50 .procmailrc
        lrwxr-x---  1 root    kannel    58B 14 jan 13:24 email2sms.php@ -> some-what-some-where
        -rw-rw-rw-  1 root    kannel   5,8K 17 jan 09:52 log.email2sms
        -rw-------  1 kannel  kannel   1,3K 17 jan 09:50 procmail.log
        -rw-r-----  1 root    kannel   606B 14 jan 13:28 rawmail.txt

    The file email2sms.php is a symlink to a PHP script (a Zend Framework application) that takes the email from STDIN, parses it into an object, and then does an HTTP request to the SMS gateway. The PHP script works.

    Content of .procmailrc:

        LOGFILE=$HOME/procmail.log
        VERBOSE=yes
        :0
        | php email2sms.php >> log.email2sms

    From the last sent email I have this in procmail.log:

        procmail: [97744] Mon Jan 17 09:50:40 2011
        procmail: [97744] Mon Jan 17 09:50:40 2011
        procmail: Assigning "LASTFOLDER= php email2sms.php >> log.email2sms"
        procmail: Executing " php email2sms.php >> log.email2sms"
        procmail: Notified comsat: "kannel@:/home/user/kannel/ php email2sms.php >> log.email2sms"
        From [email protected]  Mon Jan 17 09:50:40 2011
         Subject: asdf as
          Folder: php email2sms.php >> log.email2sms   2600

    But there is no new output in log.email2sms, and the script should output the subject of the email. If I sudo as the kannel user and pipe a file with a raw email to the script, it executes just fine:

        [root@webserver /home/user/kannel]# sudo -u kannel cat rawmail.txt | php email2sms.php >> log.email2sms

    and the command writes to log.email2sms as desired. Any ideas, guys?
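
    One thing worth knowing here: procmail runs recipes with a minimal environment, so a bare php and the relative paths in the recipe resolve differently than they do in an interactive shell. A hedged rewrite of the recipe using absolute paths (the php location and home dir are assumptions -- adjust to the real ones):

        LOGFILE=$HOME/procmail.log
        VERBOSE=yes
        SHELL=/bin/sh
        :0
        | /usr/local/bin/php /home/user/kannel/email2sms.php >> $HOME/log.email2sms 2>&1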


  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands (knife node list, knife client list, knife user list, etc.), and I'm able to update roles, data bags, environments and so on -- but I cannot upload any cookbooks. My workstation runs Mac OS X. I keep getting this output at the end of knife cookbook upload -VV curl; it doesn't matter which cookbook I upload, or whether I upload them all -- I get the same response:

        DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
        /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
        INFO: HTTP Request Returned 204 No Content:
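
    One thing that stands out in those paths is the client version: chef-12.0.0.alpha.1. JSONToModelOutput calling chomp on a nil body is consistent with an alpha Chef 12 client choking on the body-less 204 that an open source Chef 11 server returns. A hedged first step (the pinned version is an assumption, not from the question) is to drop the workstation back to a stable client:

        gem uninstall chef
        gem install chef -v 11.16.4
        knife --version    # confirm the workstation now runs the pinned client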


  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error:

        Cannot connect to servername\mssqlserver2.
        Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server)

    (This is the error message when connecting with Microsoft SQL Server Management Studio; other tools hit the same error but word it differently.) Immediately re-attempting to log in works just fine, so whatever the cause, it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication are supported on this instance). What's even weirder, the first instance on this server has never once demonstrated this problem.

    The server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (the host is likewise Server 2008 R2). The server has 2 GB of RAM and seems to regularly be using 90% of it -- could low memory be the cause of this issue? I could see this second instance -- which is not used very often yet -- being swapped out to disk and then taking too long to load back into memory to respond to the connection request in time. But I'd rather have more than just my own hunch before I go scheduling downtime for this server (the first instance is used regularly) and then throwing extra resources at it in the blind hope that the problem goes away.
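
    One hedged way to test the swapped-out hunch before scheduling downtime: when Windows trims a SQL Server instance's working set, the instance logs "A significant part of sql server process memory has been paged out" in its ERRORLOG. Checking whether those entries line up with the timeouts costs nothing (the path below assumes the default log location for the second instance -- adjust to the real one):

        findstr /i /c:"paged out" "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER2\MSSQL\Log\ERRORLOG"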


  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to be happening every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know below; if you need any additional information, I'll be glad to include it.

        - I can connect through the Hyper-V console.
        - I can't connect to network shares, IIS web apps, RDP, or ping.
        - Memory usage seems to be normal (3 of 4 GB).
        - Processor usage seems low.

    We don't know the exact time the server goes down, but the following error appears consistently around the time it does:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If this problem persists, please contact your domain administrator.
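
    Since the Hyper-V console still works during an outage, a few hedged checks run from the console while it is happening would narrow things down (the domain name below is a placeholder):

        ipconfig /all
        REM Can the VM still locate a domain controller?
        nltest /dsgetdc:example.local
        REM Is the machine account's secure channel intact?
        nltest /sc_verify:example.local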


  • Combining URL mapping and Access-Control-Allow-Origin: *

    - by ksangers
    I am in the process of migrating an old banner system to a new one, and in doing so I want to rewrite the old banner system's URLs to the new one. I load my banners via an AJAX request, and therefore I require Access-Control-Allow-Origin to be set to *. I have the following VirtualHost configuration:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName banner.studenten.net

            # we want to allow XMLHTTPRequests
            Header set Access-Control-Allow-Origin "*"

            RewriteEngine on
            RewriteMap bannersOldToNew txt:/home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map
            # check whether a zoneid exists in the query string
            RewriteCond %{QUERY_STRING} ^(.*)zoneid=([1-9][0-9]*)(.*)
            # make sure the requested banner has been mapped
            RewriteCond ${bannersOldToNew:%2|NOTFOUND} !=NOTFOUND
            # rewrite to ads.all4students.nl
            RewriteRule ^/ads/.* http://ads.all4students.nl/delivery/ajs.php?%1zoneid=${bannersOldToNew:%2}%3 [R]
            # else 404 or something

            ErrorLog ${APACHE_LOG_DIR}/banner.studenten.net-error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/banner.studenten.net-access.log combined
        </VirtualHost>

    My map file, /home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map, contains something like:

        # oldId newId
        140 11
        141 12
        142 13

    Based on the above configuration I was expecting the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Access-Control-Allow-Origin: *
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    But instead I get the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    Note the missing Access-Control-Allow-Origin header; this means the XMLHttpRequest is denied and the banner is not displayed. Any suggestions on how to fix this in Apache?
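
    A hedged observation: mod_headers' plain Header set only decorates responses Apache considers successful, and the 302 generated by the [R] RewriteRule isn't one of them; the always condition targets the header table that is also sent with redirects and errors. Note, too, that since the browser follows the redirect with the same XHR, the final host (ads.all4students.nl) must emit the CORS header as well for the request to succeed.

        # In the VirtualHost, instead of the plain "Header set":
        Header always set Access-Control-Allow-Origin "*"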


  • Remote assistance from Remote Desktop sessions: unable to control

    - by syneticon-dj
    Since Remote Control (aka session shadowing) is gone for good on Server 2012 Remote Desktop Session Hosts, I am looking for a replacement to support users in a cross-domain environment. Since Remote Assistance is supposed to work for Remote Desktop sessions as well, I tried leveraging it for support purposes by enabling unsolicited remote assistance for all Remote Desktop Session Hosts via Group Policy.

    All seems to be working well, except that the "expert" seems unable to actually exercise any mouse or keyboard control when the remote assistance session has been initiated from a Remote Desktop session itself. Mouse clicks and keystrokes from the "expert" session (Server 2012) are simply ignored, even after the assisted user has acknowledged the request for control.

    I would like to see this working through RD sessions for the support staff for a number of reasons:

        - not every support agent has the appropriate client system version to support users on a specific terminal server (e.g. an agent on a Windows Vista or Windows 7 station would be unable to offer assistance to users on Server 2012 RDSHs)
        - a support agent would not necessarily have a station that is a member of the specific destination domain (mainly because more than a single domain's users are supported)

    What am I missing?


  • phpMyAdmin "Missing parameter" errors

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

        db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database I receive the following errors:

        export.php: Missing parameter: what (FAQ 2.8)
        export.php: Missing parameter: export_type (FAQ 2.8)

    FAQ 2.8 from the link suggested by phpMyAdmin ("I get 'Missing parameters' errors, what can I do?") lists a few points to check:

        - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
        - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
        - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
        - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
        - If you are using Hardened-PHP, you might want to increase request limits.
        - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve running MAMP. I tried restarting the server and nothing helped. Please, if anyone can help, give me some suggestions. Apologies if I posted this in the wrong place.
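
    Since MAMP ships its own PHP, the php.ini you edited may not be the one the web server actually reads, and the last checklist item (an unwritable session.save_path) is the most common culprit. Two hedged checks -- the MAMP binary path varies by version and is illustrative here:

        # Which php.ini is loaded, and what session.save_path is in effect?
        /Applications/MAMP/bin/php/php5*/bin/php -i | grep -E 'Loaded Configuration|session.save_path'

        # Is the session directory writable at all?
        ls -ld /tmp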


  • Network policy + WPA Enterprise (TKIP), Windows 2008 R2

    - by Aceth
    Hi, I've attempted the following guide and am in a bit of a pickle: http://techblog.mirabito.net.au/?p=87

    My main goal is to have username/password based wireless authentication with Active Directory integration. I keep getting this error:

        Network Policy Server denied access to a user.
        Contact the Network Policy Server administrator for more information.

        User:
            Security ID:                    domain\rhysbeta
            Account Name:                   rhysbeta
            Account Domain:                 domain
            Fully Qualified Account Name:   domain\rhysbeta

        Client Machine:
            Security ID:                    NULL SID
            Account Name:                   -
            Fully Qualified Account Name:   -
            OS-Version:                     -
            Called Station Identifier:      00-12-BF-00-71-3C:wirelessname
            Calling Station Identifier:     00-23-76-5D-1E-31

        NAS:
            NAS IPv4 Address:               0.0.0.0
            NAS IPv6 Address:               -
            NAS Identifier:                 -
            NAS Port-Type:                  Wireless - IEEE 802.11
            NAS Port:                       2

        RADIUS Client:
            Client Friendly Name:           Belkin54g
            Client IP Address:              x.x.x.10

        Authentication Details:
            Connection Request Policy Name: Secure Wireless Connections
            Network Policy Name:            Secure Wireless Connections
            Authentication Provider:        Windows
            Authentication Server:          srvr.example.com
            Authentication Type:            EAP
            EAP Type:                       -
            Account Session Identifier:     -

        Logging Results: Accounting information was written to the local log file.
        Reason Code: 22
        Reason: The client could not be authenticated because the Extensible Authentication Protocol (EAP) Type cannot be processed by the server.

    I would love to have it so that non-domain devices
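
    Reason code 22 ("the EAP Type cannot be processed") typically means the EAP negotiation never matched what the policy offers -- commonly a missing or invalid server certificate for PEAP, or the client proposing an EAP type the network policy doesn't list. A hedged place to start is dumping what NPS actually has configured and comparing it with the client's 802.1X settings:

        netsh nps show config > nps-config.txt
        findstr /i "EAP" nps-config.txt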


  • Cisco Catalyst 3550 + Alteon 184 Load-Balancing Issues...

    - by upkels
    I have just deployed a couple Cisco Catalyst 3550 switches, and a couple Alteon 184 Web Switches for load-balancing. I can ping all RIPs and VIPs to/from the Alteon.

    Topology before: (server) <- (Alteon) <- (Internet)
    Topology now: (server) <- (3550) <- (Alteon) <- (Internet)

    Cisco port configuration (Alteon uplink port):

        description LB_1_PORT_9_PRIMARY
        switchport access vlan 10
        switchport mode access
        switchport nonegotiate
        speed 100
        duplex full

    Alteon port 9 configuration (VLAN 10, WAN):

        >> Main# /c/port 9/cur
        Current Port 9 configuration: enabled pref fast, backup gig, PVID 10, BW Contract 1024 name UPLINK
        >> Main# /c/port 9/fast/cur
        Current Port 9 Fast link configuration: speed 100, mode full duplex, fctl none, auto off

    Cisco configuration (load-balanced servers port):

        description LB_1_PORT_1_PRIMARY
        switchport access vlan 30
        switchport mode access
        switchport nonegotiate
        speed 100
        duplex full

    Alteon port 1 configuration (VLAN 30, load-balanced LAN):

        >> Main# /c/port 1/cur
        Current Port 1 configuration: enabled pref fast, backup gig, PVID 30, BW Contract 1024 name LB_PORT_1
        >> Main# /c/port 1/fast/cur
        Current Port 1 Fast link configuration: speed 100, mode full duplex, fctl both, auto on

    Each of my servers are on vlan 10 and 30, properly communicating. I have tried to turn on VLAN tagging on the Alteon, however it seems to cause all communications to stop working. When I tcpdump -i vlan30 on any of the webservers, I see normal ARP communications, and some STP communications, which may or may not be part of the problem:

        15:00:51.035882 STP 802.1d, Config, Flags [none], bridge-id 801e.00:11:5c:62:fe:80.8041, length 42
        15:00:51.493154 IP 10.1.1.254.33923 > 10.1.1.1.http: Flags [S], seq 707324510, win 8760, options [mss 1460], length 0
        15:00:51.493336 IP 10.1.1.1.http > 10.1.1.254.33923: Flags [S.], seq 3981707623, ack 707324511, win 65535, options [mss 1460], length 0
        15:00:51.493778 ARP, Request who-has 10.1.3.1 tell 10.1.3.254, length 46

    I'm not sure if I've provided enough information, so please let me know if any more is necessary. Thank you!
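
    If VLAN tagging is enabled on an Alteon port, the Catalyst port facing it has to carry 802.1Q tags too -- an access port drops tagged frames, which matches everything stopping the moment tagging is turned on. A hedged sketch of the matching 3550 side (the interface name is illustrative):

        ! Trunk the Alteon-facing port instead of leaving it in access mode:
        interface FastEthernet0/9
         description LB_1_PORT_9_PRIMARY
         switchport trunk encapsulation dot1q
         switchport trunk allowed vlan 10,30
         switchport mode trunk
         switchport nonegotiate
         speed 100
         duplex full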


  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    OK, so here's our setup: we have two Windows 2003 domain controllers, DC01 and DC02, and I am trying to replace them with Windows 2008 R2 servers, DC1 and DC2.

    I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I dcpromo'd DC1 using the advanced option and added it successfully to my existing domain; its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC, RID pool manager and Infrastructure master FSMO roles to the new DC (DC1). The Schema master and Domain naming master are still on the old DC (DC01).

    The first issue I'm encountering: when I dcpromo the second DC (DC2), select "Replicate data over the network from an existing domain controller" and choose the new DC (DC1) to replicate from, I get the following error:

        Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. "The server is unwilling to process the request."

    Is this because the Schema master and Domain naming master roles are still on the old DC (DC01)? And if so, if I transfer the Schema master and Domain naming master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent. ANY downtime or interruption will result in me getting a verbal ass-kicking from my IT Director.

    Both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not to themselves, by the way. Thanks in advance. -Chris
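
    Before moving the remaining roles, it's worth confirming that replication to DC1 is healthy and double-checking where every role currently sits -- the error reads more like DC2's machine account simply hadn't replicated to DC1 yet than like an FSMO problem. A few standard, read-only checks that are safe to run at any time:

        repadmin /replsummary
        netdom query fsmo
        dcdiag /e /v > dcdiag.txt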


  • Some HTTPS connections via NAT fail, but work on firewall itself.

    - by hnxn
    Hi, I am having trouble establishing some HTTPS connections from internal machines, even though these same connections work if initiated on the firewall itself.

    The firewall machine is running Ubuntu 10.04.1 and Shorewall 4.4.6. The internet connection is Bell PPPoE DSL (in Canada). I have tried various MTU settings; it doesn't seem to make any difference. Other protocols (HTTP, FTP, etc.) generally work. The problem seems to be limited to certain sites; this one never works from an internal machine, but always works from the firewall itself.

    From an internal machine:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:51:31--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        ^C

    From the firewall:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:58:28--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 840 [image/gif]
        Saving to: `corp_logo.gif'

        2011-01-13 20:58:28 (149 MB/s) - `corp_logo.gif' saved [840/840]

    This URL always works from both internal machines and the firewall: https://encrypted.google.com/images/logos/ssl_logo_lg.gif

    Any troubleshooting tips would be greatly appreciated!
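
    Hanging right after the TCP connect, over PPPoE, only for forwarded (not locally originated) traffic, and only for some sites, is the classic signature of an MSS/PMTU problem: the large certificate packets from some servers never fit through the smaller PPPoE MTU. A hedged thing to try is clamping the MSS on forwarded connections rather than changing interface MTUs:

        # Shorewall's built-in knob, in /etc/shorewall/shorewall.conf:
        CLAMPMSS=Yes

        # The equivalent raw iptables rule:
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu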


  • ASP.NET Web API returns 404 for PUT only on some servers

    - by Greg Bacchus
    OK, I have been racking my brain and the internet for a solution to this. I just can't figure it out.

    I have written a site that uses ASP.NET MVC Web API, all working nicely until I put it on the staging server. The site works fine on my local machine and the dev web server; both dev and staging servers are Win Server 2008 R2. The problem is this: basically the site works, but some API calls use the HTTP PUT method, and these fail on staging with a 404 while working fine elsewhere.

    The first problem I came across and fixed was in Request Filtering, but I'm still getting the 404. I turned on tracing in IIS and get the following:

        168. -MODULE_SET_RESPONSE_ERROR_STATUS
             ModuleName           IIS Web Core
             Notification         16
             HttpStatus           404
             HttpReason           Not Found
             HttpSubStatus        0
             ErrorCode            2147942402
             ConfigExceptionInfo
             Notification         MAP_REQUEST_HANDLER
             ErrorCode            The system cannot find the file specified. (0x80070002)

    The configs are the same on dev and staging; as a matter of fact, the whole site is a direct copy. Why would the GETs and POSTs work, but not the PUTs? Thanks, Greg
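
    A hedged suspect on a 2008 R2 box: the WebDAV module registers itself for PUT and DELETE and can answer 404 before the request ever maps to the Web API handler -- which would explain GET/POST working while PUT dies at MAP_REQUEST_HANDLER. A sketch of the usual web.config exclusion (verify the module/handler names against the staging IIS install):

        <system.webServer>
          <modules>
            <remove name="WebDAVModule" />
          </modules>
          <handlers>
            <remove name="WebDAV" />
          </handlers>
        </system.webServer>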


  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors.

    Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5 dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over https, I get the message:

        The plain HTTP request was sent to HTTPS port.

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
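
    A hedged reading of that error: when a server line carries no port, HAProxy forwards to whatever port the client connected on -- so requests arriving on :443 are decrypted and then sent to the web servers' port 443, where Nginx rightly complains about plain HTTP on its HTTPS port. Pinning the backend port should fix it:

        server web01 192.168.25.34:80 check inter 1s
        server web02 192.168.25.32:80 check inter 1s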


  • Getting a "403 access denied" error instead of serving file (using Django, Gunicorn, Nginx)

    - by Finglish
    I am attempting to use nginx to serve private files from Django. For the X-Accel-Redirect settings I followed this guide: http://www.chicagodjango.com/blog/permission-based-file-serving/

    Here is my site config file (/etc/nginx/sites-available/sitename):

        server {
            listen 80;
            listen 443 default_server ssl;
            server_name localhost;
            client_max_body_size 50M;

            ssl_certificate /home/user/site.crt;
            ssl_certificate_key /home/user/site.key;

            access_log /home/user/nginx/access.log;
            error_log /home/user/nginx/error.log;

            location / {
                access_log /home/user/gunicorn/access.log;
                error_log /home/user/gunicorn/error.log;
                alias /path_to/app;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://127.0.0.1:8000;
                proxy_connect_timeout 100s;
                proxy_send_timeout 100s;
                proxy_read_timeout 100s;
            }

            location /protected/ {
                internal;
                alias /home/user/protected;
            }
        }

    I then tried using the following in my Django view to test the download:

        response = HttpResponse()
        response['Content-Type'] = "application/zip"
        response['X-Accel-Redirect'] = '/protected/test.zip'
        return response

    But instead of the file download I get:

        403 Forbidden
        nginx/1.1.19

    Please note: I have removed all the personal data from the config file, so if there are any obvious mistakes not related to my error, that is probably why. My nginx error log gives me the following:

        2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"
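
    A hedged observation about the alias: inside location /protected/ the alias value ends without a slash, so /protected/test.zip maps to /home/user/protectedtest.zip, and /protected/ itself triggers the directory-index check seen in the error log. Making the trailing slashes match is the usual fix:

        location /protected/ {
            internal;
            alias /home/user/protected/;   # trailing slash must mirror the location
        }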


  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:

        - an Nginx instance
        - an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)

    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to Nginx -- which is obviously annoying and meaningless, since Nginx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/*, and there's an ACL in the HAProxy config to match the '/backends/*' path.

    Here's a simplified version of my HAProxy config (unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000

            # Default backend
            default_backend www_backend

            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor   # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing why the behaviour appears to be "random"; being hard to reproduce makes it hard to debug. I appreciate any insight on this.
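
    A hedged explanation for the "randomness": without option http-server-close (or httpclose) in the frontend or defaults, HAProxy of this era handles keep-alive connections in tunnel mode and only inspects the first request on each connection -- every later request on that connection silently reuses the first routing decision, so whether /backends/* lands on loadfire depends on which request happened to open the connection. Worth trying:

        defaults
            mode http
            option http-server-close   # route every request through the ACLs,
                                       # not just the first one per connection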


  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7; all the sites were originally running on a server with IIS6. Ever since the migration, lots of our clients are reporting error messages. There seem to be quite a number of issues related to sending emails, and we have also had the following error reported by several different clients:

        Server Error in '/' Application.
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. Is there something specific that needs to be changed for .NET sites when they are moved from a server running IIS6 to one running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated.
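
    One hedged avenue: with auto-generated machine keys, anything that serves a site from more than one worker process or regenerates the key mid-session (web gardens, differing app pool isolation on the new box) will invalidate viewstate that users already hold. The standard mitigation after a migration like this is pinning an explicit machineKey per site; the values below are placeholders to be generated, never copied:

        <system.web>
          <machineKey validationKey="[your generated 128-hex-char key]"
                      decryptionKey="[your generated 64-hex-char key]"
                      validation="SHA1" decryption="AES" />
        </system.web>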


  • DNS issue for internal website routing internet connection from remote location

    - by Michael Paul
    I have an issue that I could use some help with. Our company has a main location and a remote location. Previously, the remote location was connected to the main location through an internet VPN tunnel. The connection was pitifully slow at 1.5 Mbps, so we upgraded it to a 75 Mbps direct link. That meant the remote location lost its internet access, so we routed its access through the main office internet connection.

    Everything works perfectly except for one thing: the website we host is not accessible from the remote location unless the IP address is used. If I do NSLOOKUP on our website address from a machine connected to the main location network, it resolves correctly to the inside IP address. However, if I do the same from a remote location machine, it resolves to the website's outside IP address. Our internal DNS servers have pointer and CNAME records set up, and everything was working perfectly before the connection was upgraded. In addition, the remote location has a domain controller, DNS server and DHCP server to service these requests at the remote location and prevent them from getting routed back and forth over the link.

    So what I think is happening is that, for some reason, the DNS server at the remote location is not resolving our website name correctly and is passing the requests on to the routers, which then push the request out to the internet DNS system, which resolves the name to our external IP. This is purely a DNS issue; everything else works just fine. I am just stumped on this one. Any ideas on how to fix this?

    Edit: I forgot to mention that at the remote side of the link is a Cisco ASA 5505 and at the main office a Cisco ASA 5510. The link is connected between these two devices and the routing is handled in the 5510. Thanks, Michael
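
    If the remote DC's DNS neither hosts nor forwards the zone containing the internal override record, it will recurse out to the public internet exactly as described. Two hedged options, with illustrative zone names and addresses, run on the remote DNS server:

        REM Option 1: add the internal record to a zone the remote server hosts
        dnscmd /recordadd example.com www A 192.168.1.20

        REM Option 2: conditionally forward the zone to the main-office DNS
        dnscmd /zoneadd example.com /forwarder 192.168.0.10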


  • How do I make rsync also check ctime?

    - by Benoît
    rsync detects file modification by comparing size and mtime. However, if for any reason the mtime is unchanged, rsync won't detect the change, although it's possible to spot it by looking at the ctime. Of course, I can tell rsync to compare the whole files' contents, but that's very, very expensive. Is there a way to make rsync smarter, for example by checking that mtime+size are the same AND that ctime isn't newer than mtime (on both source and destination)? Or should I open a feature request?

    Here's an example. Create two files with the same content and atime/mtime:

        benoit@debian:~$ mkdir d1 && cd d1
        benoit@debian:~/d1$ echo Hello > a
        benoit@debian:~/d1$ cp -a a b

    Rsync them to another (non-existing) directory:

        benoit@debian:~/d1$ cd ..
        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list
        created directory d2
        ./
        a
        b
        sent 164 bytes  received 53 bytes  434.00 bytes/sec
        total size is 12  speedup is 0.06

    OK, everything is synced:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:Hello
        d2/a:Hello
        d2/b:Hello

    Update file 'b' with same-size content, then reset its atime/mtime:

        benoit@debian:~$ echo World > d1/b
        benoit@debian:~$ touch -r d1/a d1/b

    Attempt to rsync again:

        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list
        sent 63 bytes  received 12 bytes  150.00 bytes/sec
        total size is 12  speedup is 0.16

    Nope, rsync missed the change:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:Hello

    Tell rsync to compare the file content:

        benoit@debian:~$ rsync -acv d1/ d2
        sending incremental file list
        b
        sent 144 bytes  received 31 bytes  350.00 bytes/sec
        total size is 12  speedup is 0.07

    This gives the correct result:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:World
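
    rsync itself has no ctime test (ctime can't even be set or preserved), but the expensive --checksum pass can be confined to the suspicious files. A hedged sketch using GNU find against the example above: select only files whose ctime is newer than their mtime and checksum-sync just those (paths containing unusual whitespace would need more care):

        cd d1
        find . -type f -printf '%C@ %T@ %p\n' \
            | awk '$1 > $2 { $1 = $2 = ""; sub(/^ +/, ""); print }' \
            | rsync -avc --files-from=- . ../d2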


  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back end engineer, so I'm learning all of this stuff as I go along. For the longest time I've been using basic authentication for my users. Many developers advise against this approach, since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain the password. I wouldn't want to put my users' credentials at risk, so I would like to implement a more secure way of authentication. SSL seems to be the most viable option.

    My server doesn't serve any static content or anything crazy of that sort. All the server does is send and receive JSON responses to and from my iOS application. Here is my current topology:

        iOS application -> Amazon Elastic Load Balancer -> EC2 instances running HTTP Gunicorn (port 8000)

    I have a CNAME record from GoDaddy for the Amazon Elastic Load Balancer DNS. So instead of using the long DNS name to make requests, I just use server.example.com; to interact with my servers I send and receive requests to server.example.com:8000/. This setup works and has been solid.

    However, I need a more secure way. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA, or can I create my own? (Browsers will not be interacting with my servers; they are only designed to send JSON responses to my iOS application.)
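
    The usual pattern here is to terminate SSL at the ELB: an HTTPS :443 listener forwards plain HTTP to the instances' port 8000, so Gunicorn doesn't change at all. A self-signed certificate would force the iOS app to explicitly trust or pin it, while a CA-issued one just works with the system trust store. A hedged sketch with the AWS CLI (all names, files and the account ID are illustrative):

        aws iam upload-server-certificate \
            --server-certificate-name server-example-com \
            --certificate-body file://cert.pem \
            --private-key file://key.pem \
            --certificate-chain file://chain.pem

        aws elb create-load-balancer-listeners \
            --load-balancer-name my-elb \
            --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8000,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/server-example-com"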


  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a Perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AddHandler cgi-script cgi pl
            Order allow,deny
            Allow from all
        </Directory>

    The issue is with paths. The HTML calls the script here:

        <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

        <input type="hidden" name="base_path" value="../contact" />

    The path to this form is http://server-test.local/formstest/contact.htm. No matter what variation I try for base_path, I get an error from the formprocessor script that it can't find the directory:

        An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin, and linked such that the script resides at http://server-test.local/cgi-bin/formprocessorpro.pl, I would imagine that, given that the templates are in a directory called contact in the webroot, the correct path would be ../contact. Any ideas? It's been a while since I've messed with CGI. Bubnoff
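
    The catch with a relative base_path is that it resolves against the CGI process's working directory -- normally the script's own directory, /usr/lib/cgi-bin -- so ../contact points at /usr/lib/contact, not at the webroot. A hedged fix is to pass an absolute filesystem path instead (the webroot below is the Debian default; adjust to wherever 'contact' actually lives):

        <input type="hidden" name="base_path" value="/var/www/contact" />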


  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked; now using my signed one is proving fruitless. I am using Tomcat on Amazon Linux. With the signed cert/keystore, my server starts normally without errors. However, when trying to navigate to the domain, it gives me an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. My server.xml connector looks as follows:

        <Connector port="8443" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" disableUploadTimeout="true"
                   acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                   keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss!

    EDIT: After looking more into this, I've determined that nothing may be wrong with the server.xml or the listening ports. This is looking more like an actual certificate error, as a curl request gives me:

        curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    (Though I can't figure out what the "-9824" is.) When comparing this curl to another similar setup using the same wildcard certificate, the latter completes the full handshake, as expected. I believe this is now down to the protocol/cipher set defaults on JIRA servers.
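
    A cipher-mismatch page despite a clean startup often means the connector found the keystore but no usable key in it -- for example, the private key sits under an alias the connector doesn't pick up, or the keystore holds only the certificate without its key. Two hedged checks (hostnames and paths taken from the question):

        # What is actually inside the keystore? Look for 'PrivateKeyEntry'
        # and note the alias (add keyAlias="..." to the Connector if needed).
        keytool -list -v -keystore /home/ec2-user/.keystore/starchild.jks

        # What does the server actually offer on the wire?
        openssl s_client -connect jira.mywebsite.com:8443 -servername jira.mywebsite.com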


  • Is current SATA 6 gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 (yes these are desktop drives, I'm aware of that) in a Supermicro SC847 JBOD, connected via an LSI 9285. I have found a solution for the problem described below by reducing the link speed via:

        MegaCli -PhySetLinkSpeed -phy0 2 -a0
        for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done

    and rebooting. The question remains: is this typical for current 6 Gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the SFF-8088 cables come to mind) bad?

    The problem was: synchronizing the HW RAID-6, disks kept offlining. Fetching SMART values revealed that those which offlined did not increase powered-on hours anymore. That is, their firmware (CC4C) seems to crash. Digging into the matter by switching to software RAID-6, with the disks passed through, I got tons of kernel messages scattered across all disks, with 6 Gb/s:

        sd 0:0:9:0: [sdb] Sense Key : No Sense [current]
        Info fld=0x0
        sd 0:0:9:0: [sdb] Add. Sense: No additional sense information

    And finally, when a disk offlines:

        megasas: [ 5]waiting for 160 commands to complete ...
        megasas: [35]waiting for 159 commands to complete ...
        megasas: [155]waiting for 156 commands to complete ...
        megaraid_sas: pending commands remain after waiting, will reset adapter.

    An ugly controller reset happens here, then minutes later:

        megaraid_sas: Reset successful.
        sd 0:0:28:0: Device offlined - not ready after error recovery
        ...
        sd 0:0:28:0: [sdu] Unhandled error code
        sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
        sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00
        sd 0:0:28:0: [sdu] killing request

    After reducing the speed to 3 Gb/s as described above, all problems vanished.

