Search Results

Search found 15035 results on 602 pages for 'request'.

  • Proxying webmin with nginx

    - by TheLQ
    I am attempting to proxy webmin behind nginx for various reasons that are outside the scope of this question. However I've been trying for a while now and can't seem to figure it out, and I think I'm to the point where I've exhausted all the permutations of the config file I can think of. What I have now: relevant nginx config (commented out options removed, I tried many) # Proxy for webmin location /admin/quackwall-webmin { proxy_pass http://127.0.0.1:10000; # Also tried ending with /admin/quackwall-webmin proxy_set_header Host $host; } /etc/webmin/config - Relevant parts webprefix=/admin/quackwall-webmin webprefixnoredir=1 referer=(nginx domain name) Webmin itself is on the standard ports, listening on all addresses temporarily for debugging. SSL has been disabled for right now. So I make a standard request for the login page. However all the CSS and images are broken, with the standard login page returned for all of the resources. In the webmin miniserv logs I see 127.0.0.1 - - [29/Oct/2012:12:29:00 -0400] "GET /admin/quackwall-webmin/session_login.cgi HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/style.css HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/sorttable.js HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/toggleview.js HTTP/1.0" 401 2453 So all the URLs are returning 401s. Interestingly ngrep seems to show that the requests succeeded on the backend communication between nginx and webmin T 127.0.0.1:58908 -> 127.0.0.1:10000 [AP] POST /admin/quackwall-webmin/session_login.cgi HTTP/1.0..Host: (host)..Connection: close..User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0..Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..Accept-Language: en-US,en;q=0.5..Accept-Encoding: gzip, deflate..Referer: http://(host)/admin/quackwall-webmin/session_login.cgi..Cookie: testing=1..Cache-Control: max-age=0..Content-Type: application/x-www-form-urlencoded..Content-Length: 41....page=%2F&user=(user)&pass=(pass) T 127.0.0.1:10000 -> 127.0.0.1:58908 [AP] HTTP/1.0 200 Document follows.. Various other permutations of these config options and others show similar results, with the URL sent to webmin by nginx either being /admin/quackwall-webmin/session_login.cgi, /admin/quackwall-webmin//session_login.cgi, or just /session_login.cgi. All give 401 Unauthenticated responses. The same applies to all requests, even those that somewhat succeed (as in I can actually load the resources of the page). Is changing the webprefix in webmin even supported? What am I doing wrong? What else can I try?
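
    One permutation worth spelling out: with a trailing slash on both the location and proxy_pass, nginx replaces the matched prefix before forwarding, so webmin would see /session_login.cgi and webprefix would then be left unset. A hedged sketch only (port and path taken from above, extra headers are assumptions, untested):

    ```nginx
    # Sketch: trailing slashes strip /admin/quackwall-webmin before proxying,
    # so webmin's own webprefix setting would not be used in this variant.
    location /admin/quackwall-webmin/ {
        proxy_pass http://127.0.0.1:10000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    ```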

  • WSUS appears to be functioning, however errors are reported in Application Log (DSS Authentication W

    - by Richard Slater
    I re-installed WSUS a few months ago on a new server as part of a server hardware refresh. It is functioning normally downloading, authorizing and supplying patches to workstations. System Specification: HP DL360 G5 Quad 2.5 Zeon 6GB RAM Not Virtualized WSUS 3.2.7600.3226 SQL 2005 Express Windows 2003 R2 SP2 WSS SSL Enabled Every six hours five events are logged to the Application Event Log: Source | Category | ID | Description -------------------------------------------------------------------------------------- Windows Server Update Services | Web Services | 12052 | The DSS Authentication Web Service is not working. Windows Server Update Services | Web Services | 12042 | The SimpleAuth Web Service is not working Windows Server Update Services | Web Services | 12022 | The Client Web Service is not working Windows Server Update Services | Web Services | 12032 | The Server Synchronization Web Service is not working Windows Server Update Services | Web Services | 12012 | The API Remoting Web Service is not working In addition the following .NET Stack Trace is logged in C:\Program Files\Update Services\LogFiles\SoftwareDistribution.log each stack trace is identical except for the names of the services: 2009-11-27 11:56:52.757 UTC Error WsusService.10 HmtWebServices.CheckApiRemotingWebService ApiRemoting WebService WebException:System.Net.WebException: The request failed with HTTP status 403: Forbidden. at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Microsoft.UpdateServices.Internal.ApiRemoting.Ping(Int32 pingLevel) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.ExecuteSubtask(HealthMonitoringSubtask subtask, EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.Execute(EventLoggingType type) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.Execute(EventLoggingType type) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringThreadManager.Execute(Boolean waitIfNecessary, EventLoggingType loggingType) at Microsoft.UpdateServices.Internal.HealthMonitoring.RemotingChannel.PrivateLogEvents() at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext) at System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(IMessage reqMsg) at System.Runtime.Remoting.Lifetime.LeaseSink.SyncProcessMessage(IMessage msg) at System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(IMessage reqMsg) at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(Object[] args) at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg) at 
System.Runtime.Remoting.Channels.SoapServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream) at System.Runtime.Remoting.Channels.BinaryServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream) at System.Runtime.Remoting.Channels.Ipc.IpcServerTransportSink.ServiceRequest(Object state) at System.Runtime.Remoting.Channels.SocketHandler.ProcessRequestNow() at System.Runtime.Remoting.Channels.SocketHandler.BeginReadMessageCallback(IAsyncResult ar) at System.Runtime.Remoting.Channels.Ipc.IpcPort.AsyncFSCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) So far I have tried the following: Ensuring the settings were accurate as per TechNet. Checked that there was a suitable 127.0.0.1 binding in IIS. Gone through and checked the settings in IIS as per TechNet. I have discovered that you can run the command wsusutil checkhealth to force the health check to run; wsusutil can be found in C:\Program Files\Update Services\Tools. When this executes it will tell you to check the Application log.

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.: Making the first part of file names unique to aid 8.3 name generation Writing files to multiple directories Changing Intel Disk Write Caching Windows File/Printer Sharing Minimize memory used Balance Maximize data throughput for file sharing Maximize data throughput for network applications System-Advanced-Performance-Advanced NtfsDisableLastAccessUpdate - use fsutil behavior set disablelastaccess 1 disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart Enable a large size file system cache Disable paging of the kernel code IO Page Lock Limit Turn Off (or On) the Indexing Service But nothing seems to make much difference. There's a whole host of things we haven't tried yet but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup: Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button. Select the Worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes) Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB 100% Sequential Percent of Access Spec = 100% Percent Read/Write = 100% Write Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8 core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard Right-click and select 'Open Link in New Tab': ProcMon Result

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
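
    One way to narrow down where the time is going is to log per-request upstream timing on the nginx side; $request_time and $upstream_response_time are standard nginx log variables, so a minimal sketch (log name and path are illustrative) might be:

    ```nginx
    # Sketch: separate nginx queueing/transfer time from Unicorn processing time.
    log_format timing '$remote_addr "$request" $status '
                      'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;
    ```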

  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application wherein I'd like to do two seemingly incompatible operations: Route all requests through a single PHP script (a "front controller", if you will) Secure everything except API calls with HTTP basic authentication I can satisfy either requirement just fine in isolation, it's when I try to do both at once that I am blocked. For no good reason I'm trying to accomplish these requirements solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL: http://basic/api/listcars?max=10 should be sent through front.php without requiring basic auth. front.php will get /api/listcars?max=10 and do whatever it needs to with that. Here's what I think should work. In my /etc/hosts I added 127.0.0.1 basic and I am using this Apache config: <Location /> AuthType Basic AuthName "Home Secure" AuthUserFile /etc/apache2/passwords require valid-user </Location> <VirtualHost *:80> ServerName basic DocumentRoot /var/www/basic <Directory /var/www/basic> <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /front.php/$1 [QSA,L] </IfModule> </Directory> <Location /api> Order deny,allow Allow from all Satisfy any </Location> </VirtualHost> But I still always get a HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api> but this allows more than I want to past basic auth. I also tried changing the <Directory /var/www/basic> section into <Location />, but this doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion of basic auth, but none that also incorporated a front controller. I could certainly do something like handle basic auth in the front controller, but if I can have Apache do that instead I'll be able to keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works. This used to be two separate vhosts, actually. I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.
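
    For reference, one commonly cited Apache 2.2 pattern for this kind of exclusion (an untested sketch; the environment variable name is made up) is to tag API requests with SetEnvIf before the rewrite happens and let them pass access control via Allow rather than authentication:

    ```apache
    SetEnvIf Request_URI "^/api" allow_noauth
    <Location />
        AuthType Basic
        AuthName "Home Secure"
        AuthUserFile /etc/apache2/passwords
        Require valid-user
        Order Deny,Allow
        Deny from all
        Allow from env=allow_noauth
        Satisfy Any
    </Location>
    ```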

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on Redhat6 and Tomcat6 (using these instructions), but get this error when browsing to the admin section, http://localhost:8080/solr-example/admin: HTTP Status 404 - missing core name in path type Status report message missing core name in path description The requested resource (missing core name in path) is not available. http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows: tomcat6: /etc/tomcat6 Solr: /app/solr/example I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads: <?xml version="1.0" encoding="utf-8"?> <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/> </Context> I don't see anything in the logs (/var/log/tomcat6) ... the only entries in catalina.out concern the starting and stopping of tomcat6. My questions are: 1. What else do I need to do to get "Solr Admin" to work under Tomcat? 2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml ? <solr persistent="false"> adminPath: RequestHandler path to manage cores. If 'null' (or absent), cores will not be manageable via request handler <cores adminPath="/admin/cores" defaultCoreName="collection1"> <core name="collection1" instanceDir="." /> </cores> </solr> 3. How do I go about ensuring that logs are working correctly? I can't find logs that contain any mention of the 404 above. Update in response to @quanta's comment: Downloaded the former (apache-solr-3.4.0.tgz) dataDir was not set, now set to: <dataDir>${solr.data.dir:../solr/data}</dataDir> JAVA_OPTS: /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start catalina.out contains no indication of the above error
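
    For what it's worth, in a multi-core Solr 3.x layout the admin UI is typically reached per core, e.g. http://localhost:8080/solr-example/collection1/admin/ (this URL is an assumption based on the core name quoted above), and the cores are declared in solr.xml under solr home. A cleaned-up sketch of the file quoted above, with the stray comment text removed:

    ```xml
    <!-- Sketch: adminPath points at the CoreAdmin handler; each <core> maps a
         name to an instance directory relative to solr home. -->
    <solr persistent="false">
      <cores adminPath="/admin/cores" defaultCoreName="collection1">
        <core name="collection1" instanceDir="." />
      </cores>
    </solr>
    ```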

  • HTTP not working on EC2 instance with own domain name

    - by bogdanvursu
    I have this problem I've already posted on the Amazon AWS forum. Unfortunately I haven't got a clear answer I and I was hoping you guys could help. Here's the link: http://developer.amazonwebservices.com/connect/thread.jspa?messageID=198238#198207 Basically I don't know why after associating an Elastic IP address and mapping it to one of my domains, FTP an ping work fine, but HTTP does a 302 redirect to the Amazon AWS hostname I had before associating the Elastic IP address. Here's the question from the AWS forum: I have an EC2 instance with HTTP and FTP installed. They both worked. Then I associated an Elastic IP address to that instance. Then I mapped that IP address to a name which is a subdomain of a domain I own. I think it's an A name (I didn't do the mapping personally). Now FTP works and HTTP doesn't. The AWS host name before the Elastic IP association: ec2-184-73-27-8.compute-1.amazonaws.com The AWS IP address and host name after the association: 174.129.7.254 and ec2-174-129-7-254.compute-1.amazonaws.com The domain which is mapped to 174.129.7.254 using an A record is: demo.flashxml.net FTP works means that I can connect to both 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com and demo.flashxml.net. HTTP doesn't work means that a HTTP request to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com or demo.flashxml.net returns a 302 redirect to ec2-184-73-27-8.compute-1.amazonaws.com Here is my VirtualHost file: <VirtualHost *:80> DocumentRoot /home/ec2-user/public_html/wordpress ServerName demo.flashxml.net ErrorLog logs/ec2-user-error_log <Directory /home/ec2-user/public_html/wordpress> AllowOverride FileInfo Order Deny,Allow Allow from All </Directory> </VirtualHost> I finally figured out what was wrong. It's the fact that I installed Wordpress on the server using the hostname provided by Amazon. After associating the Elastic IP and updating the DNS records, the server was reachable - FTP working was the proof of that. The 302 redirect when accessing via HTTP was caused by Wordpress's hostname settings. So, what I've learned from all this was that I should setup my IP and DNS first and only after that install Wordpress or any other web app(s).
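
    To illustrate the fix described above: WordPress records its URL in the siteurl/home options at install time, and those can be overridden in wp-config.php so the old EC2 hostname no longer drives redirects. A hedged sketch using the domain from the question:

    ```php
    <?php
    // Sketch for wp-config.php: pin the site to the name mapped to the Elastic
    // IP instead of the ec2-184-... hostname stored when WordPress was installed.
    define('WP_HOME',    'http://demo.flashxml.net');
    define('WP_SITEURL', 'http://demo.flashxml.net');
    ```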

  • Long access times and errors in IIS application

    - by user55862
    I am having an issue with an IIS application (details of environment at the end of the message). The web site works great most of the time and I cannot reproduce any error in our test system. On the live system, however, with an average of 5-15 requests per second, I have a problem where some requests (about 0.05%) will take over 300 seconds to complete. The other requests complete within 5-10 seconds. It seems like all the erroneous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user but I guess that they will receive some kind of error message. I am trying to find out what can be causing this erroneous behaviour. Any ideas are welcome. I have read that there can be an MTU issue if ICMP and MTU protocols are blocked in the firewall. Does that sound reasonable? I have also read that updating to IIS 7 should do the trick. Does that sound reasonable? I think that the problem has another cause but I have no idea what. I have tried running the performance monitor, monitoring for database locks and active transaction counts. I can see some of these in the perfmon log for the MSSQL server (another machine), for example: Active transactions is sometimes peaking and sometimes for long periods Lock waits per second is sometimes peaking Transactions per second is sometimes peaking Page IO Latch wait is sometimes peaking Lock wait time (ms) is sometimes peaking But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine I can also see with perfmon that some values peak a few times during a day: Request execution time Avg disk queue length Again, I cannot see that any of these correlate to the errors in the IIS error log. In the below code I have anonymized by replacing some parts with HIDDEN The following can be seen in the access log 2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799 At the same time the following can be seen in the error log: 2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool I can tell the following about the environment: Server: Windows Server 2003 x64 SP2 running on VMWare HTTP Server: IIS v6.0 with ASP.NET 2.0.50727 Antivirus: Trend Micro OfficeScan (Is it a good idea to have this on a server?)
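
    One way to correlate the Timer_EntityBody entries with the slow requests is a query over the site's W3C logs; a hypothetical sketch, assuming Log Parser 2.2 and the default IIS log fields (run against the log directory with the IISW3C input format):

    ```sql
    -- Slowest requests by time-taken (ms) from the W3C logs
    SELECT TOP 25 date, time, cs-uri-stem, sc-status, time-taken
    FROM ex*.log
    ORDER BY time-taken DESC
    ```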

  • Guests can't access KVM host server by name although nslookup and dig return the correct record

    - by user190196
    So I have a KVM host that also runs an apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.12.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather IP address. I added the desired name record to the host's /etc/hosts file and libvirt's dnsmasq service seems to be serving that correctly to the guests since nslookup and dig successfully resolve the name on the guests: [root@localhost ~]# nslookup repo Server: 192.168.122.1 Address: 192.168.122.1#53 Name: repo Address: 192.168.122.1 [root@localhost ~]# dig repo ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;repo. IN A ;; ANSWER SECTION: repo. 0 IN A 192.168.122.1 ;; Query time: 0 msec ;; SERVER: 192.168.122.1#53(192.168.122.1) ;; WHEN: Tue Sep 17 02:10:46 2013 ;; MSG SIZE rcvd: 38 But curl/ping/etc still fail: [root@localhost ~]# curl repo curl: (6) Couldn't resolve host 'repo' While a request via ip address works: [root@localhost ~]# curl 192.168.122.1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> [...] Same with ping: [root@localhost ~]# ping repo ping: unknown host repo [root@localhost ~]# ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms ^C --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2298ms rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms I tried adding repo 192.168.122.1 to the guests' /etc/hosts files but still no dice. Also tried changing guests' /etc/nsswitch.conf with both: hosts: files dns and hosts: dns files I've read the relevant libvirt documentation and I'm not sure where else to learn more about this and be able to move forward with it.
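
    Note that nslookup and dig talk to the DNS server directly, while curl and ping go through the glibc resolver (nsswitch plus /etc/resolv.conf), so comparing the two paths usually narrows this down. A quick hedged check (the FQDN in the last line is hypothetical, only relevant if dnsmasq is configured with a local domain):

    ```bash
    getent hosts repo              # what curl/ping actually resolve (glibc resolver path)
    cat /etc/resolv.conf           # is 192.168.122.1 listed? is there a search/domain line?
    getent hosts repo.localdomain  # hypothetical FQDN, if a local domain is appended
    ```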

  • Postfix SMTP auth not working with virtual mailboxes + SASL + Courier userdb

    - by Greg K
    So I've read a variety of tutorials and how-to's and I'm struggling to make sense of how to get SMTP auth working with virtual mailboxes in Postfix. I used this Ubuntu tutorial to get set up. I'm using Courier-IMAP and POP3 for reading mail which seems to be working without issue. However, the credentials used to read a mailbox are not working for SMTP. I can see from /var/log/auth.log that PAM is being used, does this require a UNIX user account to work? As I'm using virtual mailboxes to avoid creating user accounts. li305-246 saslauthd[22856]: DEBUG: auth_pam: pam_authenticate failed: Authentication failure li305-246 saslauthd[22856]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error] /var/log/mail.log li305-246 postfix/smtpd[27091]: setting up TLS connection from mail-pb0-f43.google.com[209.85.160.43] li305-246 postfix/smtpd[27091]: Anonymous TLS connection established from mail-pb0-f43.google.com[209.85.160.43]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits) li305-246 postfix/smtpd[27091]: warning: SASL authentication failure: Password verification failed li305-246 postfix/smtpd[27091]: warning: mail-pb0-f43.google.com[209.85.160.43]: SASL PLAIN authentication failed: authentication failure I've created accounts in userdb as per this tutorial. Does Postfix also use authuserdb? What debug information is needed to help diagnose my issue? main.cf: # TLS parameters smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # SMTP parameters smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination smtp_tls_security_level = may smtpd_tls_security_level = may smtpd_tls_auth_only = no smtp_tls_note_starttls_offer = yes smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s tls_random_source = dev:/dev/urandom /etc/postfix/sasl/smtpd.conf pwcheck_method: saslauthd mech_list: plain login /etc/default/saslauthd START=yes PWDIR="/var/spool/postfix/var/run/saslauthd" PARAMS="-m ${PWDIR}" PIDFILE="${PWDIR}/saslauthd.pid" DESC="SASL Authentication Daemon" NAME="saslauthd" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd" /etc/courier/authdaemonrc authmodulelist="authuserdb" I've only modified one line in authdaemonrc and restarted the service as per this tutorial. I've added accounts to /etc/courier/userdb via userdb and userdbpw and run makeuserdb as per the tutorial. SOLVED Thanks to Jenny D for suggesting use of rimap to auth against localhost IMAP server (which reads userdb credentials). 
I updated /etc/default/saslauthd to start saslauthd correctly (this page was useful) MECHANISMS="rimap" MECH_OPTIONS="localhost" THREADS=0 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" After doing this I got the following error in /var/log/auth.log: li305-246 saslauthd[28093]: auth_rimap: unexpected response to auth request: * BYE [ALERT] Fatal error: Account's mailbox directory is not owned by the correct uid or gid: li305-246 saslauthd[28093]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server] This blog post detailed a solution by setting IMAP_MAILBOX_SANITY_CHECK=0 in /etc/courier/imapd. Then restart your courier and saslauthd daemons for config changes to take effect. sudo /etc/init.d/courier-imap restart sudo /etc/init.d/courier-authdaemon restart sudo /etc/init.d/saslauthd restart Watch /var/log/auth.log while trying to send email. Hopefully you're good!
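
    For future debugging of this kind, saslauthd can be exercised on its own with testsaslauthd (shipped with cyrus-sasl), pointed at the same socket the OPTIONS line above configures; the user name and password below are placeholders:

    ```bash
    testsaslauthd -u fred -p 'secret' -s smtp \
      -f /var/spool/postfix/var/run/saslauthd/mux
    ```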

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to setup IPv6 address auto-configuration with router advertisement daemon (radvd) on a virtual machine running CentOS 6.5. But the eth0 interface is not obtaining that prefix. I've obtained the ULA prefix from here. Contents of /etc/sysctl.conf # Kernel sysctl configuration file for Red Hat Linux # # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and # sysctl.conf(5) for more details. # Controls IP packet forwarding net.ipv4.ip_forward = 0 net.ipv6.conf.all.forwarding = 1 # Controls source route verification net.ipv4.conf.default.rp_filter = 1 # Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 # Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 # Controls whether core dumps will append the PID to the core filename. # Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 # Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 # Controls the default maxmimum size of a mesage queue kernel.msgmnb = 65536 # Controls the maximum size of a message, in bytes kernel.msgmax = 65536 # Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 Contents of /etc/radvd.conf # NOTE: there is no such thing as a working "by-default" configuration file. # At least the prefix needs to be specified. Please consult the radvd.conf(5) # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help. # # interface eth0 { AdvSendAdvert on; MinRtrAdvInterval 3; MaxRtrAdvInterval 10; AdvDefaultPreference low; AdvHomeAgentFlag off; prefix fd8a:8d9d:808f:1::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr on; }; }; Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes I've also enabled radvd at startup through chkconfig. Though I noticed that radvd is starting after interfaces are brought up. I've tried restarting the network service afterwards but still I get the following link-local address only #ip -6 addr show 1: lo: mtu 16436 inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qlen 1000 inet6 fe80::5054:ff:fe74:d746/64 scope link valid_lft forever preferred_lft forever Edit: Based on the answer given by Sander Steffann I still need clarification on some points but I'm posting here what worked. Contents of /etc/sysconfig/network NETWORKING=yes HOSTNAME=syslog-ng-server NETWORKING_IPV6=yes IPV6FORWARDING=yes Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes IPV6FORWARDING=no Removed following line from /etc/sysctl.conf net.ipv6.conf.all.forwarding = 1 Contents of /etc/radvd.conf is as previous.
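
    Two small tools make it easy to confirm whether advertisements are actually being sent and what prefix they carry (radvdump ships with radvd, rdisc6 comes from the ndisc6 package; interface name as in the question):

    ```bash
    radvdump        # on the router: prints each router advertisement as it is sent
    rdisc6 eth0     # on the client: solicits an RA and prints the prefix options received
    ```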

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck. THE SETUP *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsvr\fcgiext.dll, like it should. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all setup correctly as far as the INI's go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
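
    For comparison, a hedged sketch of fcgiext.ini as it is often shown for IIS 6: the handler section key is usually ExePath rather than ext, and if the .php mapping was added for a single web site only, the type entry may need to be qualified with that site's ID (123456 below is a placeholder; treat the whole layout as an assumption to verify against the FastCGI extension docs):

    ```ini
    [Types]
    php=PHP
    php:123456=PHP

    [PHP]
    ExePath=C:\Program Files\PHP\php-cgi.exe
    ```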

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord and I would appreciate any feedback on a better place to ask for help... In a nutshell : when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at the system startup, everything is OK. But, if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once that gunicorn_django has relaunched, every request is answered with a weird Traceback : (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here : http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration : backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration : [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice ? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special : $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.
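
    Since supervisord does not start programs through the user's shell, one thing worth ruling out is the PATH seen by the spawned gunicorn. A hedged variant of the program block that forces the virtualenv to the front (the extra environment line is the only change; exact PATH entries are assumptions):

    ```ini
    [program:gunicorn]
    directory=/home/hc/prod/hc
    command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
    ; assumption: putting the venv's bin first keeps its psycopg2/Django on the path
    environment=PATH="/home/hc/prod/venv/bin:/usr/local/bin:/usr/bin:/bin"
    user=hc
    umask=022
    autostart=true
    autorestart=true
    redirect_stderr=true
    ```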

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but get redirected to an http request with a trailing slash - which ends in connection refused - and I only want these services to be available through https. So if a link points to https://example.com/internal/errorlogs, loading https://example.com/internal/errorlogs in a browser gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads. I've tried with various trailing forward slashes appended to the proxypath and location in proxy.conf to no effect, and have also added server_name_in_redirect off; This happens on more than one app under nginx, and works in an apache reverse proxy Config files; proxy.conf location /internal { proxy_pass http://localhost:8081/internal; include proxy.inc; } .... more entries .... sites-enabled/main server { listen 443; server_name example.com; server_name_in_redirect off; include proxy.conf; ssl on; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Proto https; curl output -$ curl -I -k https://example.com/internal/errorlogs/ HTTP/1.1 200 OK Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:07 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 14327 -$ curl -I -k https://example.com/internal/errorlogs HTTP/1.1 301 Moved Permanently Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:11 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 127 Location: http://example.com/internal/errorlogs/
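
    One hedged possibility: proxy_redirect off in proxy.inc leaves the backend's http:// Location header untouched, so the 301 that adds the trailing slash escapes with the wrong scheme. A sketch with that line dropped from proxy.inc and the scheme rewritten per location (untested):

    ```nginx
    location /internal {
        proxy_pass http://localhost:8081/internal;
        include proxy.inc;   # assumes proxy_redirect off; has been removed from proxy.inc
        # rewrite backend redirects back to https before they reach the browser
        proxy_redirect http:// https://;
    }
    ```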

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with my phpmyadmin on my nginx install. When I enter <ServerIP>/phpmyadmin and logs in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff> Nginx config file: user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 2; #gzip on; include /etc/nginx/conf.d/*.conf; } Default.conf: server { listen 80; server_name _; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root /usr/share/nginx/html; try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } } (Any general tips on tidying op those config files are accepted too)
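
    If the redirect turns out to come from phpMyAdmin itself rather than nginx, one hedged workaround from that era is to pin the absolute URI in config.inc.php so the post-login redirect keeps the /phpmyadmin prefix (file path and directive are assumptions to verify against the installed phpMyAdmin version):

    ```php
    <?php
    // Sketch for /usr/share/phpmyadmin/config.inc.php (phpMyAdmin 3.x directive;
    // replace <ServerIP> with the real address).
    $cfg['PmaAbsoluteUri'] = 'http://<ServerIP>/phpmyadmin/';
    ```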

  • Torque jobs do not enter "E" state (unless "qrun")

    - by Vi.
    Jobs I add to the queue stays there in "Queued" state without attempts to be executed (unless I manually qrun them) /var/spool/torque/server_logs say just 04/11/2011 12:43:27;0100;PBS_Server;Job;16.localhost;enqueuing into batch, state 1 hop 1 04/11/2011 12:43:27;0008;PBS_Server;Job;16.localhost;Job Queued at request of test@localhost, owner = test@localhost, job name = Qqq, queue = batch The job requires just 1 CPU on 1 node. # qmgr -c "list queue batch" Queue batch queue_type = Execution total_jobs = 0 state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0 max_running = 3 acl_host_enable = True acl_hosts = localhost resources_min.ncpus = 1 resources_min.nodect = 1 resources_default.ncpus = 1 resources_default.nodes = 1 resources_default.walltime = 00:00:10 mtime = Mon Apr 11 12:07:10 2011 resources_assigned.ncpus = 0 resources_assigned.nodect = 0 kill_delay = 3 enabled = True started = True I can't set resources_assigned to nonzero because of Cannot set attribute, read only or insufficient permission resources_assigned.ncpus. When I qrun some task, this goes to mom's log: 04/11/2011 21:27:48;0001; pbs_mom;Svr;pbs_mom;LOG_DEBUG::mom_checkpoint_job_has_checkpoint, FALSE 04/11/2011 21:27:48;0001; pbs_mom;Job;TMomFinalizeJob3;job 18.localhost started, pid = 28592 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;scan_for_terminated: job 18.localhost task 1 terminated, sid=28592 04/11/2011 21:27:48;0008; pbs_mom;Job;18.localhost;job was terminated 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;top of preobit_reply 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;DIS_reply_read/decode_DIS_replySvr worked, top of while loop 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;in while loop, no error from job stat 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;obit sent to server Scheduler log (/var/spool/torque/sched_logs/20110705): 07/05/2011 21:44:53;0002; pbs_sched;Svr;Log;Log opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;TokenAct;Account file /var/spool/torque/sched_priv/accounting/20110705 opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;main;/usr/sbin/pbs_sched startup pid 16234 qstat -f: Job Id: 26.localhost Job_Name = qwe Job_Owner = test@localhost job_state = Q queue = batch server = localhost Checkpoint = u ctime = Tue Jul 5 21:43:31 2011 Error_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.e26 Hold_Types = n Join_Path = n Keep_Files = n Mail_Points = a mtime = Tue Jul 5 21:43:31 2011 Output_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.o26 Priority = 0 qtime = Tue Jul 5 21:43:31 2011 Rerunable = True Resource_List.ncpus = 1 Resource_List.neednodes = 1:ppn=1 Resource_List.nodect = 1 Resource_List.nodes = 1:ppn=1 Resource_List.walltime = 00:01:00 substate = 10 Variable_List = PBS_O_HOME=/home/test,PBS_O_LANG=en_US.UTF-8, PBS_O_LOGNAME=test, PBS_O_PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games, PBS_O_MAIL=/var/mail/test,PBS_O_SHELL=/bin/sh,PBS_SERVER=127.0.0.1, PBS_O_WORKDIR=/home/test/jscfi/default/0.738784810485275, PBS_O_QUEUE=batch,PBS_O_HOST=localhost euser = test egroup = test queue_rank = 1 queue_type = E etime = Tue Jul 5 21:43:31 2011 submit_args = run.pbs Walltime.Remaining = 6 fault_tolerant = False How to make it execute jobs automatically, without manual qrun?
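
    Jobs that sit queued until a manual qrun often point at the server-side scheduling flag or at pbs_sched/pbs_mom communication rather than the queue itself; a few hedged checks worth running:

    ```bash
    qmgr -c "print server" | grep scheduling   # expect: set server scheduling = True
    qmgr -c "set server scheduling = True"     # turn scheduler iterations back on if not
    momctl -d 3 -h localhost                   # confirm the MOM is reporting a usable node
    ```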

  • Internal/external DNS with subdomains

    - by ScottMcGready
    I've got an internal DNS server (part of OS X server) and it's acting as the main DNS server for a specific (physical) site. When it can't resolve hostnames itself, it forwards requests to Google's DNS servers. Everything works well apart from a couple of issues, which I think may be related but can't get to the bottom of. I've got a number of intranet sites setup, that people can access by going to something like: intranet.mydomainname.com selfservice.mydomainname.com These point to various servers in the building that host these sites. Whether internal or external (without VPN), I can access these sites just dandy. Where the issue comes is when I want to host, say, test.mydomainname.com on an external server it fails to resolve as the primary zone for mydomainname.com is internal. How can I get it to look up Google's DNS (or an external one) for that zone if it's not in the list? I've tried everything I can think (adding my host's nameservers etc) of but nothing seems to work fully. Also I can't access intranet sites when connected via VPN and from what I can gather - I believe this might be related to the DNS issue but just wanted to give as much information as possible. Edit The domain mydomainname.com is hosted externally and pointed at the site's public IP. From there we can forward the requests to the relevant internal server. Externally everything works, internally though any subdomain of mydomainname.com is served locally, I want it to be served from Google's DNS / externally. DNS Configuration As per a request, here's the current DNS configuration (OS X server's DNS tab). I've blurred out the .private address as it's not really relevant but it's the server's name. The colored dots are just there to link everything together. Screenshot: In an attempt to clarify this is what I want: intranet.mydomain.com -> 192.168.0.12 selfservice.mydomain.com -> 192.168.0.13 *.mydomain.com -> forward to external DNS mydomain.com -> forward to external DNS At the moment any subdomain of mydomain.com is not forwarded on (think this is because of the primary zone being mydomain.com with a NS of intranet.mydomain.com but could do with a little nod in the right direction.
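
    The usual way to get this behaviour is to make the internal server authoritative only for the specific internal names rather than for mydomain.com as a whole, so every other subdomain falls through to the forwarders. In raw BIND terms (OS X Server generates named.conf from the GUI, so this is only a sketch with assumed zone file names):

    ```conf
    // internal-only names served locally
    zone "intranet.mydomain.com"    { type master; file "db.intranet"; };
    zone "selfservice.mydomain.com" { type master; file "db.selfservice"; };
    // no zone "mydomain.com" defined here, so test.mydomain.com is resolved
    // via the forwarders (external DNS) like any other external name
    ```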

  • Detecting upload success/failure in a scripted command-line SFTP session?

    - by Will Martin
    I am writing a BASH shell script to upload all the files in a directory to a remote server and then delete them. It'll run every few hours via a CRON job. My complete script is below. The basic problem is that the part that's supposed to figure out whether the file uploaded successfully or not doesn't work. The SFTP command's exit status is always "0" regardless of whether the upload actually succeeded or not. How can I figure out whether a file uploaded correctly or not so that I can know whether to delete it or let it be? #!/bin/bash # First, save the folder path containing the files. FILES=/home/bob/theses/* # Initialize a blank variable to hold messages. MESSAGES="" ERRORS="" # These are for notifications of file totals. COUNT=0 ERRORCOUNT=0 # Loop through the files. for f in $FILES do # Get the base filename BASE=`basename $f` # Build the SFTP command. Note space in folder name. CMD='cd "Destination Folder"\n' CMD="${CMD}put ${f}\nquit\n" # Execute it. echo -e $CMD | sftp -oIdentityFile /home/bob/.ssh/id_rsa [email protected] # On success, make a note, then delete the local copy of the file. if [ $? == "0" ]; then MESSAGES="${MESSAGES}\tNew file: ${BASE}\n" (( COUNT=$COUNT+1 )) # Next line commented out for ease of testing #rm $f fi # On failure, add an error message. if [ $? != "0" ]; then ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n" (( ERRORCOUNT=$ERRORCOUNT+1 )) fi done SUBJECT="New Theses" BODY="There were ${COUNT} files and ${ERRORCOUNT} errors in the latest batch.\n\n" if [ "$MESSAGES" != "" ]; then BODY="${BODY}New files:\n\n${MESSAGES}\n\n" fi if [ "$ERRORS" != "" ]; then BODY="${BODY}Problem files:\n\n${ERRORS}" fi # Send a notification. echo -e $BODY | mail -s $SUBJECT [email protected] Due to some operational considerations that make my head hurt, I cannot use SCP. The remote server is using WinSSHD on windows, and does not have EXEC privileges, so any SCP commands fail with the message "Exec request failed on channel 0". The uploading therefore has to be done via the interactive SFTP command.
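
    Two details in the loop look relevant: the second if re-tests the exit status of the first test rather than of sftp, and sftp only returns a meaningful exit code in batch mode (it aborts and exits non-zero when a batch command fails). A hedged rework of that portion, using -b - to read the batch from stdin:

    ```bash
    echo -e "$CMD" | sftp -b - -oIdentityFile /home/bob/.ssh/id_rsa [email protected]
    STATUS=$?    # capture once; a later test of $? would see the previous test's result

    if [ "$STATUS" -eq 0 ]; then
        MESSAGES="${MESSAGES}\tNew file: ${BASE}\n"
        (( COUNT=COUNT+1 ))
        # rm "$f"
    else
        ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n"
        (( ERRORCOUNT=ERRORCOUNT+1 ))
    fi
    ```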

  • Can't access a remote server due to a mistake in a firewall rule

    - by LMIT
    I need help with a silly mistake of my own! For a long time I have had a dedicated server hosted by register.it, and I usually access this server (Windows Server 2008) remotely via Terminal Server. Today I wanted to block one site that continually sends requests to my server, so I was adding a new rule in the firewall (the native firewall on Windows Server 2008), as I have done many times. But this time, with my brain apparently asleep, I added a general rule that blocks everything! So I can't access the server anymore, no users can browse the sites, and nothing is working because this rule blocks everything. I know it is a silly mistake, no need to tell me :) so please, what can I do? The only thing my provider lets me do is reboot the server from its control panel, but that does not help me in any way because the firewall blocks me again. I have an administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru that can help me in this very bad situation? UPDATE I followed Tony's suggestion and ran NMAP to check whether any ports are open, but it looks like they are all closed: NMAP RESULT Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time NSE: Loaded 93 scripts for scanning. NSE: Script Pre-scanning. Initiating Parallel DNS resolution of 1 host. at 22:32 Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed Initiating SYN Stealth Scan at 22:33 Scanning xxx.xxx.xxx.xxx [1000 ports] SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining) SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining) Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports) Initiating Service scan at 22:34 Initiating OS detection (try #1) against xxx.xxx.xxx.xxx Retrying OS detection (try #2) against xxx.xxx.xxx.xxx Initiating Traceroute at 22:34 Completed Traceroute at 22:35, 6.27s elapsed Initiating Parallel DNS resolution of 11 hosts. at 22:35 Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed NSE: Script scanning xxx.xxx.xxx.xxx. Initiating NSE at 22:35 Completed NSE at 22:35, 0.00s elapsed Nmap scan report for xxx.xxx.xxx.xxx Host is up. All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered Too many fingerprints match this host to give specific OS details TRACEROUTE (using proto 1/icmp) HOP RTT ADDRESS 1 ... ... ... 13 ... 30 NSE: Script Post-scanning. Read data files from: D:\Program Files\Nmap OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB) Question: Can the provider access the server locally with the username and password?

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art: |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www| 3128 6999 7000 80 When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using: ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel? Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance, Regarding Eduardo Ivanec's question, here is a netstat -i any port 7000 -nn dump from x.x.x.218: 14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0 14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0 Update 2: When I run the second command, I get the following error in my browser: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php Zero Sized Reply Squid did not receive any data for this request. Your cache administrator is webmaster. Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9) remote-site is 172.16.1.224 When I do a tcpdump -i any port 7000 -nn I get the following: root@remote-site:~# tcpdump -i any port 7000 -nn tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused channel 2: open failed: connect failed: Connection refused
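
    The usual way to push a local squid's traffic into a tunnel like this is a cache_peer entry pointing at the local end of the forward, rather than iptables. A hedged sketch for squid.conf on x.x.x.224, assuming the tunnel's local end listens on 127.0.0.1:7000 there and emerges at the squid on x.x.x.218:

    ```conf
    # Sketch: chain every request through the far-side squid via the tunnel.
    cache_peer 127.0.0.1 parent 7000 0 no-query default
    never_direct allow all     # never fetch directly; always use the peer
    ```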

    Read the article

  • Multiple subnets on isc-dhcp-server using ddns with bind9

    - by legioxi
    On my network I have two subnets:
    10.100.1.0/24 - Wired/wireless
    10.100.7.0/24 - VPN
    Both subnets are served by isc-dhcp-server running on a Debian VM. This same VM runs bind9 for my DNS. ISC-DHCP-SERVER is configured to use DDNS and update BIND9 with hosts/IPs. Everything runs great until a device drops off the wired/wireless network and pops onto the VPN. When connecting on the VPN, a DHCP lease is handed out on the new subnet, but DDNS does not update BIND9. Since the device has A/TXT/PTR records, it appears ISC-DHCP-SERVER won't switch them to the new IP. The logs show:
    Connect to wireless:
    Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' A
    Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' TXT
    Nov 6 20:55:13 core-server dhcpd: DHCPACK on 10.100.1.160 to FF:FF:FF:FF:FF:FF (demo-iphone) via eth0
    Nov 6 20:55:13 core-server dhcpd: Added new forward map from demo-iphone.internal.mydomain.com to 10.100.1.160
    Nov 6 20:55:13 core-server dhcpd: Added reverse map from 160.49.21.172.in-addr.arpa. to demo-iphone.internal.mydomain.com
    Switch to VPN:
    Nov 6 20:56:34 core-server dhcpd: DHCPOFFER on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
    Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
    Nov 6 20:56:34 core-server dhcpd: DHCPREQUEST for 10.100.7.101 (10.100.1.2) from BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
    Nov 6 20:56:34 core-server dhcpd: DHCPACK on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
    Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
    Nov 6 20:56:34 core-server dhcpd: Forward map from demo-iphone.internal.mydomain.com to 10.100.7.101 FAILED: Has an address record but no DHCID, not mine.
    One thing to note is that the MAC of the device when connecting via VPN is the MAC of my Cisco ASA5512X and not the actual device. The ASA is relaying the DHCP request from the VPN client to the VM running ISC-DHCP-SERVER. Is there a way to get DDNS working in this scenario?
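    For reference, a hedged sketch of the dhcpd.conf knob that usually governs the "Has an address record but no DHCID, not mine" refusal, assuming ISC DHCP 4.x. Turning conflict detection off lets any lease overwrite an existing name, so weigh that before enabling it:

        # dhcpd.conf (global scope) - sketch
        ddns-update-style interim;
        ddns-updates on;
        # allow dhcpd to replace A/TXT records whose DHCID does not match,
        # e.g. when the same host reappears on the VPN subnet behind the ASA relay
        update-conflict-detection false;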

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and error pages are being cached; I need to force a refresh of the content.
    Long version: I've set up a very minimal squid instance running on a gateway which should not cache ANYTHING and is used solely as a domain-based web filter. Another application redirects unauthenticated users to the proxy, which then uses the deny_info option to redirect any non-whitelisted request to the login page. After the user has authenticated, a firewall rule is put in place so they are no longer sent to the proxy. The problem: when a user hits a website (xkcd.com) while unauthenticated, they get redirected to the proxy via the firewall:
    iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135
    At this point squid redirects the user to the login page using a 302 (I've also tried 307, and I've also made sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then, when the user logs into the system, they get the firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again, they still have the original redirect cached and once again get the login page. Any idea how to force these redirects NOT to be cached by the browser? Perhaps this is a problem with the browsers and not squid, but I'm not sure how to get around it. Full squid config below.
    #
    # Recommended minimum configuration:
    #
    acl manager proto cache_object
    acl localhost src 127.0.0.1/32 ::1
    acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
    acl localnet src 192.168.182.0/23 # RFC1918 possible internal network
    acl localnet src fc00::/7 # RFC 4193 local private network range
    acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
    acl https port 443
    acl http port 80
    acl CONNECT method CONNECT
    #
    # Disable Cache
    #
    cache deny all
    via off
    negative_ttl 0 seconds
    refresh_all_ims on
    #error_default_language en
    # Allow manager access only from localhost
    http_access allow manager localhost
    http_access deny manager
    # Deny access to anything other than http
    http_access deny !http
    # Deny CONNECT to other than secure SSL ports
    http_access deny CONNECT !https
    visible_hostname gate.ovatn.net
    # Disable memory pooling
    memory_pools off
    # Never use neigh cache objects for cgi-bin scripts
    hierarchy_stoplist cgi-bin ?
    #
    # URL rewrite Test Settings
    #
    #acl whitelist dstdomain "/etc/squid/domains-pre.lst"
    #url_rewrite_program /usr/lib/squid/redirector
    #url_rewrite_access allow !whitelist
    #url_rewrite_children 5 startup=0 idle=1 concurrency=0
    #http_access allow all
    #
    # Deny Info Error Test
    #
    acl whitelist dstdomain "/etc/squid/domains-pre.lst"
    deny_info http://login.domain.com/ whitelist
    #deny_info ERR_ACCESS_DENIED whitelist
    http_access deny !whitelist
    http_access allow whitelist
    http_port 39135 transparent
    ## Debug Values
    access_log /var/log/squid/access-pre.log
    cache_log /var/log/squid/cache-pre.log
    # Production Values
    #access_log /dev/null
    #cache_log /dev/null
    # Set PID file
    pid_filename /var/run/gatekeeper-pre.pid
    SOLUTION: I believe I might have found a solution to this. After days and days of trying to figure it out, I stumbled on:
    client_persistent_connections off
    server_persistent_connections off
    This did the trick. So it wasn't so much the cache as a single persistent connection messing things up. W000T!
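    For reference, a hedged way to see exactly what the browser is being told to cache is to fetch the redirect by hand from a test client behind the gateway and read the headers (curl assumed to be installed; xkcd.com is just the example domain from above):

        # while still unauthenticated - expect the 302 to the login page plus its Cache-Control/Pragma headers
        curl -s -D - -o /dev/null http://xkcd.com/
        # repeat after authenticating; if this request is clean but the browser still shows the
        # login page, the stale response is coming from the browser or a held-open connection
        # rather than from a fresh request through squid
        curl -s -D - -o /dev/null http://xkcd.com/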

    Read the article

  • How to setup nginx and a subdomain

    - by Evolutio
    I have GitLab installed on my server and it answers on all domains, e.g. git.lars-dev.de, lars-dev.de and *.lars-dev.de. How can I run GitLab only on git.lars-dev.de and another site on files.lars-dev.de? My lars-dev conf:
    server {
      listen *:80; ## listen for ipv4; this line is default and implied
      #listen [::]:80 default_server ipv6only=on; ## listen for ipv6
      root /var/www/webdata/lars-dev.de/htdocs;
      index index.html index.htm;
      server_name lars-dev.de;
      location / {
        try_files $uri $uri/ /index.html;
      }
      #error_page 500 502 503 504 /50x.html;
      #location = /50x.html {
      #  root /usr/share/nginx/www;
      #}
      # deny access to .htaccess files, if Apache's document root
      # concurs with nginx's one
      #
      #location ~ /\.ht {
      #  deny all;
      #}
    }
    and the gitlab configuration:
    upstream gitlab {
      server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
    }
    server {
      listen *:80; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
      server_name git.lars-dev.de; # e.g., server_name source.example.com;
      server_tokens off; # don't show the version number, a security best practice
      root /home/git/gitlab/public;
      # individual nginx logs for this gitlab vhost
      access_log /var/log/nginx/gitlab_access.log;
      error_log /var/log/nginx/gitlab_error.log;
      location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
      }
      # if a file is not found in the root folder, the request is proxied
      # to the upstream (gitlab unicorn)
      location @gitlab {
        proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
      }
    }
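    For reference, a hedged sketch of the split: keep the gitlab server block bound to git.lars-dev.de only, add a second block for files.lars-dev.de, and make the plain lars-dev.de block the catch-all so wildcard hosts stop falling through to gitlab (the root path below is an assumption):

        server {
          listen *:80;
          server_name files.lars-dev.de;
          root /var/www/webdata/files.lars-dev.de/htdocs; # assumed path
          index index.html index.htm;
          location / {
            try_files $uri $uri/ =404;
          }
        }
        # and in the lars-dev.de block, catch everything that matches no other server_name:
        #   listen *:80 default_server;
        #   server_name lars-dev.de;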

    Read the article

  • Windows 8.1 IRQL_NOT_LESS_OR_EQUAL with Asus PCE-n53

    - by JArsenault89
    I saw the question "Windows 8.1 RTM installation crashes", and it is the exact same problem on my machine; I have tracked it to the ASUS PCE-n53 wireless card in my desktop. Does anyone know of a workaround? The adapter worked fine in Windows 8... any ideas?
    EDIT: Crash Dump Analysis
    * Bugcheck Analysis *
    IRQL_NOT_LESS_OR_EQUAL (a)
    An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace.
    Arguments:
    Arg1: 0000000000000000, memory referenced
    Arg2: 0000000000000002, IRQL
    Arg3: 0000000000000001, bitfield :
      bit 0 : value 0 = read operation, 1 = write operation
      bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
    Arg4: fffff801ef4f1316, address which referenced memory
    Debugging Details:
    WRITE_ADDRESS: 0000000000000000
    CURRENT_IRQL: 2
    FAULTING_IP: nt!KeReleaseSpinLock+16
    fffff801`ef4f1316 f048832100 lock and qword ptr [rcx],0
    DEFAULT_BUCKET_ID: WIN8_DRIVER_FAULT
    BUGCHECK_STR: AV
    PROCESS_NAME: System
    ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre
    TRAP_FRAME: ffffd00020d45550 -- (.trap 0xffffd00020d45550)
    NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect.
    rax=0000000000000001 rbx=0000000000000000 rcx=0000000000000000
    rdx=0000000055920200 rsi=0000000000000000 rdi=0000000000000000
    rip=fffff801ef4f1316 rsp=ffffd00020d456e0 rbp=ffffd00020d45768
    r8=0000000055920222 r9=0000000035930000 r10=0000000055920222
    r11=ffffd00020d456a8 r12=0000000000000000 r13=0000000000000000
    r14=0000000000000000 r15=0000000000000000
    iopl=0 nv up ei pl zr na po nc
    nt!KeReleaseSpinLock+0x16:
    fffff801ef4f1316 f048832100 lock and qword ptr [rcx],0 ds:0000000000000000=????????????????
Resetting default scope LOCK_ADDRESS: fffff801ef6da360 -- (!locks fffff801ef6da360) Resource @ nt!PiEngineLock (0xfffff801ef6da360) Exclusively owned Contention Count = 6 Threads: ffffe000010ff040-01<* 1 total locks, 1 locks currently held PNP_TRIAGE: Lock address : 0xfffff801ef6da360 Thread Count : 1 Thread address: 0xffffe000010ff040 Thread wait : 0x1fbe LAST_CONTROL_TRANSFER: from fffff801ef5647e9 to fffff801ef558ca0 STACK_TEXT: ffffd00020d45408 fffff801ef5647e9 : 000000000000000a 0000000000000000 0000000000000002 0000000000000001 : nt!KeBugCheckEx ffffd00020d45410 fffff801ef56303a : 0000000000000001 0000000000000000 ffff0c83e3e25300 ffffd00020d45550 : nt!KiBugCheckDispatch+0x69 ffffd00020d45550 fffff801ef4f1316 : 00000000000a5890 0000000000000001 0000000000000000 ffffe00004c00000 : nt!KiPageFault+0x23a ffffd00020d456e0 fffff80003b430ad : 00000000000afe80 ffffe00004c00000 00000000000a2f80 0000000035720000 : nt!KeReleaseSpinLock+0x16 ffffd00020d45710 fffff80003ac249f : ffffe00004c00000 00000000000000a8 ffffe00004c85050 0000000000000800 : netr28x+0x840ad ffffd00020d457b0 fffff80000b76475 : ffffd00020d459e8 ffffd00020d459f0 ffffe00004ac2006 ffffe00004ac21a0 : netr28x+0x349f ffffd00020d459a0 fffff80000baa248 : ffffe00004ac2eb8 0000000000000000 ffffe00000000000 ffffe00004ac21a0 : ndis!ndisMInvokeInitialize+0x39 ffffd00020d459e0 fffff80000b74784 : 0000000000000050 ffffe00004907ba0 0000000000000000 01cecbbc328e6cde : ndis!ndisMInitializeAdapter+0x4dc ffffd00020d46050 fffff80000b74d3d : 0000000000000050 ffffe0000443e770 ffffc00000951480 ffffe00004ac21a0 : ndis!ndisInitializeAdapter+0x60 ffffd00020d460a0 fffff80000b74c14 : ffffe00004ac21a0 ffffe00004ac2050 ffffe000047ec2a0 0000000000000000 : ndis!ndisPnPStartDevice+0x89 ffffd00020d460f0 fffff80000b87695 : ffffe00004ac21a0 ffffe00004ac21a0 ffffd00020d461b0 ffffe000047ec2a0 : ndis!ndisStartDeviceSynchronous+0x58 ffffd00020d46140 fffff80000b6a760 : ffffe000047ec2a0 ffffe00004ac21a0 0000000000000000 0000000000000000 : ndis!ndisPnPIrpStartDevice+0x13471 ffffd00020d46170 fffff8000032576c : ffffe00004b11501 ffffe00004b11570 0000000000000001 fffff80000325880 : ndis!ndisPnPDispatch+0x140 ffffd00020d461e0 fffff8000030b40a : ffffe000047ec2a0 0000000000000106 ffffd00020d462f0 ffffe00004b116c0 : Wdf01000!FxPkgFdo::PnpSendStartDeviceDownTheStackOverload+0xe8 ffffd00020d46250 fffff80000305942 : 0000000000000106 ffffd00020d462f0 0000000000000105 ffffd00020d464d0 : Wdf01000!FxPkgPnp::PnpEventInitStarting+0xa ffffd00020d46280 fffff80000305a5a : ffffe00004b116c8 0000000000000002 ffffe00004b11570 ffffe00004b11600 : Wdf01000!FxPkgPnp::PnpEnterNewState+0x102 ffffd00020d46310 fffff80000305bc4 : 0000000000000000 ffffd00020d46400 ffffe00004b116a0 0000000000000000 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0xc2 ffffd00020d46390 fffff8000030c27a : 0000000000000000 ffffe00004b11570 0000000000000000 ffffe00004b11570 : Wdf01000!FxPkgPnp::PnpProcessEvent+0xe4 ffffd00020d46430 fffff80000300936 : ffffe00004b11570 ffffd00020d464c0 0000000000000000 ffffe00004a0e630 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e ffffd00020d46460 fffff800002fba18 : ffffe000047ec2a0 ffffe000047ec2a0 0000000000000000 ffffe0000486f020 : Wdf01000!FxPkgPnp::Dispatch+0xd2 ffffd00020d464d0 fffff801ef838796 : 0000000000000000 fffff801ef6aa101 0000000000000000 ffffd000208aa180 : Wdf01000!FxDevice::DispatchWithLock+0x7d8 ffffd00020d465b0 fffff801ef4d5bad : ffffe000011dc3a0 ffffd00020d46659 0000000000000000 fffff801ef7f5ba4 : nt!PnpAsynchronousCall+0x102 ffffd00020d465f0 fffff801ef838e57 : ffffe000011db8d0 
ffffe000011db8d0 ffffe00004a8d060 ffffc00002b11200 : nt!PnpStartDevice+0xc5 ffffd00020d466c0 fffff801ef838fe7 : ffffe000011db8d0 ffffe000011db8d0 0000000000000000 ffffe000011db8d0 : nt!PnpStartDeviceNode+0x147 ffffd00020d46790 fffff801ef7fd19e : ffffe000011db8d0 0000000000000001 0000000000000001 ffffe00000000001 : nt!PipProcessStartPhase1+0x53 ffffd00020d467d0 fffff801ef897b17 : ffffe000011db8d0 0000000000000001 0000000000000000 fffff801ef7ef7b2 : nt!PipProcessDevNodeTree+0x3ce ffffd00020d46a50 fffff801ef4f5033 : 0000000100000003 0000000000000000 0000000000000000 0000000000000000 : nt!PiRestartDevice+0xaf ffffd00020d46aa0 fffff801ef44565d : fffff801ef4f4c90 ffffd00020d46bd0 0000000000000000 ffffe00004a10170 : nt!PnpDeviceActionWorker+0x3a3 ffffd00020d46b50 fffff801ef4eec80 : 0000000000000000 ffffe000010ff040 ffffe000010ff040 ffffe0000035c900 : nt!ExpWorkerThread+0x2b5 ffffd00020d46c00 fffff801ef55f2c6 : ffffd00020472180 ffffe000010ff040 ffffe00000608040 ffffc00000002710 : nt!PspSystemThreadStartup+0x58 ffffd00020d46c60 0000000000000000 : ffffd00020d47000 ffffd00020d41000 0000000000000000 0000000000000000 : nt!KiStartSystemThread+0x16 STACK_COMMAND: kb FOLLOWUP_IP: netr28x+840ad fffff800`03b430ad 4533e4 xor r12d,r12d SYMBOL_STACK_INDEX: 4 SYMBOL_NAME: netr28x+840ad FOLLOWUP_NAME: MachineOwner MODULE_NAME: netr28x IMAGE_NAME: netr28x.sys DEBUG_FLR_IMAGE_TIMESTAMP: 51de7a8d FAILURE_BUCKET_ID: AV_netr28x+840ad BUCKET_ID: AV_netr28x+840ad ANALYSIS_SOURCE: KM FAILURE_ID_HASH_STRING: km:av_netr28x+840ad FAILURE_ID_HASH: {a1f86ced-f566-ac23-afeb-1aa88ea5ab8f} Followup: MachineOwner
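    For reference, since the faulting module resolves to netr28x.sys (the Ralink driver used by the ASUS PCE-N53), one hedged workaround is to swap in the driver package that worked under Windows 8. This is a sketch only; the oemXX.inf name and the download path are placeholders you would substitute after enumerating:

        rem list installed third-party driver packages and note the one for netr28x
        pnputil -e
        rem remove the package Windows 8.1 is using (replace oemXX.inf with the real name)
        pnputil -f -d oemXX.inf
        rem install the Windows 8 driver downloaded from ASUS, then reboot
        pnputil -i -a C:\Drivers\PCE-N53\netr28x.inf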

    Read the article

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is very decent (around 2s to retrieve the HTML page and 5s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated. I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to fetch only the HTML, so JS, CSS and images are all excluded; it still took 10 seconds. I have ZPanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5s), as does phpMyAdmin. Running some queries on the WordPress database directly through phpMyAdmin returns them quickly too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800MB of the 1GB physical memory in use, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve PHP performance, but it didn't have any effect. What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I want to raise the issue with my provider, but without actual data from some diagnosis I am afraid he will just blame my application.
    UPDATE with sar output (every second) while I made an HTTP request:
    02:31:29 CPU %user %nice %system %iowait %steal %idle
    02:31:30 all 0.00 0.00 0.00 0.00 0.00 100.00
    02:31:31 all 2.22 0.00 2.22 0.00 0.00 95.56
    02:31:32 all 41.67 0.00 6.25 0.00 2.08 50.00
    02:31:33 all 86.36 0.00 13.64 0.00 0.00 0.00
    02:31:34 all 75.00 0.00 25.00 0.00 0.00 0.00
    02:31:35 all 93.18 0.00 6.82 0.00 0.00 0.00
    02:31:36 all 90.70 0.00 9.30 0.00 0.00 0.00
    02:31:37 all 71.05 0.00 0.00 0.00 0.00 28.95
    02:31:38 all 14.89 0.00 10.64 0.00 2.13 72.34
    02:31:39 all 2.56 0.00 0.00 0.00 0.00 97.44
    02:31:40 all 0.00 0.00 0.00 0.00 0.00 100.00
    02:31:41 all 0.00 0.00 0.00 0.00 0.00 100.00
    My suspicion that this is an I/O-related issue also comes from the fact that a caching plugin I use to reduce the number of database queries by precompiling PHP pages actually makes things worse instead of better. It seems that file access is making things worse.
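    For reference, a few hedged commands that could separate CPU cost from disk latency on the VPS; this is a sketch only (ioping and sysstat may need installing first, and the sizes are arbitrary):

        # per-device utilisation and await times, refreshed every second for 10 samples
        iostat -x 1 10
        # raw request latency on the filesystem the site lives on
        ioping -c 10 /var/www
        # sequential write throughput with a sync at the end, to spot a slow backing store
        dd if=/dev/zero of=/tmp/iotest bs=1M count=512 conv=fdatasync && rm /tmp/iotest
        # a rough check of raw PHP CPU speed, independent of disk and MySQL
        time php -r 'for ($i = 0; $i < 10000000; $i++);'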

    Read the article

< Previous Page | 544 545 546 547 548 549 550 551 552 553 554 555  | Next Page >