Search Results

Search found 2291 results on 92 pages for 'webserver'.


  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    Ok, so I've got a box named websrv1.mydomain.com. It's a web server running Ubuntu, Apache 2, sendmail, etc. My email is outsourced to a third party, so in my DNS the MX record points to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the webserver itself (i.e. via cron or the console). From my web server, if I send an email to [email protected], it just disappears. No errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected] it's delivered locally, which is fine.
    1) Doing an nslookup shows the MX record is correct.
    2) Running /mx mydomain.com from sendmail -bt returns the correct result.
    3) Running sendmail -bv [email protected] returns:
        sudo sendmail -bv [email protected]
        [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected]
    4) Running 3,0 [email protected] returns:
        3,0 [email protected]
        canonify          input: me @ mydomain . com
        Canonify2         input: me
        Canonify2       returns: me
        canonify        returns: me
        parse             input: me
        Parse0            input: me
        Parse0          returns: me
        Parse1            input: me
        MailerToTriple    input: me
        MailerToTriple  returns: me
        Parse1          returns: $# esmtp $@ mydomain . com . $: me
        parse           returns: $# esmtp $@ mydomain . com . $: me
    So I'm at a loss. Sendmail seems to see the MX record, but it's not using it.
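
    One hedged thing to rule out (a sketch; the file paths assume a stock Ubuntu sendmail layout): if mydomain.com appears in sendmail's local host names, mail for that domain is delivered locally instead of via the MX record, regardless of what DNS says.

        # is the domain being treated as local?
        grep -i 'mydomain\.com' /etc/mail/local-host-names /etc/mail/sendmail.cf
        # if it shows up (e.g. in local-host-names or on a Cw line), remove it and restart
        sudo sed -i '/mydomain\.com/d' /etc/mail/local-host-names
        sudo service sendmail restart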

    Read the article

  • Limit Apache 2 Memory Usage

    - by UltraNurd
    I am running a hobby webserver off of an ancient Blue & White G3/300 running Debian PPC Squeeze (kernel 2.6.30). The performance is okay for a while after a restart, but it eventually gets more and more bogged down. Right now it's at 76 days uptime, and the main culprit seems to be the memory usage of 10+ apache2 processes. I think I need to lower the values for StartServers, MinSpareServers, and/or MaxSpareServers, but I'm not sure which one to adjust, and there are three sections, one for each MPM module that could be in use. How do I tell which of the following sections I need to change, and what are some reasonable values given that the box has 448 MB physical memory (a weird upgrade history of one each of 64, 128, and 256 MB sticks)?
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>
    There aren't any other instances of StartServers in my apache2.conf, but none of those mpm modules appear in mods-available or mods-enabled. Ideas? Thanks!
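
    A quick way to answer the "which section applies" part (a sketch; on Debian the MPM is compiled into the apache2 binary you installed, e.g. apache2-mpm-prefork, which is why nothing shows up under mods-enabled):

        # show the compiled-in MPM (look for prefork.c, worker.c or event.c in the output)
        apache2 -l
        apache2ctl -V | grep -i mpm
        # rough average resident memory per Apache process, in MB
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) print sum/n/1024}'

    A reasonable MaxClients is then roughly the RAM you can spare for Apache divided by that per-process average; lowering MaxSpareServers mainly limits how many idle children linger between traffic bursts.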

    Read the article

  • nginx + wordpress in /wordpress subdir

    - by nkr1pt
    I installed nginx and would like to set up WordPress as a final step. I have followed many howtos but am unable to get it working. The setup is fairly straightforward: the root dir of the webserver is /data/Sites/nkr1pt.homelinux.net. In that root dir I created a symlink to the wordpress folder in /usr/local/wordpress, so in fact all wordpress files can be accessed at /data/Sites/nkr1pt.homelinux.net/wordpress. Permissions are ok. The plan is to get wordpress working at http://sirius/wordpress; the server's name is sirius. spawn-fcgi is running and listening on port 7777. Here is the relevant config:
        server {
            listen 80;
            listen 8080;
            server_name sirius;
            root /data/Sites/nkr1pt.homelinux.net;
            passenger_enabled on;
            passenger_base_uri /redmine;
            #charset koi8-r;
            #access_log logs/access.log main;
            location ^~ /data {
                root /data/Sites/nkr1pt.homelinux.net;
                autoindex on;
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
            location ^~ /dump {
                root /data/Sites/nkr1pt.homelinux.net;
                autoindex on;
            }
            location ^~ /wordpress {
                try_files $uri $uri/ /wordpress/index.php;
            }
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:7777
            location ~ \.php$ {
                #fastcgi_split_path_info ^(/wordpress)(/.*)$;
                fastcgi_pass localhost:7777;
                #fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                #index index.php;
            }
        }
    Please note that redmine and the locations dump and data are working perfectly; it is only wordpress that I cannot get to work. Can you please help me find the correct wordpress configuration for nginx? All help is much appreciated!
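
    One thing worth checking first (hedged, but this is documented nginx behaviour): the ^~ modifier on the /wordpress prefix tells nginx not to evaluate regex locations for URLs under that prefix, so requests for /wordpress/... may never reach the ~ \.php$ block and index.php gets treated as a static file. Dropping the ^~ on that location (or nesting a fastcgi_pass block inside it) is a cheap experiment. A few diagnostics to confirm what is actually happening (the www-data user, php CLI and log path are assumptions about this setup):

        # does the symlinked path resolve, and can the FastCGI user read it?
        namei -l /data/Sites/nkr1pt.homelinux.net/wordpress/index.php
        sudo -u www-data php -r 'var_dump(is_readable("/data/Sites/nkr1pt.homelinux.net/wordpress/index.php"));'
        # request the page and watch what nginx logs about it
        curl -sv http://sirius/wordpress/ -o /dev/null
        sudo tail -n 20 /var/log/nginx/error.log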

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the Gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.
    Specs:
        Debian Lenny 64bit, ~12GHz (6x2GHz) CPU, 7520gb RAM, 160gb disk
        Kernel: 2.6.32-4-amd64
        mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))
    Other software:
        vBulletin 3.8.4
        memcached 1.2.2
        PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
        lighttpd/1.4.28 (ssl) - a light and fast webserver
    PHP and vBulletin are configured to use memcached. MySQL settings:
        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96
    Other: from the cloud's IO chart, we're averaging 100mb/s read.
        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff   cache  si  so  bi  bo  in  cs us sy id wa
         9  0  73140  36336   8968 1859160   0   0  42  15   3   2  6  1 89  5
        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573  Flush tables: 1  Open tables: 337  Queries per second avg: 61.302
    (moved from Super User)
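
    A hedged place to start (a sketch; paths assume a stock Debian MySQL 5.1 install): vBulletin 3.x is largely MyISAM, and a 10 GB database with key_buffer_size = 128M and table_cache = 96 will often read indexes from disk constantly; the Opens vs. Open tables numbers above also hint at table-cache churn.

        # total MyISAM index size, to compare against key_buffer_size
        sudo sh -c 'du -ch /var/lib/mysql/*/*.MYI | tail -n 1'
        # turn on the slow query log at runtime to catch the disk-grinding queries
        mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 2;"
        # watch whether index reads and table opens keep hitting disk
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Key_read%'; SHOW GLOBAL STATUS LIKE 'Opened_tables';"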

    Read the article

  • What's the right way to create a Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I need to state that I'm completely ignorant when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chroot'd to it. The chroot part is not so important right now; first I'd rather get anything at all working. The user should be able to upload files here and the webserver (the www-data user?) should be able to pick them up with no problem. I was able to create the user and give it the home directory /var/www/SITE (the user is "anders"). I gave him a password, and "anders" can connect to FTP just fine and upload files. But here's where things don't work: while my user can upload files to that /var/www/SITE directory, when I access the webpage in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running sudo chmod g+s /var/www/SITE/* anders -R, but this is of course not ideal; the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2 and anders is the user for it.
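
    A minimal sketch of the usual approach (assumptions: the site is served by Apache running as www-data, and the FTP daemon honours a umask setting): give the tree the www-data group with the setgid bit so new files inherit it, and make sure uploads are created group-readable.

        sudo chgrp -R www-data /var/www/SITE
        sudo find /var/www/SITE -type d -exec chmod 2775 {} +   # setgid dirs: new files inherit the group
        sudo find /var/www/SITE -type f -exec chmod 0664 {} +
        # the FTP daemon also needs a permissive umask, e.g. for vsftpd (an assumption about which daemon is in use):
        #   local_umask=002 in /etc/vsftpd.conf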

    Read the article

  • Control Panel for MySQL and PostgreSQL server

    - by jfreak53
    I am looking for a control panel for a CentOS system that will allow me to add users who can control their "own" MySQL and PostgreSQL databases. I don't want to spend the $25 a month for cPanel on a dedicated server to do this; plus cPanel comes with all the rest, like a webserver and email, that I don't need these users to have. Basically I want to be able to create users who can create their own databases and only see the ones they have created. I want to be able to control their disk space, as with most panels, and they need to be able to create their own DB users as well. Kloxo won't work as it doesn't natively support PostgreSQL. I tried straight phpMyAdmin, but it won't let the users create their own DBs unless they can also see everyone else's. Virtualmin I just can't get to work at all! I installed it, and though it works great for itself, if a user signs into Usermin they can see all DBs. If they sign into phpMyAdmin (which basically means any program that directly connects to MySQL) they see all DBs. If they log in to Virtualmin then yes, they only see theirs, but that won't work. I can't seem to think of another way to do this. I can use Webmin and Usermin directly, but phpMyAdmin again lets the users I create either see only one DB and create none, or see all DBs. So that sounds like a permission problem in MySQL and PostgreSQL.
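
    For the MySQL half at least, the underlying permission model supports this without a panel (a sketch; 'alice' and the password are hypothetical): grant each user a database-name prefix with a wildcard, and phpMyAdmin will then show that user only the databases they have privileges on and let them create new ones matching the prefix.

        mysql -u root -p -e "CREATE USER 'alice'@'localhost' IDENTIFIED BY 'changeme';
          GRANT ALL PRIVILEGES ON \`alice\_%\`.* TO 'alice'@'localhost';"

    PostgreSQL is less tidy: a role with the CREATEDB attribute can make and own its own databases, but the names of other databases remain visible in listings even when they are not connectable.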

    Read the article

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS and believe I don't have the correct DNS setup. I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen, but instead my browser cannot connect. I attempted to navigate to the site by typing my virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change, still cannot navigate to my website. From within IIS I double-checked my binding. If I click "browse *:80" I can bring up my website in IE at the http:// localhost address. If I click "browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my virtual IP, nor can I view the page if I try navigating to http:// mywebservername.cloudapp.net. I'm thinking I must fundamentally not understand how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid urls.)
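
    One thing DNS alone won't fix (hedged, but this matches how Azure IaaS VMs of that era worked): the public virtual IP belongs to the cloud service, and traffic only reaches the VM on ports that have an endpoint defined in the Azure portal, so an HTTP endpoint mapping public port 80 to private port 80 has to exist in addition to any firewall rule. From any outside machine you can separate the DNS question from the endpoint question:

        dig +short mywebsite.com A            # does the name resolve to the Azure virtual IP at all?
        nc -vz <virtual-ip> 80                # placeholder IP: does anything answer on port 80?
        curl -sI http://mywebservername.cloudapp.net/ | head -n 3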

    Read the article

  • Apache2 on Raspbian: Multiviews is enabled but not working [closed]

    - by Christian L
    I recently moved webservers, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it appears to be enabled out of the box there as well. What I am trying to do is get it to find my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same sites-available file as I did before; the file looks like this:
        <VirtualHost *:80>
            ServerName my.doma.in
            ServerAdmin [email protected]
            DocumentRoot /home/christian/www/do
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/christian/www/do>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
    From what I have read in various places while googling this issue, the negotiation module has to be enabled, so I tried to enable it:
        sudo a2enmod negotiation
    which gave me this result:
        Module negotiation already enabled
    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed to help, but please do ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?
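
    A few hedged checks (the .htaccess guess is an assumption, but with AllowOverride All a stray override is the classic way MultiViews silently disappears):

        # is the option actually active in the *running* config?
        apache2ctl -M | grep -i negotiation
        grep -Rni 'options' /home/christian/www/do/.htaccess 2>/dev/null
        # what does Apache say when the extensionless URL is requested?
        curl -sI http://my.doma.in/mobile | head -n 5
        sudo tail -n 20 /var/log/apache2/error.log

    MultiViews also only kicks in when the literal path does not exist, so a real "mobile" file or directory sitting next to mobile.php would defeat it.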

    Read the article

  • Does WebDAV even work on IIS 7? I say nay

    - by FlavorScape
    I've tried every configuration from the top 10 Stack Overflow and Server Fault results for WebDAV 405 on IIS (for the verbs PROPFIND and PUT). I'm running Server 2008 SP2 and followed all the instructions here. I'm no stranger to configuring servers, but this has gotten nowhere after 8 hours. Confirmed system.webServer in applicationHost.config:
        <add name="WebDAV" path="*" verb="PROPFIND,PROPPATCH,MKCOL,PUT,COPY,DELETE,MOVE,LOCK,UNLOCK" modules="WebDAVModule" resourceType="Unspecified" requireAccess="None" />
    Port 443 with basic auth: same issue. Tried port 80 with Windows auth: broken (405). Windows authentication: check. Added authoring rules for the default site and application: check. Not the firewall: check. Added the "Desktop Experience" role feature. Tried HTTPS with Basic Authentication on port 443: does not work. No other services like SharePoint are running: check. Confirmed the user has read/write NT-level permissions on the folder/virtual dir. Tried:
        net use * http://localhost /user:MYDOMAIN\me myPass
    and get error 1920; if I don't authenticate I get error 67. Confirmed I'm not applying request filtering to WebDAV:
        <requestFiltering>
            <fileExtensions applyToWebDAV="false" />
            <verbs applyToWebDAV="false" />
            <hiddenSegments applyToWebDAV="false" />
    The response is always:
        405 - HTTP verb used to access this page is not allowed.
        The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.
    SHOULD I JUST GIVE UP? Other questions that helped none:
        405 - ‘Method not Allowed’ adding service hosted in IIS7
        webdav on iis7.5 - simply cannot make it work
        http://studentguru.gr/b/kingherc/archive/2009/11/21/webdav-for-iis-7-on-windows-server-2008-r2.aspx
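
    A hedged way to see which layer is rejecting the verb (the hostname and credentials are placeholders): send a raw PROPFIND from any client and look at the response headers on the 405 (Server, X-Powered-By and, when present, Allow), which usually reveal whether the static-file handler or another module answered instead of WebDAVModule.

        curl -i -k -X PROPFIND 'https://myserver/' \
             --user 'MYDOMAIN\me:myPass' \
             -H 'Depth: 1' -H 'Content-Type: text/xml' \
             --data '<?xml version="1.0"?><propfind xmlns="DAV:"><allprop/></propfind>'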

    Read the article

  • Adding PHP to Apache

    - by user528451
    Where I work, we use ancient technology that belongs in a museum, and I have to get everything done through the system admins. They are telling me that in order to get PHP, they will need to upgrade the operating system as well as the Apache version.
        lcas100[67]% uname -a
        Linux lcas100 2.6.9-11.ELsmp #1 SMP Fri May 20 18:26:27 EDT 2005 i686 i686 i386 GNU/Linux
        lcas100[68]% cat /etc/*-release
        LSB_VERSION="1.3"
        Red Hat Enterprise Linux AS release 4 (Nahant)
        lcas100[75]% /ots/apache/bin/httpd -v
        Server version: Apache/1.3.31 (Unix)
        Server built:   Nov 3 2004 18:47:31
    This doesn't make sense to me, because apparently Apache 1.3.x supports PHP: http://php.net/manual/en/install.unix.apache.php Furthermore, we have another machine that runs PHP and is running the exact same OS and OS version. The reason I want it on the former machine is that it is mounted on a different file system. Lastly, they tell me that all software the Apache webserver runs will need to be reinstalled/recompiled (assuming an Apache upgrade WAS needed). I am not even sure about this. Are they full of it? Thanks
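
    For what it's worth, PHP has historically been buildable against an existing Apache 1.3 as a DSO via apxs, without upgrading the OS or recompiling the other modules (a sketch under assumptions: apxs was installed alongside that Apache build at the path shown, and a PHP release old enough to build on RHEL 4's toolchain is used):

        # from an unpacked PHP source tree, build against the existing Apache 1.3
        ./configure --with-apxs=/ots/apache/bin/apxs --with-mysql
        make && sudo make install
        # then in httpd.conf (added by make install or by hand):
        #   LoadModule php5_module libexec/libphp5.so
        #   AddType application/x-httpd-php .php
        sudo /ots/apache/bin/apachectl restart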

    Read the article

  • Why does `rpm` show 3 httpd packages, and which one provides the real httpd?

    - by Stefan Lasiewski
    I ran yum update on my CentOS 5 webserver a few days ago. Today I just noticed that I have 3 httpd-* RPMs! How can I end up with three RPMs for httpd (my other servers only have one httpd RPM)? I want to make sure that my server has a patched, updated version of /usr/sbin/httpd. How can I tell which one of these packages provides the httpd binary at /usr/sbin/httpd?
        [root@node1 ~]# rpm -q httpd
        httpd-2.2.3-76.el5.centos
        httpd-2.2.3-78.el5.centos
        httpd-2.2.3-83.el5.centos
        [root@node1 ~]# /usr/sbin/httpd -V | grep version
        Server version: Apache/2.2.3
        [root@node1 ~]# rpm -q httpd-2.2.3-76.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q httpd-2.2.3-78.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q httpd-2.2.3-83.el5.centos --list | grep -w /usr/sbin/httpd
        /usr/sbin/httpd
        /usr/sbin/httpd.event
        /usr/sbin/httpd.worker
        [root@node1 ~]# rpm -q --provides httpd | grep -w httpd
        config(httpd) = 2.2.3-76.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-76.el5.centos
        config(httpd) = 2.2.3-78.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-78.el5.centos
        config(httpd) = 2.2.3-83.el5.centos
        httpd-mmn = 20051115
        httpd = 2.2.3-83.el5.centos
    Update, answering Mark Wagner's questions:
        [root@node1 ~]# rpm -q -f /usr/sbin/httpd
        httpd-2.2.3-76.el5.centos
        httpd-2.2.3-78.el5.centos
        httpd-2.2.3-83.el5.centos
        [root@node1 ~]# rpm -V httpd-2.2.3-83.el5.centos
        S.5..... c /etc/logrotate.d/httpd
        S.5..... c /etc/rc.d/init.d/httpd
        ....L... /var/www
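
    This pattern usually means an interrupted or duplicated yum transaction left the older package records behind while the files on disk belong to the newest build (hedged; verify before removing anything). A sketch:

        # verify the on-disk binary matches the newest package's manifest (no '5' digest flag in the output = unmodified)
        rpm -V httpd-2.2.3-83.el5.centos | grep -w /usr/sbin/httpd
        # remove only the stale older package versions from the RPM database
        sudo rpm -e httpd-2.2.3-76.el5.centos httpd-2.2.3-78.el5.centos
        # or let yum-utils clean up duplicates left by the broken transaction
        sudo package-cleanup --cleandupes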

    Read the article

  • Solaris ldap Authentication

    - by Tman
    I've been having trouble getting my Solaris 10 server to authenticate against an eDirectory server. I've managed to set up my Linux (RHEL, SLES) servers to authenticate against the LDAP server, and that works fine. Here are my configuration files.
    ldapclient list:
        NS_LDAP_FILE_VERSION= 2.0
        NS_LDAP_BINDDN= cn=proxyuser,o=AEDev
        NS_LDAP_BINDPASSWD= {NS1}ecfa88f3a945c22222233
        NS_LDAP_SERVERS= 192.168.0.19
        NS_LDAP_SEARCH_BASEDN= ou=auth,o=AEDev
        NS_LDAP_AUTH= simple
        NS_LDAP_SEARCH_SCOPE= sub
        NS_LDAP_CACHETTL= 0
        NS_LDAP_CREDENTIAL_LEVEL= anonymous
        NS_LDAP_SERVICE_SEARCH_DESC= group:ou=Groups,ou=auth,o=AEDev
        NS_LDAP_SERVICE_SEARCH_DESC= shadow:ou=users,ou=auth,o=AEDev?sub?objectClass=shadowAccount
        NS_LDAP_SERVICE_SEARCH_DESC= passwd:ou=auth,o=AEDev?sub?objectClass=posixAccount
        NS_LDAP_BIND_TIME= 10
        NS_LDAP_SERVICE_AUTH_METHOD= pam_ldap:simple
    getent passwd works fine:
        root:x:0:0:Super-User:/:/sbin/sh
        daemon:x:1:1::/:
        bin:x:2:2::/usr/bin:
        sys:x:3:3::/:
        adm:x:4:4:Admin:/var/adm:
        lp:x:71:8:Line Printer Admin:/usr/spool/lp:
        uucp:x:5:5:uucp Admin:/usr/lib/uucp:
        nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
        smmsp:x:25:25:SendMail Message Submission Program:/:
        listen:x:37:4:Network Admin:/usr/net/nls:
        gdm:x:50:50:GDM Reserved UID:/:
        webservd:x:80:80:WebServer Reserved UID:/:
        postgres:x:90:90:PostgreSQL Reserved UID:/:/usr/bin/pfksh
        svctag:x:95:12:Service Tag UID:/:
        nobody:x:60001:60001:NFS Anonymous Access User:/:
        noaccess:x:60002:60002:No Access User:/:
        nobody4:x:65534:65534:SunOS 4.x NFS Anonymous Access User:/:
        tlla:x:2012:100::/home/tlla:
        test:x:2011:100::/home/test:
        thato:x:2010:100::/home/thato:
    pam.conf:
        login   auth sufficient  pam_unix_auth.so.1 #server_policy
        login   auth sufficient  /usr/lib/security/pam_ldap.so.1 try_first_pass
        login   auth required    pam_dial_auth.so.1
        rlogin  auth sufficient  pam_rhosts_auth.so.1
        rlogin  auth requisite   pam_authtok_get.so.1
        rlogin  auth required    pam_dhkeys.so.1
        rlogin  auth required    pam_unix_cred.so.1
        rlogin  auth sufficient  pam_unix_auth.so.1
        rlogin  auth sufficient  /usr/lib/security/pam_ldap.so.1 try_first_pass
        rsh     auth sufficient  pam_rhosts_auth.so.1
        rsh     auth required    pam_unix_cred.so.1
        rsh     auth sufficient  pam_unix_auth.so.1 #server_policy
        rsh     auth sufficient  /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   auth requisite   pam_authtok_get.so.1
        other   auth required    pam_dhkeys.so.1
        other   auth required    pam_unix_cred.so.1
        other   auth sufficient  pam_unix_auth.so.1
        other   auth sufficient  /usr/lib/security/pam_ldap.so.1 try_first_pass
        passwd  auth required    pam_passwd_auth.so.1
        passwd  auth sufficient  pam_unix_auth.so.1
        ssh     account sufficient pam_unix.so.1
        ssh     account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   account requisite  pam_roles.so.1
        other   account sufficient pam_unix_account.so.1
        other   account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   password required  pam_dhkeys.so.1
        other   password requisite pam_authtok_get.so.1
        other   password requisite pam_authtok_check.so.1
        other   password required  pam_authtok_store.so.1
        other   password sufficient pam_unix.so.1
        other   password sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass
    Local authentication works, but LDAP authentication doesn't.
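
    Two hedged checks from the Solaris box itself (the test account tlla is just the one visible above; adjust to a real LDAP user): with CREDENTIAL_LEVEL anonymous and pam_ldap doing a simple bind, the directory has to let the client read the user entries it needs, and the Solaris naming service has to return them the same way PAM will see them.

        # can we reach eDirectory anonymously and see the account the way pam_ldap will?
        ldapsearch -h 192.168.0.19 -b "ou=auth,o=AEDev" \
            "(&(objectClass=posixAccount)(uid=tlla))" uid loginShell homeDirectory
        # does the Solaris naming service resolve the entry through LDAP?
        ldaplist -l passwd tlla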

    Read the article

  • IIS8 behind a VPN + Windows Server 2012 - how to properly bind IP+Port

    - by ryugen
    This is my first question, so I hope I'm going to give you enough information. I'm running Windows Server 2012 within the Hyper-V environment of my Windows 8 machine. Within Windows Server 2012 I'm running a VPN tool based on OpenVPN to hide my real IP. When I run IIS 8 with the VPN disconnected it works flawlessly over the Internet (port 80 forwarded correctly), but as soon as I connect to the VPN I can't reach my site through the domain anymore. I have tried basically everything I know, which is why I'm asking you guys. I tried binding IIS 8 to the IP of my virtual ethernet card. I tried changing the priority of the NICs through the "Network and Sharing Center" via the advanced tab. I used ipconfig /flushdns in case there was something wrong in the DNS handling. Hell, I even turned off the Windows firewall. I also used a port scanner to verify the problem: the webserver is reachable on port 80 with the VPN disconnected and becomes unreachable the moment I connect. Theoretically both IPs (my regular one AND the VPN's) should be reachable, or at least not impair each other, right? Do you have any other suggestions? Do I have to route something somewhere somehow?

    Read the article

  • Unable to get HTTPS MEX endpoint to work

    - by Rahul
    I have been trying to configure WCF to work with Azure ACS. This WCF configuration has two bugs: it does not publish the MEX endpoint, and it does not invoke the custom behaviour extension (it just stopped doing that after I made some changes which I can't remember). What could possibly be wrong here?
        <configuration>
          <configSections>
            <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </configSections>
          <location path="FederationMetadata">
            <system.web>
              <authorization>
                <allow users="*" />
              </authorization>
            </system.web>
          </location>
          <system.web>
            <compilation debug="true" targetFramework="4.0">
              <assemblies>
                <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              </assemblies>
            </compilation>
          </system.web>
          <system.serviceModel>
            <services>
              <service name="production" behaviorConfiguration="AccessServiceBehavior">
                <endpoint contract="IMetadataExchange" binding="mexHttpsBinding" address="mex" />
                <endpoint address="" binding="customBinding" contract="Samples.RoleBasedAccessControl.Service.IService1" bindingConfiguration="serviceBinding" />
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="AccessServiceBehavior">
                  <federatedServiceHostConfiguration />
                  <sessionExtension/>
                  <useRequestHeadersForMetadataAddress>
                    <defaultPorts>
                      <add scheme="http" port="8000" />
                      <add scheme="https" port="8443" />
                    </defaultPorts>
                  </useRequestHeadersForMetadataAddress>
                  <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpsGetEnabled="true" />
                  <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                  <serviceDebug includeExceptionDetailInFaults="true" />
                  <serviceCredentials>
                    <!--Certificate added by FedUtil. Subject='CN=DefaultApplicationCertificate', Issuer='CN=DefaultApplicationCertificate'.-->
                    <serviceCertificate findValue="XXXXXXXXXXXXXXX" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" />
                  </serviceCredentials>
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
            <extensions>
              <behaviorExtensions>
                <add name="sessionExtension" type="Samples.RoleBasedAccessControl.Service.RsaSessionServiceBehaviorExtension, Samples.RoleBasedAccessControl.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
                <add name="federatedServiceHostConfiguration" type="Microsoft.IdentityModel.Configuration.ConfigureServiceHostBehaviorExtensionElement, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
              </behaviorExtensions>
            </extensions>
            <protocolMapping>
              <add scheme="http" binding="customBinding" bindingConfiguration="serviceBinding" />
              <add scheme="https" binding="customBinding" bindingConfiguration="serviceBinding"/>
            </protocolMapping>
            <bindings>
              <customBinding>
                <binding name="serviceBinding">
                  <security authenticationMode="SecureConversation" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10" requireSecurityContextCancellation="false">
                    <secureConversationBootstrap authenticationMode="IssuedTokenOverTransport" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10">
                      <issuedTokenParameters>
                        <additionalRequestParameters>
                          <AppliesTo xmlns="http://schemas.xmlsoap.org/ws/2004/09/policy">
                            <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
                              <Address>https://127.0.0.1:81/</Address>
                            </EndpointReference>
                          </AppliesTo>
                        </additionalRequestParameters>
                        <claimTypeRequirements>
                          <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="true" />
                          <add claimType="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" isOptional="true" />
                          <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" isOptional="true" />
                          <add claimType="http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider" isOptional="true" />
                        </claimTypeRequirements>
                        <issuerMetadata address="https://XXXXYYYY.accesscontrol.windows.net/v2/wstrust/mex" />
                      </issuedTokenParameters>
                    </secureConversationBootstrap>
                  </security>
                  <httpsTransport />
                </binding>
              </customBinding>
            </bindings>
          </system.serviceModel>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
          </system.webServer>
          <microsoft.identityModel>
            <service>
              <audienceUris>
                <add value="http://127.0.0.1:81/" />
              </audienceUris>
              <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                <trustedIssuers>
                  <add thumbprint="THUMBPRINT HERE" name="https://XXXYYYY.accesscontrol.windows.net/" />
                </trustedIssuers>
              </issuerNameRegistry>
              <certificateValidation certificateValidationMode="None" />
            </service>
          </microsoft.identityModel>
          <appSettings>
            <add key="FederationMetadataLocation" value="https://XXXYYYY.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml " />
          </appSettings>
        </configuration>
    Edit: further implementation details. I have the following behavior extension element (which is not getting invoked currently):
        public class RsaSessionServiceBehaviorExtension : BehaviorExtensionElement
        {
            public override Type BehaviorType
            {
                get { return typeof(RsaSessionServiceBehavior); }
            }

            protected override object CreateBehavior()
            {
                return new RsaSessionServiceBehavior();
            }
        }
    The namespaces and assemblies are correct in the config. There is more code involved for checking token validation, but in my opinion at least MEX should get published and CreateBehavior() should get invoked in order for me to proceed further.

    Read the article

  • Is there any way to detect when nginx has completed a graceful shutdown?

    - by Daniel Vandersluis
    I have a Ruby on Rails application which is running on Passenger and nginx, with one main webserver and multiple application servers. I am trying to update my deployment process in order to minimize (or ideally, remove) any downtime caused by the deployment. The main roadblock right now is that Passenger takes some time to restart (i.e. reload the application), so in order to get around this, I want to stagger my restarts so that only one app server gets restarted at a time. In order to do this without losing any long-running Passenger processes, I am thinking I need to gracefully shut down the app server's nginx instance, which will cause it to no longer accept new connections but continue to process the existing ones; as well, HAProxy will detect that the app server is down and route new requests to the other server. However, assuming that there is a long-running process, I am not sure how to detect when the graceful shutdown has completed so that I can start it back up. Since the shutdown is caused by sending a signal (i.e. kill -QUIT $( cat /var/run/nginx.pid )), and the kill command returns immediately, I cannot simply chain commands (i.e. kill ... && touch restarted), as the touch command will execute immediately, even if nginx hasn't completed its shutdown. Is there any good way to do this?
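
    One way that works with stock nginx signal handling (a sketch; the pid-file path is the one from the question): the master process only exits once every worker has finished its in-flight requests, so polling the old master PID is itself the completion signal.

        pid=$(cat /var/run/nginx.pid)
        kill -QUIT "$pid"
        # kill -0 just tests whether the process still exists
        while kill -0 "$pid" 2>/dev/null; do
            sleep 1
        done
        touch restarted   # only reached once the graceful shutdown has finished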

    Read the article

  • Windows Server 2003 with Apache and IIS causing random faulting and performance issues with Apache?

    - by contrebis
    I'm trying to fix a problem on a Windows Server 2003 SE install which is running IIS 6 and the Apache webserver (with PHP and MySQL). IIS sites are bound to one IP, Apache to the other. Everything seemed fine until the second IP address was added to allow a webservice to run under IIS. Symptoms: Apache now responds very slowly, even to requests for static files (often 30 seconds or more), and sporadic errors are appearing in the event logs like:
        Faulting application httpd.exe, version 2.2.14.0, faulting module php5ts.dll, version 5.2.13.13, fault address 0x000ac14f.
    I've double-checked the config files, taken account of this question/answer http://serverfault.com/questions/51230/running-iis-and-apache-on-the-same-windows-server, upped the Apache log level to debug, run TCPView to check for conflicting bindings, and upgraded to the latest Apache/PHP versions, but still no success or indication of a cause. Any suggestions on where to look, or debugging tips, would be gratefully received. I'm a web programmer, so not so familiar with Windows Server admin or the details of the networking stack. Running PHP under IIS is not an option, and hosting on another server is non-ideal.

    Read the article

  • Apache randomly loses permission to see files.

    - by arbales
    I have a server (Leopard Server, not my choice) running Apache and MySQL. Several months ago, the server began to raise "Forbidden" errors at random intervals, preventing access to a PHP application; that behavior randomly ceased. Now, several days ago I installed Passenger and deployed a Sinatra/Rack application. The application runs as a user acarneg (for example) from /Library/WebServer/Documents/presto/current/public; acarneg owns the entire structure. The _www user has access to the directory via an ACL, chmod +a "_www allow read,write,...". Everything works great! But after a randomish interval, often ~12 or ~24 hours, Passenger throws an error that also prevents the PHP application from running:
        Passenger Error #2. Cannot stat file config.ru. Permission denied.
    But the permissions haven't changed (confirmed), and all one has to do to resolve the error is sudo apachectl graceful. If the permissions aren't changing and Apache doesn't seem to have a legit problem, what is causing this mess? Why did it stop before, and why has it resumed?!? Thanks for the help!
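
    A hedged way to catch it in the act next time, before running apachectl graceful (macOS-specific commands; the paths are the ones from the question): compare what the filesystem says with what the _www user can actually do at the moment of failure, since a problem that a graceful restart fixes without touching permissions tends to live in the running processes rather than in the files.

        # ACLs and ownership as they stand when the error appears (-e prints ACL entries on OS X)
        ls -le /Library/WebServer/Documents/presto/current/public/config.ru
        # can _www stat the file right now?
        sudo -u _www /usr/bin/stat /Library/WebServer/Documents/presto/current/public/config.ru
        # which users the running Apache/Passenger processes actually belong to
        ps aux | egrep '[h]ttpd|[P]assenger' | awk '{print $1}' | sort | uniq -c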

    Read the article

  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people had been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've gotten runner.org assigned to the apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtualhost. I tried setting the redirect in the virtualhost as:
        ProxyPass / http://localhost:8080/openrunner-webapp
    All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
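
    A hedged checklist before editing further (the module names and paths are standard Apache 2.2-on-Ubuntu assumptions): proxying to HTTP needs mod_proxy_http as well as mod_proxy, the ProxyPass directive has to live inside the runner.org <VirtualHost>, and mismatched trailing slashes (mapping "/" onto ".../openrunner-webapp" without one) will glue request paths straight onto the context name. A matching ProxyPassReverse line is usually wanted too, so redirects coming back from Tomcat point at runner.org.

        # are both proxy modules loaded, and does Tomcat answer locally?
        apache2ctl -M 2>/dev/null | egrep 'proxy_module|proxy_http_module'
        curl -sI http://localhost:8080/openrunner-webapp/ | head -n 3
        # after editing the vhost, sanity-check and reload
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload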

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general overview of different IO strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying AIO structure I thought it would compete, yet nginx still wins by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought was keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
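
    One concrete way to see part of the answer rather than guess at it (a sketch; requires a local nginx and root, and the exact syscall names vary by platform and config): trace an nginx worker during a request and it typically shows an event loop (epoll_wait/accept4) plus zero-copy sendfile and writev, i.e. no per-request process, no user-space copy of the file, and request parsing that touches each byte once.

        # attach to one worker, then from another shell run: curl -s http://localhost/ -o /dev/null
        sudo strace -f -p "$(pgrep -f 'nginx: worker' | head -n 1)" \
             -e trace=epoll_wait,accept4,open,sendfile,writev,close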

    Read the article

  • How do I get a subdomain on Xampp Apache @ localhost?

    - by jasondavis
    UPDATE: I got it working now, I just had to change to [...]. The port number is important here.
    I just modified my Windows HOSTS file @ C:\Windows\System32\drivers\etc and added this to the end of it:
        127.0.0.1 images.localhost
        127.0.0.1 www.friendproject.com
        127.0.0.1 friendproject.com
    Then I modified my httpd-vhosts.conf file on Apache under XAMPP @ C:\webserver\apache\conf\extra. Under the part where it shows examples for adding a virtualhost I added this code below:
        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /htdocs/images/
            ServerName images.localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName friendproject.com/
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName www.friendproject.com/
        </VirtualHost>
    Now the problem is that when I go to any of the newly added domains in the browser I get the error below, and even worse, I now get this error even when going to http://localhost/, which worked fine before doing this. I realize I can change everything back, but I really need to at least get http://images.localhost to work. What do I do?
        Access forbidden!
        You don't have permission to access the requested directory. There is either no index document or the directory is read-protected.
        If you think this is a server error, please contact the webmaster.
        Error 403
        localhost
        07/25/09 21:20:14
        Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9

    Read the article

  • Can't ping a DNS zone on windows server 2008 R2

    - by Roberto Fernandes
    I've just configured a Windows Server 2008 R2 box, but I have a lot of problems with the DNS role. About the server configuration: name fdserver, IP address 192.168.0.10. I have a DNS zone called "fd.local"; this is my domain and it's working ok. I've created a zone called fdserver, and inside this zone a record (A) with "*" as the host. Because this is a webserver, I've configured Apache so that if you enter something like "site.fdserver" it points you to the "site" folder. This works ok, but ONLY inside the server. The server is a DNS server too, and has 3 entries: 192.168.0.10 (its own IP), 8.8.8.8 and 8.8.4.4 (the Google public DNS). Now start the problems... Most of the computers on my network CAN join the domain without problems, but they just CAN'T ping "something.fdserver". Now comes the strange thing... If I remove the two secondary entries on my DNS server (8.8.8.8 and 8.8.4.4), it obviously stops accessing websites (like microsoft.com), but then the computers CAN ping "something.fdserver". I don't know if I explained this correctly, and my English is terrible, but inside the server it all works as it is supposed to. On the workstation machines it only works if I remove the secondary DNS!! If you need any details, just ask! Thanks!
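
    A hedged way to see which resolver is answering what (nslookup exists on both the Windows clients and the server; "site" is just a placeholder name): resolvers listed alongside 192.168.0.10 are not a fallback consulted only for "unknown" names; whichever server gets asked wins, and an NXDOMAIN from Google for a private zone like fdserver is a perfectly valid answer. The usual cure is to point clients only at 192.168.0.10 and give the Windows DNS server forwarders (8.8.8.8/8.8.4.4) for everything it is not authoritative for.

        nslookup site.fdserver 192.168.0.10    # internal DNS: should return the wildcard A record
        nslookup site.fdserver 8.8.8.8         # public DNS: NXDOMAIN, it has never heard of the zone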

    Read the article

  • Howto: SaaS / PHP Application / Tenants / Security

    - by Ben Fransen
    Hi all, being completely new to the webhosting corner, I have a few questions on how to implement/set up a webserver for a SaaS application. I'm about to rent my own server for a new product (a CMS) I'm launching in two months. Developing the system wasn't that much of a wild ride for me, but implementing it correctly is. So let's say this is my situation:
        - I want to host 10 websites for 8 clients. There are 6 single sites, and two clients have two websites each that they can manage with my software.
        - The CMS must be placed on the server too; all clients connect to one system.
        - The database must be placed ...
        - Depending on the contract a client makes, the client gets some storage. How do I measure the used storage across the DB, filesystem and email?
        - Clients may not, in any case, be able to somehow get outside their directory, but from the CMS directory the CMS must be able to create files and dirs in a client's directory (for templates, image galleries, widgets, etc.).
    I was thinking about a directory structure something like this:
        ./CMS/           [all CMS files]
        ./Websites/*/    [all websites]
    My hosting provider will install updates to the OS (CentOS, latest) and the admin panel (DirectAdmin). Is there anybody with experience on this topic? Or do you have some thoughts about it? Please join the conversation since I'm completely new to this. Ben
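
    For the "how do I measure used storage" question, a small cron-able sketch (every path here is an assumption about where DirectAdmin and MySQL keep things; run as root and adjust to the real layout):

        # per-site files
        du -sh /home/*/domains/* 2>/dev/null
        # per-database size in MB
        mysql -N -e "SELECT table_schema, ROUND(SUM(data_length+index_length)/1024/1024,1) AS mb
                     FROM information_schema.tables GROUP BY table_schema;"
        # per-mailbox usage
        du -sh /home/*/imap/* 2>/dev/null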

    Read the article

  • svn using nginx Commit failed: path not found

    - by Alaa Alomari
    I have set up an SVN server behind my nginx webserver. My nginx configuration is:
        server {
            listen 80;
            server_name svn.mysite.com;
            location / {
                access_log off;
                proxy_pass http://svn.mysite.com:81;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
    Now, I can svn co and svn up normally without any problem, but when I try to commit I get an error:
        $ svn up
        At revision 1285.
        $ svn info
        Path: .
        URL: http://svn.mysite.com/elpis-repo/crons
        Repository Root: http://svn.mysite.com/elpis-repo
        Repository UUID: 5303c0ba-bda0-4e3c-91d8-7dab350363a1
        Revision: 1285
        Node Kind: directory
        Schedule: normal
        Last Changed Author: alaa
        Last Changed Rev: 1280
        Last Changed Date: 2012-04-29 10:18:34 +0300 (Sun, 29 Apr 2012)
        $ svn st
        M       config.php
        $ svn ci -m "Just a test, add blank line to config" config.php
        Sending        config.php
        svn: Commit failed (details follow):
        svn: File 'config.php' is out of date
        svn: '/elpis-repo/!svn/bc/1285/crons/config.php' path not found
    If I svn co on port 81 (my proxy_pass, which is Apache) and then svn ci, it works smoothly. So why doesn't it work when I go through nginx? Any idea is highly appreciated.
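
    The usual suspects when SVN works against the backend port but breaks only through a proxy are the extra WebDAV pieces a commit uses: methods such as MERGE, COPY and MOVE, the Destination header (which carries the public URL and may need forwarding or rewriting by the proxy), and request bodies sent chunked. A hedged first check from the client side (the log path is an assumption; note the single quotes, which keep the ! away from shell history expansion):

        curl -s -o /dev/null -w '%{http_code}\n' 'http://svn.mysite.com/elpis-repo/!svn/bc/1285/crons/config.php'
        curl -s -o /dev/null -w '%{http_code}\n' 'http://svn.mysite.com:81/elpis-repo/!svn/bc/1285/crons/config.php'
        # if the two codes differ, compare the Apache access-log entries for the two requests
        sudo tail -f /var/log/apache2/access.log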

    Read the article

  • New router messed up server 2003 setup...

    - by Aceth
    Hey, we were sent a new 2Wire router today and configured it as best we can to match the old BT Voyager. We've also got X static IPs. We've managed to get our webserver onto one of the new IPs, public facing. Then we use a hardware firewall, which sits in a DMZ, again with a different static IP; this firewall is the gateway for our internal LAN, with a few servers etc. The problem we're having is that only our PDC (the primary domain controller, which has Exchange 2003 on it) can't ping externally, not even an external IP. We've connected laptops to the 2Wire router, obtained a private IP (192.168.1.x) and it works fine, can ping etc. Our other servers with an internal IP behind the firewall can ping out fine. We've connected to the firewall's logging console and the pings from the server are allowed through, so it's fine there. The server in question is Windows Server 2003 R2 Enterprise SP2 + Exchange 2003. The server doesn't have its firewall turned on, it has a static private IP, its gateway is pointing to the right one, and its external static IP is routing fine inwards. We've run out of ideas... help??

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480 GB SSDs in RAID 1 and another 2x1 TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.
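
    A back-of-envelope endurance sanity check (every number here is an assumption: roughly 3000 P/E cycles for the 520's consumer MLC NAND, a write-amplification factor of about 2, and the 43 GB/day figure above; RAID 10 across three mirrored pairs would also split the incoming writes per pair by about three):

        # days of flash life per drive if one drive absorbed all the writes (very rough)
        echo $(( (256 * 3000) / (43 * 2) ))     # ~8900 days, i.e. decades, so write volume is not the constraint

    The more practical questions with this card tend to be its cache policy and whether TRIM is passed through to drives in an array (many RAID setups of that era did not), which is worth raising with the host rather than assuming.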

    Read the article
