Search Results

Search found 8284 results on 332 pages for 'trusted sites'.


  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, for my intellectual curiosity. I read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, the DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from the requests for users all over the world?

    Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites?

    Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and javascript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action, or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
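
    On the DNS angle, one practical thing you can do is look at what your local resolver actually hands back for a name. The sketch below uses only Python's standard library and the hostnames mentioned above; it won't prove or disprove anycast (anycast re-uses a single advertised IP from many locations), but it does show whether a name is answered with one address or a rotating set:

        import socket

        def a_records(hostname):
            """IPv4 addresses the local resolver returns for a hostname."""
            infos = socket.getaddrinfo(hostname, 80, family=socket.AF_INET, type=socket.SOCK_STREAM)
            return sorted({info[4][0] for info in infos})

        # Different resolvers (or repeated queries against a CDN name) may hand back
        # different answers for the same name; a single answer doesn't rule out anycast.
        for name in ("wikipedia.org", "profile.ak.fbcdn.net"):
            try:
                print(name, a_records(name))
            except socket.gaierror as err:
                print(name, "lookup failed:", err)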


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean urls and it's mostly working except for some weird redirects. If you have two sites where one is a substring of the other then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end of string character ($) at the end of each rewrite condition then it will always match on the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in though, because the query string is appended to the url, so just the base url will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean urls should also fix the problem but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean url rewriting, but that also seems suboptimal since it will generate a higher load on the server, and the server is intended to host the majority of the university's external facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
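
    For what it's worth, the root cause can be reproduced outside Apache: ^/drupal is a bare prefix match, so it also matches /drupaltest/... . Anchoring the prefix at a path-segment boundary, e.g. ^/drupal(/|$), stops it bleeding into the longer name without requiring an exact-match $, and since %{REQUEST_URI} does not include the query string it should not be affected by logins appending one. The snippet below is only a sanity check of that matching logic with made-up paths, not a tested Apache config:

        import re

        # Prefix patterns similar to the RewriteCond lines above.
        loose = re.compile(r"^/drupal")          # also matches /drupaltest/...
        anchored = re.compile(r"^/drupal(/|$)")  # stops at a path-segment boundary

        test_paths = ["/drupal", "/drupal/node/1", "/drupaltest", "/drupaltest/node/1"]
        for path in test_paths:
            print(f"{path:22} loose={bool(loose.match(path))}  anchored={bool(anchored.match(path))}")

    The same (/|$) anchoring should carry over to mod_rewrite's PCRE patterns, though I'd verify it on the server before rolling it out.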


  • Site resolves fine without "www", "www" creates database error

    - by PatrickS
    Working with BOA (Barracuda / Octopus / Aegir), I've installed a few Drupal sites without any problems, following the same process for all. BOA is running on Nginx. All sites go through Cloudflare's network, where I set the same DNS settings:

        A    example.com    points to IPADDRESS
        A    www            points to IPADDRESS

    The nameservers of each domain point to Cloudflare's respective nameservers. It all works, except for one site that works perfectly without "www" but with "www" returns the typical database error Drupal shows when it can't find the site's database:

        Site off-line
        The site is currently not available due to technical problems. Please try again later. Thank you for your understanding.
        If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider.

    In BOA, all sites have the same alias, basically a symlink, redirecting like this:

        www.example.com -> example.com


  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

        - They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found.
        - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
        - Server is reaching Max Clients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
        - I have some Wordpress sites with plugins getting errors like:
              PHP Warning: pack(): Type H: illegal hex digit G in...
              PHP Fatal error: Cannot use object of type stdClass as array in...
              PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
              PHP Fatal error: Call to undefined function file_exists() in...
              PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps


  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60 Mbps. Site B: Chicago, 100 Mbps.

    ICMP pings:

        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
        Minimum = 74ms, Maximum = 94ms, Average = 75ms

    File transfers between the sites never go past ~7 Mbps, while a Windows Update download runs at 60 Mbps+. Site to site: IPSec VPN using two Cisco 5520's, CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why it is performing so slowly when transferring between the two sites. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7 Mbps. When the WAN was first set up, I was able to get transfers at 50-60 Mbps, which is what is expected given the WAN connection at Site A is 60 Mbps. Then a few days later, I was not able to get anything going faster than ~7 Mbps. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60 Mbps.

    Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
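
    The ~7 Mbps ceiling is suspiciously close to what a stock 64 KB TCP window can carry at 75 ms RTT, which is what you would see if TCP window scaling stopped working across the path (some firewall inspection settings can strip or mangle the window-scale option). A back-of-the-envelope check, not a diagnosis, using the numbers from the ping output above:

        # Rough TCP throughput ceiling imposed by window size:
        #   max throughput ~= window_size / round_trip_time
        rtt_s = 0.075                 # ~75 ms average RTT from the pings above
        window_bytes = 64 * 1024      # classic 64 KB window (no window scaling)

        ceiling_bps = window_bytes * 8 / rtt_s
        print(f"ceiling with 64 KB window: {ceiling_bps / 1e6:.1f} Mbit/s")   # ~7 Mbit/s

        # Window needed to fill a 60 Mbit/s path at this RTT (bandwidth-delay product):
        link_bps = 60e6
        bdp_bytes = link_bps / 8 * rtt_s
        print(f"window needed for 60 Mbit/s: {bdp_bytes / 1024:.0f} KB")      # ~550 KB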


  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301's which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server, so we can test whether all sites work properly. I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                              # Not Valid
        Redirect 301 /main.html /                                      # Not Valid
        Redirect 301 /directory/ http://www.example.com/target/        # Valid
        Redirect 301 /main.html http://www.example.com/                # Valid

    This contradicts the Apache documentation for Apache 2.2, which states:

        The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added.

    Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?


  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests. A blank response would be returned - no headers, no content, nothing. I found this related question on Stack Overflow which seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue:

        <?php
        class A {
            public function __construct() {
                new B;
            }
        }

        class B {
            public function __construct() {
                new A;
            }
        }

        new A;
        print 'Loaded Class A';
        ?>

    This triggered the problem - the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No CPU block as you'd expect, no wait, just an almost instant zero response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted:

        Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000]
        Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000]

    All extremely odd, and I'm not sure what it's pointing to or what I should do to debug this further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that - it's just an example. Any help is much appreciated!
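
    As an aside, the test case is just unbounded mutual recursion: each constructor constructs the other class forever, which exhausts the C stack in PHP and shows up as a segfault rather than a PHP-level error. Purely for comparison, here is the same shape of code in Python, which turns it into a catchable RecursionError instead of crashing. It's an analogy to illustrate what the snippet does, not a fix for the PHP build:

        class A:
            def __init__(self):
                B()          # constructing A constructs B ...

        class B:
            def __init__(self):
                A()          # ... which constructs A again, forever

        try:
            A()
        except RecursionError as err:
            print("caught:", err)   # Python raises instead of segfaulting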


  • Why am I getting this error in the logs?

    - by Matt
    Ok, so I just started a new Ubuntu Server 11.10 instance, added the vhost, and all seems ok. I also restarted Apache, but when I visit the site in a browser I get a blank page. The server IP is http://23.21.197.126/, but when I tail the log (tail -f /var/log/apache2/error.log) I get:

        [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs
        [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs

    But my only file in sites-enabled is this:

        <VirtualHost 23.21.197.126:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            # ServerAlias
            DocumentRoot /srv/crm/current/public
            ErrorLog /srv/crm/logs/error.log
            <Directory "/srv/crm/current/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Is there something I am missing? The document root should be /srv/crm/current/public, not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this?

    UPDATE: sudo apache2ctl -S

        VirtualHost configuration:
        23.21.197.126:80       is a NameVirtualHost
                 default server logicxl.com (/etc/apache2/sites-enabled/crm:1)
                 port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1)
        Syntax OK

    UPDATE:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            DocumentRoot /srv/crm/current/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /srv/crm/current/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
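
    If it helps narrow things down: a request that arrives without a Host header matching your ServerName can land in a different vhost, or fall back to the compiled-in default DocumentRoot, which would explain /etc/apache2/htdocs showing up in the log. A quick client-side probe, a sketch using only the standard library and the IP/ServerName from above:

        import urllib.error
        import urllib.request

        def probe(url, host_header=None):
            """Fetch a URL, optionally forcing the HTTP Host header, and report what comes back."""
            req = urllib.request.Request(url)
            if host_header:
                req.add_header("Host", host_header)
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    print(url, host_header, "->", resp.status, resp.read(60))
            except urllib.error.HTTPError as err:
                print(url, host_header, "->", "HTTP", err.code)
            except urllib.error.URLError as err:
                print(url, host_header, "->", "failed:", err.reason)

        probe("http://23.21.197.126/")                              # whichever vhost catches bare-IP requests
        probe("http://23.21.197.126/", host_header="logicxl.com")   # should land in the logicxl.com vhost

    If both requests come back with the /etc/apache2/htdocs error, the vhost file probably isn't being applied at all, rather than being matched incorrectly.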


  • How can I prevent Apache from asking for credentials on a non-SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication for any directory with _secured_ as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI. The problem is that the http (80) site asks for credentials first, and then the https (443) site asks for credentials again. I would like to prevent the http site from asking and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites. Here are the pertinent declarations:

    httpd.conf (all sites):

        <DirectoryMatch "_secured_">
            AuthType Basic
            AuthName "+ + + Restrcted Area on Server + + +"
            AuthUserFile /home/websvr/.auth/std.auth
            Require valid-user
        </DirectoryMatch>

    site.conf (specific to the individual site):

        <DirectoryMatch "_secured_">
            RewriteEngine On
            RewriteRule .*(_secured_.*) https://site.com/$1
        </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the http site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to None - I prefer to handle things in the config files instead of .htaccess.
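
    On the clear-text point: Basic credentials are only base64-encoded, so whatever a user types into the port-80 prompt is readable by anyone who can see the request. A tiny illustration with made-up credentials:

        import base64

        # What a browser puts in the Authorization header for Basic auth.
        user, password = "alice", "s3cret"          # hypothetical credentials
        header_value = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()
        print(header_value)                          # 'Basic YWxpY2U6czNjcmV0'

        # Anyone who can see the HTTP request can trivially reverse it:
        print(base64.b64decode(header_value.split()[1]).decode())   # 'alice:s3cret'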


  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem:

        Source control operation failed: svn: OPTIONS of 'https://trunkURL':
        Server certificate verification failed: issuer is not trusted

    So I tried the following solution: run the CC.NET service (server running as a Windows service) using a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by running svn log/list on the repo. It doesn't help :(. I am getting the following from my artifact/log files (or the dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed:
        issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks.


  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (ie School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

        1. Host the .crx on a server and click "Specify a Custom App" in the management console.
        2. Create a Domain App by uploading a zip to the Chrome Web Store.
        3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester", or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension will also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the Specify a Custom App feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
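
    One thing that is easy to get wrong on the self-hosted route is the Content-Type the server returns for the .crx file (Chrome expects application/x-chrome-extension). If you want to rule that out, a throwaway local test server is enough to check what your packaged file looks like over HTTP. This is only a sketch, with the port and file location as placeholders:

        import http.server
        import socketserver

        class CrxHandler(http.server.SimpleHTTPRequestHandler):
            # Make sure .crx files are served with the MIME type Chrome expects.
            extensions_map = {
                **http.server.SimpleHTTPRequestHandler.extensions_map,
                ".crx": "application/x-chrome-extension",
            }

        # Serves the current directory (put your packaged extension .crx here) on port 8000.
        with socketserver.TCPServer(("", 8000), CrxHandler) as httpd:
            httpd.serve_forever()

    Comparing the headers this returns against what your production web server returns for the same file can quickly show whether the MIME type or something else is the difference.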


  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally. I enabled Kerberos logging in the registry, and this is the relevant log that I get on the machine that has the encrypted files, for every certificate that the user possesses (only the Key Name changes):

        Event ID 5058: Audit Success, "Other System Events"

        Key file operation.
        Subject:
            Security ID:    {MyDOMAIN}\{MyID}
            Account Name:   {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID:       0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name:  Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type:       User key.
        Key File Operation Information:
            File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
            Operation: Read persisted key from file.
            Return Code: 0x0

        Event ID 5061: Audit Failure, "System Integrity"

        Cryptographic operation.
        Subject:
            Security ID:    {MyDOMAIN}\{MyID}
            Account Name:   {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID:       0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name:  Microsoft Software Key Storage Provider
            Algorithm Name: RSA
            Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type:       User key.
        Cryptographic Operation:
            Operation:   Open Key.
            Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function?

        NTE_BAD_KEY_STATE 0x8009000BL
        The user password has changed since the private keys were encrypted.

    The problem is that the users I am using at the moment cannot change their password.


  • IIS 6 Windows Authentication in ASP.Net app fails

    - by Kjensen
    I am trying to install an ASP.NET app on an IIS 6 webserver. The site requires the user to authenticate with Windows, and this works on several other apps on the same server. In IIS I have enabled anonymous access and Windows authentication. In web.config, authentication is set to:

        <authentication mode="Windows"/>

    and authorization:

        <authorization>
          <allow roles="Users"/>
          <deny users="*"/>
        </authorization>

    i.e. allow all users in the role "Users" and deny everybody else. This is the approach that is working with several other apps on the same server. If I run the site, I am prompted for username and password. If I remove the line:

        <deny users="*"/>

    I can access the site and everything works - but the user credentials are not passed to the site (Page.User.Identity.Name returns a blank string in ASP.NET). The site has identical (inherited) file permissions to the other working sites on the server. The only difference in authentication/authorization between this site and the other working sites is that this one runs ASP.NET 4 (but there are other working ASP.NET 4 sites on the server as well). What am I missing here? Where should I look?


  • Cannot Start Passenger 3.0.18 Using Mountain Lion (OS X Server) and RVM

    - by LightBe Corp
    I recently did a clean install of Mountain Lion on my Mac Mini Server. I installed Passenger 3.0.18 using a gem, according to the directions on http://www.phusionpassenger.com, with no errors that I could see:

        rvmsudo gem install passenger-enterprise-server-3.0.18.gem
        rvmsudo passenger-install-apache2-module

    Here are my entries in /etc/apache2/httpd.conf, with my username masked:

        LoadModule passenger_module /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18
        PassengerRuby /Users/username/.rvm/wrappers/ruby-1.9.3-p327/ruby

    I uncommented the following statement:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Here is a sample virtual host entry; I have three of them in the file:

        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias mydomain.com
            PassengerAppRoot /Users/username/Sites/myfolder/
            DocumentRoot /Users/username/Sites/myfolder/public
            <Directory /Users/username/Sites/myfolder/public>
                Allow from all
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I have restarted Apache several times. Here is information from my server:

        [~]$ ps -ef | grep Passenger
          501 18804   303   0 12:39PM ttys000    0:00.00 grep Passenger
        [~]$ rvmsudo passenger-status
        Password:
        **ERROR: Phusion Passenger doesn't seem to be running.**
        [~]$ rvmsudo passenger-config --version
        3.0.18

    I have tried doing online searches on this. I was surprised that there was not all that much on this specific error, even though, from my understanding, Passenger has been around for a few years. I have posted this issue on the Phusion Passenger Google Group but have not heard anything. Any help would be appreciated, the sooner the better LOL. Seriously, I need to have one of my three websites up by tomorrow evening. This is the only issue stopping that from happening. Thanks again.


  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred webservers behind load balancers, hosting many different sites with a plethora of applications (over which I have no control). About once every month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers, using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented these, and how? (See the sketch after this list for a rough version of the first option.)

        1. Limit (with iptables) outgoing TCP packets on the webserver with source port != 80.
        2. The same, but with queueing (tc).
        3. Rate limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially 1000's of different users per application server. Maybe this: how can I limit per-user bandwidth?
        4. Anything else?

    Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
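
    For what it's worth, option 1 can be prototyped with two iptables rules: rate-limit outbound SYNs whose source port is not a web port, and drop the excess, so normal replies from :80/:443 are untouched while a flood script opening thousands of connections per second gets throttled. The sketch below just applies such rules via subprocess; the interface name, ports, and limits are assumptions to adapt, rule order matters, it needs root, and it should obviously be tried on a test box first:

        import subprocess

        IFACE = "eth0"          # assumed egress interface
        WEB_PORTS = "80,443"    # traffic sourced from these ports is normal replies to clients

        rules = [
            # Allow a modest rate of outbound SYNs that do NOT originate from the web ports
            # (admin tools, package updates, outbound API calls, ...).
            ["-A", "OUTPUT", "-o", IFACE, "-p", "tcp", "--syn",
             "-m", "multiport", "!", "--sports", WEB_PORTS,
             "-m", "limit", "--limit", "20/second", "--limit-burst", "40",
             "-j", "ACCEPT"],
            # Drop whatever exceeds that rate.
            ["-A", "OUTPUT", "-o", IFACE, "-p", "tcp", "--syn",
             "-m", "multiport", "!", "--sports", WEB_PORTS,
             "-j", "DROP"],
        ]

        for rule in rules:
            subprocess.run(["iptables"] + rule, check=True)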


  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4, and I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error:

        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx:
        Server alias 'XXX, not found.

    This corresponds to the MonoSetServerAlias statement. So I commented it out, and when I do that Apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80, rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this:

        This is the configuration file for the XXX virtualhost
        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999


  • How can I set up nginx to serve virtual hosts with Rails (Unicorn/Passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using sites like this one to guide me. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;
            location / {
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;
            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a Unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com. I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent 2 days (nearly 30 hours) trying many different blogs and SO/SF questions. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.


  • 404 not found error for virtual host

    - by qubit
    Hello. In my /etc/apache2/sites-enabled I have a file site2.com.conf, which defines a virtual host as follows:

        <VirtualHost *:80>
            ServerAdmin hostmaster@wharfage
            ServerName site2.com
            ServerAlias www.site2.com site2.com
            DirectoryIndex index.html index.htm index.php
            DocumentRoot /var/www
            LogLevel debug
            ErrorLog /var/log/apache2/site2_error.log
            CustomLog /var/log/apache2/site2_access.log combined
            ServerSignature Off
            <Location />
                Options -Indexes
            </Location>
            Alias /favicon.ico /srv/site2/static/favicon.ico
            Alias /static /srv/site2/static
            # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
            Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media
            WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
            WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
            WSGIProcessGroup site2
        </VirtualHost>

    I do the following to enable the site:

        1. In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf
        2. I get a message that the site was successfully enabled, and then I run /etc/init.d/apache2 reload.

    But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions: 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.


  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work in a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers. This means that we need infrastructure for Drupal (PHP/MySQL) applications. This must have been solved a million times already; I believe this is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it would make sense to look at the task as if we're going to run an internal small-scale web-hosting company. This means that the guys in IT operations could be "responsible" for "offering" web hosting, while developers could use these "services". We have three environments: dev(elopment), test and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they'd like. Production sites should be available through the same system, but only to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers. I know there's no "this is it" answer to my question, but what would be a good place to start to get going with this? Apart from the actual hardware, what would be a good administration system for this?


  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

        - They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found.
        - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
        - Server is reaching Max Clients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
        - I have some Wordpress sites with plugins getting errors like:
              PHP Warning: pack(): Type H: illegal hex digit G in...
              PHP Fatal error: Cannot use object of type stdClass as array in...
              PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
              PHP Fatal error: Call to undefined function file_exists() in...
              PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
        Type: Linux


  • Nginx Server Block Not Working? - Already running other vhosts, just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for 5 or so sites. I've just tried adding another, but for some reason it's just not working. By not working I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.

    Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;
            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
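
    Since curl works on the server itself but Chrome can't connect from the client, the first thing I'd separate is name resolution versus TCP reachability as seen from the client machine (note that hosts-file entries don't support wildcards, so a line like *.example.com may not be doing what it looks like). A small client-side check, with the hostname as a placeholder:

        import socket

        host, port = "subdomain.example.com", 80   # placeholder name from the question

        # Step 1: what does this machine resolve the name to?
        try:
            addr = socket.gethostbyname(host)
            print(f"{host} resolves to {addr}")
        except socket.gaierror as err:
            raise SystemExit(f"DNS/hosts lookup failed: {err}")

        # Step 2: can we actually open a TCP connection to port 80?
        try:
            with socket.create_connection((addr, port), timeout=5):
                print("TCP connect OK - the problem is higher up (vhost, app, cache)")
        except OSError as err:
            print(f"TCP connect failed: {err} - firewall, routing, or nginx not listening on this IP")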


  • Separate external and intranet portals using the same functions (.htaccess)

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built on our company's product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites, and:

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. First, we need to write our .htaccess rules so that the user can access the functionality whether they are internal or external. So the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    or:

        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This brings up the other problem: as you can see from the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this with .htaccess as well. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code posted above; I am a front-end developer and have been left to make these changes with no prior experience of .htaccess, so please bear with me.
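
    If it helps to see the mapping in isolation: the rewrite being described is essentially "replace a leading /internal/ with /intranet/ before the request reaches the procedure". In a .htaccess context that would typically look something like RewriteRule ^internal/(.*)$ /intranet/$1 [L], though treat that as an untested sketch. The same substitution expressed as a plain regex, just to make the pattern easy to test:

        import re

        # Map the public "/internal/..." form onto the real "/intranet/..." procedure path.
        internal_prefix = re.compile(r"^/internal/")

        links = [
            "/wd_portal_cand.menu?p_web_site_id=1&p_candidate_id=2",
            "/internal/wd_portal_cand.menu?p_web_site_id=1&p_candidate_id=2",
        ]
        for link in links:
            rewritten = internal_prefix.sub("/intranet/", link)
            print(f"{link}  ->  {rewritten}")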


  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi. I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for... a lot... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints, and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage SQL 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)
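
    For what it's worth, here is a rough sketch of that sequence for a single database, driving sqlcmd from Python. Server names, share path, and port are placeholders; the partner address in SET PARTNER usually needs the fully qualified host name; and there is no error handling beyond sqlcmd's -b flag, so treat it as a starting point rather than something production-ready:

        import subprocess

        # Assumed names/paths - adjust per site.
        PRINCIPAL = "PRODSQL01"
        MIRROR = "BACKUPSQL01"
        DATABASE = "ExampleDB"
        SHARE = r"\\BACKUPSQL01\MirrorStaging"      # staging share reachable by both servers
        PORT = 5022

        def run_tsql(server, tsql):
            """Run a T-SQL batch on a server with sqlcmd (-E = Windows auth, -b = fail on error)."""
            subprocess.run(["sqlcmd", "-S", server, "-E", "-b", "-Q", tsql], check=True)

        # 1. Full + log backup on the principal, written to the shared staging folder.
        run_tsql(PRINCIPAL, f"BACKUP DATABASE [{DATABASE}] TO DISK = N'{SHARE}\\{DATABASE}.bak' WITH INIT")
        run_tsql(PRINCIPAL, f"BACKUP LOG [{DATABASE}] TO DISK = N'{SHARE}\\{DATABASE}.trn' WITH INIT")

        # 2. Restore both on the mirror WITH NORECOVERY so it can join a mirroring session.
        run_tsql(MIRROR, f"RESTORE DATABASE [{DATABASE}] FROM DISK = N'{SHARE}\\{DATABASE}.bak' WITH NORECOVERY, REPLACE")
        run_tsql(MIRROR, f"RESTORE LOG [{DATABASE}] FROM DISK = N'{SHARE}\\{DATABASE}.trn' WITH NORECOVERY")

        # 3. Mirroring endpoints (needed once per server, not per database).
        endpoint = (f"IF NOT EXISTS (SELECT 1 FROM sys.database_mirroring_endpoints) "
                    f"CREATE ENDPOINT Mirroring STATE = STARTED "
                    f"AS TCP (LISTENER_PORT = {PORT}) FOR DATABASE_MIRRORING (ROLE = PARTNER)")
        for server in (PRINCIPAL, MIRROR):
            run_tsql(server, endpoint)

        # 4. Set partners: mirror first, then principal.
        run_tsql(MIRROR, f"ALTER DATABASE [{DATABASE}] SET PARTNER = 'TCP://{PRINCIPAL}:{PORT}'")
        run_tsql(PRINCIPAL, f"ALTER DATABASE [{DATABASE}] SET PARTNER = 'TCP://{MIRROR}:{PORT}'")

        # 5. Confirm the session is up.
        run_tsql(PRINCIPAL, f"SELECT mirroring_state_desc FROM sys.database_mirroring WHERE database_id = DB_ID('{DATABASE}')")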


  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:

        - 6 major sites with Server 2003/2008 DCs doing DHCP/AD-integrated DNS, each on their own subnet. All connect back to the datacenter through a 3 Mbps WAN.
        - ERP server running in the datacenter, accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on, and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites?

    Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

        1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
        2. DNS at site A returns 192.1.1.5
        3. Client 1 pulls the update from 192.1.1.5
        4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
        5. DNS at site B returns 192.1.2.5
        6. Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
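
    What's described here is essentially split-horizon DNS: the same name (ERPUPDATE) with a different A record in each site's AD-integrated zone. From the client's point of view the mechanism is nothing more than "resolve the name against the local DNS server, then download from whatever address comes back", roughly like this (the hostname and path are placeholders, not anything from the ERP product):

        import socket
        import urllib.request

        UPDATE_HOST = "erpupdate"               # same name everywhere; each site's DNS returns its local DC
        UPDATE_PATH = "/client/setup.exe"       # hypothetical path to the full client package

        addr = socket.gethostbyname(UPDATE_HOST)    # answered by the site-local DNS server
        print(f"{UPDATE_HOST} resolves to {addr} at this site")

        url = f"http://{addr}{UPDATE_PATH}"
        with urllib.request.urlopen(url, timeout=30) as resp, open("setup.exe", "wb") as out:
            out.write(resp.read())
        print("downloaded update from the local server")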


  • Watchguard Firebox "split" fibre optic line into 2 interfaces

    - by fRAiLtY-
    We have a requirement on our Watchguard Firebox XTM505 to be able to split our incoming external interface, in this case a fibre optic dedicated leased line, 100/100. We use the line in our office of approx 30 machines; however, we also re-sell it to an external company who utilise it to provide wireless internet solutions to the public. The current infrastructure is as follows:

        Data in (leased line) -> Juniper SRX210 managed by the ISP -> 1 cable out into an unmanaged Netgear switch -> 1 cable into our firewall and office network, 1 cable to our external provider's core router (managed by them).

    We have been informed that having the unmanaged switch in the position it is poses a security risk, and that a good option would be to get our Watchguard firewall to perform the split, by separating our office onto a trusted interface and by "passing through" the external line to their managed router. It is alleged that the Watchguard is capable of doing this and also of rate-limiting the interfaces, i.e. 20 Mbps for the trusted interface and 80 Mbps for the "pass-through"; however, Watchguard technical support don't seem to be able to understand what we're trying to achieve. Can anyone provide any advice on whether this is possible on a Watchguard device, and how? Or perhaps there's a better way of achieving this, perhaps with a managed switch instead of an unmanaged one? Cheers

