Search Results

Search found 10610 results on 425 pages for 'apache hadoop'.


  • Building an SSL server farm

    - by dan
    I'm interested in building the the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, that 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer 7 load balancer to be able to route. However for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using apache with mod-ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
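    Purely as an illustration of what the SSL tier in that diagram does (terminate TLS, hand plain HTTP to whatever sits behind it), here is a toy sketch in Python. The backend address, port and certificate paths are placeholders, and in practice this role would be filled by nginx, stunnel or Apache with mod_ssl as mentioned above.

        # Minimal TLS-terminating proxy sketch (illustration only, not production).
        # Assumed values: certificate/key paths and the plain-HTTP backend address.
        # Port 443 needs privileges; use 8443 for testing.
        import asyncio
        import ssl

        BACKEND = ("10.0.0.10", 80)              # assumed app server speaking plain HTTP
        CERT, KEY = "server.crt", "server.key"   # assumed certificate files

        async def pipe(reader, writer):
            try:
                while data := await reader.read(65536):
                    writer.write(data)
                    await writer.drain()
            finally:
                writer.close()

        async def handle(client_reader, client_writer):
            # Open a plaintext connection to the backend and copy bytes both ways.
            backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
            await asyncio.gather(
                pipe(client_reader, backend_writer),
                pipe(backend_reader, client_writer),
            )

        async def main():
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            ctx.load_cert_chain(CERT, KEY)       # TLS is terminated here
            server = await asyncio.start_server(handle, "0.0.0.0", 443, ssl=ctx)
            async with server:
                await server.serve_forever()

        asyncio.run(main())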

    Read the article

  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine and I'm serving pages with WAMPSERVER. My idea is to develop some tools (in PHP) for my intranet at work, and I want them to be accessible over the LAN in this format: http://tools.mycompany.com

    I've already set up BIND and I can access http://tools.mycompany.com on the machine that holds the BIND server, but I cannot access it from other LAN computers. I've done the following on my router:

    - defined static IPs for all LAN computers
    - set port forwarding to my server (remember: it serves DNS and web pages)
    - pointed the router's DNS server configuration to my LAN server

    On the LAN computers, I went to the Local Area Network properties and also changed the DNS server IP to point to my local DNS server. If it helps, here is my named.conf file:

        options {
            directory "c:\windows\SysWOW64\dns\etc";
            forwarders { 127.0.0.1; 8.8.8.8; 8.8.4.4; };
            pid-file "run\named.pid";
            allow-transfer { none; };
            recursion no;
        };
        logging {
            channel my_log {
                file "log\named.log" versions 3 size 2m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { my_log; };
        };
        zone "mycompany.com" IN {
            type master;
            file "zones\db.mycompany.com.txt";
            allow-transfer { none; };
        };
        key "rndc-key" {
            algorithm hmac-md5;
            secret "qfApxn0NxXiaacFHpI86Rg==";
        };
        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    ...and the single zone I've defined, file db.mycompany.com.txt:

        $TTL 6h
        @      IN SOA tools.mycompany.com. hostmaster.mycompany.com. ( 2014042601 10800 3600 604800 86400 )
        @      NS     tools.mycompany.com.
        tools  IN A   192.168.1.4
        www    IN A   192.168.1.4

    In the file above, 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried on other computers and they can access my server via http://192.168.1.4/, but not when using http://tools.mycompany.com. Please consider the following: I'm completely new to BIND, and I have only basic knowledge of Apache configuration. Thanks a lot for your help.
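    A quick way to separate a DNS problem from a routing or firewall problem is to query the BIND box directly from one of the other LAN machines. The sketch below uses the third-party dnspython package (an assumption; dig or nslookup against 192.168.1.4 gives the same information), with the server IP and record name taken from the zone above.

        # Query the LAN BIND server directly, bypassing whatever resolver the
        # client machine is normally configured with (requires `pip install dnspython`).
        import dns.resolver

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["192.168.1.4"]   # the BIND box from the zone file

        try:
            answer = resolver.resolve("tools.mycompany.com", "A")  # use .query() on dnspython 1.x
            for record in answer:
                print("resolved to", record.address)
        except Exception as exc:
            print("lookup failed:", exc)

    If this resolves from another PC, the clients simply aren't using 192.168.1.4 as their resolver; if it times out, look at port 53 reachability (Windows Firewall on the BIND machine, router rules) before touching the zone itself.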

    Read the article

  • Transient network dropout for Xen DomU's

    - by Stephen C
    We've got a CentOS server running a cluster of virtuals. Occasionally the cluster's internal network drops out for a minute or so ... and then comes back. The problem is somehow related to the actual network traffic, but it is not a simple load issue. (The system is generally lightly loaded, and the problem occurs irrespective of actual load.) The setup: CentOS 5.6 on Dom0, various CentOS on the DomU's Hardware - a Dell R710 with a BroadCom NextXpress 2 NIC (sigh) using the latest drivers for the NIC from BroadCom Xen configured to use network-bridge and vif-bridge Some iptable tweaks to route an unrelated port to one of the virtuals. The system has one externally visible IP address, and Dom0 runs an Apache httpd configured with a number of virtual hosts each of which reverse proxies to web servers running on the virtuals. (The virtuals have to be NAT'ed, primarily because we don't have enough allocated public IP addresses.) The symptoms: Works fine most of the time. When someone tries to UPLOAD a large file to one virtuals, the internal network drops out ... for all virtuals: The Dom0 httpd sees a network timeout talking to the backend server on the virtual and reports a 502. A previously established ssh connection from Dom0 to any of the DomU's freezes. Our monitoring shows ping failures for traffic between virtuals. The Xen consoles to the DomU's do not freeze. No log messages in any log files that I can see, on either Dom0 or the DomU's ... apart from the Dom0 httpd logs. After a minute or so, the problem clears by itself. This is 100% reproducible. What we've tried: Downloading, building and installing the latest BNX2 driver on Dom0 Turning off MSI on the NIC - adding "options bnx2 disable_msi=1" to /etc/modprobe.conf Turning off tcp segmentation offload - "ethtool -K eth0 tso off". Sacrificing a black rooster at midnight. I've exhausted all my options apart from switching to KVM ... or slaughtering more roosters. Any suggestions?
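    One low-effort aid for an intermittent fault like this is a timestamped record of exactly when connectivity to the DomUs disappears, so it can be lined up against the Dom0 httpd log and the upload times. A rough monitoring sketch (the DomU addresses and probe interval are assumptions):

        # Log timestamped ping failures so dropouts can be correlated with uploads.
        import subprocess
        import time
        from datetime import datetime

        HOSTS = ["192.168.122.11", "192.168.122.12"]   # assumed DomU addresses
        INTERVAL = 2                                   # seconds between probes

        while True:
            for host in HOSTS:
                result = subprocess.run(
                    ["ping", "-c", "1", "-W", "1", host],
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                )
                if result.returncode != 0:
                    print(f"{datetime.now().isoformat()} ping to {host} FAILED", flush=True)
            time.sleep(INTERVAL)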

    Read the article

  • NGINX performance degrades over time

    - by Rylea Stark
    So here's the situation: I run a small cluster with a dedicated box for MySQL and a dedicated PHP-FPM/nginx box. Nginx talks to PHP-FPM via a socket. As far as I can tell the problem does not lie in PHP-FPM; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting, then slowly degrades to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, to spawn 20 at start, and to allow a maximum of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back to Apache, and I really don't want to do that. My nginx.conf is below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Read the article

  • Some emails are being delivered, some returned

    - by Tom Broucke
    I have my own VPS where a site is running (control panel: directadmin). When I send mails, some are being delivered (hotmail, gmail, [email protected] ,...), others are not ([email protected]), others are delivered after being greylisted ([email protected]). /var/log/exim/mainlog What could be the cause of this? Is the problem Sender-Side or Receiver-Side? case 1: [email protected] (delivered) 2012-06-20 15:02:03 1ShKXr-0005Sc-7g <= [email protected] U=apache P=local S=1319 T="Password reset" from <[email protected]> for [email protected] 2012-06-20 15:02:03 1ShKXr-0005Sc-7g gmail-smtp-in-v4v6.l.google.com [2a00:1450:8005::1b] Network is unreachable 2012-06-20 15:02:03 1ShKXr-0005Sc-7g => [email protected] F=<[email protected]> R=lookuphost T=remote_smtp S=1355 H=gmail-smtp-in-v4v6.l.google.com [173.194.67.27] X=TLSv1:RC4-SHA:128 C="250 2.0.0 OK 1340196103 cp4si34336466wib.14" 2012-06-20 15:02:03 1ShKXr-0005Sc-7g Completed case 2: [email protected] (not being delivered) 2012-06-21 09:57:14 1ShcGQ-0007No-5H <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=740 [email protected] T="hey" from <[email protected]> for [email protected] 2012-06-21 09:57:14 1ShcGQ-0007No-5H ** [email protected] F=<[email protected]> R=virtual_aliases: 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z <= <> R=1ShcGQ-0007No-5H U=mail P=local S=1546 T="Mail delivery failed: returning message to sender" from <> for [email protected] 2012-06-21 09:57:14 1ShcGQ-0007No-5H Completed 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z => info <[email protected]> F=<> R=virtual_user T=virtual_localdelivery S=1643 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z Completed case 3: [email protected] (greylisted) 2012-06-21 15:29:02 1ShhRW-000862-BV <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=782 [email protected] T="testmail squirrel" from <[email protected]> for [email protected] 2012-06-21 15:29:02 1ShhRW-000862-BV SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b1.one.com [195.47.247.194]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes 2012-06-21 15:29:02 1ShhRW-000862-BV == [email protected] R=lookuphost T=remote_smtp defer (-44): SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b2.one.com [195.47.247.195]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes Notice that the "from" in case1 differs in case2: [email protected] or [email protected]. Thanks for your time!
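    Not a diagnosis, but when comparing cases like these it helps to tally the exim mainlog by outcome ("=>" delivered, "**" failed, "==" deferred) and by recipient domain, so patterns such as the differing envelope sender stand out. A rough summarising sketch (the log path is the one quoted above):

        # Summarise an exim mainlog by delivery outcome.
        # "=>" delivered, "**" failed/bounced, "==" deferred (e.g. greylisting).
        import collections
        import re

        LOG = "/var/log/exim/mainlog"
        counts = collections.Counter()
        examples = {}

        with open(LOG, errors="replace") as fh:
            for line in fh:
                match = re.search(r"\s(=>|\*\*|==)\s+(\S+)", line)
                if match:
                    flag, recipient = match.groups()
                    counts[flag] += 1
                    examples.setdefault((flag, recipient.split("@")[-1]), line.strip())

        print(counts)
        for (flag, domain), line in sorted(examples.items()):
            print(flag, domain, "->", line[:120])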

    Read the article

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During the setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but that field was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? For that last question I have tried googling, but I'm getting more confused than before. Please help, many thanks in advance.

    UPDATE: Thanks all for the helpful advice. If you make a request to VeriSign they will give you a certificate chain, but this chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the top-most certificate) before adding it to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: [https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US]. On that link, click the "retail ssl" tab and then "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link: [https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409]. If you are using GeoTrust certificates, the solution is much the same for Android devices, but you need to copy and paste their intermediate certs for Android. PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".
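    A quick local way to confirm whether the chain served by the load balancer is complete is to let a strict TLS client attempt verification; the snippet below is a generic sketch (the hostname is a placeholder), roughly what openssl s_client -connect host:443 would also tell you.

        # Check whether a server presents a complete certificate chain.
        # A missing intermediate typically shows up as "unable to get local issuer certificate".
        import socket
        import ssl

        HOST = "www.example.com"   # placeholder for the load balancer's hostname

        ctx = ssl.create_default_context()   # verifies against the system CA store
        try:
            with socket.create_connection((HOST, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                    print("chain verified, subject:", tls.getpeercert().get("subject"))
        except ssl.SSLCertVerificationError as exc:
            print("verification failed:", exc.verify_message)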

    Read the article

  • Multiple SSL Certificates Running on Mac OS X 10.6

    I have been running into walls with this for a while, so I posted at stackoverflow, and I was pointed over here... I am attempting to setup multiple IP addresses on Snow Leopard so that I can develop with SSL certificates. I am running XAMPP - I don't know if that is the problem, but I guess I would run into the same problems, considering the built in apache is turned off. So first up I looked into starting up the IPs on start up. I got up an running with a new StartupItem that runs correctly, because I can ping the ip address: ping 127.0.0.2 ping 127.0.0.1 And both of them work. So now I have IP addresses, which as you may know are not standard on OSx. I edited the /etc/hosts file to include the new sites too: 127.0.0.1 site1.local 127.0.0.2 site2.local I had already changed the httpd.conf to use the httpd-vhosts.conf - because I had a few sites running on the one IP address. I have edited the vhosts file so a site looks like this: <VirtualHost 127.0.0.1:80> DocumentRoot "/Users/jim/Documents/Projects/site1/web" ServerName site1.local <Directory "/Users/jim/Documents/Projects/site1"> Order deny,allow Deny from All Allow from 127.0.0.1 AllowOverride All </Directory> </VirtualHost> <VirtualHost 127.0.0.1:443> DocumentRoot "/Users/jim/Documents/Projects/site1/web" ServerName site1.local SSLEngine On SSLCertificateFile "/Applications/XAMPP/etc/ssl-certs/myssl.crt" SSLCertificateKeyFile "/Applications/XAMPP/etc/ssl-certs/myssl.key" SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/Users/jim/Documents/Projects/site1"> Order deny,allow Deny from All Allow from 127.0.0.1 AllowOverride All </Directory> </VirtualHost> In the above code, you can change the 1's to 2's and it is the setup for the second site. They do use the same certificate, which is why they are on different IP addresses. I also included the NameVirtualHost information at the top of the file: NameVirtualHost 127.0.0.1:80 NameVirtualHost 127.0.0.2:80 NameVirtualHost 127.0.0.1:443 NameVirtualHost 127.0.0.2:443 I can ping site1.local and site2.local. I can use telnet ( telnet site2.local 80 ) to get into both sites. But in Safari I can only get to the first site1.local - navigating to site2.local gives me either the localhost main page (which is included in the vhosts) or gives me a Access forbidden!. I am usure what to do, any suggestions would be awesome.

    Read the article

  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I saw the point coming where I'd want to switch from Apache to nginx, because I kept reading that it performs way better. Now I've done the switch, and I have to say nginx is keeping up just fine. However, PHP-FPM is becoming a problem. Where the PHP pages used to take 0.1 seconds to generate, with the same load they now take around 3 seconds! Furthermore, the error.log from nginx is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using Unix sockets instead, but those would complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there but nothing seems to work. Changing the amount of pm.max_children doesn't seem to help a lot either, but with its current amount at 350 it seems to be the lesser of all evils. The server being used has 3 GB RAM (not all of it is free due to a MySQL server also running) along with 2 dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough?

    EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;
            root /var/www;
            index index.php index.html index.htm;
            server_name localhost;
            location / {
                try_files $uri $uri/ /index.html;
            }
            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }
            location = /50x.html {
                root /usr/share/nginx/www;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri = 404;
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1
        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536
        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
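    To tell pool exhaustion apart from a genuinely slow backend, it can help to reproduce the load in a controlled way and watch how response time scales with concurrency. A hypothetical probe (the URL and concurrency steps are placeholders):

        # Hit a PHP page at increasing concurrency and report average latency,
        # to see at what point php-fpm starts queueing or timing out.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost/index.php"   # placeholder

        def fetch(_):
            start = time.time()
            try:
                with urllib.request.urlopen(URL, timeout=30) as resp:
                    resp.read()
                return time.time() - start
            except Exception:
                return None   # count as a failure

        for workers in (5, 20, 50, 100):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(fetch, range(workers * 5)))
            ok = [r for r in results if r is not None]
            if ok:
                print(f"concurrency {workers}: {len(ok)}/{len(results)} ok, avg {sum(ok)/len(ok):.2f}s")
            else:
                print(f"concurrency {workers}: all requests failed")

    Printing a number per concurrency step makes it easier to distinguish a gradual slowdown (leaks, swapping) from a hard ceiling being hit (all children busy at once).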

    Read the article

  • Apache2 Virtual Host broken; displayed default index.html on subdomain, but correct content on www.subdomain

    - by Robert K
    I've got a Linode configured as an Ubuntu 10.04.2 web server with Apache 2.2.14. I have a total of 4 sites, all defined under /etc/apache2/sites-available as virtual hosts. All sites are almost identical clones in terms of configuration, and all sites but my last one work successfully.

        default: (www.)exampleadnetwork.com (www.)example.com reseller.example.com
        trouble: client1.example.com

    I keep getting this page when I visit the client1.example.com site:

        It works! This is the default web page for this server. The web server software is running but no content has been added, yet.

    In my ports.conf file I have the NameVirtualHost correctly set to my IP address on port 80. If I access the "www.sub.example.com" alias the site works! If I access it without the www I see the "It Works" excerpt posted above. Even apache2ctl -S shows that my vhost file parses correctly and is added to the mix. My vhost configuration file is as follows:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName client1.example.com
            ServerAlias client1.example.com www.client1.example.com
            DocumentRoot /srv/www/client1.example.com/public_html/
            ErrorLog /srv/www/client1.example.com/logs/error.log
            CustomLog /srv/www/client1.example.com/logs/access.log combined
            <directory /srv/www/client1.example.com/public_html/>
                Options -Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </directory>
        </VirtualHost>

    The other sites are variations of:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName example.com
            ServerAlias example.com www.example.com
            DocumentRoot /srv/www/example.com/public_html/
            ErrorLog /srv/www/example.com/logs/error.log
            CustomLog /srv/www/example.com/logs/access.log combined
        </VirtualHost>

    The only site that differs is the other subdomain:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName reseller.example.com
            ServerAlias reseller.example.com
            DocumentRoot /srv/www/reseller.example.com/public_html/
            ErrorLog /srv/www/reseller.example.com/logs/error.log
            CustomLog /srv/www/reseller.example.com/logs/access.log combined
        </VirtualHost>

    Filenames are the FQDN without the www. prefix. I've followed this advice, but still cannot access the subdomain properly.
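    When a name-based vhost falls through to the default site like this, it is worth asking the server directly which vhost answers for which Host header, independent of DNS. A small test sketch (the IP and hostnames are the ones from the post):

        # Send requests to the server's IP with different Host headers and see
        # which virtual host (or the default "It works!" page) answers each one.
        import urllib.request

        SERVER_IP = "127.0.0.1"   # or the Linode's public IP
        HOSTS = [
            "client1.example.com",
            "www.client1.example.com",
            "reseller.example.com",
            "example.com",
        ]

        for host in HOSTS:
            req = urllib.request.Request(f"http://{SERVER_IP}/", headers={"Host": host})
            with urllib.request.urlopen(req, timeout=10) as resp:
                body = resp.read(200).decode(errors="replace")
            marker = "DEFAULT PAGE" if "It works!" in body else "vhost content"
            print(f"{host:30} -> {marker}")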

    Read the article

  • Why won't my Windows 8 command line update its PATH?

    - by mawcsco
    I needed to add a new entry to my PATH variable. This is a common activity for me in my job, but I've recently started using Windows 8. I assumed the process would be similar to Windows 7, Vista, XP... Here's my sequence of events: Open System properties (Start- [type "Control Panel"] - Control Panel\System and Security\System - Advanced system settings - Environment Variables) Add the new path to beginning of my USER PATH variable (C:\dev\Java\apache-ant-1.8.4\bin;) Opened a command prompt (Start - [type "command prompt" enter] - [type "path" enter] My new path entry is not available (see attached image and vide). I Duplicated the exact same process on a Windows 7 machine and it worked. EDIT Windows 8 Environment Variables and Command Prompt video EDIT This is definitely not the behavior of Windows 7. Watch this video to see the behavior I expect working in Windows 7. http://youtu.be/95JXY5X0fII EDIT 5/31/2013 So, after much frustration, I wrote a small C# app to test the WM_SETTINGCHANGE event. This code receives the event in both Windows 7 and Windows 8. However, in Windows 8 on my system, I do not get the correct path; but, I do in Windows 7. This could not be reproduced in other Windows 8 systems. Here is the C# code. using System; using Microsoft.Win32; public sealed class App { static void Main() { SystemEvents.UserPreferenceChanging += new UserPreferenceChangingEventHandler(OnUserPreferenceChanging); Console.WriteLine("Waiting for system events."); Console.WriteLine("Press <Enter> to exit."); Console.ReadLine(); } static void OnUserPreferenceChanging(object sender, UserPreferenceChangingEventArgs e) { Console.WriteLine("The user preference is changing. Category={0}", e.Category); Console.WriteLine("path={0}", System.Environment.GetEnvironmentVariable("PATH")); } } OnUserPreferenceChanging is equivalent to WM_SETTINGCHANGE C# program running in Windows 7 (you can see the event come through and it picks up the correct path). C# program running in Windows 8 (you can see the event come through, but the wrong path). There is something about my environment that is precipitating this problem. However, is this a Windows 8 bug?
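    For what it's worth, a quick way to see whether the problem is the stored value or what new processes inherit is to compare the PATH held in the registry with the PATH of the current process. The check below is a small sketch using only the standard library (the ant path is the one from the post):

        # Compare the user PATH stored in the registry with the PATH this process inherited.
        # Run it from a freshly opened command prompt after editing the environment variables.
        import os
        import winreg

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment") as key:
            stored_path, _ = winreg.QueryValueEx(key, "Path")

        process_path = os.environ.get("PATH", "")

        print("registry user PATH :", stored_path)
        print("process PATH       :", process_path)
        print("new entry inherited:", "apache-ant-1.8.4" in process_path.lower())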

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ I've done all the steps, but it's still showing php 5.2.6 - any ideas? I have also tried -cgi instead of -cli, neither have any effect. update I've tried rebooting the server to see if that would have any effect and unfortunately it didn't update Output of dpkg -l *php*: Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) ||/ Name Version Description +++-=============================================-=============================================-========================================================================================================== un libapache2-mod-php4 <none> (no description available) ii libapache2-mod-php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (Apache 2 module) un libapache2-mod-php5filter <none> (no description available) ii php-pear 5.2.6.dfsg.1-3ubuntu4.6 PEAR - PHP Extension and Application Repository un php4-cli <none> (no description available) un php4-dev <none> (no description available) un php4-mysql <none> (no description available) un php4-pear <none> (no description available) ii php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (metapackage) ii php5-cgi 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (CGI binary) ii php5-cli 5.2.6.dfsg.1-3ubuntu4.6 command-line interpreter for the php5 scripting language ii php5-common 5.2.6.dfsg.1-3ubuntu4.6 Common files for packages built from the php5 source ii php5-curl 5.2.6.dfsg.1-3ubuntu4.6 CURL module for php5 un php5-dev <none> (no description available) ii php5-gd 5.2.6.dfsg.1-3ubuntu4.6 GD module for php5 ii php5-imap 5.2.6-0ubuntu5.1 IMAP module for php5 un php5-json <none> (no description available) ii php5-mcrypt 5.2.6-0ubuntu2 MCrypt module for php5 ii php5-mysql 5.2.6.dfsg.1-3ubuntu4.6 MySQL module for php5 un php5-mysqli <none> (no description available) ii php5-xsl 5.2.6.dfsg.1-3ubuntu4.6 XSL module for php5 un phpapi-20060613+lfs <none> (no description available) ii phpmyadmin 4:3.1.2-1ubuntu0.2 MySQL web administration tool update The following commands and their outputs: grep php53 /etc/apt/sources.list deb http://php53.dotdeb.org stable all deb-src http://php53.dotdeb.org stable all apt-cache search -f "libapache2-mod-php5" http://pastebin.com/XNXdsXYC update I've updated the question with more details on installed packages.

    Read the article

  • Subdomains, folders, internationalization, and hosting solutions

    - by justinbach
    I'm a web developer and I recently landed a gig to develop the US / international version of a site for a company that's big in Europe but hasn't done much expansion into the US yet. They've got an existing site at company.com, which should remain visible to European customers after the new site goes up, and an existing (not great) site at company.us, which I'm going to be redeveloping (the .us site will be taken down when my version goes up--keep reading for details). My solution needs to take into account the fact that there are going to be new, localized versions of the site in the fairly near future, so the framework I'm writing needs to be able to handle localizations fairly easily (dynamically load language packs, etc). The tricky thing is the European branch of the company manages the .com site hosting (IIS-based) and the DNS, while I'll be managing the US hosting (and future localizations), which will likely be apache-based. I've never been a big fan of the ".us" TLD--I think most US users are accustomed to visiting the .com--so the thought is that the European branch will detect the IP of inbound traffic and redirect all US-based addresses to us.example.com (or whatever the appropriate localized subdomain might be), which would point to the IP address of my host. I'd then serve the appropriate locale-specific content by pulling the subdomain from the $_SERVER superglobal (assuming PHP). I couldn't find any examples of international organizations that take a subdomain-based approach for localization, but I'm not sure I have any other options as a result of the unique hosting structure here (in that there's not a unified hosting solution for the European and US sites). In my experience, the US version of an international site would live at domain.com/us, not at us.domain.com, and I'd imagine that this has to do with SEO (subdomains are treated as separate sites, so improved rankings for the US site wouldn't help the Canadian version if subdomains are used to differentiate between them). My question is: is there a better approach to solving this problem than the one I'm taking? Ideally, I'd like to use a folder-based approach (see adidas.com as an example of what I'm talking about), but I'm not sure that's a possibility given that the US site (and other localizations) will not be hosted on the same server as the rest of the .com. Can you, in IIS, map a folder (e.g. domain.com/us) to a different IP address? What would you recommend? Thanks for your consideration.
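    The post plans to read the subdomain out of $_SERVER in PHP; just to make the routing idea concrete, here is the same logic as a minimal WSGI sketch (the subdomain-to-locale map is invented for illustration):

        # The plan above does this in PHP via $_SERVER; this is the same idea as a
        # minimal WSGI app. The subdomain-to-locale map is illustrative only.
        LOCALES = {"us": "en_US", "ca": "en_CA", "www": "en_GB"}   # assumed mapping

        def app(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0]      # e.g. "us.example.com"
            subdomain = host.split(".")[0] if host.count(".") >= 2 else "www"
            locale = LOCALES.get(subdomain, "en_GB")               # fall back to the .com default

            start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
            return [f"host={host} locale={locale}\n".encode()]

        if __name__ == "__main__":
            from wsgiref.simple_server import make_server
            make_server("", 8000, app).serve_forever()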

    Read the article

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise): var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi" var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi" var.viewvcstatic = "/var/www/vcs/templates/docroot" var.vcs_errorlog = "/var/log/lighttpd/error.log" var.vcs_accesslog = "/var/log/lighttpd/access.log" $HTTP["host"] =~ "domain.tld" { $SERVER["socket"] == ":443" { protocol = "https://" ssl.engine = "enable" ssl.pemfile = "/etc/lighttpd/ssl/..." ssl.ca-file = "/etc/lighttpd/ssl/..." ssl.use-sslv2 = "disable" setenv.add-environment = ( "HTTPS" => "on" ) url.rewrite-once += ("^/mercurial$" => "/mercurial/" ) url.rewrite-once += ("^/$" => "/viewvc.fcgi" ) alias.url += ( "/viewvc-static" => var.viewvcstatic ) alias.url += ( "/robots.txt" => var.robots ) alias.url += ( "/favicon.ico" => var.favicon ) alias.url += ( "/mercurial" => var.hgwebfcgi ) alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi ) $HTTP["url"] =~ "^/mercurial" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.hgwebfcgi, "socket" => "/tmp/hgwebdir.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } else $HTTP["url"] =~ "^/viewvc\.fcgi" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.viewvcfcgi, "socket" => "/tmp/viewvc.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } expire.url = ( "/viewvc-static" => "access plus 60 days" ) server.errorlog = var.vcs_errorlog accesslog.filename = var.vcs_accesslog } } Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.

    Read the article

  • SSH Connection Refused - Debug using Recovery Console

    - by olrehm
    Hey everyone, I have found a ton of questions answered about debugging why one cannot connect via SSH, but they all seem to require that you can still access the system - or say that without that nothing can be done. In my case, I cannot access the system directly, but I do have access to the filesystem using a recovery console. So this is the situation: My provider made some kernel update today and in the process also rebooted my server. For some reason, I cannot connect via SSH anymore, but instead get a ssh: connect to host mydomain.de port 22: Connection refused I do not know whether sshd is just not running, or whether something (e.g. iptables) blocks my ssh connection attempts. I looked at the logfiles, none of the files in /var/log contain any mentioning on ssh, and /var/log/auth.log is empty. Before the kernel update, I could log in just fine and used certificates so that I would not need a password everytime I connect from my local machine. What I tried so far: I looked in /etc/rc*.d/ for a link to the /etc/init.d/ssh script and found none. So I am expecting that sshd is not started properly on boot. Since I cannot run any programs in my system, I cannot use update-rc to change this. I tried to make a link manually using ln -s /etc/init.d/ssh /etc/rc6.d/K09sshd and restarted the server - this did not fix the problem. I do not know wether it is at all possible to do it like this and whether it is correct to create it in rc6.d and whether the K09 is correct. I just copied that from apache. I also tried to change my /etc/iptables.rules file to allow everything: # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *mangle :PREROUTING ACCEPT [7468813:1758703692] :INPUT ACCEPT [7468810:1758703548] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] :POSTROUTING ACCEPT [7935933:3682829570] COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *filter :INPUT ACCEPT [7339662:1665166559] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 -A INPUT -j ACCEPT -A FORWARD -j ACCEPT -A OUTPUT -j ACCEPT COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *nat :PREROUTING ACCEPT [101662:5379853] :POSTROUTING ACCEPT [393275:25394346] :OUTPUT ACCEPT [393273:25394250] COMMIT # Completed on Thu Dec 10 18:05:32 2009 I am not sure this is done correctly or has any effect at all. I also did not find any mentioning of iptables in any file in /var/log. So what else can I do? Thank you for your help.

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ centos 5.5 servers, and don't know how to back them up. We need low level clone type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (dont know if its HW or SW raid). Most only have 100MB used. SErvers are running apache, zend, tomcat, mysql etc. Ideally we dont want to have to shut them down to backup (but could). We assume that standard unix commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, all attributes, groups etc. i.e. do not produce a result which can simply be re-imaged to a new HD to get the serer back from dead. We have a large SAN, a spare windows box and spare unix boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no sw or documentation for it. WE have a copy of symantec backup exec, but we have no budget for unix client licenses. (The company has negative amounts of money). We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore) Googling returns some applications to do this, e.g. clonezilla - looks difficult to install and invasive. Mondo, only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and setup? Is there anything built into Centos, or do we have to go the route of installing, learning and configuring a set of backup softwares? Any ideas? This must be a pretty standard problem which goggling doesnt give an obvious answer.
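    For what it's worth, rsync with the archive/ACL/xattr flags preserves ownership, permissions and extended attributes well enough for a file-level restore onto a freshly installed base system, even though it is not a block-level image. As a sketch of the "initiate remotely from one backup host" idea (the host list, destination path and SSH key setup are assumptions):

        # Pull file-level backups from many hosts to one backup server over SSH.
        # Assumes root SSH keys are already distributed; hosts and paths are examples.
        import os
        import subprocess
        from datetime import date

        HOSTS = ["web01", "web02", "db01"]            # hypothetical inventory
        DEST = "/mnt/usb-backup"                      # the external USB disk
        EXCLUDES = ["/proc", "/sys", "/dev", "/tmp", "/var/run"]

        for host in HOSTS:
            target = f"{DEST}/{host}/{date.today()}"
            os.makedirs(target, exist_ok=True)
            cmd = ["rsync", "-aAXH", "--numeric-ids", "--delete", "-e", "ssh"]
            for path in EXCLUDES:
                cmd += ["--exclude", path]
            cmd += [f"root@{host}:/", target + "/"]
            print("backing up", host)
            subprocess.run(cmd, check=False)          # keep going even if one host fails

    For true bare-metal images, something like Clonezilla or LVM snapshots plus dd would still be needed, as the question already suspects.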

    Read the article

  • Why does only the index.php-removal rewrite rule work for CodeIgniter on nginx?

    - by Atomei Cosmin
    I am very new in rewriting in nginx but although I've spent 2 days reading on forums, I still can't get some Codeigniter rewrites working ... server { listen *:80; server_name artademy.com www.artademy.com; root /var/www/artademy.com/web; index index.html index.htm index.php index.cgi index.pl index.xhtml; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?/$1; } if (!-e $request_filename) { rewrite ^/(index.php\?)/(.*)$ /$1/mobile_app last; break; } error_log /var/log/ispconfig/httpd/artademy.com/error.log; access_log /var/log/ispconfig/httpd/artademy.com/access.log combined; ## Disable .htaccess and other hidden files location ~ /\. { deny all; access_log off; log_not_found off; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location /stats { index index.html index.php; auth_basic "Members Only"; auth_basic_user_file /var/www/clients/client0/web3/.htpasswd_stats; } location ^~ /awstats-icon { alias /usr/share/awstats/icon; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9012; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; } } Codeigniter settings are: well for uri_protocol: REQUEST_URI; What i noticed is that from this rule: rewrite ^/(.)$ /index.php?/$1; it works ever if i write it like this: rewrite ^/(.)$ /index.php?; It might be a wild guess but it stops at the question mark... Anyhow what I need are rules as these from .htaccess: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{QUERY_STRING} ^lang=([a-z]{2})$ RewriteRule ^([a-z]{2})$ index.php?/home_page?lang=$1 [L,QSA] RewriteRule ^([a-z]{2})$ index.php?/home_page?lang=$1 [L,QSA] #how_it_works RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^how-it-works/(en)$ index.php?/how_it_works?lang=en [L,QSA] #order_status RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^order-status/(en)$ index.php?/order_status?lang=en [L,QSA] Can anyone tell me what i'm doing wrong and show me a proper way for at least one rule? It would be more than helpful. Thank you in advance! ^^ PS: I made it work on apache by using Path_info for uri_protocol.. if this info is of any help, and i remember having kind of the same problem there too but switching to path_info made it all good.

    Read the article

  • NGINX rewrite rules help: redirect not working, and I want to get rid of index.php in URLs

    - by Tamerax
    hey! I have 2 questions for nginx users. 1) I'm trying to setup my joomla server onto my new linode running NGINX and after much (like days) of searching and testing, I finally have a config that works with with SEF url plugins...sorta. I was using an apache system on the old server and it used mod_rewrite and life was fine in terms of SEF. Since NGINX doesn't have mod_rewrite, I found something that works BUT it constantly leaves index.php in the urls. ex: http://mysite.com/index.php/forum i want it to be just http://mysite.com/forum but without mod_rewrite it doesn't seem to be possible in joomla that i'm aware of. I know in wordpress it IS possible but I have to use a plugin. Here is my config file: server { listen 80; server_name mysite.com www.mysite.com; access_log /home/public_html/mysite.com/log/access.log; error_log /home/public_html/mysite.com/log/error.log; root /home/public_html/mysite.com/public/; large_client_header_buffers 4 8k; # prevent some 400 errors index index.php index.html; fastcgi_index index.php; location / { expires 30d; error_page 404 = @joomla; log_not_found off; } # Rewrite location @joomla { rewrite ^(.*)$ /index.php?q=last; } # Static Files location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ { access_log off; expires 30d; } # PHP location ~ \.php { keepalive_timeout 0; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public /$fastcgi_script_name; } } 2) second question should be easy but i can't get it to work. I want to use the same config I posted above and have either mysite.com or www.mysite.com both forward to mysite.com/portal. Basically when you hit up the front page with or without the www, it all gets forwarded to a sub directory on the server I called Portal. I have tried several variations of: rewrite ^/(.*) http://www.example.com/portal/$1 permanent; but it usually ends with firefox telling me there is some crazy loop happening the address bar saying something like mysite.com/portalportalportalportalportal.........on and on. So, any help on either of these issues would be awesome!! Thanks!!

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK I run a streaming website and my CMS is giving me an error when uploading videos "Failed To Find Flength File" ok so I did some research. The answer I got from the coder was below. I did do all that, but the only thing I could not do is turn off what he refers to as performance cache, talked about in the last sentence... I am on a Cent OS Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts resides, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top-level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R--R--). # Turn off mod_security filtering. SecFilterEngine Off # The below probably isn't needed, # but better safe than sorry. SecFilterScanPOST Off If the above method does not work, try putting the following lines into the file SetEnvIfNoCase Content-Type \ "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads" mod_gzip_on No 2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default, all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues do from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2) Has anyone an idea? Update: I've got some more information now. The referrer of the replicate is set to the URL I requested before and it's not the exact same request as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one, it's just one of a subnet (80.40.134.*). It's just the first request to a resource that's get repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried randomly URLs with different HTTP status codes and different file patterns. 301s and 200s are redone, 404s not. Image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common as I found other clients visiting just after the first requests: 66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google" 50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)" I wasn't aware about this practice, so I don't see it that much as a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll try later if this also happens if I query some other server where I have access to the access logs and will update here then.
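    A crude way to characterise the repeater is to scan the access log for requests to the same URL made from outside the office shortly after one of the office requests, and collect the offending source addresses. A sketch (the log path, office network prefix and time window are assumptions):

        # Find requests that repeat an earlier request for the same URL from a
        # different network within a short window (possible proxy/prefetch bot).
        import re
        from collections import defaultdict
        from datetime import datetime

        LOG = "/var/log/apache2/access.log"        # assumed combined-format log
        OFFICE_PREFIX = "203.0.113."               # placeholder for the office network
        WINDOW = 30                                # seconds

        line_re = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)')
        seen = defaultdict(list)                   # url -> [(time, ip)]

        def ts(raw):
            return datetime.strptime(raw.split()[0], "%d/%b/%Y:%H:%M:%S")

        with open(LOG, errors="replace") as fh:
            for line in fh:
                m = line_re.match(line)
                if not m:
                    continue
                ip, raw_time, url = m.groups()
                when = ts(raw_time)
                for earlier_time, earlier_ip in seen[url]:
                    if (earlier_ip.startswith(OFFICE_PREFIX) and not ip.startswith(OFFICE_PREFIX)
                            and 0 <= (when - earlier_time).total_seconds() <= WINDOW):
                        print(f"{url} first {earlier_ip} then {ip} after "
                              f"{(when - earlier_time).total_seconds():.0f}s")
                seen[url].append((when, ip))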

    Read the article

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where running attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering if, all else being equal (user, machine, etc), are there any fundamental security/environment differences between running a process as a service vs manually? --- Implementation Details --- In case it is helpful for anyone, I had a system that started as an attempt to connect to a Timberline Database using ODBC and a Python CGI script called via IIS 7. The script itself works fine, however, as soon as I attempt to perform the ODBC connect function, the script crashes without throwing an exception. The script was able to connect fine when executed via command line. The same thing happened when using a C#/.net service, attempting to run via Apache, Windows Scheduler or even a 3rd party scheduling tool. With the last option (the 3rd party scheduling tool, pycron) I set the service up log in as my user and had the same issue (I confirmed via Task Manager that the process running user was, in fact, me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment. Also, if it's important, the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect) I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:" which is mapped to "\\timberline-server\Timberline Office") The DSN is set up at the "System DSN" level which, according to the ODBC Administration Tool, means that the DSN is available to users and services Since I'm not allowed to answer this question yet, I'll post the solution that I arrived on: As Joel Coel mentioned, there actually was a mapped drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems as though the actual Timberline Driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive mapping code into my service. Since it was written in python, I used code from a Stackoverflow answer that was able to map the drive on the fly.
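    The fix described at the end, mapping the drive on the fly before touching the ODBC driver, needs little more than a call out to net use, since services do not start with per-user drive mappings. A rough sketch of that step (drive letter, share path and credential handling are assumptions):

        # Map the network drive the Timberline driver expects before opening the ODBC
        # connection; services start without the user's mapped drives.
        import subprocess

        DRIVE = "Y:"
        SHARE = r"\\timberline-server\Timberline Office"   # UNC path from the post

        def ensure_drive_mapped():
            # "net use" with no credentials relies on the service account having access;
            # add "/user:DOMAIN\name password" here if that assumption does not hold.
            result = subprocess.run(["net", "use", DRIVE, SHARE, "/persistent:no"],
                                    capture_output=True, text=True)
            already_mapped = "local device name is already in use" in result.stderr.lower()
            if result.returncode != 0 and not already_mapped:
                raise RuntimeError(f"net use failed: {result.stderr.strip() or result.stdout.strip()}")

        ensure_drive_mapped()
        # ... now connect to the "Timberline Data Source" DSN via pyodbc or similar ...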

    Read the article

  • What else can I do to secure my Linux server?

    - by eric01
    I want to put a web application on my Linux server: I will first explain to you what the web app will do and then I will tell you what I did so far to secure my brand new Linux system. The app will be a classified ads website (like gumtree.co.uk) where users can sell their items, upload images, send to and receive emails from the admin. It will use SSL for some pages. I will need SSH. So far, what I did to secure my stock Ubuntu (latest version) is the following: NOTE: I probably did some things that will prevent the application from doing all its tasks, so please let me know of that. My machine's sole purpose will be hosting the website. (I put numbers as bullet points so you can refer to them more easily) 1) Firewall I installed Uncomplicated Firewall. Deny IN & OUT by default Rules: Allow IN & OUT: HTTP, IMAP, POP3, SMTP, SSH, UDP port 53 (DNS), UDP port 123 (SNTP), SSL, port 443 (the ones I didn't allow were FTP, NFS, Samba, VNC, CUPS) When I install MySQL & Apache, I will open up Port 3306 IN & OUT. 2) Secure the partition in /etc/fstab, I added the following line at the end: tmpfs /dev/shm tmpfs defaults,rw 0 0 Then in console: mount -o remount /dev/shm 3) Secure the kernel In the file /etc/sysctl.conf, there are a few different filters to uncomment. I didn't know which one was relevant to web app hosting. Which one should I activate? They are the following: A) Turn on Source Address Verification in all interfaces to prevent spoofing attacks B) Uncomment the next line to enable packet forwarding for IPv4 C) Uncomment the next line to enable packet forwarding for IPv6 D) Do no accept ICMP redirects (we are not a router) E) Accept ICMP redirects only for gateways listed in our default gateway list F) Do not send ICMP redirects G) Do not accept IP source route packets (we are not a router) H) Log Martian Packets 4) Configure the passwd file Replace "sh" by "false" for all accounts except user account and root. I also did it for the account called sshd. I am not sure whether it will prevent SSH connection (which I want to use) or if it's something else. 5) Configure the shadow file In the console: passwd -l to lock all accounts except user account. 6) Install rkhunter and chkrootkit 7) Install Bum Disabled those services: "High performance mail server", "unreadable (kerneloops)","unreadable (speech-dispatcher)","Restores DNS" (should this one stay on?) 8) Install Apparmor_profiles 9) Install clamav & freshclam (antivirus and update) What did I do wrong and what should I do more to secure this Linux machine? Thanks a lot in advance
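    On point 3, the filters for source-address verification, not accepting or sending ICMP redirects, refusing source-routed packets and logging martians (A, D, F, G and H in that list) are the ones commonly recommended for a host that is not a router, while the two packet-forwarding options only matter if the machine routes traffic. A small audit sketch for checking what the kernel is currently using (the key list is hand-picked for illustration):

        # Print the current values of a few hardening-related sysctls so they can be
        # compared with what /etc/sysctl.conf is supposed to set.
        from pathlib import Path

        KEYS = [
            "net.ipv4.conf.all.rp_filter",          # source address verification
            "net.ipv4.ip_forward",                  # packet forwarding (router only)
            "net.ipv4.conf.all.accept_redirects",   # ICMP redirects
            "net.ipv4.conf.all.send_redirects",
            "net.ipv4.conf.all.accept_source_route",
            "net.ipv4.conf.all.log_martians",
        ]

        for key in KEYS:
            path = Path("/proc/sys") / key.replace(".", "/")
            value = path.read_text().strip() if path.exists() else "missing"
            print(f"{key:45} = {value}")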

    Read the article

  • Nginx common configuration that I might have missed

    - by ApPeL
    I recently moved from Apache Mod_wsgi to Nginx, and I have seen a major improvement on speed a lowering on memory usage and I am generally very happy with the it. I am not a server expert, so please be gentle. I am wondering if there are any small configuration that I might have missed, that will cause me some issues in the long run... Please see my nginx.conf file user nginx nginx; worker_processes 4; error_log /var/log/nginx/error_log info; events { worker_connections 1024; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$gzip_ratio"'; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 2k; request_pool_size 4k; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; ignore_invalid_headers on; index index.html; server { listen 80; server_name localhost; location /media/ { root /www/django_test1/myapp; # Notice this is the /media folder that we create above } location /mediaadmin/ { alias /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/; # Notice this is the /media folder that we create above } location / { # host and port to fastcgi server fastcgi_pass 127.0.0.1:8080; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; client_max_body_size 100M; } access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log; } }

    Read the article

  • Unable to connect to a site on a different port

    - by JohnMerlino
    I have a domain was registered at godaddy named http://mysite.com/. I logged into godaddy and I went to All Products Domains Domain Management. I clicked on the appropriate domain and it took me to the Domain Details page. I clicked Launch under DNS Manager and it took me to the Zone File Editor. I noticed that notify.mysite.com was pointing to an IP address pointing to a dead server, so I switched it to an operating server. Then I pinged the domain to see where it was pointing to and it was correctly pointing to the working server. So I copied the default configuration under sites-available: sudo cp default notify.mysite.com. And then I made some edits to it to have it point to a different document root to serve files at a different port: Listen 1740 Listen 64.135.xx.xxx:1740 #I also tried this as well: NameVirtualHost 64.135.xx.xxx:1740 <VirtualHost 64.135.xx.xxx:1740> ServerAdmin [email protected] ServerName notify.mysite.com DocumentRoot /var/www/test/public <Directory /var/www/test/public> Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Then I enabled the virtual host. Then I went to the document root and added an index.html file with some text in it. Then I restarted apache. The restart gave no errors. Then I type the correct domain in URL: http://notify.mysite.com:1740/ and I get: Oops! Google Chrome could not connect to notify.mysite.com:1740 Somehow it took out all my other sites. Now even the ones that were responding on port 80 are no longe responding, even though I did not touch the virtual hosts for them. I get this message now: Oops! Google Chrome could not connect to mysite.com However, ping responds: ping mysite.com PING mysite.com (64.135.12.134): 56 data bytes 64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms 64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms The result of telnet: $ telnet guarddoggps.com 80 Trying 64.135.12.134... telnet: connect to address 64.135.12.134: Connection refused telnet: Unable to connect to remote host

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a german hoster (virtualized system). # uname -a Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux I want to migrate a Web CMS system, called Contao. It's not my first migration, but my first migration having connection issues with mysql. Migration went successfully, I have the same Contao version running (it's more or less just copy / paste). For the database behind, I did: apt-get install mysql-server phpmyadmin I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. Data import via phpmyadmin worked just fine. I can access the backend of the CMS (which needs to deal with the database already). If I try to access the frontend now, I get the following error: Fatal error: Uncaught exception Exception with message Query error: Lost connection to MySQL server during query (<query statement here, nothing special, just a select>) thrown in /var/www/system/libraries/Database.php on line 686 (Keep in mind: I can access mysql with phpmyadmin and through the backend, working like a charme, it's just the frontend call causing errors). If I spam F5 in my browser I can sometimes even kill the mysql deamon. If I run # mysqld --log-warnings=2 I get this: ... 120921 7:57:31 [Note] mysqld: ready for connections. Version: '5.5.24-0ubuntu0.12.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 05:57:37 UTC - mysqld got signal 4 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f1485db3b20 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f1480041e60 thread_stack 0x30000
mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
/lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
mysqld(+0x33116a)[0x7f148397916a]
mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
mysqld(+0x2f4524)[0x7f148393c524]
mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
mysqld(handle_one_connection+0x50)[0x7f14839ec830]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f1464004b60): is an invalid pointer
Connection ID (thread ID): 1
Status: NOT_KILLED

From /var/log/syslog:

Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

I'm using MyISAM tables all over, nothing with InnoDB. Starting and stopping MySQL is done via:

sudo service mysql start
sudo service mysql stop

After some googling, I experimented with timeouts and with the correct socket path in /etc/mysql/my.cnf, but nothing helped. There are some old Gentoo bugs (from 2008) where simply recompiling solved the problem. I already re-installed MySQL via:

sudo apt-get remove mysql-server mysql-common
sudo apt-get autoremove
sudo apt-get install mysql-server

without any results. This is the first time I'm running into this problem, and I'm not very experienced with this kind of MySQL administration. So mainly, I want to know if anyone can help me out :) Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious "use-TCP-connections-instead-of-sockets-because-sockets-are-flaky-on-virtualized-machines" problems? Or am I completely on the wrong track and have I just misconfigured something?
Remember, phpMyAdmin and access to the backend (which also uses the database) both work just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)
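Since the question explicitly raises the socket-versus-TCP theory, one low-effort way to test it, and to see whether the crash is tied to a particular query, is to run the exact SELECT from the Contao exception straight from the command line over TCP. This is only a hedged debugging sketch: the user, database and query below are placeholders for the real values, and signal 4 (SIGILL, an illegal instruction) normally points at a problem inside the mysqld binary or the instruction set exposed by the virtualization layer rather than at the connection method.

# Force a TCP connection by using 127.0.0.1 (connecting to "localhost" would use the socket)
mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u contao_user -p contao_db \
      -e "SELECT ...;"        # paste the query from the Contao error message here

# In a second terminal, watch whether mysqld crashes again while the query runs
tail -f /var/log/syslog

If the standalone query still brings the daemon down with "mysqld got signal 4", the socket path and timeouts can be ruled out, and the remaining suspects are the server binary and the virtualized CPU it runs on.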

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony, “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use.

Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL, quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site; we just wanted a better blogging platform, with the rest of the existing, legacy site left as is.

At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, but with WordPress running the blogs, a different design is called for. We now use nginx as a reverse proxy, which can then delegate requests to the appropriate application. So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. (A minimal configuration sketch of this routing appears at the end of this post.)

Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles.

When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: you might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: we had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky.

However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform.
Specifically:

Maintainability: the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.

Replaceability: the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively.

Reusability: we’d like to be able to combine the different pieces of the system in different ways for different sites.

Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.

Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.

In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
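For readers who want to see what the reverse-proxy routing described above looks like in practice, here is a minimal sketch of an nginx configuration that sends blog traffic to WordPress and everything else to the legacy IIS applications. It is only an illustration: the hostnames, upstream addresses and the /blogs/ prefix are assumptions, not Simple-Talk’s actual configuration.

http {
    upstream legacy_iis {
        server 10.0.0.10:80;    # the existing IIS box (assumed address)
    }

    upstream wordpress {
        server 10.0.0.20:80;    # WordPress behind Apache/PHP (assumed address)
    }

    server {
        listen 80;
        server_name www.simple-talk.com;

        # Blog traffic is handed to WordPress...
        location /blogs/ {
            proxy_pass http://wordpress;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # ...everything else still goes to the legacy IIS applications.
        location / {
            proxy_pass http://legacy_iis;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

The appeal of this arrangement is that nginx only needs to know which URL prefixes belong to which backend, so the legacy IIS applications and the WordPress installation can be changed, or eventually replaced, independently of one another.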

    Read the article

< Previous Page | 408 409 410 411 412 413 414 415 416 417 418 419  | Next Page >