Search Results

Search found 21481 results on 860 pages for 'www yegorov p ru'.


  • Apache 403 Forbidden Error when accessing local web server using local IP address

    - by amjo324
    I have an odd problem when attempting to browse to pages stored on a local web server (Apache 2.2). The pages are served as expected when I browse to localhost or 127.0.0.1 on port 80. Yet when I attempt to browse to the same pages by referencing the local IP address (192.168.x.x), I receive a HTTP 403 (Forbidden) error. In essence, http://localhost:80 works but 192.168.x.x:80 doesn't even though I'm specifying the IP of the local machine. You may be thinking "who cares? just use localhost". However, this is the first step in troubleshooting why I cannot remotely access these pages from different hosts on my LAN. I'm presuming this can't be a firewall issue as I'm only connecting to the local machine. Even so, I verified there was no iptables rules that could be having an effect. I've checked the Apache error logs and the corresponding line of relevance is: [Sat Oct 19 07:38:35 2013] [error] [client 192.168.x.x] client denied by server configuration: /var/www/ I've inspected most of the apache config files and they don't appear to differ from what you would expect with a default install. I can't see anything in apache2.conf that would be a problem and httpd.conf is an empty file. This is an excerpt from /etc/apache2/sites-enabled/000-default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> Any insight as to where I can look next to find a solution ? Thanks in advance.
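
    One thing worth checking (a sketch, not a confirmed fix): on Debian/Ubuntu the effective access policy for /var/www can also come from blocks outside sites-enabled, for example apache2.conf or conf.d/security, and a "Deny from all" there would match the "client denied by server configuration" log line even though the vhost itself allows everything. A hypothetical stanza that allows localhost plus the LAN explicitly; the 192.168.1.0/24 range is an assumption, substitute the real subnet:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            # allow localhost plus the local subnet (assumed range)
            Allow from 127.0.0.1 192.168.1.0/24
        </Directory>

    After editing, apache2ctl configtest and a reload are needed before retesting from the 192.168.x.x address.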

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows: server { listen 127.0.0.1:8080; server_name .somedomain.com; root /var/www/somedomain.com; access_log /var/log/nginx/somedomain.com-access.nginx.log; error_log /var/log/nginx/somedomain.com-error.nginx.log debug; location ~* \.php.$ { # Proxy all requests with an URI ending with .php* # (includes PHP, PHP3, PHP4, PHP5...) include /etc/nginx/fastcgi.conf; } # all other files location / { root /var/www/somedomain.com; try_files $uri $uri/ ; } error_page 404 /errors/404.html; location /errors/ { alias /var/www/errors/; } #this loads custom logging configuration which disables favicon error logging include /etc/nginx/drop.conf; } This domain is a simple STATIC HTML site, just for some testing purposes. I'd expect the error_page directive to kick in when PHP-FPM cannot find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere on internal redirects. Any help would be much appreciated.
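
    A hedged guess at the loop: error_page 404 sends the request internally to /errors/404.html, and if that file is missing (or unreadable) under /var/www/errors/ the result is another 404, which re-triggers error_page and so on until nginx reports a redirection cycle. A sketch of a configuration that breaks the loop; the =404 fallback and the internal marker are additions, not part of the original config:

        location / {
            root /var/www/somedomain.com;
            # return a plain 404 instead of cycling when nothing matches
            try_files $uri $uri/ =404;
        }

        error_page 404 /errors/404.html;
        location /errors/ {
            alias /var/www/errors/;
            internal;
        }

    Verifying that /var/www/errors/404.html actually exists and is readable by the nginx worker user is the first thing to rule out.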

    Read the article

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this: <VirtualHost *:80> DocumentRoot /var/www/internal/ ServerName internal ServerAlias internal <Directory /var/www/internal/public/> Options All AllowOverride All Order allow,deny Allow from all </Directory> ProxyRequests off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8080/ retry=0 ProxyPassReverse / http://localhost:8080/ ProxyPreserveHost on ProxyTimeout 1200 LogLevel debug AllowEncodedSlashes on </VirtualHost> As I said, our application is written in Node.js and we're using socket.io to make use of web sockets, as our application also contains realtime elements. The problem is, mod_proxy doesn't seem to handle web sockets and we get errors when trying to use them: WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502 How can we fix this issue and keep sockets working, as the only way we can get it working currently is to access the site via ip:port, which we don't want to do. Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/ but they seem to get put through the proxy too.
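
    Plain mod_proxy speaks HTTP only, which is why the WebSocket upgrade comes back as a 502. If the Apache version in use is 2.4.5 or later, mod_proxy_wstunnel can carry the socket.io websocket transport; the sketch below assumes that module is available and that socket.io is mounted at /socket.io/ (both assumptions):

        # a2enmod proxy_wstunnel   (Apache 2.4.5+)
        ProxyPass /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/
        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

    On older Apache releases the usual workaround is to let socket.io fall back to xhr-polling, or to put a websocket-aware proxy (nginx, HAProxy) in front instead. For the ErrorDocument side question, pages served from /var/www/internal/public/error/ can be kept out of the proxy with "ProxyPass /error/ !" placed before the catch-all ProxyPass line.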

    Read the article

  • Moodle serves on IP only - will not work with mod_proxy

    - by Jon H
    I'm trying to set a moodle server up on an Ubuntu box, which already serves Plone & Trac via Apache. In my Moodle config I have $CFG-wwwroot = 'http://www.server-name.org/moodle' The configuration below works fine for the first two, but when I visit www.server-name.com/moodle I get: Incorrect access detected, this server may be accessed only through "http://xxx.xxx.xxx.xxx:8888/moodle" address, sorry It then forwards to the IP address, where Moodle functions fine. What am I missing to get the server name approach working correctly? Apache Config follows: LoadModule transform_module /usr/lib/apache2/modules/mod_transform.so Listen 8080 Listen 8888 Include /etc/phpmyadmin/apache.conf <VirtualHost xxx.xxx.xxx.xxx:8080> <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost On <Location /> ProxyPass http://127.0.0.1:8082/ ProxyPassReverse http://127.0.0.1:8082/ </Location> </VirtualHost> <VirtualHost xxx.xxx.xxx.xxx:80> ServerName www.server-name.org ServerAlias server-name.org ProxyRequests Off FilterDeclare MyStyle RESOURCE FilterProvider MyStyle XSLT resp=Content-Type $text/html TransformOptions +ApacheFS +HTML TransformCache /theme.xsl /home/web/webapps/plone/theme.xsl TransformSet /theme.xsl FilterChain MyStyle ProxyPass /issue-tracker ! ProxyPass /moodle ! <Location /issue-tracker/login> AuthType Basic AuthName "Trac" AuthUserFile /home/web/webapps/plone/parts/trac/trac.htpasswd Require valid-user </Location> Alias /moodle /usr/share/moodle/ <Directory /usr/share/moodle/> Options +FollowSymLinks AllowOverride None order allow,deny allow from all <IfModule mod_dir.c> DirectoryIndex index.php </IfModule> </Directory> </VirtualHost>
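
    Moodle raises the "Incorrect access detected" message whenever the URL of the incoming request does not match $CFG->wwwroot, so the value actually in effect still points at http://xxx.xxx.xxx.xxx:8888/moodle. Two hedged things to check: the edit may have gone into a file Moodle is not reading (the Debian/Ubuntu moodle package, which matches the /usr/share/moodle alias above, reads /etc/moodle/config.php), and the setting as quoted is written $CFG-wwwroot, without the object arrow, which PHP will not accept as setting the option. The intended line would be:

        <?php  // excerpt from Moodle's config.php (sketch)
        $CFG->wwwroot = 'http://www.server-name.org/moodle';

    Once the effective wwwroot matches the server-name URL, the existing Alias /moodle block in the :80 vhost should serve it without further Apache changes.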

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I got strange things happen on live server, but normal in local server. My local server is using mac, and my live server is linux. Consider i try to access some files http://redddor.babonmultimedia.com/assets/images/map-1.jpg This work correctly. http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php Return 404, I'm pretty sure my file is in there and there is no typo mistake. How come it give me 404? There is only one .htaccess on the root server and it's configuration is like this. # For full documentation and other suggested options, please see # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions # including for unexpected logouts in multi-server/cloud environments # and especially for the first three commented out rules #php_flag register_globals Off #AddDefaultCharset utf-8 #php_value date.timezone Europe/Moscow Options +FollowSymlinks RewriteEngine On RewriteBase / <IfModule mod_security.c> SecFilterEngine Off </IfModule> # Fix Apache internal dummy connections from breaking [(site_url)] cache RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC] RewriteRule .* - [F,L] # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin #RewriteCond %{HTTP_HOST} . #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC] #RewriteRule (.*) http://www.example.com/$1 [R=301,L] # Exclude /assets and /manager directories and images from rewrite rules RewriteRule ^(manager|assets)/*$ - [L] RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L] # For Friendly URLs RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Reduce server overhead by enabling output compression if supported. #php_flag zlib.output_compression On #php_value zlib.output_compression_level 5
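
    Since the local server is a Mac (case-insensitive filesystem) and the live server is Linux (case-sensitive), a mismatch in letter case anywhere in assets/modules/evogallery/check.php will return 404 on the live box even though the "same" path works locally. The .htaccess shown should not be rewriting it anyway, because the friendly-URL rules skip requests that map to existing files (!-f). A quick hedged check over SSH on the live server, assuming the paths sit under the web root:

        # confirm the exact spelling and case of every path component
        ls -l assets/modules/evogallery/
        # locate the file regardless of case
        find assets -iname 'check.php'

    If the file turns up under a differently-cased directory or file name, renaming it (or fixing the URL) should resolve the 404.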

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.
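
    When the failure is intermittent, comparing answers from different resolvers usually shows whether the CNAME itself is flapping or a particular resolver is caching a stale or negative answer. A few hedged checks from any machine with dig installed; the domain and name-server names below are placeholders:

        # what the authoritative servers say right now
        dig +short www.example.com CNAME @ns1.example-registrar.com
        # what a public resolver says, including the remaining TTL
        dig www.example.com A @8.8.8.8
        # the chain through the no-ip name
        dig +short myhost.no-ip.org A

    If the authoritative answer is always correct but public resolvers intermittently return nothing or SERVFAIL, the problem sits in caching/propagation rather than in the web server, which matches the symptom of the no-ip name always working.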

    Read the article

  • Can't get simple Apache VHost up and running

    - by TK Kocheran
    Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to all: <VirtualHost *:80>, but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (ie my dev server) and one for incoming requests via my domain name. Here's my new VHost: NameVirtualHost domain1.com <VirtualHost domain1.com:80> DocumentRoot /var/www ServerName domain1.com </VirtualHost> <VirtualHost domain2.com:80> DocumentRoot /var/www ServerName domain2.com </VirtualHost> After I restart my server, I see the following errors in my log: [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs What am I doing wrong? EDIT As per the answer give below, I have modified my configuration. Here are my configuration files: /etc/apache2/ports.conf: Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> Here are my actual defined sites: /etc/apache2/sites-enabled/000-localhost: NameVirtualHost 127.0.0.1:80 <VirtualHost 127.0.0.1:80> ServerAdmin ######### DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> RewriteEngine On RewriteLog "/var/log/apache2/mod_rewrite.log" RewriteLogLevel 9 <Location /> <Limit GET POST PUT> order allow,deny allow from all deny from 65.34.248.110 deny from 69.122.239.3 deny from 58.218.199.147 deny from 65.34.248.110 </Limit> </Location> </VirtualHost> /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org: NameVirtualHost rfkrocktk.dyndns.org:80 <VirtualHost rfkrocktk.dyndns.org:80> DocumentRoot /var/www ServerName rfkrocktk.dyndns.org </VirtualHost> And, just for kicks, my main file: /etc/apache2/apache2.conf: # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. 
Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ what else do I need to do to fix it? Should I be telling apache to listen on 127.0.0.1:80, or isn't it already listening there?
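
    The "File does not exist: /htdocs" lines are what Apache logs when a request matches none of the configured vhosts and falls through to the compiled-in default DocumentRoot, which is a sign the name-based matching never engages. Mixing NameVirtualHost 127.0.0.1:80 with NameVirtualHost rfkrocktk.dyndns.org:80 creates two separate address-based sets, so requests arriving on the public interface only ever see one of them. A minimal sketch of the usual layout, with a single NameVirtualHost covering all interfaces and hostnames appearing only in ServerName:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName rfkrocktk.dyndns.org
            DocumentRoot /var/www
        </VirtualHost>

    Apache then selects the vhost by the Host: header, with the first block acting as the default for anything that matches neither name; the per-host access rules and rewrite logging can stay inside whichever block they belong to.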

    Read the article

  • Postfix Relay to Office365

    - by woodsbw
    I am trying to setup a Postfix server on a Linux box to relay all mail to our Office365 (Exchange, hosted by Microsoft) mail server, but, I keep getting an error regarding the sending address: BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6, delays=0.01/0/2.5/5.1, dsn=5.7.1, status=bounced (host pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command)) Office 365 requires that the sending address in the MAIL FROM and From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n: append_dot_mydomain = no biff = no config_directory = /etc/postfix debug_peer_list = 127.0.0.1 inet_interfaces = loopback-only inet_protocols = all mailbox_size_limit = 0 mydestination = xxxxx, localhost.localdomain, localhost myhostname = localhost mynetworks = 127.0.0.0/8 recipient_delimiter = + relay_domains = our.doamin relayhost = [pod51010.outlook.com]:587 sender_canonical_classes = envelope_sender sender_canonical_maps = hash:/etc/postfix/sender_canonical smtp_always_send_ehlo = yes smtp_sasl_auth_enable = yes smtp_sasl_mechanism_filter = login smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = smtp_tls_CAfile = /etc/postfix/cacert.pem smtp_tls_loglevel = 1 smtp_tls_security_level = may smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes sender_canonical: www-data [email protected] root [email protected] www-data@localhost [email protected] root@localhost [email protected] Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times.) Authentication works, and sends the message when the from headers are correct (also tested using swaks....which works) The emails are coming from PHP, so I have also tried altering the sendmail path in php.ini to use pass the correct from address via -f So, for some reason, mail coming from www-data and root are not having the from fields rewritten to Office 365's satisfaction, and it won't send the message. Any postfix gurus out there that can help me setup this relay?
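
    sender_canonical_maps with sender_canonical_classes = envelope_sender rewrites only the envelope MAIL FROM, while the 550 from Office 365 complains about the From: header as well. Postfix's smtp_generic_maps rewrites local addresses in both the envelope and the headers as mail leaves over SMTP, which is usually enough to satisfy the "sender must match the authenticating mailbox" rule. A sketch, with the Office 365 mailbox written as a placeholder:

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic
        www-data            [email protected]
        root                [email protected]
        www-data@localhost  [email protected]
        root@localhost      [email protected]

    followed by "postmap /etc/postfix/generic" and a postfix reload. The -f flag passed from PHP only sets the envelope sender, so on its own it runs into the same header mismatch.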

    Read the article

  • Using IIS7 as a reverse proxy

    - by Jon
    My question is pretty much identical to the question listed but they did not get an answer as they ended up using Linux as the reverse proxy. http://serverfault.com/questions/55309/using-iis7-as-a-reverse-proxy I need to have IIS the main site and linux (Apache) being the proxied site(s). so I have site1.com (IIS7) site2.com (Linux Apache) they have subdomains of sub1.site1.com sub2.site1.com sub3.site2.com I want all traffic to go to site1.com and to say anything that is site2.com should be proxied to linux box on internal network, (believe ARR can do this but not sure how). I can not have it running as Apache doing the proxying as I need IIS exposed directly. any and all advice would be great. EDIT I think this might help me: <rule name="Canonical Host Name" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTP_HOST}" negate="true" pattern="^cto\.com$" /> <add input="{HTTP_HOST}" negate="true" pattern="^antoniochagoury\.com$" /> <add input="{HTTP_HOST}" negate="true" pattern="www.antoniochagoury\.com$" /> </conditions> <action type="Redirect" url="http://www.cto20.com/{R:1}" redirectType="Permanent" /> </rule> from: http://www.cto20.com/post/Tips-Tricks-3-URL-Rewriting-Rules-Everyone-Should-Use.aspx I will have a look at this when I have access to the IIS7 box. Thanks
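
    With ARR installed and its proxy feature enabled (Server Proxy Settings, Enable proxy), a URL Rewrite rule on the IIS site can forward anything whose Host header belongs to site2.com to the Apache box while leaving site1 traffic alone. A sketch of such a rule in web.config; the internal address 10.0.0.5 is an assumption:

        <rule name="Proxy site2 to Apache" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^(.*\.)?site2\.com$" />
          </conditions>
          <action type="Rewrite" url="http://10.0.0.5/{R:1}" />
        </rule>

    The "Canonical Host Name" rule quoted above only issues redirects; for proxying it is the Rewrite action plus ARR's proxy switch that does the work, and preserving the Host header in ARR keeps the Apache vhosts matching correctly.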

    Read the article

  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 for hosting owncloud. I configured a vhost file for owncloud, but every time I go on the site my browser downloads a ruby file. Here is my vhost configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName http://rsserver.fritz.box DocumentRoot /var/www/owncloud/ <Directory /var/www/owncloud/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Apache error log tells me: [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php mod_rewrite is enabled. Where is the problem?
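
    Two separate symptoms seem to be mixed here. A browser that downloads a file instead of rendering the page is the classic sign that PHP is not being executed in this vhost at all, so confirming the Apache PHP module is enabled is step one (on Debian/Ubuntu something like "a2enmod php5" followed by a restart, assuming libapache2-mod-php5 is installed). The log entry is a different issue: the vhost grants FollowSymLinks, but one of the .htaccess files ownCloud ships can reset Options for its subtree, at which point its RewriteRule lines become forbidden. A hedged fix is to re-assert the option where the error points:

        # inside the <Directory /var/www/owncloud/> block, or in the offending .htaccess
        Options +FollowSymLinks

    With PHP executing again and the rewrite error gone, the stray "ruby file" download should disappear as well.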

    Read the article

  • How use DNS server to create simple HA (High availability) of my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline (for clarity I use internal IPs here; in reality these would be different hosting companies), traffic should go to 192.168.0.130:80. You are right, "high availability" is the wrong term; what I actually mean is failover. Using several IPs in A records is fine for simple load balancing, but not if I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of a "can't establish connection" error. I was thinking about setting it up like this: two DNS servers, one installed on the www server, both with a low TTL on my domain, and two NS records, the first pointing to the DNS server on my Apache machine and the second to the other DNS server. When a user tries to connect, he gets the IP of the www server from the first DNS server; if that DNS server is offline (then the www server is probably also down), the resolver tries the second NS record, which points to the other DNS server, and that one points to a "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, and I may use another country for the backup.
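
    One caveat with the two-NS idea: resolvers only move to the second NS record when the first name server is unreachable, not when the web server behind it is down, and both name servers are expected to hand out identical zone data, so on its own this is not a reliable failover signal. What does work with nothing but DNS is a low TTL plus something that rewrites the A record when a health check fails. A sketch of the zone side, in BIND-style syntax with the addresses from the example above:

        ; primary answer, kept on a short TTL so a change propagates quickly
        www   60   IN   A   192.168.0.120
        ; after the monitor detects a failure it swaps the record:
        ; www 60   IN   A   192.168.0.130   ; backup "sorry" page

    The swap itself can be done by a cron or monitoring script via nsupdate or the DNS provider's API. Low TTLs shrink, but never remove, the window in which some visitors still hit the dead address.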

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server I installed LUA 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 I installed prosody following this command (after downloading the package) - sudo dpkg -i prosody_0.8.2-1_i386.deb After installation, I get the following error: I have tried to use as suggested luarock and sudo apt-get install to fix these. But still it keeps showing me these errors. Selecting previously deselected package prosody. (Reading database ... 59416 files and directories currently installed.) Unpacking prosody (from prosody_0.8.2-1_i386.deb) ... Setting up prosody (0.8.2-1) ... * Starting Prosody XMPP Server prosody ************** Prosody was unable to find luaexpat This package can be obtained in the following ways: Source: www[dot]keplerproject[dot]org/luaexpat/ Debian/Ubuntu: sudo apt-get install liblua5.1-expat0 luarocks: luarocks install luaexpat luaexpat is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find luasocket This package can be obtained in the following ways: Source: www[dot]tecgraf[dot]puc-rio[dot]br/~diego/professional/luasocket/ Debian/Ubuntu: sudo apt-get install liblua5.1-socket2 luarocks: luarocks install luasocket luasocket is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find LuaSec This package can be obtained in the following ways: Source: www[dot]inf[dot]puc-rio[dot]br/~brunoos/luasec/ Debian/Ubuntu: prosody[dot]im/download/start#debian_and_ubuntu luarocks: luarocks install luasec SSL/TLS support will not be available More help can be found on our website, at prosody[dot]im/doc/depends [fail] invoke-rc.d: initscript prosody, action "start" failed. dpkg: error processing prosody (--install): subprocess installed post-installation script returned error exit status 1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: prosody Thanks a lot for your patience and answers.
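
    The init script is only reporting missing Lua libraries; the hand-built Lua 5.1 does not bring luaexpat, luasocket or luasec with it, and the .deb expects them to be present. On Ubuntu 10.04 the distribution packages named in the message should be enough, with luasec via luarocks if no package is available:

        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2
        sudo luarocks install luasec      # optional, only needed for TLS
        sudo dpkg --configure -a          # re-run the failed post-install step
        sudo service prosody start

    One hedge: if luarocks installed the modules under the self-compiled Lua's paths rather than the system ones, Prosody (which uses the system lua5.1) will still not see them; "luarocks path" shows where they ended up.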

    Read the article

  • Postfix 554 <[email protected]>: Relay access denied

    - by Matt
    So i am trying to set postfix up and I am running into some problems.....here is my files vim /etc/postfix/main.cf relayhost = [smtp.gmail.com]:587 smtp_connection_cache_destinations = smtp.gmail.com smtp_sasl_auth_enable=yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_tls_security_options = noanonymous tls_random_source = dev:/dev/urandom smtp_tls_CAfile= /etc/pki/CA/cacert.pem smtp_tls_security_level = may smtp_tls_scert_verifydepth = 9 append_dot_mydomain = no readme_directory = no myhostname = maggie.deliverypath.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = maggie.deliverypath.com, localhost.deliverypath.com, , localhost mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all I also have the gmail password info vim /etc/postfix/sasl_passwd gmail-smtp.l.google.com [email protected]:somepass smtp.gmail.com [email protected]:somepass then I try to follow this article and i get this output telnet mail.demoslice.com 25 Trying 67.207.128.80... Connected to www.slicehost.com. Escape character is '^]'. 220 www.slicehost.com ESMTP Postfix (Ubuntu) HELO test.demoslice.com 250 www.slicehost.com MAIL FROM:<[email protected]> 250 Ok RCPT TO:<[email protected]> 554 <[email protected]>: Relay access denied its started service postfix start * Starting Postfix Mail Transport Agent postfix ...done. then the screen gets frozen and i cant do anything....any ideas
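
    The 554 here is Postfix doing its job: the telnet session comes from an outside address, the recipient domain (gmail.com) is neither in mydestination nor relay_domains, and only clients inside mynetworks (127.0.0.0/8 in this config) may relay out through the gmail relayhost. Testing from the server itself avoids the rejection; the recipient address below is a placeholder:

        # run on the server itself, so the client IP falls inside mynetworks
        printf "Subject: relay test\n\ntest body\n" | /usr/sbin/sendmail [email protected]
        tail -f /var/log/mail.log

    Also worth noting: the sasl_passwd file must be hashed with "postmap /etc/postfix/sasl_passwd" after every edit, and the lookup key has to match relayhost exactly as written ([smtp.gmail.com]:587). The "frozen" screen after "service postfix start" is most likely just the shell still sitting inside the open telnet session; QUIT (or ^] then quit) gets out of it.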

    Read the article

  • Virtualhost setup for Ruby on Rails application (mod passenger)

    - by Ingo86
    Hi all, I'm trying to install Redmine under apache. The apache server works on a local network. My apache setup consist on a single virtual host. I can get insto different directories using simply the path corresponding: http://ip_address/folder_of_the_project_1 How can I setup the virtualhost to make redmine works in this situation? Here is my current virtualhost setup: NameVirtualHost * <VirtualHost *> ServerAdmin webmaster@localhost DocumentRoot /var/www/ RailsBaseURI /redmine <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> <Directory /var/www/redmine/public> Options FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> Thank you, Ingo86
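
    With Passenger, RailsBaseURI /redmine expects /var/www/redmine to be (or point at) the application's public directory, not the whole application tree. The usual pattern, hedged because the actual install path of Redmine is not shown above, is a symlink plus a Directory block on the link target:

        # assuming Redmine itself lives in /opt/redmine (placeholder path)
        ln -s /opt/redmine/public /var/www/redmine

        # in the vhost
        RailsBaseURI /redmine
        <Directory /var/www/redmine>
            Options -MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    After restarting Apache, http://ip_address/redmine should be handled by Passenger while the other folders under /var/www keep being served as plain directories.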

    Read the article

  • reverse proxy not rewriting to https

    - by polishpt
    I need your help. I'm having problems with reverse proxy rewriting to https: I have an alfresco app running on top of tomcat and as a front and an Apache server - it's site-enabled looks like that: <VirtualHost *:80> ServerName alfresco JkMount /* ajp13_worker <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature Off </VirtualHost> I also have a reverse proxy server running on second machine and i want it to rewrite queries to https. It's site-enabled looks like that: <VirtualHost 192.168.251.50:80> ServerName alfresco DocumentRoot /var/www/ RewriteEngine on RewriteRule (.*) https://alfresco/ [R] LogLevel warn ErrorLog /var/log/apache2/alfresco-80-error.log CustomLog /var/log/apache2/alfresco-80-access.log combined ServerSignature Off </VirtualHost> <VirtualHost 192.168.251.50:443> ServerName alfresco DocumentRoot /var/www/ SSLEngine On SSLProxyEngine On SSLCertificateFile /etc/ssl/certs/alfresco.pem SSLCertificateKeyFile /etc/ssl/private/alfresco.key SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /alfresco http://192.168.251.50:8080/alfresco ProxyPassReverse /alfresco http://192.168.251.50:8080/alfresco LogLevel warn ErrorLog /var/log/apache2/alfresco-443-error.log CustomLog /var/log/apache2/alfresco-443-access.log combined ServerSignature Off </VirtualHost> Now, ProxyPass works, when I go to alfresco/alfrsco in a browser application opens, but rewriting to https doesn't work. Plese help. Regards when I go to 192.168.251.50 Tomcat configuration page shows up. When I go to 192.268.251.50:8080 - the same as above when I go to 192.168.251.50:8080/alfresco - alfresco app page shows app when I go to alfresco/alfresco - same as above when i go to https://alfresco or https://alfresco i get an error connecting to a server
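
    Two hedged observations on the :80 vhost doing the redirect. First, the rule throws the request path away, so even when it fires the browser lands on https://alfresco/ rather than the page it asked for. Second, "alfresco" must be a name the client machines can actually resolve, otherwise browsers report a connection error exactly as described. A sketch that keeps the path and marks the redirect permanent:

        <VirtualHost 192.168.251.50:80>
            ServerName alfresco
            RewriteEngine on
            # preserve the original URI when bouncing to https
            RewriteRule ^/(.*)$ https://alfresco/$1 [R=301,L]
        </VirtualHost>

    If https://192.168.251.50/alfresco works but https://alfresco/ does not, the remaining problem is name resolution (hosts file or internal DNS) rather than the proxy configuration.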

    Read the article

  • Better approach to archiving large amounts of original video footage using optical media (DVD/Blu-ray)

    - by Rob
    This question is to share my experience as well as ask for suggestions for better methods. Along with 2 friends, I completed the making of a short documentary film in 2006. Clip is at: http://www.youtube.com/mediamotioninvision The film was edited in Adobe Premiere Pro 1.5 on Windows XP. More details and screenshot here: http://www.flickr.com/photos/smilingrobbie/1350235514/ ( note this is not intended to be a plug, we've moved on from this initial learning curve project ;) ) The film is in 4:3 standard definition 720x576 PAL format. As well as retaining the final 30minute film, I wanted to keep all original files that assembled together to make the film. The footage was 83.5Gb So I archived them to over 20 4.7Gb DVD recordables in the original .avi format (i.e. data DVD-ROM format, NOT DVD-Video Mpeg2) Some .avi DV video files were larger than 4.7Gb so I used 7-zip to split them ( here is a guide as to how to do that: http://www.linglom.com/2008/10/12/how-to-split-a-large-file-using-7-zip/ ) To recombine them, a dos shell command like this would do that: copy /b file.avi.* file.avi would do the job, where .* is a wild card to include all the split parts e.g. 001, 002...00n assuming they are all in the same directory path folder. file.avi is the recombined file identical to the original. Later on, I bought a LG BE06 LU10 USB 2.0 Super-multi Blu-ray burner and archived the footage to 2 (two) x 50Gb BD-R DL discs. Again in the original format, written as files to a BD-R in the BD-R BD-ROM UDF format readable by PC/Mac etc, NOT Blu-ray video/film format. This seems to be a good solution for me, because: the archive is in a robust, reasonably permanent, non-volatile medium, i.e. DVD recordable / Blu-ray (debates about stability of optical media organic chemical dye compounds/substrates aside) the format of the archive is accessible by open source tools or just plain Windows Explorer and it's not in a proprietary format I just thought I'd ask folks for their experience on better methods, if such exist.
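
    For reference, the split side of that workflow can also be done from the 7-Zip command line, which additionally gives an archive whose integrity can be tested rather than bare split parts. A sketch, sized so each volume fits a single-layer DVD-R; the 4480m figure leaves a little headroom and is an assumption, not a 7-Zip default:

        rem store without recompression (DV AVI barely compresses) and split into volumes
        7z a -mx=0 -v4480m footage_tape01.7z tape01.avi
        rem later, to restore, point 7z at the first volume
        7z x footage_tape01.7z.001

    Keeping a checksum list (for example an MD5 of every original .avi) on each disc makes it possible to verify a restore years later, which plain copy /b recombination does not give by itself.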

    Read the article

  • How to configure multiple virtual hosts for multiple users on Linux/Apache2.2

    - by authentictech
    I want to set up a virtual hosting server on Linux/Apache2.2 that allows multiple users to set up multiple website domains as would be appropriate for commercial shared hosting. I have seen examples (from my then perspective as a shared hosting customer) that allow users to store their web files in their user home directory with directories to correspond to the virtual host domain, e.g.: /home/user1/www/example1.com /home/user2/www/example2.com instead of using /var/www Questions: How would you configure this in your Apache configuration files? (Don't worry about DNS) Is this the best way to manage multiple virtual hosts? Are there others? What safety or security issues do you think I should be aware of in doing this? Many thanks, folks. Edit: If you want to only answer question 1, please feel free, as that is the most urgent to me at this moment and I would consider that an answer to the question. I have done it for myself since posting, but I am not confident that it's the best solution and I would like to know how an experienced sysadmin would do it. Thanks.
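
    For question 1, each domain simply gets its own vhost whose DocumentRoot points into the owner's home directory; nothing about /var/www is special to Apache. A minimal hedged sketch in Apache 2.2 syntax, with paths matching the example layout:

        <VirtualHost *:80>
            ServerName example1.com
            ServerAlias www.example1.com
            DocumentRoot /home/user1/www/example1.com
            <Directory /home/user1/www/example1.com>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    On the safety side (question 3): the Apache user needs traverse (execute) permission on /home/user1 and every directory below it, home directories should not be world-writable, and if untrusted users run PHP, per-vhost open_basedir, suEXEC/suPHP, or separate php-fpm pools per user keep one customer's code out of another's files.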

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that the Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated an ssl key and crt pair, and configured squid like this: http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key and left almost all other configuration at its defaults. The permissions of the dir that holds the key/crt are drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl Back on my dev laptop, I put <proxy-server-ip> www.googleapis.com in my /etc/hosts to make the call go to my proxy server. But when I try it in my rails application, I get: SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol And I also tried with openssl on the cli: openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL" SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A SSL_connect:error in SSLv2/v3 read server hello A SSL_connect:error in SSLv2/v3 read server hello A Where did I go wrong?
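
    The "unknown protocol" from OpenSSL means the client is getting a plain-HTTP answer on port 443: http_port tells Squid to speak HTTP there, and the cert=/key= options alone do not turn it into a TLS endpoint. Terminating TLS is https_port's job, and intercepting someone else's HTTPS additionally needs ssl-bump plus a CA certificate the client trusts. A very rough sketch of the relevant Squid 3.3 lines, offered as a direction to read up on rather than a drop-in config:

        # TLS listener; Squid presents this certificate to the client
        https_port 443 intercept ssl-bump cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
        ssl_bump server-first all

    Two smaller notes: the config above points cert= at server.csr, which is a signing request rather than a certificate, and an alternative that avoids interception entirely is to run Squid as an ordinary forward proxy and point the Rails app at it via http_proxy/HTTPS_PROXY, letting CONNECT tunnel the googleapis.com traffic untouched.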

    Read the article

  • Postfix: Using google apps for stmp errors

    - by Zed Said
    I am using postfix and need to send the mail using google apps smtp. I am getting errors after I thought I had set everything up correctly: May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = zedsaid.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = #relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all delay_warning_time = 4h smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. smtpd_hard_error_limit = 12 ## Gmail Relay relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_sasl_mechanism_filter = login smtp_tls_eccert_file = smtp_tls_eckey_file = smtp_use_tls = yes smtp_enforce_tls = no smtp_tls_CAfile = /etc/postfix/cacert.pem smtpd_tls_received_header = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport debug_peer_list = smtp.gmail.com debug_peer_level = 3 What am I doing wrong?
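
    "SASL authentication failed ... no mechanism available" almost always means the Cyrus SASL client plugins that provide PLAIN and LOGIN are missing, so Postfix has nothing it is allowed to offer smtp.gmail.com. A hedged fix on Debian/Ubuntu, plus the usual follow-ups:

        sudo apt-get install libsasl2-modules
        sudo postmap /etc/postfix/sasl_passwd
        sudo postfix reload
        # then push the deferred mail back through the queue
        sudo postqueue -f

    Also worth checking in the config above: the sasl_passwd key must match relayhost exactly as written ("[smtp.gmail.com]:587", brackets and port included), and the Google Apps account needs either an application-specific password or "less secure apps" allowed, otherwise the error changes from "no mechanism" to a plain authentication failure.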

    Read the article

  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors can go into play here, so I will list my current hardware and setup: MOBO: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441] Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849] Ram: G.SKILL Ripjaws Series 16GB (4 x 4GB) [https://secure.newegg.com/NewMyAccount/OrderHistory.aspx?RandomID=4933910872745320111128011418] Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591] OS: Windows 7 Ultimate x64 Currently I can run 2 monitors just fine in my setup. However, I want to upgrade this to 4 monitors. My question is, what is the best way to do this? I remember in the past reading I need the same type of video card, however would any GeForce GTX work, or do i need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better setup for this? I know NVidia offers SLI, however I do not know if my mobo is compliant. My mobo also offers CrossFireX configuration, though from what it says only Radeon are compliant. Any suggestions / feedbacks on my best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you can mention which and why that is better I really appreciate it. Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that I mostly do all my computer work in Visual Studios and Photoshop. I however need the 2 extra monitors because I monitor sometimes 5 remote desktops at once and switching on only 2 is becoming a very big pain. Also seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.

    Read the article

  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page: The requested URL /fastcgi/php54.fcgi/index.php was not found on this server. Somewhere, it seems that the script to be executed is appended to the executable without any spaces. Is this where the problem likely is? Below I've included snippets from my Apache configuration (hopefully this is enough): LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so FastCgiIpcDir /var/run/fastcgi AddHandler fastcgi-script .fcgi FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300 AddType application/x-httpd-php .php ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/ <Directory "/Library/WebServer/FCGI-Executables"> Options +ExecCGI SetHandler fastcgi-script Order allow,deny Allow from all <VirtualHost *:80> ServerName www.somedomain.com ServerAdmin [email protected] DocumentRoot "/Web/www.somedomain.com" DirectoryIndex index.html index.php default.html CustomLog /var/log/apache2/access_log combinedvhost ErrorLog /var/log/apache2/error_log Action application/x-httpd-php /fastcgi/php54.fcgi <IfModule mod_ssl.c> SSLEngine Off SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM" SSLProtocol -ALL +SSLv3 +TLSv1 SSLProxyEngine On SSLProxyProtocol -ALL +SSLv3 +TLSv1 </IfModule> <Directory "/Web/www.somedomain.com"> Options All -Indexes +ExecCGI +Includes +MultiViews AllowOverride All <IfModule mod_dav.c> DAV Off </IfModule> <IfDefine !WEBSERVICE_ON> Deny from all ErrorDocument 403 /customerror/websitesoff403.html </IfDefine> </Directory> </VirtualHost> ... and this is the executable: #!/bin/sh PHP_FCGI_CHILDREN=1 PHP_FCGI_MAX_REQUESTS=5000 export PHP_FCGI_CHILDREN export PHP_FCGI_MAX_REQUESTS exec /opt/local/bin/php-cgi54
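
    That 404 usually means Apache could not map /fastcgi/php54.fcgi to a real file, so the Action handler falls over before PHP is ever involved; the script name being glued onto the wrapper via PATH_INFO is normal and not a problem by itself. Hedged things to verify on this setup:

        # the wrapper must exist at the ScriptAlias target and be executable by the Apache user
        ls -l /Library/WebServer/FCGI-Executables/php54.fcgi
        chmod +x /Library/WebServer/FCGI-Executables/php54.fcgi
        # syntax-check the config; the <Directory "...FCGI-Executables"> block above
        # appears to be missing its closing </Directory>
        apachectl configtest

    If the wrapper is present and executable, the next suspect is PATH_INFO handling; "AcceptPathInfo On" for the executables directory lets Apache accept the trailing /index.php instead of returning 404.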

    Read the article

  • Either, nginx+php-fpm bad config or nginx+php-fpm cannot handle high query?

    - by The Wolf
    I have wordpress installed in my server configured(hopefully with nginx+php-fpm+mariaDB). I am trying to import using wordpress importer a 1.5MB xml file. Everytime I try to upload it using the importer, it got cut of... meaning just blank screen result.. Here is my error log: actually I just posted 2 of the errors [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" I don't know what is the reason why it can't process the wordpress export .xml. I already increased max_file_upload & etc., but nothing happens. Hope somebody can help me. Here are my conf: nginx.conf user nginx; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; server_tokens off; keepalive_timeout 65; fastcgi_read_timeout 500; #gzip on; client_max_body_size 2M; php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. include=/etc/php-fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Default Value: none pid = /var/run/php-fpm/php-fpm.pid ; Error log file ; Default Value: /var/log/php-fpm.log error_log = /var/log/php-fpm/error.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf [root@host etc]# vim php-fpm.conf [root@host etc]# vim php-fpm.conf ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. 
; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf ps aux [root@host etc]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308] root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308] root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5 root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com. mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0 root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf) apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www apache 1040 0.0 1.8 39292 19120 ? 
S 00:30 0:00 php-fpm: pool www root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux netstat -l [root@host etc]# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost.localdomain:smtp *:* LISTEN tcp 0 0 localhost.locald:cslistener *:* LISTEN tcp 0 0 *:mysql *:* LISTEN tcp 0 0 *:http *:* LISTEN tcp 0 0 *:ssh *:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux unix 2 [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock Hope somebody can help me to figure out what is the problem.

    Read the article

  • Change OpenVZ route to pass through ip failover

    - by Kevin Campion
    I have one dedicated server with its own IP and another (failover) IP that refers to the first. I would like to change the outgoing route of a Proxmox virtual machine (OpenVZ) that runs on this dedicated server so that it goes through the failover IP rather than the IP of the main host server. Once connected to the virtual machine, when I do a traceroute: VE# traceroute www.google.fr traceroute to www.google.fr (209.85.229.104), 30 hops max, 60 byte packets 1 MY_SERVER_NAME.ovh.net (xxx.xxx.xxx.xxx FIRST_IP_MAIN_SERVER) 0.021 ms 0.010 ms 0.009 ms The first line shows me the IP of the main host server. I would like the traceroute to show the second (failover) IP instead. VE# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.0.2.1 * 255.255.255.255 UH 0 0 0 venet0 default 192.0.2.1 0.0.0.0 UG 0 0 0 venet0 With iptables: HOST# iptables -t nat -L Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- anywhere anywhere MASQUERADE all -- anywhere anywhere SNAT tcp -- anywhere 10.10.101.2 tcp dpt:www state NEW,RELATED,ESTABLISHED,UNTRACKED to:SECOND_IP_FAILOVER SNAT all -- 10.10.101.2 anywhere to:SECOND_IP_FAILOVER 10.10.101.2 is the virtual machine's IP (interface venet0). Any ideas?
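
    The nat table above already contains the right SNAT rule for 10.10.101.2, but iptables applies POSTROUTING rules in order and the two blanket MASQUERADE rules sit in front of it, so the container's outgoing traffic is rewritten to the host's primary IP before the SNAT line is ever reached. Re-ordering so the specific rule wins is usually enough; a sketch run on the host, with FAILOVER_IP as a placeholder for the second address:

        # move the container-specific SNAT ahead of the blanket MASQUERADE rules
        iptables -t nat -D POSTROUTING -s 10.10.101.2 -j SNAT --to-source FAILOVER_IP
        iptables -t nat -I POSTROUTING 1 -s 10.10.101.2 -j SNAT --to-source FAILOVER_IP

    External services will then see the container's connections coming from FAILOVER_IP. Note that traceroute's first hop will still show the host machine, since the host remains the container's gateway regardless of which address the traffic is NATed to.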

    Read the article

  • Linux apache developing configuration

    - by Jeffrey Vandenborne
    I recently reinstalled my system and came to a point where I need Apache and PHP. I've been searching for a long time, but I can't figure out the best way to configure Apache for a developer machine. The plan is simple: I want to install Apache 2 + a MySQL server so I can develop some PHP websites. I don't want to install a LAMP bundle though, just apache2, php5 and mysql. The problem I've been looking for an answer to is the permissions on the /var/www/ folder. I've tried making it my folder using the chown command, followed by chmod -R 755 /var/www. Most things work then, but fwrite for example won't, because I would need to give write permissions to everyone, and unless I change my global umask to 000 I'm not sure what I can do. In short: I want to install apache2, php5 and mysql-server without using lamp, configured in a way that lets me open up NetBeans, start a project rooted in /var/www/, and run every single function without permission faults. Does anyone have experience or workarounds for this? Extra: OS: Ubuntu 10.04 ARCH: x86_64
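
    A common way to get both a working developer account and a writable web root without chmod 777 is to share a group between the two: the files stay group-owned by www-data, the human account joins that group, and the setgid bit keeps new files in the right group. A hedged sketch for Ubuntu 10.04, run once:

        sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server php5-mysql
        sudo adduser "$USER" www-data
        sudo chown -R "$USER":www-data /var/www
        sudo chmod -R 2775 /var/www        # rwx for owner and group, setgid on directories

    After logging out and back in so the new group membership applies, NetBeans can create projects under /var/www and PHP's fwrite() can write there through the group permission, with no need to relax the umask system-wide.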

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass-virtual-hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config: NameVirtualHost 10.10.0.205 <VirtualHost 10.10.0.205> ServerName *.test VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> <VirtualHost 10.10.0.205> ServerName *.dev VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> A hostname such as www.domain.com.dev, correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible and must I use another IP for each TLD? apachectl -S outputs (trimmed): 10.10.0.205:* is a NameVirtualHost default server *.test port * namevhost *.test port * namevhost *.dev
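
    The behaviour matches the documentation: ServerName takes a literal hostname, so the asterisk is not treated as a wildcard and neither vhost ever matches by name, which is why everything lands in the first (default) vhost for that IP. ServerAlias, on the other hand, does accept wildcards, so the usual arrangement is a throwaway ServerName plus a wildcard alias:

        NameVirtualHost 10.10.0.205

        <VirtualHost 10.10.0.205>
            ServerName test.localhost
            ServerAlias *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName dev.localhost
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>

    With that change apachectl -S lists the *.dev block as a separate namevhost and www.domain.com.dev selects it, so no extra IP per TLD is needed.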

    Read the article
