Search Results

Search found 4834 results on 194 pages for 'nsswitch conf'.

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with Puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my Puppet manifests get more complex, I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but it doesn't work because Puppet throws various errors depending on whether I use classes or defines. I'd like to get some feedback from some seasoned puppetmasters on how they might approach this solution.

        # site.pp
        import 'nodes'

        # nodes.pp
        node nodes_dev  { $service_env = 'dev' }
        node nodes_prod { $service_env = 'prod' }
        import 'nodes/dev'
        import 'nodes/prod'

        # nodes/dev.pp
        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo': }
            httpd::vhost::package::site { 'bar': }
        }

        # modules/vhost/package.pp
        class httpd::vhost::package {
            class manage($port) {
                # More complex stuff goes here like ensuring that conf paths and uris exist
                # as well as log files, which is why I want to do the work once and use many
                notify { $service_env: }
                notify { $port: }
            }
            define site {
                case $name {
                    'foo': {
                        class { 'httpd::vhost::package::manage': port => 20000 }
                    }
                    'bar': {
                        class { 'httpd::vhost::package::manage': port => 20001 }
                    }
                }
            }
        }

    That code snippet gives me a "Duplicate declaration: Class[Httpd::Vhost::Package::Manage]" error, and if I switch the manage class to a define and attempt to access a global or pass in a variable common to both foo and bar, I get a "Duplicate declaration: Notify[dev]" error. Any suggestions on how I can implement the DRY principle and still get Puppet to work?

    -- UPDATE --

    I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are set up correctly. Something like this:

        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo_sitea': }
            httpd::vhost::package::site { 'foo_siteb': }
            httpd::vhost::package::site { 'bar': }
        }

    What I need to happen is that sitea and siteb have the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it; hopefully it makes clear what I am trying to accomplish.

        class httpd::vhost::package {
            File { owner => root, group => root, mode => 0660 }

            define site() {
                $app_parts   = split($name, '[_]')
                $app_primary = $app_parts[0]
                if ($app_parts[1] == '') {
                    $tpl_path_partial_app = "${app_primary}"
                    $app_sub = ''
                } else {
                    $tpl_path_partial_app = "${app_primary}/${app_parts[1]}"
                    $app_sub = $app_parts[1]
                }
                include httpd::vhost::log::base
                httpd::vhost::log::app { $name:
                    app_primary => $app_primary,
                    app_sub     => $app_sub,
                }
            }
        }

        class httpd::vhost::log {
            class base {
                $paths = [
                    '/tmp',
                    '/tmp/var',
                    '/tmp/var/log',
                    '/tmp/var/log/httpd',
                    "/tmp/var/log/httpd/${service_env}",
                ]
                file { $paths: ensure => directory }
            }
            define app($app_primary, $app_sub) {
                $paths = [
                    "/tmp/var/log/httpd/${service_env}/${app_primary}",
                    "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}",
                ]
                file { $paths: ensure => directory }
            }
        }

    The include httpd::vhost::log::base works fine, because it is "included", which means it is only implemented once even though site is called multiple times. The error I am getting is: Duplicate declaration: File[/tmp/var/log/httpd/dev/foo]. I looked into using exec, but I'm not sure that is the correct route; surely others have had to deal with this before, and any insight is appreciated as I have been grappling with this for a few weeks. Thanks.
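
    For what it's worth, a minimal sketch of the usual Puppet 2.x pattern for this: shared resources live in classes (a class pulled in via include is declared at most once no matter how many sites include it), per-site resources live in a parameterized define, and a defined() guard protects a directory that several sites share. The names below mirror the question; this is an illustration, not the asker's final code.

        # sketch -- a parameterized define plus a guarded shared directory
        define httpd::vhost::site($port) {
            include httpd::vhost::log::base      # safe to repeat across sites

            $app_primary = inline_template('<%= name.split("_")[0] %>')
            $parent_dir  = "/tmp/var/log/httpd/${service_env}/${app_primary}"

            # declare the shared parent folder only if no other site got there first
            if !defined(File[$parent_dir]) {
                file { $parent_dir: ensure => directory }
            }

            notify { "site ${name} on port ${port}": }
        }

    Declared as httpd::vhost::site { 'foo_sitea': port => 20000 }, each instance keeps its own resources while the shared folder is declared at most once. defined() is evaluation-order dependent, so where possible the more robust move is what log::base already does: push shared resources into a class and include it.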

  • My virtualhost not working for non-www version

    - by johnlai2004
    I have a development web server (Ubuntu + Apache) that can be accessed via the URL glacialsummit.com. For some reason, http://www.glacialsummit.com serves pages from the /srv/www/glacialsummit.com/ directory, but http://glacialsummit.com serves pages from the /var/www/ directory. Here's what some of my virtualhost config files look like.

    filename: /etc/apache2/sites-enabled/glacialsummit.com

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
        </VirtualHost>

        <VirtualHost 97.107.140.47:443>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
            SSLEngine on
            SSLCertificateFile /etc/ssl/localcerts/www.glacialsummit.com.crt
            SSLCertificateKeyFile /etc/ssl/localcerts/www.glacialsummit.com.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName project.glacialsummit.com
            ServerAlias www.project.glacialsummit.com
            DocumentRoot /srv/www/project.glacialsummit.com/public_html/
            ErrorLog /srv/www/project.glacialsummit.com/logs/error.log
            CustomLog /srv/www/project.glacialsummit.com/logs/access.log combined
        </VirtualHost>

        ## i have many other vhosts that work fine in this file

    filename: /etc/apache2/sites-enabled/000-default

        <VirtualHost 97.107.140.47:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    filename: /etc/apache2/ports.conf

        NameVirtualHost 97.107.140.47:80
        Listen 80
        <IfModule mod_ssl.c>
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            Listen 443
        </IfModule>

    How do I make http://glacialsummit.com serve web pages from /srv/www/glacialsummit.com/public_html just like http://www.glacialsummit.com?
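
    A quick way to see how Apache actually parsed these hosts (a diagnostic sketch; apache2ctl ships with Ubuntu's Apache):

        apache2ctl -S
        # dumps the vhost table: for 97.107.140.47:80 it lists the default vhost
        # (the first one loaded) and every named vhost with its ServerName/ServerAlias

    For name-based matching, the first vhost loaded for an address:port pair is the catch-all for any request whose Host header matches nothing, so if glacialsummit.com is missing from that table, requests for it fall into 000-default and /var/www.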

  • Postmaster uses excessive CPU and Disk Writes

    - by wolfcastle
    Using PostgreSQL 9.1.2, I'm seeing excessive CPU usage and large amounts of writes to disk from postmaster tasks. This happens even while my application is doing almost nothing (10s of inserts per MINUTE). There are a reasonable number of connections open, however. I've been trying to determine what in my application is causing this. I'm pretty newb with postgresql and haven't gotten anywhere so far. I've turned on some logging options in my config file and looked at connections in the pg_stat_activity table, but they are all idle. Yet each connection consumes ~50% CPU and is writing ~15M/s to disk (reading nothing). I'm basically using the stock postgresql.conf with very few tweaks. I'd appreciate any advice or pointers on what I can do to track this down. Here is a sample of what top/iotop is showing me:

        Cpu(s): 18.9%us, 14.4%sy,  0.0%ni, 53.4%id, 11.8%wa,  0.0%hi,  1.5%si,  0.0%st
        Mem:  32865916k total,  7263720k used, 25602196k free,   575608k buffers
        Swap: 16777208k total,        0k used, 16777208k free,  4464212k cached

          PID USER     PR NI VIRT RES  SHR S %CPU %MEM   TIME+  COMMAND
        17057 postgres 20  0 236m 33m  13m R 45.0  0.1 73:48.78 postmaster
        17188 postgres 20  0 219m 15m  11m R 42.3  0.0 61:45.57 postmaster
        17963 postgres 20  0 219m 16m  11m R 42.3  0.1 27:15.01 postmaster
        17084 postgres 20  0 219m 15m  11m S 41.7  0.0 63:13.64 postmaster
        17964 postgres 20  0 219m 17m  12m R 41.7  0.1 27:23.28 postmaster
        18688 postgres 20  0 219m 15m  11m R 41.3  0.0 63:46.81 postmaster
        17088 postgres 20  0 226m 24m  12m R 41.0  0.1 64:39.63 postmaster
        24767 postgres 20  0 219m 17m  12m R 41.0  0.1 24:39.24 postmaster
        18660 postgres 20  0 219m 14m 9.9m S 40.7  0.0 60:51.52 postmaster
        18664 postgres 20  0 218m 15m  11m S 40.7  0.0 61:39.61 postmaster
        17962 postgres 20  0 222m 19m  11m S 40.3  0.1 11:48.79 postmaster
        18671 postgres 20  0 219m 14m   9m S 39.4  0.0 60:53.21 postmaster
        26168 postgres 20  0 219m 15m  10m S 38.4  0.0 59:04.55 postmaster

        Total DISK READ: 0.00 B/s | Total DISK WRITE: 195.97 M/s
          TID PRIO USER     DISK READ DISK WRITE SWAPIN  IO>   COMMAND
        17962 be/4 postgres 0.00 B/s  14.83 M/s  0.00 % 0.25 % postgres: aggw aggw [local] idle
        17084 be/4 postgres 0.00 B/s  15.53 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17963 be/4 postgres 0.00 B/s  15.00 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17188 be/4 postgres 0.00 B/s  14.80 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17964 be/4 postgres 0.00 B/s  15.50 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        18664 be/4 postgres 0.00 B/s  15.13 M/s  0.00 % 0.23 % postgres: aggw aggw [local] idle
        17088 be/4 postgres 0.00 B/s  14.71 M/s  0.00 % 0.13 % postgres: aggw aggw [local] idle
        18688 be/4 postgres 0.00 B/s  14.72 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        24767 be/4 postgres 0.00 B/s  14.93 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        18671 be/4 postgres 0.00 B/s  16.14 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        17057 be/4 postgres 0.00 B/s  13.58 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        26168 be/4 postgres 0.00 B/s  15.50 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        18660 be/4 postgres 0.00 B/s  15.85 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
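
    A small inspection query that usually adds context here, using the pre-9.2 column names that 9.1 still has (procpid/current_query): backends that are actually "idle in transaction", or that share one long-running transaction, stand out immediately.

        SELECT procpid, usename, waiting, xact_start, current_query
        FROM pg_stat_activity
        ORDER BY xact_start NULLS LAST;

    If current_query really is <IDLE> everywhere while the writes continue, attaching strace -p to one of the PIDs listed above (they are ordinary processes) will at least show which file descriptors the ~15M/s is going to.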

  • ubuntu eth0 not reconnecting after cable unplugged

    - by Alex
    I'm running Kubuntu 9.10 w/ GNOME, and I have a static IP defined in /etc/network/interfaces. When I unplugged my network cable and rebooted, then reconnected the network cable, I was not able to connect. I tried using sudo ifup eth0, and then ifconfig, and it seemed as though the IP address had been assigned and I was connected, but I wasn't. I then did ifdown eth0, and again ifup eth0. For some reason I'm not able to access the network. Furthermore, I also attempted to connect via wlan, and was able to connect to the wireless network, but cannot "see" the network. I can't transfer data or access the internet or anything on the network, including the router. How do I resolve this?

        topsy@monolyth:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1c:25:1c:df:70
                  inet addr:192.168.1.145  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:25ff:fe1c:df70/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5720 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:565 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:378035 (378.0 KB)  TX bytes:46832 (46.8 KB)
                  Memory:fe000000-fe020000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

    By "access the network" I mean the local network as well as the internet.

        topsy@monolyth:~$ ping 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=9.14 ms
        64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.24 ms
        64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=1.01 ms
        64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.00 ms
        [snip... all OK, icmp_seq from 5-30, time between 0.981-1.25ms]
        ^C
        --- 192.168.1.1 ping statistics ---
        30 packets transmitted, 30 received, 0% packet loss, time 29035ms
        rtt min/avg/max/mdev = 0.971/1.300/9.140/1.458 ms

        topsy@monolyth:~$ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     *               255.255.255.0   U     0      0        0 eth0
        link-local      *               255.255.0.0     U     1000   0        0 eth0
        default         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

        root@monolyth:~# cat /etc/resolv.conf
        # Generated by NetworkManager
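
    One detail worth flagging in that output: pings to the router succeed and the routing table looks sane, but /etc/resolv.conf contains nothing except the NetworkManager header, so name resolution rather than routing may be what is actually broken. A sketch of a static stanza that also pins nameservers (the dns-nameservers line needs the resolvconf package; without it, nameserver lines go directly into /etc/resolv.conf):

        # /etc/network/interfaces -- addresses taken from the output above
        auto eth0
        iface eth0 inet static
            address 192.168.1.145
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

    Testing with a raw IP (e.g. ping 8.8.8.8) versus a hostname distinguishes the two failure modes quickly.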

  • Nginx + PHP - No input file specified

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

        http {
            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            root          /www;
            keepalive_timeout 65;

            server {
                server_name testapp.com;
                root  /www/app/www/;
                index index.php index.html index.htm;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass   127.0.0.1:9000;
                    fastcgi_index  index.php;
                    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include        fastcgi_params;
                }
            }

            server {
                listen 80 default_server;
                index  index.html index.php;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass   127.0.0.1:9000;
                    fastcgi_index  index.php;
                    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include        fastcgi_params;
                }
            }
        }

    In my hosts file, I redirect 2 domains, testapp.com and test.com, to 127.0.0.1. My web files are all stored in /www. With the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error. So, at this point, I pull out the log files and have a look:

        2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open
        primary script: /www/app/www/index.php (No such file or directory)" while reading
        response header from upstream, client: 127.0.0.1, server: testapp.com,
        request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host?
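
    When nginx and PHP-FPM disagree about whether a file exists, checking the path as the FPM pool's user usually settles it (a diagnostic sketch; www-data is assumed as the pool user, and a chrooted pool would produce the same symptom):

        sudo -u www-data namei -l /www/app/www/index.php
        # prints every path component with owner and mode; any directory the pool
        # user cannot traverse explains FPM's "No such file or directory" for a
        # file that plainly exists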

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a rails app (let's call it myapp) running at www.myapp.com. I want to add a wordpress blog at www.myapp.com/blog. The webserver for the rails app is thin (see the upstream block). The wordpress runs with php-fastcgi. The rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

        2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecti\
        ng to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", \
        upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

        upstream myapp {
            server unix:/tmp/thin_myapp.0.sock;
            server unix:/tmp/thin_myapp.1.sock;
            server unix:/tmp/thin_myapp2.sock;
        }

        server {
            listen 80;
            server_name www.myapp.com;
            client_max_body_size 20M;
            access_log /home/myapp/myapp/log/access.log;
            error_log  /home/myapp/myapp/log/error.log error;
            root  /home/myapp/myapp/public;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;

                # Index HTML Files
                if (-f $document_root/cache/$uri/index.html) {
                    rewrite (.*) /cache/$1/index.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://myapp;
                    break;
                }
                # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
            }

            location /blog/ {
                root /var/www/wordpress;
                fastcgi_index index.php;
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /blog/index.php?q=$1 last;
                }
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
                fastcgi_pass localhost:9000; # port to FastCGI
            }
        }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I can test whether fastcgi is running with telnet:

        $> telnet 127.0.0.1 9000
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    And it's not.
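
    The telnet test settles it: nothing is listening on 127.0.0.1:9000, so nginx's fastcgi_pass has nowhere to go. A sketch of starting a PHP FastCGI listener there (package names assume Debian/Ubuntu with php5-cgi and spawn-fcgi installed):

        sudo spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php-cgi

    Installing php5-fpm instead gives the same listener with a proper init script; either way, the telnet probe above should connect before nginx is retried.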

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step on this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think?

    Update: I found some error logs in /private/var/log/apache2/. Found this in one of the logs. Not sure what it means:

        [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down
        [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist
        Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist
        httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName
        mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'.
        [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 10 21:49:19 2009] [notice] Digest: done
        [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations

    Update: I also found something in the dummy-host.example.com-error_log file. I didn't set these dummy-host things up, by the way. Is this the default configuration?

        [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs

    Update: Woohoo! I found the file that had the virtual host definitions. It was in /etc/apache2/extra/httpd-vhosts.conf. It had those two dummy virtual host settings in there. I added a localhost virtual host. Not sure if this is necessary, but since it wasn't working before, I decided to do it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting Apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux?

    Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
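
    To answer the closing questions: on OS X the pattern is indeed one block per site in httpd-vhosts.conf plus a matching line in /etc/hosts. A sketch with hypothetical names:

        # /etc/apache2/extra/httpd-vhosts.conf
        <VirtualHost *:80>
            ServerName mysite.local
            DocumentRoot "/Users/andrew/Sites/mysite"
        </VirtualHost>

        # /etc/hosts
        127.0.0.1   mysite.local

    followed by sudo apachectl restart. The hosts entry plays the role DNS would normally play; Apache itself only ever sees the Host header.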

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names -- basically anything.local. I know that stuff is working because I can access static files properly; however, php files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct because I can access the static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to. Here's my config:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            gzip on;

            server {
                # Listen for HTTP
                listen 80;

                # Match to local host names.
                server_name *.local;

                # We need to store a "cleaned" host.
                set $no_www $host;
                set $no_local $host;

                # Strip out www.
                if ($host ~* www\.(.*)) {
                    set $no_www $1;
                    rewrite ^(.*)$ $scheme://$no_www$1 permanent;
                }

                # Strip local for directory names.
                if ($no_www ~* (.*)\.local) {
                    set $no_local $1;
                }

                # Define default path handler.
                location / {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    index index.php index.html index.htm;

                    # Route non-existent paths through Kohana system router.
                    try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
                }

                # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    include fastcgi.conf;
                }

                # Prevent access to system files.
                location ~ /\. {
                    return 404;
                }
                location ~* ^/(modules|application|system) {
                    return 404;
                }
            }
        }
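
    Two things worth poking at, sketched under assumptions: the stock fastcgi.conf builds the script path as SCRIPT_FILENAME = $document_root$fastcgi_script_name, and nginx resolves a relative root like "../Users/..." against its own prefix directory, so the value php-cgi receives depends entirely on where nginx was installed/started. Temporarily hard-coding the path isolates which half is wrong (the path below just follows the question's pattern and is hypothetical):

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;   # params only; unlike fastcgi.conf this
                                      # file does not set SCRIPT_FILENAME
            # temporary test: absolute, variable-free path
            fastcgi_param SCRIPT_FILENAME C:/Users/Stephen/Documents/Work/test.com/hosts/main/docs$fastcgi_script_name;
        }

    If PHP works with the hard-coded root, the relative-root/$no_local expansion was the culprit rather than permissions.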

  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up mysql replication by adding references to binlogs, relay logs etc. in my.cnf, restarted mysql, and it worked. I wanted to change it, so I deleted all binlog-related files including log-bin.index, removed the binlog statements from my.cnf and restarted the server; that worked. I ran set master to '', purge master logs since now(), reset slave, stop slave, stop master. Now, to set up replication again, I added the binlog statements back to the server. But then I hit this problem when restarting with sudo mysqld (the only way to see mysql's startup errors). I get this error:

        /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13)

    Because indeed, this file does not exist! (I deleted it while trying to set up a new replication system.) Hmm, if I change the config line to log-bin-index = log-bin.index, I get a different error:

        [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999)
        [ERROR] MSYQL_BIN_LOG::open failed to generate new file name.
        [ERROR] Aborting

    The first time I set up replication on this system, I didn't need to create this file. I did the same thing - added references to a previously non-existing file, and mysql created it. Same with relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. My my.cnf:

        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        user     = mysql
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        basedir  = /usr
        datadir  = /var/lib/mysql
        tmpdir   = /tmp
        skip-external-locking
        bind-address        = IP
        key_buffer          = 16M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 8
        myisam-recover      = BACKUP
        table_cache         = 64
        sort_buffer         = 64K
        net_buffer_length   = 2K
        query_cache_limit   = 1M
        query_cache_size    = 16M
        slow_query_log_file = /etc/mysql/var/mysql-slow.log
        long_query_time     = 1
        log-queries-not-using-indexes
        expire_logs_days    = 10
        max_binlog_size     = 100M
        server-id           = 3
        log-bin             = /etc/mysql/var/bin.log
        log-slave-updates
        log-bin-index       = /etc/mysql/var/log-bin.index
        log-error           = /etc/mysql/var/error.log
        relay-log           = /etc/mysql/var/relay.log
        relay-log-info-file = /etc/mysql/var/relay-log.info
        relay-log-index     = /etc/mysql/var/relay-log.index
        auto_increment_increment = 10
        auto_increment_offset    = 3
        master-host         = HOST
        master-user         = USER
        master-password     = PWD
        replicate-do-db     = DBNAME
        collation_server    = utf8_unicode_ci
        character_set_server = utf8
        skip-character-set-client-handshake

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash

        [myisamchk]
        key_buffer_size  = 16M
        sort_buffer_size = 8M

        [mysqlhotcopy]
        interactive-timeout

        !includedir /etc/mysql/conf.d/

    Update: Changing all the /etc/mysql/var/xxx paths in binlog & relay log statements to local has somehow solved the problem. I thought it was apparmor causing it at first, but when I added /etc/mysql/* rw, to apparmor's config and restarted it, it still couldn't read the full path.
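
    The AppArmor suspicion in the update fits the evidence better than it seemed: Errcode 13 is EACCES, and the rule that was tried, /etc/mysql/* rw,, matches only files directly under /etc/mysql -- AppArmor's single * does not cross a path separator, so /etc/mysql/var/log-bin.index stays denied. A sketch of rules that do cover the subdirectory (profile path assumes Ubuntu's usr.sbin.mysqld profile; older releases without the local/ include need the lines in the main profile):

        # /etc/apparmor.d/local/usr.sbin.mysqld
        /etc/mysql/var/ rw,
        /etc/mysql/var/** rwk,

        # reload the profile
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld

    If this is the cause, the denials show up in dmesg/syslog as "apparmor=DENIED ... name=/etc/mysql/var/...".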

  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance, we have one Tomcat for development and testing and one Tomcat for production: project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here's the virtual host's configuration:

        <VirtualHost *:80>
            ServerName project.demo.us.com
            CustomLog logs/project.demo.us.com/access_log combined env=!VLOG
            ErrorLog logs/project.demo.us.com/error_log
            DocumentRoot /var/www/vhosts/project.demo.us.com

            <Directory /var/www/vhosts/project.demo.us.com>
                Allow from all
                AllowOverride All
                Options -Indexes FollowSymLinks
            </Directory>

            ##########
            ##########
            ##########

            JkMount /project/* online
        </VirtualHost>

    The JkMount line defines that we use the online worker, and our workers.properties contains this:

        worker.list=..., online, ...
        worker.online.port=7703
        worker.online.host=localhost
        worker.online.type=ajp13
        worker.online.lbfactor=1

    And Tomcat's conf/server.xml contains:

        <Connector port="7703" enableLookups="false" redirectPort="8443"
                   protocol="AJP/1.3" URIEncoding="UTF-8" maxThreads="80"
                   minSpareThreads="10" maxSpareThreads="15"/>

    I'm not sure what redirectPort is, but I tried to telnet to that port and there's no one answering, so it shouldn't matter? Tomcat's webapps directory contains project.war, and the server automatically deployed it under the project directory, which contains index.jsp and hello.html. The latter is for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's "HTTP Status 404 - The requested resource () is not available." The same thing happens with hello.html, so it's not working with static content either. Apache's access_log contains:

        88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"

    I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's "503 Service Temporarily Unavailable", so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?
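
    A hedged debugging step: mod_jk can log which worker (if any) a URI matched, which is useful here because the access_log line above requests GET /demo/index.jsp -- a path that JkMount /project/* would never forward. Sketch for the vhost (directives per the mod_jk docs):

        JkLogFile  logs/mod_jk.log
        JkLogLevel debug

    With debug logging, every request that matches a JkMount produces a "match" line naming the worker, and requests that match nothing never appear, which pins down whether the 404 is coming from Apache's DocumentRoot or from inside Tomcat's context.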

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server is crashing due to high server load, and even restarting Apache or MySQL can't solve the problem. I need to reboot the server to solve it, or it crashes again due to the high load. The log system records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd         D ffffffff801538ac     0 20008   5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the solution is to update the kernel or disk driver, things I don't know how to do. At http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except for the fact that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf, like in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
                initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are working in RAID. Can this cause some problem for my server? Is there any other solution?
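
    For testing the theory before touching grub.conf: elevator=noop on the kernel line only applies after a reboot, but the current I/O scheduler can be read and switched live per disk (sda assumed; repeat for each block device the controller exposes):

        cat /sys/block/sda/queue/scheduler
        # prints e.g. "noop anticipatory deadline [cfq]" -- brackets mark the active one
        echo noop > /sys/block/sda/queue/scheduler

    Whether it cures the hangs is another question: tasks stuck for 120+ seconds in the open()/path-walk path usually mean the storage underneath stalled, so the RAID controller and disks are worth ruling out while the scheduler experiment runs.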

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from the EPEL repository together with StrongSwan 5.0.4. I set up a simple IPSec connection:

        conn l2tp
            type=transport
            keyexchange=ikev1
            rekey=no
            authby=psk
            leftsubnet=0.0.0.0/0
            rightsubnet=0.0.0.0/0
            compress=yes
            auto=add

    And here is xl2tpd.conf:

        [global]
        ipsec saref = yes

        [lns default]
        ip range = 192.168.0.2-192.168.0.250
        local ip = 192.168.0.1
        ppp debug = yes
        pppoptfile = /etc/ppp/options.xl2tpd
        length bit = yes

    Here is options.xl2tpd:

        ms-dns 8.8.4.4
        auth
        lock
        debug
        proxyarp

    There is only one client - Android 4.2. Android connects successfully:

        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default'
        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated

    In the meanwhile, the Internet works perfectly on the Android client; the VPN connection is stable and fast. However, it always happens that within 2-5 minutes after the connection is established:

        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing.
        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout)
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted

    Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, MacOS 10.8, and Windows 7; there is no disconnection issue on those OSes. Thank you!
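
    The 2-5 minute window is characteristic of a NAT mapping timing out: once the client's UDP mapping expires, xl2tpd's control packets go unanswered and "Maximum retries exceeded" follows. A hedged sketch of strongSwan's dead-peer-detection knobs, which keep the path exercised and clean up dead tunnels faster (standard ipsec.conf options; the values are starting points, not gospel):

        conn l2tp
            # ... existing options as above ...
            dpddelay=30s
            dpdtimeout=150s
            dpdaction=clear

    Since only Android misbehaves, comparing its NAT-T keepalive traffic against an iOS client in the charon logs would confirm or kill this theory.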

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working. I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working. Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: Client -> work VPN -> my OpenVPN box -> Internet, or Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server. I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't posted yet and seems to be in their moderation queue still. I tried on freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this and can find nothing that is related. Help me get iCloud stuff working again!
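
    Given that everything except iCloud works, a path-MTU black hole is a plausible suspect: Apple's push connections (TCP 5223) are long-lived and can stall silently when large packets get dropped, and double NAT plus tun encapsulation invites exactly that. A cheap experiment is to shrink the tunnel MTU on both ends (standard OpenVPN options; the values are guesses to test with, not recommendations):

        # server.conf (mirror the same values on the client)
        tun-mtu 1400
        mssfix 1360

    If Messages and Notifications come back, raise the values until they break again and settle just below that point.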

  • How to get php mail function to work on Debian “squeeze”?

    - by Neel Kamal
    I have installed Apache and PHP5 on my Debian server. First I tried it using sendmail. Here is the step-by-step procedure that I followed:

        Step 1: apt-get install sendmail
        Step 2: /etc/init.d/apache2 restart

    But this didn't work. Then I tried using external SMTP. My domain is registered on BigRock. I registered an email address there, [email protected], and it gave me the required credentials. On the server I installed sSMTP (apt-get install ssmtp) and configured the /etc/ssmtp/ssmtp.conf file. In the configuration file I added:

        [email protected]
        mailhub=smtp.fostergen.com:587
        hostname=vs3204.ams2.alvotec.de
        FromLineOverride=YES
        UseSTARTTLS=YES
        [email protected]
        AuthPass=password provided during email registration on bigrock

    (About mailhub I have a doubt: I am not sure what to use here. I tried smtp.fostergen.com:587, smtp.fostergen.com:25, mx1.mailhostbox.com:587 and mx1.mailhostbox.com:25, and I am still not sure. I used mx1.mailhostbox.com as it was the MX entry for my domain on BigRock -- see the screenshot of BigRock's email management tool. For hostname, I ran hostname -f on my server and used the result.)

    Then I edited /etc/ssmtp/revaliases, adding this as the last line:

        root:[email protected]:mx1.mailhostbox.com:587

    and edited php.ini (sendmail_path = /usr/sbin/ssmtp -t), then ran /etc/init.d/apache2 restart. But this didn't work. After this I tried eSMTP. Steps performed: apt-get install esmtp, then edited /etc/esmtprc:

        hostname=smtp.fostergen.com:587
        [email protected]
        password: password provided by bigrock
        mda="/usr/bin/procmail -d %T"

    Then I linked eSMTP to the legacy sendmail path by executing ln -s /usr/bin/esmtp /usr/bin/sendmail, edited php.ini (sendmail_path = /usr/bin/sendmail -t -i), and ran /etc/init.d/apache2 restart. But this technique also failed. I just want to send email to users through PHP's mail function. Kindly help: where am I going wrong?
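
    Before wiring PHP in, it helps to test each layer separately: the MTA binary first, then PHP's view of it (a sketch; the recipient address is hypothetical, and ssmtp's -v flag prints the SMTP dialogue):

        # 1. does ssmtp itself deliver?
        printf "Subject: ssmtp test\n\nhello\n" | ssmtp -v [email protected]

        # 2. is PHP calling the binary you configured?
        php -r 'var_dump(ini_get("sendmail_path"));'
        php -r 'var_dump(mail("[email protected]", "php test", "hello"));'

    The -v output shows exactly where the mailhub/AuthUser combination fails (wrong host, refused STARTTLS, bad credentials), which is far easier to read than a silent false out of mail().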

  • Building NanoBSD inside a jail

    - by ptomli
    I'm trying to set up a jail to enable building a NanoBSD image. It's actually a jail on top of a NanoBSD install. The problem I have is that I'm unable to mount the md device in order to do the 'build image' part. Is it simply not possible to mount an md device inside a jail, or is there some other knob I need to twiddle?

    On the host, /etc/rc.conf.local:

        jail_enable="YES"
        jail_mount_enable="YES"
        jail_list="build"
        jail_set_hostname_allow="NO"
        jail_build_hostname="build.vm"
        jail_build_ip="192.168.0.100"
        jail_build_rootdir="/mnt/zpool0/jails/build/home"
        jail_build_devfs_enable="YES"
        jail_build_devfs_ruleset="devfsrules_jail_build"

    /etc/devfs.rules:

        [devfsrules_jail_build=5]
        # nothing

    Inside the jail:

        [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# sysctl security.jail
        security.jail.param.cpuset.id: 0
        security.jail.param.host.hostid: 0
        security.jail.param.host.hostuuid: 64
        security.jail.param.host.domainname: 256
        security.jail.param.host.hostname: 256
        security.jail.param.children.max: 0
        security.jail.param.children.cur: 0
        security.jail.param.enforce_statfs: 0
        security.jail.param.securelevel: 0
        security.jail.param.path: 1024
        security.jail.param.name: 256
        security.jail.param.parent: 0
        security.jail.param.jid: 0
        security.jail.enforce_statfs: 1
        security.jail.mount_allowed: 1
        security.jail.chflags_allowed: 1
        security.jail.allow_raw_sockets: 0
        security.jail.sysvipc_allowed: 0
        security.jail.socket_unixiproute_only: 1
        security.jail.set_hostname_allowed: 0
        security.jail.jail_max_af_ips: 255
        security.jail.jailed: 1

        [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mdconfig -l
        md2 md0 md1

    md0 and md1 are the ramdisks of the host. bsdlabel looks sensible:

        [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# bsdlabel /dev/md2s1
        # /dev/md2s1:
        8 partitions:
        #        size   offset    fstype   [fsize bsize bps/cpg]
          a:  1012016       16    4.2BSD        0     0     0
          c:  1012032        0    unused        0     0     # "raw" part, don't edit

    newfs runs ok:

        [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# newfs -U /dev/md2s1a
        /dev/md2s1a: 494.1MB (1012016 sectors) block size 16384, fragment size 2048
                using 4 cylinder groups of 123.55MB, 7907 blks, 15872 inodes.
                with soft updates
        super-block backups (for fsck -b #) at:
         160, 253184, 506208, 759232

    mount fails:

        [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mount /dev/md2s1a _.mnt/
        mount: /dev/md2s1a : Operation not permitted

    UPDATE: One of my colleagues pointed out:

        There are some file systems types that can't be securely mounted within a jail
        no matter what, like UFS, MSDOFS, EXTFS, XFS, REISERFS, NTFS, etc. because the
        user mounting it has access to raw storage and can corrupt it in a way that it
        will panic entire system.

    From http://www.mail-archive.com/[email protected]/msg160389.html. So it seems that the standard nanobsd.sh won't run inside a jail while it uses the md device to build the image. One potential solution I'll try is to chroot from the host into the build jail, rather than jexec a shell.
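
    A sketch of that chroot route, for reference: a chroot only changes the filesystem root, so mdconfig/newfs/mount all run with full host privileges and the jail's mount restriction never applies (paths taken from the jail definition above; the NanoBSD invocation and config name are assumptions):

        # on the host, not inside the jail
        chroot /mnt/zpool0/jails/build/home /bin/sh
        cd /usr/src/tools/tools/nanobsd
        sh nanobsd.sh -c PROLIANT_MICROSERVER.conf

    The one thing to verify first is that devfs is mounted inside the chroot tree, since nanobsd.sh needs the md device nodes to appear under /dev.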

  • How to make Horde connect to mysql with UTF-8 character set?

    - by jkj
    How do I tell Horde 3.3.11 to use UTF-8 for its MySQL connection? The $conf['sql']['charset'] setting only tells Horde what is expected from the database. Horde uses MDB2 to connect to MySQL. Is there a way to force MDB2, or MySQL's character_set_client, from php.ini? So far I found two workarounds.

    Force MySQL to ignore the character set requested by the client:

        [mysqld]
        skip-character-set-client-handshake=1
        default-character-set=utf8

    Force MySQL to run SET NAMES utf8 on every connection:

        [mysqld]
        init-connect='SET NAMES utf8'

    Both have drawbacks on a multi-user MySQL server: the first disables converting character sets altogether, and the second forces every connection to produce UTF-8.

    [EDIT] Found the problem. The 'charset' parameter was unset at the last minute before sending to the SQL backend. This is probably due to MySQL not being able to digest "utf-8", only "utf8", so a MySQL-specific mapping is required to make it work. I just worked around it by translating utf-8 -> utf8. Won't work with any other databases with this patch, though.

        --- lib/Horde/Share/sql.php.orig        2011-07-04 17:09:33.349334890 +0300
        +++ lib/Horde/Share/sql.php     2011-07-04 17:11:06.238636462 +0300
        @@ -753,7 +753,13 @@
                 /* Connect to the sql server using the supplied parameters. */
                 require_once 'MDB2.php';
                 $params = $this->_params;
        -        unset($params['charset']);
        +
        +        if ($params['charset'] == 'utf-8') {
        +            $params['charset'] = 'utf8';
        +        } else {
        +            unset($params['charset']);
        +        }
        +
                 $this->_write_db = &MDB2::factory($params);
                 if (is_a($this->_write_db, 'PEAR_Error')) {
                     Horde::fatal($this->_write_db, __FILE__, __LINE__);
        @@ -792,7 +798,13 @@
                 /* Check if we need to set up the read DB connection seperately. */
                 if (!empty($this->_params['splitread'])) {
                     $params = array_merge($params, $this->_params['read']);
        -            unset($params['charset']);
        +
        +            if ($params['charset'] == 'utf-8') {
        +                $params['charset'] = 'utf8';
        +            } else {
        +                unset($params['charset']);
        +            }
        +
                     $this->_db = &MDB2::singleton($params);
                     if (is_a($this->_db, 'PEAR_Error')) {
                         Horde::fatal($this->_db, __FILE__, __LINE__);
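
    To confirm what a patched connection actually negotiates (or what either mysqld workaround produces), this check run over Horde's own connection settles it:

        SHOW VARIABLES LIKE 'character_set_%';
        -- character_set_client, character_set_connection and character_set_results
        -- should all read utf8 once the handshake carries the right charset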

  • Weird DNS bug - external server resolves to internal IP

    - by emilecantin
    I have a server that is hosted by my university. I have root access, but no control over network setup, firewall, etc. This server's DNS resolves to an internal IP here on campus (10.x.x.x), and an external IP outside campus. I also have a few servers hosted at Amazon, and they mostly work well. However, one of them started to resolve the university server by its internal IP address. This causes problems, as 10.x.x.x on Amazon EC2 is someone else. I have connected to the Amazon server with SSH agent forwarding a few times in the past, to access a Git repository on the university server. Any idea what could cause this?

    EDIT: Here's my /etc/resolv.conf:

        # Generated by dhcpcd for interface eth0
        search ec2.internal
        nameserver 172.16.0.23

    Here's the output of dig myserver.myuniversity.ca.:

        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34470
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca.      IN      A

        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 537586 IN     A       10.43.x.x

        ;; Query time: 2 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:07:21 2012
        ;; MSG SIZE  rcvd: 60

    Here's the expected output (on another Amazon server):

        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8045
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca.      IN      A

        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 601733 IN     A       x.x.239.1

        ;; Query time: 1 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:09:36 2012
        ;; MSG SIZE  rcvd: 60
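
    Both instances ask the same resolver (172.16.0.23) yet get different answers, each with a mid-countdown TTL, which points at a cached split-horizon record: at some point the resolver (or a forwarder behind it) picked up the internal-view answer and will keep serving it until the TTL runs out. Comparing against the authoritative servers isolates it (a diagnostic sketch; the NS name is hypothetical):

        # who is authoritative, and what do they say to the outside world?
        dig +short NS myuniversity.ca
        dig +short myserver.myuniversity.ca @ns1.myuniversity.ca

    If the authoritative answer is the external IP while the EC2 resolver keeps returning 10.43.x.x, the stale cache is confirmed and the record simply has to age out (or the resolver's operator flushes it).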

  • OpenWRT based gateway with dnsmasq and internal server with bind

    - by Peter
    I have a router based on OpenWRT, which has dnsmasq 2.59. Inside my local area network I have an NS server running BIND. This server has internal and external views for a couple of my domains. My router forwards port 53 TCP and UDP from the outside IP (router WAN) to this server. For external clients everything works fine. In order to organize the internal view, I decided to add exceptions to /etc/dnsmasq.conf:

        server=/mydomain1.com/192.168.1.1
        server=/mydomain2.com/192.168.1.1
        server=/mydomain3.com/192.168.1.1

    (192.168.1.1 is the IP address of the NS server.) According to the dnsmasq man page:

        More specific domains take precendence over less specific domains, so:
        --server=/google.com/1.2.3.4 --server=/www.google.com/2.3.4.5
        will send queries for *.google.com to 1.2.3.4, except *www.google.com,
        which will go to 2.3.4.5

    each domain name with all its sub-domains is supposed to be forwarded to my NS server. Everything works (SOA, NS, MX, CNAME, TXT, SRV etc.) except for the A record:

        # nslookup -type=a mydomain1.com
        Server:         192.168.1.100
        Address:        192.168.1.100#53

        *** Can't find mydomain1.com: No answer

    (192.168.1.100 is the IP address of my router, running dnsmasq.) However, I can get the answer for the TXT record query:

        # nslookup -type=txt mydomain1.com
        Server:         192.168.1.100
        Address:        192.168.1.100#53

        mydomain1.com   text = "v=spf1 include:mydomain1.com -all"

    When I just specify the local IP of my NS server (direct access to the server without using dnsmasq), the results are:

        # nslookup -type=a mydomain1.com 192.168.1.1
        Server:         192.168.1.1
        Address:        192.168.1.1#53

        Name:   mydomain1.com
        Address: 192.168.1.1

    There is a similar situation with the MX record:

        C:\>nslookup -type=mx mydomain1.com
        Server:  router.lan
        Address:  192.168.1.100

        mydomain1.com   MX preference = 10, mail exchanger = mail.mydomain1.com
        mydomain1.com   nameserver = ns.mydomain1.com
        mail.mydomain1.com      internet address = 192.168.1.1
        ns.mydomain1.com        internet address = 192.168.1.1

        C:\>nslookup -type=a mail.mydomain1.com
        Server:  router.lan
        Address:  192.168.1.100

        *** No address (A) records available for mail.mydomain1.com

    This is a dig result:

        # dig +nocmd mydomain1.com any +multiline +noall +answer
        mydomain1.com.          86400 IN SOA ns.mydomain1.com. hostmaster.mydomain1.com. (
                                        121204007  ; serial
                                        28800      ; refresh (8 hours)
                                        7200       ; retry (2 hours)
                                        604800     ; expire (1 week)
                                        3600       ; minimum (1 hour)
                                        )
        mydomain1.com.          86400 IN NS ns.mydomain1.com.
        mydomain1.com.          86400 IN A 192.168.1.1
        mydomain1.com.          604800 IN MX 10 mail.mydomain1.com.
        mydomain1.com.          3600 IN TXT "v=spf1 include:mydomain1.com -all"

    When I try to ping:

        # ping mydomain1.com
        ping: cannot resolve mydomain1.com: Unknown host

    Is it a bug in dnsmasq 2.59? How to manage this problem?
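
    One dnsmasq behaviour matches these symptoms exactly: with rebind protection enabled (stop-dns-rebind, which OpenWrt switches on by default), dnsmasq silently discards upstream answers whose A records fall in RFC1918 space -- which would explain why SOA, TXT, MX and friends pass through while the A record for 192.168.1.1 vanishes. A sketch of the whitelist:

        # /etc/dnsmasq.conf
        rebind-domain-ok=/mydomain1.com/mydomain2.com/mydomain3.com/

    On OpenWrt the same setting lives in /etc/config/dhcp as list rebind_domain 'mydomain1.com' under the dnsmasq section. If this is the cause, dnsmasq also logs "possible DNS-rebind attack detected" when it drops an answer.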

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default, but instead I get a 404 error when I try to access http://example.com/:

        The page you were looking for doesn't exist.
        You may have mistyped the address or the page may have moved.

    This has something to do with nginx and unicorn, which I am using to power Rails 3. When I take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file:

        upstream unicorn {
            server unix:/tmp/.sock fail_timeout=0;
        }

        server {
            server_name example.com;
            root /www/example.com/current/public;
            index index.html;
            keepalive_timeout 5;

            location / {
                try_files $uri @unicorn;
            }

            location @unicorn {
                proxy_pass http://unicorn;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    My config/routes.rb is pretty much empty:

        Advertise::Application.routes.draw do |map|
            resources :users
        end

    The index.html file is located in public/index.html, and it loads fine if I request it directly: http://example.com/index.html. To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems. I have a hard time understanding why this occurs, because nginx should be trying to load that file on its own by default.

    Here is the error stack from production.log:

        Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700
          Processing by HomeController#index as HTML
        Completed   in 1ms

        ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"):
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml'
          etc...

    The nginx error log for this virtualhost comes up empty:

        2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection

    My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
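
    The try_files line is the likely explanation: for "/" nginx tests only the literal $uri and then jumps straight to @unicorn, and the index directive is never consulted inside that match, so Rails gets the request and 404s for lack of a root route. A minimal sketch of the common fix:

        location / {
            try_files $uri $uri/index.html @unicorn;
        }

    With that, "/" resolves to public/index.html before the request ever reaches unicorn, while everything else still falls through to Rails.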

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have got a question for someone that knows BSD a bit better than me, in regards to my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD based) file server. It runs the latest FreeBSD release, which is modified to support several protocols for file transfers and more. Every machine that is behind my Smoothwall (Linux based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a GB HP unmanaged switch. I host a large WISP here and have an OC-3 connection here at home/work and have no issues with downloading/uploading from/to the 'net'. My problem is with throughput. When I try to transfer large files...really any for that matter...between any of the machines to/from the FreeNAS server via FTP, the max throughput I can achieve, say between a Win 7 or a Linux box, is ~65Mbit/sec. All machines are running Intel Pro 1000 GB NICs and all cable is CAT6. Each is set to 'auto negotiation' and each shows 1500 MTU Full Duplex @1GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a server for a LAN. I might be wrong and just wanted to find out from someone that knows BSD better than I do if indeed that is OK, or if something is out of tune, or what. Are there other ways I would find better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70MB/sec) is too low for what it could be. I have thought about adding another NIC in the FreeNAS box as well as the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it or not. I don't know if doing that would bypass the HP GB switch and allow for a machine-to-machine transfer anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
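
    Before tuning sysctls further, it's worth separating raw TCP throughput from FTP-plus-disk behaviour; iperf does that cleanly (available in FreeBSD ports and most Linux distros; the host name is hypothetical):

        # on the FreeNAS box
        iperf -s
        # on a client
        iperf -c nas.local -t 30 -i 5

    A clean gigabit path should report roughly 900+ Mbit/s here. If iperf hits wire speed while FTP sits far below it, the bottleneck is the disks or ProFTPD rather than the TCP stack these sysctls tune; if iperf is also slow, the network path itself is the place to dig.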

  • Volume group disappeared, LVs still available

    - by Ben
    I've run into an issue with my KVM host, which runs VMs on an LVM volume. As of last night the logical volumes are no longer seen as such (I can't create snapshots of them, even though I have been for months now). Running any scans all result in nothing being found:

        [root@apollo ~]# pvscan
          No matching physical volumes found

        [root@apollo ~]# vgscan
          Reading all physical volumes.  This may take a while...
          No volume groups found

        [root@apollo ~]# lvscan
          No volume groups found

    If I try restoring the VG conf backup from /etc/lvm/backup/vg0 I get the following error:

        [root@apollo ~]# vgcfgrestore -f /etc/lvm/backup/vg0 vg0
          Couldn't find device with uuid 20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt.
          Cannot restore Volume Group vg0 with 1 PVs marked as missing.
          Restore failed.

    /etc/lvm/backup/vg0 has the following for the physical volume:

        physical_volumes {
            pv0 {
                id = "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt"
                device = "/dev/sda5"    # Hint only
                status = ["ALLOCATABLE"]
                flags = []
                dev_size = 4292870143   # 1.99902 Terabytes
                pe_start = 384
                pe_count = 524031       # 1.99902 Terabytes
            }
        }

    fdisk -l /dev/sda shows the following:

        [root@apollo ~]# fdisk -l /dev/sda

        Disk /dev/sda: 6000.1 GB, 6000069312512 bytes
        64 heads, 32 sectors/track, 5722112 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000188b7

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               2       32768    33553408   82  Linux swap / Solaris
        /dev/sda2           32769       33280      524288   83  Linux
        /dev/sda3           33281     1081856  1073741824   83  Linux
        /dev/sda4         1081857     3177984  2146435072   85  Linux extended
        /dev/sda5         1081857     3177984  2146435071+  8e  Linux LVM

    The server is running a 4-disk HW RAID10, which seems perfectly healthy according to megacli and smartd. The only odd message in /var/log/messages is the following, which shows up every couple of hours:

        Jun 10 09:41:57 apollo udevd[527]: failed to create queue file: No space left on device

    Output of df -h:

        [root@apollo ~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3            1016G  119G  847G  13% /
        /dev/sda2             508M   67M  416M  14% /boot

    Does anyone have any ideas what to do next? The VMs are all running fine at the moment, apart from not being able to snapshot them.

    Updated with extra info: It's not a lack of inodes:

        [root@apollo ~]# df -i
        Filesystem            Inodes   IUsed    IFree IUse% Mounted on
        /dev/sda3           67108864   48066 67060798    1% /
        /dev/sda2              32768      47    32721    1% /boot

    pvs, vgs & lvs either output nothing or "No volume groups found".
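
    For the record, the documented LVM recovery for exactly this "Couldn't find device with uuid" state is to rewrite the PV label using the UUID from the metadata backup, then restore the VG. It only touches the LVM label/metadata area -- consistent with the LVs still running in the kernel while the scan tools see nothing -- but it is invasive, so imaging the start of /dev/sda5 with dd first is prudent:

        pvcreate --uuid "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt" \
                 --restorefile /etc/lvm/backup/vg0 /dev/sda5
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        vgchange -ay vg0

    (UUID and paths are taken verbatim from the outputs above.)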

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof):

        server {
            listen 3000;
            server_name .example.com;
            root /Users/website/public;
            passenger_enabled on;
            rails_env development;
        }

        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;

            ssl on;
            #ssl_verify_client on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            #ssl_client_certificate /Users/website/ssl/CA.crt;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;

            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-SSL-Subject $ssl_client_s_dn;
            #proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }

    and Rails code in the controller like this:

        request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" }

    other headers appear, but not those set in nginx, e.g.:

        Header rack.multithread Val false
        Header REQUEST_URI Val /login/new
        Header REMOTE_PORT Val 64021
        Header rack.multiprocess Val true
        Header PASSENGER_USE_GLOBAL_QUEUE Val false
        Header PASSENGER_APP_TYPE Val rails
        Header SCGI Val 1
        Header SERVER_PORT Val 3443
        Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Header rack.request.query_hash Val
        Header DOCUMENT_ROOT Val /Users/website/public

    I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e.,

        headers, input = parse_request(client)
        if headers
          if headers[REQUEST_METHOD] == PING
            process_ping(headers, input, client)
          else
            headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" }
            process_request(headers, input, client)
          end
        end

    only to find that the supposedly added headers do not exist there either:

        abstract_request_handler: HTTP_KEEP_ALIVE = 300
        abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5
        abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2
        abstract_request_handler: CONTENT_LENGTH = 0
        abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad"
        abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5
        abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1
        abstract_request_handler: HTTPS = on
        abstract_request_handler: REMOTE_ADDR = 127.0.0.1
        abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61
        abstract_request_handler: SERVER_ADDR = 127.0.0.1
        abstract_request_handler: SCRIPT_NAME =
        abstract_request_handler: PASSENGER_ENVIRONMENT = development
        abstract_request_handler: REMOTE_PORT = 64021
        abstract_request_handler: REQUEST_URI = /login/new
        abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7
        abstract_request_handler: SERVER_PORT = 3443
        abstract_request_handler: SCGI = 1
        abstract_request_handler: PASSENGER_APP_TYPE = rails
        abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false

    I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
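
    One likely explanation: with passenger_enabled on there is no proxy_pass, so nginx never consults the proxy_set_header directives at all -- Passenger's module hands the request to the application itself. The Passenger/nginx module has its own directive for injecting request params in the 2.x series (a sketch; directive name per the Passenger 2.x nginx docs):

        server {
            listen 3443;
            passenger_enabled on;
            # ... ssl settings as above ...
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
            passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        }

    Params injected this way should then show up in the abstract_request_handler dump alongside HTTPS = on.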

    Read the article

  • pptpd configuration

    - by Ian R.
    I would like a little help configuring pptpd so I can use my server as a VPN server. I have 10 IPs on it and I travel a lot, so that would really help me and my partners. I managed to install everything needed, but my VPN client fails to connect for a reason I cannot understand. I know there are two files in pptpd that you're supposed to edit, so I will post both of them here.

    /etc/ppp/pptpd-options:

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        proxyarp
        nodefaultroute
        lock
        nobsdcomp

    /etc/pptpd.conf:

        option /etc/ppp/pptpd-options
        logwtmp
        localip xx.158.177.231
        remoteip xx.158.177.103,xx.158.177.116,xx.158.177.121,xx.158.177.124,xx.158.177.125,xx.158.177.131,xx.158.177.134,xx.158.177.139,xx.158.177.142,xx.158.177.145

    And the network interfaces (ifconfig output):

        eth0      Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.231  Bcast:xx.158.177.255  Mask:255.255.254.0
                  inet6 addr: xx80::216:3eff:fe51:31ba/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:56352 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3xx15 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4884030 (4.8 MB)  TX bytes:6780974 (6.7 MB)
                  Interrupt:16

        eth0:1    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.103  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:2    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.116  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:3    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.121  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:4    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.124  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:5    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.125  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:6    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.131  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:7    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.134  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:8    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.139  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:9    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.142  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:10   Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.145  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:3 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:286 (286.0 B)  TX bytes:286 (286.0 B)
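
    Nothing in the two files above looks fatal by itself, so the usual suspects are worth ruling out: a missing /etc/ppp/chap-secrets entry (the server field there must match the name pptpd line above), TCP port 1723 or GRE (IP protocol 47) being filtered, and IP forwarding being disabled. A checklist sketch — the username and password are placeholders, not taken from the question:

        # /etc/ppp/chap-secrets -- one line per VPN user;
        # "pptpd" must match the "name" option in pptpd-options
        # client    server    secret            IP addresses
        ian         pptpd     s3cretpassw0rd    *

        # Let PPTP control traffic and GRE through the firewall
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p gre -j ACCEPT

        # Let connected clients route past the server
        sysctl -w net.ipv4.ip_forward=1

    Since the remoteip addresses are public IPs on the server's own subnet and proxyarp is enabled, NAT shouldn't be needed. Watching the server's syslog during a connection attempt usually pinpoints whether the failure happens at the GRE stage or at authentication.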

    Read the article

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, and trying to set up a repository using reprepro. I'd also like everything in that repository to be pinned so it is preferred over anything else, even if its packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and believe it has something to do with the repository side of things rather than the apt configuration on the client.

    I've taken the following steps to set up my repository:

    1. Installed a web server (my personal choice here is Cherokee),
    2. Created the directory /var/www/apt/,
    3. Created the file conf/distributions, like so:

           Origin: Shteef
           Label: Shteef
           Suite: lucid
           Version: 10.04
           Codename: lucid
           Architectures: i386 amd64 source
           Components: main
           Description: My personal repository

    4. Ran reprepro export from the /var/www/apt/ directory.

    Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list, and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, now I want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I suspect this is why I can't pin on release o=Shteef in my preferences file. But I can't find any noticeable difference between my repository's Release files and Ubuntu's that would cause this. Is there a step I've missed or a mistake I've made in setting up my repository?
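
    A pin on release o=Shteef can only match if apt has actually stored release metadata for the repository, which it takes from the Release file under dists/lucid/. One way to narrow this down, assuming default paths on both sides (the file names below are what apt and reprepro normally produce, not taken from the question):

        # On the repository host: the exported Release should carry the Origin
        cat /var/www/apt/dists/lucid/Release
        # expect "Origin: Shteef" among the fields

        # On the client, after apt-get update: apt caches fetched metadata
        # under /var/lib/apt/lists; if no ..._Release file shows up here for
        # archive.lan, apt has no release fields to match the pin against
        ls /var/lib/apt/lists/ | grep archive.lan
        # archive.lan_apt_dists_lucid_Release
        # archive.lan_apt_dists_lucid_main_binary-amd64_Packages

    If the Release file is present on both ends but the pin still doesn't apply, re-running reprepro export after any change to conf/distributions, and double-checking that the client's sources.list line uses the same codename (deb http://archive.lan/apt/ lucid main), would be the next things to rule out.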

    Read the article

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load. Looking at the Apache documentation, I see a dizzying array of configuration options involving various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea what I'm supposed to change, but at the moment I'm poking around in the dark, and that's never a comfortable feeling. So, perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."

    Periodically this file, call it "stuff.xml", is updated and a new version is copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works: whenever I request the file, I get the expected result.

    But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatch> section to my httpd.conf as recommended in countless examples I've seen online, but that didn't change the behavior when I made a conditional GET request: I always get a status 200 with the entire document. So how the heck do I implement this? That'll cut down on needless transfers.

    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That'll cut down on per-access data transfer, and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
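
    For a plain static file, Apache already emits Last-Modified and ETag headers and answers If-Modified-Since / If-None-Match requests with 304 on its own, with no extra modules; a 200 with the full body usually means the client never sent the validator (curl, for instance, doesn't unless told to). That leaves only compression and a freshness window to configure. A minimal sketch, assuming mod_deflate and mod_expires are enabled and using the file name from the question:

        # In httpd.conf or the relevant <VirtualHost>
        <FilesMatch "^stuff\.xml$">
            # Compress the XML on the way out; typically 50%+ for text
            SetOutputFilter DEFLATE

            # Advertise a 10-minute freshness window to match the update
            # interval; mod_expires derives Cache-Control: max-age from this
            ExpiresActive On
            ExpiresDefault "access plus 10 minutes"
        </FilesMatch>

    A quick way to verify the conditional GET path from the command line:

        curl -sI http://example.com/stuff.xml | grep Last-Modified
        # resend with that value; the response status should be 304 Not Modified
        curl -sI -H 'If-Modified-Since: <value from above>' http://example.com/stuff.xml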

    Read the article
