Search Results

Search found 5559 results on 223 pages for 'httpd conf'.


  • How to completely disable IPv6 for loopback interface on RHEL 5.6

    - by Marc D
    I've done lots of research on how to disable IPv6 on RedHat Linux, and I have it almost completely disabled. However, the loopback interface is still getting an inet6 loopback address (::1/128), and I can't find where IPv6 is still enabled for loopback.

    To disable IPv6 I added the following settings to /etc/sysctl.conf:

        net.ipv6.conf.default.disable_ipv6=1
        net.ipv6.conf.all.disable_ipv6=1

    and also added the following line to /etc/sysconfig/network:

        NETWORKING_IPV6=no

    After rebooting, the inet6 address is gone from my physical interface (eth0) but is still there for lo:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:50:56:xx:xx:xx brd ff:ff:ff:ff:ff:ff
            inet 10.x.x.x/21 brd 10.x.x.x scope global eth0

    If I manually remove the IPv6 address from loopback and then bounce the interface, it comes back:

        # ip addr del ::1/128 dev lo
        # ip addr show lo
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
        # ip link set lo down
        # ip link set lo up
        # ip addr show lo
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I believe IPv6 should be disabled at the kernel level, as confirmed by sysctl:

        # sysctl net.ipv6.conf.lo.disable_ipv6
        net.ipv6.conf.lo.disable_ipv6 = 1

    Any ideas on what else would cause the loopback interface to get an IPv6 address?
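
    One hedged possibility, drawn from common RHEL 5 practice rather than anything confirmed in this question: on older RHEL 5 kernels the disable_ipv6 sysctls did not fully cover loopback, and ::1 is assigned as soon as the ipv6 kernel module loads. Preventing the module from loading at all is a sketch worth trying:

        # /etc/modprobe.conf (RHEL 5) -- stop the ipv6 module from ever loading
        alias net-pf-10 off
        options ipv6 disable=1

        # reboot, then verify the module is gone:
        #   lsmod | grep ipv6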

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so:

        server {
            listen 80;
            server_name *.example1.com *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
            }
        }

    And since we have multiple SSL sites (with different SSL certificates) I have a server{} block for each of them, like so:

        server {
            listen 443 ssl;
            server_name *.example1.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8443;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

        server {
            listen 443 ssl;
            server_name *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8445;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the Nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache?

    Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in each server block?
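
    A hedged sketch of the usual pattern (assuming the ELB can reach nginx over plain HTTP, and with the question's placeholder hostnames): terminate TLS once at the load balancer, let nginx and Apache speak plain HTTP, and preserve the original scheme in a header so the application can still detect HTTPS. The static-assets include is a hypothetical file that would hold the duplicated location block:

        # nginx listens on plain HTTP behind the ELB; TLS terminates at the ELB
        server {
            listen 80;
            server_name *.example1.com *.example2.com;

            # shared static-asset rules pulled into one file to avoid duplication
            include /etc/nginx/conf.d/static-assets.conf;   # hypothetical include

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
                # the ELB sets X-Forwarded-Proto; pass it through untouched
                proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            }
        }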

  • Nginx & Apache: cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now, and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my configuration files.

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml
                   application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2
                         keys_zone=main:10m max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time.
        # We cache normal pages for 10 minutes.
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files in location / of the mydomain config file are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance
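
    A hedged sketch of the pattern that usually resolves this (an assumption about the cause, not a confirmed fix): try_files needs a final fallback it can hand off to the proxy, and pointing that fallback at a named location avoids the internal redirect cycle on /index.php:

        location / {
            # serve the file if it exists, otherwise hand the request to Apache
            try_files $uri $uri/ @backend;
        }

        location @backend {
            proxy_cache main;
            proxy_pass http://backend;
        }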

  • Setting environment variables in OS X

    - by Percival Ulysses
    Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel that it is valuable nonetheless.

    The problem of setting up environment variables such that they are available for GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me because it was not reliable, and bash-style globbing wasn't available. Another solution is the use of login hooks with a suitable shell script, but these are deprecated. The Apple-approved way to get the functionality provided by login hooks is the use of Launch Agents. I provided a launch agent that is located in /Library/LaunchAgents/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>user.conf.launchd</string>
            <key>Program</key>
            <string>/Users/Shared/conflaunchd.sh</string>
            <key>ProgramArguments</key>
            <array>
                <string>~/.conf.launchd</string>
            </array>
            <key>EnableGlobbing</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
            <key>LimitLoadToSessionType</key>
            <array>
                <string>Aqua</string>
                <string>StandardIO</string>
            </array>
        </dict>
        </plist>

    The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl:

        #! /bin/bash
        #filename="$1"
        filename="$HOME/.conf.launchd"
        if [ ! -r "$filename" ]; then
            exit
        fi
        eval $(/usr/libexec/path_helper -s)
        while read line; do
            # skip lines that only contain whitespace or a comment
            if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then
                continue;
            fi
            eval launchctl $line
        done <"$filename"
        exit 0

    Notice the call of path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like this:

        setenv PATH ~/Applications:"${PATH}"
        setenv TEXINPUTS .:~/Documents/texmf//:
        setenv BIBINPUTS .:~/Documents/texmf/bibtex//:
        setenv BSTINPUTS .:~/Documents/texmf/bibtex//:
        # Locale
        setenv LANG en_US.UTF-8

    These are launchctl commands; see its manpage for further information. This works fine for me (I should mention that I'm still a Snow Leopard guy); GUI applications such as texstudio can see my local texmf tree.

    Things that can be improved:

    - The shell script has a #filename="$1" in it. This is not accidental, as the file name should be fed to the script by the launch agent as an argument, but that doesn't work. It is possible to put the script in the launch agent itself.
    - I am not sure how secure this solution is, as it uses eval with user-provided strings.

    It should be mentioned that Apple intended a somewhat similar approach by putting stuff in ~/launchd.conf, but it is currently unsupported as of this date and OS (see the manpage of launchd.conf). I guess that things like globbing would not work as they do in this proposal.

    Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question; I still hope it is useful.
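
    For completeness, a usage sketch (assuming the plist above is saved as /Library/LaunchAgents/user.conf.launchd.plist): the agent can be exercised without logging out and back in:

        # load the agent for the current session and run it immediately
        launchctl load /Library/LaunchAgents/user.conf.launchd.plist

        # verify the agent is registered and spot-check a variable it set
        launchctl list | grep user.conf.launchd
        launchctl getenv PATH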

  • How to force 640*480@60Hz screen resolution on xubuntu 12.04

    - by c2h2
    It seems xubuntu is unable to correctly set the resolution to 640*480@60Hz in its Display settings, and I am unable to correctly drive my super-small 6.4 inch Mitsubishi VGA panel via a VGA cable. I have tried to hack both the X11 conf (/etc/X11/Xorg.conf) and the xfce4 conf, but all the documentation I can find is outdated, and the conf files have moved to other locations. Can someone give me a hand? I'll mark the correct answer for other people to use. Thanks!

    EDIT: The board is an Intel Atom D2700; the GPU is an SGX545. I tried:

        xrandr --output default --mode 640x480

    It seems to work fine, but the refresh rate is 75Hz, and the screen only supports 60Hz. So I used:

        xrandr --output default --mode 640x480 --rate 60

    but it gives this error:

        xrandr: Failed to get size of gamma for output default

    Can anyone point me in any direction?
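
    A hedged sketch of the usual xrandr route (assuming the output really is named "default"; xrandr -q will confirm): generate a 60 Hz modeline with cvt, register it, and attach it to the output:

        # generate a modeline for 640x480 at 60 Hz
        cvt 640 480 60
        # prints something like:
        #   Modeline "640x480_60.00"  23.75  640 664 720 800  480 483 487 500 -hsync +vsync

        # register the mode and switch to it
        xrandr --newmode "640x480_60.00" 23.75 640 664 720 800 480 483 487 500 -hsync +vsync
        xrandr --addmode default 640x480_60.00
        xrandr --output default --mode 640x480_60.00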

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My Nginx conf has three different redirect loops; I haven't been able to get any of the three to work right. The three problem areas are:

    - Redirecting the memcache directory to SSL
    - Redirecting the accounts directory to SSL
    - Redirecting SSL to www if non-www

    nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            error_log /var/log/nginx/error.log notice;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            proxy_set_header X-Url-Scheme $scheme;
            #gzip on;
            rewrite_log on;
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/default.conf:

        server {
            listen 80;
            server_name <redacted>.net;
            rewrite ^(.*) http://www.<redacted>.net$1;
        }

        server {
            listen 80;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            #charset koi8-r;
            access_log /var/log/nginx/host.access.log main;
            root /var/www/html;
            index index.php index.html index.htm;

            location =/memcache {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }

            location /accounts {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }
        }

    conf.d/ssl.conf:

        # HTTPS server
        #
        server {
            listen 443;
            server_name <redacted>.net;
            rewrite ^(.*) https://www.<redacted>.net$1;
        }

        server {
            listen 443 default_server ssl;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            proxy_set_header X-Forwarded_Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            proxy_set_header X-Forwarded-Ssl on;
            set $https_enabled on;
            ssl_certificate <redacted>.crt;
            ssl_certificate_key <redacted>.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;
            root /var/www/html;
            index index.php index.html index.htm;

            location /memcache {
                auth_basic "Restricted";
                auth_basic_user_file $document_root/memcache/.htpasswd;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
        }
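
    One hedged observation, offered as an assumption rather than a verified diagnosis: rewrites of the form `rewrite ^/(.*)$ https://$server_name$request_uri? permanent;` re-append the full request URI to a pattern that already matched the whole path, a classic loop ingredient. A sketch of the more direct idiom, using the question's placeholders:

        # in the port-80 server for www.<redacted>.net
        location = /memcache {
            return 301 https://$server_name$request_uri;
        }

        location /accounts {
            return 301 https://$server_name$request_uri;
        }

        # canonical non-www -> www over SSL, as its own server block
        server {
            listen 443 ssl;
            server_name <redacted>.net;
            # same ssl_certificate / ssl_certificate_key as the www block
            return 301 https://www.<redacted>.net$request_uri;
        }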

  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with BIND. If I want to resolve any domain name that is in the zone file, it works fine. However, I can't resolve anything that does not belong to the zone file. I know that the actual DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:

        options {
            directory "/var/cache/bind";

            forwarders {
                131.181.127.32;
                131.181.59.48;
            };

            dnssec-validation auto;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    I have also tried using only one IP address, and it still did not work. The content of /etc/bind/named.conf is:

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";

    So there is no problem with including the options file. Any recommendations for fixing this problem?
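
    A hedged test, based on a common cause rather than anything confirmed here: with `dnssec-validation auto;`, answers from forwarders that don't handle DNSSEC get rejected. Temporarily relaxing validation and forcing forward-only mode helps isolate that:

        options {
            directory "/var/cache/bind";
            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            forward only;            # never fall back to iterative resolution
            dnssec-validation no;    # test only -- re-enable once diagnosed
            auth-nxdomain no;
            listen-on-v6 { any; };
        };

        # then reload and query the server directly while watching the logs:
        #   rndc reload && dig @127.0.0.1 example.com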

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and installed my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    My default root directory for the Apache web server is pointed at an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
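
    A hedged debugging step (standard Apache tooling, nothing specific to this setup): ask Apache how it actually parsed the vhosts; a missing or mismatched NameVirtualHost shows up immediately:

        # dump the parsed virtual host configuration
        apachectl -S

        # healthy output has this general shape (illustrative, not from this machine):
        #   VirtualHost configuration:
        #   *:80  is a NameVirtualHost
        #         default server mysite.dev (/private/etc/apache2/extra/httpd-vhosts.conf:1)
        #         port 80 namevhost mysite.dev (...)

        # and confirm which config file the running binary actually loads:
        apachectl -V | grep SERVER_CONFIG_FILE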

  • No remote access to PostgreSQL db

    - by gattol
    I'm stuck connecting to a PostgreSQL database from a remote host. The server is accepting incoming connections on port 5432, and I've configured pg_hba.conf like this:

        local   all   all                 md5
        host    all   all   0.0.0.0/0     md5

    and postgresql.conf like this:

        listen_addresses = '*'
        port = 5432
        max_connections = 100

    I don't have any problem accessing from local, but when I try to connect via psql with something like this:

        psql -U myuser -h hostname db_name

    I get this error:

        psql: FATAL:  no pg_hba.conf entry for host "87.zz.yy.xxx", user "myuser", database "db_name", SSL off

    I also tried putting the host 87.zz.yy.xxx in the pg_hba.conf file, without success.
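
    A hedged pair of checks (generic PostgreSQL practice; nothing here is confirmed about this server): the error means the running server's pg_hba.conf has no matching line, which usually comes down to editing the wrong file or not reloading after the edit:

        -- from a local psql session: which files is the server actually using?
        SHOW hba_file;
        SHOW config_file;

        -- after editing the file that SHOW hba_file reports, reload:
        SELECT pg_reload_conf();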

  • How to configure DNS BIND to work locally on one computer?

    - by user619656
    I want to make some changes to the BIND source code. In order to test those changes, I want to be able to post queries to my local BIND server and have it use only the local zone files. I know how to make the zone files and, more or less, the named.conf file, but what should I put in /etc/resolv.conf?

    Currently resolv.conf contains the line:

        nameserver 192.168.0.1

    which I guess is my router's IP address; the queries go through the router to my ISP. I want those queries to go to the local BIND server and to look for answers in the zone files I provided. Is there a way to do this using the resolv.conf file, or should I do something else?
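
    A hedged sketch of the usual arrangement (assuming the test BIND listens on the loopback address): point resolv.conf at 127.0.0.1, or bypass it entirely with dig while testing:

        # /etc/resolv.conf -- send all lookups to the local BIND
        nameserver 127.0.0.1

        # or skip resolv.conf and query the local server explicitly:
        #   dig @127.0.0.1 myzone.example. SOA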

  • Linux automatically changes permissions on /etc/resolv.conf

    - by rikr
    On various Linux servers I see the permissions of the /etc/resolv.conf file change automatically. In the normal state:

        -r--r--r-- 1 root root 103 Jul  4 11:50 resolv.conf

    In the changed state:

        -r--r----- 1 root root 103 Jul  4 11:50 resolv.conf

    I installed auditd to monitor it, and these are the two entries between the change:

        type=PATH msg=audit(07/04/2012 12:20:02.719:303) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,644 ouid=root ogid=root rdev=00:00
        type=CWD msg=audit(07/04/2012 12:20:02.719:303) : cwd=/
        type=SYSCALL msg=audit(07/04/2012 12:20:02.719:303) : arch=x86_64 syscall=open success=yes exit=3 a0=7feeb1405dec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3445 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null)

        type=PATH msg=audit(07/04/2012 12:50:03.727:304) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,440 ouid=root ogid=root rdev=00:00
        type=CWD msg=audit(07/04/2012 12:50:03.727:304) : cwd=/
        type=SYSCALL msg=audit(07/04/2012 12:50:03.727:304) : arch=x86_64 syscall=open success=yes exit=3 a0=7f2bcf7abdec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3610 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null)

    Any ideas?
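
    A hedged refinement of the audit setup (standard auditctl usage, not specific to this box): the captured syscalls are plain open() calls by hostid, which cannot change the mode; watching specifically for writes and attribute changes should catch the process that does:

        # watch /etc/resolv.conf for writes and attribute changes, tagged for searching
        auditctl -w /etc/resolv.conf -p wa -k resolv-perms

        # later, pull every matching event, including the chmod/chown culprit
        ausearch -k resolv-perms -i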

  • Using rspec to check creation of template

    - by Brian
    I am trying to use rspec with Puppet to check the generation of a configuration file from an .erb file. However, I get the error:

        1) customizations should generate valid logstash.conf
           Failure/Error: content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
           ArgumentError:
             wrong number of arguments (0 for 1)
           # ./spec/classes/logstash_spec.rb:29:in `catalogue'
           # ./spec/classes/logstash_spec.rb:29

    And the logstash_spec.rb:

        describe "customizations" do
          let(:params) { {:template => "profiles/logstash/output_broker.erb",
                          :options => {'opt_a' => 'value_a' } } }

          it 'should generate valid logstash.conf' do
            content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
            content.should match('logstash')
          end
        end
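
    A hedged alternative (assuming a reasonably recent rspec-puppet, where the supported matchers make poking at catalogue internals unnecessary): check the rendered content directly:

        describe "customizations" do
          let(:params) do
            { :template => "profiles/logstash/output_broker.erb",
              :options  => {'opt_a' => 'value_a'} }
          end

          # rspec-puppet's built-in matcher renders the catalogue and
          # matches the file's content against a regex
          it { should contain_file('logstash.conf').with_content(/logstash/) }
        end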

  • Disable modsec2 blacklist rule for specific hostname

    - by KevinL
    I have a server running Apache2 with mod_security2. In modsec2.user.conf, there is a blacklist rule:

        ###BLACKLIST###
        SecRule REQUEST_URI "mkdir"

    I need to disable that rule for just one hostname on the server. I realize I could just remove it entirely, but I'd rather keep it on for the other sites. I realize you can use the SecRuleRemoveById directive, based on each rule's ID, but as you can see above, this rule has no ID; it's just a string. How do I disable that rule for just www.example.com? Is there something I can do in custom.conf, whitelist.conf, or exclude.conf?
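
    A hedged sketch (the directives are real mod_security 2.x, but the ID number and placement are assumptions): give the rule an explicit ID, then remove it by ID inside the one vhost:

        # modsec2.user.conf -- same rule, now with an ID it can be referenced by
        ###BLACKLIST###
        SecRule REQUEST_URI "mkdir" "id:900100"

        # inside the <VirtualHost> for www.example.com only
        <IfModule mod_security2.c>
            SecRuleRemoveById 900100
        </IfModule>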

  • BIND9 Forwarding by view

    - by Triztian
    Hi, I think this is a simple issue: I'd like to forward only for certain IPs in the LAN. For example, I have two acl lists:

        acl "office1" {
            192.168.1.15; // With internet access
        };

        acl "production" {
            192.168.1.101; // No internet access
        };

    I know that there are probably more efficient ways to restrict internet access, but at the moment this is what I'd like to try. Here's what I've tried in named.conf.local:

        // Include my acl definitions
        include "/etc/bind/acls.conf";

        view "no-internet" {
            match-clients { production; };
            include "/etc/bind/named.conf.default-zones";

            zone "localdomain.com" {
                type master;
                file "/etc/bind/db.localdomain.com";
            };

            zone "1.168.192.in-addr.arpa" {
                type master;
                file "/etc/bind/db.192.168.1";
            };
        };

        view "internet" {
            match-clients { office1; };
            include "/etc/bind/named.conf.default-zones";

            forwarders {
                201.56.59.14; // Made Up
                201.56.59.15; // Made Up
            };

            zone "localdomain.com" {
                type master;
                file "/etc/bind/db.localdomain.com";
            };

            zone "1.168.192.in-addr.arpa" {
                type master;
                file "/etc/bind/db.192.168.1";
            };
        };

    As you can see, I want localdomain.com defined for every computer in my network, and I want to forward internet access for the computers in the office but not for the ones on the production floor. I've modified my conf file; however, the IP in the "no-internet" acl is still able to resolve external domains, even though I've rebooted the computer, flushed the DNS using ipconfig /flushdns, and set my DNS server as the only one. Why is this still happening? Thanks in advance.
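
    A hedged guess at the missing piece (the BIND options are real; whether they match the intent here is an assumption): forwarding is only one path to the internet. A view with no forwarders can still resolve names iteratively as long as recursion is on; turning recursion off for the restricted view closes that path:

        view "no-internet" {
            match-clients { production; };
            recursion no;            // answer only from the authoritative zones below
            include "/etc/bind/named.conf.default-zones";

            zone "localdomain.com" {
                type master;
                file "/etc/bind/db.localdomain.com";
            };
        };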

  • Requisite of file.append

    - by Jeff Strunk
    Is it possible to make Salt require that a particular file was appended to, as opposed to the file merely existing? It seems like I can only require a file state; the source looks like it strips out any method names from a require attribute. In the example below, I only want the foo service to run if my lines have been appended to /etc/security/limits.conf.

        file.append:
          - name: /etc/security/limits.conf
          - text:
            - root hard nofile 65535
            - root soft nofile 65535

        foo:
          service.running:
            - enable: True
            - require:
              - file.append: /etc/security/limits.conf
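
    A hedged sketch of the usual fix (real Salt requisite syntax; the state ID is a made-up placeholder): requisites name the state module and the state's ID, never the function, so `file:` plus the append state's ID is what goes in the require:

        limits-nofile:                       # hypothetical state ID
          file.append:
            - name: /etc/security/limits.conf
            - text:
              - root hard nofile 65535
              - root soft nofile 65535

        foo:
          service.running:
            - enable: True
            - require:
              - file: limits-nofile          # module name + state ID, not file.append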

  • How to extend a file definition from an existing module in the node?

    - by c33s
    I use an older version of the example42 mysql module, which defines the mysql.conf file but not its content. My goal is to just include the mysql module and add a content definition in the node.

        class mysql {
            ...
            file { "mysql.conf":
                path    => "${mysql::params::configfile}",
                mode    => "${mysql::params::configfile_mode}",
                owner   => "${mysql::params::configfile_owner}",
                group   => "${mysql::params::configfile_group}",
                ensure  => present,
                require => Package["mysql"],
                notify  => Service["mysql"],
            }
            ...
        }

        node xyz {
            include mysql
            File["mysql.conf"] { content => template("mymodule/mysql.conf.erb") }
        }

    The above code produces "Only subclasses can override parameters". What is the correct way to just add a content definition to an existing file definition?
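
    A hedged sketch of the two idioms Puppet allows for this (real language features; the wrapper class name is an invention, the template name comes from the question): override from a subclass, or override with a resource collector:

        # Option 1: a subclass may override inherited resources
        class mysql::custom inherits mysql {
            File["mysql.conf"] { content => template("mymodule/mysql.conf.erb") }
        }

        node xyz {
            include mysql::custom
        }

        # Option 2: a collector override (applies globally, so use with care)
        node xyz {
            include mysql
            File <| title == "mysql.conf" |> {
                content => template("mymodule/mysql.conf.erb"),
            }
        }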

  • Can't upload project to PPA using Quickly

    - by RobinJ
    I can't get Quickly to upload my project into my PPA. I've set up my PGP key and used it to sign the code of conduct, and the PPA exists. I don't know what other useful information I can supply.

        robin@RobinJ:~/Ubuntu One/Python/gtkreddit$ quickly share --ppa robinj/gtkreddit
        Get Launchpad Settings
        Launchpad connection is ok
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        Traceback (most recent call last):
          File "/usr/share/quickly/templates/ubuntu-application/share.py", line 138, in <module>
            license.licensing()
          File "/usr/share/quickly/templates/ubuntu-application/license.py", line 284, in licensing
            {'translatable': 'yes'})
          File "/usr/share/quickly/templates/ubuntu-application/internal/quicklyutils.py", line 166, in change_xml_elem
            xml_tree.find(parent_node).insert(0, new_node)
        AttributeError: 'NoneType' object has no attribute 'insert'
        ERROR: share command failed
        Aborting

    I reported this as a bug on Launchpad, because I assume that it is a bug. If you know a quick workaround, please let me know.

    https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/1018138
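
    A hedged side note (standard GnuPG hygiene, though it likely only silences the warnings rather than fixing the traceback itself):

        # lock down the GnuPG directory and its config
        chmod 700 ~/.gnupg
        chmod 600 ~/.gnupg/gpg.conf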

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP-only (server1 and server2). The last host is HTTPS-only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior:

    - Any request for http://server3 is redirected to https://server3.
    - Any request for either https://server1 or https://server2 is redirected to http://server1 or http://server2, as appropriate.

    Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 will resolve as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable.

    So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
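
    A hedged sketch (plain Apache 2.2 directives; the hostnames follow the question's placeholders): per-vhost redirects are usually enough here, with the https-to-http rule living inside the one SSL vhost, since that is the only place TLS requests can land:

        # a port-80 vhost for server3: send everything to HTTPS
        <VirtualHost *:80>
            ServerName server3
            Redirect permanent / https://server3/
        </VirtualHost>

        # inside the *:443 vhost (the only SSL endpoint): bounce foreign hosts back to HTTP
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]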

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from Windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system, and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system, and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards.

    Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings".

    So, does this mean I can get away with only including the nVidia-specific configuration stuff and all else will get auto-configured? Or will providing a Device section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll have a screwed-up display.

    Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
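
    On the last point, a hedged pointer (a stock Xorg facility, though results vary by driver): the server can write out a skeleton config reflecting what it detects, which makes a reasonable starting point to merge the nvidia-96 sections into:

        # from a console with X stopped: probe hardware and write a skeleton config
        sudo service gdm stop
        sudo Xorg :1 -configure
        # the skeleton lands in /root/xorg.conf.new; merge it, then test with:
        sudo X :1 -config /root/xorg.conf.new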

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (Trusty), have followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html, and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering more difficult.

    While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem.

    I'm not sure where to look to start diagnosing the problem (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too:

        SpamAssassin debug facilities: info
        SA info: zoom: able to use 360/360 'body_0' compiled rules (100%)
        SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject
        SpamControl: init_pre_fork on SpamAssassin done

    I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs.

    Q: Any recommendations of what to try next?

    UPDATE (it appears that I have the right setting already):

        /etc/amavis/conf.d$ grep sa_tag_level_deflt *
        20-debian_defaults:# $sa_tag_level_deflt = 2.0;  # add spam info headers if at, or above that level
        20-debian_defaults:$sa_tag_level_deflt = -999;   # add spam info headers if at, or above that level
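
    One hedged lead (a known amavisd-new behavior, though only a guess for this setup): amavis adds X-Spam-* headers only to mail for recipients it considers local, so an empty or wrong local-domains setting silently suppresses them regardless of sa_tag_level_deflt:

        # /etc/amavis/conf.d/50-user -- declare which recipient domains count as local
        @local_domains_acl = ( ".example.com" );     # hypothetical domain
        # or, equivalently:
        # @local_domains_maps = ( [".example.com"] );

        1;  # amavis config files must end with a true value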

  • Package Manager cannot access repositories but internet is working

    - by kazman
    I am currently at a conference in another country, and my package manager cannot access repositories. My internet connection is working fine and I can ping the repositories or open them in a browser, but the package manager fails to access them. If I run sudo apt-get update, it throws:

        Something wicked happened resolving 'wwwproxy:3128' (-5 - No address associated with hostname)

    (or Ign's). This proxy corresponds to the proxy at my office back home, but I have disabled the proxy in the package manager. Scanning for the best repository doesn't work either; it doesn't manage to connect to any. I have searched for this online and have checked things about my apt.conf file. My apt.conf contains:

        Acquire::http::proxy "http://wwwproxy:3128/";
        Acquire::https::proxy "https://wwwproxy:3128/";
        Acquire::ftp::proxy "ftp://wwwproxy:3128/";
        Acquire::socks::proxy "socks://wwwproxy:3128/";

    If I remove apt.conf (or replace it with a blank file), it makes no difference. I don't see why it should, since I am connecting directly (and have set it so in the package manager's network settings). I have also tried some things with resolv.conf (changing the nameserver address to the primary and secondary DNS), to no avail; I'm not sure if this would help, I was following other advice.

    I am running 12.04. (I wrote this very quickly and wrote down everything I have tried to possibly shorten the troubleshooting process; I have very limited time between lectures and need this sorted ASAP. My apologies.)
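
    A hedged checklist (generic apt/Ubuntu locations, nothing confirmed about this machine): if removing apt.conf changes nothing, the proxy is usually being re-injected from another apt fragment, the environment, or the desktop-wide settings:

        # any apt config fragment can set a proxy
        grep -ri proxy /etc/apt/apt.conf.d/

        # so can the shell environment and sudo's preserved environment
        env | grep -i proxy
        sudo env | grep -i proxy

        # and the desktop-wide proxy settings on 12.04
        gsettings get org.gnome.system.proxy mode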

  • Error when starting the Subversion server in uberSVN

    - by dakiquang
    I'm using uberSVN on an Ubuntu server. Currently I can't check out or commit source from the Subversion server (domain.com:9880, the Subversion port). I tried to start httpd from the command line:

        sudo /opt/ubersvn/bin/httpdserverctl start

    but I get this error:

        httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        httpd (pid 895) already running

    I then tried running:

        cd /opt/ubersvn/bin
        sudo ./ubersvncontrol start

    which produces:

        Starting SysV Tomcat
        Using CATALINA_BASE:   /opt/ubersvn/tomcat
        Using CATALINA_HOME:   /opt/ubersvn/tomcat
        Using CATALINA_TMPDIR: /opt/ubersvn/tomcat/temp
        Using JRE_HOME:        /opt/ubersvn/jre
        Using CLASSPATH:       /opt/ubersvn/tomcat/bin/bootstrap.jar
        Using CATALINA_PID:    /opt/ubersvn/data/run/tomcat.pid
        Existing PID file found during start.
        Tomcat appears to still be running with PID 943. Start aborted.

    In the uberSVN web GUI, I tried starting the uberSVN server but get an error there as well. How do I start the Subversion server?
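
    A hedged next step (ordinary process hygiene; the PIDs come from the output above): both messages claim something is already running, so it's worth confirming whether PIDs 895 and 943 are alive or just stale pid files:

        # are the recorded processes actually running?
        ps -p 895,943

        # if not, clear the stale pid file and retry
        sudo rm /opt/ubersvn/data/run/tomcat.pid
        sudo /opt/ubersvn/bin/ubersvncontrol start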

  • Stopping a specific Apache instance

    - by user1435991
    I have two Apache instances set up on my server (Solaris 10):

        Instance 1: /etc/apache2
        Instance 2: /etc/apache2-instance2

    To start instance 1, I execute the following command:

        /usr/apache2/bin/apachectl -f /etc/apache2/httpd.conf

    And instance 2:

        /usr/apache2/bin/apachectl -f /etc/apache2-instance2/httpd.conf

    Both instances run perfectly; however, the problem comes when I want to stop one of them. I have not been able to find a parameter to indicate which instance I want to stop. If I execute this command:

        /usr/apache2/bin/apachectl -k stop

    it will always stop instance 1 (the default one). The only solution I could find to stop instance 2 was to do this:

        kill -TERM `cat /var/run/apache2-instance2/httpd.pid`

    Is this the only way to do it, or what is the best solution? I remember that I did something similar in Ubuntu by setting the global variable APACHE_CONFDIR before calling apachectl.
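
    A hedged suggestion (apachectl forwards its flags to httpd, so -f combines with -k): naming the instance's config at stop time, exactly as at start time, should make httpd read that config's PidFile and signal the right instance:

        # stop instance 2 by naming its config, same as at start time
        /usr/apache2/bin/apachectl -f /etc/apache2-instance2/httpd.conf -k stop

        # likewise for a graceful restart of just that instance
        /usr/apache2/bin/apachectl -f /etc/apache2-instance2/httpd.conf -k graceful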

  • CentOS 5.8 Server, manual install PHP 5.2.17

    - by Shiro
    I would like to manually install PHP 5.2.17. I managed to install httpd and MySQL, but when I wanted to install PHP 5.2.17, I could not find a proper guide. These are the steps I took on a fresh installation of CentOS 5.8 x86_64 (server & server GUI):

        yum install httpd httpd-devel
        /etc/init.d/httpd start    # OK
        /etc/init.d/httpd stop     # OK
        yum install mysql mysql-server mysql-devel
        yum remove php
        yum groupinstall "Development Tools"
        yum install libxml2-devel
        wget http://www.php.net/release/php5.2.17.zip    # client requirement: must use this version
        cd php5.2.17
        ./configure --with-apxs2=/usr/sbin/apxs --with-mysql=/usr/local

    This is the area where I get confused: I could not find /usr/sbin/apxs on my system. Another Google search on how to manually install PHP pointed to using:

        ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql

    I cannot find apxs or apache2 in either location. I'm afraid I'm making a mistake somewhere. Please help and guide me on this. I am a newbie in CentOS.
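
    A hedged way to locate apxs (stock RPM tooling; on CentOS 5 it normally ships in httpd-devel): ask the package database instead of guessing paths:

        # which file provides apxs, and where did httpd-devel put it?
        rpm -ql httpd-devel | grep apxs
        # typically /usr/sbin/apxs on CentOS 5

        # if nothing is listed, (re)install the devel package first
        yum install httpd-devel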

  • FreeBSD 8.2 + Apache 2.2 + mod_auth_pam2: unable to authenticate

    - by zneak
    I've installed Apache 2.2 and mod_auth_pam2 from ports, but I can't get local UNIX authentication to work. When I access the protected part of my local website, I do get the authentication request, and with pam_permit.so it works. However, when I change pam_permit.so to the real thing, pam_unix.so, I get this message in httpd-error.log:

        [error] PAM: user 'foo' - not authenticated: authentication error

    This is the relevant part of my Apache config, though I don't think it's the problem, as it works with pam_permit.so:

        <Location /foo>
            AuthBasicAuthoritative Off
            AuthPAM_Enabled on
            AuthPAM_FallThrough off
            AuthType Basic
            AuthName "Secret place"
            Require valid-user
        </Location>

    This is my /etc/pam.d/httpd, though I don't think it's the problem either, since it works with pam_permit.so:

        auth     required  pam_unix.so
        account  required  pam_unix.so

    So what am I missing? What does it take to have pam_unix.so work for httpd under FreeBSD?
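
    A hedged diagnosis (a well-known constraint, though unverified for this box): pam_unix.so has to read /etc/master.passwd, which only root can do, so an Apache child running as www fails exactly this way while pam_permit.so sails through. Checking that theory from a root shell versus a www shell is quick, assuming the security/pamtester port is installed:

        # as root -- should succeed:
        pamtester httpd foo authenticate

        # as the Apache user -- expected to fail if privileges are the issue:
        su -m www -c 'pamtester httpd foo authenticate'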
