Search Results

Search found 5559 results on 223 pages for 'httpd conf'.

Page 152/223 | < Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. It is FreeBSD 9.0 amd64 with two network cards, em1 (internet) and em0 (local network), a configured ipfw firewall, natd and squid (not transparent); the server acts as a gateway for access to the Internet. The problem: upload speed via squid is very low. At the moment I see the following: natd and dhcpd load the CPU while uploading through squid, and there is a lot of traffic through the loopback interface.

    ipfw show output:
      0100 655389684 36707144666 allow ip from any to any via lo0
      00200 0 0 deny ip from any to 127.0.0.0/8
      00300 0 0 deny ip from 127.0.0.0/8 to any
      00400 0 0 deny ip from any to ::1
      00500 0 0 deny ip from ::1 to any
      00600 4 292 allow ipv6-icmp from :: to ff02::/16
      00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10
      00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16
      00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1
      01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136
      01100 1615 76160 deny ip from 192.168.1.1 to any in via em1
      01200 0 0 deny ip from 199.69.99.11 to any in via em0
      01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1
      01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1
      01500 4 336 deny ip from any to 0.0.0.0/8 via em1
      01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1
      01700 0 0 deny ip from any to 192.0.2.0/24 via em1
      01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1
      01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1
      02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1
      02100 3 248 deny ip from 172.16.0.0/12 to any via em1
      02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1
      02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1
      02400 3 174 deny ip from 169.254.0.0/16 to any via em1
      02500 0 0 deny ip from 192.0.2.0/24 to any via em1
      02600 0 0 deny ip from 224.0.0.0/4 to any via em1
      02700 0 0 deny ip from 240.0.0.0/4 to any via em1
      02800 960156249 1095316736582 allow tcp from any to any established
      02900 64236062 8243196577 allow ip from any to any frag
      03000 34 1756 allow tcp from any to me dst-port 25 setup
      03100 193 11580 allow tcp from any to me dst-port 53 setup
      03200 63 4222 allow udp from any to me dst-port 53
      03300 64 8350 allow udp from me 53 to any
      03400 417 24140 allow tcp from any to me dst-port 80 setup
      03500 211 10472 allow ip from any to me dst-port 3389 setup
      05300 77 4488 allow ip from any to me dst-port 1723 setup
      05400 3 156 allow ip from any to me dst-port 8443 setup
      05500 9882 590596 allow tcp from any to me dst-port 22 setup
      05600 1 60 allow ip from any to me dst-port 2000 setup
      05700 0 0 allow ip from any to me dst-port 2201 setup
      07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp
      07500 21135656 1048824936 allow tcp from any to any setup
      07600 474447 35298081 allow udp from me to any dst-port 53 keep-state
      07700 532 40612 allow udp from me to any dst-port 123 keep-state
      65535 1990638432 1122305322718 allow ip from any to any

    systat -ifstat when uploading via squid:
      Load Average |||
      Interface    Traffic          Peak             Total
      tun0   in    79.507 KB/s      232.479 KB/s     42.314 GB
             out   2.022 MB/s       2.424 MB/s       59.662 GB
      lo0    in    4.450 MB/s       4.450 MB/s       43.723 GB
             out   4.450 MB/s       4.450 MB/s       43.723 GB
      em1    in    2.629 MB/s       2.982 MB/s       464.533 GB
             out   2.493 MB/s       2.875 MB/s       484.673 GB
      em0    in    240.458 KB/s     296.941 KB/s     442.368 GB
             out   512.508 KB/s     850.857 KB/s     416.122 GB

    top output:
        PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
      66885 root        1  92    0 26672K  2784K CPU3    3 528:43 65.48% natd
       9160 dhcpd       1  45    0 31032K  9280K CPU1    1   7:40 32.96% dhcpd
      66455 root        1  20    0 18344K  2856K select  1 119:27  1.37% openvpn
      16043 squid       1  20    0 44404K 17884K kqread  2   0:22  0.29% squid

    squid.conf (cat /usr/local/etc/squid/squid.conf):
      #
      # Recommended minimum configuration:
      #
      acl manager proto cache_object
      acl localhost src 127.0.0.1/32 ::1
      acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
      # Example rule allowing access from your local networks.
      # Adapt to list your (internal) IP networks from where browsing
      # should be allowed
      acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
      acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
      acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
      acl localnet src fc00::/7       # RFC 4193 local private network range
      acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
      acl SSL_ports port 443
      acl Safe_ports port 80          # http
      acl Safe_ports port 21          # ftp
      acl Safe_ports port 443         # https
      acl Safe_ports port 70          # gopher
      acl Safe_ports port 210         # wais
      acl Safe_ports port 1025-65535  # unregistered ports
      acl Safe_ports port 280         # http-mgmt
      acl Safe_ports port 488         # gss-http
      acl Safe_ports port 591         # filemaker
      acl Safe_ports port 777         # multiling http
      acl CONNECT method CONNECT
      #
      # Recommended minimum Access Permission configuration:
      #
      # Only allow cachemgr access from localhost
      http_access allow manager localhost
      http_access deny manager
      # Deny requests to certain unsafe ports
      http_access deny !Safe_ports
      # Deny CONNECT to other than secure SSL ports
      http_access deny CONNECT !SSL_ports
      # We strongly recommend the following be uncommented to protect innocent
      # web applications running on the proxy server who think the only
      # one who can access services on "localhost" is a local user
      http_access deny to_localhost
      #
      # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
      #
      # Example rule allowing access from your local networks.
      # Adapt localnet in the ACL section to list your (internal) IP networks
      # from where browsing should be allowed
      http_access allow localnet
      http_access allow localhost
      # And finally deny all other access to this proxy
      http_access deny all
      # Squid normally listens to port 3128
      http_port 192.168.1.1:3128
      # Uncomment and adjust the following to add a disk cache directory.
      #cache_dir ufs /var/squid/cache 100 16 256
      # Leave coredumps in the first cache dir
      coredump_dir /var/squid/cache

    I understand that the traffic passes through squid several times, but I cannot find out why.
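
    A hedged first debugging step (assuming tcpdump is installed and the ipfw rule numbers are as shown above): capture what is actually flowing over lo0 while an upload runs, and watch which firewall rules accumulate the traffic, to see which daemon is bouncing packets back through the loopback.

      # sample the loopback traffic during an upload; the host/port pairs show
      # whether it is squid, natd or something else talking to itself
      tcpdump -ni lo0 -c 200

      # reset the counters, upload for ~30 seconds, then list the busiest rules
      ipfw zero
      sleep 30; ipfw -a list | sort -rn -k2 | head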

    Read the article

  • How is this modsec rule getting triggered?

    - by BipedalShark
    I made a GET request to the URL, http://domain.tld/test/docs/index.php?create_table=1&step=2 and got a 403 response code. It turns out this modsec rule is getting triggered: Access denied with code 403 (phase 2). Pattern match "(?:ogg|gopher|zlib|(?:ht|f)tps?)\:/" at ARGS:gltr_redir. [file "/opt/mod_security/10_asl_rules.conf"] [line "827"] [id "340153"] [rev "22"] [msg "Generic PHP code injection protection via ARGS 3"] [severity "CRITICAL"] I would assume ARGS refers to GET/POST data, but there's no gltr_redir in the query string. And, being a GET request, there's obviously no POST data. So how is this rule being triggered?
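
    If the match turns out to be a false positive for this application, one common workaround (a sketch, assuming Apache with mod_security 2.x and that 340153 from the log above is the offending rule id) is to disable just that rule for the affected path:

      <LocationMatch "^/test/docs/">
          SecRuleRemoveById 340153
      </LocationMatch>

    The directive has to be processed after the rule set that defines 340153 has been included, so it typically belongs in the virtual host configuration rather than before the rules file.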

    Read the article

  • HTTPS and Certification for dummies

    - by Poxy
    I had never used HTTPS on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go:
      1. To use HTTPS I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html).
      2. The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration?
      3. Applying HTTPS. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via HTTPS?
      4. Though all the data will be encrypted, browsers will still show ugly red warnings. This is just because they don't know anything about my certificate. They have about a hundred certificates pre-installed, but mine is obviously not one of them. The data IS still encrypted by HTTPS, though.
      5. If I want browsers to recognize my certificate, I need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL etc).
    UPD: To read about SSL/TLS: The First Few Milliseconds of an HTTPS Connection; I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
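
    On point 2: port 443 is not served until the configuration says so. A minimal sketch of what the SSL virtual host usually looks like in Apache (paths and the domain are placeholders; on many distributions this lives in an included file such as ssl.conf or sites-available/default-ssl rather than httpd.conf itself):

      Listen 443
      <VirtualHost *:443>
          ServerName www.example.com
          SSLEngine on
          SSLCertificateFile    /etc/apache2/ssl/server.crt
          SSLCertificateKeyFile /etc/apache2/ssl/server.key
          DocumentRoot /var/www/html
      </VirtualHost>

    Checking which ports Apache actually listens on is then a matter of grepping the config for Listen directives or running netstat/ss and looking for :443.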

    Read the article

  • Using both domain users and local users for Squid authentication?

    - by Massimo
    I'm working on a Squid proxy which needs to authenticate users against an Active Directory domain; this works fine, Samba was correctly set up and Squid authenticates users via ntlm_auth. Relevant lines in squid.conf:
      auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
      auth_param ntlm children 5
      auth_param ntlm keep_alive on
      acl Authenticated proxy_auth REQUIRED
      http_access allow Authenticated
      http_access deny all
    Now, I need a way to allow access to users who don't have a domain account. I know I could create an "internet user" account in the domain, but this would allow access, although limited, to domain resources (file shares, etc.); I need something that will allow only Internet access. The ideal solution would be using a local account on the proxy server, either a Linux account or a Squid one; I know Squid supports this, but I'm unable to make it use both domain authentication and Squid/local authentication when domain auth is unsuccessful. Can this be done? How?
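
    A sketch of one approach (hedged: helper paths differ between distributions, and it is the browser, not Squid, that chooses among the offered schemes): configure a second, basic auth scheme backed by a local password file alongside the existing NTLM helper, so people without a domain account can type a local username and password.

      # local users kept in an htpasswd-style file
      auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/localusers
      auth_param basic children 5
      auth_param basic realm Internet proxy (local account)
      auth_param basic credentialsttl 2 hours

      # the existing acl keeps working: proxy_auth REQUIRED accepts a user
      # validated by any configured scheme
      acl Authenticated proxy_auth REQUIRED
      http_access allow Authenticated
      http_access deny all

    The local file would be populated with htpasswd (e.g. htpasswd -c /etc/squid/localusers someuser). Whether a client falls back from NTLM to basic cleanly depends on the browser, so this is worth testing with the actual client mix.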

    Read the article

  • Weird behaviour with OpenVPN: can not connect to a few websites

    - by Gaby Solis
    My OpenVPN server is Ubuntu 10.04.4 LTS and the OpenVPN version is 2.x. My client is on Windows 7. The client can access most sites, but not YouTube, Facebook, Twitter, groups.google.com, etc. My server.conf is:
      local x.x.x.x
      port 1194
      proto udp
      dev tun
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/server.crt
      key /etc/openvpn/keys/server.key
      dh /etc/openvpn/keys/dh1024.pem
      server 10.8.0.0 255.255.255.0
      push "redirect-gateway def1"
      push "dhcp-option DNS 8.8.8.8"
      client-to-client
      keepalive 10 120
      comp-lzo
      persist-key
      persist-tun
      status /etc/openvpn/keys/openvpn-status.log
      verb 4
    I can access YouTube etc. using an SSH tunnel + SOCKS proxy, and the Ubuntu server itself can access all sites, so nothing is wrong with the Ubuntu server. Given how little information I can provide, I am not looking for a quick solution. How can I debug this?
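
    "Most sites work, a few big ones don't" through a tunnel is frequently an MTU/fragmentation problem rather than a routing one, so a hedged place to start is testing packet sizes and, if large packets get dropped, clamping them (the 1300/1400 values below are just starting points, not known-good numbers for this setup):

      # from the Windows client, with the VPN up, find the largest payload that
      # still reaches a failing site without fragmentation
      ping -f -l 1400 www.youtube.com
      ping -f -l 1300 www.youtube.com

      # if big packets are lost, try clamping in server.conf and restart OpenVPN
      # (note: fragment generally has to be configured on both ends)
      fragment 1300
      mssfix 1300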

    Read the article

  • nginx: php-fastcgi running but php files not executing

    - by Daniel
    I have recently set up an nginx server with PHP running as a FastCGI process. HTML files are served fine, however PHP files are downloaded instead of being executed and the PHP code is not processed. This is what I have in nginx.conf:
      server {
          listen 80;
          server_name pubserver;
          location ~ \.php$ {
              root /usr/share/nginx/html;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
              include fastcgi_params;
          }
      }
    The command netstat -tulpn | grep :9000 displays the following, which indicates php-fastcgi is running and listening on port 9000:
      tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2663/php-cgi
    If it's of any importance, my server is running on CentOS 6 and I installed nginx and PHP using the repositories from The Fedora Project.
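
    One thing worth checking (a sketch, not a confirmed diagnosis): .php requests only reach FastCGI if this server block is the one actually answering them; if another server block wins (for example a stock default in /etc/nginx/conf.d/default.conf), it has no PHP location and nginx serves the raw file, which the browser downloads. A commonly used layout puts root at server level and derives SCRIPT_FILENAME from it:

      server {
          listen 80;
          server_name pubserver;
          root /usr/share/nginx/html;
          index index.php index.html;

          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params;
          }
      }

    After editing, nginx -t && service nginx reload, and re-test using the pubserver hostname (or mark this block default_server) so it is the one matched.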

    Read the article

  • debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian squeeze, and would like it to run once a week, only on a Wednesday morning. To attempt this, we have set:
      APT::Periodic::Unattended-Upgrade "7"
    in /etc/apt/apt.conf.d/50unattended-upgrades, and then touched /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday, for instance:
      touch -t 201211280000 /var/lib/apt/periodic/update-stamp
    Running:
      stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null)
      date -u --date="1970-01-01 $stamp sec GMT"
    gives the correct timestamp: Wed Nov 28 00:00:00 UTC 2012. However, unattended-upgrades then seems to ignore this and runs the updates on a Saturday morning. Could anyone enlighten me as to how this parameter works, and how to set up upgrades to run on a Wednesday?
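
    For what it's worth, the APT::Periodic values only express an interval in days measured against the stamp files, so they give no day-of-week guarantee. A common workaround (a sketch; paths are the usual ones on a squeeze install of the unattended-upgrades package) is to disable the periodic trigger and call the tool from cron on Wednesdays instead:

      # /etc/apt/apt.conf.d/50unattended-upgrades (or 02periodic/10periodic)
      APT::Periodic::Update-Package-Lists "1";
      APT::Periodic::Unattended-Upgrade "0";

      # /etc/cron.d/unattended-upgrade-wednesday
      # m h dom mon dow user command
      30 6 * * 3 root /usr/bin/unattended-upgrade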

    Read the article

  • OS X Can't Access Netatalk Shares

    - by Rogger Matamoros
    In my home network, I have a server running Ubuntu 10.04, configured to share files to my MacBook Pro running Snow Leopard (10.6.8) with Netatalk and Avahi. It was working like a charm until one day it stopped. I can see my server in Finder, I can enter a username and password, and it seems to accept them, but Finder gets stuck "Connecting" until it times out. I've checked afpd.conf and AppleVolumes.default; they are both intact. My guess is that an update broke something, but I don't know how to troubleshoot it further. Any suggestions?

    Read the article

  • How can I make gitosis distinguish between two users with the same username

    - by bryan kennedy
    I have a gitosis system that seems to be working correctly, except for a common problem we run into: I can't distinguish permissions between two users who have the same username but different hosts. For example, jsmith@computer's SSH key is in the key folder, and so is jsmith@machine's. These two jsmiths are two different people on two different computers. However, when I configure them in the gitosis.conf file with the usernames jsmith@computer or jsmith@machine, it seems like each user just gets the same permissions. Can gitosis not distinguish the full username (name and host)? If not, how do I deal with multiple users accessing our system with common usernames? Thanks for any help.
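
    For reference, gitosis identifies a user by the file name of the public key in keydir/, not by the user@host comment stored inside the key. A sketch of a layout that keeps the two people separate (the names mirror the ones in the question and are otherwise placeholders):

      gitosis-admin/keydir/jsmith@computer.pub   # first person's key
      gitosis-admin/keydir/jsmith@machine.pub    # second person's key

      # gitosis.conf
      [group project-writers]
      members = jsmith@computer jsmith@machine
      writable = someproject

    If both keys end up in a file with the same name, both people authenticate as the same gitosis user no matter what gitosis.conf says, which would produce exactly the "same permissions" behaviour described.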

    Read the article

  • My linux server "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have some strange behaviour on my server :-/. It is an OpenVZ VPS (I think it is OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also ifconfig returns venet0). When I do cat /proc/stat, I can see that about 50-100 processes are created each second and about 800k-1200k context switches happen! All of that is with the server completely idle, no traffic nor programs running. top shows 0 load average and 100% idle CPU.
    I've closed all non-needed services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens. I also run ps -ALf each second and I don't see any changes; only a new ps process is created each time and its PID is just the same as before + 1, so new processes are not really being created. So I thought the process count growing in cat /proc/stat must be threads (yes, it seems the processes field in /proc/stat counts thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1).
    I've changed to the /proc dir and done cat [PID]/status for all PIDs listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at the same speed as cat /proc/stat does (just a few tens per second); Threads stays the same as well. I've done strace -p PID on all processes too, so I can see if any process is creating threads or something, but the only process with a bit of movement is ssh, and that movement is read/write operations because of the data being sent to my terminal.
    After that, I've done vmstat -s and saw that forks grows at the same speed as processes in /proc/stat. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing! The last thing I can think of is that the process data that /proc/stat and vmstat -s show is shared with all the other VPSes stored on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.

    Read the article

  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys. I'm trying to install the latest Bugzilla version on my Ubuntu server (I want to use a subdomain like bugs.domain.com). I already installed all necessary Perl modules and check_modules.pl doesn't show any errors. But when I run the testserver.pl script I get the following:
      TEST-OK Webserver is running under group id in $webservergroup
      TEST-FAILED Fetch of images/padlock.png failed
    I'm also not able to visit ANY file within the bugzilla folder from the browser; I always get a 404 error. The bugzilla folder and all contained files have apache set as the owner. I tried to enter the Apache configuration from the installation guide in the http.include file of the domain and in the vhosts.conf file of the subdomain as well. I don't know what to do... Playing with Plesk's suexecgroup doesn't bring any solution... I hope you can help me! Thanks in advance!
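
    A 404 for every file usually means the requests never reach the Bugzilla directory at all, so it is worth confirming the subdomain's document root or an Alias actually points there. A sketch of the kind of stanza the Bugzilla install guide suggests, to be placed in the Plesk-managed include for the subdomain (the paths here are placeholders for this setup):

      Alias /bugzilla /var/www/vhosts/domain.com/bugzilla
      <Directory /var/www/vhosts/domain.com/bugzilla>
          AddHandler cgi-script .cgi
          Options +ExecCGI +FollowSymLinks
          DirectoryIndex index.cgi
          AllowOverride Limit FileInfo Indexes
          Order allow,deny
          Allow from all
      </Directory>

    After editing a Plesk-managed include, Apache has to be reloaded for the change to take effect, and testserver.pl can then be re-run to see whether the padlock.png fetch succeeds.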

    Read the article

  • Calling system commands from Perl

    - by Dan J
    In an older version of our code, we called out from Perl to do an LDAP search as follows:
      # Pass the base DN in via the ldapsearch-specific environment variable
      # (rather than as the "-b" parameter) to avoid problems of shell
      # interpretation of special characters in the DN.
      $ENV{LDAP_BASEDN} = $ldn;
      $lcmd = "ldapsearch -x -T -1 -h $gLdapServer" . <snip> " > $lworkfile 2>&1";
      system($lcmd);
      if (($? != 0) || (! -e "$lworkfile")) {
          # Handle the error
      }
    The code above would result in a successful LDAP search, and the output of that search would be in the file $lworkfile. Unfortunately, we recently reconfigured OpenLDAP on this server so that a "BASE DC=" is specified in /etc/openldap/ldap.conf and /etc/ldap.conf. That change seems to mean ldapsearch ignores the LDAP_BASEDN environment variable, and so my ldapsearch fails. I've tried a couple of different fixes but without success so far:
    (1) I tried going back to using the "-b" argument to ldapsearch, but escaping the shell metacharacters. I started writing the escaping code:
      my $ldn_escaped = $ldn;
      $ldn_escaped =~ s/\/\\/g;
      $ldn_escaped =~ s/`/\`/g;
      $ldn_escaped =~ s/$/\$/g;
      $ldn_escaped =~ s/"/\"/g;
    That threw up some Perl errors because I haven't escaped those regexes properly in Perl (the line number matches the regex with the backticks in):
      Backticks found where operator expected at /tmp/mycommand line 404, at end of line
    At the same time I started to doubt this approach and looked for a better one.
    (2) I then saw some Stack Overflow questions (here and here) that suggested a better solution. Here's the code:
      print("Processing...");
      # Pass the arguments to ldapsearch by invoking open() with an array.
      # This ensures the shell does NOT interpret shell metacharacters.
      my(@cmd_args) = ("-x", "-T", "-1", "-h", "$gLdapPool", "-b", "$ldn", <snip> );
      $lcmd = "ldapsearch";
      open my $lldap_output, "-|", $lcmd, @cmd_args;
      while (my $lline = <$lldap_output>) {
          # I can parse the contents of my file fine
      }
      $lldap_output->close;
    The two problems I am having with approach (2) are:
    a) Calling open or system with an array of arguments does not let me pass > $lworkfile 2>&1 to the command, so I can't stop the ldapsearch output being sent to the screen, which makes my output look ugly:
      Processing...ldap_bind: Success (0) additional info: Success
    b) I can't figure out which location (i.e. path and file name) the file handle passed to open refers to, i.e. I don't know where $lldap_output is. Can I move/rename it, or inspect it to find out where it is (or is it not actually saved to disk)? Based on the problems with (2), this makes me think I should return back to approach (1), but I'm not quite sure how to

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:
      error_log /var/log/nginx-error.log notice;
      ......
      server {
          access_log off;
          location / {
              ....
          }
      }
    but in my /var/log/messages I see:
      Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
      Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"
    How can I prevent nginx from sending messages to my syslog?
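
    A hedged place to start: a stock nginx of that era writes errors only to the files named in error_log directives, so lines in /var/log/messages tend to come either from an additional error_log elsewhere in the configuration (patched or vendor builds can point one at syslog) or from a startup wrapper piping nginx's stderr to logger. The sketch below simply checks for both (paths assume a typical layout):

      # look for any error_log other than the one you expect, at every level
      grep -Rn "error_log" /etc/nginx/

      # check how nginx is started; something like "... 2>&1 | logger -t nginx"
      # in the init script would explain the "nginx:" prefix in syslog
      grep -n "logger" /etc/init.d/nginx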

    Read the article

  • virtualbox ftp hangs on list command

    - by Tiddo
    Hi all, I have VirtualBox installed on a Windows 7 64-bit computer, with CentOS 5.5 as the guest OS. I want to be able to use FTP between them. I've installed vsftpd on the guest OS, and the guest uses a NAT connection with the host OS for internet access. So far, I am able to connect to the guest OS using FTP (in FileZilla), but after the LIST command is executed nothing happens until the command times out. This happens in both active and passive mode. I do have pasv_min_port/pasv_max_port set in the vsftpd.conf file, listing is enabled, and the ports are redirected in VirtualBox. Also the ftp_data_port is set to 20. I also tried setting pasv_address, but I had to set it to 127.0.0.1, and then FileZilla gives me this:
      Command: PASV
      Response: 500 OOPS: bad family
      Command: PORT 127,0,0,1,139,204
      Response: 500 OOPS: child died
    Can someone help me with this?
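
    Under NAT the FTP data connection is the usual culprit: the guest must advertise ports the host can actually reach, and every passive port has to be forwarded. A sketch of one way to line those up (the port range and VM name are placeholders, and the VBoxManage port-forwarding syntax shown is the VirtualBox 4.x form; older releases used VBoxManage setextradata instead):

      # vsftpd.conf on the guest: pin the passive range so it can be forwarded
      pasv_enable=YES
      pasv_min_port=50000
      pasv_max_port=50005

      # on the Windows host, forward the control port and each passive port
      VBoxManage modifyvm "CentOS55" --natpf1 "ftp,tcp,,21,,21"
      VBoxManage modifyvm "CentOS55" --natpf1 "pasv50000,tcp,,50000,,50000"
      VBoxManage modifyvm "CentOS55" --natpf1 "pasv50001,tcp,,50001,,50001"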

    Read the article

  • Problem with deploying django application on mod_wsgi

    - by Shehzad009
    Hello, I seem to have a problem deploying Django with mod_wsgi. In the past I've used mod_python, but I want to make the change. I have been following Graham Dumpleton's notes here http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango1, but it still doesn't work: I get an Internal Server Error.
    django.wsgi file:
      import os
      import sys
      sys.path.append('/var/www/html')
      sys.path.append('/var/www/html/c2duo_crm')
      os.environ['DJANGO_SETTINGS_MODULE'] = 'c2duo_crm.settings'
      import django.core.handlers.wsgi
      application = django.core.handlers.wsgi.WSGIHandler()
      WSGIScriptAlias / /var/www/html/c2duo_crm/apache/django.wsgi
    Apache httpd file:
      <Directory /var/www/html/c2duo_crm/apache>
          Order allow,deny
          Allow from all
      </Directory>
    In my Apache error log it says I have this error (this is not all of it, but I've got the most important part):
      [Errno 13] Permission denied: '/.python-eggs'
      [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] The Python egg cache directory is currently set to:
      [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] /.python-eggs
      [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] Perhaps your account does not have write access to this directory? You can
      [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] change the cache directory by setting the PYTHON_EGG_CACHE environment
      [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] variable to point to an accessible directory.
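
    The error message itself names the fix: the Apache user has no writable home directory, so setuptools cannot unpack eggs into ~/.python-eggs. A sketch of the usual remedy (the directory path is arbitrary; WSGIPythonEggs applies to embedded mode, and with a WSGIDaemonProcess setup the equivalent is its python-eggs option):

      # as root: create a cache directory the Apache user can write to
      mkdir -p /var/cache/python-eggs
      chown apache:apache /var/cache/python-eggs   # or www-data on Debian/Ubuntu

      # in the Apache/mod_wsgi configuration
      WSGIPythonEggs /var/cache/python-eggs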

    Read the article

  • Asterisk does not recognise DTMF tones from mobile phones

    - by Eugene van der Merwe
    We have an Asterisk 1.8.7.0 (the Elastix derivative) switchboard. Ever since a month ago, seemingly out of the blue, the switchboard has not recognised DTMF tones from mobile phones. Testing the switchboard using 7777 works. Testing the switchboard from a normal phone works. Testing the switchboard from a mobile phone fails. Looking at the log file I can't see anything. I used 'asterisk -rvvvv' and 'tail -f /var/log/asterisk/full' to watch the live output and scan the logs. I guess I don't see anything because it's simply not recognising the DTMF tones. I did some brief research and found an old setting for SIP phones, 'rfc2833compensate=yes', and tried adding this to 'sip_general_custom.conf'. After that I did 'core restart when convenient', but that didn't make any difference. Could anyone give me some additional troubleshooting steps?
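
    Two hedged troubleshooting steps for this kind of problem (file names follow the usual Elastix layout, and the dtmfmode value is something to verify with the trunk provider, not a known answer): make sure the dtmfmode on the trunk carrying the mobile calls matches what the provider actually sends, and turn on DTMF logging so every received digit becomes visible in the full log.

      ; sip_general_custom.conf (or the trunk's peer details) - match the provider
      dtmfmode=rfc2833
      relaxdtmf=yes

      ; logger.conf (or logger_general_custom.conf) - add the dtmf level to a channel
      full => notice,warning,error,debug,verbose,dtmf

    After a logger reload, /var/log/asterisk/full should show an entry for each DTMF begin/end event on the incoming channel, which tells you whether the digits from mobile callers are arriving at all or arriving in a mode Asterisk is not expecting.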

    Read the article

  • Using Supervisord, how can I start a brand new worker via supervisorctl without restarting other workers?

    - by cballou
    Let's say I have a number of existing workers running in supervisord. I want to add a new worker to the group as well as start the new worker. I perform the following steps: I modify the file /etc/supervisor/supervisord.conf and add the new worker config Back on the command line, I enter sudo supervisorctl I run reread to read the new configuration file settings. Attempting to run start workers:exampleWorkerName gives the error workers:"exampleWorkerName": ERROR (no such process) So, my question is, how can I start this new worker process without affecting my other existing workers? I'd rather not perform a supervisorctl reload or /etc/init.d/supervisord restart command.
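
    For reference, a sketch of the sequence that registers just the new program without touching the running ones (assuming the new block is named exampleWorkerName and the edited config parses cleanly):

      sudo supervisorctl reread                  # parse the edited config, list what changed
      sudo supervisorctl add exampleWorkerName   # register and (auto)start only the new group
      # or, equivalently, apply all added/changed/removed groups in one go:
      sudo supervisorctl update

    update only restarts groups whose configuration actually changed, so existing workers keep running.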

    Read the article

  • Apache NameVirtualHost on port 443 ignores ServerAlias

    - by Ryan
    I've got a name-based virtual host setup on port 443 such that requests on host 'apple.fruitdomain' are proxied to the apple-app and requests on host 'orange.fruitdomain' are proxied to orange-app. This is working, but I'd like to add a ServerAlias for each such that requests on host 'apple' are proxied to apple-app and requests on host 'orange' are proxied to the orange-app. If I simply add a ServerAlias directive to the virtual host it doesn't work. ssl.conf below: Listen 443 NameVirtualHost *:443 <VirtualHost *:443> ServerName apple.fruitdomain ServerAlias apple SSLProxyEngine on ProxyPass /apple-app https://localhost:8181/apple-app ProxyPassReverse /apple-app https://localhost:8181/apple-app ... </VirtualHost> <VirtualHost *:443> ServerName orange.fruitdomain ServerAlias orange SSLProxyEngine on ProxyPass /orange-app https://localhost:8181/orange-app ProxyPassReverse /orange-app https://localhost:8181/orange-app ... </VirtualHost> Interestingly if I do a similar setup but with port 80 then the ServerAlias works...

    Read the article

  • Why is my Drupal Registration email considered spam by gmail? (headers included)

    - by Jasper
    I just created a Drupal website on a uni.cc subdomain that is also brand new (it has barely had 24 hours to propagate). However, when signing up for a test account, the confirmation email was marked as spam by Gmail. Below are the headers of the email, which may provide some clues.
      Delivered-To: *my_email*@gmail.com
      Received: by 10.213.20.84 with SMTP id e20cs81420ebb; Mon, 19 Apr 2010 08:07:33 -0700 (PDT)
      Received: by 10.115.65.19 with SMTP id s19mr3930949wak.203.1271689651710; Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from bat.unixbsd.info (bat.unixbsd.info [208.87.242.79]) by mx.google.com with ESMTP id 12si14637941iwn.9.2010.04.19.08.07.31; Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
      Received-SPF: pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) client-ip=208.87.242.79;
      Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) [email protected]
      Received: from nobody by bat.unixbsd.info with local (Exim 4.69) (envelope-from <[email protected]>) id 1O3sZP-0004mH-Ra for *my_email*@gmail.com; Mon, 19 Apr 2010 08:07:32 -0700
      To: *my_email*@gmail.com
      Subject: Account details for Test at YuGiOh Rebirth
      MIME-Version: 1.0
      Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes
      Content-Transfer-Encoding: 8Bit
      X-Mailer: Drupal
      Errors-To: info -A T- yugiohrebirth.uni.cc
      From: info -A T- yugiohrebirth.uni.cc
      Message-Id: <[email protected]>
      Date: Mon, 19 Apr 2010 08:07:31 -0700
      X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
      X-AntiAbuse: Primary Hostname - bat.unixbsd.info
      X-AntiAbuse: Original Domain - gmail.com
      X-AntiAbuse: Originator/Caller UID/GID - [99 500] / [47 12]
      X-AntiAbuse: Sender Address Domain - bat.unixbsd.info
      X-Source:
      X-Source-Args: /usr/local/apache/bin/httpd -DSSL
      X-Source-Dir: gmh.ugtech.net:/public_html/YuGiOhRebirth

    Read the article

  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the imap functions, but now php-fpm fails to start with the following error:
      Starting php_fpm Error in argument 1, char 1: no argument for option -
      Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>]
             php-cgi <file> [args...]
        -a               Run interactively
        -C               Do not chdir to the script's directory
        -c <path>|<file> Look for php.ini file in this directory
        -n               No php.ini file will be used
        -d foo[=bar]     Define INI entry foo with value 'bar'
        -e               Generate extended information for debugger/profiler
        -f <file>        Parse <file>. Implies `-q'
        -h               This help
        -i               PHP information
        -l               Syntax check only (lint)
        -m               Show compiled in modules
        -q               Quiet-mode. Suppress HTTP Header output.
        -s               Display colour syntax highlighted source.
        -v               Version number
        -w               Display source with stripped comments and whitespace.
        -z <file>        Load Zend extension <file>
      ................................... failed
    What could be the problem? Is it in php-fpm.conf or php.ini?
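
    A hedged guess at a direction rather than a diagnosis: that usage text comes from a plain php-cgi binary that does not understand FPM options, so it is worth confirming that the binary the init script launches was actually built with FPM support after the recompile (the configure flag shown is the one used by FPM-enabled builds of that era; paths are placeholders):

      # which binary does the init script run, and does it report an FPM SAPI?
      grep -n "php" /etc/init.d/php-fpm | head
      /usr/local/bin/php-cgi -v        # an FPM-enabled build usually reports (fpm-fcgi)

      # if imap was added but the FPM flag was dropped, reconfigure with both, e.g.
      ./configure --enable-fpm --with-imap --with-imap-ssl ...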

    Read the article

  • mod_rpaf with apache error_log

    - by Camden S.
    I'm using mod_rpaf with Apache 2.4 and it's working properly (showing the real client IPs) in my Apache access_log... but not in my error_log. My error log just shows the client IP address of the proxy server (my load balancer in this case). Here's an example of what I see in my error_log, where 123.123.123.123 is the IP of my load balancer/proxy:
      == /usr/local/apache2/logs/error_log <==
      [Tue Jun 05 20:24:31.027525 2012] [access_compat:error] [pid 9145:tid 140485731845888] [client 123.123.123.123:20396] AH01797: client denied by server configuration: /wwwroot/private/secret.pdf
    The exact same request produces the following in my access_log, where 456.456.456.456 is a real client IP (not the IP of the load balancer):
      456.456.456.456 - - [05/Jun/2012:20:24:31 +0000] "GET /wwwroot/private/secret.pdf HTTP/1.1" 403 228 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"
    Here's my httpd.conf entry:
      # RPAF
      LoadModule rpaf_module modules/mod_rpaf-2.0.so
      RPAFenable On
      RPAFproxy_ips 127.0.0.1 123.123.123.123
      RPAFsethostname On
      RPAFheader X-Forwarded-For
    What do I need to do to get the real IP addresses showing in my Apache error_log?
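
    Since this is Apache 2.4, one option worth considering (hedged: it replaces mod_rpaf rather than fixing it) is the bundled mod_remoteip, which substitutes the client address that the core uses, so error_log entries, access control and access_log can all see the real client:

      LoadModule remoteip_module modules/mod_remoteip.so
      RemoteIPHeader X-Forwarded-For
      RemoteIPInternalProxy 127.0.0.1 123.123.123.123

    With mod_remoteip the %a format code in LogFormat carries the substituted client IP while %h stays the peer (load balancer) address, so the access_log format may need a small adjustment after switching.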

    Read the article

  • Snow Leopard connecting to Ubuntu 10.04 through Samba failure -- need help fixing.

    - by Chris Altman
    I have an Ubuntu 10.04 web server. I want to connect to it with my OS X 10.6 machine and Finder. I have installed OpenSSH and Samba on the Ubuntu machine. In my smb.conf I have a share definition:
      [www]
      comment = Development Computer WWW
      path = /var/www
      writeable = yes
      browseable = yes
      allow hosts = 192.168.1.
    I can connect to the machine through Finder using a non-root user. When I attempt to add files through Finder I get an "Insufficient Permissions" error. Please help. I am not sure if the issue is in the Samba configuration or OS X 10.6. Thank you
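
    "Insufficient permissions" on a writeable share is usually the Unix permissions on /var/www rather than Samba itself, since the account Finder authenticates as still needs write access to the directory tree. Two hedged ways to line that up (group and user names are placeholders for whatever account is used to connect):

      # option 1: give the connecting user's group write access on the tree
      sudo chgrp -R www-data /var/www
      sudo chmod -R g+rwX /var/www
      sudo usermod -aG www-data youruser

      # option 2: let Samba perform writes as a user that already owns /var/www
      # (added to the [www] share in smb.conf)
      force user = www-data
      create mask = 0664
      directory mask = 0775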

    Read the article

  • Permission issue for apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server apache2. I have mounted a 10GB EBS volume to an instance at /mnt/ebs1/. After mounting the volume and formatting it, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi and its content is:
      import os, sys
      sys.path.insert(0, '/mnt/ebs1/project')
      sys.path.insert(1, '/mnt/ebs1')
      os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
      import django.core.handlers.wsgi
      application = django.core.handlers.wsgi.WSGIHandler()
    My httpd.conf file looks like:
      LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
      WSGIPythonHome /usr/bin/python2.6
      WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi
      <Directory /mnt/ebs1/project>
          Order allow,deny
          Allow from all
      </Directory>
      <Directory /mnt/ebs1/project/apache>
          Order allow,deny
          Allow from all
      </Directory>
      Alias /static/ /mnt/ebs1/project/static/
      <Directory /mnt/ebs1/project/static>
          Order deny,allow
          Allow from all
      </Directory>
    The above configuration gives me: Forbidden: You don't have permission to access / on this server. I tried to find the user which runs Apache using ps aux; it is www-data with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
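
    Ownership is only half of it: every directory on the path down to the wsgi script also needs execute (search) permission for www-data, and the virtual host that actually answers the request must contain these directives. A hedged pair of checks, using tools present on a stock Ubuntu 12.04:

      # shows owner/group/mode of every path component; look for one that
      # blocks www-data (e.g. a 0700 mount point at /mnt/ebs1)
      namei -l /mnt/ebs1/project/apache/django.wsgi

      # confirms which VirtualHost answers on *:80 - if the default site in
      # /etc/apache2/sites-enabled wins, the WSGI config above is never used
      apache2ctl -S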

    Read the article

  • Kubuntu Monitor Out of Range

    - by uncleo
    My X server seems to be faulty, but I don't know why. I had some problems last week with a new motherboard, but they seemed to be resolved. Today, I updated Kubuntu and restarted, but my Viewsonic VX900 LCD monitor shows the splash screen for several seconds, then displays the message "Out of Range" I have tried various kernels in Grub, some in recovery mode, most do the same thing. No key combinations bring up a shell prompt like I expect. The one exception was a very old kernel, that would only mount the file system as read-only, so I can't fix up the xorg.conf file. Is there anything to be done?
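
    On the "read-only file system, so I can't fix xorg.conf" part: a root shell from recovery mode can usually remount the root filesystem read-write first; a small sketch:

      mount -o remount,rw /
      nano /etc/X11/xorg.conf              # adjust the monitor/mode settings
      # or let X autodetect the monitor on the next boot:
      mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bad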

    Read the article

  • OpenVPN Server Ethernet Bridging Question

    - by Hooplad
    Hello All, I am having a difficult time properly configuring an ethernet bridge with OpenVPN 2.0.9 installed on CentOS 5 (the VPN server). The goal I am trying to reach is to connect a VM (an instance running on the same CentOS machine) acting as a Microsoft Business Contact Manager server. I would then like this "BCM server" to serve Windows XP clients on the 192.168.1.0/24 network as well as clients connecting over the VPN (10.8.0.0/24). The setup as it is now was based on a known working configuration. The problem with the working configuration was that it would allow the client to connect and access everything running on the VPN server itself (SVN, Samba, VM Server) but not any computers on the 192.168.1.0/24 network. I must disclose that the VPN server is behind a router/firewall. Ports are being forwarded correctly (again, clients were able to connect to the VPN server with no problem; netcat confirms the UDP port is open as well).

    current ifconfig output:
      br0    Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C2  inet addr:192.168.1.169 Bcast:192.168.1.255 Mask:255.255.255.0  inet6 addr: fe80::221:5eff:fe4d:3ac2/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:846890 errors:0 dropped:0 overruns:0 frame:0  TX packets:3072351 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:0  RX bytes:42686842 (40.7 MiB) TX bytes:4540654180 (4.2 GiB)
      eth0   Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C2  UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1  RX packets:882641 errors:0 dropped:0 overruns:0 frame:0  TX packets:1781383 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:82342803 (78.5 MiB) TX bytes:2614727660 (2.4 GiB)  Interrupt:169
      eth1   Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C3  UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1  RX packets:650 errors:0 dropped:0 overruns:0 frame:0  TX packets:1347223 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:67403 (65.8 KiB) TX bytes:1959529142 (1.8 GiB)  Interrupt:233
      lo     Link encap:Local Loopback  inet addr:127.0.0.1 Mask:255.0.0.0  inet6 addr: ::1/128 Scope:Host  UP LOOPBACK RUNNING MTU:16436 Metric:1  RX packets:17452058 errors:0 dropped:0 overruns:0 frame:0  TX packets:17452058 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:0  RX bytes:94020256229 (87.5 GiB) TX bytes:94020256229 (87.5 GiB)
      tap0   Link encap:Ethernet HWaddr DE:18:C6:D7:01:63  inet6 addr: fe80::dc18:c6ff:fed7:163/64 Scope:Link  UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:3086 errors:0 dropped:166 overruns:0 carrier:0  collisions:0 txqueuelen:100  RX bytes:0 (0.0 b) TX bytes:315099 (307.7 KiB)
      vmnet1 Link encap:Ethernet HWaddr 00:50:56:C0:00:01  inet addr:192.168.177.1 Bcast:192.168.177.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:1/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:4224 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
      vmnet8 Link encap:Ethernet HWaddr 00:50:56:C0:00:08  inet addr:192.168.55.1 Bcast:192.168.55.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:8/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:4226 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    current route table:
      Kernel IP routing table
      Destination     Gateway   Genmask         Flags Metric Ref  Use Iface
      192.168.55.0    *         255.255.255.0   U     0      0      0 vmnet8
      192.168.177.0   *         255.255.255.0   U     0      0      0 vmnet1
      192.168.1.0     *         255.255.255.0   U     0      0      0 br0

    current iptables output:
      Chain INPUT (policy ACCEPT)
      target  prot opt source    destination
      ACCEPT  all  --  anywhere  anywhere
      ACCEPT  all  --  anywhere  anywhere
      Chain FORWARD (policy ACCEPT)
      target  prot opt source    destination
      ACCEPT  all  --  anywhere  anywhere
      Chain OUTPUT (policy ACCEPT)
      target  prot opt source    destination

    server_known_working.conf:
      local banshee
      port 1194
      proto udp
      dev tap0
      ca ca.crt
      cert banshee_server.crt
      key banshee_server.key
      dh dh1024.pem
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      push "route 192.168.1.0 255.255.255.0"
      client-to-client
      keepalive 10 120
      tls-auth ta.key 0
      user nobody
      group nobody
      persist-key
      persist-tun
      status openvpn-status.log
      verb 4

    The following is the current CentOS server config file, server_ethernet_bridged.conf (current):
      local 192.168.1.169
      port 1194
      proto udp
      dev tap0
      ca ca.crt
      cert server.crt
      key server.key
      dh dh1024.pem
      ifconfig-pool-persist ipp.txt
      server-bridge 192.168.1.169 255.255.255.0 192.168.1.200 192.168.1.210
      push "route 192.168.1.0 255.255.255.0 192.168.1.1"
      client-to-client
      keepalive 10 120
      tls-auth ta.key 0
      user nobody
      group nobody
      persist-key
      persist-tun
      status openvpn-status.log
      verb 6

    The following is one of the client's config files, used with the known working configuration, client.opvn:
      client
      dev tap
      proto udp
      remote XXX.XXX.XXX 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca client.crt
      cert client.crt
      key client.key
      tls-auth client.key 1
      verb 3

    I have tried the HOWTO provided by OpenVPN as well as others (http://www.thebakershome.net/openvpn%5Ftutorial?page=1) with no success. Any help or suggestions would be appreciated.
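
    One hedged observation on the output above: tap0 is up, but nothing shown ties it to br0, and with server-bridge OpenVPN does not add the tap interface to the bridge by itself. A sketch of the usual check and bridge-start sequence on CentOS 5 (interface names taken from the question; bridge-utils assumed to be installed):

      # confirm which interfaces are actually members of the bridge
      brctl show

      # if tap0 is missing, create/attach it before starting OpenVPN
      openvpn --mktun --dev tap0
      brctl addif br0 tap0
      ifconfig tap0 0.0.0.0 promisc up

    Many bridged setups wrap these commands in the bridge-start/bridge-stop scripts shipped with the OpenVPN examples so the tap device is in the bridge every time the service starts.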

    Read the article
