Search Results

Search found 4073 results on 163 pages for 'hosts deny'.

  • Can I use Apache 2.0 ErrorDocument and Mod Alias to store error pages outside the DocumentRoot?

    - by Pete
    I'd like to store my nicely designed set of HTTP error documents outside the DocumentRoot:

        Alias /errors /data/opt/apache-httpd-2.0.63/htdocs/errorpages/

        <Directory "/data/opt/apache-httpd-2.0.63/htdocs/errorpages/">
            Order deny,allow
            Allow from all
        </Directory>

        ErrorDocument 500 /errors/500.html
        ErrorDocument 404 /errors/404.html
        ErrorDocument 401 /errors/default.html
        ErrorDocument 403 /errors/default.html
        ErrorDocument 502 /errors/default.html
        ErrorDocument 503 /errors/default.html

    However, when I do this, all I get is "The requested URL /errors/default.html resulted in an error." Is it possible to use mod_alias and the ErrorDocument directive together like this in Apache 2.0?
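
    One thing worth checking is that the URL prefix and the filesystem target of the Alias agree about trailing slashes; a minimal sketch (same paths as above, slashes made consistent):

        # either give both sides a trailing slash...
        Alias /errors/ /data/opt/apache-httpd-2.0.63/htdocs/errorpages/

        # ...or neither:
        # Alias /errors /data/opt/apache-httpd-2.0.63/htdocs/errorpages

    If the alias itself resolves (try requesting /errors/default.html directly in a browser), the error_log should show why the subrequest for the error document fails, typically a 403 from the Directory section or a 404 from a path mismatch.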

  • Asterisk: Forcing a sip peer to connect via ipv6?

    - by growse
    I've got an Asterisk server that connects to an upstream provider over a WAN. The upstream provider supports both IPv4 and IPv6 connectivity, and the Asterisk server is behind a NAT. When Asterisk connects to the upstream SIP peer via IPv6, everything works perfectly. The issue I have is that when I configure the Asterisk server's IPv6 address via DHCPv6, a race condition means that Asterisk sometimes ends up attempting to contact the upstream peer via IPv4 (the SIP DNS name has both A and AAAA records). This is because Asterisk starts up before the system has a valid IPv6 address, and the connection does not work via IPv4 because of the NAT. Is there a way of configuring the peer to specify that it should only be contactable over IPv6? I guess it might be possible to hack together a firewall rule to deny all IPv4 traffic to that IP, but it would be easier to configure this within Asterisk itself.
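
    One workaround, if the provider's address is stable, is to pin the peer to a literal IPv6 address in sip.conf so DNS never hands chan_sip an A record; a sketch, with a placeholder address standing in for the provider's real one:

        [upstream-provider]
        type=peer
        ; literal IPv6 address (hypothetical) instead of the dual-stack
        ; hostname, so only IPv6 can ever be used to reach this peer
        host=2001:db8::10
        port=5060

    An AAAA-only hostname for the peer, if the provider publishes one, would achieve the same thing without hard-coding the address.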

  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a Rails 4 app. I asked to have my server re-imaged, specifying that I did not want Rails, Ruby, or Passenger installed automatically, as I wanted to install the latest versions myself using a version manager (Bluehost by default offers Rails 2.3, Ruby 1.8, and Passenger 3, which won't work with my app). I installed Ruby 1.9.3p327, Rails 4.0.0, and Passenger 4.0.5; I can verify this by typing "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 Forbidden error when trying to deploy the app. I installed Passenger by typing "gem install passenger", and then installed the nginx Passenger module (into "/nginx") with "passenger-install-nginx-module".

    I am trying to run my Rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In Bluehost I created that subdomain and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists.

    The problem is: when I go to http://development.thegraduate.hk, I get a directory listing; there is nothing resembling a Rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of Passenger; it was meant to be 'install and go'. When I type "passenger-memory-status", I get three things:

    - Apache processes (7)
    - Nginx processes (0)
    - Passenger processes (0)

    So it appears that nginx and Passenger are not running, and I can't figure out how to get them to run (I'm not looking to have Passenger run as a standalone server). Here is my nginx.conf file (/nginx/conf/nginx.conf):

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5;
            passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            server {
                listen 80;
                server_name development.thegraduate.hk;
                root ~/rails_apps/thegraduate/public;
                passenger_enabled on;

                #charset koi8-r;
                #access_log logs/host.access.log main;

                location / {
                    root html;
                    index index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #location ~ \.php$ {
                #    root html;
                #    fastcgi_pass 127.0.0.1:9000;
                #    fastcgi_index index.php;
                #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                #    include fastcgi_params;
                #}

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #location ~ /\.ht {
                #    deny all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #server {
            #    listen 8000;
            #    listen somename:8080;
            #    server_name somename alias another.alias;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}

            # HTTPS server
            #server {
            #    listen 443;
            #    server_name localhost;
            #    ssl on;
            #    ssl_certificate cert.pem;
            #    ssl_certificate_key cert.key;
            #    ssl_session_timeout 5m;
            #    ssl_protocols SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers HIGH:!aNULL:!MD5;
            #    ssl_prefer_server_ciphers on;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}
        }

    I don't get any errors, just the directory listing. I've tried to be as detailed as possible; any help on this issue would be greatly appreciated, as I've been stumped for the past 3 days. Scouring the web has not helped, as my issue seems to be specific to me. Thanks so much. If there are any potential details I forgot to specify, just ask.

    ** ADDITIONAL INFORMATION ** Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
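
    Two details in that config stand out. First, passenger-memory-status shows 7 Apache processes and 0 nginx processes, so Apache is almost certainly still bound to port 80 and serving the directory listing; nginx cannot start on the same port until Apache is stopped or moved. Second, nginx does not expand "~" in paths, and the "location / { root html; }" block shadows the Passenger root. A minimal sketch of the server block with both fixed (the home directory path is assumed from the passenger_root line above):

        server {
            listen 80;
            server_name development.thegraduate.hk;
            # absolute path: nginx does not expand "~"
            root /home/thegrad4/rails_apps/thegraduate/public;
            passenger_enabled on;
            # no "location / { root html; }" here: it would override the
            # Rails root and hand requests back to nginx's html directory
        }

    After stopping Apache (or rebinding it to another port) and starting the Passenger-built nginx (/nginx/sbin/nginx, assuming the default install prefix), passenger-memory-status should report nginx and Passenger processes.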

  • Configuring Apache reverse proxy

    - by Martin
    I have a loadbalancer server and edges. I am trying to configure a reverse proxy in order to hide the backend servers PL1, PL2, and PL3. The backends are not in the same subnet; they are located in different locations:

        Lb1 -> PL1
               PL2
               PL3

    I tried to configure an Apache reverse proxy, but it is not sending requests to PL1, PL2, or PL3. The reverse proxy worked only when I configured Apache to send requests to a local server on another port.

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /PL1 http://PL1server.com/
        ProxyPassReverse /PL1 http://PL1server.com/

    The above configuration did not work. Could you help me solve the issue, or is there another proxy type, like Squid or SOCKS5, that would? Does a reverse proxy fail if we use an IP address or a domain URL in ProxyPass and ProxyPassReverse?
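
    ProxyPass works with either hostnames or IP addresses, but the URL prefix and the target should use consistent trailing slashes, and mod_proxy plus mod_proxy_http must both be loaded. A sketch with one mapping per backend (hostnames assumed from the question):

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyRequests Off
        ProxyPass        /PL1/ http://PL1server.com/
        ProxyPassReverse /PL1/ http://PL1server.com/
        ProxyPass        /PL2/ http://PL2server.com/
        ProxyPassReverse /PL2/ http://PL2server.com/
        ProxyPass        /PL3/ http://PL3server.com/
        ProxyPassReverse /PL3/ http://PL3server.com/

    Since the backends sit in other subnets, it is also worth confirming from the loadbalancer's shell that "curl http://PL1server.com/" succeeds at all; mod_proxy can only forward to hosts the proxy box can reach.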

  • Problems getting Squirrelmail and passenger working on apache

    - by Kenneth
    I'm trying to have a setup where I run Squirrelmail and Passenger on the same Apache server, with one URL pointing to Squirrelmail and everything else handled by Passenger. I've gotten so far that both Squirrelmail and Passenger will run fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a webmail/ URL to Squirrelmail's directory, but that does not work. Here is my httpd.conf file:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Does not work:
            #Redirect webmail/ /usr/share/squirrelmail/
            #<Directory /usr/share/squirrelmail>
            #    Require all granted
            #</Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
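
    Passenger claims every URL under the DocumentRoot it manages, but it can be switched off for a subtree. A sketch using Alias (not Redirect, which sends the browser to another URL rather than mapping a filesystem path) together with Passenger's PassengerEnabled directive:

        Alias /webmail /usr/share/squirrelmail

        <Directory /usr/share/squirrelmail>
            # keep Passenger from intercepting requests mapped here
            PassengerEnabled off
            Order allow,deny
            Allow from all
        </Directory>

    Note that the URL prefix must start with a slash: "Redirect webmail/ ..." as in the commented-out attempt can never match, since URL paths in these directives begin with "/".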

  • Windows Server 2003 Hacked - Files Being Uploaded

    - by jreedinc
    Blank directories are being created on my Windows Server 2003 virtual server, with subdirectories that have weird names (for example: "88ÿ ÿ ÿÿþþ þþ13þ"). It looks like someone is uploading bootlegged DVDs and pirated software, and all of my bandwidth and file space is being eaten up. Could this be a shared-permissions issue? Where should I look to further investigate this? My security permissions for the directory that is being hit are as follows:

        Administrators  - ALL GRANTED
        IIS_WPG         - Read & Execute, List Folder Contents, Read
        Internet Guest  - DENY
        SYSTEM          - ALL GRANTED
        Users           - Read & Execute, List Folder Contents, Read

    My Event Viewer is also showing many Logon/Logoff events with no IP address; is that related?

  • How can I have APF block script kiddies that mod_security detects?

    - by Gaia
    In one of the vhosts' error_log I found thousands of lines like these, all from the same IP:

        [Mon Apr 19 08:15:59 2010] [error] [client 61.147.67.206] mod_security: Access denied with code 403. Pattern match "(chr|fwrite|fopen|system|e?chr|passthru|popen|proc_open|shell_exec|exec|proc_nice|proc_terminate|proc_get_status|proc_close|pfsockopen|leak|apache_child_terminate|posix_kill|posix_mkfifo|posix_setpgid|posix_setsid|posix_setuid|phpinfo)\\\\(.*\\\\)\\\\;" at THE_REQUEST [id "330001"] [rev "1"] [msg "Generic PHP exploit pattern denied"] [severity "CRITICAL"] [hostname "x.x.x.x"] [uri "//webmail/config.inc.php?p=phpinfo();"]

    Given how obvious the situation is, how come mod_security isn't automatically adding at least that IP to the deny rules? There is no way someone hasn't thought of this before...
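
    mod_security only blocks the individual request; pushing repeat offenders into the firewall is usually delegated to a log watcher such as fail2ban, or to a script of your own. For APF specifically, a rough cron-able sketch, assuming APF's "apf -d" deny syntax and the log path below:

        #!/bin/sh
        # Hypothetical helper: deny any IP that mod_security has blocked
        # more than THRESHOLD times in the current error_log.
        LOG=/var/log/httpd/error_log
        THRESHOLD=10

        grep 'mod_security: Access denied' "$LOG" \
          | sed -n 's/.*\[client \([0-9.]*\)\].*/\1/p' \
          | sort | uniq -c \
          | awk -v t="$THRESHOLD" '$1 >= t { print $2 }' \
          | while read ip; do
              apf -d "$ip" "mod_security auto-ban"
            done

    fail2ban with a filter matching the "mod_security: Access denied" line is the more robust route, since it handles unbanning and avoids re-denying IPs already in the list.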

  • Nginx: Rewriting entire URI to query string

    - by Doug
    Still pretty new to nginx here; trying to get a simple rewrite to work, but the server just responds '404 Not Found'. My server block:

        server {
            listen 80;
            listen [::]:80;
            server_name pics.example.com;
            root /home/pics;

            rewrite ^(.*)$ index.php?tag=$1;

            location / {
                try_files $uri $uri/ $uri.php /index.html $uri =404;
                #try_files $uri =404;
                fastcgi_split_path_info ^([a-z]+)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }
        }

    pics.example.com/foobear should rewrite to pics.example.com/index.php?tag=foobear
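
    A few things in that block work against each other: the server-level rewrite runs before (and regardless of) try_files, and the single location both serves static files and passes everything to PHP-FPM. A sketch that splits the two roles, assuming index.php in /home/pics handles the tag parameter:

        server {
            listen 80;
            listen [::]:80;
            server_name pics.example.com;
            root /home/pics;

            location / {
                # serve real files/dirs, otherwise hand the URI to index.php
                # ($uri keeps its leading slash, so index.php may need to trim it)
                try_files $uri $uri/ /index.php?tag=$uri;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
            }
        }

    With this shape, /foobear falls through try_files to /index.php?tag=/foobear, while /index.php itself is matched by the PHP location and executed rather than rewritten again.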

  • Multiple PHP versions running as cgi

    - by Pierre
    I'm trying to install a second version of PHP, to run alongside the current version. I've compiled the latest PHP source from GitHub (5.5-DEV), and I'm trying to run it as CGI. Here is my virtual host config:

        <VirtualHost *:8055>
            DocumentRoot /Library/WebServer/Documents/
            ScriptAlias /cgi-bin/ /usr/local/php55/cgi
            Action php55-cgi /cgi-bin/php-cgi
            AddHandler php55-cgi .php
            <Directory /Library/WebServer/Documents/>
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow from all
            </Directory>
            DirectoryIndex index.html index.php
        </VirtualHost>

    But when I go to http://127.0.0.1:8055/info.php, I get the following error:

        Forbidden
        You don't have permission to access /cgi-bin/php-cgi/info.php on this server

    Edit: I'm now switching between

        LoadModule php5_module /usr/local/php54/libphp5.so

    and

        LoadModule php5_module /usr/local/php55/libphp5.so

    It works for now, but is not ideal. I would like to have the different versions of PHP on different virtual hosts.
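
    The 403 usually means Apache has not been granted access to the directory that the ScriptAlias points at: the Directory block above only covers the DocumentRoot, not /usr/local/php55/cgi. A sketch of the missing piece, with paths assumed from the config above:

        # trailing slash on both sides, so /cgi-bin/php-cgi maps cleanly
        ScriptAlias /cgi-bin/ /usr/local/php55/cgi/

        <Directory "/usr/local/php55/cgi">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    If the php-cgi binary actually lives elsewhere (for instance /usr/local/php55/bin), the ScriptAlias should point there instead.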

  • solr reverse proxy Apache2

    - by Steven
    I am trying to set up Apache2 as a reverse proxy for Solr. Apache and Solr are on the same machine; Apache is also serving other content as a regular web server. My solrsearch config file in /etc/apache2/conf.d/:

        # Proxy specific settings
        ProxyRequests Off
        ProxyPreserveHost Off

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        ProxyPass /solrsearch http://localhost:8983/solr/collection1/browse
        ProxyPassReverse /solrsearch http://localhost:8983/solr/collection1/browse

    Now, requesting http://localhost/solrsearch gives me the first page of http://localhost:8983/solr/collection1/browse, but with a broken layout (as if the CSS were missing). Apache's error.log shows:

        File does not exist: /var/www/solr, referer: http://192.168.1.150/solrsearch
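
    The error.log line is the clue: the /browse page links its CSS and JavaScript under /solr/..., and those URLs are not covered by the /solrsearch mapping, so Apache looks for them on disk under /var/www. One sketch of a fix is to proxy the whole /solr path as well:

        ProxyPass        /solr http://localhost:8983/solr
        ProxyPassReverse /solr http://localhost:8983/solr

    Alternatively, mod_proxy_html's ProxyHTMLURLMap can rewrite the asset links inside the returned HTML, but proxying /solr is simpler as long as exposing the Solr admin paths through Apache is acceptable (access can still be restricted with a Location block).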

  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is, from a web administrator's perspective, fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, and .htaccess files providing Expires: headers and the like. An associated site demos various CGI scripts that fall under the category of "and other creations"; the site as a whole has the purpose of sharing my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update the RSS feed and the "starting point" link on the home page, has served that purpose fairly well.

    I looked around here on web hosting and found the note on web host recommendations a good answer to "What are some of people's favorite web hosts overall," but I wanted to ask a more focused question: "What are the best web hosts for criteria XYZ?"

    - I am looking at a VPS, so I will have root, be able to install stuff, edit Apache's config files, etc., running Gentoo or another Linux, BSD, or the like.
    - I would like a system that is secure enough that the host's vulnerabilities are mostly the ones that come along with what I am trying to do: that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1.
    - I would like good uptime/reliability and competent support staff: if the level 1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them.
    - Ideally, I would like a site hosted somewhere with low latency for U.S. visitors in particular.
    - I would like a hosting solution backed by a stable business, one that will probably be around and is unlikely to vanish without warning.

    With those things specified, I would be interested in knowing what the less expensive options are. (I expect that some of the things I've specified will knock out all of the cheapest options, but I'm still interested in price.)

    With all that stated, I'd like to back up a bit and ask whether I am asking the right question. I am concerned that the above is a very good way of asking, "How can I keep my site in line with the wave of the past?" I wonder if it might be wiser to adapt my site to newer technologies instead of trying to keep it on older ones. For instance, while I would hardly portray my site as a way to show off the full power of Google App Engine, the main site at least should be a straightforward port if I were to go that route. Beyond Google App Engine, my knowledge of cloud solutions is basic. If porting my site to another kind of platform is the better and more future-proof solution, I would be interested in knowing where those future-proof solutions lie.

    So I would be interested in wisdom. If the question I asked in detail is still a good question to be asking, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try there? Any thoughts would be appreciated.

  • fail2ban and denyhosts constantly ban me on Ubuntu

    - by Trey Parkman
    I just got an Ubuntu instance on Linode. To secure the SSH on it, I installed fail2ban (using apt-get), but then had a problem: fail2ban kept banning my IP (for limited durations, thankfully) even though I was entering the correct password. So I removed fail2ban and installed denyhosts instead. Same problem, but more severe: It seems like every time I SSH in, my IP gets banned. I remove it from /etc/hosts.deny, restart denyhosts and log in again, and my IP gets banned again. The only explanation I can think of is that I've been SSH-ing in as root (yes, yes, I know); maybe something is set somewhere that blocks anyone who SSH-es in as root, even if they log in successfully? This seems bizarre to me. Any ideas? (Whitelisting my IP is a temporary fix. I don't want to only be able to log on from one IP.)
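
    One explanation worth checking: the SSH client typically offers every public key it has before falling back to the password, and sshd logs each refused key as a failed attempt for root. DenyHosts ships with a root threshold of 1, so a single logged "failure" bans the IP even when the login ultimately succeeds. The relevant knobs, as a sketch of /etc/denyhosts.conf with values assumed from common defaults:

        # /etc/denyhosts.conf (excerpt)
        DENY_THRESHOLD_ROOT = 5     # default is 1: one failed root attempt = ban
        RESET_ON_SUCCESS = yes      # forget the failure count after a good login

    Checking /var/log/auth.log for "Failed ... for root" lines immediately before each successful login would confirm whether this is what is happening; the same mechanism would explain fail2ban's behaviour if its maxretry was low.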

  • Apache file negotiation failed

    - by lorenzo.marcon
    I'm having the following issue on a host using Apache 2.2.22 + PHP 5.4.0. I need to serve the file /home/server1/htdocs/admin/contents.php when a user requests http://server1/admin/contents, but I get this message in the server error_log:

        Negotiation: discovered file(s) matching request: /home/server1/htdocs/admin/contents (None could be negotiated)

    Notice that I have mod_negotiation enabled and MultiViews among the options for the related virtualhost:

        <Directory "/home/server1/htdocs">
            Options Indexes Includes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    I also use mod_rewrite, with the following .htaccess rules:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^([^\./]*)$ index.php?t=$1 [L]
        </IfModule>

    It seems very strange, but on the same box with PHP 5.3.6 it used to work correctly. I'm just trying an upgrade to PHP 5.4.0, but I cannot solve this negotiation issue. Any idea why Apache cannot match contents.php when asking for contents (which should be what mod_negotiation is supposed to do)?
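
    mod_negotiation only offers a file as a variant if it considers the file's media type negotiable; if the new PHP package associates .php with a handler rather than a MIME type, MultiViews silently skips it, which matches "None could be negotiated". A sketch of the usual workaround, using mod_mime's MultiviewsMatch directive:

        <Directory "/home/server1/htdocs">
            # allow MultiViews to pick files whose type is defined only
            # through a handler (e.g. .php under some PHP packagings)
            MultiviewsMatch Handlers
        </Directory>

    Comparing how .php is wired up in the two PHP installs (an AddType application/x-httpd-php line versus AddHandler/SetHandler lines) should show whether this is the difference the upgrade introduced.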

  • Routing to a Terminal Services Cluster

    - by Dave
    I am trying to connect to a load-balanced Windows 2008 R2 cluster using Remote Desktop Services. I have no trouble connecting to the servers' IP addresses (.253.16 and .253.17) or the cluster address (.253.20) from inside the subnet (.253). The trouble is when I try to connect from the other subnet (.251). I can RDP to the other, non-clustered servers (.253.12 and .253.15) inside the .253 subnet from .251 without an issue, and I receive a ping reply from the cluster and the other servers when I am on the .251 subnet. But when I try to connect via Remote Desktop, it times out, and only to the IPs on the cluster (.20, .17, .16). My ASA 5510, which handles the routing, reports this message in the log:

        Deny TCP (no connection) from 192.168.251.2/4283 to 192.168.253.16/3389 flag FIN PSH ACK

    Here is a picture if it helps: http://dl.dropbox.com/u/4217864/terminal%20server.jpg. Thanks for any help.
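
    "Deny TCP (no connection) ... FIN PSH ACK" is the ASA dropping packets for a session it holds no state for, which is what happens when an NLB cluster returns traffic along a path that bypasses the firewall's original connection entry. If asymmetric return traffic turns out to be the cause, one possibility (a sketch in ASA 8.2+ syntax, subnets assumed from the log line) is TCP state bypass for RDP to the cluster:

        access-list TCP_BYPASS extended permit tcp 192.168.251.0 255.255.255.0 192.168.253.0 255.255.255.0 eq 3389
        !
        class-map TCP_BYPASS
         match access-list TCP_BYPASS
        !
        policy-map TCP_BYPASS_POLICY
         class TCP_BYPASS
          set connection advanced-options tcp-state-bypass
        !
        service-policy TCP_BYPASS_POLICY interface inside

    Fixing the routing so both directions traverse the ASA (or checking the NLB unicast/multicast mode) would be the cleaner long-term answer, since state bypass disables the firewall's TCP inspection for that traffic.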

  • Can PHP be run in Apache via mod_php and mod_fcgi side by side?

    - by Mario Parris
    I have an existing installation of Apache (2.2.10, Windows x86) using mod_php and PHP 5.2.6. Can I run another site in a virtual host using FastCGI and a different version of PHP, while still running the main site in mod_php? I've made an attempt, but when I add my FCGI settings to the virtual host container, Apache is unable to restart.

    httpd.conf mod_php settings:

        LoadModule php5_module "C:\PHP\php-5.2.17-Win32-VC6-x86\php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:\PHP\php-5.2.17-Win32-VC6-x86"

    httpd-vhosts.conf FastCGI settings:

        <VirtualHost *:80>
            DocumentRoot "C:/Inetpub/wwwroot/site-b/source/public"
            ServerName local.siteb.com
            ServerAlias local.siteb.com
            SetEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php.ini"
            FcgidInitialEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86"
            FcgidWrapper "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php-cgi.exe" .php
            AddHandler fcgid-script .php
        </VirtualHost>

        <Directory "C:/Inetpub/wwwroot/site-b/source/public">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
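
    Apache refusing to restart after adding Fcgid* directives usually means mod_fcgid is not loaded, so the directives are simply unknown (an "Invalid command 'FcgidInitialEnv'" line in the error log would confirm it). A sketch of the missing piece, with the module path assumed:

        LoadModule fcgid_module modules/mod_fcgid.so

    Running "httpd -t" from the command line prints the exact directive the parser chokes on. With mod_fcgid loaded, the per-vhost "AddHandler fcgid-script .php" overrides the global mod_php handler for that site, so the two can coexist.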

  • MediaTemple DV SSL and Passenger

    - by pcasa
    I followed these instructions to get Passenger and Media Temple's Apache talking to each other: http://greggoodwin.com/2009/03/01/install-ruby-on-rails-with-passenger-on-mediatemple-dv-35-how-to/

    I have ssl_requirement installed and pages requesting SSL, but I can't figure out which .conf file gets edited and what to put in it: httpd.conf, vhosts.conf, ssl.conf, or vhosts_ssl.conf? For what it's worth, next to my vhosts.conf file there is also an httpd.include that looks like it holds some info from certs created by Plesk. In there it says to create a /var/www/vhosts/sitename.com/conf/vhost_ssl.conf file for SSL. Currently I have vhosts.conf in /var/www/vhosts/sitename.com/conf/vhosts.conf, and it looks like:

        ServerAlias www.sitename.com
        DocumentRoot /var/www/vhosts/sitename.com/rails/sitename/public
        <Directory "/var/www/vhosts/sitename.com/rails/sitename/public">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            RailsEnv development
            Allow from all
        </Directory>
        RailsBaseURI /
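
    Since Plesk's generated httpd.include points at conf/vhost_ssl.conf for the SSL side of the domain, the natural move is to mirror the non-SSL settings there; a sketch, with the same paths as above and the file name the httpd.include asks for:

        # /var/www/vhosts/sitename.com/conf/vhost_ssl.conf
        DocumentRoot /var/www/vhosts/sitename.com/rails/sitename/public
        <Directory "/var/www/vhosts/sitename.com/rails/sitename/public">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            RailsEnv development
            Allow from all
        </Directory>
        RailsBaseURI /

    Plesk only picks the file up after its configuration is regenerated (via its websrvmng tool or by re-saving the domain's settings in the panel) and Apache is reloaded.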

  • Googlebot DNS error HostPapa

    - by Gravy
    Received a message from Google Webmaster Tools: "Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 40.0%. You can see more details about these errors in Webmaster Tools." Following the recommended action, I contacted HostPapa, and they deny that there is any issue with the site or server. Support in terms of what I can actually do to resolve this issue is non-existent. The site is currently online, and I don't know much about DNS, so any advice about what I can do to resolve this problem would be much appreciated.
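
    Intermittent DNS failures can be checked from the outside without the host's cooperation. A few dig commands (the domain and nameserver names are placeholders; substitute the ones "dig NS" returns for your domain):

        # follow the delegation from the root and see whether it resolves cleanly
        dig +trace example.com

        # list the domain's authoritative nameservers
        dig example.com NS

        # query each authoritative server directly, several times; intermittent
        # SERVFAIL or timeouts here would match Googlebot's 40% error rate
        dig @ns1.hostpapa.com example.com A

    If some of the authoritative servers time out or return errors on repeated queries, that is concrete evidence to bring back to HostPapa.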

  • What's the closest equivalent of Little Snitch (Mac program) on Windows?

    - by Charles Scowcroft
    I'm using Windows 7 and would like to have a feature like Little Snitch on the Mac that alerts you whenever a program on your computer makes an outgoing connection. Description of Little Snitch from its website: Little Snitch informs you whenever a program attempts to establish an outgoing Internet connection. You can then choose to allow or deny this connection, or define a rule how to handle similar, future connection attempts. This reliably prevents private data from being sent out without your knowledge. Little Snitch runs inconspicuously in the background and it can also detect network related activity of viruses, trojans and other malware. Little Snitch provides flexible configuration options, allowing you to grant specific permissions to your trusted applications or to prevent others from establishing particular Internet connections at all. So you will only be warned in those cases that really need your attention. Is there a program like Little Snitch for Windows?

  • Domain and TS migration

    - by Windex
    The migration steps outlined by Microsoft in the TS migration guide seem to deal with moving TS to a different server on the same domain: they call for adding the licensing service to another system, moving the licenses, and then putting TS on whatever server you want. However, since I am migrating the domain as well, I don't have any place to move the TS server to. So my thought was to simply re-activate my licenses on the new server using the same method as a new TS setup. My question is essentially: will this work the way I think it will, or will the MS activation clearinghouse deny the new server? Is there a procedure that "deactivates" the licenses on a server, so that the clearinghouse knows there are some free? (FWIW, I can look up the license information through the eOpen website and have access to the original license document.)

  • Apache: how to set custom 401 error page and save original behaviour

    - by petRUShka
    I have Kerberos-based authentication with Apache/2.2.3 (Linux/SUSE). When a user tries to open some URL, the browser asks for a domain login and password, as in HTTP Basic Auth. If the user cancels that prompt 3 times, Apache returns the 401 Authorization Required error page. My current virtual host config is:

        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthType Kerberos
            AuthName "Domain login"
            KrbAuthRealms DOMAIN.COM
            KrbMethodK5Passwd On
            Krb5KeyTab /etc/httpd/httpd.keytab
            require valid-user
        </Directory>

    I want to set a nice custom 401 error page with some instructions for users, so I added this line to the virtual host config:

        ErrorDocument 401 /pages/401

    It works: when a user can't authorize, Apache redirects him to my nice page. But Apache no longer asks the user for a login and password as it did before. I want this functionality and the nice error page simultaneously! Is it possible to make it work properly?
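
    Two constraints on 401 error documents are worth ruling out here. First, the ErrorDocument for 401 must be a local path; a remote URL makes Apache drop the WWW-Authenticate challenge entirely. Second, the error page itself must be retrievable without authentication, otherwise serving it fails. A sketch of explicitly exempting the page (Apache 2.2 syntax, path as above):

        <Location /pages/401>
            Order allow,deny
            Allow from all
            # host-based access OR auth is enough, so anonymous
            # users can still fetch the error page itself
            Satisfy Any
        </Location>

    If the login prompt still never appears, capturing the response headers ("curl -v http://host/some/protected/url") shows whether the 401 still carries the WWW-Authenticate: Negotiate header; if it does, the problem is on the page side, not the auth side.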

  • redirect all youtube video requests to a specific one

    - by iTayb
    I'm on an IT team in my company, and I would like to block YouTube for users. I don't want to just deny access to the whole youtube.com domain, but rather to replace the .flv/.mp4 request with one that I choose. That way, if someone tries to watch YouTube videos on the network, he'll get a video explaining why using our expensive bandwidth for pleasure is a no-no. I thought about using a packet-manipulation program and just replacing the video ID with something that I want, but I didn't manage to do it right.
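
    If the traffic already flows through (or can be routed through) a Squid proxy, its url_rewrite_program interface is a simpler lever than packet manipulation: the helper reads one request URL per line and prints either a replacement URL or a blank line. A rough sketch (the URL patterns and the replacement host are assumptions to adapt):

        #!/bin/sh
        # Hypothetical Squid rewrite helper: swap YouTube video fetches
        # for a local notice clip, pass everything else through untouched.
        while read url rest; do
            case "$url" in
                *youtube.com/videoplayback*|*.googlevideo.com/videoplayback*)
                    echo "http://intranet.example.com/bandwidth-notice.mp4"
                    ;;
                *)
                    echo ""    # blank line = leave the URL unchanged
                    ;;
            esac
        done

    Wired up with a "url_rewrite_program /usr/local/bin/yt-rewrite.sh" line in squid.conf. Note this only works for plain HTTP; video served over HTTPS cannot be rewritten without SSL interception.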

  • Set top level directory to be handled by Perl?

    - by Sam Lee
    I have an Apache server set up to use mod_perl, configured to handle all requests with a Perl module, MyModule. Here is part of my httpd.conf:

        LoadModule perl_module modules/mod_perl.so

        <Directory />
            Order Deny,Allow
            Allow from all
        </Directory>

        PerlModule MyModule

        <Location />
            SetHandler modperl
            PerlResponseHandler MyModule
        </Location>

    This seems to work fine, except that the top-level directory (i.e. www.mysite.com/) is not being sent to MyModule. What's going wrong?

  • performance block countries using iptables /netfilter

    - by markus
    It's easy to block IPs by country using iptables (e.g. as in http://www.cyberciti.biz/faq/block-entier-country-using-iptables/). However, I have read that performance can suffer if the deny list gets too large. Alternatives are installing the iptables geoip patch, or using ipset (http://www.jsimmons.co.uk/2010/06/08/using-ipset-with-iptables-in-ubuntu-lts-1004-to-block-large-ip-ranges/) alongside iptables. Does anyone have experience with the various approaches and can say something about the performance differences? Are there other ways to block country IPs on Linux that I didn't mention above?
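
    The performance difference comes from how matching scales: a plain iptables chain is a linear scan over every rule for every packet, while an ipset of type hash:net is a single hash lookup no matter how many networks it holds. A sketch of the ipset approach (ipset 6.x syntax; the country.zone file of CIDR blocks is an assumed download from a zone-file provider):

        # one set holds all of the country's networks
        ipset create geoblock hash:net

        # load the aggregated CIDR list
        while read net; do
            ipset add geoblock "$net"
        done < country.zone

        # a single iptables rule, however large the set grows
        iptables -I INPUT -m set --match-set geoblock src -j DROP

    The geoip match from xtables-addons ("-m geoip --src-cc XX" style) avoids maintaining the set yourself, at the cost of building the kernel module and keeping its country database current.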

  • Apache server as reverse proxy is removing xmlns info from html tag

    - by Johnco
    I have a Java application running in Tomcat, in front of which I have an Apache HTTP server as a reverse proxy. However, the proxy is removing all xmlns data from the html tag, which breaks all of the Facebook FBML, as it is never parsed. My current config is as follows:

        ProxyRequests off
        ProxyHTMLDocType XHTML
        ProxyPassReverseCookiePath /cas /

        <Location />
            ProxyPass http://localhost:8080/cas
            ProxyPassReverse http://localhost:8080/cas
        </Location>

        ProxyHTMLURLMap /cas /
        SetOutputFilter proxy-html

        <Proxy *>
            Order deny,allow
            Allow from all
            Satisfy all
        </Proxy>

    Thanks in advance.

  • Exchange Mail Flow

    - by Tuck918
    Hello. I have a question. We have one Exchange 2003 server and two Exchange 2007 servers. Almost all of our mailboxes are on 2007, but we still have one shared mailbox, a Unity mailbox, and a journaling mailbox on 2003. Public Folders have been set to replicate to 2007. I have set up a send connector on 2007 with a cost of 1, and the receive connectors on 2007 have Anonymous Users checked. On 2003 there are two connectors: the Internet Email connector and the connector that connects 2003 to 2007. We have a spam-filtering device that email goes through before it is handed off to Exchange, and it is set to send email to one of our Exchange 2007 servers.

    Here is my question/problem: even though the spam-filtering device is set to forward email to Exchange 2007, all of our email is somehow still going through the Exchange 2003 server before it finally reaches the users' mailboxes on the Exchange 2007 server. How can I change it so that all email goes directly to Exchange 2007 and never routes through Exchange 2003, both inbound and outbound?

    I would also add: in the EMC, under Org, Hub, Send Connector, there are two connectors. One is the "Internet Connector" from the 2003 box; its address space is set to a cost of 2, it has no smart hosts, and the 2003 box is listed as the source server. The other send connector has an address space with a cost of 1, no smart host, and the two Exchange 2007 servers listed as the source servers. In the EMC under Server, Hub, my two Exchange 2007 servers are listed; each one has two receive connectors, both set up the same way. The Default receive connector has Anonymous Users checked. The other receive connector is labeled "Client"; I am not sure what it does or why it's there, and Anonymous Users is not checked on it. No smart hosts are configured on 2003.

    Additional details: currently we have three Exchange servers, one Exchange 2003 server and two Exchange 2007 servers. The Exchange 2003 server is acting as the "bridgehead" server, and all email is routing through it, inbound and outbound. We want to decommission this server and use our two Exchange 2007 servers as our mailbox servers. All of our user mailboxes are already on one of the Exchange 2007 boxes, and we want to move what's left on the Exchange 2003 box to our other Exchange 2007 box. Both Exchange 2007 servers are currently CAS, HT, and MB servers. We have a spam-filtering device that sits between our Exchange servers and the firewall, configured to send messages to one of the Exchange 2007 servers, but when we look at the message headers we can see that messages are still being routed to the Exchange 2003 box. We want to bypass Exchange 2003 in the routing process, as it is dying and starting to have major issues, so every time it goes down our email is down. Is there possibly some sort of AD routing-group or site-link issue going on?
