Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • Apache redirect some requests to another server

    - by mucie
    We just bought a new server. We want our old server to answer HTTPS connections (because of the SSL certificate) and the new server to answer everything else. The new server is ready, but I don't know how to redirect requests to it.

        mydomain.com => old machine
        10.10.10.41  => new machine

    Requests will come in through mydomain.com. If the request is HTTPS, respond; otherwise redirect to 10.10.10.41. How should I configure Apache for this situation?
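    One way to do this on the old machine, sketched under the assumption that a working :443 virtual host already exists: let the :80 virtual host do nothing but redirect, and leave HTTPS untouched. The certificate paths below are placeholders.

    ```apache
    # Old machine: plain-HTTP requests are sent to the new server
    <VirtualHost *:80>
        ServerName mydomain.com
        Redirect permanent / http://10.10.10.41/
    </VirtualHost>

    # Old machine: HTTPS is still served locally
    <VirtualHost *:443>
        ServerName mydomain.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/mydomain.crt    # assumed path
        SSLCertificateKeyFile /etc/ssl/private/mydomain.key  # assumed path
        DocumentRoot /var/www/mydomain
    </VirtualHost>
    ```

    Redirecting to a raw IP works, but pointing a second hostname at the new machine and redirecting to that is usually cleaner, since the IP then stays free to change.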


  • proxy: no HTTP/0.9 request (with no host line)

    - by TestPlanManagement.com
    I'm getting a bunch of these errors in my error.log:

        [client 1.2.3.4] proxy: no HTTP/0.9 request (with no host line) on incoming request and preserve host set, forcing hostname to be www.mydomain.com for uri /

    My config is essentially:

    ```apache
    ProxyRequests Off

    <VirtualHost 1.2.3.4:80>
        ServerName www.mydomain.com
        DocumentRoot "c:/apache/htdocs"
        ProxyPreserveHost On
        ProxyPass / http://172.1.1.1/
    </VirtualHost>

    <VirtualHost 1.2.3.4:443>
        ServerName www.mydomain.com
        DocumentRoot "c:/apache/htdocs"
        # SSL stuff
        ProxyPreserveHost On
        ProxyPass / http://172.1.1.1/
    </VirtualHost>
    ```

    Does anyone have an idea how to eliminate those warnings?


  • How do you bypass TLS/SSL certificate validation in WCF for Exchange Web Services

    - by Sevki
    I want to bypass SSL and use the regular HTTP protocol to connect to an Exchange 2007 server. We don't want to invest in another real SSL certificate, and the one we have is needed for the BlackBerry Enterprise Server. Is there a way to bypass this? Here is the exception:

        Request failed. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

    ```csharp
    Service.Credentials = new WebCredentials(ShacxEwsUserName, ShacxEwsUserPassword, ShacxEwsUserDomain);
    Service.Url = new Uri(ShacxEwsServiceUrl);
    ```

    How do you make ExchangeService accept a bad SSL certificate?
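    A common workaround, sketched rather than recommended: hook ServicePointManager.ServerCertificateValidationCallback before creating the ExchangeService, so certificate errors are tolerated on outgoing HTTPS calls. The hook is process-wide and disables exactly the protection TLS provides, so limiting it to the known Exchange host (the host name below is an assumption) is the least-bad form:

    ```csharp
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;

    ServicePointManager.ServerCertificateValidationCallback =
        (sender, certificate, chain, sslPolicyErrors) =>
        {
            // Valid certificates pass through unchanged
            if (sslPolicyErrors == SslPolicyErrors.None)
                return true;

            // Tolerate the bad certificate only for the Exchange host
            var request = sender as HttpWebRequest;
            return request != null
                && request.RequestUri.Host == "mail.example.com"; // assumed host
        };
    ```

    Note that Exchange 2007 requires SSL on the EWS virtual directory by default, so bypassing SSL entirely usually is not an option; accepting the self-signed certificate is the practical route.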


  • Why am I getting Network error: 403 Forbidden in firebug for files I am not trying to access?

    - by moomoochoo
    QUESTIONS
    I'd like to know:
    1. why I am getting "Network error: 403 Forbidden" in Firebug for files that I am not trying to access;
    2. whether it is likely to cause any serious problems on the webserver;
    3. how to fix it;
    4. why my browser is trying to access the files in the error message.

    DETAILS
    I'm using WampServer 2.2 to access a folder via the browser. The browser is on the same computer as the server, which runs Windows 7 Ultimate. When I view a web folder via my browser (http://localhost/folder) I can see the folder contents fine, but in Firebug I get "Network error: 403 Forbidden". I'm not deliberately trying to access the files in the error messages; you will notice they are in a completely different folder to the one I am looking at. I checked apache_error.log and see:

        [Wed Sep 26 00:05:10 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/apache2, referer: http://localhost/folder/

    WampServer 2.2 is installed on drive D. I took a look at the httpd.conf file but couldn't find any references to C:. When I look in Apache's access.log I see:

        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/blank.gif HTTP/1.1" 403 217
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/back.gif HTTP/1.1" 403 216
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/text.gif HTTP/1.1" 403 216
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/unknown.gif HTTP/1.1" 403 219
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/folder.gif HTTP/1.1" 403 218

    CONFIGURATION
    WampServer 2.2 installed on drive D; Apache 2.2.22; PHP 5.4.3; MySQL 5.5.24; Firebug 1.10.3; Firefox 15.0.1
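    Those /icons/*.gif requests come from mod_autoindex itself: the "fancy" directory listing embeds small file-type icons, and the stock Apache config aliases /icons/ to a path under the compile-time server root (C:/apache2 here), which doesn't exist in a WampServer install on drive D. A sketch of a fix in httpd.conf; the WampServer path is an assumption, so point it wherever your Apache's icons directory actually lives:

    ```apache
    Alias /icons/ "D:/wamp/bin/apache/apache2.2.22/icons/"
    <Directory "D:/wamp/bin/apache/apache2.2.22/icons/">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    ```

    Beyond log noise it is harmless, and removing FancyIndexing from the IndexOptions line also stops the icon requests entirely.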


  • Unable to install PHP-FPM on Apache (Failed to connect to FastCGI server)

    - by Nyxynyx
    I have been having problems installing PHP-FPM for use with apache2-mpm-worker. This is the guide that I am following. According to the guide's step 5:

    ```apache
    Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
    FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
    ```

    However, I cannot find php5-fcgi at /usr/lib, only /usr/bin/php5-cgi and /usr/bin/php-cgi, which I am not sure are the same thing. So I changed the lines from step 5 to:

    ```apache
    Alias /php5-fcgi /usr/bin/php5-fcgi
    FastCgiExternalServer /usr/bin/php5-fcgi -host 127.0.0.1:9000 -pass-header
    ```

    On restarting Apache, its logs gave these errors:

        [notice] caught SIGTERM, shutting down
        [alert] (4)Interrupted system call: FastCGI: read() from pipe failed (0)
        [alert] (4)Interrupted system call: FastCGI: the PM is shutting down, Apache seems to have disappeared - bye
        [notice] Apache/2.2.22 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
        [notice] FastCGI: process manager initialized (pid 16348)

    And on loading the index page:

        [error] [client 10.0.2.2] (111)Connection refused: FastCGI: failed to connect to server "/usr/bin/php5-cgi": connect() failed
        [error] [client 10.0.2.2] FastCGI: incomplete headers (0 bytes) received from server "/usr/bin/php5-cgi"
        [error] [client 10.0.2.2] File does not exist: /var/www/mydomain/public/favicon.ico

    Question: why is php5-fcgi missing, and how should this problem be fixed? Thank you!
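    In mod_fastcgi's FastCgiExternalServer, the filename is only a virtual handle that ties the alias to the external server; it is not supposed to exist on disk, so not finding php5-fcgi under /usr/lib/cgi-bin is normal. The "(111)Connection refused ... 127.0.0.1:9000" part means nothing was listening on that port. A sketch of the usual checks, assuming the Ubuntu php5-fpm package:

    ```sh
    sudo apt-get install php5-fpm                  # the FPM daemon itself
    grep '^listen' /etc/php5/fpm/pool.d/www.conf   # expect: listen = 127.0.0.1:9000
    sudo service php5-fpm restart
    sudo netstat -ntlp | grep :9000                # confirm FPM is now listening
    ```

    Then restore the guide's original Alias/FastCgiExternalServer lines (including "-pass-header Authorization") and restart Apache.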


  • Expiration time is set, but still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (plus the disk and memory providers) enabled, and it seems to cache content from my app server fine. My app server sets Expires and Last-Modified headers. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs:

        blablabla not cached. Reason: Query string present but no explicit expiration time

    Any clues on why Apache is not caching the content? The only difference is the Apache version; locally I am running 2.2. This is from my config:

    ```apache
    CacheRoot "/var/cache/apache2/"
    CacheEnable disk /
    ```

    This is example output:

        < HTTP/1.1 200 OK
        < Date: Mon, 19 Nov 2012 16:09:13 GMT
        < Server: Sun GlassFish Enterprise Server v2.1.1
        < X-Powered-By: Servlet/2.5
        < Expires: Tue Nov 20 05:00:00 CET 2012
        < Last-Modified: Mon Nov 19 17:09:13 CET 2012
        < Cache-Control: no-transform
        < Content-Type: application/x-javascript
        < Transfer-Encoding: chunked
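    One detail worth checking in that sample output: the Expires and Last-Modified values are not RFC 1123 dates. "Tue Nov 20 05:00:00 CET 2012" looks like the raw output of Java's Date.toString(); Apache cannot parse it, so as far as mod_cache is concerned there is no explicit expiration time, and for URLs with a query string mod_cache in 2.2 refuses to cache without one. The headers should have this shape:

        Expires: Tue, 20 Nov 2012 04:00:00 GMT
        Last-Modified: Mon, 19 Nov 2012 16:09:13 GMT

    In servlet code that usually means setting the header with HttpServletResponse.setDateHeader("Expires", millis) rather than formatting the date by hand, so the container emits the correct GMT format.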


  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per this step-by-step instruction: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/. I am using Ubuntu 10.04.1, Apache 2.2.14, Mongrel 1.1.5. On the VirtualHost configuration stage, I am using this:

    ```apache
    <VirtualHost *:80>
        ServerName myserver.lv
        ProxyPass /redmine/ http://localhost:8000/
        ProxyPassReverse /redmine/ http://localhost:8000
        ProxyPreserveHost on
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
    </VirtualHost>
    ```

    But when I direct my browser to http://<my-server's-ip>/redmine/, what I see is not the Redmine web application but "Index of /redmine", that is, an index of the files from the root directory of Redmine. Any idea how to fix that?

    P.S. I tried removing the VirtualHost stuff altogether and instead adding the following simple clauses to apache2.conf:

    ```apache
    <Proxy *>
        Order allow,deny
        Allow from all
    </Proxy>
    ProxyPass /redmine/ http://localhost:8000/
    ProxyPassReverse /redmine/ http://localhost:8000/
    ProxyPreserveHost on
    ```

    As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of Redmine's start page, so it is served, but apparently not rendered. At the same time, http://<my-server's-ip>:8000/ still works perfectly fine, so Mongrel is serving the Redmine application as it should; it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.
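    Two hedged observations based only on the config above. First, the browser request uses the raw IP while this virtual host declares ServerName myserver.lv, so if any other name-based vhost is defined earlier it answers instead, and an Alias or directory named redmine there would explain the "Index of /redmine" listing; apache2ctl -S shows which vhost actually catches the request. Second, the "served but not rendered" symptom is typical of proxying a sub-URI to an application that thinks it lives at the root: Redmine generates asset links like /stylesheets/application.css, which the /redmine/ prefix never forwards. Either proxy at the root, or start Mongrel with the matching prefix so the generated links include it:

    ```sh
    # a sketch: make the Rails app generate /redmine/... URLs
    mongrel_rails start -p 8000 -e production --prefix /redmine
    ```

    With the prefix in place, the ProxyPass target must include it too:

    ```apache
    ProxyPass /redmine/ http://localhost:8000/redmine/
    ProxyPassReverse /redmine/ http://localhost:8000/redmine/
    ```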


  • How to remove request blocking on an Apache reverse proxy after a backend failure, before asking the backend again

    - by matnagel
    I am working on an apache2 reverse-proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But on subsequent requests Apache seems to wait for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the Apache error page is shown to the browser, although the backend server is already up. Where is this setting in Apache, what is this behaviour, and how can I set the delay time to zero?

    Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that Apache blocks further requests for a certain time before asking a backend server again once it has failed.

    Edit 2: This is what Apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
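    That behaviour is mod_proxy's worker error state: after a backend connection fails, the worker is skipped for 60 seconds by default, which matches the observed Ctrl-R minute. The retry parameter of ProxyPass controls it, and retry=0 disables the back-off entirely (a sketch; the backend URL is a placeholder):

    ```apache
    ProxyPass        / http://localhost:8080/ retry=0
    ProxyPassReverse / http://localhost:8080/
    ```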


  • Making Apache 2.2 on SuSE Linux case-insensitive: which approach is better?

    - by pingu
    Problem: all of

        http://<server>/home/APPLE.html
        http://<server>/hoME/APPLE.html
        http://<server>/HOME/aPPLE.html
        http://<server>/hoME/aPPLE.html

    should pick up http://<server>/home/apple.html. I implemented two solutions and both are working fine. I am not sure which one is better (performance); please suggest. Also, the directive CheckCaseOnly on never worked.

    Option 1:
    a) Enable mod_speling. In /etc/sysconfig/apache2: APACHE_MODULES="rewrite speling apparmor ..."
    b) Add the directive CheckSpelling on, either in httpd.conf:

    ```apache
    <Directory /srv/www/htdocs/home>
        Order allow,deny
        CheckSpelling on
        Allow from all
    </Directory>
    ```

    or in .htaccess inside /srv/www/htdocs/home (your content folder): CheckSpelling on

    Option 2:
    a) Enable mod_rewrite.
    b) Write the rule in the vhost (you cannot use RewriteMap in a directory context; check the Apache docs):

    ```apache
    <VirtualHost _default_:80>
        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on
            RewriteMap lc int:tolower
            RewriteCond %{REQUEST_URI} [A-Z]
            RewriteRule (.*) ${lc:$1} [R=301,L]
        </IfModule>
    </VirtualHost>
    ```

    This changes the entire request URI to lowercase. I want this to happen only for a specific folder, but RewriteMap doesn't work in .htaccess. I am a novice with regexes and mod_rewrite; I need a RewriteCond that checks only /css/. Can anybody help?
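    Since RewriteMap is only valid in server or virtual-host context, the per-folder restriction has to come from an extra condition instead. A sketch that lowercases only URIs under /css/ (it stays in the vhost, alongside the map definition):

    ```apache
    RewriteEngine on
    RewriteMap lc int:tolower
    # Only rewrite when the URI is inside /css/ and contains an uppercase letter
    RewriteCond %{REQUEST_URI} ^/css/ [NC]
    RewriteCond %{REQUEST_URI} [A-Z]
    RewriteRule (.*) ${lc:$1} [R=301,L]
    ```

    On the performance question: mod_speling works by reading the directory and comparing entry names, so it costs a directory scan per miss, while the rewrite works on the URI string alone and is generally cheaper; mod_speling, on the other hand, also corrects one-character misspellings, not just case.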


  • SSH over HTTPS with proxytunnel and nginx

    - by Thermionix
    I'm trying to setup an ssh over https connection using nginx. I haven't found any working examples, so any help would be appreciated!

    ```
    ~$ cat .ssh/config
    Host example.net
      Hostname example.net
      ProtocolKeepAlives 30
      DynamicForward 8118
      ProxyCommand /usr/bin/proxytunnel -p ssh.example.net:443 -d localhost:22 -E -v -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)"

    ~$ ssh [email protected]
    Local proxy ssh.example.net resolves to 115.xxx.xxx.xxx
    Connected to ssh.example.net:443 (local proxy)
    Tunneling to localhost:22 (destination)
    Communication with local proxy:
     -> CONNECT localhost:22 HTTP/1.0
     -> Proxy-Connection: Keep-Alive
     -> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)
     <- <html>
     <- <head><title>400 Bad Request</title></head>
     <- <body bgcolor="white">
     <- <center><h1>400 Bad Request</h1></center>
     <- <hr><center>nginx/1.0.5</center>
     <- </body>
     <- </html>
    analyze_HTTP: readline failed: Connection closed by remote host
    ssh_exchange_identification: Connection closed by remote host
    ```

    Nginx config on the server:

    ```nginx
    upstream tunnel {
        server localhost:22;
    }
    server {
        listen 443;
        server_name ssh.example.net;
        location / {
            proxy_pass http://tunnel;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
        ssl on;
        ssl_certificate /etc/ssl/certs/server.cer;
        ssl_certificate_key /etc/ssl/private/server.key;
    }
    ```

    ```
    ~$ tail /var/log/nginx/access.log
    203.xxx.xxx.xxx - - [08/Feb/2012:15:17:39 +1100] "CONNECT localhost:22 HTTP/1.0" 400 173 "-" "-"
    ```
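    The access-log line shows the underlying problem: proxytunnel speaks the CONNECT method, and nginx is a reverse proxy, not a forward proxy; it has no CONNECT support, so it answers 400 no matter how the location block is written. Two common alternatives are terminating TLS in front of sshd with something that forwards raw bytes, or multiplexing SSH and HTTPS on port 443 with sslh. A sketch of the sslh route on Debian/Ubuntu (package name and option syntax assumed for that era):

    ```sh
    sudo apt-get install sslh
    # /etc/default/sslh: listen on 443, route by protocol
    # DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 \
    #              --ssh 127.0.0.1:22 --ssl 127.0.0.1:8443"
    sudo service sslh restart
    ```

    with nginx moved to 127.0.0.1:8443 for the real HTTPS sites; the client then needs no proxytunnel at all (ssh -p 443 example.net).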


  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message.

        sudo apt-get purge nginx

    I have confirmed that nginx is gone, because when I run the above now, I get as output:

        Package nginx is not installed, so not removed

    Yet every time I start the machine it is running again. I noticed the following running processes (if that is helpful):

        root      923  0.0  0.0  76784  1280 ?  Ss  03:00  0:00 nginx: master process /usr/sbin/nginx
        www-data  925  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  926  0.0  0.1  77092  2204 ?  S   03:00  0:00 nginx: worker process
        www-data  927  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  928  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process

    Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?
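    On Ubuntu, "nginx" is a meta-package; the daemon itself usually comes from nginx-full or nginx-light (with the init script in nginx-common), so purging only "nginx" can leave both the binary and its boot script behind. A sketch of the cleanup (exact package names may vary):

    ```sh
    dpkg -l | grep nginx                     # see which nginx-* packages remain
    sudo service nginx stop
    sudo apt-get purge nginx-full nginx-light nginx-common
    sudo update-rc.d -f nginx remove         # drop any leftover boot links
    sudo service apache2 restart
    ```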


  • "Options ExecCGI is off in this directory" When try to run Ruby code using mod_ruby

    - by Itay Moav
    I am on Ubuntu with Apache 2.2. I installed fcgi via apt-get, then removed it via apt-get remove, and installed mod-ruby. The configuration I added to Apache:

    ```apache
    LoadModule ruby_module /usr/lib/apache2/modules/mod_ruby.so
    RubyRequire apache/ruby-run

    <Directory /var/www>
        Options +ExecCGI
    </Directory>

    <Files *.rb>
        SetHandler ruby-object
        RubyHandler Apache::RubyRun.instance
    </Files>

    <Files *.rbx>
        SetHandler ruby-object
        RubyHandler Apache::RubyRun.instance
    </Files>
    ```

    I have a file in the www directory containing puts 'baba'. I have other files in that directory, all accessible via Apache. The test file has been chmod 777. In the browser I get a 403, and in the Apache error log:

        [error] access to /var/www/t.rb failed for (null), reason: Options ExecCGI is off in this directory

    If I move the file to a subfolder rubytest and modify the relevant config to be:

    ```apache
    <Directory /var/www/rubytest>
        Options +ExecCGI
    </Directory>
    ```

    making sure the directory has 755 permissions, Apache just tries to download the file, as if it no longer recognizes the *.rb suffix. If I give the directory and files 777, it fails:

        /usr/lib/ruby/1.8/apache/ruby-run.rb:53: warning: Insecure world writable dir /var/www/rubytest in LOAD_PATH, mode 040777
        [Tue May 24 19:39:58 2011] [error] mod_ruby: error in ruby
        [Tue May 24 19:39:58 2011] [error] mod_ruby: /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `load': loading from unsafe file /var/www/rubytest/t.rb (SecurityError)
        [Tue May 24 19:39:58 2011] [error] mod_ruby: from /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `handler'

    BUT, if I use *.rbx it works like a charm... go figure.
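    The SecurityError part, at least, is mod_ruby's $SAFE checks refusing world-writable paths rather than an Apache problem; the script and its directory must not be mode 777. A sketch of the permission fix (paths from the question):

    ```sh
    sudo chmod 755 /var/www/rubytest        # directory: no world write
    sudo chmod 644 /var/www/rubytest/t.rb   # script: readable, not world-writable
    ```

    As for the original 403: Options in a later-merged <Directory> block without a +/- prefix resets the whole option list, so a <Directory /var/www> block elsewhere (the Ubuntu default site has one) can silently cancel the +ExecCGI added here. Checking for a second /var/www section in sites-enabled is a reasonable first step, though this is a guess from the symptoms, not a certain diagnosis.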


  • WordPress hacked: disabled the hacked site, but bad traffic continues [closed]

    - by tetranz
    Possible duplicate: My server's been hacked EMERGENCY

    My Ubuntu 10.04 LTS VPS has been hacked, probably via a WordPress site. I was alerted to it when I noticed the incoming traffic was unusually high. A WordPress site was littered with eval(base64_decode(...)) code in lots of files. My fault: I had some files writable by www-data which shouldn't have been. I've disabled that site (a2dissite ... and restarted Apache). This has reduced the traffic, but I am still getting some malware-type traffic. My server runs several WordPress and Drupal sites and a home-grown PHP site. I have captured traffic with tcpdump and looked at it in Wireshark: it's reaching out to the login pages of some Joomla sites, trying multiple logins. The traffic stops when I stop Apache. If I a2dissite every site and reload (not restart) Apache, the traffic continues. At that point I have no virtual hosts running and no DocumentRoot in my apache2.conf, so I don't know how Apache is still running something. I have searched the other sites with grep for likely-looking PHP code, with no success. I may have missed it, but I haven't found anything suspicious in the Apache logs. I have mod_status running; I haven't really seen much there, except that someone is still trying to POST to the theme page on the disabled WordPress site, and they now get a 404. What should I be looking for? Are there any tools which would give me more info about how Apache is generating that traffic? Thanks
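    A few generic places to look, sketched as commands rather than a diagnosis: tie the outbound connections to a process, and compare that with what Apache thinks it is running.

    ```sh
    sudo netstat -ntp | grep ESTABLISHED         # remote address -> owning PID/program
    sudo ls -l /proc/<PID>/cwd /proc/<PID>/exe   # where a suspect process lives
    apache2ctl -S                                # which vhosts Apache actually loaded
    ps -ef --forest | less                       # anything odd still parented to apache2?
    ```

    One detail that fits the reload-vs-stop observation: a graceful reload lets existing requests run to completion, so a long-running malicious PHP request (or a process it spawned) can survive a2dissite plus reload but dies with a full stop. The process tree above is the quickest way to spot such a straggler.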


  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with two or three virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something that can apply pattern matching in sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to what is largely a performance-sensitive serving task. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired hands-free configuration. But so is frequently having to adjust the gateway for all production services and risking a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it, or would do it best?

    And maybe I should post this as a separate question, but if Apache is the only practical option: how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
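    For what it's worth, nginx's server_name matching already implements this kind of precedence without listing every vhost: exact names win, then the longest matching wildcard, then regexes, with default_server as the catch-all. A sketch against the example above (backend addresses are placeholders):

    ```nginx
    server {
        listen 80;
        server_name *.dev.mysite.com;
        location / { proxy_pass http://10.0.0.1; proxy_set_header Host $host; }
    }
    server {
        listen 80;
        server_name *.stage.mysite.com;
        location / { proxy_pass http://10.0.0.2; proxy_set_header Host $host; }
    }
    server {
        # ".mysite.com" covers mysite.com itself plus *.mysite.com; the dev/stage
        # exact names are listed for clarity (they would match *.mysite.com anyway,
        # since "*.dev.mysite.com" does not match the bare dev.mysite.com)
        listen 80;
        server_name .mysite.com dev.mysite.com stage.mysite.com;
        location / { proxy_pass http://10.0.0.3; proxy_set_header Host $host; }
    }
    server {
        listen 80 default_server;
        server_name _;
        location / { proxy_pass http://10.0.0.4; proxy_set_header Host $host; }
    }
    ```

    Because the Host header is passed through, the wildcard vhosts on the back-end machines keep working without the proxy knowing about any individual site.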


  • Apache Alias Isn't In Directory Listing

    - by Phunt
    I've got a site running on my home server that's just a front end for me to grab files remotely. There are no pages, just a directory listing (Options Indexes ...). I wanted to add a link to a directory outside of the webroot, so I made an alias. After a minute of dealing with permissions, I can now navigate to the directory by typing the URL into the browser, but the directory isn't listed in the root index. Is there a way to do this without creating a symlink in the root?

    Server: Ubuntu 11.04, Apache 2.2.19. Relevant vhost:

    ```apache
    <VirtualHost *:80>
        ServerName some.url.net
        DocumentRoot "/var/www/some.url.net"
        <Directory /var/www/some.url.net>
            Options Indexes FollowSymLinks
            AllowOverride None
            Order Allow,Deny
            Allow From All
            AuthType Basic
            AuthName "TPS Reports"
            AuthUserFile /usr/local/apache2/passwd/some.url.net
            Require user user1 user2
        </Directory>
        Alias /some_alias "/media/usb_drive/extra files"
        <Directory "/media/usb_drive/extra files">
            Options Indexes FollowSymLinks
            Order Allow,Deny
            Allow From All
        </Directory>
    </VirtualHost>
    ```
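    mod_autoindex builds the listing from the directory on disk, so an Alias will never appear in it; the usual workarounds are a symlink (FollowSymLinks is already on here) or injecting a link through the listing's header file. A sketch of the header approach:

    ```apache
    <Directory /var/www/some.url.net>
        # Prepend HEADER.html to the auto-generated index
        IndexOptions +FancyIndexing
        HeaderName /HEADER.html
    </Directory>
    ```

    where /var/www/some.url.net/HEADER.html is a plain HTML fragment containing something like <a href="/some_alias/">extra files</a>.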


  • ETag configuration with multiple Apache servers or a CDN / How does Google do ETags?

    - by perrierism
    I have an application which is served from two Apache 2 servers, and I want to configure the ETags on static content. In the future I would also like to use a CDN. I see that this is supposed to be a problem, because the ETag information will differ from server to server: the ETag format for Apache 1.3 and 2.x is inode-size-timestamp, and although a given file may reside in the same directory across multiple servers and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next. So if you're using more than one webserver to host your app (like 90% of the webapps you use every day do), it's supposed to be an issue. However, I see Google uses ETags, and they certainly use multiple servers, CDNs, edge caching, etc.; I get a 304 response for any cached Google content. How do they do it? How do you get around the multiple-server issue? Is there a way to configure this with Apache?
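    Apache lets you drop the inode from the ETag, which makes it identical across servers for files with the same size and modification time (a sketch; it goes in httpd.conf or the vhost):

    ```apache
    # Build ETags from modification time and size only, not the inode
    FileETag MTime Size
    ```

    After that, make sure deployments preserve file mtimes (for example rsync -t) so the servers really agree on the value.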


  • How to create VirtualHost in Ubuntu 12.10

    - by Mifas
    I have followed many articles on how to create a VirtualHost in Ubuntu. This is what I have done:

    1. Installed Apache:

        sudo apt-get install lamp-server^ phpmyadmin

    2. Created a folder called site1.com in /var/www/.
    3. Created the file /etc/apache2/sites-available/site1.com with the following content:

    ```apache
    <VirtualHost *:80>
        ServerName www.site1.com
        ServerAdmin [email protected]
        ServerAlias site1.com
        DocumentRoot /var/www/site1.com
        # Other directives here
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/site1.com/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
    ```

    4. Edited the hosts file, adding the following line:

        127.0.0.1 site1.com

    Edit: I also enabled the site via sudo a2ensite site1.com, then restarted the Apache service (I even restarted the PC). When I go to site1.com, I get "The connection has timed out". But I can browse via localhost/site1.com. I have been trying since the last two days, with no solution, and have followed many articles and videos.
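    A timeout (rather than a 404 or an Apache error page) usually means the request never reached Apache at all; with "127.0.0.1 site1.com" in /etc/hosts, the prime suspects are a browser proxy setting or the hosts entry not actually being used. A few hedged checks:

    ```sh
    ping -c 1 site1.com        # should report 127.0.0.1, proving /etc/hosts is read
    apache2ctl -S              # should list site1.com as a *:80 name-based vhost
    curl -v http://site1.com/  # bypasses browser proxy settings entirely
    ```

    If curl works but the browser doesn't, check the browser's proxy configuration: a proxy will forward "site1.com" to the outside world, where it times out.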


  • I am using Apache mod_rewrite to redirect HTTP to HTTPS, but now cannot connect to localhost/phpmyadmin

    - by user1787331
    Here is my /etc/apache2/sites-enabled/000-default:

    ```apache
    <VirtualHost *:80>
        ServerAdmin [email protected]
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://mysite.com
        DocumentRoot /var/www/http
        <Directory />
            Options None
            AllowOverride None
        </Directory>
        <Directory /var/www/http>
            Options -Indexes -FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>
    ```

    Not sure how to fix this. Any thoughts?
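    As written, the rule rewrites every plain-HTTP request, including http://localhost/phpmyadmin, to https://mysite.com, and it also discards the requested path. A sketch that keeps the redirect but leaves local access alone and preserves the URI:

    ```apache
    RewriteEngine On
    RewriteCond %{HTTPS} off
    # Skip the redirect for local access so phpmyadmin keeps working
    RewriteCond %{HTTP_HOST} !^localhost [NC]
    RewriteCond %{HTTP_HOST} !^127\.
    RewriteRule (.*) https://mysite.com%{REQUEST_URI} [R=301,L]
    ```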


  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache 2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in apache.conf, right?) for the static HTML sites. I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, and I'm under the impression I'll want to look at suPHP as well. So basically I want to look like a small version of a shared web-hosting company. Considering how common those are, I thought I'd easily find a nice current how-to guide on setting this all up, but so far I've had very little luck. I suspect my search words are off. So the questions (feel free to answer any or all):

    1. Does anyone have solid links to current/modern guides that would help me set this all up? (No, the Apache documentation site is not a guide.)
    2. Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of?
    3. Should I be looking at other options instead of suexec and suPHP?

    I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
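    For the PHP side, a sketch of what per-user execution with mod_suphp typically looks like in a vhost; the directive names are mod_suphp's, while the user and paths are invented for illustration:

    ```apache
    <VirtualHost *:80>
        ServerName client1.example.com
        DocumentRoot /home/client1/public_html
        suPHP_Engine on
        suPHP_AddHandler x-httpd-php
        suPHP_UserGroup client1 client1   # needs suPHP built in force/paranoid mode
        AddHandler x-httpd-php .php
    </VirtualHost>
    ```

    Note that suexec only covers CGI/FastCGI execution, and static HTML is simply read by Apache as whatever user it runs as, so the combination usually ends up being suPHP (or per-user PHP-FPM pools) for PHP plus plain Apache for the static files.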


  • Multiple IPs for one server: new IP not reachable

    - by andrewk
    FYI: I've read everything on Server Fault related to this question and have faced a different issue. Simply put, I've got one server (apache2) with a couple of sites on it. It currently has one IP. I'm trying to assign/add another IP to that server, so I can give each site a different IP for SSL purposes. I am not having any luck: the new IP is simply unreachable when I ping it. This is what I've got below; what am I doing wrong?

    ```
    auto lo
    iface lo inet loopback

    auto eth0 eth0:0 eth0:1
    iface eth0 inet static
        address 70.116.5.244
        netmask 255.255.255.0
        gateway 70.116.5.1

    # THE NEW IP
    iface eth0:0 inet static
        address 26.175.217.102
        netmask 255.255.255.0

    # PRIVATE IP
    iface eth0:1 inet static
        address 192.168.158.88
        netmask 255.255.128.0
    ```

    NOTE: these IPs are tweaked but relative. I've read many questions here that are 90% similar to this, but in most of them the IP actually responds; not in this case. Thanks.

    netstat -r output:

    ```
    Kernel IP routing table
    Destination      Gateway           Genmask          Flags  MSS Window  irtt  Iface
    default          gw-u6.linode.co   0.0.0.0          UG       0 0          0  eth0
    70.116.5.0       *                 255.255.255.0    U        0 0          0  eth0
    26.175.217.0     *                 255.255.255.0    U        0 0          0  eth0
    192.168.128.0    *                 255.255.128.0    U        0 0          0  eth0
    ```
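    A couple of hedged checks, given that the routing table already shows the 26.175.217.0/24 route: confirm the address is really bound, and test from the machine itself before pinging from outside. If the local side is fine, the provider (Linode here, judging by the gateway name) must also have the second IP assigned and routed to this host in its control panel, or the upstream router will never deliver packets for it.

    ```sh
    ip addr show eth0                        # should list 70.116.5.244 AND 26.175.217.102
    sudo ifdown eth0:0 && sudo ifup eth0:0   # re-apply the alias if it is missing
    ping -I 26.175.217.102 70.116.5.1        # can we reach the gateway from the new IP?
    ```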


  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search-engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as accessing hosts can vary. After some reading of the Apache config and other SF postings, I settled on the following design, which relies on restricting access to requests that arrive with specific domain names. The default virtual host is disabled in the Apache config as follows, relying on Apache's behaviour of using the first virtual host as the site default:

    ```apache
    <VirtualHost *:80>
        # Anything matching this should be silently ignored.
    </VirtualHost>
    <VirtualHost *:80>
        ServerName secretsiteone.com
        DocumentRoot /var/www/secretsiteone.com
    </VirtualHost>
    <VirtualHost *:80>
        ServerName secretsitetwo.com
        ...
    </VirtualHost>
    ```

    Then each team member can add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secrethostone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting SE bots, or could bots work around it? Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed in "How to disable default VirtualHost in apache2?", so the same question applies to that technique too. Also please note: the content is not highly secretive; the idea is not to devise something hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.
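    Since crawlers only fetch URLs they discover, and nothing public links to the secret hostnames, an empty catch-all vhost does keep them off the staged sites; the residual exposure is anyone (or any scanner) who learns or guesses a hostname, plus bots that crawl the bare IP. Making the catch-all reject explicitly closes the IP case, and basic auth on the named vhosts covers the rest (a sketch in 2.2 syntax):

    ```apache
    <VirtualHost *:80>
        ServerName catchall.invalid
        <Location />
            Order allow,deny
            Deny from all          # unknown Host headers and bare-IP requests get 403
        </Location>
    </VirtualHost>
    ```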


  • Apache 403 after configuring Varnish

    - by w0rldart
    I just don't know where else to look or what else to do. I keep getting a 403 error on all my vhosts after setting up Varnish 3.0.

    Apache log:

        [error] [client 127.0.0.1] client denied by server configuration: /etc/apache2/htdocs

    Headers:

        http://domain.com/
        GET / HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Cookie: __utma=106762181.277908140.1348005089.1354040972.1354058508.6; __utmz=106762181.1348005089.1.1.utmcsr=OTHERDOMAIN.com|utmccn=(referral)|utmcmd=referral|utmcct=/galerias/cocinas
        Cache-Control: max-age=0

        HTTP/1.1 403 Forbidden
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Type: text/html; charset=iso-8859-1
        X-Cacheable: YES
        Content-Length: 223
        Accept-Ranges: bytes
        Date: Sat, 01 Dec 2012 20:35:14 GMT
        X-Varnish: 1030961813 1030961811
        Age: 26
        Via: 1.1 varnish
        Connection: keep-alive
        X-Cache: HIT

    /etc/default/varnish:

        DAEMON_OPTS="-a ip.ip.ip.ip:80 \
                     -T localhost:6082 \
                     -f /etc/varnish/main.domain.vcl \
                     -S /etc/varnish/secret \
                     -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
        #-s malloc,256m"

    My VCL file: http://pastebin.com/axJ57kD8

    So, any ideas what I could be missing?

    Update: just so you know, the ports:

        NameVirtualHost *:8000
        Listen 8000

    and

        <VirtualHost 205.13.12.12:8000>
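    One likely mismatch is visible in the update itself: NameVirtualHost *:8000 declares name-based hosting on the wildcard address, but the vhost opens with <VirtualHost 205.13.12.12:8000>. The two address specifications must match for the vhost to participate in name-based selection; otherwise requests fall through to the main server, whose compiled-in default DocumentRoot (/etc/apache2/htdocs, exactly the path in the error) is denied, producing this 403 on every site. A sketch of the consistent form:

    ```apache
    NameVirtualHost *:8000
    Listen 8000

    <VirtualHost *:8000>
        ServerName domain.com
        DocumentRoot /var/www/domain.com
        <Directory /var/www/domain.com>
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
    ```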


  • Moodle serves on IP only - will not work with mod_proxy

    - by Jon H
    I'm trying to set up a Moodle server on an Ubuntu box which already serves Plone and Trac via Apache. In my Moodle config I have:

        $CFG->wwwroot = 'http://www.server-name.org/moodle';

    The configuration below works fine for the first two, but when I visit www.server-name.org/moodle I get:

        Incorrect access detected, this server may be accessed only through "http://xxx.xxx.xxx.xxx:8888/moodle" address, sorry

    It then forwards to the IP address, where Moodle functions fine. What am I missing to get the server-name approach working correctly? The Apache config follows:

    ```apache
    LoadModule transform_module /usr/lib/apache2/modules/mod_transform.so
    Listen 8080
    Listen 8888
    Include /etc/phpmyadmin/apache.conf

    <VirtualHost xxx.xxx.xxx.xxx:8080>
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPreserveHost On
        <Location />
            ProxyPass http://127.0.0.1:8082/
            ProxyPassReverse http://127.0.0.1:8082/
        </Location>
    </VirtualHost>

    <VirtualHost xxx.xxx.xxx.xxx:80>
        ServerName www.server-name.org
        ServerAlias server-name.org
        ProxyRequests Off
        FilterDeclare MyStyle RESOURCE
        FilterProvider MyStyle XSLT resp=Content-Type $text/html
        TransformOptions +ApacheFS +HTML
        TransformCache /theme.xsl /home/web/webapps/plone/theme.xsl
        TransformSet /theme.xsl
        FilterChain MyStyle
        ProxyPass /issue-tracker !
        ProxyPass /moodle !
        <Location /issue-tracker/login>
            AuthType Basic
            AuthName "Trac"
            AuthUserFile /home/web/webapps/plone/parts/trac/trac.htpasswd
            Require valid-user
        </Location>
        Alias /moodle /usr/share/moodle/
        <Directory /usr/share/moodle/>
            Options +FollowSymLinks
            AllowOverride None
            order allow,deny
            allow from all
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
        </Directory>
    </VirtualHost>
    ```
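    Moodle builds that "Incorrect access detected" message directly from $CFG->wwwroot, so the instance being served is still reading a config.php whose wwwroot is http://xxx.xxx.xxx.xxx:8888/moodle. It is worth verifying which config.php the aliased /usr/share/moodle actually loads; the Debian/Ubuntu package keeps it in /etc/moodle/config.php, which is an assumption here:

    ```sh
    grep -n "wwwroot" /usr/share/moodle/config.php /etc/moodle/config.php 2>/dev/null
    ```

    Once the right file is found, wwwroot must exactly match the public URL, and every access has to go through that URL:

    ```php
    $CFG->wwwroot = 'http://www.server-name.org/moodle';
    ```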


  • Apache fails to start after config modifications: syntax error, cannot load php5apache2_2.dll into server

    - by Sandeepan Nath
    I am stuck again with Apache setup, guys. I am working on a Windows 7 system. I copied the working PHP 5 installation directory from teammates, and copied the necessary .dll files from inside the PHP 5 installation folder (as they were in the teammates' working setup) to my windows/system32/. The Apache server started successfully with the default Apache config file, and I was able to access localhost in a browser, but PHP code was not being parsed. I noticed there was no line like the following in the Apache config file:

    ```apache
    # PHP5 module
    LoadModule php5_module D:/php5/php5apache2_2.dll
    ```

    If I add this line, the Apache server fails to start. Running the configuration test gives the following error:

        httpd.exe: Syntax error on line 60 of C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/httpd.conf: Cannot load D:/php5/php5apache2_2.dll into server: The specified procedure could not be found.

    But the dll file is there in the specified location, and I have given the current system user all permissions on the PHP 5 installation directory. The same line also appears in the Apache error log, though I am not sure exactly when entries are written to the log file. I am confused: are log entries skipped if I have the log file open for reading? I could not observe a pattern in when entries are made; I saw some being made and some not. Oh, why is Apache setup always such a headache?
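    "The specified procedure could not be found" when loading php5apache2_2.dll usually means a build mismatch rather than a missing file: Apache 2.2 needs the thread-safe PHP build compiled with the same Visual C++ runtime as Apache itself (VC6 versus VC9 in this era), and the php5ts.dll that Windows resolves first must come from that same PHP. Copying PHP DLLs into windows/system32 is the classic way to break this, because a stale php5ts.dll there shadows the one in D:/php5. A hedged checklist: remove the PHP DLLs from C:/Windows/System32, add D:/php5 to PATH instead, confirm the PHP package is the Thread Safe variant matching Apache's compiler, and keep the module lines minimal:

    ```apache
    LoadModule php5_module "D:/php5/php5apache2_2.dll"
    PHPIniDir "D:/php5"
    AddHandler application/x-httpd-php .php
    ```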


  • .htaccess authorization requiring username/password for every resource

    - by webworm
    I am using Apache 2 on Ubuntu and am having some weird user-authorization issues. I am using .htaccess to control access to my directories. I have many users, grouped into user groups that are defined in a group file; I then use .htaccess within each directory to define which users have access to the directory and which do not. Here is an example .htaccess file:

    ```apache
    AuthUserFile /var/local/.htpasswd
    AuthGroupFile /var/local/groups
    AuthName "Username and Password Required"
    AuthType Basic
    require group design admin
    ```

    Everything is working, with one exception. I added a new user to one of my groups, and though they can gain access to the directory, they are prompted for a username and password for every resource (i.e. every image and CSS file). After a while I can just keep selecting "cancel" and I will get a page of bare HTML with no images or CSS. I would think the browser would just cache the username/password. It seems to be working well for the other users. Any thoughts?
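    Browsers resend cached Basic credentials automatically, so a re-prompt on every request means Apache keeps answering 401; that is, the credentials are failing rather than being forgotten. The usual culprits are the user existing in the group file but not in the password file, or a second .htaccess deeper in the tree pointing at a different AuthUserFile. A couple of hedged checks (paths from the question, the username invented):

    ```sh
    grep newuser /var/local/.htpasswd           # must have a password entry
    grep newuser /var/local/groups              # e.g. "design: alice bob newuser"
    sudo htpasswd /var/local/.htpasswd newuser  # (re)set the password if absent
    tail -f /var/log/apache2/error.log          # watch for "user newuser not found"
    ```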

