Search Results

Search found 8020 results on 321 pages for 'webcenter sites'.

  • IE9 will not navigate to some websites but Google Chrome can

    - by Storchburp
    Was recommended by a friend to ask for help here. I am using Internet Explorer 9. As of two days ago I was suddenly unable to navigate to any part of the following websites: www.computerandvideogames.com www.deviantart.com www.cnet.com However I can still access all of them normally through Google Chrome. I am on a college network but these sites are also accessible through fixed terminals provided by the school and are definitely not blocked. I do not know of any other sites similarly affected. There is no popup, no error message, no diversion to a site telling me I can't access / am blocked etc. I can be on www.google.com and attempt to access these sites through the URL or google search, and my cursor will just show the little moving blue wheel next to the arrow for a couple of seconds, and the page displayed on my browser will not change; ie. not navigating at all. Running antivirus software, changing proxy settings in IE, clearing cookies, unplugging/plugging in computer, restarting PC etc have not changed the situation. Any assistance or advice would be greatly appreciated. Thanks in advance.

    Read the article

  • Unable to create new virtual hosts using MAMP with OSX Mavericks

    - by user2961676
    I have been using virtual hosts on my Mac with MAMP, which has worked up until now. I have 2 working virtual hosts that I created in the same manner, which still work, but for some reason I am unable to create any new virtual hosts. When I attempt to go to a newly created virtual host in my browser it generates a 404 Not Found error. The only thing I can think of is that I updated OSX to Mavericks, but I'm not sure what that would have done, or why the old virtual hosts still work. See the excerpt below from the vhosts.conf file. So, franklin.dev works, jamiepjones.dev works, but sheilahixson.dev does not. <VirtualHost *:80> DocumentRoot "/Users/jamiejones/Sites/franklin" ServerName franklin.dev ErrorLog "logs/franlkin.dev-error_log" CustomLog "logs/franklin.dev-access_log" common </VirtualHost> <VirtualHost *:80> DocumentRoot "/Users/jamiejones/Sites/jamiepjones-wp" ServerName jamiepjones.dev ErrorLog "logs/jamiepjones.dev-error_log" CustomLog "logs/jamiepjones.dev-access_log" common </VirtualHost> <VirtualHost *:80> DocumentRoot "/Users/jamiejones/Sites/sheilahixson” ServerName sheilahixson.dev ServerAlias www.sheilahixson.dev ErrorLog "logs/sheilahixson.dev-error_log" CustomLog "logs/sheilahixson.dev-access_log" common </VirtualHost> and hosts file: 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 127.0.0.1 jamies-MacBook-Pro.Belkin # MAMP PRO - Do NOT remove this entry! 127.0.0.1 hixson # MAMP PRO - Do NOT remove this entry! 127.0.0.1 franklin.dev 127.0.0.1 jamiepjones.dev 127.0.0.1 sheilahixson.dev Please help!
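
    One detail that stands out in the pasted config, offered as a guess rather than a confirmed fix: the sheilahixson DocumentRoot ends with a curly (smart) quote instead of a straight quote, which can stop Apache from parsing that VirtualHost at all. A minimal sketch of the corrected block (assuming the path itself is right), followed by a restart of the MAMP servers:

        <VirtualHost *:80>
            DocumentRoot "/Users/jamiejones/Sites/sheilahixson"
            ServerName sheilahixson.dev
            ServerAlias www.sheilahixson.dev
            ErrorLog "logs/sheilahixson.dev-error_log"
            CustomLog "logs/sheilahixson.dev-access_log" common
        </VirtualHost>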

    Read the article

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load...

    - by redguy..pl
    I am hosting several (~30) different sites on one server with apache2+fastcgi+suexec+php5. The sites have different loads and different execution times for their scripts (some of them process a request for 5-7 seconds, some in under 1 second). Sometimes when a single site receives very high load (all PHP instances of this site are created and in use) the whole apache server hangs. Apache (worker mpm) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has high load and quickly hits its process limits... A restart of apache solves the problem... config: FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match (earlier I had the default -listen-queue-depth = 100, but it didn't change anything...) Any suggestions? Another question - how is this listen queue implemented? Is it one queue for the whole apache server, or a unique queue for every defined php application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces further requests, but only for this one site.. Other sites should keep working properly...

    Read the article

  • Obey server_name in Nginx

    - by pascal
    I want nginx/0.7.6 (on debian, i.e. with config files in /etc/nginx/sites-enabled/) to serve a site on exactly one subdomain (indicated by the Host header) and nothing on all others. But it staunchly ignores my server_name settings?! In sites-enabled/sub.domain: server { listen 80; server_name sub.domain; location / { … } } Adding a sites-enabled/00-default with server { listen 80; return 444; } Does nothing (I guess it just matches requests with no Host?) server { listen 80; server_name *.domain; return 444; } Does prevent Host: domain requests from giving results for Host: sub.domain, but still treats Host: arbitrary as Host: sub-domain. The, to my eyes, obvious solution isn't accepted: server { listen 80; server_name *; return 444; } Neither is server { listen 80 default_server; return 444; } Since order seems to be important: renaming 00-default to zz-default, which, if sorted, places it last, doesn't change anything. But debian's main config just includes *, so I guess they could be included in some arbitrary file-system defined order? This returns no content when Host: is not sub.domain as expected, but still returns the content when Host is completely missing. I thought the first block should handle exactly that case!? Is it because it's the first block? server { listen 80; return 444; } server { listen 80; server_name ~^.*$; return 444; }
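
    For what it's worth, a host-less request is handled by the default server for that listen socket, which is simply the first server block defined for it unless one is explicitly marked default - that would explain why the sub.domain block still answers when Host is missing. A minimal catch-all sketch (on 0.7.x the listen parameter is spelled default; newer versions accept default_server):

        server {
            listen 80 default;   # catch-all for unmatched and missing Host headers
            server_name _;       # "_" is just a placeholder that never matches a real name
            return 444;          # close the connection without a response
        }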

    Read the article

  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking so I had to do some research. Here is what I have configured: username.conf <Directory "/Users/me/Sites/"> Options Indexes MultiViews FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> httpd.conf LoadModule rewrite_module libexec/apache2/mod_rewrite.so DocumentRoot "/Users/me/Sites" <Directory /> Options Indexes MultiViews FollowSymLinks AllowOverride All Order deny,allow Allow from all </Directory> <Directory "Users/me/Sites"> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> .htaccess <IfModule mod_rewrite.so> RewriteEngine on RewriteBase /phplearning/ADVANCED/ch02/ # Redirect certain paths to index.php: RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1 RewriteLog "/var/log/apache/rewrite.log" RewriteLogLevel 2 </IfModule> Nothing has worked and it won't even log to the rewrite.log file. What have I done wrong? FYI even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified the mod_rewrite module is running. I am really angry.
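
    One thing worth checking, offered as a hedged guess: <IfModule> tests the module's source-file identifier (mod_rewrite.c) or its module name (rewrite_module), not the .so filename, so a block wrapped in <IfModule mod_rewrite.so> is silently skipped - which would match the symptom of nothing happening and nothing being logged. Also, RewriteLog and RewriteLogLevel are only allowed in the server config or a virtual host, not in .htaccess. A trimmed sketch of the .htaccess under those assumptions:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
        </IfModule>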

    Read the article

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting vhost configuration to work correctly. Below is the vhost conf <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /hosting/Client/site.com/www ServerName site.com ServerAlias www.site.com <Directory "/hosting/Client/site.com/www"> Options +Indexes +FollowSymLinks Order allow,deny Allow from all </Directory> DirectoryIndex index.html </VirtualHost> There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 forbidden error. The www-data group is the group on the www folder, which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default, "It works!" page. So I know that it's working. The error log says "client denied by server configuration". apache2ctl -S dump: nick@server:~$ apache2ctl -S /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted) VirtualHost configuration: *:80 is a NameVirtualHost default server site.com (/etc/apache2/sites-enabled/site.com.conf:1) port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www" Main ErrorLog: "/var/log/apache2/error.log" Mutex watchdog-callback: using_defaults Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults PidFile: "/var/run/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG Define: ENALBLE_USR_LIB_CGI_BIN User: name="www-data" id=33 not_used Group: name="www-data" id=33 not_used Output of namei -mo /hosting/Client/site/www/index.html f: /hosting/Client/site.com/www/index.html drwxr-xr-x root root / drwxr-xr-x root root hosting drwxr-xr-x root root Client drwxr-xr-x nick www-data site.com drwxr-xr-x nick www-data www -rw-rwxr-x nick www-data index.html
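
    A hedged guess, since "client denied by server configuration" is the classic symptom: Apache 2.4 replaced the 2.2-style Order/Allow/Deny access control with Require, and the old directives only take effect if mod_access_compat is loaded. A sketch of the Directory block using the 2.4 syntax:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            Require all granted
        </Directory>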

    Read the article

  • Getting Windows (VMware) to load from OSX's localhost without an Internet Connection

    - by Jonah Goldstein
    I'm using MAMP to host my local sites, and VirtualHostX so that I can access sites during local development via a convenient URL like mysite.dev. I'm also running Windows XP via VirtualBox, and it would be great to be able to load up any of my local sites within Windows while offline, since I'm often working on the move without a connection. I know that I can append my IP and a nice domain name to the hosts file in C:/WINDOWS/system32/drivers/etc ... and I can find my IP simply through the terminal with "ifconfig" while I'm online. The problem is that when I'm not online, there's no IP. Even if there is an IP (when I have a connection), I still have to grab it and update the Windows hosts file all the time, since I'm developing from a laptop and get a new IP constantly. I found a tutorial where the author is able to get a permanent IP. He uses VMware Fusion as his virtual machine, which is the only difference between his setup and mine. By running the terminal command "ifconfig vmnet1" he gets a secret IP the virtual machine uses to talk to OSX, and that doesn't change - which is awesome. I'm assuming it exists even if he's offline. His tutorial is here: http://bit.ly/U2lq It would be pretty fantabulous if I could replicate this with VirtualBox. Anyone have ideas? Thanks :)
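
    VirtualBox's counterpart to VMware's vmnet1 is a host-only network adapter: it gives the Mac a private interface (vboxnet0, 192.168.56.1 by default - these defaults are an assumption) that exists with or without an Internet connection. A rough sketch of the setup:

        # On the Mac: create a host-only interface and check its address
        VBoxManage hostonlyif create
        ifconfig vboxnet0

        # In the VM's settings, attach a network adapter of type "Host-only Adapter" (vboxnet0)

        # In the Windows XP guest, C:\WINDOWS\system32\drivers\etc\hosts:
        192.168.56.1    mysite.dev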

    Read the article

  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server and included this file in the nginx.conf file. However in my access logs for the sites I still see these IP addresses showing up. Do I need to include the black list in each site's conf instead of the global conf for Nginx? Here is my nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; # Load virtual host configuration files. include /etc/nginx/sites-enabled/*; # BLOCK SPAMMERS IP ADDRESSES include /etc/nginx/conf.d/blockips.conf; } blockips.conf deny 58.218.199.250; access.log still shows this IP address. 58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-" What am I doing incorrectly?
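
    One observation, hedged: the 403 in that access log line suggests the deny is already taking effect - nginx still writes blocked requests to the access log, it just answers them with 403. If the goal is to drop those clients outright, one option (a sketch, not part of the original setup) is a geo map in the http block plus a return 444 in each server block:

        # conf.d/blockips.conf (http context)
        geo $blocked_ip {
            default         0;
            58.218.199.250  1;
        }

        # inside each server { } block
        if ($blocked_ip) {
            return 444;    # close the connection without sending a response
        }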

    Read the article

  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing VirtualHost configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName test1 DocumentRoot /var/www/test1 <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/test1/> Options -Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> All the other sites have a similar VirtualHost definition. I have enabled the site (the symlink appears in sites-enabled) and I have restarted apache. However, when I visit localhost/test1, I get a 404 Error. My error log shows the following message: [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1 I don't know why I get the double test1/test1 in the error logs. I'm trying to find the right VirtualHost setup which will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
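
    A hedged reading of the error: name-based virtual hosts are selected by the Host header, not by a path under localhost, so http://localhost/test1 lands in the first vhost (DocumentRoot /var/www/test1) and then looks for a /test1 sub-path inside it - hence the doubled /var/www/test1/test1. One sketch is to browse each site by its ServerName and map those names locally:

        # /etc/hosts on the machine running the browser
        127.0.0.1   test1 test2 test3

        # then visit http://test1/ , http://test2/ and http://test3/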

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers, site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the win2k3 servers. I understand what the message means, no need to send me links to the technet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed... Event Type: Warning Event Source: NTDS Replication Event Category: Backup Event ID: 2089 Date: 3/28/2010 Time: 9:25:27 AM User: NT AUTHORITY\ANONYMOUS LOGON Computer: RedactedName Description: This directory partition has not been backed up since at least the following number of days. Directory partition: DC=MyDomain,DC=com 'Backup latency interval' (days): 30 It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key. 'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • NGINX Remove index.php /index.php/something/more/ to /something/more

    - by Gaston
    I'm trying to get clean URLs in NGINX with the DooPHP framework, i.e. turn http://example.com/index.php/something/more/ into http://example.com/something/more/ - I want to strip the "index.php" from the URL if someone requests the first form, like a permanent redirect. How do I set this up in my NGINX config? Thanks. [Update: actual nginx config] server { listen 80; server_name vip.example.com; rewrite ^/(.*) https://vip.example.com/$1 permanent; } server { listen 443; server_name vip.example.com; error_page 404 /vip.example.com/404.html; error_page 403 /vip.example.com/403.html; error_page 401 /vip.example.com/401.html; location /vip.example.com { root /sites/errors; } ssl on; ssl_certificate /etc/nginx/config/server.csr; ssl_certificate_key /etc/nginx/config/server.sky; if (!-e $request_filename){ rewrite /.* /index.php; } location / { auth_basic "example Team Access"; auth_basic_user_file config/htpasswd; root /sites/vip.example.com; index index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } }
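
    A sketch of one way to do it (not tested against this exact setup): a server-level rewrite in the SSL block, placed before the existing if (!-e $request_filename) rewrite, that 301s any /index.php/... request to the bare path; DooPHP then handles the clean URL as it already does:

        # 301 /index.php/something/more/  ->  /something/more/
        rewrite ^/index\.php/(.*)$ /$1 permanent;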

    Read the article

  • IIS 7 doesn't do a 301 redirect

    - by rohde
    Hi there! I have just migrated some sites to IIS7 from IIS6, and am experiencing problems with one of the sites. When a request comes in for the site (www.ourdomain.com/site1/) with a trailing slash everything is fine. But if the trailing slash is left out (www.ourdomain.com/site1), then the request fails. Apparently it doesn't do a 301 redirect on the slash-less URL, resulting in ASP.NET throwing an exception. This is not happening with the other sites at the same domain. What can be the cause of this? EDIT: The exception I get is this: System.Web.HttpException: Failed to Execute URL. at System.Web.Hosting. ISAPIWorkerRequestInProcForIIS6.BeginExecuteUrl(String url, String method, String childHeaders, Boolean sendHeaders, Boolean addUserIndo, IntPtr token, String name, String authType, Byte[] entity, AsyncCallback cb, Object state) at System.Web.HttpResponse.BeginExecuteUrlForEntireResponse(String pathOverride, NameValueCollection requestHeaders, AsyncCallback cb, Object state) at System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback, Object state) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
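
    As a workaround sketch rather than a root-cause fix: if the URL Rewrite module is installed on the IIS7 box, a rule in the root site's web.config can issue the missing 301 for the slash-less form (the rule name and pattern below are illustrative, not from the original post):

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- hypothetical rule: 301 /site1 -> /site1/ (requires the URL Rewrite module) -->
                <rule name="Add trailing slash to site1" stopProcessing="true">
                  <match url="^site1$" />
                  <action type="Redirect" url="site1/" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>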

    Read the article

  • Cisco ASA Site-to-Site VPN Dropping

    - by ScottAdair
    I have three sites, Toronto (1.1.1.1), Mississauga (2.2.2.2) and San Francisco (3.3.3.3). All three sites have an ASA 5520. The sites are connected together with site-to-site VPN links between each pair of locations. My issue is that the tunnel between Toronto and San Francisco is very unstable, dropping every 40 to 60 minutes. The tunnel between Toronto and Mississauga (which is configured in the same manner) is fine with no drops. I also noticed that my pings will drop but the ASA thinks that the tunnel is still up and running. Here is the configuration of the tunnel. Toronto (1.1.1.1) crypto map Outside_map 1 match address Outside_cryptomap crypto map Outside_map 1 set peer 3.3.3.3 crypto map Outside_map 1 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map 1 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_3.3.3.3 internal group-policy GroupPolicy_3.3.3.3 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 3.3.3.3 type ipsec-l2l tunnel-group 3.3.3.3 general-attributes default-group-policy GroupPolicy_3.3.3.3 tunnel-group 3.3.3.3 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** San Francisco (3.3.3.3) crypto map Outside_map0 2 match address Outside_cryptomap_1 crypto map Outside_map0 2 set peer 1.1.1.1 crypto map Outside_map0 2 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map0 2 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_1.1.1.1 internal group-policy GroupPolicy_1.1.1.1 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 1.1.1.1 type ipsec-l2l tunnel-group 1.1.1.1 general-attributes default-group-policy GroupPolicy_1.1.1.1 tunnel-group 1.1.1.1 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** I'm at a loss. Any ideas?

    Read the article

  • IIS7 Binary Stream Error

    - by WDuffy
    I'm struggling to understand the problem I'm seeing so please accept my apology if the question is vague. I'm running a classic asp app in IIS7 and everything seems to work fine except for one issue that has me stumped. Basically, files can be downloaded from the server which is done using the sendBinary method of Persits ASP Upload component. This component works fine for uploading etc it's just the downloading I have a problem. The strange thing is, I cannot have a pure asp page that serves the binary file. Everything works fine in II6, but in II7 there is a strange problem. For example this does not work. <% SET objUpload = server.createObject("Persits.Upload") objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true SET objUpload = NOTHING %> However, if I put anything in front of the asp code the file is served fine. serve this <% SET objUpload = server.createObject("Persits.Upload") objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true SET objUpload = NOTHING %> I can also write something to the response stream first and it works fine. <% Response.Write("server this") SET objUpload = server.createObject("Persits.Upload") objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true SET objUpload = NOTHING %> Does anyone have any ideas of what could be causing this or has anyone ran into a similar situation? I'm sure it has something to do with the setup in IIS 7.

    Read the article

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open or commercial) that will allow remote access to customer sites for tech support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently end up using anything from RDP on public IP, to various VPNs that clients happen to have, to webex-type sessions that require lots of interaction from both sides to get things working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes messing around trying to get a connection up. There are multiple customer sites, which should NOT have access to each other. At each site, there is anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Support connection to machines even if they're behind a firewall/router with no public IP Be able to selectively allow/deny access from customer site. Customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites) Support multiple users from our end If not a VPN connection (where RDP could be used over top), should support: Remote desktop access, including copy/paste File transfers Preferably would have some way to list all remote systems, showing online/offline. Anyone have any suggestions?

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is, /foo/bar should look for /foo/bar.php or /foo/bar/index.php /foo/ should look for /foo/index.php Is there a simple way to achieve this with Nginx or should I stick with proxying to Apache?
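
    A common nginx pattern for this (a sketch, assuming the fastcgi/php-fpm backend already used for the extension-full URLs) is try_files with a named-location fallback that re-appends .php, plus index index.php for the directory case:

        index index.php;

        location / {
            # literal file, then directory index, then fall back to name.php
            try_files $uri $uri/ @extensionless;
        }

        location @extensionless {
            rewrite ^(.*)$ $1.php last;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;   # assumption: local FastCGI/php-fpm on port 9000
        }

    For the purely static sites the fallback would rewrite to $1.html instead of $1.php.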

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here's the contents of the virtualhost directive files, let me know if I need to post more. default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue) NameVirtualHost *:80 ServerName emp <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName emp DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> billmed <VirtualHost *:80> ServerName billmed.emp ServerRoot /home/empression/Projects/billmed/web/httpdocs <Directory "/home/empression/Projects/billmed/web/httpdocs"> Order Allow,Deny Allow from All </Directory> </VirtualHost> Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom tld (emp), but progress has been pretty slow.
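
    One thing that stands out, offered as a hedged guess: the billmed vhost uses ServerRoot (a global directive for Apache's own installation root) where DocumentRoot is presumably intended, so that vhost has no document root of its own and falls back to serving /var/www. A sketch of the corrected file:

        <VirtualHost *:80>
            ServerName billmed.emp
            DocumentRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>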

    Read the article

  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2k3, and the remote servers are Windows Server 2k3.When we have new versions of the software, I upload this new software to a specific directory and rename it; each time the clients boot, they pull their software from that specific directory. With just a few sites, it's no problem for me to RDP in and copy the files over. As the number grows, this will quickly become quite unwieldy. So I'm thinking that WebDAV would be part of a solution, so that I could simply push the newest version to our server (Windows SBS Server 2003) and make it available to the sites to grab. However, on the remote server side, what are some suggestions for automating the download? I only want the servers to download the files during downtime (between 3 AM and 9 AM), and I only want them to download if there is a new version available. I had thought of writing a program that checked the files on the WebDAV server at a regular interval, compared a hash of the current software to a hash of the software on the server, and only downloaded if they were different, but I'm wondering if there is something I am unaware of that can automate the process.

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, for my intellectual curiosity. I read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, the DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from the requests for users all over the world? Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites? Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and javascript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action, or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.

    Read the article

  • Site resolves fine without "www", "www" creates database error

    - by PatrickS
    Working with BOA (Barracuda / Octopus / Aegir), I've installed a few Drupal sites without any problems, following the same process for all. BOA is running on Nginx. All sites go through Cloudflare's network, where I set the same DNS settings: an A record for example.com pointing to IPADDRESS, an A record for www pointing to IPADDRESS, and the nameservers of each domain pointing to Cloudflare's respective nameservers. It all works, except for one site that works perfectly without "www" but with "www" returns the typical database error Drupal shows when it can't find the site's database: Site off-line The site is currently not available due to technical problems. Please try again later. Thank you for your understanding. If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider. In BOA, all sites have the same alias, basically a symlink, redirecting like this... www.example.com - example.com
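
    If the broken site runs Drupal 7, one hedged thing to compare against the working sites is the multisite mapping: the www hostname has to resolve to the same site directory as the bare domain, either via the Aegir alias/symlink mentioned above or via an entry in sites/sites.php (the directory name example.com below is an assumption):

        <?php
        // sites/sites.php - point the www hostname at the existing site directory
        $sites['www.example.com'] = 'example.com';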

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean urls and it's mostly working except for some weird redirects. If you have two sites where one is a substring of the other then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this: RewriteCond %{REQUEST_URI} ^/drupal RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend] RewriteCond %{REQUEST_URI} ^/drupaltest RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend] /drupaltest will match the /drupal line and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end of string character ($) at the end of each rewrite condition then it will always match on the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in though because the query string is appended to the url so just the base url will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean urls should also fix the problem but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean url rewriting but that also seems suboptimal since it will generate a higher load on the server and the server is intended to host the majority of the university's external facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
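
    One mod_rewrite tweak that may be worth trying (a sketch, not a tested fix for this exact setup): anchor each prefix with a (/|$) boundary so /drupal matches /drupal and /drupal/..., but not /drupaltest; since REQUEST_URI does not include the query string, this should also survive logged-in URLs with query strings appended:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]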

    Read the article

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons: They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found. My php.ini memory limit is: 256M which is very high. It should be 32M or 64M. Server is reaching Max Clients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down) I have some Wordpress sites with plugins getting errors like: PHP Warning: pack(): Type H: illegal hex digit G in... PHP Fatal error: Cannot use object of type stdClass as array in... PHP Fatal error: Maximum execution time of 30 seconds exceeded in... PHP Fatal error: Call to undefined function file_exists() in... PHP Parse error: syntax error, unexpected '<' I know that's a lot, but I really am at wits end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server: RAM: 2048MB CPU Shares: 40 Primary Disk: 50GB Data Transfer: 75GB Port Speed: 5Mbps

    Read the article

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60MBPS. Site B: Chicago, 100MBPS. ICMP pings: Packets: Sent = 176, Received = 176, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 74ms, Maximum = 94ms, Average = 75ms. File transfers between the sites never go past ~7MBPS, while Windows Update downloads run at 60MBPS+. Site to site: IPSec VPN using two Cisco 5520's, with CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why it is performing so slowly when transferring between the two sites. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7MBPS. When the WAN was first set up, I was able to get transfers at 50-60MBPS, which is what is expected given the 60MBPS WAN connection at Site A. Then a few days later, I was not able to get anything going faster than ~7MBPS. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60MBPS. Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN

    Read the article

  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301's which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, are mirrored from the old to the new server, so we can test if all sites work properly. I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full url, including protocol and hostname. Redirect 301 /directory/ /target/ # Not Valid Redirect 301 /main.html / # Not Valid Redirect 301 /directory/ http://www.example.com/target/ # Valid Redirect 301 /main.html http://www.example.com/ # Valid This contradicts the Apache documentation for Apache 2.2, which states: The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added. Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URL's, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
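
    One workaround the post already mentions, sketched here with the post's example paths: converting the statements to mod_rewrite rules, which accept a URL-path target and build the Location header against the requesting host (in .htaccess, per-directory patterns are matched without the leading slash):

        RewriteEngine on
        RewriteRule ^main\.html$ / [R=301,L]
        RewriteRule ^directory/(.*)$ /target/$1 [R=301,L]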

    Read the article
