Search Results

Search found 7555 results on 303 pages for 'sites'.


  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60 Mbps. Site B: Chicago, 100 Mbps.

    ICMP pings:

        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss)
        Approximate round trip times in milli-seconds:
        Minimum = 74ms, Maximum = 94ms, Average = 75ms

    File transfers between the sites never go past ~7 Mbps, while Windows Update downloads run at 60 Mbps+. The site-to-site link is an IPSec VPN between two Cisco ASA 5520s, with CPU at 3-4% and plenty of memory to spare.

    The latency between the two sites is very acceptable, so I can't see why transfers between them are so slow. Any type of transfer (FTP, HTTP, Windows file shares) never goes above ~7 Mbps. When the WAN was first set up I was getting transfers at 50-60 Mbps, which is what I'd expect given Site A's 60 Mbps connection. A few days later I could no longer get anything faster than ~7 Mbps.

    Is an upstream router between Denver and Chicago causing this? I want to rule out our own setup, since Windows Update downloads are blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60 Mbps.

    Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
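    A quick ceiling check worth doing here: a single TCP stream is limited to roughly window size / round-trip time, and the classic 64 KB window with no window scaling lands almost exactly on the observed ~7 Mbps at 75 ms. This is a sketch of the arithmetic, not a diagnosis of this particular link:

        # Max single-stream TCP throughput ~ receive window / RTT
        awk 'BEGIN { printf "%.1f Mbps\n", 65535 / 0.075 * 8 / 1e6 }'
        # prints 7.0 Mbps; with window scaling negotiated the ceiling rises accordingly

    If the numbers line up with what you see, capturing the SYN/SYN-ACK on both sides (e.g. with tcpdump) to check whether the window scale option survives the VPN path would be a sensible next step.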

    Read the article

  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301s which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server, so we can test whether all sites work properly.

    I traced the problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                           # Not Valid
        Redirect 301 /main.html /                                   # Not Valid
        Redirect 301 /directory/ http://www.example.com/target/     # Valid
        Redirect 301 /main.html http://www.example.com/              # Valid

    This contradicts the Apache documentation for Apache 2.2, which states:

        The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added.

    Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
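    If the mod_rewrite workaround ends up being the practical fix, the conversion is mechanical. A sketch, assuming the rules live in the same .htaccess and with example.com standing in for the real host:

        # mod_rewrite equivalents of the two failing Redirect lines
        printf '%s\n' \
          'RewriteEngine On' \
          'RewriteRule ^directory/(.*)$ http://www.example.com/target/$1 [R=301,L]' \
          'RewriteRule ^main\.html$ http://www.example.com/ [R=301,L]' >> .htaccess
        # then confirm the 301 points where expected
        curl -sI http://www.example.com/main.html | grep -i '^Location'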

    Read the article

  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests. A blank response would be returned - no headers, no content, nothing.

    I found a related question on Stack Overflow which seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue:

        <?php
        class A {
            public function __construct() {
                new B;
            }
        }

        class B {
            public function __construct() {
                new A;
            }
        }

        new A;
        print 'Loaded Class A';
        ?>

    This triggered the problem - the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No CPU spike as you'd expect, no wait, just an almost instant zero response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted:

        Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000]
        Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000]

    All extremely odd and I'm not sure what it's pointing to or what I should do to debug it further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that - it's just an example. Any help is much appreciated!
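    A minimal way to get more out of the CLI reproduction than "Segmentation fault", assuming gdb can be installed on the box (a sketch, not specific to this crash):

        # Run the failing script under gdb and capture where it blows up
        gdb -q --args php -f test.php
        (gdb) run
        (gdb) bt                      # backtrace at the crash site
        # Infinite constructor recursion exhausts the C stack; as a quick test,
        # a larger stack sometimes changes the symptom:
        ulimit -s unlimited && php -f test.php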

    Read the article

  • Why am I getting this error in the logs?

    - by Matt
    Ok, so I just started a new Ubuntu Server 11.10 instance and added the vhost, and all seems ok... I also restarted Apache, but when I visit the server IP (http://23.21.197.126/) in a browser I get a blank page. When I tail the log:

        tail -f /var/log/apache2/error.log
        [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs
        [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs

    But my only file in sites-enabled is this:

        <VirtualHost 23.21.197.126:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            # ServerAlias
            DocumentRoot /srv/crm/current/public
            ErrorLog /srv/crm/logs/error.log
            <Directory "/srv/crm/current/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Is there something I am missing? The document root should be /srv/crm/current/public and not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this?

    UPDATE

        sudo apache2ctl -S
        VirtualHost configuration:
        23.21.197.126:80 is a NameVirtualHost
            default server logicxl.com (/etc/apache2/sites-enabled/crm:1)
            port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1)
        Syntax OK

    UPDATE

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            DocumentRoot /srv/crm/current/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /srv/crm/current/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
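    A quick way to see which vhost Apache actually selects for a given request, assuming curl is available (a sketch; the hostname and IP are the ones from the post):

        # Does a bare-IP request fall through to the default server (and its htdocs)?
        curl -sI http://23.21.197.126/ | head -5
        # Does a request carrying the ServerName hit the intended DocumentRoot?
        curl -sI -H 'Host: logicxl.com' http://23.21.197.126/ | head -5
        # Confirm which config files Apache parsed, and in what order
        sudo apache2ctl -S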

    Read the article

  • How can I prevent Apache from asking for credentials on a non-SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication for any directory with "secured" as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI.

    The problem is that the http (80) site asks for credentials first, and then the https (443) site asks for credentials again. I would like to prevent the http site from asking, and thus avoid the potential for someone entering credentials and having them sent in clear text.

    I know I could move the DirectoryMatch down to the specific site and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites.

    Here are the pertinent declarations.

    httpd.conf (all sites):

        <DirectoryMatch "_secured_">
            AuthType Basic
            AuthName "+ + + Restrcted Area on Server + + +"
            AuthUserFile /home/websvr/.auth/std.auth
            Require valid-user
        </DirectoryMatch>

    site.conf (specific to individual site):

        <DirectoryMatch "_secured_">
            RewriteEngine On
            RewriteRule .*(_secured_.*) https://site.com/$1
        </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the http site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to None - I prefer to handle things in the config files instead of .htaccess.

    Read the article

  • IIS 6 Windows Authentication in ASP.Net app fails

    - by Kjensen
    I am trying to install an ASP.Net app on an IIS6 webserver. The site requires the user to authenticate with Windows, and this works on several other apps on the same server. In IIS I have enabled anonymous access and Windows authentication.

    In web.config, authentication is set to:

        <authentication mode="Windows"/>

    and authorization:

        <authorization>
            <allow roles="Users"/>
            <deny users="*"/>
        </authorization>

    i.e. allow all users in role "Users" and deny everybody else. This is the approach that is working with several other apps on the same server.

    If I run the site, I am prompted for username and password. If I remove the line:

        <deny users="*"/>

    I can access the site and everything works - but the user credentials are not passed to the site (Page.User.Identity.Name returns a blank string in ASP.Net).

    The site has identical (inherited) file permissions to other working sites on the server. The only difference in authentication/authorization between this site and the other working sites is that this one runs ASP.Net 4 (but there are other working ASP.Net 4 sites on the server as well). What am I missing here? Where should I look?

    Read the article

  • Cannot Start Passenger 3.0.18 Using Mountain Lion (OS X Server) and RVM

    - by LightBe Corp
    I recently did a clean install of Mountain Lion on my Mac Mini Server. I installed Passenger 3.0.18 as a gem according to the directions on http://www.phusionpassenger.com, with no errors that I could see:

        rvmsudo gem install passenger-enterprise-server-3.0.18.gem
        rvmsudo passenger-install-apache2-module

    Here are my entries in /etc/apache2/httpd.conf, with my username masked:

        LoadModule passenger_module /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18
        PassengerRuby /Users/username/.rvm/wrappers/ruby-1.9.3-p327/ruby

    I uncommented the following statement:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Here is a sample virtual host entry. I have three of them in the file.

        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias mydomain.com
            PassengerAppRoot /Users/username/Sites/myfolder/
            DocumentRoot /Users/username/Sites/myfolder/public
            <Directory /Users/username/Sites/myfolder/public>
                Allow from all
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I have restarted Apache several times. Here is information from my server:

        [~]$ ps -ef | grep Passenger
          501 18804   303   0 12:39PM ttys000    0:00.00 grep Passenger
        [~]$ rvmsudo passenger-status
        Password:
        **ERROR: Phusion Passenger doesn't seem to be running.**
        [~]$ rvmsudo passenger-config --version
        3.0.18

    I have searched for this online and was surprised there is not much on this specific error, even though, from my understanding, Passenger has been around for a few years. I have posted the issue on the Phusion Passenger Google Group but have not heard anything. Any help would be appreciated, the sooner the better LOL. Seriously, I need to have one of my three websites up by tomorrow evening, and this is the only issue stopping that from happening. Thanks again.
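    A few checks that usually narrow this kind of thing down, assuming the stock Apache binaries are on the path (a sketch; paths can differ between the plain OS and the Server.app-managed web service):

        # Is mod_passenger actually loaded by the Apache that is running?
        apachectl -t -D DUMP_MODULES | grep -i passenger
        # Any Passenger spawn errors at startup?
        tail -n 50 /var/log/apache2/error_log
        # Which httpd.conf does the running Apache read? (On Mountain Lion Server the
        # web service may read its config from under /Library/Server/Web/ instead of /etc/apache2.)
        httpd -V | grep -i server_config_file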

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred webservers behind loadbalancers, hosting many different sites with a plethora of applications (over which I have no control). About once a month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past, these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution.

    I am considering the following alternatives. Which one would you recommend? Have you implemented these, and how?

    1. Limit (with iptables) outgoing TCP packets with source port != 80 on the webserver
    2. The same, but with queueing (tc)
    3. Rate limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially 1000s of different users per application server. Maybe like this: how can I limit per-user bandwidth?
    4. Anything else?

    Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
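    A minimal sketch of option 1, assuming a reasonably recent iptables (negation syntax varies between versions) and with thresholds that are purely illustrative:

        # Rate-limit outgoing SYNs that don't originate from the web server's own port 80
        iptables -N OUT_FLOOD
        iptables -A OUTPUT -p tcp --syn ! --sport 80 -j OUT_FLOOD
        iptables -A OUT_FLOOD -m limit --limit 30/second --limit-burst 60 -j ACCEPT
        iptables -A OUT_FLOOD -m limit --limit 5/minute -j LOG --log-prefix "out-flood: "
        iptables -A OUT_FLOOD -j DROP
        # Note: this also throttles legitimate outbound client connections (API calls,
        # mail, etc.), so any other service ports the apps need would have to be excluded too.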

    Read the article

  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4.

    I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx

            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"

            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>

            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error:

        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx:
        Server alias 'XXX, not found.

    This corresponds with the MonoSetServerAlias statement. So I commented it out, and when I do that Apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80 rather than 9999.

    I'm not sure what the problem is here. Can anyone help me figure out where I went wrong?

    My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this:

        # This is the configuration file for the XXX virtualhost
        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999

    Read the article

  • How can I setup nginx to serve virtualhosts with rails(unicorn/passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using guides like this one.

    I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;
            location / {
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;
            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a unicorn Rails server running via rails s -p 3000. Yet, no sites come up for either site1.com or site2.com. I can get to the Rails site at www.site2.com:3000.

    What is wrong? I've spent 2 days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
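    Since nothing appears in the logs, a sketch of checks that separate "nginx never sees the request" from "nginx picks the wrong server block" (hostnames as in the post):

        nginx -t                                  # config valid and actually loaded?
        ss -tlnp | grep -E ':80|:3000'            # is nginx really listening on port 80?
        # Bypass DNS entirely and force the Host header:
        curl -sI -H 'Host: site1.com' http://127.0.0.1/ | head -3
        curl -sI -H 'Host: site2.com' http://127.0.0.1/ | head -3
        # If those work locally, the problem is DNS or a firewall/security group, not nginx
        dig +short site1.com www.site2.com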

    Read the article

  • 404 not found error for virtual host

    - by qubit
    Hello. In my /etc/apache2/sites-enabled I have a file site2.com.conf, which defines a virtual host as follows:

        <VirtualHost *:80>
            ServerAdmin hostmaster@wharfage
            ServerName site2.com
            ServerAlias www.site2.com site2.com
            DirectoryIndex index.html index.htm index.php
            DocumentRoot /var/www
            LogLevel debug
            ErrorLog /var/log/apache2/site2_error.log
            CustomLog /var/log/apache2/site2_access.log combined
            ServerSignature Off

            <Location />
                Options -Indexes
            </Location>

            Alias /favicon.ico /srv/site2/static/favicon.ico
            Alias /static /srv/site2/static
            # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
            Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media

            WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
            WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
            WSGIProcessGroup site2
        </VirtualHost>

    I do the following to enable the site:

    1. In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf
    2. I get a message saying the site was successfully enabled, and then I run /etc/init.d/apache2 reload.

    But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions: 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.

    Read the article

  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work in a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers. This means we need infrastructure for Drupal (PHP/MySQL) applications.

    This must have been solved a million times already; I believe it is what web-hosting companies do every day. Even though we work on a much smaller scale, I assume it would make sense to look at the task as if we're going to run an internal small-scale web-hosting company. The guys in IT operations would be "responsible" for "offering" web-hosting, while developers would use these "services".

    We have three environments: dev(elopment), test and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they'd like. Production sites should be available through the same system, but only to IT ops.

    We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers.

    I know there's no "this is it" answer to my question, but what would be a good place to start? Apart from the actual hardware, what would be a good administration system for this?

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of the sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up.

    I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    1. I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
    2. My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    3. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    4. I have some WordPress sites with plugins getting errors like:

        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wit's end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
        Type: Linux
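    A short sketch of what to look at the next time the sites go down, assuming SSH access to a fairly standard Apache/Linux box (log paths vary by distribution):

        free -m                                   # is the box out of memory?
        ps -C apache2 --no-headers | wc -l        # compare against MaxClients (150)
        tail -n 50 /var/log/apache2/error.log     # look for "MaxClients reached" messages
        dmesg | grep -iE 'killed process|out of memory'   # was the OOM killer involved?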

    Read the article

  • Nginx Server Block Not Working? - Already running other vhosts just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for 5 or so sites. But I've just tried adding another, and for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.

    Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx  *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;
            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
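    Since ping and curl succeed on the server itself but Chrome can't connect from outside, the wildcard in /etc/hosts only helps locally; a sketch of checks that separate local name resolution from public DNS (names and the masked IP are placeholders from the post):

        # From your workstation, not the server: what does public DNS return?
        dig +short subdomain.example.com
        # Force the connection to the server's public IP regardless of DNS
        curl -sI --resolve subdomain.example.com:80:xxx.xxx.xx.xxx http://subdomain.example.com/ | head -3
        # On the server: is nginx listening on the public interface, not just loopback?
        netstat -tlnp | grep ':80'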

    Read the article

  • Separate external and intranet portals using the same functions .htaccess

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built on our company product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So, the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites, and:

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites.

    The issue we are having is twofold. We need to write our .htaccess rules so that the user can access the functionality whether they are internal or external. So, the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}
        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This is the other problem. As you can see from the internal link above, the procedure needs to be prefixed with "internal" instead of "intranet". We cannot change this in our standard tags as it would affect other sites, so we need to achieve this using .htaccess as well. A sketch of what this might look like follows below.

    Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code; I am a front-end developer with no prior experience of .htaccess, so please bear with me.
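    A sketch of the kind of rewrite this usually comes down to, assuming the /internal/ prefix simply needs to be mapped onto /intranet/ behind the scenes (the rule is shown in a comment, and the URLs and parameter values are purely illustrative):

        # In .htaccess:  RewriteEngine On
        #                RewriteRule ^internal/(.*)$ /intranet/$1 [L,QSA]
        # Then verify both forms reach the procedure:
        curl -sI 'http://www.example.com/wd_portal_cand.menu?p_web_site_id=1&p_candidate_id=2' | head -1
        curl -sI 'http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id=1&p_candidate_id=2' | head -1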

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi. I have about 200 sites, each of which has 2 servers running MS SQL Server (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for... a lot... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain.

    I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above.

    I've seen "Tools to manage SQL 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful.

    Ideally I'd love a script that ensures there are matching endpoints (same ports) on both servers, backs up the database and log, copies the backups to the second server, restores the database and log with NORECOVERY, sets the partners on both servers, and somehow confirms that the databases are linked and synchronized. Well, thanks for reading :)
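    A bare-bones sketch of the per-database sequence driven through sqlcmd; the original plan was a Windows batch file, so treat this as pseudocode for that, with server names, backup paths and the endpoint port all illustrative and with no error handling (which is the part that needs the real work):

        DB=MyDatabase
        BAK="D:\\backups\\$DB.bak"
        TRN="D:\\backups\\$DB.trn"
        sqlcmd -S PRIMARY -Q "BACKUP DATABASE [$DB] TO DISK = '$BAK' WITH INIT"
        sqlcmd -S PRIMARY -Q "BACKUP LOG [$DB] TO DISK = '$TRN'"
        # ...copy the .bak/.trn files to the mirror server here (robocopy, a share, etc.)...
        sqlcmd -S MIRROR  -Q "RESTORE DATABASE [$DB] FROM DISK = '$BAK' WITH NORECOVERY"
        sqlcmd -S MIRROR  -Q "RESTORE LOG [$DB] FROM DISK = '$TRN' WITH NORECOVERY"
        # Endpoints (CREATE ENDPOINT ... FOR DATABASE_MIRRORING) must already exist on both
        # servers; then set the partner on the mirror first, then on the principal:
        sqlcmd -S MIRROR  -Q "ALTER DATABASE [$DB] SET PARTNER = 'TCP://primary.example.com:5022'"
        sqlcmd -S PRIMARY -Q "ALTER DATABASE [$DB] SET PARTNER = 'TCP://mirror.example.com:5022'"
        # Confirm the pair is linked and synchronizing
        sqlcmd -S PRIMARY -Q "SELECT DB_NAME(database_id), mirroring_state_desc FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL"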

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:

        6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet
        All connect back to the datacenter through a 3 Mbps WAN
        ERP server running in the datacenter, accessed by clients at all sites

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works.

    The client has an autoupdate feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design).

    After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites?

    Example: Client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

        Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
        DNS at site A returns 192.1.1.5
        Client 1 pulls the update from 192.1.1.5
        Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
        DNS at site B returns 192.1.2.5
        Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!

    Read the article

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine.

    I know this problem has been discussed several times, also on Super User. I implemented and checked everything I could find, but I still couldn't solve it, and would be glad if someone had a suggestion for this particular case:

    sudo apachectl -t returns Syntax OK.

    I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be.

    The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be ok. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html.

    In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did a sudo apachectl restart. Any help on how to solve the problem, or how to analyze it further, would be highly appreciated!
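    One thing the listed permissions don't cover is whether the Apache user can traverse the home directory itself; a quick sketch to test that directly (username is a placeholder):

        # Can the _www user actually walk the path? Every parent directory needs the execute bit.
        sudo -u _www ls /Users/username/Sites/
        ls -ld /Users /Users/username /Users/username/Sites
        # If the home directory lacks o+x, that is a common cause of "(13)Permission denied":
        chmod o+x /Users/username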

    Read the article

  • Viewing local websites on my iOS device over Wi-fi

    - by John
    Trying to view some local html/css/js files in a mobile browser on my iOS device. I thought file sharing might be an option, and it is, but I'm not completely satisfied with it, and any time I try the following, an error occurs.

    Web sharing is on and available at http://192.168.1.101/~user, but I have to manually copy the files in. If I try to symlink a folder in, so that it could be viewed at ~user/some_dir, by issuing:

        $ ln -s /Users/user/dev/some_dir ~/Sites/

    then I get a 403 Forbidden error. I've tried to remedy this by modifying a user.conf file in /private/etc/apache2/ and using the following syntax:

        <Directory "/Users/user/Sites/">
            Options Indexes MultiViews SymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    but nope, still doesn't work; I get a 403 error. If I symlink each individual file in instead of a whole directory, same error.

    Any help would be greatly appreciated! I'd just like to symlink directories into ~/Sites and browse them on my iOS device over wifi. I'm on OS X 10.7 Lion trying to connect with iOS 5.

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal condition, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches.

    You can also fix the problem by ordering the sites in the config file so that the shortest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem.

    Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content.

    Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
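    A sketch of the usual mod_rewrite answer to the substring problem: require either a following slash or end-of-string in each condition, so /drupal can never match /drupaltest. The suggested condition is shown as a comment, followed by curl checks against hypothetical paths:

        # Per-site condition, instead of the open-ended prefix:
        #   RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        # This still matches /drupal and /drupal/node/1, but no longer /drupaltest.
        curl -sI http://localhost/drupal/node/1 | head -1
        curl -sI http://localhost/drupaltest/ | head -1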

    Read the article

  • Multiple SSL certificates on Apache using multiple public IPs - not working

    - by St. Even
    I need to configure multiple SSL certificates on a single Apache server. I already know that I need multiple external IP addresses, as I cannot use SNI (I'm only running Apache 2.2.3 on this server). I assumed I had everything configured correctly; unfortunately things are not working as they should (or maybe I should say, as I expected them to work)...

    In my httpd.conf I have:

        NameVirtualHost *:80
        NameVirtualHost *:443

    Let's say my public IP is 12.0.0.1 and my private IP is 192.168.0.1. When I use the public IP in my vhost, my default website is shown instead of the one defined in the vhost, e.g.:

        <VirtualHost 12.0.0.1:443>
            ServerAdmin [email protected]
            ServerName blablabla.site.com
            DocumentRoot /data/sites/blablabla.site.com
            ErrorLog /data/sites/blablabla.site.com-error.log
            #CustomLog /data/sites/blablabla.site.com-access.log common
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/blablabla.site.com.crt
            SSLCertificateKeyFile /etc/httpd/conf/ssl/blablabla.site.com.key
            SSLCertificateChainFile /etc/httpd/conf/ssl/blablabla.site.com.ca-bundle
            <Location />
                SSLRequireSSL On
                SSLVerifyDepth 1
                SSLOptions +StdEnvVars +StrictRequire
            </Location>
        </VirtualHost>

    When I use the private IP in my vhost, everything works as it should (the website defined in the vhost is shown), e.g.:

        <VirtualHost 192.168.0.1:443>
            ...same as above...
        </VirtualHost>

    My server is listening on all interfaces:

        [root@grbictwebp02 httpd]# netstat -tulpn | grep :443
        tcp   0   0 0.0.0.0:443   0.0.0.0:*   LISTEN   5585/httpd

    What am I doing wrong? If I cannot get this to work, I cannot continue to add the second SSL certificate on the other public IP... If more information is required, just let me know!
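    The symptom "the default site answers on the public IP but the private-IP vhost works" often means the public traffic is NATed, so packets arrive addressed to the private IP and a <VirtualHost 12.0.0.1:443> block is never selected. A sketch of two quick checks (IPs as in the post):

        # Which certificate is actually served on the public address?
        echo | openssl s_client -connect 12.0.0.1:443 2>/dev/null | openssl x509 -noout -subject
        # How does Apache map addresses to vhosts after parsing the config?
        apachectl -S
        # If the box sits behind NAT, basing the vhosts on 192.168.0.1 (or *) rather
        # than the public address may be what actually matches the incoming connections.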

    Read the article

  • Google Chrome 'ruined' after someone else logged in

    - by MHJ96
    Google Chrome was the default browser that came with my laptop, and I have a Google account which I was logged into Chrome with. Someone else logged into my Chrome using their account, which has resulted in everything being lost. I logged in again to find my theme, bookmarks, history and most-visited sites were gone, and now, instead of "piling up" under the icon already pinned to the Windows 7 taskbar, Chrome opens a second icon on the taskbar and piles up under that instead, which has never happened before.

    I have tried unpinning and re-pinning, which didn't work. I have tried syncing my account numerous times to no avail. I have searched high and low on the Chrome forums for any kind of answer. I have tried accessing that folder in my documents to try and recover my bookmarks, but I can't find them (I had hidden files enabled, etc.). I really want it back how it was, as I had a lot of bookmarked sites and quick access to sites and everything was set up how I used it, and I hate that it now opens a new icon on the taskbar.

    Read the article

  • Referer is passed from HTTPS to HTTP in some cases... How?

    - by ravisorg
    In theory, browsers do not pass on referer information from HTTPS to HTTP sites, and in my experience this has always been true. But I just found an exception, and I want to understand why it works so I can use it as well.

    Search for "what is my referer" on https://www.google.ca/, e.g.:

        https://www.google.ca/search?q=what+is+my+referer

    There are a few sites that will show the referer. They all seem to "work" when they shouldn't. For example, click the www.whatismyreferer.com result. I get:

        Your referer: https://www.google.ca/

    Note that sometimes, rarely, I get "no referer" as the result. Go back and click the link again and it'll "work" the next time. This should not happen: www.whatismyreferer.com is a non-HTTPS site, so the referer header should not be passed, but it is. What's going on here, and how can I do the same from my HTTPS site to the HTTP sites I'm linking to?

    Read the article

  • News feed APIs for general news

    - by dassouki
    I'm building a database and tool that scours news feeds for a certain term, for example "food poisoning from nuts". I want to scour social media sites, news sites, major news aggregators, etc. for that term.

    Question 1: What are some of the news aggregator APIs out there?
    Question 2: How would you go about coding this so you receive only the latest news from the API?
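    For question 2, a common pattern is conditional polling: remember the newest item you have seen and ask the feed only for anything newer. A sketch using plain HTTP conditional requests against a hypothetical feed URL:

        # Poll politely: 304 Not Modified means nothing new since the last fetch
        curl -s --compressed -o feed.xml -w '%{http_code}\n' \
             -H 'If-Modified-Since: Tue, 01 May 2012 00:00:00 GMT' \
             https://example.com/news/feed.rss
        # On 200, parse feed.xml, store each entry's GUID/date, and use the newest
        # date (or the returned Last-Modified header) for the next If-Modified-Since.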

    Read the article

  • Intermittent PolicyException: Execution permission cannot be acquired.

    - by Aaron Maenpaa
    We are intermittently seeing the following exception shortly after an App Pool recycle in an ASP.NET application:

        System.Configuration.ConfigurationErrorsException: Could not load file or assembly 'Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418)
        ---> System.IO.FileLoadException: Could not load file or assembly 'Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418)
        File name: 'Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'
        ---> System.Security.Policy.PolicyException: Execution permission cannot be acquired.
            at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission)
            at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission)
            at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
            at System.Reflection.Assembly.Load(String assemblyString)
            at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)

    The specific DLL that fails to load varies from incident to incident, but it is always one referenced by the main assembly. We're running on ASP.NET 3.5 on Windows Server 2008. This seems to happen in batches affecting some, but not all, of the sites on the same App Pool. We have a large number of sites all running the same code. Once a site has failed to load a DLL, it throws up a Yellow Screen of Death until the next App Pool recycle.

    We haven't been able to reproduce this behavior, and the sites seem to work fine for days or weeks at a time (and many App Pool recycles) before failing. Has anybody else seen similar behavior?

    Update: We've tried reproducing the failure by setting up a few hundred sites and writing a script to hit them repeatedly while recycling the App Pool once every couple of minutes, and were unable to accomplish much other than loading down the server's CPU for a few days straight. We then tried messing (locking one of the DLLs, changing the file permissions) with the copies of the DLLs that ASP.NET makes and managed to reproduce similar behavior, but not the same exception.

    Does anybody have any ideas on how to adjust the security policy to get it to throw a System.Security.Policy.PolicyException: Execution permission cannot be acquired when loading a specific DLL?

    Read the article
