Search Results

Search found 2130 results on 86 pages for 'serve u'.


  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache and recently found out that Apache by default serves up files of an unknown file extension as PURE TEXT. This can be an issue if a user uses certain programs that back up .php files as .php~. Then the .php~ file becomes completely readable by simply navigating to it in a browser. To make matters worse, these .php~ files are often considered 'hidden' from the user in the Linux environment, so some may not even know they exist. Bots have been created around this fact that scour the internet looking for popular file name backups and extracting potentially secure info from them. I already know how to stop serving up .php~ files or any specific file extensions. I also know not to use any editors that would save backup files like this. My question is, how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, no extension or anything else that is not a real MIME type. I know that I can probably use a directive to first deny access to all file types, then add the ones I want to serve out one by one. But I'm wondering if there's an easier/quicker way. Thanks for any help.
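
    A minimal sketch of the deny-all/whitelist approach mentioned above (the directory path, extension list and Apache 2.2-style access directives are illustrative, not a tested configuration):

        <Directory /var/www/html>
            # Refuse everything by default...
            Order allow,deny
            Deny from all
            # ...then re-allow only the extensions that should be served
            <FilesMatch "\.(php|html|css|js|png|jpe?g|gif)$">
                Order deny,allow
                Allow from all
            </FilesMatch>
        </Directory>

    On Apache 2.4 the same idea would use Require all denied / Require all granted in place of the Order/Allow/Deny lines.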

    Read the article

  • 403 with Apache and Symfony on Ubuntu 10.04

    - by Dominic Santos
    I'm trying to run Symfony on my Apache installation (I'm using XAMPP for the whole package) and it keeps giving me a 403 error every time I try to access my website. I've got vhosts set up with the following:

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/opt/lampp/htdocs"
            DirectoryIndex index.php
            <Directory "/opt/lampp/htdocs">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName servername.localhost
            DocumentRoot /home/me/web/server/web
            DirectoryIndex index.php
            Alias /sf "/lib/vendor/symfony/data/bin/web/sf"
            <Directory "/home/me/web/server/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <Directory "/lib/vendor/symfony/data/bin/web/sf">
            Allow from All
        </Directory>

    I've also added "127.0.0.1 servername.localhost" in my hosts file. When I try to access "servername.localhost" it just gives me a 403 error. I've chmod'd 777 the Symfony directory and my website directory in my home directory, and used './symfony project:permissions' to let Symfony check that permissions are set up correctly, but still no result. If I move my website directory into "/opt/lampp/htdocs" then it will serve it from there, but it still has problems accessing the Symfony stuff such as the debug toolbar. Any help would be appreciated.
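
    One thing worth checking, as an illustrative sketch only (Apache 2.2 syntax, matching the config above): a Directory block for a path outside the default roots often also carries Options and an explicit Order line, and Apache additionally needs traverse (execute) permission on every parent directory of the DocumentRoot, e.g. /home and /home/me:

        <Directory "/home/me/web/server/web">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>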

    Read the article

  • ProxyPass for specific vhost with mod_rewrite

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
            <IfModule mod_rewrite.c>
                # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
                RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
                RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
                RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
            </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it to be applied to that one site. I tried something like:

        <VirtualHost *:80>
            <Directory /home/www/foo>
                UseCanonicalName Off
                ProxyPreserveHost On
                ProxyRequests Off
                ProxyPass /services http://www-test.foo.com/services
                ProxyPassReverse /services http://www-test.foo.com/services
            </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
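
    A rough sketch of one way around those errors (hostnames mirror the example above and are illustrative): the messages indicate that ProxyPreserveHost wants server or virtual-host context, and that ProxyPass inside a <Directory>/<Location> block cannot take a path argument, so the site that needs proxying can get its own name-based vhost ahead of the catch-all rewrite vhost:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass        /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>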

    Read the article

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy & apache. I've been reading great things about nginx and haproxy. People tend to choose one or the other but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch). But I'd like to keep nginx for redirecting non-https to https among other things right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache which loves to eat a lot of RAM! Here is my planned setup: Load balancer: nginx listens on port 80/443 and proxy_forwards to haproxy on 8080 on the same server to load balance between the multiple nodes. Nodes: nginx on the node listens to requests coming from haproxy on 8080; if the content is static, it serves it, but if it's a backend script (in my case PHP), it proxy-forwards to apache2 on the same node server listening on a different port number. Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow them down. Most of the requests will be PHP requests, as the backends are services (which means going from nginx - haproxy - nginx - apache). Thoughts? Cheers
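
    A bare-bones sketch of the two nginx roles described above (ports 80/8080 come from the question; the Apache backend port and document root are placeholders):

        # Entry tier: nginx hands everything to haproxy on 8080 on the same box
        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }

        # Node tier: nginx serves static files itself, proxies PHP to Apache
        server {
            listen 8080;
            root /var/www/site;
            location / {
                try_files $uri $uri/ @apache;
            }
            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8081;
            }
            location @apache {
                proxy_pass http://127.0.0.1:8081;
            }
        }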

    Read the article

  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our eCommerce website www.tervis.com currently runs on two servers: SQL server: 2005 x86 on Windows Server 2003 Standard x86 with a single dual core processor and 4 GB of memory. IIS server: Windows Server 2008 Web edition x64 with dual quad core hyper-threaded processors and 32 GB of memory. Tervis.com's revenue has steadily grown to the point where we need to have redundant servers deployed with a failover mechanism so that we do not have any down time. Because the SQL server is so underpowered compared to the web server, my thought was to purchase: 2 x SQL Server 2008 R2 Web edition x64 single processor licenses, 2 x Windows Server 2008 R2 Web Edition licenses, 1 x new physical dual quad core 32 GB server, and 1 x F5 load balancer. I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box for both of these servers. The thought is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would serve as the device that monitors the two servers and, if the current active one stops responding, fails over to using the other server. To be clear, this is not Windows clustering but simply using a load balancer to fail over between two computers so that you now have a cluster in the general sense. Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL server to function as the device that funnels HTTP requests to the appropriate active server and then fails over if a problem occurs? Is there any third party clustering software that might help me accomplish this in a simpler fashion?

    Read the article

  • Outlook conversation view and categories

    - by Greg Jackson
    At work, I tend to receive a couple of hundred emails a day. To keep from being overwhelmed, I have been using categories to sort and prioritize my mail messages. I auto-assign categories, then group by them: Code Reviews, To, CC, Distribution List/BCC. This means that, for example, a message that's explicitly to me will always show up higher in my inbox than one I get because I'm on a Distribution List. It's a huge time saver and it brings important emails to my attention much more quickly. Recently, the email threads I'm involved in have started to get quite long, and I'd like to be able to use conversation view, or at least sort by subject. Outlook, however, doesn't seem to support any (useful) combination of conversation view and categories. I've tried the following things without success:
    - Grouping by category, then conversation view -- Outlook gives me an error (the grouping/sort combination is too complex).
    - Using a custom view to group by conversation -- category doesn't show up as an option to sort by.
    - Grouping by category, then subject -- Getting closer, but the top subject is the first alphabetically, not the most recent.
    - Grouping by conversation, then category -- This works, but it doesn't do me much good, because the top conversation is the latest, without regard to what category it belongs to.
    Is there a way for me to retain my category system or something similar while taking advantage of grouping related emails together? I've written Outlook plugins in the past, so even that's not too out there to serve as a proper solution.

    Read the article

  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2k3, and the remote servers are Windows Server 2k3. When we have new versions of the software, I upload this new software to a specific directory and rename it; each time the clients boot, they pull their software from that specific directory. With just a few sites, it's no problem for me to RDP in and copy the files over. As the number grows, this will quickly become quite unwieldy. So I'm thinking that WebDAV would be part of a solution, so that I could simply push the newest version to our server (Windows SBS Server 2003) and make it available to the sites to grab. However, on the remote server side, what are some suggestions for automating the download? I only want the servers to download the files during downtime (between 3 AM and 9 AM), and I only want them to download if there is a new version available. I had thought of writing a program that checked the files on the WebDAV server at a regular interval, compared a hash of the current software to a hash of the software on the server, and only downloaded if they were different, but I'm wondering if there is something I am unaware of that can automate the process.
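
    As a rough illustration of the "check, compare, download" idea above, a small PowerShell script run from a scheduled task inside the 3 AM - 9 AM window could compare the Last-Modified header on the server copy with the local file (the URL and local path are placeholders; this is a sketch, not something tested on Server 2003):

        $url   = "http://sbs-server/software/current/setup.exe"   # assumed download URL
        $local = "C:\App\setup.exe"                               # assumed local path

        # Ask the server for headers only
        $req = [System.Net.WebRequest]::Create($url)
        $req.Method = "HEAD"
        $resp = $req.GetResponse()
        $remoteModified = $resp.LastModified
        $resp.Close()

        # Download only when the server copy is newer than what we have
        if (-not (Test-Path $local) -or ((Get-Item $local).LastWriteTime -lt $remoteModified)) {
            (New-Object System.Net.WebClient).DownloadFile($url, $local)
        }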

    Read the article

  • HDFS configuration

    - by Ananymous
    I am a newbie trying to set up an HDFS system to serve my data (I don't plan to use MapReduce) at my lab. So far I have read up on cluster setup, but I am still confused. Several questions: Do I need to have a secondary namenode? There are 2 files, masters and slaves. Do I really need these 2 files even though I just want HDFS? If I need them, what should go in there? I assume my namenode goes in masters and my datanodes in slaves? Do I need slave nodes? What configuration files are needed for the namenode, secondary namenode, datanode and client (I assume core-site.xml is needed for all 4)? In addition, can someone suggest a good configuration model? Sample configurations for the namenode, secondary namenode, datanode, and the client would be very helpful. I am getting confused because it seems most of the documentation assumes I want to use MapReduce, which isn't the case.
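
    For what it's worth, a minimal HDFS-only sketch using Hadoop 1.x-era property names (hostnames, ports and paths are placeholders; the masters and slaves files conventionally list the secondary-namenode host and the datanode hosts respectively):

        <!-- core-site.xml, on every node and on clients -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://namenode-host:9000</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml, on the namenode and datanodes -->
        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/data/hdfs/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/data/hdfs/data</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
        </configuration>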

    Read the article

  • How do I set up a shared directory on Linux?

    - by JR Lawhorne
    I have a linux server I want to use to share files between users in my company. Users will access the machine with sftp or secure shell. Here is what I have:

        cd /home
        ls -l
        drwxrwsr-x 5 userA staff 4096 Jul 22 15:00 shared
        (other listings omitted)

    I want all users in the staff group to be able to create, modify, delete any file and/or directory in the shared folder. I don't want anyone else to have access to the folder at all. I have:
    - Added the users to the staff group by modifying /etc/group and running grpconv to update /etc/gshadow
    - Run chown -R userA.staff /home/shared
    - Run chmod -R 2775 /home/shared
    Now, users in the staff group can create new files but they aren't allowed to open the existing files in the directory for edit. I suspect this is due to the primary group id associated with each user which is still set to be the group created when the user was created. So, the PGID of user 'userA' is 'userA'. I'd rather not change the primary group of the users to 'staff' if I can help it, but if it is the easiest option, I would consider it. And, a variation on a theme, I'd like to do this same thing with another directory but also allow the apache user to read files in the directory and serve them. What's the best way to set this up?
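
    A consolidated sketch of the pieces above, plus the one setting most often missing in this situation (the umask line is an assumption about why group members cannot edit each other's newly created files; user names are placeholders):

        # group membership for each staff user
        usermod -aG staff userA
        usermod -aG staff userB

        # group-owned, setgid directory so new files inherit the 'staff' group
        chown -R userA:staff /home/shared
        chmod -R 2775 /home/shared

        # without a group-writable umask, new files arrive as rw-r--r-- and the
        # rest of the group cannot modify them
        echo "umask 002" >> /home/userA/.bashrc

    For the second, web-served directory the same pattern applies, with the apache user added to that directory's group (or granted read access with setfacl).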

    Read the article

  • PHP script not automatically updating when moved to another server

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and Go Daddy. It updates for him, but when I load it to my site, it works for 6 hours, but as soon as the reload is supposed to occur, it errors and I get a 500 timeout error. His page is at: jeremynoeljohnson .com/yakezieclub My page is currently at http://sweatingthebigstuff.com/yakezieclub but when you ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of any of it is messing up. I literally uploaded the same code as him. Here's the reload part:

        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6;       // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24;   // every 24 hours, the history will be written to
                                          // - or whenever the page is requested after 24 hours has passed
        $writenewdata = false;
        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }
        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }
        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }
        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->";
            exit;
        }
        ob_start(); // start the output buffer
        ?>

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact. The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow, but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is, /foo/bar should look for /foo/bar.php or /foo/bar/index.php, and /foo/ should look for /foo/index.php. Is there a simple way to achieve this with Nginx or should I stick with proxying to Apache?
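
    A rough sketch of how this is often done with try_files and a named location (the FastCGI address and exact fallback order are assumptions, not a drop-in config):

        index index.php;

        location / {
            # try the literal file, then a directory index, then the .php twin
            try_files $uri $uri/ @extensionless;
        }

        location @extensionless {
            rewrite ^(.*)$ $1.php last;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;   # assumed PHP-FPM address
        }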

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else. Sometimes a JPEG loads a stylesheet instead, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a style sheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error
        (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah,
        serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web, it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering if there is a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content-type? I've even seen the problem manifest in ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent; a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.
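
    On the access-log question above: Apache's mod_log_config can log arbitrary response headers, so a custom format along these lines (the format name is arbitrary) would record the Content-Type actually sent with each response:

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Content-Type}o\"" withtype
        CustomLog logs/access_log withtype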

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it either did not work for http or for https. How can I do this? Update: If I enable SSL and then visit the site with http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both http and https on the same port. A first step would be to change this default page so it would present a 301 Moved header. Update 2: According to this, it is possible. Now, the question is just how to configure apache to do it.
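
    A widely circulated workaround that builds on exactly the behaviour shown above (sketch only, not verified here): since plain-HTTP requests to the SSL port come back as a 400 error sent in cleartext, the 400 error document for that vhost can send the browser to the https URL instead.

        <VirtualHost *:443>
            SSLEngine on
            # ... certificate directives ...
            # A plain-HTTP request on this port produces a 400; point those clients at https
            ErrorDocument 400 https://server/
        </VirtualHost>

    Note this issues a 302-style redirect rather than the 301 mentioned above, and only covers the "plain http spoken to the https port" direction.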

    Read the article

  • IIS6 Sending a 404 for ".exe" files.

    - by Tracker1
    Recently a bunch of files I had set up for download via IIS6's web server stopped working. They are a number of setup files ending in ".exe" and were working prior to a few months ago. I have the file permissions set properly, and even enabled browsing in IIS to determine that the paths are indeed correct. I'm not sure if it is related, but directories with a period in their name stopped working as well, e.g. "~/download/ApplicationName/0.9/AppName-setup-0.9.123b2.exe". When I rename the directory to, say, 0_9, the browsing works, but the file itself delivers a 404 message from IIS. For now, I've set up FileZilla FTP for anonymous access to these files, but would prefer to continue using IIS. I've considered creating an HTTP handler to serve the .exe files, but would really prefer a configuration solution. I just can't figure out why it isn't working, as all the settings are correct. The directory is set up for read access. "Everyone" has read permissions on the files themselves, and the directory browsing (aside from the folder "0.9" to "0_9" rename) shows the files.

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf:

        upstream gunicorn {
            server 127.0.0.1:8080 fail_timeout=0;
        }

        server {
            listen 80;
            listen 443 ssl;
            server_name domain.com ~^.+\.domain\.com$;

            location / {
                try_files $uri @proxy;
            }

            location @proxy {
                proxy_pass_header Server;
                proxy_redirect off;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 120;
                proxy_pass http://gunicorn;
            }
        }

    The same server needs to serve both HTTP and HTTPS; however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that will correct this issue is changing proxy_redirect to the following:

        proxy_redirect http:// https://;

    That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried:

        if ($scheme = 'https') {
            proxy_redirect http:// https://;
        }

    But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have to duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity's sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?
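
    For reference, nginx allows variables in the proxy_redirect replacement string (in reasonably recent versions), which would make the rewrite scheme-aware without splitting the server block; a sketch of that single-line change, untested here:

        location @proxy {
            # ... existing proxy_set_header lines ...
            proxy_redirect http:// $scheme://;
            proxy_pass http://gunicorn;
        }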

    Read the article

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks now I have been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The log format I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains are domains of banks, and 3 out of the 4 domains are also in the list of the bank fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know if my server is being abused for bank fraud or not. I suspect not, because it's giving 403 to all requests. But any extra checks that I can do to ensure that my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave. I.e., are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting that my server will actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or the IP of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in a second. If it's from a bad guy, I'm very curious what he is trying to do.

    Read the article

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN - a Cisco ASA 5505 in each office acts as firewall and VPN end point. Beyond that we keep things as simple as possible in the offices to minimise the support burden. We don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch. One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail - I'm assuming timeouts due to the offices' internet connections (they are all in developing world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution to this is to have some sort of local DNS service that can serve local requests - so I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows servers in each office is both expensive and creates a support burden. This got me thinking about pfsense/m0n0wall on embedded devices - those can act as a DNS server, and could be configured at HQ and sent out as just something that needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are some alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with the problem, either using some kind of embedded device, or found some other solution? Any gotchas or reasons to avoid what I have suggested?
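
    For the zone-transfer idea above, whatever small box ends up doing local DNS would carry a slave copy of the internal zone, roughly like this in BIND syntax (the zone name and HQ DNS server IP are placeholders); the Windows 2008 R2 DNS servers would also have to be set to allow transfers to that address:

        zone "corp.example.com" {
            type slave;
            masters { 10.0.0.10; };
            file "slaves/corp.example.com.db";
        };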

    Read the article

  • 10 GigE interfaces limit single connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of 10 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea on how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.

    Read the article

  • Connect to Apache times out randomly

    - by Amadan
    We are trying to set up an Apache server on a remote machine, but we experience strange behaviour. Checking with telnet remote.machine 80, one of these things happens randomly: connect and serve content normally (no delay); connect after a long pause; connect normally, then time out without a response; or time out on connect. Once connected, the request seems to be processed normally. These things do not occur if I connect from that machine directly to localhost 80. The Apache is dedicated, as is the server it runs on (it runs only this one application; no-one else is using it for anything else). I am not an administrator of the remote site, and I do not know the network architecture over there, but apparently it's firewalled (HTTP port is open, SSH port is IP-restricted, most others are closed). If there was any one pattern, I might have some ideas, but this variety of symptoms baffles me. Any ideas as to what could be causing this? Apache is 2.2; the server version is: Linux version 2.6.9-22.ELsmp ([email protected]) (gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 SMP Mon Sep 19 18:32:14 EDT 2005

    Read the article

  • Running HTTP and HTTPS connections for a single domain (say, www.example.com) through a Cisco ACE SS

    - by Paddu
    My web application config has a Cisco ACE load balancing across a server farm, and I want to use the ACE as an SSL endpoint as well. To make this work, the network architect has come up with a design where all secure pages have to be served from secure.my-domain.com, while non-secure pages are served up from www.my-domain.com. The reason for this is apparently that configuring the Cisco ACE to accept HTTPS requests on port 443 for a particular public IP prevents the simultaneous acceptance of HTTP requests on port 80 for the same IP. While I'm not a networking (or Cisco) expert, this seems intuitively wrong, as it would prevent any website using the Cisco ACE from serving pages on http://www.my-domain.com and https://www.my-domain.com simultaneously. In this situation, my questions are: Is this truly a limitation of the Cisco ACE when used as an SSL endpoint? If not, then can I assume that we can set up the ACE to accept connections for a particular IP on ports 80 and 443, and function as an SSL endpoint for the incoming requests on 443? Links to appropriate documentation are most welcome here. Assuming the setup in the previous question, can I then redirect both sets of requests to the same server farm on the same port?

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Window 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP -- they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator. But IIS gets "access denied." Here's what I have done: Set File Sharing = ON Set Password Protected Sharing = OFF Set Public Folder Sharing = ON Shared the folder Added permission to the share: Everyone, Full Control Added permission to the share: NETWORK SERVICE, Full Control Verified that File & Printer Sharing is checked in Windows Firewall Opened port 445 to inbound traffic from local sources I tried adding <remote-machine-name>\NETWORK SERVICE to the share but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given full control and password protected sharing turned off, it would not matter what the client user account is. In any case, how to solve? UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather I am writing files to the share from my code (System.IO).

    Read the article

  • Apache Virtual host (SSL) Doc Root issue

    - by Steve Hamber
    I am having issues with the SSL document root of my vhosts configuration. HTTP seems to work fine and serves pages from DocumentRoot /var/www/html/websites/ssl.domain.co.uk/ (as specified in my vhost config). However, HTTPS seems to be looking for files in the main Apache document root found further up the httpd.conf file, and is not being overridden by the vhost config (I assume that the vhost config should override the default doc root?). That default is:

        DocumentRoot: The directory out of which you will serve your documents.
        By default, all requests are taken from this directory, but symbolic links
        and aliases may be used to point to other locations.
        DocumentRoot "/var/www/html/websites/"

    Here is my config. I am quite a new Linux guy, so any advice on why this is happening is appreciated!

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerAdmin root@localhost
            DocumentRoot /var/www/html/websites/https_domain.co.uk/
            ServerName ssl.domain.co.uk
            ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-error_log
            CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.o.uk-access_log common
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /var/www/ssl/ssl_domain_co_uk.crt
            SSLCertificateKeyFile /var/www/ssl/domain.co.uk.key
            SSLCACertificateFile /var/www/ssl/ssl_domain_co_uk.ca-bundle
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www/html/websites/ssl.domain.co.uk/
            ServerName ssl.domain.co.uk
            ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.xo.uk-error_log
            CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.xo.uk-access_log common
        </VirtualHost>

    Read the article

  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS / web noob (I'm a C# backend service / winforms dev) so please bear with me :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added 4 bindings, all 4 for http:

        Host Name                   Port   IP Address
        blog.sourcecube.co.za       26581  *
        www.blog.sourcecube.co.za   26581  *
        blog.sourcecube.co.za       26581  127.0.0.1
        www.blog.sourcecube.co.za   26581  127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping my domain name from the command line it does in fact resolve to the loopback address, 127.0.0.1. So what I'm expecting to happen when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header. But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong? --- UPDATE --- Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number though.
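
    For what it's worth, a browser given a URL without a port connects on port 80, which matches the update above (adding :26581 works). A sketch of an additional binding that would cover the port-less URL, assuming nothing else is already listening on 80:

        Host Name                   Port   IP Address
        blog.sourcecube.co.za       80     *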

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content - no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if that C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose? I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd). Lighttpd's development has evidently gotten slower than it was a good time ago. The documentation is pretty good for all three, and hopefully, so are the resources (knowledge of these among the users of Stackoverflow/Serverfault sites, the web etc). Precisely, and noting point [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these... Is Cherokee just as fast (Mb/s), performant (connections/s), and reliable (think downtime/restarting server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think, the size of YouTube, Apache or Facebook.) If the answer to the question above is a big "hell, yes!" then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee as it has a graphical admin user interface + really good documentation. Yes? I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

    Read the article

  • Configuring network route between two routers on home network

    - by Paul
    I have a home network - the main router connected to the internet (and has wifi) is a Netopia box. Connected to it is a Linksys router. Everything currently works - I can connect via the wireless network and get to the internet. Machines connected to the Linksys can connect with each other and connect to the internet. Both routers are configured to serve addresses via DHCP: Netopia (192.168.1.1 - 192.168.1.99), Linksys (192.168.0.1 - 192.168.0.100). Here's how they are connected:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)

    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP. I thought this would be straightforward. I configured the Linksys to have a static IP address:

        IP: 192.168.1.100
        Mask: 255.255.255.0
        GW: 192.168.1.254

    Then I configured a static route on the Netopia:

        Network: 192.168.0.0
        Mask: 255.255.255.0
        GW: 192.168.1.100

    So it should now look like this:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)

    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure where. Any ideas on what I'm missing?

    Read the article
