Search Results

Search found 55168 results on 2207 pages for 'http header'.

  • Serving static web files off a non-standard port

    - by Nimmy Lebby
    I'm close to deploying a Django project to production. I'm looking over some infrastructure decisions. Something that came up was serving static files with a different server, such as lighttpd. However, we're starting off with a single dedicated server, so our only option would be to use a non-standard port for the static file webserver. Is there precedent for this? I.e., does anyone "big" do this? Any particular port I should use or shy away from? Can anyone think of some downsides of going this route?
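
    A minimal sketch of the setup in question, using nginx for concreteness since any static-file server would look similar (the port, hostname, and paths below are assumptions, not details from the post):

      # Hypothetical server block: serve Django's static files on a
      # non-standard port, beside the app server on port 80.
      server {
          listen 8080;                       # assumed alternate port
          server_name static.example.com;    # hypothetical hostname

          location /static/ {
              alias /srv/myproject/static/;  # assumed collectstatic output dir
              expires 30d;                   # static assets cache well
              access_log off;
          }
      }

    Pages would then reference assets as http://static.example.com:8080/static/app.css; one well-known downside of this route is that some corporate firewalls block outbound traffic to non-standard ports.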

    Read the article

  • Webmin Apache on CentOS 6.3 results in 403 Forbidden, permissions are OK

    - by Mario De Schaepmeester
    First of all, I will mention that the permissions are fine for the document root directory, which is /webapps/nimbus/www/public_html. The www directory contains a PHP application. PHP is a problem for later if it doesn't work; I've tested with a plain HTML file, which does not work either. I just get 403 Forbidden responses. The permissions are 755 on webapps and all subdirectories. I've checked other questions here and on the internet, but they were all about those permissions. Whatever info you still need, just ask; I don't know what's relevant, as it's the first time I've ever used Webmin or configured Apache.
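
    For comparison, this is the kind of Directory block Apache 2.2 needs before it will serve content outside its default root; a hedged sketch, assuming the 403 comes from access control rather than filesystem permissions:

      # Hypothetical httpd.conf fragment for a document root outside /var/www.
      <Directory "/webapps/nimbus/www/public_html">
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all        # without this, Apache 2.2 answers 403
      </Directory>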

    Read the article

  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of URL-shortener service. What happens is that I have some backend app server that takes in a request, does some computation, and returns a 301 redirect URL back upstream to an nginx frontend: request ---> nginx ---> app_server. What I want to be able to do is cache this returned 301 for the same request (a specific URL with a "short code"). Does nginx do this caching automatically? Or should I drop in something like Varnish between nginx and the app_server? I can easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.
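
    For reference, a minimal sketch of caching the backend's 301 responses in nginx itself with proxy_cache (the cache zone name, backend address, and lifetime are all assumptions):

      # Hypothetical nginx config: cache the app server's 301 answers so
      # repeat lookups of the same short code never reach the backend.
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=shortcodes:10m;

      server {
          listen 80;

          location / {
              proxy_pass http://127.0.0.1:8000;          # assumed app_server
              proxy_cache shortcodes;
              proxy_cache_key $scheme$host$request_uri;
              proxy_cache_valid 301 1h;                  # cache 301s for an hour
          }
      }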

    Read the article

  • How do .so files avoid the problems with passing header-only templates that MS DLL files have?

    - by Doug T.
    Based on the discussion around this question, I'd like to know how .so files/the ELF format/the GCC toolchain avoid problems passing classes defined purely in header files (like the std library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if it's defined in two .so files. So if two .so files have two definitions, perhaps with different compiler options/etc., the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively. This makes the solution I describe above untenable for DLLs. Does GCC never inline header-only templates like the std library as MSVC does? If so, wouldn't that make the functionality of ELF described above ineffective in these cases?
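
    One way to see the mechanism in question: GCC emits header-only template instantiations with vague linkage, which show up as weak symbols in the ELF dynamic symbol table, and the dynamic linker collapses duplicates across .so files into a single definition. A quick check against a hypothetical library:

      # Weak (W) dynamic symbols are the deduplicated template instantiations.
      nm -C --dynamic libfoo.so | grep ' W '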

    Read the article

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that all the applications which try to access unclassified websites, e.g. for downloading plugins, no longer work. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to extend/customize Ruby's httpproxy.rb that comes with WEBrick, automating the manual login process whenever that login response page is detected. This sounds quite painful, though, and I suspect there might/should be simpler options?
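
    A rough sketch of what automating that login step can look like with curl, as a sanity check before writing a custom proxy (the URL and form field names are hypothetical; the real ones would come from the intercepted login page):

      # Hypothetical: submit the proxy's login form once, keep the session
      # cookie, and reuse it for the requests that tools like Maven make.
      curl -c cookies.txt -d 'username=me' -d 'password=secret' \
           http://proxy.corp.example/login
      curl -b cookies.txt http://download.eclipse.org/releases/plugin.jar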

    Read the article

  • How to set up custom 401 error page or redirect in WSS3 SP2

    - by Stacy Vicknair
    I've got a WSS3 SharePoint site that requires Windows authentication both in IIS and via the SharePoint site. What I would like to do is, in the case that a user does not provide valid AD credentials, redirect them to a custom error page. Currently, if the user immediately hits cancel when prompted, they will see a plain text response of "401 UNAUTHORIZED". If they make an attempt and then hit cancel, they instead see a blank page. I have looked into several options such as customErrors, httpModule interception (I only saw examples for this after the user is authenticated), and IIS URL rewrites (didn't see how this could help). Is there a good way to do this?
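
    One more avenue worth naming: IIS7's httpErrors section can substitute a custom page for the 401 body. A hedged sketch (the page path is hypothetical, and note that replacing 401 responses can interfere with the NTLM challenge round trips, so this needs testing rather than trusting):

      <!-- Hypothetical web.config fragment: custom page for 401 responses -->
      <system.webServer>
        <httpErrors errorMode="Custom" existingResponse="Replace">
          <remove statusCode="401" />
          <error statusCode="401" path="/Pages/Custom401.aspx"
                 responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>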

    Read the article

  • Winamp "Now Playing" POST to PHP script

    - by Brad
    I have tried/researched 8 or so different plugins for Winamp that allow this functionality, and none of them seem to work on Windows 7 x64! I need a plugin for Winamp that will send the current playing track information to a PHP script, via GET or POST. I have almost no requirements... it can be the name/artist, or the filename (preferred). No fancy functionality needed, just something basic! The four that I actually tried are: Now Playing XML, HTML Server, SongStat, Currently Hearing. Any suggestions? (Note to Moderators: No, I'm not looking for "shopping recommendations", I just need a plugin that works. There probably is only "one answer" to this question, and if there isn't, feel free to make it a CW.)
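
    Whichever plugin ends up working, the receiving end can stay trivial; a sketch of the PHP side (the parameter name is an assumption, to be matched to whatever the plugin actually sends):

      <?php
      // Hypothetical receiver: log whatever track info a Winamp plugin
      // sends us, accepting either GET or POST.
      $track = $_POST['title'] ?? $_GET['title'] ?? null;
      if ($track !== null) {
          file_put_contents('nowplaying.txt', $track . PHP_EOL, FILE_APPEND);
          echo 'OK';
      } else {
          http_response_code(400);
          echo 'missing track info';
      }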

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:

      <Directory "/var/www/folder">
          <Files "index.php">
              Order deny,allow
              Allow from all
              <LimitExcept POST>
                  Deny from all
              </LimitExcept>
          </Files>
      </Directory>

    I visit my server at 127.0.0.1/folder, but I can GET and POST the file just like normal. I've also tried reversing the order, order allow,deny, Require, LimitExcept and Limit. How can I only allow POST requests to be processed by one file in a folder?
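
    A hedged sketch of one arrangement that typically behaves as intended on Apache 2.2: make the default inside the LimitExcept a deny, rather than a global Allow that wins the order evaluation back for every method:

      <Directory "/var/www/folder">
          <Files "index.php">
              <LimitExcept POST>
                  Order allow,deny
                  Deny from all      # non-POST methods fall through to deny
              </LimitExcept>
          </Files>
      </Directory>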

    Read the article

  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. Having checked the modem, there is no firewall or any weird features enabled. But since I arrived at the apartment (the broadband already being installed), I cannot log into Twitter, nor update any of my WordPress blogs (I can browse them and log in, but cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use in this place out to, say, a McDonald's or some other wifi access point, then these sites work fine again. Anyone know what could possibly be preventing access to the pages in question? The only thing common to these pages is the POST they are expecting. But POST form submission works fine on other sites...
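
    This failure pattern (small requests fine, larger POSTs failing) is classically an MTU problem on ADSL/PPPoE lines, so a sketch of two quick checks from the affected connection (the test URL is just a convenient echo endpoint):

      # Does a large POST stall where a small one succeeds?
      curl -v -d 'probe=small' https://httpbin.org/post
      curl -v --data-binary @large_payload.txt https://httpbin.org/post

      # Probe the path MTU: 1464 + 28 bytes of headers = 1492, the usual
      # PPPoE limit; failures here point at MTU/fragmentation trouble.
      ping -M do -s 1464 twitter.com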

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites that, when a page is requested from their server, grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make too much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?

    Read the article

  • Is it possible to do a 301 redirect AND redirect to the requested resource?

    - by Pure.Krome
    For one of our projects, we're doing a rebranding of the website name, logo, etc. As such, we need to 301 Moved Permanently redirect all users from the old domain to the new domain. With IIS7, that's pretty simple. We just create a new website that redirects all traffic on a host-headered domain to the new one. But this loses their original destination resource, e.g.:

      Old Domain: www.OldDomain.com
      New Domain: www.NewDomain.com
      User: www.OldDomain.com/user/PureKrome -> 301 -> www.NewDomain.com

    Notice how it's going to the new domain BUT not to /user/PureKrome? How can I do this so it goes to the new domain and keeps the original resource request? I'm guessing URL Rewrite for IIS7 might help? Also, what happens if I want to do this...

      Current Domain 1: Domain.com
      Correct Domain 1: www.Domain.com
      Current Domain 2: AnotherDomain.com
      Correct Domain 2: www.AnotherDomain.com

    Is it also possible to have those in the same IIS website? So any URL to domain.com will 301 to www.domain.com. Right now I'm making 2 IIS websites, with a 301 hardcoded (which still means I lose the original resource request, too). Help!
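
    URL Rewrite does handle exactly this; a hedged sketch of a rule that carries the requested path across (the domains and rule name are placeholders, and the module appends the query string to the rewritten URL by default):

      <!-- Hypothetical web.config fragment: permanent redirect to the new
           domain, preserving the originally requested resource in {R:1}. -->
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="OldToNewDomain" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^(www\.)?olddomain\.com$" />
              </conditions>
              <action type="Redirect" redirectType="Permanent"
                      url="http://www.newdomain.com/{R:1}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>

    The non-www to www case is the same shape: a condition on {HTTP_HOST} matching the bare domain, redirecting to the www host with /{R:1}; multiple such rules can coexist in one site.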

    Read the article

  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?

    Read the article

  • Google Webmaster Tools soft 404 on 301

    - by Daniel
    I'm seeing in Google Webmaster Tools that my page is generating soft 404 errors (https://support.google.com/webmasters/answer/181708?hl=en). Google says: "We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page." But I've got redirects set up that handle old pages, redirecting them to the proper new pages using a 301. The website links changed because of a move to a framework, which allows them to be more consistent, but the old pages still have links out there pointing at them. Should I be worried about this? Is Google penalizing the site for this? (Using IIS 8, Tomcat, CF10, Win)
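
    A quick way to confirm what those old URLs actually return (the URL is a placeholder):

      # A clean redirect chain shows a 301 followed by a 200 at the new URL;
      # a soft 404 is typically a 200 (or a 301 landing on a generic page).
      curl -sIL http://example.com/old-page | grep -E '^(HTTP|Location)'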

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far, it's all nice and simple, but I can't get the MySQL based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

      DBDriver mysql
      DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

      <Directory />
          AllowOverride None
          Order allow,deny
          Allow from all
          AuthName 'CalDav'
          AuthType Basic
          AuthBasicProvider dbd
          Require valid-user
          AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
      </Directory>

    I've tested the query, it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which are proving hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-side communications as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication:

      varnishlog -d -c -m ReqStart:1427305652

    while this only shows the resulting backend communication:

      varnishlog -d -b -m TxHeader:1427305652

    Is there a one-liner to show the entire request?

    Read the article

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    When I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and now testing it), I keep getting some not-authenticated error(s). So, I've turned on FREB and this is what it says... I'm not sure what that means? Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous on and the rest off. Proof:

      C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication
      <system.webServer>
        <security>
          <authentication>
            <anonymousAuthentication enabled="true" userName="IUSR" />
          </authentication>
        </security>
      </system.webServer>

      C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication
      <system.webServer>
        <security>
          <authentication>
            <anonymousAuthentication enabled="true" userName="IUSR" />
          </authentication>
        </security>
      </system.webServer>

    Can someone please help?

    Read the article

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up IPTables. Mostly the configuration is working, but regardless of what I try I cannot allow localhost to access the local Apache only (i.e. localhost to access localhost:80 only). Here is my script:

      #!/bin/bash

      # Allow root to access external web and ftp
      iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
      iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT

      # Allow DNS queries
      iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT

      # Allow in and outbound SSH to/from any server
      iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
      iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT

      # Accept ICMP requests
      iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
      iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT

      # Accept connections from any local machines but disallow localhost
      # access to networked machines
      iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
      iptables -A OUTPUT -d 10.0.1.0/24 -j DROP

      # Drop ALL other traffic
      iptables -A OUTPUT -p tcp -d 0/0 -j DROP
      iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now I have tried many permutations and I'm obviously missing everything. I place the new rules above the inbound/outbound SSH ones, so it's not the precedence order. If someone could give me the heads up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
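
    For what it's worth, a hedged sketch of the usual fix: match the loopback interface explicitly and early, since localhost-to-localhost:80 traffic never matches the owner or 10.0.1.0/24 rules above:

      # Hypothetical addition: allow local processes to reach the local
      # Apache over the loopback interface only, before the DROP rules match.
      iptables -A OUTPUT -o lo -p tcp --dport 80 -j ACCEPT
      iptables -A INPUT  -i lo -p tcp --dport 80 -j ACCEPT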

    Read the article

  • Capture virtual machine traffic in Fiddler

    - by HtS
    I'm running Ubuntu in a virtual machine (the host machine is Windows 7). Is it possible to use Fiddler on the host machine to capture the traffic from the virtual machine? Seeing as the virtual machine's network must be passing through the host computer's NIC, can Fiddler capture the packets? (I don't know of any free alternative to Fiddler for Linux, except Tamper Data, but I need a bit more control.) Thanks.
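
    Fiddler can see this traffic if the VM is pointed at it as an explicit proxy; a sketch from the Ubuntu side (the host IP is a placeholder, Fiddler listens on port 8888 by default, and its "allow remote computers to connect" option must be enabled):

      # Hypothetical: route the VM's HTTP(S) traffic through Fiddler on the host.
      export http_proxy=http://192.168.1.10:8888
      export https_proxy=http://192.168.1.10:8888
      curl -v http://example.com/   # should now show up in Fiddler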

    Read the article

  • Erratic response times with Apache 2.0.52 on Red Hat 4

    - by Kevin
    Under load, we've noticed response times from Apache vary greatly for the same 7k image. It can range anywhere from .01 seconds to 25 seconds or greater. Unfortunately, due to corporate policy constraints, we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM modules. We use the worker model on a dual-core hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, while prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this type of range in your response times for simple static content? What else should I be looking into here?
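
    Before swapping MPMs, the spread itself is worth measuring; ApacheBench prints a percentile breakdown that turns "anywhere from .01 to 25 seconds" into a concrete distribution (the URL and concurrency level are placeholders):

      # Request the one 7k image repeatedly and read the "Percentage of the
      # requests served within a certain time" table at the end of the output.
      ab -n 2000 -c 50 http://blade01.example.com/images/static-7k.png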

    Read the article

  • How to maximize SEO of an internationalized website?

    - by alditis
    I am developing a website that should support multiple languages. I would like your opinions about the best way to do it, considering getting the most SEO value possible. I have these alternatives:

      Alternative 1: separate domains
          http://www.miweb.com     -- English by default
          http://www.miweb.com.fr
          http://www.miweb.com.es
          http://www.miweb.com.it

      Alternative 2: subdomains
          http://www.miweb.com     -- English by default
          http://fr.miweb.com
          http://es.miweb.com
          http://it.miweb.com

      Alternative 3: subfolders
          http://www.miweb.com     -- English by default
          http://www.miweb.com/fr
          http://www.miweb.com/es
          http://www.miweb.com/it

    I would like to know about your experiences. I hope the question is okay. Any suggestions, comments, objections or ideas are welcome. Thank you.
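
    Whichever structure wins, the language variants should be linked together so search engines treat them as one group; a sketch using hreflang annotations with the subfolder layout of Alternative 3:

      <!-- Hypothetical <head> fragment: declare each language variant of
           the same page so search engines index them as alternates. -->
      <link rel="alternate" hreflang="en" href="http://www.miweb.com/" />
      <link rel="alternate" hreflang="fr" href="http://www.miweb.com/fr/" />
      <link rel="alternate" hreflang="es" href="http://www.miweb.com/es/" />
      <link rel="alternate" hreflang="it" href="http://www.miweb.com/it/" />
      <link rel="alternate" hreflang="x-default" href="http://www.miweb.com/" />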

    Read the article
