Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • How to use PAM to restrict HTTP access for some users?

    - by MaxB
    I've read that PAM can be used to restrict HTTP access for some users, but I can't figure out how to do it in Ubuntu 12.04. The /etc/security/time.conf man page contains this example ("All users except for root are denied access to console-login at all times"):

        login ; tty* & !ttyp* ; !root ; !Al0000-2400

    For this to work, /etc/pam.d/login needs to have the line:

        account requisite pam_time.so

    This example works, and I tried to adapt it to limit HTTP access from the console. I added

        http ; tty* & !ttyp* ; !root ; !Al0000-2400

    to /etc/security/time.conf, and created /etc/pam.d/http containing:

        account requisite pam_time.so

    This doesn't work: I can still use wget as non-root from the console.
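    A quick way to check whether the new "http" PAM stack is consulted at all is the pamtester utility (a sketch; pamtester is a separate package, and the user name is a placeholder):

        # Exercise the account phase of the "http" PAM service for a test user
        pamtester http someuser acct_mgmt

    If pamtester is denied as expected while wget still works, the stack itself is fine; the catch is that pam_time only affects programs that are linked against PAM and explicitly invoke the named service, and wget never calls PAM at all.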


  • Client-based program to track response time for online webservice

    - by Søren Haagerup
    I am helping a customer with general IT support, and they have a problem with a hosted web-based system being slow. The provider of the system blames the client's computer, and the client calls me for help. I blame the provider, but it is hard to get them to do something about it without rock-solid evidence. And every time the provider comes around for a TeamViewer session, everything of course runs smoothly. Does there exist a client program or browser plugin that tracks statistics about response time for specific web services?
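    Even without a dedicated plugin, a scheduled curl probe from the client's machine can collect exactly this kind of evidence (a sketch; the URL and log file are placeholders):

        # Append a timestamped timing sample: DNS, TCP connect, first byte, total (seconds)
        echo "$(date -Is) $(curl -o /dev/null -s -w '%{time_namelookup} %{time_connect} %{time_starttransfer} %{time_total}' https://hosted-system.example.com/)" >> webservice-timing.log

    Run from cron every few minutes, a log showing fast connect times but slow time_starttransfer points at the provider's servers rather than the client's computer or network.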


  • How can Facebook's session get mixed up because of NAT and/or Proxy

    - by Alex
    I have received reports from a customer (a very large company) about clients using Facebook. These clients claim that, once in a while, when they log in to Facebook they end up in someone else's session. I know the network is NATed and then proxied before reaching Facebook.com, but I'm not able to explain how this issue can occur. Is it possible that the proxy is not sending the right session back to the clients? How can they end up in someone else's session, given that Facebook sessions are cookie-based? Has anyone seen this before?
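    The classic way a shared proxy causes this is by caching a personalized response, or a Set-Cookie header, and replaying it to the next user behind the same NAT. That is only possible if the proxy treats the response as cacheable; per-user pages are normally protected by headers along these lines (a sketch of the relevant HTTP response headers, not Facebook's actual ones):

        HTTP/1.1 200 OK
        Cache-Control: private, no-cache, no-store, must-revalidate
        Vary: Cookie
        Set-Cookie: session=abc123; Path=/; Secure; HttpOnly

    A proxy that ignores Cache-Control: private, or strips Vary, can hand user A's cached logged-in page (or cookie) to user B.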


  • Can I enable gzip/deflate in IIS6 without restart?

    - by nibblebot
    I have gone through the steps of enabling static compression for my IIS 6.0 site:

    - enable it in IIS Manager
    - enable edit-while-running
    - add the extensions I need to compress directly to the metabase: js, css
    - wait for metabase.xml to update to the latest major history version

    It is still not compressing JS and CSS. Is there any way to enable this without iisreset?
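    For reference, the same metabase edits can be made from the command line (a sketch using adsutil.vbs, which ships with IIS 6.0 in C:\Inetpub\AdminScripts; the extension lists are examples):

        cscript adsutil.vbs set W3SVC/Filters/Compression/GZIP/HcFileExtensions "js" "css"
        cscript adsutil.vbs set W3SVC/Filters/Compression/DEFLATE/HcFileExtensions "js" "css"
        cscript adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcDoStaticCompression true

    One thing worth knowing: IIS 6 compresses static files in the background into its compression directory, so the first request for a file after a change is typically served uncompressed and only later requests get the gzipped copy, which can look like compression is not working.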


  • Serving static web files off a non-standard port

    - by Nimmy Lebby
    I'm close to deploying a Django project to production and I'm looking over some infrastructure decisions. Something that came up was serving static files with a different server such as lighttpd. However, we're starting off with a single dedicated server, so our only option would be to use a non-standard port for the static-file webserver. Is there precedent for this? I.e., does anyone "big" do this? Any particular port I should use or shy away from? Can anyone think of any downsides of going this route?
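    A minimal sketch of the setup in question (assuming lighttpd; the port, paths, and domain are placeholders):

        # /etc/lighttpd/lighttpd.conf -- static files on a non-standard port
        server.port          = 8081
        server.document-root = "/srv/www/static"

    with Django pointing its static URLs at that port:

        STATIC_URL = "http://www.example.com:8081/"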


  • View unknown content-types in Firefox?

    - by dirtside
    A certain URL on my server returns Content-Type: application/json. The filename ends with .phtml, so whenever I go to it, Firefox 3.5 asks me if I want to save it or open it with another program. But the answer is neither; I want to view it in Firefox as if it were text/plain. Suggestions? The Applications tab of the Preferences window doesn't offer this as an option for the "PHTML" type, which is now listed there (ever since I tried to open it with no external program).
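    If the server side is under your control, one workaround is to have it label the response as plain text so Firefox renders it inline (a sketch assuming Apache, and assuming the Content-Type comes from the server's type map rather than from application code; the filename is a placeholder):

        <Files "data.phtml">
            # Override the application/json label for this file
            ForceType text/plain
        </Files>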


  • Webmin apache on CentOS 6.3 results in 403 forbidden, permissions are OK

    - by Mario De Schaepmeester
    First of all, I will mention that the permissions are fine for the document root directory, which is /webapps/nimbus/www/public_html. The www directory contains a PHP application. If PHP turns out to be a problem, that's for later; I've tested with a plain HTML file and that does not work either. I just get 403 Forbidden responses. The permissions are 755 on webapps and all subdirectories. I've checked other questions here and on the internet, but they were all about those permissions. Whatever info you still need, just ask; I don't know what's relevant, as it's the first time ever I'm using Webmin or configuring Apache.
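    Two quick checks that often explain a 403 when the directory permissions themselves look right (a sketch; the path is the one from the question):

        # Verify every directory in the chain is traversable by the apache user
        namei -l /webapps/nimbus/www/public_html

        # CentOS ships with SELinux enforcing, which denies httpd access to
        # non-standard document roots even when the mode bits are 755
        getenforce
        ls -Zd /webapps/nimbus/www/public_html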


  • how do I setup Apache's Content-Encoding Header?

    - by Nick
    When attempting to validate my site with the W3C validator, it returns the error "Don't know how to decode Content-Encoding 'none'". Firebug confirms that my server is sending the header "Content-Encoding: none", but I can't find any directive in apache2.conf or in my vhost that sets the Content-Encoding header. Where does the directive go, and what should it be set to?

    UPDATE: On further examination it seems something is wrong with mod_deflate (gzip). It's zipping my CSS files just fine, but it is not zipping the HTML generated by my PHP scripts. I have:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css

    The pages show a MIME type of "text/html", but the content encoding is "none" and they aren't zipped. Perhaps these issues are related?
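    A quick way to see what the server actually negotiates for a given resource (a sketch; the URL is a placeholder):

        # GET the page with gzip allowed and dump only the response headers
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/page.php

    Note that "none" is not a registered Content-Encoding value at all, so something is setting it literally, e.g. a header() call in the PHP code or a Header directive in a stray .htaccess; grepping the application and config tree for "Content-Encoding" usually turns it up.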


  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of URL-shortener service. What happens is that I have a backend app server that takes in a request, does some computation, and returns a 301 redirect back through an nginx frontend:

        request ---> nginx ----> app_server

    What I want to be able to do is cache this returned 301 for the same request (a specific URL with a "short code"). Does nginx do this caching automatically? Or should I drop something like varnish in between nginx and the app_server? I could easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.
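    nginx does not cache proxied responses unless told to, but its proxy cache can be pointed at exactly this case (a minimal sketch; the zone name, path, and lifetime are placeholders):

        proxy_cache_path /var/cache/nginx keys_zone=redirects:10m;

        server {
            location / {
                proxy_pass        http://app_server;
                proxy_cache       redirects;
                # Cache only 301 responses, for a day; repeat requests for the
                # same short code are then answered without touching app_server
                proxy_cache_valid 301 24h;
            }
        }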


  • nginx rule to capture header and append value as query string

    - by John Schulze
    I have an interesting problem I need to solve in nginx: one of the sites I'm building receives inbound traffic on port 80 (and only port 80), which may have a certain header set in the request. If this header is present, I need to capture its value and append it as a query-string parameter before doing a temporary redirect (rewrite) to a different (secure) server, while passing the parameter and any other query-string params along. This should be very doable, but how!? Many thanks, JS
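    A sketch of one way to do this (the header name, target host, and parameter name are placeholders; nginx exposes any request header as $http_<lowercase_name>):

        server {
            listen 80;

            if ($http_x_custom_token) {
                # "redirect" issues a 302; because the replacement already has
                # arguments, nginx appends the original query string after them
                rewrite ^ https://secure.example.com$uri?token=$http_x_custom_token redirect;
            }
        }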


  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that none of the applications which access unclassified websites, e.g. for downloading plugins, work any more. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to extend/customize the httpproxy.rb that comes with Ruby's WEBrick, automating the manual login whenever that login response page is detected. This sounds quite painful, though, and I suspect there are simpler options?
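    For reference, the WEBrick route would hang a callback on the proxy and watch for the portal page (a rough sketch, assuming the portal is recognizable by a marker string and accepts a single form POST; every name and URL here is a placeholder):

        require 'webrick'
        require 'webrick/httpproxy'
        require 'net/http'

        # Invoked for every response passing through the proxy
        login_handler = proc do |req, res|
          if res.body.to_s.include?('<title>Web Access Login</title>')
            # Replay the portal's login form, then retry the original request
            Net::HTTP.post_form(URI('http://portal.corp.local/login'),
                                'user' => 'me', 'pass' => 'secret')
            res.body = Net::HTTP.get_response(URI(req.request_uri.to_s)).body
          end
        end

        WEBrick::HTTPProxyServer.new(Port: 8080,
                                     ProxyContentHandler: login_handler).start

    Point the affected tools at localhost:8080 as their HTTP proxy and the detour happens transparently.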


  • Enforcing a specific order for cookie headers

    - by Paul
    We have an application that cares about the order of Cookie headers. It shouldn't, since this isn't mandated by the standards, and indeed we're getting the headers in various different orders. So we would like to rewrite the headers in Apache so that the cookie headers always appear in a specific order. Is there any way of doing this? An ideal solution would deal specifically with cookie headers, but something that lets us mess with header order more generally would do too.
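    Apache has no general "reorder headers" directive, but the Cookie header can be captured and rebuilt in a fixed order (a sketch using mod_setenvif and mod_headers; the two cookie names are placeholders, and this assumes only those cookies matter):

        # Capture each cookie's value into an environment variable
        SetEnvIf Cookie "(?i)userid=([^;]+)"    COOKIE_USER=$1
        SetEnvIf Cookie "(?i)sessionid=([^;]+)" COOKIE_SESSION=$1

        # Rewrite the Cookie request header with the values in a fixed order
        RequestHeader set Cookie "userid=%{COOKIE_USER}e; sessionid=%{COOKIE_SESSION}e" env=COOKIE_SESSION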


  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:

        <Directory "/var/www/folder">
            <Files "index.php">
                Order deny,allow
                Allow from all
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>

    I visit my server at 127.0.0.1/folder, but I can GET and POST the file just like normal. I've also tried reversing the order (Order allow,deny), as well as Require, <LimitExcept>, and <Limit>. How can I only allow POST requests to be processed by one file in a folder?
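    Worth noting: with Order deny,allow, Allow directives are evaluated last and win, so the unconditional Allow from all overrides the Deny inside <LimitExcept> for every method. A variant that denies by default and opens only POST (a sketch in the same Apache 2.2 syntax as the question):

        <Directory "/var/www/folder">
            <Files "index.php">
                Order allow,deny
                <Limit POST>
                    Allow from all
                </Limit>
            </Files>
        </Directory>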


  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. I've checked the modem and there is no firewall or other unusual feature enabled. But ever since I arrived at this apartment (the broadband was already installed), I cannot log in to Twitter, nor update any of my WordPress blogs (I can browse them and log in, but cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use here out to, say, a McDonald's or some other wifi access point, both sites work fine again. Does anyone know what could be preventing access to the pages in question? The only thing common to them is the POST submission they depend on, and POST forms work fine on other sites...
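    One way to test POST behaviour on the line independently of any browser (a sketch; httpbin.org is a public request-echo service, and the payload sizes are arbitrary):

        # Small POST -- should return 200 quickly
        curl -s -d 'probe=1' https://httpbin.org/post -o /dev/null -w '%{http_code} %{time_total}\n'

        # Larger POST -- if this one stalls while the small one works, a likely
        # suspect is an MTU/path-MTU problem on the ADSL (PPPoE) link
        head -c 4000 /dev/zero | base64 | curl -s --data-binary @- https://httpbin.org/post -o /dev/null -w '%{http_code} %{time_total}\n'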


  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache-control headers. In particular:

    If we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent modified date (or ETag value)?

    Right now we're not setting any cache headers. If I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy stale, and from that point forward stop caching it?

    Would a best practice be to set a cache-control lifetime of 1 week and then, 8 days before I know I am going to make a change, lower it to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?
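    For reference, the IIS 7.5 setting behind all of this lives in web.config (a sketch; the max-age value is an example):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Sends Cache-Control: max-age=604800 (7 days) with static content -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>

    As for expiring an item that is already sitting in users' caches: headers can't reach it, which is why the common practice is to change the resource's URL instead (e.g. style.css?v=2), so browsers treat it as a brand-new resource.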


  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites where, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their own server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I misunderstanding the whole asset-management concept?


  • Is it possible to do a 301 redirect AND redirect to the requested resource?

    - by Pure.Krome
    For one of our projects, we're doing a rebranding of the website name, logo, etc. As such, we need to issue 301 Moved Permanently redirects for all users from the old domain to the new domain. With IIS7 that's pretty simple: we just create a new website that redirects all traffic for a host-headered domain to the new one. But this loses the originally requested resource. E.g.:

        Old domain:  www.OldDomain.com
        New domain:  www.NewDomain.com
        User visits: www.OldDomain.com/user/PureKrome -> 301 -> www.NewDomain.com

    Notice how it goes to the new domain but NOT to /user/PureKrome? How can I redirect to the new domain and keep the originally requested resource? I'm guessing URL Rewrite for IIS7 might help? Also, what happens if I want to do this:

        Current domain 1: Domain.com         Correct domain 1: www.Domain.com
        Current domain 2: AnotherDomain.com  Correct domain 2: www.AnotherDomain.com

    Is it also possible to have those in the same IIS website, so that any URL to domain.com 301s to www.domain.com? Right now I'm making two IIS websites, each with a hardcoded 301 (which still means I lose the original resource request). Help!
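    The URL Rewrite module can indeed carry the path across the redirect (a sketch of a web.config rule using the domains from the question; the rule name is arbitrary):

        <rewrite>
          <rules>
            <rule name="Redirect old domain, keep path" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^www\.olddomain\.com$" />
              </conditions>
              <!-- {R:1} is the originally requested path, preserved in the target -->
              <action type="Redirect" url="http://www.NewDomain.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    A second rule with pattern ^anotherdomain\.com$ (no www) and a matching target handles the naked-domain case within the same site.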


  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?


  • google webmaster soft 404 on 301

    - by Daniel
    Looking through Google Webmaster Tools, I see that my pages are generating soft-404 errors (https://support.google.com/webmasters/answer/181708?hl=en). Google says: "We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page." But I've got redirects set up that send old pages to the proper new pages with a 301. The site's links changed because we adopted a framework, which makes them more consistent, but links to the old pages are still out there. Should I be worried about this? Is Google penalizing the site for it? (Using IIS 8, Tomcat, CF10, Win)
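    It's worth verifying what those old URLs actually return on the wire (a sketch; the URL is a placeholder):

        # Follow redirects and show the status line and Location of each hop
        curl -sIL http://example.com/old-page | grep -iE '^(HTTP|Location)'

    A clean 301 hop ending in a 200 at the new URL is what Google wants to see; a "soft 404" is usually flagged when the final page returns 200 yet looks like an error page, or when many old URLs all redirect to one generic page such as the home page.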


  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which are proving hard to pinpoint. varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-side communication as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would let me filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following shows only the client-side communication:

        varnishlog -d -c -m ReqStart:1427305652

    while this shows only the resulting backend communication:

        varnishlog -d -b -m TxHeader:1427305652

    Is there a one-liner to show the entire request?
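    Short of a built-in join, the two invocations from the question can at least be run back to back (a sketch; the XID is the one above, and the tags are the same ones the question already uses: ReqStart on the client side, TxHeader matching the X-Varnish header on the backend request):

        for tag in ReqStart TxHeader; do
            varnishlog -d -m "$tag:1427305652"
        done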


  • Amazon S3 not sending Content-Type header

    - by Luke____
    I have an application that downloads content from various sources and relies on the "Content-Type" header being set on images. The majority of web servers do this correctly, but it appears an Amazon S3 server is not setting the Content-Type. I assume Amazon's servers are configured correctly, so what could be the problem? Were these images not uploaded correctly? Or should I not be relying on the Content-Type being set? Thanks
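    S3 serves back exactly the Content-Type that was supplied when the object was uploaded, defaulting to binary/octet-stream when none was given, so this points at the uploader rather than the server. Setting it explicitly at upload time (a sketch with the AWS CLI; bucket and key are placeholders):

        # Upload with an explicit Content-Type stored on the object
        aws s3 cp photo.jpg s3://my-bucket/photo.jpg --content-type image/jpeg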

