Search Results

Search found 64048 results on 2562 pages for 'http post'.

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block while waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible?
    1. Set up a wildcard cert for *.domain.com.
    2. Whenever I need an image, generate a number from a hash of the filename mod 5 and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash means any given filename always uses the same subdomain, so the browser should still be able to cache the images.
    3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.
    I am looking for feedback about this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to produce any overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh. Thanks in advance for your feedback.
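
    A minimal sketch of the hashing idea described above, assuming five image subdomains (img0 through img4) and PHP on the page-generating side; the domain and bucket count come from the question, everything else is illustrative:

        <?php
        // Map a filename to a stable image subdomain so the browser cache keeps working.
        function image_url($filename) {
            $bucket = abs(crc32($filename)) % 5;   // same filename -> same bucket every time
            return "https://img{$bucket}.domain.com/" . rawurlencode($filename);
        }

        echo image_url("logo.png");   // e.g. https://img3.domain.com/logo.png

    On the Apache side, a single <VirtualHost *:443> with ServerAlias img*.domain.com (wildcard aliases are supported) and DocumentRoot /var/www/img would typically avoid having to define one virtual host per subdomain.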

    Read the article

  • Possible to find what IP address caused a 403.6 error?

    - by Abe Miessler
    I have a website set up in IIS that rejects all IP addresses except for a specific set. I thought I had added my machine's IP address to this list, but every time I try to access the site I get the error: "HTTP Error 403.6 - Forbidden: IP address of the client has been rejected. (Internet Information Services (IIS))". I would like to figure out what IP address it thinks I am coming from when it rejects me. I tried looking at the IIS logs and in the Security event logs but I'm not seeing anything. Suggestions on where to look?
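
    If W3C logging is enabled for the site, one place to look is the per-site log directory; a hedged example of searching those logs for 403.6 entries (the default path and site ID below are assumptions and may differ on your server):

        rem Search the W3C logs for requests that ended in status 403, substatus 6;
        rem the c-ip column in the matching lines is the address IIS saw.
        findstr /C:" 403 6 " C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log

    Note that the sc-substatus field has to be among the selected W3C logging fields for the " 403 6 " pattern to appear, and if the client sits behind a proxy or NAT device, the logged c-ip will be that device's address rather than the workstation's.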

    Read the article

  • Animated HTTP request visualisation on Apache

    - by Simon Bennett
    This is more a question to appease my memory, in trying to recall something I saw a while ago. I remember being introduced to a realtime server visualisation tool that showed the current requests Apache was handling in a kind of fireworks effect on screen: each request, or group of requests, would be shot across the screen in varying colours. I can't for the life of me remember what it was called, and hunting around here and on Google has left me empty-handed. Just wondering if anybody else is able to pluck this gem from memory and ease my pain! Thanks

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and five of my customers got infected with Zbot. This means the domains they host are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems it can't filter every request, and I don't really know why; I can still see the incoming connections in the access log, and they're consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now hit a 404 error (for the ones mod_security catches, I drop the connection). Are there any more ways to defend against these requests?
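
    One additional layer worth considering, sketched below as an assumption rather than a tested ruleset, is rate limiting the offending source IPs at the firewall so Apache never sees most of the flood; the thresholds are placeholders to adjust for real traffic:

        # Drop sources that open more than 20 HTTP connections per minute (per source IP).
        iptables -A INPUT -p tcp --dport 80 -m hashlimit \
            --hashlimit-above 20/min --hashlimit-burst 40 \
            --hashlimit-mode srcip --hashlimit-name http_flood -j DROP

    Tools such as fail2ban can achieve something similar by watching the access log for the 404-generating request pattern and adding temporary firewall bans.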

    Read the article

  • BIG IP - HTTPS Health Monitor setup

    - by djo
    We have a website with health-monitoring pages set up so we can take our servers in and out of the BIG-IP as we see fit. We have just moved onto BIG-IP, and the issue I've hit is with the health monitors for ports 80 and 443: the port 80 check works fine, but when I try to get the 443 check to look at our file, it fails. I am aware that hitting this page by IP address over HTTPS is going to cause a certificate error, but I would have guessed that the BIG-IP could simply be set up to accept the cert and carry on with the check. Is what I want to do possible? Also, is there a way of just using an HTTP monitor for HTTPS? Because if port 80 stops serving traffic and I use the same monitor for 443, it will stop traffic to that as well. Any help would be great! Thanks

    Read the article

  • Adding SSL to Heroku site post launch

    - by dineth
    I have a Rails API that I want to deploy on Heroku. $20/month for an SSL site on Heroku is a little steep given I am not earning anything from this app yet, so I am after advice on whether it is possible to add SSL sometime in the future. This is for an iOS app that I'm writing. Basically, the idea would be that I continue to use https://myapp.heroku.com through their piggyback SSL, and once I get some cash in, I transition to using https://www.myapp.com. At that point the API would still need to work for users who haven't upgraded to a new version of the app that points to the new domain. Anyone know if this is possible? Would both URLs continue to work? My gut feeling tells me this is not possible. Any advice would help. Thanks!

    Read the article

  • Nginx - basic http authentication on PHP-script

    - by half_bit
    I added a PHP script that serves as a "cgi-bin". Configuration:

        location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb) {
            gzip off;
            fastcgi_pass  127.0.0.1:9000;
            fastcgi_index cgi-bin.php;
            fastcgi_param SCRIPT_FILENAME    /etc/nginx/cgi-bin.php;
            fastcgi_param SCRIPT_NAME        /cgi-bin/cgi-bin.php;
            fastcgi_param X_SCRIPT_FILENAME  /usr/lib/$fastcgi_script_name;
            fastcgi_param X_SCRIPT_NAME      $fastcgi_script_name;
            fastcgi_param QUERY_STRING       $query_string;
            fastcgi_param REQUEST_METHOD     $request_method;
            fastcgi_param CONTENT_TYPE       $content_type;
            fastcgi_param CONTENT_LENGTH     $content_length;
            fastcgi_param GATEWAY_INTERFACE  CGI/1.1;
            fastcgi_param SERVER_SOFTWARE    nginx;
            fastcgi_param REQUEST_URI        $request_uri;
            fastcgi_param DOCUMENT_URI       $document_uri;
            fastcgi_param DOCUMENT_ROOT      $document_root;
            fastcgi_param SERVER_PROTOCOL    $server_protocol;
            fastcgi_param REMOTE_ADDR        $remote_addr;
            fastcgi_param REMOTE_PORT        $remote_port;
            fastcgi_param SERVER_ADDR        $server_addr;
            fastcgi_param SERVER_PORT        $server_port;
            fastcgi_param SERVER_NAME        $server_name;
            fastcgi_param REMOTE_USER        $remote_user;
        }

    PHP script:

        <?php
        $descriptorspec = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a pipe that the child will write to
        );

        $newenv = $_SERVER;
        $newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
        $newenv["SCRIPT_NAME"]     = $_SERVER["X_SCRIPT_NAME"];

        if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
            $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
            if (is_resource($process)) {
                fclose($pipes[0]);
                // Relay the CGI headers produced by the child, then stream its body.
                $head = fgets($pipes[1]);
                while (strcmp($head, "\n")) {
                    header($head);
                    $head = fgets($pipes[1]);
                }
                fpassthru($pipes[1]);
                fclose($pipes[1]);
                fclose($pipes[2]);
                $return_value = proc_close($process);
            } else {
                header("Status: 500 Internal Server Error");
                echo("Internal Server Error");
            }
        } else {
            header("Status: 404 Page Not Found");
            echo("Page Not Found");
        }
        ?>

    The problem with it, though, is that I cannot add basic authentication. As soon as I enable it for location ~ /cgi-bin, it gives me a 404 error when I try to request it. How can I solve this? I thought about restricting access to only my second server, where I would then add basic authentication over a proxy, but there must be a simpler solution. Sorry for the bad title, I couldn't think of a better one.
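
    A hedged sketch of what adding basic auth directly inside that same location might look like; the htpasswd path is an assumption, and the key detail is that the auth directives live in the location that already has the fastcgi_* settings, rather than in a separate location ~ /cgi-bin block that could win the regex-matching order and serve the request without any fastcgi_pass (one possible cause of unexpected 404s):

        location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb) {
            auth_basic           "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;   # assumed path, created with htpasswd
            gzip off;
            fastcgi_pass  127.0.0.1:9000;
            fastcgi_index cgi-bin.php;
            # ... the same fastcgi_param lines as above ...
        }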

    Read the article

  • How can I reroute a sub-domain to localhost + port number?

    - by urig
    I have several web applications running on my developer machine. They mimic our production web applications, which are hosted on sub-domains. For example:
    api.myserver.com - is mimicked by 127.0.0.1:8000
    www.myserver.com - is mimicked by 127.0.0.1:8008
    and so on... How can I make it so that, on my Windows 7 machine, HTTP calls to "api.myserver.com" (note the lack of a port number) are redirected to 127.0.0.1:8000, etc.? Note that this needs to apply both to client-side calls (in the browser) and server-side calls (from IIS to the Python development server and vice versa). Do I need to run a proxy locally to achieve this? Can you recommend such a tool?
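
    Because the hosts file cannot carry a port number, one common pattern is to point the hostnames at 127.0.0.1 and put a small reverse proxy on port 80 that dispatches by Host header. A sketch under that assumption (nginx shown here purely as an example of such a proxy):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1  api.myserver.com
        127.0.0.1  www.myserver.com

        # nginx.conf fragment: route each hostname to its backend port
        server {
            listen 80;
            server_name api.myserver.com;
            location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name www.myserver.com;
            location / { proxy_pass http://127.0.0.1:8008; proxy_set_header Host $host; }
        }

    If IIS already owns port 80 on the machine, either the proxy has to take over that port (with IIS moved elsewhere) or IIS itself, with ARR/URL Rewrite, has to do the host-based routing.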

    Read the article

  • HTTP Proxypass of subdomain

    - by enedebe
    I'm trying to install a proxy on my gateway so that everything that comes in for a subdomain, for example sub.mydomain.com, goes to an inside server on port 3000. I'm installing a Redmine server inside my network that has to be reachable from outside. Any idea how to do that? I'm thinking of httpd with ProxyPass, but I don't know how to match just that subdomain and proxy it. My gateway is currently a ClearOS machine. Thanks
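
    A minimal sketch of the kind of name-based virtual host this usually comes down to with Apache httpd and mod_proxy; the internal address 192.168.1.50 is a placeholder for the Redmine box:

        <VirtualHost *:80>
            ServerName sub.mydomain.com
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass        / http://192.168.1.50:3000/
            ProxyPassReverse / http://192.168.1.50:3000/
        </VirtualHost>

    Because it is a name-based virtual host, only requests whose Host header is sub.mydomain.com are proxied; other hostnames fall through to whichever vhost matches them.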

    Read the article

  • Proxy service like Apache httpd

    - by Aptos
    I'm currently trying to simulate my app as distributed servers, so I let them run on localhost:9000 and localhost:9001. I tried using the Apache load balancer, but it is really hard to configure on a Mac. My idea is that the second server, localhost:9001, is kept idle and requests are only redirected to it when the first server is down. Is there any good free program that can do that (except Apache httpd)? Extra question: my application is written in Java and maintains an in-memory object; is there any service that can synchronize that object between the 2 servers so each keeps the other's up-to-date state (the second one takes over the state of the first)? Is there any app that can support that? Thank you very much.
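
    For the failover part, one lightweight option is an nginx upstream with a backup server; a hedged sketch (port 8080 as the front-end listening port is an assumption):

        upstream app_cluster {
            server 127.0.0.1:9000;           # primary
            server 127.0.0.1:9001 backup;    # only used when the primary is unreachable
        }
        server {
            listen 8080;
            location / {
                proxy_pass http://app_cluster;
            }
        }

    Keeping the in-memory object in sync is a separate concern; it generally means externalizing the state (a shared cache or data grid) or using a replication library inside the JVM, rather than anything the HTTP proxy can do for you.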

    Read the article

  • HTTPS vs. VPN for communication between business partners?

    - by Andrew H
    A business partner has asked to set up a site-to-site VPN just so that a few servers can communicate with each other over HTTPS. I'm convinced this isn't necessary, or even desirable. To be fair it must be part of a wider policy, potentially even a legal requirement. However I'd like to convince them to simply offer an IP to us (and us only) and a port of their choosing for HTTPS. Has anyone had a similar experience, or had to come up with a cast-iron argument against a VPN? Allow me to expand a little - we have a web service that initiates a connection to the partner's corresponding service using an encrypted HTTP connection. The connection uses a client certificate to authenticate. The connection is firewalled so only our IPs can contact the service. So why is a VPN necessary?

    Read the article

  • Domain redirection to port on Windows Server 2008

    - by Rauffle
    I have a Windows server running IIS. I wish to run a piece of software that hosts a web interface on a non-standard HTTP port (let's say port 9999). I have static DNS entries on my router for two FQDNs, both of which point to the Windows server. I want requests for 'website1' to keep going to the IIS website on port 80, but requests for 'website2' to go to port 9999 instead, to be handled by the other application. How can I accomplish this? Right now I can get to the application by going to 'website1:9999' or 'website2:9999'.
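
    One way this is commonly handled is to let IIS answer both hostnames on port 80 and reverse-proxy the 'website2' traffic to port 9999. A hedged web.config sketch, assuming the URL Rewrite module plus Application Request Routing (ARR) are installed and proxying is enabled; 'website2.example.com' stands in for the real FQDN:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="website2 to port 9999" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^website2\.example\.com$" />
                  </conditions>
                  <action type="Rewrite" url="http://localhost:9999/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    The alternative is simpler but less transparent: an HTTP redirect from website2 to website2:9999, which changes the URL in the visitor's address bar.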

    Read the article

  • Can Squid 2.7 proxy gzipped content

    - by Tom Styles
    We have a forward proxy for our network which is Squid 2.7. This is managed for us by a third party. We noticed recently that http requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx 8000+ PCs) being uncompressed even though the browsers and server on each end were capable. We have asked the third party to look into this and they have said it is because Squid 2.7 does not support compression. I understand this to be true but I was under the impression that the compression happened on the webserver rather than the proxy. So... Can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how/why might it be configured such that the Accept-Encoding header is being removed?
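
    For the second question, a hedged guess at what to look for in squid.conf on a 2.x install (the directive name and its use here are assumptions about that particular setup, not a confirmed diagnosis):

        # Squid 2.x can strip or pass request headers with header_access rules.
        # A line like the following would explain Accept-Encoding disappearing:
        header_access Accept-Encoding deny all
        #
        # Allowing it through again (or removing the deny rule) lets the origin
        # server decide whether to gzip, which is where the compression happens:
        header_access Accept-Encoding allow all

    Squid itself does not need to understand gzip to proxy or cache a gzipped response; it generally just stores and forwards the bytes along with the response headers.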

    Read the article

  • Setting up a simple proxy

    - by waiwai933
    I'm going to China for a week, and I'd prefer to be able to watch YouTube while I'm there. Since it's blocked, I presume I'm going to need a proxy. I have a Mac and a Linux box at home that I can use, but I'm not sure how complicated setting up a proxy is. From what I understand, I should be able to do it with a browser that supports HTTP 1.1 CONNECT if I connect to my machine at home. Can I do this, and if so, what browser can I use, or if I have misunderstood something, do I have any other simple solutions?
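
    Rather than configuring the home machine as a full HTTP proxy, one simple approach (sketched here as a suggestion, not a guarantee that it will work from behind the filtering) is an SSH tunnel acting as a SOCKS proxy, assuming the Mac or Linux box at home runs an SSH server or can easily have one enabled:

        # On the laptop abroad: open a dynamic (SOCKS) tunnel through the home machine
        ssh -D 8080 -N user@home.example.org
        # Then point the browser's SOCKS proxy setting at localhost:8080
        # (in Firefox, also enable "Proxy DNS when using SOCKS" so lookups go through the tunnel).

    Any mainstream browser can use it this way; the HTTP/1.1 CONNECT support mentioned in the question only matters if the home machine runs an actual HTTP proxy such as Squid instead.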

    Read the article

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image, I can successfully use NAT to connect to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses resolve correctly (I'm even using Google's DNS). On the host OS I have full connectivity. From the virtual machine I can ping any device I want and all addresses resolve correctly, but within a browser I cannot reach any page via hostname or IP. It feels almost like port 80 is being blocked, but I can't find any reason that would be the case. If anyone has had this occur before, I would love some insight into the problem. I initially asked this on Stack Overflow and my eyes have now been opened to Super User. Thank you for any help you can provide.

    Read the article

  • Changing the URL of a WordPress Post Completely

    - by HollerTrain
    I have a WP site where I want only specific Pages to have a completely different URL. For example, the Page URL might be 'www.domain.com/page_name' and I would like it to be 'www.someotherdomain.com/page_name'. Is this possible with WP, and if so, how do I go about doing it? Any help would be greatly appreciated.
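
    For just rewriting the links WordPress prints, a hedged sketch of a small filter (e.g. in the theme's functions.php); the page ID and target URL are placeholders, and actually serving the content on the other domain additionally requires that domain to point at the same WordPress install or a redirect to be set up there:

        <?php
        // Remap the permalinks of specific Pages to another domain.
        add_filter('page_link', function ($link, $post_id) {
            $remapped = array(
                42 => 'http://www.someotherdomain.com/page_name',   // 42 is an assumed Page ID
            );
            return isset($remapped[$post_id]) ? $remapped[$post_id] : $link;
        }, 10, 2);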

    Read the article

  • Random timeout now and then

    - by KenavR
    Maybe this is too generic a question, but since we have had this issue for quite a while now, I'll give it a shot. We have some applications that use HTTP for the connection between the client (website or fat client) and the server. The computers that run these applications are in a network behind a firewall and a proxy; the server isn't inside the same network. The problem is that every now and then the HTTPS request times out and, depending on the client, the application "hangs" or does some other funky stuff. The problem is definitely inside our network, because if I try the applications outside our network they work fine. Can you give me a hint as to where I can most likely find the problem?

    Read the article

  • htaccess filesMatch exclusion

    - by Hikari
    I have the following directive in my .htaccess:

        <filesMatch "\.(gif|jpe?g|png|js|css|swf|php|ico|txt|pdf|xml|html?)$">
            FileETag None
            <ifModule mod_headers.c>
                Header unset ETag
                Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
                Header set Pragma "no-cache"
                Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
            </ifModule>
        </filesMatch>

    I copied that regex from someplace on the Web months ago. It should add those headers to any HTTP response that does NOT have one of those extensions, but it's not working: it's adding them to every response. I also need to create another directive to add Header set Cache-Control "max-age=3600, public" to responses for files that DO have them. Could anybody help me make proper FilesMatch regexes?
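
    A hedged sketch of the pair of directives being asked for, assuming mod_headers is available; note that the quoted pattern actually matches files that do have the listed extensions, which may be part of the confusion, so the "everything else" case below uses a negative lookahead (FilesMatch uses PCRE, so lookaheads work):

        # Long-lived caching for files that DO have these extensions
        <FilesMatch "\.(gif|jpe?g|png|js|css|swf|ico|txt|pdf|xml|html?)$">
            <IfModule mod_headers.c>
                Header set Cache-Control "max-age=3600, public"
            </IfModule>
        </FilesMatch>

        # No caching for everything else (any file NOT ending in one of those extensions)
        <FilesMatch "^(?!.*\.(gif|jpe?g|png|js|css|swf|ico|txt|pdf|xml|html?)$).*$">
            <IfModule mod_headers.c>
                Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
                Header set Pragma "no-cache"
            </IfModule>
        </FilesMatch>

    PHP is deliberately left out of the cached list in this sketch, since dynamic responses usually should not get the one-hour public cache; adjust the extension lists to taste.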

    Read the article

  • A friendly elaboration on where Iceweasel misses, as requested in a recent post

    - by v3x3n
    I would like to spell out where Iceweasel falls short, as you asked, in the hope that the answer is taken as valid: a default package should run efficiently for a novice user, especially when it forms their first impression of a new browser or distribution. First, let me say that I am very grateful for everything Debian has done and continues to do for its user base, and I am not complaining; I only hope that mentioning this will eventually help improve the disappointing experience Iceweasel gave me, in multiple ways, right from the start. I am sure a tremendous amount of hard work went into making it work as well as it does, and that effort continues to go into Iceweasel like the rest of the distribution. Having said that, I will keep my examples as simple as possible while giving enough detail for a full picture of the issue. (Thanks for understanding if I sound like a novice to Debian; I am, but I continue to learn and advance by trial and error each day, with help from the great minds available here and there.)

    My bone to pick with Iceweasel (and I dislike being a complainer when so much has been done for us end users) comes down to three problems I hit immediately during completely normal browsing:

    1. I went straight to Google Images, searched for a few web-sized images and saved a few (no more than 800 to 900 pixels in their dimensions, which I would imagine is a normal procedure). I experienced a horrendous, recurring freeze and lag that almost made me kill the process more than once per attempt, and it kept compounding until, a few attempts later, a single image save locked up the entire browser. I eventually gave up on this simple task with some aggravation, wondering whether I had done something wrong beforehand or during installation.

    2. Video sites, in this case YouTube, would not play any video at all, and again there was a slight freeze that only dissipated as long as I didn't press play or load another video. Very annoying, and possibly related to the first issue (which could come down to my own lack of configuration knowledge, but that troubles me if so).

    3. Finally, the straw that broke the weasel's back and led me here to look for an updated Firefox is the same point another user has already raised, the one that prompted you to ask what the problem was in the first place: Iceweasel (apparently a build of Firefox 17.x) leaves the user out of luck when installing the add-ons or plugins that support their favourite tasks or normal usage online. As that user clearly stated, there are almost zero options and barely any compatible plugins for such an outdated version.

    I can only imagine that if, as a novice or standard Linux user and a beginner to Debian (which I am all for, by the way), I am discovering major usability issues that hinder normal use right off the bat, there is probably a bundle or more of security vulnerabilities that would unravel if I kept giving this package further attention.
    That is a deal breaker on any account, unless all three of these issues are due to some very newbie-ish setting, or lack thereof, on my part; and if that is the case, it was not properly explained for beginners anywhere in the installation tutorials I followed (such as the LVM installer walkthroughs), which I felt lacked a clear explanation of the why and how rather than just a fix for a potentially broken out-of-the-box installation. That is probably beside the point in any case, as there is plenty of step-by-step support for that kind of thing; it is just very challenging alongside a job with deadlines and the need to learn a better, more customisable system than the one I grew up developing on.

    Thanks very much for taking the time to read this, and I hope it honestly gives you a few good and valid reasons why users want and seek a way around Iceweasel altogether. Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very dedicated work, being a Debian maintainer and super user. Much love for intelligence, freedom and sharing the secrets of the web; may it remain unregulated and a good place to grow and learn for generations to come. Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • Squid throws error, The requested URL could not be retrieved

    - by Supratik
    Hi. Sometimes I am getting the following error:

        The requested URL could not be retrieved

        While trying to retrieve the URL: http://groups.google.com/

        The following error was encountered:
            Unable to determine IP address from host name for groups.google.com

        The dnsserver returned:
            Refused: The name server refuses to perform the specified operation.

        This means that:
            The cache was not able to resolve the hostname presented in the URL.
            Check if the address is correct.

        Your cache administrator is root.

    What could be the reason for the above error?

    Regards
    Supratik
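
    The "Refused" comes from the name server Squid is using, not from Squid itself, so a first check is to query that resolver directly from the proxy box. A hedged example of the kind of checks meant here (file paths and the resolver address 192.0.2.53 are placeholders; use whatever /etc/resolv.conf or a dns_nameservers line in squid.conf points at):

        # Which resolvers is the box configured to use?
        cat /etc/resolv.conf
        grep dns_nameservers /etc/squid/squid.conf

        # Ask that resolver the same question Squid would ask; a REFUSED status
        # in the answer reproduces the problem outside of Squid.
        dig groups.google.com @192.0.2.53

    A REFUSED answer usually points at the DNS server's own access policy (for example, recursion not allowed for the proxy's IP) rather than at a Squid misconfiguration.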

    Read the article

  • HTTP transfer speeds start fast then slows to a crawl

    - by AnITAdmin
    We just got a new dedicated 1 gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused. We are pushing 110 Mbit/s, yet transfer speeds are really slow, and in fact here's how it happens: we connect, the speed starts out really fast, and then it quickly declines to 40 kB/s or less. What's going on? It seems the server just won't go above 120 Mbit/s. The files are all very large, 50 MB to 500 MB; could this be a factor? Again, CPU, RAM and UI responsiveness when accessing the server remotely all seem fine.

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website update cycle: it starts from the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while. That could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of is that, if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website, the end of the caching period would coincide with the update of the website. But changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
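
    The daily-decreasing max-age idea from the question can be automated rather than edited by hand; a minimal sketch in PHP, assuming the release date is known in advance (the date below is a placeholder) and that the CSS/JS responses pass through a script or handler that can set headers:

        <?php
        // Cap max-age at the time remaining until the next scheduled site update,
        // so every cached copy expires at (or before) the moment new content goes live.
        $nextRelease = strtotime('2012-07-01 00:00:00 UTC');             // assumed release date
        $maxAge      = min(30 * 24 * 3600, max(0, $nextRelease - time()));
        header("Cache-Control: max-age=$maxAge");

    The more common alternative is to sidestep expiry entirely by versioning the asset URLs (e.g. style.css?v=2012-06) whenever the site is updated, so a long max-age stays safe because updated pages reference new URLs.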

    Read the article
