Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.


  • Reverse proxy for only one internal server

    - by hrost
    I have configured a reverse proxy and it is working fine for one internal server, for example our mail server. Now I would like to know whether it is possible to configure a reverse proxy for only one server/application (in this case our web intranet). The problem is that the intranet calls other applications on the same intranet server and on other internal servers, and the only way I know of to publish those resources is to reverse-proxy every application server through our DMZ Apache. What I would like instead is for the DMZ reverse proxy to reach only the intranet, with the other applications being called by the intranet server itself rather than through the reverse proxy. I want this setup for security reasons, so that external access is allowed to only one server. The setup is Debian Squeeze with Apache 2.2. Is it possible, and how?
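
    A minimal sketch of what such a DMZ vhost might look like on Apache 2.2, publishing only the intranet application and nothing else (hostnames and paths are assumptions, and mod_proxy plus mod_proxy_http must be enabled):

        # /etc/apache2/sites-available/intranet-proxy (assumed name)
        <VirtualHost *:80>
            ServerName intranet.example.com

            # Forward requests only to the intranet application; no other internal
            # host is reachable through this vhost.
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://intranet.internal.lan/
            ProxyPassReverse / http://intranet.internal.lan/

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>

    The calls the intranet makes to the other applications then happen server-side from the intranet box and never need to be published through the DMZ.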

    Read the article

  • UDP Reverse Proxy

    - by user180195
    I have found a way to reverse-proxy to an external IP. Here is how a client's request is passed along: the client sends a request; the request reaches datacenter one; that datacenter, acting as a reverse proxy, forwards the exact same request to another datacenter; the second datacenter then processes the request. However, this only works with TCP/HTTP (I am currently looking at HAProxy). I am hosting game servers at the other datacenter (the one without the proxy) that use the UDP protocol. How can I set up a reverse proxy for UDP?
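
    HAProxy is indeed TCP/HTTP only. One commonly used option for relaying UDP is the nginx stream module, which supports UDP from nginx 1.9.13 onwards; iptables DNAT is the other usual route. A sketch with assumed port and backend address:

        # nginx.conf (stream module, nginx >= 1.9.13); port and backend IP are assumptions
        stream {
            upstream game_backend {
                server 203.0.113.10:27015;   # game server in the other datacenter
            }
            server {
                listen 27015 udp;            # front datacenter accepts the UDP traffic
                proxy_pass game_backend;
                proxy_timeout 10s;
                proxy_responses 1;           # datagrams expected back per request; tune per game
            }
        }

    One caveat: unless transparent proxying is set up, the game server will see the relay's address as the client source.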

    Read the article

  • Subversion: Avoid proxy use on intranet

    - by l0b0
    I'm trying to exclude all intranet hosts from proxy use, but it looks like http-proxy-exceptions just looks at the command line string, not what the host name in the string resolves to. Because of this, it looks like the only way to avoid proxy use on the IP 123.456.789.012*, which also answers to vcs, vcs.example.org, svn, svn.example.org, subversion and subversion.example.org is to list all of them, not just the IP. Is there some trick to make Subversion either resolve IP addresses before checking for proxy exceptions, or exclude everything that is not a fully qualified DNS name (that is, doesn't contain a dot)? * Yes, I know that's not a valid IP
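
    For reference, until a better trick turns up, a ~/.subversion/servers sketch that spells the aliases out, since the exceptions are glob patterns matched against the hostname exactly as it appears in the URL (the proxy name is an assumption and the placeholder IP is the asker's own):

        [global]
        http-proxy-host = proxy.example.org
        http-proxy-port = 3128
        # Matched against the host string in the URL, not the resolved address,
        # so every alias has to be listed.
        http-proxy-exceptions = 123.456.789.012, vcs, vcs.example.org, svn, svn.example.org, subversion, subversion.example.org

    An alternative is a [groups] section in the same file that matches the intranet hosts and simply leaves the proxy unset for that group.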

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, FileZilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp. I never had any problem until now, but I cannot connect to an Ubuntu 9.10 machine: user@myclient$ ssh -i .ssh/Ganymede_key root@ganymede.server.com [...] debug1: Host 'ganymede.server.com' is known and matches the RSA host key. debug1: Found key in /home/user/.ssh/known_hosts:14 debug2: bits set: 494/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: .ssh/Ganymede_key (0xb96a0ef8) debug2: key: .ssh/Ganymede_key ((nil)) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: .ssh/Ganymede_key debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: .ssh/Ganymede_key debug1: read PEM private key done: type RSA debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug2: we did not send a packet, disable method debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 Then it falls back to password authentication. If I disable password authentication on the remote machine my connection attempt just fails with a "Permission denied (publickey)." state. The same thing happens with sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead: 12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key" 12:08:00 Trace: Offer of public key accepted, trying to authenticate using it. 12:08:01 Trace: Access granted 12:08:01 Trace: Opened channel for session 12:08:01 Trace: Started a shell/command 12:08:01 Status: Connected to ganymede.server.com 12:08:02 Trace: CSftpControlSocket::ConnectParseResponse() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Retrieving directory listing... 12:08:02 Trace: CSftpControlSocket::SendNextCommand() 12:08:02 Trace: CSftpControlSocket::ChangeDirSend() 12:08:02 Command: pwd 12:08:02 Response: Current directory is: "/root" 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0) 12:08:02 Trace: CSftpControlSocket::ListSubcommandResult() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Directory listing successful Any thoughts? M
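
    A few things worth checking, given that the server is refusing the key OpenSSH offers while FileZilla (which keeps its own converted copy of the key) gets in. The usual culprits are server-side permissions or the two key files not actually belonging to the same pair; the paths below are taken from the logs above:

        # Compare fingerprints of the key ssh offers and the copy FileZilla uses
        # (convert the FileZilla one back with puttygen first if it is in PuTTY format)
        ssh-keygen -l -f ~/.ssh/Ganymede_key
        ssh-keygen -l -f ~/.filezilla/keys/Ganymede_key

        # On the server: publickey auth silently fails if these are too permissive
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # Watch the server side while retrying
        sudo tail -f /var/log/auth.log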

    Read the article

  • Blocking IPs in Nginx behind a proxy

    - by FunkyChicken
    I'm running an Nginx 1.2.4 web server here, and I'm behind a proxy run by my hoster to prevent DDoS attacks. The downside of being behind this proxy is that I need to get the real client IP from an extra header. In PHP it works great by doing $_SERVER[HTTP_X_REAL_IP] for example. Before I was behind this proxy I had a very effective way of blocking certain IPs: include /etc/nginx/block.conf and allow/deny IPs there. But now, because of the proxy, Nginx sees all traffic coming from one IP. Is there a way to get Nginx to read the client IP the way PHP does, from the X-Real-IP header?
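
    Nginx has this built in through the realip module (included in most distribution builds, otherwise it needs --with-http_realip_module). A sketch for the http {} context; the trusted range is an assumption and should be replaced with the hoster's proxy addresses:

        # Restore the client address from the header the front proxy sets
        set_real_ip_from 10.0.0.0/8;     # replace with the hoster's proxy range
        real_ip_header   X-Real-IP;      # or X-Forwarded-For, depending on the proxy

        # allow/deny rules in the included file then match the restored address again
        include /etc/nginx/block.conf;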

    Read the article

  • How to circumvent ISP Limiting "Unknown" traffic - (SSH)Proxy, VPN

    - by connery
    I am having issues using a proxy/VPN with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol, or by traffic they "know" and "don't know". I'll explain my findings so far below. Internet connection in Spain: ~400-420KByte/sec (speedtest.net). OpenVPN server in Sweden (pfSense): 100/100Mbit, LZO compression, TCP, tun, AES-128. Squid proxy server in Sweden (pfSense): 100/100 (same box as the VPN server), plain, no encryption, runs in stealth mode to hide the use of the proxy. NOT running OpenVPN or the Squid proxy, these are my findings: When I download a file from my pfSense box in Sweden, I get maximum speed. When I run speedtest.net and choose any European server (including Swedish), I get max speed. When I download a torrent (with a non-default port above 10K), I get limited to ~100KByte/sec, with encryption turned off. If I download something through https, I get max speed. Running either the Squid proxy or the VPN, these are my findings: When I download a file from my pfSense box in Sweden, I get ~100KByte/sec. When I run speedtest.net and choose any European server (including Swedish and Spanish), I get ~100KByte/sec. When I download a torrent, I get the same ~100KByte/sec limitation. When I download something through https, I get ~100KByte/sec. I verify the speeds above with the speedtest.net measurement and Firefox's own measurement, in addition to having bmon running in a terminal in the background. This way I am certain that the speeds I get presented are in fact correct. If I connect through a different ISP with the VPN or the Squid proxy, I get better speeds (400KByte/sec ++). In short: whenever I tunnel my traffic through Sweden, my Spanish ISP throttles the traffic. I thought tunneling it through Squid would solve the issue, since I would then no longer hide my traffic through encryption; this does not seem to be the case. Wget and fetch give the same result. I did not try 'nc', but I assume it would give the same result. Does anyone know how to circumvent this issue? I would very much like to get full speed with a Swedish IP, as this would let me stream TV at higher quality than today; 100KByte/sec just does not cut it quality-wise. Thanks for reading. Looking forward to your help.
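
    Since plain HTTPS is the only tunnelled-looking traffic that escapes the throttle, one experiment worth trying is making the VPN itself look like HTTPS by moving it to TCP port 443 (and, if that is still shaped, wrapping it in stunnel). A fragment of the changed server-side OpenVPN settings; whether the ISP's classifier falls for it is not guaranteed:

        # OpenVPN server config fragment on the pfSense box in Sweden (rest unchanged)
        proto tcp-server
        port 443          # looks like ordinary HTTPS to a port/protocol classifier
        dev tun
        comp-lzo
        cipher AES-128-CBC

    The client side would use "proto tcp-client" and a matching "remote <server> 443".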

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged into the web application without needing to provide any login/password (assuming they are already logged on to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication will be used for the rest of the users. From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and it can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, it would mean that an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
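
    For reference, the keytab described here is typically produced on the AD side with ktpass against the service account; a sketch with assumed principal and account names, requesting the AES-256 key type mentioned above:

        ktpass /princ HTTP/webapp.example.com@CORP.EXAMPLE.COM /mapuser CORP\svc-webapp ^
               /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out webapp.keytab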

    Read the article

  • Encrypt client connection with squid forward proxy using SSL

    - by Twisted Whisper
    I'm setting up a Squid forward proxy and I'm wondering if I could configure Squid in such a way that the connection from my web browser to squid is https regardless of whether the connection from squid to the destination website is http or https. In other words, I want my connection from my web browser to my forward proxy to be encrypted even though I'm just surfing normal http website through that proxy. Can it be done?
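
    It can, via https_port: Squid then expects TLS from the browser on that port regardless of the scheme of the site being fetched. A Squid 3.x sketch with assumed certificate paths; note that the browser also has to support speaking TLS to a proxy, which in practice usually means a PAC file entry of the form "HTTPS proxy.example.com:3129":

        # squid.conf - the client-to-proxy leg is TLS, whatever the target site uses
        https_port 3129 cert=/etc/squid/proxy-cert.pem key=/etc/squid/proxy-key.pem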

    Read the article

  • Proxy - Some general questions

    - by user68802
    Is it possible to accomplish the following scenario with a proxy server? We have one internet-facing server that we want to put behind a proxy for various reasons, and we want everything to work as before: when clients make a request, all connections are forwarded to the internal server, which sends its responses back through the proxy. We want to be able to switch the proxy to show a maintenance page whenever we are doing maintenance, and switch it back to forwarding traffic when we are done. We would also like to be able to keep forwarding traffic for users who are already using the sites, while showing the maintenance page to all new users for a while before showing it to everyone, in order to give existing users some time to finish their work before kicking them out.
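
    The simple on/off part can be done with an Apache reverse proxy and a flag file; a sketch (hostnames and paths are assumptions, and the rewrite/proxy ordering is worth verifying on the exact Apache version in use):

        <VirtualHost *:80>
            ServerName www.example.com

            # Never proxy the maintenance page itself
            ProxyRequests Off
            ProxyPass        /maintenance.html !
            ProxyPass        / http://internal-server.lan/
            ProxyPassReverse / http://internal-server.lan/

            # "touch /var/www/maintenance.flag" sends new requests to a local 503 page;
            # removing the file restores normal proxying, no reload needed.
            DocumentRoot /var/www
            ErrorDocument 503 /maintenance.html
            RewriteEngine On
            RewriteCond /var/www/maintenance.flag -f
            RewriteCond %{REQUEST_URI} !^/maintenance\.html$
            RewriteRule ^ - [R=503,L]
        </VirtualHost>

    The gradual part (keep existing users working, show the page only to newcomers) would additionally need a RewriteCond on an existing session cookie so that requests carrying it are still proxied during the grace period.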

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for http requests. My httpd.conf contains this: <IfModule mod_proxy> ProxyRequests On ProxyVia On <Proxy *> Allow from all </Proxy> (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens: I browse to http://www.cnn.com I see via sniffer that this is sent to Apache on the Mac Apache responds with its default home page ("It works!" is all this page says) So... Apache is not doing as expected -- it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the logfile indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
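
    The symptom (Apache answering with its own default page) is what happens when the request lands on the normal port-80 site, or when only the mod_proxy framework is loaded without the mod_proxy_http protocol handler. A sketch that loads both and moves the forward proxy onto its own port; the module paths are the OS X 10.6 defaults and may differ on other builds:

        # httpd.conf
        LoadModule proxy_module         libexec/apache2/mod_proxy.so
        LoadModule proxy_http_module    libexec/apache2/mod_proxy_http.so
        LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so  # for HTTPS CONNECT

        Listen 8080
        <VirtualHost *:8080>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1   # tighten or widen as needed
            </Proxy>
        </VirtualHost>

    The browser's proxy setting then points at port 8080 rather than 80, so ordinary vhost traffic and proxy traffic can no longer be confused.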

    Read the article

  • HTTPS Proxy which answers CONNECT with own certificate

    - by user1109542
    I'm configuring a DMZ which has the following scheme: Internet - Server A - Security Appliance - Server B - Intranet. In this DMZ I need a proxy server for HTTP(S) connections from the intranet to the Internet. The problem is that all traffic has to be scanned by the security appliance. For this I have to terminate the SSL connection at Server B, proxy it as plain HTTP to Server A through the security appliance, and then send it on as HTTPS into the Internet. Encryption is then maintained between the client and Server B, and between the target server and Server A; the communication between Server A and Server B is unencrypted. I know about the security risks and that the client will see a warning about the unknown CA of Server B's certificate. As software I want to use Apache web servers on Server A and Server B. As a first step I tried to configure Server B so that it serves as the endpoint for the SSL encryption, i.e. it has to establish the encryption with the client (answering HTTP CONNECT): Listen 8443 <VirtualHost *:8443> ProxyRequests On ProxyPreserveHost On AllowCONNECT 443 # SSL ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel debug SSLProxyEngine on SSLProxyMachineCertificateFile /etc/pki/tls/certs/localhost_private_public.crt <Proxy *> Order deny,allow Deny from all Allow from 192.168.0.0/22 </Proxy> </VirtualHost> With this proxy, only the CONNECT request is passed through and an encrypted connection between the client and the target is established. Unfortunately there seems to be no way to configure mod_proxy_connect to decrypt the SSL connection. Is there any way to accomplish that kind of proxying with Apache?
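
    Probably not with mod_proxy_connect, which can only tunnel the CONNECT and never terminate its TLS. The usual tool for answering CONNECT with a locally issued certificate is Squid's SslBump (or a dedicated intercepting proxy such as mitmproxy); the decrypted traffic could then be chained to Server A as a cache_peer parent. A rough sketch, noting that the exact directive set varies between Squid 3.x releases and the paths and hostnames are assumptions:

        # squid.conf on Server B (Squid 3.2-3.4 era syntax)
        http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/bumpCA.pem
        ssl_bump server-first all
        sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
        cache_peer server-a.dmz.example parent 8080 0 no-query default
        never_direct allow all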

    Read the article

  • Tomcat and HTTPS connect timeout (local Proxy resolves it)

    - by smas
    I have a web application with web services running on Tomcat. I've noticed that all web service calls that go over HTTPS time out. I run this app on my localhost inside my company network. When I redirect all my connections through Fiddler (a local proxy) everything works correctly, but I don't want to run Fiddler all the time. my computer -> [FIDDLER local proxy] -> [remote proxy] // WORKS my computer -> [remote proxy] // timeout How can I increase Tomcat's logging to get more technical detail than just "timeout"? Is there any other way to get more information about what blocks the HTTPS URLs?
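
    Fiddler "fixing" it usually means the JVM never learns about the remote proxy on its own, so the HTTPS sockets try to go direct and die at the corporate firewall. Two things help: pointing Tomcat's JVM at the proxy explicitly and turning on JSSE debugging, which is far more talkative than the bare timeout. The proxy hostname below is an assumption:

        # $CATALINA_HOME/bin/setenv.sh (create it if it does not exist)
        export JAVA_OPTS="$JAVA_OPTS \
          -Dhttp.proxyHost=remote-proxy.corp.example -Dhttp.proxyPort=8080 \
          -Dhttps.proxyHost=remote-proxy.corp.example -Dhttps.proxyPort=8080 \
          -Dhttp.nonProxyHosts=localhost|127.0.0.1|*.corp.example \
          -Djavax.net.debug=ssl,handshake"

    These properties only cover clients that use the standard java.net stack; some web service stacks carry their own proxy settings and need to be configured separately.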

    Read the article

  • Proxy Server suggestions

    - by Jon Menefee
    Here is my question, which I hope is not too general. I have a network with approximately 25 PCs, 3 servers and 25 IP cameras. I already have a firewall on the network and it works fine for what I need, but my client is asking whether there is a way to put a proxy server on the network to monitor where his employees go when they surf the Internet. He does not want to block them (at least not through the proxy server), but he wants to make sure they aren't going to sites that would compromise the networked PCs. I have looked at TMG and it is a little more than what I want. I hesitate to add another firewall to the system because of the IP security cameras that are already on the network. I just want to put a policy in AD that would make certain users (or computers) use a proxy server. Any suggestions for a good proxy server are welcome. Thank you

    Read the article

  • Proxy server software for Windows XP

    - by This is it
    Hi, Is there proxy software for Windows XP that can support HTTP, HTTPS, SMTP, IMAP, POP, FTP, etc. (the more the better)? We tried Apache mod_proxy, but FTP doesn't work through it (at least not all features). Maybe we don't even need a proxy server for the following requirements (we are open to suggestions): - a centralized access point to the internet for several users (proxy server?) - users use a browser (Firefox/IE - HTTP, HTTPS) and Outlook (SMTP, POP, IMAP) - logs to see where users went on the internet - the possibility to connect to an FTP server. The Windows XP machine that is used as the proxy server is a virtual machine, and all other users/clients are virtual machines as well (Hyper-V)... Thanks for suggestions. Cheers

    Read the article

  • Make my web-server traffic go through proxy?

    - by Eli
    I have a question about something that may or may not be possible. Basically, Comcast isn't going to let me host a website on their ISP connection unless I have a business account. So I spoke with my uncle, who is big into networking, and he told me to host my website through a proxy so Comcast cannot associate the website's IP with my IP. I have purchased a proxy from proxy-hub.com, but I cannot seem to figure out what to do next. I may be approaching this totally wrong and may need to create my own proxy server. Anybody have a clue? Thanks.
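
    A word of caution first: a proxy bought from a proxy list is a forward proxy for outgoing browsing and will not publish a site to the outside world. One approach that does hide the home IP is renting a small VPS and running a reverse SSH tunnel to it, so visitors hit the VPS and the traffic rides the tunnel back home. A sketch with assumed names; the VPS's sshd needs "GatewayPorts yes" for the forwarded port to be reachable publicly:

        # Run on the home machine: expose the local web server (port 80) as port 8080 on the VPS
        ssh -f -N -R 0.0.0.0:8080:localhost:80 user@vps.example.com

    A web server or reverse proxy on the VPS can then forward its port 80 to that tunnel endpoint, and DNS points at the VPS, never at the Comcast address.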

    Read the article

  • Turning off ASP.Net WebForms authentication for one sub-directory

    - by Keith
    I have a large enterprise application containing both WebForms and MVC pages. It has existing authentication and authorisation settings that I don't want to change. The WebForms authentication is configured in the web.config: <authentication mode="Forms"> <forms blah... blah... blah /> </authentication> <authorization> <deny users="?" /> </authorization> Fairly standard so far. I have a REST service that is part of this big application and I want to use HTTP authentication instead for this one service. So, when a user attempts to get JSON data from the REST service it returns an HTTP 401 status and a WWW-Authenticate header. If they respond with a correctly formed HTTP Authorization response it lets them in. The problem is that WebForms overrides this at a low level - if you return 401 (Unauthorised) it overrides that with a 302 (redirection to login page). That's fine in the browser but useless for a REST service. I want to turn off the authentication setting in the web.config: <location path="rest"> <system.web> <authentication mode="None" /> <authorization><allow users="?" /></authorization> </system.web> </location> The authorisation bit works fine, but when I try to change the authentication I get an exception: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. I'm configuring this at application level though - it's in the root web.config. How do I override the authentication so that all of the rest of the site uses WebForms authentication and this one directory uses none? This is similar to another question: 401 response code for json requests with ASP.NET MVC, but I'm not looking for the same solution - I don't want to just remove the WebForms authentication and add new custom code globally, there's far too much risk and work involved. I want to change just the one directory in configuration.
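
    If moving to .NET 4.5 is an option, the framework has a per-response switch for exactly this situation, and it can be flipped only for the REST path so the rest of the pipeline stays untouched; a sketch (module name and path test are assumptions):

        // Minimal HttpModule sketch: keep Forms auth, but let /rest answer with 401
        // instead of the login-page redirect. Requires .NET 4.5 for
        // HttpResponse.SuppressFormsAuthenticationRedirect.
        using System;
        using System.Web;

        public class RestNoRedirectModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostAuthenticateRequest += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    if (ctx.Request.AppRelativeCurrentExecutionFilePath
                           .StartsWith("~/rest", StringComparison.OrdinalIgnoreCase))
                    {
                        ctx.Response.SuppressFormsAuthenticationRedirect = true;
                    }
                };
            }

            public void Dispose() { }
        }

    The module is registered in web.config under system.webServer/modules (integrated pipeline) or system.web/httpModules (classic). On earlier framework versions the commonly cited workaround is the reverse: let FormsAuthenticationModule emit its 302 and convert it back to a 401 in EndRequest when the request path is under /rest.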

    Read the article

  • How to secure Firefox traffic (+DNS) through SOCKS proxy under Ubuntu 10.04?

    - by Maarx
    I'm using Ubuntu 10.04, and starting a SOCKS proxy with 'ssh -D', and setting Ubuntu to use it with "System - Preferences - Network Proxy". Firefox uses the proxy, and the proxy's IP appears when I visit a site like http://www.whatismyip.com/. My question is, is Firefox resolving DNS requests through this proxy? Is my web-browsing truly secure? (That is, until I exit the other end of the proxy. I know it's insecure after that.) (And I've verified the keys, I'm not being man-in-the-middled) (And--screw it. You know what I mean. Is it resolving DNS requests through the proxy?) I don't know how I would go about verifying such a thing for myself. Using additional hardware such as another debugging proxy is not an option. If Firefox isn't resolving my DNS requests through the SOCKS proxy, how do I go about fixing it?
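
    By default Firefox resolves names locally even when a SOCKS5 proxy is configured, so DNS queries can leak; the switch that changes this is the network.proxy.socks_remote_dns preference. A sketch of the user.js line (about:config works just as well):

        // user.js in the Firefox profile: push DNS lookups through the SOCKS tunnel
        user_pref("network.proxy.socks_remote_dns", true);

    An easy check without extra hardware: run "sudo tcpdump -ni any port 53" while browsing; if lookups are going through the tunnel, nothing should show up for the sites being visited.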

    Read the article

  • Programmatically Set Proxy Address, Port, User, Password through Windows Registry

    - by Fábio Antunes
    I'm writing a small C# application that will use Internet Explorer to interact with a couple of websites, with help from WatiN. However, from time to time it will also need to use a proxy. I came across Programmatically Set Browser Proxy Settings in C#, but this only enables me to enter a proxy address, and I also need to enter a proxy username and password. How can I do that? Note: It doesn't matter if a solution changes the entire system's Internet settings; however, I would prefer to change only IE's proxy settings (for any connection). The solution has to work with IE8 and Windows XP SP3 or higher. I'd like to be able to read the proxy settings first, so that I can undo my changes later. EDIT With the help of the Windows Registry, accessible through Microsoft.Win32.RegistryKey, I was able to apply a proxy with something like this: RegistryKey registry = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true); registry.SetValue("ProxyEnable", 1); registry.SetValue("ProxyServer", "127.0.0.1:8080"); But how can I specify a username and a password to log in at the proxy server? I also noticed that IE doesn't refresh the proxy details for its connections once the registry has changed; how can I get IE to refresh its connection settings from the registry? Thanks
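
    Two notes that may help. As far as I know, IE keeps no proxy username/password in the registry (a user:pass@host value in ProxyServer is not honoured), so the credentials either come from the 407 prompt, optionally saved by the user, or the application has to answer the proxy challenge itself. The "IE does not notice the registry change" part is usually addressed by telling WinINet that the settings changed, via InternetSetOption; a sketch of that refresh call:

        using System;
        using System.Runtime.InteropServices;

        static class IeProxyRefresh
        {
            private const int INTERNET_OPTION_SETTINGS_CHANGED = 39;
            private const int INTERNET_OPTION_REFRESH = 37;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                                         IntPtr lpBuffer, int dwBufferLength);

            // Call after writing ProxyEnable/ProxyServer so running WinINet sessions reload them.
            public static void Apply()
            {
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
            }
        }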

    Read the article

  • NTLM Authentication fails when behind Proxy server

    - by Jan Petersen
    Hi All, I've seen a number of posts about consuming web services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application in Java, using JAX-WS in NetBeans. I have a working prototype that can query the server for the authentication mode, successfully authenticate and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I run into trouble. I have sniffed the traffic and noticed the following: Behind proxy: # Result Protocol Host URL 1 200 HTTP host.domain.com /_vti_bin/Authentication.asmx 2 401 HTTP host.domain.com /_vti_bin/Webs.asmx 3 401 HTTP host.domain.com /_vti_bin/Webs.asmx 4 401 HTTP host.domain.com /_vti_bin/Webs.asmx 5 401 HTTP host.domain.com /_vti_bin/Webs.asmx Without proxy: # Result Protocol Host URL 1 200 HTTP host.domain.com /_vti_bin/Authentication.asmx 2 401 HTTP host.domain.com /_vti_bin/Webs.asmx 3 401 HTTP host.domain.com /_vti_bin/Webs.asmx 4 401 HTTP host.domain.com /_vti_bin/Webs.asmx 5 401 HTTP host.domain.com /_vti_bin/Webs.asmx 6 200 HTTP host.domain.com /_vti_bin/Webs.asmx When running the code from a network without a proxy server I successfully authenticate with the server, but when I'm behind the proxy server the traffic is cut off at the 5th message, and authentication does not succeed. I know from the Java docs that on Microsoft Windows platforms, NTLM authentication attempts to acquire the user credentials from the system without prompting the user's authenticator object; if these credentials are not accepted by the server then the user's authenticator will be called. Given that my Authenticator code is called only once, and only on the 5th attempt, it appears as if the connection is dropped behind the proxy server before my Authenticator object is used. Is there any way I can control the behavior of the authentication module so that it does not use the system credentials? I have put the source of the Java class files of a demo app, showing the issue, up at the following URL (it's a bit too long, even in the short demo form, to post here): link text Br Jan
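
    One thing to try, sketched below, is installing an explicit java.net.Authenticator before the port is used, so the stack has credentials to offer without falling back on the Windows logon; whether the transparent NTLM attempt with the system credentials can be suppressed entirely depends on the JRE, so treat this as a workaround rather than a guaranteed fix. Credentials and class name are placeholders:

        import java.net.Authenticator;
        import java.net.PasswordAuthentication;

        public class NtlmAuthSetup {
            // Call once, before the JAX-WS service/port is created.
            public static void install() {
                Authenticator.setDefault(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        if (getRequestorType() == RequestorType.PROXY) {
                            return null; // the intermediate proxy needs no credentials here
                        }
                        // NTLM convention: "DOMAIN\\user"
                        return new PasswordAuthentication("DOMAIN\\user", "secret".toCharArray());
                    }
                });
            }
        }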

    Read the article

  • Use a proxy in Python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4): import urllib2 def get_source_html_proxy(url, pip, timeout): # timeout in seconds (maximum number of seconds willing for the code to wait in # case there is a proxy that is not working, then it gives up) proxy_handler = urllib2.ProxyHandler({'http': pip}) opener = urllib2.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib2.install_opener(opener) req=urllib2.Request(url) sock=urllib2.urlopen(req) timp=0 # a counter that is going to measure the time until the result (webpage) is # returned while 1: data = sock.read(1024) timp=timp+1 if len(data) < 1024: break timpLimita=50000000 * timeout if timp==timpLimita: # 5 millions is about 1 second break if timp==timpLimita: print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data) return str(data) else: print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data) return str(data) # Now, I call the function to test it for one single proxy (IP:port) that does not support user and password (a public high anonymity proxy) #(I put a proxy that I know is working - slow, but is working) rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) print rez The error: Traceback (most recent call last): File "./public_html/cgi-bin/teste5.py", line 43, in ? rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy sock=urllib2.urlopen(req) File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/lib64/python2.4/urllib2.py", line 358, in open response = self._open(req, data) File "/usr/lib64/python2.4/urllib2.py", line 376, in _open '_open', req) File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/lib64/python2.4/urllib2.py", line 573, in lambda r, proxy=url, type=type, meth=self.proxy_open: \ File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open if '@' in host: TypeError: iterable argument required I do not know why the character "@" is an issue (I have no such in my code. Should I have?) Thanks in advance for your valuable help.
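
    The "@" error looks like a quirk of Python 2.4's urllib2 rather than anything to do with the page being fetched: when the proxy string has no scheme, splithost() returns None for the host, and the proxy handler then evaluates '@' in None, which raises exactly this TypeError. Passing a full URL to ProxyHandler avoids it; a trimmed sketch of the working call:

        import urllib2

        def get_source_html_proxy(url, pip):
            # Python 2.4 wants the scheme on the proxy value, e.g. "http://93.84.221.248:3128";
            # without it, urllib2 ends up doing "'@' in None" and raises the TypeError above.
            proxy_handler = urllib2.ProxyHandler({'http': 'http://' + pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            return opener.open(url).read()

        # print get_source_html_proxy(
        #     "http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128")

    For the timeout, socket.setdefaulttimeout(seconds) is available on Python 2.4 and is simpler than counting reads.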

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11 which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc. but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong): With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy which caches it and then serves it to the browser If my understanding is correct, then doesn't page caching result in less hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article

  • Firefox proxy dilemma

    - by Mike L.
    Any idea why, when using system proxy settings in Firefox, it cannot accept a proxy such as user:password@server:port? IE will allow and connect to a proxy in this format. Not only does Firefox not work, it does not prompt for the password or attempt to make a connection to the proxy; I basically get a "proxy server not found" error. Anybody know a way around this? I am working on a proxy-switching program for IE & Firefox, and I would like to use system-wide proxy settings. If I just store the server:port combination, Firefox prompts for the password, as does IE; then the credentials can be cached and it will not ask again. Maybe my only option is to programmatically cache the user/pass? Anybody know a way to do this? I am pretty sure IE stores them as HTTP basic authentication passwords and I can add them with AddCredential. After saving a password for a proxy in Firefox, it shows up in saved passwords in a format like "moz-proxy://server:port" - anybody know how to programmatically add a saved password to Firefox? Thanks

    Read the article

  • W2k8 RC1: Windows Media Servers (WMS) as proxy

    - by da_didi
    I will have one streaming server (W2k8, streaming protocol unknown [RTSP, MMS, HTTP]) and half a dozen streaming servers acting as proxies to save bandwidth. I have read the documentation and installed the modules, but I am unsure how to configure the proxies according to http://technet.microsoft.com/de-de/library/ee126142(en-us,WS.10).aspx - as a proxy or as a reverse proxy - and how to minimize the bandwidth needed between the origin server and the proxies. What is the best way to realize my setup? Any short how-tos? How can I get all players to use the proxy? Route all RTSP/MMS/HTTP requests through my proxy? Announce the proxy with DHCP leases? Thanks!

    Read the article
