Search Results

Search found 9634 results on 386 pages for 'proxy pattern'.

Page 96/386 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Home computer as ssh bridge

    - by pistacchio
    Hi, at work, due to our network configuration, I cannot SSH to external servers. We are in a Windows environment. I need to SSH to a server of mine, but I can only exit our LAN via port 88. How could I use my home Mac OS box to accept an HTTP connection from my work computer and route it via SSH to the server I need to connect to? Thanks.
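
    One way to sketch this (the host names and the router port-forward are assumptions, not from the question): expose the home Mac's sshd on port 88, then tunnel the final hop through it.

        # From the work machine, assuming the home router forwards
        # external port 88 to the Mac's sshd (port 22):
        ssh -p 88 user@home-mac.example.com -L 2222:target-server.example.com:22

        # In a second terminal, reach the real server through the tunnel:
        ssh -p 2222 user@localhost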

    Read the article

  • How to build an outbound load balancer with linux?

    - by matnagel
    We have a small house in the countryside with no fixed broadband, so we started with one mobile flat rate, and for two people with two computers it was too slow; now we have two flat rates for the two client machines. So I pay for two flat rates and theoretically have double the bandwidth. A local network in the house connects everything. But when I am alone, I wonder how I can use both connections at the same time. I want to build a solution where I can browse the web and page requests are spread between the two connections. I imagine there are expensive routers that can split traffic between two lines, but is there a good way to do it with Linux? The solution I am looking for would split the requests for even a single page (multiple images, CSS files, JavaScript files) between the two lines.
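
    For what it's worth, Linux can balance per-connection across two uplinks with an iproute2 multipath default route; since a single page loads its images, CSS and JavaScript over several TCP connections, those connections get spread over both lines. A sketch (the interface names and gateway addresses are invented):

        # Replace wan0/wan1 and the gateway addresses with the real uplinks.
        ip route replace default scope global \
            nexthop via 10.0.0.1 dev wan0 weight 1 \
            nexthop via 10.1.0.1 dev wan1 weight 1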

    Read the article

  • Changing Internet connection on ISA Server 2000

    - by garyb32234234
    Hi, we are getting a new internet connection installed and will need to unplug the old one and connect the new line to our ISA Server 2000. Will this be a simple swap-out job? We will be given a new IP, which I know I will have to enter on the external network card's TCP/IP page, along with the new default gateway. The ISP engineer said we may have to reset the ARP cache, and that if we don't know how, we may have to restart the ISA server. Has anyone any experience with this? The current connection is with the same ISP, but it was owned by the business park where we are located, and they linked up an Ethernet port to what I assume is their own router. Hope you can help; I know that ISA 2000 is somewhat less easy to use than the newer versions.
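
    For reference, on the Windows host underneath ISA Server the ARP cache can be flushed from a command prompt; this may be all the engineer meant (a sketch, and whether ISA itself also needs a restart is a separate question):

        rem Show the current ARP cache
        arp -a
        rem Delete all cached entries so the new gateway's MAC address is re-learned
        arp -d *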

    Read the article

  • proxy_ajp wildcards

    - by The Digital Ninja
    I need to set up Apache so that any site.com/ANYTHING/servlet/ANYTHING goes over AJP into Tomcat, but regular files still go through Apache. I have been messing around with this to no avail:

        <LocationMatch "./*/servlet/*">
            Order Allow,Deny
            Allow from all
            ProxyPass ajp://localhost:8009/
            ProxyPassReverse /
        </LocationMatch>

    This, on the other hand, works, but directs everything to our Tomcat instance:

        ProxyPass / ajp://localhost:8009/
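
    One approach that may fit (a sketch, untested against this setup): mod_proxy's ProxyPassMatch takes a regex, so it can express the /ANYTHING/servlet/ANYTHING shape directly while leaving all other URLs to Apache.

        # Forward any path containing /servlet/ to Tomcat over AJP;
        # everything else is still served by Apache.
        ProxyPassMatch ^/(.+/servlet/.*)$ ajp://localhost:8009/$1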

    Read the article

  • How to browse to a webserver which is reachable through the SSH port only

    - by GetFree
    I have a server at work which is behind the company's firewall, so it is reachable only through port 22 (SSH). I am able to connect to the server with PuTTY without problems. That server also runs Apache, listening on port 80 as usual. But I can't connect to the website with my browser, since port 80 (and every other port) is blocked by the company's firewall. Is there a way I can make my browser connect to Apache on that server so I can browse the site I'm working on? Thanks.
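
    The standard answer here is SSH local port forwarding (PuTTY offers the same thing under Connection > SSH > Tunnels). A sketch with OpenSSH; the server name is invented:

        # Forward local port 8080 over the SSH connection to the server's
        # own port 80, then browse http://localhost:8080/
        ssh -L 8080:localhost:80 user@work-server.example.com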

    Read the article

  • Having Trouble Getting My Apache Server Online (Node.js and Apache)

    - by Jeff Armingol
    I am new here. This is my situation: I am using the Node.js modules serialport2 and socket.io because I am trying to forward the data from my Arduino hardware through serial ports. On the server side I read the data, then forward it to the client side. Apache serves the HTML page, which is the client side. I am running Node.js on port 8000 and Apache on port 80. It runs fine when I view it in my browser at localhost:80; the data appears as expected. But when I tried to put my Apache server online using a free DDNS provider (http://www.noip.com/) and port 80, the web page loaded but no data appeared on it. What could the problem be here? I really need your expertise and advice. Thanks in advance!

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine. However if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run Varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin paths
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.

    Read the article

  • How to delete everything except .svn directories?

    - by Arek
    I have quite a complex directory tree. There are many subdirectories, and in those subdirectories, besides other files and directories, are ".svn" directories. Now, under Linux, I want to delete all files and directories except the .svn directories. I found many solutions for the opposite task: deleting all .svn directories in the tree. Can somebody give me the correct answer for deleting everything except .svn?
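
    A sketch with find(1), using -prune so it never descends into the .svn trees; test with -print in place of the delete actions first, since this is destructive:

        # Delete every regular file that is not inside a .svn directory.
        find . -name .svn -prune -o -type f -exec rm -f {} +

        # Then remove directories that have become empty; -prune again protects
        # the .svn trees. Rerun until it stops removing anything.
        find . -name .svn -prune -o -type d -empty -exec rmdir {} +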

    Read the article

  • Watching Netflix through a VPN

    - by Sergio
    Recently I bought a VPS from DigitalOcean and set up a PPTP VPN so I could watch US Netflix content from outside the US. Now that it is set up and all my traffic is going through the VPN, Netflix is still showing my home country's content. Pandora works, and when I look up my IP it shows I'm in NY, so I guess traffic is being routed correctly. I have also tried deleting Flash settings and cookies from the browser. Any ideas on what could be happening?
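
    Two quick checks worth running from the client while the VPN is up (a sketch; Netflix may also be relying on a cached session or on DNS-based geolocation rather than on the connection's source address):

        # The address websites see; this should be the DigitalOcean IP.
        curl ifconfig.me

        # Which resolver answers DNS; if it is your home ISP's resolver,
        # lookups are leaking outside the VPN.
        nslookup netflix.com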

    Read the article

  • Firefox hangs waiting for ssl.google-analytics.com

    - by squillman
    It seems that FF has a problem with 403 Access Denied responses from proxies, at least for ssl.google-analytics.com. I've found this post which describes my problem. I'm posting my workaround as an answer, but I'd also welcome more information if anyone has any, as I can't find anything! EDIT: Note that the current version of Firefox experiencing this issue is 3.0.10. EDIT: Still there for FF 3.5...

    Read the article

  • Alias a Virtual Directory or Application as Root on IIS 7

    - by manyxcxi
    Our current IIS setup has two applications running on different paths, (for example) http://server/sub-a and http://server/sub-b. I want to alias http://server/sub-a as the root so that just going to http://server/ will bring up the contents of sub-a. The problem I face is that when I initially set up a reverse proxy, it negatively affected http://server/sub-b. I know this is a fairly common problem; how have you solved it? 99.9% of my experience is with Apache, so I feel a tad lost in the GUI world of IIS.
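
    One IIS-native sketch, assuming the URL Rewrite module is installed (the rule name and paths are illustrative): a rewrite rule on the site root that maps everything except the two existing applications into sub-a.

        <!-- In the site's web.config -->
        <rewrite>
          <rules>
            <rule name="sub-a as root" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <!-- Leave requests that already target sub-a or sub-b alone. -->
                <add input="{URL}" pattern="^/sub-(a|b)" negate="true" />
              </conditions>
              <action type="Rewrite" url="/sub-a/{R:0}" />
            </rule>
          </rules>
        </rewrite>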

    Read the article

  • mod_rpaf problems with Nginx front, Apache back-end after Ubuntu upgrade

    - by Kenn
    I'm running an Nginx front-end for static files, and proxying to an Apache backend for PHP and Passenger, using Apache's mod_rpaf to set the correct remote IP address on the backend. Everything worked fine until I upgraded to Ubuntu 12.04 (Precise). Now Apache reports all connections as coming from 127.0.0.1. Here's the relevant configuration; nothing here changed with the upgrade.

    Nginx:

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    mod_rpaf:

        <IfModule mod_rpaf.c>
            RPAFenable On
            RPAFsethostname On
            RPAFproxy_ips 127.0.0.1 ::1
            RPAFheader X-Forwarded-For
        </IfModule>

    I'm using %{X-Forwarded-For}i in my Apache LogFormat directive and the access logs show the correct remote address, so I know Nginx is passing the address along properly. In a phpinfo() test, HTTP_X_FORWARDED_FOR shows the correct remote address, but REMOTE_ADDR is 127.0.0.1. This is reflected in PHP applications as well, such as WordPress comments. I've tried switching Nginx and mod_rpaf to X-Real-IP with no effect. Did something change that I missed? Relevant version info, everything installed from the Ubuntu repository: Nginx 1.1.19, Apache 2.2.22, mod_rpaf 0.6.

    Read the article

  • I have a problem with FTP service.

    - by Diego
    Hi, I followed your instructions and everything works. I have a DHCP server that assigns client IPs without a gateway. Internet access with the IE or Firefox browser works, but the FTP service doesn't. In squid.conf I have put this line:

        acl Safe_ports port 80 21 443 389 5307 8080 3144 8282 88 8443 20443 11438 1443 8050 30021 10443 4747 4774 1384

    Do I have to put a gateway in the DHCP server? Do you have any suggestions for me? Thanks for your help.
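
    Worth noting: with no default gateway, the clients cannot open FTP connections themselves, so ftp:// URLs have to go through Squid like everything else. A sketch of pointing a client at the proxy (the proxy address is an assumption):

        # Squid fetches ftp:// URLs on behalf of HTTP-proxy clients.
        export http_proxy=http://10.0.0.1:3128
        export ftp_proxy=http://10.0.0.1:3128
        curl ftp://ftp.example.com/pub/file.txt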

    Read the article

  • Do I still need to send the "Expires" header, or can I assume that web caches understand "Cache-Control"?

    - by chris_l
    I want to reduce the overhead caused by HTTP headers to a minimum, so I'd like to avoid the "Expires" header and use "Cache-Control" only, or maybe the other way around. (I'm planning to send very short HTTP responses to browsers, so the answer to this question doesn't fully apply here: my headers account for a significant percentage.) AFAIK, the "Cache-Control" header was standardized in HTTP 1.1, but are there still web caches/proxies that don't understand it? Note: this is a sub-question to my Stack Overflow (bounty) question.
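
    For comparison, this is roughly what the belt-and-braces version looks like on the wire; dropping the Expires line saves around 40 bytes per response (the values here are illustrative):

        HTTP/1.1 200 OK
        Cache-Control: public, max-age=3600
        Expires: Thu, 01 Jan 2026 01:00:00 GMT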

    Read the article

  • Are there other application layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days, ISA and now TMG have had several great features that I often want to deploy for my customers because of the enhanced functionality and security, but the cost of an additional server, a Windows Server license, and a TMG license is often too much to justify compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application-layer features: pre-authenticate incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or a basic-credentials popup)? Read the host headers of incoming HTTP traffic (even over HTTPS) to a single public IP and route packets to different internal servers based on that host header?
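
    The second feature, routing on the Host header, is something most open-source reverse proxies can do; a minimal nginx sketch (the names and internal addresses are invented):

        server {
            listen 80;
            server_name app1.example.com;
            location / { proxy_pass http://10.0.0.11; }
        }
        server {
            listen 80;
            server_name app2.example.com;
            location / { proxy_pass http://10.0.0.12; }
        }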

    Read the article

  • VCL - configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to get a far better hit rate. My current ratio from varnishstat is cache_hit=271, cache_miss=926. I ask because I've googled almost every site related to this topic, but 99.9% of the configurations don't work because of outdated code. Details of my setup: Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed, and Memcached for dynamic caching. Load speed is about 1.5s on the home page (Pingdom.com average results) with a US ping, 2.5s from Europe. The servers are located in Toronto, Canada. EDIT: This is my full VCL configuration: http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx for my IPs). This is the output of the command varnishtop -i TxHeader -I Cookie:

        TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1

    ("(*lots of numbers)" is just my addition to the output.) Any idea how to keep Varnish from hitting these cookies, and thereby not caching the home page? Thank you for any help!
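
    A common pattern for this situation (a sketch in Varnish 3.0 VCL, not taken from the poster's pastebin): strip the frontend cookie on requests that are safe to cache, so the Cookie header stops forcing misses.

        sub vcl_recv {
            # Never cache sessions Magento explicitly marks as dynamic.
            if (req.http.Cookie ~ "EXTERNAL_NO_CACHE") {
                return (pass);
            }
            # Static assets are cookie-independent: drop the cookie entirely.
            if (req.url ~ "\.(css|js|png|jpg|gif|ico)$") {
                unset req.http.Cookie;
            }
        }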

    Read the article

  • Is it possible to have an external server within a company's firewall?

    - by Jonathan
    Hi guys, I am sure this is server admin 101, but I am unsure of the answer and would love some help. I am a software developer; I have built an application for a client and am currently hosting it successfully on SliceHost. We are now coming out of beta, and the client wants to have the application within their firewall, but they do not want to deal with the headache of hosting and maintaining the server. Is there a way I can recommend that we put our server at SliceHost within their firewall? Is that an easy thing to do? Their specific requirements are: for my application to authenticate against their Active Directory, and to only allow access to the application from within their network. If that is not possible, what should I recommend to my client?

    Read the article

  • Route multiple subdomains on one external ip to multiple internal ips

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), a router with an external IP, and several virtual machines with internal IPs on a host computer. Now I want to route git.example.org to internal IP 10.0.2.1 and build.example.org to internal IP 10.0.2.2. How can I do this? I set up the router so that all traffic on port 80 goes to my host computer with internal IP 10.0.2.3, and installed Squid on that computer. I added the following lines to the squid.conf file:

        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 git.example.org
        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2
        cache_peer_domain server_2 build.example.org

    But this is not working for me. :( Any help appreciated. Regards, Nils. Update: Here is the solution for Apache: http://serverfault.com/a/273693
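
    The cache_peer lines look like a plausible Squid reverse-proxy setup; what such configurations also need, and what may be missing here, is an accelerator http_port and http_access rules for the hosted domains (a sketch in Squid 2.7/3.x syntax):

        # Accept requests for any virtual host on port 80.
        http_port 80 accel vhost

        # Allow the reverse proxy to serve exactly these sites.
        acl our_sites dstdomain git.example.org build.example.org
        http_access allow our_sites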

    Read the article

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that none of the applications which try to access unclassified websites, e.g. for downloading plugins, work. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to try to extend/customize Ruby's httpproxy.rb that comes with WEBrick; I would automate the manual login process whenever that login response page is detected. This sounds quite painful, and I think there might/should be simpler options?

    Read the article

  • nginx proxying different servers for different subdomains

    - by The.Anti.9
    I just set up an nginx server. On the same computer as nginx, Apache is running on port 8000 (this was previously set up), and I want the bare domain and the www. subdomain to go to the local Apache instance. But I want the stuff. subdomain to go to the server where I keep all my miscellaneous files (pictures, documents, etc.), which is also listening on port 80, at the IP 192.168.1.102. I tried configuring it, but when I go to my domain, I just get the "Welcome to nginx!" page. Here's what I have:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;

            include /etc/nginx/conf.d/*.conf;

            server {
                listen 80;
                server_name theanti9.com www.theanti9.com;
                access_log /var/log/nginx/access.log;

                location / {
                    proxy_pass http://localhost:8000;
                }
            }

            server {
                listen 80;
                server_name stuff.theanti9.com;
                access_log /var/log/nginx/access.log;

                location / {
                    proxy_pass http://192.168.1.102:80;
                }
            }
        }

    I'm not really sure what's wrong. Any suggestions?

    Read the article

  • Nginx location to match query parameters

    - by Dave
    Is it possible in nginx to have a location {} block that matches query parameters? For example, I want to pick up the "preview=true" in this URL and then instruct nginx to do several different things, all possible in a location block:

        http://192.158.0.1/web/test.php?hello=test&preview=true&another=var

    The problem I'm having is that my test patterns don't seem to match; it seems like I can only match the URL path itself? E.g.:

        location ~ ^(.*)(preview)(.*)$

    Or something along those lines?
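
    For what it's worth, location blocks only ever see the normalized URI path, never the query string, so a regex like the one above cannot match; query parameters are exposed through variables such as $args and $arg_name instead. A sketch:

        location /web/ {
            # $arg_preview holds the value of ?preview=... for this request.
            if ($arg_preview = "true") {
                # Branch here, e.g. mark preview responses.
                add_header X-Preview "on";
            }
        }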

    Read the article

  • Different Servers for incoming mails

    - by André
    Hi everybody, I'm not sure if what I want is possible, so I'd appreciate any pointers. I have full control over the infrastructure (DNS and servers). Currently I receive mail for domain.tld; the MX record for domain.tld is gw.domain.tld. gw then does some spam and virus checking and forwards the mail to the internal Exchange server. gw is a Proxmox Mail Gateway box (free license). Now what I want is to distribute mail for different recipients to other mail servers. Basically, I only want [email protected] and [email protected] to go to Exchange as before, but all others should go to a different, Linux-based mail server. Any idea how I could achieve this?
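
    Since Proxmox Mail Gateway is Postfix underneath, per-recipient routing is usually done with a transport map. A sketch (the recipient addresses and internal host names are placeholders, and PMG's own configuration management may need to be told about hand edits):

        # /etc/postfix/transport
        user1@domain.tld    smtp:[exchange.internal.lan]
        user2@domain.tld    smtp:[exchange.internal.lan]
        domain.tld          smtp:[linuxmail.internal.lan]

        # Compile the map and activate it:
        #   postmap /etc/postfix/transport
        #   postconf -e transport_maps=hash:/etc/postfix/transport
        #   postfix reload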

    Read the article

  • Squid stale-while-revalidate not working when max-age=0

    - by Wiliam
    Squid 2.7 always reaches the backend; the expected behaviour is to reach the backend using stale-while-revalidate only when the cache expires, not whenever the client sends max-age=0. Script:

        <?php
        header('Cache-Control: public, max-age=10, stale-if-error=200, stale-while-revalidate=500');
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        sleep(2);
        die("OK");

    And the squid config:

        # http_port public_ip:port accel defaultsite=<default hostname, if not provided>
        http_port 80 accel defaultsite=mydomain.com

        # IP and port of your main application server (or multiple)
        cache_peer 127.0.0.1 parent 8000 0 no-query allow-miss originserver name=main

        # Do not tell the world which squid version we're running
        httpd_suppress_version_string on

        # Remove the Cache-Control header for upstream servers
        header_access Cache-Control deny all
        #header_access Last-Modified deny all

        # log all incoming traffic in Apache format
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
        access_log /usr/local/squid/var/logs/squid.log combined all

        cache_effective_user squid

        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private

        icp_port 0
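
    One hedged guess: Squid 2.7's refresh_pattern also supports an ignore-reload option, which makes Squid disregard a client's reload/no-cache request headers; whether it neutralizes max-age=0 in this accelerator setup would need testing.

        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private ignore-reload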

    Read the article
