Search Results

Search found 9634 results on 386 pages for 'proxy pattern'.

Page 144/386 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Is HAProxy able to pass SSL requests to Apache + mod_ssl?

    - by Josh Smeaton
    Most of the documentation I've read regarding HAProxy and SSL seems to suggest that SSL must be handled before it reaches HAProxy. Most solutions focus on using stunnel, and a few suggest Apache + mod_ssl in front of HAProxy. Our problem, though, is that we use Apache as a reverse proxy to a number of other sites which use their own certificates. Ideally, what we'd like is for HAProxy to pass all SSL traffic to Apache, and let Apache handle either the SSL or the reverse proxying. Our current setup: Apache Reverse Proxy -> Apache + mod_ssl -> Application. What I'd like to do: HAProxy -> Apache Reverse Proxy -> Apache + mod_ssl -> Application. Is it possible to do this? Is HAProxy capable of forwarding SSL traffic to be handled by a server BEHIND it?
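
    For reference, HAProxy running in TCP mode never terminates TLS and simply forwards the raw SSL stream to whatever sits behind it. A minimal haproxy.cfg sketch (the backend name and the 10.0.0.10 address are placeholders, not taken from the question):

        frontend https-in
            bind *:443
            mode tcp
            option tcplog
            default_backend apache-ssl

        backend apache-ssl
            mode tcp
            server apache1 10.0.0.10:443 check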

    Read the article

  • Nginx ssl redirection of images

    - by krishna raj
    I am trying to set up nginx as a reverse proxy for a Tomcat server over an SSL connection. I want the client's browser to load my Tomcat application when the nginx reverse proxy's IP is called from the client's browser. My Tomcat application's address is 192.168.25.25 and the nginx proxy's address is 192.168.25.50. In my nginx.conf file I have added these lines:

        location / {
            proxy_pass https://192.168.25.25:443/myapp/;
            proxy_redirect https://192.168.25.25/myapp/ https://192.168.25.25/;
        }

    Some of the images in my application are stored at 192.168.25.25/images/. These directories can't be accessed at the moment, because proxy_pass is set to 192.168.25.25:443/myapp. Is there a way to access the images directory as well, without changing proxy_pass? Thanks in advance.
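
    One possible approach (a sketch only; it assumes the Tomcat server really does serve /images/ at its root, as described above) is to add a second location block alongside the existing one:

        location /images/ {
            proxy_pass https://192.168.25.25:443/images/;
        }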

    Read the article

  • Make Apache listen on multiple IPs

    - by Enrique Becerra
    Hi, I'm in a big LAN which is behind a proxy/firewall. I'm working with an Apache/PHP/MySQL application, which is hosted on a small server beside my workstation. This server is also connected to the LAN and sits behind the proxy. The server has a local IP assigned: 10.64.x.x. It also has a public IP assigned (or redirected from within the proxy/firewall), which is: 200.41.x.x. I can't access the public IP from the LAN, but I can ping the public IP from outside the building. How should I configure Apache so that it also listens on the public IP and serves port 80 to people accessing from outside the building? It is currently set to Listen 10.64.x.x:80. Thanks a lot in advance.
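
    For reference, Apache accepts multiple Listen directives, so a sketch of the relevant httpd.conf/ports.conf lines could look like the block below. One assumption to note: this only works if 200.41.x.x is actually configured on one of the server's interfaces; if the public address exists only as a NAT rule on the firewall, listening on all interfaces is usually what is wanted instead.

        Listen 10.64.x.x:80
        Listen 200.41.x.x:80
        # or, to bind every interface:
        # Listen 80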

    Read the article

  • Need help troubleshooting HTTPS webserver error - SSL handshake failed

    - by DerNalia
    I followed this guide: http://hints.macworld.com/article.php?story=20041129143420344. Here is my virtual host definition:

        <VirtualHost *:443>
            SSLEngine on
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            CacheDisable *
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            DocumentRoot "/Users/me/projects/myproject/public"
            ServerName ssl.mydomain.com
            ServerAlias *.ssl.mydomain.com
            SSLCertificateKeyFile "/private/etc/apache2/certs/webserver.nopass.key"
            SSLCertificateFile "/private/etc/apache2/certs/newcert.pem"
            SSLCACertificateFile "/private/etc/apache2/certs/demoCA/cacert.pem"
            SSLCARevocationPath "/private/etc/apache2/certs/demoCA/crl"
            ErrorLog "/Users/me/Desktop/ssl.log"
            ProxyPass / https://localhost:3002/
            ProxyPassReverse / https://localhost:3002
            ProxyPreserveHost on
        </VirtualHost>

    When I try connecting to the server via the web browser, I get this error:

        [Thu Feb 02 16:50:40 2012] [error] (502)Unknown error: 502: proxy: pass request body failed to 127.0.0.1:3002 (localhost)
        [Thu Feb 02 16:50:40 2012] [error] [client 96.11.81.39] proxy: Error during SSL Handshake with remote server returned by /session/new
        [Thu Feb 02 16:50:40 2012] [error] proxy: pass request body failed to 127.0.0.1:3002 (localhost) from 96.11.81.39 ()

    How do I debug / fix this?
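
    One guess worth checking (an assumption, not a confirmed diagnosis): since the backend at localhost:3002 is itself HTTPS, mod_ssl may be rejecting its certificate (self-signed, or a CN that doesn't match "localhost"). If your Apache version supports these directives, relaxing the proxy-side checks is a quick way to test that theory:

        SSLProxyVerify none
        SSLProxyCheckPeerCN off
        SSLProxyCheckPeerExpire off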

    Read the article

  • IIS7 URL Rewrite - Rewrite CSS files

    - by user1231958
    I'm trying to rewrite certain CSS files with some rules, so that every instance of a link in the CSS (as in background: url("/myuri.jpg")) is replaced with a prefixed one (as in background: url("/zeus/myuri.jpg")). These are the rules:

        <rule name="ReverseProxyOutboundRule2" preCondition="IsCSS" enabled="true" stopProcessing="false">
            <match filterByTags="None" pattern="url\(&quot;(.*)&quot;\)" />
            <action type="Rewrite" value="url(&quot;/zeus{R:1}&quot;)" />
            <conditions>
                <add input="{URL}" pattern="/zeus" />
            </conditions>
        </rule>
        <preCondition name="IsCSS">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/css" />
        </preCondition>

    However, only one URL is being replaced this way; the rest are somehow ignored. Thank you beforehand.

    Read the article

  • Data transfer to my own computer from a website hosted by the same computer

    - by gunbuster363
    Hi all, I have a question about hosting a web site on my own computer, say computer A, using any web server application, e.g. Apache. I connect to my website on this very same computer A and request to download a file of 1 MB; in other words, I am connecting to my own computer and want to download a file that is on my computer. In addition, my internet access goes through a proxy server acting as a gateway. The questions are: does a real file transfer take place, or is it just a local copy between two locations? Will my data packets go through the proxy, out to the internet, and come back through the proxy to me? Thanks to everyone looking at this question.

    Read the article

  • VirtualBox networking problem, host XP, guest Debian

    - by Silma
    Hi, I'm trying to set up a development environment in a virtual machine on my laptop, with Debian as the guest OS. I have both LAN and WLAN available on the host machine, yet I can't connect to the internet from the guest using either. As I said, the host OS is Windows XP and the guest OS will be the latest Debian; we downloaded the business-card net install, so we need internet access from the beginning. Besides that, we need the virtual machine to be visible on the local network (for my fellow developers). We tried host-only networking, NAT and bridging, with a proxy (the local network uses a proxy to connect to the internet) and without a proxy; nothing seems to work. What else can we do? Thanks a lot.

    Read the article

  • SSH to all machines behind a router

    - by Luc
    Hello, I have several machines on my LAN. One is used as an HTTP proxy to reach web sites hosted on the others (that's working fine now, thanks to ServerFault). On my router, port 22 is NATed to this proxy machine. I would like to be able to access the other machines from the internet with something like: ssh user@first_machine.my_domain.tld or ssh user@second_machine.my_domain.tld. Could I use the proxy machine to 'filter' the incoming SSH requests and route them to the correct machine? (In the same way it's possible to do for web sites, using a mix of mod_proxy and NameVirtualHost in Apache.) Thanks a lot, Luc
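
    The SSH protocol has no equivalent of name-based virtual hosts, so the usual trick is to use the proxy machine as a jump host rather than a name-based router. A sketch of a client-side ~/.ssh/config (the Host entry, the internal 192.168.1.11 address and the public name that reaches the NATed box are all hypothetical):

        Host first_machine
            HostName 192.168.1.11                              # internal LAN address of the target machine (assumed)
            User user
            ProxyCommand ssh -W %h:%p user@my_domain.tld       # connect via the proxy box; needs OpenSSH 5.4 or newer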

    Read the article

  • Track IP Messenger chats with Wireshark

    - by Kumar P
    We have a Linux server (RHEL 5) and some client machines (Windows XP) on a local area network. We use the server as a proxy server, running squid. My Windows machines access the internet through this proxy. Now my client machines use IP Messenger for chatting and sharing files within the local network. How can I trace what they are doing or chatting over IP Messenger from my server, using the Wireshark packet sniffer? And if I can't do it with Wireshark, what else would you suggest?
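
    A capture sketch, with two caveats: the server only sees this traffic if it actually passes through it (e.g. via a mirrored switch port), and IP Messenger is assumed here to use its default port 2425 (UDP for messages, TCP for file transfers), which is worth verifying for your clients:

        tcpdump -i eth0 -w ipmsg.pcap 'port 2425'
        # then open ipmsg.pcap in Wireshark, or filter live with: udp.port == 2425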

    Read the article

  • multiple valgrind errors: Conditional jump or move depends on uninitialised value(s)

    - by Hristo
    I'm running valgrind and I'm getting the following error (this is not the only one):

        ==21743== Conditional jump or move depends on uninitialised value(s)
        ==21743==    at 0x4A06509: index (mc_replace_strmem.c:164)
        ==21743==    by 0x33B7CBB3CD: gaih_inet (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so)
        ==21743==    by 0x401A5F: tunnelURL (proxy.c:336)
        ==21743==    by 0x40142A: client_thread (proxy.c:194)
        ==21743==    by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so)
        ==21743==    by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so)

    My tunnelURL() function looks like this:

        char * tunnelURL(char *url) {
            char * a = strstr(url, "//");
            a += 2;
            char * path = strstr(a, "/");
            char host[256];
            strncpy (host, a, strlen(a)-strlen(path));

            /*
             * The following is courtesy of Beej's Guide
             */
            int status;
            int proxySocketFD;
            struct addrinfo hints;
            struct addrinfo *servinfo;        // will point to the results

            memset(&hints, 0, sizeof(hints)); // make sure the struct is empty
            hints.ai_family = AF_INET;        // don't care IPv4 or IPv6
            hints.ai_socktype = SOCK_STREAM;  // TCP stream sockets
            hints.ai_flags = AI_PASSIVE;      // fill in my IP for me

            if ((status = getaddrinfo(host, "80", &hints, &servinfo)) != 0) {
                perror("getaddrinfo() fail");
                exit(1);
            }

            // create socket
            if ((proxySocketFD = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol)) == -1) {
                perror("proxy socket() fail");
                exit(1);
            }

            // connect
            if (connect(proxySocketFD, servinfo->ai_addr, servinfo->ai_addrlen) != 0) {
                printf("connect() fail");
                exit(1);
            }

            // construct request
            char request[strlen(path) + strlen(host) + 26];
            sprintf(request, "GET %s HTTP/1.1\r\nHost: %s\r\n\r\n", path, host);
            printf("%s", request);

            // send request
            send(proxySocketFD, request, strlen(request), 0);

            // receive response
            int i = 0;
            int amntRecvd = 0;
            char *pageContentBuffer = (char*) malloc(4096 * sizeof(char));
            while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) {
                i += amntRecvd;
                realloc(pageContentBuffer, i * 4096 * sizeof(char));
            }

            // close proxy socket
            close(proxySocketFD);

            // deallocate memory
            freeaddrinfo(servinfo);

            return pageContentBuffer;
        }

    Line 336 corresponds to the if statement with the getaddrinfo() function call. I'm not really sure what I haven't initialized. The string I'm passing in "should" be already set... I'm printing it out just fine. I also get another error corresponding to the same line of code:

        ==21743== Use of uninitialised value of size 8
        ==21743==    at 0x33B7D05816: __nscd_cache_search (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7D0438B: nscd_gethst_r (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7D04B26: __nscd_gethostbyname2_r (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7CE9F5E: gethostbyname2_r@@GLIBC_2.2.5 (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7CBC522: gaih_inet (in /lib64/libc-2.5.so)
        ==21743==    by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so)
        ==21743==    by 0x401A5F: tunnelURL (proxy.c:336)
        ==21743==    by 0x40142A: client_thread (proxy.c:194)
        ==21743==    by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so)
        ==21743==    by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so)

    Any ideas as to what might be causing this? This is written in C btw... Thanks, Hristo
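
    One detail worth checking (a guess from the code as posted, not a confirmed diagnosis): strncpy() does not null-terminate host when the source is longer than the count, so getaddrinfo() may read uninitialised bytes past the copied hostname. A minimal sketch of that change:

        char host[256];
        size_t hostlen = strlen(a) - strlen(path);  /* length of the hostname part */
        strncpy(host, a, hostlen);
        host[hostlen] = '\0';                       /* strncpy does not add this for us */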

    Read the article

  • JSF SSL Hazard

    - by java beginner
    In my application only certain pages need to be secured using SSL, so I configured it like this:

        <security-constraint>
            <display-name>Security Settings</display-name>
            <web-resource-collection>
                <web-resource-name>SSL Pages</web-resource-name>
                <description/>
                <url-pattern>/*.jsp</url-pattern>
                <http-method>GET</http-method>
                <http-method>POST</http-method>
            </web-resource-collection>
            <user-data-constraint>
                <description>CONFIDENTIAL requires SSL</description>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

    and added the filter from http://blogs.sun.com/jluehe/entry/how_to_downshift_from_https, but there is one problem. I am using it with RichFaces. Once it goes to HTTPS it does not change the page; I mean, if I perform a POST action it doesn't actually happen. If I do it from the local machine's browser it works perfectly, but from a remote browser it gets stuck on HTTPS and does not change after that. Here is a snippet of my web.xml:

        <filter>
            <filter-name>MyFilter</filter-name>
            <filter-class>MyFilter</filter-class>
            <init-param>
                <param-name>httpPort</param-name>
                <param-value>8080</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>MyFilter</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>
        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Protected resource</web-resource-name>
                <url-pattern>somePattern</url-pattern>
                <http-method>GET</http-method>
                <http-method>POST</http-method>
            </web-resource-collection>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

    plus some other filters for RichFaces. The problem is strange: if I access the web app from the local machine's browser it works fine, but in a remote machine's browser, once it gets onto HTTPS, all the forms on that page as well as hrefs stop working. (JSF and Facelets are used.)

    Read the article

  • Cisco IOS PBR - PBRing Skype

    - by Azz
    I've got a very simple question, which seems to be extremely difficult to put into practice. I have a Cisco IOS router with two internet links: one over a WAN (through a proxy, etc.), the other direct internet. Most traffic destined for the internet goes through the proxy over the WAN. I want Skype traffic (why the client uses Skype, I don't know...) to go out the direct internet link, while the rest of the traffic goes over the WAN through the proxy. Apparently Skype is very difficult to detect/classify because of its many adaptations to being blocked. Is there any way to identify Skype on an IOS router (2911) and set its next-hop IP/interface? Thank you, Aaron

    Read the article

  • Serving static files fails - nginx

    - by Sergei
    I've been looking and trying things all night, but without success. I configured nginx to serve my static files and proxy all the other traffic:

        server {
            listen 80;
            server_name mydomain.com;

            access_log /home/boudewijn/www/bbt/brouwers/logs/access.log;
            error_log /home/boudewijn/www/bbt/brouwers/logs/error.log;

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }

            location /media/ {
                root /home/boudewijn/www/bbt/brouwers/;
            }
        }

    The proxy passing is no problem, but when I go to mydomain.com/media/ or try to access any test file there, it fails. I paid attention to the difference between root and alias, my media folder exists, and I paid attention to the trailing slashes, but I still get a 404 when trying to access my static media files. Any help?
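
    An alternative sketch using alias instead of root (paths taken from the question; also check that the nginx worker user can actually read the directory):

        location /media/ {
            alias /home/boudewijn/www/bbt/brouwers/media/;
        }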

    Read the article

  • Using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. Currently the program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect it to the proxy that I am playing with. The proxy then sends its outbound request to port 81 (since all outbound port-80 traffic would be fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
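
    A sketch of the port-only rewrite. This assumes your iptables version accepts a port-only --to-destination (i.e. the address part of the DNAT target is optional, so the original destination IP is left untouched); verify against the DNAT section of your iptables man page:

        iptables -t nat -A OUTPUT -p tcp --dport 81 -j DNAT --to-destination :80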

    Read the article

  • Is -1 a magic number? An anti-pattern? A code smell? Quotes and guidelines from authorities

    - by polygenelubricants
    I've seen -1 used in various APIs, most commonly when searching into a "collection" with zero-based indices, usually to indicate the "not found" index. This "works" because -1 is never a legal index to begin with. It seems that any negative number should work, but I think -1 is almost always used, as some sort of (unwritten?) convention. I would like to limit the scope to Java at least for now. My questions are: What are the official words from Sun regarding using -1 as a "special" return value like this? What quotes are there regarding this issue, from e.g. James Gosling, Josh Bloch, or even other authoritative figures outside of Java? What were some of the notable discussions regarding this issue in the past?

    Read the article

  • IP Tunneling for Spotify? [closed]

    - by everwicked
    I was in the UK and enjoyed Spotify relentlessly. Now I've moved back to Greece and I can't even pay for the darn thing. So my idea was this: I have a server in France and it has a fail-over IP in the UK, so I installed a proxy server on it and made it listen on the UK IP. So far so good. Then I played Spotify through the proxy server for a while just fine, and it thought I was in the UK. But now it gives me an error message that I'm in a country other than the one on my profile (UK). I don't really understand why; maybe they also geolocate the IP address of the client, not just the proxy server? Either way, I'm kind of stuck. Is there a way to tunnel Spotify's network traffic through my server transparently? Maybe a VPN or something similar? Thanks

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    I'm having an issue with an Apache front-end server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case: Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v 2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]
        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
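
    One arrangement that is commonly suggested (a sketch, not verified against this exact setup) is to declare the exclusion with the path form of ProxyPass, ahead of the catch-all, instead of putting both in Location blocks:

        ProxyPass /static/ !
        ProxyPass / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/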

    Read the article

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a squid proxy, which rewrites the host *.example.com to www.example.com and caches the result of the latter?
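
    A sketch of the Perl-helper approach (the file name is hypothetical, and the reply format squid expects from the helper differs between squid versions, so check yours; older squids accept the rewritten URL back as a plain line, as below):

        # squid.conf
        url_rewrite_program /etc/squid/rewrite-mirror.pl
        url_rewrite_children 5

        #!/usr/bin/perl
        # rewrite-mirror.pl: squid feeds one request per line on stdin; the first
        # field is the URL, and we print the (possibly rewritten) URL back.
        $| = 1;
        while (<STDIN>) {
            my ($url) = split;
            $url =~ s{^http://(?:[a-z]+\.)?archive\.ubuntu\.com}{http://nl.archive.ubuntu.com};
            print "$url\n";
        }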

    Read the article

  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I have only one server section listening on 443. In my case I have several, all listening on 443 but with different server_name values. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test the connectivity with

        openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key

    and then do

        GET / HTTP/1.1
        host: proxy.mydomain.com

    it's not working: 400 No required SSL certificate was sent. I think nginx is not receiving the proper server_name and is directing the request to the first server listening on 443. When I tried listening on another port instead, it worked right away. What's the issue and how can I fix it?
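
    A likely explanation (not certain, but easy to test): openssl s_client does not send SNI unless asked, so nginx cannot pick the right server block and falls back to the first 443 listener. Repeating the test with -servername should tell you quickly:

        openssl s_client -connect proxy.mydomain.com:443 \
            -servername proxy.mydomain.com -cert xxx.crt -key xxx.key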

    Read the article

  • Search Domain Not Working With Squid

    - by Kyle Brandt
    I just set up a squid proxy as a parent proxy to HAVP. When I or other users try to access a domain with an address like "http://foo", I get the following squid error in the browser: The dnsserver returned: Server Failure: The name server was unable to process this query. However, "http://foo.companyname.com" works fine. The search domain in resolv.conf on both the client and the proxy host is companyname.com. (Is there a better term for "search domain"?) Is there a way to correct this, maybe something in the squid.conf file?
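
    Two squid.conf directives are worth looking at here (a sketch; pick one depending on your squid version and how strict you want to be):

        # let squid apply the resolv.conf search domain to single-label hosts
        dns_defnames on
        # or append a domain explicitly
        append_domain .companyname.com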

    Read the article

  • 250k connections for comet with node.js

    - by Nenad
    How can I set up node.js to handle 250k connections as a comet server (on the client side we use socket.io)? Would using nginx as a proxy/load balancer be the right solution, or would HAProxy be the better way? Does anyone have real-world experience with 100k+ connections and can share their setup? Would a setup like this be the right one (a quad-core CPU per server, starting 4 instances of node.js per server)?

        nginx (as proxy / load balancing server)
            -> node server #1 (4 instances)
            -> node server #2 (4 instances)
            -> node server #3 (4 instances)
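
    A sketch of the nginx side of such a layout (addresses and ports are hypothetical; socket.io long-polling needs sticky sessions, hence ip_hash, and proxying WebSocket upgrades requires nginx 1.3.13 or newer):

        upstream comet_backend {
            ip_hash;
            server 10.0.0.1:8001;
            server 10.0.0.1:8002;
            server 10.0.0.2:8001;
            server 10.0.0.2:8002;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://comet_backend;
            }
        }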

    Read the article

  • MySQL Cluster virtual IP

    - by user225995
    I am new to MySQL Cluster, and MySQL Cluster and the versions are not my choice. I set up four machines: two of them are management nodes, and two of them are data/SQL nodes (ndbd and mysqld). I also integrated a master/slave configuration with MySQL Utilities. Everything is working fine. MySQL version 5.6.17, NDB 7.3.5, servers on Ubuntu 14.04. There will not be many transactions; the only important thing is HA, so everything must be duplicated. My problem is the virtual IP. Since I have only one farm, which has a master/slave configuration, how can I do it without a proxy? And if I must use a proxy, which proxy is better?
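
    Without a proxy, the usual way to float a virtual IP between two nodes is VRRP via keepalived. A sketch of /etc/keepalived/keepalived.conf (the VIP, interface and priorities are placeholders, not values from the question):

        vrrp_instance mysql_vip {
            state MASTER              # BACKUP on the second node
            interface eth0
            virtual_router_id 51
            priority 101              # use a lower value on the BACKUP node
            virtual_ipaddress {
                192.168.0.100
            }
        }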

    Read the article

  • grep only returns help text

    - by Pete Mancini
    Well, I am perplexed. I am working with an Ubuntu server, and when I type grep 'bash' *.sh it fails, BUT fgrep 'bash' *.sh works like a champ. which grep and which fgrep both point to their respective executables in /bin. I am perplexed as to what I am doing wrong. Example output:

        $ grep -F 'grounding' repl.clj
        Usage: grep [OPTION]... PATTERN [FILE]...
        Search for PATTERN in each FILE or standard input.
        PATTERN is, by default, a basic regular expression (BRE).
        Example: grep -i 'hello world' menu.h main.c

        $ fgrep 'grounding' repl.clj
        (p/concepts-for-grounding-term imp1 "PERSON" "summary")

    See? grep is failing but fgrep is working fine. That is why I am perplexed.
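
    A few quick checks that usually narrow this kind of thing down (plain shell commands, nothing Ubuntu-specific): is grep shadowed by an alias or shell function, or is GREP_OPTIONS set to something broken?

        type grep
        alias grep
        echo "$GREP_OPTIONS"
        command grep -F 'grounding' repl.clj   # bypasses aliases and shell functions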

    Read the article

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers. Actually, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, I'm using these servers:

        1 x Puppet master to deploy my servers.
        2 x Apache reverse-proxy front-end servers.
        1 x Apache httpd server which hosts the Python WSGI applications, bound via mod_wsgi.
        4 x clustered MongoDB servers.

    Everything is OK concerning the reverse proxies and the DB back end: I'm able to easily add a new reverse proxy and a new DB node. My problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and rsync replication between the two nodes, but that's not really useful in terms of deployment for my developers, etc. So if you have a solution which is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
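
    One sketch of how the Apache front end could fan out to several WSGI nodes the way a Tomcat cluster adds workers, using mod_proxy_balancer (the backend host names are hypothetical):

        <Proxy balancer://wsgi-cluster>
            BalancerMember http://wsgi-node1:8080
            BalancerMember http://wsgi-node2:8080
        </Proxy>
        ProxyPass / balancer://wsgi-cluster/
        ProxyPassReverse / balancer://wsgi-cluster/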

    Read the article
