Search Results

Search found 13716 results on 549 pages for 'proxy classes'.

Page 351/549 | < Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >

  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked due to cross-site scripting security policies in JavaScript. As a Java application, GeoServer runs on Apache on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS7 to fix this problem. I'm following this guide, but it's still not working. Here are my URL Rewrite rule settings: Matches URL: (.*) Condition: {HTTP_URL} matching /geoserver Action: rewrite to http://localhost:8080/{R:1}, appending query string When I request http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913, however, all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.
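
    One hedged note on this setup: the URL Rewrite module on its own cannot forward a request to a different host:port; that needs Application Request Routing (ARR) installed with its proxy mode enabled, otherwise a rewrite to an absolute http:// URL tends to surface as a 404. Assuming ARR is in place, a web.config sketch of such a rule (the rule name and pattern are illustrative, not taken from the question) could look like:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="GeoServerProxy" stopProcessing="true">
                <match url="^geoserver/(.*)" />
                <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" appendQueryString="true" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Keeping the /geoserver prefix in both the match and the target preserves the path the backend expects, which is worth double-checking against the {R:1} capture in the original rule.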

    Read the article

  • Windows Server 2008 R2 DNS - One IP, multiple servers

    - by Blu Dragon
    I need opinions and examples on how best to accomplish the setup I am looking for. I have a public-facing AD domain server with one public IP address. I have set up an external zone for example.com and I successfully have my own name servers pointing to it at ns0.example.com and ns1.example.com. I also have an internal zone for my private network at home.example.com. I am behind a router with the domain server in the DMZ. I want dev.example.com to be accessible from the outside world over HTTPS and to point to internal IP address 192.168.1.78. Likewise, I want www.example.com to be accessible from the outside world and point to internal IP address 192.168.1.79. Both dev and www servers are CentOS 5.6 VMs running inside of Hyper-V on the domain server (bad idea, I know, but I am limited on hardware atm). What is the best way to achieve this? From what I have read and researched on Google, I may need to set up a reverse proxy, but I am not sure how well that will work with SSL.
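
    For the reverse-proxy idea, a minimal sketch (assuming an nginx instance on whichever box owns the public IP, and certificates already issued for each name; the paths below are placeholders) might be:

        # dev.example.com over HTTPS, terminated at the proxy
        server {
            listen 443 ssl;
            server_name dev.example.com;
            ssl_certificate     /etc/ssl/dev.example.com.crt;   # placeholder path
            ssl_certificate_key /etc/ssl/dev.example.com.key;   # placeholder path
            location / {
                proxy_pass http://192.168.1.78;
                proxy_set_header Host $host;
            }
        }

        # www.example.com over plain HTTP
        server {
            listen 80;
            server_name www.example.com;
            location / {
                proxy_pass http://192.168.1.79;
                proxy_set_header Host $host;
            }
        }

    SSL terminates at the proxy, so the hop to 192.168.1.78 can stay plain HTTP on the LAN; the alternative with a single public IP is plain port-forwarding, which can only send TCP 443 to one of the two VMs.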

    Read the article

  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with say: a load balancer in a failover pair reverse proxy tier web app tier db tier Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? i.e. how does each tier get the IP addresses of each node in the tier below? For example if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to it? I could run an agent that would update all the configs to all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B, to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only live nodes in the following tier? If I was hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are these relevant here? What are the virtual IPs they talk about and how are they managed? Are there still single points of failure using them?
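
    On the keepalived/heartbeat part: the usual pattern is that each LB pair shares a virtual IP (VIP) via VRRP, and the tier above only ever talks to that VIP, so the "who holds the address" problem is handled below the application. A minimal keepalived sketch for the master node (interface, router-id and VIP are made-up values):

        vrrp_instance VI_1 {
            state MASTER            # BACKUP on the second load balancer
            interface eth0
            virtual_router_id 51
            priority 100            # use a lower priority on the backup
            advert_int 1
            virtual_ipaddress {
                10.0.0.100          # the VIP the tier above points at
            }
        }

    If the master dies, the backup asserts the VIP within a few seconds, so tier A needs only one address per LB pair, and dead web-app nodes are then weeded out by the LB's own health checks rather than by every upstream client.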

    Read the article

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious if others have encountered this as well, and if there is anything that can be done. We have a setup where we have multiple AWS EC2 Linux machines sitting behind an ELB. The EC2 machines are running Nginx. Let's refer to these as my production machines (because they are!). I also have a Rackspace cloud machine running Apache. Completely separate. Let's call this the test server. Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check, the IP often changes. In fact, I noticed that when I check on http://www.whatismyip.com, the IP seems to be stable (doesn't change) across refreshes. But on http://www.whatismyipaddress.com, on refreshing, the IP changes! (So my ISP is doing weird stuff.) Now, back to my setup, I noticed a couple of things: Checking the REMOTE_ADDR variable from PHP when connecting to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP that does not change. Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that does change on refreshes. Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP. Has anyone experienced this odd behavior? Is there nothing that I can do? And which IP should I "trust"? (The one Apache gives, or the one ELB and Nginx give?) Thanks! Chris
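
    On the "which IP to trust" part, a common arrangement behind an ELB is to let nginx rewrite the client address from X-Forwarded-For so that PHP sees a consistent REMOTE_ADDR everywhere. A hedged sketch using the standard realip module (the trusted range below is a placeholder for whatever subnet the ELB connects from):

        # inside the http or server block
        set_real_ip_from 10.0.0.0/8;        # placeholder: subnet the ELB talks from
        real_ip_header   X-Forwarded-For;
        real_ip_recursive on;

    Only peers listed in set_real_ip_from are allowed to override the client address, so a spoofed X-Forwarded-For header sent directly by a client is ignored.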

    Read the article

  • Microsoft Web Farm Framework 502 error

    - by hermiod
    My apologies if this is a really stupid question, but I have installed and set up the new Microsoft Web Farm Framework in a UAT environment with a controller machine, a primary, and a secondary server. I have installed one of our in-house applications onto the primary server and it has been successfully provisioned across to the secondary. I have tried accessing the server via the URL http://mis-uat-002/testfarm/reviewer, where mis-uat-002 is the name of the farm controller, testfarm is the name of the farm, and reviewer is the name of the application. When using this URL I get a "502 - Web server received an invalid response while acting as a gateway or proxy server." error. If I access the application directly via http://mis-uat-003/reviewer (mis-uat-003 is the primary farm server, reviewer is the application) then it works with no problem. Is there some additional setup that needs to be done in order for the farm controller to do its job? It should be noted that this is a UAT environment, so it is not on a domain of any sort. However, if this were successful and the WFF came out with V1.0 fairly soon, we would be looking to run this on a Windows 2008 domain.

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;
            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;
            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }
            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }
            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }
            #this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple STATIC HTML site just for some testing purposes. I'd expect that the error_page directive would kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that somewhere on internal redirects. Any help would be much appreciated.
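
    One hedged thing to try: if /var/www/errors/404.html does not actually exist, the 404 for the error page itself triggers error_page again, which is a classic way to get an internal redirection cycle. Pinning the error page to an exact, internal-only location makes that failure mode explicit:

        error_page 404 /errors/404.html;

        location = /errors/404.html {
            root /var/www;      # resolves to /var/www/errors/404.html
            internal;
        }

    If the cycle persists, the debug-level error_log already configured here normally shows exactly which internal redirect is repeating.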

    Read the article

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution but I keep getting the following errors showing up in my error log. [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/ [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar I don't really want these errors to show up but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration. <VirtualHost *:80> DocumentRoot /home/www/default <Directory /> AllowOverride None Order Deny,Allow Deny from all </Directory> #ErrorLog /var/log/apache2/error.log #LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost>
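
    A hedged option, since these 403s are expected rather than real faults: keep the error log enabled but raise the threshold for this vhost so error-level "client denied" lines are no longer written, for example:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel crit        # "client denied" is logged at error level, so it is dropped
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    The trade-off is that other error-level messages for this vhost disappear too; on Apache 2.4 a per-module setting such as "LogLevel authz_core:crit" would be a more targeted way to get the same effect.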

    Read the article

  • Ubuntu 10.04 server delay responding to AJAX requests

    - by DanielAttard
    I manage an Ubuntu 10.04 server with a couple of domains hosted on it. As I continue to learn more about all these wonderful new (for me) technologies, one issue that I have begun to notice is the delay it sometimes takes for the server to respond to certain requests. As an example, when I view the timeline of events using Firebug I can see that most of the time when I make a POST, the server responds in under 100ms. Sometimes, however, there is a substantial delay before the RESPONSE from the server. I can't seem to tell when the delay will happen and when it won't; however, when it happens the delay is always about 4.5 seconds. The delay seems to happen about 30-40% of the time. Here is the section of apache2.conf dealing with logs:

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

    I have no idea where to look to try and debug this problem or investigate further. Any suggestions?
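
    One low-effort way to narrow down where the ~4.5 seconds goes (a diagnostic sketch, not a fix) is to log Apache's own per-request processing time with %D (microseconds) and compare it with what Firebug reports:

        LogFormat "%h %l %u %t \"%r\" %>s %O %D \"%{Referer}i\" \"%{User-Agent}i\"" combined_timed
        CustomLog /var/log/apache2/access_timed.log combined_timed

    If %D stays small while the browser still waits, the delay sits in front of Apache (DNS lookups, keep-alive, network); if %D itself is around 4.5 seconds, the application or one of its backends is the place to dig.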

    Read the article

  • Apache logging issues

    - by Dan
    I'm trying to parse apache log files, but I'm finding some strange results and I'm not sure what they mean. Hopefully someone can provide some insight. (all of the IP addresses were altered. none actually start with 192, I didn't figure the search engines mattered though.) In the first example, multiple ip addresses are showing up in the host field: 192.249.71.25 - - [04/Aug/2009:04:21:44 -0500] "GET /publications/example.pdf HTTP/1.1" 200 2738 192.0.100.93, 192.20.31.86 - - [04/Aug/2009:04:21:22 -0500] "GET /docs/another.pdf HTTP/1.0" 206 371469 What causes this? Does it have to do with proxy servers? Is there a way to have Apache only log one? In the second example, a bunch of information is just completely missing! What would cause this? msnbot-65-55-207-50.search.msn.com - - [29/Dec/2009:15:45:16 -0600] "GET /publications/example.pdf HTTP/1.1" 200 3470073 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" 266 3476792 - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 594 - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 4195 - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 299 109218 crawl-17c.cuil.com - - [29/Dec/2009:15:45:46 -0600] "GET /publications/another.pdf HTTP/1.0" 200 101481 "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)" 253 101704 My CustomLog configuration says: LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" common
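
    The comma-separated addresses look more like an X-Forwarded-For value than %h output, which suggests those lines were written by a different LogFormat (or by a front proxy) than the one shown. A hedged way to disambiguate going forward is to log both fields side by side under a new nickname:

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-Forwarded-For}i\" \"%{Referer}i\" \"%{User-agent}i\" %I %O" combined_xff

    %h is always the single TCP peer address, so if multi-IP host fields still appear with this format, they are coming from another CustomLog/LogFormat pair than the one quoted here.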

    Read the article

  • VPN Error 691 but server says authenticated on server

    - by Andy
    Hello all, I have a problem with a VPN connection on Windows XP SP3 that appears to be related to an account (maybe privileges or an option that I have missed). When connecting using my account, which is a domain administrator account, it connects through the VPN fine. However, using an account created for another person, they receive Error 691: Username or Password is not valid for this domain. On the domain controller (Windows 2003) I see a logon successful message:

        User DOMAIN\user was granted access.
        Fully-Qualified-User-Name = int.company.net.au/People/Management/User
        NAS-IP-Address = 10.30.0.3
        NAS-Identifier = not present
        Client-Friendly-Name = MelbourneCore
        Client-IP-Address = Router-ip
        Calling-Station-Identifier = not present
        NAS-Port-Type = Virtual
        NAS-Port = 77
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = undetermined
        Policy-Name = Remote VPN Access
        Authentication-Type = MS-CHAPv1
        EAP-Type =

    Does anyone have any ideas as to where else I should look for finding a solution? If I use the wrong password it gives a logon failure error in the event viewer. Also, removing them from the remote access group gives a logon failure error. Nothing appears in the event viewer on the local machine. In the past, all that was required was to add them into our Remote Access Users group. Any help?

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, however the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question, but this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), the users connect to DansGuardian first at port 8080 where the filtering is done, at which point it forwards it to the Squid appliance at port 3128. The traffic is then sent to the internet. The setup I have is as follows Gateway (MikroTik router) : 192.168.88.1 Squid/DansGuardian :192.168.88.100 Client : 192.168.88.238 Client --- Gateway --- Proxy --- Internet I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to go to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV w/ a netstat -antp command on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine. Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN - tcp 0 0 192.168.88.100:8080 192.168.88.238:55786 SYN_RECV - tcp 0 0 192.168.88.100:8080 192.168.88.238:55787 SYN_RECV - tcp 0 0 192.168.88.100:8080 192.168.88.238:55785 SYN_RECV - tcp 0 0 192.168.88.100:8080 192.168.88.238:55788 SYN_RECV - tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN - Is this a routing issue or an issue with the Squid Appliance?

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing... I'm using EC2 with Fedora8. Everything is working so far (i.e. I have Pylons/python et al installed and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain and as I understand, the built in paster serve server is not suited for a production environment. What I am not clear on, then, is what to do next... It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtualhosts (which nginx says can accomodate). However, I am totally unclear on how the big picture is supposed to work. In order to serve an app, does paster serve need to be running? Does then nginx/apache basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after closing the ssh connection? If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or if this is not the right way, how do I differentiate beween apps? I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it? Is virtualenv a prerequisite to running multiple apps on the same machine? Thanks in advance for any tips.
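
    A rough sketch of the usual arrangement, assuming the Pylons app keeps running under paster and nginx only proxies to it: each app gets its own port in its .ini file, paster runs detached as a daemon, and one nginx server block per hostname forwards to the matching port.

        # start the app detached from the ssh session (PasteScript's daemon mode)
        paster serve --daemon --pid-file=myapp.pid --log-file=myapp.log production.ini

        # /etc/nginx/conf.d/myapp.conf -- one server block per app/port (names are placeholders)
        server {
            listen 80;
            server_name app1.example.com;
            location / {
                proxy_pass http://127.0.0.1:5000;   # must match the [server:main] port in the .ini
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    So yes, paster (or some WSGI server) still has to be running; nginx or Apache just shuttles connections to it. Apps are differentiated by the port in each .ini plus the server_name nginx matches on, and virtualenv is not strictly required for multiple apps, though it keeps their dependencies from colliding.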

    Read the article

  • Unable to get squid working for remote users

    - by Sean
    I am trying to setup squid 3.2.4, but I have not been able to get it working for remote users. Works fine locally. Unable to figure out what I am doing wrong... http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network acl localnet src 172.16.0.0/12 # RFC 1918 possible internal network acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access allow localhost http_access allow localnet http_access allow all cache deny all via off forwarded_for off header_access From deny all header_access Server deny all header_access WWW-Authenticate deny all header_access Link deny all header_access Cache-Control deny all header_access Proxy-Connection deny all header_access X-Cache deny all header_access X-Cache-Lookup deny all header_access Via deny all header_access Forwarded-For deny all header_access X-Forwarded-For deny all header_access Pragma deny all header_access Keep-Alive deny all acl ip1 localip 1.1.1.90 acl ip2 localip 1.1.1.91 acl ip3 localip 1.1.1.92 acl ip4 localip 1.1.1.93 acl ip5 localip 1.1.1.94 tcp_outgoing_address 1.1.1.90 ip1 tcp_outgoing_address 1.1.1.91 ip2 tcp_outgoing_address 1.1.1.92 ip3 tcp_outgoing_address 1.1.1.93 ip4 tcp_outgoing_address 1.1.1.94 ip5 tcp_outgoing_address 1.1.1.90
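
    One hedged thing to check: "http_port 3128 transparent" turns that port into an interception port, and since 3.2 Squid is strict about refusing ordinary forward-proxy requests (including CONNECT) that arrive on an intercept port, which would match "works locally, fails for remote browsers". Splitting the two roles onto separate ports is a common sketch:

        http_port 3128                    # plain forward-proxy port for configured clients
        http_port 3129 intercept          # interception port ("transparent" in older syntax)

    Separately, "http_access allow all" makes this an open proxy once it does start answering remote clients, so tightening that line is probably worth doing at the same time.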

    Read the article

  • How use DNS server to create simple HA (High availability) of my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline (for better understanding I use internal IPs; in reality these will be at different hosting companies), traffic should go to 192.168.0.130:80. You are right that I used the wrong words with "high availability"; of course I was thinking about failover. Using a few IPs in A records is good for simple load balancing, but not in the case where I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of "can't establish connection". I was thinking about setting up something like this: two DNS servers, one installed on the www server itself. Both have a low TTL on my domain, and I set up two NS records, the first for the DNS server on my Apache machine and the second for the other DNS server. If a user tries to connect, he will get the IP of the www server from the first DNS server; if that DNS server is offline (probably the www server is also down), the resolver will try the second NS record, which points to another DNS server, and that one will point to a "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, or I may use another country for the backup.

    Read the article

  • Why I am getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting: Include /private/etc/apache2/extra/httpd-ssl.conf in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include: DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs" ServerName myserver.com:443 ServerAdmin [email protected] ... SSLCertificateFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem SSLCertificateKeyFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem" Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - Restart) and go to localhost on port 443 in Firefox, where I get: Index of / Index of / Links/ ..... .... But on displaying the web page I see: Unable to connect - Firefox can't establish a connection to the server at localhost. *The site could be temporarily unavailable or too busy. Try again in a few moments. *If you are unable to load any pages, check your computer's network connection. *If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. I read: Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X? and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error: The Apache2.2 service is running. (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem I checked the permissions for cert.pem and they indicate that all the permissions (Full control, Read, Read and modify, Execute, Write) are marked for Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
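
    "(OS 5) Access is denied" comes from the service, not from the interactive session, so even though Admin can read cert.pem, the account the Apache2.2 service runs under may not. A hedged fix, assuming the service runs as Local System, is to grant that account read access from an elevated prompt:

        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant SYSTEM:R
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem" /grant SYSTEM:R

    If Apache runs as a dedicated service account instead, substitute that account name. It is also worth confirming that the pasted SSLCertificateFile/SSLCertificateKeyFile lines are not literally doubled up in httpd-ssl.conf; each directive should appear once with a single quoted path.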

    Read the article

  • Web based file search in the lan?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages with other ways). I mean, I just need to do a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format, and the file name is descriptive enough. But date range filters are a must for me. I just need a quick search like voidtools' Everything can do, but in a network way. The files are on a WHS box (lol, Videos and Music share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried:

    - Lansearch Pro: it is not good for me, as I need a quick index
    - Network Search Engine: it would be perfect, but does not offer a date range filter
    - Microsoft Search Server 2008 Express, but it is horrible! First, it does NOT index filenames, and then my Core2Duo is not powerful enough to run it smoothly
    - Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result
    - The preinstalled Windows Search 4.0, but it totally sucks at choosing the relevance of data - uninstalled
    - Docco... what's that?

    I am considering trying:

    - IBM OmniFind
    - DocFetcher (can it work as a client? have not investigated yet)
    - Strigi (it looks like it can work as a client, right?)

    Any ideas/suggestions?

    Read the article

  • IPSec for LAN traffic: Basic considerations?

    - by chris_l
    This is a follow-up to my Encrypting absolutely everything... question. Important: This is not about the more usual IPSec setup, where you want to encrypt traffic between two LANs. My basic goal is to encrypt all traffic within a small company's LAN. One solution could be IPSec. I have just started to learn about IPSec, and before I decide on using it and dive in more deeply, I'd like to get an overview of how this could look like. Is there good cross-platform support? It must work on Linux, MacOS X and Windows clients, Linux servers, and it shouldn't require expensive network hardware. Can I enable IPSec for an entire machine (so there can be no other traffic incoming/outgoing), or for a network interface, or is it determined by firewall settings for individual ports/...? Can I easily ban non-IPSec IP packets? And also "Mallory's evil" IPSec traffic that is signed by some key, but not ours? My ideal conception is to make it impossible to have any such IP traffic on the LAN. For LAN-internal traffic: I would choose "ESP with authentication (no AH)", AES-256, in "Transport mode". Is this a reasonable decision? For LAN-Internet traffic: How would it work with the internet gateway? Would I use "Tunnel mode" to create an IPSec tunnel from each machine to the gateway? Or could I also use "Transport mode" to the gateway? The reason I ask is, that the gateway would have to be able to decrypt packages coming from the LAN, so it will need the keys to do that. Is that possible, if the destination address isn't the gateway's address? Or would I have to use a proxy in this case? Is there anything else I should consider? I really just need a quick overview of these things, not very detailed instructions.

    Read the article

  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server and included this file in the nginx.conf file. However in my access logs for the sites I still see these IP addresses showing up. Do I need to include the black list in each site's conf instead of the global conf for Nginx? Here is my nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; # Load virtual host configuration files. include /etc/nginx/sites-enabled/*; # BLOCK SPAMMERS IP ADDRESSES include /etc/nginx/conf.d/blockips.conf; } blockips.conf deny 58.218.199.250; access.log still shows this IP address. 58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-" What am I doing incorrectly?
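
    For what it is worth, the 403 in that access-log line suggests the deny is already being applied: "deny" blocks the request, but nginx still records the attempt in access_log. If the goal is also to keep blacklisted clients out of the access log, one hedged pattern is a geo map plus an early connection drop:

        # in the http block
        geo $blocked {
            default          0;
            58.218.199.250   1;
        }

        # in each server block
        if ($blocked) {
            return 444;      # close the connection without sending a response
        }

    On placement: an include inside http{} applies to every server block, so the current location of blockips.conf should be fine; moving it into a single site's server{} would instead limit it to that site.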

    Read the article

  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below] Here is a snippet of my zone file *.example.com. 300 IN CNAME proxy.herokuapp.com. foo.example.com. 300 IN A 111.111.111.111 If I dig @8.8.8.8 foo.example.com I get the answer I expect: ;; ANSWER SECTION: foo.example.com. 30 IN A 111.111.111.111 The same is true of all other public DNS servers I've tried. However, when I try to set up a check with Pingdom to a URL on foo.example.com it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic, Errplane and traffic generated by the Heroku app itself. So on the one side, all public DNS servers interpret the zone file one way. Yet four service providers all interpret it a different way, one that differs to the standard suggested by RFC 4592. My question is: are these reputable, mature service providers all wrong? Or is it little me?

    Read the article

  • linux + create host file from CSV file by sed or awk or perl

    - by yael
    I have the following CSV file. This file defines which Linux machines exist in the system and their IPs. My target is to create a hosts file from this file. Please advise how to create a hosts file as in example 1 from my CSV file (I need to match the IP address from the CSV file and put it in the first field of the hosts file, then match the Linux name and put that name in the second field, as in example 1). Remark: this should be performed by sed or awk or perl; I need to write the solution in my bash script.

    CSV file:

        , machine , VM-LINUX1 , SZ , Phy , 10.213.158.18 , PROXY , VM-LINUX2 , SZ , 10.213.158.19 , OLD HW , VM-LINUX3 , SZ , 10.213.158.20 , , VM-LINUX4 , SZ , Phy , 10.213.158.21 , , VM-LINUX5 , SZ , Phy , OUT , EXT , LAN3 , 10.213.158.22 , INTERNAL , VM-LINUX6 , SZ , Phy , 10.213.158.23 , , server , new HW , VM-LINUX7 , SZ , Phy , 10.213.158.24 , OUT, LAN3 , VM-LINUX8 , SZ , 10.213.158.25 , OLD HW , machine , VM-LINUX9 , SZ , Phy , INT , 10.213.158.26 , LAN2, AN45, , VM-LINUX10 , SZ , Phy , 10.213.158.27 , , VM-LINUX11 , SZ , Phy , LAN5 , 10.213.158.28 ,

    example 1 (host file):

        10.213.158.18 VM-LINUX1
        10.213.158.19 VM-LINUX2
        10.213.158.20 VM-LINUX3
        10.213.158.21 VM-LINUX4
        10.213.158.22 VM-LINUX5
        10.213.158.23 VM-LINUX6
        10.213.158.24 VM-LINUX7
        10.213.158.25 VM-LINUX8
        10.213.158.26 VM-LINUX9
        10.213.158.27 VM-LINUX10
        10.213.158.25 VM-MACHINE8
        10.213.158.26 STAR9
        10.213.158.27 TOP10
        10.213.158.28 SERVER11
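
    Since the column positions differ from row to row, one hedged approach is to scan every comma-separated field and pair whichever one looks like an IPv4 address with whichever one looks like a machine name. The VM- prefix test below is an assumption and would need widening for names like STAR9 or TOP10 that appear in example 1:

        awk -F',' '{
            ip = ""; name = ""
            for (i = 1; i <= NF; i++) {
                gsub(/^[ \t]+|[ \t]+$/, "", $i)                     # trim surrounding spaces
                if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) ip = $i
                else if ($i ~ /^VM-/)                        name = $i
            }
            if (ip != "" && name != "") print ip "\t" name
        }' machines.csv > hosts.out

    Writing to a scratch file such as hosts.out (a made-up name) rather than straight into /etc/hosts makes it easy to review the result before appending it from the bash script.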

    Read the article

  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty. The secrets.tdb file is only 20K, whereas the "pdbedit -L | wc -l" command gives me 16970 lines. On my Samba 3 server, /var/lib/samba is 1.5M. After I migrated the database (following instructions on http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody. So I tried to create a VM with a Samba 3 install. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like users on the Samba 3 server are in a sort of cache. I already migrated the /etc/{passwd,shadow,group} files and I can see users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty? The global part of my smb.conf on the Samba 3 server:

        [global]
        workgroup = INTERNET
        netbios name = PDC-SMB3
        server string = %h server
        interfaces = eth0
        obey pam restrictions = Yes
        passdb backend = smbpasswd
        passwd program = /usr/bin/passwd %u
        passwd chat = *new* %n\n *Re* %n\n *pa*
        username map = /etc/samba/smbusers
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%U
        max log size = 1000
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
        logon script = logon.cmd
        logon home = \\$L\%U
        domain logons = Yes
        os level = 255
        preferred master = Yes
        local master = Yes
        domain master = Yes
        dns proxy = No
        ldap ssl = no
        panic action = /usr/share/samba/panic-action %d
        invalid users = root
        admin users = admin, root, administrateur
        log level = 2
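
    One hedged explanation: with "passdb backend = smbpasswd", the 16970 accounts live in the flat smbpasswd file rather than in the tdb databases, so copying the tdb files under /var/lib/samba across carries almost none of them. Converting the backend to tdbsam before running the Samba 4 migration is the usual workaround, roughly:

        # on the Samba 3 server (the smbpasswd path is the Debian default and may differ)
        pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/var/lib/samba/passdb.tdb
        # then switch smb.conf to "passdb backend = tdbsam" and re-check the count:
        pdbedit -L | wc -l

    The VM test fits this picture too: accounts created there presumably landed in a backend the migration actually reads, which is why they survive the same procedure.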

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory at 408 MB, and active VPN sessions 2. The load on both of the devices is particularly low. Latency between the two sites is ~40ms. Screenshot of a Wireshark capture of a file transfer between the two hosts over the firewall IPSec VPN performing at 10 MBPS (note the changing window size): http://imgur.com/wGTB8Cr Screenshot of a Wireshark capture of a file transfer between the two hosts over the firewall, not going over IPSec, performing at 55 MBPS (constant window size): http://imgur.com/EU23W1e I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535. When performing at 55+ MBPS, the window size is consistently 65,535. Does this show a problem in my configuration of the IPSec VPN in the ASA or a Layer 1/2 issue? Using ping xxxxxx -f -l I finally get a non-fragment at 1418 bytes, so 1418+28 for IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" "1380" bytes set under Configuration > Advanced > TCP Options on the ASA. Using IPERF, I am getting a "TCP Window Full" every few seconds and ~3 MBPS performance: http://imgur.com/elRlMpY Show run on the ASA: http://pastebin.com/uKM4Jh76 Show cry accelerator stats: http://pastebin.com/xQahnqK3

    Read the article

  • One server running Django (with Nginx and Apache) and Wordpress Blog

    - by JCWong
    I have nginx listening on port 80 for my primary site foo.com. It proxies to port 8080, which is where the Django app lives: server { listen 80; server_name www.foo.com foo.com; access_log /home/jeffrey/www/ddt/logs/nginx_access.log; error_log /home/jeffrey/www/ddt/logs/nginx_error.log; location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy.conf; } location /media/ { root /home/jeffrey/www/ddt/; } location /static/ { root /home/jeffrey/www/ddt/; } location /public/ { root /home/jeffrey/www/ddt/; } } I'd like to have a WordPress blog run on the same server. Apache is listening on port 8080 with this http.conf file: NameVirtualHost *:8080 WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi WSGIPythonPath /home/jeffrey/www/ddt <Directory /home/jeffrey/www/ddt/apache/> <Files ddt.wsgi> Order deny,allow Allow from all </Files> </Directory> I added my WordPress site using a VirtualHost: <VirtualHost *:8080> ServerName www.bar.com ServerAlias bar.com DocumentRoot /home/jeffrey/www/jeffrey_wp </VirtualHost> When I go to bar.com I still see my Django app. Is it possible for these two sites to run on the same server?
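
    Since nginx owns port 80, requests for bar.com never reach Apache's name-based vhost unless nginx forwards them with the Host header intact. A hedged sketch of a second server block for the blog:

        server {
            listen 80;
            server_name www.bar.com bar.com;

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;            # lets Apache pick the matching VirtualHost
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    If the existing proxy.conf does not already pass the Host header through, Apache falls back to the first *:8080 vhost, which is the Django/WSGI one, and that would produce exactly the symptom described.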

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements: Listen 4204 NameVirtualHost *:4204 and created a new config for my site. What I imagine are the relevant parts of my config: <VirtualHost *:4204 > ServerAdmin [email protected] ServerName test.mydomain.com:4204 However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm at a bit of a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet? EDIT: After further Googling, I have added an iptables rule with the command: iptables -I INPUT -p tcp --dport 4204 -j ACCEPT I can see Apache listening on 4204 and the rule is definitely in place, but I can't reach the site.
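
    If the hosting network simply will not pass port 4204, the path-based proxy fallback mentioned above is straightforward with mod_proxy. A rough sketch, assuming mod_proxy and mod_proxy_http are enabled (the /newbackend prefix is made up):

        # inside the existing *:80 VirtualHost for test.mydomain.com
        ProxyPass        /newbackend/ http://localhost:4204/
        ProxyPassReverse /newbackend/ http://localhost:4204/

    It is also worth checking the rule order with "iptables -L INPUT -n --line-numbers" and whether anything upstream (a provider firewall, for instance) filters non-standard ports, since curl succeeding locally suggests Apache itself is serving fine.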

    Read the article

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection. Together, the above make me suspect a problem with either my router (Airport Express) or, more likely, my ISP. Anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html Update: Thanks to those who have tried to help! I found one other thing while Googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ Disabling the Accept-Encoding headers in Firefox actually didn't make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere might be causing this problem. Thanks again! Another update: In case anyone is still thinking about this; I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?

    Read the article

< Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >