Search Results

Search found 15131 results on 606 pages for 'clang static analyzer'.

Page 126 of 606

  • How to deny web access to some files?

    - by Strae
    I need to do a somewhat unusual operation. First, some context: I run Debian with apache2 (which runs as the user www-data). I have simple text files with a .txt or .ini extension (the extension doesn't really matter). These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    In other words, the file name is always the same, and so is the hierarchy; only the middle folder name changes: /folder-name-static/folder-name-dynamic/file-name-static.txt. What I need should be relatively simple: programs on the server (Python or PHP, for example) must be able to read the file, but if I try to retrieve its contents with a browser (by typing the URL www.example.com/folder1/car/foobar.txt), via cURL, etc., I must get a forbidden error (or similar) and not the file. It would also be nice if those files were hidden, or at least not downloadable, over FTP (at least with the FTP root and user credentials I use). How can I do this?

    I found this online, to be put in an .htaccess file:

        <Files File.txt>
        Order allow,deny
        Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), not in subfolders. Moreover, the second-level folders (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file every time. Is it possible to create a single rule that applies to all files with the given name under www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and only the dynamic folder name changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will also contain other files (images and other assets used by my users), so I am simply trying to avoid a duplicated folder structure like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

    and then, for the 'secret' files:

        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt
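
    One way this is often handled, sketched below: instead of a per-folder .htaccess, a single <Directory> + <Files> block in the main server or vhost configuration matches the fixed file name in every subfolder, no matter how the dynamic folders are named. The directory path is taken from the EDIT above; whether to use Order/Deny or Require depends on the Apache version, so both variants are shown.

        # In the vhost or a conf.d snippet, not in per-folder .htaccess files.
        # Blocks HTTP access to foobar.txt anywhere below folder1; PHP/Python
        # reading the file from disk is unaffected.
        <Directory /var/www/vhosts/example.com/httpdocs/folder1>
            <Files "foobar.txt">
                # Apache 2.2:
                Order allow,deny
                Deny from all
                # Apache 2.4 (use instead of the two lines above):
                # Require all denied
            </Files>
        </Directory>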

    Read the article

  • 6to4 tunnel: cannot ping6 to ipv6.google.com?

    - by quanta
    Hi folks, following the "Setup of 6to4 tunnel" guide, I want to test IPv6 connectivity, but I cannot ping6 ipv6.google.com. Details below:

        # traceroute 192.88.99.1
        traceroute to 192.88.99.1 (192.88.99.1), 30 hops max, 40 byte packets
         1  static.vdc.vn (123.30.53.1)  1.514 ms  2.622 ms  3.760 ms
         2  static.vdc.vn (123.30.63.117)  0.608 ms  0.696 ms  0.735 ms
         3  static.vdc.vn (123.30.63.101)  0.474 ms  0.477 ms  0.506 ms
         4  203.162.231.214 (203.162.231.214)  11.327 ms  11.320 ms  11.312 ms
         5  static.vdc.vn (222.255.165.34)  11.546 ms  11.684 ms  11.768 ms
         6  203.162.217.26 (203.162.217.26)  42.460 ms  42.424 ms  42.401 ms
         7  218.188.104.173 (218.188.104.173)  42.489 ms  42.462 ms  42.415 ms
         8  218.189.5.10 (218.189.5.10)  42.613 ms  218.189.5.42 (218.189.5.42)  42.273 ms  42.300 ms
         9  d1-26-224-143-118-on-nets.com (118.143.224.26)  205.752 ms  d1-18-224-143-118-on-nets.com (118.143.224.18)  207.130 ms  d1-14-224-143-118-on-nets.com (118.143.224.14)  206.970 ms
        10  218.189.5.150 (218.189.5.150)  207.456 ms  206.349 ms  206.941 ms
        11  * * *
        12  10gigabitethernet2-1.core1.lax1.he.net (72.52.92.121)  214.087 ms  214.426 ms  214.818 ms
        13  192.88.99.1 (192.88.99.1)  207.215 ms  199.270 ms  209.391 ms

        # ifconfig tun6to4
        tun6to4   Link encap:IPv6-in-IPv4
                  inet6 addr: 2002:x:x::/16 Scope:Global
                  inet6 addr: ::x.x.x.x/128 Scope:Compat
                  UP RUNNING NOARP  MTU:1480  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:11 dropped:0 overruns:0 carrier:11
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        # iptunnel
        sit0: ipv6/ip remote any local any ttl 64 nopmtudisc
        tun6to4: ipv6/ip remote any local x.x.x.x ttl 64

        # ip -6 route show
        ::/96 via :: dev tun6to4 metric 256 expires 21332777sec mtu 1480 advmss 1420 hoplimit 4294967295
        2002::/16 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295
        fe80::/64 dev eth0 metric 256 expires 15674592sec mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth1 metric 256 expires 15674597sec mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295
        default via ::192.88.99.1 dev tun6to4 metric 1 expires 21332861sec mtu 1480 advmss 1420 hoplimit 4294967295

        # ping6 -n -c 4 ipv6.google.com
        PING ipv6.google.com(2404:6800:8005::68) 56 data bytes
        From 2002:x:x:: icmp_seq=0 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=1 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=2 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=3 Destination unreachable: Address unreachable
        --- ipv6.google.com ping statistics ---
        4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms

    What is my problem? Thanks.
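
    Not stated in the question, but two things commonly break 6to4 even when the relay (192.88.99.1) is reachable: the local endpoint sitting behind NAT (6to4 needs the public IPv4 address configured on the machine itself) and a firewall dropping IP protocol 41 (IPv6-in-IPv4). A hedged sketch of checks, assuming eth0 is the interface holding the public address:

        # watch for encapsulated IPv6 traffic to/from the 6to4 relay
        tcpdump -ni eth0 'ip proto 41'

        # make sure protocol 41 is not filtered locally
        iptables -I INPUT  -p 41 -j ACCEPT
        iptables -I OUTPUT -p 41 -j ACCEPT

    If tcpdump shows encapsulated packets leaving but nothing coming back, the problem is upstream (NAT, or the ISP filtering protocol 41) rather than in the tunnel configuration shown above.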

    Read the article

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by Varnish + nginx. Since most of the site's content is static unless you are a logged-in user, I want Varnish to cache the site heavily when a user is logged out, but to cache only static assets when they are logged in. When a user is logged in they will have the cookie 'user_credentials' present in their Cookie: header; in addition I need to skip caching on /login and /sessions so that a user can get their 'user_credentials' cookie in the first place. Rails does not set a cache-friendly Cache-Control header by default, but my application sends a "public,s-max-age=60" header when a user is not logged in. Nginx is set to return 'far future' Expires headers for all static assets. The configuration I have at the moment bypasses the cache completely for everything when logged in, including static assets, and returns a cache MISS for everything when logged out. I've spent hours going around in circles, and here is my current default.vcl:

        director rails_director round-robin {
          {
            .backend = {
              .host = "xxx.xxx.xxx.xxx";
              .port = "http";
              .probe = {
                .url = "/lbcheck/lbuptest";
                .timeout = 0.3 s;
                .window = 8;
                .threshold = 3;
              }
            }
          }
        }

        sub vcl_recv {
          if (req.url ~ "^/login") {
            pipe;
          }
          if (req.url ~ "^/sessions") {
            pipe;
          }
          # The regex used here matches the standard rails cache buster urls
          # e.g. /images/an-image.png?1234567
          if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
            unset req.http.cookie;
            lookup;
          } else {
            if (req.http.cookie ~ "user_credentials") {
              pipe;
            }
          }
          # Only cache GET and HEAD requests
          if (req.request != "GET" && req.request != "HEAD") {
            pipe;
          }
        }

        sub vcl_fetch {
          if (req.url ~ "^/login") {
            pass;
          }
          if (req.url ~ "^/sessions") {
            pass;
          }
          if (req.http.cookie ~ "user_credentials") {
            pass;
          } else {
            unset req.http.Set-Cookie;
          }
          # cache CSS and JS files
          if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
            unset req.http.Set-Cookie;
          }
          if (obj.status >= 400 && obj.status < 500) {
            error 404 "File not found";
          }
          if (obj.status >= 500 && obj.status < 600) {
            error 503 "File is Temporarily Unavailable";
          }
        }

        sub vcl_deliver {
          if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
          } else {
            set resp.http.X-Cache = "MISS";
          }
        }

    Read the article

  • Definitive download location for MBSA "wsusscn2.cab" file for offline mode scans?

    - by Chris W. Rea
    I'm running Microsoft Baseline Security Analyzer 2.1 against some servers that don't have outbound access to the Internet (by design, due to firewall restrictions), so I want to run MBSA in offline mode. In order to do so, I need the list of updates in the file named "wsusscn2.cab". Is there a well-known page or URL at Microsoft for downloading the most up-to-date version of that file for MBSA offline mode? Thank you.

    Read the article

  • How to configure DNS so that www.example.com goes to one server, *.example.com to another

    - by fishwebby
    I'm trying to set up my domain as follows, but I'm not actually sure whether it's possible. I have a domain where I would like the bare domain and the www address to go to my static site, but all other subdomains to go to my application server. My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers:

        ns1.dnswebby.com
        ns2.dnswebby.com
        ns3.dnswebby.com

    In Webbynode I've set up a wildcard A record pointing to the IP address of my VPS:

        *    1.2.3.4    A

    This works nicely: if I go to app.example.com it resolves to my application server at Webbynode. However, what I'd like is for example.com and www.example.com to go to my static site, hosted back at Dreamhost, while any other subdomain still goes to my app. What I've done to try and achieve this is set up these NS entries at Webbynode, trying to get Dreamhost to resolve those names:

        (empty)    ns1.dreamhost.com    NS
        (empty)    ns2.dreamhost.com    NS
        (empty)    ns3.dreamhost.com    NS
        www        ns1.dreamhost.com    NS
        www        ns2.dreamhost.com    NS
        www        ns3.dreamhost.com    NS

    (I don't have a fixed IP address at Dreamhost, so I can't just set up simple A records.) However, this doesn't work. Does anyone have any idea whether this is possible and, if so, how it could be done?

    Update: I've got this working now. The domain is set up as above (registered with Dreamhost, but using Webbynode's nameservers). To delegate the DNS for www.example.com to Dreamhost, I've got the following DNS entries set up:

        www.example.com.    ns1.dreamhost.com.    NS
        www.example.com.    ns2.dreamhost.com.    NS
        www.example.com.    ns3.dreamhost.com.    NS

    (note the trailing dots). And to get example.com itself to resolve to my static site, I set up a CNAME record:

        example.com.    www.example.com.    CNAME

    So now example.com and www.example.com go to my static site on Dreamhost, and if Dreamhost changes the IP address of my shared hosting it won't affect me, while all other subdomains go to my application server. This seems to work nicely, but if anyone knows a better way to do it I'd be happy to hear it. Thanks to all who replied.
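
    For reference, a sketch of the final setup in BIND zone-file notation (same names and the placeholder 1.2.3.4 address as above), which may make the delegation easier to see:

        *.example.com.      IN  A      1.2.3.4              ; everything else -> VPS
        www.example.com.    IN  NS     ns1.dreamhost.com.   ; delegate www to Dreamhost
        www.example.com.    IN  NS     ns2.dreamhost.com.
        www.example.com.    IN  NS     ns3.dreamhost.com.
        example.com.        IN  CNAME  www.example.com.     ; apex follows www

    One caveat worth noting: a CNAME at the zone apex is technically disallowed by the DNS specifications (it cannot coexist with the zone's SOA and NS records), so although some providers accept it, an ALIAS-style record or a plain A record is the more conventional choice when available.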

    Read the article

  • Server 2012, ADFS 2.1, and Office 365

    - by Matt Bear
    Has anyone gotten ADFS 2.1 on Server 2012 working with Office 365 SSO? I have it working up to a point: I tweaked the registry to allow the PowerShell commands to run, and user account sync works fine. Even the Remote Connectivity Analyzer shows no errors. But SSO itself does not seem to be passing the credentials correctly. Microsoft claims that ADFS 2.1 is not supported with Office 365, but I'm just being stubborn and not giving up that easily.

    Read the article

  • (Serving PHP) Will Apache2 create a new thread on every connection?

    - by apasajja
    According to many online sources, when serving static files Apache2 creates a new thread for every connection, which makes it resource-hungry. But what about serving PHP through Apache2 (mod_php, MPM worker, etc.)? Will Apache also open a new thread per connection, as it does for static files? (AFAIK, with nginx and PHP-FPM we can set the maximum number of workers, but I don't know how many connections each one handles.) I'm planning to use Apache2 to serve PHP, and I hope it will match nginx with PHP-FPM, or do even better, in resource usage and performance.
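
    For context, Apache's threading behaviour is controlled by the MPM rather than by whether the content is static or PHP: with the worker (or event) MPM a fixed pool of processes each runs a configurable number of threads, and a connection occupies one of those threads instead of spawning a new one. A hedged sketch of the relevant worker-MPM knobs (the values are illustrative only, not recommendations):

        # mpm_worker tuning - illustrative values
        <IfModule mpm_worker_module>
            StartServers           2
            ServerLimit           16
            MaxClients           400    # processes x ThreadsPerChild
            MinSpareThreads       25
            MaxSpareThreads       75
            ThreadsPerChild       25
            MaxRequestsPerChild    0
        </IfModule>

    One caveat: mod_php is normally paired with the prefork MPM (one single-threaded process per connection), so PHP is usually run via FastCGI (for example mod_fcgid or PHP-FPM) when the worker MPM is used.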

    Read the article

  • Cisco IPSec, nat, and port forwarding don't play well together

    - by Alan
    I have two Cisco ADSL modems configured conventionally to NAT the inside traffic to the ISP. That works. One of them has two port forwards, for SMTP and IMAP, from the outside to the inside; this provides external access to the mail server. This works. The modem doing the port forwarding also terminates PPTP VPN traffic. There are two DNS servers: one inside the office, which resolves mail to the local address, and one outside the office, which resolves mail for the rest of the world to the external interface. That all works. I recently added an IPSec VPN between the two modems, and that works for everything EXCEPT connections over the IPSec VPN to the mail server on port 25 or 143 from workstations on the remote LAN. It would seem that the modem with the port forwards mistakes traffic from the mail server destined for a machine on the other side of the IPSec VPN for traffic that should go back to a port-forwarded connection. PPTP VPN traffic to the mail server is fine. Is this a scenario anybody is familiar with, and are there any suggestions on how to work around it? Many thanks, Alan.

    But wait, there is more. These are the strategic parts of the NAT config. A route map is used to exclude the LANs that are reachable via IPSec tunnels from being NATed:

        int ethernet0
         ip nat inside
        int dialer1
         ip nat outside
        ip nat inside source route-map nonat interface Dialer1 overload
        route-map nonat permit 10
         match ip address 105
        access-list 105 remark *** Traffic to NAT
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.9.0 0.0.0.255
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.48.0 0.0.0.255
        access-list 105 permit ip 192.168.1.0 0.0.0.255 any
        ip nat inside source static tcp 192.168.1.241 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.241 143 interface Dialer1 143

    At the risk of answering my own question, I resolved this outside the Cisco realm. I bound a secondary IP address, 192.168.1.244, to the mail server, changed the port forwards to use it, and left all the local and IPSec traffic on 192.168.1.241; the problem was solved. New port forwards:

        ip nat inside source static tcp 192.168.1.244 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.244 143 interface Dialer1 143

    Obviously this is a messy solution, and being able to fix it in the Cisco config would be preferable.

    Read the article

  • I disconnected my cellphone while transferring files to its Mini SD card. Now the files aren't there

    - by Martín Fixman
    I use Ubuntu 9.10, and the MiniSD card still shows the space as used, as if the files were there. Baobab (the disk usage analyzer) shows that the card only has 118 MB used, out of the 401 MB Ubuntu claims are in use. Of course, I already tried the obvious (rebooting the phone, adding and removing files, etc.), but I don't want to format the card: I still have some files on it, transferring them to my computer is slow, and because I use an old cable the transfer often fails.
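
    Not mentioned in the question, but an interrupted transfer to a FAT-formatted card often leaves orphaned clusters behind, which count as used space without showing up as files; a filesystem check can usually reclaim them without formatting. A hedged sketch, assuming the card appears as /dev/mmcblk0p1 (the device name is a guess; confirm it with dmesg or sudo fdisk -l, and unmount the card first):

        sudo umount /dev/mmcblk0p1
        sudo fsck.vfat -a -v /dev/mmcblk0p1   # -a repairs automatically, -v is verbose

    fsck.vfat is part of the dosfstools package and only applies if the card really is FAT-formatted, which is the usual case for phone memory cards.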

    Read the article

  • CDN vs own apache servers?

    - by ajsie
    I know that a CDN is just for static content, but then I still have to spread my Apache servers out to all corners of the world, right? So once I have done that, why don't I just set up some dedicated Apache servers that only serve static content, just like a CDN? Are there real benefits to using a CDN compared to that scenario?

    Read the article

  • Which web server architecture do you think is better?

    - by ngache
    Option 1: use Apache to serve dynamic requests that need to be processed by PHP, and use nginx to serve static files. Option 2: use nginx to serve all requests. So the key point is: which of them is more efficient at serving dynamic requests? (We have no doubt that nginx is much better than Apache at serving static files.)
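
    A minimal sketch of the first option, with nginx terminating all requests, serving static files itself, and proxying everything else to an Apache/mod_php backend assumed to listen on 127.0.0.1:8080 (the paths, port, and extension list are placeholders):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/public;

            # static files served directly by nginx
            location ~* \.(css|js|jpg|jpeg|gif|png|ico)$ {
                expires 30d;
                access_log off;
            }

            # dynamic requests proxied to Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    The second option replaces the proxy_pass block with a fastcgi_pass to a PHP-FPM pool, removing Apache from the chain entirely.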

    Read the article

  • lighttpd config and rewriting/disabling attempts to access favicon.ico

    - by Kyle
    I've got lighttpd and Apache working together on an app I'm building; lighty is serving the static content. However, each time a static asset is requested, I see a "not found: favicon.ico" message in the logs. I have added the following URL rewrite:

        url.rewrite-once = ( "^/favicon.ico$" => "/assets/images/favicon.png" )

    But to no avail; I'm still getting the message. Any ideas?

    Read the article

  • debian gateway using iptables

    - by meijuh
    I am having problems setting up a Debian gateway server. My goal: eth1 is the WAN interface, eth0 is the LAN interface, and ports 22 (SSH) and 80 (HTTP) on the gateway itself must be reachable from the outside world (SSH and HTTP run on this server).

    What I did was the following. First, I created the file /etc/iptables.rules with these contents:

        *nat
        -A POSTROUTING -o eth1 -j MASQUERADE
        COMMIT
        *filter
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth1 -j DROP
        COMMIT

    Then I edited /etc/network/interfaces as follows:

        # The loopback network interface
        auto lo
        iface lo inet loopback
            pre-up iptables-restore < /etc/iptables.rules

        auto eth0
        allow-hotplug eth0
        iface eth0 inet dhcp

        #auto eth1
        #allow-hotplug eth1
        #iface eth1 inet dhcp
        allow-hotplug eth1
        iface eth1 inet static
            address 217.119.224.51
            netmask 255.255.255.248
            gateway 217.119.224.49
            dns-nameservers 217.119.226.67 217.119.226.68

    Finally, I uncommented net.ipv4.ip_forward=1 in /etc/sysctl.conf to allow packet forwarding. The static settings for eth1, such as the IP address, I simply copied from my existing router (which I want to replace). I have a (Windows) DNS + DHCP server at 10.180.1.10, which assigns the address 10.180.1.44 to eth0. What this server does is not really interesting: it only maps domain names on our local network and assigns one static IP to the gateway.

    What works: on the gateway itself I can ping 8.8.8.8 and google.nl, so that is okay. What does not work: (1) machines connected to eth0 (indirectly, via a switch) cannot ping any IP or domain, so I guess the gateway cannot be reached; (2) when I configure my Linux laptop with a static IP of 10.180.1.41, a netmask, and 10.180.1.44 as the gateway, I cannot ping any IP or domain either. This means that maybe my iptables rules are incorrect or not loaded correctly, or maybe I have to reconfigure DNS/DHCP on the Windows machine. I have not reset the Windows machine's network settings or restarted the DNS/DHCP services; should I do this? I did not install dnsmasq as described here: http://blog.noviantech.com/2010/12/22/debian-router-gateway-in-15-minutes/. I don't think this is necessary?
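
    One thing not covered by the rules above: they only constrain the INPUT chain, so forwarding for the LAN relies on the FORWARD chain's default policy being ACCEPT. A hedged sketch of explicit forwarding rules that are commonly added alongside the MASQUERADE rule (these lines would go inside the existing *filter section of /etc/iptables.rules, before its COMMIT):

        # LAN (eth0) out to the WAN (eth1), plus the replies coming back
        -A FORWARD -i eth0 -o eth1 -j ACCEPT
        -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    It is also worth confirming at runtime that forwarding is really on (sysctl net.ipv4.ip_forward should print 1) and that the rules actually loaded (iptables -L -n -v and iptables -t nat -L -n -v), since an error in the pre-up line can leave the ruleset unloaded.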

    Read the article

  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to set up WordPress multisite (subfolder structure) with nginx, but I'm having a problem with one rewrite rule. Below is the Apache .htaccess, which I have to translate into nginx configuration:

        RewriteEngine On
        RewriteBase /blogs/
        RewriteRule ^index\.php$ - [L]

        # uploaded files
        RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Below is what I came up with:

        server {
            listen 80;
            server_name example.com;
            server_name_in_redirect off;
            expires 1d;

            access_log /srv/www/example.com/logs/access.log;
            error_log /srv/www/example.com/logs/error.log;

            root /srv/www/example.com/public;
            index index.html;
            try_files $uri $uri/ /index.html;

            # rewriting uploaded files
            rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last;

            # add a trailing slash to /wp-admin
            rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent;

            if (!-e $request_filename) {
                rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last;
                rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last;
            }

            location /blogs/ {
                index index.php;
                #try_files $uri $uri/ /blogs/index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
            }

            # static assets
            location ~* ^.+\.(manifest)$ {
                access_log /srv/www/example.com/logs/static.log;
            }

            location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                # only set expires max IFF the file is a static file and exists
                if (-f $request_filename) {
                    expires max;
                    access_log /srv/www/example.com/logs/static.log;
                }
            }
        }

    In the above configuration, I believe the rule rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; has no effect, because the error log contains the following line:

        2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1"

    (Here, 'test' is the second blog created using the multisite feature.) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but that doesn't seem to happen. Am I overlooking something obvious? Thanks!
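
    For what it's worth, the commented-out try_files line is very close to the pattern commonly used for multisite in subdirectories: rather than rewriting /blogs/test/index.php to a real file, any path that does not exist on disk is handed to the shared /blogs/index.php front controller. A sketch (the /blogs/ prefix is the one from the question):

        location /blogs/ {
            index index.php;
            # anything that is not a real file or directory goes to the
            # shared front controller of the multisite install
            try_files $uri $uri/ /blogs/index.php?$args;
        }

    The ms-files.php and wp-admin rewrites would still be needed as separate rules, as in the configuration above; this block only replaces the per-site index.php lookup that produces the "is not found" error.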

    Read the article

  • Facebook, Twitter, Yahoo don't work. CDN problem. Akamai?

    - by Toktik
    Some sites don't work normally: they open, but without CSS or images, and with JavaScript errors. Facebook gets stuck on static.ak.fbcdn.net, Twitter gets stuck on a1.twimg.com, and Yahoo gets stuck on l.yimg.com. In Firefox I'm left with "Waiting for ..." (any of those hosts). I can access Facebook only over SSL, i.e. https://facebook.com. When I ping those hosts, I only receive "Request timed out". Update: when I ping static.ak.fbcdn.net it resolves to a749.g.akamai.net, and when I ping that server I also get "Request timed out".

    Read the article

  • Determining speed (class) of USB on PC

    - by Thomas Matthews
    Is there a method for determining the speed of a USB port on a Windows XP PC without using a USB protocol analyzer? I'm looking for something simple, such as looking in Device Manager. I would like to hook up some USB equipment to a USB 2.0 port rather than a slower USB 1.0 or 1.1 port.

    Read the article

  • Best tool for Analyzing IIS 7 SMTP Logs

    - by EfficionDave
    We're using IIS 7's SMTP service for sending out emails from our sites. I'm looking for an SMTP log analyzer to make it easier to view the results and identify any problems (blocks, unauthorized relay attempts, blacklisting, ...). What is the best tool to use for this?

    Read the article

  • Blocking ICMP outgoing requests only in eth1

    - by Raj
    I am creating a NAT setup with iptables:

        Computer A: eth0 (DHCP) + eth1 (static IP 192.168.0.1, acting as the gateway)
        Computer B: eth1 (static IP 192.168.0.2, using Computer A as its gateway)

    I know how to block outgoing ICMP echo requests (-A OUTPUT -p icmp --icmp-type echo-request -j DROP), but that only blocks ICMP requests from Computer A, not from Computer B (in fact, it applies only to Computer A; Computer B can keep sending them). I tried the same command with -o eth1 added, but then nothing is blocked at all. Any idea?
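
    A hedged sketch of what usually resolves this: packets that Computer A forwards on behalf of Computer B never traverse Computer A's OUTPUT chain, only its FORWARD chain, so the drop rule has to live there. Matching on the ingress interface (-i eth1, the LAN side in this setup) limits the block to forwarded traffic:

        # On Computer A: drop echo requests being forwarded for LAN hosts.
        # Computer A's own pings (OUTPUT chain) remain unaffected.
        iptables -A FORWARD -i eth1 -p icmp --icmp-type echo-request -j DROP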

    Read the article

  • Email hosting on home's Windows server 2003

    - by klay
    Hi guys, I am new to server management. I have a static IP address, and I recently bought a domain name, which I have configured to point at my IP address. I am running Windows Server 2003 Standard. What are the steps to host my email addresses? Do I need to buy anything else, or is what I have enough (static IP address, domain name, Windows Server 2003, Exchange Server 2003)? Thanks, guys.
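
    Not part of the question, but besides the Exchange configuration itself, the DNS side usually comes down to two records at the domain's DNS host plus a reverse (PTR) record from the ISP. A sketch in zone-file notation, with example.com and 203.0.113.10 as placeholders for the real domain and static IP:

        mail.example.com.   IN  A    203.0.113.10
        example.com.        IN  MX   10 mail.example.com.
        ; ask the ISP to point the PTR record for 203.0.113.10 at mail.example.com,
        ; since many receiving servers reject mail when reverse DNS does not match

    Beyond that, inbound TCP port 25 has to be reachable and not blocked by the ISP (a common restriction on residential connections), so that is worth verifying before anything else.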

    Read the article

  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title: I have a Debian 6 box with 2 NICs and 3 different subnets on a single interface, like this:

        auto eth0
        iface eth0 inet static
            address 192.168.106.254
            netmask 255.255.255.0

        auto eth0:0
        iface eth0:0 inet static
            address 172.19.221.81
            netmask 255.255.255.248

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.254.1
            netmask 255.255.255.248

        auto eth1
        iface eth1 inet static
            address 172.19.216.3
            netmask 255.255.255.0
            gateway 172.19.216.13

    eth0 is connected to a switch with 3 different VLANs; eth1 is connected to a router. There are no iptables DROP rules, so all traffic is allowed. Now, passing traffic through eth0 works, and passing traffic through eth0:0 works, but passing traffic through eth0:1 does not. I can ping the IP address of that subinterface from a PC that uses it as its default gateway, but I can't reach servers in the subnet of the eth1 interface; the traffic is not passing. Even when I set iptables to log all traffic in the FORWARD chain, I can see the packets there, but the traffic still doesn't really get through. The funny thing is that the other way around works fine: from eth1 to eth0:1 I can use RDP, telnet, ping, etc. By doing some work with iptables I managed to pass some traffic from eth0:1 to eth1; the rules look like this:

        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254

    Doing this it works, but it is really a headache to have to map each port to a server; imagine if I move a service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit to this? If not, what am I doing wrong, given that I have the same setup with other subnets and it works fine? Without the iptables rules in the nat table it doesn't work. Thanks, and I hope for good comments/answers.
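
    Not stated in the question, but one thing that produces exactly this symptom (packets visible in the FORWARD log, yet no replies getting through without NAT) is reverse-path filtering dropping traffic whose return route does not match the interface it arrived on. A hedged sketch of how to check and temporarily relax it while debugging:

        # current settings
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter net.ipv4.conf.eth1.rp_filter

        # relax while testing; make permanent in /etc/sysctl.conf only if it helps
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.eth0.rp_filter=0
        sysctl -w net.ipv4.conf.eth1.rp_filter=0

    If this turns out to be the cause, the cleaner long-term fix is usually to ensure the devices on the eth1 side have routes back to the 192.168.254.0/29 subnet, rather than leaving rp_filter disabled or keeping the SNAT workaround.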

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

        Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machines: VMs in VirtualBox on Windows 7

    System provisioning works OK, but I have some problems. The first one is that Cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, after the reboot the target system continues to PXE boot! The second problem is that the target server is not configured correctly. See my setup:

        cobbler system report --name=test
        Name                          : test
        TFTP Boot Files               : {}
        Comment                       :
        Fetchable Files               : {}
        Gateway                       : 192.168.0.1
        Hostname                      : testcob1.example.com
        Image                         :
        IPv6 Autoconfiguration        : False
        IPv6 Default Device           :
        Kernel Options                : {}
        Kernel Options (Post Install) : {}
        Kickstart                     : <<inherit>>
        Kickstart Metadata            : {}
        LDAP Enabled                  : False
        LDAP Management Type          : authconfig
        Management Classes            : []
        Management Parameters         : <<inherit>>
        Monit Enabled                 : False
        Name Servers                  : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path      : []
        Netboot Enabled               : False
        Owners                        : ['admin']
        Power Management Address      :
        Power ID                      :
        Power Password                :
        Power Management Type         : ipmitool
        Power Username                :
        Profile                       : RHEL-6.5-x86_64
        Proxy                         : <<inherit>>
        Red Hat Management Key        : <<inherit>>
        Red Hat Management Server     : <<inherit>>
        Repos Enabled                 : False
        Server Override               : <<inherit>>
        Status                        : testing
        Template Files                : {}
        Virt Auto Boot                : <<inherit>>
        Virt CPUs                     : <<inherit>>
        Virt Disk Driver Type         : <<inherit>>
        Virt File Size(GB)            : <<inherit>>
        Virt Path                     : <<inherit>>
        Virt RAM (MB)                 : <<inherit>>
        Virt Type                     : <<inherit>>
        Interface =====               : eth0
        Bonding Opts                  :
        Bridge Opts                   :
        DHCP Tag                      :
        DNS Name                      :
        Master Interface              :
        Interface Type                :
        IP Address                    : 192.168.0.200
        IPv6 Address                  :
        IPv6 Default Gateway          :
        IPv6 MTU                      :
        IPv6 Secondaries              : []
        IPv6 Static Routes            : []
        MAC Address                   :
        Management Interface          : True
        MTU                           :
        Subnet Mask                   : 255.255.255.0
        Static                        : True
        Static Routes                 : []
        Virt Bridge                   :

    So, although I have set the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.

    Read the article

  • Collecting and viewing statistical data on website usage? Want to give Google Analytics the boot.

    - by amn
    I have always been somewhat reluctant to "outsource" site statistics to Google. We have an Apache server running on a Windows machine, and I am pretty sure all the foundations for collecting the needed visitor data are already there. I would like to stop using GA and use some kind of application where the data does not travel to a third party but remains on the host, or at least travels only to the remote administrator if it is a browser-based log analyzer. What are my options?

    Read the article

  • default virtual network interface

    - by Zulakis
    I have a single Ethernet connection to a network but need multiple IPs. Because of this, I am using virtual network interfaces, like this:

        auto intern
        iface intern inet static
            address ...
            netmask ...
            gateway ...

        auto intern:1
        iface intern:1 inet static
            address ...
            netmask ...
            gateway ...

    I need to specify which IP should be used by default for outgoing traffic. How can I do that?
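
    Not part of the question, but the usual way to pin the source address of locally generated outgoing traffic is the src hint on the default route; without it the kernel simply picks the primary address of the outgoing interface. A sketch with placeholder addresses (192.0.2.1 as the gateway, 192.0.2.10 as the address to prefer), using the interface name from the question:

        # make locally generated traffic leave with 192.0.2.10 by default
        ip route replace default via 192.0.2.1 dev intern src 192.0.2.10

    The same effect can be made persistent with a post-up line in /etc/network/interfaces for the intern interface.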

    Read the article

  • ASA5505 Novice. Setting up Outside, Inside, and DMZ as Guest Network

    - by GriffJ
    I need a little help developing a config for our ASA5505. I'm an MCSA/MCITPAS, but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway, and, well, it's antiquated and only has a 50-user license, which means I'm constantly clearing local-host entries throughout the day as people complain. I discovered that the last IT person bought a couple of ASA5505s, and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but since I was going this far I thought I'd go further and also remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. Below is a scenario I wrote up, with the actual IPs changed to protect the innocent.

        Outside network: 1.2.3.10 255.255.255.248 (we have a /29)
        Inside network:  10.10.36.0 255.255.252.0
        DMZ network:     192.168.15.0 255.255.255.0

        Outside network on e0/0
        DMZ network on e0/1
        Inside network on e0/2-7

        The DMZ network has DHCPD enabled, with the pool 192.168.15.50-192.168.15.250.
        The DMZ network needs to be able to reach DNS on the inside network at 10.10.37.11 and 10.10.37.12.
        The DMZ network needs to be able to reach webmail on the inside network at 10.10.37.15.
        The DMZ network needs to be able to reach the business website on the inside network at 10.10.37.17.
        The DMZ network needs to be able to reach the outside network (access to the Internet).

        The inside network has no DHCPD (DHCP is handled by the domain controller).
        The inside network needs to be able to see anything on the DMZ network.
        The inside network needs to be able to reach the outside network (access to the Internet).

    There is already some access-list and static mapping configuration. This maps external IPs from our ISP to our inside server IPs:

        static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
        static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
        static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

    This allows access to our web server, mail server, and VPN from the outside:

        access-list 108 permit tcp any host 1.2.3.11 eq https
        access-list 108 permit tcp any host 1.2.3.11 eq smtp
        access-list 108 permit tcp any host 1.2.3.11 eq 993
        access-list 108 permit tcp any host 1.2.3.11 eq 465
        access-list 108 permit tcp any host 1.2.3.12 eq www
        access-list 108 permit tcp any host 1.2.3.12 eq https
        access-list 108 permit tcp any host 1.2.3.13 eq pptp

    And here is all the NAT and routing configuration I have so far:

        global (outside) 1 interface
        global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
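
    A hedged sketch of the DMZ-to-inside part of the requirement list above, assuming the DMZ interface has been given nameif dmz with a security level lower than inside (the interface name, the ACL name, and the exact service list are assumptions; this is not a complete configuration):

        ! allow DMZ guests to reach inside DNS, webmail and the business web server
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
        ! block everything else toward the inside network, then let guests out to the Internet
        access-list dmz_in deny ip 192.168.15.0 255.255.255.0 10.10.36.0 255.255.252.0
        access-list dmz_in permit ip 192.168.15.0 255.255.255.0 any
        access-group dmz_in in interface dmz

    On top of this, the inside-to-DMZ direction, the DHCPD pool on the DMZ interface, and the existing NAT and static statements carried over from the PIX would still be needed.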

    Read the article
