Search Results

Search found 1519 results on 61 pages for 'chain'.

  • jQuery live() vs delegate()

    - by PeeHaa
    I've read some posts here and on the web about the differences between live() and delegate(). However, I haven't found the answer I'm looking for (if this is a dupe, please tell me). I know one difference is that live() cannot be used in a chain, and as I read somewhere, delegate() is in some cases faster (better performance). So I'm wondering: is there a situation where you would use live() instead of delegate()?
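    For context, a minimal sketch of the two call styles (the selectors and handlers are illustrative, not from the question). live() binds the handler at the document root, which is why it cannot sit in the middle of a traversal chain; delegate() binds it to an ancestor you choose, so the event bubbles a shorter distance:

      // live(): registered at the document root; the selector is matched
      // only after the event has bubbled all the way up
      $('td.action').live('click', function () {
          alert('row action clicked');
      });

      // delegate(): registered on a chosen container, so bubbling stops
      // sooner (usually faster) and it can follow a traversal in a chain
      $('#orders').delegate('td.action', 'click', function () {
          alert('row action clicked');
      });

    Both variants handle elements added to the DOM later; in practice, the main reason to reach for live() was simply that it existed first (jQuery 1.3, versus 1.4.2 for delegate()).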

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello,

    Environment: Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).

    Testing using ldapsearch:

      ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... fails with:

      ldap_start_tls: Protocol error (2)

    Testing after adding "-d 9" fails with:

      res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>

    Testing without requiring STARTTLS, or with LDAPS:

      ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
      ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... succeeds with:

      # donaldr, users, darkhorse.com
      dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
      uid: donaldr
      # search result
      search: 2
      result: 0 Success
      # numResponses: 2
      # numEntries: 1

    (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)

    Testing with openssl:

      openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state

    ... succeeds:

      CONNECTED(00000003)
      SSL_connect:before/connect initialization
      SSL_connect:SSLv2/v3 write client hello A
      SSL_connect:SSLv3 read server hello A
      depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
      verify error:num=19:self signed certificate in certificate chain
      verify return:0
      SSL_connect:SSLv3 read server certificate A
      SSL_connect:SSLv3 read server done A
      SSL_connect:SSLv3 write client key exchange A
      SSL_connect:SSLv3 write change cipher spec A
      SSL_connect:SSLv3 write finished A
      SSL_connect:SSLv3 flush data
      SSL_connect:SSLv3 read finished A
      ---
      Certificate chain
       0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
         i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
       1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
         i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      <deleted for brevity>
      -----END CERTIFICATE-----
      subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
      issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 2640 bytes and written 325 bytes
      ---
      New, TLSv1/SSLv3, Cipher is AES256-SHA
      Server public key is 1024 bit
      Compression: NONE
      Expansion: NONE
      SSL-Session:
        Protocol : TLSv1
        Cipher : AES256-SHA
        Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
        Session-ID-ctx:
        Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
        Key-Arg : None
        Start Time: 1271202435
        Timeout : 300 (sec)
        Verify return code: 0 (ok)

    So we believe that the slapd daemon is reading our certificate and writing it to LDAP clients. Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist, and TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf, when enabling SSL in the LDAP section of the Open Directory service. While that appears to be enough for LDAPS, it apparently is not enough for TLS.

    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with an Apple Server Admin generated self-signed certificate results in no change in ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:

      4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
      4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
      4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation

    Any debugging advice much appreciated. -- Tom Kishel
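    For reference, a minimal sketch of the TLS directives slapd reads from slapd_macosxserver.conf (the paths below are placeholders, not the ones from this server; TLSCertificatePassphraseTool is the Apple-specific extra mentioned above):

      TLSCertificateFile       /etc/certificates/gnome.darkhorse.com.crt
      TLSCertificateKeyFile    /etc/certificates/gnome.darkhorse.com.key
      TLSCACertificateFile     /etc/certificates/dhc-mis-ca.crt

      # slapd only registers the StartTLS extended operation (OID
      # 1.3.6.1.4.1.1466.20037) when its TLS context initializes at startup;
      # a key file it cannot read or decrypt is one plausible way to get the
      # "unsupported extended operation" result logged above while LDAPS,
      # configured separately via -h ldaps:///, still works.

    So one productive check is whether the launchd-started slapd can actually read all three files (and run the passphrase tool) under its own account.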

  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable in my startup applications, but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on to my computer. How can I set up ufw so it is activated when I turn on my computer and protects all accounts?

    Update: I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled.

    Content of /etc/ufw/ufw.conf:

      # /etc/ufw/ufw.conf
      #
      # set to yes to start on boot
      ENABLED=yes
      # set to one of 'off', 'low', 'medium', 'high'
      LOGLEVEL=full

    Content of /etc/default/ufw:

      # /etc/default/ufw
      #
      # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
      # accepted). You will need to 'disable' and then 'enable' the firewall for
      # the changes to take affect.
      IPV6=no

      # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
      # ACCEPT enables connection tracking for NEW inbound packets on the INPUT
      # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
      # that if you change this you will most likely want to adjust your rules.
      DEFAULT_INPUT_POLICY="DROP"

      # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
      # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT
      # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
      # that if you change this you will most likely want to adjust your rules.
      DEFAULT_OUTPUT_POLICY="ACCEPT"

      # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
      # if you change this you will most likely want to adjust your rules
      DEFAULT_FORWARD_POLICY="DROP"

      # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please
      # note that setting this to ACCEPT may be a security risk. See 'man ufw' for
      # details
      DEFAULT_APPLICATION_POLICY="SKIP"

      # By default, ufw only touches its own chains. Set this to 'yes' to have ufw
      # manage the built-in chains too. Warning: setting this to 'yes' will break
      # non-ufw managed firewall rules
      MANAGE_BUILTINS=no

      #
      # IPT backend
      #
      # only enable if using iptables backend
      IPT_SYSCTL=/etc/ufw/sysctl.conf

      # extra connection tracking modules to load
      IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc"

    Update: Followed your advice and ran update-rc.d, with no luck.

      lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults
      update-rc.d: warning: /etc/init.d/ufw missing LSB information
      update-rc.d: see <http://wiki.debian.org/LSBInitScripts>
       Adding system startup for /etc/init.d/ufw ...
         /etc/rc0.d/K20ufw -> ../init.d/ufw
         /etc/rc1.d/K20ufw -> ../init.d/ufw
         /etc/rc6.d/K20ufw -> ../init.d/ufw
         /etc/rc2.d/S20ufw -> ../init.d/ufw
         /etc/rc3.d/S20ufw -> ../init.d/ufw
         /etc/rc4.d/S20ufw -> ../init.d/ufw
         /etc/rc5.d/S20ufw -> ../init.d/ufw

      lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw
      lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw
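    For anyone debugging the same symptom, a short checklist (a sketch; the paths are the stock Ubuntu ones) to confirm what the boot sequence is actually doing:

      # confirm ufw's own persistence flag survived the reboot
      grep ENABLED /etc/ufw/ufw.conf

      # re-enable through ufw itself, which also rewrites ENABLED=yes
      sudo ufw enable

      # look for anything else in the boot/network scripts that
      # touches iptables after S20ufw has run
      grep -rn iptables /etc/network/if-up.d/ /etc/network/if-pre-up.d/ 2>/dev/null

    If another hook (a NetworkManager dispatcher script, a VPN up-script, an iptables-restore job) loads its own iptables state after the ufw init script runs, ufw will report itself inactive even though the rc symlinks above are correct.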

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood which is application dependent. I run a game server, and an attacker is flooding me with "getstatus" queries; the game server replies to each query, which pushes output toward the attacker's IP as high as 30 Mb/s and lags the server. Here are the packet details: the packet starts with 4 bytes of 0xff and then getstatus, so theoretically it looks like "\xff\xff\xff\xffgetstatus ". I've tried a lot of iptables variations, like state matching and rate limiting, but those didn't work. Rate limiting works well, but only when the game server is not started. As soon as the server starts, no iptables rule seems to block the traffic. Anyone else got more solutions? Someone asked me to contact the provider and get it done at the network/router level, but that looks very odd, and I believe they might not do it since it would also affect other clients.

    Responding to all those answers, I'd say: Firstly, it's a VPS, so they can't do it for me. Secondly, I don't care if something is coming in, but since the flood is application generated, there has to be an OS-level solution to block the outgoing packets; at the very least, the outgoing ones must be stopped. Thirdly, it's not DDoS, since just 400 kb/s of input generates 30 Mb/s of output from my game server; that never happens in a DDoS, where asking for a provider/hardware-level solution would be the right approach, but this one is different. And yes, banning his IP stops the flood of outgoing packets, but he has many more IP addresses (he spoofs his original), so I need something that blocks him automatically. I have even tried a lot of firewalls, but as you know they are just front ends to iptables, so if something doesn't work in iptables, what would the firewalls do?

    These were the rules I tried:

      iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource
      iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP

    They work for attacks on unused ports, but when the server is listening and responding to the attacker's incoming queries, they never work.

    Okay Tom.H, your rules were working when I modified them like this:

      iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource
      iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP

    They worked very well for about 3 days: the string "xxxxxxxxxx" would be rate limited and blocked if someone flooded, without affecting the clients. But just today I tried updating the chain to remove a previously blocked IP. For that I had to flush the chain and restore this rule (iptables -X and iptables -F) while some clients were already connected to the servers, including me. Restoring the rules now blocks some clients' strings completely while others are not affected. So does this mean I need to restart the server? Why else would this happen, given that the last time the rules were applied, no one was connected?
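    One approach worth trying, sketched below and not tested against this game server (the thresholds are illustrative, and it requires a hashlimit-capable iptables/kernel): rate-limit in both directions with hashlimit, so the reply volume is capped per destination even when the attacker rotates spoofed source addresses, which directly targets the "block the outgoing packets" requirement:

      # cap inbound getstatus queries per source IP
      iptables -A INPUT -p udp -m string --string "getstatus" --algo bm --to 65535 \
               -m hashlimit --hashlimit-above 5/sec --hashlimit-burst 10 \
               --hashlimit-mode srcip --hashlimit-name getstatus-in -j DROP

      # cap the server's own UDP reply volume per destination IP, so even
      # queries that slip through cannot amplify into a 30 Mb/s stream
      iptables -A OUTPUT -p udp -m hashlimit --hashlimit-above 50/sec \
               --hashlimit-mode dstip --hashlimit-name getstatus-out -j DROP

    Unlike the recent-module rules above, hashlimit keeps a separate bucket per address, so legitimate clients are not punished for the attacker's volume.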

  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet, but rather for the sake of learning iptables and *nix security more thoroughly. Everything is well commented, so if anyone sees something I missed, please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons; layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine, but it's all or nothing. :) Many thanks, Peter H.

      *nat
      #Flush previous rules, chains and counters for the 'nat' table
      -F
      -X
      -Z
      #Redirect traffic to alternate internal ports
      -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
      -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
      -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
      -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
      COMMIT

      *filter
      #Flush previous settings, chains and counters for the 'filter' table
      -F
      -X
      -Z
      #Set default behavior for all connections and protocols
      -P INPUT DROP
      -P OUTPUT DROP
      -A FORWARD -j DROP
      #Only accept loopback traffic originating from the local NIC
      -A INPUT -i lo -j ACCEPT
      -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
      #Accept all outgoing non-fragmented traffic having a valid state
      -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
      #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
      -A INPUT -f -j DROP
      #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
      -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
      #Declaration of custom chains
      -N INSPECT_TCP_FLAGS
      -N INSPECT_STATE
      -N INSPECT
      #Drop incoming tcp connections with invalid tcp-flags
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
      #Accept incoming traffic having either an established or related state
      -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
      #Drop new incoming tcp connections if they aren't SYN packets
      -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
      #Drop incoming traffic with invalid states
      -A INSPECT_STATE -m state --state INVALID -j DROP
      #INSPECT chain definition
      -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
      -A INSPECT -j INSPECT_STATE
      #Route incoming traffic through the INSPECT chain
      -A INPUT -j INSPECT
      #Accept redirected HTTP traffic via HA reverse proxy
      -A INPUT -p tcp --dport 8080 -j ACCEPT
      #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destine for other services)
      -A INPUT -p tcp --dport 8443 -j ACCEPT
      #Accept redirected DNS traffic for NSD authoritative nameserver
      -A INPUT -p udp --dport 8053 -j ACCEPT
      #Accept redirected SSH traffic for OpenSSH server
      #Temp solution:
      -A INPUT -p tcp --dport 8022 -j ACCEPT
      #Ideal solution:
      #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
      #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
      #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
      COMMIT

      *mangle
      #Flush previous rules, chains and counters in the 'mangle' table
      -F
      -X
      -Z
      COMMIT
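    On the two commented-out "ideal solution" lines: --state is an option of the state module, so those rules need -m state to parse (as used elsewhere in the file), and the counting rule needs a --name to share its list with the update rule. A sketch of a working form, keeping the original port and thresholds, in the same iptables-restore format:

      #Track each source that opens a new connection to the SSH redirect port
      -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --set --name SSH
      #Drop a source that opens more than 10 new connections in 600 seconds
      -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --update --seconds 600 --hitcount 11 --name SSH -j DROP
      #Accept whatever survives the rate limit
      -A INPUT -p tcp --dport 8022 -j ACCEPT

    Since INSPECT_STATE already accepts ESTABLISHED/RELATED traffic, only NEW connections need counting here, which leaves established sessions effectively unlimited, as the comment in the original asks for.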

  • cannot send mail to postfix /w iptables linux proxy

    - by Juzzam
    I have two separate servers, both running Ubuntu 8.04. Server 1 has the real domain name of our site; let's refer to it as example.com. Server 2 is a mail server I have set up with postfix/courier. The hostname for this server is mail.example.com. I've set up iptables on Server 1 to forward all traffic on port 25 to Server 2. I used this script (except I changed the target IP address and the port from 80 to 25). When I send an email to [email protected] it works. However, when I try to send an email to [email protected] from Gmail, I get this error:

      550 550 #5.1.0 Address rejected [email protected] (state 14)

    /var/log/mail.log shows no new lines when this happens. What is strange is that it works with telnet from my local machine. For example:

      $ telnet example.com 25
      220 VO13421.localdomain SMTP Postfix
      EHLO example.com
      250-VO13421.localdomain
      250-PIPELINING
      250-SIZE 10240000
      250-ETRN
      250-STARTTLS
      250-ENHANCEDSTATUSCODES
      250-8BITMIME
      250 DSN
      MAIL FROM: [email protected]
      250 2.1.0 Ok
      RCPT TO: [email protected]
      250 2.1.5 Ok
      data
      354 Please start mail input.
      hello user... how have you been?
      .
      250 Mail queued for delivery.
      quit
      221 Closing connection. Good bye.

    /var/log/mail.log shows success (and the email goes to the maildir):

      Feb 24 09:47:36 VO13421 postfix/smtpd[2212]: connect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx]
      Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored
      Feb 24 09:48:01 VO13421 postfix/smtpd[2212]: 65C68120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx]
      Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: warning: restriction `smtpd_data_restrictions' after `permit' is ignored
      Feb 24 09:48:29 VO13421 postfix/smtpd[2212]: 6BDFA120321: client=81.208.68.208.static.dnsptr.net[208.68.xxx.xxx]
      Feb 24 09:48:29 VO13421 postfix/cleanup[2216]: 6BDFA120321: message-id=
      Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: from=, size=395, nrcpt=1 (queue active)
      Feb 24 09:48:29 VO13421 postfix/virtual[2217]: 6BDFA120321: to=, relay=virtual, delay=0.28, delays=0.25/0.02/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
      Feb 24 09:48:29 VO13421 postfix/qmgr[2042]: 6BDFA120321: removed
      Feb 24 09:48:30 VO13421 postfix/smtpd[2212]: disconnect from 81.208.68.208.static.dnsptr.net[208.68.xxx.xxx]

    iptables -L -n -v --line on example.com yields the following. Anyone know an iptables command to see the port forwarding? Also, it seems to accept all traffic; that's probably bad, right?

      Chain INPUT (policy ACCEPT ...)
      num  pkts   bytes  target  prot opt in  out  source     destination
      1    14041  1023K  ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0

      Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
      num  pkts   bytes  target  prot opt in  out  source     destination
      1    338    20722  ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0

      Chain OUTPUT (policy ACCEPT 419K packets, 425M bytes)
      num  pkts   bytes  target  prot opt in  out  source     destination
      1    13711  2824K  ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0

    postconf -n results in:

      alias_database = hash:/etc/postfix/aliases
      alias_maps = hash:/etc/postfix/aliases
      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      delay_warning_time = 4h
      disable_vrfy_command = yes
      inet_interfaces = all
      local_recipient_maps =
      mailbox_size_limit = 0
      masquerade_domains = mail.example.com mail1.example.com
      masquerade_exceptions = root
      maximal_backoff_time = 8000s
      maximal_queue_lifetime = 7d
      minimal_backoff_time = 1000s
      mydestination =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mynetworks_style = host
      myorigin = example.com
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_helo_timeout = 60s
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname SMTP $mail_name
      smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
      smtpd_delay_reject = yes
      smtpd_hard_error_limit = 12
      smtpd_helo_required = yes
      smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
      smtpd_recipient_limit = 16
      smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
      smtpd_data_restrictions = reject_unauth_pipelining
      smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
      smtpd_soft_error_limit = 3
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
      unknown_local_recipient_reject_code = 450
      virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
      virtual_gid_maps = mysql:/etc/postfix/mysql_gid.cf
      virtual_mailbox_base = /var/spool/mail/virtual
      virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
      virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
      virtual_uid_maps = mysql:/etc/postfix/mysql_uid.cf
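    On the port-forwarding question: iptables -L only lists the filter table, and REDIRECT/DNAT rules live in the nat table, which is why the forwarding rule is invisible in the output above. Listing the nat table explicitly (standard iptables usage, nothing specific to this setup) shows it:

      # show the nat table, with rule numbers and counters
      sudo iptables -t nat -L -n -v --line-numbers

    The PREROUTING chain there should contain the port 25 DNAT or REDIRECT entry added by the forwarding script.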

  • MvcExtensions – Bootstrapping

    - by kazimanzurrashid
    When you create a new ASP.NET MVC application you will find that the global.asax contains the following lines: namespace MvcApplication1 { // Note: For instructions on enabling IIS6 or IIS7 classic mode, // visit http://go.microsoft.com/?LinkId=9394801 public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } } } As the application grows, there are quite a lot of plumbing code gets into the global.asax which quickly becomes a design smell. Lets take a quick look at the code of one of the open source project that I recently visited: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute("Default","{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" }); } protected override void OnApplicationStarted() { Error += OnError; EndRequest += OnEndRequest; var settings = new SparkSettings() .AddNamespace("System") .AddNamespace("System.Collections.Generic") .AddNamespace("System.Web.Mvc") .AddNamespace("System.Web.Mvc.Html") .AddNamespace("MvcContrib.FluentHtml") .AddNamespace("********") .AddNamespace("********.Web") .SetPageBaseType("ApplicationViewPage") .SetAutomaticEncoding(true); #if DEBUG settings.SetDebug(true); #endif var viewFactory = new SparkViewFactory(settings); ViewEngines.Engines.Add(viewFactory); #if !DEBUG PrecompileViews(viewFactory); #endif RegisterAllControllersIn("********.Web"); log4net.Config.XmlConfigurator.Configure(); RegisterRoutes(RouteTable.Routes); Factory.Load(new Components.WebDependencies()); ModelBinders.Binders.DefaultBinder = new Binders.GenericBinderResolver(Factory.TryGet<IModelBinder>); ValidatorConfiguration.Initialize("********"); HtmlValidationExtensions.Initialize(ValidatorConfiguration.Rules); } private void OnEndRequest(object sender, System.EventArgs e) { if (((HttpApplication)sender).Context.Handler is MvcHandler) { CreateKernel().Get<ISessionSource>().Close(); } } private void OnError(object sender, System.EventArgs e) { CreateKernel().Get<ISessionSource>().Close(); } protected override IKernel CreateKernel() { return Factory.Kernel; } private static void PrecompileViews(SparkViewFactory viewFactory) { var batch = new SparkBatchDescriptor(); batch.For<HomeController>().For<ManageController>(); viewFactory.Precompile(batch); } As you can see there are quite a few of things going on in the above code, Registering the ViewEngine, Compiling the Views, Registering the Routes/Controllers/Model Binders, Settings up Logger, Validations and as you can imagine the more it becomes complex the more things will get added in the application start. One of the goal of the MVCExtensions is to reduce the above design smell. Instead of writing all the plumbing code in the application start, it contains BootstrapperTask to register individual services. 
Out of the box, it contains BootstrapperTask to register Controllers, Controller Factory, Action Invoker, Action Filters, Model Binders, Model Metadata/Validation Providers, ValueProvideraFactory, ViewEngines etc and it is intelligent enough to automatically detect the above types and register into the ASP.NET MVC Framework. Other than the built-in tasks you can create your own custom task which will be automatically executed when the application starts. When the BootstrapperTasks are in action you will find the global.asax pretty much clean like the following: public class MvcApplication : UnityMvcApplication { public void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e) { Check.Argument.IsNotNull(e, "e"); HttpException exception = e.Exception.GetBaseException() as HttpException; if ((exception != null) && (exception.GetHttpCode() == (int)HttpStatusCode.NotFound)) { e.Dismiss(); } } } The above code is taken from my another open source project Shrinkr, as you can see the global.asax is longer cluttered with any plumbing code. One special thing you have noticed that it is inherited from the UnityMvcApplication rather than regular HttpApplication. There are separate version of this class for each IoC Container like NinjectMvcApplication, StructureMapMvcApplication etc. Other than executing the built-in tasks, the Shrinkr also has few custom tasks which gets executed when the application starts. For example, when the application starts, we want to ensure that the default users (which is specified in the web.config) are created. The following is the custom task that is used to create those default users: public class CreateDefaultUsers : BootstrapperTask { protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator) { IUserRepository userRepository = serviceLocator.GetInstance<IUserRepository>(); IUnitOfWork unitOfWork = serviceLocator.GetInstance<IUnitOfWork>(); IEnumerable<User> users = serviceLocator.GetInstance<Settings>().DefaultUsers; bool shouldCommit = false; foreach (User user in users) { if (userRepository.GetByName(user.Name) == null) { user.AllowApiAccess(ApiSetting.InfiniteLimit); userRepository.Add(user); shouldCommit = true; } } if (shouldCommit) { unitOfWork.Commit(); } return TaskContinuation.Continue; } } There are several other Tasks in the Shrinkr that we are also using which you will find in that project. To create a custom bootstrapping task you have create a new class which either implements the IBootstrapperTask interface or inherits from the abstract BootstrapperTask class, I would recommend to start with the BootstrapperTask as it already has the required code that you have to write in case if you choose the IBootstrapperTask interface. As you can see in the above code we are overriding the ExecuteCore to create the default users, the MVCExtensions is responsible for populating the  ServiceLocator prior calling this method and in this method we are using the service locator to get the dependencies that are required to create the users (I will cover the custom dependencies registration in the next post). Once the users are created, we are returning a special enum, TaskContinuation as the return value, the TaskContinuation can have three values Continue (default), Skip and Break. The reason behind of having this enum is, in some  special cases you might want to skip the next task in the chain or break the complete chain depending upon the currently running task, in those cases you will use the other two values instead of the Continue. 
    The last thing I want to cover in the bootstrapping task is the Order. By default, the order of all the built-in tasks, as well as of newly created tasks, is set to DefaultOrder (a static property). In some special cases you might want a task to execute before or after all the other tasks; in those cases you assign the Order in the task's constructor. For example, in Shrinkr we want to run a few background services once all the other tasks have executed, so we assigned the order as DefaultOrder + 1. Here is the code of that task:

      public class ConfigureBackgroundServices : BootstrapperTask
      {
          private IEnumerable<IBackgroundService> backgroundServices;

          public ConfigureBackgroundServices()
          {
              Order = DefaultOrder + 1;
          }

          protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator)
          {
              backgroundServices = serviceLocator.GetAllInstances<IBackgroundService>().ToList();
              backgroundServices.Each(service => service.Start());
              return TaskContinuation.Continue;
          }

          protected override void DisposeCore()
          {
              backgroundServices.Each(service => service.Stop());
          }
      }

    That's it for today. In the next post I will cover the custom service registration, so stay tuned.
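    Since the examples in the post only ever return TaskContinuation.Continue, here is a hedged sketch of how the other two values could be used to short-circuit the chain (the Settings type exists in Shrinkr, but the IsTrialMode and LicenseExpired flags are invented for illustration):

      public class VerifyLicense : BootstrapperTask
      {
          protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator)
          {
              Settings settings = serviceLocator.GetInstance<Settings>();

              // Skip: ignore the next task in the chain, keep bootstrapping.
              if (settings.IsTrialMode)
              {
                  return TaskContinuation.Skip;
              }

              // Break: abandon all remaining tasks in the chain.
              if (settings.LicenseExpired)
              {
                  return TaskContinuation.Break;
              }

              return TaskContinuation.Continue;
          }
      }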

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source free helper class that lets you run multiple work in parallel threads, get success, failure and progress update on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after certain time, chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work, nor do you need to declare event handlers to receive notification and carry additional data through private variables. You can safely pass objects produced from different thread to the success callback. Moreover, you can wait for work to complete before you do certain operation and you can abort all parallel work while they are in-flight. If you are building highly responsive WPF UI where you have to carry out multiple job in parallel yet want full control over those parallel jobs completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests, that confirms it really does what it says it does. The source code of the library is part of the “Utilities” project in PlantUmlEditor source code hosted at Google Code. The library comes in two flavors, one is the ParallelWork static class, which has a collection of static methods that you can call. Another is the Start class, which is a fluent wrapper over the ParallelWork class to make it more readable and aesthetically pleasing code. ParallelWork allows you to start work immediately on separate thread or you can queue a work to start after some duration. You can start an immediate work in a new thread using the following methods: void StartNow(Action doWork, Action onComplete) void StartNow(Action doWork, Action onComplete, Action<Exception> failed) For example, ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; }); Or you can use the fluent way Start.Work: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run(); Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads: void StartNow<T>(Func<T> doWork, Action<T> onComplete) void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail) For example, ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); }); Or, the fluent way: Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run(); You can also start a work to happen after some time using these methods: DispatcherTimer StartAfter(Action onComplete, TimeSpan duration) DispatcherTimer StartAfter(Action doWork,Action onComplete,TimeSpan duration) You can use this to perform some timed operation on the UI thread, as well as perform some operation in separate thread after some time. 
ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration); Or, the fluent way: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration);   There are several overloads of these functions to have a exception callback for handling exceptions or get progress update from background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform background update of the application. // Check if there's a newer version of the app Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); }); The above code shows you how to get exception callbacks on the UI thread so that you can take necessary actions on the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when user does some activity on the UI. For example, you might want to save file in an editor while user is typing every 10 second. In such case, you need to make sure you don’t start another parallel work every 10 seconds while a work is already queued. You need to make sure you start a new work only when there’s no other background work going on. Here’s how you can do it: private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } } If you want to shutdown your application and want to make sure no parallel work is going on, then you can call the StopAll() method. ParallelWork.StopAll(); If you want to wait for parallel works to complete without a timeout, then you can call the WaitForAllWork(TimeSpan timeout). It will block the current thread until the all parallel work completes or the timeout period elapses. result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)); The result is true, if all parallel work completed. If it’s false, then the timeout period elapsed and all parallel work did not complete. For details how this library is built and how it works, please read the following codeproject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.
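    Tying the shutdown pieces together, a hedged sketch (the window class and override are illustrative, not from the library's docs) of draining parallel work when a WPF window closes, using the StopAll and WaitForAllWork methods described above:

      protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
      {
          // Ask all in-flight parallel work to stop...
          ParallelWork.StopAll();

          // ...then give it a moment to wind down before the process exits.
          if (!ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)))
          {
              // Timed out: some work was aborted rather than completed.
          }

          base.OnClosing(e);
      }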

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start on Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, I react generally saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data and for few it is bad data which is same as no data :). Wow, don’t discount me as someone who opposes “Big Data”, I am a big supporter as much as I am a critic of the abuse of this term by the people. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift how I am going to question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like blank whiteboard with no idea – “why Big Data”. They want to jump into the bandwagon of technology and they want to decipher insights from their unexplored data a.k.a. unstructured data with structured data. So what are these industry scenario’s that come to mind? Here are some of them: Financials Fraud detection: Banks and Credit cards are monitoring your spending habits on real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others where they deal with end customer who consume their products and services. Customer Sentiment Analysis: Responding to negative brand perception on social or amplify the positive perception. Sales and Marketing Campaign: Understand the impact and get closer to customer delight. Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment. Medical Reduce Re-admissions: How to build a proactive follow-up engagements with patients. Patient Monitoring: How to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: Disease identification and Risk stratification is a very crucial business function for medical. Claims fraud detection: There is no precise dollars that one can put here, but this is a big thing for the medical field. Retail Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: Every sensors and RFID data can be tracked for warehouse space optimization. Location based marketing: Based on where a check-in happens retail stores can be optimize their marketing. Telecom Price optimization and Plans, Finding Customer churn, Customer loyalty programs Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis Customer Behavior Analysis Insurance Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management Agents Analysis, Customer Value Management This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. These are just representative list. Where to start? 
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need the way you want it and in a timely manner? Can you get in and analyze the data you need? How quickly is IT to respond to your BI Requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers or is it always “rear-view mirror?” How are you measuring: The Brand Customer Sentiment Your Competition Your Pricing Your performance Supply Chain Efficiencies Predictive product / service positioning What are your key challenges of driving collaboration across your global business?  What the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: Garbage-in is Garbage-out. Hold good for all reporting / analytics requirements Big Data POCs? A number of customers get into the realm of setting a small team to work on Big Data – well it is a great start from an understanding point of view, but I tend to ask a number of other questions to such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data’s efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing Advanced Analytics? How is Social Media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data being given out in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer Geo-location data? How accessible are complex analytic routines to your user base? 
    What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. They give you a sense of direction: where to start, what to use, how to secure, how to analyze, and more.

    Sign off: Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder videos (2:17 to 6:06) about the third-world myths. Don't get overwhelmed by the Big Data buzzword; the destination, what your data actually says, is what matters. In this blog post we did not look at any particular Big Data technologies; this is a set of questions to keep in mind as you embark on your Big Data journey. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service (.NET, jQuery, PHP), and this set of posts will cover setting up the service and the consumers.

    Part 1: Exposing the on-premise service

    In theory, this is ultra-straightforward. In practice, on a dev laptop it is, but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal, which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

    Configuring Azure Service Bus

    Start by logging into the Azure portal and registering a Service Bus namespace, which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

    Authenticating and authorizing to ACS

    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post, On Not Using owner with the Azure AppFabric Service Bus, with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities, and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year, but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly.

    The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS: we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, the output will have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group. Edit the rule group and click Add to add this new rule. The values to use are:

      Issuer: Access Control Service
      Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
      Input claim value: serviceProvider
      Output claim type: net.windows.servicebus.action
      Output claim value: Listen

    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

      <azure namespace="sixeyed-ipasbr">
        <!-- ACS credentials for the listening service (Part1):-->
        <service identityName="serviceProvider"
                 symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
      </azure>

    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

      <behavior name="SharedSecret">
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="serviceProvider"
                          issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>

    ... and your service namespace in the Azure endpoint:

      <!-- Azure Service Bus endpoints -->
      <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                binding="netTcpRelayBinding"
                contract="Sixeyed.Ipasbr.Services.IFormatService"
                behaviorConfiguration="SharedSecret">
      </endpoint>

    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev, just navigate to the local REST URL, which will activate the service and register it with Azure.

    Testing the service locally

    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

      <!-- local REST endpoint for internal use -->
      <endpoint address="rest"
                binding="webHttpBinding"
                behaviorConfiguration="RESTBehavior"
                contract="Sixeyed.Ipasbr.Services.IFormatService" />

    Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.

    Setting up network access to Azure

    But if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, the relay will run through them. If not, the binding steps down to standard HTTP and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay. If your network security guys are doing their job, the first option will be blocked by the firewall and the second option will be blocked by the proxy, so you'll get this error:

      System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

    ... and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

      System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

    This means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

      System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

    By this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

      www.public-trust.com
      mscrl.microsoft.com

    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

      netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

    ... and you might need to run this to clear out your credential cache:

      certutil -urlcache * delete

    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier; see Windows Azure Datacenter IP Ranges. After all that, you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
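    As a companion to the config above, a rough sketch of the same relay registration done in code rather than Web.config (types are from the Microsoft.ServiceBus assembly; the TokenProvider call assumes a 1.5-or-later SDK, while earlier SDKs set SharedSecret credentials on the behavior directly, as the generated config does):

      // Build the relay address: sb://sixeyed-ipasbr.servicebus.windows.net/net
      var address = ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net");

      var host = new ServiceHost(typeof(FormatService));
      var endpoint = host.AddServiceEndpoint(
          typeof(IFormatService), new NetTcpRelayBinding(), address);

      // Authenticate to ACS as the dedicated listener identity
      endpoint.Behaviors.Add(new TransportClientEndpointBehavior
      {
          TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
              "serviceProvider", "<symmetric key from the ACS service identity>")
      });

      host.Open(); // the service is now registered and listening on the relay

    Self-hosting like this can be handy for diagnosing the firewall and proxy issues below, since it takes IIS activation out of the picture.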

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
Lately I've run into situations where I had to change elements or request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS at runtime. Silverlight can access the DOM, but it isn't as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.

Part 1: The easy stuff

Selecting and changing properties is pretty straightforward. Setting the text color of all <B></B> elements can be done using the following code:

jQuery.Select("b").Css("color", "red");

The Css() method is an extension method on jQueryObject, which is returned by the jQuery.Select() method. The Css() method takes two parameters. The first is the CSS style property; any property used in CSS can be entered in this string. The second parameter is the value you want to give the property. In this case the property is "color" and it is changed to "red". To specify which element you want to select you can add a :selector parameter to the Select() method, as shown in the next example.

jQuery.Select("b:first").Css("font-family", "sans-serif");

The ":first" pseudo-class selector selects only the first element. This example changes the "font-family" property of the first <B></B> element to "sans-serif". To make use of IntelliSense in Visual Studio I've added extension methods to help with the pseudo-classes. In the example below the "font-weight" of every even <LI></LI> is set to "bold".

jQuery.Select("li".Even()).Css("font-weight", "bold");

Because the Css() extension method returns a jQueryObject it is possible to chain calls to Css(). The following example shows setting the "color", "background-color" and the "font-size" of all headers in one go.

jQuery.Select(":header").Css("color", "#12FF70")
      .Css("background-color", "yellow")
      .Css("font-size", "25px");

Part 2: More complex stuff

Only in a few cases do you need to change just one style property. More often you want to change an entire set of style properties in one go. You could chain a lot of Css() methods together, but a better way is to add a class to a stylesheet and define all the properties there. With the AddClass() method you can apply a style class to a set of elements. This example shows how to add the "demostyle" class to all <B></B> elements in the document:

jQuery.Select("b").AddClass("demostyle");

Removing the class works in the same way:

jQuery.Select("b").RemoveClass("demostyle");

jLight is built for interacting with the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. The 60-plus most common CSS style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties (CssObject1["background-color"] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.
private jQueryObjectCss CssObject1;
private jQueryObjectCss CssObject2;

public Demo2()
{
    CssObject1 = new jQueryObjectCss
    {
        BackgroundColor = "Lime",
        Color = "Black",
        FontSize = "12pt",
        FontFamily = "sans-serif",
        FontWeight = "bold",
        MarginLeft = 150,
        LineHeight = "28px",
        Border = "Solid 1px #880000"
    };
    CssObject2 = new jQueryObjectCss
    {
        FontStyle = "Italic",
        FontSize = "48",
        Color = "#225522"
    };
    InitializeComponent();
}

Now, instead of chaining to set all the different properties, you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object:

jQuery.Select("li").Css(CssObject1);

When using jQueryObjectCss objects chaining is still possible. In the following example all headers are given a blue background color and the last one is set to match CssObject2.

jQuery.Select(":header").Css(new jQueryObjectCss { BackgroundColor = "Blue" })
      .Eq(-1).Css(CssObject2);

Part 3: The fun stuff

Having Silverlight call JavaScript and then having JavaScript call Silverlight requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, setting a style class based on a complex expression for example. jLight can do the same, but the callback methods are defined in Silverlight. This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation: jQuery uses only the index and the className parameters. A jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the list item starts with a 'D' or an 'M' the class is set. Otherwise null is returned and nothing happens.

private void button1_Click(object sender, RoutedEventArgs e)
{
    jQuery.Select("li").AddClass(function);
}

private string function(jQueryObject obj, int index, string className)
{
    if (obj.Text[0] == 'D' || obj.Text[0] == 'M')
        return "demostyle";
    return null;
}

The last thing I would like to demonstrate uses even more Silverlight and less jLight, but it demonstrates the power of the combination: animating a style property using a Storyboard with easing functions. First a dependency property is defined, in this case a double named Intensity. By handling the changed event the color is set using jQuery.

public double Intensity
{
    get { return (double)GetValue(IntensityProperty); }
    set { SetValue(IntensityProperty, value); }
}

public static readonly DependencyProperty IntensityProperty =
    DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3),
                                new PropertyMetadata(0.0, IntensityChanged));

private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var i = (byte)(double)e.NewValue;
    jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i));
}

An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.
private Storyboard CreateAnimation(double value)
{
    Storyboard storyboard = new Storyboard();
    var da = new DoubleAnimationUsingKeyFrames();
    var d = new EasingDoubleKeyFrame
    {
        EasingFunction = new BounceEase(),
        KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)),
        Value = value
    };
    da.KeyFrames.Add(d);
    Storyboard.SetTarget(da, this);
    Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty));
    storyboard.Children.Add(da);
    return storyboard;
}

Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one to animate to black, and one to animate to white.

public Demo3()
{
    InitializeComponent();
    Intensity = 128;
}

private void button2_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(255).Begin();
}

private void button3_Click(object sender, RoutedEventArgs e)
{
    CreateAnimation(0).Begin();
}

Conclusion

As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I've tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You

Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean (1) the integration of all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization, of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers).

The right PLM solution should help you:

Increase Revenue. A best practice PLM solution should help a company grow its revenues by compressing the product development cycle time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment.

Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKUs, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost-cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end-of-life administration, higher cost of quality and regulatory compliance, and increased expediting.

Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry that safeguards product information and processes while still allowing the art of innovation to flourish. Many CP companies have already created a winning advantage by leveraging a single, best practice PLM solution to establish an enterprise-wide, systematic approach to innovation.

Oracle's Answer for the Consumer Products Industry

Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle's Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance, to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle's Agile PLM for Process.
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smuckers, Land-O-Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonalds and Heinz) have all realized the benefits of Agile PLM for Process and its integration with their ERP systems.

More to Come

Stay tuned for more articles in our blog series "The Business of Winning Innovation." While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management, Private Label Management, and more! Watch this video for more info about Agile PLM for Process.

    Read the article

  • How to OpenSSL decrypt smime.p7m

    - by tntu
I have received an email that has no content, just a file called smime.p7m attached. I was looking into OpenSSL and its smime module but I cannot figure out exactly how to use it. I must be doing something wrong. I extracted the certificate chain from the p7m file:

# openssl pkcs7 -inform DER -in smime.p7m -out pkcs7.pem
# openssl pkcs7 -in pkcs7.pem -print_certs -out certs.pem

Then I tried to decrypt:

# openssl smime -decrypt -in smime.p7m -signer certs.pem -out smime.eml
No recipient certificate or key specified

And also with my server's SSL cert:

# openssl smime -decrypt -in smime.p7m -recip server.nopass.key.crt.ca.pem -out smime.eml
Error reading S/MIME message
140078540371784:error:0D0D40D1:asn1 encoding routines:SMIME_read_ASN1:no content type:asn_mime.c:447:

Can anyone shed some light on what steps I need to take to extract the email?
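For comparison, the decrypt form OpenSSL expects pairs the recipient's certificate with its matching private key; the certs extracted above are the sender's chain, not a decryption key. A sketch, assuming recipient.crt and recipient.key belong to the address the mail was encrypted to:

# openssl smime -decrypt -inform DER -in smime.p7m -recip recipient.crt -inkey recipient.key -out smime.eml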

    Read the article

  • Problems with self-signed SSL certificate for SSTP in Windows Server Foundation 2008

    - by John Barton
    I am trying to configure SSTP in Windows Server Foundation 2008. I want to use a self-signed SSL certificate to do authentication. When the server is running, I get the following error when trying to connect: 0x800B0109: A certificate chain processed, but terminated in a root certificate that is not trusted by the trust provider. I created the self-signed certificate in the IIS "Server Certificates" panel. From that panel, I exported the certificate, with the private key, to a .pfx file. I installed this certificate on the client computer which I tried to connect from. The certificate bound to the SSL listener in the RRAS-Security panel is present in the Trusted Root Certificate Authority stores on both machines. I've been getting super annoyed setting up certificates. Any advice here?
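One thing worth double-checking: 0x800B0109 only says the client cannot chain the certificate to a trusted root. A sketch for verifying the import really landed in the machine-wide (not per-user) Root store, with an illustrative .cer file name:

certutil -addstore -f Root mycert.cer
certutil -verify -urlfetch mycert.cer

Note that SSTP also performs a revocation check, which a self-signed certificate cannot satisfy; the documented client-side workaround is the NoCertRevocationCheck value under HKLM\System\CurrentControlSet\Services\Sstpsvc\Parameters (test environments only).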

    Read the article

  • Why can blocked IPs get through my iptables? What's wrong with this configuration?

    - by NeedSomeHelp
    (Why can/How are) blocked IPs (get/getting) through my iptables? Hello and thanks for your consideration... I have configured iptables and included (below) output from the command "iptables --line-numbers -n -L" yet IP addresses (like 31.41.219.180) from IP blocks I have already blocked are getting through. Please take a look and share any input you may have. Thank you. P.S. The initial ACCEPT IP addresses are for CloudFlare. . Chain INPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 32267 14M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 149 8570 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT udp -- * * 103.21.244.0/22 0.0.0.0/0 6 0 0 ACCEPT udp -- * * 103.22.200.0/22 0.0.0.0/0 7 0 0 ACCEPT udp -- * * 103.31.4.0/22 0.0.0.0/0 8 0 0 ACCEPT udp -- * * 104.16.0.0/12 0.0.0.0/0 9 0 0 ACCEPT udp -- * * 108.162.192.0/18 0.0.0.0/0 10 0 0 ACCEPT udp -- * * 141.101.64.0/18 0.0.0.0/0 11 0 0 ACCEPT udp -- * * 162.158.0.0/15 0.0.0.0/0 12 0 0 ACCEPT udp -- * * 173.245.48.0/20 0.0.0.0/0 13 0 0 ACCEPT udp -- * * 188.114.96.0/20 0.0.0.0/0 14 0 0 ACCEPT udp -- * * 190.93.240.0/20 0.0.0.0/0 15 0 0 ACCEPT udp -- * * 197.234.240.0/22 0.0.0.0/0 16 0 0 ACCEPT udp -- * * 198.41.128.0/17 0.0.0.0/0 17 0 0 ACCEPT udp -- * * 199.27.128.0/21 0.0.0.0/0 18 0 0 ACCEPT tcp -- * * 103.21.244.0/22 0.0.0.0/0 19 9 468 ACCEPT tcp -- * * 103.22.200.0/22 0.0.0.0/0 20 0 0 ACCEPT tcp -- * * 103.31.4.0/22 0.0.0.0/0 21 0 0 ACCEPT tcp -- * * 104.16.0.0/12 0.0.0.0/0 22 858 44616 ACCEPT tcp -- * * 108.162.192.0/18 0.0.0.0/0 23 376 19552 ACCEPT tcp -- * * 141.101.64.0/18 0.0.0.0/0 24 0 0 ACCEPT tcp -- * * 162.158.0.0/15 0.0.0.0/0 25 257 13364 ACCEPT tcp -- * * 173.245.48.0/20 0.0.0.0/0 26 0 0 ACCEPT tcp -- * * 188.114.96.0/20 0.0.0.0/0 27 0 0 ACCEPT tcp -- * * 190.93.240.0/20 0.0.0.0/0 28 0 0 ACCEPT tcp -- * * 197.234.240.0/22 0.0.0.0/0 29 0 0 ACCEPT tcp -- * * 198.41.128.0/17 0.0.0.0/0 30 92 4784 ACCEPT tcp -- * * 199.27.128.0/21 0.0.0.0/0 31 0 0 DROP tcp -- * * 1.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 32 0 0 DROP tcp -- * * 101.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 33 0 0 DROP tcp -- * * 102.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 34 0 0 DROP tcp -- * * 103.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 35 18 1080 DROP tcp -- * * 109.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 36 0 0 DROP tcp -- * * 112.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 37 12 656 DROP tcp -- * * 113.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 38 0 0 DROP tcp -- * * 114.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 39 0 0 DROP tcp -- * * 115.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 40 8 352 DROP tcp -- * * 116.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 41 0 0 DROP tcp -- * * 117.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 42 0 0 DROP tcp -- * * 118.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 43 2 120 DROP tcp -- * * 119.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 44 0 0 DROP tcp -- * * 120.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 45 0 0 DROP tcp -- * * 121.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 46 4 160 DROP tcp -- * * 122.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 47 4 240 DROP tcp -- * * 123.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 48 0 0 DROP tcp -- * * 125.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 49 0 0 DROP tcp -- * * 134.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 50 0 0 DROP tcp -- * * 146.185.0.0/16 0.0.0.0/0 tcp dpts:1:50000 51 6 360 DROP tcp -- * * 148.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 52 0 0 DROP tcp -- * * 151.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 53 0 0 
DROP tcp -- * * 175.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 54 0 0 DROP tcp -- * * 176.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 55 0 0 DROP tcp -- * * 177.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 56 46 2696 DROP tcp -- * * 178.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 57 0 0 DROP tcp -- * * 179.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 58 4 224 DROP tcp -- * * 180.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 59 0 0 DROP tcp -- * * 181.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 60 0 0 DROP tcp -- * * 182.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 61 34 2040 DROP tcp -- * * 183.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 62 0 0 DROP tcp -- * * 185.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 63 0 0 DROP tcp -- * * 186.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 64 0 0 DROP tcp -- * * 187.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 65 18 912 DROP tcp -- * * 188.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 66 0 0 DROP tcp -- * * 189.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 67 0 0 DROP tcp -- * * 190.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 68 2 120 DROP tcp -- * * 192.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 69 0 0 DROP tcp -- * * 196.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 70 0 0 DROP tcp -- * * 197.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 71 5 300 DROP tcp -- * * 198.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 72 0 0 DROP tcp -- * * 2.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 73 0 0 DROP tcp -- * * 200.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 74 0 0 DROP tcp -- * * 201.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 75 6 360 DROP tcp -- * * 202.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 76 0 0 DROP tcp -- * * 203.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 77 4 160 DROP tcp -- * * 210.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 78 0 0 DROP tcp -- * * 211.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 79 2 96 DROP tcp -- * * 212.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 80 4 240 DROP tcp -- * * 213.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 81 0 0 DROP tcp -- * * 214.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 82 0 0 DROP tcp -- * * 215.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 83 0 0 DROP tcp -- * * 216.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 84 0 0 DROP tcp -- * * 217.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 85 4 172 DROP tcp -- * * 218.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 86 12 576 DROP tcp -- * * 219.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 87 7 372 DROP tcp -- * * 220.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 88 0 0 DROP tcp -- * * 222.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 89 0 0 DROP tcp -- * * 27.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 90 12 608 DROP tcp -- * * 31.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 91 11 528 DROP tcp -- * * 37.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 92 0 0 DROP tcp -- * * 41.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 93 0 0 DROP tcp -- * * 42.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 94 0 0 DROP tcp -- * * 43.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 95 8 480 DROP tcp -- * * 46.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 96 0 0 DROP tcp -- * * 49.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 97 6 360 DROP tcp -- * * 5.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 98 0 0 DROP tcp -- * * 58.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 99 0 0 DROP tcp -- * * 60.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 100 4 160 DROP tcp -- * * 61.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 101 32 1848 DROP tcp -- * * 62.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 102 0 0 DROP tcp -- * * 63.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 103 20 1200 DROP tcp -- * * 64.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 104 0 0 DROP tcp -- * * 65.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 105 266 15960 DROP tcp -- * * 66.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 106 3 180 DROP tcp -- * * 69.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 107 5 272 DROP tcp -- * * 72.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 108 0 0 DROP tcp -- * * 78.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 109 0 0 DROP tcp -- * * 81.0.0.0/8 
0.0.0.0/0 tcp dpts:1:50000 110 3 180 DROP tcp -- * * 82.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 111 0 0 DROP tcp -- * * 83.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 112 8 384 DROP tcp -- * * 84.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 113 0 0 DROP tcp -- * * 85.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 114 0 0 DROP tcp -- * * 86.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 115 6 360 DROP tcp -- * * 87.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 116 7 408 DROP tcp -- * * 88.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 117 0 0 DROP tcp -- * * 89.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 118 0 0 DROP tcp -- * * 90.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 119 0 0 DROP tcp -- * * 91.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 120 3 152 DROP tcp -- * * 92.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 121 20 992 DROP tcp -- * * 93.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 122 9 512 DROP tcp -- * * 94.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 123 5 272 DROP tcp -- * * 95.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 124 0 0 DROP udp -- * * 1.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 125 0 0 DROP udp -- * * 101.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 126 0 0 DROP udp -- * * 102.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 127 0 0 DROP udp -- * * 103.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 128 0 0 DROP udp -- * * 109.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 129 0 0 DROP udp -- * * 112.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 130 0 0 DROP udp -- * * 113.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 131 0 0 DROP udp -- * * 114.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 132 1 112 DROP udp -- * * 115.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 133 0 0 DROP udp -- * * 116.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 134 0 0 DROP udp -- * * 117.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 135 0 0 DROP udp -- * * 118.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 136 0 0 DROP udp -- * * 119.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 137 0 0 DROP udp -- * * 120.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 138 0 0 DROP udp -- * * 121.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 139 0 0 DROP udp -- * * 122.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 140 0 0 DROP udp -- * * 123.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 141 0 0 DROP udp -- * * 125.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 142 0 0 DROP udp -- * * 134.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 143 0 0 DROP udp -- * * 146.185.0.0/16 0.0.0.0/0 udp dpts:1:50000 144 0 0 DROP udp -- * * 148.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 145 0 0 DROP udp -- * * 151.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 146 0 0 DROP udp -- * * 175.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 147 0 0 DROP udp -- * * 176.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 148 1 70 DROP udp -- * * 177.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 149 0 0 DROP udp -- * * 178.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 150 0 0 DROP udp -- * * 179.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 151 0 0 DROP udp -- * * 180.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 152 0 0 DROP udp -- * * 181.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 153 0 0 DROP udp -- * * 182.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 154 0 0 DROP udp -- * * 183.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 155 0 0 DROP udp -- * * 185.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 156 1 74 DROP udp -- * * 186.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 157 0 0 DROP udp -- * * 187.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 158 0 0 DROP udp -- * * 188.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 159 0 0 DROP udp -- * * 189.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 160 0 0 DROP udp -- * * 190.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 161 0 0 DROP udp -- * * 192.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 162 0 0 DROP udp -- * * 196.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 163 0 0 DROP udp -- * * 197.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 164 0 0 DROP udp -- * * 198.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 165 0 0 DROP udp -- * * 2.0.0.0/8 0.0.0.0/0 udp 
dpts:1:50000 166 0 0 DROP udp -- * * 200.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 167 0 0 DROP udp -- * * 201.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 168 0 0 DROP udp -- * * 202.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 169 0 0 DROP udp -- * * 203.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 170 0 0 DROP udp -- * * 210.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 171 0 0 DROP udp -- * * 211.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 172 0 0 DROP udp -- * * 212.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 173 0 0 DROP udp -- * * 213.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 174 0 0 DROP udp -- * * 214.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 175 0 0 DROP udp -- * * 215.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 176 0 0 DROP udp -- * * 216.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 177 0 0 DROP udp -- * * 217.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 178 1 80 DROP udp -- * * 218.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 179 0 0 DROP udp -- * * 219.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 180 0 0 DROP udp -- * * 220.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 181 0 0 DROP udp -- * * 222.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 182 0 0 DROP udp -- * * 27.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 183 0 0 DROP udp -- * * 31.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 184 0 0 DROP udp -- * * 37.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 185 0 0 DROP udp -- * * 41.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 186 0 0 DROP udp -- * * 42.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 187 0 0 DROP udp -- * * 43.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 188 0 0 DROP udp -- * * 46.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 189 0 0 DROP udp -- * * 49.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 190 0 0 DROP udp -- * * 5.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 191 0 0 DROP udp -- * * 58.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 192 0 0 DROP udp -- * * 60.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 193 0 0 DROP udp -- * * 61.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 194 0 0 DROP udp -- * * 62.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 195 0 0 DROP udp -- * * 63.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 196 0 0 DROP udp -- * * 64.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 197 0 0 DROP udp -- * * 65.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 198 0 0 DROP udp -- * * 66.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 199 0 0 DROP udp -- * * 69.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 200 0 0 DROP udp -- * * 72.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 201 0 0 DROP udp -- * * 78.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 202 0 0 DROP udp -- * * 81.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 203 0 0 DROP udp -- * * 82.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 204 0 0 DROP udp -- * * 83.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 205 0 0 DROP udp -- * * 84.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 206 0 0 DROP udp -- * * 85.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 207 0 0 DROP udp -- * * 86.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 208 0 0 DROP udp -- * * 87.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 209 0 0 DROP udp -- * * 88.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 210 0 0 DROP udp -- * * 89.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 211 0 0 DROP udp -- * * 90.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 212 0 0 DROP udp -- * * 91.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 213 0 0 DROP udp -- * * 92.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 214 2 72 DROP udp -- * * 93.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 215 0 0 DROP udp -- * * 94.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 216 0 0 DROP udp -- * * 95.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 217 0 0 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:12443 218 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11443 219 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11444 220 23 1104 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8447 221 24 1152 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8443 222 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8880 
223 207 11096 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 224 19 996 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 225 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 226 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 227 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 228 4 216 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 229 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:465 230 14 840 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 231 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 232 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 233 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 234 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:106 235 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 236 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5432 237 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9008 238 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9080 239 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:137 240 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:138 241 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:139 242 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:445 243 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 244 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 245 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 246 73 4488 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 code 0 247 77 23598 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 0 0 ACCEPT all -- lo lo 0.0.0.0/0 0.0.0.0/0 5 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 31004 25M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 1 333 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 5 328 21324 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
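One detail that often explains this: iptables evaluates rules first-match, top to bottom, so a packet accepted by an earlier rule (the CloudFlare ACCEPTs, or rule 1's RELATED,ESTABLISHED) never reaches a later DROP. And if the site is published through CloudFlare, the source address the filter sees is CloudFlare's proxy, not the visitor's, so the /8 source blocks will never match that traffic; blocks for proxied visitors belong in CloudFlare itself or in the web server using the CF-Connecting-IP header. A sketch for confirming which rule a connection actually hits:

iptables -nvL INPUT --line-numbers   # the pkts counters show which rule is matching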

    Read the article

  • tmux combine multiple commands to one vi-copy command or tmux command to yank a line

    - by MIkhail
In tmux, I know we can chain multiple commands to a key by using \; (see here). But in vi mode, I want one single key press to go to the beginning of the current line, begin selection, go to the end of the line, and copy the selection. In tmux.conf, if I give the following:

bind-key -t vi-copy 's' start-of-line \; begin-selection \; end-of-line \; copy-selection \;

it gives me this error:

69: usage: bind-key [-cnr] [-t key-table] key command [arguments]

Alternatively, is there any other way to yank the current line with a single key?
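For what it's worth, the -t vi-copy table accepts exactly one mode command per key, which is why the \;-chained form is rejected. There is a single mode command that does the whole job - a sketch for .tmux.conf, using the pre-2.4 syntax from the question:

bind-key -t vi-copy 'y' copy-line

On tmux 2.4 and later the table is called copy-mode-vi and mode commands go through send-keys -X, e.g. bind-key -T copy-mode-vi 'y' send-keys -X copy-line.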

    Read the article

  • Help with Ms Access 2007 Combo boxes

    - by Yaaqov
What's the most efficient way to "chain" combo boxes in an Access 2007 form, so that the selection in the first affects the contents of the second? I already know how to associate a combo box on a form with a query. Here's an example of my scenario:

cmbCarMake
Behavior: User starts typing, and the list shows all manufacturers in a table starting with those characters (e.g., "Ford")

cmbCarModel
Behavior: Once cmbCarMake has a selected Make, this object will limit the possible models the user can search for by only displaying models from that one manufacturer (e.g., "F-150")

Thank you for any examples/links.

    Read the article

  • Receiving SSL certificate errors only from some clients

    - by Nico M
I am receiving SSL certificate errors from Chrome (latest version, 23.0.1271.52 beta-m) and Internet Explorer 6 (not used) on my home desktop machine (Windows XP SP2). In Firefox, it works fine on this PC. My laptop and work desktop (both Windows 7) work fine. Most SSL website checking sites report that the certificate and chain up to the root CA are set up correctly, but I have come across 2 that say I have an invalid certificate but don't give much information on what part is failing. I know it used to work properly on this desktop (in Chrome and IE) in the past, but I'm not sure what has changed that is causing the site to fail in these browsers. Can anyone provide any assistance? This is driving me nuts! Screenshot of error:
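One way to narrow this down is to compare what the server actually sends against what the failing machine trusts - a sketch with openssl, hostname assumed since the site isn't named:

openssl s_client -connect www.example.com:443 -showcerts

Firefox ships its own root store, while Chrome and IE on XP use the Windows certificate store, so an XP store that has missed its root-certificate updates (they arrive via Windows Update) could produce exactly this split between browsers on the same machine.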

    Read the article

  • Amazon EC2 - HTTPS - Certificate body is invalid. The body must not contain a private key

    - by Tam Minh
I'm very new to Amazon EC2. I am trying to set up https for my website, following the official instructions from the Amazon docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html

When I upload a signed certificate using the AWS command

aws iam upload-server-certificate --server-certificate-name dichcumga --certificate-body file://mycert.pem --private-key file://signedkey.pem --certificate-chain file://mychain.pem

I get this error:

A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Certificate body is invalid. The body must not contain a private key.

mycert.pem is a combination of private.pem and signedkey.pem (which was returned by VeriSign):

copy private.pem+signedkey.pem mycert.pem

Please help to shed some light. Thank you in advance.
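In case it helps anyone else: the error text is literal - --certificate-body must contain only the signed certificate, never the key. A sketch using the file names from the question, assuming signedkey.pem is the certificate VeriSign returned and private.pem is the key the CSR was generated from:

aws iam upload-server-certificate --server-certificate-name dichcumga --certificate-body file://signedkey.pem --private-key file://private.pem --certificate-chain file://mychain.pem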

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
I've an x4540 Sun storage server running NexentaStor Enterprise. It's serving NFS over 10GbE CX4 for several VMWare vSphere hosts. There are 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain

ffffff003fefb570 genunix:turnstile_block+795 ()
ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
ffffff003fefb630 zfs:dbuf_find+5d ()
ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
ffffff003fefb700 zfs:dbuf_hold+2e ()
ffffff003fefb780 zfs:dmu_buf_hold+8e ()
ffffff003fefb820 zfs:zap_lockdir+6d ()
ffffff003fefb8b0 zfs:zap_update+5b ()
ffffff003fefb930 zfs:zap_increment+9b ()
ffffff003fefb9b0 zfs:zap_increment_int+68 ()
ffffff003fefba10 zfs:do_userquota_update+8a ()
ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
ffffff003fefbba0 zfs:spa_sync+37b ()
ffffff003fefbc40 zfs:txg_sync_thread+247 ()
ffffff003fefbc50 unix:thread_start+8 ()

Any ideas what this means?

    Read the article

  • Adeos's role w.r.t Linux

    - by Anisha Kaul
The event pipeline

The fundamental Adeos structure one must keep in mind is the chain of client domains asking for event control. A domain is a kernel-based software component which can ask the Adeos layer to be notified of:
· Every incoming external interrupt, or auto-generated virtual interrupt;
· Every system call issued by Linux applications;
· Other system events triggered by the kernel code (e.g. Linux task switching, signal notification, Linux task exits etc.).

From: Life with Adeos: http://www.xenomai.org/documentation/xenomai-2.4/pdf/Life-with-Adeos-rev-B.pdf

Question: Adeos is supposed to sit between the hardware and the Linux kernel. I can understand Adeos telling Linux about hardware interrupts, but why should Adeos know about the system calls issued by Linux applications?

    Read the article

  • SSL cert issued to and SAN attribute

    - by Jai
I have added a cert to my application's cacerts file. The new cert is issued to one DNS name (abc.com), and a few other DNS names (XYZ.com, TEST.com) were added to the SAN attribute when it was created. When I try accessing one of the DNS names given in the SAN attribute (XYZ.com), it throws the error below:

<Certificate chain received from XYZ.com failed hostname verification check. Certificate contained abc.com but check expected XYZ.com>

If we have multiple DNS names for an application, do we need to generate a cert for every single one?
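A single certificate carrying all the names in its SAN should be enough for clients that actually check the SAN. A sketch to confirm which names made it into the cert, with an assumed file name:

openssl x509 -in mycert.pem -noout -text | grep -A 1 'Subject Alternative Name'

If XYZ.com is listed and verification still fails, the client's hostname verifier is likely comparing only the CN - some older verifiers (WebLogic's default one, reportedly) ignore the SAN entirely, in which case the fix is the verifier, not a new cert per name.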

    Read the article

  • Upload a Signed Certificate to Amazon EC2

    - by Tam Minh
I'm very new to Amazon EC2. I am trying to set up https for my website, following the official instructions from the Amazon docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html

And I get stuck at the Upload the Signed Certificate step:

aws iam upload-server-certificate --server-certificate-name <certificate_object_name> --certificate-body <public_key_certificate_file> --private-key <privatekey.pem> --certificate-chain <certificate_chain_file>

Following the instructions, I just created a private key (privatekey.pem) and a Certificate Signing Request (csr.pem), but the command requests 4 params:

1. certificate_object_name
2. public_key_certificate_file
3. *private-key --> I only have this one*
4. certificate_chain_file

I don't know where to get the 3 remaining params. Please help to shed some light. Thank you in advance.
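A sketch of how the four parameters usually map, with illustrative file names since CA naming varies: the object name is any label you choose; the certificate body is the signed cert the CA sends back once you submit csr.pem; the private key is your existing privatekey.pem; and the chain is the intermediate bundle the CA also provides:

aws iam upload-server-certificate --server-certificate-name mysite-cert --certificate-body file://signed-cert.pem --private-key file://privatekey.pem --certificate-chain file://ca-chain.pem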

    Read the article

  • iptables to allow 80 and 443 on chillispot running ddwrt

    - by user76682
I am having problems setting this up. This is what I am trying to do: I have Chillispot (hotspot) running on dd-wrt. Everything is set up, but the client wants only 80 and 443 to go through the hotspot. I found this tutorial for dd-wrt, but it doesn't seem to work: http://www.dd-wrt.com/wiki/index.php/Iptables#Allow_HTTP_traffic_only_to_specific_domain.28s.29

Initially I tried to place the options at the top, but that didn't work. Then I flushed the iptables rules and set only these three. I can see the pkts number grow but for some reason I can browse.

root@DD-WRT:~# iptables -nvL FORWARD
Chain FORWARD (policy ACCEPT 3105 packets, 2442K bytes)
 pkts bytes target prot opt in out source destination
 1629  230K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 21,80,443
 2346 2792K ACCEPT 0   -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
  328 46420 DROP   0   -- * * 0.0.0.0/0 0.0.0.0/0

Here's some info from the router; chillispot is the tun0 interface.

root@DD-WRT:~# iptables -vnL FORWARD --line-numbers
Chain FORWARD (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1   0    0     ACCEPT 47  -- * vlan1 192.168.8.0/24 0.0.0.0/0
2   0    0     ACCEPT tcp -- * vlan1 192.168.8.0/24 0.0.0.0/0 tcp dpt:1723
3   32   1851  ACCEPT 0   -- tun0 * 0.0.0.0/0 0.0.0.0/0 state NEW
4   0    0     ACCEPT 0   -- br0 br0 0.0.0.0/0 0.0.0.0/0
5   48   2408  TCPMSS tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
6   756  452K  lan2wan 0  -- * * 0.0.0.0/0 0.0.0.0/0
7   756  452K  ACCEPT 0   -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
8   0    0     TRIGGER 0  -- vlan1 br0 0.0.0.0/0 0.0.0.0/0 TRIGGER type:in match:0 relate:0
9   0    0     trigger_out 0 -- br0 * 0.0.0.0/0 0.0.0.0/0
10  0    0     ACCEPT 0   -- br0 * 0.0.0.0/0 0.0.0.0/0 state NEW
11  0    0     DROP   0   -- * * 0.0.0.0/0 0.0.0.0/0
12  0    0     DROP   0   -- br0 * 0.0.0.0/0 0.0.0.0/0
13  0    0     DROP   0   -- * br0 0.0.0.0/0 0.0.0.0/0

The interfaces:

root@DD-WRT:~# ifconfig -a
br0   Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
      inet addr:192.168.8.1 Bcast:192.168.8.255 Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:2371 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1862 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:259721 (253.6 KiB) TX bytes:254862 (248.8 KiB)

br0:0 Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
      inet addr:169.254.255.1 Bcast:169.254.255.255 Mask:255.255.0.0
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0  Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:5050 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2508 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:1066410 (1.0 MiB) TX bytes:376001 (367.1 KiB)
      Interrupt:5

eth1  Link encap:Ethernet HWaddr 00:12:17:CF:80:61
      UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
      RX packets:729 errors:0 dropped:0 overruns:0 frame:114693
      TX packets:697 errors:2 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:107869 (105.3 KiB) TX bytes:473134 (462.0 KiB)
      Interrupt:4 Base address:0x1000

etherip0 Link encap:Ethernet HWaddr 1E:13:B7:09:CC:8C
      BROADCAST MULTICAST MTU:1500 Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

lo    Link encap:Local Loopback
      inet addr:127.0.0.1 Mask:255.0.0.0
      UP LOOPBACK RUNNING MULTICAST MTU:16436 Metric:1
      RX packets:18 errors:0 dropped:0 overruns:0 frame:0
      TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:1210 (1.1 KiB) TX bytes:1210 (1.1 KiB)

teql0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
      NOARP MTU:1500 Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:100
      RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

tun0  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.182.1 P-t-P:192.168.182.1 Mask:255.255.255.0
      UP POINTOPOINT RUNNING MTU:1500 Metric:1
      RX packets:662 errors:0 dropped:0 overruns:0 frame:0
      TX packets:587 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:10
      RX bytes:92167 (90.0 KiB) TX bytes:427657 (417.6 KiB)

vlan0 Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:2371 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1864 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:269558 (263.2 KiB) TX bytes:262680 (256.5 KiB)

vlan1 Link encap:Ethernet HWaddr 00:12:17:CF:80:60
      inet addr:10.3.2.47 Bcast:10.255.255.255 Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets:2675 errors:0 dropped:0 overruns:0 frame:0
      TX packets:645 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:705429 (688.8 KiB) TX bytes:102197 (99.8 KiB)

The routing table:

root@DD-WRT:~# netstat -nr
Kernel IP routing table
Destination   Gateway  Genmask         Flags MSS Window irtt Iface
192.168.182.0 0.0.0.0  255.255.255.0   U     0   0      0    tun0
10.3.2.0      0.0.0.0  255.255.255.0   U     0   0      0    vlan1
192.168.8.0   0.0.0.0  255.255.255.0   U     0   0      0    br0
169.254.0.0   0.0.0.0  255.255.0.0     U     0   0      0    br0
127.0.0.0     0.0.0.0  255.0.0.0       U     0   0      0    lo
0.0.0.0       10.3.2.1 0.0.0.0         UG    0   0      0    vlan1

Highly appreciate your help. TIA, Arun
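One gap worth checking in an 80/443-only setup: the multiport rule doesn't cover DNS, so with a final DROP in place clients cannot resolve names and browsing fails even though 80 and 443 are open. A sketch, taking the tun0 interface name from the ifconfig output above:

iptables -I FORWARD -i tun0 -p udp --dport 53 -j ACCEPT
iptables -I FORWARD -i tun0 -p tcp --dport 53 -j ACCEPT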

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >