Search Results

Search found 1519 results on 61 pages for 'chain'.


  • Keytool and SSL Apache config

    - by Safari
    I have a question about SSL certificates. I have generated a certificate using this keytool command:

        keytool -genkey -alias myalias -keyalg RSA -keysize 2048

    and I used this command to export the certificate:

        keytool -export -alias myalias -file certificate.crt

    So, I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool; at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf:

        # Server Certificate:
        # Point SSLCertificateFile at a PEM encoded certificate. If
        # the certificate is encrypted, then you will be prompted for a
        # pass phrase. Note that a kill -HUP will prompt again. A new
        # certificate can be generated using the genkey(1) command.
        #SSLCertificateFile /etc/pki/tls/certs/localhost.crt

        # Server Private Key:
        # If the key is not combined with the certificate, use this
        # directive to point at the key file. Keep in mind that if
        # you've both a RSA and a DSA private key you can configure
        # both in parallel (to also allow the use of DSA ciphers, etc.)
        #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

    How can I configure the correct section?
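
    For what it's worth, a sketch of the usual path (not from this thread; file names and paths are illustrative): mod_ssl wants a PEM certificate plus its private key, and keytool can export the certificate in PEM form with -rfc, but it will not write out the bare private key; that last step normally requires converting the keystore to PKCS12 and then using OpenSSL.

        # export the certificate as PEM rather than binary DER
        keytool -export -rfc -alias myalias -file certificate.pem

        # convert the keystore so the private key can be extracted later;
        # keytool alone cannot dump the bare key (OpenSSL is needed for that step)
        keytool -importkeystore -srckeystore ~/.keystore -destkeystore keystore.p12 \
                -deststoretype PKCS12 -srcalias myalias

        # once OpenSSL becomes available:
        #   openssl pkcs12 -in keystore.p12 -nodes -nocerts -out server.key

        # then, in ssl.conf:
        #   SSLCertificateFile    /etc/pki/tls/certs/certificate.pem
        #   SSLCertificateKeyFile /etc/pki/tls/private/server.key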

    Read the article

  • Host name change breaking http? Fedora

    - by Dave
    OK, so I have been messing around on my development server. It has been a while since I have had my head in Linux and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP, and I had FTP running also. Earlier today I decided I wanted to rename the machine, so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in httpd.conf. I rebooted the machine and reconnected to SSH fine. Later I was messing around with the FTP service (trying to tighten up the user security), and when I tried to connect remotely to FTP, no joy: it said cannot connect. I thought that was weird, but I had planned to remove FTP as we will be using GitHub, so I removed FTP and moved on. Then I tried to connect to the website, but major fail: even connecting to the IP address is failing. I used lynx to connect to localhost and there was my site, so something is going on at server level. I thought maybe something was up with iptables, but I have not changed it; I tried adding http but still no joy. I have:

        Fedora release 17 (Beefy Miracle)
        NAME=Fedora
        VERSION="17 (Beefy Miracle)"
        ID=fedora
        VERSION_ID=17
        PRETTY_NAME="Fedora 17 (Beefy Miracle)"
        ANSI_COLOR="0;34"
        CPE_NAME="cpe:/o:fedoraproject:fedora:17"
        Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC) ) #1 SMP Mon May 7 17:29:34 UTC 2012

    This is my iptables -L:

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination
        ACCEPT     all  --  anywhere  anywhere    state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere  anywhere
        ACCEPT     all  --  anywhere  anywhere
        ACCEPT     tcp  --  anywhere  anywhere    state NEW tcp dpt:ssh
        REJECT     all  --  anywhere  anywhere    reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target     prot opt source    destination
        REJECT     all  --  anywhere  anywhere    reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination

    Like I say, I can use SSH no issue, but HTTP, although running, is a no-go from a remote computer. Any ideas?
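
    A hedged reading of that listing (not from the thread): the INPUT chain accepts SSH and then rejects everything else, and there is no ACCEPT for port 80, so a rule appended with -A would land after the final REJECT and never match. A sketch of the usual fix is to insert above the REJECT and persist:

        # insert ahead of the catch-all REJECT (rule 5) rather than appending after it
        iptables -I INPUT 5 -p tcp --dport 80 -m state --state NEW -j ACCEPT
        service iptables save   # assumes the classic iptables initscript is in use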

    Read the article

  • Can't connect to vsftpd on Ubuntu 10.04

    - by Johnny
    I started vsftpd on Ubuntu 10.04, but I can't connect to it. The error says (FTP client):

        Status: Connecting to 124.205.xx.xx:21...
        Error:  Connection timed out
        Error:  Could not connect to server

    I've checked the server status, and vsftpd is running:

        $ ps ax | grep vsftpd
        23646 ?        Ss     0:00 /usr/sbin/vsftpd
        23650 pts/1    S+     0:00 grep --color=auto vsftpd

    Port 21 is listening as well:

        $ netstat -tlnp | grep 21
        (No info could be read for "-p": geteuid()=1000 but you should be root.)
        tcp   0   0 0.0.0.0:21   0.0.0.0:*   LISTEN   -

    I can connect to localhost:

        $ ftp localhost
        Connected to localhost.
        220 (vsFTPd 2.2.2)
        Name (localhost:jlee):
        331 Please specify the password.
        Password:
        230 Login successful.
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp>

    Here is the iptables output:

        $ sudo iptables -vL
        Chain INPUT (policy ACCEPT 191 packets, 144K bytes)
         pkts bytes target prot opt in out source destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 124 packets, 28502 bytes)
         pkts bytes target prot opt in out source destination

    What's the problem here?
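
    A hedged sketch of the usual next steps (not from the thread): local connections succeed and the filter table is empty, so the timeout most likely happens upstream of the box, typically a NAT router or ISP filter in front of 124.205.xx.xx. Checking from both sides narrows it down:

        # on the server: confirm the daemon binds a reachable address
        sudo netstat -tlnp | grep ':21 '

        # from a remote machine: see whether port 21 is filtered before it arrives
        nmap -Pn -p 21 124.205.xx.xx
        telnet 124.205.xx.xx 21

    If the server sits behind NAT, port 21 plus a passive range (pasv_min_port/pasv_max_port in vsftpd.conf) would need forwarding as well.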

    Read the article

  • SNMP closed state in CentOS

    - by anksoWX
    I'm having a problem here. I've added these rules to my iptables:

        -A INPUT -p tcp -m state --state NEW -m tcp --dport 161 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT

    but when I scan with nmap or any other tool it says this:

        Not shown: 998 filtered ports
        PORT    STATE  SERVICE
        22/tcp  open   ssh
        161/tcp closed snmp

    Also, when I run netstat -apn | grep snmpd:

        tcp   0  0 127.0.0.1:199  0.0.0.0:*  LISTEN  3669/snmpd
        udp   0  0 0.0.0.0:161    0.0.0.0:*          3669/snmpd
        unix  2  [ ]  DGRAM  226186  3669/snmpd

    Also, service iptables status:

        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target  prot opt source     destination
        1    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        2    ACCEPT  icmp --  0.0.0.0/0  0.0.0.0/0
        3    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
        4    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:161
        5    ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0  state NEW udp dpt:161
        6    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:22
        7    REJECT  all  --  0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source     destination
        1    REJECT  all  --  0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source     destination

    Any idea what's going on? There is no UDP port shown in an open/closed state. What do I have to do?
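
    A hedged observation (not from the thread): the netstat output shows snmpd bound to UDP 161 but only to 127.0.0.1:199 for TCP, so a TCP scan of port 161 will report closed no matter what iptables allows. SNMP is normally UDP; a sketch:

        # scan the UDP port instead of the TCP one
        nmap -sU -p 161 <host>

        # or, to make snmpd also listen on TCP 161, in /etc/snmp/snmpd.conf:
        #   agentAddress udp:161,tcp:161
        # then: service snmpd restart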

    Read the article

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my Apache server from the IP 188.241.114.22, which eventually causes Apache to hang. After I restart the service, everything works fine. I tried adding a rule to iptables:

        iptables -A INPUT -s 188.241.114.22 -j DROP

    but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this:

        iptables -A INPUT -s 188.241.114.22 -j DROP

    Right after that I save it using:

        service iptables save

    Here is the output of iptables -L -v:

        Chain INPUT (policy ACCEPT 120K packets, 16M bytes)
         pkts bytes target prot opt in  out source                               destination
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any c-98-210-5-174.hsd1.ca.comcast.net   anywhere
            0     0 DROP   all  --  any any c-98-201-5-174.hsd1.tx.comcast.net   anywhere
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any www.dabacus2.com                     anywhere
            0     0 DROP   all  --  any any 116.255.163.100                      anywhere
            0     0 DROP   all  --  any any 94.23.119.11                         anywhere
            0     0 DROP   all  --  any any 164.bajanet.mx                       anywhere
            0     0 DROP   all  --  any any 173-203-71-136.static.cloud-ips.com  anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any 74.122.177.12                        anywhere
            0     0 DROP   all  --  any any 58.83.227.150                        anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 186K packets, 224M bytes)
         pkts bytes target prot opt in out source destination
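
    A hedged note (not from the thread): iptables -L resolves addresses to hostnames, so 188.241.114.22 may well be one of the v1.oxygen.ro entries already; a numeric listing makes that unambiguous, and since every counter reads zero it is worth confirming the rule is being hit at all:

        # show sources numerically, with counters and rule numbers
        iptables -L INPUT -v -n --line-numbers

        # inserting at the top removes any doubt about ordering
        iptables -I INPUT 1 -s 188.241.114.22 -j DROP
        service iptables save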

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin and especially nginx, but I seem to be getting on fine apart from accessing my mail via my iPhone. I've changed my domain to 'domain.com' below. The thing is, I can send mail via my outgoing (SMTP) server but can't connect to the incoming one. I just get the message "the mail server at mail.domain.com is not responding".

    /etc/postfix/main.cf:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    A telnet session:

        telnet localhost 25
        ehlo localhost
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    I'm using the following details to connect: username, password, hostname: mail.domain.com, port: 25.

    iptables --list:

        Chain INPUT (policy ACCEPT)
        target prot opt source destination

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    I also sent mail to the server as a test and got this message, if it helps:

        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]

    I also looked in /var/log/mail.log and it has multiple entries like:

        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]

    Notice new-domain, which is incorrect, but the server hostname and the hostname in the configs are correct? I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, it works to connect to the outgoing server, but incoming gets the not responding error. Any idea would be much appreciated!
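
    A hedged aside (not from the thread): Postfix only speaks SMTP, so the config above covers outgoing mail; "incoming" on a phone means IMAP (143/993) or POP3 (110/995), which need a separate daemon such as Dovecot. A quick sketch to check and, if needed, fill the gap:

        # is anything listening on the IMAP/POP ports?
        sudo netstat -tlnp | egrep ':(143|993|110|995) '

        # if not, install an IMAP server (Dovecot shown as one example)
        sudo apt-get install dovecot-imapd
        # then point the phone's *incoming* server at mail.domain.com, port 143 (or 993 for SSL)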

    Read the article

  • How to configure apache to basic authentication or allow when ntlm while proxying?

    - by trotzim
    Here is my study case:

        browser --- apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is to allow HTTPS through the two proxies. A configuration that works with HTTP is something like this (yes, I don't want to use ProxyPass but ProxyRequests):

        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>

    The user can authenticate on the Apache proxy, then the authentication chain is sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI module (NTLM) with something like this:

        blablabla
        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [ SSPI stuff ]
            Require valid-user
            Allow from all
        </Proxy>

    the Apache server rejects the authentication (or the ISA server, I don't really know). I used Wireshark to look at the nominal process while using the ISA server directly as the proxy. The first auth chain is BASIC, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it transfers the NTLM authentication to the ISA proxy without checking it(*)? Or to rewrite headers to force BASIC authentication? (*) It seems not to be as easy as it looks...
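
    Not from the thread, and unverified against ISA: one workaround that gets suggested for chaining through an upstream proxy that wants credentials is to drop the local authentication entirely and inject a static Basic header with mod_headers. Whether ISA accepts Basic for CONNECT instead of forcing NTLM depends on its policy, so treat this purely as a sketch:

        # assumes mod_headers is loaded; the base64 value encodes user:password
        <Proxy *>
            # no local AuthType here; pass requests through untouched
            Allow from all
        </Proxy>
        RequestHeader set Proxy-Authorization "Basic dXNlcjpwYXNzd29yZA=="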

    Read the article

  • iptables blank after reboot

    - by theillien
    We've started encountering an issue with iptables on our RHEL 6.3 systems: after a reboot, when the service starts, the rules are not loaded. We get the empty ruleset:

        [msnyder@matt-test ~]$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    This is in spite of the fact that we have rules defined and the service is, indeed, running. That I know because when I run service iptables start it simply drops back to the prompt. If I run service iptables restart it actually stops and then restarts the service. And, of course, if I run service iptables stop it indicates that iptables is actually stopping. Knowing that I need to restart the service, I do so and the rules load up properly. They simply don't get loaded after a reboot. Unless they get loaded differently during a reboot, I don't see how our rules could be wrong. If they were, they wouldn't even load during a service restart. Has anyone else ever encountered this?

    EDIT: The rules are already saved in /etc/sysconfig/iptables. They are not added on the fly from the command line, so service iptables save is unnecessary.
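
    A hedged checklist (not from the thread): on RHEL 6 the initscript loads /etc/sysconfig/iptables at boot only when the service is enabled for the runlevel, and a later service (NetworkManager, a VPN script, lokkit/system-config-firewall) can silently flush what it loaded. Worth checking:

        # is the initscript actually enabled at boot?
        chkconfig --list iptables
        chkconfig iptables on

        # did something flush the rules after boot?
        grep -i iptables /var/log/messages | tail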

    Read the article

  • Redirect traffic from 127.0.0.1 to 127.0.0.1 on port 53 to port 5300 with iptables

    - by Zagorax
    I'm running a local DNS server on port 5300 to develop a piece of software. I need my machine to use that DNS server, but I wasn't able to tell /etc/resolv.conf to check on a different port. I searched a bit on Google and didn't find a solution. I set 127.0.0.1 as the nameserver in /etc/resolv.conf. This is my whole /etc/resolv.conf:

        nameserver 127.0.0.1

    Could you please tell me how I can redirect outbound traffic on port 53 to another port? I tried the following, but it didn't work:

        iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to 127.0.0.1:5300
        iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to 127.0.0.1:5300

    Here is the output of iptables -t nat -L -v -n (with the suggested rules):

        Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target   prot opt in out source     destination
            0     0 REDIRECT tcp  --  *  *   0.0.0.0/0  0.0.0.0/0   tcp dpt:53 redir ports 5300
            0     0 REDIRECT udp  --  *  *   0.0.0.0/0  0.0.0.0/0   udp dpt:53 redir ports 5300

        Chain POSTROUTING (policy ACCEPT 302 packets, 19213 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 302 packets, 19213 bytes)
         pkts bytes target prot opt in out source destination
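
    A hedged explanation (not from the thread): PREROUTING only sees packets arriving on an interface; traffic the machine generates itself, including queries to 127.0.0.1, traverses the nat OUTPUT chain instead, which is why the counters above stay at zero. A sketch:

        # redirect locally generated DNS traffic: OUTPUT, not PREROUTING
        iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 5300
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 5300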

    Read the article

  • add a decorate function to a class

    - by wiso
    I have a decorated function (simplified version):

        class Memoize:
            def __init__(self, function):
                self.function = function
                self.memoized = {}

            def __call__(self, *args, **kwds):
                hash = args
                try:
                    return self.memoized[hash]
                except KeyError:
                    self.memoized[hash] = self.function(*args)
                    return self.memoized[hash]

        @Memoize
        def _DrawPlot(self, options):
            # do something...

    Now I want to add this method to a pre-existing class:

        ROOT.TChain.DrawPlot = _DrawPlot

    When I call this method:

        chain = TChain()
        chain.DrawPlot(opts)

    I get:

        self.memoized[hash] = self.function(*args)
        TypeError: _DrawPlot() takes exactly 2 arguments (1 given)

    Why doesn't it propagate self?
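
    A hedged explanation (not from the thread): plain functions become bound methods through the descriptor protocol (__get__); a Memoize instance is not a function and defines no __get__, so attribute lookup hands it back unbound and self is never inserted. A minimal sketch of the usual fix:

        import functools

        class Memoize:
            def __init__(self, function):
                self.function = function
                self.memoized = {}

            def __call__(self, *args):
                try:
                    return self.memoized[args]
                except KeyError:
                    self.memoized[args] = self.function(*args)
                    return self.memoized[args]

            def __get__(self, obj, objtype=None):
                # bind the way a plain function would: pre-fill the instance
                return functools.partial(self.__call__, obj)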

    Read the article

  • prolog sets problem, stack overflow

    - by garm0nboz1a
    Hi. I'm going to show some code and ask: what could be optimized, and where am I stuck?

        sublist([], []).
        sublist([H | Tail1], [H | Tail2]) :-
            sublist(Tail1, Tail2).
        sublist(H, [_ | Tail]) :-
            sublist(H, Tail).

        less(X, X, _).
        less(X, Z, RelationList) :-
            member([X,Z], RelationList).
        less(X, Z, RelationList) :-
            member([X,Y], RelationList),
            less(Y, Z, RelationList),
            \+less(Z, X, RelationList).

        lessList(X, LessList, RelationList) :-
            findall(Y, less(X, Y, RelationList), List),
            list_to_set(List, L),
            sort(L, LessList), !.

        list_mltpl(List1, List2, List) :-
            findall(X, ( member(X, List1), member(X, List2) ), List).

        chain([_], _).
        chain([H,T | Tail], RelationList) :-
            less(H, T, RelationList),
            chain([T|Tail], RelationList), !.

        have_inf(X1, X2, RelationList) :-
            lessList(X1, X1_cone, RelationList),
            lessList(X2, X2_cone, RelationList),
            list_mltpl(X1_cone, X2_cone, Cone),
            chain(Cone, RelationList), !.

        relations(List, E) :-
            findall([X1,X2], (member(X1, E), member(X2, E), X1 =\= X2), Relations),
            sublist(List, Relations).

        semilattice(List, E) :-
            forall( (member(X1, E), member(X2, E), X1 < X2),
                    have_inf(X1, X2, List) ).

        main(E) :-
            relations(X, E),
            semilattice(X, E).

    I'm trying to model all possible graph sets of N elements. The predicate relations(List, E) connects the list of possible graphs (List) and the input set E. Then I'm describing the semilattice predicate to check the relations' List for some properties. So, what do I have?

    1) semilattice/2 is working fast and clean:

        ?- semilattice([[1,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        true.

        ?- semilattice([[1,3],[1,4],[2,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        false.

    2) relations/2 is not working well:

        ?- findall(X, relations(X,[1,2,3,4]), List), length(List, Len), writeln(Len), fail.
        4096
        false.

        ?- findall(X, relations(X,[1,2,3,4,5]), List), length(List, Len), writeln(Len), fail.
        ERROR: Out of global stack
        ^  Exception: (11) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886),
           '$bags':fa_loop(_G263, user:relations(_G263, [1, 2, 3, 4|...]), 17852886, _G268, []),
           _G835, '$bags':'$destroy_findall_bag'(17852886)) ? abort
        % Execution Aborted

    3) A mix of them to find all possible semilattices does not work at all:

        ?- main([1,2]).
        ERROR: Out of local stack
        ^  Exception: (15) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886),
           '$bags':fa_loop(_G41, user:less(1, _G41, [[1, 2], [2, 1]]), 17852886, _G52, []),
           _G4767764, '$bags':'$destroy_findall_bag'(17852886)) ?

    Read the article

  • NFS Mounts Issues

    - by user554005
    Having some issues with an NFS setup; on the clients it just times out and refuses to connect:

        [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).

    Here are the settings I'm using:

        [root@host17 /home/export]# cat /etc/hosts.allow
        #
        # hosts.allow   This file contains access rules which are used to
        #               allow or deny connections to network services that
        #               either use the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap: 192.168.0.0/255.255.255.0
        lockd:   192.168.0.0/255.255.255.0
        rquotad: 192.168.0.0/255.255.255.0
        mountd:  192.168.0.0/255.255.255.0
        statd:   192.168.0.0/255.255.255.0

        [root@host17 /home/export]# cat /etc/hosts.deny
        #
        # hosts.deny    This file contains access rules which are used to
        #               deny connections to network services that either use
        #               the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               The rules in this file can also be set up in
        #               /etc/hosts.allow with a 'deny' option instead.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap:ALL
        lockd:ALL
        mountd:ALL
        rquotad:ALL
        statd:ALL

        [root@host17 /home/export]# cat /etc/exports
        /home/export 192.168.0.0/255.255.255.0(rw)

        [root@host17 /home/export]# iptables -L
        Chain INPUT (policy ACCEPT)
        target               prot opt source    destination
        RH-Firewall-1-INPUT  all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target               prot opt source    destination
        RH-Firewall-1-INPUT  all  --  anywhere  anywhere

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        Chain RH-Firewall-1-INPUT (2 references)
        target prot opt source          destination
        ACCEPT all  --  anywhere        anywhere
        ACCEPT icmp --  anywhere        anywhere     icmp any
        ACCEPT esp  --  anywhere        anywhere
        ACCEPT ah   --  anywhere        anywhere
        ACCEPT udp  --  anywhere        224.0.0.251  udp dpt:mdns
        ACCEPT udp  --  anywhere        anywhere     udp dpt:ipp
        ACCEPT tcp  --  anywhere        anywhere     tcp dpt:ipp
        ACCEPT all  --  anywhere        anywhere     state RELATED,ESTABLISHED
        ACCEPT tcp  --  anywhere        anywhere     state NEW tcp dpt:ssh
        ACCEPT tcp  --  anywhere        anywhere     state NEW tcp dpt:http
        ACCEPT tcp  --  anywhere        anywhere     state NEW tcp dpt:https
        ACCEPT tcp  --  anywhere        anywhere     state NEW tcp dpt:6379
        ACCEPT udp  --  192.168.0.0/24  anywhere     state NEW udp dpt:sunrpc
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:sunrpc
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:nfs
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:32803
        ACCEPT udp  --  192.168.0.0/24  anywhere     state NEW udp dpt:filenet-rpc
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:892
        ACCEPT udp  --  192.168.0.0/24  anywhere     state NEW udp dpt:892
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:rquotad
        ACCEPT udp  --  192.168.0.0/24  anywhere     state NEW udp dpt:rquotad
        ACCEPT tcp  --  192.168.0.0/24  anywhere     state NEW tcp dpt:pftp
        ACCEPT udp  --  192.168.0.0/24  anywhere     state NEW udp dpt:pftp
        REJECT all  --  anywhere        anywhere     reject-with icmp-host-prohibited

    On the clients, here is some rpcinfo output:

        [root@host9 ~]# rpcinfo -p 192.168.0.17
           program vers proto   port
            100000    4   tcp    111  portmapper
            100000    3   tcp    111  portmapper
            100000    2   tcp    111  portmapper
            100000    4   udp    111  portmapper
            100000    3   udp    111  portmapper
            100000    2   udp    111  portmapper
            100011    1   udp    875  rquotad
            100011    2   udp    875  rquotad
            100011    1   tcp    875  rquotad
            100011    2   tcp    875  rquotad
            100005    1   udp  45857  mountd
            100005    1   tcp  55772  mountd
            100005    2   udp  34021  mountd
            100005    2   tcp  59542  mountd
            100005    3   udp  60930  mountd
            100005    3   tcp  53086  mountd
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049  nfs_acl
            100227    3   udp   2049  nfs_acl
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049  nfs_acl
            100227    3   tcp   2049  nfs_acl
            100021    1   udp  59832  nlockmgr
            100021    3   udp  59832  nlockmgr
            100021    4   udp  59832  nlockmgr
            100021    1   tcp  36140  nlockmgr
            100021    3   tcp  36140  nlockmgr
            100021    4   tcp  36140  nlockmgr
            100024    1   udp  46494  status
            100024    1   tcp  49672  status

        [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs
        rpcinfo: RPC: Timed out
        program 100003 version 0 is not available

        [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap
        program 100000 version 2 ready and waiting
        program 100000 version 3 ready and waiting
        program 100000 version 4 ready and waiting

        [root@host9 ~]# rpcinfo -u 192.168.0.17 mount
        rpcinfo: RPC: Timed out
        program 100005 version 0 is not available

    I'm running CentOS 5.8 on all systems.
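
    A hedged diagnosis (not from the thread): portmap and nfs answer on fixed ports, but mountd has landed on random ports (45857, 55772, ...) while the firewall only opens the fixed set (892, 32803, and the rquotad/pftp entries). Pinning the RPC daemons to those ports on CentOS 5 usually clears the timeout; a sketch:

        # /etc/sysconfig/nfs on the server (values chosen to match the existing firewall rules)
        MOUNTD_PORT=892
        LOCKD_TCPPORT=32803
        LOCKD_UDPPORT=32769
        STATD_PORT=662

        # then restart and re-check with rpcinfo -p
        service nfs restart
        service nfslock restart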

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL server running on Red Hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        # Disabling symbolic-links is recommended to prevent assorted security risks;
        # to do so, uncomment this line:
        # symbolic-links=0

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        bind-address=171.67.88.25
        port=3306

    And I have also restarted my firewall:

        sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT
        /sbin/service iptables save

    The network administrator has already opened port 3306 for this box. When connecting from a remote computer (running Ubuntu 10.10; the server is running RHEL 5.7), I issue:

        mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock

    but receive:

        ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113)

    I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case?

    UPDATE: The result of netstat -an | grep 3306:

        tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN

    Result of sudo netstat -tulpen:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   User  Inode  PID/Program name
        tcp   0      0      127.0.0.1:2208   0.0.0.0:*        LISTEN  0     7602   3168/hpiod
        tcp   0      0      0.0.0.0:3306     0.0.0.0:*        LISTEN  27    7827   3298/mysqld
        tcp   0      0      0.0.0.0:111      0.0.0.0:*        LISTEN  0     5110   2802/portmap
        tcp   0      0      0.0.0.0:8787     0.0.0.0:*        LISTEN  0     8431   3326/rserver
        tcp   0      0      0.0.0.0:915      0.0.0.0:*        LISTEN  0     5312   2853/rpc.statd
        tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  0     7655   3188/sshd
        tcp   0      0      127.0.0.1:631    0.0.0.0:*        LISTEN  0     7688   3199/cupsd
        tcp   0      0      127.0.0.1:25     0.0.0.0:*        LISTEN  0     8025   3362/sendmail: acce
        tcp   0      0      127.0.0.1:2207   0.0.0.0:*        LISTEN  0     7620   3173/python
        udp   0      0      0.0.0.0:909      0.0.0.0:*                0     5300   2853/rpc.statd
        udp   0      0      0.0.0.0:912      0.0.0.0:*                0     5309   2853/rpc.statd
        udp   0      0      0.0.0.0:68       0.0.0.0:*                0     4800   2598/dhclient
        udp   0      0      0.0.0.0:36177    0.0.0.0:*                70    8314   3476/avahi-daemon:
        udp   0      0      0.0.0.0:5353     0.0.0.0:*                70    8313   3476/avahi-daemon:
        udp   0      0      0.0.0.0:111      0.0.0.0:*                0     5109   2802/portmap
        udp   0      0      0.0.0.0:631      0.0.0.0:*                0     7691   3199/cupsd

    Result of sudo /sbin/iptables -L -v -n:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target              prot opt in out source    destination
         6373 2110K RH-Firewall-1-INPUT all  --  *  *   0.0.0.0/0 0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target              prot opt in out source    destination
            0     0 RH-Firewall-1-INPUT all  --  *  *   0.0.0.0/0 0.0.0.0/0

        Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes)
         pkts bytes target prot opt in out source destination

        Chain RH-Firewall-1-INPUT (2 references)
         pkts bytes target prot opt in  out source    destination
          572  861K ACCEPT all  --  lo  *   0.0.0.0/0 0.0.0.0/0
            1    28 ACCEPT icmp --  *   *   0.0.0.0/0 0.0.0.0/0   icmp type 255
            0     0 ACCEPT esp  --  *   *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT ah   --  *   *   0.0.0.0/0 0.0.0.0/0
           46  6457 ACCEPT udp  --  *   *   0.0.0.0/0 224.0.0.251 udp dpt:5353
            0     0 ACCEPT udp  --  *   *   0.0.0.0/0 0.0.0.0/0   udp dpt:631
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0   tcp dpt:631
          782  157K ACCEPT all  --  *   *   0.0.0.0/0 0.0.0.0/0   state RELATED,ESTABLISHED
            2   120 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0   state NEW tcp dpt:22
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0   state NEW tcp dpt:443
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0   state NEW tcp dpt:23
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0   state NEW tcp dpt:80
         4970 1086K REJECT all  --  *   *   0.0.0.0/0 0.0.0.0/0   reject-with icmp-host-prohibited

    Result of nmap -P0 -p3306 171.67.88.25:

        Host is up (0.027s latency).
        PORT     STATE    SERVICE
        3306/tcp filtered mysql
        Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

    Solution: When everything else fails, go GUI! system-config-securitylevel and add port 3306. All done!
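
    For the record, a hedged note on why the original rule never worked (not from the thread): error 113 is "No route to host", exactly what the icmp-host-prohibited REJECT produces, and INPUT's first rule jumps everything into RH-Firewall-1-INPUT, which ends in that REJECT, so a rule appended to INPUT with -A is unreachable. The command-line equivalent of the GUI fix is to insert into RH-Firewall-1-INPUT above its REJECT:

        /sbin/iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW --dport 3306 -j ACCEPT
        /sbin/service iptables save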

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I'll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along.

    With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we'll address some sub-optimal scenarios where you can still successfully recover data.

    If you have only a full database backup

    This is the least optimal SQL Server disaster recovery strategy, as it doesn't ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

    If you have a full database backup followed by differential database backups

    Let's say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

    If you have a full database backup and a chain of transaction log backups

    This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then restore the transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup (a sketch of this restore sequence follows below). Now the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk.

    How to easily recover deleted records?

    The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.
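
    For concreteness, a minimal T-SQL sketch of that point-in-time restore sequence (not from the original article; database name, file paths, and the STOPAT timestamp are illustrative):

        -- restore the chain WITH NORECOVERY, stopping just before the delete
        RESTORE DATABASE AdventureWorks FROM DISK = N'D:\backup\full.bak'
            WITH NORECOVERY, REPLACE;
        RESTORE DATABASE AdventureWorks FROM DISK = N'D:\backup\diff.bak'
            WITH NORECOVERY;
        RESTORE LOG AdventureWorks FROM DISK = N'D:\backup\log_1.trn'
            WITH NORECOVERY, STOPAT = '2014-03-12T10:15:00';
        -- bring the database online once the chain is applied
        RESTORE DATABASE AdventureWorks WITH RECOVERY;
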
    To understand how ApexSQL Recover works, I'll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it's recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

    First, I'll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

        DELETE FROM Person.EmailAddress
        WHERE BusinessEntityID BETWEEN 70 AND 80

    Then I'll start ApexSQL Recover and select From DELETE operation in the Recovery tab. In the Select the database to recover step, first select the SQL Server instance. If it's not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

    In the next step, you're prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option. The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above.

    The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don't want that. If needed, you can check the generated script and manually remove such records.

    After that, for all data-source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don't want to recover it, don't select all tables, as ApexSQL Recover will create the INSERT script for them too.

    The next step offers two options: create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I'll select the first one. The recovery process is completed, and 11 records are found and scripted, as expected.
    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:

        INSERT INTO Person.EmailAddress(
            BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
        VALUES(
            70, 70,
            N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS,
            'd62c5b4e-c91f-403f-b630-7b7e0fda70ce',
            '20030109 00:00:00.000' );

    To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute:

        SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80

    As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How can I block a specific type of DDoS attack?

    - by Mark
    My site is being attacked and is using up all the RAM. I looked at the Apache logs, and every malicious hit seems to simply be a POST request on /, which is never required by a normal user. So I wondered if there's any sort of solution or utility that will monitor my Apache logs and block every IP that performs a POST request on the site root. I'm not familiar with DDoS protection and searching didn't seem to give me an answer, so I came here. Thanks. Example logs:

        103.3.221.202 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        122.72.80.100 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
        122.72.28.15 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"
        210.75.120.5 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0"
        122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
        122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
        122.72.124.3 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        122.72.112.148 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        190.39.210.26 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/5.0 (Windows NT 6.0; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        210.213.245.230 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"
        101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        101.44.1.28 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        101.44.1.28 - - [30/Sep/2012:16:02:14 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
        101.44.1.25 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
        211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6"
        211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
        211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
        101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
        101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        211.161.152.108 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        101.44.1.28 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        211.161.152.106 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1"
        103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
        101.44.1.28 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
        211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6"
        101.44.1.25 - - [30/Sep/2012:16:02:10 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
        122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        122.72.124.2 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
        210.213.245.230 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.0" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"

    iptables -L:

        Chain INPUT (policy ACCEPT)
        target prot opt source destination

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    What I tried:

        bui@debian:~$ sudo iptables -I INPUT 1 -m string --algo bm --string 'Keep-Alive: 300' -j DROP
        iptables: No chain/target/match by that name.
        bui@debian:~$ sudo iptables -A INPUT -m string --algo bm --string 'Keep-Alive: 300' -j DROP
        iptables: No chain/target/match by that name.
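
    A hedged sketch of the log-watching approach the question asks for, using fail2ban (not from the thread; the filter name and paths are illustrative, and the regex assumes the combined log format shown above). As a side note, the "No chain/target/match by that name" error usually means the kernel's string match extension (xt_string) is not available, so the -m string rules cannot load regardless.

        # /etc/fail2ban/filter.d/apache-postroot.conf
        [Definition]
        failregex = ^<HOST> - - \[.*\] "POST / HTTP/1\.[01]"
        ignoreregex =

        # /etc/fail2ban/jail.local
        [apache-postroot]
        enabled  = true
        port     = http,https
        filter   = apache-postroot
        logpath  = /var/log/apache2/access.log
        maxretry = 2
        bantime  = 3600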

    Read the article

  • MvcExtensions - PerRequestTask

    - by kazimanzurrashid
    In the previous post, we saw the BootstrapperTask, which executes when the application starts and ends. Similarly, there are times when we need to execute some custom logic when a request starts and ends. Usually, for this kind of scenario, we create an HttpModule and hook the begin and end request events. There is nothing wrong with this approach, except that HttpModules are not at all IoC-container friendly; also, defining the HttpModule execution order is a bit cumbersome: you either have to modify the machine.config or clear the HttpModules and add them again in web.config. Instead, you can use the PerRequestTask, which is very much container friendly and supports execution order. Let's see a few examples of where it can be used.

    Remove www Subdomain

    Let's say we want to remove the www subdomain, so that if anybody types http://www.mydomain.com it automatically redirects to http://mydomain.com.

        public class RemoveWwwSubdomain : PerRequestTask
        {
            public RemoveWwwSubdomain()
            {
                Order = DefaultOrder - 1;
            }

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                const string Prefix = "http://www.";

                Check.Argument.IsNotNull(executionContext, "executionContext");

                HttpContextBase httpContext = executionContext.HttpContext;
                string url = httpContext.Request.Url.ToString();
                bool startsWith3W = url.StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);
                bool shouldContinue = true;

                if (startsWith3W)
                {
                    string newUrl = "http://" + url.Substring(Prefix.Length);
                    HttpResponseBase response = httpContext.Response;

                    response.StatusCode = (int)HttpStatusCode.MovedPermanently;
                    response.Status = "301 Moved Permanently";
                    response.RedirectLocation = newUrl;
                    response.SuppressContent = true;

                    shouldContinue = false;
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    As you can see, first we set the order so that we do not have to execute the remaining tasks of the chain when we are redirecting; next, in ExecuteCore, we check whether www is present. If it is, we send a permanently-moved HTTP status code and break the task execution chain; otherwise, we continue with the chain.

    Blocking IP Address

    Let's take another scenario: your application is hosted in a shared hosting environment where you do not have permission to change the IIS settings, and you want to block certain IP addresses from visiting your application. Let's say you maintain a list of IP addresses in database/XML files, and you have an IBannedIPAddressRepository service which is used to match banned IP addresses.

        public class BlockRestrictedIPAddress : PerRequestTask
        {
            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                bool shouldContinue = true;

                HttpContextBase httpContext = executionContext.HttpContext;

                if (!httpContext.Request.IsLocal)
                {
                    string ipAddress = httpContext.Request.UserHostAddress;
                    HttpResponseBase httpResponse = httpContext.Response;

                    if (executionContext.ServiceLocator.GetInstance<IBannedIPAddressRepository>().IsMatching(ipAddress))
                    {
                        httpResponse.StatusCode = (int)HttpStatusCode.Forbidden;
                        httpResponse.StatusDescription = "IPAddress blocked.";

                        shouldContinue = false;
                    }
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    Managing Database Session

    Now, let's see how it can be used to manage an NHibernate session, assuming that the ISessionFactory of NHibernate is already registered in our container.

        public class ManageNHibernateSession : PerRequestTask
        {
            private ISession session;

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                ISessionFactory factory = executionContext.ServiceLocator.GetInstance<ISessionFactory>();
                session = factory.OpenSession();

                return TaskContinuation.Continue;
            }

            protected override void DisposeCore()
            {
                session.Close();
                session.Dispose();
            }
        }

    As you can see, PerRequestTask can be used to execute small and precise tasks at the beginning and end of a request; certainly, if you want to execute at other points than begin/end request, there is no alternative to HttpModule. That's it for today. In the next post, we will discuss the Action Filters, so stay tuned.

    Read the article

  • Vitality of Product Information Management Showcased at OpenWorld 2012

    - by Mala Narasimharajan
    By Sachin Patel

    Can you hear the countdown clock ticking? OpenWorld 2012 is almost here, and as I write this, Oracle is buzzing with fresh new ideas and solutions that will be showcased this year. What an exciting time for all of us to be in the midst of a digital revolution. Whether it is Apple fans clamoring to find every new feature that has been added to the iPhone 5, or a startup launching a new digital thermostat (has anyone looked at the new one from Nest?), product information is vital for companies to grow and compete in this cut-throat market. Customers today struggle to aggregate and enrich this product data from the myriad of systems they have in place to run their businesses and operations. Having a product information strategy is paramount to aligning your sales channels and operations with the most accurate and up-to-date product data. We have a number of sessions this year at OpenWorld where you can gain more insight into how Oracle's next generation of Fusion Applications, in this case Fusion Product Hub, can provide you with a solution to streamline and get control of your product master data.

    Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub
    Tuesday, October 2nd, 11:45 am, Moscone West 2022

    Join me, Sachin Patel, Director of Product Strategy, and Milan Bhatia, VP of Development, as we discuss how you can enable trusted product master data in your enterprise. In this session we plan to cover the challenges companies face today in mastering product data. The discussion will also include how Fusion Product Hub brings new and innovative features to empower your product data owners to create a holistic and rich product definition that can be leveraged across your enterprise. We will also be joined by Pawel Fidelus from Fideltronik, an early adopter of Fusion Product Hub, who will showcase their plans to implement Fusion Product Hub and the value it will bring to Fideltronik.

    Multichannel Fulfillment Excellence in the Direct-to-Consumer Market
    Thursday, October 4th, 12:45 am, Moscone West 2024

    Do you have multiple order capture systems? Do you have difficulty fulfilling orders for your customers across various channels and suppliers? Mark Carson, Director, Fusion DOO, and Brad Kerr, Director, AGSS, will be showcasing the Fusion Distributed Order Orchestration solution and how companies can orchestrate orders from multiple order capture systems and route them to the appropriate fulfillment system. Sachin Patel, Director of Product Strategy for Product MDM, will highlight the business pain points in consolidating and commercializing data from a multichannel commerce point of view, and how Fusion Product Hub helps by allowing you to provide a single source of truth to drive a singular and rich customer experience.

    Oracle Fusion Supply Chain Management: Customer Adoption and Experiences
    Wednesday, October 3rd, 10:15 am, Moscone West 2003

    This is a great session to attend to learn about how Fusion Supply Chain Management and Fusion Product Hub early adopters, including Boeing and Fideltronik, are leveraging Fusion Applications to improve their supply chain operations. Have a great OpenWorld and see you soon!

    Read the article

  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy

    Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive, and custom solution to tackle this problem, as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice: Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it can improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick-and-mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems, or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Jon Chorley
    Chief Sustainability Officer & Vice President, SCM Product Strategy
    Oracle Corporation

    Read the article

  • ifconfig networking telnet

    - by jhon
    Hi guys, I'm a newbie around networking. I have a question: what I want is to telnet to a specific IP/server, let us say 192.168.128.1. Then I try:

        $ telnet 192.168.128.1
        Trying...

    and that's all; I never get connected. One of my friends made some script that "fixed" it; AFTER running it I was able to connect to the server using:

        $ telnet 192.168.128.1
        $ user:

    Unfortunately I lost that script, so I'm here requesting your help. Reading my old notes, I remember that the script performed some modification to the entries listed by ifconfig -a. I also have the ifconfig output (copy & paste):

        $ ifconfig -a
        adapter0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
                inet 192.168.128.150 netmask 0xffffff00 broadcast 192.168.128.255
                tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        adapter1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
                inet 192.168.251.150 netmask 0xffffff00 broadcast 192.168.251.255
                tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        adapter2: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
                inet 192.168.250.150 netmask 0xffffff00 broadcast 192.168.250.255
                tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
                inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
                inet6 ::1/0
                tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

    More than commands, I'm looking for some explanation of why "adding/changing" those entries enables me to connect to the server. I do not see the server IP (i.e. 192.168.128.1) listed above. Thanks.
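
    Not from the thread, just a hedged starting point: adapter0's address, 192.168.128.150/24, is on the same subnet as 192.168.128.1, so basic reachability and routing are the first things to rule out; the lost script may simply have added a route or an interface alias. A sketch of the diagnostics (the flags above look AIX-like, so exact syntax may vary by OS):

        # is the host reachable at all, and which way do packets go?
        ping 192.168.128.1
        netstat -rn

        # if the routing table has no entry for that subnet, a route along these
        # lines is the usual fix (illustrative only)
        route add -net 192.168.128.0 -netmask 255.255.255.0 192.168.128.150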

    Read the article

  • Comparing RPG to C# and SQL.

    - by Kevin
    In an RPG program (one of IBM's languages on the AS/400) I can "chain" out to a file to see if a record (say, a certain customer record) exists in the file. If it does, I can update that record instantly with new data. If the record doesn't exist, I can write a new record. The code would look like this:

    Customer   Chain   CustFile   71   ;turn on indicator 71 if not found
    if *in71                           ;if 71 is "on" (record not found)
        eval CustID = Customer
        eval CustCredit = 10000
        write CustRecord
    else                               ;71 not on, record found
        CustCredit = 10000
        update CustRecord
    endif

    Not being really familiar with SQL/C#, I'm wondering if there is a way to do a random retrieval from a file (which is what "chain" does in RPG). Basically I want to see if a record exists. If it does, update the record with some new information. If it does not, then I want to write a new record. I'm sure it's possible, but not quite sure how to go about doing it. Any advice would be greatly appreciated.
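    In C# against SQL Server, one way to express the same check-then-update-or-insert pattern (an "upsert") is a single parameterized batch. Here's a minimal sketch, assuming a hypothetical Customers table with CustID and CustCredit columns; the table, column, and method names are illustrative, not taken from the question:

    // Sketch of the RPG chain/update/write pattern as a SQL upsert.
    // Customers, CustID, and CustCredit are hypothetical names.
    using System.Data.SqlClient;

    class CustomerRepository
    {
        public void SetCredit(string connectionString, int custId, decimal credit)
        {
            const string sql =
                @"IF EXISTS (SELECT 1 FROM Customers WHERE CustID = @id)
                      UPDATE Customers SET CustCredit = @credit WHERE CustID = @id
                  ELSE
                      INSERT INTO Customers (CustID, CustCredit) VALUES (@id, @credit)";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@id", custId);       // the key you would "chain" on
                cmd.Parameters.AddWithValue("@credit", credit);   // the new field value
                conn.Open();
                cmd.ExecuteNonQuery();  // existence check and branch run in one round trip
            }
        }
    }

    SQL Server 2008 and later also offer a MERGE statement that folds the existence check and both branches into one SQL construct.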

    Read the article

  • filterSecurityInterceptor and metadatasource implementation spring-security

    - by Mike
    Hi! I created a class that implements the FilterInvocationSecurityMetadataSource interface. I implemented getAttributes like this:

    public List<ConfigAttribute> getAttributes(Object object) {
        FilterInvocation fi = (FilterInvocation) object;
        Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal();
        Long companyId = ((ExtenededUser) principal).getCompany().getId();
        String url = fi.getRequestUrl();
        // String httpMethod = fi.getRequest().getMethod();
        List<ConfigAttribute> attributes = new ArrayList<ConfigAttribute>();
        FilterSecurityService service = (FilterSecurityService) SpringBeanFinder.findBean("filterSecurityService");
        Collection<Role> roles = service.getRoles(companyId);
        for (Role role : roles) {
            for (View view : role.getViews()) {
                if (view.getUrl().equalsIgnoreCase(url))
                    attributes.add(new SecurityConfig(role.getName() + "_" + role.getCompany().getName()));
            }
        }
        return attributes;
    }

    When I debug my application I see that it reaches this class, but it only reaches the getAllConfigAttributes method, which is empty, and returns null. After that it prints this warning:

    Could not validate configuration attributes as the SecurityMetadataSource did not return any attributes from getAllConfigAttributes().

    My applicationContext-security is like this:

    <beans:bean id="filterChainProxy" class="org.springframework.security.web.FilterChainProxy">
        <filter-chain-map path-type="ant">
            <filter-chain filters="sif,filterSecurityInterceptor" pattern="/**" />
        </filter-chain-map>
    </beans:bean>
    <beans:bean id="filterSecurityInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor">
        <beans:property name="authenticationManager" ref="authenticationManager" />
        <beans:property name="accessDecisionManager" ref="accessDecisionManager" />
        <beans:property name="securityMetadataSource" ref="filterSecurityMetadataSource" />
    </beans:bean>
    <beans:bean id="filterSecurityMetadataSource" class="com.mycompany.filter.FilterSecurityMetadataSource">
    </beans:bean>

    What could be the problem?

    Read the article

  • Illegal offset type error when => value is omitted

    - by thomas
    Line 3 of the code below fails if I omit the => $v portion. I get the following error:

    Warning: Illegal offset type in /home/site/page.php on line 404

    When the [$k] in line 5 is changed to ['$k'], I receive the following error:

    Notice: Undefined index: $k in /home/site/page.php on line 404

    When it is like below, with the full $k => $v, everything works though. I don't even use $v. Why do I need it in the foreach loop to make it work then?

    <? if ( $arr['status']['chain'] ) {
        foreach ( $arr['status']['chain'] as $k => $v ) { ?>
            <tr>
                <td class="line_item_data status_td">
                    <?= $arr['status']['chain'][$k]['message'] ?>
                </td>
                <td align="center">
                    <img src="images/green_check.gif" width="20" />
                </td>
            </tr>
    <? }
    } ?>

    I did see this answer, but don't know that it really applies. Thanks so much!

    Read the article

  • WCF newbie - how to install and use a SSL certificate?

    - by Shaul
    This should be a snap for anyone who's done it before... I'm trying to set up a self-hosted WCF service using NetTcpBinding. I got a trial SSL certificate from Thawte and successfully installed it in my IIS store, and I think I've got it correctly set up in the service; at least it doesn't throw an exception at me! Now I'm trying to connect the client (this is still all on my dev machine), and it's giving me an error:

    Message = "The X.509 certificate CN=ssl.mydomain.com, OU=For Test Purposes Only. No assurances., OU=IT, O=My Company, L=My Town, S=None, C=IL chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider."

    Ooookeeeey... now what? Client code (I want to do this in code, not app.config):

    var baseAddress = "localhost";
    var factory = new DuplexChannelFactory<IMyWCFService>(new InstanceContext(SiteServer.Instance));
    factory.Endpoint.Address = new EndpointAddress("net.tcp://{0}:8000/".Fmt(baseAddress));
    var binding = new NetTcpBinding(SecurityMode.Message);
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
    factory.Endpoint.Binding = binding;
    var u = factory.Credentials.UserName;
    u.UserName = userName;
    u.Password = password;
    return factory.CreateChannel();
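    The error means the client cannot build a trusted chain for the test certificate: Thawte's trial root CA is not in the client machine's trusted root store. Two common ways out during development are installing the trial root CA certificate into the Trusted Root Certification Authorities store, or relaxing the client's certificate validation mode. Below is a minimal sketch of the second option, assuming the factory variable from the client code above; PeerTrust validates against the certificate itself in the TrustedPeople store rather than building a CA chain. This is a development-only workaround, not production guidance:

    // Development-only sketch: run before factory.CreateChannel().
    // Requires the service certificate to be installed in the TrustedPeople store.
    using System.Security.Cryptography.X509Certificates;
    using System.ServiceModel.Security;

    factory.Credentials.ServiceCertificate.Authentication.CertificateValidationMode =
        X509CertificateValidationMode.PeerTrust;              // or PeerOrChainTrust
    factory.Credentials.ServiceCertificate.Authentication.TrustedStoreLocation =
        StoreLocation.LocalMachine;

    For production, replace the trial certificate with one issued by a trusted CA so the default ChainTrust validation succeeds unchanged.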

    Read the article

  • Suffix if statement

    - by Aardschok
    I was looking for a way to add a suffix to a joint chain in Maya. The joint chain has specific naming, so I create a list with the names the joints need to be. The first chain has "_1" as its suffix; result:

    R_Clavicle_1|R_UpperArm_1|R_UnderArm_1|R_Wrist_1

    When I create the second chain, this is the result:

    R_Clavicle_2|R_UpperArm_1|R_UnderArm_1|R_Wrist_1

    The code:

    DRClavPos = cmds.xform('DRClavicle', q=True, ws=True, t=True)
    DRUpArmPos = cmds.xform('DRUpperArm', q=True, ws=True, t=True)
    DRUnArmPos = cmds.xform('DRUnderArm', q=True, ws=True, t=True)
    DRWristPos = cmds.xform('DRWrist', q=True, ws=True, t=True), cmds.xform('DRWrist', q=True, os=True, ro=True)

    suffix = 1
    jntsA = cmds.ls(type="joint", long=True)
    while True:
        jntname = ["R_Clavicle_" + str(suffix), "R_UpperArm_" + str(suffix),
                   "R_UnderArm_" + str(suffix), "R_Wrist_" + str(suffix)]
        if jntname not in jntsA:
            cmds.select(d=True)
            cmds.joint(p=DRClavPos)
            cmds.joint(p=DRUpArmPos)
            cmds.joint('joint1', e=True, zso=True, oj='xyz', radius=0.5, n=jntname[0])
            cmds.joint(p=DRUnArmPos)
            cmds.joint('joint2', e=True, zso=True, oj='xyz', radius=0.5, n=jntname[1])
            cmds.joint(p=DRWristPos[0])
            cmds.joint('joint3', e=True, zso=True, oj='xyz', radius=0.5, n=jntname[2])
            cmds.rename('joint4', jntname[3])
            cmds.select(cl=True)
            break
        else:
            suffix + 1

    I tried adding +1 in jntname, which resulted in a good second chain, but the third chain had "_2" after R_Clavicle_3. The code, in my eyes, should work. Can anybody point me in the correct direction :)

    Read the article

  • Can I pass a non-generic type where a generic type is expected?

    - by Water Cooler v2
    I want to define a set of classes that collect and persist data. I want to call them either on an on-demand basis or in a chain-of-responsibility fashion, as the caller pleases. To support the chaining, I have declared my interface like so:

    interface IDataManager<T, K>
    {
        T GetData(K args);
        void WriteData(Stream stream);
        void WriteData(T data, Stream stream);
        IDataCollectionPolicy Policy { get; set; }
        IDataManager<T, K> NextDataManager { get; set; }
    }

    But the T's and K's for each concrete type will be different. If I declare the link like this:

    IDataManager<T, K> NextDataManager { get; set; }

    I assume that the calling code will only be able to chain types that have the same T's and K's. Is there a way I can have it chain any type of IDataManager? One thing that occurs to me is to have IDataManager<T, K> inherit from a non-generic IDataManager, like so:

    interface IDataManager
    {
    }

    interface IDataManager<T, K> : IDataManager
    {
        T GetData(K args);
        void WriteData(Stream stream);
        void WriteData(T data, Stream stream);
        IDataCollectionPolicy Policy { get; set; }
        IDataManager NextDataManager { get; set; }
    }

    Is this going to work?
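    For reference, this is the same pattern the .NET framework itself uses with IEnumerable and IEnumerable<T>: a non-generic base interface lets heterogeneous generic instances share one reference type. The catch is that the base interface must carry every member the chain needs to call without knowing T and K, otherwise the link is useless. A minimal sketch of that idea, with illustrative type names (CustomerManager is hypothetical, not from the question):

    // Sketch: the non-generic base carries the chain link plus the operations
    // callable on any link; the generic interface adds the typed members.
    using System.IO;

    interface IDataManager
    {
        IDataManager NextDataManager { get; set; }
        void WriteData(Stream stream);   // callable on every link in the chain
    }

    interface IDataManager<T, K> : IDataManager
    {
        T GetData(K args);
        void WriteData(T data, Stream stream);
    }

    class CustomerManager : IDataManager<string, int>
    {
        public IDataManager NextDataManager { get; set; }

        public string GetData(int args) { return "customer " + args; }

        public void WriteData(Stream stream)
        {
            // persist this manager's data, then pass the request down the chain
            if (NextDataManager != null)
                NextDataManager.WriteData(stream);
        }

        public void WriteData(string data, Stream stream) { /* persist data */ }
    }

    A chain can then mix, say, a CustomerManager with an IDataManager<Order, Guid> implementation, because every link is addressed through the non-generic IDataManager. The trade-off: through that reference only the non-generic members are visible, so calling GetData requires a cast back to the specific IDataManager<T, K>.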

    Read the article
