Search Results

Search found 14127 results on 566 pages for 'plz send me teh codez'.


  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of my company's customers has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN.
    I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I connected a PC directly to our provider's network connection, bypassing our firewall completely, and still it didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach, so they said there's no problem. :(
    I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (I've tried another random port and that gives me an error message, as it should). Still, from any browser on any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; the longer string is my PC's local IP address, the shorter one is the customer's public IP.
    I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details.
    Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a web page (it's the logon page for the customer's firewall admin interface: there's the firewall brand's logo and there are fields to enter a user id and a password).
    Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did:
      - Ran a Kubuntu 12.04 live distro on a spare laptop;
      - Updated all the packages I could and installed Wireshark;
      - Attached it to my LAN and verified that I couldn't open https://<customer's IP>;
      - Verified that the Wireshark trace for this attempt was the same as the one I've already posted;
      - Verified that I could connect to another customer's host using rdesktop (it worked);
      - Tried to rdesktop to <customer's IP>, here's the output:
          kubuntu@kubuntu:/etc$ rdesktop <customer's IP>
          Autoselected keyboard map en-us
          ERROR: recv: Connection reset by peer
      - Disconnected the laptop from the LAN;
      - Disconnected the firewall from the Extranet connection and connected the laptop instead. Set its network configuration so that I could access the Internet;
      - Verified that I could connect to other websites in HTTP and HTTPS and in RDP to other customers' hosts - it all worked as expected;
      - Verified that I could still traceroute to <customer's IP>: I could;
      - Verified that I still couldn't open https://<customer's IP> (same exact result as before);
      - Checked the Wireshark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all;
      - Tried to run rdesktop again, with a slightly different result:
          kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP>
          Autoselected keyboard map en-us
          ERROR: <customer's IP>: unable to connect
      - Finally gave up, put everything back as it was before, turned off the laptop and lost the Wireshark traces I had saved. :( I still remember them very well though. :)
    Can you get anything out of it? Thank you very much.
    Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get:
      user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443
      CONNECTED(00000003)
    If I now type GET / the output pauses for several seconds and then I get:
      write:errno=104
    I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks.
    Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies; that's why I get a connection using telnet or the OpenSSL client. This is the Wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all that took the time to read my question and post suggestions. I'll update this post again.
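
    A minimal sketch of the kind of side-by-side check described above, assuming a Linux host with curl and tcpdump available; 203.0.113.10 stands in for the customer's public IP:
      CUSTOMER_IP=203.0.113.10    # placeholder for the customer's public address
      # Capture only this conversation while the tests run.
      sudo tcpdump -ni any -w customer.pcap host "$CUSTOMER_IP" &
      CAPTURE_PID=$!
      # Full TLS handshake plus HTTP request, with explicit timeouts so a silent
      # drop shows up as a timeout instead of an endless "loading" state.
      curl -vk --connect-timeout 10 --max-time 30 "https://$CUSTOMER_IP/" || echo "HTTPS request failed"
      # Bare TCP connect to 443 - roughly what the telnet test proved.
      timeout 10 bash -c "cat < /dev/null > /dev/tcp/$CUSTOMER_IP/443" && echo "TCP connect OK"
      sudo kill "$CAPTURE_PID"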


  • With dnsmasq as the DNS server, 'dig' and 'ping' succeed while 'nslookup' fails

    - by einpoklum
    I installed dnsmasq on a machine of mine (it's a Kubuntu 12.04 LTS), backed only by /etc/hosts (no connection to the Internet until later). Now, if I dig mymachine, I get 192.168.0.1, but if I try to nslookup mymachine, I get:
      >> connection timed out; no servers could be reached
    I also tried nslookup mymachine.mynicedomain.org - that didn't work either. Pinging (Edit:) succeeds. This happens both on the server machine itself and on other machines on the network. How can I get the DNS lookups to work? What problem is preventing nslookup from succeeding?
    Additional information:
    In the server's /etc/hosts:
      192.168.0.1 mymachine
    In the server's nsswitch.conf (admittedly this is a bit weird, but I also tried "hosts: files dns" instead, with the same effect):
      hosts: files mdns4_mininal [NOTFOUND=return] dns mdns4
    In resolv.conf (which is generated by dnsmasq):
      nameserver 127.0.0.1
      search mynicedomain.org
    In the server's /etc/hosts.allow:
      domain: ALL
    In the other machines' /etc/resolv.conf (this is set by the DHCP client):
      nameserver 192.168.0.1
      search mynicedomain.org
    Relevant netstat output on the server:
      Proto Recv-Q Send-Q Local Address    Foreign Address  State
      tcp   0      0      127.0.0.1:53     0.0.0.0:*        LISTEN
      tcp   0      0      192.168.0.1:53   0.0.0.0:*        LISTEN
    Finally, here's the ipconfig output from one of the client machines on the network (running Windows 7):
      Connection-specific DNS Suffix . : mynicedomain.org
      Description . . . . . . . . . . . : Intel(R) 82579LM Gigabit Network Connection
      Physical Address. . . . . . . . . : 12-34-56-78-9A-BC
      DHCP Enabled. . . . . . . . . . . : Yes
      Autoconfiguration Enabled . . . . : Yes
      IPv4 Address. . . . . . . . . . . : 192.168.0.50(Preferred)
      Subnet Mask . . . . . . . . . . . : 255.255.255.0
      Lease Obtained. . . . . . . . . . : Sunday, October 20th 2013 16:20:25
      Lease Expires . . . . . . . . . . : Sunday, October 20th 2013 18:20:24
      Default Gateway . . . . . . . . . : 192.168.0.1
      DHCP Server . . . . . . . . . . . : 192.168.0.1
      DNS Servers . . . . . . . . . . . : 192.168.0.1
      NetBIOS over Tcpip. . . . . . . . : Enabled
    Notes: May be related to this question.
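
    A sketch of how to take the client-side resolver out of the picture and query the dnsmasq instance directly with both tools (addresses taken from the question):
      # Query the server explicitly so nsswitch, mDNS and the IPv6 loopback
      # cannot influence the result.
      dig @192.168.0.1 mymachine +short
      dig @192.168.0.1 mymachine.mynicedomain.org +short
      nslookup mymachine 192.168.0.1
      nslookup mymachine.mynicedomain.org 192.168.0.1
      # On the server: do nslookup's queries even arrive, and on which address?
      sudo tcpdump -ni any udp port 53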


  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel that I have come close to what the issue is, but still don't fully understand what is happening.
    I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node.
    When failing over the cluster to the "secondary" node, which for instance is on the same subnet as the "primary", the failover initially works OK and the cluster resource continues to work for a couple of minutes on the new node. Which means that the receiving node does send out a gratuitous ARP reply packet that updated the ARP tables on the network. But after x amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service does not answer to pings.
    So basically I start a ping to the Exchange cluster address while it is running on the "primary" node. It works just great. I fail the cluster resource group over to the "secondary" node and I only lose one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and all of a sudden the ping starts timing out. This is telling me that the ARP table is initially updated by the secondary node, but then something (which I haven't found out yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem?
    The cluster is NOT running NLB, and the problem stops immediately after failing back over to the primary node, where there are no problems. Each node is using NIC teaming (Intel) with ALB. Each node is on the same subnet and has its gateway and so on entered correctly, as far as I am concerned.
    Edit: I was wondering if it could be related to network binding order maybe? Because I have noticed that the only difference I can see from node to node is when showing the local ARP table. On the "primary" node the ARP table is generated with the cluster address as the source, while on the "secondary" it is generated from the node's own network card. Any input on this?
    Edit: OK, here is the connection layout.
      Cluster address: A.B.6.208/25
      Exchange application address: A.B.6.212/25
      Node A: 3 physical NICs. Two are teamed using Intel's teaming with the address A.B.6.210/25, called "public". The last one is used for cluster traffic, called "private", with 10.0.0.138/24.
      Node B: 3 physical NICs. Two are teamed using Intel's teaming with the address A.B.6.211/25, called "public". The last one is used for cluster traffic, called "private", with 10.0.0.139/24.
    Each node sits in a separate datacenter, connected together. End switches are Cisco in DC1 and Nexus 5000/2000 in DC2.
    Edit: I have been testing a little more. I have now created an empty application on the same cluster and given it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see the exact same problem occurring. After one or two minutes clients on other subnets cannot ping the virtual IP of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging. And if I then make another failover back to the original state, the situation is the opposite: now clients on the same subnet cannot, and on other subnets they can.
    We have another cluster set up the same way and on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings. There we are not seeing this. So it's somewhat confusing.
    Edit: OK, done some more research. Removed the NIC teaming of the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above. So it is somehow related to the teaming - maybe some kind of bug?
    Edit: Did some more failing over without being able to make it fail. So removing the NIC team looks like it was a workaround. Now I have tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem. It just seems to be some kind of MS/Intel hiccup - which is hard to accept, because what if the problem reoccurs in 14 days? There is a strange thing that happened, though. After recreating the NIC team I was not able to rename the team to "PUBLIC", which the old team was called. So something has not been cleaned up in Windows - although the server HAS been restarted!
    Edit: OK, after re-establishing the ALB teaming the error came back. So I am now going to do some thorough testing and I will get back with my observations. One thing is for sure: it is related to Intel 82575EB NICs, ALB and gratuitous ARP.
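
    A sketch of how the ARP behaviour around a failover could be watched from a Linux box on the same subnet, assuming iputils arping and tcpdump; A.B.6.212 is the anonymised Exchange application address from the question:
      # Watch which MAC claims the clustered address before, during and after a failover.
      sudo tcpdump -eni eth0 arp and host A.B.6.212
      # In a second terminal: ask for the address explicitly and show the local cache.
      sudo arping -c 3 -I eth0 A.B.6.212
      ip neigh show to A.B.6.212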


  • help setting up an IPSEC vpn from my linux box

    - by robthewolf
    I have an office with a router and a remote server (Linux - Ubuntu 10.10). Both locations need to connect to a data supplier through a VPN. The VPN is an IPSEC gateway. I was able to configure my Linksys rv42 router to create a VPN connection successfully and now I need to do the same for Linux server. I have been messing around with this for too long. First I tried OpenVPN, but that is SSL and not IPSEC. Then I tried Shrew. I think I have the settings correct but I haven't been able to create the connection. It maybe that I have to use something else like a direct IPSEC config or something like that. If someone knows of a way to turn the following settings that I have been given below into a working IPSEC VPN connection I would be very grateful. Here are the settings I was given that must be used to connect to my supplier: Local destination network: 192.168.4.0/24 Local destination hosts: 192.168.4.100 Remote destination network: 192.167.40.0/24 Remote destination hosts: 192.168.40.27 VPN peering point: xxx.xxx.xxx.xxx Then they have given me the following details: IPSEC/ISAKMP Phase 1 Parameters: Authentication method: pre shared secret Diffie Hellman group: group 2 Encryption Algorithm: 3DES Lifetime in seconds:28800 Phase 2 parameters: IPSEC security: ESP Encryption algortims: 3DES Authentication algorithms: MD5 lifetime in seconds: 28800 pfs: disabled Here are the settings from my attempt to use shrew: n:version:2 n:network-ike-port:500 n:network-mtu-size:1380 n:client-addr-auto:0 n:network-frag-size:540 n:network-dpd-enable:1 n:network-notify-enable:1 n:client-banner-enable:1 n:client-dns-used:1 b:auth-mutual-psk:YjJzN2QzdDhyN2EyZDNpNG42ZzQ= n:phase1-dhgroup:2 n:phase1-keylen:0 n:phase1-life-secs:28800 n:phase1-life-kbytes:0 n:vendor-chkpt-enable:0 n:phase2-keylen:0 n:phase2-pfsgroup:-1 n:phase2-life-secs:28800 n:phase2-life-kbytes:0 n:policy-nailed:0 n:policy-list-auto:1 n:client-dns-auto:1 n:network-natt-port:4500 n:network-natt-rate:15 s:client-dns-addr:0.0.0.0 s:client-dns-suffix: s:network-host:xxx.xxx.xxx.xxx s:client-auto-mode:pull s:client-iface:virtual s:client-ip-addr:192.168.4.0 s:client-ip-mask:255.255.255.0 s:network-natt-mode:enable s:network-frag-mode:disable s:auth-method:mutual-psk s:ident-client-type:address s:ident-client-data:192.168.4.0 s:ident-server-type:address s:ident-server-data:192.168.40.0 s:phase1-exchange:aggressive s:phase1-cipher:3des s:phase1-hash:md5 s:phase2-transform:3des s:phase2-hmac:md5 s:ipcomp-transform:disabled Finally here is the debug output from the shrew log: 10/12/22 17:22:18 ii : ipc client process thread begin ... 
10/12/22 17:22:18 < A : peer config add message 10/12/22 17:22:18 DB : peer added ( obj count = 1 ) 10/12/22 17:22:18 ii : local address 217.xxx.xxx.xxx selected for peer 10/12/22 17:22:18 DB : tunnel added ( obj count = 1 ) 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : client config message 10/12/22 17:22:18 < A : local id '192.168.4.0' message 10/12/22 17:22:18 < A : remote id '192.168.40.0' message 10/12/22 17:22:18 < A : preshared key message 10/12/22 17:22:18 < A : peer tunnel enable message 10/12/22 17:22:18 DB : new phase1 ( ISAKMP initiator ) 10/12/22 17:22:18 DB : exchange type is aggressive 10/12/22 17:22:18 DB : 217.xxx.xxx.xxx:500 <- 206.xxx.xxx.xxx:500 10/12/22 17:22:18 DB : c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 DB : phase1 added ( obj count = 1 ) 10/12/22 17:22:18 : security association payload 10/12/22 17:22:18 : - proposal #1 payload 10/12/22 17:22:18 : -- transform #1 payload 10/12/22 17:22:18 : key exchange payload 10/12/22 17:22:18 : nonce payload 10/12/22 17:22:18 : identification payload 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v00 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v01 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v02 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v03 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( rfc ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports DPDv1 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SHREW SOFT compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is NETSCREEN compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SIDEWINDER compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is CISCO UNITY compatible 10/12/22 17:22:18 = : cookies c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 = : message 00000000 10/12/22 17:22:18 - : send IKE packet 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 ( 484 bytes ) 10/12/22 17:22:18 DB : phase1 resend event scheduled ( ref count = 2 ) 10/12/22 17:22:18 ii : opened tap device tap0 10/12/22 17:22:28 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:38 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:48 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:58 ii : resend limit exceeded for phase1 exchange 10/12/22 17:22:58 ii : phase1 removal before expire time 10/12/22 17:22:58 DB : phase1 deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : closed tap device tap0 10/12/22 17:22:58 DB : tunnel stats event canceled ( ref count = 1 ) 10/12/22 17:22:58 DB : removing tunnel config references 10/12/22 17:22:58 DB : removing tunnel phase2 references 10/12/22 17:22:58 DB : removing tunnel phase1 references 10/12/22 17:22:58 DB : tunnel deleted ( obj count = 0 ) 10/12/22 17:22:58 DB : removing all peer tunnel refrences 10/12/22 17:22:58 DB : peer deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : ipc client process thread exit ...
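
    For comparison, a hypothetical strongSwan (IKEv1) configuration built from the supplier's parameter sheet above - a different client than Shrew, with the phase 1 hash (MD5) taken from the Shrew profile rather than the sheet; the peer address and secret are placeholders:
      # /etc/ipsec.conf
      conn supplier
          keyexchange=ikev1
          # the Shrew profile above also used aggressive mode
          aggressive=yes
          authby=secret
          # phase 1: 3DES, DH group 2 (MD5 assumed from the Shrew profile)
          ike=3des-md5-modp1024!
          ikelifetime=28800s
          # phase 2: ESP with 3DES/MD5; no DH group listed = PFS disabled
          esp=3des-md5!
          lifetime=28800s
          left=%defaultroute
          leftsubnet=192.168.4.0/24
          right=XXX.XXX.XXX.XXX
          rightsubnet=192.168.40.0/24
          auto=start

      # /etc/ipsec.secrets
      : PSK "replace-with-the-shared-secret"

      # then bring the tunnel up:
      sudo ipsec restart && sudo ipsec up supplier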


  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2
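
    A sketch of how to confirm which JVM the init script really starts and to pin it, assuming the stock catalina.sh is in use (it sources bin/setenv.sh if present; adjust if /etc/init.d/tomcat exports JAVA_HOME itself):
      # Which JVM is the running Tomcat actually using?
      ps -ef | grep '[j]ava'
      # catalina.out and the ps output above point at java-6-oracle even though
      # JRE_HOME reports java-7-oracle, so pin the JVM explicitly.
      echo 'JAVA_HOME=/usr/lib/jvm/java-7-oracle' | sudo tee -a /usr/local/tomcat/bin/setenv.sh
      sudo /etc/init.d/tomcat stop && sudo /etc/init.d/tomcat start
      # Confirm the HTTP connector is listening before re-trying telnet.
      sudo netstat -plnt | grep ':8080'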


  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day and I would greatly appreciate a few more eyes on the problem at hand. We have an in-house SVN Server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN Server from a physical machine to a vSphere VM. Now, for some reason that continues to fathom me, we can no longer connect to the SVN Server. The SVN Server runs CentOS 6.2, Apache and SVN 1.7.2. SELinux is well and trully disabled and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN but the same system worked previously so I don't think that this is the issue. Of note, if I have iptables enabled, using service iptables status, I can see a single packet coming in and being accepted but the production server simply hangs on any svn command. If I give up waiting and do a CTRL-C to break the process I get a "could not connect to server". To me it appears to be something to do with the SVN Server rejecting external connections but I have no idea how this would happen. Any thoughts on what I can try from here? Thanks, Rob Edit: Network topology Production server sits externally to our in-house SVN server. Our IPCop (?) firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN Server. The hardware is all pretty decent and I don't doubt that its doing its job correctly, especially as iptables is seeing the new connections. subversion.conf (in /etc/httpd/conf.d) LoadModule dav_svn_module modules/mod_dav_svn.so <Location /repos> DAV svn SVNPath /var/svn/repos <LimitExcept PROPFIND OPTIONS REPORT> AuthType Basic AuthName "SVN Server" AuthUserFile /var/svn/svn-auth Require valid-user </LimitExcept> </Location> ifconfig eth0 Link encap:Ethernet HWaddr 00:0C:29:5F:C8:3A inet addr:172.16.0.14 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:32317 errors:0 dropped:0 overruns:0 frame:0 TX packets:632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2544036 (2.4 MiB) TX bytes:143207 (139.8 KiB) netstat -lntp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1484/mysqld tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1135/rpcbind tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1351/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1230/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1575/master tcp 0 0 0.0.0.0:58401 0.0.0.0:* LISTEN 1153/rpc.statd tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 1626/qpidd tcp 0 0 :::139 :::* LISTEN 1678/smbd tcp 0 0 :::111 :::* LISTEN 1135/rpcbind tcp 0 0 :::80 :::* LISTEN 1615/httpd tcp 0 0 :::22 :::* LISTEN 1351/sshd tcp 0 0 ::1:631 :::* LISTEN 1230/cupsd tcp 0 0 ::1:25 :::* LISTEN 1575/master tcp 0 0 :::445 :::* LISTEN 1678/smbd tcp 0 0 :::56799 :::* LISTEN 1153/rpc.statd iptables --list -v -n (when iptables is stopped) Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination iptables --list -v -n (when iptables is running, after one attempted svn connection) Chain INPUT (policy ACCEPT 68 
packets, 6561 bytes) pkts bytes target prot opt in out source destination 19 1304 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:80 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes) pkts bytes target prot opt in out source destination tcpdump 17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0 17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0 17:08:19.655317 IP 'svn server'.local.http > 'production server'k.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0
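
    Since the tcpdump above shows the SYN/ACK being retransmitted, one quick and reversible thing to test after a physical-to-virtual move is NIC offloading on the VM - a sketch, assuming ethtool is installed and eth0 faces the firewall:
      # Show current offload settings, then turn them off for a test (reverts on reboot).
      sudo ethtool -k eth0
      sudo ethtool -K eth0 tso off gso off gro off tx off rx off
      # Watch the handshake while a client retries; a completed three-way handshake
      # followed by an HTTP request would point at offloading as the culprit.
      sudo tcpdump -ni eth0 'tcp port 80 and host <production server IP>'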


  • Improving TCP performance over a gigabit network lots of connections and high traffic for storage and streaming services

    - by Linux Guy
    I have two servers, Both servers hardware Specification are Processor : Dual Processor RAM : over 128 G.B Hard disk : SSD Hard disk Outging Traffic bandwidth : 3 Gbps network cards speed : 10 Gbps Server A : for Encoding videos Server B : for storage videos andstream videos over web interface like youtube The inbound bandwidth between two servers is 10Gbps , the outbound bandwidth internet bandwidth is 500Mpbs Both servers using public ip addresses in public and private network Both servers transfer and connection on nginx port , and the server B used for streaming media , like youtube stream videos Both servers in same network , when i do ping from Server A to Server B i got high time latency above 1.0ms , the time range time=52.7 ms to time=215.7 ms - This is the output of iftop utility 353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb mqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqq server.example.com => ip.address 6.36Mb 4.31Mb 1.66Mb <= 158Kb 94.8Kb 35.1Kb server.example.com => ip.address 1.23Mb 4.28Mb 1.12Mb <= 17.1Kb 83.5Kb 21.9Kb server.example.com => ip.address 395Kb 3.89Mb 1.07Mb <= 6.09Kb 109Kb 28.6Kb server.example.com => ip.address 4.55Mb 3.83Mb 1.04Mb <= 55.6Kb 45.4Kb 13.0Kb server.example.com => ip.address 649Kb 3.38Mb 1.47Mb <= 9.00Kb 38.7Kb 16.7Kb server.example.com => ip.address 5.00Mb 3.32Mb 1.80Mb <= 65.7Kb 55.1Kb 29.4Kb server.example.com => ip.address 387Kb 3.13Mb 1.06Mb <= 18.4Kb 39.9Kb 15.0Kb server.example.com => ip.address 3.27Mb 3.11Mb 1.01Mb <= 81.2Kb 64.5Kb 20.9Kb server.example.com => ip.address 1.75Mb 3.08Mb 2.72Mb <= 16.6Kb 35.6Kb 32.5Kb server.example.com => ip.address 1.75Mb 2.90Mb 2.79Mb <= 22.4Kb 32.6Kb 35.6Kb server.example.com => ip.address 3.03Mb 2.78Mb 1.82Mb <= 26.6Kb 27.4Kb 20.2Kb server.example.com => ip.address 2.26Mb 2.66Mb 1.36Mb <= 51.7Kb 49.1Kb 24.4Kb server.example.com => ip.address 586Kb 2.50Mb 1.03Mb <= 4.17Kb 26.1Kb 10.7Kb server.example.com => ip.address 2.42Mb 2.49Mb 2.44Mb <= 31.6Kb 29.7Kb 29.9Kb server.example.com => ip.address 2.41Mb 2.46Mb 2.41Mb <= 26.4Kb 24.5Kb 23.8Kb server.example.com => ip.address 2.37Mb 2.39Mb 2.40Mb <= 28.9Kb 27.0Kb 28.5Kb server.example.com => ip.address 525Kb 2.20Mb 1.05Mb <= 7.03Kb 26.0Kb 12.8Kb qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq TX: cum: 102GB peak: 1.65Gb rates: 1.46Gb 1.44Gb 1.48Gb RX: 1.31GB 24.3Mb 19.5Mb 18.9Mb 20.0Mb TOTAL: 103GB 1.67Gb 1.48Gb 1.46Gb 1.50Gb I check the transfer speed using iperf utility From Server A to Server B # iperf -c 0.0.0.2 -p 8777 ------------------------------------------------------------ Client connecting to 0.0.0.2, TCP port 8777 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec My Current Connections in Server B # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c 2072 TIME_WAIT 28 SYN_RECV 1 LISTEN 189 LAST_ACK 139 FIN_WAIT2 373 FIN_WAIT1 3381 ESTABLISHED 34 CLOSING Server A Network Card Information Settings for eth0: Supported ports: [ TP ] Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 10000Mb/s 
Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: external Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes Server B Network Card Information Settings for eth2: Supported ports: [ FIBRE ] Supported link modes: 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: No Speed: 10000Mb/s Duplex: Full Port: Direct Attach Copper PHYAD: 0 Transceiver: external Auto-negotiation: off Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes ifconfig server A eth0 Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0 TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0 TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB) ifconfig Server B eth2 Link encap:Ethernet HWaddr 00:25:90:82:C4:FE inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0 TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0 TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB) Netstat -i on Server B # netstat -i Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 9000 0 42098629968 0 2131223717 0 73698797854 0 0 0 BMRU lo 65536 0 1077908 0 0 0 1077908 0 0 0 LRU I Turn up send/receive buffers on the network card to 2048 and problem still persist I increase the MTU for server A and problem still persist and i increase the MTU for server B for better connectivity and transfer speed but it couldn't transfer at all The problem is : as you can see from iperf utility, the transfer speed from server A to server B slow when i restart network service in server B the transfer in server A at full speed, after 2 minutes , it's getting slow How could i troubleshoot slow speed issue and fix it in server B ? Notice : if there any other commands i should execute in servers for more information, so it might help resolve the problem , let me know in comments
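
    A sketch of a first round of tuning and re-testing, assuming a Linux kernel with the usual sysctl knobs and ethtool available; the buffer sizes are illustrative starting points, not recommendations:
      # Record the current values before changing anything.
      sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
      sudo sysctl -w net.core.rmem_max=67108864
      sudo sysctl -w net.core.wmem_max=67108864
      sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 33554432'
      sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 33554432'
      # Both ifconfig outputs above show RX drops; check and enlarge the NIC ring buffers.
      sudo ethtool -g eth2
      sudo ethtool -G eth2 rx 4096 tx 4096
      # Re-test with parallel streams and a longer run than the 10-second default.
      iperf -c 0.0.0.2 -p 8777 -P 8 -t 30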


  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to setup BugZilla to receive but reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this I install test-taint package from the default perl repository, this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I run the checksetup.pl which inform me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC is relent on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry the script times out. How can I get the XML-RPC / Test-Taint function working ? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. 
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
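
    A sketch of how to check Test::Taint and the XML-RPC endpoint independently of each other, assuming the same perl binary Bugzilla uses and any client with curl (Bugzilla.version is a standard WebService method):
      # Does Test::Taint load, and is taint mode actually on, under this perl?
      perl -T -MTest::Taint -e 'print "Test::Taint $Test::Taint::VERSION loaded\n"'
      perl -T -e 'print "taint mode enabled\n" if ${^TAINT}'
      # Exercise the XML-RPC endpoint directly; Bugzilla.version takes no arguments.
      curl -s -H 'Content-Type: text/xml' \
           -d '<?xml version="1.0"?><methodCall><methodName>Bugzilla.version</methodName></methodCall>' \
           http://localhost/xmlrpc.cgi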


  • Packets marked by iptables only sent to the correct routing table sometimes

    - by cookiecaper
    I am trying to route packets generated by a specific user out over a VPN. I have this configuration: $ sudo iptables -S -t nat -P PREROUTING ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A POSTROUTING -o tun0 -j MASQUERADE $ sudo iptables -S -t mangle -P PREROUTING ACCEPT -P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A OUTPUT -m owner --uid-owner guy -j MARK --set-xmark 0xb/0xffffffff $ sudo ip rule show 0: from all lookup local 32765: from all fwmark 0xb lookup 11 32766: from all lookup main 32767: from all lookup default $ sudo ip route show table 11 10.8.0.5 dev tun0 proto kernel scope link src 10.8.0.6 10.8.0.6 dev tun0 scope link 10.8.0.1 via 10.8.0.5 dev tun0 0.0.0.0/1 via 10.8.0.5 dev tun0 $ sudo iptables -S -t raw -P PREROUTING ACCEPT -P OUTPUT ACCEPT -A OUTPUT -m owner --uid-owner guy -j TRACE -A OUTPUT -p tcp -m tcp --dport 80 -j TRACE It seems that some sites work fine and use the VPN, but others don't and fall back to the normal interface. This is bad. This is a packet trace that used VPN: Oct 27 00:24:28 agent kernel: [612979.976052] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976105] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976164] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976210] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976269] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976320] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976367] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976414] TRACE: nat:POSTROUTING:rule:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT 
(020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb and this is one that didn't: Oct 27 00:22:41 agent kernel: [612873.662559] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662609] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662664] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662709] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662761] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662808] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662855] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb I have already tried "ip route flush cache", to no avail. I do not know why the first packet goes through the correct routing table, and the second doesn't. Both are marked. Once again, I do not want ALL packets system-wide to go through the VPN, I only want packets from a specific user (UID=999) to go through the VPN. I am testing ipchicken.com and walmart.com via links, from the same user, same shell. walmart.com appears to use the VPN; ipchicken.com does not. Any help appreciated. Will send 0.5 bitcoins to answerer who makes this fixed.
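
    A sketch of how to ask the kernel directly which table it would use for a marked packet to each of the two destinations from the traces above (requires an iproute2 build whose "ip route get" accepts a mark):
      ip rule show
      ip route show table 11
      sudo ip route get 23.1.17.194 mark 0xb    # destination that did use the VPN
      sudo ip route get 209.68.27.16 mark 0xb   # destination that did not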


  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant of these records (where XXX.XXX.XXX.158 is our dedicated IP): bigjaws.com. A XXX.XXX.XXX.158 mail.bigjaws.com. A XXX.XXX.XXX.158 bigjaws.com MX (10) mail.bigjaws.com. bigjaws.com. TXT v=spf1 +a +mx -all The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to setup anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records: mail.schoox.com. A XXX.XXX.XXX.158 bigjaws.com. MX (10) mail.bigjaws.com bigjaws.com. SPF "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all" From my understanding of SPF, the SPF status should have been passed: I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 are valid/not spam (I added +mx there but I'm not sure if needed). When a mail server receives an email, doesn't it lookup the SPF record of the domain and checks against the IP it got the email from? Checking with spfquery: root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected] StartError Context: Failed to query MAIL-FROM ErrorCode: (2) Could not find a valid SPF record Error: No DNS data for 'bigjaws.com'. EndError noneneutral Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected]; If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!): spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster. Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk). 
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
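
    A sketch of how to verify what receivers actually see, keeping in mind that Gmail reads the policy from the domain's TXT record and that the separate SPF record type is deprecated; the XXX.XXX.XXX.158 placeholder is kept from the question:
      # What is currently published under each record type?
      dig +short TXT bigjaws.com
      dig +short SPF bigjaws.com
      # After adding the same policy as a TXT record in Route53
      # (e.g. "v=spf1 ip4:XXX.XXX.XXX.158 mx ~all"), re-check the TXT output,
      # send a fresh test mail to a Gmail address and inspect its Received-SPF header.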


  • Help with OpenVPN setup on Windows Server 2003

    - by Bill Johnson
    Hi all, Just wondering if someone can assist me further with the set-up of OpenVPN on my Windows Server 2003. I have configured Win Server as per the following guide: http://tinyurl.com/kxusv and I'm now at the stage of Creating the config files. I have a few questions that I need some assistance with. My server IP is 192.168.1.10 and my routers IP address is 192.168.1.1 (the router is a Netgear DGN2000). I have edited the server.ovpn file as per the following: push "dhcp-option DNS X.X.X.X" # Replace the Xs with the IP address of the DNS for your home network (usually your ISP's DNS) push "dhcp-option DNS X.X.X.X" # A second DNS server if you have one to include my ISP DNS and I have not edited anything else. Now my issue is with the client1.opvpn file as per the below: client dev tap #dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name proto udp remote YOURHOST.dyndns.org 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This it the IP address scheme and subnet of your normal network your server is on. Your router would usually be 192.168.1.1 resolv-retry infinite nobind persist-key persist-tun ca "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt" cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.crt" # Change the next two lines to match the files in the keys directory. This should be be different for each client. key "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.key" # This file should be kept secret ns-cert-type server cipher BF-CBC # Blowfish (default) encrytion comp-lzo verb 1 To me it looks like I will need to amend the following: remote YOURHOST.dyndns.org 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This it the IP address scheme and subnet of your normal network your server is on. Your router would usually be 192.168.1.1 So, should the first line be the static IP of the machine that I'm applying this to? The IP address of the server (192.168.1.10) or something else? I'm also stuck on the second part 'route 192.168.1.0 255.255.255.0 vpn_gateway 3' Should this be the router IP which is 192.168.1.1 and the subnet is 255.255.255.0 and that is all I need to alter? The final part that I'm stuggling with is Configuring the router. Basically I have a Netgear DGN2000 and as it mentions that the router should be configured to port forward port 1194 to the server’s IP address of 192.168.1.150 all I have been able to do is in 'Firewall Rules' and on 'Inbound Services', set the Service to 'Any(ALL) and Send to LAN Server point to 1923.168.1.150. I'm not sure if this is correct? It is the following stage of the help guide that I'm struggling with and really need some help with: You need to make sure the port you configured OpenVPN to listen on is forwarded on the router to the IP address of your server. On the WRT54G, port forwarding is configured in the “Applications & Gaming” section. Enter 1194 for the port, UDP for the protocol, and 192.168.1.150 for the IP address. Make sure the entry is enabled and then save the setting. Next, you need to add an entry to the router’s Routing Table. This will enable the router to properly route requests from the clients to the TAP interface of the server. 
On the WRT54G you would go to the “Setup” page and then the “Advanced Routing” section. Enter the following info to make the entry:
      Route Name: openVPN
      Destination LAN IP: 192.168.10.0
      Subnet Mask: 255.255.255.252
      Default Gateway: 192.168.1.150
      Interface: LAN & Wireless
    Once the info has been typed in, make sure you save the setting. Can anyone possibly guide me through setting this part up with my Netgear router? I see that once I have these two parts complete I'm there, so I would really appreciate someone walking me through what is required to complete this. Much appreciated.
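
    A hypothetical filled-in client1.ovpn based on the template above - yourhost.dyndns.org must resolve to the public (WAN) address of the network the server sits on, not to 192.168.1.10, and the route line names the server-side LAN:
      # client1.ovpn (sketch; adjust the remote line to your DynDNS name or static WAN IP)
      client
      dev tap
      proto udp
      remote yourhost.dyndns.org 1194
      # Server-side LAN that should be reachable through the tunnel.
      route 192.168.1.0 255.255.255.0 vpn_gateway 3
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca   "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt"
      cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.crt"
      key  "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.key"
      ns-cert-type server
      cipher BF-CBC
      comp-lzo
      verb 1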

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway This is what is currently installed on my server: Here's the configurations I've created/updated so far, can some one take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in /var/log/php-fpm/error.log file. sidenote: both the nginx and php-fpm has been configured to run under a local account called www-data and the following folders exits on the server nginx.conf global nginx configuration user www-data; worker_processes 6; worker_rlimit_nofile 100000; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; events { worker_connections 2048; use epoll; multi_accept on; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # cache informations about FDs, frequently accessed files can boost performance open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # to boost IO on HDD we can disable access logs access_log off; # copies data between one FD and other from within the kernel # faster then read() + write() sendfile on; # send headers in one peace, its better then sending them one by one tcp_nopush on; # don't buffer data sent, good for small data bursts in real time tcp_nodelay on; # server will close connection after this time keepalive_timeout 60; # number of requests client can make over keep-alive -- for testing keepalive_requests 100000; # allow the server to close connection on non responding client, this will free up memory reset_timedout_connection on; # request timed out -- default 60 client_body_timeout 60; # if client stop responding, free up memory -- default 60 send_timeout 60; # reduce the data that needs to be sent over network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; # Load vHosts include /etc/nginx/conf.d/*.conf; } conf.d/www.domain.com.conf my vhost entry ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. 
{ return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } /etc/php-fpm.d/www-data.conf my php-fpm pool config ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. { return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
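    As a side note for anyone comparing the two files above: the block pasted under /etc/php-fpm.d/www-data.conf appears to be another copy of the nginx vhost rather than a PHP-FPM pool definition. A pool file for the socket that the wwwdomaincom upstream points at would normally look more like the following sketch (the pool name, socket ownership and process-manager numbers are illustrative assumptions, not values from the question):

        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 4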

    Read the article

  • Mail Server on Google Apps, Hotmail Blocks

    - by detay
    My company provides private research tools (sites) for companies. Each site has its own domain name and users. These sites need to be able to send outbound mail for notifications such as password reminders. We started using Google Apps for this purpose, but lately we are having tremendous problems with mailing, especially to hotmail. I added the SPF record as required. Mxtoolbox says my domains are not in any blacklists. I was told to contact https://postmaster.live.com/snds/addnetwork.aspx and add my network. Given that I am using google's mail servers for sending mail, should I add their IP address for registering my domain? Or should I add my server's IP even though I am not hosting the mail server on it? Here's an example of a returned message; > Delivered-To: [email protected] Received: by 10.112.1.195 with SMTP > id 3csp62757lbo; > Mon, 10 Dec 2012 10:42:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; > d=gmail.com; s=20120113; > h=mime-version:from:to:x-failed-recipients:subject:message-id:date > :content-type; > bh=x41JiCH75hsonh+5c/kzwIs7R8u4Hum2u396lkV3g2w=; > b=v+bBPvufqfVkNz2UrnhyyoGj1Cuwf6x/yMj0IXJgH27uJU7TqBOvbwnNHf33sQ7B9T > uGzxwryC2fmxBh72cGz1spwWK348nUp6KK73MXKnF8mpc3nPQ8Ke+EQpSPeJdq/7oZvd > scZTGpy//IiGEUDU5bJ7YPqYXQRycY5N6AF5iI4mWWwbS4opybp3IKpDDktu11p/YEEO > Fj9wnSCx3nLMXB/XZgSjjmnaluGhYNdh3JFtz83Vmr50qXTG/TuIXlkirP73GfGvIt2S > 7sy8/YdEbZR92mYe9wucditYr7MOuyjyYrZYCg+weXeTZiJX1PfBVieD+kD9MnCairnP > rQeQ== Received: by 10.14.215.197 with SMTP id e45mr52766237eep.0.1355164974784; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) MIME-Version: 1.0 Return-Path: <> Received: by 10.14.215.197 with SMTP id > e45mr65950764eep.0; Mon, 10 Dec 2012 10:42:54 -0800 (PST) From: Mail > Delivery Subsystem <[email protected]> To: > [email protected] X-Failed-Recipients: ********@hotmail.fr > Subject: Delivery Status Notification (Failure) Message-ID: > <[email protected]> Date: Mon, 10 Dec 2012 > 18:42:54 +0000 Content-Type: text/plain; charset=ISO-8859-1 > > Delivery to the following recipient failed permanently: > > *********@hotmail.fr > > Technical details of permanent failure: Message rejected. See > http://support.google.com/mail/bin/answer.py?answer=69585 for more > information. > > ----- Original message ----- > > Received: by 10.14.215.197 with SMTP id > e45mr52766228eep.0.1355164974700; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Return-Path: <[email protected]> Received: from WIN-1BBHFMJGUGS ([91.93.102.12]) > by mx.google.com with ESMTPS id l3sm26787704eem.14.2012.12.10.10.42.53 > (version=TLSv1/SSLv3 cipher=OTHER); > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Message-ID: <[email protected]> MIME-Version: 1.0 > From: [email protected] To: *********@hotmail.fr Date: Mon, 10 Dec > 2012 10:42:54 -0800 (PST) Subject: Rejoignez-nous sur achbanlek.com > Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: > base64 > > ----- End of message -----
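    For context on the SPF side: when the mail actually leaves through Google's servers, the SPF record has to authorize Google's sending ranges rather than the web server's own IP. That is usually done with a DNS TXT record of this shape (example.com stands in for the real domain; adjust if other hosts also send mail for it):

        example.com.   IN   TXT   "v=spf1 include:_spf.google.com ~all"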

    Read the article

  • Someone tried to hack my Node.js server, need to understand a GET request in the logs

    - by Akay
    Alright, so I left my Node.js server alone for a while and came back to find some really interesting stuff in the logs. Apparently some moron from China or Poland tried to hack my server using directory traversal and what not, while it seems though he did not succeed I am unable understand few entries in the log. This is the output of a "hohup.out" file. The attack starts, apparently he is trying to find out some console entry in my server. All of which fail and return a 404. [90mGET /../../../../../../../../../../../ [31m500 [90m6ms - 2b[0m [90mGET /<script>alert(53416)</script> [33m404 [90m7ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET /pz3yvy3lyzgja41w2sp [33m404 [90m1ms[0m [90mGET /stylesheets/style.css [33m404 [90m0ms[0m [90mGET /index.html [33m404 [90m1ms[0m [90mGET /index.htm [33m404 [90m0ms[0m [90mGET /default.html [33m404 [90m0ms[0m [90mGET /default.htm [33m404 [90m1ms[0m [90mGET /default.asp [33m404 [90m1ms[0m [90mGET /index.php [33m404 [90m0ms[0m [90mGET /default.php [33m404 [90m1ms[0m [90mGET /index.asp [33m404 [90m0ms[0m [90mGET /index.cgi [33m404 [90m0ms[0m [90mGET /index.jsp [33m404 [90m1ms[0m [90mGET /index.php3 [33m404 [90m0ms[0m [90mGET /index.pl [33m404 [90m0ms[0m [90mGET /default.jsp [33m404 [90m0ms[0m [90mGET /default.php3 [33m404 [90m0ms[0m [90mGET /index.html.en [33m404 [90m0ms[0m [90mGET /web.gif [33m404 [90m34ms[0m [90mGET /header.html [33m404 [90m1ms[0m [90mGET /homepage.nsf [33m404 [90m1ms[0m [90mGET /homepage.htm [33m404 [90m1ms[0m [90mGET /homepage.asp [33m404 [90m1ms[0m [90mGET /home.htm [33m404 [90m0ms[0m [90mGET /home.html [33m404 [90m1ms[0m [90mGET /home.asp [33m404 [90m1ms[0m [90mGET /login.asp [33m404 [90m0ms[0m [90mGET /login.html [33m404 [90m0ms[0m [90mGET /login.htm [33m404 [90m1ms[0m [90mGET /login.php [33m404 [90m0ms[0m [90mGET /index.cfm [33m404 [90m0ms[0m [90mGET /main.php [33m404 [90m1ms[0m [90mGET /main.asp [33m404 [90m1ms[0m [90mGET /main.htm [33m404 [90m1ms[0m [90mGET /main.html [33m404 [90m2ms[0m [90mGET /Welcome.html [33m404 [90m1ms[0m [90mGET /welcome.htm [33m404 [90m1ms[0m [90mGET /start.htm [33m404 [90m1ms[0m [90mGET /fleur.png [33m404 [90m0ms[0m [90mGET /level/99/ [33m404 [90m1ms[0m [90mGET /chl.css [33m404 [90m0ms[0m [90mGET /images/ [33m404 [90m0ms[0m [90mGET /robots.txt [33m404 [90m2ms[0m [90mGET /hb1/presign.asp [33m404 [90m1ms[0m [90mGET /NFuse/ASP/login.htm [33m404 [90m0ms[0m [90mGET /CCMAdmin/main.asp [33m404 [90m1ms[0m [90mGET /TiVoConnect?Command=QueryServer [33m404 [90m1ms[0m [90mGET /admin/images/rn_logo.gif [33m404 [90m1ms[0m [90mGET /vncviewer.jar [33m404 [90m1ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m7ms - 240b[0m [90mOPTIONS / [32m200 [90m1ms - 3b[0m [90mTRACE / [33m404 [90m0ms[0m [90mPROPFIND / [33m404 [90m0ms[0m [90mGET /\./ [33m404 [90m1ms[0m But here is when things start getting fishy. 
[90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET /robots.txt [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m3ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://37.28.156.211/sprawdza.php [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET http://www.daydaydata.com/proxy.txt [33m404 [90m19ms[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m2ms[0m [90mGET / [32m200 [90m4ms - 240b[0m [90mGET http://www.google.pl/search?q=wp.pl [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET http://www.google.pl/search?q=onet.pl [33m404 [90m1ms[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.pl/search?q=ostro%C5%82%C4%99ka [33m404 [90m1ms[0m [90mGET http://www.google.pl/search?q=google [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET http://www.baidu.com/ [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mPOST /api/login [32m200 [90m1ms - 28b[0m [90mGET /web-console/ServerInfo.jsp [33m404 [90m2ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m10ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://proxyjudge.info [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m [90mGET http://www.baidu.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/search?tbo=d&source=hp&num=1&btnG=Search&q=niceman [33m404 [90m2ms[0m So my questions are, how come my server is returning a "200" OK for root level domains? How did the hacker even manage to send a GET request to my server such that "http://www.google.com" shows up in the log while my server is simply an API that works on relative URLs such as "/api/login". 
    And, while I looked up the OPTIONS, TRACE and PROPFIND HTTP requests that my server has logged, it would be great if someone could explain what exactly the hacker was trying to achieve by using these verbs. Also, what in the world does "[90m [32m [90m1ms - 240b[0m" mean? The "ms" makes sense, probably milliseconds for the request, but the rest I am unable to understand. Thank you!
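    Two clarifications that may help readers of this log. The bracketed sequences are ANSI terminal colour codes, most likely written by the Express/Connect 'dev' logger (the escape character itself is lost when the output is captured to a file), and the trailing figures are the response time and the response size in bytes. The absolute URLs such as http://www.google.com/ are almost certainly open-proxy probes: the client puts a full URL in the request line, the framework routes it like any other path, and the ordinary '/' handler answers 200. A rough decoding of one line:

        [90m        = grey   (used for the method/URL and timing)
        [32m        = green  (2xx status)    [33m = yellow (4xx)    [31m = red (5xx)
        [0m         = reset colour
        2ms - 240b  = response time and Content-Length in bytes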

    Read the article

  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. Problem is, the Tor service only accepts SOCKS4a connections. So my solution is to listen for HTTP requests, get the URL they're requesting, and make a request via Tor with the help of the Starksoft.Net.Proxy library. Then return the response. The library kind of works, but I'm not happy. It returns HTTP headers with the response and it can't handle images. So the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example. public AnonymiserService(ILogger logger) { try { _logger = logger; _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort); StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) }); } catch (Exception ex) { _logger.LogError("Exception!", ex); } } private void StartListener(string[] prefixes) { if (!HttpListener.IsSupported) { _logger.LogError("HttpListener isn't supported on this machine!"); return; } HttpListener listener = new HttpListener(); foreach (string s in prefixes) listener.Prefixes.Add(s); while (true) { listener.Start(); IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener); result.AsyncWaitHandle.WaitOne(); } } private void ListenerCallback(IAsyncResult result) { try { // Get HTTP request HttpListener listener = (HttpListener)result.AsyncState; HttpListenerContext context = listener.EndGetContext(result); _logger.Log("Retrieving [{0}]", context.Request.RawUrl); // Create connection // Use Tor as proxy IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050); TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80); // Create message // Need to set Connection: close to close the connection as soon as it's done byte[] data = Encoding.UTF8.GetBytes(String.Format("GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n", context.Request.Url.PathAndQuery, context.Request.UserHostName)); // Send message NetworkStream ns = tcpClient.GetStream(); ns.Write(data, 0, data.Length); // Pass on HTTP response HttpListenerResponse responseOut = context.Response; if (ns.CanRead) { byte[] buffer = new byte[32768]; int read = 0; string responseString = string.Empty; // Read response while ((read = ns.Read(buffer, 0, buffer.Length)) > 0) { responseString += Encoding.UTF8.GetString(buffer, 0, read); } // Remove headers if (responseString.IndexOf("HTTP/1.1 200 OK") > -1) responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n")); // Forward response byte[] byteArray = Encoding.UTF8.GetBytes(responseString); responseOut.OutputStream.Write(byteArray, 0, byteArray.Length); } // Close streams responseOut.OutputStream.Close(); ns.Close(); // Close connection tcpClient.Close(); _logger.Log("Retrieved [{0}]", context.Request.RawUrl); } catch (Exception ex) { _logger.LogError("Exception in ListenerCallback!", ex); } }

    Read the article

  • Failed to Install Xdebug

    - by burnt1ce
    've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when i run "php -m" or when i get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. I've used both "zend_extention" and "zend_extention_ts" to specify the path of the xdebug dll. I ensured that my apache server restarted and used the latest change of my php.ini by executing "httpd -k restart". Can anyone provide any suggestions in getting xdebug to show up? Here are the contents of my php.ini file. [PHP] ;;;;;;;;;;;;;;;;;;; ; About php.ini ; ;;;;;;;;;;;;;;;;;;; ; PHP's initialization file, generally called php.ini, is responsible for ; configuring many of the aspects of PHP's behavior. ; PHP attempts to find and load this configuration from a number of locations. ; The following is a summary of its search order: ; 1. SAPI module specific location. ; 2. The PHPRC environment variable. (As of PHP 5.2.0) ; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0) ; 4. Current working directory (except CLI) ; 5. The web server's directory (for SAPI modules), or directory of PHP ; (otherwise in Windows) ; 6. The directory from the --with-config-file-path compile time option, or the ; Windows directory (C:\windows or C:\winnt) ; See the PHP docs for more specific information. ; http://php.net/configuration.file ; The syntax of the file is extremely simple. Whitespace and Lines ; beginning with a semicolon are silently ignored (as you probably guessed). ; Section headers (e.g. [Foo]) are also silently ignored, even though ; they might mean something in the future. ; Directives following the section heading [PATH=/www/mysite] only ; apply to PHP files in the /www/mysite directory. Directives ; following the section heading [HOST=www.example.com] only apply to ; PHP files served from www.example.com. Directives set in these ; special sections cannot be overridden by user-defined INI files or ; at runtime. Currently, [PATH=] and [HOST=] sections only work under ; CGI/FastCGI. ; http://php.net/ini.sections ; Directives are specified using the following syntax: ; directive = value ; Directive names are *case sensitive* - foo=bar is different from FOO=bar. ; Directives are variables used to configure PHP or PHP extensions. ; There is no name validation. If PHP can't find an expected ; directive because it is not set or is mistyped, a default value will be used. ; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one ; of the INI constants (On, Off, True, False, Yes, No and None) or an expression ; (e.g. E_ALL & ~E_NOTICE), a quoted string ("bar"), or a reference to a ; previously set variable or directive (e.g. ${foo}) ; Expressions in the INI file are limited to bitwise operators and parentheses: ; | bitwise OR ; ^ bitwise XOR ; & bitwise AND ; ~ bitwise NOT ; ! boolean NOT ; Boolean flags can be turned on using the values 1, On, True or Yes. ; They can be turned off using the values 0, Off, False or No. ; An empty string can be denoted by simply not writing anything after the equal ; sign, or by using the None keyword: ; foo = ; sets foo to an empty string ; foo = None ; sets foo to an empty string ; foo = "None" ; sets foo to the string 'None' ; If you use constants in your value, and these constants belong to a ; dynamically loaded extension (either a PHP extension or a Zend extension), ; you may only use these constants *after* the line that loads the extension. 
;;;;;;;;;;;;;;;;;;; ; About this file ; ;;;;;;;;;;;;;;;;;;; ; PHP comes packaged with two INI files. One that is recommended to be used ; in production environments and one that is recommended to be used in ; development environments. ; php.ini-production contains settings which hold security, performance and ; best practices at its core. But please be aware, these settings may break ; compatibility with older or less security conscience applications. We ; recommending using the production ini in production and testing environments. ; php.ini-development is very similar to its production variant, except it's ; much more verbose when it comes to errors. We recommending using the ; development version only in development environments as errors shown to ; application users can inadvertently leak otherwise secure information. ;;;;;;;;;;;;;;;;;;; ; Quick Reference ; ;;;;;;;;;;;;;;;;;;; ; The following are all the settings which are different in either the production ; or development versions of the INIs with respect to PHP's default behavior. ; Please see the actual settings later in the document for more details as to why ; we recommend these changes in PHP's behavior. ; allow_call_time_pass_reference ; Default Value: On ; Development Value: Off ; Production Value: Off ; display_errors ; Default Value: On ; Development Value: On ; Production Value: Off ; display_startup_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; error_reporting ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; html_errors ; Default Value: On ; Development Value: On ; Production value: Off ; log_errors ; Default Value: Off ; Development Value: On ; Production Value: On ; magic_quotes_gpc ; Default Value: On ; Development Value: Off ; Production Value: Off ; max_input_time ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; output_buffering ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; register_argc_argv ; Default Value: On ; Development Value: Off ; Production Value: Off ; register_long_arrays ; Default Value: On ; Development Value: Off ; Production Value: Off ; request_order ; Default Value: None ; Development Value: "GP" ; Production Value: "GP" ; session.bug_compat_42 ; Default Value: On ; Development Value: On ; Production Value: Off ; session.bug_compat_warn ; Default Value: On ; Development Value: On ; Production Value: Off ; session.gc_divisor ; Default Value: 100 ; Development Value: 1000 ; Production Value: 1000 ; session.hash_bits_per_character ; Default Value: 4 ; Development Value: 5 ; Production Value: 5 ; short_open_tag ; Default Value: On ; Development Value: Off ; Production Value: Off ; track_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; url_rewriter.tags ; Default Value: "a=href,area=href,frame=src,form=,fieldset=" ; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; variables_order ; Default Value: "EGPCS" ; Development Value: "GPCS" ; Production Value: "GPCS" ;;;;;;;;;;;;;;;;;;;; ; php.ini Options ; ;;;;;;;;;;;;;;;;;;;; ; Name for user-defined php.ini (.htaccess) files. Default is ".user.ini" ;user_ini.filename = ".user.ini" ; To disable this feature set this option to empty value ;user_ini.filename = ; TTL for user-defined php.ini files (time-to-live) in seconds. 
Default is 300 seconds (5 minutes) ;user_ini.cache_ttl = 300 ;;;;;;;;;;;;;;;;;;;; ; Language Options ; ;;;;;;;;;;;;;;;;;;;; ; Enable the PHP scripting language engine under Apache. ; http://php.net/engine engine = On ; This directive determines whether or not PHP will recognize code between ; <? and ?> tags as PHP source which should be processed as such. It's been ; recommended for several years that you not use the short tag "short cut" and ; instead to use the full <?php and ?> tag combination. With the wide spread use ; of XML and use of these tags by other languages, the server can become easily ; confused and end up parsing the wrong code in the wrong context. But because ; this short cut has been a feature for such a long time, it's currently still ; supported for backwards compatibility, but we recommend you don't use them. ; Default Value: On ; Development Value: Off ; Production Value: Off ; http://php.net/short-open-tag short_open_tag = Off ; Allow ASP-style <% %> tags. ; http://php.net/asp-tags asp_tags = Off ; The number of significant digits displayed in floating point numbers. ; http://php.net/precision precision = 14 ; Enforce year 2000 compliance (will cause problems with non-compliant browsers) ; http://php.net/y2k-compliance y2k_compliance = On ; Output buffering is a mechanism for controlling how much output data ; (excluding headers and cookies) PHP should keep internally before pushing that ; data to the client. If your application's output exceeds this setting, PHP ; will send that data in chunks of roughly the size you specify. ; Turning on this setting and managing its maximum buffer size can yield some ; interesting side-effects depending on your application and web server. ; You may be able to send headers and cookies after you've already sent output ; through print or echo. You also may see performance benefits if your server is ; emitting less packets due to buffered output versus PHP streaming the output ; as it gets it. On production servers, 4096 bytes is a good setting for performance ; reasons. ; Note: Output buffering can also be controlled via Output Buffering Control ; functions. ; Possible Values: ; On = Enabled and buffer is unlimited. (Use with caution) ; Off = Disabled ; Integer = Enables the buffer and sets its maximum size in bytes. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; http://php.net/output-buffering output_buffering = Off ; You can redirect all of the output of your scripts to a function. For ; example, if you set output_handler to "mb_output_handler", character ; encoding will be transparently converted to the specified encoding. ; Setting any output handler automatically turns on output buffering. ; Note: People who wrote portable scripts should not depend on this ini ; directive. Instead, explicitly set the output handler using ob_start(). ; Using this ini directive may cause problems unless you know what script ; is doing. ; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler" ; and you cannot use both "ob_gzhandler" and "zlib.output_compression". ; Note: output_handler must be empty if this is set 'On' !!!! ; Instead you must use zlib.output_handler. 
; http://php.net/output-handler ;output_handler = ; Transparent output compression using the zlib library ; Valid values for this option are 'off', 'on', or a specific buffer size ; to be used for compression (default is 4KB) ; Note: Resulting chunk size may vary due to nature of compression. PHP ; outputs chunks that are few hundreds bytes each as a result of ; compression. If you prefer a larger chunk size for better ; performance, enable output_buffering in addition. ; Note: You need to use zlib.output_handler instead of the standard ; output_handler, or otherwise the output will be corrupted. ; http://php.net/zlib.output-compression zlib.output_compression = Off ; http://php.net/zlib.output-compression-level ;zlib.output_compression_level = -1 ; You cannot specify additional output handlers if zlib.output_compression ; is activated here. This setting does the same as output_handler but in ; a different order. ; http://php.net/zlib.output-handler ;zlib.output_handler = ; Implicit flush tells PHP to tell the output layer to flush itself ; automatically after every output block. This is equivalent to calling the ; PHP function flush() after each and every call to print() or echo() and each ; and every HTML block. Turning this option on has serious performance ; implications and is generally recommended for debugging purposes only. ; http://php.net/implicit-flush ; Note: This directive is hardcoded to On for the CLI SAPI implicit_flush = Off ; The unserialize callback function will be called (with the undefined class' ; name as parameter), if the unserializer finds an undefined class ; which should be instantiated. A warning appears if the specified function is ; not defined, or if the function doesn't include/implement the missing class. ; So only set this entry, if you really want to implement such a ; callback-function. unserialize_callback_func = ; When floats & doubles are serialized store serialize_precision significant ; digits after the floating point. The default value ensures that when floats ; are decoded with unserialize, the data will remain the same. serialize_precision = 100 ; This directive allows you to enable and disable warnings which PHP will issue ; if you pass a value by reference at function call time. Passing values by ; reference at function call time is a deprecated feature which will be removed ; from PHP at some point in the near future. The acceptable method for passing a ; value by reference to a function is by declaring the reference in the functions ; definition, not at call time. This directive does not disable this feature, it ; only determines whether PHP will warn you about it or not. These warnings ; should enabled in development environments only. ; Default Value: On (Suppress warnings) ; Development Value: Off (Issue warnings) ; Production Value: Off (Issue warnings) ; http://php.net/allow-call-time-pass-reference allow_call_time_pass_reference = On ; Safe Mode ; http://php.net/safe-mode safe_mode = Off ; By default, Safe Mode does a UID compare check when ; opening files. If you want to relax this to a GID compare, ; then turn on safe_mode_gid. ; http://php.net/safe-mode-gid safe_mode_gid = Off ; When safe_mode is on, UID/GID checks are bypassed when ; including files from this directory and its subdirectories. 
; (directory must also be in include_path or full path must ; be used when including) ; http://php.net/safe-mode-include-dir safe_mode_include_dir = ; When safe_mode is on, only executables located in the safe_mode_exec_dir ; will be allowed to be executed via the exec family of functions. ; http://php.net/safe-mode-exec-dir safe_mode_exec_dir = ; Setting certain environment variables may be a potential security breach. ; This directive contains a comma-delimited list of prefixes. In Safe Mode, ; the user may only alter environment variables whose names begin with the ; prefixes supplied here. By default, users will only be able to set ; environment variables that begin with PHP_ (e.g. PHP_FOO=BAR). ; Note: If this directive is empty, PHP will let the user modify ANY ; environment variable! ; http://php.net/safe-mode-allowed-env-vars safe_mode_allowed_env_vars = PHP_ ; This directive contains a comma-delimited list of environment variables that ; the end user won't be able to change using putenv(). These variables will be ; protected even if safe_mode_allowed_env_vars is set to allow to change them. ; http://php.net/safe-mode-protected-env-vars safe_mode_protected_env_vars = LD_LIBRARY_PATH ; open_basedir, if set, limits all file operations to the defined directory ; and below. This directive makes most sense if used in a per-directory ; or per-virtualhost web server configuration file. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/open-basedir ;open_basedir = ; This directive allows you to disable certain functions for security reasons. ; It receives a comma-delimited list of function names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-functions disable_functions = ; This directive allows you to disable certain classes for security reasons. ; It receives a comma-delimited list of class names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-classes disable_classes = ; Colors for Syntax Highlighting mode. Anything that's acceptable in ; <span style="color: ???????"> would work. ; http://php.net/syntax-highlighting ;highlight.string = #DD0000 ;highlight.comment = #FF9900 ;highlight.keyword = #007700 ;highlight.bg = #FFFFFF ;highlight.default = #0000BB ;highlight.html = #000000 ; If enabled, the request will be allowed to complete even if the user aborts ; the request. Consider enabling it if executing long requests, which may end up ; being interrupted by the user or a browser timing out. PHP's default behavior ; is to disable this feature. ; http://php.net/ignore-user-abort ;ignore_user_abort = On ; Determines the size of the realpath cache to be used by PHP. This value should ; be increased on systems where PHP opens many files to reflect the quantity of ; the file operations performed. ; http://php.net/realpath-cache-size ;realpath_cache_size = 16k ; Duration of time, in seconds for which to cache realpath information for a given ; file or directory. For systems with rarely changing files, consider increasing this ; value. ; http://php.net/realpath-cache-ttl ;realpath_cache_ttl = 120 ;;;;;;;;;;;;;;;;; ; Miscellaneous ; ;;;;;;;;;;;;;;;;; ; Decides whether PHP may expose the fact that it is installed on the server ; (e.g. by adding its signature to the Web server header). It is no security ; threat in any way, but it makes it possible to determine whether you use PHP ; on your server or not. 
; http://php.net/expose-php expose_php = On ;;;;;;;;;;;;;;;;;;; ; Resource Limits ; ;;;;;;;;;;;;;;;;;;; ; Maximum execution time of each script, in seconds ; http://php.net/max-execution-time ; Note: This directive is hardcoded to 0 for the CLI SAPI max_execution_time = 60 ; Maximum amount of time each script may spend parsing request data. It's a good ; idea to limit this time on productions servers in order to eliminate unexpectedly ; long running scripts. ; Note: This directive is hardcoded to -1 for the CLI SAPI ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; http://php.net/max-input-time max_input_time = 60 ; Maximum input variable nesting level ; http://php.net/max-input-nesting-level ;max_input_nesting_level = 64 ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 128M ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Error handling and logging ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; This directive informs PHP of which errors, warnings and notices you would like ; it to take action for. The recommended way of setting values for this ; directive is through the use of the error level constants and bitwise ; operators. The error level constants are below here for convenience as well as ; some common settings and their meanings. ; By default, PHP is set to take action on all errors, notices and warnings EXCEPT ; those related to E_NOTICE and E_STRICT, which together cover best practices and ; recommended coding standards in PHP. For performance reasons, this is the ; recommend error reporting setting. Your production server shouldn't be wasting ; resources complaining about best practices and coding standards. That's what ; development servers and development settings are for. ; Note: The php.ini-development file has this setting as E_ALL | E_STRICT. This ; means it pretty much reports everything which is exactly what you want during ; development and early testing. ; ; Error Level Constants: ; E_ALL - All errors and warnings (includes E_STRICT as of PHP 6.0.0) ; E_ERROR - fatal run-time errors ; E_RECOVERABLE_ERROR - almost fatal run-time errors ; E_WARNING - run-time warnings (non-fatal errors) ; E_PARSE - compile-time parse errors ; E_NOTICE - run-time notices (these are warnings which often result ; from a bug in your code, but it's possible that it was ; intentional (e.g., using an uninitialized variable and ; relying on the fact it's automatically initialized to an ; empty string) ; E_STRICT - run-time notices, enable to have PHP suggest changes ; to your code which will ensure the best interoperability ; and forward compatibility of your code ; E_CORE_ERROR - fatal errors that occur during PHP's initial startup ; E_CORE_WARNING - warnings (non-fatal errors) that occur during PHP's ; initial startup ; E_COMPILE_ERROR - fatal compile-time errors ; E_COMPILE_WARNING - compile-time warnings (non-fatal errors) ; E_USER_ERROR - user-generated error message ; E_USER_WARNING - user-generated warning message ; E_USER_NOTICE - user-generated notice message ; E_DEPRECATED - warn about code that will not work in future versions ; of PHP ; E_USER_DEPRECATED - user-generated deprecation warnings ; ; Common Values: ; E_ALL & ~E_NOTICE (Show all errors, except for notices and coding standards warnings.) 
; E_ALL & ~E_NOTICE | E_STRICT (Show all errors, except for notices) ; E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR (Show only errors) ; E_ALL | E_STRICT (Show all errors, warnings and notices including coding standards.) ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; http://php.net/error-reporting error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ; This directive controls whether or not and where PHP will output errors, ; notices and warnings too. Error output is very useful during development, but ; it could be very dangerous in production environments. Depending on the code ; which is triggering the error, sensitive information could potentially leak ; out of your application such as database usernames and passwords or worse. ; It's recommended that errors be logged on production servers rather than ; having the errors sent to STDOUT. ; Possible Values: ; Off = Do not display any errors ; stderr = Display errors to STDERR (affects only CGI/CLI binaries!) ; On or stdout = Display errors to STDOUT ; Default Value: On ; Development Value: On ; Production Value: Off ; http://php.net/display-errors display_errors = On ; The display of errors which occur during PHP's startup sequence are handled ; separately from display_errors. PHP's default behavior is to suppress those ; errors from clients. Turning the display of startup errors on can be useful in ; debugging configuration problems. But, it's strongly recommended that you ; leave this setting off on production servers. ; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/display-startup-errors display_startup_errors = On ; Besides displaying errors, PHP can also log errors to locations such as a ; server-specific log, STDERR, or a location specified by the error_log ; directive found below. While errors should not be displayed on productions ; servers they should still be monitored and logging is a great way to do that. ; Default Value: Off ; Development Value: On ; Production Value: On ; http://php.net/log-errors log_errors = Off ; Set maximum length of log_errors. In error_log information about the source is ; added. The default is 1024 and 0 allows to not apply any maximum length at all. ; http://php.net/log-errors-max-len log_errors_max_len = 1024 ; Do not log repeated messages. Repeated errors must occur in same file on same ; line unless ignore_repeated_source is set true. ; http://php.net/ignore-repeated-errors ignore_repeated_errors = Off ; Ignore source of message when ignoring repeated messages. When this setting ; is On you will not log errors with repeated messages from different files or ; source lines. ; http://php.net/ignore-repeated-source ignore_repeated_source = Off ; If this parameter is set to Off, then memory leaks will not be shown (on ; stdout or in the log). This has only effect in a debug compile, and if ; error reporting includes E_WARNING in the allowed list ; http://php.net/report-memleaks report_memleaks = On ; This setting is on by default. ;report_zend_debug = 0 ; Store the last error/warning message in $php_errormsg (boolean). Setting this value ; to On can assist in debugging and is appropriate for development servers. It should ; however be disabled on production servers. 
; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/track-errors track_errors = Off ; Turn off normal error reporting and emit XML-RPC error XML ; http://php.net/xmlrpc-errors ;xmlrpc_errors = 0 ; An XML-RPC faultCode ;xmlrpc_error_number = 0 ; When PHP displays or logs an error, it has the capability of inserting html ; links to documentation related to that error. This directive controls whether ; those HTML links appear in error messages or not. For performance and security ; reasons, it's recommended you disable this on production servers. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: On ; Development Value: On ; Production value: Off ; http://php.net/html-errors html_errors = On ; If html_errors is set On PHP produces clickable error messages that direct ; to a page describing the error or function causing the error in detail. ; You can download a copy of the PHP manual from http://php.net/docs ; and change docref_root to the base URL of your local copy including the ; leading '/'. You must also specify the file extension being used including ; the dot. PHP's default behavior is to leave these settings empty. ; Note: Never use this feature for production boxes. ; http://php.net/docref-root ; Examples ;docref_root = "/phpmanual/" ; http://php.net/docref-ext ;docref_ext = .html ; String to output before an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-prepend-string ; Example: ;error_prepend_string = "<font color=#ff0000>" ; String to output after an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-append-string ; Example: ;error_append_string = "</font>" ; Log errors to specified file. PHP's default behavior is to leave this value ; empty. ; http://php.net/error-log ; Example: ;error_log = php_errors.log ; Log errors to syslog (Event Log on NT, not valid in Windows 95). ;error_log = syslog ;error_log = "C:\xampp\apache\logs\php_error.log" ;;;;;;;;;;;;;;;;; ; Data Handling ; ;;;;;;;;;;;;;;;;; ; Note - track_vars is ALWAYS enabled ; The separator used in PHP generated URLs to separate arguments. ; PHP's default setting is "&". ; http://php.net/arg-separator.output ; Example: arg_separator.output = "&amp;" ; List of separator(s) used by PHP to parse input URLs into variables. ; PHP's default setting is "&
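    For comparison, the Xdebug-specific part of php.ini is normally just a single zend_extension line, and it has to go into the php.ini that phpinfo() / "php --ini" reports as the loaded configuration file. A sketch of that section follows; the DLL path is an assumption and must match wherever the downloaded DLL actually lives:

        [XDebug]
        ; Xdebug is a Zend extension, so it is loaded with zend_extension
        ; (zend_extension_ts only applies to thread-safe PHP builds older than 5.3),
        ; not with "extension".
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
        xdebug.remote_enable = 1   ; optional: only needed for remote debugging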

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory] I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours: Previous version (everything works just fine): None of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFound): There is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions: What sort of resource is that? Because inside our EAR there are two WARs, and neither of them contains a services folder in its META-INF directory. Where else could that value be coming from? Because a file diff showed me no new or changed properties files. Needless to say I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks! EDIT (according to the comments' input/questions): Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory I think this has nothing to do with the current issue since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception). It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation. There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places show which ContextFactory to use, a default factory is loaded, which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory"; Now back to my problem: Building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TEST CASE fails, but the rest of the functionality works (i.e. I can send requests to our web service and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
Here are the most important lines of how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is a simplified version of the relevant code in the javax.xml.bind.ContextFinder class: URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext"); BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8")); String factoryClassName = r.readLine().trim(); The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting, after 20 minutes it still looks the same :() Because this has become a super large question I will also add a bounty :)
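    For anyone hitting the same thing: the resource the ContextFinder reads here is a plain JDK-style service-provider file on the classpath, located at META-INF/services/javax.xml.bind.JAXBContext inside some JAR (so most likely packaged in one of the framework's JARs rather than in your WARs), and its first line is simply the fully qualified name of the factory to use. A minimal sketch of such a file, assuming you wanted it to name the EclipseLink factory the framework already ships rather than the unavailable IBM class, is just this single line:

        org.eclipse.persistence.jaxb.JAXBContextFactory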

    Read the article

  • Does the ulkJSON library have limitations when dealing with base64 in Delphi 7?

    - by Da Gopherboy
    I'm working on a project that is using Delphi 7 to consume RESTful services. We are creating and decoding JSON with the ulkJSON library. Up to this point I've been able to successfully build and send JSON containing a base64 string that exceed 5,160kb. I can verify that the base64 is being received by the services and verify the integrity of the base64 once its there. In addition to sending, I can also receive and successfully decode JSON with a smaller (~ 256KB or less) base64. However I am experiencing some issues on the return trip when larger (~1,024KB+) base64 is involved for some reason. Specifically when attempting to use the following JSON format and function combination: JSON: { "message" : "/9j/4AAQSkZJRgABAQEAYABgAAD...." } Function: function checkResults(JSONFormattedString: String): String; var jsonObject : TlkJSONObject; iteration : Integer; i : Integer; x : Integer; begin jsonObject := TlkJSONobject.Create; // Validate that the JSONFormatted string is not empty. // If it is empty, inform the user/programmer, and exit from this routine. if JSONFormattedString = '' then begin result := 'Error: JSON returned is Null'; jsonObject.Free; exit; end; // Now that we can validate that this string is not empty, we are going to // assume that the string is a JSONFormatted string and attempt to parse it. // // If the string is not a valid JSON object (such as an http status code) // throw an exception informing the user/programmer that an unexpected value // has been passed. And exit from this routine. try jsonObject := TlkJSON.ParseText(JSONFormattedString) as TlkJSONobject; except on e:Exception do begin result := 'Error: No JSON was received from web services'; jsonObject.Free; exit; end; end; // Now that the object has been parsed, lets check the contents. try result := jsonObject.Field['message'].value; jsonObject.Free; exit; except on e:Exception do begin result := 'Error: No Message received from Web Services '+e.message; jsonObject.Free; exit; end; end; end; As mentioned above when using the above function, I am able to get small (256KB and less) base64 strings out of the 'message' field of a JSON object. But for some reason if the received JSON is larger than say 1,024kb the following line seems to just stop in its tracks: jsonObject := TlkJSON.ParseText(JSONFormattedString) as TlkJSONobject; No errors, no results. Following the debugger, I can go into the library, and see that the JSON string being passed is not considered to be JSON despite being in the format listed above. The only difference I can find between calls that work as expected and calls that do not work as expect appears to be the size of base64 being transmitted. Am I missing something completely obvious and should be shot for my code implementation (very possible)? Have I missed some notation regarding the limitations of the ulkJSON library? Any input would be extremely helpful. Thanks in advance stack!

    Read the article

  • Split a long JSON string into lines in Ruby

    - by David J.
    First, the background: I'm writing a Ruby app that uses SendGrid to send mass emails. SendGrid uses a custom email header (in JSON format) to set recipients, values to substitute, etc. SendGrid's documentation recommends splitting up the header so that the lines are shorter than 1,000 bytes. My question, then, is this: given a long JSON string, how can I split it into lines < 1,000 so that lines are split at appropriate places (i.e., after a comma) rather than in the middle of a word? This is probably unnecessary, but here's an example of the sort of string I'd like to split: X-SMTPAPI: {"sub": {"pet": ["dog", "cat"]}, "to": ["[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email 
protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]"]} Thanks in advance for any help you can provide!
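    A minimal sketch of the splitting itself, assuming the header value is already a single JSON string and that breaking after commas is acceptable; the 1,000-byte limit comes from SendGrid's recommendation above, while the method name and the json_string variable are illustrative:

        def split_json_header(json, limit = 1_000)
          lines   = []
          current = ''
          # take pieces that end in a comma, plus whatever is left over at the end
          json.scan(/.*?,|.+\z/m) do |piece|
            if !current.empty? && current.bytesize + piece.bytesize > limit
              lines << current
              current = ''
            end
            current += piece
          end
          lines << current unless current.empty?
          lines   # each line stays under the limit unless one comma-free piece already exceeds it
        end

        # one way to fold the header: continuation lines start with whitespace
        x_smtpapi = split_json_header(json_string).join("\r\n  ")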

    Read the article

  • delayed job problem in rails.

    - by krunal shah
    My controller, data_files_controller.rb:

        def upload_balances
          DataFile.load_balances(params)
        end

    My model, data_file.rb:

        def self.load_balances(params)
          # Pull the file out of the http request, write it to the file system
          name = params['Filename']
          directory = "public/uploads"
          errors_table_name = "snapshot_errors"
          upload_file = File.join(directory, name)
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }
          # Remove the old data from the table
          Balance.destroy_all
          # ------ more code -----
        end

    This works fine. Now I want to use delayed_job in my controller to call the model method, like this:

        def upload_balances
          DataFile.send_later(:load_balances, params)
        end

    Is that possible? Is there another way to do it? Does it create any problem? With send_later I get the following error in the last_error column of the delayed_jobs table:

        uninitialized stream
        C:/cyncabc/app/models/data_file.rb:12:in `read'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        C:/cyncabc/app/models/data_file.rb:12:in `open'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `send'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `perform'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/backend/base.rb:66:in `invoke_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:119:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:180:in `reserve_and_run_one_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:104:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:78:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:77:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `loop'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:93:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:72:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:255:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/controller.rb:72:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:188:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `catch_exceptions'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:187:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:71:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:65:in `daemonize'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `daemonize'
        script/delayed_job:5

    Without send_later it works fine. Is there a solution?
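    A likely explanation (my assumption; the post does not confirm it) is that send_later serializes its arguments into the delayed_jobs table, and params['Filedata'] is an open upload stream that cannot survive that round trip, hence "uninitialized stream" when the worker later calls read on it. A minimal sketch of one workaround: do the stream-dependent work in the request, then enqueue only plain, serializable data such as the saved path (the load_balances_from method name is hypothetical).

        # data_files_controller.rb -- sketch, assuming Rails 2.3 with delayed_job 2.0.x
        def upload_balances
          name        = params['Filename']
          upload_file = File.join("public/uploads", name)

          # Read the uploaded stream now, while it is still open.
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }

          # Enqueue only a plain string; the worker re-opens the file itself.
          DataFile.send_later(:load_balances_from, upload_file)
        end

        # data_file.rb
        def self.load_balances_from(upload_file)
          Balance.destroy_all
          # ------ more code ----- (parse the file at upload_file here)
        end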

    Read the article

  • WCF timeouts are a nightmare

    - by Greg
    We have a bunch of WCF services that work almost all of the time, using various bindings, ports, max sizes, etc. The super-frustrating thing about WCF is that when it (rarely) fails, we are powerless to find out why. Sometimes you get a message that looks like this:

        System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '01:00:00'. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

    The problem is that the local socket timeout it reports is merely an attempt to be convenient. It may or may not be the cause of the problem. But OK, sometimes networks have issues. No big deal. We can retry or something.

    Here is the huge problem. On top of failing to tell you precisely which timeout (if any) caused the failure ("your server-side receive timeout was exceeded" or something similar would be helpful), WCF seems to have two types of timeouts.

    Timeout Type #1: a timeout that, if increased, increases the chance of your operation succeeding. Say the pertinent timeout is an hour and you are uploading a huge file that will take an hour and twenty minutes. It fails. You increase the timeout, and it succeeds. I have no problem with this type of timeout.

    Timeout Type #2: a timeout that merely defines how long you have to wait for the service to actually fail and give you an error; changing its value has no impact on the chance of success. Basically, something happens during the first second of the service request that mucks things up, and it will never recover. WCF doesn't magically retry the network connection for you. Fine, sometimes establishing a network connection doesn't go well. But if your timeout is 2 hours, you have to wait 2 whole hours, with no chance of it ever working, before it finally acknowledges that it didn't work and gives you the error.

    The error you see in both cases looks the same: with timeout Type #2, it still looks like you ran into a timeout. But you could increase all of your timeouts to 4 years, and all it would do is make it take 4 years to get an error message. I know that Type #2 exists because I can do an operation that is known to complete in less than a minute when successful, and have it take 2 hours to fail; if I kill it and retry, it succeeds quickly. (If you are wondering why there might be a 2-hour timeout on an operation that takes less than a minute: there are times I run the operation with a much larger file, and it can take over an hour.)

    So, to combat the problem with Type #2, you would want your timeout to be really short, so you immediately know there is a problem and can retry. But the insurmountable problem is that, because I don't know which timeouts caused the failures, I don't know which timeouts are Type #1 and which are Type #2. There may be one timeout (say, the client-side send timeout) that acts like Type #1 in some cases and Type #2 in others. I have no idea, and I have no way of finding out.

    Does anyone know how to track down Type #2 timeouts so I can set them to low values, without having to shorten actual (read: Type #1) timeouts and lower the chance of success? Thank you.
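    For anyone untangling which knob is which: the four timeouts WCF reads for a channel all live on the binding configuration. Roughly speaking, openTimeout, sendTimeout and closeTimeout bound how long the side performing that action will wait (on the client, sendTimeout covers the whole request/reply exchange), while receiveTimeout acts mostly as an idle-connection limit on the receiving side. The binding name and values below are placeholders for illustration, not taken from the post, and separating them like this only really helps with what is called Type #1 above.

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <!-- Hypothetical binding: tight open/close, generous send/receive for long transfers. -->
              <binding name="LongRunningTcp"
                       openTimeout="00:00:30"
                       closeTimeout="00:00:30"
                       sendTimeout="01:30:00"
                       receiveTimeout="01:30:00" />
            </netTcpBinding>
          </bindings>
          <!-- endpoints opt in via bindingConfiguration="LongRunningTcp" -->
        </system.serviceModel>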

    Read the article

  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at an impasse here. I have an application I built from scratch using FluentNHibernate (ORM) / SQLite (file db), and I have decided to implement the Unit of Work and Repository design patterns. I am at the point where I need to think about the end game, which will start as a WPF Windows app (using MVVM) and eventually add web services / ASP.NET as a UI. I have already created domain objects (entities) for the ORM, and now I don't know how I should use them outside of the ORM. My questions:

    1. Should I use ORM entity objects directly as models in MVVM? If yes, do I put business logic (such as "certain values must be positive and be greater than another property") in those entity objects? It is certainly the simpler approach, and the one I am leaning toward right now. However, will there be gotchas that would trash this plan?

    2. If the answer above is no, do I then create a new set of classes to implement business logic and use those as models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here.

    3. I followed this well-written article to implement the Unit of Work pattern. However, because I am using FluentNHibernate instead of NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here is my implementation:

        using System;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        namespace ELau.BlindsManagement.Business
        {
            public class UnitOfWorkFactory : IUnitOfWorkFactory
            {
                private static readonly string DbFilename;
                private static Configuration _configuration;
                private static ISession _currentSession;
                private ISessionFactory _sessionFactory;

                static UnitOfWorkFactory()
                {
                    // arbitrary default filename
                    DbFilename = "defaultBlindsDb.db3";
                }

                internal UnitOfWorkFactory()
                {
                }

                #region IUnitOfWorkFactory Members

                public ISession CurrentSession
                {
                    get
                    {
                        if (_currentSession == null)
                        {
                            throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork);
                        }
                        return _currentSession;
                    }
                    set { _currentSession = value; }
                }

                public ISessionFactory SessionFactory
                {
                    get
                    {
                        if (_sessionFactory == null)
                        {
                            _sessionFactory = BuildSessionFactory();
                        }
                        return _sessionFactory;
                    }
                }

                public Configuration Configuration
                {
                    get
                    {
                        if (_configuration == null)
                        {
                            Fluently.Configure().ExposeConfiguration(c => _configuration = c);
                        }
                        return _configuration;
                    }
                }

                public IUnitOfWork Create()
                {
                    ISession session = CreateSession();
                    session.FlushMode = FlushMode.Commit;
                    _currentSession = session;
                    return new UnitOfWorkImplementor(this, session);
                }

                public void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
                {
                    CurrentSession = null;
                    UnitOfWork.DisposeUnitOfWork(adapter);
                }

                #endregion

                public ISession CreateSession()
                {
                    return SessionFactory.OpenSession();
                }

                public IStatelessSession CreateStatelessSession()
                {
                    return SessionFactory.OpenStatelessSession();
                }

                private static ISessionFactory BuildSessionFactory()
                {
                    ISessionFactory result = Fluently.Configure()
                        .Database(
                            SQLiteConfiguration.Standard
                                .UsingFile(DbFilename)
                        )
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>())
                        .ExposeConfiguration(BuildSchema)
                        .BuildSessionFactory();
                    return result;
                }

                private static void BuildSchema(Configuration config)
                {
                    // this NHibernate tool takes a configuration (with mapping info in it)
                    // and exports a database schema from it
                    _configuration = config;
                    new SchemaExport(_configuration).Create(false, true);
                }
            }
        }

    I know that this implementation is flawed, because a few tests pass when run individually but fail when all tests are run together, for a reason I have not found. Whoever wants to help me out with this one, given its complexity, please contact me by private message; I am willing to send some $$$ by PayPal to someone who can address the issue and provide a solid explanation. I am new to ORM, so any assistance is appreciated.
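    One plausible reason — purely an assumption on my part, not something the post establishes — for tests that pass in isolation but fail together is the static state in this factory: _configuration and _currentSession are shared across every test, so a session left over (or already disposed) by one test leaks into the next. A sketch of the session-tracking part with instance-scoped fields, under that assumption (the class name below is made up, and the remaining members are unchanged from the question):

        using System;
        using NHibernate;
        using NHibernate.Cfg;

        // Sketch only: each factory instance (e.g. one per test fixture) owns its
        // own configuration and current session instead of sharing statics.
        public class InstanceScopedUnitOfWorkFactory
        {
            private Configuration _configuration;
            private ISession _currentSession;
            private ISessionFactory _sessionFactory;

            public ISession CurrentSession
            {
                get
                {
                    if (_currentSession == null)
                        throw new InvalidOperationException("Not inside a unit of work.");
                    return _currentSession;
                }
                set { _currentSession = value; }
            }

            // Create(), DisposeUnitOfWork(), BuildSessionFactory() and the rest
            // would read and write these instance fields, as in the question.
        }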

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I want to get a WCF-over-TCP service working. I was having some problems modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config; when I run the service, the WCF Test Client works with it fine:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceMetadata httpGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://localhost:1337/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    But when I run this, I get this error in the WCF Service Host:

        System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract.

    I get the feeling that you can't send metadata using TCP, but if that's the case, why is there a mexTcpBinding option?
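    If I read that exception correctly, TCP itself is not the issue: the mex endpoint's IMetadataExchange contract is implemented by ServiceMetadataBehavior, and in the original config that behavior was registered by the <serviceMetadata> element. Removing the whole element (rather than just turning off httpGetEnabled) removes the behavior, which is exactly what the error is complaining about. A sketch of the behavior section that should let mexTcpBinding work, with HTTP GET left off for a TCP-only service:

        <behaviors>
          <serviceBehaviors>
            <behavior name="WcfTcpTest.Service1Behavior">
              <!-- Keep the element so ServiceMetadataBehavior is registered;
                   just don't expose metadata over HTTP GET. -->
              <serviceMetadata httpGetEnabled="False" />
              <serviceDebug includeExceptionDetailInFaults="True" />
            </behavior>
          </serviceBehaviors>
        </behaviors>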

    Read the article

  • migrating simple rails database to mysql

    - by joseph-misiti
    I am interested in creating a Rails app with a MySQL database. I am new to Rails and am just trying to start with something simple:

        rails -d mysql MyMoviesSQL
        cd MyMoviesSQL
        script/generate scaffold Movies title:string rating:integer
        rake db:migrate

    I am seeing the following error:

        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'

    If I run it with a trace:

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Here are my versions: Rails 2.3.5, Ruby 1.8.6. gem list gives:

        *** LOCAL GEMS ***
        actionmailer (2.3.5, 1.3.6), actionpack (2.3.5, 1.13.6), actionwebservice (1.2.6), activerecord (2.3.5, 1.15.6), activeresource (2.3.5), activesupport (2.3.5, 1.4.4), acts_as_ferret (0.4.1), capistrano (2.0.0), cgi_multipart_eof_fix (2.5.0), daemons (1.0.9), dbi (0.4.3), deprecated (2.0.1), dnssd (0.6.0), fastthread (1.0.1), fcgi (0.8.7), ferret (0.11.4), gem_plugin (0.2.3), highline (1.2.9), hpricot (0.6), libxml-ruby (0.9.5, 0.3.8.4), mongrel (1.1.4), needle (1.3.0), net-sftp (1.1.0), net-ssh (1.1.2), rack (1.0.1), rails (2.3.5), rake (0.8.7, 0.7.3), RedCloth (3.0.4), ruby-openid (1.1.4), ruby-yadis (0.3.4), rubygems-update (1.3.6), rubynode (0.1.3), sqlite3-ruby (1.2.1), termios (0.9.4)

    Also, if I need to add a patch to Fixnum, can someone please tell me which file to add the patch to? Thanks for your help.
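    If I remember this error correctly, `undefined method 'ord' for 0:Fixnum` while sending SET NAMES 'utf8' is the classic symptom of running ActiveRecord's pure-Ruby MySQL fallback driver on Ruby 1.8.6, where Fixnum#ord does not exist yet (it arrived in 1.8.7). The usual fixes are to install the native mysql gem (gem install mysql, so the pure-Ruby fallback is never used) or to upgrade to Ruby 1.8.7. As a stop-gap, the Fixnum patch the poster mentions can live in a Rails initializer; the filename below is made up:

        # config/initializers/fixnum_ord_patch.rb  -- hypothetical filename
        # Ruby 1.8.6 has no Fixnum#ord; the pure-Ruby MySQL driver calls it while
        # issuing "SET NAMES 'utf8'". On 1.8.7+ an integer's ord is the integer itself.
        unless 0.respond_to?(:ord)
          class Fixnum
            def ord
              self
            end
          end
        end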

    Read the article

  • Uploading images from Flex to Rails using Paperclip

    - by 23tux
    Hi everyone, I'm looking for a way to upload images that were created in my Flex app to Rails. I've tried to use Paperclip, but it doesn't seem to work. I've been following this tutorial: http://blog.alexonrails.net/?p=218

    The problem is that the tutorial uses a FileReference to browse for files on the client's computer, then calls its upload(...) function to send the data to the upload controller:

        private function selectHandler(event:Event):void {
            var vars:URLVariables = new URLVariables();
            var request:URLRequest = new URLRequest(uri);
            request.method = URLRequestMethod.POST;
            vars.description = "My Description";
            request.data = vars;
            var uploadDataFieldName:String = 'filemanager[file]';
            fileReference.upload(request, uploadDataFieldName);
        }

    But I'm using a URLLoader to upload an image that was modified in my Flex app, and I don't know how to set the equivalent of uploadDataFieldName = 'filemanager[file]' with a URLLoader. I've got the image data compressed as a JPEG in a ByteArray. It looks like this:

        public function savePicture():void {
            var filename:String = "blubblub.jpg";
            var vars:URLVariables = new URLVariables();
            vars.position = layoutView.currentPicPosition;
            vars.url = filename;
            vars.user_id = 1;
            vars.comic_id = 1;
            vars.file_content_type = "image/jpeg";
            vars.file_file_name = filename;
            var rawBytes:ByteArray = new JPGEncoder(75).encode(bitmapdata);
            vars.picture = rawBytes;
            var request:URLRequest = new URLRequest(Data.SERVER_ADDR + "pictures/upload");
            request.method = URLRequestMethod.POST;
            request.data = vars;
            var loader:URLLoader = new URLLoader(request);
            loader.addEventListener(Event.COMPLETE, savePictureHandler);
            loader.addEventListener(IOErrorEvent.IO_ERROR, errorHandlerUpload);
            loader.load(request);
        }

    If I set the vars.picture URLVariable to the ByteArray, I get an error saying that the upload is nil. Here is the Rails part. The Picture model:

        require 'paperclip'

        class Picture < ActiveRecord::Base
          # relations from picture
          belongs_to :comic
          belongs_to :user
          has_many :picture_bubbles
          has_many :bubbles, :through => :picture_bubbles

          # attached file for picture upload -> with the paperclip plugin
          has_attached_file :file, :path => "public/system/pictures/:basename.:extension"
        end

    And the pictures controller with the upload action:

        class PicturesController < ApplicationController
          protect_from_forgery :except => :upload

          def upload
            @picture = Picture.new(params[:picture])
            @picture.position = params[:position]
            @picture.comic_id = params[:comic_id]
            @picture.url = params[:url]
            @picture.user_id = params[:user_id]
            if @picture.save
              render(:nothing => true, :status => 200)
            else
              render(:nothing => true, :status => 500)
            end
          end
        end

    Does anyone know how to solve this problem? Thanks, tux
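    One approach I've seen for this situation — an assumption about what would work here, not something from the post: skip URLVariables for the binary part, POST the JPEG bytes as the raw request body (metadata such as position can travel in the query string), and on the Rails side wrap those bytes in an IO object that looks enough like an uploaded file for Paperclip, which mainly needs original_filename and content_type. A sketch of the controller side, with the fallback filename made up:

        # pictures_controller.rb -- sketch; assumes the Flex client sends the raw JPEG
        # bytes as the POST body to pictures/upload?position=...&file_file_name=...
        def upload
          io = StringIO.new(request.raw_post)
          io.instance_variable_set(:@original_filename, params[:file_file_name] || "upload.jpg")

          # Give the StringIO the two methods Paperclip expects from an upload.
          def io.original_filename; @original_filename; end
          def io.content_type; "image/jpeg"; end

          @picture = Picture.new(:file => io)
          @picture.position = params[:position]
          @picture.comic_id = params[:comic_id]
          @picture.url      = params[:url]
          @picture.user_id  = params[:user_id]

          if @picture.save
            render :nothing => true, :status => 200
          else
            render :nothing => true, :status => 500
          end
        end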

    Read the article

< Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >