Search Results

Search found 10476 results on 420 pages for 'enterprise guide'.


  • Cisco 891w multiple VLAN configuration

    - by Jessica
    I'm having trouble getting my guest network up. I have VLAN 1, which contains all our network resources (servers, desktops, printers, etc.). I have the wireless configured to use VLAN 1 but authenticate with WPA2 Enterprise. The guest network I just wanted to be open, or configured with a simple WPA2 Personal password, on its own VLAN 2. I've looked at tons of documentation and it should be working, but I can't even authenticate on the guest network! I posted this on Cisco's support forum a week ago, but no one has really responded. I could really use some help. So if anyone could take a look at the configurations I posted and steer me in the right direction, I would be extremely grateful. Thank you!

    Router configuration:

    version 15.0
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    !
    hostname ESI
    !
    boot-start-marker
    boot-end-marker
    !
    logging buffered 51200 warnings
    !
    aaa new-model
    !
    !
    aaa authentication login userauthen local
    aaa authorization network groupauthor local
    !
    !
    !
    !
    !
    aaa session-id common
    !
    !
    !
    clock timezone EST -5
    clock summer-time EDT recurring
    service-module wlan-ap 0 bootimage autonomous
    !
    crypto pki trustpoint TP-self-signed-3369945891
     enrollment selfsigned
     subject-name cn=IOS-Self-Signed-Certificate-3369945891
     revocation-check none
     rsakeypair TP-self-signed-3369945891
    !
    !
    crypto pki certificate chain TP-self-signed-3369945891
     certificate self-signed 01
      (cert is here)
      quit
    ip source-route
    !
    !
    ip dhcp excluded-address 192.168.1.1
    ip dhcp excluded-address 192.168.1.5
    ip dhcp excluded-address 192.168.1.2
    ip dhcp excluded-address 192.168.1.200 192.168.1.210
    ip dhcp excluded-address 192.168.1.6
    ip dhcp excluded-address 192.168.1.8
    ip dhcp excluded-address 192.168.3.1
    !
    ip dhcp pool ccp-pool
     import all
     network 192.168.1.0 255.255.255.0
     default-router 192.168.1.1
     dns-server 10.171.12.5 10.171.12.37
     lease 0 2
    !
    ip dhcp pool guest
     import all
     network 192.168.3.0 255.255.255.0
     default-router 192.168.3.1
     dns-server 10.171.12.5 10.171.12.37
    !
    !
    ip cef
    no ip domain lookup
    no ipv6 cef
    !
    !
    multilink bundle-name authenticated
    license udi pid CISCO891W-AGN-A-K9 sn FTX153085WL
    !
    !
    username ESIadmin privilege 15 secret 5 $1$g1..$JSZ0qxljZAgJJIk/anDu51
    username user1 password 0 pass
    !
    !
    !
    class-map type inspect match-any ccp-cls-insp-traffic
     match protocol cuseeme
     match protocol dns
     match protocol ftp
     match protocol h323
     match protocol https
     match protocol icmp
     match protocol imap
     match protocol pop3
     match protocol netshow
     match protocol shell
     match protocol realmedia
     match protocol rtsp
     match protocol smtp
     match protocol sql-net
     match protocol streamworks
     match protocol tftp
     match protocol vdolive
     match protocol tcp
     match protocol udp
    class-map type inspect match-all ccp-insp-traffic
     match class-map ccp-cls-insp-traffic
    class-map type inspect match-any ccp-cls-icmp-access
     match protocol icmp
    class-map type inspect match-all ccp-invalid-src
     match access-group 100
    class-map type inspect match-all ccp-icmp-access
     match class-map ccp-cls-icmp-access
    class-map type inspect match-all ccp-protocol-http
     match protocol http
    !
    !
    policy-map type inspect ccp-permit-icmpreply
     class type inspect ccp-icmp-access
      inspect
     class class-default
      pass
    policy-map type inspect ccp-inspect
     class type inspect ccp-invalid-src
      drop log
     class type inspect ccp-protocol-http
      inspect
     class type inspect ccp-insp-traffic
      inspect
     class class-default
      drop
    policy-map type inspect ccp-permit
     class class-default
      drop
    !
    zone security out-zone
    zone security in-zone
    zone-pair security ccp-zp-self-out source self destination out-zone
     service-policy type inspect ccp-permit-icmpreply
    zone-pair security ccp-zp-in-out source in-zone destination out-zone
     service-policy type inspect ccp-inspect
    zone-pair security ccp-zp-out-self source out-zone destination self
     service-policy type inspect ccp-permit
    !
    !
    crypto isakmp policy 1
     encr 3des
     authentication pre-share
     group 2
    !
    crypto isakmp client configuration group 3000client
     key 67Nif8LLmqP_
     dns 10.171.12.37 10.171.12.5
     pool dynpool
     acl 101
    !
    !
    crypto ipsec transform-set myset esp-3des esp-sha-hmac
    !
    crypto dynamic-map dynmap 10
     set transform-set myset
    !
    !
    crypto map clientmap client authentication list userauthen
    crypto map clientmap isakmp authorization list groupauthor
    crypto map clientmap client configuration address initiate
    crypto map clientmap client configuration address respond
    crypto map clientmap 10 ipsec-isakmp dynamic dynmap
    !
    !
    !
    !
    !
    interface FastEthernet0
    !
    !
    interface FastEthernet1
    !
    !
    interface FastEthernet2
    !
    !
    interface FastEthernet3
    !
    !
    interface FastEthernet4
    !
    !
    interface FastEthernet5
    !
    !
    interface FastEthernet6
    !
    !
    interface FastEthernet7
    !
    !
    interface FastEthernet8
     ip address dhcp
     ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    !
    !
    interface GigabitEthernet0
     description $FW_OUTSIDE$$ES_WAN$
     ip address 10...* 255.255.254.0
     ip nat outside
     ip virtual-reassembly
     zone-member security out-zone
     duplex auto
     speed auto
     crypto map clientmap
    !
    !
    interface wlan-ap0
     description Service module interface to manage the embedded AP
     ip unnumbered Vlan1
     arp timeout 0
    !
    !
    interface Wlan-GigabitEthernet0
     description Internal switch interface connecting to the embedded AP
     switchport trunk allowed vlan 1-3,1002-1005
     switchport mode trunk
    !
    !
    interface Vlan1
     description $ETH-SW-LAUNCH$$INTF-INFO-FE 1$$FW_INSIDE$
     ip address 192.168.1.1 255.255.255.0
     ip nat inside
     ip virtual-reassembly
     zone-member security in-zone
     ip tcp adjust-mss 1452
     crypto map clientmap
    !
    !
    interface Vlan2
     description guest
     ip address 192.168.3.1 255.255.255.0
     ip access-group 120 in
     ip nat inside
     ip virtual-reassembly
     zone-member security in-zone
    !
    !
    interface Async1
     no ip address
     encapsulation slip
    !
    !
    ip local pool dynpool 192.168.1.200 192.168.1.210
    ip forward-protocol nd
    ip http server
    ip http access-class 23
    ip http authentication local
    ip http secure-server
    ip http timeout-policy idle 60 life 86400 requests 10000
    !
    !
    ip dns server
    ip nat inside source list 23 interface GigabitEthernet0 overload
    ip route 0.0.0.0 0.0.0.0 10.165.0.1
    !
    access-list 23 permit 192.168.1.0 0.0.0.255
    access-list 100 remark CCP_ACL Category=128
    access-list 100 permit ip host 255.255.255.255 any
    access-list 100 permit ip 127.0.0.0 0.255.255.255 any
    access-list 100 permit ip 10.165.0.0 0.0.1.255 any
    access-list 110 permit ip 192.168.0.0 0.0.5.255 any
    access-list 120 remark ESIGuest Restriction
    no cdp run
    !
    !
    !
    !
    !
    !
    control-plane
    !
    !
    alias exec dot11radio service-module wlan-ap 0 session

    Access point configuration:

    version 12.4
    no service pad
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    !
    hostname ESIRouter
    !
    no logging console
    enable secret 5 $1$yEH5$CxI5.9ypCBa6kXrUnSuvp1
    !
    aaa new-model
    !
    !
    aaa group server radius rad_eap
     server 192.168.1.5 auth-port 1812 acct-port 1813
    !
    aaa group server radius rad_acct
     server 192.168.1.5 auth-port 1812 acct-port 1813
    !
    aaa authentication login eap_methods group rad_eap
    aaa authentication enable default line enable
    aaa authorization exec default local
    aaa authorization commands 15 default local
    aaa accounting network acct_methods start-stop group rad_acct
    !
    aaa session-id common
    clock timezone EST -5
    clock summer-time EDT recurring
    ip domain name ESI
    !
    !
    dot11 syslog
    dot11 vlan-name one vlan 1
    dot11 vlan-name two vlan 2
    !
    dot11 ssid one
     vlan 1
     authentication open eap eap_methods
     authentication network-eap eap_methods
     authentication key-management wpa version 2
     accounting rad_acct
    !
    dot11 ssid two
     vlan 2
     authentication open
     guest-mode
    !
    dot11 network-map
    !
    !
    username ESIadmin privilege 15 secret 5 $1$p02C$WVHr5yKtRtQxuFxPU8NOx.
    !
    !
    bridge irb
    !
    !
    interface Dot11Radio0
     no ip address
     no ip route-cache
     !
     encryption vlan 1 mode ciphers aes-ccm
     !
     broadcast-key vlan 1 change 30
     !
     !
     ssid one
     !
     ssid two
     !
     antenna gain 0
     station-role root
    !
    interface Dot11Radio0.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     bridge-group 1 subscriber-loop-control
     bridge-group 1 block-unknown-source
     no bridge-group 1 source-learning
     no bridge-group 1 unicast-flooding
     bridge-group 1 spanning-disabled
    !
    interface Dot11Radio0.2
     encapsulation dot1Q 2
     no ip route-cache
     bridge-group 2
     bridge-group 2 subscriber-loop-control
     bridge-group 2 block-unknown-source
     no bridge-group 2 source-learning
     no bridge-group 2 unicast-flooding
     bridge-group 2 spanning-disabled
    !
    interface Dot11Radio1
     no ip address
     no ip route-cache
     shutdown
     !
     encryption vlan 1 mode ciphers aes-ccm
     !
     broadcast-key vlan 1 change 30
     !
     !
     ssid one
     !
     antenna gain 0
     dfs band 3 block
     channel dfs
     station-role root
    !
    interface Dot11Radio1.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     bridge-group 1 subscriber-loop-control
     bridge-group 1 block-unknown-source
     no bridge-group 1 source-learning
     no bridge-group 1 unicast-flooding
     bridge-group 1 spanning-disabled
    !
    interface GigabitEthernet0
     description the embedded AP GigabitEthernet 0 is an internal interface connecting AP with the host router
     no ip address
     no ip route-cache
    !
    interface GigabitEthernet0.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     no bridge-group 1 source-learning
     bridge-group 1 spanning-disabled
    !
    interface GigabitEthernet0.2
     encapsulation dot1Q 2
     no ip route-cache
     bridge-group 2
     no bridge-group 2 source-learning
     bridge-group 2 spanning-disabled
    !
    interface BVI1
     ip address 192.168.1.2 255.255.255.0
     no ip route-cache
    !
    ip http server
    no ip http secure-server
    ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag
    access-list 10 permit 192.168.1.0 0.0.0.255
    radius-server host 192.168.1.5 auth-port 1812 acct-port 1813 key *****
    bridge 1 route ip
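    For anyone triaging a similar guest-VLAN setup, a few read-only commands narrow down where guest traffic stops. This is a generic checklist, not part of the poster's configs, and exact command availability varies by IOS version:

      ! On the embedded AP: is the guest client associating at all, and is
      ! its traffic making it into bridge group 2 (the VLAN 2 path)?
      show dot11 associations
      show bridge 2

      ! On the router: is the VLAN 2 SVI up over the internal trunk, and are
      ! guest clients actually pulling leases from the "guest" pool?
      show interfaces Vlan2
      show ip dhcp binding

    If clients associate but never appear in bridge group 2 or the DHCP bindings, the problem is on the AP/trunk side; if they get a 192.168.3.x lease but go no further, look at the router's ACL 120 and zone policy instead.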


  • Wireless access point -> Powerline -> Router -> Internet, should this work?

    - by Anthony
    My network at home used to be a laptop and a desktop connected wirelessly to a single wireless ADSL router, a Cisco 877W. Wireless reception around the house with this setup was quite unreliable, so I've gone about looking to improve it. I purchased some Belkin gigabit powerline adapters and I've got these working fine. I can hook a computer up to one of the powerline adapters, and with the other one plugged into the ADSL router the computer has internet access. Additionally, I can hook a Netgear DG834G wireless ADSL router into it with the ADSL not plugged in, and after turning off DHCP I can connect a computer to the network over RJ45. Everything works fine. However, if I set up a wireless network on the Netgear, then any computer that connects wirelessly to it cannot access the internet. It gets an IP address via DHCP very slowly, and a valid one, but it cannot access the internet. It can, however, communicate with the RJ45'd computer also connected to the Netgear. I wondered whether this could be a problem with the Netgear, so I've borrowed a Cisco Aironet 1200 and got this working fine when it's attached directly to the primary ADSL router. I can connect to it wirelessly and get onto the internet. However, if I then plug it into the Netgear, I can communicate with other devices attached to the Netgear, but can't get any further than the Netgear. All the while, the other devices RJ45'd to the Netgear are communicating with the internet just fine. I'm starting to suspect it's one of two things causing the problem: 1) For some reason the Belkin powerline adapters don't like carrying wireless-originating signals. Could this be possible? 2) The primary Cisco ADSL router doesn't want to communicate with other devices on my network more than one hop away from it. I'm making an assumption here that within the Netgear box the wireless and wired sides are handled differently. Could this be true? Has anyone successfully set up something similar to what I'm trying, with a wireless device on the other side of a pair of powerline connectors?

    Update 06/07/2010 - Response to irrational John, 28 June: Thanks for the answer John - and for clearing up some of my questions. The model number of the Belkin powerline adapters is F5D4076. Security was apparently enabled by default on them, and I didn't change it from the default setting. The network diagram in your answer shows exactly what I'm trying to set up. I've followed that guide and I'm still not able to get things working properly. The thing that perplexes me is that wired network traffic works just fine - it's only the wireless traffic that doesn't. This is with the same laptop, and the same DHCP or static IPs. "1. What IP addresses did you assign to each router? What subnet masks are you using?" - The subnet is 255.255.255.0; the router connected to the ADSL is 192.168.153.1 and that has the DHCP server. For the access point on the other side of the powerline adapters I've tried both a static IP of 192.168.153.110 on the same subnet, and a DHCP-assigned IP. The other devices are DHCP, although I also tried manually entering IP settings. "2. Have you correctly enabled DHCP on only one of the routers and disabled it on all the others?" Yes I have - only the internet-connected router has DHCP enabled. The IP range for the DHCP is from 192.168.153.11 to 192.168.153.200. The strange thing is that wired connections on the LAN, plugged into any router, work fine - it's only the wireless connections that aren't working when they're plugged into the non-primary AP. "Since the routers you are using appear to integrate an ADSL modem I'm assuming there is no WAN port on them." There's no NAT within the LAN, and all wired connections are connected to LAN ports. It's something wrong with the wireless - wired works fine throughout the whole LAN.

    Update 06/07/2010 - Response to irrational John, 29 June: The diagram you've drawn in your answer shows pretty much exactly what I'm trying to do. I've spent another evening trying different things and made some progress, but I'm still scratching my head. I've borrowed a Netgear access point and been trying with this, and the strange thing is that my PC is working now - this is a Windows 7 PC connected to the access point in the position of where the DG834G is in the diagram. Meanwhile, however, I have an old PowerBook G4 12" I use for music, and while that has a DHCP-assigned IP address, it's not getting any network throughput to either LAN or internet addresses. To make matters stranger, my phone appears to be working intermittently when it's on the wifi. The access point is a Netgear WPN802v1, with DHCP and NAT both switched off, running firmware 2.0.9.0. Last night I set it up with exactly the same settings, and similar to tonight I could get a couple of devices to work, and a couple not to. By the morning, however, everything had stopped working - nothing could get a DHCP IP address. I rebooted the 877W earlier this evening and I'm wondering whether this is why a few things are working now. "Could it be possible that the issue could be with the 877W?" I didn't configure this - is it possible that the DHCP server only likes assigning addresses to devices that are immediately attached to it? Or similarly, could a firewall be blocking too many addresses coming through one device (i.e. the access point)? This could explain why devices work at the start but not by the end. In reply to your questions: "1. I looked at the Netgear DG834G support page. There are five versions of this router. Which version do you have? Netgear usually lists this on the label on the bottom of the router. What version of the firmware does it have?" It's a DG834Gv3, and the firmware is the last one on the Netgear site, version 4.01.40. "3. Not knowing which version you have, I glanced at the reference manual for the DG834G v3. In the section for Wireless Settings under the subsection Wireless Access Point there is a check box for a Wireless Isolation setting. If you have this setting it should be off/unchecked. If it is checked then any device connected via wireless would not be able to talk to any other device on the LAN. This sounds like your problem so maybe this is the cause?" I've checked this and it's switched off. I've changed the IP of the access point to something outside the DHCP range - it's now 192.168.153.5, with DHCP starting at .11 and going up to .254. Thanks for the tip about this - I only have a few devices, so I wouldn't anticipate the DHCP server assigning up to 110, but better safe than sorry. Finally, one more thing I should add about the PowerBook G4 that's not working: it gets a DHCP IP address and it can communicate with the WPN802, as I can visit the administration page. Anything further than this, however, it can't reach; I can't administrate the 877W at 192.168.153.1. Strangely, however, when I open Finder on the same PowerBook it detects my NAS, which is attached directly by wire to the 877W. If I try to browse it, it says the connection failed.

    RE: "Perhaps the problem with your Powerbook is with DNS?.." The IP settings on the PowerBook are identical to those of the PC with the exception of the IP address; the PC is 192.168.153.17 and the PowerBook is 192.168.153.12. The subnets are the same, 255.255.255.0, the default gateway is the same, .1, and the DNS servers are the same. I administrate the 877W by going to 192.168.153.1 in the browser. This is what isn't working from the PowerBook, despite the PC working fine when I do the same. Meanwhile, I can administrate the AP on 192.168.153.5 from both the PC and the PowerBook.

    Update 06/07/2010 - FINAL RESOLUTION, of sorts: First off, sorry for the length of this question; I need to start practicing a more concise writing style, so I'm going to try to keep this bit brief. After much fiddling, and with the hugely appreciated help of irrational John, I have come to the conclusion that it's something wrong with the PowerBook. I believe that this was perhaps the reason I doubted things worked at the very beginning. I now have the original DG834Gv3 running both wirelessly and wired, and both wired and wireless devices get internet connectivity. The only anomaly is the PowerBook, which I've had to keep wired, as no matter what I do it refuses to work wirelessly. I still have suspicions that the 877W isn't quite right; I'm fairly sure that if I plug the powerline adapter into a different LAN port on it then everything will break. I've just about run out of patience to test this further, and I think I need to go into the 877W's config to match the LAN ports' settings. I'm accepting irrational John's answer as he's been enormously helpful, way above the call of duty, and for this line he wrote: "Beats the heck out of me," which in the midst of great frustration made me chuckle, and for a sentence in one of his comments to the same answer: "If it is specific to the Powerbook I would put that issue aside until after you feel you have the rest of your LAN and the additional WAP all working together correctly." It was this second sentence that made me put the PowerBook aside and concentrate on the other devices, which ultimately led to getting things working.


  • Unicenter Software Delivery 4 not able to connect to MS SQL 2000 Database after W2003 SP2 upgrade

    - by grub
    Hello everyone. Yesterday I installed Windows Server 2003 Service Pack 2 on a Windows Server 2003 machine which has Unicenter Software Delivery 4 installed. Prior to the installation I disabled every CA service on the server (BrightStor, SDO, RCO, TNG) and the MS SQL 2000 service. After the installation of SP2 I enabled the services again, but the Unicenter service is no longer able to connect to the MS SQL 2000 database. The database itself is up and running and I can connect to it with the Enterprise Manager. A DBCC CHECKDB doesn't return any errors on the Unicenter database. The Unicenter service throws the following error messages during startup:

    IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends

    If I check the Unicenter configuration with *chkmib_l*, the tool throws an exception and creates a small dump file.

    An Exception Occurred: Time: 27/05 09:49:38,928 Reason: ChkMIB_l.exe caused an UNKNOWN_EXCEPTION in module kernel32.dll at 7C82001B:77E4BEE7 Registers: EAX=0012F908 EBX=00000000 ECX=00000000 EDX=02410004 ESI=0012F998 EDI=0012F998 EBP=0012F958 ESP=0012F904 EIP=77E4BEE7 FLG=00000206 CS =7C82001B DS =B90023 SS =120023 ES =120023 FS =7C82003B GS =3F0000 Call Stack: 7C82001B:77E4BEE7 (0xE06D7363 0x00000001 0x00000003 0x0012F98C) kernel32.dll 7C82001B:77BB3259 (0x0012F9B8 0x2B017C50 0x2B024404 0x00B68C98) MSVCRT.dll 7C82001B:2B010C42 (0x00020003 0x010C00FE 0x003F0190 0x00B69050) PS.dll << SOFTWARE DELIVERY INSTANCE INFO >> TRIGGER 0(1) instances: JCE 0(1) instances: TM 0(1) instances: IM 0(1) instances: DM 0(1) instances: DPU 0(71) instances: NATF 0(1) instances: MIBCONV 0(0) instances: API 0(4) instances: DTSFT 0(0) instances: TNGPOP 0(0) instances: DGATE 0(0) instances: << FLUSHING MEMORY TRACES >> << STOP FLUSHING MEMORY TRACES >>

    I compared the configuration of the SDO service and the system configuration with another server on which Windows Server 2003 SP2 is installed and SDO is working. The configuration is the same, and the same driver and software versions are used. Do you have any idea what causes the connection issue? Should I uninstall the Unicenter service and do a fresh installation on the server, or should I remove Windows Server 2003 SP2? I don't want to remove SP2 because it's a requirement for WSUS 3 SP2, and I really don't want to know how many exploits are possible on such an old system ;-) Thank you very much and have a nice day. Below you can find more detailed information about the system and the SDO service.
psinfo output (system information) System information for \\CZZAAS1003: Uptime: 0 days 14 hours 38 minutes 50 seconds Kernel version: Microsoft Windows Server 2003, Multiprocessor Free Product type: Standard Edition Product version: 5.2 Service pack: 2 Kernel build number: 3790 Install date: 23.9.2004, 11:16:11s IE version: 6.0000 System root: C:\WINDOWS Processors: 2 Processor speed: 2.3 GHz Processor type: Intel(R) Xeon(TM) CPU Physical memory: 1024 MB Video driver: RAGE XL PCI Family (Microsoft Corporation) sdver output (Unicenter Software delivery version) Unicenter Software Delivery 4.0 SP1 I2 ENU [2901] Copyright 2004 Computer Associates International, Incorporated ms sql 2000 version and odbc driver version MS SQL 2000 Server Standard Edition Product Version: 8.00.760 (SP3) ODBC Driver: SQL Server - Version 2000.86.3959.00 complete Unicenter Software delivery service log file TRIGGER[1] 27/05 10:38:28,366 SD Trigger Agent has started NATF[1] 27/05 10:38:28,928 Initiation phase finished IM[1] 27/05 10:38:31,272 Installation Manager in init phase IM[1] 27/05 10:38:31,694 Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:31,694 sqls error details: IM[1] 27/05 10:38:31,694 (null) IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. IM[1] 27/05 10:38:32,069 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. IM[1] 27/05 10:38:32,069 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process IM(L) - [004152] failed to open database SDDATA. dbopen() call failed. IM[1] 27/05 10:38:32,069 sqls error details: IM[1] 27/05 10:38:32,069 (null) IM[1] 27/05 10:38:32,069 returned 0. IM[1] 27/05 10:38:32,084 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. IM[1] 27/05 10:38:32,084 Failed to open database. IM[1] 27/05 10:38:32,084 Installation Manager ends TM[1] 27/05 10:38:32,116 Task Manager in init phase TM[1] 27/05 10:38:32,334 Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,334 sqls error details: TM[1] 27/05 10:38:32,334 (null) TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. TM[1] 27/05 10:38:32,381 ##EXCEPTION## TableError C@:TaskmgrL\ASMTML.CXX:596. TM[1] 27/05 10:38:32,381 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. Process TM(L) - [006132] failed to open database SDDATA. dbopen() call failed. TM[1] 27/05 10:38:32,381 sqls error details: TM[1] 27/05 10:38:32,381 (null) TM[1] 27/05 10:38:32,381 returned 0. TM[1] 27/05 10:38:32,381 Persistent Storage could not be opened. Error cause is found in the ASM Event Log. Restart Task Manager. TM[1] 27/05 10:38:32,381 Failed to open database. TM[1] 27/05 10:38:32,381 Task Manager ends DM[1] 27/05 10:38:33,272 Dialogue Manager is now active API[1] 27/05 10:38:34,397 API Server Process in init phase API[1] 27/05 10:38:34,397 API - SDNLS_Init API[1] 27/05 10:38:34,397 API - connectEM API[1] 27/05 10:38:34,412 API - apiServ.init DM[1] 27/05 10:38:34,678 **AND** 1 Agents triggered API[1] 27/05 10:38:34,709 Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,709 sqls error details: API[1] 27/05 10:38:34,709 (null) API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError T@:PS_SQLS\isam_db.cxx:744. API[1] 27/05 10:38:34,756 ##EXCEPTION## TableError C@:MainAPIL\APISERVL.CXX:246. API[1] 27/05 10:38:34,756 ##EXCEPTION## ErrorCode: 4711 in SDDATA:Isam::Isam. 
Process API(L) - [005680] failed to open database SDDATA. dbopen() call failed. API[1] 27/05 10:38:34,756 sqls error details: API[1] 27/05 10:38:34,756 (null) API[1] 27/05 10:38:34,756 returned 0. API[1] 27/05 10:38:34,756 Open of the database failed. API[1] 27/05 10:38:34,756 API - apiServ.init complete API[1] 27/05 10:38:34,756 API - start_APIServer DM[1] 27/05 10:38:34,803 CZZAAR1037 DPU[1:CZZAAR1037] 27/05 10:38:35,772 DPU in init phase DPU[1:CZZAAR1037] 27/05 10:38:36,100 >> GetManagerData DPU[1:CZZAAR1037] 27/05 10:38:36,287 >> SetCompInfo DPU[1:CZZAAR1037] 27/05 10:38:36,334 >> GetContainerList DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,350 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,366 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,381 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,397 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,522 >> SetCompAttr DPU[1:CZZAAR1037] 27/05 10:38:36,569 >> SetDetected DPU[1:CZZAAR1037] 27/05 10:38:36,584 disconnect DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6ad DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6b7 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6c1 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6cb DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b6f9 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b71a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b724 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b72e DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b738 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b742 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b74c DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b756 DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b78a DPU[1:CZZAAR1037] 27/05 10:38:36,584 getJobState 3 from 5b7af DPU[1:CZZAAR1037] 27/05 10:38:36,584 DPU ends DM[1] 27/05 10:38:38,006 **AND** 0 Agents triggered JCE[1] 27/05 10:38:38,053 JCE starts DM[1] 27/05 10:38:38,287 CZZAAS1003 DPU[2:CZZAAS1003] 
27/05 10:38:38,412 DPU in init phase DPU[2:CZZAAS1003] 27/05 10:38:38,647 >> GetManagerData DPU[2:CZZAAS1003] 27/05 10:38:38,756 >> SetCompInfo DPU[2:CZZAAS1003] 27/05 10:38:38,787 >> GetContainerList DM[1] 27/05 10:38:38,850 **AND** 1 Agents triggered DM[1] 27/05 10:38:38,928 CZZAAR1124 DPU[3:CZZAAR1124] 27/05 10:38:39,053 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,272 >> GetManagerData DM[1] 27/05 10:38:39,334 **AND** 1 Agents triggered DPU[3:CZZAAR1124] 27/05 10:38:39,381 >> SetCompInfo DPU[3:CZZAAR1124] 27/05 10:38:39,412 >> GetContainerList DM[1] 27/05 10:38:39,412 CZZAAR1125 DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,428 getJobState 3 from 5b88e DPU[2:CZZAAS1003] 27/05 10:38:39,491 >> SetCompAttr DPU[3:CZZAAR1124] 27/05 10:38:39,522 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:39,522 DPU in init phase DPU[3:CZZAAR1124] 27/05 10:38:39,584 >> SetDetected DPU[2:CZZAAS1003] 27/05 10:38:39,584 >> SetDetected DPU[3:CZZAAR1124] 27/05 10:38:39,584 disconnect DPU[3:CZZAAR1124] 27/05 10:38:39,600 getJobState 3 from 5b88e DPU[3:CZZAAR1124] 27/05 10:38:39,600 DPU ends DPU[2:CZZAAS1003] 27/05 10:38:39,631 disconnect DPU[2:CZZAAS1003] 27/05 10:38:39,631 DPU ends DPU[4:CZZAAR1125] 27/05 10:38:39,756 >> GetManagerData DPU[4:CZZAAR1125] 27/05 10:38:39,850 >> SetCompInfo DPU[4:CZZAAR1125] 27/05 10:38:39,881 >> GetContainerList DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,897 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:39,991 >> SetCompAttr DPU[4:CZZAAR1125] 27/05 10:38:40,100 >> SetDetected DPU[4:CZZAAR1125] 27/05 10:38:40,116 disconnect DPU[4:CZZAAR1125] 27/05 10:38:40,116 getJobState 3 from 5b8a9 DPU[4:CZZAAR1125] 27/05 10:38:40,116 DPU ends DM[1] 27/05 10:38:40,741 **AND** 0 Agents triggered JCE[1] 27/05 10:38:42,756 JCE ends DM[1] 27/05 10:38:47,475 **AND** 0 Agents triggered DM[1] 27/05 10:38:54,241 **AND** 0 Agents triggered
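    One way to split this problem in half: dbopen() is the DB-Library entry point, so the failing path is the legacy DB-Library client, which neither Enterprise Manager nor DBCC CHECKDB exercises. Comparing the two command-line clients that ship with SQL 2000, run from this server under the service's account, would show whether SP2 broke the DB-Library side specifically. A hedged sketch; the database name comes from the log, but treating CZZAAS1003 as a local default instance is my assumption:

      rem DB-Library path (same client family as dbopen):
      isql -S CZZAAS1003 -E -d SDDATA -Q "SELECT 1"

      rem ODBC path, for comparison:
      osql -S CZZAAS1003 -E -d SDDATA -Q "SELECT 1"

    If isql fails where osql succeeds, the SP2 change is in the network-library/DB-Library layer rather than in the database itself, which would argue against reinstalling Unicenter.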


  • CentOS 6 LEMP update - dependency error issue

    - by Latheesan Kanes
    I have setup a LEMP server following the guide Install Nginx/PHP-FPM on Fedora 20/19, CentOS/RHEL 6.5/5.10. It's been a while since I did the setup, so I wanted to grab the latest updates from REMI repository. I ran the following command: yum --enablerepo=remi,remi-php55 update I now get these dependency related errors: # yum --enablerepo=remi update Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.nl.leaseweb.net * epel: mirror.1000mbps.com * extras: mirror.nl.leaseweb.net * remi: remi.schlundtech.de * updates: centos.mirror1.spango.com Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package chkconfig.x86_64 0:1.3.49.3-2.el6 will be updated ---> Package chkconfig.x86_64 0:1.3.49.3-2.el6_4.1 will be an update ---> Package glibc.x86_64 0:2.12-1.107.el6_4.4 will be updated ---> Package glibc.x86_64 0:2.12-1.107.el6_4.5 will be an update ---> Package glibc-common.x86_64 0:2.12-1.107.el6_4.4 will be updated ---> Package glibc-common.x86_64 0:2.12-1.107.el6_4.5 will be an update ---> Package gnupg2.x86_64 0:2.0.14-4.el6 will be updated ---> Package gnupg2.x86_64 0:2.0.14-6.el6_4 will be an update ---> Package iputils.x86_64 0:20071127-17.el6_4 will be updated ---> Package iputils.x86_64 0:20071127-17.el6_4.2 will be an update ---> Package kernel.x86_64 0:2.6.32-358.23.2.el6 will be installed ---> Package kernel-firmware.noarch 0:2.6.32-358.18.1.el6 will be updated ---> Package kernel-firmware.noarch 0:2.6.32-358.23.2.el6 will be an update ---> Package libgcrypt.x86_64 0:1.4.5-9.el6_2.2 will be updated ---> Package libgcrypt.x86_64 0:1.4.5-11.el6_4 will be an update ---> Package mysql-libs.x86_64 0:5.1.69-1.el6_4 will be updated --> Processing Dependency: libmysqlclient.so.16()(64bit) for package: 2:postfix-2.6.6-2.2.el6_1.x86_64 --> Processing Dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit) for package: 2:postfix-2.6.6-2.2.el6_1.x86_64 ---> Package mysql-libs.x86_64 0:5.5.34-1.el6.remi will be an update ---> Package nginx.x86_64 0:1.4.2-1.el6.ngx will be updated ---> Package nginx.x86_64 0:1.4.3-1.el6.ngx will be an update ---> Package php-pear.noarch 1:1.9.4-20.el6.remi will be updated ---> Package php-pear.noarch 1:1.9.4-23.el6.remi will be an update ---> Package php-pecl-jsonc.x86_64 0:1.3.2-1.el6.remi.1 will be updated ---> Package php-pecl-jsonc.x86_64 0:1.3.2-2.el6.remi will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 ---> Package php-pecl-mongo.x86_64 0:1.4.3-1.el6.remi.1 will be updated ---> Package php-pecl-mongo.x86_64 0:1.4.4-1.el6.remi will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.3.svn313074.el6.remi.5 will be updated ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.4.svn332053.el6.remi.5.4 will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 ---> Package postgresql-libs.x86_64 0:8.4.13-1.el6_3 will be updated ---> Package postgresql-libs.x86_64 0:8.4.18-1.el6_4 will be 
an update ---> Package remi-release.noarch 0:6-2.el6.remi will be updated ---> Package remi-release.noarch 0:6.4-1.el6.remi will be an update ---> Package rsync.x86_64 0:3.0.6-9.el6 will be updated ---> Package rsync.x86_64 0:3.0.6-9.el6_4.1 will be an update ---> Package selinux-policy.noarch 0:3.7.19-195.el6_4.12 will be updated ---> Package selinux-policy.noarch 0:3.7.19-195.el6_4.18 will be an update ---> Package selinux-policy-targeted.noarch 0:3.7.19-195.el6_4.12 will be updated ---> Package selinux-policy-targeted.noarch 0:3.7.19-195.el6_4.18 will be an update ---> Package setup.noarch 0:2.8.14-20.el6 will be updated ---> Package setup.noarch 0:2.8.14-20.el6_4.1 will be an update ---> Package tzdata.noarch 0:2013c-2.el6 will be updated ---> Package tzdata.noarch 0:2013g-1.el6 will be an update ---> Package xinetd.x86_64 2:2.3.14-38.el6 will be updated ---> Package xinetd.x86_64 2:2.3.14-39.el6_4 will be an update --> Running transaction check ---> Package compat-mysql51.x86_64 0:5.1.54-1.el6.remi will be installed ---> Package php-pecl-jsonc.x86_64 0:1.3.2-2.el6.remi will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 ---> Package php-pecl-mongo.x86_64 0:1.4.4-1.el6.remi will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.4.svn332053.el6.remi.5.4 will be an update --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 --> Finished Dependency Resolution Error: Package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 (remi) Requires: php(zend-abi) = 20100525-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(zend-abi) = 20121212-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(zend-abi) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(zend-abi) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Error: Package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 (remi) Requires: php(zend-abi) = 20100525-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(zend-abi) = 20121212-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(zend-abi) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(zend-abi) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Error: Package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 (remi) Requires: php(api) = 20100412-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(api) = 20121113-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(api) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(api) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 Error: Package: 
php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 (remi) Requires: php(zend-abi) = 20100525-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(zend-abi) = 20121212-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(zend-abi) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(zend-abi) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(zend-abi) = 20100525-x86-64 Error: Package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 (remi) Requires: php(api) = 20100412-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(api) = 20121113-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(api) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(api) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 Error: Package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 (remi) Requires: php(api) = 20100412-x86-64 Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test) php(api) = 20121113-64 Available: php-common-5.3.3-22.el6.x86_64 (base) php(api) = 20090626 Available: php-common-5.3.3-23.el6_4.x86_64 (updates) php(api) = 20090626 Available: php-common-5.4.21-1.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 Available: php-common-5.4.21-2.el6.remi.x86_64 (remi) php(api) = 20100412-x86-64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Any idea how to solve these errors? Am I missing a package? or is this a bug?
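    One observation worth making about the paste above: the transaction that fails was started as yum --enablerepo=remi update (only remi appears in the mirror list at the top), while the installed php-common 5.5.4 came from @remi-test. The php-pecl-* updates offered by plain remi are built for PHP 5.4 (php(api) 20100412, php(zend-abi) 20100525), which is exactly the ABI mismatch in the errors. A sketch of how to check and rerun with matching repositories; whether 5.5 builds of those pecl packages exist in remi-php55 or remi-test at a given time is an assumption:

      # which repos would this transaction really use?
      yum repolist enabled

      # pull the pecl updates from the same PHP 5.5 stream as php-common:
      yum --enablerepo=remi,remi-php55 update

      # or, since php-common 5.5.4 was installed from remi-test:
      yum --enablerepo=remi,remi-test update

    In other words, this looks like a repository-mix issue rather than a bug: every enabled-repo set has to agree on one PHP branch.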


  • Implementation of ZipCrypto / Zip 2.0 encryption in Java

    - by gomesla
    I'm trying to implement the ZipCrypto / Zip 2.0 encryption algorithm to deal with encrypted zip files, as discussed in http://www.pkware.com/documents/casestudies/APPNOTE.TXT I believe I've followed the spec, but I just can't seem to get it working. I'm fairly sure the issue has to do with my interpretation of the CRC algorithm. The documentation states:

    CRC-32: (4 bytes) The CRC-32 algorithm was generously contributed by David Schwaderer and can be found in his excellent book "C Programmers Guide to NetBIOS" published by Howard W. Sams & Co. Inc. The 'magic number' for the CRC is 0xdebb20e3. The proper CRC pre and post conditioning is used, meaning that the CRC register is pre-conditioned with all ones (a starting value of 0xffffffff) and the value is post-conditioned by taking the one's complement of the CRC residual.

    Here is the snippet that I'm using for the CRC-32:

    public class PKZIPCRC32 {
        private static final int CRC32_POLYNOMIAL = 0xdebb20e3;

        private int crc = 0xffffffff;
        private int CRCTable[];

        public PKZIPCRC32() {
            buildCRCTable();
        }

        private void buildCRCTable() {
            int i, j;
            CRCTable = new int[256];
            for (i = 0; i <= 255; i++) {
                crc = i;
                for (j = 8; j > 0; j--)
                    if ((crc & 1) == 1)
                        crc = (crc >>> 1) ^ CRC32_POLYNOMIAL;
                    else
                        crc >>>= 1;
                CRCTable[i] = crc;
            }
        }

        private int crc32(byte buffer[], int start, int count, int lastcrc) {
            int temp1, temp2;
            int i = start;
            crc = lastcrc;
            while (count-- != 0) {
                temp1 = crc >>> 8;
                temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF];
                crc = temp1 ^ temp2;
            }
            return crc;
        }

        public int crc32(int crc, byte buffer) {
            return crc32(new byte[] { buffer }, 0, 1, crc);
        }
    }

    Below is my complete code. Can anyone see what I'm doing wrong?

    package org.apache.commons.compress.archivers.zip;

    import java.io.IOException;
    import java.io.InputStream;

    public class ZipCryptoInputStream extends InputStream {

        public class PKZIPCRC32 {
            private static final int CRC32_POLYNOMIAL = 0xdebb20e3;
            private int crc = 0xffffffff;
            private int CRCTable[];

            public PKZIPCRC32() {
                buildCRCTable();
            }

            private void buildCRCTable() {
                int i, j;
                CRCTable = new int[256];
                for (i = 0; i <= 255; i++) {
                    crc = i;
                    for (j = 8; j > 0; j--)
                        if ((crc & 1) == 1)
                            crc = (crc >>> 1) ^ CRC32_POLYNOMIAL;
                        else
                            crc >>>= 1;
                    CRCTable[i] = crc;
                }
            }

            private int crc32(byte buffer[], int start, int count, int lastcrc) {
                int temp1, temp2;
                int i = start;
                crc = lastcrc;
                while (count-- != 0) {
                    temp1 = crc >>> 8;
                    temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF];
                    crc = temp1 ^ temp2;
                }
                return crc;
            }

            public int crc32(int crc, byte buffer) {
                return crc32(new byte[] { buffer }, 0, 1, crc);
            }
        }

        private static final long ENCRYPTION_KEY_1 = 0x12345678;
        private static final long ENCRYPTION_KEY_2 = 0x23456789;
        private static final long ENCRYPTION_KEY_3 = 0x34567890;

        private InputStream baseInputStream = null;
        private final PKZIPCRC32 checksumEngine = new PKZIPCRC32();
        private long[] keys = null;

        public ZipCryptoInputStream(ZipArchiveEntry zipEntry, InputStream inputStream, String passwd) throws Exception {
            baseInputStream = inputStream;

            // Decryption
            // ----------
            // PKZIP encrypts the compressed data stream. Encrypted files must
            // be decrypted before they can be extracted.
            //
            // Each encrypted file has an extra 12 bytes stored at the start of
            // the data area defining the encryption header for that file. The
            // encryption header is originally set to random values, and then
            // itself encrypted, using three, 32-bit keys. The key values are
            // initialized using the supplied encryption password. After each byte
            // is encrypted, the keys are then updated using pseudo-random number
            // generation techniques in combination with the same CRC-32 algorithm
            // used in PKZIP and described elsewhere in this document.
            //
            // The following is the basic steps required to decrypt a file:
            //
            // 1) Initialize the three 32-bit keys with the password.
            // 2) Read and decrypt the 12-byte encryption header, further
            //    initializing the encryption keys.
            // 3) Read and decrypt the compressed data stream using the
            //    encryption keys.

            // Step 1 - Initializing the encryption keys
            // -----------------------------------------
            //
            // Key(0) <- 305419896
            // Key(1) <- 591751049
            // Key(2) <- 878082192
            //
            // loop for i <- 0 to length(password)-1
            //     update_keys(password(i))
            // end loop
            //
            // Where update_keys() is defined as:
            //
            // update_keys(char):
            //     Key(0) <- crc32(key(0),char)
            //     Key(1) <- Key(1) + (Key(0) & 000000ffH)
            //     Key(1) <- Key(1) * 134775813 + 1
            //     Key(2) <- crc32(key(2),key(1) >> 24)
            // end update_keys
            //
            // Where crc32(old_crc,char) is a routine that given a CRC value and a
            // character, returns an updated CRC value after applying the CRC-32
            // algorithm described elsewhere in this document.

            keys = new long[] { ENCRYPTION_KEY_1, ENCRYPTION_KEY_2, ENCRYPTION_KEY_3 };
            for (int i = 0; i < passwd.length(); ++i) {
                update_keys((byte) passwd.charAt(i));
            }

            // Step 2 - Decrypting the encryption header
            // -----------------------------------------
            //
            // The purpose of this step is to further initialize the encryption
            // keys, based on random data, to render a plaintext attack on the
            // data ineffective.
            //
            // Read the 12-byte encryption header into Buffer, in locations
            // Buffer(0) thru Buffer(11).
            //
            // loop for i <- 0 to 11
            //     C <- buffer(i) ^ decrypt_byte()
            //     update_keys(C)
            //     buffer(i) <- C
            // end loop
            //
            // Where decrypt_byte() is defined as:
            //
            // unsigned char decrypt_byte()
            //     local unsigned short temp
            //     temp <- Key(2) | 2
            //     decrypt_byte <- (temp * (temp ^ 1)) >> 8
            // end decrypt_byte
            //
            // After the header is decrypted, the last 1 or 2 bytes in Buffer
            // should be the high-order word/byte of the CRC for the file being
            // decrypted, stored in Intel low-byte/high-byte order. Versions of
            // PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is
            // used on versions after 2.0. This can be used to test if the password
            // supplied is correct or not.

            byte[] encryptionHeader = new byte[12];
            baseInputStream.read(encryptionHeader);
            for (int i = 0; i < encryptionHeader.length; i++) {
                encryptionHeader[i] ^= decrypt_byte();
                update_keys(encryptionHeader[i]);
            }
        }

        protected byte decrypt_byte() {
            byte temp = (byte) (keys[2] | 2);
            return (byte) ((temp * (temp ^ 1)) >> 8);
        }

        @Override
        public int read() throws IOException {
            // Step 3 - Decrypting the compressed data stream
            // ----------------------------------------------
            //
            // The compressed data stream can be decrypted as follows:
            //
            // loop until done
            //     read a character into C
            //     Temp <- C ^ decrypt_byte()
            //     update_keys(temp)
            //     output Temp
            // end loop

            int read = baseInputStream.read();
            read ^= decrypt_byte();
            update_keys((byte) read);
            return read;
        }

        private final void update_keys(byte ch) {
            keys[0] = checksumEngine.crc32((int) keys[0], ch);
            keys[1] = keys[1] + (byte) keys[0];
            keys[1] = keys[1] * 134775813 + 1;
            keys[2] = checksumEngine.crc32((int) keys[2], (byte) (keys[1] >> 24));
        }
    }
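    For what it's worth, there are three places where the code above diverges from the APPNOTE text it quotes, collected here as a small, hedged sketch (the class and method names are illustrative, not from any library). CRC-32 tables are conventionally built from the reflected polynomial 0xEDB88320; the 0xdebb20e3 "magic number" in APPNOTE is generally understood to be the check residue, not the table constant. The spec's Key(1) <- Key(1) + (Key(0) & 000000ffH) masks to 8 bits, whereas (byte) keys[0] sign-extends. And decrypt_byte's temp is an unsigned short in the spec, so truncating keys[2] | 2 to a byte discards bits the multiplication needs:

    public final class ZipCryptoSketch {
        private static final int[] TABLE = new int[256];
        static {
            // Standard (zlib/PKZIP) CRC-32 table, reflected polynomial 0xEDB88320.
            for (int n = 0; n < 256; n++) {
                int c = n;
                for (int k = 0; k < 8; k++)
                    c = ((c & 1) != 0) ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
                TABLE[n] = c;
            }
        }

        // One raw CRC step; ZipCrypto keeps its keys in register form, so no
        // 0xffffffff pre/post conditioning is applied per call.
        static int crc32(int crc, byte b) {
            return (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];
        }

        // update_keys() as quoted in the spec; int arithmetic supplies the
        // mod-2^32 wraparound of the 134775813 multiply for free.
        static void updateKeys(int[] keys, byte ch) {
            keys[0] = crc32(keys[0], ch);
            keys[1] = (keys[1] + (keys[0] & 0xFF)) * 134775813 + 1; // & 0xFF, not a sign-extending (byte) cast
            keys[2] = crc32(keys[2], (byte) (keys[1] >>> 24));
        }

        // decrypt_byte(): temp is a 16-bit unsigned quantity in the spec.
        static int decryptByte(int[] keys) {
            int temp = (keys[2] | 2) & 0xFFFF;
            return ((temp * (temp ^ 1)) >>> 8) & 0xFF;
        }
    }

    The read() path has the same flavor of problem: the decrypted value should be masked with 0xFF (and the end-of-stream -1 handled before decrypting), otherwise sign extension leaks into the key updates.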


  • Intermittent Could not load file or assembly / PolicyExceptions

    - by Mark S. Rasmussen
    Intermittently we'll get errors like these from our .NET 3.5 web applications: Exception: System.Configuration.ConfigurationErrorsException: Could not load file or assembly 'itextsharp, Version=4.1.2.0, Culture=neutral, PublicKeyToken=8354ae6d2174ddca' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) (C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Config\web.config line 59) ---> System.IO.FileLoadException: Could not load file or assembly 'itextsharp, Version=4.1.2.0, Culture=neutral, PublicKeyToken=8354ae6d2174ddca' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) File name: 'itextsharp, Version=4.1.2.0, Culture=neutral, PublicKeyToken=8354ae6d2174ddca' ---> System.Security.Policy.PolicyException: Execution permission cannot be acquired. at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission) at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission) at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) --- End of inner exception stack trace --- at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.WebDirectoryBatchCompiler..ctor(VirtualDirectory vdir) at System.Web.Compilation.BuildManager.BatchCompileWebDirectoryInternal(VirtualDirectory vdir, Boolean ignoreErrors) at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath) at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert) at System.Web.Compilation.BuildManager.GetCompiledType(String virtualPath) at System.Web.Script.Services.WebServiceData.GetWebServiceData(HttpContext context, String virtualPath, Boolean failIfNoData, Boolean pageMethods, Boolean inlineScript) at System.Web.Script.Services.RestHandler.CreateHandler(HttpContext context) at 
System.Web.Script.Services.ScriptHandlerFactory.GetHandler(HttpContext context, String requestType, String url, String pathTranslated) at System.Web.HttpApplication.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Inner exception: System.IO.FileLoadException: Could not load file or assembly 'itextsharp, Version=4.1.2.0, Culture=neutral, PublicKeyToken=8354ae6d2174ddca' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) File name: 'itextsharp, Version=4.1.2.0, Culture=neutral, PublicKeyToken=8354ae6d2174ddca' ---> System.Security.Policy.PolicyException: Execution permission cannot be acquired. at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission) at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission) at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) web.config line 59 being: <add assembly="*"/> When these occur, the sites will YSOD untill we recycle the application pool. The sites may run for days/weeks before this occurs, or it might happen twice within the hour. I have not been able to pinpoint this to any specific request/function in our system. In this case it points to itextsharp, but it randomly points to any assembly referenced by our application, both internal and external. Running caspol verifies that the DLL has full trust permissions: C:\Windows\Microsoft.NET\Framework64\v2.0.50727>caspol -rsg D:\...\bin\itextsharp.dll Microsoft (R) .NET Framework CasPol 2.0.50727.3053 Copyright (c) Microsoft Corporation. All rights reserved. Level = Enterprise Code Groups: 1. All code: FullTrust Level = Machine Code Groups: 1. All code: Nothing 1.1. Zone - MyComputer: FullTrust Level = User Code Groups: 1. All code: FullTrust Success Our application is running on three servers, two of them are on Server 2008 Web x64 while the third is running Server 2008 R2 Web x64, all have .NET 3.5 installed, no .NET 4.0 installations. The problem only occurs on the first two that are running 2008 non R2. Running depends.exe on all three servers gives equal results for the nonR2 servers: My DLL is shown as x86 (compiled as AnyCPU, running in x64 w3wp), all other modules show as x64. Missing IESHIMS.DLL and LINKINFO.DLL - both of these seem to be red herrings according to Google. 
The third server shows the same, except it does not miss LINKINFO.DLL All servers are running IIS7 (7.5 for the R2 one) under a custom domain account that has been granted the necessary permissions: aspnet_regiis -ga [user] Load user profile is set to false on all three servers. I've tried setting this to true on one of the faulting servers, according to: http://stackoverflow.com/questions/1846816/iis7-failed-to-grant-minimum-permission-requests By running processmonitor I can see that it's now using the C:\Users\TEMP\AppData\Local\Temp directory for various temp files - the other ones are not using any such directory. So far I'll let it run in this way to see if this changes anything. I'm in doubt however given that the third server is not exhibiting the problems, yet still has "Load user profile" set to the same value, false. I've also tried running Fuslogvw on all three servers, logging binding failures to disk. All three servers report the same binding errors for VJSharpCodeProvider and CppCodeProvider, but these seem to be normal as well and can be solved by not defining the DEBUG and TRACE constants during build. We're running about 500 websites on each server (identical, load balanced), of which 50 are under moderate load, the problem has arisen both under heavy load as well as under minimal load however. Right now I'm waiting for the errors to happen again so I can hopefully see a pattern and determine whether "Load user profile" alleviates the issue. Any suggestions in the meantime would be very welcome! Also, I don't understand how the lack of "Load user profile" would cause an issue like this? And even further, how it would seemingly work on R2 but not on plain 2008? Thanks!
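    A small addition that may help anyone comparing servers the way described above: caspol -rsg (-resolvegroup) only lists the code groups an assembly matches, while caspol -rsp (-resolveperm) prints the permission set the policy resolution actually computes, which is closer to what SecurityManager.ResolvePolicy is failing on in the stack trace. Running it on a working and a faulting server, ideally while the fault is live, might reveal a difference the group listing hides (same elided path as in the question; this is a diagnostic suggestion, not a confirmed fix):

      C:\Windows\Microsoft.NET\Framework64\v2.0.50727>caspol -rsp D:\...\bin\itextsharp.dll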


  • Compiling OpenCV in Android NDK

    - by evident
    PLEASE SEE THE ADDITIONS AT THE BOTTOM! The first problem is solved in Linux, not under Windows and Cygwin yet, but there is a new problem. Please see below! I am currently trying to compile OpenCV for Android NDK so that I can use it in my apps. For this I tried to follow this guide: http://www.stanford.edu/~zxwang/android_opencv.html But when compiling the downloaded stuff with ndk-build I get this error: $ /cygdrive/u/flori/workspace/android-ndk-r5b/ndk-build Compile++ thumb : opencv <= cvjni.cpp Compile++ thumb : cxcore <= cxalloc.cpp Compile++ thumb : cxcore <= cxarithm.cpp Compile++ thumb : cxcore <= cxarray.cpp Compile++ thumb : cxcore <= cxcmp.cpp Compile++ thumb : cxcore <= cxconvert.cpp Compile++ thumb : cxcore <= cxcopy.cpp Compile++ thumb : cxcore <= cxdatastructs.cpp Compile++ thumb : cxcore <= cxdrawing.cpp Compile++ thumb : cxcore <= cxdxt.cpp Compile++ thumb : cxcore <= cxerror.cpp Compile++ thumb : cxcore <= cximage.cpp Compile++ thumb : cxcore <= cxjacobieigens.cpp Compile++ thumb : cxcore <= cxlogic.cpp Compile++ thumb : cxcore <= cxlut.cpp Compile++ thumb : cxcore <= cxmathfuncs.cpp Compile++ thumb : cxcore <= cxmatmul.cpp Compile++ thumb : cxcore <= cxmatrix.cpp Compile++ thumb : cxcore <= cxmean.cpp Compile++ thumb : cxcore <= cxmeansdv.cpp Compile++ thumb : cxcore <= cxminmaxloc.cpp Compile++ thumb : cxcore <= cxnorm.cpp Compile++ thumb : cxcore <= cxouttext.cpp Compile++ thumb : cxcore <= cxpersistence.cpp Compile++ thumb : cxcore <= cxprecomp.cpp Compile++ thumb : cxcore <= cxrand.cpp Compile++ thumb : cxcore <= cxsumpixels.cpp Compile++ thumb : cxcore <= cxsvd.cpp Compile++ thumb : cxcore <= cxswitcher.cpp Compile++ thumb : cxcore <= cxtables.cpp Compile++ thumb : cxcore <= cxutils.cpp StaticLibrary : libstdc++.a StaticLibrary : libcxcore.a Compile++ thumb : cv <= cvaccum.cpp Compile++ thumb : cv <= cvadapthresh.cpp Compile++ thumb : cv <= cvapprox.cpp Compile++ thumb : cv <= cvcalccontrasthistogram.cpp Compile++ thumb : cv <= cvcalcimagehomography.cpp Compile++ thumb : cv <= cvcalibinit.cpp Compile++ thumb : cv <= cvcalibration.cpp Compile++ thumb : cv <= cvcamshift.cpp Compile++ thumb : cv <= cvcanny.cpp Compile++ thumb : cv <= cvcolor.cpp Compile++ thumb : cv <= cvcondens.cpp Compile++ thumb : cv <= cvcontours.cpp Compile++ thumb : cv <= cvcontourtree.cpp Compile++ thumb : cv <= cvconvhull.cpp Compile++ thumb : cv <= cvcorner.cpp Compile++ thumb : cv <= cvcornersubpix.cpp Compile++ thumb : cv <= cvderiv.cpp Compile++ thumb : cv <= cvdistransform.cpp Compile++ thumb : cv <= cvdominants.cpp Compile++ thumb : cv <= cvemd.cpp Compile++ thumb : cv <= cvfeatureselect.cpp Compile++ thumb : cv <= cvfilter.cpp Compile++ thumb : cv <= cvfloodfill.cpp Compile++ thumb : cv <= cvfundam.cpp Compile++ thumb : cv <= cvgeometry.cpp Compile++ thumb : cv <= cvhaar.cpp Compile++ thumb : cv <= cvhistogram.cpp Compile++ thumb : cv <= cvhough.cpp Compile++ thumb : cv <= cvimgwarp.cpp Compile++ thumb : cv <= cvinpaint.cpp Compile++ thumb : cv <= cvkalman.cpp Compile++ thumb : cv <= cvlinefit.cpp Compile++ thumb : cv <= cvlkpyramid.cpp Compile++ thumb : cv <= cvmatchcontours.cpp Compile++ thumb : cv <= cvmoments.cpp Compile++ thumb : cv <= cvmorph.cpp Compile++ thumb : cv <= cvmotempl.cpp Compile++ thumb : cv <= cvoptflowbm.cpp Compile++ thumb : cv <= cvoptflowhs.cpp Compile++ thumb : cv <= cvoptflowlk.cpp Compile++ thumb : cv <= cvpgh.cpp Compile++ thumb : cv <= cvposit.cpp Compile++ thumb : cv <= cvprecomp.cpp Compile++ thumb : cv <= cvpyramids.cpp Compile++ thumb : cv <= 
cvpyrsegmentation.cpp Compile++ thumb : cv <= cvrotcalipers.cpp Compile++ thumb : cv <= cvsamplers.cpp Compile++ thumb : cv <= cvsegmentation.cpp Compile++ thumb : cv <= cvshapedescr.cpp Compile++ thumb : cv <= cvsmooth.cpp Compile++ thumb : cv <= cvsnakes.cpp Compile++ thumb : cv <= cvstereobm.cpp Compile++ thumb : cv <= cvstereogc.cpp Compile++ thumb : cv <= cvsubdivision2d.cpp Compile++ thumb : cv <= cvsumpixels.cpp Compile++ thumb : cv <= cvsurf.cpp Compile++ thumb : cv <= cvswitcher.cpp Compile++ thumb : cv <= cvtables.cpp Compile++ thumb : cv <= cvtemplmatch.cpp Compile++ thumb : cv <= cvthresh.cpp Compile++ thumb : cv <= cvundistort.cpp Compile++ thumb : cv <= cvutils.cpp StaticLibrary : libcv.a SharedLibrary : libopencv.so U:/flori/workspace/android-ndk-r5b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/windows/bin/../lib/gcc/arm-linux-androideabi/4.4.3/../../../../arm-linux-androideabi/bin/ld.exe: cannot find -lcxcore collect2: ld returned 1 exit status make: *** [/cygdrive/u/flori/workspace/android/testOpenCV/obj/local/armeabi/libopencv.so] Error 1 I am trying to compile it on a Windows system with the newest NDK version... Does anybody have an idea what this linking error means and what I can do to have it work again? It would be great if anybody could help. After getting past that problem I found that there is another way of compiling OpenCV for Android, using the current version of OpenCV (instead of the 1.1 one from above) and the modified Android NDK from crystax, which supports STL and exceptions and therefore supports the newest OpenCV version. All information on that can be found here: http://opencv.willowgarage.com/wiki/Android There it says to download the current svn trunk and the crystax-r4 android-ndk, as well as swig, which I did. I entered the folder, created the build directory, ran cmake and then built the static libs, which seemed to work. At least it successfully ran the make command without errors. I now wanted to build the shared libraries, so I entered the android-jni folder and ran 'make' again, but got this error: % make -j4 OPENCV_CONFIG = ../build/android-opencv.mk make clean-swig &&\ mkdir -p jni/gen &&\ mkdir -p src/com/opencv/jni &&\ swig -java -c++ -package "com.opencv.jni" \ -outdir src/com/opencv/jni \ -o jni/gen/android_cv_wrap.cpp jni/android-cv.i OPENCV_CONFIG = ../build/android-opencv.mk make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
rm -f jni/gen/android_cv_wrap.cpp make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi/libandroid-opencv.so] Error 2 make: *** Waiting for unfinished jobs.... make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi-v7a/libandroid-opencv.so] Error 2 Does anybody have an idea what this means and what I can do to build the shared libraries? ... OK, after having a look at the error message, it occurred to me that something must be missing in the build directory... but there wasn't even a build directory in the android folder, so I created one, ran 'cmake' in there and 'make' again, but got this error: Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/sgetrf.c Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/scopy.c Compile++ thumb: opencv_core <= /home/florian/android-opencv-willowgarage/modules/core/src/matrix.cpp cc1plus: error: /home/florian/android-opencv-willowgarage/android/../modules/index.rst/include: Not a directory make[3]: *** [/home/florian/android-opencv-willowgarage/android/build/obj/local/armeabi/objs/opencv_core/src/matrix.o] Error 1 make[3]: *** Waiting for unfinished jobs.... make[2]: *** [android-opencv] Error 2 make[1]: *** [CMakeFiles/ndk.dir/all] Error 2 make: *** [all] Error 2 Anybody know what this means?
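
Once a build like the one above does succeed, the result is the libopencv.so shared library that an Android app loads over JNI. For illustration, a minimal sketch of the Java side, assuming a hypothetical native method name (the real signatures are whatever the guide's cvjni.cpp exports):

    // Sketch: loading the NDK-built OpenCV wrapper from an Android app.
    // "setSourceImage" is a hypothetical example method; substitute the
    // natives actually exported by cvjni.cpp.
    public class OpenCVWrapper {
        static {
            // Loads libopencv.so, the SharedLibrary produced by ndk-build above.
            System.loadLibrary("opencv");
        }

        // Declared here, implemented in the JNI layer (cvjni.cpp).
        public static native boolean setSourceImage(int[] pixels, int width, int height);
    }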

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ...
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
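
One detail worth checking in the trace above: the ClientHello offers only AES_128, 3DES, DES and RC4 suites, while this slapd.conf restricts TLSCipherSuite to TLS_RSA_AES_256_CBC_SHA, so that JRE may simply share no cipher with the server (which would also fit the two Java versions behaving differently). For testing the bind in isolation, a minimal sketch of a standalone JNDI client; the truststore path, password and bind credentials are placeholders:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapsSmokeTest {
        public static void main(String[] args) throws Exception {
            // Trust the self-signed server certificate (placeholder keystore path).
            System.setProperty("javax.net.ssl.trustStore", "/etc/ldap/ssl/client-truststore.jks");
            System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain");
            env.put(Context.SECURITY_CREDENTIALS, "secret"); // placeholder

            // Constructing the context performs the SSL handshake and the bind.
            DirContext ctx = new InitialDirContext(env);
            System.out.println("simple bind succeeded");
            ctx.close();
        }
    }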

    Read the article

  • Error with JSF2 and RichFaces

    - by Miguel Ping
    Hi, I'm trying to use RichFaces on a working JSF2 application. I incorporated the RichFaces jars, changed the web.xml but got the following error: 17:49:13,097 SEVERE [javax.enterprise.resource.webcontainer.jsf.application] Error Rendering View[/login.xhtml]: java.lang.NullPointerException at com.sun.faces.application.ApplicationImpl.createComponent(ApplicationImpl.java:936) at com.sun.faces.facelets.tag.jsf.CompositeComponentTagHandler.createComponent(CompositeComponentTagHandler.java:154) at com.sun.faces.facelets.tag.jsf.ComponentTagHandlerDelegateImpl.createComponent(ComponentTagHandlerDelegateImpl.java:311) at com.sun.faces.facelets.tag.jsf.ComponentTagHandlerDelegateImpl.apply(ComponentTagHandlerDelegateImpl.java:145) at javax.faces.view.facelets.DelegatingMetaTagHandler.apply(DelegatingMetaTagHandler.java:114) at javax.faces.view.facelets.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:91) at javax.faces.view.facelets.DelegatingMetaTagHandler.applyNextHandler(DelegatingMetaTagHandler.java:120) at com.sun.faces.facelets.tag.jsf.ComponentTagHandlerDelegateImpl.apply(ComponentTagHandlerDelegateImpl.java:204) at javax.faces.view.facelets.DelegatingMetaTagHandler.apply(DelegatingMetaTagHandler.java:114) at javax.faces.view.facelets.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:91) at com.sun.faces.facelets.compiler.NamespaceHandler.apply(NamespaceHandler.java:86) at javax.faces.view.facelets.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:91) at com.sun.faces.facelets.compiler.EncodingHandler.apply(EncodingHandler.java:75) at com.sun.faces.facelets.impl.DefaultFacelet.include(DefaultFacelet.java:301) at com.sun.faces.facelets.impl.DefaultFacelet.include(DefaultFacelet.java:360) at com.sun.faces.facelets.impl.DefaultFacelet.include(DefaultFacelet.java:339) at com.sun.faces.facelets.impl.DefaultFaceletContext.includeFacelet(DefaultFaceletContext.java:191) at com.sun.faces.facelets.tag.ui.CompositionHandler.apply(CompositionHandler.java:149) at com.sun.faces.facelets.compiler.NamespaceHandler.apply(NamespaceHandler.java:86) at com.sun.faces.facelets.compiler.EncodingHandler.apply(EncodingHandler.java:75) at com.sun.faces.facelets.impl.DefaultFacelet.apply(DefaultFacelet.java:145) at com.sun.faces.application.view.FaceletViewHandlingStrategy.buildView(FaceletViewHandlingStrategy.java:716) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:351) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:126) at org.ajax4jsf.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:100) at org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:176) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:127) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:313) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:336) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242) at org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:206) at org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290) at org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388) at org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515) at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:734) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:541) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:479) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:407) at org.apache.catalina.authenticator.FormAuthenticator.forwardToLoginPage(FormAuthenticator.java:318) at org.apache.catalina.authenticator.FormAuthenticator.authenticate(FormAuthenticator.java:243) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:559) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:872) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:951) at java.lang.Thread.run(Thread.java:619) It seems that some jars are missing, but I cannot seem to find this cause. The above error is the only thing that the log spits out. 
Here's web.xml: <context-param> <param-name>javax.faces.FACELETS_LIBRARIES</param-name> <param-value>/WEB-INF/faces-validator-tags/general.taglib.xml; /WEB-INF/faces-converter-tags/general.converter.taglib.xml </param-value> </context-param> <!-- Startup Servlet <servlet> <servlet-name>StartUpServlet</servlet-name> <servlet-class>pt.cgd.agile.util.StartupServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> --> <context-param> <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <!-- Making the RichFaces skin spread to standard HTML controls --> <context-param> <param-name>org.richfaces.CONTROL_SKINNING</param-name> <param-value>enable</param-value> </context-param> <context-param> <param-name>javax.faces.STATE_SAVING_METHOD</param-name> <param-value>server</param-value> </context-param> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <context-param> <param-name>org.richfaces.CONTROL_SKINNING</param-name> <param-value>enable</param-value> </context-param> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> <listener> <listener-class>com.sun.faces.config.ConfigureListener</listener-class> </listener> <!-- Just here so the JSF implementation can initialize, *not* used at runtime --> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <!-- Just here so the JSF implementation can initialize --> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.jsf</url-pattern> </servlet-mapping> <login-config> <auth-method>FORM</auth-method> <form-login-config> <form-login-page>/login.jsf</form-login-page> <form-error-page>/loginError.jsf</form-error-page> </form-login-config> </login-config> <error-page> <exception-type>java.lang.Throwable</exception-type> <location>/errors/error.jsf</location> </error-page>

    Read the article

  • Please help to clean up my RoR development environment

    - by PeterWong
    I started RoR development a few months ago, being new to Mac... Time flies, and now I have a lot of different ruby versions, rails versions and gem versions located everywhere... And recently I installed rvm and things got even worse; everything is messed up! So now I want to clean everything and use rvm again! I want to uninstall all gems, all rails versions, and all ruby versions except the system's default one (the very old one that shipped with the mac). Or is there any better solution or suggestion? Please help! Here is some info that I think will be useful: which -a ruby /opt/local/bin/ruby /opt/local/bin/ruby /usr/local/bin/ruby /usr/bin/ruby /usr/local/bin/ruby which -a rails /usr/local/bin/rails /usr/bin/rails /usr/local/bin/rails which -a compass # similar for rspec and many other gems /usr/local/bin/compass /usr/local/bin/compass gem list *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) addressable (2.2.2) arel (2.0.6, 1.0.1, 1.0.0.rc1) authlogic (2.1.6, 2.1.3) aws-s3 (0.6.2) base32 (0.1.2) block_helpers (0.3.3) bluecloth (2.0.9) bowline (0.9.4) bowline-bundler (0.0.4) bson (1.1.2) builder (2.1.2) bundler (1.0.2, 1.0.0) compass (0.10.6) crack (0.1.7) devise (1.1.3) diff-lcs (1.1.2) differ (0.1.1) dynamic_form (1.1.3) engineyard (1.3.1) engineyard-serverside-adapter (1.3.3) erubis (2.6.6) escape (0.0.4) extlib (0.9.15) facebooker (1.0.75) faker (0.3.1) faraday (0.5.3, 0.5.2) fast_gettext (0.5.10, 0.4.17) fastercsv (1.5.3) fastthread (1.0.7) ffi (0.6.3) formatize (1.0.1) formtastic (1.1.0, 1.0.1) gemcutter (0.5.0) gettext (2.1.0) git (1.2.5) gosu (0.7.25 universal-darwin) haml (3.0.24, 3.0.23, 3.0.22, 3.0.21, 3.0.18) haml-rails (0.3.4) heroku (1.10.13, 1.9.13) highline (1.5.2) hirb (0.3.4, 0.3.3) hpricot (0.8.2) i18n (0.5.0, 0.4.2, 0.4.1, 0.3.7) jeweler (1.4.0) json (1.4.6) json_pure (1.4.3) linkedin (0.1.8) locale (2.0.5) mail (2.2.12, 2.2.11, 2.2.10, 2.2.9, 2.2.7, 2.2.6.1) memcache-client (1.8.5) meta_search (0.9.8, 0.9.7.2, 0.9.7.1, 0.9.6, 0.9.4) mime-types (1.16) mongo (1.1.2) mongoid (2.0.0.beta.20) multi_json (0.0.5) multipart-post (1.0.1) mysql (2.8.1) mysql2 (0.2.6, 0.2.4, 0.2.3) net-ldap (0.1.1) nice-ffi (0.4) nokogiri (1.4.4, 1.4.2) oa-basic (0.1.6) oa-core (0.1.6) oa-enterprise (0.1.6) oa-oauth (0.1.6) oa-openid (0.1.6) oauth (0.4.4, 0.4.3, 0.4.1) oauth-plugin (0.4.0.pre1) oauth2 (0.1.0) omniauth (0.1.6) paperclip (2.3.6, 2.3.4, 2.3.1.1) passenger (2.2.12) polyglot (0.3.1) pyu-ruby-sasl (0.0.3.2) querybuilder (0.9.2, 0.5.9) rack (1.2.1, 1.1.0, 1.0.1) rack-cache (0.5.3) rack-cache-purge (0.0.2, 0.0.1) rack-mount (0.6.13) rack-openid (1.2.0) rack-test (0.5.6, 0.5.4) railroady (0.11.2) rails (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) RedCloth (3.0.4) rest-client (1.6.1) roxml (3.1.5) rscribd (1.2.0) rspec (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-core (2.3.0, 2.2.1, 2.1.0, 2.0.1) rspec-expectations (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-mocks (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-rails (2.3.0, 2.2.0, 2.1.0, 2.0.1) ruby-hmac (0.4.0) ruby-mysql (2.9.3) ruby-ole (1.2.10.1) ruby-openid (2.1.8) ruby-openid-apps-discovery (1.2.0) ruby-recaptcha (1.0.2, 1.0.0) ruby-sdl-ffi (0.3) ruby-termios (0.9.6) ruby_parser (2.0.5) rubyforge (2.0.4) rubygame (2.6.4) rubygems-update (1.3.7) rubyless (0.7.0, 0.6.0, 0.3.5) rubyntlm (0.1.1) rubyzip2 (2.0.1) scribd_fu (2.0.6) searchlogic (2.4.27, 2.4.23) sequel (3.16.0, 3.15.0, 3.13.0) sexp_processor (3.0.5) shoulda (2.11.3) sinatra (1.0) slim (0.8.0) slim-rails (0.1.2) spreadsheet (0.6.4.1) sqlite3-ruby (1.3.2, 1.3.1) ssl_requirement (0.1.0) subdomain-fu (1.0.0.beta2, 0.5.4) supermodel (0.1.4) syntax (1.0.0) taps (0.3.13, 0.3.11) templater (1.0.0) temple (0.1.6) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.4, 0.14.3, 0.14.1, 0.14.0) tilt (1.1) treetop (1.4.9, 1.4.8) tzinfo (0.3.23) uuidtools (2.1.1, 2.0.0) validates_timeliness (3.0.0.beta.4, 2.3.1) warden (0.10.7) will_paginate (3.0.pre2, 2.3.15, 2.3.14) xml-simple (1.0.12) ya2yaml (0.30) yajl-ruby (0.7.8, 0.7.7) yamltest (0.7.0) zena (0.16.9, 0.16.8) ====== I have run sudo rvm implode and sudo rm -rf ~/.rvm, so there is no rvm now. gem env RubyGems Environment: - RUBYGEMS VERSION: 1.3.7 - RUBY VERSION: 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0] - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8 - RUBY EXECUTABLE: /usr/local/bin/ruby - EXECUTABLE DIRECTORY: /usr/local/bin - RUBYGEMS PLATFORMS: - ruby - x86-darwin-10 - GEM PATHS: - /usr/local/lib/ruby/gems/1.8 - /Users/peter/.gem/ruby/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com === ls -al /usr/local/lib/ total 5704 drwxr-xr-x 7 root wheel 238 Jun 1 2010 . drwxr-xr-x 9 root wheel 306 Dec 15 16:20 .. -rw-r--r-- 1 root wheel 1717208 Jun 1 2010 libruby-static.a -rwxr-xr-x 1 root wheel 1191880 Jun 1 2010 libruby.1.8.7.dylib lrwxrwxrwx 1 root wheel 19 Jun 1 2010 libruby.1.8.dylib -> libruby.1.8.7.dylib lrwxrwxrwx 1 root wheel 19 Jun 1 2010 libruby.dylib -> libruby.1.8.7.dylib drwxr-xr-x 6 root wheel 204 Jun 1 2010 ruby

    Read the article

  • Maven does not resolve a local Grails plug-in

    - by Drew
    My goal is to take a Grails web application and build it into a Web ARchive (WAR file) using Maven, and the key is that it must populate the "plugins" folder without live access to the internet. An "out of the box" Grails webapp will already have the plugins folder populated with JAR files, but the maven build script should take care of populating it, just like it does for any traditional WAR projects (such as WEB-INF/lib/ if it's empty) This is an error when executing mvn grails:run-app with Grails 1.1 using Maven 2.0.10 and org.grails:grails-maven-plugin:1.0. (This "hibernate-1.1" plugin is needed to do GORM.) [INFO] [grails:run-app] Running pre-compiled script Environment set to development Plugin [hibernate-1.1] not installed, resolving.. Reading remote plugin list ... Error reading remote plugin list [svn.codehaus.org], building locally... Unable to list plugins, please check you have a valid internet connection: svn.codehaus.org Reading remote plugin list ... Error reading remote plugin list [plugins.grails.org], building locally... Unable to list plugins, please check you have a valid internet connection: plugins.grails.org Plugin 'hibernate' was not found in repository. If it is not stored in a configured repository you will need to install it manually. Type 'grails list-plugins' to find out what plugins are available. The build machine does not have access to the internet and must use an internal/enterprise repository, so this error is just saying that maven can't find the required artifact anywhere. That dependency is already included with the stock Grails software that's installed locally, so I just need to figure out how to get my POM file to unpackage that ZIP file into my webapp's "plugins" folder. I've tried installing the plugin manually to my local repository and making it an explicit dependency in POM.xml, but it's still not being recognized. Maybe you can't pull down grails plugins like you would a standard maven reference? mvn install:install-file -DgroupId=org.grails -DartifactId=grails-hibernate -Dversion=1.1 -Dpackaging=zip -Dfile=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip I can manually setup the Grails webapp from the command-line, which creates that local ./plugins folder properly. This is a step in the right direction, so maybe the question is: how can I incorporate this goal into my POM? mvn grails:install-plugin -DpluginUrl=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip Here is a copy of my POM.xml file, which was generated using an archetype. 
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.samples</groupId> <artifactId>sample-grails</artifactId> <packaging>war</packaging> <name>Sample Grails webapp</name> <properties> <sourceComplianceLevel>1.5</sourceComplianceLevel> </properties> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.grails</groupId> <artifactId>grails-crud</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>org.grails</groupId> <artifactId>grails-gorm</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>opensymphony</groupId> <artifactId>oscache</artifactId> <version>2.4</version> <exclusions> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> <exclusion> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.7</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.6</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <!-- <dependency> <groupId>org.grails</groupId> <artifactId>grails-hibernate</artifactId> <version>1.1</version> <type>zip</type> </dependency> --> </dependencies> <build> <pluginManagement /> <plugins> <plugin> <groupId>org.grails</groupId> <artifactId>grails-maven-plugin</artifactId> <version>1.0</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>init</goal> <goal>maven-clean</goal> <goal>validate</goal> <goal>config-directories</goal> <goal>maven-compile</goal> <goal>maven-test</goal> <goal>maven-war</goal> <goal>maven-functional-test</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${sourceComplianceLevel}</source> <target>${sourceComplianceLevel}</target> </configuration> </plugin> </plugins> </build> </project>

    Read the article

  • PE Header Requirements

    - by Pindatjuh
    What are the requirements of a PE file (PE/COFF)? What fields should be set, which value, at a bare minimum for enabling it to "run" on Windows (i.e. executing "ret" instruction and then close, without error). The library I am building first is the linker: Now, the problem I have is the PE file (PE/COFF). I don't know what is "required" for a PE file before it can actually execute on my platform. My testing platform is Vista. I get an error message, saying "This is not a valid Win32 executable." when I execute it by double-clicking, and I get an "Access Denied." when executing it with CLI cmd. I have two sections, .text and .data. I've implemented the PE headers as provided by several online documents, i.e. MSDN and some other thirdparty documentation. If I use a hex-editor, it looks almost like a regular PE file. I don't use any imports, nor IAT, nor any directories in the PE header. Edit: I've added an import table, still not a valid .exe-file, says my Windows. I've tried to use values which are also mentioned at the smallest PE-file guide. No luck. Really the only thing I can't seem to figure out is what is required and what isn't. Some guides tell me everything is required, whilst others say about deprications: and it can be zero. I hope this is enough information. Thank you, in advance. Raw data (as requested) of current PE header: 4D 5A 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 40 00 00 00 50 45 00 00 4C 01 02 00 C8 7A 55 4B 00 00 00 00 00 00 00 00 E0 00 82 01 0B 01 0D 25 00 10 00 00 00 10 00 00 00 00 00 00 00 10 00 00 00 10 00 00 00 20 00 00 00 00 40 00 00 10 00 00 00 02 00 00 01 00 0B 00 00 00 00 00 03 00 0A 00 00 00 00 00 00 22 00 00 38 01 00 00 00 00 00 00 03 00 00 00 00 40 00 00 00 40 00 00 00 40 00 00 00 40 00 00 00 00 00 00 0E 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 24 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2E 74 65 78 74 00 00 00 00 00 00 00 00 10 00 00 00 02 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 60 2E 69 64 61 74 61 00 00 00 00 00 00 00 20 00 00 00 02 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 40 00 00 C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3C 20 00 00 00 00 00 00 00 00 00 00 24 20 00 00 34 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4B 45 52 4E 45 4C 33 32 2E 64 6C 6C 00 00 00 00 01 00 00 80 00 00 00 00 01 00 00 80 00 00 00 00
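
For what it's worth, the fields the loader looks at first can be checked mechanically: the MZ magic, the e_lfanew field at offset 0x3C, and the PE\0\0 signature it points to. In the dump above, e_lfanew is 40 00 00 00 and "PE" does sit at offset 0x40, so those two are consistent and the failure is more likely in the optional header or section values. A minimal sketch of such a checker (plain Java, no PE library assumed):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class PeSignatureCheck {
        // Usage: java PeSignatureCheck some.exe
        public static void main(String[] args) throws IOException {
            RandomAccessFile f = new RandomAccessFile(args[0], "r");
            try {
                // DOS header: 'M''Z' magic at offset 0, e_lfanew at offset 0x3C.
                if (f.readByte() != 'M' || f.readByte() != 'Z') {
                    System.out.println("e_magic is not MZ");
                    return;
                }
                f.seek(0x3C);
                int eLfanew = Integer.reverseBytes(f.readInt()); // file data is little-endian
                f.seek(eLfanew);
                byte[] sig = new byte[4];
                f.readFully(sig);
                boolean ok = sig[0] == 'P' && sig[1] == 'E' && sig[2] == 0 && sig[3] == 0;
                System.out.println("e_lfanew = 0x" + Integer.toHexString(eLfanew)
                        + ", PE signature " + (ok ? "found" : "NOT found"));
                // The COFF header follows the signature: Machine (0x014C = i386),
                // then NumberOfSections.
                int machine = Short.reverseBytes(f.readShort()) & 0xFFFF;
                int sections = Short.reverseBytes(f.readShort()) & 0xFFFF;
                System.out.printf("Machine=0x%04X, NumberOfSections=%d%n", machine, sections);
            } finally {
                f.close();
            }
        }
    }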

    Read the article

  • Customer Attribute, not sorting select options

    - by Bosworth99
    Made a module that creates some customer EAV attributes. One of these attributes is a Select, and I'm dropping a bunch of options into their respective tables. Everything lines up and is accessible on both the front end and the back. Last thing before calling this part of things finished is the sort order of the options. They come out all scrambled, instead of the obvious default or alphabetical (seemingly at random... very wierd). I'm on Mage v1.11 (Pro/Enterprise). config.xml <config> <modules> <WACI_CustomerAttr> <version>0.1.0</version> </WACI_CustomerAttr> </modules> <global> <resources> <customerattr_setup> <setup> <module>WACI_CustomerAttr</module> <class>Mage_Customer_Model_Entity_Setup</class> </setup> <connection> <use>core_setup</use> </connection> </customerattr_setup> </resources> <models> <WACI_CustomerAttr> <class>WACI_CustomerAttr_Model</class> </WACI_CustomerAttr> </models> <fieldsets> <customer_account> <agency><create>1</create><update>1</update></agency> <title><create>1</create><update>1</update></title> <phone><create>1</create><update>1</update></phone> <mailing_address><create>1</create><update>1</update></mailing_address> <city><create>1</create><update>1</update></city> <state><create>1</create><update>1</update></state> <zip><create>1</create><update>1</update></zip> <fed_id><create>1</create><update>1</update></fed_id> <ubi><create>1</create><update>1</update></ubi> </customer_account> </fieldsets> </global> </config> mysql4-install-0.1.0.php <?php Mage::log('Installing WACI_CustomerAttr'); echo 'Running Upgrade: '.get_class($this)."\n <br /> \n"; //die ( 'its running' ); $installer = $this; /* @var $installer Mage_Customer_Model_Entity_Setup */ $installer->startSetup(); // bunch of attributes // State $installer->addAttribute('customer','state', array( 'type' => 'varchar', 'group' => 'Default', 'label' => 'State', 'input' => 'select', 'default' => 'Washington', 'source' => 'WACI_CustomerAttr/customer_attribute_data_select', 'global' => Mage_Catalog_Model_Resource_Eav_Attribute::SCOPE_STORE, 'required' => true, 'visible' => true, 'user_defined' => 1, 'position' => 67 ) ); $attrS = Mage::getSingleton('eav/config')->getAttribute('customer', 'state'); $attrS->addData(array('sort_order'=>67)); $attrS->setData('used_in_forms', array('adminhtml_customer','customer_account_edit','customer_account_create'))->save(); $state_list = array('Alabama','Alaska','Arizona','Arkansas','California','Colorado','Connecticut','Delaware','Florida','Georgia', 'Hawaii','Idaho','Illinois','Indiana','Iowa','Kansas','Kentucky','Louisiana','Maine','Maryland','Massachusetts','Michigan', 'Minnesota','Mississippi','Missouri','Montana','Nebraska','Nevada','New Hampshire','New Jersey','New Mexico','New York', 'North Carolina','North Dakota','Ohio','Oklahoma','Oregon','Pennsylvania','Rhode Island','South Carolina','South Dakota', 'Tennessee','Texas','Utah','Vermont','Virginia','Washington','West Virginia','Wisconsin','Wyoming'); $aOption = array(); $aOption['attribute_id'] = $installer->getAttributeId('customer', 'state'); for($iCount=0;$iCount<sizeof($state_list);$iCount++){ $aOption['value']['option'.$iCount][0] = $state_list[$iCount]; } $installer->addAttributeOption($aOption); // a few more $installer->endSetup(); app/code/local/WACI/CustomerAttr/Model/Customer/Attribute/Data/Select.php <?php class WACI_CustomerAttr_Model_Customer_Attribute_Data_Select extends Mage_Eav_Model_Entity_Attribute_Source_Abstract{ function getAllOptions(){ if (is_null($this->_options)) { $this->_options = 
Mage::getResourceModel('eav/entity_attribute_option_collection') ->setAttributeFilter($this->getAttribute()->getId()) ->setStoreFilter($this->getAttribute()->getStoreId()) ->setPositionOrder('asc') ->load() ->toOptionArray(); } $options = $this->_options; return $options; } } theme/variation/template/persistent/customer/form/register.phtml <li> <?php $attribute = Mage::getModel('eav/config')->getAttribute('customer','state'); ?> <label for="state" class="<?php if($attribute->getIsRequired() == true){?>required<?php } ?>"><?php if($attribute->getIsRequired() == true){?><em>*</em><?php } ?><?php echo $this->__('State') ?></label> <div class="input-box"> <select name="state" id="state" class="<?php if($attribute->getIsRequired() == true){?>required-entry<?php } ?>"> <?php $options = $attribute->getSource()->getAllOptions(); foreach($options as $option){ ?> <option value='<?php echo $option['value']?>' <?php if($this->getFormData()->getState() == $option['value']){ echo 'selected="selected"';}?>><?php echo $this->__($option['label'])?></option> <?php } ?> </select> </div> </li> All options are getting loaded into table eav_attribute_option just fine (albeit without a sort_order defined), as well as table eav_attribute_option_value. In the adminhtml / customer-manage customers-account information this select is showing up fine (but its delivered automatically by the system). Seems I should be able to set the sort-order on creation of the attributeOptions, or, certainly, define the sort order in the data/select class. But nothing I've tried works. I'd rather not do a front-end hack either... Oh, and how do I set the default value of this select? (Different question, I know, but related). Setting the attributes 'default' = 'washington' seems to do nothing. There seem to be a lot of ways to set up attribute select options like this. Is there a better way that the one I've outlined here? Perhaps I'm messing something up. Cheers

    Read the article

  • Form Validation using javascript in joomla...

    - by Ankur
    I want to use form validation. I have used javascript for this, and I have downloaded the com_php0.1alpha-J15.tar component for writing php code, but blank entries still go to the database. Please guide me... sample code is here... <script language="javascript" type="text/javascript"> function Validation() { if(document.getElementById("name").value=="") { document.getElementById("nameerr").innerHTML="Enter Name"; document.getElementById("name").style.backgroundColor = "yellow"; } else { document.getElementById("nameerr").innerHTML=""; document.getElementById("name").style.backgroundColor = "White"; } if(document.getElementById("email").value=="") { document.getElementById("emailerr").innerHTML="Enter Email"; document.getElementById("email").style.backgroundColor = "yellow"; } else { document.getElementById("emailerr").innerHTML=""; document.getElementById("email").style.backgroundColor = "White"; } if(document.getElementById("phone").value=="") { document.getElementById("phoneerr").innerHTML="Enter Contact No"; document.getElementById("phone").style.backgroundColor = "yellow"; } else { document.getElementById("phoneerr").innerHTML=""; document.getElementById("phone").style.backgroundColor = "White"; } if(document.getElementById("society").value=="") { document.getElementById("societyerr").innerHTML="Enter Society"; document.getElementById("society").style.backgroundColor = "yellow"; } else { document.getElementById("societyerr").innerHTML=""; document.getElementById("society").style.backgroundColor = "White"; } if(document.getElementById("occupation").value=="") { document.getElementById("occupationerr").innerHTML="Enter Occupation"; document.getElementById("occupation").style.backgroundColor = "yellow"; } else { document.getElementById("occupationerr").innerHTML=""; document.getElementById("occupation").style.backgroundColor = "White"; } if(document.getElementById("feedback").value=="") { document.getElementById("feedbackerr").innerHTML="Enter Feedback"; document.getElementById("feedback").style.backgroundColor = "yellow"; } else { document.getElementById("feedbackerr").innerHTML=""; document.getElementById("feedback").style.backgroundColor = "White"; } if(document.getElementById("name").value=="" || document.getElementById("email").value=="" || document.getElementById("phone").value=="" || document.getElementById("society").value=="" || document.getElementById("occupation").value=="" || document.getElementById("feedback").value=="") return false; else return true; } </script> <?php if(isset($_POST['submit'])) { $conn = mysql_connect('localhost','root',''); mysql_select_db('society_f',$conn); $name = $_POST['name']; $email = $_POST['email']; $phone = $_POST['phone']; $society = $_POST['society']; $occupation = $_POST['occupation']; $feedback = $_POST['feedback']; $qry = "insert into feedback values(null". ",'" . $name . "','" . $email . "','" . $phone . "','" . $society . "','" . $occupation . "','" . $feedback . "')" ; $res = mysql_query($qry); if(!$res) { echo "Could not run a query" . 
mysql_error(); exit(); } } ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Untitled Document</title> <style type="text/css"> .td{ background-color:#FFFFFF; color:#000066; width:100px; } .text{ border:1px solid #0033FF; color:#000000; } </style> </head> <body> <form id="form1" method="post"> <input type="hidden" name="check" value="post"/> <table border="0" align="center" cellpadding="2" cellspacing="2"> <tr> <td colspan="3" style="text-align:center;"><span style="font-size:24px;color:#000066;">Feedback Form</span></td> </tr> <tr> <td class="td">Name</td> <td><input type="text" id="name" name="name" class="text" ></td> <td style="font-style:italic;color:#FF0000;" id="nameerr"></td> </tr> <tr> <td class="td">E-mail</td> <td><input type="text" id="Email" name="Email" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="emailerr"></td> </tr> <tr> <td class="td">Contact No</td> <td><input type="text" id="Phone" name="Phone" maxlength="15" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="Phoneerr"></td> </tr> <tr> <td class="td">Your Society</td> <td><input type="text" id="society" name="society" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="societyerr"></td> </tr> <tr> <td class="td">Occupation</td> <td><input type="text" id="occupation" name="occupation" class="text"></td> <td style="font-style:italic;color:#FF0000;" id="occupationerr"></td> </tr> <tr> <td class="td">Feedback</td> <td><textarea name="feedback" id="feedback" class="text"></textarea></td> <td style="font-style:italic;color:#FF0000;" id="feedbackerr"></td> </tr> <tr> <td colspan="3" style="text-align:center;"> <input type="submit" value="Submit" id="submit" onClick="Validation();" /> <input type="reset" value="Reset" onClick="Resetme();" /> </td> </tr> </table> </form> </body> </html>

    Read the article

  • Trying to Understand PLSQL Function

    - by Rachel
    I am new to PL/SQL and I have this huge PL/SQL function which I am trying to understand, and I am having a hard time following the flow, so I would really appreciate it if anyone could run me through the big pieces. Guidance would be highly appreciated.

    FUNCTION monthly_analysis(
        REGION_ID_P VARCHAR2, COUNTRY_ID_P VARCHAR2, SUB_REGION_ID_P VARCHAR2,
        CUSTOMER_TYPE_ID_P VARCHAR2, RECEIVED_FROM_DATE_P VARCHAR2, RECEIVED_TO_DATE_P VARCHAR2,
        CUSTOMER_ID_P VARCHAR2, PRIORITY_ID_P VARCHAR2, WORK_GROUP_ID_P VARCHAR2,
        CITY_ID_P VARCHAR2, USER_ID_P VARCHAR2
    ) RETURN AP_ANALYSIS_REPORT_TAB_TYPE pipelined
    IS
        with_sql LONG;
        e_sql LONG;
        where_sql LONG;
        group_by_sql LONG;
        curent_date Date;
        v_row AP_ANALYSIS_REPORT_ROW_TYPE := AP_ANALYSIS_REPORT_ROW_TYPE( NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL );
        TYPE rectyp IS REF CURSOR; -- define weak REF CURSOR type
        rrc_rectyp rectyp;
        TYPE recordvar IS RECORD(
            MONTHS VARCHAR2(100),
            ORDERBY_MONTHS VARCHAR2(100),
            REQ_RECEIVED NUMBER(9,2),
            REQ_STILL_OPEN NUMBER(9,2),
            REQ_AWAIT_ACCEPTANCE NUMBER(9,2),
            REQ_WITH_ATT NUMBER(9,2),
            REQ_CLOSED NUMBER(9,2),
            REQ_CANCELLED NUMBER(9,2)
        );
        res_rec recordvar;
    BEGIN
        select sysdate + substr(to_char(systimestamp, 'tzr'),3,1)/24 into curent_date from dual;
        where_sql := ' AND 1=1 ';
        IF COUNTRY_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.country_id ='|| COUNTRY_ID_P;
        END IF;
        IF SUB_REGION_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.SUB_REGION_ID ='|| SUB_REGION_ID_P;
        END IF;
        IF CUSTOMER_TYPE_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.CUSTOMER_TYPE_ID ='|| CUSTOMER_TYPE_ID_P;
        END IF;
        IF RECEIVED_FROM_DATE_P IS NOT NULL THEN
            where_sql := where_sql ||' AND convert_time(received_date, ''GMT'', ''GMT'') >= convert_time(trunc(to_date('''||RECEIVED_FROM_DATE_P||''',''dd/mm/yyyy HH24:MI:SS'')), ''Europe/Paris'', ''GMT'')';
        END IF;
        IF RECEIVED_TO_DATE_P IS NOT NULL THEN
            where_sql := where_sql ||' AND convert_time(received_date, ''GMT'', ''GMT'') <= convert_time(trunc(to_date('''||RECEIVED_TO_DATE_P||''',''dd/mm/yyyy HH24:MI:SS'')), ''Europe/Paris'', ''GMT'')';
        END IF;
        IF CUSTOMER_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.CUSTOMER_ID in (select CUSTOMER_ID from lk_customer where upper(CUSTOMER_NAME) like upper('''||CUSTOMER_ID_P||'%''))';
        END IF;
        IF PRIORITY_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.PRIORITY_ID ='|| PRIORITY_ID_P;
        END IF;
        IF WORK_GROUP_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.WORKGROUP_ID ='|| WORK_GROUP_ID_P;
        END IF;
        IF CITY_ID_P IS NOT NULL THEN
            where_sql := where_sql ||' AND x.CITY_ID = '|| CITY_ID_P;
        END IF;
        group_by_sql := ' group by to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''mm/YYYY''), to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''yyyy/mm'')';
        with_sql := 'with b AS (select cep_work_item_no from ap_main where req_accept_date is null and ecep_ap_utils.f_business_days(received_date,'''||curent_date||''')>30),
            e AS (select cep_work_item_no from ap_main where status_id=1 and req_accept_date is not null and stage_ID != 10 and stage_Id != 4 and ecep_ap_utils.f_business_days(received_date,'''||curent_date||''')>30),
            --f AS (select cep_work_item_no from ap_main where received_date is not null),
            m AS (select cep_work_item_no from ap_main where received_date is not null and status_id=1),
            n AS (select cep_work_item_no from ap_main where status_id=2),
            o AS (select cep_work_item_no from ap_main where status_id=3)';
        --e_sql := ' SELECT MONTHS, REQ_RECEIVED, REQ_STILL_OPEN, REQ_AWAIT_ACCEPTANCE, REQ_WITH_ATT from (';
        --e_sql := with_sql;
        e_sql := with_sql ||' select to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''mm/YYYY'') MONTHS,
            to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''yyyy/mm'') ORDERBY_MONTHS,
            count(x.cep_work_item_no) REQ_RECEIVED, count(m.cep_work_item_no) REQ_STILL_OPEN,
            count(b.cep_work_item_no) REQ_AWAIT_ACCEPTANCE, count(e.cep_work_item_no) REQ_WITH_ATT,
            count(n.cep_work_item_no) REQ_CLOSED, count(o.cep_work_item_no) REQ_CANCELLED
            from emea_main x, m, b, e, n, o
            where x.cep_work_item_no=m.cep_work_item_no(+)
            and x.cep_work_item_no=b.cep_work_item_no(+)
            and x.cep_work_item_no=e.cep_work_item_no(+)
            and x.cep_work_item_no=n.cep_work_item_no(+)
            and x.cep_work_item_no=o.cep_work_item_no(+)
            and x.received_date is not null';
        e_sql := e_sql || where_sql || group_by_sql;
        OPEN rrc_rectyp FOR e_sql;
        LOOP
            FETCH rrc_rectyp INTO res_rec;
            EXIT WHEN rrc_rectyp%NOTFOUND;
            v_row.MONTHS := res_rec.MONTHS;
            v_row.ORDERBY_MONTHS := res_rec.ORDERBY_MONTHS;
            v_row.REQ_RECEIVED := res_rec.REQ_RECEIVED;
            v_row.REQ_STILL_OPEN := res_rec.REQ_STILL_OPEN;
            v_row.REQ_AWAIT_ACCEPTANCE := res_rec.REQ_AWAIT_ACCEPTANCE;
            v_row.REQ_WITH_ATT := res_rec.REQ_WITH_ATT;
            v_row.REQ_CLOSED := res_rec.REQ_CLOSED;
            v_row.REQ_CANCELLED := res_rec.REQ_CANCELLED;
            pipe ROW(v_row);
        END LOOP;
        RETURN;
    END monthly_analysis;

    I would also appreciate it if someone could let me know which important PL/SQL concepts are used here, so that I can go ahead and understand them better; even a small explanation would go a long way. As suggested by dcp, I am trying to use the debugger; again, I have not used it before, so pardon me. Here is what I am getting:

    DECLARE
        REGION_ID_P VARCHAR2(200);
        COUNTRY_ID_P VARCHAR2(200);
        SUB_REGION_ID_P VARCHAR2(200);
        CUSTOMER_TYPE_ID_P VARCHAR2(200);
        RECEIVED_FROM_DATE_P VARCHAR2(200);
        RECEIVED_TO_DATE_P VARCHAR2(200);
        CUSTOMER_ID_P VARCHAR2(200);
        PRIORITY_ID_P VARCHAR2(200);
        WORK_GROUP_ID_P VARCHAR2(200);
        CITY_ID_P VARCHAR2(200);
        USER_ID_P VARCHAR2(200);
        v_Return GECEPDEV.AP_ANALYSIS_REPORT_TAB_TYPE;
    BEGIN
        REGION_ID_P := NULL;
        COUNTRY_ID_P := NULL;
        SUB_REGION_ID_P := NULL;
        CUSTOMER_TYPE_ID_P := NULL;
        RECEIVED_FROM_DATE_P := NULL;
        RECEIVED_TO_DATE_P := NULL;
        CUSTOMER_ID_P := NULL;
        PRIORITY_ID_P := NULL;
        WORK_GROUP_ID_P := NULL;
        CITY_ID_P := NULL;
        USER_ID_P := NULL;
        v_Return := ECEP_AP_REPORTS.MONTHLY_ANALYSIS(
            REGION_ID_P => REGION_ID_P,
            COUNTRY_ID_P => COUNTRY_ID_P,
            SUB_REGION_ID_P => SUB_REGION_ID_P,
            CUSTOMER_TYPE_ID_P => CUSTOMER_TYPE_ID_P,
            RECEIVED_FROM_DATE_P => RECEIVED_FROM_DATE_P,
            RECEIVED_TO_DATE_P => RECEIVED_TO_DATE_P,
            CUSTOMER_ID_P => CUSTOMER_ID_P,
            PRIORITY_ID_P => PRIORITY_ID_P,
            WORK_GROUP_ID_P => WORK_GROUP_ID_P,
            CITY_ID_P => CITY_ID_P,
            USER_ID_P => USER_ID_P
        );
        -- Modify the code to output the variable
        -- DBMS_OUTPUT.PUT_LINE('v_Return = ' || v_Return);
    END;

    Can anyone guide me through this query and its goal?
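    For readers trying to follow the big pieces: the function builds a WHERE clause from whichever parameters are non-null, prepends a WITH clause of subqueries, opens the finished string through a weak REF CURSOR, and pipes each fetched record out as a row of the collection type. A minimal sketch of the same optional-filter pattern in Java/JDBC may make the shape easier to see (the table and column names below are illustrative, not the real schema); unlike the function above, it passes filter values as bind parameters instead of concatenating them into the SQL text:

    import java.sql.*;
    import java.util.*;

    public class OptionalFilterQuery {
        // Build "SELECT ... WHERE 1=1 AND col = ?" with one predicate per
        // non-null filter, then bind the collected values in order.
        static List<String> monthsFor(Connection conn, Integer countryId, Integer priorityId) throws SQLException {
            StringBuilder sql = new StringBuilder("SELECT months FROM emea_main x WHERE 1=1"); // 1=1 lets every optional filter start with AND
            List<Object> binds = new ArrayList<>();
            if (countryId != null) { sql.append(" AND x.country_id = ?"); binds.add(countryId); }
            if (priorityId != null) { sql.append(" AND x.priority_id = ?"); binds.add(priorityId); }
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                for (int i = 0; i < binds.size(); i++) ps.setObject(i + 1, binds.get(i));
                try (ResultSet rs = ps.executeQuery()) {
                    List<String> out = new ArrayList<>();
                    while (rs.next()) out.add(rs.getString(1));
                    return out;
                }
            }
        }
    }

    The concepts worth reading up on are pipelined table functions, weak REF CURSORs over dynamic SQL, and PL/SQL record types. Note also that monthly_analysis concatenates raw parameter values straight into its SQL string, which is the classic SQL-injection shape that the bind-parameter version above avoids.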

    Read the article

  • RemoteViewsService never gets called when we update the app widget

    - by user3689160
    I am making a task widget application. This is a simple widget that can be used to add new tasks by clicking on "+New Task". So what I have done is: I add a widget first; then, when we click on "+New Task", a new activity opens where we can type in a new task, and it should then appear in the ListView inside the home-screen widget. Problem: whenever I add a new task, the ListView does not get updated. The data gets into the database and a new task is created; my onUpdate(...) method gets called as well. There is a WidgetService.java, a RemoteViewsService that is used to create the RemoteViewsFactory, but WidgetService.java gets called only when we create the widget, never after that. I have the following setup.

    MyWidgetProvider.java, the AppWidgetProvider class for updating the widget and filling its remote views:

    public class MyWidgetProvider extends AppWidgetProvider {

        public static int randomNumber = 132;
        static RemoteViews remoteViews = null;

        @Override
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            updateList(context, appWidgetManager, appWidgetIds);
        }

        public static void updateList(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            final int N = appWidgetIds.length;
            Log.d("MyWidgetProvider", "length=" + appWidgetIds.length + "");
            for (int i = 0; i < N; ++i) {
                Log.d("MyWidgetProvider", "value" + i + "=" + appWidgetIds[i]);
                // Calling updateWidgetListView(context, appWidgetIds[i]) to load the listview with data
                remoteViews = updateWidgetListView(context, appWidgetIds[i]);
                // Updating the widget with the newly filled remote views
                appWidgetManager.updateAppWidget(appWidgetIds[i], remoteViews);
                appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetIds[i], R.id.lv_tasks);
            }
        }

        private static RemoteViews updateWidgetListView(Context context, int appWidgetId) {
            Intent svcIntent = null;
            remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget_demo);
            // RemoteViewsService needed to provide the adapter for the ListView
            svcIntent = new Intent(context, WidgetService.class);
            // passing the app widget id to that RemoteViewsService
            svcIntent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, appWidgetId);
            // setting a unique Uri on the intent, binding the data from the WidgetService to the svcIntent
            svcIntent.setData(Uri.parse(svcIntent.toUri(Intent.URI_INTENT_SCHEME)));
            // setting the adapter on the listview of the widget
            remoteViews.setRemoteAdapter(R.id.lv_tasks, svcIntent);
            // setting a click listener on the "New Task" TextView of the widget
            remoteViews.setOnClickPendingIntent(R.id.tv_add_task, buildButtonPendingIntent(context));
            return remoteViews;
        }

        // Creates the PendingIntent that is broadcast every time the TextView containing
        // "New Task" is clicked, so that the onReceive method of MyWidgetIntentReceiver.java runs
        public static PendingIntent buildButtonPendingIntent(Context context) {
            Intent intent = new Intent();
            intent.putExtra("TASK_DONE_VALUE_VALID", false);
            intent.setAction("com.ommzi.intent.action.CLEARTASK");
            return PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        }

        // Defined here so that we can update the remote views from other classes as well,
        // as we do in MyWidgetIntentReceiver.java
        public static void pushWidgetUpdate(Context context, RemoteViews remoteViews) {
            ComponentName myWidget = new ComponentName(context, MyWidgetProvider.class);
            AppWidgetManager manager = AppWidgetManager.getInstance(context);
            manager.updateAppWidget(myWidget, remoteViews);
        }
    }

    WidgetService.java, the RemoteViewsService class for creating a RemoteViewsFactory:

    public class WidgetService extends RemoteViewsService {
        @Override
        public RemoteViewsFactory onGetViewFactory(Intent intent) {
            int appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID);
            return (new ListProvider(this.getApplicationContext(), intent));
        }
    }

    ListProvider.java, the RemoteViewsFactory class for filling the ListView inside the widget:

    public class ListProvider implements RemoteViewsFactory {
        private List<TemplateTaskData> tasks;
        private Context context = null;
        private int appWidgetId;
        int increment = 0;

        public ListProvider(Context context, Intent intent) {
            this.context = context;
            appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID);
            //appWidgetId = Integer.valueOf(intent.getData().getSchemeSpecificPart()) - MyWidgetProvider.randomNumber;
            populateListItem();
        }

        private void populateListItem() {
            tasks = new ArrayList<TemplateTaskData>();
            com.ommzi.database.sqliteDataBase letStartDB = new com.ommzi.database.sqliteDataBase(context);
            letStartDB.open();
            tasks = letStartDB.getData();
            letStartDB.close();
        }

        public int getCount() {
            return tasks.size();
        }

        public long getItemId(int position) {
            return position;
        }

        public RemoteViews getViewAt(int position) {
            // We only show the text view field here, as there are limitations on using a Checkbox within a widget.
            // We prefer to open an activity for the user to make comprehensive changes, as changes inside the widget are always limited.
            // For the supported list of views for widgets, see http://developer.android.com/guide/topics/appwidgets/index.html
            final RemoteViews remoteView = new RemoteViews(context.getPackageName(), R.layout.screen_wcitem);
            TemplateTaskData taskDTO = tasks.get(position);
            Log.d("My ListProvider", "1");
            remoteView.setTextViewText(R.id.wc_add_task, taskDTO.getTaskName());
            if (taskDTO.getTaskDone()) {
                remoteView.setImageViewResource(R.id.imageView1, R.drawable.checked);
            } else {
                remoteView.setImageViewResource(R.id.imageView1, R.drawable.unchecked);
            }
            Intent intent = new Intent();
            intent.setAction("com.ommzi.intent.action.GETTASK");
            intent.putExtra("TASK_DONE_VALUE_VALID", true);
            intent.putExtra("ROWID", taskDTO.getRowId());
            intent.putExtra("TASK_NAME", taskDTO.getTaskName());
            intent.putExtra("TASK_DONE_VALUE", taskDTO.getTaskDone());
            remoteView.setOnClickPendingIntent(R.id.imageView1,
                PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT));
            Log.d("My ListProvider, Inside getView", "Value of Incrementor=" + increment
                + "\n ROWID=" + taskDTO.getRowId()
                + "\n TASK_NAME=" + taskDTO.getTaskName()
                + "\n TASK_DONE_VALUE=" + taskDTO.getTaskDone()
                + "\n Total Database Size=" + tasks.size());
            return remoteView;
        }

        public RemoteViews getLoadingView() {
            // TODO Auto-generated method stub
            return null;
        }

        public int getViewTypeCount() {
            // TODO Auto-generated method stub
            return 1;
        }

        public boolean hasStableIds() {
            // TODO Auto-generated method stub
            return false;
        }

        public void onCreate() {
            // TODO Auto-generated method stub
        }

        public void onDataSetChanged() {
            // TODO Auto-generated method stub
        }

        public void onDestroy() {
            // TODO Auto-generated method stub
        }
    }

    GetTaskActivity.java is another activity, fired from a BroadcastReceiver (MyWidgetIntentReceiver.java). Its purpose is to get a new task added to the list of tasks. MyWidgetIntentReceiver.java is the BroadcastReceiver, fired using a pending intent, that starts the GetTaskActivity.java activity. Please help me out with this problem. I have seen all the posts here and on other websites and nobody has a solution, but I am sure a solution to this exists. Thanks for your help!
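    A detail worth checking here (stated as a likely cause, since not all of the app's code is shown): AppWidgetManager.notifyAppWidgetViewDataChanged(...) does not recreate the RemoteViewsService; for an already-connected widget it only invokes onDataSetChanged() on the existing RemoteViewsFactory. In the ListProvider above, onDataSetChanged() is an empty auto-generated stub, so the factory never re-reads the database after construction. A minimal sketch of the usual fix:

    // In ListProvider: onDataSetChanged() runs every time
    // notifyAppWidgetViewDataChanged() is called for this widget,
    // so reload the data here.
    public void onDataSetChanged() {
        populateListItem(); // re-query the database so getViewAt() sees the new tasks
    }

    // Wherever a task is saved (for example after the insert in GetTaskActivity),
    // ask the manager to refresh the widget's collection view:
    static void refreshTaskWidget(Context context) {
        AppWidgetManager mgr = AppWidgetManager.getInstance(context);
        int[] ids = mgr.getAppWidgetIds(new ComponentName(context, MyWidgetProvider.class));
        mgr.notifyAppWidgetViewDataChanged(ids, R.id.lv_tasks);
    }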

    Read the article

  • MVC4 Model in View has nested data - cannot get data in model

    - by Taersious
    I have a Model defined that gets me a View with a list of RadioButtons, per IEnumerable. Within that Model, I want to display a list of checkboxes that will vary based on the item selected. Finally, once the user has selected from the available checkboxes, there will be a Textarea in the same view with some dynamic text based on the CheckBoxes that are selected. What we should end up with is a table-per-hierarchy. The layout is such that the RadioButtonList is in the first table cell, the CheckBoxList is in the middle table cell, and the Textarea is in the right table cell. If anyone can guide me to what my model-view should be to achieve this result, I'll be most pleased... Here is my code:

    // View model for implementing the radio button list
    public class RadioButtonViewModel
    {
        // objects
        public List<RadioButtonItem> RadioButtonList { get; set; }
        public string SelectedRadioButton { get; set; }
    }

    // Object for handling each radio button
    public class RadioButtonItem
    {
        // this object
        public string Name { get; set; }
        public bool Selected { get; set; }
        public int ObjectId { get; set; }
        // columns
        public virtual IEnumerable<CheckBoxItem> CheckBoxItems { get; set; }
    }

    // View model for the checkbox list
    public class CheckBoxViewModel
    {
        public List<CheckBoxItem> CheckBoxList { get; set; }
    }

    // Object for handling each check box
    public class CheckBoxItem
    {
        public string Name { get; set; }
        public bool Selected { get; set; }
        public int ObjectId { get; set; }
        public virtual RadioButtonItem RadioButtonItem { get; set; }
    }

    and the view:

    @model IEnumerable<EF_Utility.Models.RadioButtonItem>
    @{
        ViewBag.Title = "Connect";
        ViewBag.Selected = Request["name"] != null ? Request["name"].ToString() : "";
    }
    @using (Html.BeginForm("Objects", "Home", FormMethod.Post)) {
        @Html.ValidationSummary(true)
        <table>
        <tbody>
        <tr>
        <td style="border: 1px solid grey; vertical-align:top;">
            <table>
            <tbody>
            <tr>
                <th style="text-align:left; width: 50px;">Select</th>
                <th style="text-align:left;">View or Table Name</th>
            </tr>
            @{ foreach (EF_Utility.Models.RadioButtonItem item in @Model) {
                <tr>
                <td>
                    @Html.RadioButton("RadioButtonViewModel.SelectedRadioButton", item.Name,
                        ViewBag.Selected == item.Name ? true : item.Selected,
                        new { @onclick = "this.form.action='/Home/Connect?name=" + item.Name + "'; this.form.submit(); " })
                </td>
                <td>
                    @Html.DisplayFor(i => item.Name)
                </td>
                </tr>
            } }
            </tbody>
            </table>
        </td>
        <td style="border: 1px solid grey; width: 220px; vertical-align:top; @(ViewBag.Selected == "" ? "display:none;" : "")">
            <table>
            <tbody>
            <tr>
                <th>Column</th>
            </tr>
            <tr>
                <td><!-- checkboxes will go here --></td>
            </tr>
            </tbody>
            </table>
        </td>
        <td style="border: 1px solid grey; vertical-align:top; @(ViewBag.Selected == "" ? "display:none;" : "")">
            <textarea name="output" id="output" rows="24" cols="48"></textarea>
        </td>
        </tr>
        </tbody>
        </table>
    }

    and the relevant controller:

    public ActionResult Connect()
    {
        /* TEST SESSION FIRST */
        if (Session["connstr"] == null)
            return RedirectToAction("Index");
        else
        {
            ViewBag.Message = "";
            ViewBag.ConnectionString = Server.UrlDecode(Session["connstr"].ToString());
            ViewBag.Server = ParseConnectionString(ViewBag.ConnectionString, "Data Source");
            ViewBag.Database = ParseConnectionString(ViewBag.ConnectionString, "Initial Catalog");
            using (var db = new SysDbContext(ViewBag.ConnectionString))
            {
                var objects = db.Set<SqlObject>().ToArray();
                var model = objects
                    .Select(o => new RadioButtonItem
                    {
                        Name = o.Name,
                        Selected = false,
                        ObjectId = o.Object_Id,
                        CheckBoxItems = Enumerable.Empty<EF_Utility.Models.CheckBoxItem>()
                    })
                    .OrderBy(rb => rb.Name);
                return View(model);
            }
        }
    }

    What I am missing, it seems, is the code in my Connect() method that will bring the data context forward; at that point, it should be fairly straightforward to set up the HTML for the view.

    EDIT: So I am going to need to bind the RadioButtonItem to the view with something like the following, except my CheckBoxList will NOT be an empty set.

    // POST: /Home/Connect/
    [HttpPost]
    public ActionResult Connect(RadioButtonItem rbl)
    {
        /* TEST SESSION FIRST */
        if (Session["connstr"] == null)
            return RedirectToAction("Index");
        else
        {
            ViewBag.Message = "";
            ViewBag.ConnectionString = Server.UrlDecode(Session["connstr"].ToString());
            ViewBag.Server = ParseConnectionString(ViewBag.ConnectionString, "Data Source");
            ViewBag.Database = ParseConnectionString(ViewBag.ConnectionString, "Initial Catalog");
            using (var db = new SysDbContext(ViewBag.ConnectionString))
            {
                var objects = db.Set<SqlObject>().ToArray();
                var model = objects
                    .Select(o => new RadioButtonItem
                    {
                        Name = o.Name,
                        Selected = false,
                        ObjectId = o.Object_Id,
                        CheckBoxItems = Enumerable.Empty<EF_Utility.Models.CheckBoxItem>()
                    })
                    .OrderBy(rb => rb.Name);
                return View(model);
            }
        }
    }

    Read the article

  • Ruby on Rails DataTable not working

    - by Nimroo
    [Ruby on Rails DataTable guide][1]https://github.com/phronos/rails_datatables/blob/master/README.md I"m following the above and have installed the git plugin as well. All i'm getting is the <%= datatable() % returning " <script type="text/javascript"> $(function() { $('#expenses').dataTable({ "oLanguage": { "sSearch": "Search", "sProcessing": 'Processing' }, "sPaginationType": "full_numbers", "iDisplayLength": 25, "bProcessing": true, "bServerSide": false, "bLengthChange": false, "bStateSave": true, "bFilter": true, "bAutoWidth": true, 'aaSorting': [[0, 'desc']], "aoColumns": [ { 'sType': 'html', 'bSortable':true, 'bSearchable':true ,'sClass':'first' },{ 'sType': 'html', 'bSortable':true, 'bSearchable':true },{ 'sType': 'html', 'bSortable':true, 'bSearchable':true },{ 'sType': 'string', 'bSortable':true, 'bSearchable':true ,'sClass':'last' } ], "fnServerData": function ( sSource, aoData, fnCallback ) { aoData.push( ); $.getJSON( sSource, aoData, function (json) { fnCallback(json); } ); } }); }); </script>". My .html.erb looks like this: <% @page_title="User Page"%> <link rel="stylesheet" href="http://code.jquery.com/ui/1.9.2/themes/base/jquery-ui.css" /> <script src="http://code.jquery.com/jquery-1.8.3.js"></script> <script src="http://code.jquery.com/ui/1.9.2/jquery-ui.js"></script> <%=javascript_include_tag "jquery.dataTables" %> <%=stylesheet_link_tag "jquery-ui-1.9.2.custom" %> <script> $(function() { $( "#tabs" ).tabs(); }); </script> <% if current_user %> <div id="tabs"> <ul> <li><a href="#tabs-1">Expenses</a></li> <li><a href="#tabs-2">Accountant</a></li> <li><a href="#tabs-3">Requests (<%[email protected]%>)</a></li> </ul> <div id="tabs-3"> <p> <% if @requests.count != 0 %> <h2> Accountant Requests </h2> <table > <tr> <thead> <th>First Name</th> <th>Last Name</th> <th>Email Address</th> <th>Accept</th> <th>Reject</th> </thead> </tr> <% @requests.each do |request| %> <tr> <td><%= request.accountant.first_name %></td> <td><%= request.accountant.last_name %></td> <td><%= request.accountant.email %></td> <td><%= link_to 'accept', confirm_accountant_path(:accountant_id => request.accountant_id) %></td> <td><%= link_to 'Edit', edit_expense_path(request) %></td> </tr> <% end %> </tbody> </table> <% else %> <h4> You have no pending requests <h4> <% end %> </p> </div> <div id="tabs-2"> <p> <% if @accountants.count != 0 %> <h2> Accountant Info </h2> <table> <tr> <th>First Name</th> <th>Last Name</th> <th>Email Address</th> </tr> <% @accountants.each do |accountant| %> <tr> <td><%= accountant.first_name %></td> <td><%= accountant.last_name %></td> <td><%= accountant.email %></td> </tr> <% end %> </table> <% else %> <h4> Add Accountant <h4> <p> You don't have an accountant yet, perhaps consider adding one by e-mail </p> <%= render 'add_accountant_form' %> <% end %> <% end %> </p> </div> <div id="tabs-1"> <p><% if current_user %> <h4> Submit new expense </h4> <%= render 'expenses/form' %> <% columns = [{:type => 'html', :class => "first"}, {:type => 'html'}, {:type => 'html'}, {:type => nil, :class => "last"}] %> <%= datatable(columns, {:sort_by => "[0, 'desc']", table_dom_id:"expenses" }) %> <table id="expenses" class="datatable"> <thead> <tr> <th>Entry Date</th> <th>Last Update</th> <th>Amount</th> <th>User</th> <th>Receipt</th> <th></th> <th></th> </tr> </thead> <% @expenses.each do |user_expense| %> <tbody> <tr> <td><%= user_expense.created_at %></td> <td><%= user_expense.updated_at %></td> <td><%= user_expense.amount %></td> <td><%= user_expense.user.username %></td> <% if 
!user_expense.receipt_img.nil? %> <td><%= image_tag user_expense.receipt_img.url(:thumb) %></td> <% else %> <td>Future Button Here</td> <% end %> <td><%= link_to 'Show', user_expense %></td> <td><%= link_to 'Edit', edit_expense_path(user_expense) %></td> </tr> </tbody> <% end %> </table> <% end %></p> </div> </div>

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into two main roles, webserver and storage. Each role is replicated to a second server using DRBD in active/passive mode. The webservers do an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. On the storage servers I created a GFS2 FS to hold the data, which is wired to DRBD. I chose GFS2 mainly because of its announced performance and because the volume size has to be pretty high. Since we entered production I have been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operation. By analyzing the logs I found out that NFS stops answering for a while and outputs the following log lines:

    Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case the hang lasted for 16 seconds, but sometimes it takes one or two minutes to resume normal operation. My first guess was that this was happening due to heavy load on the NFS mount, and that increasing RPCNFSDCOUNT to a higher value would make this stable.
    I increased it several times and apparently, after a while, the log entries started appearing less often. The value is now 32. After investigating the issue further, I came across a different kind of hang, although the NFS messages still appear in the logs. Sometimes the GFS2 FS simply hangs, which causes both the NFS mount and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operation. These hangs leave no trace on the client side (no "NFS ... not responding" messages either), and on the storage side the log appears to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think this is an issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server. I have been trying to solve this for a while now, and I tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

    /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

    mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the most performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode, and I need to fix it so that these hangs won't happen in the future; I don't really know for sure what or how I should be benchmarking. What I can tell is that this is happening under heavy load, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards.

    EDIT 1: The servers are physical servers and their specs are:

    Webservers:
    Intel Bi Xeon E5606 2x4 2.13GHz
    24GB DDR3
    Intel SSD 320 2 x 120GB RAID 1

    Storage:
    Intel i5 3550 3.3GHz
    16GB DDR3
    12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not on any private network (it was before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. To my knowledge the hosting provider isn't blocking or limiting the connections between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
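    Since the question is partly "what and how should I be benchmarking", hard numbers help isolate whether the stalls track the ingest load. Below is a hedged sketch (not part of the original setup; the path, payload size, and thresholds are made up) of a small probe that times synchronous writes to the NFS mount from a webserver and logs any operation that stalls:

    import java.io.IOException;
    import java.nio.file.*;

    public class NfsStallProbe {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path target = Paths.get("/srv/data/.stall-probe"); // a file on the NFS mount
            byte[] payload = new byte[8192];
            while (true) {
                long start = System.nanoTime();
                // SYNC forces the write through to the server, so the timing
                // includes the NFS round trip rather than just the page cache.
                Files.write(target, payload, StandardOpenOption.CREATE,
                            StandardOpenOption.WRITE, StandardOpenOption.SYNC);
                long ms = (System.nanoTime() - start) / 1_000_000;
                if (ms > 1000) { // log anything slower than one second
                    System.err.println(java.time.Instant.now() + " write stalled for " + ms + " ms");
                }
                Thread.sleep(5000);
            }
        }
    }

    Correlating this probe's timestamps with the "not responding" kernel messages would show whether the NFS hangs and the GFS2 hangs start at the same moments.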
    EDIT 2: Relevant software versions:

    CentOS 2.6.32-279.9.1.el6.x86_64
    nfs-utils-1.2.3-26.el6.x86_64
    nfs-utils-lib-1.1.5-4.el6.x86_64
    gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
    kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
    drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on the storage servers:

    #/etc/drbd.d/storage.res
    resource storage {
        protocol C;
        on <server1 fqdn> {
            device /dev/drbd0;
            disk /dev/vg_storage/LV_replicated;
            address <server1 ip>:7788;
            meta-disk internal;
        }
        on <server2 fqdn> {
            device /dev/drbd0;
            disk /dev/vg_storage/LV_replicated;
            address <server2 ip>:7788;
            meta-disk internal;
        }
    }

    NFS configuration on the storage servers:

    #/etc/sysconfig/nfs
    RPCNFSDCOUNT=32
    STATD_PORT=10002
    STATD_OUTGOING_PORT=10003
    MOUNTD_PORT=10004
    RQUOTAD_PORT=10005
    LOCKD_UDPPORT=30001
    LOCKD_TCPPORT=30001

    (Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

    GFS2 configuration:

    # gfs2_tool gettune <mountpoint>
    incore_log_blocks = 1024
    log_flush_secs = 60
    quota_warn_period = 10
    quota_quantum = 60
    max_readahead = 262144
    complain_secs = 10
    statfs_slow = 0
    quota_simul_sync = 64
    statfs_quantum = 30
    quota_scale = 1.0000 (1, 1)
    new_files_jdata = 0

    Storage network environment:

    eth0   Link encap:Ethernet HWaddr <mac address>
           inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask>
           inet6 addr: <ip address> Scope:Link
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
           RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
           TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB)

    eth0:0 Link encap:Ethernet HWaddr <mac address>
           inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask>
           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    The IP addresses are statically assigned with the given network configurations:

    DEVICE="eth0"
    BOOTPROTO="static"
    HWADDR=<mac address>
    ONBOOT="yes"
    TYPE="Ethernet"
    IPADDR=<ip address>
    NETMASK=<net mask>

    and

    DEVICE="eth0:0"
    BOOTPROTO="static"
    HWADDR=<mac address>
    IPADDR=<ip failover>
    NETMASK=<net mask>
    ONBOOT="yes"
    BROADCAST=<bcast address>

    Hosts file, to allow for a graceful NFS failover in conjunction with the NFS option fsid=25 set on both storage servers:

    #/etc/hosts
    <storage ip failover address> active.storage.vlan
    <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I also ran ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details please tell me.

    EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I have been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • TCP RST Reset Every 5 Minutes on Windows 2003 sp2

    - by Dan
    Recently I had a web developer come to me and ask why he was receiving connection errors in an app of his that was accessing a SQL database. So I went through my normal troubleshooting steps to isolate or reproduce the issue. I discovered that if I connected to the database using Query Analyzer and let the connection idle for 5 minutes, it would disconnect; meaning I would no longer be able to refresh my tables, or any other object/node within the object browser in Query Analyzer. I would have to right-click on the instance and refresh for it to re-establish the connection. Next I went to Wireshark and ran a capture on the client PC's NIC. Sure enough, it was receiving a TCP RST every 5 minutes whenever the connection idled that long. I also ran a capture on the SQL Server and saw the TCP RST there as well. Attached below is the capture from the client machine. If someone could please assist, that would be great.

    - I checked all settings within SQL Server 2000 against another server and they all seem to be the same.
    - The issue does not occur if I connect to any other SQL Server 2000 server.
    - The issue does not occur if connecting to SQL on the server itself, so it happens only over the network.
    - I consulted with the network team and this is the response back: "There are no firewalls or proxies in between SQL Server and your desktop. The traffic flows like this: Desktop -> Access Switch -> Distro Switch -> Core Switch -> Datacenter Switch -> SQL Server. None of the switches have security ACLs configured on them." They also stated that NAT was not turned on.
    - The issue does not occur with SQL Server Enterprise Manager.
    - I ran SQL Profiler at the same time and did not see anything out of the ordinary during the RST.

    I have searched high and low on Google for a resolution to this issue, with no luck! My questions are: What could be causing this? A wrong sequence number? A setting in a router or switch the network team may have overlooked? A setting within Windows? A setting within SQL Server 2000 that I have overlooked? A better way to utilize Wireshark to find more answers? The RST is about 10 lines from the bottom.

    No.
Time Source Destination Protocol Info 258 24.390708 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [SYN] Seq=0 Len=0 MSS=1260 259 24.401679 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 260 24.401729 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 261 24.402212 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 262 24.413335 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 285 24.466512 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 286 24.466536 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=437 289 24.478168 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=38 Ack=1740 Win=64240 Len=0 290 24.480078 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=38 Ack=1740 Win=64240 Len=385 293 24.493629 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1740 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60 294 24.504637 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=423 Ack=1800 Win=64180 Len=17 295 24.533197 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1800 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44 296 24.544098 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=440 Ack=1844 Win=64136 Len=17 297 24.544524 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1844 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58 298 24.558033 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=457 Ack=1902 Win=64078 Len=31 299 24.558493 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1902 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92 300 24.569984 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=488 Ack=1994 Win=63986 Len=70 301 24.577395 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1994 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=448 303 24.589834 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=558 Ack=2442 Win=63538 Len=64 304 24.590122 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [FIN, ACK] Seq=2442 Ack=622 Win=64914 [TCP CHECKSUM INCORRECT] Len=0 305 24.601094 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=622 Ack=2443 Win=63538 Len=0 306 24.601659 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [FIN, ACK] Seq=622 Ack=2443 Win=63538 Len=0 307 24.601686 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=2443 Ack=623 Win=64914 [TCP CHECKSUM INCORRECT] Len=0 321 25.839371 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [SYN] Seq=0 Len=0 MSS=1260 322 25.850291 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 323 25.850321 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 324 25.850660 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 325 25.861573 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 326 25.863103 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 327 25.863130 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=463 328 25.874417 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [ACK] Seq=38 Ack=1766 Win=64240 Len=0 329 25.876315 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=38 Ack=1766 Win=64240 Len=385 330 25.876905 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1766 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60 331 25.887773 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=423 Ack=1826 Win=64180 Len=17 332 25.888299 x.x.x.99 
x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1826 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44 333 25.899169 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=440 Ack=1870 Win=64136 Len=17 334 25.899574 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1870 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58 335 25.910618 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=457 Ack=1928 Win=64078 Len=31 336 25.911051 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1928 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92 337 25.922068 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=488 Ack=2020 Win=63986 Len=70 338 25.922500 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2020 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=34 339 25.933621 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=558 Ack=2054 Win=63952 Len=29 340 25.941165 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2054 Ack=587 Win=64949 [TCP CHECKSUM INCORRECT] Len=54 341 25.952164 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=587 Ack=2108 Win=63898 Len=17 342 25.952993 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2108 Ack=604 Win=64932 [TCP CHECKSUM INCORRECT] Len=72 343 25.963889 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=604 Ack=2180 Win=63826 Len=17 344 25.964366 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2180 Ack=621 Win=64915 [TCP CHECKSUM INCORRECT] Len=52 345 25.975253 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=621 Ack=2232 Win=63774 Len=17 346 25.975590 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2232 Ack=638 Win=64898 [TCP CHECKSUM INCORRECT] Len=32 347 25.986588 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=638 Ack=2264 Win=63742 Len=167 348 25.987262 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2264 Ack=805 Win=64731 [TCP CHECKSUM INCORRECT] Len=512 349 25.998464 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=805 Ack=2776 Win=63230 Len=89 350 25.998861 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2776 Ack=894 Win=64642 [TCP CHECKSUM INCORRECT] Len=46 351 26.009849 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=894 Ack=2822 Win=63184 Len=17 352 26.010175 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2822 Ack=911 Win=64625 [TCP CHECKSUM INCORRECT] Len=80 353 26.021220 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=911 Ack=2902 Win=63104 Len=33 354 26.022613 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2902 Ack=944 Win=64592 [TCP CHECKSUM INCORRECT] Len=498 355 26.034018 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=944 Ack=3400 Win=64240 Len=89 356 26.046501 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [SYN] Seq=0 Len=0 MSS=1260 357 26.057323 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 358 26.057355 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 359 26.057661 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 361 26.068606 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 362 26.070087 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 363 26.070113 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=485 364 26.081336 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [ACK] Seq=38 Ack=1788 Win=64240 Len=0 365 26.083330 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=38 Ack=1788 Win=64240 Len=385 366 26.083943 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1788 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=46 
368 26.094921 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=423 Ack=1834 Win=64194 Len=17 369 26.095317 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1834 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=48 370 26.107553 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=440 Ack=1882 Win=64146 Len=877 371 26.241285 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0 372 26.241307 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1882 Ack=1317 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 653 55.913838 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 654 55.924547 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 910 85.887176 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 911 85.898010 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1155 115.859520 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1156 115.870285 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1395 145.934403 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1396 145.945938 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1649 175.906767 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1650 175.917741 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1887 205.881080 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1888 205.891818 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2112 235.854408 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2113 235.865482 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2398 265.928342 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2399 265.939242 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2671 295.900714 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2672 295.911590 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2880 315.705029 x.x.x.10 x.x.x.99 TCP 2226 14493 [RST] Seq=1317 Len=0 2973 325.975607 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2974 325.986337 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2975 326.154327 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive] 2226 14492 [ACK] Seq=1032 Ack=3400 Win=64240 Len=1 2976 326.154350 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive ACK] 14492 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
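    What the capture shows, an established session being reset after roughly five minutes of silence, is the classic signature of some device or idle-connection timer along the path tearing down quiet flows; the trace alone can't say which. One generic mitigation (a hedged sketch, not a confirmed fix for this environment) is to keep traffic flowing with TCP keepalives so the connection never looks idle. Note that Windows only sends keepalive probes after two hours of idle time by default (the KeepAliveTime value under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, in milliseconds), so that OS setting would need to be lowered below the five-minute window for keepalives to help here. In Java, enabling keepalive on a socket looks like this; the host and port are placeholders:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class KeepAliveProbe {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket();
            s.setKeepAlive(true); // ask the OS to send TCP keepalive probes while the connection is idle
            s.connect(new InetSocketAddress("sqlserver.example.local", 1433), 5000);
            System.out.println("keepalive enabled: " + s.getKeepAlive());
            // How often probes are sent is an OS-level setting (KeepAliveTime on Windows);
            // the portable socket API only turns the mechanism on or off.
            s.close();
        }
    }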

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

I'm quite a pragmatist. I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it.

What about things in C#? For instance, there are contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile?

I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong.
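For context, the contracts referred to are Code Contracts (the System.Diagnostics.Contracts API that shipped with .NET 4). A minimal, hypothetical example of what they look like in code:

using System.Diagnostics.Contracts;

public class Account
{
    private int balance;

    public void Withdraw(int amount)
    {
        // Preconditions the caller must satisfy.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);

        // Postcondition checked when the method exits.
        Contract.Ensures(balance == Contract.OldValue(balance) - amount);

        balance -= amount;
    }
}

Checking these conditions - statically or at runtime - requires the separate Code Contracts tools; the calls by themselves don't enforce anything.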
On a slightly different note, how do you like to debug code?

I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening, to watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition; I will only identify the experiment that's going to binary chop most effectively, and repeat, rather than trying to guess anything. I suppose it's quite top-down.

Is most of the time then spent in the debugger?

Absolutely. If at all possible, I will never debug using print statements or logs. I don't really put much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. In those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see: does this look right or wrong?

We've covered how you get to grips with bugs. How do you get to grips with an entire codebase?

I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, then choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim, so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather be using them as an aim while I'm learning the codebase.

If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up at the top of somewhere and then is read miles further down to choose some later behaviour. I'm talking very much from a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to be more or less understandable in the one place where it is.

And what about tests? Do you think they help at all?

I've never had the opportunity to learn a codebase which has had tests. I don't know what it's like!

What about when you're actually developing? How useful do you find tests in finding bugs or regressions?

Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like: "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work. Okay, write a test, demonstrate that it doesn't work, fix it, use the test to demonstrate that it's now fixed, and keep the test for future regressions." The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you've fixed them, are way more likely to appear again than new bugs are.

Does that mean that when you write your code the first time, there are no tests?

Often. The chance of there being a bug in a new feature is relatively unaffected by whether I've written a test for that new feature, because I'm not good enough at writing tests to think of the bugs that I would have written into the code.

So not writing regression tests for all of your code hasn't affected you too badly?

There are different kinds of features. Some of them just always work and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it's a waste of time, I'll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it's going to be flaky code as you're writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I'll think "this is going to be flaky" as I'm writing, and then maybe write a test, but not most of the time.

How do you think your programming style has changed over time?

I've got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I've got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote at Red Gate was the memory profiler UI, and that was an actor; I just didn't know the name of it at the time. I don't really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That's definitely a pattern that I've seen recently. I think maybe I'm doing functional programming. Possibly. It's plausible.

If you had to summarise the essence of programming in a pithy sentence, how would you do it?

Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

So you think it's an art rather than a science?

It's a little bit of engineering, a smidgeon of maths, but it's not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it's mostly art.
You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you're only an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

I suppose engineering is the foundation on which you build the art.

Exactly.

How do you think programming is going to change over the next ten years?

There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages will crop up and the best features will go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it take off. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript, only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary, and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript.

If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than actually telling me my compile errors. And only once you're experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it's not really about the size of the codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper, because I'm experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

Any other visions of the future?

Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP, parallelism within one operating system is going to die. But until then, you should use actors.

More Red Gater Coder interviews


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green; it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the 'right reason' changes, in my understanding, depending on what development phase I'm in. To make this clear (at least I hope it becomes clear…), I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about the latter, and I myself have also contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious… So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. - I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: First, starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. Second, in my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element. Third, I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The 'Red' (pt. 1)

First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect - but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
       container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The 'Red' (pt. 2)

Now, the execution of the above test tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

The 'Green'

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I'm using a data-driven unit test method here, mainly for these two reasons: first, because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose - rather, I will only have to add another Row attribute to the existing one. And second, from the test report below you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development, the test result at the moment tells us that we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }
    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }
    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is discussed here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it's very likely that you will come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios, we only have to add another Row attribute to the test. Remember also that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirement has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce a clear HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new…
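For completeness, such a test could simply mirror the TestSubtractGivesExpectedLastResult method shown in the next section; a possible sketch:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 0, 0)]
public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}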
And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring. One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): First, test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…). Second, it's the production code that pays our bills in the end (though I have seen customers these days who demand an acceptance test battery as part of the final delivery - things seem to be going in the right direction…). The test code serves 'only' to make the production code work. But it's the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is, historically speaking, very young - only at the very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool, or something like that - for 'saving' a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard of exactly that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very much "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!


  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled, including gcc, g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use. The information provided below is designed to show the different components that make up a plugin - specifically an audit-type plugin - and how they come together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc. is not part of this example code.

Let's start by looking at the most basic implementation of our plugin code, as seen below:

/*
   Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
   Author:  Jonathon Coombes
   Licence: GPL
   Description: An auditing plugin that logs to syslog and
                can adjust the loglevel via the system variables.
*/
#include <stdio.h>
#include <string.h>
#include <mysql/plugin_audit.h>
#include <syslog.h>

There is a commented header detailing copyright/licencing and metadata information, and then the include headers. The two important include statements for our plugin are the syslog.h header, which gives us the structures for syslog, and the plugin_audit.h include, which has details regarding the audit-specific plugin API. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current source tree, we need to add it into our source code and compile:

> cd /usr/local/src/mysql-5.5.28/plugin
> mkdir audit_syslog
> cd audit_syslog

A simple CMakeLists.txt file is created to manage the plugin compilation:

MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY)

Run the cmake command at the top level of the source, and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL, as no API level has been defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin, to be able to install/uninstall it, and to have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.
/*
  Plugin library descriptor
*/
mysql_declare_plugin(audit_syslog)
{
  MYSQL_AUDIT_PLUGIN,           /* plugin type                     */
  &audit_syslog_descriptor,     /* descriptor handle               */
  "audit_syslog",               /* plugin name                     */
  "Author Name",                /* author                          */
  "Simple Syslog Audit",        /* description                     */
  PLUGIN_LICENSE_GPL,           /* licence                         */
  audit_syslog_init,            /* init function                   */
  audit_syslog_deinit,          /* deinit function                 */
  0x0001,                       /* plugin version                  */
  NULL,                         /* status variables                */
  NULL,                         /* system variables                */
  NULL,                         /* no reserves                     */
  0,                            /* no flags                        */
}
mysql_declare_plugin_end;

The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined, along with the init/deinit functions, the interface methods into the system for sharing information, and various other metadata. The descriptors have an internally recognised version number so that plugins can be matched against the API on the running server. The other details are usually related to the type-specific methods and structures that implement the plugin. Each plugin also has a type-specific descriptor, which details how the plugin is implemented for the specific purpose of that plugin type.

/*
  Plugin type-specific descriptor
*/
static struct st_mysql_audit audit_syslog_descriptor=
{
  MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */
  NULL,                                                 /* release_thd function */
  audit_syslog_notify,                                  /* notify function      */
  { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |
                    MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */
};

In this particular case, the release_thd function has not been defined, as it is not required. The important method for auditing is the notify function, which is activated when an event occurs on the system. The notify function is designed to activate on an event, and the implementation determines how the event is handled. For the audit_syslog plugin, the syslog feature is used to send all events to the system log for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event:

1. General events: this includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database.
2. Connection events: this group is based around user logins. It monitors connections and disconnections, but also whether somebody changes user while connected.

With most audit plugins, the principle behind the plugin is to track changes to the system over time, and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the number of general events, the number of connection events, and the total number of events.
static volatile int total_number_of_calls;

/* Count MYSQL_AUDIT_GENERAL_CLASS event instances */
static volatile int number_of_calls_general;

/* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */
static volatile int number_of_calls_connection;

The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin:

/*
  Initialize the plugin at server start or plugin installation.
*/
static int audit_syslog_init(void *arg __attribute__((unused)))
{
    openlog("mysql_audit:", LOG_PID|LOG_PERROR|LOG_CONS, LOG_USER);
    total_number_of_calls= 0;
    number_of_calls_general= 0;
    number_of_calls_connection= 0;
    return(0);
}

The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags, and the facility for the logging. Then each of the counters is initialised to zero and success is returned. If the init function is not defined, it will return success by default.

/*
  Terminate the plugin at server shutdown or plugin deinstallation.
*/
static int audit_syslog_deinit(void *arg __attribute__((unused)))
{
    closelog();
    return(0);
}

The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external libraries. The function names are what we define in the general plugin structure, so these have to match, otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type-specific descriptor (audit_syslog_descriptor), which is audit_syslog_notify:

/* Event notifier function */
static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)),
                                unsigned int event_class,
                                const void *event)
{
  total_number_of_calls++;
  if (event_class == MYSQL_AUDIT_GENERAL_CLASS)
  {
    const struct mysql_event_general *event_general=
      (const struct mysql_event_general *) event;
    number_of_calls_general++;
    syslog(audit_loglevel, "%lu: User: %s Command: %s Query: %s\n",
           event_general->general_thread_id,
           event_general->general_user,
           event_general->general_command,
           event_general->general_query);
  }
  else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS)
  {
    const struct mysql_event_connection *event_connection=
      (const struct mysql_event_connection *) event;
    number_of_calls_connection++;
    syslog(audit_loglevel, "%lu: User: %s@%s[%s] Event: %d Status: %d\n",
           event_connection->thread_id,
           event_connection->user,
           event_connection->host,
           event_connection->ip,
           event_connection->event_subclass,
           event_connection->status);
  }
}

In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database. The event argument is then cast into the appropriate event structure depending on the class type: a general event or a connection event. The event type counters are incremented, and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned, since the general events have different data compared to the connection events, even though some of the details overlap - for example, user, thread id, host etc. On compiling the code now, there should be no errors, and the resulting audit_syslog.so can be loaded into the server, ready to use.
Log into the server and type:

mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so';

This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNINSTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following:

Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1

It appears that each event is shown twice, but in actuality these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.

So far, it seems that the logging is working, with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible while the plugin is running. Instead, there needs to be a way to expose the plugin-specific information to the service and vice versa. This could be done via the information_schema plugin API, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration:

/*
  Plugin status variables for SHOW STATUS
*/
static struct st_mysql_show_var audit_syslog_status[]=
{
  { "Audit_syslog_total_calls",
    (char *) &total_number_of_calls,
    SHOW_INT },
  { "Audit_syslog_general_events",
    (char *) &number_of_calls_general,
    SHOW_INT },
  { "Audit_syslog_connection_events",
    (char *) &number_of_calls_connection,
    SHOW_INT },
  { 0, 0, SHOW_INT }
};

The structure is simply the name that will be displayed in the mysql service, the address of the associated variable, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name as variables from other plugins, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following:

mysql> show global status like "audit%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Audit_syslog_connection_events | 1     |
| Audit_syslog_general_events    | 2     |
| Audit_syslog_total_calls       | 3     |
+--------------------------------+-------+
3 rows in set (0.00 sec)

The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system.
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface, defining a single variable to keep track of the active logging level for the facility:

/* Plugin system variables for SHOW VARIABLES */
static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,
                        PLUGIN_VAR_RQCMDARG,
                        "User can specify the log level for auditing",
                        audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE");

static struct st_mysql_sys_var* audit_syslog_sysvars[] = {
    MYSQL_SYSVAR(loglevel),
    NULL
};

So now the system variable 'loglevel' is defined for the plugin and associated with the global variable 'audit_loglevel'. The check (or validation) function is defined to make sure that no garbage values are attempted in the update of the variable, and the update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is referenced in the general plugin descriptor, which establishes the link between the plugin and the system. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric type such as an integer for the variable, the validation function is often not required, as MySQL will handle the automatic check and validation of simple types.

/* longest valid value */
#define MAX_LOGLEVEL_SIZE 100

/* hold the valid values */
static const char *possible_modes[]= {
  "LOG_ERROR",
  "LOG_WARNING",
  "LOG_NOTICE",
  NULL
};

static int audit_loglevel_check(
    THD*                        thd,    /*!< in: thread handle */
    struct st_mysql_sys_var*    var,    /*!< in: pointer to system
                                        variable */
    void*                       save,   /*!< out: immediate result
                                        for update function */
    struct st_mysql_value*      value)  /*!< in: incoming string */
{
    char buff[MAX_LOGLEVEL_SIZE];
    const char *str;
    const char **found;
    int length;

    length= sizeof(buff);
    if (!(str= value->val_str(value, buff, &length)))
        return 1;

    /*
        We need to return a pointer to a locally allocated value in "save".
        Here we pick to search for the supplied value in a global array of
        constant strings and return a pointer to one of them.
        The other possibility is to use the thd_alloc() function to allocate
        a thread local buffer instead of the global constants.
    */
    for (found= possible_modes; *found; found++)
    {
        if (!strcmp(*found, str))
        {
            *(const char**)save= *found;
            return 0;
        }
    }
    return 1;
}

The validation function simply takes the value being passed in via the SET GLOBAL VARIABLE command and checks whether it is one of the pre-defined values allowed in our possible_modes array. If it is found to be valid, then the value is assigned to the save variable, ready for passing through to the update function.
static void audit_loglevel_update(
    THD*                        thd,        /*!< in: thread handle */
    struct st_mysql_sys_var*    var,        /*!< in: system variable
                                            being altered */
    void*                       var_ptr,    /*!< out: pointer to
                                            dynamic variable */
    const void*                 save)       /*!< in: pointer to
                                            temporary storage */
{
    /* assign the new value so that the server can read it */
    *(char **) var_ptr= *(char **) save;

    /* assign the new value to the internal variable */
    audit_loglevel= *(char **) save;
}

Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows:

mysql> show global variables like "audit%";
+-----------------------+------------+
| Variable_name         | Value      |
+-----------------------+------------+
| audit_syslog_loglevel | LOG_NOTICE |
+-----------------------+------------+
1 row in set (0.00 sec)

mysql> set global audit_syslog_loglevel="LOG_ERROR";
Query OK, 0 rows affected (0.00 sec)

mysql> show global status like "audit%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Audit_syslog_connection_events | 1     |
| Audit_syslog_general_events    | 11    |
| Audit_syslog_total_calls       | 12    |
+--------------------------------+-------+
3 rows in set (0.00 sec)

mysql> show global variables like "audit%";
+-----------------------+-----------+
| Variable_name         | Value     |
+-----------------------+-----------+
| audit_syslog_loglevel | LOG_ERROR |
+-----------------------+-----------+
1 row in set (0.00 sec)

So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details, and it provides a mechanism to change the logging level interactively via the standard system method of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved, simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.

    Read the article

  • Setting up Mono/ASP.NET 4.0 on Apache2/Ubuntu: Virtual hosts?

    - by Dave
    I'm attempting to set up Mono/ASP.NET 4.0 on my Apache server (which is running on Ubuntu). Thus far, I've been following a few tutorials/scripts supplied here and here. As of now:

    - Apache 2.2 is installed (accessible via 'localhost')
    - Mono 2.10.5 is installed

    However, I'm struggling to configure Apache correctly... apparently the VirtualHost setting isn't doing its job: it is neither invoking the mod_mono module nor pulling source from the proper directory. While the VirtualHost setting points to '/srv/www/localhost', it is clearly pulling content from '/var/www/' instead, which I've found is the default DocumentRoot for virtual hosts. I can confirm:

    - "/opt/mono-2.10/bin/mod-mono-server4" exists.
    - The virtual hosts file is being read, since uncommenting the vhosts Include in the main httpd.conf changed the root directory from 'htdocs' to '/var/www/'.
    - The Mono installation is at least semi-capable of running ASP.NET 4.0, as evidenced by running XSP, navigating to 0.0.0.0:8080/, and getting an ASP.NET-style error page with "Mono ASP 4.0.x" at the bottom.

    Can anyone point out how to fix these configurations and get Mono linked up with Apache? Here are my configs and relevant information:

    /usr/local/apache2/conf/httpd.conf:

#
# This is the main Apache HTTP server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.2> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do.  They're here only as hints or reminders.  If you are unsure
# consult the online docs. You have been warned.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path.  If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/foo_log"
# with ServerRoot set to "/usr/local/apache2" will be interpreted by the
# server as "/usr/local/apache2/logs/foo_log".

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path.  If you point
# ServerRoot at a non-local disk, be sure to point the LockFile directive
# at a local disk.  If you wish to share the same ServerRoot for multiple
# httpd daemons, you will need to change at least LockFile and PidFile.
#
ServerRoot "/usr/local/apache2"

#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80

#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
<IfModule !mpm_netware_module>
<IfModule !mpm_winnt_module>
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User daemon
Group daemon

</IfModule>
</IfModule>

# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'
# server, which responds to any requests that aren't handled by a
# <VirtualHost> definition.  These values also provide defaults for
# any <VirtualHost> containers you may define later in the file.
#
# All of these directives may appear inside <VirtualHost> containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#

#
# ServerAdmin: Your address, where problems with the server should be
# e-mailed.  This address appears on some server-generated pages, such
# as error documents.  e.g. [email protected]
#
ServerAdmin david@localhost

#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
ServerName localhost:80

#
# DocumentRoot: The directory out of which you will serve your
# documents.  By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/usr/local/apache2/htdocs"

#
# Each directory to which Apache has access can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
#
# First, we configure the "default" to be a very restrictive set of
# features.
#
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#

#
# This should be changed to whatever you set DocumentRoot to.
#
<Directory "/usr/local/apache2/htdocs">
    #
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    #
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    #
    # The Options directive is both complicated and important.  Please see
    # http://httpd.apache.org/docs/2.2/mod/core.html#options
    # for more information.
    #
    Options Indexes FollowSymLinks

    #
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    #
    AllowOverride None

    #
    # Controls who can get stuff from this server.
    #
    Order allow,deny
    Allow from all
</Directory>

#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>

#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here.
# If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog "logs/error_log"

#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn

<IfModule log_config_module>
    #
    # The following directives define some format nicknames for use with
    # a CustomLog directive (see below).
    #
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      # You need to enable mod_logio.c to use %I and %O
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

    #
    # The location and format of the access logfile (Common Logfile Format).
    # If you do not define any access logfiles within a <VirtualHost>
    # container, they will be logged here.  Contrariwise, if you *do*
    # define per-<VirtualHost> access logfiles, transactions will be
    # logged therein and *not* in this file.
    #
    CustomLog "logs/access_log" common

    #
    # If you prefer a logfile with access, agent, and referer information
    # (Combined Logfile Format) you can use the following directive.
    #
    #CustomLog "logs/access_log" combined
</IfModule>

<IfModule alias_module>
    #
    # Redirect: Allows you to tell clients about documents that used to
    # exist in your server's namespace, but do not anymore.  The client
    # will make a new request for the document at its new location.
    # Example:
    # Redirect permanent /foo http://www.example.com/bar

    #
    # Alias: Maps web paths into filesystem paths and is used to
    # access content that does not live under the DocumentRoot.
    # Example:
    # Alias /webpath /full/filesystem/path
    #
    # If you include a trailing / on /webpath then the server will
    # require it to be present in the URL.  You will also likely
    # need to provide a <Directory> section to allow access to
    # the filesystem path.

    #
    # ScriptAlias: This controls which directories contain server scripts.
    # ScriptAliases are essentially the same as Aliases, except that
    # documents in the target directory are treated as applications and
    # run by the server when requested rather than as documents sent to the
    # client.  The same rules about trailing "/" apply to ScriptAlias
    # directives as to Alias.
    #
    ScriptAlias /cgi-bin/ "/usr/local/apache2/cgi-bin/"
</IfModule>

<IfModule cgid_module>
    #
    # ScriptSock: On threaded servers, designate the path to the UNIX
    # socket used to communicate with the CGI daemon of mod_cgid.
    #
    #Scriptsock logs/cgisock
</IfModule>

#
# "/usr/local/apache2/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/usr/local/apache2/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

#
# DefaultType: the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value.  If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain

<IfModule mime_module>
    #
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    #
    TypesConfig conf/mime.types

    #
    # AddType allows you to add to or override the MIME configuration
    # file specified in TypesConfig for specific file types.
    #
    #AddType application/x-gzip .tgz

    #
    # AddEncoding allows you to have certain browsers uncompress
    # information on the fly. Note: Not all browsers support this.
    #
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz

    #
    # If the AddEncoding directives above are commented-out, then you
    # probably should define those extensions to indicate media types:
    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

    #
    # AddHandler allows you to map certain file extensions to "handlers":
    # actions unrelated to filetype. These can be either built into the server
    # or added with the Action directive (see below)
    #
    # To use CGI scripts outside of ScriptAliased directories:
    # (You will also need to add "ExecCGI" to the "Options" directive.)
    #
    #AddHandler cgi-script .cgi

    # For type maps (negotiated resources):
    #AddHandler type-map var

    #
    # Filters allow you to process content before it is sent to the client.
    #
    # To parse .shtml files for server-side includes (SSI):
    # (You will also need to add "Includes" to the "Options" directive.)
    #
    #AddType text/html .shtml
    #AddOutputFilter INCLUDES .shtml
</IfModule>

#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type.  The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
#MIMEMagicFile conf/magic

#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#

#
# MaxRanges: Maximum number of Ranges in a request before
# returning the entire resource, or 0 for unlimited
# Default setting is to accept 200 Ranges
#MaxRanges 0

#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall is used to deliver
# files.  This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
#
#EnableMMAP off
#EnableSendfile off

# Supplemental configuration
#
# The configuration files in the conf/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.
# Server-pool management (MPM specific)
#Include conf/extra/httpd-mpm.conf

# Multi-language error messages
#Include conf/extra/httpd-multilang-errordoc.conf

# Fancy directory listings
#Include conf/extra/httpd-autoindex.conf

# Language settings
#Include conf/extra/httpd-languages.conf

# User home directories
#Include conf/extra/httpd-userdir.conf

# Real-time info on requests and configuration
#Include conf/extra/httpd-info.conf

# Virtual hosts
Include conf/extra/httpd-vhosts.conf

# Local access to the Apache HTTP Server Manual
#Include conf/extra/httpd-manual.conf

# Distributed authoring and versioning (WebDAV)
#Include conf/extra/httpd-dav.conf

# Various default settings
#Include conf/extra/httpd-default.conf

# Secure (SSL/TLS) connections
#Include conf/extra/httpd-ssl.conf
#
# Note: The following must be present to support
#       starting without SSL on platforms with no /dev/random equivalent
#       but a statically compiled-in mod_ssl.
#
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>

    /usr/local/apache2/conf/extra/httpd-vhosts.conf:

#
# Virtual Hosts
#
# If you want to maintain multiple domains/hostnames on your
# machine you can setup VirtualHost containers for them. Most configurations
# use only name-based virtual hosts so the server doesn't need to worry about
# IP addresses. This is indicated by the asterisks in the directives below.
#
# Please see the documentation at
# <URL:http://httpd.apache.org/docs/2.2/vhosts/>
# for further details before you try to setup virtual hosts.
#
# You may use the command line option '-S' to verify your virtual host
# configuration.

#
# Use name-based virtual hosting.
#
NameVirtualHost *:80

#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for all requests that do not
# match a ServerName or ServerAlias in any <VirtualHost> block.
#
<VirtualHost *:80>
    ServerName localhost
    ServerAdmin david@localhost
    DocumentRoot "/srv/www/localhost"

    # MonoServerPath can be changed to specify which version of ASP.NET is hosted
    # mod-mono-server1 = ASP.NET 1.1 / mod-mono-server2 = ASP.NET 2.0
    # For SUSE Linux Enterprise Mono Extension, uncomment the line below:
    # MonoServerPath localhost "/opt/novell/mono/bin/mod-mono-server2"
    # For Mono on openSUSE, uncomment the line below instead:
    MonoServerPath localhost "/opt/mono-2.10/bin/mod-mono-server4"

    # To obtain line numbers in stack traces you need to do two things:
    # 1) Enable Debug code generation in your page by using the Debug="true"
    #    page directive, or by setting <compilation debug="true" /> in the
    #    application's Web.config
    # 2) Uncomment the MonoDebug true directive below to enable mod_mono debugging
    MonoDebug localhost true

    # The MONO_IOMAP environment variable can be configured to provide platform abstraction
    # for file access in Linux.  Valid values for MONO_IOMAP are:
    #    case
    #    drive
    #    all
    # Uncomment the line below to alter file access behavior for the configured application
    MonoSetEnv localhost PATH=/opt/mono-2.10/bin:$PATH;LD_LIBRARY_PATH=/opt/mono-2.10/lib:$LD_LIBRARY_PATH;
    #
    # Additional environment variables can be set for this server instance using
    # the MonoSetEnv directive.  MonoSetEnv takes a string of 'name=value' pairs
    # separated by semicolons.
    # For instance, to enable platform abstraction *and*
    # use Mono's old regular expression interpreter (which is slower, but has a
    # shorter setup time), uncomment the line below instead:
    # MonoSetEnv localhost MONO_IOMAP=all;MONO_OLD_RX=1

    MonoApplications localhost "/:/srv/www/localhost"

    <Location "/">
        Allow from all
        Order allow,deny
        MonoSetServerAlias localhost
        SetHandler mono
        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
    </Location>

    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
    </IfModule>
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/usr/local/apache2/docs/dummy-host.example.com"
    ServerName dummy-host.example.com
    ServerAlias www.dummy-host.example.com
    ErrorLog "logs/dummy-host.example.com-error_log"
    CustomLog "logs/dummy-host.example.com-access_log" common
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/usr/local/apache2/docs/dummy-host2.example.com"
    ServerName dummy-host2.example.com
    ErrorLog "logs/dummy-host2.example.com-error_log"
    CustomLog "logs/dummy-host2.example.com-access_log" common
</VirtualHost>

    mono -V output:

root@david-ubuntu:~# mono -V
Mono JIT compiler version 2.6.7 (Debian 2.6.7-5ubuntu3)
Copyright (C) 2002-2010 Novell, Inc and Contributors. www.mono-project.com
        TLS:           __thread
        GC:            Included Boehm (with typed GC and Parallel Mark)
        SIGSEGV:       altstack
        Notifications: epoll
        Architecture:  amd64
        Disabled:      none

    Read the article
